\begin{document}
\title{Two-Scale Limit of the Solution to a Robin Problem in Perforated Media
\footnote{\hspace*{-7mm} \textit{2000 Mathematics Subject Classification:} 35B27. \newline
\textit{Keywords:} Homogenization, Two scale convergence, Robin boundary condition. }}
\author{Abdelhamid Ainouz\\
Department of Mathematics, University of Sciences \\ and Technology Houari Boumediene, \\ Po Box 32 El Alia Bab Ezzouar 16111 Algiers, Algeria. \\aainouz@usthb.dz}
\maketitle
\begin{abstract} The two-scale convergence of the solution to a Robin-type boundary value problem for a stationary diffusion equation in a periodically perforated domain is investigated. It is shown that the solution converges to that of a problem governed by a new operator, which is the sum of a standard homogenized operator and an extra first-order ``strange'' term; the appearance of the latter is due to the non-symmetry of the diffusion matrix and to the non-rescaled resistivity. \end{abstract}
\section{Introduction} Periodic homogenization in perforated media with Robin boundary conditions prescribed on the boundary of the holes has been extensively studied by many authors; we refer for instance to \cite{bpc}, \cite{ciodon1}, \cite{ciodon2}, \cite{pas1}. In this paper we study the stationary diffusion equation in a periodically perforated body where, on the boundary of the holes, the heat flow is proportional to the temperature field, with a resistivity having zero average over the boundary of the reference hole. In \cite{bpc}, the authors studied a model problem for a second-order symmetric elliptic operator in a periodically perforated domain with a Robin boundary condition prescribed on the boundary of the holes. They used the asymptotic expansion technique \cite{blp}, \cite{san}, \cite{tar} to obtain the homogenized problem, constructed correctors to justify the expansion and then estimated the error. In this paper, we consider a similar problem but in another configuration: the holes may or may not be connected and their boundary may intersect the exterior boundary of the body. Moreover, we allow the diffusion matrix of the second-order operator to be non-symmetric. We use the two-scale convergence technique \cite{all}, \cite{lnw}, \cite{ngue}, \cite{ngue1} to obtain the two-scale limit system. After decoupling this system, we show that the homogenized problem contains a convective term. Its appearance is due essentially to the general (non-symmetric) character of the diffusion matrix and to the fact that the resistivity function is not rescaled, as is usually assumed when dealing with two-scale convergence on periodic surfaces; see for instance \cite{ain}, \cite{ain1}, \cite{adh}, \cite{nr}.
The paper is organized as follows: in Section \ref{1}, we define the geometry of the perforated body and state the Robin boundary-value problem. Section \ref{2} is devoted to the existence and uniqueness of the solution of the Robin problem and to a priori estimates of the solution. The asymptotic limit via the two-scale convergence procedure is analyzed in Section \ref{3}. We obtain the homogenized boundary-value problem, governed by a second-order elliptic operator containing first- and zero-order terms. The zero-order term is classical; see e.g. \cite{adh}. The first-order term is a convective one and it vanishes when the diffusion matrix is symmetric or has constant coefficients.
\section{Setting of the Problem\label{1}}
Let $\Omega $ be a bounded domain in $\mathbb{R}^{n}$ of variable $x=\left( x_{i}\right) _{1\leq i\leq n}$ ($n\geq 2$) with a smooth boundary $\Gamma $, and let $\varepsilon $ be a real parameter taking values in a sequence of positive numbers tending to zero. As usual in periodic homogenization, let $Y=\left[ 0,1\right] ^{n}$ be the generic unit cell of periodicity in the auxiliary space $\mathbb{R}^{n}$ of variable $y=\left( y_{i}\right) _{1\leq i\leq n}$. The cell $Y$ is identified with the unit torus $\mathbb{R}^{n}/\mathbb{Z}^{n}$. A function defined on $\mathbb{R}^{n}$ is said to be $Y$-periodic if it is periodic of period $1$ in each variable $y_{i}$, $1\leq i\leq n$. In the sequel we suppose that any function defined on $Y$ is extended periodically to the whole space $\mathbb{R}^{n}$. If $E\left( Z\right) $ is a function space (where $Z$ is a subset of $Y$), we denote $E_{\#}\left( Z\right) :=\left\{ w\in E\left( Z\right) \text{; }w\text{ is extended periodically to }\mathbb{R}^{n}\right\} $.
Let $H$, the reference hole, be an open subset of $Y$ with a smooth boundary $\Sigma $ and set $Y_{s}=Y\backslash cl(H)$, where $cl\left( \cdot \right) $ denotes the closure. Thus $Y$ is partitioned as $Y=Y_{s}\cup \Sigma \cup H$. Note that we do not require that $H$ be strictly included in $Y$; as a consequence, the periodic extension of $H$ may or may not be connected. Let us denote by $\chi \left( y\right) $ the characteristic function of $Y_{s}$ in $Y$. We define the perforated material \begin{equation*} \Omega _{\varepsilon }=\left\{ x\in \Omega ;\chi \left( \frac{x}{\varepsilon }\right) =1\right\} \end{equation*} and the one-codimensional periodic surface \begin{equation*} \Sigma _{\varepsilon }=\left\{ x\in \Omega ;\frac{x}{\varepsilon }\in \Sigma \right\} . \end{equation*}
Here $\Omega _{\varepsilon }$ represents the matrix, or solid part, of $\Omega $, as opposed to the holes, or void part, represented by the open subset $H_{\varepsilon }:=\Omega \backslash cl\left( \Omega _{\varepsilon }\right) $. By construction, all of these holes are identical and they are periodically distributed in $\Omega $ with period $\varepsilon $ in each $x_{i}$-direction. Since we use the two-scale convergence method, we do not require the boundary $\Sigma _{\varepsilon }=\partial H_{\varepsilon }$ to avoid intersecting $\Gamma $. As in \cite{all}, we shall use the natural extension by zero of any function defined on $\Omega _{\varepsilon }$.
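To fix ideas, the solid volume fraction $|\Omega _{\varepsilon }|/|\Omega |$ can be checked numerically. The following sketch is purely illustrative: the reference hole $H$ is taken, as a hypothetical example, to be a disc of radius $0.25$ centered in $Y=[0,1]^{2}$, and $\Omega =(0,1)^{2}$ is sampled on a midpoint grid.

```python
import numpy as np

r = 0.25  # hypothetical hole radius; H = disc of radius r centered at (0.5, 0.5)

def chi(y1, y2):
    """Characteristic function of Y_s = Y minus cl(H), applied Y-periodically."""
    return ((y1 % 1.0 - 0.5) ** 2 + (y2 % 1.0 - 0.5) ** 2) >= r ** 2

def solid_fraction(eps, n=1000):
    """Fraction of Omega = (0,1)^2 covered by Omega_eps, on an n x n midpoint grid."""
    t = (np.arange(n) + 0.5) / n
    x1, x2 = np.meshgrid(t, t)
    return float(np.mean(chi(x1 / eps, x2 / eps)))

theta = 1.0 - np.pi * r ** 2  # |Y_s|; equals |Omega_eps|/|Omega| when 1/eps is an integer
frac = solid_fraction(1.0 / 8.0)
```

Here `frac` agrees with $\theta =|Y_{s}|$ up to the grid resolution, illustrating that the holes remove a fixed volume fraction at every scale $\varepsilon $.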
Let $f_{\varepsilon }$ be a given function in $L^{2}\left( \Omega _{\varepsilon }\right) $, $g_{\varepsilon }$ be given in $L^{2}\left( \Sigma _{\varepsilon }\right) $ such that \begin{equation} \Vert f_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }+ \sqrt{\varepsilon }\Vert g_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }\leq C\text{.} \label{dc} \end{equation} Here and throughout this paper $C$ denotes a positive constant independent of $\varepsilon $.
Let $A\left( x,y\right) =\left( a_{ij}\right) _{1\leq i,j\leq n}$ be a real-valued matrix function defined on $\Omega \times Y$, $Y$-periodic in the second variable $y$, such that there exist two positive constants $m$, $M$ independent of $\varepsilon $ satisfying the following inequality: \begin{equation} m\mid \zeta \mid ^{2}\leq (A\zeta ,\zeta )\leq M\mid \zeta \mid ^{2} \label{ac} \end{equation} for all $\zeta \in \mathbb{R}^{n}$. We suppose that the matrix $A$ lies in $C\left( \Omega ;L_{\#}^{\infty }\left( Y\right) \right) ^{n^{2}}$. We note that no symmetry condition on $A$ is assumed. Let $\mu \left( y\right) \in L_{\#}^{\infty }\left( Y_{s}\right) $ be such that $\mu \left( y\right) \geq \mu _{0}>0$ a.e. in $Y_{s}$, where $\mu _{0}$ is independent of $\varepsilon $. Let $\alpha $ be a $Y$-periodic bounded measurable function defined on $\Sigma $ such that \begin{equation} \int_{\Sigma }\alpha \left( y\right) d\sigma \left( y\right) =0\text{.} \label{as1} \end{equation} Let us decompose the function $\alpha $ into its positive and negative parts as follows: \begin{equation*} \alpha =\alpha ^{+}-\alpha ^{-},\ \alpha ^{+}=\max \left( \alpha ,0\right) \text{, }\alpha ^{-}=\max \left( -\alpha ,0\right) \text{.} \end{equation*} Assume that the positive part of $\alpha $ satisfies the condition: \begin{equation} \alpha ^{+}\left( y\right) \geq \alpha _{0}>0\text{ a.e. in }\Sigma \text{.} \label{ca} \end{equation} Let us consider the following Robin boundary value problem:
\begin{eqnarray} -div\left( A_{\varepsilon }\nabla u_{\varepsilon }\right) +\mu _{\varepsilon }u_{\varepsilon } &=&f_{\varepsilon }\text{ in }\Omega _{\varepsilon }, \label{eq1} \\ \left( A_{\varepsilon }\nabla u_{\varepsilon }\right) \cdot \nu _{\varepsilon }+\alpha _{\varepsilon }u_{\varepsilon } &=&\varepsilon g_{\varepsilon }\text{ on }\Sigma _{\varepsilon }, \label{eq2} \\ u_{\varepsilon } &=&0\text{ on }\Gamma \label{eq3} \end{eqnarray} where \begin{equation*} A_{\varepsilon }\left( x\right) =A\left( x,\frac{x}{\varepsilon }\right) ,\ \mu _{\varepsilon }\left( x\right) =\mu \left( \frac{x}{\varepsilon }\right) ,\ \alpha _{\varepsilon }\left( x\right) =\alpha \left( \frac{x}{\varepsilon }\right) \end{equation*} and $\nu _{\varepsilon }$ is the unit outward normal to $\Omega _{\varepsilon }$.
This problem can be regarded as a simplified model of the condensation of steam in a periodic cooling structure (see \cite{adh}). It can also be considered as a model for hyperthermia treatment planning in microvascular tissue; see, e.g., \cite{dh}. The boundary condition (\ref{eq2}) means that the heat flow $\left( A_{\varepsilon }\nabla u_{\varepsilon }\right) \cdot \nu _{\varepsilon }$ is proportional to the temperature $u_{\varepsilon }$, with a periodic resistivity given by the function $\alpha _{\varepsilon }$. In many situations, the resistivity function is taken to be $\varepsilon ^{m}\alpha _{\varepsilon }$. Since the operator is of order $2$, the interesting cases are $m=-2,-1,0,1$ and $2$. The case $m=2$ is trivial, since we obtain the classical homogenized equation; this can be seen easily by using the asymptotic expansion method. The case $m=1$ with $\alpha _{\varepsilon }\geq 0$ has been studied in \cite{adh}, \cite{nr} using the two-scale convergence technique; in this situation $\alpha _{\varepsilon }$ is rescaled since the surface $\Sigma _{\varepsilon }$ is of codimension $1$. Here we study the case $m=0$, i.e. a non-rescaled resistivity. We use the same technique but with $\alpha _{\varepsilon }$ changing sign. We show that the assumptions (\ref{as1}), (\ref{ca}) and a non-symmetric matrix $A_{\varepsilon }\left( x\right) $ lead to an effective heat conduction law containing a convective term. The case $m=-1$ will be studied in a forthcoming paper. Note that the case $m=-2$ is also trivial, since it yields a zero effective thermal conductivity.
\section{Study of the Problem and A priori Estimates\label{2}}
Let \begin{equation*} V_{\varepsilon }=\left\{ v\in H^{1}\left( \Omega _{\varepsilon }\right) \text{; }v=0\text{ on }\Gamma \right\} \end{equation*} be equipped with the scalar product \begin{equation*} (u,v)_{V_{\varepsilon }}=\int_{\Omega _{\varepsilon }}\nabla u\left( x\right) \nabla v\left( x\right) dx \end{equation*} and the associated norm $\Vert u\Vert _{V_{\varepsilon }}=(u,u)_{V_{\varepsilon }}^{1/2}$, which is equivalent to the $H^{1}$-norm thanks to the Poincar\'{e} inequality. The variational formulation of the boundary-value problem (\ref{eq1})-(\ref{eq3}) reads as follows: \begin{equation} \left\{ \begin{array}{c} \text{For each }\varepsilon >0\text{, find }u_{\varepsilon }\in V_{\varepsilon }\text{ such that } \\ a_{\varepsilon }\left( u_{\varepsilon },v\right) =L_{\varepsilon }\left( v\right) \text{ for any }v\in V_{\varepsilon }, \end{array} \right. \label{wf} \end{equation} where $a_{\varepsilon }\left( \cdot ,\cdot \right) $ is the bilinear form defined on $V_{\varepsilon }\times V_{\varepsilon }$ by: \begin{eqnarray*} a_{\varepsilon }\left( u,v\right) &=&\int_{\Omega _{\varepsilon }}A_{\varepsilon }(x)\nabla u\left( x\right) \nabla v\left( x\right) dx+\int_{\Omega _{\varepsilon }}\mu _{\varepsilon }\left( x\right) u\left( x\right) v\left( x\right) dx \\ &&+\int_{\Sigma _{\varepsilon }}\alpha _{\varepsilon }\left( x\right) u\left( x\right) v\left( x\right) d\sigma _{\varepsilon }\left( x\right) \end{eqnarray*} and $L_{\varepsilon }\left( \cdot \right) $ is the linear form defined on $V_{\varepsilon }$ by: \begin{equation*} L_{\varepsilon }\left( v\right) =\int_{\Omega _{\varepsilon }}f_{\varepsilon }(x)v\left( x\right) dx+\varepsilon \int_{\Sigma _{\varepsilon }}g_{\varepsilon }\left( x\right) v\left( x\right) d\sigma _{\varepsilon }\left( x\right) . \end{equation*}
\begin{lemma} \label{lcs}There exists a positive constant $C_{s}$\ independent of $ \varepsilon $ such that for every $v\in V_{\varepsilon }$ and for every $ \delta >0$ we have \begin{equation} \Vert v\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}\leq C_{s} \left[ \left( \delta \varepsilon \right) ^{-1}\Vert v\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\left( \delta \varepsilon \right) \Vert \nabla v\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}\right] . \label{cs} \end{equation} \end{lemma}
\begin{proof} Let us introduce the notation \begin{equation*} v_{\varepsilon }^{k}\left( y\right) =v\left( \varepsilon \left( k+y\right) \right) \end{equation*} where $k\in K_{\varepsilon }=\left\{ k\in \mathbb{Z}^{n};\varepsilon \left( Y+k\right) \cap \Omega \neq \emptyset \right\} $. By the change of variable $x=\varepsilon \left( k+y\right) $ we have \begin{equation*} \int_{\Sigma _{\varepsilon }}v^{2}\left( x\right) d\sigma _{\varepsilon }\left( x\right) =\underset{k\in K_{\varepsilon }}{\sum }\int_{\varepsilon \left( \Sigma +k\right) }v^{2}\left( x\right) d\sigma _{\varepsilon }\left( x\right) =\varepsilon ^{n-1}\underset{k\in K_{\varepsilon }}{\sum }\int_{\Sigma }\left[ v_{\varepsilon }^{k}\left( y\right) \right] ^{2}d\sigma \left( y\right) . \end{equation*} From the trace theorem we see that for every $\delta >0$ \begin{eqnarray*} \int_{\Sigma }\left[ v_{\varepsilon }^{k}\left( y\right) \right] ^{2}d\sigma \left( y\right) &\leq &C_{s}\left[ \delta ^{-1}\int_{Y_{s}}\left[ v_{\varepsilon }^{k}\left( y\right) \right] ^{2}dy+\delta \int_{Y_{s}}|\nabla _{y}v_{\varepsilon }^{k}\left( y\right) |^{2}dy\right] \\ &\leq &\frac{C_{s}}{\varepsilon ^{n}}\left[ \delta ^{-1}\int_{\varepsilon \left( Y_{s}+k\right) }v\left( x\right) ^{2}dx+\delta \varepsilon ^{2}\int_{\varepsilon \left( Y_{s}+k\right) }|\nabla _{x}v\left( x\right) |^{2}dx\right] . \end{eqnarray*} Hence
\begin{eqnarray*} \int_{\Sigma _{\varepsilon }}v^{2}\left( x\right) d\sigma _{\varepsilon }\left( x\right) &\leq &\varepsilon ^{n-1}\frac{C_{s}}{\varepsilon ^{n}} [\delta ^{-1}\underset{k\in K_{\varepsilon }}{\sum }\int_{\varepsilon \left( Y_{s}+k\right) }v\left( x\right) ^{2}dx+ \\ &&\delta \varepsilon ^{2}\underset{k\in K_{\varepsilon }}{\sum }
\int_{\varepsilon \left( Y_{s}+k\right) }|\nabla _{x}v\left(
x\right) |^{2}dx] \\ &\leq &C_{s}\left[ \left( \delta \varepsilon \right) ^{-1}\int_{\Omega _{\varepsilon }}v^{2}\left( x\right) dx+\left( \delta \varepsilon \right) \int_{\Omega _{\varepsilon
}}|\nabla v\left( x\right) |^{2}dx\right] \text{.} \end{eqnarray*} The Lemma is proved. \end{proof}
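The scaling in (\ref{cs}) reflects the fact that the $\varepsilon $-weighted surface norm behaves like a volume integral. A one-dimensional toy check (with the hypothetical choices $\Sigma =\{1/2\}$, so that $\Sigma _{\varepsilon }=\{\varepsilon (k+\tfrac{1}{2})\}$, and $v(x)=\sin 3\pi x$) shows that $\varepsilon \Vert v\Vert _{L^{2}(\Sigma _{\varepsilon })}^{2}$ is a midpoint Riemann sum for $\int_{0}^{1}v^{2}dx$ and therefore stays bounded as $\varepsilon \to 0$:

```python
import numpy as np

def surface_norm_sq(v, eps):
    """||v||^2_{L^2(Sigma_eps)} in 1-D: a sum over the points eps*(k + 1/2) in (0, 1)."""
    k = np.arange(int(round(1.0 / eps)))
    return float(np.sum(v(eps * (k + 0.5)) ** 2))

v = lambda x: np.sin(3.0 * np.pi * x)  # a fixed smooth function with int_0^1 v^2 dx = 1/2
vals = [eps * surface_norm_sq(v, eps) for eps in (1 / 16, 1 / 64, 1 / 256)]
```

Each entry of `vals` is close to $\int_{0}^{1}v^{2}dx=1/2$, consistent with the $\sqrt{\varepsilon }$ weight appearing in the data condition (\ref{dc}).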
\begin{lemma} Let $\sqrt{\mu _{0}m}>C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }$ where $C_{s}$ is the constant given in Lemma \ref{lcs}. Then $ a_{\varepsilon }\left( \cdot ,\cdot \right) $ is coercive on $V_{\varepsilon }$. \end{lemma}
\begin{proof} Let $v\in V_{\varepsilon }$. Then using (\ref{ac}), we have \begin{equation*} a_{\varepsilon }\left( v,v\right) \geq m\int_{\Omega _{\varepsilon
}}|\nabla v\left( x\right) |^{2}dx+\mu _{0}\int_{\Omega _{\varepsilon }}v\left( x\right) ^{2}dx-\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\int_{\Sigma _{\varepsilon }}v\left( x\right) ^{2}d\sigma _{\varepsilon }\left( x\right) . \end{equation*} By (\ref{cs}) we see that for every $\delta >0$ \begin{eqnarray*} a_{\varepsilon }\left( v,v\right) &\geq &\left( m-\delta \varepsilon C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma
\right) }\right) \int_{\Omega _{\varepsilon }}|\nabla v\left(
x\right) |^{2}dx+ \\ &&\left( \mu _{0}-\left( \delta \varepsilon \right) ^{-1}C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \int_{\Omega _{\varepsilon }}\left[ v\left( x\right) \right] ^{2}dx. \end{eqnarray*}
Choose $\delta =\dfrac{1}{\varepsilon }\sqrt{\dfrac{m}{\mu _{0}}}$, so that $\delta \varepsilon =\sqrt{m/\mu _{0}}$ and the two coefficients above become $m\left( 1-\frac{C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }}{\sqrt{m\mu _{0}}}\right) $ and $\mu _{0}\left( 1-\frac{C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }}{\sqrt{m\mu _{0}}}\right) $, respectively. Then \begin{equation*} a_{\varepsilon }\left( v,v\right) \geq c_{0}\left[ \Vert \nabla v\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}+\Vert v\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}\right] \end{equation*} where $c_{0}$ is the positive constant given by \begin{equation*} c_{0}=\left( 1-\frac{C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }}{\sqrt{m\mu _{0}}}\right) \min \left( m,\mu _{0}\right) >0. \end{equation*} This completes the proof. \end{proof}
In the sequel we shall assume that the condition $\sqrt{\mu _{0}m} >C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }$ is fulfilled.
\begin{proposition} The variational formulation (\ref{wf}) admits a unique solution $u_{\varepsilon }\in V_{\varepsilon }$. Moreover, we have the a priori estimate \begin{equation} \Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }\leq C. \label{ape} \end{equation} \end{proposition}
\begin{proof} The existence and uniqueness is a straightforward application of the coercivity of $a_{\varepsilon }\left( \cdot ,\cdot \right) $ established in the previous lemma and of the Lax-Milgram Lemma. It remains to prove the a priori estimate (\ref{ape}). Take $v=$
$u_{\varepsilon }$ in (\ref{wf}). We have \begin{eqnarray*} &&\int_{\Omega _{\varepsilon }}\left( A_{\varepsilon }\nabla u_{\varepsilon }\nabla u_{\varepsilon }+\mu _{\varepsilon }u_{\varepsilon }^{2}\right) dx+\int_{\Sigma _{\varepsilon }}\alpha _{\varepsilon }^{+}u_{\varepsilon }^{2}d\sigma _{\varepsilon }\left( x\right) \\ &=&\int_{\Omega _{\varepsilon }}f_{\varepsilon }u_{\varepsilon }dx+\int_{\Sigma _{\varepsilon }}\left( \varepsilon g_{\varepsilon }+\alpha _{\varepsilon }^{-}u_{\varepsilon }\right) u_{\varepsilon }d\sigma _{\varepsilon }\left( x\right) . \end{eqnarray*} Let us denote \begin{equation*} A_{\varepsilon }\left( u_{\varepsilon }\right) :=\Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}. \end{equation*} Then using (\ref{ac}) and (\ref{ca}) we obtain \begin{equation} A_{\varepsilon }\left( u_{\varepsilon }\right) \leq \frac{1}{c_{1}} (\int_{\Omega _{\varepsilon }}f_{\varepsilon }u_{\varepsilon }dx+\int_{\Sigma _{\varepsilon }}\left( \varepsilon g_{\varepsilon }+\alpha _{\varepsilon }^{-}u_{\varepsilon }\right) u_{\varepsilon }d\sigma _{\varepsilon }\left( x\right) ) \label{ione} \end{equation} where $c_{1}=\min \left( m,\mu _{0},\alpha _{0}\right) >0$. 
Applying Young's inequality on the right hand side of (\ref{ione}), we get \begin{eqnarray} A_{\varepsilon }\left( u_{\varepsilon }\right) &\leq &\frac{1}{c_{1}}[\frac{\beta ^{2}}{2}\Vert f_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\frac{1}{2\beta ^{2}}\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2} \notag \\ &&+\frac{\gamma ^{2}\varepsilon ^{2}}{2}\Vert g_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}+\left( \frac{1}{2\gamma ^{2}}+\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}]. \label{itwo} \end{eqnarray} But in view of (\ref{cs}), inequality (\ref{itwo}) becomes \begin{eqnarray*} A_{\varepsilon }\left( u_{\varepsilon }\right) &\leq &\frac{1}{c_{1}}[\left( \frac{1}{2\beta ^{2}}+\left( \frac{1}{2\gamma ^{2}}+\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \frac{C_{s}}{\varepsilon \delta }\right) \Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2} \\ &&+\left( \frac{1}{2\gamma ^{2}}+\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) C_{s}\varepsilon \delta \Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}]+C. \end{eqnarray*} Now, an appropriate choice of $\beta ,\gamma ,\delta $ yields \begin{equation*} A_{\varepsilon }\left( u_{\varepsilon }\right) =\Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}\leq C. \end{equation*} The Proposition is now proved. \end{proof}
One is led to determine the homogenized problem of (\ref{eq1})-(\ref{eq3}); namely, we study the limiting behavior of the solutions $u_{\varepsilon }$ as $\varepsilon $ tends to zero. This is the subject of the next section.
\section{Homogenization Procedure\label{3}}
We shall use the well-known two-scale convergence method, whose definition and main results we briefly recall here.
\subsection{Two-scale Convergence}
\begin{definition} \begin{enumerate} \item A sequence $v_{\varepsilon }$ in $L^{2}(\Omega )$ \emph{two-scale} converges to $v_{0}(x,y)\in L^{2}\left( \Omega \times Y\right) $, and we denote this $v_{\varepsilon }\rightrightarrows v_{0}$, if for any $\varphi (x,y)\in L^{2}\left( \Omega ;C_{\#}\left( Y\right) \right) $, \begin{equation*} \lim_{\varepsilon \rightarrow 0}\int_{\Omega }v_{\varepsilon }(x)\varphi \left( x,\frac{x}{\varepsilon }\right) dx=\int_{\Omega }\int_{Y}v_{0}(x,y)\varphi (x,y)dydx. \end{equation*}
\item A sequence $v_{\varepsilon }$ in $L^{2}(\Sigma _{\varepsilon })$ \emph{two-scale} converges to $v_{0}(x,y)\in L^{2}\left( \Omega \times \Sigma \right) $, and we denote this $v_{\varepsilon }\overset{S}{\rightrightarrows }v_{0}$, if for any $\varphi (x,y)\in C\left( \overline{\Omega };C_{\#}\left( Y\right) \right) $, \begin{equation*} \lim_{\varepsilon \rightarrow 0}\int_{\Sigma _{\varepsilon }}\varepsilon v_{\varepsilon }(x)\varphi \left( x,\frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{\Sigma }v_{0}(x,y)\varphi (x,y)d\sigma \left( y\right) dx. \end{equation*} \end{enumerate} \end{definition}
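As a numerical illustration of the first part of this definition (all choices below are hypothetical): take $v_{\varepsilon }(x)=x\cos (2\pi x/\varepsilon )$ on $\Omega =(0,1)$, whose two-scale limit is $v_{0}(x,y)=x\cos (2\pi y)$, and the admissible test function $\varphi (x,y)=\cos (2\pi y)$; then $\int_{\Omega }v_{\varepsilon }(x)\varphi (x,\frac{x}{\varepsilon })dx$ should approach $\int_{\Omega }\int_{Y}v_{0}\varphi \,dydx=1/4$.

```python
import numpy as np

def oscillating_integral(eps, n=200_000):
    """Midpoint approximation of int_0^1 v_eps(x) * phi(x, x/eps) dx."""
    x = (np.arange(n) + 0.5) / n
    v_eps = x * np.cos(2.0 * np.pi * x / eps)  # v_eps(x) = x cos(2 pi x / eps)
    phi = np.cos(2.0 * np.pi * x / eps)        # phi(x, y) = cos(2 pi y) at y = x/eps
    return float(np.mean(v_eps * phi))

# two-scale limit: int_0^1 int_0^1 x cos^2(2 pi y) dy dx = (1/2) * (1/2) = 1/4
two_scale_limit = 0.25
approx = oscillating_integral(0.01)
```

Note that $v_{\varepsilon }\rightharpoonup 0$ weakly in $L^{2}(0,1)$, while its two-scale limit retains the oscillation profile; this is precisely the extra information the method keeps.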
\begin{proposition} \label{p1}
\begin{enumerate} \item For any uniformly bounded sequence $v_{\varepsilon }$ in $L^{2}\left( \Omega \right) $ one can extract a subsequence, still denoted by $\varepsilon $, and a two-scale limit $v_{0}\in L^{2}\left( \Omega \times Y\right) $ such that $v_{\varepsilon }\rightrightarrows v_{0}$.
\item If $v_{\varepsilon }$ is in $L^{2}\left( \Sigma _{\varepsilon }\right) $ such that \begin{equation*} \varepsilon \Vert v_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}\leq C\text{,} \end{equation*} then one can extract a subsequence, still denoted by $\varepsilon $, and a two-scale limit $v_{0}\in L^{2}\left( \Omega \times \Sigma \right) $ such that $v_{\varepsilon }\overset{S}{\rightrightarrows }v_{0}$. \end{enumerate} \end{proposition}
\subsection{Two-scale limit system}
By virtue of the estimate (\ref{dc}) and Proposition \ref{p1}, there exist $f\in L^{2}\left( \Omega \times Y\right) $ and $g\in L^{2}\left( \Omega \times \Sigma \right) $ such that, up to a subsequence, one has \begin{equation} \chi \left( \frac{x}{\varepsilon }\right) f_{\varepsilon }\left( x\right) \rightrightarrows \chi \left( y\right) f\left( x,y\right) ,\ g_{\varepsilon }\left( x\right) \overset{S}{\rightrightarrows }g\left( x,y\right) . \label{fg} \end{equation}
Furthermore, we have:
\begin{lemma} \label{l2} \cite{all}, \cite{ngue1}, \cite{adh}. Let $u_{\varepsilon }$ be the solution of (\ref{wf}). Then there exists a subsequence still denoted by $\varepsilon $ and two functions $u\left( x\right) \in H_{0}^{1}\left( \Omega \right) $, $u_{1}\left( x,y\right) \in L^{2}(\Omega ;H_{\#}^{1}(Y_{s})/\mathbb{R})$ such that \begin{equation} \chi \left( \frac{x}{\varepsilon }\right) u_{\varepsilon }\left( x\right) \rightrightarrows \chi \left( y\right) u\left( x\right) \text{,} \label{l2_1} \end{equation} \begin{equation} \chi \left( \frac{x}{\varepsilon }\right) \nabla u_{\varepsilon }\left( x\right) \rightrightarrows \chi \left( y\right) \left( \nabla u\left( x\right) +\nabla _{y}u_{1}\left( x,y\right) \right) \text{.} \label{l2_2} \end{equation} Moreover we have \begin{equation} \lim_{\varepsilon \rightarrow 0}\int_{\Sigma _{\varepsilon }}\varepsilon u_{\varepsilon }(x)\varphi \left( x,\frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{\Sigma }u(x)\varphi (x,y)d\sigma \left( y\right) dx \label{l2_3} \end{equation} for every $\varphi \in C(\overline{\Omega };C_{\#}(Y_{s}))$. \end{lemma}
\begin{lemma} \label{l3}We have \begin{equation*} \lim_{\varepsilon \rightarrow 0}\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\varphi \left( x\right) \alpha \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{\Sigma }u_{1}(x,y)\varphi (x)\alpha \left( y\right) d\sigma \left( y\right) dx \end{equation*} for all $\varphi \left( x\right) \in C\left( \overline{\Omega }\right) $. \end{lemma}
\begin{proof} Let $\theta \left( y\right) \in H_{\#}^{1}(Y_{s})/\mathbb{R}$ be a solution of the problem \begin{equation} \left\{ \begin{array}{c} -\Delta \theta \left( y\right) =0\text{ in }Y_{s}\text{,} \\ \left( \nabla \theta \left( y\right) \right) \cdot \nu \left( y\right) =\alpha \left( y\right) \text{ on }\Sigma \text{,} \\ y\longmapsto \theta \left( y\right) \text{ Y-periodic.} \end{array} \right. \label{pr1} \end{equation} Such a function exists since $\alpha (y)$ satisfies (\ref{as1}), which is the compatibility condition for the solvability of the problem (\ref{pr1}). Set $\psi \left( y\right) =\nabla \theta \left( y\right) $ and consider the function $\psi _{\varepsilon }\left( x\right) =\psi \left( \frac{x}{\varepsilon }\right) $. Since $div\,\psi _{\varepsilon }\left( x\right) =\varepsilon ^{-1}\Delta \theta \left( \frac{x}{\varepsilon }\right) =0$ in $\Omega _{\varepsilon }$ and $u_{\varepsilon }=0$ on $\Gamma $, integration by parts gives \begin{eqnarray} \int_{\Omega _{\varepsilon }}\nabla u_{\varepsilon }(x)\psi _{\varepsilon }\left( x\right) dx &=&\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\psi _{\varepsilon }\left( x\right) \cdot \nu \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) \label{le12} \\ &=&\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\alpha \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) . \notag \end{eqnarray} Passing to the limit in the left hand side of (\ref{le12}) and taking into account (\ref{l2_2}) we find \begin{equation} \underset{\varepsilon \rightarrow 0}{\lim }\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\alpha \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{Y}\chi \left( y\right) \left( \nabla u\left( x\right) +\nabla _{y}u_{1}\left( x,y\right) \right) \psi \left( y\right) dydx. \label{le13} \end{equation} Since $u\in H_{0}^{1}\left( \Omega \right) $ we have \begin{equation*} \int_{\Omega }\int_{Y}\chi \left( y\right) \nabla u\left( x\right) \psi \left( y\right) dydx=\left( \int_{\Omega }\nabla u\left( x\right) dx\right) \int_{Y}\chi \left( y\right) \psi \left( y\right) dy=0. 
\end{equation*} Hence the right hand side of (\ref{le13}) becomes \begin{equation*} \int_{\Omega }\int_{Y}\chi \left( y\right) \nabla _{y}u_{1}\left( x,y\right) \psi \left( y\right) dydx. \end{equation*} On the other hand, we have \begin{eqnarray*} \int_{\Omega }\int_{Y}\chi \left( y\right) \nabla _{y}u_{1}\left( x,y\right) \psi \left( y\right) dydx &=&-\int_{\Omega }\int_{Y_{s}}u_{1}\left( x,y\right) div_{y}\psi \left( y\right) dydx \\ &&+\int_{\Omega }\int_{\Sigma }u_{1}\left( x,y\right) \psi \left( y\right) \cdot \nu \left( y\right) d\sigma \left( y\right) dx \\ &=&\int_{\Omega }\int_{\Sigma }u_{1}\left( x,y\right) \alpha \left( y\right) d\sigma \left( y\right) dx \end{eqnarray*} which proves the Lemma. \end{proof}
Now we are able to give the two-scale limit system:
\begin{proposition} The couple $\left( u,u_{1}\right) \in H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) $ is the solution of the following two-scale homogenized system: \begin{gather} -div_{y}\left( A\left( \nabla u+\nabla _{y}u_{1}\right) \right) =0\text{\ \ \ in }\Omega \times Y_{s}, \label{tsl1} \\ \left( A\left( \nabla u+\nabla _{y}u_{1}\right) \cdot \nu \right) +\alpha u=0\ \ \text{ on }\Omega \times \Sigma \text{,} \label{tsl2} \\ y\longmapsto u_{1}\text{\ \ \ }Y-\text{periodic,} \label{tsl3} \\ -div_{x}\left( \int_{Y_{s}}A\left( \nabla u+\nabla _{y}u_{1}\right) dy\right) +\tilde{\mu}u+\int_{\Sigma }\alpha u_{1}d\sigma \left( y\right) =F\ \ \ \text{in}\ \Omega \text{,} \label{tsl4} \\ u=0\text{\ \ \ on }\Gamma \label{tsl5} \end{gather} where $\tilde{\mu}=\int_{Y_{s}}\mu \left( y\right) dy$ and $F(x)=\int_{Y}\chi \left( y\right) f(x,y)dy+\int_{\Sigma }g\left( x,y\right) d\sigma \left( y\right) .$ \end{proposition}
\begin{proof} Let $\varphi \left( x\right) \in \mathcal{D}\left( \Omega \right) $ and $\varphi _{1}\left( x,y\right) \in \mathcal{D}\left( \Omega ;C_{\#}^{\infty }\left( Y_{s}\right) \right) $. Choosing $v\left( x\right) =\varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) $ as a test function in problem (\ref{wf}), we have \begin{gather} \int_{\Omega }\nabla u_{\varepsilon }\left( x\right) \chi \left( \frac{x}{\varepsilon }\right) ^{t}A(\frac{x}{\varepsilon })\left( \nabla \varphi \left( x\right) +\varepsilon \nabla _{x}\varphi _{1}\left( x,\frac{x}{\varepsilon }\right) +\nabla _{y}\varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right) dx \notag \\ +\int_{\Omega _{\varepsilon }}\mu _{\varepsilon }\left( x\right) u_{\varepsilon }\left( x\right) \left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] dx \notag \\ +\int_{\Sigma _{\varepsilon }}\alpha _{\varepsilon }\left( x\right) u_{\varepsilon }\left( x\right) \left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] d\sigma _{\varepsilon }\left( x\right) = \label{e4} \\ \int_{\Omega }\chi \left( \frac{x}{\varepsilon }\right) f_{\varepsilon }(x)\left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] dx \notag \\ +\varepsilon \int_{\Sigma _{\varepsilon }}g_{\varepsilon }\left( x\right) \left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] d\sigma _{\varepsilon }\left( x\right) . \notag \end{gather} By virtue of (\ref{l2_2}) the first two terms of the left hand side of (\ref{e4}) converge to \begin{eqnarray} &&\int_{\Omega }\int_{Y_{s}}[A\left( y\right) \left[ \nabla u\left( x\right) +\nabla _{y}u_{1}\left( x,y\right) \right] \left[ \nabla \varphi \left( x\right) +\nabla _{y}\varphi _{1}\left( x,y\right) \right] \notag \\ &&+\mu \left( y\right) u\left( x\right) \varphi \left( x\right) ]dydx. 
\label{r1} \end{eqnarray} Taking into account (\ref{l2_3}) and Lemma \ref{l3}, the third term of the left hand side of (\ref{e4}) tends to \begin{equation} \int_{\Omega }\int_{\Sigma }\alpha \left( y\right) u_{1}\left( x,y\right) \varphi \left( x\right) d\sigma \left( y\right) dx+\int_{\Omega }\int_{\Sigma }\alpha \left( y\right) u\left( x\right) \varphi _{1}\left( x,y\right) d\sigma \left( y\right) dx. \label{r2} \end{equation} Thanks to (\ref{fg}) the right hand side of (\ref{e4}) converges to \begin{equation} \int_{\Omega }\left[ \int_{Y_{s}}f(x,y)dy+\int_{\Sigma }g\left( x,y\right) d\sigma \left( y\right) \right] \varphi \left( x\right) dx=\int_{\Omega }F(x)\varphi \left( x\right) dx. \label{r3} \end{equation}
By the density of $\mathcal{D}\left( \Omega \right) \times \mathcal{D}\left( \Omega ;C_{\#}^{\infty }\left( Y_{s}\right) \right) $ in $H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) $, we get from the limits (\ref{r1})-(\ref{r3}) the following two-scale weak formulation of the system: \begin{equation} \left\{ \begin{array}{c} \left( u,u_{1}\right) \in H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) \text{ is such that } \\ \int_{\Omega }\int_{Y_{s}}A\left[ \nabla u+\nabla _{y}u_{1}\right] \left[ \nabla v+\nabla _{y}v_{1}\right] dydx+ \\ \tilde{\mu}\int_{\Omega }uvdx+\int_{\Omega }\int_{\Sigma }\alpha u_{1}vd\sigma \left( y\right) dx+\int_{\Omega }\int_{\Sigma }\alpha uv_{1}d\sigma \left( y\right) dx=\int_{\Omega }Fvdx \end{array} \right. \label{e5} \end{equation} for all $\left( v,v_{1}\right) \in H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) $. Integration by parts in (\ref{e5}) with respect to $v_{1}$ ($v=0$) gives (\ref{tsl1})-(\ref{tsl3}), and with respect to $v$ ($v_{1}=0$) yields (\ref{tsl4})-(\ref{tsl5}). The Proposition is now proved. \end{proof}
Thanks to the linearity of the first equation of (\ref{tsl1}), we can compute $u_{1}(x,y)$ in terms of $u\left( x\right) $ as follows: \begin{equation} u_{1}(x,y)=\underset{k=1}{\overset{n}{\sum }}\zeta _{k}\left( y\right) \frac{ \partial u}{\partial x_{k}}\left( x\right) +\gamma (y)u\left( x\right) + \tilde{u}\left( x\right) \label{rel1} \end{equation} where for each $k$ the function $\zeta _{k}\left( y\right) $ satisfies the auxiliary problem: \begin{eqnarray*} -div\left( A\left( y\right) \nabla \zeta _{k}\left( y\right) \right) &=&div\left( A\left( y\right) e_{k}\right) \text{ in }\Omega \times Y_{s} \text{, } \\ A\left( y\right) \nabla \zeta _{k}\left( y\right) \cdot \nu &=&-A\left( y\right) e_{k}\cdot \nu \text{ on }\Omega \times \Sigma \text{,} \\ y &\rightarrow &\zeta _{k}\left( y\right) \text{ }Y\text{-periodic, }x\in \Omega \text{,} \end{eqnarray*} where $e_{k}=\left( \delta _{ik}\right) _{1\leq i\leq n}$ and $\delta _{ik}$ is the Kronecker symbol.
The function $\gamma (y)$ satisfies \begin{gather*} -div\left( A\left( y\right) \nabla \gamma \left( y\right) \right) =0\text{ in }\Omega \times Y_{s}\text{, } \\ A\left( y\right) \nabla \gamma \left( y\right) \cdot \nu =-\alpha \left( y\right) \text{ on }\Omega \times \Sigma \text{,} \\ y\rightarrow \gamma \left( y\right) \text{ }Y\text{-periodic, }x\in \Omega \text{.} \end{gather*}
Finally, inserting the relation (\ref{rel1}) into the equation (\ref{tsl4}) yields the homogenized equation \begin{equation} -div\left( A^{\hom }\nabla u(x)\right) +B\cdot \nabla u(x)+\lambda u\left( x\right) =F\left( x\right) \label{h1} \end{equation} where $A^{\hom }$ is the matrix with coefficients \begin{equation*} a_{ij}^{\hom }=\underset{k=1}{\overset{n}{\sum }}\int_{Y_{s}}\left[ a_{ik}\left( y\right) \left( \delta _{kj}+\frac{\partial \zeta _{j}}{ \partial y_{k}}\left( y\right) \right) \right] dy, \end{equation*} $B$ is the vector with components \begin{eqnarray*} b_{i} &=&\int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma \left( y\right) -\underset{k=1}{\overset{n}{\sum }} \int_{Y_{s}}a_{ik}\left( y\right) \frac{\partial \gamma }{\partial y_{k}} \left( y\right) dy \\ &=&\int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma \left( y\right) -\int_{Y_{s}}A\left( y\right) \nabla \gamma \left( y\right) \cdot e_{i}\,dy \end{eqnarray*} and $\lambda $ is the real number \begin{eqnarray*} \lambda &=&\int_{\Sigma }\alpha \left( y\right) \gamma \left( y\right) d\sigma \left( y\right) +\tilde{\mu} \\ &=&-\int_{Y_{s}}A\left( y\right) \nabla \gamma \left( y\right) \cdot \nabla \gamma \left( y\right) dy+\tilde{\mu}. \end{eqnarray*}
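The second expression for $\lambda $ follows by taking $\gamma $ itself as a test function in the cell problem for $\gamma $ (a sketch, with the orientation convention for $\nu $ used in the cell problems above): \begin{equation*} \int_{Y_{s}}A\left( y\right) \nabla \gamma \cdot \nabla \gamma \,dy=\int_{\Sigma }\left( A\left( y\right) \nabla \gamma \cdot \nu \right) \gamma \,d\sigma \left( y\right) =-\int_{\Sigma }\alpha \left( y\right) \gamma \left( y\right) d\sigma \left( y\right) , \end{equation*} where the last equality uses the boundary condition $A\left( y\right) \nabla \gamma \cdot \nu =-\alpha \left( y\right) $ on $\Sigma $.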
Thus we have proved the following result.
\begin{theorem} Let $u_{\varepsilon }$ be the solution in $V_{\varepsilon }$ of the Robin boundary problem (\ref{eq1})-(\ref{eq3}). Then $\chi _{\varepsilon }\left( x\right) u_{\varepsilon }\left( x\right) $ two-scale converges to $\chi \left( y\right) u\left( x\right) $ where $u\left( x\right) $ is a solution in $H_{0}^{1}\left( \Omega \right) $ of the homogenized problem: \begin{equation} \left\{ \begin{array}{l} -div\left( A^{\hom }\nabla u(x)\right) +B\cdot \nabla u(x)+\lambda u\left( x\right) =F\left( x\right) \text{ in }\Omega \text{,} \\ u=0\text{ on }\Gamma \text{.} \end{array} \right. \label{hom} \end{equation} \end{theorem}
\begin{remark} We observe that the limit equation (\ref{hom}) contains an extra strange term of order $1$, namely the convection term $B\cdot \nabla u$. The vector $B$ depends on both the matrix $A$ and the resistivity function $\alpha $. For example, if $A$ is symmetric then $B=0$. Indeed, \begin{equation*} \int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma =-\int_{Y_{s}}A\left( y\right) \nabla \gamma \left( y\right) \cdot \nabla \zeta _{i}\left( y\right) dy=-\int_{Y_{s}}A\left( y\right) \nabla \zeta _{i}\left( y\right) \cdot \nabla \gamma \left( y\right) dy, \end{equation*} where the second equality uses the symmetry of $A$; testing the cell problem for $\zeta _{i}$ with $\gamma $ then gives \begin{equation*} \int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma =\int_{Y_{s}}A\left( y\right) e_{i}\cdot \nabla \gamma \left( y\right) dy. \end{equation*}
\end{remark}
\end{document}
\begin{document}
\title{Imprecise Continuous-Time Markov Chains: \ Efficient Computational Methods with Guaranteed Error Bounds}
\begin{abstract}
Imprecise continuous-time Markov chains are a robust type of continuous-time Markov chains that allow for partially specified time-dependent parameters.
Computing inferences for them requires the solution of a non-linear differential equation.
As there is no general analytical expression for this solution, efficient numerical approximation methods are essential to the applicability of this model.
We here improve the uniform approximation method of \cite{2016Krak} in two ways and propose a novel and more efficient adaptive approximation method.
For ergodic chains, we also provide a method that allows us to approximate stationary distributions up to any desired maximal error.
\end{abstract} \begin{keywords}
Imprecise continuous-time Markov chain; lower transition operator; lower transition rate operator; approximation method; ergodicity; coefficient of ergodicity. \end{keywords}
\section{Introduction} \label{sec:Intro} Markov chains are a popular type of stochastic processes that can be used to model a variety of systems with uncertain dynamics, both in discrete and continuous time. In many applications, however, the core assumption of a Markov chain---i.e., the Markov property---is not entirely justified. Moreover, it is often difficult to exactly determine the parameters that characterise the Markov chain. In an effort to handle these modelling errors in an elegant manner, several authors have recently turned to imprecise probabilities \citep{decooman2009,2013Skulj,2012Hermans,2015Skulj,2016Krak,2017DeBock}.
As \cite{2016Krak} thoroughly demonstrate, making inferences about an imprecise continuous-time Markov chain---determining lower and upper expectations or probabilities---requires the solution of a non-linear vector differential equation. To the best of our knowledge, this differential equation cannot be solved analytically, at least not in general. \cite{2016Krak} proposed a method to numerically approximate the solution of the differential equation, and argued that it outperforms the approximation method that \cite{2015Skulj} previously introduced. One of the main results of this contribution is a novel approximation method that outperforms that of \cite{2016Krak}.
An important property---both theoretically and practically---of continuous-time Markov chains is the behaviour of the solution of the differential equation as the time parameter recedes to infinity. If regardless of the initial condition the solution converges, we say that the chain is ergodic. We show that in this case the approximation is guaranteed to converge as well. This constitutes the second main result of this contribution and serves as a motivation behind the novel approximation method. Furthermore, we also quantify a worst-case convergence rate for the approximation. This unites the work of \cite{2015Skulj}, who studied the rate of convergence for discrete-time Markov chains, and \cite{2017DeBock}, who studied the ergodic behaviour of continuous-time Markov chains from a qualitative point of view. One of the uses of our worst-case convergence rate is that it allows us to approximate the limit value of the solution up to a guaranteed error.
This paper is an extended preprint of \citep{2017erreygers}. Recently, it has come to our attention that one of the results in that paper, namely \cref{prop:CoeffOfErgod:ErgodicUpperBound}, is false. Fortunately, none of the other results in \citep{2017erreygers}---and hence also in this preprint---depend on \cref{prop:CoeffOfErgod:ErgodicUpperBound} and the main conclusions and contributions of the paper therefore remain intact. For that reason, we have only made the following two modifications with respect to the previous version: we have omitted the proof of \cref{prop:CoeffOfErgod:ErgodicUpperBound}, and we have added a counterexample to show that the statement is indeed incorrect.
To ensure the readability of the main text, we have gathered the proofs of all the results in the Appendix. In this Appendix, we also discuss the ergodicity of both discrete and continuous-time Markov chains more thoroughly.
\section{Mathematical preliminaries} \label{sec:Preliminaries} Throughout this contribution, we denote the set of real, non-negative real and strictly positive real numbers by $\mathbb{R}$, $\reals_{\geq 0}$ and $\reals_{> 0}{}$, respectively. The set of natural numbers is denoted by $\mathbb{N}$; if we include zero, we write $\nats_{0} \coloneqq \mathbb{N} \cup \{ 0 \}$. For any set $S$, we let $\card{S}$ denote its cardinality. If $a$ and $b$ are two real numbers, we say that $a$ is lower (greater) than $b$ if $a \leq b$ ($a \geq b$), and that $a$ is strictly lower (greater) than $b$ if $a < b$ ($a > b$).
\subsection{Gambles and norms} We consider a finite \emph{state space} $\mathcal{X}$, and are mainly concerned with real-valued functions on $\mathcal{X}$. All of these real-valued functions on $\mathcal{X}$ are collected in the set $\setoffna$, which is a vector space. If we identify the state space $\mathcal{X}$ with $\{1, \dots, \card{\mathcal{X}}\}$, then any function $f \in \setoffna$ can be identified with a vector: for all $x\in\mathcal{X}$, the $x$-component of this vector is $f(x)$. A special function on $\mathcal{X}$ is the indicator $\indic{A}$ of an event $A$. For any $A \subseteq \mathcal{X}$, it is defined for all $x \in \mathcal{X}$ as $\indica{A}{x} = 1$ if $x \in A$ and $\indica{A}{x} = 0$ otherwise. In order not to obfuscate the notation too much, for any $y \in \mathcal{X}$ we write $\indic{y}$ instead of $\indic{\{y\}}$. If it is required from the context, we will also identify the real number $\gamma \in \mathbb{R}$ with the map $\gamma$ from $\mathcal{X}$ to $\mathbb{R}$, defined as $\gamma(x) = \gamma$ for all $x \in \mathcal{X}$.
We provide the set $\setoffna$ of functions with the standard maximum norm $\norm{\cdot}$, defined for all $f\in\setoffna$ as $\norm{f} \coloneqq \max \left\{ \abs{f(x)} \colon x \in \mathcal{X} \right\}$. A seminorm that captures the variation of $f \in \setoffna$ will also be of use; we therefore define the variation seminorm $\norm{f}_{v} \coloneqq \max f - \min f$. Since the value $\norm{f}_{v} / 2$ occurs often in formulas, we introduce the shorthand notation $\norm{f}_{c} \coloneqq \norm{f}_{v}/2$.
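A quick way to make these (semi)norms concrete is to compute them for a function on a small state space, identifying the function with a vector as above (a minimal sketch; the Python names are our own):

```python
import numpy as np

def max_norm(f):
    """Standard maximum norm: ||f|| = max_x |f(x)|."""
    return np.max(np.abs(f))

def var_seminorm(f):
    """Variation seminorm: ||f||_v = max f - min f."""
    return np.max(f) - np.min(f)

def c_seminorm(f):
    """Shorthand: ||f||_c = ||f||_v / 2."""
    return var_seminorm(f) / 2

f = np.array([-1.0, 2.0, 0.5])   # a function on a three-element state space
print(max_norm(f), var_seminorm(f), c_seminorm(f))   # 2.0 3.0 1.5
```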
\subsection{Non-negatively homogeneous operators} An operator $A$ that maps $\setoffna$ to $\setoffna$ is \emph{non-negatively homogeneous} if for all $\mu \in \reals_{\geq 0}$ and all $f \in \setoffna$, $A (\mu f) = \mu A f$. The maximum norm $\norm{\cdot}$ for functions induces an operator norm: \[
\norm{A}
\coloneqq \sup \{ \norm{A f} \colon f \in \setoffna, \norm{f} = 1 \}. \] If for all $\mu \in \mathbb{R}$ and all $f,g \in \setoffna$, $A(\mu f + g) = \mu A f + A g$, then the operator $A$ is \emph{linear}. In that case, it can be identified with a matrix of dimension $\card{\mathcal{X}}\times\card{\mathcal{X}}$, the $(x,y)$-component of which is $[A \indic{y}](x)$. The identity operator $I$ is an important special case, defined for all $f \in \setoffna$ as $I f \coloneqq f$.
Two types of non-negatively homogeneous operators play a vital role in the theory of imprecise Markov chains: lower transition operators and lower transition rate operators. \begin{definition} \label{def:LowerTransitionOperator}
An operator $\underline{T}$ from $\setoffna$ to $\setoffna$ is called a \emph{lower transition operator} if for all $f, g \in \setoffna$ and all $\mu \in \reals_{\geq 0}$:
\begin{enumerate}[threecol, label=L\arabic*:, ref=(L\arabic*), series=LTO]
\item \label{def:LTO:DominatesMin}
$\underline{T} f \geq \min f$;
\item \label{def:LTO:SuperAdditive}
$\underline{T}(f + g) \geq \underline{T} f + \underline{T} g$;
\item \label{def:LTO:NonNegativelyHom}
$\underline{T} (\mu f) = \mu \underline{T} f$.
\end{enumerate} \end{definition} Every lower transition operator $\underline{T}$ has a conjugate upper transition operator $\overline{T}$, defined for all $f \in \setoffna$ as $\overline{T} f \coloneqq - \underline{T} (- f)$.
\begin{definition} \label{def:LowerTransitionRateOperator}
An operator $\underline{Q}$ from $\setoffna$ to $\setoffna$ is called a \emph{lower transition rate operator} if for any $f,g \in \setoffna$, any $\mu \in \reals_{\geq 0}$, any $\gamma\in\mathbb{R}$ and any $x,y \in \mathcal{X}$ such that $x\neq y$:
\begin{enumerate}[twocol, label=R\arabic*:, ref=(R\arabic*), series=LTRO]
\item \label{def:LTRO:Constant}
$\underline{Q} \gamma = 0$;
\item \label{def:LTRO:SuperAdditive}
${\underline{Q}(f + g) \geq \underline{Q} f + \underline{Q} g}$;
\item \label{def:LTRO:NonNegativelyHom}
$\underline{Q} (\mu f) = \mu \underline{Q} f$;
\item \label{def:LTRO:Sign}
$[\underline{Q} \indic{x}](y) \geq 0$.
\end{enumerate} \end{definition} The conjugate lower transition rate operator $\overline{Q}$ is defined for all $f \in \setoffna$ as $\overline{Q} f \coloneqq - \underline{Q} (-f)$.
As will become clear in Section~\ref{sec:MCs}, lower transition operators and lower transition rate operators are tightly linked. For instance, we can use a lower transition rate operator to construct a lower transition operator. One way is to use Eqn.~\eqref{eqn:TDLTO:FunctionDifferentialEquation} further on. Another one is given in the following proposition, which is a strengthened version of \citep[Proposition~5]{2017DeBock}. \begin{proposition} \label{prop:IPlusDeltaQLowTranOp}
Consider any lower transition rate operator $\underline{Q}$ and any $\delta \in \reals_{\geq 0}$.
Then the operator $(I + \delta \underline{Q})$ is a lower transition operator if and only if $\delta \norm{\underline{Q}} \leq 2$. \end{proposition}
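As a quick numerical sanity check of this proposition, consider the simplest kind of lower transition rate operator: a linear one, given by a transition rate matrix. The sketch below (the matrix values and helper function are our own illustration) spot-checks property \ref{def:LTO:DominatesMin} on random test functions:

```python
import numpy as np

# A transition rate matrix is a (linear) lower transition rate operator.
Q = np.array([[-3.0, 3.0],
              [ 2.0, -2.0]])
norm_Q = 2 * np.max(np.abs(np.diag(Q)))   # norm of Q; here 6.0

def satisfies_L1(T, trials=1000, seed=0):
    """Spot-check property L1 (T f >= min f) on random test functions."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        f = rng.uniform(-1, 1, size=2)
        if np.min(T @ f) < np.min(f) - 1e-12:
            return False
    return True

I = np.eye(2)
print(satisfies_L1(I + (1 / 3) * Q))   # delta ||Q|| = 2: a lower transition operator
print(satisfies_L1(I + 1.0 * Q))       # delta ||Q|| = 6 > 2: property L1 fails
```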
We end this section with the first---although minor---novel result of this contribution. The norm of a lower transition rate operator is essential for all the approximation methods that we will discuss. The following proposition supplies us with an easy formula for determining it. \begin{proposition} \label{prop:LTRO:PropositionNorm}
Let $\underline{Q}$ be a lower transition rate operator.
Then $\norm{\underline{Q}} = 2 \max \{ \abs{[\underline{Q} \indic{x}](x)} \colon x \in \mathcal{X} \}$. \end{proposition}
\begin{binex} \label{binex:LTRO}
Consider a binary state space $\mathcal{X} = \{0, 1\}$ and two closed intervals $[\lowq{0}, \upq{0}] \subset \reals_{\geq 0}$ and $[\lowq{1}, \upq{1}] \subset \reals_{\geq 0}$.
Let
\begin{equation*}
\underline{Q} f
\coloneqq \min \left\{
\begin{bmatrix}
q_0 (f(1) - f(0)) \\
q_1 (f(0) - f(1))
\end{bmatrix}
\colon q_0 \in [\lowq{0}, \upq{0}], q_1 \in [\lowq{1}, \upq{1}] \right\}
\text{ for all } f \in \setoffna.
\end{equation*}
Then one can easily verify that $\underline{Q}$ is a lower transition rate operator.
\cite{2016Krak} also consider a running example with a binary state space, but they let $\mathcal{X} \coloneqq \{ \texttt{healthy}, \texttt{sick} \}$.
We here identify \texttt{healthy} with $0$ and \texttt{sick} with $1$.
In \cite[Example~18]{2016Krak}, they propose the following values for the transition rates: $[\lowq{0}, \upq{0}] \coloneqq [1/52, 3/52]$ and $[\lowq{1}, \upq{1}] \coloneqq [1/2,2]$.
It takes \citeauthor{2016Krak} a lot of work to determine the exact value of the norm of $\underline{Q}$, see \cite[Example~19]{2016Krak}.
We simply use Proposition~\ref{prop:LTRO:PropositionNorm}: $\smash{\norm{\underline{Q}} = 2 \max\{ 3/52, 2 \} = 4}$. \end{binex}
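The computation in this example is easy to reproduce. The sketch below (our own Python transcription) implements the $\underline{Q}$ of the running example and evaluates the formula of Proposition~\ref{prop:LTRO:PropositionNorm}:

```python
import numpy as np

Q0 = (1 / 52, 3 / 52)   # interval of rates: healthy (0) -> sick (1)
Q1 = (1 / 2, 2.0)       # interval of rates: sick (1) -> healthy (0)

def lower_Q(f):
    """Lower transition rate operator of the running binary example."""
    d0, d1 = f[1] - f[0], f[0] - f[1]
    return np.array([min(Q0[0] * d0, Q0[1] * d0),
                     min(Q1[0] * d1, Q1[1] * d1)])

# ||Q|| = 2 max_x |[Q I_x](x)|, with I_x the indicator of state x.
indicators = np.eye(2)
norm_Q = 2 * max(abs(lower_Q(indicators[x])[x]) for x in range(2))
print(norm_Q)   # 4.0
```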
\section{Imprecise continuous-time Markov chains} \label{sec:MCs} For any lower transition rate operator $\underline{Q}$ and any $f \in \setoffna$, \cite{2015Skulj} has shown that the differential equation
\begin{equation} \label{eqn:TDLTO:FunctionDifferentialEquation}
\frac{\mathrm{d}}{\mathrm{d} t} \lowtranopa{t} f = \underline{Q} \lowtranopa{t} f
\end{equation} with initial condition $\lowtranopa{0} f \coloneqq f$ has a unique solution for all $t \in \reals_{\geq 0}$.
Later, \cite{2017DeBock} proved that the time-dependent operator $\lowtranopa{t}$ itself satisfies a similar differential equation, and that it is a lower transition operator.
Finding the unique solution of Eqn.~\eqref{eqn:TDLTO:FunctionDifferentialEquation} is non-trivial. Fortunately, we can approximate this solution, as by \cite[Proposition~10]{2017DeBock} \begin{equation} \label{eqn:TDLTO:LimitFormula}
\lowtranopa{t}
= \lim_{n \to \infty} \left( I + \frac{t}{n} \underline{Q} \right)^{n}. \end{equation}
\begin{binex} \label{binex:AnalyticalExpressionsForAppliedLTO}
In the simple case of Example~\ref{binex:LTRO}, we can use Eqn.~\eqref{eqn:TDLTO:LimitFormula} to obtain analytical expressions for the solution of Eqn.~\eqref{eqn:TDLTO:FunctionDifferentialEquation}.
Assume that $\lowq{0} + \upq{1}> 0$ and fix some $t \in \reals_{\geq 0}$. Then
\begin{align*}
[\lowtranopa{t} f](0)
= f(0) + \lowq{0} h(t)
~~\text{and}~~
[\lowtranopa{t} f](1)
= f(1) - \upq{1} h(t)
~~\text{for all $f\in\setoffna$ with $f(0)\leq f(1)$,}
\end{align*}
where $h(t) \coloneqq \norm{f}_{v} (\lowq{0} + \upq{1})^{-1} \big(1 - e^{-t (\lowq{0} + \upq{1})}\big)$.
The case $f(0) \geq f(1)$ yields similar expressions.
\end{binex}
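The analytical expressions of this example give us a way to test the limit formula numerically. The sketch below (our own code, reusing the interval bounds of the running example) applies $(I + (t/n)\underline{Q})$ repeatedly to $f$ and compares the result with the closed-form solution for $f(0) \leq f(1)$:

```python
import numpy as np

Q0 = (1 / 52, 3 / 52)
Q1 = (1 / 2, 2.0)

def lower_Q(f):
    """Lower transition rate operator of the running binary example."""
    d0, d1 = f[1] - f[0], f[0] - f[1]
    return np.array([min(Q0[0] * d0, Q0[1] * d0),
                     min(Q1[0] * d1, Q1[1] * d1)])

def T_approx(f, t, n):
    """Approximate T_t f by the n-fold application of (I + (t/n) Q)."""
    g = np.asarray(f, dtype=float)
    for _ in range(n):
        g = g + (t / n) * lower_Q(g)
    return g

def T_exact(f, t):
    """Analytical expression of the example (valid for f(0) <= f(1))."""
    rate = Q0[0] + Q1[1]
    h = (np.max(f) - np.min(f)) / rate * (1 - np.exp(-t * rate))
    return np.array([f[0] + Q0[0] * h, f[1] - Q1[1] * h])

f = np.array([0.0, 1.0])        # the indicator of state 1
print(T_exact(f, 1.0))
print(T_approx(f, 1.0, 8000))   # agrees up to a small error
```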
For a linear lower transition rate operator $\underline{Q}$---i.e., if it is a transition rate matrix $Q$---Eqn.~\eqref{eqn:TDLTO:LimitFormula} reduces to the definition of the matrix exponential.
It is well-known---see \citep{1991Anderson}---that this matrix exponential $T_t = e^{t Q}$ can be interpreted as the transition matrix at time $t$ of a time-homogeneous or stationary continuous-time Markov chain: the $(x,y)$-component of $T_t$ is the probability of being in state $y$ at time $t$ if the chain started in state $x$ at time $0$. Therefore, it follows that the expectation of the function $f \in \setoffna$ at time $t \in \reals_{\geq 0}$ conditional on the initial state $x \in \mathcal{X}$, denoted by $\mathrm{E}(f(X_{t})|X_{0} = x)$, is equal to $[T_t f](x)$.
As Eqn.~\eqref{eqn:TDLTO:LimitFormula} is a non-linear generalisation of the definition of the matrix exponential, we can interpret $\lowtranopa{t}$ as the non-linear generalisation of the matrix exponential $T_t = e^{t Q}$. Extending this parallel, we might interpret $\lowtranopa{t}$ as the non-linear generalisation of the transition matrix---i.e., as the lower transition operator---at time $t$ of a generalised continuous-time Markov chain. In fact, \cite{2016Krak} prove that this is indeed the case. They show that---under some conditions on $\underline{Q}$---$[\lowtranopa{t} f](x)$ can be interpreted as the tightest lower bound for $\mathrm{E}(f(X_{t})|X_{0} = x)$ with respect to a set of---not necessarily Markovian---stochastic processes that are consistent with $\underline{Q}$. \cite{2016Krak} argue that, just like a transition rate matrix $Q$ characterises a (precise) continuous-time Markov chain, a lower transition rate operator $\underline{Q}$ characterises a so-called imprecise continuous-time Markov chain.
The main objective of this contribution is to determine $\lowtranopa{t} f$ for some $f \in \setoffna$ and some $t \in \reals_{> 0}$. Our motivation is that, from an applied point of view on imprecise continuous-time Markov chains, what one is most interested in are tight lower and upper bounds on expectations of the form $\mathrm{E}(f(X_t)|X_{0} = x)$.
As explained above, the lower bound is given by $\lowprevacond{f(X_t)}{X_0 = x} = [\lowtranopa{t} f](x)$. Similarly, the upper bound is given by $\upprevacond{f(X_t)}{X_0 = x} = -[\lowtranopa{t} (-f)](x)$. Note that the lower (or upper) probability of an event $A \subseteq \mathcal{X}$ conditional on the initial state $x$ is a special case of a lower (or upper) expectation: $\underline{\mathrm{P}}(X_{t} \in A | X_0 = x) = \lowprevacond{\indica{A}{X_t}}{X_0 = x}$ and similarly for the upper probability. Hence, for the sake of generality we can focus on $\lowtranopa{t} f$ and forget about its interpretation. As in most cases analytically solving Eqn.~\eqref{eqn:TDLTO:FunctionDifferentialEquation} is infeasible or even impossible, we resort to methods that yield an approximation up to some guaranteed maximal error.
\section{Approximation methods} \label{sec:EfficientComputation} \cite{2015Skulj} was, to the best of our knowledge, the first to propose methods that approximate the solution $\lowtranopa{t} f$ of Eqn.~\eqref{eqn:TDLTO:FunctionDifferentialEquation}. He proposes three methods: one with a uniform grid, a second with an adaptive grid and a third that is a combination of the previous two. In essence, he determines a step size $\delta$ and then approximates $\lowtranopa{t + \delta} f$ with $e^{\delta Q} \lowtranopa{t} f$, where $Q$ is a transition rate matrix determined from \smash{$\underline{Q}$ and $\lowtranopa{t} f$}.
One drawback of this method is that it needs the matrix exponential $e^{\delta Q}$, which---in general---needs to be approximated as well. \cite{2015Skulj} mentions that his methods turn out to be quite computationally heavy, even if the uniform and adaptive methods are combined.
We consider two alternative approximation methods---one with a uniform grid and one with an adaptive grid---that both work in the same way. First, we pick a small step $\delta_1 \in \reals_{\geq 0}$ and apply the operator $(I + \delta_1 \underline{Q})$ to the function $g_0 = f$, resulting in a function $g_1 \coloneqq (I + \delta_1 \underline{Q}) f$. Recall from Proposition~\ref{prop:IPlusDeltaQLowTranOp} that if we want $(I + \delta_1 \underline{Q})$ to be a lower transition operator, then we need to demand that $\delta_1 \norm{\underline{Q}} \leq 2$. Next, we pick a (possibly different) small step $\delta_2 \in \reals_{\geq 0}$ such that $\delta_2 \norm{\underline{Q}} \leq 2$ and apply the lower transition operator $(I + \delta_2 \underline{Q})$ to the function $g_1$, resulting in a function $g_2 \coloneqq (I + \delta_2 \underline{Q}) g_1$. If we continue this process until the sum of all the small steps is equal to $t$, then we end up with an approximation for $\lowtranopa{t} f$. More formally, let $s \coloneqq (\delta_1, \dots, \delta_k)$ denote a sequence in $\reals_{\geq 0}$ such that, for all $i \in \{ 1, \dots, k \}$, $\delta_i \norm{\underline{Q}} \leq 2$. Using this sequence $s$ we define the \emph{approximating lower transition operator}
\begin{equation*}
\Phi(s)
\coloneqq (I + \delta_k \underline{Q}) \cdots (I + \delta_1 \underline{Q}).
\end{equation*}
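In code, applying $\Phi(s)$ to a function $f$ is a simple loop over the step sizes, with $(I + \delta_1 \underline{Q})$ applied first. The sketch below assumes the lower transition rate operator of the running binary example (the names are ours):

```python
import numpy as np

Q0 = (1 / 52, 3 / 52)
Q1 = (1 / 2, 2.0)

def lower_Q(f):
    """Lower transition rate operator of the running binary example."""
    d0, d1 = f[1] - f[0], f[0] - f[1]
    return np.array([min(Q0[0] * d0, Q0[1] * d0),
                     min(Q1[0] * d1, Q1[1] * d1)])

def phi(lQ, steps, f):
    """Apply Phi(s) = (I + delta_k Q) ... (I + delta_1 Q) to f."""
    g = np.asarray(f, dtype=float)
    for delta in steps:              # delta_1 is applied first
        g = g + delta * lQ(g)
    return g

f = np.array([0.0, 1.0])
g = phi(lower_Q, [0.25] * 4, f)      # four steps with delta ||Q|| = 1 <= 2
```

Since every factor is then a lower transition operator, the result is guaranteed to lie between $\min f$ and $\max f$.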
What we are looking for is a convenient way to determine the sequence $s$ such that the error $\norm{\lowtranopa{t} f - \Phi(s) f}$ is guaranteed to be lower than some desired maximal error $\epsilon \in \reals_{> 0}$.
\subsection{Using a uniform grid} \label{ssec:UniformGrid} \cite{2016Krak} provide one way to determine the sequence $s$. They assume a uniform grid, in the sense that all elements of the sequence $s$ are equal to $\delta$. The step size $\delta$ is completely determined by the desired maximal error $\epsilon$, the time $t$, the variation norm of the function $f$ and the norm of $\underline{Q}$; \cite[Proposition~8.5]{2016Krak} guarantees that the actual error is lower than $\epsilon$. Algorithm~\ref{alg:Uniform} provides a slightly improved version of \cite[Algorithm~1]{2016Krak}. The improvement is due to Proposition~\ref{prop:IPlusDeltaQLowTranOp}: we demand that $n \geq t \norm{\underline{Q}} / 2$ instead of $n \geq t \norm{\underline{Q}}$. \begin{algorithm}
\caption{Uniform approximation \label{alg:Uniform}}
\DontPrintSemicolon
\KwData{A lower transition rate operator $\underline{Q}$, a function $f \in \setoffna$, a maximal error $\epsilon \in \reals_{> 0}$, and a time point $t \in \reals_{\geq 0}$.}
\KwResult{$\lowtranopa{t} f \pm \epsilon$}
$g_{0} \gets f$\; \nllabel{line:Uniform:First}
\lIf{$\norm{f}_{c} = 0$ \Or $\norm{\underline{Q}} = 0$ \Or $t = 0$}{$(n, \delta) \gets (0, 0)$}
\Else{
$n \gets \big\lceil \max \{ t \norm{\underline{Q}} / 2, t^2 \norm{\underline{Q}}^2 \norm{f}_{c} / \epsilon \}\big\rceil$\; \nllabel{line:Uniform:DetermineN}
$\delta \gets t / n$\;
\For{$i = 0, \dots, n-1$}{
$g_{i+1} \gets g_{i} + \delta \underline{Q} g_{i}$\; \nllabel{line:Uniform:IncrementOfG}
}
}
\Return $g_{n}$ \end{algorithm}
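A direct Python transcription of Algorithm~\ref{alg:Uniform} might look as follows (a sketch under our own naming: \texttt{lQ} is a callable implementing $\underline{Q}$ and \texttt{norm\_Q} its norm; the function also returns $n$). On the running example with $f = \indic{1}$, $t = 1$ and $\epsilon = 10^{-3}$, it reproduces $n = 8000$:

```python
import numpy as np
from math import ceil

Q0 = (1 / 52, 3 / 52)
Q1 = (1 / 2, 2.0)

def lower_Q(f):
    """Lower transition rate operator of the running binary example."""
    d0, d1 = f[1] - f[0], f[0] - f[1]
    return np.array([min(Q0[0] * d0, Q0[1] * d0),
                     min(Q1[0] * d1, Q1[1] * d1)])

def uniform_approx(lQ, norm_Q, f, eps, t):
    """Uniform approximation of T_t f; returns the approximation and n."""
    g = np.asarray(f, dtype=float)
    norm_c = (np.max(g) - np.min(g)) / 2                  # ||f||_c
    if norm_c == 0 or norm_Q == 0 or t == 0:
        return g, 0
    n = ceil(max(t * norm_Q / 2, t**2 * norm_Q**2 * norm_c / eps))
    delta = t / n
    for _ in range(n):
        g = g + delta * lQ(g)                             # g_{i+1} = (I + delta Q) g_i
    return g, n

g, n = uniform_approx(lower_Q, 4.0, np.array([0.0, 1.0]), 1e-3, 1.0)
print(n)    # 8000
```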
More formally, for any $t \in \reals_{\geq 0}$ and any $n \in \mathbb{N}$ such that $t \norm{\underline{Q}} \leq 2 n$, we consider the \emph{uniformly approximating lower transition operator}
\begin{equation*}
\Psi_{t}(n)
\coloneqq \left(I + \frac{t}{n} \underline{Q}\right)^{n}.
\end{equation*} As a special case, we define $\Psi_{t}(0) \coloneqq I$. The following theorem then guarantees that the choice of $n$ in Algorithm~\ref{alg:Uniform} results in an error $\norm{\lowtranopa{t} f - \Psi_{t}(n) f}$ that is lower than the desired maximal error $\epsilon$. \begin{theorem} \label{the:UniformApproximationWithError}
Let $\underline{Q}$ be a lower transition rate operator and fix some $f\in\setoffna$, $t \in \reals_{\geq 0}$ and $\epsilon \in \reals_{> 0}$.
If we use Algorithm~\ref{alg:Uniform} to determine $n$, $\delta$ and $g_{0}, \dots, g_n$, then we are guaranteed that
\[
\norm{\lowtranopa{t} f - \Psi_{t}(n) f}
= \norm{\lowtranopa{t} f - g_n}
\leq \epsilon'
\coloneqq
\delta^2 \norm{\underline{Q}}^2 \sum_{i=0}^{n-1} \norm{g_{i}}_{c}
\leq \epsilon.
\] \end{theorem}
Theorem~\ref{the:UniformApproximationWithError} is an extension of \cite[Proposition~8.5]{2016Krak}. We already mentioned that the demand $n \geq t \norm{\underline{Q}}$ can be relaxed to $n \geq t \norm{\underline{Q}} / 2$. Furthermore, it turns out that we can compute an upper bound $\epsilon'$ on the error that is (possibly) lower than the desired maximal error $\epsilon$. If we want to determine this $\epsilon'$ while running Algorithm~\ref{alg:Uniform}, we simply need to add $\epsilon' \gets 0$ to line~\ref{line:Uniform:First} and insert $\epsilon' \gets \epsilon' + \delta^2 \norm{\underline{Q}}^2 \norm{g_{i}}_{c}$ just before line~\ref{line:Uniform:IncrementOfG}.
\begin{binex} \label{binex:UniformApproximation}
We again consider the simple case of Example~\ref{binex:LTRO} and illustrate the use of Theorem~\ref{the:UniformApproximationWithError} with a numerical example based on \cite[Example~20]{2016Krak}.
\cite{2016Krak} use Algorithm~\ref{alg:Uniform} to approximate $\lowtranopa{1} \indic{1}$, and find that $n = \num{8000}$ guarantees an error lower than the desired maximal error $\epsilon \coloneqq \num{1e-3}$.
As reported in Table~\ref{tab:ComparisonOfCompDuration}, we use Theorem~\ref{the:UniformApproximationWithError} to compute $\epsilon'$.
We find that $\epsilon' \approx \num{0.430e-3}$, which is approximately a factor two smaller than the desired maximal error $\epsilon$.
\begin{table}
\caption{Comparison of the presented approximation methods, obtained using a naive, unoptimised implementation of the algorithms in Python.
$N$ is the total number of iterations, $D_{\epsilon}$ ($D_{\epsilon'}$) is the average duration---in seconds, averaged over 50 independent runs---without (with) keeping track of $\epsilon'$, and $\epsilon_a$ is the actual error.
The Python code is made available at \href{https://github.com/alexander-e/ictmc}{github.com/alexander-e/ictmc}.}
\label{tab:ComparisonOfCompDuration}
\begin{center}
\begin{tabular}{rS[table-format=4.]S[table-format=1.4]S[table-format=1.4]S[table-format=1.4]S[table-format=1.4]}
\toprule
{Method} & {$N$} & {$D_{\epsilon}$} & {$D_{\epsilon'}$} & {$\epsilon' \times 10^3$} & {$\epsilon_a \times 10^3$} \\
\midrule
Uniform & 8000 & 0.0345 & 0.0574 & 0.430 & 0.0335 \\
Uniform & 250 & 0.00171 & 0.0264 & 13.8 & 1.07 \\
Adaptive with $m = 1$ & 3437 & 0.0371 & 0.0428 & 1.000 & 0.108 \\
Adaptive with $m = 20$ & 3456 & 0.0143 & 0.0254 & 0.992 & 0.107 \\
Uniform ergodic with $m = 1$ & 6133 & 0.0264 & 0.0449 & 0.560 & 0.0437 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
In this case, since we know the analytical expression for $\lowtranopa{1} \indic{1}$ from Example~\ref{binex:AnalyticalExpressionsForAppliedLTO}, we can determine the actual error $\epsilon_{a} = \norm{\lowtranopa{1} \indic{1} - \Psi_{1}(8000) \indic{1}}$.
Quite remarkably, the actual error is approximately $\num{3.35e-5}$, which is roughly 30 times smaller than the desired maximal error.
This leads us to think that the number of iterations used by the uniform method is too high.
In fact, we find that using as few as \num{250} iterations---roughly \num{8000 / 30}---already results in an actual error that is approximately equal to the desired one: $\norm{\lowtranopa{1} \indic{1} - \Psi_{1}(250) \indic{1}} \approx \num{1.07e-3}$. \end{binex}
\subsection{Using an adaptive grid} \label{ssec:AdaptiveGrid} In Example~\ref{binex:UniformApproximation}, we noticed that the maximal desired error was already satisfied for a uniform grid that was much coarser than that constructed by Algorithm~\ref{alg:Uniform}. Because of this, we are led to believe that we can find a better approximation method than the uniform method of Algorithm~\ref{alg:Uniform}.
To this end, we now consider grids where, for some integer $m$, every $m$ consecutive time steps in the grid are equal. In particular, we consider a sequence $\delta_1, \dots, \delta_n$ in $\reals_{\geq 0}$ and some $k \in \mathbb{N}$ such that $1 \leq k \leq m$ and, for all $i \in \{ 1, \dots, n \}$, $\delta_i \norm{\underline{Q}} \leq 2$. From such a sequence, we then construct the \emph{$m$-fold approximating lower transition operator}: \[
\Phi_{m,k}(\delta_1, \dots, \delta_n)
\coloneqq (I + \delta_n \underline{Q})^{k} (I + \delta_{n-1} \underline{Q})^{m} \cdots (I + \delta_{1} \underline{Q})^{m}, \] where if $n = 1$ only $(I + \delta_1 \underline{Q})^{k}$ remains and if $n = 2$ only $(I + \delta_2 \underline{Q})^{k} (I + \delta_{1} \underline{Q})^{m}$ remains.
The uniform approximation method of before is a special case of the $m$-fold approximating lower transition operator; a more interesting method to construct an $m$-fold approximation is Algorithm~\ref{alg:Adaptive}. In this algorithm, we re-evaluate the time step every $m$ iterations, possibly increasing its length. \begin{algorithm}
\caption{Adaptive approximation \label{alg:Adaptive}}
\DontPrintSemicolon
\KwData{A lower transition rate operator $\underline{Q}$, a gamble $f \in \setoffna$, an integer $m \in \mathbb{N}$, a tolerance $\epsilon \in \reals_{> 0}$, and a time period $t \in \reals_{\geq 0}$.}
\KwResult{$\lowtranopa{t} f \pm \epsilon$}
$(g_{(0,m)}, \Delta, i) \gets (f, t, 0)$\;
\lIf{$\norm{f}_{c} = 0$ \Or $\norm{\underline{Q}} = 0$ \Or $t = 0$}{$(n, k) \gets (0, m)$}
\Else{
\While{$\Delta > 0$ \And $\norm{g_{(i,m)}}_{c} > 0$}{
$i \gets i+1$\;
$\delta_i \gets \min \{ \Delta, 2 / \norm{\underline{Q}}, \epsilon / (t \norm{\underline{Q}}^2 \norm{g_{(i-1,m)}}_{c}) \}$\; \nllabel{line:Adaptive:delta}
\If{$m \delta_i > \Delta$}{
$k_i \gets \lceil \Delta / \delta_i \rceil$\;
$\delta_i \gets \Delta / k_i$\;
}\lElse{$k_i \gets m$}
$g_{(i,0)} \gets g_{(i-1,m)}, \Delta\gets\Delta - k_i \delta_i$\;
\For{$j = 0, \dots, k_i-1$}{
$g_{(i,j+1)} \gets g_{(i,j)} + \delta_i \underline{Q} g_{(i,j)}$\;
}
}
$(n, k) \gets (i, k_i)$\;
}
\Return $g_{(n,k)}$ \end{algorithm}
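A Python sketch of Algorithm~\ref{alg:Adaptive}, again under our own naming (\texttt{lQ} is a callable implementing $\underline{Q}$); for convenience it also returns the total number of iterations $\sum_{i=1}^{n} k_i$:

```python
import numpy as np
from math import ceil

Q0 = (1 / 52, 3 / 52)
Q1 = (1 / 2, 2.0)

def lower_Q(f):
    """Lower transition rate operator of the running binary example."""
    d0, d1 = f[1] - f[0], f[0] - f[1]
    return np.array([min(Q0[0] * d0, Q0[1] * d0),
                     min(Q1[0] * d1, Q1[1] * d1)])

def norm_c(g):
    """Half the variation seminorm: ||g||_c = (max g - min g) / 2."""
    return (np.max(g) - np.min(g)) / 2

def adaptive_approx(lQ, norm_Q, f, m, eps, t):
    """Adaptive approximation of T_t f; returns the approximation and the iteration count."""
    g = np.asarray(f, dtype=float)
    if norm_c(g) == 0 or norm_Q == 0 or t == 0:
        return g, 0
    remaining, total = t, 0
    while remaining > 0 and norm_c(g) > 0:
        # Re-evaluate the step size, as in Algorithm 2.
        delta = min(remaining, 2 / norm_Q, eps / (t * norm_Q**2 * norm_c(g)))
        if m * delta > remaining:          # final, shorter block of steps
            k = ceil(remaining / delta)
            delta = remaining / k
        else:
            k = m
        remaining -= k * delta
        for _ in range(k):
            g = g + delta * lQ(g)
        total += k
    return g, total

g, total = adaptive_approx(lower_Q, 4.0, np.array([0.0, 1.0]), 20, 1e-3, 1.0)
```

On the running example this needs far fewer iterations than the uniform method, in line with Table~\ref{tab:ComparisonOfCompDuration}.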
From the properties of lower transition operators, it follows that for all $\smash{i \in \{ 2, \dots, n-1 \}}$, $\smash{\norm{g_{(i-1,m)}}_{c} \leq \norm{g_{(i-2,m)}}_{c}}$. Hence, the re-evaluated step size $\delta_i$ is indeed larger than (or equal to) the previous step size $\delta_{i-1}$. The only exception to this is the final step size $\delta_n$: it might be that the remaining time $\Delta$ is smaller than $m \delta_n$, in which case we need to choose $k$ and $\delta_n$ such that $k \delta_n = \Delta$.
Theorem~\ref{the:AdaptiveApproximation} guarantees that the adaptive approximation of Algorithm~\ref{alg:Adaptive} indeed results in an actual error lower than the desired maximal error $\epsilon$. Moreover, it provides a method to compute an upper bound $\epsilon'$ on the actual error that is lower than the desired maximal error. Finally, it also states that the adaptive method of Algorithm~\ref{alg:Adaptive} needs at most as many iterations as the uniform method of Algorithm~\ref{alg:Uniform}.
\begin{theorem} \label{the:AdaptiveApproximation}
Let $\underline{Q}$ be a lower transition rate operator, $f\in\setoffna$, $t \in \reals_{\geq 0}$, $\epsilon \in \reals_{> 0}$ and $m \in \mathbb{N}$.
We use Algorithm~\ref{alg:Adaptive} to determine $n$ and $k$, and if applicable also $k_i$, $\delta_{i}$ and $g_{(i,j)}$.
If $\norm{f}_{c} = 0$, $\norm{\underline{Q}} = 0$ or $t = 0$, then $\norm{\lowtranopa{t} f - g_{(n,k)}} = 0$.
Otherwise, we are guaranteed that
\begin{align*}
\norm{\lowtranopa{t} f - \Phi_{m,k}(\delta_1, \dots, \delta_n) f}
=
\norm{\lowtranopa{t} f - g_{(n,k)}}
\leq \epsilon'
&\coloneqq \sum_{i = 1}^{n} \delta_i^2 \norm{\underline{Q}}^2 \sum_{j=0}^{k_i - 1} \norm{g_{(i,j)}}_{c}
\leq \epsilon
\end{align*}
and that the total number of iterations has an upper bound:
\begin{equation*}
\sum_{i=1}^{n} k_i
= (n-1) m + k
\leq \left\lceil \max \left\{ \norm{\underline{Q}} t/2, t^2 \norm{\underline{Q}}^2 \norm{f}_{c}/ \epsilon \right\} \right\rceil.
\end{equation*} \end{theorem} Again, we can determine $\epsilon'$ while running Algorithm~\ref{alg:Adaptive}. An alternate---less tight---version of $\epsilon'$ can be obtained by replacing the sum of $\norm{g_{(i,j)}}_{c}$ for $j$ from $0$ to $k_{i}-1$ by $k_i \norm{g_{(i,0)}}_{c} = k_i \norm{g_{(i-1,m)}}_{c}$. Determining this alternative $\epsilon'$ while running Algorithm~\ref{alg:Adaptive} adds negligible computational overhead compared to the $\epsilon'$ of Theorem~\ref{the:AdaptiveApproximation}, as $\norm{g_{(i-1,m)}}_{c}$ is needed to re-evaluate the step size anyway.
The reason why we only re-evaluate the step size $\delta$ after every $m$ iterations is twofold. First and foremost, all we currently know for sure is that for all $\delta \in \reals_{\geq 0}$ such that $\delta \norm{\underline{Q}} \leq 2$, all $m \in \mathbb{N}$ and all $f\in\setoffna$, $\norm{(I + \delta \underline{Q})^{m} f}_{c} \leq \norm{f}_{c}$. Re-evaluating the step size every $m$ iterations is therefore only justified if a priori we are certain that $\smash{\norm{(I + \delta_{i} \underline{Q})^{m} g_{(i-1,m)}}_{c} < \norm{g_{(i-1,m)}}_{c}}$. We come back to this in Section~\ref{sec:ergodicity}. A second reason is that there might be a trade-off between the time it takes to re-evaluate the step size and the time that is gained by the resulting reduction of the number of iterations. The following numerical example illustrates this trade-off. \begin{binex} \label{binex:AdaptiveApproximation}
Recall that in Example~\ref{binex:UniformApproximation} we wanted to approximate $\lowtranopa{1} \indic{1}$ up to a maximal desired error $\epsilon = \num{1e-3}$.
Instead of using the uniform method of Algorithm~\ref{alg:Uniform}, we now use the adaptive method of Algorithm~\ref{alg:Adaptive} with $m = 1$.
The initial step size is the same as that of the uniform method, but because we re-evaluate the step size we only need \num{3437} iterations, as reported in Table~\ref{tab:ComparisonOfCompDuration}.
We also find that in this case $\epsilon' = \num{1.00e-3}$, which is a coincidence.
Nevertheless, the actual error of the approximation is $\num{0.108e-3}$, which is about ten times smaller than what we were aiming for.
However, fewer iterations do not necessarily imply a shorter duration of the computations.
Qualitatively, we can conclude the following from Table~\ref{tab:ComparisonOfCompDuration}.
First, keeping track of $\epsilon'$ increases the duration, as expected.
Second, the adaptive method is faster than the uniform method, at least if we choose $m$ large enough.
And third, both methods yield an actual error that is at least an order of magnitude lower than the desired maximal error. \end{binex}
\section{Ergodicity} \label{sec:ergodicity} Let $\Phi_{m,k}(\delta_1, \dots, \delta_n) f$ be an approximation constructed using the adaptive method of Algorithm~\ref{alg:Adaptive}. Re-evaluating the step size is then only justified if a priori we are sure that \begin{equation*}
\nicefrac{1}{2} \norm{(I + \delta_{i} \underline{Q})^m \Phi_{i-1} f}_{v} = \norm{g_{(i,m)}}_{c} < \norm{g_{(i-1,m)}}_{c} = \nicefrac{1}{2} \norm{\Phi_{i-1} f}_{v} \text{ for all } i \in \{ 1, \dots, n-1 \},
\end{equation*} where $\Phi_0 \coloneqq I$ and $\Phi_{i} \coloneqq (I + \delta_i \underline{Q})^m \Phi_{i-1}$. As $(\Phi_{i-1} f) \in \setoffna$, this is definitely true if we require that
\begin{equation} \label{eqn:Ergodicity:IntroEqn}
(\forall \delta \in \{ \delta_1, \dots, \delta_{n-1} \}) (\forall f \in \setoffna) ~ \norm{(I + \delta \underline{Q})^{m} f}_{v} < \norm{f}_{v}.
\end{equation} In fact, since this inequality is invariant under translation or positive scaling of $f$, it suffices if
\begin{equation*}
(\forall \delta \in \{ \delta_1, \dots, \delta_{n-1} \}) (\forall f \in \setoffna \colon 0 \leq f \leq 1) ~ \norm{(I + \delta \underline{Q})^{m} f}_{v} < 1.
\end{equation*} Readers who are familiar with (the ergodicity of) imprecise discrete-time Markov chains---see \citep{2012Hermans} or \smash{\citep{2013Skulj}}---will probably recognise this condition, as it states that the (weak) coefficient of ergodicity of $\smash{(I + \delta \underline{Q})^{m}}$ should be strictly smaller than 1. For all lower transition operators $\underline{T}$, \cite{2013Skulj} defines this (weak) \emph{coefficient of ergodicity} as \begin{equation} \label{eqn:CoeffOfErgod}
\coefferga{\underline{T}}
\coloneqq \max \left\{ \norm{\underline{T} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\}. \end{equation}
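For a precise (linear) transition operator, represented by a row-stochastic matrix $T$, the maximum in Eqn.~\eqref{eqn:CoeffOfErgod} is a maximum of a linear functional over the box $[0,1]^{\card{\mathcal{X}}}$, so it is attained at a 0--1 function and can be computed exactly by enumerating indicator functions. The sketch below (ours, and only valid in this linear special case; for genuinely non-linear lower transition operators the enumeration merely yields a lower bound, cf.\ Theorem~\ref{the:CoeffOfErgod:Approximation}) illustrates the definition.

```python
from itertools import combinations

def dobrushin_coefficient(T):
    """Weak coefficient of ergodicity of a precise transition matrix T.
    ||T f||_v is linear in f on [0,1]^n, so its maximum is attained at an
    indicator function I_A of a proper non-empty subset A of the states."""
    n = len(T)
    beta = 0.0
    for r in range(1, n):
        for A in combinations(range(n), r):
            # T applied to the indicator of A: row sums over the columns in A
            TA = [sum(T[x][y] for y in A) for x in range(n)]
            beta = max(beta, max(TA) - min(TA))
    return beta
```

As a sanity check: the matrix with all rows equal has coefficient $0$, the identity matrix has coefficient $1$, and anything strictly in between is contracting in variation seminorm.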
\subsection{Ergodicity of lower transition rate operators} As will become apparent, whether or not combinations of $m \in \mathbb{N}$ and $\delta \in \reals_{\geq 0}$ exist such that $\delta \norm{\underline{Q}} \leq 2$ and $\coefferga{(I + \delta \underline{Q})^m} < 1$ is tightly connected with the behaviour of $\lowtranopa{t} f$ for large $t$. \cite{2017DeBock} proved that for all lower transition rate operators $\underline{Q}$ and all $f \in \setoffna$, the limit $\lim_{t \to \infty} \lowtranopa{t} f$ exists. An important case is when this limit is a constant function for all $f$. \begin{definition}[Definition~2 of \citep{2017DeBock}]
The lower transition rate operator $\underline{Q}$ is \emph{ergodic} if for all $f\in\setoffna$, $\lim_{t \to \infty} \lowtranopa{t} f$ exists and is a constant function. \end{definition}
As shown by \cite{2017DeBock}, ergodicity is easily verified in practice: it is completely determined by the signs of $[\overline{Q} \indic{x}](y)$ and $[\underline{Q} \indic{A}](z)$, for all $x,y \in \mathcal{X}$ and certain combinations of $z \in \mathcal{X}$ and $A \subset \mathcal{X}$. It turns out that an ergodic lower transition rate operator $\underline{Q}$ not only induces a lower transition operator $\lowtranopa{t}$ that converges, but also induces discrete approximations---of the form $(I + \delta_{k} \underline{Q}) \cdots (I + \delta_1 \underline{Q})$---with special properties. The following theorem, which we consider to be one of the main results of this contribution, highlights this. \begin{theorem} \label{the:ContinuousErgodicity:CoefficientOfErgodicityOfApproximation}
The lower transition rate operator $\underline{Q}$ is ergodic if and only if there is some $n<\card{\mathcal{X}}$ such that $\coefferga{\Phi(\delta_1,\dots,\delta_{k})} < 1$ for one (and then all) $k \geq n$ and one (and then all) sequence(s) $\delta_1, \dots, \delta_k$ in $\reals_{> 0}$ such that $\delta_i \norm{\underline{Q}} < 2$ for all $i \in \{1, \dots, k\}$. \end{theorem}
\subsection{Ergodicity and the uniform approximation method} \label{ssec:Ergodicity:UniformImprovement} Theorem~\ref{the:ContinuousErgodicity:CoefficientOfErgodicityOfApproximation} guarantees that the conditions that were discussed at the beginning of this section are satisfied. In particular, if the lower transition rate operator is ergodic, then there is some $n < \card{\mathcal{X}}$ such that $\coefferga{(I + \delta \underline{Q})^{m}} < 1$ for all $m \geq n$ and all $\delta \in \reals_{> 0}$ such that $\delta \norm{\underline{Q}} < 2$. Consequently, if we choose $m \geq \card{\mathcal{X}}-1$ then re-evaluating the step size $\delta$ will---except maybe for the last re-evaluation---result in a new step size that is strictly greater than the previous one. Therefore, we conclude that if the lower transition rate operator is ergodic, then using the adaptive method of Algorithm~\ref{alg:Adaptive} is certainly justified; it will result in fewer iterations, provided we choose a large enough $m$.
Another nice consequence of the ergodicity of a lower transition rate operator $\underline{Q}$ is that we can prove an alternate a priori guaranteed upper bound for the error of uniform approximations. \begin{proposition} \label{prop:UniformApproximationErgodicError}
Let $\underline{Q}$ be a lower transition rate operator and fix some $f \in \setoffna$, $m, n \in \mathbb{N}$ and $\delta \in \reals_{> 0}$ such that $\delta \norm{\underline{Q}} < 2$.
If $\beta \coloneqq \coefferga{(I + \delta \underline{Q})^{m}} < 1$, then
\begin{equation*}
\norm{\lowtranopa{t} f - \Psi_{t}(n) f}
\leq \epsilon_{e} \coloneqq m \delta^2 \norm{\underline{Q}}^2 \norm{f}_{c} \frac{1 - \beta^{k}}{1 - \beta}
\leq \epsilon_{d} \coloneqq \frac{m \delta^2 \norm{\underline{Q}}^2 \norm{f}_{c}}{1 - \beta}, \end{equation*}
where $t \coloneqq n \delta$ and $k \coloneqq \lceil \nicefrac{n}{m} \rceil$.
The same is true for $\beta = \coefferga{\lowtranopa{m\delta}}$. \end{proposition} Interestingly enough, the upper bound $\epsilon_{d}$ is not dependent on $t$ (or $n$) at all! This is a significant improvement on the upper bound of Theorem~\ref{the:UniformApproximationWithError}, as that upper bound is proportional to $t^2$.
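Once $\beta$ (or an upper bound on it) is available, the bounds of Proposition~\ref{prop:UniformApproximationErgodicError} are cheap to evaluate. The following sketch is ours; the function name and argument conventions are purely illustrative, and the preconditions $\beta < 1$ and $\delta \norm{\underline{Q}} < 2$ are assumed rather than checked.

```python
import math

def ergodic_error_bounds(m, delta, norm_Q, norm_f_c, beta, n):
    """eps_e and eps_d of the ergodic error bound for the uniform method:
    eps_e = m delta^2 ||Q||^2 ||f||_c (1 - beta^k) / (1 - beta), with
    k = ceil(n / m), and eps_d the k-independent limit of eps_e.
    Requires beta < 1 (not checked here)."""
    k = math.ceil(n / m)
    base = m * delta ** 2 * norm_Q ** 2 * norm_f_c
    eps_e = base * (1.0 - beta ** k) / (1.0 - beta)
    eps_d = base / (1.0 - beta)
    return eps_e, eps_d
```

Note that `eps_d` is obtained from `eps_e` by letting $k \to \infty$, which is exactly why it does not depend on $t$ or $n$.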
By Theorem~\ref{the:ContinuousErgodicity:CoefficientOfErgodicityOfApproximation}, there always is an $m < \card{\mathcal{X}}$ such that $\coefferga{(I + \delta \underline{Q})^{m}} < 1$ for all $\delta \in \reals_{> 0}$ such that $\delta \norm{\underline{Q}} < 2$. Thus, given such an $m$, we can easily improve Algorithm~\ref{alg:Uniform}. After we have determined $n$ and $\delta$ with Algorithm~\ref{alg:Uniform}, we can simply determine the upper bound of Proposition~\ref{prop:UniformApproximationErgodicError}. If $m (1 - \beta^k) < n (1 - \beta)$ (or $m < n(1 - \beta)$), then this upper bound is smaller than the desired maximal error $\epsilon$, and we have found a tighter upper bound on the actual error. We can even go the extra mile and replace line \ref{line:Uniform:DetermineN} with a method that looks for the smallest possible $n \in \mathbb{N}$ that yields
\begin{equation*}
m \delta^2 \norm{\underline{Q}}^2 \norm{f}_{c} (1 - \beta^{k}) \leq (1 - \beta) \epsilon,
\end{equation*} where $k=\lceil \nicefrac{n}{m} \rceil$ and $\delta=\nicefrac{t}{n}$---and therefore also $\beta$---depend on $n$. This method could yield a smaller $n$, but the time we gain by having to execute fewer iterations does not necessarily compensate for the time lost by looking for a smaller $n$. In any case, to actually implement these improvements we need to be able to compute $\beta\coloneqq\coefferga{(I + \delta \underline{Q})^m}$.
\begin{binex} \label{binex:UniformErgodic}
For the simple case of Example~\ref{binex:LTRO}, we can derive an analytical expression for $\coefferga{(I + \delta \underline{Q})}$ that is valid for all $\delta\in\mathbb{R}_{\geq0}$ such that $\delta\norm{\underline{Q}}\leq 2$.
Therefore, we can use Proposition~\ref{prop:UniformApproximationErgodicError} to a priori determine an upper bound for the error.
If we choose $m = 1$, then $\epsilon_{e} = \num{0.767e-3}$ and $\epsilon_{d} = \num{1.79e-3}$.
Note that $\epsilon_{e} < \epsilon$, so we can probably decrease the number of iterations $n$.
As reported in Table~\ref{tab:ComparisonOfCompDuration}, we find that $n = \num{6133}$ still suffices, and that this results in an approximation correct up to $\epsilon' = \num{0.560e-3}$, roughly two times smaller than the desired maximal error $\epsilon$.
The actual error is \num{0.0437e-3}, roughly ten times smaller than $\epsilon$. \end{binex}
\subsection{Approximating the coefficient of ergodicity} \label{ssec:CoeffOfErgod:Approximation} Unfortunately, determining the exact value of $\coefferga{(I + \delta \underline{Q})^{m}}$---and of $\coefferga{\underline{T}}$ in general---turns out to be non-trivial and is often even impossible. Nevertheless, the following theorem gives some---actually computable---lower and upper bounds for the coefficient of ergodicity. \begin{theorem} \label{the:CoeffOfErgod:Approximation}
Let $\underline{T}$ be a lower transition operator.
Then
\begin{align}
\coefferga{\underline{T}}
&\leq \max \big\{ \max \{ [\overline{T} \indic{A}](x) - [\underline{T} \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X}\big\}, \label{eqn:CoeffOfErgod:UpperBound} \\
\coefferga{\underline{T}}
&\geq \max \big\{ \max \{ [\underline{T} \indic{A}](x) - [\underline{T} \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X}\big\}. \label{eqn:CoeffOfErgod:LowerBound}
\end{align} \end{theorem}
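Both bounds of Theorem~\ref{the:CoeffOfErgod:Approximation} can be evaluated by enumerating the indicator functions of all proper non-empty subsets of $\mathcal{X}$, which is feasible for small state spaces. The following sketch (ours) assumes the lower transition operator is available as a callable; the conjugate upper transition operator is obtained as $\overline{T} f = -\underline{T}(-f)$.

```python
from itertools import combinations

def conj(lowT, f):
    # conjugate upper transition operator: upT f = -lowT(-f)
    return [-v for v in lowT([-x for x in f])]

def ergodicity_coefficient_bounds(lowT, n):
    """Lower and upper bound on the weak coefficient of ergodicity of lowT,
    obtained by enumerating indicator functions I_A of all proper non-empty
    subsets A of an n-element state space."""
    lower = upper = 0.0
    for r in range(1, n):
        for A in combinations(range(n), r):
            ind = [1.0 if x in A else 0.0 for x in range(n)]
            lo, up = lowT(ind), conj(lowT, ind)
            upper = max(upper, max(up) - min(lo))
            lower = max(lower, max(lo) - min(lo))
    return lower, upper
```

For a precise (linear) transition matrix the two bounds coincide with the coefficient itself; for a genuine lower envelope of several matrices they generally differ.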
The upper bound in Theorem~\ref{the:CoeffOfErgod:Approximation} is particularly useful in combination with Proposition~\ref{prop:UniformApproximationErgodicError}, as it allows us to replace $\beta\coloneqq\coefferga{(I + \delta \underline{Q})^{m}}$ with a guaranteed upper bound.
Of course, this only makes sense if this upper bound is strictly smaller than one. In the previous versions of this pre-print, we claimed that for ergodic lower transition rate operators $\underline{Q}$, this is always the case. Unfortunately---and to our great regret---we have since then discovered that this result is in fact incorrect. We have nonetheless included the (incorrect) statement so that we can easily refer to it, and have added a counterexample that demonstrates that it is indeed incorrect. \begin{proposition}[Incorrect] \label{prop:CoeffOfErgod:ErgodicUpperBound}
Let $\underline{Q}$ be an ergodic lower transition rate operator.
Then there is some $n < \card{\mathcal{X}}$ such that, for all $k \geq n$ and $\delta_{1}, \dots, \delta_{k}$ in $\reals_{> 0}$ such that $\delta_{i} \norm{\underline{Q}} < 2$ for all $i \in \{ 1, \dots, k \}$, the upper bound for $\coefferga{\Phi(\delta_{1}, \dots, \delta_{k})}$ that is given by Eqn.~\eqref{eqn:CoeffOfErgod:UpperBound} is strictly smaller than one. \end{proposition}
\begin{counterex}
Consider the lower transition rate operator defined in \cref{binex:LTRO}, with \(\lowq{0} = 0 = \lowq{1}\), \(\upq{0} > 0\) and \(\upq{1} > 0\).
One can easily verify that this lower transition rate operator is ergodic.
Note that if \cref{prop:CoeffOfErgod:ErgodicUpperBound} were to be true, then for all \(\delta \in \reals_{> 0}{}\) such that \(\delta \norm{\underline{Q}{}} < 2\),
\[
\max \big\{ \max \{ [(I + \delta \overline{Q}) \indic{A}](x) - [(I + \delta \underline{Q}) \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X}\big\}
< 1.
\]
However, after some straightforward computations we obtain that
\begin{multline*}
\max \big\{ \max \{ [(I + \delta \overline{Q}) \indic{A}](x) - [(I + \delta \underline{Q}) \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X}\big\} \\
\geq [(I + \delta \overline{Q}) \indic{0}](0) - [(I + \delta \underline{Q}) \indic{0}](1)
= 1.
\end{multline*} \end{counterex}
\subsection{Approximating limit values} The results that we have obtained earlier in this section naturally lead to a method to approximate $\lowtranopa{\infty} f \coloneqq \lim_{t \to \infty} \lowtranopa{t} f$ up to some maximal error. This is an important problem in applications; for instance, \cite{2015Troffaes} try to determine $\lowtranopa{\infty}f$ for an ergodic lower transition rate operator that arises in their specific reliability analysis application. The method they use is rather ad hoc: they pick some $t$ and $n$ and then determine the uniform approximation $\Psi_{t}(n) f$. As $\norm{\Psi_{t}(n) f}_{v}$ is small, they suspect that they are close to the actual limit value. They also observe that $\Psi_{2t}(4n) f$ only differs from $\Psi_{t}(n) f$ after the fourth significant digit, which they regard as further empirical evidence for the correctness of their approximation. While this ad hoc method seemingly works, the initial values for $t$ and $n$ have to be chosen somewhat arbitrarily. Also, this method provides no guarantee that the actual error is lower than some desired maximal error.
Theorem~\ref{the:ContinuousErgodicity:CoefficientOfErgodicityOfApproximation}, Proposition~\ref{prop:UniformApproximationErgodicError}, Theorem~\ref{the:CoeffOfErgod:Approximation} and the following stopping criterion allow us to propose a method that corrects these two shortcomings. \begin{proposition} \label{prop:StoppingCriterionWithErgodicity}
Let $\smash{\underline{Q}}$ be an ergodic lower transition rate operator and let $f \in \setoffna$, \smash{$t \in \reals_{\geq 0}$} and $\epsilon \in \reals_{> 0}$.
Let $s$ denote a sequence $\delta_1, \dots, \delta_k$ in $\reals_{\geq 0}$ such that $\sum_{i = 1}^{k} \delta_i = t$ and, for all $i \in \{1,\dots,k\}$, $\delta_{i} \norm{\underline{Q}} \leq 2$.
If $\norm{\lowtranopa{t} f - \Phi(s) f} \leq\nicefrac{\epsilon}{2}$ and $\norm{\Phi(s) f}_{c} \leq\nicefrac{\epsilon}{2}$, then for all $\Delta \in \reals_{\geq 0}$:
\begin{align*}
\abs{\lowtranopa{t + \Delta} f - \frac{\max \Phi(s) f + \min \Phi(s) f}{2}}
\leq \epsilon
~~~\text{and }~~
\abs{\lowtranopa{\infty} f - \frac{\max \Phi(s) f + \min \Phi(s) f}{2}}
\leq \epsilon.
\end{align*} \end{proposition} Without actually stating it, we mention that a similar---though less useful---stopping criterion can be proved for non-ergodic transition rate matrices as well.
Our method for determining $\lowtranopa{\infty} f$ is now relatively straightforward. Let $\underline{Q}$ be an ergodic lower transition rate operator and fix some $f \in \setoffna$. We can then approximate $\lowtranopa{\infty} f$ up to any desired maximal error $\epsilon \in \reals_{> 0}$ as follows. First, we look for some $m \in \mathbb{N}$ and some---preferably large---$\delta \in \reals_{> 0}$ such that $\delta \norm{\underline{Q}}<2$ and
\begin{equation*}
2m \delta^2 \norm{\underline{Q}}^2 \norm{f}_{c} \leq (1 - \beta)\epsilon,
\end{equation*} where $\beta \coloneqq \coefferga{(I + \delta \underline{Q})^{m}}$. From Theorem~\ref{the:ContinuousErgodicity:CoefficientOfErgodicityOfApproximation}, we know that a possible starting point for $m$ is $\card{\mathcal{X}} - 1$.
If we do not have an analytical expression for $\coefferga{(I + \delta \underline{Q})^{m}}$, then we can instead use the guaranteed upper bound of Theorem~\ref{the:CoeffOfErgod:Approximation}---provided it is strictly smaller than one.
If no such $m$ and $\delta$ exist---for instance because the guaranteed upper bound on $\beta$ is too conservative---then this method does not work. If on the other hand we do find such an $m$ and $\delta$, then we can keep on running the iterative step (line \ref{line:Uniform:IncrementOfG}) of Algorithm~\ref{alg:Uniform} until we reach the first index $i \in \mathbb{N}$ such that $\norm{g_{i}}_{c} \leq \nicefrac{\epsilon}{2}$.
By Propositions~\ref{prop:UniformApproximationErgodicError} and \ref{prop:StoppingCriterionWithErgodicity}, we are now guaranteed that $(\max g_{i} + \min g_{i}) / 2$ is an approximation of $\lowtranopa{\infty} f$ up to a maximal error $\epsilon$.
Alternatively, we can fix a step size $\delta$ ourselves and use the method of Theorem~\ref{the:UniformApproximationWithError} to compute~$\epsilon'$. In that case, we simply need to run the iterative scheme until we reach the first index $i$ such that $\norm{g_{i}}_{c} \leq \epsilon'$. By Proposition~\ref{prop:StoppingCriterionWithErgodicity}, we are then guaranteed that $(\max g_{i} + \min g_{i})/2$ is an approximation of $\lowtranopa{\infty} f$ up to a maximal error $\epsilon=2 \epsilon'$. The same is true if we replace $\epsilon'$ by the error $\epsilon_{e}$ that is used in Proposition~\ref{prop:UniformApproximationErgodicError}.
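The core of this procedure for limit values fits in a few lines. The sketch below is ours and deliberately simplified: it takes the step size $\delta$ as given and assumes that $\delta \norm{\underline{Q}} \leq 2$ and that the accumulated discretisation error over the whole run is at most $\nicefrac{\epsilon}{2}$, as required by Proposition~\ref{prop:StoppingCriterionWithErgodicity}.

```python
def approx_limit(lowQ, f, delta, eps):
    """Approximate T_infinity f = lim_{t -> inf} T_t f: run Euler steps
    g <- g + delta * lowQ(g) until ||g||_c <= eps/2, then return the
    midpoint (max g + min g) / 2."""
    g = list(f)
    while (max(g) - min(g)) / 2.0 > eps / 2.0:
        q = lowQ(g)
        g = [g[x] + delta * q[x] for x in range(len(g))]
    return (max(g) + min(g)) / 2.0
```

On a symmetric precise binary chain the limit value is the uniform average of $f$, which makes for an easy consistency check of this toy implementation.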
\begin{binex}
Using the analytical expressions of Example~\ref{binex:AnalyticalExpressionsForAppliedLTO}, we obtain $\lowtranopa{\infty} \indic{1} \approx \num{9.5238095e-3}$.
We want to approximate $\lowtranopa{\infty} \indic{1}$ up to a maximum error $\epsilon \coloneqq \num{1e-6}$.
We observe that $m = \num{1}$ and $\delta \approx \num{3.485e-8}$ yield an $\epsilon_{d}$ that is lower than $\nicefrac{\epsilon}{2}$.
After \num{196293685} iterations, the norm of the approximation is sufficiently small, resulting in the approximation $\lowtranopa{\infty} \indic{1} = \num{9.524(1)e-3}$.
Alternatively, choosing $\delta = \num{1e-7}$ and continuing until $\norm{g_{i}}_{c} \leq \epsilon'$ yields the approximation $\lowtranopa{\infty} \indic{1} = \num{9.5242(8)e-3}$ after only \num{69572154} iterations.
Mimicking \cite{2015Troffaes}, we also tried the heuristic method of increasing $t$ and $n$ until we observe empirical convergence.
After some trying, we find that $t = \num{7}$ and $n = 7 \cdot \num{250} = 1750$ already yield an approximation with sufficiently small error: $\norm{\lowtranopa{\infty} \indic{1} - \Psi_{7}(1750) \indic{1}} \approx \num{7e-7} < \epsilon$.
Note however that for non-binary examples, where $\lowtranopa{\infty} f$ cannot be computed analytically, this heuristic approach is unable to provide a guaranteed bound.
\end{binex}
\section{Conclusion} \label{sec:Conclusion} We have improved an existing method and proposed a novel method to approximate $\lowtranopa{t} f$ up to any desired maximal error, where $\lowtranopa{t}f$ is the solution of the non-linear differential equation~\eqref{eqn:TDLTO:FunctionDifferentialEquation} that plays an essential role in the theory of imprecise continuous-time Markov chains. As guaranteed by our theoretical results, and as verified by our numerical examples, our methods outperform the existing method by~\cite{2016Krak}, especially if the lower transition rate operator is ergodic. For these ergodic lower transition rate operators, we also proposed a method to approximate $\lim_{t \to \infty} \lowtranopa{t} f$ up to any desired maximal error.
For the simple case of a binary state space, we observed in numerical examples that there is a rather large difference between the theoretically required number of iterations and the number of iterations that are empirically found to be sufficient. Similar differences can---although this falls beyond the scope of our present contribution---also be observed for the lower transition rate operator that is studied in \citep{2015Troffaes}. The underlying reason for these observed differences remains unclear so far. On the one hand, it could be that our methods are still on the conservative side, and that further improvements are possible. On the other hand, it might be that these differences are unavoidable, in the sense that guaranteed theoretical bounds come at the price of conservatism. We leave this as an interesting line of future research. Additionally, the performance of our proposed methods for systems with a larger state space deserves further inquiry.
\appendix
\acks{Jasper~De~Bock is a Postdoctoral Fellow of the Research Foundation - Flanders (FWO) and wishes to acknowledge its financial support.
The work in this paper was also partially supported by the H2020-MSCA-ITN-2016 UTOPIAE, grant agreement 722734. Finally, the authors would like to express their gratitude to three anonymous reviewers, for their time, effort and constructive feedback.}
\section{Extra material and proofs for Section~\ref{sec:Preliminaries}} \label{app:Preliminaries}
\begin{definition}
An operator $\norm{\cdot}$ on a linear vector space $\mathcal{L}$ is a \emph{norm} if it maps $\mathcal{L}$ to $\reals_{\geq 0}$ and if for all $a, b \in \mathcal{L}$ and all $\mu \in \mathbb{R}$,
\begin{enumerate}[twocol, label=N\arabic*:, ref=(N\arabic*), series=Norm]
\item \label{def:Norm:ScalarMult}
$\norm{\mu a} = \abs{\mu} \norm{a}$,
\item \label{def:Norm:TriangleInequality}
$\norm{a + b} \leq \norm{a} + \norm{b}$,
\item \label{def:Norm:NormZeroOnly}
$\norm{a} = 0 \Leftrightarrow a = 0$.
\end{enumerate}
If an operator only satisfies \ref{def:Norm:ScalarMult} and \ref{def:Norm:TriangleInequality}, then it is called a \emph{seminorm}. \end{definition} It can be immediately checked that the maximum norm $\norm{\cdot}$ on $\setoffna$ is a proper norm, and similarly for the induced operator norm on non-negatively homogeneous operators from $\setoffna$ to $\setoffna$. For all $f \in \setoffna$ we define the variation seminorm $\norm{\cdot}_{v}$ and the centred seminorm $\norm{\cdot}_{c}$ as \begin{equation}
\label{eqn:VariationNorm}
\norm{f}_{v}
\coloneqq \norm{f - \min{f}}
= \max \{ \abs{f(x) - \min{f}} \colon x \in \mathcal{X} \}
= \max f - \min f \end{equation} and \begin{equation}
\label{eqn:CentredNorm}
\norm{f}_{c}
\coloneqq \norm{f - \cent{f}}
= \max \left\{ \abs{f(x) - \cent{f}} \colon x \in \mathcal{X} \right\}
= (\max f - \min f) / 2, \end{equation} where $\cent{f} \coloneqq (\max{f} + \min{f})/2$. Verifying that $\norm{\cdot}_{v}$ and $\norm{\cdot}_{c}$ are seminorms and not norms is straightforward.
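For concreteness, a direct Python transcription of Eqns.~\eqref{eqn:VariationNorm} and~\eqref{eqn:CentredNorm} (ours, purely illustrative) makes the identity $\norm{f}_{c} = \norm{f}_{v}/2$ and the failure of definiteness on constant functions easy to verify numerically.

```python
def var_seminorm(f):
    # ||f||_v = ||f - min f|| = max f - min f
    return max(f) - min(f)

def centred_seminorm(f):
    # ||f||_c = ||f - mid(f)|| with mid(f) = (max f + min f) / 2,
    # which equals (max f - min f) / 2
    mid = (max(f) + min(f)) / 2.0
    return max(abs(x - mid) for x in f)
```

Both seminorms vanish on every constant function, which is precisely why they are seminorms and not norms.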
\begin{proposition} \label{prop:norms:properties}
For all $f\in\setoffna$, all $\mu\in\mathbb{R}$ and any non-negatively homogeneous operator $A$,
\begin{enumerate}[label=N\arabic*:, ref=(N\arabic*), start=4]
\item \label{prop:norm:CenteredEqVar}
$\norm{f}_{c} = \norm{f}_{v} / 2$,
\item \label{prop:norm:CenteredLeqNormal}
$\norm{f}_{c} \leq \norm{f}$,
\item \label{prop:norm:VarAddConstant}
$\norm{f + \mu}_{v} = \norm{f}_{v}$,
\item \label{prop:norms:BoundOnNormOf_Af}
$\norm{A f} \leq \norm{A} \norm{f}$,
\item \label{prop:norms:NormOfAB}
$\norm{A B} \leq \norm{A} \norm{B}$.
\end{enumerate} \end{proposition} \begin{proof}
Properties \ref{prop:norm:CenteredEqVar}, \ref{prop:norm:CenteredLeqNormal} and \ref{prop:norm:VarAddConstant} follow almost immediately from the definitions of the centred and variation seminorms.
Proofs for \ref{prop:norms:BoundOnNormOf_Af} and \ref{prop:norms:NormOfAB} can be found in \citep{2017DeBock}. \end{proof}
The following properties of lower transition operators will turn out to be useful in the proofs. \begin{proposition} \label{prop:LowerTransitionOperator:Properties}
Let $\underline{T}$, $\lowtranopa{1}$, $\lowtranopa{2}$, $\underline{S}_{1}$ and $\underline{S}_{2}$ be lower transition operators.
Then for all $f, g \in \setoffna$ and all $\mu \in \mathbb{R}$:
\begin{enumerate}[twocol,resume*=LTO]
\item \label{prop:LTO:BoundedByMinAndMax}
$\min f \leq \underline{T} f \leq \overline{T} f \leq \max f$;
\item \label{prop:LTO:AdditionOfConstant}
$\underline{T} (f + \mu) = \underline{T} (f) + \mu$;
\item \label{prop:LTO:Monotonicity}
$f \geq g \Rightarrow \underline{T} f \geq \underline{T} g$ and $\overline{T} f \geq \overline{T} g$;
\item
$\abs{\underline{T} f - \underline{T} g} \leq \overline{T} (\abs{f - g})$;
\item \label{prop:LTO:NormLowerThan1}
$\norm{\underline{T}} \leq 1$;
\item \label{prop:LTO:NonExpansiveness}
$\norm{\underline{T} f - \underline{T} g} \leq \norm{f - g}$;
\item \label{prop:LTO:BoundOnNormTBTB}
$\norm{\underline{T} A - \underline{T} B} \leq \norm{A - B}$;
\item \label{prop:LTO:VarNormTf}
$\norm{\underline{T} f}_{v} \leq \norm{f}_{v}$;
\end{enumerate}
\begin{enumerate}[label=L\arabic*:, ref=(L\arabic*),resume=LTO]
\item \label{prop:LTO:CompositionIsAlsoLTO}
$\lowtranopa{1} \lowtranopa{2}$ is a lower transition operator;
\item \label{prop:LTO:DifferenceIsNonNegativeHomogeneous}
$(\lowtranopa{1} - \lowtranopa{2})$ is a non-negatively homogeneous operator;
\item \label{prop:LTO:BoundOnDifferenceTfSf}
$\norm{\lowtranopa{1} f - \underline{S}_{1} f}_{c} \leq \norm{\lowtranopa{1} f - \underline{S}_{1} f} \leq \norm{\lowtranopa{1} - \underline{S}_{1}} \norm{f}_{c}$;
\item \label{prop:LTO:BoundOnDifferenceTTfSSf}
$\norm{\lowtranopa{1} \lowtranopa{2} f - \underline{S}_{1} \underline{S}_{2} f}_{c} \leq \norm{\lowtranopa{1} \lowtranopa{2} f - \underline{S}_{1} \underline{S}_{2} f} \leq \norm{\lowtranopa{2} f - \underline{S}_{2} f} + \norm{\lowtranopa{1} - \underline{S}_{1}} \norm{\underline{S}_{2} f}_{c}$.
\end{enumerate} \end{proposition} \begin{proof}
Proofs for \ref{prop:LTO:BoundedByMinAndMax}--\ref{prop:LTO:BoundOnNormTBTB} and \ref{prop:LTO:CompositionIsAlsoLTO} can be found in \citep{2017DeBock}.
\ref{prop:LTO:VarNormTf} follows almost immediately from \ref{prop:LTO:BoundedByMinAndMax} and Eqn.~\eqref{eqn:VariationNorm}:
\[
\norm{\underline{T} f}_{v}
= \max \underline{T} f - \min \underline{T} f
\leq \max f - \min f
= \norm{f}_{v}.
\]
Note that for all $f \in \setoffna$ and all $\gamma \in \reals_{\geq 0}$,
\[
(\lowtranopa{1} - \lowtranopa{2})(\gamma f)
= \lowtranopa{1}(\gamma f) - \lowtranopa{2} (\gamma f)
= \gamma (\lowtranopa{1} f - \lowtranopa{2} f)
= \gamma (\lowtranopa{1} - \lowtranopa{2}) (f),
\]
which proves \ref{prop:LTO:DifferenceIsNonNegativeHomogeneous}.
Next, we prove \ref{prop:LTO:BoundOnDifferenceTfSf}.
The first inequality follows from \ref{prop:norm:CenteredLeqNormal}.
By \ref{prop:LTO:DifferenceIsNonNegativeHomogeneous}, $(\lowtranopa{1} - \underline{S}_{1})$ is a non-negatively homogeneous operator, such that
\begin{align*}
\norm{\lowtranopa{1} f - \underline{S}_{1} f}
&= \norm{\lowtranopa{1} f - \cent{f} - \underline{S}_{1} f + \cent{f}}
= \norm{\lowtranopa{1} (f - \cent{f}) - \underline{S}_{1} (f - \cent{f})} \\
&= \norm{(\lowtranopa{1} - \underline{S}_{1}) (f - \cent{f})}
\leq \norm{\lowtranopa{1} - \underline{S}_{1}} \norm{f - \cent{f}}
= \norm{\lowtranopa{1} - \underline{S}_{1}} \norm{f}_{c},
\end{align*}
where the second equality follows from \ref{prop:LTO:AdditionOfConstant}, the inequality follows from \ref{prop:LTO:DifferenceIsNonNegativeHomogeneous} and \ref{prop:norms:BoundOnNormOf_Af} and the last equality follows from Eqn.~\eqref{eqn:CentredNorm}.
\ref{prop:LTO:BoundOnDifferenceTTfSSf} can be proved similarly.
Again, the first inequality of \ref{prop:LTO:BoundOnDifferenceTTfSSf} follows from \ref{prop:norm:CenteredLeqNormal}.
To prove the second inequality of \ref{prop:LTO:BoundOnDifferenceTTfSSf}, we observe that
\begin{align*}
\norm{\lowtranopa{1} \lowtranopa{2} f - \underline{S}_{1} \underline{S}_{2} f}
&= \norm{\lowtranopa{1} \lowtranopa{2} f - \lowtranopa{1} \underline{S}_2 f + \lowtranopa{1} \underline{S}_2 f - \underline{S}_{1} \underline{S}_{2} f} \\
&\leq \norm{\lowtranopa{1} \lowtranopa{2} f - \lowtranopa{1} \underline{S}_2 f} + \norm{\lowtranopa{1} \underline{S}_2 f - \underline{S}_{1} \underline{S}_{2} f} \\
&\leq \norm{\lowtranopa{2} f - \underline{S}_2 f} + \norm{\lowtranopa{1} \underline{S}_2 f - \underline{S}_{1} \underline{S}_{2} f} \\
&\leq \norm{\lowtranopa{2} f - \underline{S}_2 f} + \norm{\lowtranopa{1} - \underline{S}_{1}} \norm{\underline{S}_{2} f}_{c},
\end{align*}
where the first inequality follows from \ref{def:Norm:TriangleInequality}, the second inequality follows from \ref{prop:LTO:NonExpansiveness} and the third inequality follows from \ref{prop:LTO:BoundOnDifferenceTfSf}. \end{proof}
A linear lower transition rate operator $\underline{Q}$---one for which \ref{def:LTRO:SuperAdditive} holds with equality---can be identified with a matrix $Q$ of dimension $\card{\mathcal{X}} \times \card{\mathcal{X}}$. This matrix is called a \emph{transition rate matrix}, the $(x,y)$-component $Q(x,y)$ of which is equal to $[\underline{Q} \indic{y}](x)$. \begin{lemma} \label{lem:BoundsOnElementsOfTransitionMatrix}
Let $Q$ be a transition rate matrix.
Then for all $x,y \in \mathcal{X}$ such that $x \neq y$,
\begin{enumerate}[twocol, label=Q\arabic*:, ref=(Q\arabic*)]
\item $Q(x,y) \geq 0$, \label{lem:RateMatrix:XY}
\item $Q(x,x) = - \sum_{y\neq x} Q(x,y)$. \label{lem:RateMatrix:XX}
\end{enumerate}
Also,
\[
\norm{Q} = 2 \max \left\{ \abs{Q(x,x)} \colon x\in\mathcal{X} \right\}.
\] \end{lemma} \begin{proof}
Note that \ref{lem:RateMatrix:XY} follows immediately from \ref{def:LTRO:Sign}.
From \ref{def:LTRO:Constant}, we find that for all $x\in\mathcal{X}$, $[Q \indic{\mathcal{X}}](x) = 0$.
Using the linearity and \ref{def:LTRO:Constant} yields
\[
Q(x,x)
= [Q \indic{x}](x)
= \left[Q \left(1 - \sum_{y \neq x} \indic{y}\right)\right](x)
= - \sum_{y \neq x} [Q \indic{y}](x) = - \sum_{y \neq x} Q(x,y).
\]
It is a matter of straightforward verification to prove that
\[
\norm{Q} = \max \left\{ \sum_{y \in \mathcal{X}} \abs{Q(x,y)} \colon x\in\mathcal{X} \right\} = 2 \max \left\{ \abs{Q(x,x)} \colon x\in\mathcal{X} \right\}. \qedhere
\] \end{proof}
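Although not part of the formal development, the two properties and the norm identity of this lemma are easy to check numerically. The following Python sketch does so for an arbitrary illustrative $3 \times 3$ rate matrix (all numbers are made up for the example).

```python
# Illustrative check of the lemma: a transition rate matrix has
# non-negative off-diagonal entries (Q1), rows summing to zero (Q2),
# and operator norm (w.r.t. the supremum norm) equal to the maximum
# absolute row sum, which is 2 * max_x |Q(x, x)|.
Q = [
    [-3.0, 1.0, 2.0],
    [0.5, -0.5, 0.0],
    [1.0, 2.0, -3.0],
]

for x, row in enumerate(Q):
    assert all(q >= 0 for y, q in enumerate(row) if y != x)  # (Q1)
    assert abs(sum(row)) < 1e-12                             # (Q2)

norm_Q = max(sum(abs(q) for q in row) for row in Q)
assert norm_Q == 2 * max(abs(Q[x][x]) for x in range(3))
print(norm_Q)  # 6.0
```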
\begin{proposition}[Proposition~7.6 in \citep{2016Krak}]
Let $\underline{Q}$ be a lower transition rate operator.
The associated set of dominating rate matrices $\setofdomratemat$, defined as
\[
\setofdomratemat
\coloneqq \left\{ Q \text{ a transition rate matrix} \colon (\forall f \in \setoffna)~\underline{Q} f \leq Q f \right\},
\]
is non-empty and bounded, and for all $f\in\setoffna$ there is some $Q\in\setofdomratemat$ such that $\underline{Q} f = Q f$. \end{proposition}
\begin{lemma}[Lemma~G.3 in \citep{2016Krak}] \label{lem:NormRateMatixLowerThanNormRateOperator}
Let $\underline{Q}$ be a lower transition rate operator, then for any $Q \in \setofdomratemat$, $\norm{Q} \leq \norm{\underline{Q}}$. \end{lemma}
\begin{proposition} \label{prop:LowerTransitionRateOperator:Properties}
Let $\underline{Q}$ be a lower transition rate operator.
Then for all $f\in\setoffna$, all $\mu \in \mathbb{R}$ and all $x,y\in\mathcal{X}$ such that $x\neq y$:
\begin{enumerate}[twocol, resume*=LTRO]
\item \label{prop:LTRO:LowUp}
$\underline{Q} f \leq \overline{Q} f$;
\item \label{prop:LTRO:AdditionOfConstant}
$\underline{Q} (f + \mu) = \underline{Q} f$;
\item \label{prop:LTRO:Ixx}
$- \norm{\underline{Q}} / 2 \leq [\underline{Q} \indic{x}](x) \leq [\overline{Q} \indic{x}](x) \leq 0$;
\item
$0 \leq \sum_{y \neq x} [\underline{Q} \indic{y}](x) \leq \norm{\underline{Q}} / 2$;
\item \label{prop:LTRO:Norm}
$\norm{\underline{Q}} = 2 \max \{ \abs{[\underline{Q} \indic{x}](x)} \colon x \in \mathcal{X} \}$.
\end{enumerate} \end{proposition} \begin{proof}
The properties \ref{prop:LTRO:LowUp} and \ref{prop:LTRO:AdditionOfConstant} are proved in \cite{2017DeBock}.
Hence, we only prove the remaining properties.
\begin{enumerate}[label=R\arabic*:,start=7]
\item
By the conjugacy of $\underline{Q}$ and $\overline{Q}$,
\begin{align*}
[\overline{Q} \indic{x}](x)
&= \left[\overline{Q} \left( 1 - \sum_{z \neq x} \indic{z} \right)\right](x)
= - \left[\underline{Q} \left( -1 + \sum_{z \neq x} \indic{z} \right)\right](x) \\
&\leq - [\underline{Q} (-1)](x) - \sum_{z \neq x} [\underline{Q} \indic{z}](x),
\end{align*}
where the inequality follows from \ref{def:LTRO:SuperAdditive}.
By \ref{def:LTRO:Constant} the first term is zero, such that
\[
[\overline{Q} \indic{x}](x) \leq - \sum_{z \neq x} [\underline{Q} \indic{z}](x) \leq 0,
\]
where the second inequality follows from \ref{def:LTRO:Sign}.
Recall that there is some $Q \in \setofdomratemat$ such that $\underline{Q} \indic{x} = Q \indic{x}$.
It holds that
\begin{align*}
[\underline{Q} \indic{x}](x) = [Q \indic{x}](x) = Q(x,x) \geq -\frac{\norm{Q}}{2} \geq -\frac{\norm{\underline{Q}}}{2},
\end{align*}
where for the first inequality we used Lemma~\ref{lem:BoundsOnElementsOfTransitionMatrix} and for the second inequality we used Lemma~\ref{lem:NormRateMatixLowerThanNormRateOperator}.
The property now follows by combining the obtained lower bound for $[\underline{Q} \indic{x}](x)$ and the obtained upper bound for $[\overline{Q} \indic{x}](x)$ with \ref{prop:LTRO:LowUp}.
\item
Recall from \ref{def:LTRO:Sign} that $[\underline{Q} \indic{y}](x)$ is non-negative if $y \neq x$, such that $\sum_{y \neq x} [\underline{Q} \indic{y}](x)$ is non-negative.
Some manipulations yield
\begin{align*}
0
\leq \sum_{y \neq x} [\underline{Q} \indic{y}](x)
\leq \left[\underline{Q} \left(\sum_{y \neq x} \indic{y}\right)\right](x)
&= - \left[\overline{Q} \left(- \sum_{y \neq x} \indic{y}\right)\right](x) \\
&= - \left[\overline{Q} \left(1 - \sum_{y \neq x} \indic{y}\right)\right](x)
= - [\overline{Q} \indic{x}](x) \\
&\leq - [\underline{Q} \indic{x}](x),
\end{align*}
where the second inequality follows from \ref{def:LTRO:SuperAdditive}, the first equality follows from conjugacy, the second equality follows from \ref{prop:LTRO:AdditionOfConstant}, and the final inequality follows from \ref{prop:LTRO:Ixx}.
Also by \ref{prop:LTRO:Ixx}, we know that $- [\underline{Q} \indic{x}](x)$ is non-negative and bounded above by $\norm{\underline{Q}}/2$, hence
\[
0 \leq \sum_{y \neq x} [\underline{Q} \indic{y}](x) \leq \frac{\norm{\underline{Q}}}{2}.
\]
\item
Let $\underline{Q}$ be a lower transition rate operator.
From \cite[R9]{2017DeBock} it follows that
\[
\norm{\underline{Q}} \leq 2 \max_{x \in \mathcal{X}} \abs{ [\underline{Q} \indic{x}](x) }.
\]
From \ref{prop:LTRO:Ixx}, however, we know that for all $x \in \mathcal{X}$, $\abs{ [\underline{Q} \indic{x}](x) } \leq \norm{\underline{Q}} / 2$.
Combining these two inequalities yields $\norm{\underline{Q}} = 2 \max \{\abs{[\underline{Q} \indic{x}](x)} \colon x \in \mathcal{X} \}$. \qedhere
\end{enumerate} \end{proof}
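As an illustrative numerical sanity check of \ref{prop:LTRO:Norm}, consider the binary lower transition rate operator $[\underline{Q} f](x) = \min \{ q \, (f(1-x) - f(x)) \colon q \in [\underline{q}_x, \overline{q}_x] \}$; the Python sketch below estimates its operator norm by brute force over a grid of functions $f$ with $\norm{f} \leq 1$ and compares it with $2 \max_x \abs{[\underline{Q} \indic{x}](x)}$. The rate intervals are arbitrary illustrative choices.

```python
# Numerical sanity check of the norm identity for the binary lower
# transition rate operator [Qf](x) = min_{q in [lo_x, up_x]} q*(f(1-x)-f(x)),
# with arbitrary illustrative rate intervals.
def lower_Q(f):
    bounds = [(0.5, 2.0), (1.0, 3.0)]
    out = []
    for x, (lo, up) in enumerate(bounds):
        d = f[1 - x] - f[x]
        out.append(lo * d if d >= 0 else up * d)
    return out

# Right-hand side of the identity: 2 * max_x |[Q I_x](x)|.
rhs = 2 * max(abs(lower_Q([1.0, 0.0])[0]), abs(lower_Q([0.0, 1.0])[1]))

# Left-hand side: brute-force estimate of sup{ ||Qf|| : ||f|| <= 1 };
# the grid contains the extreme points (1, -1) and (-1, 1), where the
# supremum is attained.
grid = [i / 50 - 1 for i in range(101)]
lhs = max(max(abs(v) for v in lower_Q([a, b])) for a in grid for b in grid)

assert abs(lhs - rhs) < 1e-12
```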
\begin{proof}[Proof of Proposition~\ref{prop:IPlusDeltaQLowTranOp}]
Fix some lower transition rate operator $\underline{Q}$ and some $\delta \in \reals_{\geq 0}$.
We first prove that $\delta \norm{\underline{Q}} \leq 2$ implies that the operator $(I + \delta \underline{Q})$ is a lower transition operator.
The operator $(I + \delta \underline{Q})$ trivially satisfies \ref{def:LTO:SuperAdditive} and \ref{def:LTO:NonNegativelyHom}, such that we only need to prove that it satisfies \ref{def:LTO:DominatesMin}.
In order to do so, we fix some arbitrary $x \in \mathcal{X}$ and $f \in \setoffna$.
It holds that
\begin{align*}
[(I + \delta \underline{Q})f](x)
&= f(x) + \delta [\underline{Q} f](x) \\
&= f(x) + \delta [\underline{Q}(f - \min f)](x) \\
&= f(x) + \delta \left[\underline{Q} \left(\sum_{y\in\mathcal{X}} (f(y) - \min f) \indic{y}\right)\right](x) \\
&\geq f(x) + \delta (f(x) - \min f) [\underline{Q} \indic{x}](x) + \delta \sum_{y \neq x} (f(y) - \min f) [\underline{Q} \indic{y}](x) \\
&\geq f(x) + \delta (f(x) - \min f) [\underline{Q} \indic{x}](x) \\
&\geq f(x) - \delta (f(x) - \min f) \frac{\norm{\underline{Q}}}{2},
\intertext{
where the second equality follows from \ref{prop:LTRO:AdditionOfConstant}, the first inequality follows from \ref{def:LTRO:SuperAdditive}, the second inequality follows from \ref{def:LTRO:Sign} and the third inequality follows from \ref{prop:LTRO:Ixx}.
Recall that by assumption $\delta \norm{\underline{Q}} \leq 2$, and therefore
}
[(I + \delta \underline{Q})f](x)
&\geq \min f.
\end{align*}
Next, we prove the reverse implication.
Assume that $(I + \delta \underline{Q})$ is a lower transition operator.
By \ref{prop:LTRO:Ixx} and \ref{prop:LTRO:Norm}, there is some $x \in \mathcal{X}$ such that $[\underline{Q} \indic{x}](x) = - \norm{\underline{Q}}/2$.
Hence,
\begin{align*}
[(I + \delta \underline{Q}) \indic{x}](x)
&= \indic{x}(x) + \delta [\underline{Q}\indic{x}](x)
= 1 - \delta \frac{\norm{\underline{Q}}}{2}.
\intertext{
If we now assume that $\delta \norm{\underline{Q}} > 2$, then
}
[(I + \delta \underline{Q}) \indic{x}](x)
&< 0 \leq \min \indic{x},
\end{align*}
which, by \ref{def:LTO:DominatesMin}, contradicts the initial assumption that $(I + \delta \underline{Q})$ is a lower transition operator.
This allows us to conclude that if $(I + \delta \underline{Q})$ is a lower transition operator, then $\delta \norm{\underline{Q}} \leq 2$. \end{proof}
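The threshold $\delta \norm{\underline{Q}} \leq 2$ can be illustrated numerically in the linear setting: for a transition rate matrix $Q$, the matrix $I + \delta Q$ always has rows summing to one, and its entries are non-negative (so that applying it to $f$ yields convex combinations of the values of $f$, which dominate $\min f$) precisely when $\delta \abs{Q(x,x)} \leq 1$ for every $x$, that is, when $\delta \norm{Q} \leq 2$. A minimal Python sketch with an arbitrary $2 \times 2$ rate matrix:

```python
# Linear illustration of the threshold delta * ||Q|| <= 2 for an
# arbitrary 2x2 rate matrix.
Q = [[-2.0, 2.0], [1.0, -1.0]]     # ||Q|| = 2 * max |Q(x,x)| = 4
f = [1.0, 0.0]

def apply_euler(delta):
    # Compute (I + delta*Q) f componentwise.
    return [f[x] + delta * sum(Q[x][y] * f[y] for y in range(2))
            for x in range(2)]

assert min(apply_euler(0.5)) >= min(f)   # delta*||Q|| = 2: dominates min f
assert min(apply_euler(0.6)) < min(f)    # delta*||Q|| > 2: domination fails
```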
\begin{proof}[Proof of Proposition~\ref{prop:LTRO:PropositionNorm}]
This proposition simply states \ref{prop:LTRO:Norm} of Proposition~\ref{prop:LowerTransitionRateOperator:Properties}. \end{proof}
\begin{proof}[Proof of Example~\ref{binex:LTRO}]
We can immediately verify that $\underline{Q}$ satisfies \ref{def:LTRO:Constant}--\ref{def:LTRO:Sign}, such that it is indeed a lower transition rate operator. \end{proof}
\section{Extra material for Section~\ref{sec:MCs}} We here give a slightly more detailed description of the differential equation of interest. Recall from the beginning of Section~\ref{sec:MCs} that \cite{2015Skulj} proved that for any lower transition rate operator $\underline{Q}$ and any $f \in \setoffna$, the differential equation \begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t} f_{t} = \underline{Q} f_{t} \end{equation*} with initial condition $f_{0} \coloneqq f$ has a unique solution for all $t \in \reals_{\geq 0}$. As mentioned by \cite{2017DeBock}, this differential equation actually determines a time-dependent operator $\lowtranopa{t}$: for all $t \in \reals_{\geq 0}$, $\lowtranopa{t} f \coloneqq f_{t}$. Even more, \cite[Proposition~9]{2017DeBock} states that for all $t \in \reals_{\geq 0}$, the time-dependent operator $\lowtranopa{t}$ itself satisfies the differential equation \begin{equation} \label{eqn:TDLTO:DifferentialEquation}
\frac{\mathrm{d} }{\mathrm{d} t} \lowtranopa{t} = \underline{Q} \lowtranopa{t} \end{equation} with initial condition $\lowtranopa{0} \coloneqq I$. \cite{2017DeBock} also shows that this operator $\lowtranopa{t}$ is a lower transition operator, and that it satisfies the semi-group property: for all $t_1, t_2 \in\reals_{\geq 0}$, \begin{equation} \label{eqn:TDLTO:SemiGroup}
\lowtranopa{t_1+t_2} = \lowtranopa{t_1} \lowtranopa{t_2}. \end{equation}
For a transition rate matrix, Eqn.~\eqref{eqn:TDLTO:DifferentialEquation} reduces to the linear differential equation \[
\frac{\mathrm{d}}{\mathrm{d} t} T_t = Q T_t \] with initial condition $T_{0} \coloneqq I$. This differential equation is essential in the theory of precise continuous-time Markov chains and is often referred to as the \emph{forward Kolmogorov} equation. Its solution is called the \emph{matrix exponential} and is denoted by $T_t = e^{t Q}$.
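As a numerical illustration (with an arbitrary $2 \times 2$ rate matrix), the sketch below approximates $e^{tQ}$ both by a truncated Taylor series and by the Euler product $(I + (t/n) Q)^n$, which converges to $e^{tQ}$ as $n \to \infty$; this is the linear analogue of the limit formula used later for lower transition operators.

```python
# Illustrative sketch: the matrix exponential e^{tQ} via a truncated
# Taylor series, compared with the Euler product (I + (t/n) Q)^n.
t = 1.0
Q = [[-2.0, 2.0], [1.0, -1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Truncated Taylor series: sum_k (tQ)^k / k!.
expm, term = I, I
for k in range(1, 30):
    term = [[t / k * v for v in row] for row in mat_mul(term, Q)]
    expm = [[expm[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Euler product (I + (t/n) Q)^n.
n = 1000
step = [[I[i][j] + t / n * Q[i][j] for j in range(2)] for i in range(2)]
euler = I
for _ in range(n):
    euler = mat_mul(euler, step)

assert all(abs(expm[i][j] - euler[i][j]) < 1e-3
           for i in range(2) for j in range(2))
```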
\begin{proof}[Proof of \cref{binex:AnalyticalExpressionsForAppliedLTO}]
Fix any $\delta \in \reals_{\geq 0}$ such that $\delta \norm{\underline{Q}} \leq 2$, and let $f$ be an arbitrary element of $\setoffna$.
We immediately obtain that if $f(0) \geq f(1)$, then
\begin{align*}
[\Phi(\delta) f](0)
&= f(0) - \delta \upq{0} (f(0) - f(1))
= f(0) - \delta \upq{0} \norm{f}_{v}, \\
[\Phi(\delta) f](1)
&= f(1) + \delta \lowq{1} (f(0) - f(1))
= f(1) + \delta \lowq{1} \norm{f}_{v}.
\intertext{Similarly, if $f(0) \leq f(1)$, then}
[\Phi(\delta) f](0)
&= f(0) + \delta \lowq{0} \norm{f}_{v}, \\
[\Phi(\delta) f](1)
&= f(1) - \delta \upq{1} \norm{f}_{v}.
\end{align*}
Therefore, if $f(0) \geq f(1)$ then
\begin{align*}
[\Phi(\delta) f](0) - [\Phi(\delta) f](1)
&= \norm{f}_{v} (1 - \delta (\upq{0} + \lowq{1})),
\intertext{and similarly if $f(0) \leq f(1)$, then}
[\Phi(\delta) f](1) - [\Phi(\delta) f](0)
&= \norm{f}_{v} (1 - \delta (\lowq{0} + \upq{1})).
\end{align*}
Consequently
\begin{align*}
f(0) \geq f(1) &\Rightarrow
\begin{cases}
[\Phi(\delta) f](0) \geq [\Phi(\delta) f](1) &\text{if } \delta (\upq{0} + \lowq{1}) \leq 1, \\
[\Phi(\delta) f](0) \leq [\Phi(\delta) f](1) &\text{if } \delta (\upq{0} + \lowq{1}) \geq 1,
\end{cases}
\intertext{and}
f(0) \leq f(1) &\Rightarrow
\begin{cases}
[\Phi(\delta) f](0) \leq [\Phi(\delta) f](1) &\text{if } \delta (\lowq{0} + \upq{1}) \leq 1, \\
[\Phi(\delta) f](0) \geq [\Phi(\delta) f](1) &\text{if } \delta (\lowq{0} + \upq{1}) \geq 1.
\end{cases}
\end{align*}
Fix some $f \in \setoffna$, some $t \in \reals_{\geq 0}$ and let $n \in \mathbb{N}$ be such that
\begin{align*}
t (\upq{0} + \lowq{1}) \leq n,
t (\lowq{0} + \upq{1}) \leq n
~\text{and}~
t \norm{\underline{Q}}\leq 2 n.
\end{align*}
In this case, we can use the results obtained above to derive an analytical expression for $\Psi_{t}(n) f$.
If $f(0) \geq f(1)$, then
\begin{align*}
[\Psi_{t}(n) f](0)
&= f(0) - \frac{t}{n} \upq{0} \norm{f}_{v} \sum_{i = 0}^{n-1} \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)^{i}, \\
[\Psi_{t}(n) f](1)
&= f(1) + \frac{t}{n} \lowq{1} \norm{f}_{v} \sum_{i = 0}^{n-1} \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)^{i}.
\intertext{Similarly, if $f(0) \leq f(1)$, then}
[\Psi_{t}(n) f](0)
&= f(0) + \frac{t}{n} \lowq{0} \norm{f}_{v} \sum_{i = 0}^{n-1} \left(1 - \frac{t}{n} (\lowq{0} + \upq{1})\right)^{i}, \\
[\Psi_{t}(n) f](1)
&= f(1) - \frac{t}{n} \upq{1} \norm{f}_{v} \sum_{i = 0}^{n-1} \left(1 - \frac{t}{n} (\lowq{0} + \upq{1})\right)^{i}.
\end{align*}
We now use Eqn.~\eqref{eqn:TDLTO:LimitFormula} to derive analytical expressions for the components of $\lowtranopa{t} f$.
If $f(0) \geq f(1)$, then
\begin{align*}
[\lowtranopa{t} f](0)
&= \lim_{n \to \infty} [\Psi_{t}(n) f](0) \\
&= \lim_{n \to \infty}\Bigg( f(0) - \frac{t}{n} \upq{0} \norm{f}_{v} \sum_{i = 0}^{n-1} \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)^{i} \Bigg)\\
&= f(0) - \upq{0} \norm{f}_{v} \lim_{n \to \infty} \frac{t}{n} \sum_{i = 0}^{n-1} \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)^{i}.
\intertext{
Let us now assume that $\upq{0} + \lowq{1}>0$.
If $t\neq0$ and $n$ is greater than the lower bounds mentioned above, the expression inside the parentheses is bounded below by $0$ and strictly bounded above by $1$.
Therefore,
}
[\lowtranopa{t} f](0)
&= f(0) - \upq{0} \norm{f}_{v} \lim_{n \to \infty} \frac{t}{n} \frac{1 - \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)^{n}}{1 - \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)} \\
&= f(0) - \frac{\upq{0}}{\upq{0} + \lowq{1}} \norm{f}_{v} \lim_{n \to \infty} \left(1 - \left(1 - \frac{t}{n} (\upq{0} + \lowq{1})\right)^{n} \right) \\
&= f(0) - \frac{\upq{0}}{\upq{0} + \lowq{1}} \norm{f}_{v} \left(1 - e^{-t (\upq{0} + \lowq{1})} \right),
\intertext{and}
[\lowtranopa{t} f](1)
&= f(1) + \frac{\lowq{1}}{\upq{0} + \lowq{1}} \norm{f}_{v} \left(1 - e^{-t (\upq{0} + \lowq{1})} \right).
\end{align*}
If $t=0$, the obtained expressions hold trivially.
Completely analogously, if $\lowq{0} + \upq{1}>0$, the case $f(0) \leq f(1)$ yields
\begin{align*}
[\lowtranopa{t} f](0)
&= f(0) + \frac{\lowq{0}}{\lowq{0} + \upq{1}} \norm{f}_{v} \left(1 - e^{-t (\lowq{0} + \upq{1})} \right) \\
[\lowtranopa{t} f](1)
&= f(1) - \frac{\upq{1}}{\lowq{0} + \upq{1}} \norm{f}_{v} \left(1 - e^{-t (\lowq{0} + \upq{1})} \right). \qedhere
\end{align*} \end{proof}
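The closed-form expressions obtained above can be verified numerically: the following Python sketch iterates $\Psi_t(n) f = (I + (t/n)\underline{Q})^n f$ for the binary lower transition rate operator and compares the result with the analytic limit for the case $f(0) \geq f(1)$. The rate intervals are arbitrary illustrative choices.

```python
import math

# Numerical check of the closed-form expression for f(0) >= f(1):
# iterate Psi_t(n) f = (I + (t/n) Q)^n f and compare with the limit.
q0_low, q0_up = 0.5, 2.0          # rate interval of state 0 (illustrative)
q1_low, q1_up = 1.0, 3.0          # rate interval of state 1 (illustrative)

def lower_Q(f):
    # [Qf](x) = min over q in [q_x_low, q_x_up] of q * (f(1 - x) - f(x)).
    d0, d1 = f[1] - f[0], f[0] - f[1]
    return [(q0_low if d0 >= 0 else q0_up) * d0,
            (q1_low if d1 >= 0 else q1_up) * d1]

t, n = 0.7, 200_000               # delta * ||Q|| = (t/n) * 6 <= 2
f = [1.0, 0.0]                    # f(0) >= f(1), with ||f||_v = 1
g = list(f)
for _ in range(n):
    qg = lower_Q(g)
    g = [g[x] + t / n * qg[x] for x in range(2)]

# Analytic limit: uses the upper rate of state 0 and lower rate of state 1.
rate = q0_up + q1_low
decay = (1 - math.exp(-t * rate)) / rate
assert abs(g[0] - (f[0] - q0_up * decay)) < 1e-4
assert abs(g[1] - (f[1] + q1_low * decay)) < 1e-4
```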
\section{Extra material and proofs for Section~\ref{sec:EfficientComputation}} \label{app:EfficientComputation}
In many of the following proofs, we frequently use the following lemma. \begin{lemma}[Lemma~F.9 in \citep{2016Krak}] \label{lem:BoundForErrorOfIPlusDeltaQ}
Let $\underline{Q}$ be a lower transition rate operator.
For any $\delta \in \reals_{\geq 0}$, $\norm{\lowtranopa{\delta} - (I + \delta \underline{Q})} \leq \delta^2 \norm{\underline{Q}}^2$. \end{lemma}
\begin{lemma} \label{lem:ExplicitErrorBound}
Let $\underline{Q}$ be a lower transition rate operator, $f\in\setoffna$ and $t \in \reals_{\geq 0}$.
Let $s \coloneqq (\delta_1, \dots, \delta_k)$ be any sequence in $\reals_{\geq 0}$ such that $\sum_{i = 1}^{k} \delta_i = t$ and, for all $i \in \{ 1, \dots, k \}$, $\delta_{i} \norm{\underline{Q}} \leq 2$.
Then
\begin{align*}
\norm{\lowtranopa{t} f - \Phi(s) f}
&\leq \sum_{i = 1}^{k} \delta_i^2 \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c}
\intertext{and}
\norm{\lowtranopa{t} f - \Phi(s) f}
&\leq \sum_{i = 1}^{k} \delta_i^2 \norm{\underline{Q}}^2 \norm{\lowtranopa{\Delta_{i-1}} f}_{c},
\end{align*}
where $\Phi_{0} \coloneqq I$ and $\Delta_{0} = 0$, and for all $i \in \{1, \dots, k \}$, $\Phi_{i} \coloneqq (I + \delta_i \underline{Q}) \Phi_{i-1}$ and $\Delta_{i} \coloneqq \Delta_{i-1} + \delta_i$. \end{lemma} \begin{proof}
By the semi-group property of Eqn.~\eqref{eqn:TDLTO:SemiGroup},
\begin{align*}
\norm{\lowtranopa{t} f - \Phi(s) f}
&= \norm{\lowtranopa{\delta_k} \lowtranopa{t - \delta_k} f - (I + \delta_k \underline{Q}) \Phi_{k-1} f}.
\intertext{
By Proposition~\ref{prop:IPlusDeltaQLowTranOp}, the operator $(I + \delta_{i} \underline{Q})$ is a lower transition operator for all $i \in \{ 1, \dots, k \}$.
Even more, \ref{prop:LTO:CompositionIsAlsoLTO} implies that the operator $\Phi_{i-1}$ is a lower transition operator for all $i \in \{ 1, \dots, k \}$.
Recall that $\lowtranopa{\delta_k}$ and $\lowtranopa{t-\delta_{k}}$ are lower transition operators by definition, such that using \ref{prop:LTO:BoundOnDifferenceTTfSSf} and Lemma~\ref{lem:BoundForErrorOfIPlusDeltaQ} yields
}
\norm{\lowtranopa{t} f - \Phi(s) f}
&\leq \norm{\lowtranopa{\delta_k} - (I + \delta_k \underline{Q})} \norm{\Phi_{k-1} f}_{c} + \norm{\lowtranopa{t - \delta_k} f - \Phi_{k-1} f} \\
&\leq \delta_k^2 \norm{\underline{Q}}^2 \norm{\Phi_{k-1} f}_{c} + \norm{\lowtranopa{t - \delta_k} f - \Phi_{k-1} f}.
\intertext{Repeated application of the same argument yields}
\norm{\lowtranopa{t} f - \Phi(s) f}
&\leq \sum_{i = 1}^{k} \delta_i^2 \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c}.
\end{align*}
The second inequality of the statement can be proved in a completely similar manner. \end{proof}
\begin{lemma} \label{lem:LTRO:SpecialNoApproximationCase}
Let $\underline{Q}$ be a lower transition rate operator, $t \in \reals_{\geq 0}$ and $f \in \setoffna$.
If $\norm{f}_{c} = 0$, $\norm{\underline{Q}} = 0$ or $t = 0$, then $\norm{\lowtranopa{t} f - \Psi_{t}(0) f}=\norm{\lowtranopa{t} f - f} = 0$. \end{lemma} \begin{proof}
If $\norm{f}_{c} = 0$, then $\min f = \max f$, or equivalently $f$ is a constant function.
From \ref{prop:LTO:BoundedByMinAndMax} it follows that in this case $\lowtranopa{t} f = f$ for all $t \in \reals_{\geq 0}$.
If $\norm{\underline{Q}} = 0$, then $\underline{Q} g = 0$ for all $g \in \setoffna$.
Therefore
\[
\frac{\mathrm{d}}{\mathrm{d} t} \lowtranopa{t} f = \underline{Q} \lowtranopa{t} f = 0 \text{ for all } t \in \reals_{\geq 0}.
\]
Consequently, $\lowtranopa{t} f = \lowtranopa{0} f = I f = f$.
If $t = 0$, then we can simply use the initial condition: $\lowtranopa{t} f = \lowtranopa{0} f = I f = f$.
In all three cases we find that $\lowtranopa{t} f = f$, and hence
\[
\norm{\lowtranopa{t} f - \Psi_{t}(0) f} = \norm{\lowtranopa{t} f - f} = \norm{f-f} = 0. \qedhere
\] \end{proof}
\begin{lemma} \label{lem:UniformApproximationWithError}
Let $\underline{Q}$ be a lower transition rate operator, $f\in\setoffna$, $t \in \reals_{\geq 0}$, $\epsilon \in \reals_{> 0}$ and $n \in \mathbb{N}$, and define $\delta \coloneqq t / n$.
If
\[
n \geq \max \left\{ \frac{t \norm{\underline{Q}}}{2}, \frac{t^2 \norm{\underline{Q}}^2 \norm{f}_{c} }{\epsilon} \right\},
\]
then we are guaranteed that
\[
\norm{\lowtranopa{t} f - \Psi_{t}(n) f}
\leq \epsilon'
\coloneqq \delta^2 \norm{\underline{Q}}^2 \sum_{i=0}^{n-1} \norm{\left(I + \delta \underline{Q}\right)^{i} f}_{c}
\leq \epsilon.
\] \end{lemma} \begin{proof}
By Proposition~\ref{prop:IPlusDeltaQLowTranOp}, the operator $(I + \delta \underline{Q})$ is a lower transition operator if and only if $\delta \norm{\underline{Q}} \leq 2$, or equivalently if and only if
\begin{equation}
\label{eqn:UniformApproximationWithError:Ineq1}
n \geq \frac{t \norm{\underline{Q}}}{2}.
\end{equation}
From now on, we assume that $n$ satisfies this inequality.
Therefore, we may use Lemma~\ref{lem:ExplicitErrorBound} to yield
\begin{equation}
\label{eqn:UniformApproximationWithError:UpperBoundError}
\norm{\lowtranopa{t} f - \Psi_{t}(n) f}
\leq \sum_{i = 0}^{n-1} \delta^2 \norm{\underline{Q}}^2 \norm{(I+\delta \underline{Q})^{i} f}_{c}.
\end{equation}
Note that for any $i \in \{0, \dots, n-1\}$, $(I + \delta \underline{Q})^{i}$ is a lower transition operator by \ref{prop:LTO:CompositionIsAlsoLTO}; hence it follows from \ref{prop:LTO:VarNormTf} that $\norm{(I+\delta\underline{Q})^{i} f}_{c} \leq \norm{f}_{c}$.
Therefore
\begin{align*}
\norm{\lowtranopa{t} f - \Psi_{t}(n) f}
\leq \sum_{i = 0}^{n-1} \delta^2 \norm{\underline{Q}}^2 \norm{f}_{c} = \frac{t^2 \norm{\underline{Q}}^2 \norm{f}_{c}}{n}.
\end{align*}
It is now obvious that if
\begin{equation}
\label{eqn:UniformApproximationWithError:Ineq2}
n \geq \frac{t^2 \norm{\underline{Q}}^2 \norm{f}_{c}}{\epsilon},
\end{equation}
then $\norm{\lowtranopa{t} f - \Psi_{t}(n) f} \leq \epsilon$.
It also follows almost immediately from Eqn.~\eqref{eqn:UniformApproximationWithError:UpperBoundError} that if $n$ satisfies both Eqns.~\eqref{eqn:UniformApproximationWithError:Ineq1} and \eqref{eqn:UniformApproximationWithError:Ineq2}, then
\[
\norm{\lowtranopa{t} f - \Psi_t(n) f}
\leq \epsilon'
\coloneqq \delta^2 \norm{\underline{Q}}^2 \sum_{i=0}^{n-1} \norm{(I + \delta \underline{Q})^{i} f}_{c}
\leq \epsilon. \qedhere
\] \end{proof}
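The uniform method of this lemma admits a compact sketch in code: pick $n$ from the a priori bound, iterate $g \mapsto (I + \delta \underline{Q}) g$, and accumulate the a posteriori bound $\epsilon'$ along the way, which the lemma then guarantees to be at most $\epsilon$. The Python sketch below does this for the binary lower transition rate operator with arbitrary illustrative rate intervals.

```python
import math

# Sketch of the uniform approximation method: n comes from the a priori
# bound, and the a posteriori bound eps' = delta^2 ||Q||^2 * sum of
# centred norms is accumulated along the way.
def lower_Q(f):
    bounds = [(0.5, 2.0), (1.0, 3.0)]   # illustrative rate intervals
    out = []
    for x, (lo, up) in enumerate(bounds):
        d = f[1 - x] - f[x]
        out.append(lo * d if d >= 0 else up * d)
    return out

norm_Q = 6.0                      # ||Q|| = 2 * max(2, 3)
f, t, eps = [1.0, 0.0], 1.0, 1e-3
norm_c = (max(f) - min(f)) / 2    # centred (sup-norm) distance to constants

n = math.ceil(max(t * norm_Q / 2, t**2 * norm_Q**2 * norm_c / eps))
delta, g, eps_prime = t / n, list(f), 0.0
for _ in range(n):
    eps_prime += delta**2 * norm_Q**2 * (max(g) - min(g)) / 2
    qg = lower_Q(g)
    g = [g[x] + delta * qg[x] for x in range(2)]

# The lemma guarantees eps' <= eps.
assert eps_prime <= eps
```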
\begin{proof}[Proof of Theorem~\ref{the:UniformApproximationWithError}]
First, we assume $t = 0$, $\norm{\underline{Q}} = 0$ or $\norm{f}_{c} = 0$.
In this case, $n = 0$ and $\delta = 0$.
By Lemma~\ref{lem:LTRO:SpecialNoApproximationCase}, we find that
\[
\norm{\lowtranopa{t} f - g_{(0)}} = \norm{\lowtranopa{t} f - \Psi_{t}(0) f} = 0 < \epsilon.
\]
Next, we assume $t > 0$, $\norm{\underline{Q}} > 0$ and $\norm{f}_{c} > 0$.
In this case, the integer $n$ that is determined on line~\ref{line:Uniform:DetermineN} of Algorithm~\ref{alg:Uniform} is just the lowest natural number that satisfies the requirement of Lemma~\ref{lem:UniformApproximationWithError}, from which the stated result follows immediately. \end{proof}
\begin{lemma} \label{lem:AdaptiveApproximation}
Let $\underline{Q}$ be a lower transition rate operator, $f \in \setoffna$, $t' \in \reals_{\geq 0}$, $\epsilon \in \reals_{> 0}$, $n, m, k \in \mathbb{N}$ and let $\delta_{1}, \dots, \delta_{n}$ be a sequence in $\reals_{\geq 0}$.
If (i) $1 \leq k \leq m$, (ii) $k \delta_{n} + \sum_{i = 1}^{n-1} m \delta_i = t'$, and (iii) for all $i \in \{ 1, \dots, n \}$, $\delta_{i} \norm{\underline{Q}} \leq 2$ and
\[
t' \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c} \delta_i \leq \epsilon,
\]
where $\Phi_0 \coloneqq I$ and for all $i \in \{ 1, \dots, n-1 \}$, $\Phi_i \coloneqq (I + \delta_i \underline{Q})^{m} \Phi_{i-1}$; then
\begin{align*}
\norm{\lowtranopa{t'} f - \Phi_{m,k}(\delta_1,\dots,\delta_n) f}
&\leq \epsilon'
\coloneqq \sum_{i = 1}^{n} \delta_i^2 \norm{\underline{Q}}^2 \sum_{j=0}^{k_i - 1} \norm{(I + \delta_{i} \underline{Q})^{j} \Phi_{i-1} f}_{c} \\
&\leq \sum_{i = 1}^{n} k_{i} \delta_{i}^{2} \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c}
\leq \epsilon,
\end{align*}
where $k_i \coloneqq m$ for all $i \in \{ 1, \dots, n-1 \}$ and $k_{n} \coloneqq k$. \end{lemma} \begin{proof}
Assume that (i) $1 \leq k \leq m$, (ii) $k \delta_{n} + \sum_{i=1}^{n-1} m \delta_{i} = t'$, and (iii) for all $i \in \{ 1, \dots, n \}$, $\delta_{i} \norm{\underline{Q}} \leq 2$.
Observe that by Proposition~\ref{prop:IPlusDeltaQLowTranOp} and \ref{prop:LTO:CompositionIsAlsoLTO}, the operators $\Phi_0, \dots, \Phi_{n-1}$ are all lower transition operators.
From Lemma~\ref{lem:ExplicitErrorBound}, it follows that
\begin{align}
\label{eqn:AdaptiveApproximation:BoundOnError}
\norm{\lowtranopa{t'} f - \Phi_{m,k}(\delta_1,\dots,\delta_n) f}
\leq \sum_{i = 1}^{n} \delta_{i}^2 \norm{\underline{Q}}^2 \sum_{j = 0}^{k_{i}-1} \norm{(I + \delta_{i} \underline{Q})^{j} \Phi_{i-1} f}_{c}.
\end{align}
Hence, it is obvious that the contribution of the $i$-th approximation step to (the upper bound of) the error is
\begin{equation}
\label{eqn:AdaptiveApproximation:UpperBoundOnError}
\delta_i^2 \norm{\underline{Q}}^2 \sum_{j= 0}^{k_{i}-1} \norm{(I + \delta_{i} \underline{Q})^{j} \Phi_{i-1} f}_{c}
\leq k_{i} \delta_i^2 \norm{\underline{Q}}^2 \norm{ \Phi_{i-1} f}_{c},
\end{equation}
where the inequality follows from \ref{prop:LTO:VarNormTf}.
We want the contribution of the $i$-th approximation step to the error to be proportional to its length $k_{i} \delta_i$.
Therefore, we demand that the contribution of the $i$-th approximation step is bounded above by $k_{i} \delta_i \epsilon / t'$, which yields the condition
\begin{equation}
\label{eqn:AdaptiveApproximation:ConditionOnDelta}
t' \delta_i \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c} \leq \epsilon.
\end{equation}
It is obvious that the conditions we have imposed on $\delta_{1}, \dots, \delta_{n}$ are those of the statement.
Combining Eqns.~\eqref{eqn:AdaptiveApproximation:BoundOnError}, \eqref{eqn:AdaptiveApproximation:UpperBoundOnError} and \eqref{eqn:AdaptiveApproximation:ConditionOnDelta} yields
\begin{align*}
\norm{\lowtranopa{t'} f - \Phi_{m,k}(\delta_1,\dots,\delta_n) f}
&\leq \epsilon' \coloneqq \sum_{i = 1}^{n} \delta_i^2 \norm{\underline{Q}}^2 \sum_{j = 0}^{k_{i}-1} \norm{(I + \delta_{i} \underline{Q})^{j} \Phi_{i-1} f}_{c} \\
&\leq \sum_{i = 1}^{n} k_{i} \delta_{i}^{2} \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c}
\leq \epsilon. \qedhere
\end{align*} \end{proof}
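The adaptive scheme behind this lemma can be sketched compactly: each step size is chosen as large as the per-step error budget allows, $\delta_i = \min\{\Delta, 2/\norm{\underline{Q}}, \epsilon / (t \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c})\}$, and is reused for $m$ iterations before being recomputed. The Python sketch below is a simplified rendering with arbitrary illustrative numbers, not a faithful transcription of Algorithm~\ref{alg:Adaptive}; it also checks that the total number of iterations does not exceed that of the uniform method.

```python
import math

# Simplified sketch of the adaptive scheme, using the binary lower
# transition rate operator with illustrative rate intervals.
def lower_Q(f):
    bounds = [(0.5, 2.0), (1.0, 3.0)]
    out = []
    for x, (lo, up) in enumerate(bounds):
        d = f[1 - x] - f[x]
        out.append(lo * d if d >= 0 else up * d)
    return out

norm_Q, m = 6.0, 10                  # ||Q|| = 2 * max(2, 3); m is arbitrary
f, t, eps = [1.0, 0.0], 1.0, 1e-3
g, remaining, steps = list(f), t, 0

while remaining > 0:
    norm_c = (max(g) - min(g)) / 2   # centred norm of the current iterate
    if norm_c == 0:                  # g is constant, so T_s g = g: stop early
        break
    # Largest step size the per-step error budget allows.
    delta = min(remaining, 2 / norm_Q, eps / (t * norm_Q**2 * norm_c))
    k = min(m, math.ceil(remaining / delta))
    delta = min(delta, remaining / k)
    for _ in range(k):
        qg = lower_Q(g)
        g = [g[x] + delta * qg[x] for x in range(2)]
    remaining -= k * delta
    steps += k

# Guarantee of the accompanying theorem: no more iterations than the
# uniform method would need.
n_uniform = math.ceil(max(t * norm_Q / 2,
                          t**2 * norm_Q**2 * ((max(f) - min(f)) / 2) / eps))
assert steps <= n_uniform
```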
\begin{proof}[Proof of Theorem~\ref{the:AdaptiveApproximation}]
We use Algorithm~\ref{alg:Adaptive} to determine $n$ and $k$, and if applicable also $k_i$, $\delta_{i}$ and $g_{(i,j)}$.
If $\norm{f}_{c} = 0$, $\norm{\underline{Q}} = 0$ or $t = 0$, then by Lemma~\ref{lem:LTRO:SpecialNoApproximationCase}
\[
\norm{\lowtranopa{t} f - g_{(0,m)}} = \norm{\lowtranopa{t} f - f} = 0 < \epsilon.
\]
We therefore assume that $\norm{f}_{c} > 0$, $\smash{\norm{\underline{Q}} > 0}$ and $t > 0$, and let $\delta_1, \dots, \delta_n \in \reals_{> 0}$ and $k \in \mathbb{N}$ be determined by running Algorithm~\ref{alg:Adaptive}.
Let $t' \coloneqq k \delta_n + \sum_{i=1}^{n-1} m \delta_{i} \leq t$.
It is then a matter of straightforward verification that $\delta_1,\dots, \delta_n$ and $k$ satisfy the requirements of Lemma~\ref{lem:AdaptiveApproximation}: (i) $1 \leq k \leq m$, (ii) $k \delta_n + \sum_{j = 1}^{n-1} m \delta_j = t'$, and (iii) for all $i \in \{ 1, \dots, n\}$, $\delta_i\norm{\underline{Q}}\leq2$ and
\[
t' \delta_{i} \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c}\leq t \delta_{i} \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c} = t \delta_{i} \norm{\underline{Q}}^2 \norm{g_{(i-1,m)}}_{c} \leq \epsilon.
\]
Therefore,
\begin{equation}
\norm{\lowtranopa{t'} f - g_{(n,k)}}
\leq \sum_{i=1}^{n} \delta_i^2 \norm{\underline{Q}}^2 \sum_{j = 0}^{k_i - 1} \norm{g_{(i,j)}}_{c}
\leq \sum_{i=1}^{n} k_i \delta_i^2 \norm{\underline{Q}}^2 \norm{g_{(i-1,m)}}_{c}
\leq \epsilon.\label{eq:tprimeformula}
\end{equation}
If $t'=t$, this concludes the proof of the first part of the statement.
If $t'<t$, we have that $\norm{g_{(n,k)}}_{c} = 0$, which implies that there is some $\mu\in\mathbb{R}$ such that $g_{(n,k)}=\mu$.
Hence, it follows that
\begin{equation*}
\norm{\lowtranopa{t}f-g_{(n,k)}}
=
\norm{\lowtranopa{t}f-\mu}
=\norm{\lowtranopa{t-t'}\lowtranopa{t'}f-\lowtranopa{t-t'}\mu}
\leq\norm{\lowtranopa{t'}f-\mu}
=\norm{\lowtranopa{t'}f-g_{(n,k)}},
\end{equation*}
where the second equality follows from Eqn.~\eqref{eqn:TDLTO:SemiGroup} and \ref{prop:LTO:AdditionOfConstant} and where the inequality follows from \ref{prop:LTO:NonExpansiveness}.
Combined with Eqn.~\eqref{eq:tprimeformula}, this again implies the first part of the statement.
To prove the final part of the statement, we assume that $\norm{f}_{c} > 0$, $\norm{\underline{Q}} > 0$ and $t > 0$, and let $\delta_1, \dots, \delta_n \in \reals_{> 0}$ and $k \in \mathbb{N}$ be constructed by running Algorithm~\ref{alg:Adaptive}.
We let $n_{u}$ denote the number of iterations of the uniform method:
\[
n_{u}
\coloneqq \left\lceil \max \left\{ \frac{t \norm{\underline{Q}}}{2}, \frac{t^2 \norm{\underline{Q}}^2 \norm{f}_{c}}{\epsilon} \right\} \right\rceil.
\]
If we let $\delta_u \coloneqq t / n_{u}$, then obviously
\[
0 < \delta_{u} \leq \min \left\{ t, \frac{2}{\norm{\underline{Q}}}, \frac{\epsilon}{t \norm{\underline{Q}}^2 \norm{f}_{c}} \right\}.
\]
We now consider two cases: $n = 1$ and $n > 1$.
We start with the case $n=1$.
Let
\[
\delta_{1}^{*} \coloneqq \min \left\{ t, \frac{2}{\norm{\underline{Q}}}, \frac{\epsilon}{t \norm{\underline{Q}}^2 \norm{f}_{c}} \right\}.
\]
Since $n=1$, it then holds that $t \leq m \delta_{1}^{*}$ and/or $\norm{g_{(1,m)}}_{c} = 0$.
We first assume that $t \leq m \delta_{1}^{*}$.
Note that $\delta_{1}^{*}$ is strictly positive as we have assumed that $\norm{f}_{c}$, $\norm{\underline{Q}}$ and $t$ are strictly positive.
We let $k \coloneqq \left\lceil t / \delta_{1}^{*} \right\rceil$ and $\delta_{1} \coloneqq t / k$, such that
\[
k = \left\lceil \frac{t}{\delta_{1}^{*}} \right\rceil = \left\lceil \max \left\{ 1, \frac{t \norm{\underline{Q}}}{2}, \frac{t^{2} \norm{\underline{Q}}^2 \norm{f}_{c}}{\epsilon} \right\} \right\rceil.
\]
As in this case the definitions of $n_{u}$ and $k$ are equivalent, we find that $k + m (n-1) = k = n_{u}$.
Next, we assume that $n = 1$ but $t > m \delta_{1}^{*}$.
This can only be the case if $\norm{g_{(1,m)}}_{c} = 0$ and
\[
\delta_{1} \coloneqq \delta_{1}^{*} = \min \left\{ \frac{2}{\norm{\underline{Q}}}, \frac{\epsilon}{t \norm{\underline{Q}}^2 \norm{f}_{c}} \right\}.
\]
Therefore $\delta_{u} \leq \delta_{1}$, such that $n_u \geq t / \delta_{1} > m$.
As the total number of iterations is $k = m$, it immediately follows that $m (n-1) + k = m < n_u$.
Next, we consider the case $n > 1$.
For all $i \in \{ 1, \dots, n-1 \}$,
\[
\delta_{i}
\coloneqq \min \left\{ \frac{2}{\norm{\underline{Q}}}, \frac{\epsilon}{t \norm{\underline{Q}}^2 \norm{\Phi_{i-1} f}_{c}} \right\},
\]
where $\Phi_0 \coloneqq I$ and $\Phi_{i} \coloneqq (I + \delta_{i}\underline{Q})^{m} \Phi_{i-1}$; this definition is valid because we previously assumed that $\norm{f}_{c} > 0$, $\norm{\underline{Q}} > 0$ and $t > 0$.
Note that our definition of $\delta_{i}$ differs from that of line~\ref{line:Adaptive:delta} in Algorithm~\ref{alg:Adaptive}: we have left out the upper bound $\Delta = t - \sum_{j = 1}^{i-1} m \delta_{j}$ because this upper bound only plays a part for the final step $\delta_{n}$.
As by \ref{prop:LTO:BoundedByMinAndMax} $\norm{\Phi_{i} f}_{c} \leq \norm{\Phi_{i-1} f}_{c}$, we find that
\[
\delta_{u} \leq \delta_1 \leq \delta_2 \leq \cdots \leq \delta_{n-1},
\]
where the first inequality follows from the definition of $\delta_u$.
As the step sizes that are used are at least as large as the uniform step size, we intuitively expect that the number of necessary iterations will be bounded above by $n_{u}$.
To formally prove this, we again distinguish two sub-cases: $k \delta_{n} + \sum_{i = 1}^{n-1} m \delta_{i} < t$ and $k \delta_{n} + \sum_{i = 1}^{n-1} m \delta_{i} = t$.
We first consider the sub-case $k \delta_{n} + \sum_{i = 1}^{n-1} m \delta_{i} < t$.
This can only occur if $\norm{g_{(n,m)}}_{c} = 0$ and $k = m$.
As $m \delta_{n} < t - \sum_{i = 1}^{n-1} m \delta_{i}$ and $\norm{g_{(n-1,m)}}_{c} = \norm{\Phi_{n-1} f}_{c} > 0$,
\[
\delta_{n} = \min \left\{ \frac{2}{\norm{\underline{Q}}}, \frac{\epsilon}{t \norm{\underline{Q}}^2 \norm{\Phi_{n-1} f}_{c}} \right\} \geq \delta_{n-1},
\]
where the inequality follows from $\norm{\Phi_{n-2} f}_{c} \geq \norm{\Phi_{n-1} f}_{c}$.
Note that
\[
m n \delta_{1} = \left(k + m (n-1)\right) \delta_{1} \leq k \delta_{n} + \sum_{i=1}^{n-1} m \delta_{i} < t = n_{u} \delta_{u},
\]
where the first inequality follows from the non-decreasing character of $\delta_1, \dots, \delta_n$.
If we divide both sides of the inequality by $\delta_{1}$, then we find that $m n < n_{u} \delta_{u} / \delta_{1}$.
Using that $\delta_{u} \leq \delta_{1}$ now yields that the total number of iterations $k + (n-1) m = m n$ is strictly smaller than $n_{u}$.
Next, we consider the sub-case $k \delta_{n} + \sum_{i = 1}^{n-1} m \delta_{i} = t$.
Because $1 \leq k \leq m$ and $\delta_{n}>0$, $\sum_{i=1}^{n-1} m \delta_{i} < t = n_{u} \delta_{u}$.
Hence, there is some $n_{u}' < n_{u}$ such that $n_{u}' \delta_{u} < \sum_{i=1}^{n-1} m \delta_{i} \leq (n_{u}' + 1) \delta_{u}$.
The final step size $\delta_{n}$ is derived from the remaining time
\begin{align*}
t - \sum_{i=1}^{n-1} m \delta_{i}
\eqqcolon \Delta
&\geq n_{u} \delta_{u} - (n_{u}' + 1) \delta_{u} = (n_{u} - n_{u}' - 1) \delta_{u}, \\
\Delta
&< n_{u} \delta_{u} - n_{u}' \delta_{u} = (n_{u} - n_{u}') \delta_{u},
\end{align*}
where the first inequality follows from $\sum_{i=1}^{n-1} m\delta_{i} \leq (n_{u}' + 1) \delta_{u}$ and the second inequality follows from $\sum_{i=1}^{n-1} m\delta_{i} > n_{u}' \delta_{u}$.
We first determine the maximal allowable final step size
\[
\delta_{n}^{*}
\coloneqq \min \left\{ \Delta, \frac{2}{\norm{\underline{Q}}}, \frac{\epsilon}{t \norm{\underline{Q}}^{2} \norm{\Phi_{n-1} f}_{c}} \right\},
\]
and then determine the actual final step size as $\delta_{n} \coloneqq \Delta / k$, with $1 \leq k \coloneqq \lceil \Delta / \delta_{n}^{*} \rceil \leq m$.
If $(n_{u} - n_{u}' - 1) > 0$, then $\Delta \geq (n_{u} - n_{u}' - 1) \delta_{u} \geq \delta_{u}$.
Therefore, and because the two other upper bounds on $\delta_{n}^{*}$ are also greater than $\delta_{u}$, we find that $\delta_{n}^{*} \geq \delta_{u}$.
From this, we infer that $k = \left\lceil \nicefrac{\Delta}{\delta_{n}^{*}} \right\rceil \leq \left\lceil \nicefrac{\Delta}{\delta_{u}} \right\rceil$.
As $\Delta < (n_{u} - n_{u}') \delta_{u}$, we now find that $k \leq (n_{u} - n_{u}')$.
Note that
\[
m(n-1) \delta_{1} \leq \sum_{i = 1}^{n-1} m \delta_{i} \leq (n_{u}' + 1) \delta_{u},
\]
where the first inequality follows from the non-decreasing character of $\delta_{1}, \dots, \delta_{n-1}$.
Dividing both sides of the inequality by $\delta_{1}$ and using $\delta_{u} \leq \delta_{1}$ yields $m (n-1) \leq n_{u}' + 1$.
If $m (n-1) < n_{u}' + 1$, then combining this strict inequality with the obtained upper bound for $k$ yields
\[
k + m(n-1) < (n_{u} - n_{u}') + (n_{u}' + 1) = n_{u} + 1,
\]
which implies that $k+m(n-1)\leq n_u$, as desired.
If $m (n-1) = n_{u}' + 1$, then
\[
\Delta = t - \sum_{i = 1}^{n-1} m \delta_{i} \leq t - \sum_{i = 1}^{n-1} m \delta_{u} = (n_{u} - n_{u}' - 1) \delta_{u},
\]
where the inequality is allowed because $\delta_{u} \leq \delta_{i}$ for all $i \in \{ 1, \dots, n-1 \}$.
As we previously proved that $\Delta \geq (n_{u} - n_{u}' - 1) \delta_{u}$, we obtain that $m (n-1) = n_{u}' + 1$ implies that $\Delta = (n_{u} - n_{u}' - 1) \delta_{u}$.
As $\delta_{n}^{*} \geq \delta_{u}$, in this case we are guaranteed that $k = \lceil \nicefrac{\Delta}{\delta_{n}^{*}} \rceil = \lceil \nicefrac{(n_{u} - n_{u}' - 1) \delta_{u}}{\delta_{n}^{*}} \rceil \leq (n_{u} - n_{u}' - 1)$.
Hence, we again find that
\[
k + m(n-1) \leq (n_{u} - n_{u}' - 1) + (n_{u}' + 1) = n_{u},
\]
as desired.
If $(n_{u} - n_{u}' - 1) = 0$, then $\Delta < (n_{u} - n_{u}') \delta_{u} = \delta_{u}$.
As the two other upper bounds on $\delta_{n}^{*}$ are greater than $\delta_{u}$, this implies that $\delta_{n}^{*} = \Delta$.
Consequently, $k = \lceil \nicefrac{\Delta}{\delta_{n}^{*}} \rceil = \lceil \nicefrac{\Delta}{\Delta} \rceil = 1$.
Note that
\[
m(n-1) \delta_{1} \leq \sum_{i = 1}^{n-1} m \delta_{i} < n_{u} \delta_{u},
\]
from which it follows that $m (n-1) < n_{u}$.
Hence, we find that $k + m (n-1) < 1 + n_{u}$, and therefore also, once more, that $k + m (n-1) \leq n_{u}$.
This concludes the proof. \end{proof}
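The interplay between $\Delta$, $\delta_{n}^{*}$, $k$ and $\delta_{n}$ in the proof above is easy to check numerically. The following sketch (with arbitrary illustrative values for $\Delta$ and $\delta_{n}^{*}$, not taken from the text) verifies that the choice $k = \lceil \Delta / \delta_{n}^{*} \rceil$ and $\delta_{n} = \Delta / k$ always exhausts the remaining time exactly while respecting the maximal allowable step size.

```python
import math

def final_steps(Delta, delta_star):
    """Split the remaining time Delta into k equal steps of size at most
    delta_star, mirroring k = ceil(Delta / delta_n*) and delta_n = Delta / k."""
    k = math.ceil(Delta / delta_star)
    delta_n = Delta / k
    return k, delta_n

# Arbitrary illustrative values, not taken from the text.
for Delta, delta_star in [(0.35, 0.1), (1.0, 0.3), (0.05, 0.1)]:
    k, delta_n = final_steps(Delta, delta_star)
    assert math.isclose(k * delta_n, Delta)   # the k steps exhaust Delta exactly
    assert delta_n <= delta_star + 1e-12      # and respect the maximal step size
```

In particular, $\delta_{n} = \Delta / \lceil \Delta / \delta_{n}^{*} \rceil \leq \delta_{n}^{*}$ holds by construction, which is exactly the property the case analysis above relies on.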
\section{A more thorough look at ergodicity} \label{app:Extra:QualitativeErgodicity} Before we prove the results of Section~\ref{sec:ergodicity}, we need to properly introduce the ergodicity of lower transition (rate) operators. We explicitly chose not to do this in the main text, as the main focus of this contribution is approximating $\lowtranopa{t} f$. Nevertheless, we now give a brief overview of the relevant literature, limiting ourselves to the qualitative point of view of \cite{decooman2009}, \cite{2012Hermans} and \cite{2017DeBock}.
\subsection{Qualitatively characterising ergodicity of lower transition operators} Recall that a lower transition rate operator is ergodic if and only if, for all $f \in \setoffna$, $\lowtranopa{t} f$ converges to a constant function as $t \to \infty$. \citet{2012Hermans} introduce a similar notion for lower transition operators. \begin{definition}
A lower transition operator $\underline{T}$ is \emph{ergodic} if, for all $f \in \setoffna$, the limit $\lim_{n \to \infty} \underline{T}^{n} f$ exists and is a constant function. \end{definition}
The condition of this definition can, in general, not be checked in practice. Nonetheless, \citet{2012Hermans} provide a necessary and sufficient condition for the ergodicity of a lower transition operator, based on the following definition. \begin{definition} \label{def:LowTranOp:RegularlyAbsorbing}
The lower transition operator $\underline{T}$ is \emph{regularly absorbing} if it is (i) \emph{top class regular}, i.e.
\[
\statespacesub{PA}
\coloneqq \left\{ x \in \mathcal{X} \colon (\exists n \in \mathbb{N})(\forall y \in \mathcal{X})~[\overline{T}^n \indic{x}](y) > 0 \right\} \neq \emptyset,
\]
and (ii) \emph{top class absorbing}, i.e.
\[
(\forall y \in \mathcal{X} \setminus \statespacesub{PA})(\exists n \in \mathbb{N})~[\underline{T}^n \indic{\statespacesub{PA}}](y) > 0.
\] \end{definition}
\begin{proposition}[Proposition~3 from \citep{2012Hermans}] \label{prop:DiscreteErgodicity:NecAndSuff}
The lower transition operator $\underline{T}$ is ergodic if and only if it is regularly absorbing. \end{proposition}
\cite{decooman2009} mention an equivalent way of looking at top class regularity that uses the ternary accessibility relation $\cdot \upreachda{\cdot} \cdot$. \begin{definition} \label{def:LowTranOp:PossiblyAccesible}
Let $\underline{T}$ be any lower transition operator.
For all $x,y \in \mathcal{X}$ and all $n \in \nats_{0}$, we say that \emph{$x$ is possibly accessible from $y$ in $n$ steps}, denoted by $y \upreachda{n} x$, if and only if $[\overline{T}^n \indic{x}](y)>0$.
If there is some $n \in \nats_{0}$ such that $y \upreachda{n} x$, then the state $x$ is simply said to be \emph{possibly accessible} from the state $y$, denoted by $y \rightsquigarrow x$. \end{definition}
\begin{lemma} \label{lem:LowTranOp:PossiblyAccesibleSequence}
Let $\underline{T}$ be a lower transition operator, $x, y \in \mathcal{X}$ and $n \in \mathbb{N}$.
Then $y \upreachda{n} x$ if and only if there is a sequence $y = x_0, \dots, x_{n} = x$ in $\mathcal{X}$ such that for all $k \in \{ 1,\dots, n \}$, $[\overline{T} \indic{x_{k}}](x_{k-1})>0$. \end{lemma} \begin{proof}
Follows immediately from \cite[Proposition~4]{2012Hermans}. \end{proof}
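From a computational point of view, Lemma~\ref{lem:LowTranOp:PossiblyAccesibleSequence} turns possible accessibility into reachability in a finite digraph. The sketch below assumes, purely for illustration, that the upper transition operator is the upper envelope of a finite set of (made-up) transition matrices, so that $[\overline{T} \indic{x}](y) > 0$ exactly when some matrix in the set has a positive $(y, x)$ entry.

```python
import numpy as np

# Illustration only: we assume the upper transition operator is the upper
# envelope of a finite (made-up) set of transition matrices, so that
# [Tbar f](y) = max_i (T_i f)(y) and hence [Tbar I_x](y) = max_i T_i[y, x].
mats = np.stack([
    [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0]],
    [[1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.5, 0.5, 0.0]],
])

# One-step sign digraph: an edge from y to x iff [Tbar I_x](y) > 0.
A = np.any(mats > 0, axis=0)

def possibly_accessible(y, x, n):
    """Decide y ~n~> x by looking for a walk of length n in the sign digraph,
    in the spirit of Lemma lem:LowTranOp:PossiblyAccesibleSequence."""
    walk = np.linalg.matrix_power(A.astype(int), n)
    return bool(walk[y, x] > 0)
```

For $n = 0$ the boolean matrix power is the identity, which recovers property A1 below.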
It can be almost immediately verified---for instance using Lemma~\ref{lem:LowTranOp:PossiblyAccesibleSequence}---that $\cdot \upreachda{\cdot} \cdot$ satisfies the three defining properties of a ternary accessibility relation: \begin{enumerate}[label=A\arabic*:,ref=(A\arabic*)]
\item $(\forall x,y \in \mathcal{X})~x \upreachda{0} y \Leftrightarrow x = y$,
\item \label{def:AccesRelation:xyz}
$(\forall x,y,z \in \mathcal{X}) (\forall n,m \in \nats_{0})~x \upreachda{n} y \text{ and } y \upreachda{m} z \Rightarrow x \upreachda{n+m} z$,
\item $(\forall x \in \mathcal{X})(\forall n \in \mathbb{N})(\exists y \in \mathcal{X})~x \upreachda{n} y$. \end{enumerate}
The following proposition is the reason why we introduced the accessibility relation $\cdot \upreachda{\cdot} \cdot$. \begin{proposition}[Proposition~4.3 from \citep{decooman2009}] \label{prop:LowTranOp:AlternativeDefinitionOfTopClassRegularity}
The lower transition operator $\underline{T}$ is top class regular if and only if
\[
\statespacesub{PA} = \{ x\in\mathcal{X} \colon (\exists n \in \mathbb{N})(\forall k \geq n)(\forall y \in \mathcal{X})~y \upreachda{k} x \} \neq \emptyset.
\] \end{proposition}
\begin{lemma} \label{lem:LowTranOp:TopClassRegularSpecialValues}
If the lower transition operator $\underline{T}$ is top class regular, then for all $x \in \statespacesub{PA}$, all $y \in \statespacesub{PA}^{c}$ and all $k \in \mathbb{N}$,
\begin{align*}
[\overline{T}^k \indic{y}](x)
&= 0
& \text{and} & &
[\underline{T}^k \indic{\statespacesub{PA}}](x)
&= 1.
\end{align*} \end{lemma} \begin{proof}
Let $\underline{T}$ be a top class regular lower transition operator with regular top class $\statespacesub{PA}$.
We first prove the first equality.
To this end, we fix some arbitrary $x \in \statespacesub{PA}$ and $y \in \statespacesub{PA}^{c}$.
Assume ex-absurdo that there is some $k \in \mathbb{N}$ such that $[\overline{T}^{k} \indic{y}](x) > 0$.
By Definition~\ref{def:LowTranOp:PossiblyAccesible}, this assumption is equivalent to $x \upreachda{k} y$.
Since $x \in \statespacesub{PA}$, Proposition~\ref{prop:LowTranOp:AlternativeDefinitionOfTopClassRegularity} yields some $n \in \mathbb{N}$ such that $z \upreachda{\ell} x$ for all $\ell \geq n$ and all $z \in \mathcal{X}$.
As a consequence of \ref{def:AccesRelation:xyz}, we find that $z \upreachda{\ell + k} y$ for all $\ell \geq n$ and all $z \in \mathcal{X}$, which in turn implies that $y \in \statespacesub{PA}$.
However, this contradicts $y \in \statespacesub{PA}^{c}$, such that for all $k \in \mathbb{N}$, $[\overline{T}^{k} \indic{y}](x) = 0$.
Next, we prove the second statement.
From the conjugacy of $\underline{T}$ and $\overline{T}$ and \ref{prop:LTO:AdditionOfConstant}, it follows that
\begin{align*}
\underline{T} \indic{\statespacesub{PA}}
&= - \overline{T} (-\indic{\statespacesub{PA}})
= 1 - \overline{T} (1 - \indic{\statespacesub{PA}})
= 1 - \overline{T} \indic{\statespacesub{PA}^{c}}.
\end{align*}
From the conjugacy of $\underline{T}$ and $\overline{T}$ and \ref{def:LTO:SuperAdditive}, it follows that
\[
\overline{T} \indic{\statespacesub{PA}^{c}}
\leq \sum_{z \in \statespacesub{PA}^{c}} \overline{T} \indic{z}.
\]
From the---already proven---first equality of the statement, we know that $\sum_{z \in \statespacesub{PA}^{c}} [\overline{T} \indic{z}](x) = 0$, hence
\[
[\underline{T} \indic{\statespacesub{PA}}](x) = 1 - [\overline{T} \indic{\statespacesub{PA}^{c}}](x)
\geq 1 - \sum_{z \in \statespacesub{PA}^{c}} [\overline{T} \indic{z}](x) = 1.
\]
Note that by \ref{prop:LTO:BoundedByMinAndMax}, $[\underline{T} \indic{\statespacesub{PA}}](x) \leq \max \indic{\statespacesub{PA}} = 1$.
By combining the two obtained inequalities, we find that the second equality of the statement holds for $k = 1$: $[\underline{T} \indic{\statespacesub{PA}}](x) = 1$.
Next, fix some $k > 1$, and assume that the second equality holds for all $1 \leq \ell \leq k-1$.
Then by the induction hypothesis and \ref{prop:LTO:BoundedByMinAndMax}, $\indic{\statespacesub{PA}} \leq \underline{T}^{k-1} \indic{\statespacesub{PA}}$.
By \ref{prop:LTO:Monotonicity}, this implies that $\underline{T} \indic{\statespacesub{PA}} \leq \underline{T}^{k} \indic{\statespacesub{PA}}$.
As by the induction hypothesis $[\underline{T} \indic{\statespacesub{PA}}](x) = 1$, we find that $[\underline{T}^{k} \indic{\statespacesub{PA}}](x) \geq 1$.
It immediately follows from \ref{prop:LTO:BoundedByMinAndMax} and \ref{prop:LTO:CompositionIsAlsoLTO} that $\underline{T}^k \indic{\statespacesub{PA}} \leq 1$.
Hence, we have shown that $[\underline{T}^{k} \indic{\statespacesub{PA}}](x) = 1$, which finalises the proof. \end{proof}
The following proposition is an altered statement of \cite[Proposition~6]{2012Hermans}.
\begin{proposition} \label{prop:LowTranOp:TopClassAbsorbingWithRecursion}
Let $\underline{T}$ be a top class regular lower transition operator.
Then $\underline{T}$ is top class absorbing if and only if $B_{n} = \mathcal{X}$, where $\{ B_k \}_{k\in\nats_{0}}$ is the sequence defined by the initial condition $B_0 \coloneqq \statespacesub{PA}$ and, for all $k \in \nats_{0}$, by the recursive relation
\[
B_{k+1}
\coloneqq B_{k} \cup \big\{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \big\} = \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{k}}](x) > 0 \right\},
\]
and where $n \leq \card{\mathcal{X}\setminus\statespacesub{PA}}$ is the first index such that $B_{n} = B_{n+1}$. \end{proposition} \begin{proof}
Let $\underline{T}$ be a top class regular lower transition operator with regular top class $\statespacesub{PA}$.
By \cite[Proposition~6]{2012Hermans}, $\underline{T}$ is top class absorbing if and only if $A_n = \emptyset$, where $\{ A_k \}_{k \in \nats_{0}}$ is the sequence defined by the initial condition $A_0 \coloneqq \mathcal{X}\setminus\statespacesub{PA}$ and, for all $k \in \nats_{0}$, by the recursive relation
\begin{align*}
A_{k+1}
\coloneqq \{ x \in A_{k} \colon [\overline{T} \indic{A_{k}}](x) = 1 \},
\end{align*}
\end{align*}
and where $n \leq \card{\mathcal{X}\setminus\statespacesub{PA}}$ is the first index for which $A_{n} = A_{n+1}$.
For any $k \in \nats_{0}$,
\[
\overline{T} \indic{A_{k}} = - \underline{T} (- \indic{A_{k}}) =1 - \underline{T} (1 - \indic{A_{k}}) = 1 - \underline{T} \indic{\mathcal{X}\setminus A_{k}},
\]
where the first equality follows from the conjugacy of $\underline{T}$ and $\overline{T}$ and the second equality follows from \ref{prop:LTO:AdditionOfConstant}.
Therefore, for all $x \in A_k$, $[\overline{T} \indic{A_{k}}](x) = 1$ if and only if $[\underline{T} \indic{\mathcal{X}\setminus A_{k}}](x) = 0$.
Observe that $A_{k+1} \subseteq A_{k}$ and define $B_{k} \coloneqq \mathcal{X} \setminus A_{k}$ for all $k \in \nats_{0}$.
Note that for all $k \in \nats_{0}$, $B_{k} \subseteq B_{k+1}$ and
\[
B_{k+1} \setminus B_{k}
= A_{k} \setminus A_{k+1}
= \{ x \in A_{k} \colon [\underline{T} \indic{\mathcal{X}\setminus A_{k}}](x) > 0 \}
= \{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \}.
\]
Observe that $B_0 = \mathcal{X} \setminus A_0 = \statespacesub{PA}$ and by the previous equality, for all $k \in \nats_{0}$,
\[
B_{k+1}
= B_{k} \cup \{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \}.
\]
We now prove by induction that
\[
B_{k+1}
= \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{k}}](x) > 0 \right\} ~\text{for all}~k \in \nats_{0}.
\]
First, we consider the case $k = 0$.
Recall from Lemma~\ref{lem:LowTranOp:TopClassRegularSpecialValues} that $[\underline{T} \indic{\statespacesub{PA}}](x_0) = 1 > 0$ for all $x_0 \in \statespacesub{PA}$.
Hence,
\begin{align*}
B_{1}
&= B_{0} \cup \{ x \in \mathcal{X} \setminus B_{0} \colon [\underline{T} \indic{B_{0}}](x) > 0 \} \\
&= \{ x \in B_{0} \colon [\underline{T} \indic{B_{0}}](x) > 0 \} \cup \{ x \in \mathcal{X} \setminus B_{0} \colon [\underline{T} \indic{B_{0}}](x) > 0 \} \\
&= \{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{0}}](x) > 0 \}.
\end{align*}
Next, we fix some $k \in \mathbb{N}$ and assume that the equality holds for all $\ell \in \nats_{0}$ with $\ell < k$.
We now prove that the equality then also holds for $k$.
Observe that $B_{k-1} \subseteq B_{k}$ implies $\indic{B_{k-1}} \leq \indic{B_{k}}$, which by \ref{prop:LTO:Monotonicity} implies that $\underline{T} \indic{B_{k-1}} \leq \underline{T} \indic{B_{k}}$.
Therefore, for all $x\in B_{k}$, since the induction hypothesis implies that $[\underline{T} \indic{B_{k-1}}](x) > 0$, we find $[\underline{T} \indic{B_{k}}](x) \geq [\underline{T} \indic{B_{k-1}}](x) > 0$.
Hence,
\begin{align*}
B_{k+1}
&= B_{k} \cup \{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \} \\
&= \{ x \in B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \} \cup \{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \} \\
&= \{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{k}}](x) > 0 \}. \qedhere
\end{align*} \end{proof}
The observant reader might have noticed that our definitions of top class regularity and top class absorption differ slightly from those in \cite{2012Hermans}, but they are actually entirely equivalent. For top class regularity, we demand that there is some $n\in\mathbb{N}$ such that $\overline{T}^n \indic{x} > 0$. By \ref{prop:LTO:BoundedByMinAndMax}, for any $k \geq n$ it then holds that $\smash{\overline{T}^k \indic{x} > 0}$, which is what \citet{2012Hermans} demand. For top class absorption, \citet{2012Hermans} demand that \[
(\forall y \in \statespacesub{PA}^{c})(\exists n\in\mathbb{N})~[\overline{T}^n \indic{\statespacesub{PA}^{c}}](y) < 1, \] where $\statespacesub{PA}^{c}\coloneqq\mathcal{X}\setminus\statespacesub{PA}$. Note that $[\overline{T}^n \indic{\statespacesub{PA}^{c}}](y) = 1 - [\underline{T}^n \indic{\statespacesub{PA}}](y)$, such that their demand is equivalent to our demand
\[
(\forall y \in \statespacesub{PA}^{c})(\exists n\in\mathbb{N})~[\underline{T}^n \indic{\statespacesub{PA}}](y) > 0. \] By Lemma~\ref{lem:LowTranOp:TopClassRegularSpecialValues}, for all $n\in\mathbb{N}$ and all $y\in\statespacesub{PA}$, $[\underline{T}^n \indic{\statespacesub{PA}}](y) > 0$, such that we could actually demand that \[
(\forall y \in \mathcal{X})(\exists n\in\mathbb{N})~[\underline{T}^n \indic{\statespacesub{PA}}](y) > 0. \]
\subsection{Qualitatively characterising ergodicity of lower transition rate operators} We now turn to the ergodicity of imprecise continuous-time Markov chains. A first and thorough study of the qualitative aspects concerning ergodicity was conducted by \citet{2017DeBock}. We only recall the definitions and results from \citep{2017DeBock} that will be relevant to us in the remainder.
\begin{definition} \label{def:LowRateOp:UpperReachable}
A state $x\in\mathcal{X}$ is upper reachable from the state $y\in\mathcal{X}$, denoted by $y \upreachc x$, if (i) $x = y$, or (ii) there is some sequence $y=x_0, \dots, x_{n}=x$ in $\mathcal{X}$ of length $n+1 \geq 2$ such that for all $k\in\{1,\dots,n\}$, $[\overline{Q} \indic{x_{k}}](x_{k-1}) > 0$. \end{definition} Note that a state $x$ is always upper reachable from itself! Rather remarkably, this definition of upper reachability is strikingly similar to the alternative condition of Lemma~\ref{lem:LowTranOp:PossiblyAccesibleSequence} for possible accessibility. The links between these two definitions will be made more explicit later.
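Upper reachability is likewise a plain graph-reachability question on the sign structure of $\overline{Q}$. The following sketch assumes, for illustration only, that the upper rate operator is the upper envelope of a finite set of (made-up) transition rate matrices, so that for $y \neq x$, $[\overline{Q} \indic{x}](y) = \max_i Q_i[y, x]$.

```python
import numpy as np

# Illustration only: we assume the upper rate operator is the upper envelope
# of a finite (made-up) set of transition rate matrices, so that for y != x,
# [Qbar I_x](y) = max_i Q_i[y, x].
rates = np.stack([
    [[-1.0, 1.0, 0.0],
     [ 0.0, 0.0, 0.0],
     [ 0.5, 0.5, -1.0]],
    [[-2.0, 0.0, 2.0],
     [ 0.0, 0.0, 0.0],
     [ 1.0, 0.0, -1.0]],
])
n_states = rates.shape[1]
# Edge (u, v) iff some rate matrix has a positive (u, v) entry; diagonal
# entries of a rate matrix are non-positive, so only u != v can qualify.
edge = np.any(rates > 0, axis=0)

def upper_reachable(y, x):
    """Decide y ~> x as in the definition of upper reachability: x == y, or a
    path in the one-step sign digraph of Qbar (breadth-first search from y)."""
    seen, frontier = {y}, [y]
    while frontier:
        u = frontier.pop()
        for v in range(n_states):
            if v != u and edge[u, v] and v not in seen:
                seen.add(v)
                frontier.append(v)
    return x in seen
```

In this made-up example, state $1$ has no positive outgoing rates in either matrix, so nothing but $1$ itself is upper reachable from it.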
\begin{lemma} \label{lem:LowRateOp:UpperReachableShorterSequence}
Let $\underline{Q}$ be a lower rate operator, and $x,y\in\mathcal{X}$ such that $x \neq y$.
Then $x$ is upper reachable from $y$ if and only if there is some sequence $y=x_0, \dots, x_{n}=x$ in $\mathcal{X}$ in which every state occurs at most once and for all $k\in\{1,\dots,n\}$, $[\overline{Q} \indic{x_{k}}](x_{k-1}) > 0$.
Consequently, $n < \card{\mathcal{X}}$. \end{lemma} \begin{proof}
We first prove the forward implication.
Assume that $y \upreachc x$, then by Definition~\ref{def:LowRateOp:UpperReachable} there is some sequence $y=x_0, \dots, x_{n} = x$ in $\mathcal{X}$ such that for all $k\in\{1,\dots,n\}$, $[\overline{Q} \indic{x_{k}}](x_{k-1}) > 0$.
Assume that there is a state $z \in \mathcal{X}$ that occurs more than once in this sequence.
Then we can simply delete every element of the sequence from right after the first occurrence of $z$ up to and including the last occurrence of $z$, and still have a valid sequence.
If we continue this way, then we end up with a sequence in which every state occurs at most once.
As every state occurs at most once, the length $n+1$ of the sequence is at most $\card{\mathcal{X}}$.
Consequently, $n < \card{\mathcal{X}}$.
The reverse implication follows from the fact that the requirements of Definition~\ref{def:LowRateOp:UpperReachable} are trivially satisfied. \end{proof}
\begin{lemma} \label{lem:LowRateOp:PathOfArbitraryLength}
Let $\underline{Q}$ be a lower transition rate operator, and $x,y \in \mathcal{X}$ such that $y \upreachc x$.
Then there is an integer $n < \card{\mathcal{X}}$ such that for all $k \geq n$ and all $\delta_1, \dots, \delta_k \in \reals_{> 0}$ such that $\delta_{i} \norm{\underline{Q}} < 2$ for all $i \in \{ 1, \dots, k \}$, there is a sequence $y = x_0, \dots, x_{k} = x$ in $\mathcal{X}$ such that $[(I + \delta_{i} \overline{Q}) \indic{x_{i}}](x_{i-1}) > 0$ for all $i \in \{ 1, \dots, k \}$. \end{lemma} \begin{proof}
We first consider the special case $x = y$.
For all $\delta \in \reals_{> 0}$ such that $\delta \norm{\underline{Q}} < 2$,
\[
[(I + \delta \overline{Q}) \indic{x}](x)
= \indic{x}(x) + \delta [\overline{Q} \indic{x}](x)
= 1 + \delta [\overline{Q} \indic{x}](x)
> 0,
\]
where the inequality follows from \ref{prop:LTRO:Ixx}.
Therefore, for all $k \in \mathbb{N}$ and all $\delta_1, \dots, \delta_k \in \reals_{> 0}$ such that for all $i \in \{ 1, \dots, k \}$, $\delta_{i} \norm{\underline{Q}} < 2$, we find that $[(I + \delta_{i} \overline{Q})\indic{x}](x) > 0$ for all $i \in \{ 1, \dots, k \}$.
Next, we consider the case $y \neq x$.
From Lemma~\ref{lem:LowRateOp:UpperReachableShorterSequence} we know that there is a sequence $S_y \coloneqq (y = x_0, \dots, x_{n} = x)$ in $\mathcal{X}$ such that every state occurs at most once---i.e. $n < \card{\mathcal{X}}$---and for all $i \in \{ 1,\dots,n \}$, $[\overline{Q} \indic{x_{i}}](x_{i-1}) > 0$.
We fix an arbitrary $k \geq n$ and an arbitrary sequence $\delta_1, \dots, \delta_{k}$ in $\reals_{> 0}$ such that for all $i \in \{ 1, \dots, k \}$, $\delta_{i} \norm{\underline{Q}} < 2$.
Note that for all $i \in \{ 1, \dots, n \}$,
\[
0 < \delta_i [\overline{Q} \indic{x_{i}}](x_{i-1})
= \indic{x_{i}}(x_{i-1}) + \delta_{i} [\overline{Q} \indic{x_{i}}](x_{i-1})
= [(I + \delta_i \overline{Q}) \indic{x_{i}}](x_{i-1}),
\]
where the inequality follows from $0 < \delta_i$ and the first equality is true because---by construction---$x_{i} \neq x_{i-1}$.
Also, from the special case above we know that for all $i \in \{ n+1, \dots, k \}$, $[(I + \delta_{i} \overline{Q})\indic{x}](x) > 0$.
Hence, appending the sequence $S_y$ with $(k - n)$ times $x$ yields a sequence $y = x_0, \dots, x_{k} = x$ in $\mathcal{X}$ such that for all $i \in \{ 1, \dots, k \}$, $[(I + \delta_{i} \overline{Q}) \indic{x_{i}}](x_{i-1}) > 0$. \end{proof}
\begin{definition} \label{def:LowRateOp:LowerReachable}
A (non-empty) set of states $A \subseteq \mathcal{X}$ is lower reachable from the state $x$, denoted by $x \lowreachc A$, if $x \in B_n$, where $\{ B_k \}_{k \in \nats_{0}}$ is the sequence that is defined by the initial condition $B_0 \coloneqq A$ and for all $k \in \nats_{0}$ by the recursive relation
\[
B_{k+1}
\coloneqq B_{k} \cup \left\{ y \in \mathcal{X} \setminus B_{k} \colon [\underline{Q} \indic{B_{k}}](y) > 0 \right\},
\]
and $n \leq \card{\mathcal{X} \setminus A}$ is the first index for which $B_{n} = B_{n+1}$. \end{definition} Again, remark the striking similarity between Definition~\ref{def:LowRateOp:LowerReachable} and Proposition~\ref{prop:LowTranOp:TopClassAbsorbingWithRecursion}.
\begin{definition} \label{def:LowRateOp:RegularlyAbsorbing}
A lower transition rate operator $\underline{Q}$ is \emph{regularly absorbing} if it is (i) \emph{top class regular}, i.e.
\[
\statespacesub{R}
\coloneqq \left\{ x \in \mathcal{X} \colon (\forall y \in \mathcal{X})~y \upreachc x \right\} \neq \emptyset,
\]
and (ii) \emph{top class absorbing}, i.e.
\[
(\forall y \in \mathcal{X}\setminus\statespacesub{R})~y \lowreachc \statespacesub{R}.
\] \end{definition}
\begin{theorem}[Theorem~19 in \citep{2017DeBock}] \label{the:ContinuousErgodicity:NecessaryAndSufficient}
A lower transition rate operator $\underline{Q}$ is ergodic if and only if it is regularly absorbing. \end{theorem}
Not surprisingly, these necessary and sufficient conditions for the ergodicity of lower transition rate operators are rather similar to the necessary and sufficient conditions for ergodicity of lower transition operators given in Proposition~\ref{prop:DiscreteErgodicity:NecAndSuff}.
\section{Extra material and proofs for Section~\ref{sec:ergodicity}} \label{app:ergodicity} Before we give any proofs, we first define the coefficient of ergodicity of an upper transition operator $\overline{T}$: \begin{equation} \label{eqn:CoeffOfErgod:UpTranOpDefinition}
\coefferga{\overline{T}}
\coloneqq \max \{ \norm{\overline{T} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \}. \end{equation} \begin{proposition} \label{prop:CoeffOfErgod:Properties}
Let $\underline{T}$ and $\underline{S}$ be lower transition operators.
For any $f \in \setoffna$,
\begin{enumerate}[twocol, label=C\arabic*:, ref=(C\arabic*), series=CoeffErg]
\item \label{prop:CoeffOfErgod:Bounds}
$0 \leq \coefferga{\underline{T}} \leq 1$,
\item \label{prop:CoeffOfErgod:BoundOnNormTf}
$\norm{\underline{T} f}_{v} \leq \coefferga{\underline{T}} \norm{f}_{v}$,
\item \label{prop:CoeffOfErgod:LowerEqualsUpper}
$\coefferga{\overline{T}} = \coefferga{\underline{T}}$,
\item \label{prop:CoeffOfErgod:Composition}
$\coefferga{\underline{T} \, \underline{S}} \leq \coefferga{\underline{T}} \coefferga{\underline{S}}$.
\end{enumerate} \end{proposition} \begin{proof}
\begin{enumerate}[label=C\arabic*:]
\item
Follows immediately from \ref{prop:LTO:BoundedByMinAndMax}.
\item
If $\norm{f}_{v} = 0$, then by \ref{prop:LTO:BoundedByMinAndMax} $\norm{\underline{T} f}_{v} = 0$, such that the statement holds.
Therefore, we now assume---without loss of generality---that $\norm{f}_{v} > 0$.
Note that $0 \leq (f - \min{f})/\norm{f}_{v} \leq 1$.
Combining this with---in that order---\ref{prop:norm:VarAddConstant}, \ref{prop:LTO:AdditionOfConstant}, \ref{def:LTO:NonNegativelyHom}, \ref{def:Norm:ScalarMult} and Eqn.~\eqref{eqn:CoeffOfErgod}, we find that
\begin{align*}
\norm{\underline{T} f}_{v}
&= \norm{\underline{T} f - \min{f}}_{v}
= \norm{\underline{T} (f - \min{f})}_{v}
= \norm{\norm{f}_{v} \underline{T} \left(\frac{f - \min{f}}{\norm{f}_{v}}\right)}_{v} \\
&= \norm{\underline{T} \left(\frac{f - \min{f}}{\norm{f}_{v}}\right) }_{v} \norm{f}_{v} \\
&\leq \coefferga{\underline{T}} \norm{f}_{v}.
\end{align*}
\item
By Eqn.~\eqref{eqn:CoeffOfErgod},
\begin{align*}
\coefferga{\underline{T}}
&= \max \left\{ \norm{\underline{T} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&= \max \left\{ \norm{1 - \underline{T} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&= \max \left\{ \norm{1 + \overline{T} (-f)}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&= \max \left\{ \norm{\overline{T} (1 - f)}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&= \max \left\{ \norm{\overline{T} g}_{v} \colon g \in \setoffna, 0 \leq g \leq 1 \right\} \\
&= \coefferga{\overline{T}},
\end{align*}
where the second equality follows from \ref{prop:norm:VarAddConstant}, the third equality follows from the conjugacy of $\underline{T}$ and $\overline{T}$, the fourth equality follows from \ref{prop:LTO:AdditionOfConstant}, the fifth equality follows from the fact that $0 \leq f \leq 1$ if and only if $0 \leq 1 - f \leq 1$, and the final equality follows from Eqn.~\eqref{eqn:CoeffOfErgod:UpTranOpDefinition}.
\item
By Eqn.~\eqref{eqn:CoeffOfErgod} and \ref{prop:CoeffOfErgod:BoundOnNormTf},
\begin{align*}
\coefferga{\underline{T} \, \underline{S}}
&= \max \{ \norm{\underline{T} \, \underline{S} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&\leq \max \{ \coefferga{\underline{T}} \norm{\underline{S} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&= \coefferga{\underline{T}} \max \{ \norm{\underline{S} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \} = \coefferga{\underline{T}} \coefferga{\underline{S}}. \qedhere
\end{align*}
\end{enumerate} \end{proof}
Theorem~21 in \cite{2013Skulj} highlights the usefulness of the coefficient of ergodicity. \begin{theorem}[Theorem~21 in \citep{2013Skulj}] \label{the:CoeffOfErg:StrictlySmallerThanOneIsNecAndSuffForErg}
A lower transition operator $\underline{T}$ is ergodic if and only if there is some $k\in\mathbb{N}$ such that $\coefferga{\underline{T}^k} < 1$. \end{theorem}
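For a precise transition operator, i.e. a single transition matrix $T$, the coefficient of Eqn.~\eqref{eqn:CoeffOfErgod:UpTranOpDefinition} reduces to the classical Dobrushin coefficient $\max_{x,y} \frac{1}{2} \sum_{z} \lvert T(x,z) - T(y,z) \rvert$. The sketch below (with an illustrative matrix of our own making) checks this closed form against a brute-force maximisation over the extreme points $f \in \{0,1\}^{\mathcal{X}}$, which suffices because $f \mapsto \norm{T f}_{v}$ is convex.

```python
import itertools
import numpy as np

def coeff_brute(T):
    """max { ||T f||_v : 0 <= f <= 1 }, evaluated at the 0/1 extreme points
    only; this suffices because f -> ||T f||_v is convex in f."""
    n = T.shape[0]
    best = 0.0
    for bits in itertools.product([0.0, 1.0], repeat=n):
        g = T @ np.array(bits)
        best = max(best, g.max() - g.min())
    return best

def coeff_dobrushin(T):
    """Classical closed form: max_{x,y} (1/2) * sum_z |T[x, z] - T[y, z]|."""
    n = T.shape[0]
    return max(0.5 * np.abs(T[x] - T[y]).sum()
               for x in range(n) for y in range(n))

# An illustrative 3-state transition matrix.
T = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
assert abs(coeff_brute(T) - coeff_dobrushin(T)) < 1e-9
```

The sub-multiplicativity of \ref{prop:CoeffOfErgod:Composition} can be observed numerically as well: for this matrix, the coefficient of $T^2$ does not exceed the square of the coefficient of $T$.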
\begin{proposition} \label{prop:CoeffOfErgod:AlternativeFunctions}
Let $\underline{T}$ be a lower transition operator.
Then
\begin{align}
\coefferga{\underline{T}}
&= \max \left\{ \norm{\underline{T} f}_{v} \colon f\in\setoffna, \max f = 1, \min f = 0 \right\} \label{eqn:CoeffOfErgod:WithMax} \\
&= \max \left\{ \norm{\underline{T} f}_{c} \colon f\in\setoffna, -1 \leq f \leq 1 \right\} \label{eqn:CoeffOfErgod:WithCentered} \\
&= \max \left\{ \norm{\underline{T} f}_{c} \colon f\in\setoffna, \max f = 1, \min f = -1 \right\}. \label{eqn:CoeffOfErgod:WithCenteredAndMax}
\end{align} \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:CoeffOfErgod:AlternativeFunctions}]
Because of Eqn.~\eqref{eqn:CoeffOfErgod}, there is some $g\in\setoffna$ such that $0 \leq g \leq 1$ and $\norm{\underline{T} g}_{v} = \coefferga{\underline{T}}$.
By \ref{prop:CoeffOfErgod:BoundOnNormTf}, $\norm{\underline{T} g}_{v} \leq \coefferga{\underline{T}} \norm{g}_{v}$, such that $\norm{g}_{v} = 1$, or equivalently $\max g = 1$ and $\min g = 0$; here we may assume without loss of generality that $\coefferga{\underline{T}} > 0$, as otherwise the stated equalities hold trivially.
Hence, it follows from Eqn.~\eqref{eqn:CoeffOfErgod} that
\begin{align*}
\coefferga{\underline{T}}
&= \max \{ \norm{\underline{T} f}_{v} \colon f \in \setoffna, \max f = 1, \min f = 0 \}.
\intertext{Next, manipulating Eqn.~\eqref{eqn:CoeffOfErgod} yields}
\coefferga{\underline{T}}
&= \max \{ \norm{\underline{T} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&= \max \left\{ \norm{\underline{T} \left(f - \frac{1}{2}\right)}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&= \max \left\{ \frac{2}{2} \norm{\underline{T} \left(f - \frac{1}{2}\right)}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&= \max \left\{ \frac{1}{2} \norm{\underline{T} \left(2 f - 1\right)}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \right \} \\
&= \max \left\{ \norm{\underline{T} \left(2 f - 1 \right)}_{c} \colon f \in \setoffna, 0 \leq f \leq 1 \right\},
\intertext{
where the second equality follows from \ref{prop:norm:VarAddConstant} and \ref{prop:LTO:AdditionOfConstant}, the fourth equality follows from \ref{def:Norm:ScalarMult} and \ref{def:LTO:NonNegativelyHom}, and the final equality follows from Eqn.~\eqref{eqn:CentredNorm}.
Note that for all $f\in\setoffna$, $0 \leq f \leq 1$ is equivalent to $-1 \leq (2f - 1) \leq 1$.
Hence,
}
\coefferga{\underline{T}}
&= \max \{ \norm{\underline{T} f}_{c} \colon f \in \setoffna, -1 \leq f \leq 1 \}.
\end{align*}
The proof of the final equality of the statement is now similar to that of the first. \end{proof}
The following lemma is a more general version of Lemma~\ref{lem:LowTranOp:PossiblyAccesibleSequence}. \begin{lemma} \label{lem:LowTranOp:SignOfCompositionWithIndicator}
Let $k \in \mathbb{N}$ and $x,y\in \mathcal{X}$.
For arbitrary upper transition operators $\uptranopa{1}, \dots, \uptranopa{k}$, we define $\uptranopa{1:k} \coloneqq \uptranopa{k} \cdots \uptranopa{1}$.
Then
\[
[\uptranopa{1:k} \indic{x}](y) \geq [\uptranopa{1} \indic{z_{1}}](z_{2}) \cdots [\uptranopa{k} \indic{z_{k}}](z_{k+1}),
\]
for any sequence $y = z_{k+1}, \dots, z_{1} = x$ in $\mathcal{X}$.
Furthermore, $[\uptranopa{1:k} \indic{x}](y) > 0$ if and only if there is some sequence $y = z_{k+1}, \dots, z_{1} = x$ in $\mathcal{X}$ such that for all $i \in \{ 1, \dots, k \}$, $[\uptranopa{i} \indic{z_{i}}](z_{i+1}) > 0$. \end{lemma} \begin{proof}
This proof is a straightforward generalisation of the proof of Proposition~4 in \citep{2012Hermans}.
Fix some $k \in \mathbb{N}$, some $x, y \in \mathcal{X}$ and some arbitrary upper transition operators $\uptranopa{1}, \dots, \uptranopa{k}$.
We also define $\uptranopa{1:k} \coloneqq \uptranopa{k} \cdots \uptranopa{1}$, and note that by \ref{prop:LTO:CompositionIsAlsoLTO} this is also an upper transition operator.
To prove the first part of the statement, we note that for all $i \in \{ 1, \dots, k \}$ and all $z_{i}, z_{i+1} \in \mathcal{X}$,
\[
\uptranopa{i} \indic{z_{i}}
= \sum_{z \in \mathcal{X}} [\uptranopa{i} \indic{z_{i}}](z) \indic{z}
\geq [\uptranopa{i} \indic{z_{i}}](z_{i+1}) \indic{z_{i+1}},
\]
where the inequality is allowed because by \ref{prop:LTO:BoundedByMinAndMax} the sum contains only non-negative terms.
We fix any $z_{2} \in \mathcal{X}$, and use \ref{prop:LTO:Monotonicity} and this inequality to yield
\begin{align*}
\uptranopa{1:k} \indic{x}
&= \uptranopa{2:k} \uptranopa{1} \indic{x}
\geq \uptranopa{2:k} \left([\uptranopa{1} \indic{x}](z_{2}) \indic{z_{2}}\right)
= [\uptranopa{1} \indic{x}](z_{2}) \uptranopa{2:k} \indic{z_{2}},
\end{align*}
where $\uptranopa{2:k} \coloneqq \uptranopa{k} \cdots \uptranopa{2}$---which by \ref{prop:LTO:CompositionIsAlsoLTO} is also an upper transition operator---and the final equality follows from \ref{def:LTO:NonNegativelyHom} and \ref{prop:LTO:BoundedByMinAndMax}.
Repeated application of the same reasoning yields
\[
[\uptranopa{1:k} \indic{x}](y)
\geq [\uptranopa{1} \indic{z_1}](z_{2}) \cdots [\uptranopa{k} \indic{z_{k}}](z_{k+1}),
\]
where $z_{k+1} \coloneqq y$, $z_{1} \coloneqq x$, and $z_{2}, \dots, z_{k}$ are arbitrary elements of $\mathcal{X}$.
This proves the first part of the statement.
The reverse implication of the second part of the statement follows immediately from the first part.
We therefore only need to prove that the forward implication holds as well.
To that end, we first note that
\[
[\uptranopa{1:k} \indic{x}](y) = \left[\uptranopa{2:k} \left( \sum_{z_{2} \in \mathcal{X}} [ \uptranopa{1} \indic{x}](z_{2}) \indic{z_{2}} \right) \right](y)
\leq \sum_{z_{2} \in \mathcal{X}} [\uptranopa{1} \indic{x}](z_{2}) [\uptranopa{2:k} \indic{z_{2}}](y),
\]
where $\uptranopa{2:k} \coloneqq \uptranopa{k} \cdots \uptranopa{2}$ and the inequality follows from \ref{def:LTO:SuperAdditive} and \ref{def:LTO:NonNegativelyHom}.
Repeating this same reasoning another $(k-2)$ times yields
\[
[\uptranopa{1:k} \indic{x}](y) \leq \sum_{z_{2} \in \mathcal{X}} \sum_{z_{3} \in \mathcal{X}} \cdots \sum_{z_{k} \in \mathcal{X}} [\uptranopa{1} \indic{x}](z_{2}) [\uptranopa{2} \indic{z_{2}}](z_{3}) \cdots [\uptranopa{k} \indic{z_{k}}](y).
\]
If now $[\uptranopa{1:k} \indic{x}](y) > 0$, then---because all terms are non-negative due to \ref{prop:LTO:BoundedByMinAndMax}---at least one of the terms of the sum on the right hand side has to be strictly positive.
Therefore, $[\uptranopa{1:k} \indic{x}](y) > 0$ implies that there is at least one sequence $y = z_{k+1}, \dots, z_{1} = x$ in $\mathcal{X}$ such that for all $i \in \{ 1, \dots, k\}$, $[\uptranopa{i} \indic{z_{i}}](z_{i+1}) > 0$.
\end{proof}
\begin{lemma} \label{lem:LowTranOp:SignOfCompositionWithEvent}
Let $k \in \mathbb{N}$ and $A \subseteq \mathcal{X}$.
For all arbitrary lower transition operators $\lowtranopa{1}, \dots, \lowtranopa{k}$, we define $\lowtranopa{1:k} \coloneqq \lowtranopa{k} \cdots \lowtranopa{1}$.
Then
\[
c_1 \cdots c_k \indic{A_k} \leq \lowtranopa{1:k} \indic{A} \leq \indic{A_k}.
\]
In this expression, $A_k \subseteq \mathcal{X}$ is derived from the initial condition $A_{0} \coloneqq A$ and, for all $i \in \{ 1, \dots, k \}$, from the recursive relation
\[
A_{i}
\coloneqq \{ x \in \mathcal{X} \colon [\lowtranopa{i} \indic{A_{i-1}}](x) > 0 \}.
\]
The non-negative real numbers $c_1, \dots, c_{k}$ are defined as
\[
c_{i}
\coloneqq \min \left\{ [\lowtranopa{i} \indic{A_{i-1}}](x) \colon x \in A_{i} \right\} \text{ for all } i \in \{ 1, \dots, k \},
\]
with the convention that the minimum of an empty set is zero.
Also, $A_k = \emptyset$ if and only if $c_i = 0$ for some $i \in \{ 1, \dots, k \}$. \end{lemma} \begin{proof}
Let $\underline{T}$ be an arbitrary lower transition operator, and fix an arbitrary $A \subseteq \mathcal{X}$.
We define the set $A' \coloneqq \{ x \in \mathcal{X} \colon [\underline{T} \indic{A}](x) > 0 \}$.
On the one hand, from \ref{prop:LTO:BoundedByMinAndMax} it follows that $\underline{T} \indic{A} \leq \indic{A'}$.
On the other hand, $\underline{T} \indic{A} \geq c \indic{A'}$, where we let
\[
c
\coloneqq \min \left\{ [\underline{T} \indic{A}](x) \colon x \in A' \right\},
\]
with the convention that the minimum of an empty set is zero.
Note that by \ref{prop:LTO:BoundedByMinAndMax}, $0 \leq c \leq 1$.
Combining these two inequalities yields $c \indic{A'} \leq \underline{T} \indic{A} \leq \indic{A'}$.
Proving the first part of the statement is now fairly trivial; we simply need to apply both inequalities and \ref{prop:LTO:Monotonicity} $k$ times.
To prove the second part of the statement, we first observe that $c_i = 0$ is equivalent to $A_{i} = \emptyset$: if $A_{i} \neq \emptyset$ then, by the definition of $A_{i}$, $[\lowtranopa{i} \indic{A_{i-1}}](x) > 0$ for all $x \in A_{i}$ and hence $c_i > 0$, while if $A_{i} = \emptyset$ then $c_i = 0$ by convention.
Assume therefore that there is some $i \in \{ 1, \dots, k \}$ for which $c_i = 0$, or equivalently, $A_{i} = \emptyset$.
If $i = k$, then obviously $A_k = \emptyset$ and the statement holds.
We therefore assume that $i < k$, and observe that by \ref{prop:LTO:BoundedByMinAndMax}, $\lowtranopa{i+1} \indic{A_{i}} = \lowtranopa{i+1} \indic{\emptyset} = 0$, and therefore $A_{i+1} = \emptyset$.
Repeating the same reasoning, we find that $A_{j} = \emptyset$ and $c_{j} = 0$ for all $j \in \{ i, \dots, k \}$, which proves the statement. \end{proof}
The following lemma is an alternate, slightly extended version of Proposition~\ref{prop:LowTranOp:TopClassAbsorbingWithRecursion}. \begin{lemma} \label{lem:LowTranOp:StrongerNecAndSuffTopClassAbsorption}
Let $\underline{T}$ be a top class regular lower transition operator.
Then $\underline{T}$ is top class absorbing if and only if $B_{n} = \mathcal{X}$, where $\{ B_{i} \}_{i \in \nats_{0}}$ is the sequence defined by the initial condition $B_{0} \coloneqq \statespacesub{PA}$ and the recursive relation
\[
B_{i}
= B_{i-1} \cup \left\{ x \in \mathcal{X} \setminus B_{i-1} \colon [\underline{T} \indic{B_{i-1}}](x) > 0 \right\} ~~\text{for all}~i \in \mathbb{N},
\]
and where $n \leq \card{\mathcal{X} \setminus \statespacesub{PA}}$ is the first index such that $B_{n} = B_{n+1}$.
Alternatively, $\underline{T}$ is top class absorbing if and only if there is some $m \in \nats_{0}$ such that $\underline{T}^{m} \indic{\statespacesub{PA}} > 0$, and in this case $n$ is the lowest such $m$. \end{lemma}
\begin{proof}
We first prove the forward implication.
By Proposition~\ref{prop:LowTranOp:TopClassAbsorbingWithRecursion}, if $\underline{T}$ is top class absorbing then $B_{n} = \mathcal{X}$, where the sequence $\{B_{i}\}_{i \in \nats_{0}}$ is defined from the initial condition $B_{0} \coloneqq \statespacesub{PA}$ and, for all $i \in \mathbb{N}$, from the recursive relation
\[
B_{i} \coloneqq B_{i-1} \cup \left\{ x \in \mathcal{X} \setminus B_{i-1} \colon [\underline{T} \indic{B_{i-1}}](x) > 0 \right\} = \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{i-1}}](x) > 0 \right\},
\]
and where $n \leq \card{\mathcal{X} \setminus \statespacesub{PA}}$ is the first index such that $B_{n} = B_{n+1}$.
We can immediately verify that $\statespacesub{PA} = B_{0} \subseteq B_{1} \subseteq \cdots \subseteq B_{n} = \mathcal{X}$ and $B_{i} \setminus B_{i-1} \neq \emptyset$ for all $i \in \{ 1, \dots, n \}$.
Observe that the sequence $B_{0}, \dots, B_{n}$ satisfies the conditions of Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent}, such that for all $i \in \{ 1, \dots, n \}$,
\[
c_{1} \cdots c_{i} \indic{B_{i}} \leq \underline{T}^{i} \indic{\statespacesub{PA}} \leq \indic{B_{i}},
\]
where $c_1, \dots, c_{n}$ are strictly positive real numbers because $B_{1}, \dots, B_{n}$ are all non-empty.
From this we infer that $\min \underline{T}^{i} \indic{\statespacesub{PA}} > 0$ if and only if $B_{i} = \mathcal{X}$.
As $B_{n} = \mathcal{X}$ and $B_{0}, \dots, B_{n-1} \neq B_{n}$, this confirms that indeed $\underline{T}^{n} \indic{\statespacesub{PA}} > 0$ and that $n$ is the lowest non-negative natural number for which this holds.
Next, we prove the reverse implication.
Let $B_{0}, \dots, B_{n}$ and $n$ be defined as in the statement.
From the definition, it is obvious that $B_{i-1} \subseteq B_{i}$ for all $i \in \mathbb{N}$.
Also, if $n$ is the first index such that $B_{n} = B_{n+1}$, then $B_{i-1} \neq B_{i}$ for all $i \in \{ 1, \dots, n \}$ and $B_{n} = B_{n+i}$ for all $i \in \mathbb{N}$.
From $B_{0} = \statespacesub{PA}$ and $B_{i} \setminus B_{i-1} \neq \emptyset$ for all $i \in \{ 1, \dots, n \}$, we infer that indeed $n \leq \card{\mathcal{X} \setminus \statespacesub{PA}}$.
If $B_{n} = \mathcal{X}$, then the sequence $B_0, \dots, B_{n}$ satisfies the conditions of Proposition~\ref{prop:LowTranOp:TopClassAbsorbingWithRecursion}, such that $\underline{T}$ is indeed top class absorbing.
It remains to establish the alternative characterisation, so let $\{ B_{i} \}_{i \in \nats_{0}}$ again be the sequence defined in the statement.
Similar to what we did in the proof of Proposition~\ref{prop:LowTranOp:TopClassAbsorbingWithRecursion}, we now verify using induction that
\[
B_{i} = \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{i-1}}](x) > 0 \right\} ~\text{for all}~i \in\mathbb{N}.
\]
We first consider the case $i = 1$.
By Lemma~\ref{lem:LowTranOp:TopClassRegularSpecialValues}, we know that $[\underline{T} \indic{\statespacesub{PA}}](x) > 0$ for all $x \in \statespacesub{PA}$.
Hence,
\begin{align*}
B_{1}
&= B_{0} \cup \left\{ x \in \mathcal{X} \setminus B_{0} \colon [\underline{T} \indic{B_{0}}](x) > 0 \right\} \\
&= \left\{ x \in B_{0} \colon [\underline{T} \indic{B_{0}}](x) > 0 \right\} \cup \left\{ x \in \mathcal{X} \setminus B_{0} \colon [\underline{T} \indic{B_{0}}](x) > 0 \right\} \\
&= \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{0}}](x) > 0 \right\},
\end{align*}
where the second equality follows from the initial condition $B_{0} = \statespacesub{PA}$.
Fix some $k \in \mathbb{N}$, and assume that the alternative definition holds for all $i \leq k$.
We now argue that, in that case, it also holds for $i = k+1$.
By the induction hypothesis, $B_{k}$ contains all $x \in \mathcal{X}$ for which $[\underline{T} \indic{B_{k-1}}](x) > 0$.
Also, it holds by definition that $B_{k-1} \subseteq B_{k}$.
Using \ref{prop:LTO:Monotonicity}, we infer from $\indic{B_{k}} \geq \indic{B_{k-1}}$ that $[\underline{T} \indic{B_{k}}](x) \geq [\underline{T} \indic{B_{k-1}}](x) > 0$ for all $x\in B_{k}$.
Hence,
\begin{align*}
B_{k+1}
&= B_{k} \cup \left\{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \right\} \\
&= \left\{ x \in B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \right\} \cup \left\{ x \in \mathcal{X} \setminus B_{k} \colon [\underline{T} \indic{B_{k}}](x) > 0 \right\} \\
&= \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{k}}](x) > 0 \right\}.
\end{align*}
Now that we know that
\[
B_{i} = \left\{ x \in \mathcal{X} \colon [\underline{T} \indic{B_{i-1}}](x) > 0 \right\} ~\text{for all}~i \in \mathbb{N},
\]
we observe that this equivalent definition of the sequence satisfies the conditions of the sequence in Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent}.
Moreover, as $\emptyset \neq B_{0} \subseteq B_{1} \subseteq \dots$, it follows from the second part of Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent} that $c_{i} > 0$ for all $i \in \mathbb{N}$.
Also from Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent}, we know that
\[
c_1 \cdots c_{i} \indic{B_{i}} \leq \underline{T}^{i} \indic{\statespacesub{PA}} \leq \indic{B_{i}} ~\text{for all}~ i \in \mathbb{N}.
\]
Assume now that there is some $m \in \nats_{0}$ such that $\underline{T}^{m} \indic{\statespacesub{PA}} > 0$, and let $n$ be the lowest such $m$.
Then for all $y \in \mathcal{X} \setminus \statespacesub{PA}$, $[\underline{T}^{n} \indic{\statespacesub{PA}}](y) > 0$, such that the second condition of Definition~\ref{def:LowTranOp:RegularlyAbsorbing} is satisfied and $\underline{T}$ is indeed top class absorbing.
If $n = 0$, then $\statespacesub{PA} = \mathcal{X} = B_{0}$, and $n$ is indeed the first index for which $B_{n} = B_{n+1}$.
If $n > 0$, then from the strict positivity of $c_{1}, \dots, c_{n}$ and the lower and upper bound for $\underline{T}^{i} \indic{\statespacesub{PA}}$ we infer that $B_{1}, \dots, B_{n-1} \neq \mathcal{X}$ and $B_{n} = \mathcal{X}$.
We deduce from the recursive relation between $B_{0}, \dots, B_{n}, B_{n+1}$ that $n$ is indeed the first index for which $B_{n} = B_{n+1}$, which finalises this proof. \end{proof}
\begin{proof}[Proof of Theorem~\ref{the:ContinuousErgodicity:CoefficientOfErgodicityOfApproximation}]
We first prove the forward implication.
To this end, we let $\underline{Q}$ be an ergodic lower transition rate operator, and $n \coloneqq \card{\mathcal{X}} - 1$---we ignore the case $\card{\mathcal{X}} = 1$, as this case is trivially ergodic.
We furthermore fix some $k \geq n$ and some $\delta_1, \dots, \delta_k$ in $\reals_{> 0}$ such that for all $i \in \{ 1, \dots, k \}$, $\delta_{i} \norm{\underline{Q}} < 2$.
For all $i \in \{ 1, \dots, k\}$, we define $\lowtranopa{i} \coloneqq (I + \delta_{i} \underline{Q})$.
By Proposition~\ref{prop:IPlusDeltaQLowTranOp}, the operators $\lowtranopa{1}, \dots, \lowtranopa{k}$ are lower transition operators, such that by \ref{prop:LTO:CompositionIsAlsoLTO} their composition $\lowtranopa{1:k} \coloneqq \lowtranopa{k} \cdots \lowtranopa{1}$ is also a lower transition operator.
Note that the same holds for their conjugate upper transition operators, defined as $\uptranopa{i} \coloneqq (I + \delta_i \overline{Q})$ and $\uptranopa{1:k} \coloneqq \uptranopa{k} \cdots \uptranopa{1}$.
We now assume ex-absurdo that $\coefferga{\Phi(\delta_{1}, \dots, \delta_{k})} = \coefferga{\lowtranopa{1:k}} = 1$.
As a consequence of Proposition~\ref{prop:CoeffOfErgod:AlternativeFunctions}, there is some $f^{*} \in \setoffna$ with $\min f^{*} = 0$ and $\max f^{*} = 1$ such that $\norm{\lowtranopa{1:k} f^{*}}_{v} = 1$.
By construction and \ref{prop:LTO:BoundedByMinAndMax}, there are now some $y_0, y_1 \in \mathcal{X}$ such that $[\lowtranopa{1:k} f^{*}](y_0) = 0$ and $[\lowtranopa{1:k} f^{*}](y_1) = 1$.
We define the---obviously non-empty---set
\[
\mathcal{X}^{*}
\coloneqq \left\{ x \in \mathcal{X} \colon f^{*}(x) = 0 \right\},
\]
and distinguish two cases: either $\statespacesub{R} \cap \mathcal{X}^{*} \neq \emptyset$ or $\statespacesub{R} \cap \mathcal{X}^{*} = \emptyset$.
We first consider the case $\statespacesub{R} \cap \mathcal{X}^{*} \neq \emptyset$, and fix any arbitrary $x^{*} \in \statespacesub{R} \cap \mathcal{X}^{*}$.
Note that, by construction, $\indic{x^{*}} \leq 1 - f^{*}$.
Using the conjugacy of $\lowtranopa{1:k}$ and $\uptranopa{1:k}$ and \ref{prop:LTO:Monotonicity}, we find that
\[
\uptranopa{1:k} \indic{x^{*}} \leq \uptranopa{1:k} (1 - f^{*}) = 1 + \uptranopa{1:k} (- f^{*}) = 1 - \lowtranopa{1:k} f^{*},
\]
where the first equality follows from \ref{prop:LTO:AdditionOfConstant} and the second equality follows from the conjugacy.
From the previous inequality and \ref{prop:LTO:BoundedByMinAndMax}, it follows that
\[
0 \leq [\uptranopa{1:k} \indic{x^{*}}](y_1) \leq 1 - [\lowtranopa{1:k} f^{*}](y_1) = 0,
\]
and hence $[\uptranopa{1:k} \indic{x^{*}}](y_1) = 0$.
From Lemma~\ref{lem:LowTranOp:SignOfCompositionWithIndicator}, it now follows that
\begin{equation}
\label{eqn:ErogidictyOfApproximation:Contradiction1}
0 = [\uptranopa{1:k} \indic{x^{*}}](y_1) \geq \prod_{i=1}^{k} [\uptranopa{i} \indic{z_{i}}](z_{i+1}) = \prod_{i=1}^{k} [(I + \delta_{i} \overline{Q}) \indic{z_{i}}](z_{i+1})
\end{equation}
for any arbitrary sequence $y_1 = z_{k+1}, z_{k}, \dots, z_{1} = x^{*}$ in $\mathcal{X}$.
On the other hand, as $k \geq n = \card{\mathcal{X}} - 1$ and $x^{*} \in \statespacesub{R}$ it follows from Lemma~\ref{lem:LowRateOp:PathOfArbitraryLength} that there exists a sequence $y_1 = x_{k+1}, x_{k}, \dots, x_{1} = x^{*}$ in $\mathcal{X}$ such that $[(I + \delta_{i} \overline{Q}) \indic{x_{i}}](x_{i+1}) > 0$ for all $i \in \{1, \dots, k \}$.
This obviously contradicts Eqn.~\eqref{eqn:ErogidictyOfApproximation:Contradiction1}.
Next, we consider the case $\statespacesub{R} \cap \mathcal{X}^{*} = \emptyset$.
In this case, $c \indic{\statespacesub{R}} \leq f^{*}$, where we let
\[
c \coloneqq \min \{ f^{*}(x) \colon x \in \statespacesub{R} \} > 0.
\]
From Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent}, we know that
\[
c_{1} \cdots c_{k} \indic{A_k} \leq \lowtranopa{1:k} \indic{\statespacesub{R}},
\]
where $A_{0} \coloneqq \statespacesub{R}$ and, for all $i \in \{ 1, \dots, k \}$,
\begin{align*}
A_{i}
&\coloneqq \{ x \in \mathcal{X} \colon [\lowtranopa{i} \indic{A_{i-1}}](x) > 0\}
&\text{and} & &
c_{i}
&\coloneqq \min \{ [\lowtranopa{i} \indic{A_{i-1}}](x) \colon x \in A_{i} \}.
\end{align*}
As $c > 0$ and $c \indic{\statespacesub{R}} \leq f^{*}$, it follows from \ref{def:LTO:NonNegativelyHom} and \ref{prop:LTO:Monotonicity} that $c \lowtranopa{1:k} \indic{\statespacesub{R}} \leq \lowtranopa{1:k} f^{*}$.
Combining the two obtained inequalities yields
\[
c c_1 \cdots c_{k} \indica{A_{k}}{y_0} \leq c [\lowtranopa{1:k} \indic{\statespacesub{R}}](y_0) \leq [\lowtranopa{1:k} f^{*}](y_0) = 0.
\]
From the second part of Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent}, it now follows that $y_{0} \notin A_{k}$.
Nonetheless, we now prove that $A_{k} = \mathcal{X}$, an obvious contradiction.
To that end, observe that for all $i \in \{ 1, \dots, k \}$,
\begin{align*}
A_{i}
&= \{ x \in \mathcal{X} \colon [(I + \delta_{i} \underline{Q}) \indic{A_{i-1}}](x) > 0 \} \\
&= \{ x \in A_{i-1} \colon [(I + \delta_{i} \underline{Q}) \indic{A_{i-1}}](x) > 0 \} \cup \{ x \in \mathcal{X} \setminus A_{i-1} \colon [(I + \delta_{i} \underline{Q}) \indic{A_{i-1}}](x) > 0 \}.
\intertext{
Note that for all $x_{i-1} \in A_{i-1}$, $\indic{A_{i-1}} \geq \indic{x_{i-1}}$.
Also, from \ref{prop:LTRO:Ixx} it follows that $[(I + \delta_i \underline{Q}) \indic{x_{i-1}}](x_{i-1}) > 0$.
Using \ref{prop:LTO:Monotonicity} allows us to conclude that for all $x_{i-1} \in A_{i-1}$, $[(I + \delta_{i} \underline{Q}) \indic{A_{i-1}}](x_{i-1}) > 0$.
Therefore,
}
A_{i}
&= A_{i-1} \cup \{ x \in \mathcal{X} \setminus A_{i-1} \colon [(I + \delta_{i} \underline{Q}) \indic{A_{i-1}}](x) > 0 \} \\
&= A_{i-1} \cup \{ x \in \mathcal{X} \setminus A_{i-1} \colon \indica{A_{i-1}}{x} + \delta_{i} [\underline{Q} \indic{A_{i-1}}](x) > 0 \} \\
&= A_{i-1} \cup \{ x \in \mathcal{X} \setminus A_{i-1} \colon [\underline{Q} \indic{A_{i-1}}](x) > 0 \},
\end{align*}
where the third equality is allowed because $\delta_i > 0$.
From this recursive relation, it is obvious that $\statespacesub{R} \subseteq A_{k}$.
Even more, we can prove that $\statespacesub{R}^{c} \subseteq A_{k}$, which implies that $\statespacesub{R} \cup \statespacesub{R}^{c} = \mathcal{X} \subseteq A_{k} \subseteq \mathcal{X}$, and consequently $A_{k} = \mathcal{X}$.
Indeed, note that the sequence $A_{0}, \dots, A_{k}$ is equal to the first $(k+1)$ terms of the sequence $\{B_{i}\}_{i \in \nats_{0}}$ that is defined in Definition~\ref{def:LowRateOp:LowerReachable} for $B_{0} = \statespacesub{R}$.
As $\underline{Q}$ was assumed to be ergodic and $k \geq \card{\mathcal{X}} - 1 \geq \card{\mathcal{X} \setminus \statespacesub{R}}$, it follows from Definitions~\ref{def:LowRateOp:LowerReachable} and \ref{def:LowRateOp:RegularlyAbsorbing} and Theorem~\ref{the:ContinuousErgodicity:NecessaryAndSufficient} that $\statespacesub{R}^{c} \subseteq B_{k}$.
For both $\statespacesub{R} \cap \mathcal{X}^{*} \neq \emptyset$ and $\statespacesub{R} \cap \mathcal{X}^{*} = \emptyset$ we have obtained a contradiction, such that the ergodicity of $\underline{Q}$ indeed implies that $\coefferga{\Phi(\delta_{1}, \dots, \delta_{k})} < 1$.
Next, we prove the reverse implication.
Fix some lower transition rate operator $\underline{Q}$, and assume that there is some $k < \card{\mathcal{X}}$ and some $\delta_1, \dots, \delta_k \in \reals_{> 0}$ such that $\delta_{i} \norm{\underline{Q}} < 2$ for all $i \in \{ 1, \dots, k \}$ and
\[
\coefferga{\Phi(\delta_{1}, \dots, \delta_{k})} < 1.
\]
By Proposition~\ref{the:CoeffOfErg:StrictlySmallerThanOneIsNecAndSuffForErg} this implies that the lower transition operator $\lowtranopa{1:k} \coloneqq (I + \delta_{k} \underline{Q}) \cdots (I + \delta_{1} \underline{Q})$ is ergodic.
By Proposition \ref{prop:DiscreteErgodicity:NecAndSuff}, the ergodicity of $\lowtranopa{1:k}$ is equivalent to $\lowtranopa{1:k}$ being regularly absorbing, in the sense that
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{X}_{1:k} \coloneqq \left\{ x \in \mathcal{X} \colon (\exists n \in \mathbb{N})(\forall y \in \mathcal{X})~[(\uptranopa{1:k})^{n} \indic{x}](y) > 0 \right\} \neq \emptyset$;
\item $(\forall y \in \mathcal{X} \setminus \mathcal{X}_{1:k})(\exists n \in \mathbb{N})~[(\lowtranopa{1:k})^{n} \indic{\mathcal{X}_{1:k}}](y) > 0$.
\end{enumerate}
Fix some $x^{*} \in \mathcal{X}_{1:k}$, and let $m \in \mathbb{N}$ be such that $(\uptranopa{1:k})^{m} \indic{x^{*}} > 0$.
Fix an arbitrary $y \in \mathcal{X}$.
Then by Lemma~\ref{lem:LowTranOp:SignOfCompositionWithIndicator}, there exists a sequence $y = x_{m+1}, \dots, x_{1} = x^{*}$ in $\mathcal{X}$ such that for all $i \in \{1, \dots, m\}$, $[\uptranopa{1:k} \indic{x_{i}}](x_{i+1}) > 0$.
Again using Lemma~\ref{lem:LowTranOp:SignOfCompositionWithIndicator}, this implies that for all $i \in \{ 1, \dots, m \}$ there is a sequence $x_{i+1} = x_{i,k+1}, \dots, x_{i,1} = x_{i}$ in $\mathcal{X}$ such that for all $j \in \{ 1, \dots, k \}$,
\[
[(I + \delta_{j} \overline{Q}) \indic{x_{i,j}}](x_{i,j+1}) > 0.
\]
As such, we have now constructed one long sequence
\[
y = x_{m, k+1}, x_{m,k}, \dots, x_{m,1} = x_{m-1, k+1}, x_{m-1,k}, \dots, x_{m-1,1} = x_{m-2, k+1}, \dots, x_{1, 1} = x^{*}
\]
in $\mathcal{X}$.
From this sequence we remove all ``loops'' (as we previously did in the proof of Lemma~\ref{lem:LowRateOp:UpperReachableShorterSequence}), and denote this shortened sequence by $y = z_{n'+1}, \dots, z_{1} = x^{*}$ with corresponding time steps $\delta_{n'}', \dots, \delta_{1}'$.
Then for all $i \in \{ 1, \dots, n' \}$,
\[
0 < [(I + \delta_{i}' \overline{Q}) \indic{z_{i}}](z_{i+1}) = \indic{z_{i}}(z_{i+1}) + \delta_{i}' [\overline{Q} \indic{z_{i}}](z_{i+1}) = \delta_{i}' [\overline{Q} \indic{z_{i}}](z_{i+1}).
\]
where the final equality holds because the removal of the loops ensures that $z_{i} \neq z_{i+1}$, such that $\indic{z_{i}}(z_{i+1}) = 0$. As all $\delta_{i}'$ are strictly positive, we find that for all $i \in \{ 1, \dots, n' \}$, $[\overline{Q} \indic{z_{i}}](z_{i+1}) > 0$.
By Definition~\ref{def:LowRateOp:UpperReachable}, this means that $y \upreachc x^{*}$.
As $y$ was an arbitrary element of $\mathcal{X}$ and $x^{*}$ an arbitrary element of $\mathcal{X}_{1:k}$, $\mathcal{X}_{1:k} \subseteq \statespacesub{R}$ and hence $\underline{Q}$ is top class regular.
Furthermore, we can show that $\statespacesub{R} \subseteq \mathcal{X}_{1:k}$, such that $\statespacesub{R} = \mathcal{X}_{1:k}$.
To that end, assume that $\statespacesub{R} \setminus \mathcal{X}_{1:k} \neq \emptyset$ and fix some arbitrary $x^{*} \in \statespacesub{R} \setminus \mathcal{X}_{1:k}$.
Then by Definition~\ref{def:LowRateOp:RegularlyAbsorbing}, $y \upreachc x^{*}$ for all $y \in \mathcal{X}$.
By Lemmas~\ref{lem:LowRateOp:PathOfArbitraryLength} and \ref{lem:LowTranOp:SignOfCompositionWithIndicator}, for all $y \in \mathcal{X}$ there is an integer $n_{y}$ such that for all $\ell \geq n_{y}$, $[(\uptranopa{1:k})^{\ell} \indic{x^{*}}](y) > 0$.
Hence, if we let $m \coloneqq \max \{ n_{y} \colon y \in \mathcal{X} \}$, then $[(\uptranopa{1:k})^{m} \indic{x^{*}}](y) > 0$ for all $y \in \mathcal{X}$.
By Definition~\ref{def:LowTranOp:RegularlyAbsorbing}, this implies that $x^{*} \in \mathcal{X}_{1:k}$.
However, this contradicts our assumption that $x^{*} \in \statespacesub{R} \setminus \mathcal{X}_{1:k}$, such that $\statespacesub{R} \setminus \mathcal{X}_{1:k} = \emptyset$ and hence indeed $\statespacesub{R} \subseteq \mathcal{X}_{1:k}$.
We now show that (ii) implies that $\underline{Q}$ is top class absorbing.
Since $\lowtranopa{1:k}$ is top class regular and top class absorbing, and because $\mathcal{X}_{1:k} = \statespacesub{R}$, it follows from Lemma~\ref{lem:LowTranOp:StrongerNecAndSuffTopClassAbsorption} that there is some $m \in \nats_{0}$ such that $(\lowtranopa{1:k})^{m} \indic{\statespacesub{R}} > 0$.
Also, we know that $B_{m} = \mathcal{X}$, where $B_{0} = \statespacesub{R}$ and
\[
B_{i+1}
\coloneqq B_{i} \cup \left\{ x \in \mathcal{X} \setminus B_{i} \colon [\lowtranopa{1:k} \indic{B_{i}}](x) > 0 \right\} ~\text{for all}~i \in \{ 0, \dots, m-1 \}.
\]
For any $i \in \{ 0, \dots, m-1 \}$ and any $x \in \mathcal{X}$, it follows from Lemma~\ref{lem:LowTranOp:SignOfCompositionWithEvent} that $[\lowtranopa{1:k} \indic{B_{i}}](x) > 0$ if and only if $x \in B_{i,k}$, where $B_{i,k}$ is derived from the initial condition $B_{i,0} \coloneqq B_{i}$ and, for all $j \in \{ 1, \dots, k \}$, from the recursive relation
\begin{align*}
B_{i,j}
&= \left\{ x \in \mathcal{X} \colon [(I + \delta_{j} \underline{Q}) \indic{B_{i,j-1}}](x) > 0 \right\}.
\intertext{Similar to what we did before, we can rewrite this recursive relation as}
B_{i,j}
&= \left\{ x \in B_{i,j-1} \colon [(I + \delta_{j} \underline{Q}) \indic{B_{i,j-1}}](x)>0 \right\} \cup \left\{ x \in \mathcal{X} \setminus B_{i,j-1} \colon [(I + \delta_{j} \underline{Q}) \indic{B_{i,j-1}}](x) > 0 \right\} \\
&= \left\{ x \in B_{i,j-1} \colon 1 + \delta_{j} [\underline{Q} \indic{B_{i,j-1}}](x) > 0 \right\} \cup \left\{ x \in \mathcal{X} \setminus B_{i,j-1} \colon \delta_{j} [\underline{Q} \indic{B_{i,j-1}}](x) > 0 \right\}.
\intertext{As before, we can verify that $1 + \delta_{j} [\underline{Q} \indic{B_{i,j-1}}](x) > 0$ for all $x \in B_{i,j-1}$.
Hence,}
B_{i,j}
&= B_{i,j-1} \cup \left\{ x \in \mathcal{X} \setminus B_{i,j-1} \colon [\underline{Q} \indic{B_{i,j-1}}](x) > 0 \right\}.
\end{align*}
This way, we have constructed a sequence of sets
\[
B_{0} = B_{0, 0}, B_{0, 1}, \dots, B_{0,k} = B_1 = B_{1,0}, B_{1,1}, \dots, B_{1,k} = B_{2} = B_{2,0}, \dots, B_{m-1, k} = B_{m}
\]
with $B_{0} = \statespacesub{R}$ and $B_{m} = \mathcal{X}$.
Denote this sequence by $A_{0}, \dots, A_{mk}$ and let $A_{mk+1}\coloneqq\mathcal{X}$.
Then $A_{0} = \statespacesub{R}$, $A_{mk} = A_{mk+1} = \mathcal{X}$ and for all $i \in \{0, \dots, mk\}$,
\[
A_{i+1} = A_{i} \cup \left\{ x \in \mathcal{X} \setminus A_{i} \colon [\underline{Q} \indic{A_{i}}](x) > 0 \right\}.
\]
Let $n\in\{0,\dots,mk\}$ be the first index for which $A_{n} = A_{n+1}$.
From the recursive relation between $A_{n}, \dots, A_{mk}, A_{mk+1}$, we infer that $A_{n} = A_{n+1} = \cdots = A_{mk+1} = \mathcal{X}$.
Fix an arbitrary $y^{*} \in \mathcal{X} \setminus \statespacesub{R}$.
Then the sequence $\statespacesub{R} = A_{0}, \dots, A_{n}, A_{n+1}$ satisfies the recursive relation of Definition~\ref{def:LowRateOp:LowerReachable} and $y^{*}\in\mathcal{X}=A_n$, so $y^{*} \lowreachc \statespacesub{R}$.
As $y^{*}$ was an arbitrary element of $\mathcal{X} \setminus \statespacesub{R}$, it follows that $\underline{Q}$ is top class absorbing.
We have proven that if there is some $k < \card{\mathcal{X}}$ and some sequence $\delta_1, \dots, \delta_k$ in $\reals_{> 0}$ such that $\delta_{i} \norm{\underline{Q}} < 2$ for all $i \in \{ 1, \dots, k \}$ and $\coefferga{\Phi(\delta_{1},\dots,\delta_{k})} < 1$, then $\underline{Q}$ is both top class regular and top class absorbing.
As an immediate consequence of Theorem~\ref{the:ContinuousErgodicity:NecessaryAndSufficient}, this implies that $\underline{Q}$ is ergodic. \end{proof}
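The lower-reachability recursion $A_{i+1} = A_{i} \cup \{ x \notin A_{i} \colon [\underline{Q} \indic{A_{i}}](x) > 0 \}$ used at the end of this proof can be run directly. The sketch below is illustrative only: it assumes $\underline{Q}$ is the lower envelope of two explicitly chosen rate matrices, and iterates the recursion to a fixed point.

```python
# Sketch of the lower-reachability recursion, for a lower transition
# rate operator given as the lower envelope of finitely many rate
# matrices: [Q_low f](x) = min over the matrices of (Q f)(x).
import numpy as np

def lower_rate_apply(rate_matrices, f):
    return np.min([Q @ f for Q in rate_matrices], axis=0)

def lower_reach_closure(rate_matrices, A, n_states):
    # iterates A_{i+1} = A_i | {x not in A_i : [Q_low 1_{A_i}](x) > 0}
    A = set(A)
    while True:
        ind = np.array([1.0 if x in A else 0.0 for x in range(n_states)])
        g = lower_rate_apply(rate_matrices, ind)
        grown = A | {x for x in range(n_states) if x not in A and g[x] > 0}
        if grown == A:
            return A
        A = grown

# a 3-state example: states 1 and 2 surely flow towards state 0
Q1 = np.array([[-1.0, 0.5, 0.5], [1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
Q2 = np.array([[-2.0, 1.0, 1.0], [0.5, -0.5, 0.0], [0.0, 0.5, -0.5]])
closure = lower_reach_closure([Q1, Q2], {0}, 3)
```

Here the closure grows from $\{0\}$ to the whole state space, which is exactly the situation in which every state lower-reaches the initial set.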
\begin{proof}[Proof of Proposition~\ref{prop:UniformApproximationErgodicError}]
From the requirements on $\delta$, \ref{prop:LTO:CompositionIsAlsoLTO} and Proposition~\ref{prop:IPlusDeltaQLowTranOp}, it follows that $(I + \delta \underline{Q})^{i}$ is a lower transition operator for all $i \in \mathbb{N}$.
By Lemma~\ref{lem:ExplicitErrorBound},
\begin{align*}
\norm{\lowtranopa{t} f - \Psi_{t}(n)}
&\leq \delta^2 \norm{\underline{Q}}^{2} \sum_{i = 0}^{n-1} \norm{(I + \delta \underline{Q})^{i} f}_{c} \\
&= \delta^2 \norm{\underline{Q}}^{2} \sum_{i = 0}^{k-1} \sum_{j = 0}^{m-1} \norm{(I + \delta \underline{Q})^{j}(I + \delta \underline{Q})^{m i} f}_{c}.
\intertext{
We use \ref{prop:LTO:VarNormTf} to yield
}
\norm{\lowtranopa{t} f - \Psi_{t}(n)}
&\leq m \delta^2 \norm{\underline{Q}}^{2} \sum_{i = 0}^{k-1} \norm{(I + \delta \underline{Q})^{m i} f}_{c}.
\intertext{Next, we simply use \ref{prop:CoeffOfErgod:BoundOnNormTf} and \ref{prop:CoeffOfErgod:Composition} to yield}
\norm{\lowtranopa{t} f - \Psi_{t}(n)}
&\leq m \delta^2 \norm{\underline{Q}}^{2} \norm{f}_{c} \sum_{i = 0}^{k-1} \coefferga{(I + \delta \underline{Q})^{m}}^{i}.
\end{align*}
For any $a \in [0, 1)$ and any $\ell \in \mathbb{N}$, it is well known that
\[
\sum_{i = 0}^{\ell} a^{i} = \frac{1 - a^{\ell+1}}{1 - a} \leq \frac{1}{1-a}.
\]
If $\beta \coloneqq \coefferga{(I + \delta \underline{Q})^{m}} < 1$, then we can use this well-known relation to yield
\begin{align*}
\norm{\lowtranopa{t} f - \Psi_{t}(n)}
&\leq m \delta^2 \norm{\underline{Q}}^{2} \norm{f}_{c} \frac{1 - \beta^{k}}{1 - \beta}
\leq \frac{m \delta^2 \norm{\underline{Q}}^{2} \norm{f}_{c}}{1 - \beta}.
\end{align*}
The proof for $\beta = \coefferga{\lowtranopa{m \delta}}$ is entirely analogous.
We can use the second inequality of Lemma~\ref{lem:ExplicitErrorBound}, the semi-group property and \ref{prop:LTO:VarNormTf}, which yields
\[
\norm{\lowtranopa{t} f - \Psi_{t}(n)}
\leq m \delta^2 \norm{\underline{Q}}^{2} \sum_{i = 0}^{k-1}\norm{(\lowtranopa{m \delta})^{i} f}_{c}.
\]
Next, we again use \ref{prop:CoeffOfErgod:BoundOnNormTf} and \ref{prop:CoeffOfErgod:Composition} to yield
\[
\norm{\lowtranopa{t} f - \Psi_{t}(n)}
\leq m \delta^2 \norm{\underline{Q}}^{2} \norm{f}_{c} \sum_{i = 0}^{k-1} \coefferga{\lowtranopa{m \delta}}^{i}. \qedhere
\]
\end{proof}
\begin{proof}[Proof of Example~\ref{binex:UniformErgodic}]
Let $\delta \in \reals_{\geq 0}$ be such that $\delta \norm{\underline{Q}} \leq 2$.
Using Proposition~\ref{prop:CoeffOfErgod:AlternativeFunctions} yields
\begin{align*}
\coefferga{\Phi(\delta)}
&= \max \{ \norm{\Phi(\delta) f}_{v} \colon f \in \setoffna, \max f = 1, \min f = 0 \}.
\end{align*}
In the special case of a binary state space, only two functions satisfy this requirement: $\indic{0}$ and $\indic{1}$.
Therefore
\begin{align*}
\coefferga{\Phi(\delta)}
&= \max \big\{ \abs{[\Phi(\delta) \indic{0}](0) - [\Phi(\delta) \indic{0}](1)}, \abs{[\Phi(\delta) \indic{1}](0) - [\Phi(\delta) \indic{1}](1)} \big\}.
\end{align*}
Recall that in the Proof of Example~\ref{binex:AnalyticalExpressionsForAppliedLTO} we proved that for all $\delta \in \reals_{\geq 0}$ such that $\delta \norm{\underline{Q}} \leq 2$ and all $f \in \setoffna$,
\[
[\Phi(\delta) f](0) - [\Phi(\delta) f](1)
= \begin{cases}
\norm{f}_{v} (1 - \delta (\upq{0} + \lowq{1})) &\text{if } f(0) \geq f(1), \\
\norm{f}_{v} (1 - \delta (\lowq{0} + \upq{1})) &\text{if } f(0) \leq f(1).
\end{cases}
\]
As $\norm{\indic{0}}_{v} = 1 = \norm{\indic{1}}_{v}$, this yields
\begin{equation*}
\coefferga{I + \delta \underline{Q}}
= \coefferga{\Phi(\delta)}
= \max \left\{ \abs{1 - \delta (\upq{0} + \lowq{1})}, \abs{1 - \delta (\lowq{0} + \upq{1})} \right\}. \qedhere
\end{equation*}
\end{proof}
For the proof of Theorem~\ref{the:CoeffOfErgod:Approximation}, we need some definitions and results from the theory of imprecise probabilities. The reason for this is that, as \cite{decooman2009} already mention, the functional $[\underline{T} \cdot](x)$ is actually a coherent (conditional) lower expectation. For a more thorough discussion of coherent lower expectations---often also called coherent lower previsions---we refer to the seminal work of \cite{1991Walley} and the more recent treatment of \cite{2014LowPrev}. \begin{definition}
A functional $\underline{\prev}$ that maps $\setoffna$ to $\mathbb{R}$ is a \emph{coherent lower expectation} if for all $f, g \in \setoffna$ and all $\mu \in \reals_{\geq 0}$:
\begin{enumerate}[label=\underline{E}\arabic*:, ref=(\underline{E}\arabic*)]
\item $\lowpreva{f} \geq \min f$; \label{def:CLP:BoundedByMin}
\item $\lowpreva{f + g} \geq \lowpreva{f} + \lowpreva{g}$; \label{def:CLP:subadditive}
\item $\lowpreva{\mu f} = \mu \lowpreva{f}$. \label{def:CLP:PositiveHomogeneity}
\end{enumerate} \end{definition} The conjugate \emph{coherent upper expectation} is defined for all $f \in \setoffna$ as \[
\uppreva{f} = - \lowpreva{-f}. \] If for all $f \in \setoffna$, $\uppreva{f} = \lowpreva{f} = \linpreva{f}$, then we call $\mathrm{E}$ a \emph{linear expectation}. The reason for this terminology is that the inequality in \ref{def:CLP:subadditive} can then be replaced by an equality, and the condition $\mu \in \reals_{\geq 0}$ for \ref{def:CLP:PositiveHomogeneity} can be relaxed to $\mu \in \mathbb{R}$.
The following corollary highlights the link between the components of a lower transition operator and coherent lower previsions. \begin{corollary} \label{cor:LTOisCLP}
Let $\underline{T}$ be a lower transition operator and $x \in \mathcal{X}$.
Then the functional $[\underline{T} \cdot](x) \colon f \in \setoffna \mapsto [\underline{T} f](x)$ is a coherent lower prevision. \end{corollary} \begin{proof}
The operator $[\underline{T} \cdot](x)$ indeed maps $\setoffna$ to $\mathbb{R}$.
Furthermore, \ref{def:CLP:BoundedByMin} follows from \ref{def:LTO:DominatesMin}, \ref{def:CLP:subadditive} follows from \ref{def:LTO:SuperAdditive} and \ref{def:CLP:PositiveHomogeneity} follows from \ref{def:LTO:NonNegativelyHom}.
Hence, the operator is indeed a coherent lower prevision. \end{proof}
For any coherent lower expectation $\underline{\prev}$, the set $\credseta{\underline{\prev}}$ of dominating linear expectations, defined as \[
\credseta{\underline{\prev}} \coloneqq \{ \mathrm{E} \text{ a linear expectation operator} \colon (\forall f \in \setoffna)~\lowpreva{f} \leq \linpreva{f} \}, \] is non-empty. Moreover, from \cite[Section~3.3.3]{1991Walley} it follows that $\underline{\prev}$ is the lower envelope of $\credseta{\underline{\prev}}$, in the sense that for all $f \in \setoffna$, \[
\lowpreva{f} = \min \{ \linpreva{f} \colon \mathrm{E} \in \credseta{\underline{\prev}} \}. \]
\begin{lemma}[Alternative statement of Proposition~1 in \cite{2013Skulj}] \label{lem:MaxDifferenceBetweenLinearPrevisionsForIndicators}
If\/ $\mathrm{E}_1$ and $\mathrm{E}_2$ are two linear expectation operators, then
\[
\max \{ \mathrm{E}_{1}(f) - \mathrm{E}_{2}(f) \colon f \in \setoffna, 0 \leq f \leq 1 \}
= \max \{ \mathrm{E}_{1}(\indic{A}) - \mathrm{E}_{2}(\indic{A}) \colon \emptyset \neq A \subset \mathcal{X} \}.
\] \end{lemma} \begin{proof}
Let $\mathrm{E}_1$ and $\mathrm{E}_2$ be any two linear expectation operators on $\setoffna$.
Then
\begin{align*}
&\max \{ \mathrm{E}_{1}(f) - \mathrm{E}_{2}(f) \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&\qquad = \max \left\{ \sum_{x \in \mathcal{X}} (\mathrm{E}_{1}(\indic{x}) - \mathrm{E}_{2}(\indic{x})) f(x) \colon f \in \setoffna, 0 \leq f \leq 1 \right\} \\
&\qquad = \sum_{x \in A^*} (\mathrm{E}_{1}(\indic{x}) - \mathrm{E}_{2}(\indic{x}))
= \mathrm{E}_{1}(\indic{A^*}) - \mathrm{E}_{2}(\indic{A^*}),
\end{align*}
where $A^* \subset \mathcal{X}$ is defined as $A^* \coloneqq \left\{ x \in \mathcal{X} \colon \mathrm{E}_{1}(\indic{x}) > \mathrm{E}_{2}(\indic{x}) \right\}$. \end{proof}
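The mechanism of this proof is easy to verify numerically. The following sketch (ours, not from the paper; the mass functions `p1` and `p2` and all other names are illustrative assumptions) checks, on a three-element space, that the maximum over $0 \leq f \leq 1$ is attained at the indicator of $A^*$:

```python
# Numeric check of the lemma for two linear expectations given by
# probability mass functions p1 and p2 (illustrative values).
from itertools import combinations
import random

p1 = [0.5, 0.3, 0.2]
p2 = [0.2, 0.3, 0.5]

def E(p, f):
    """Linear expectation of f under the mass function p."""
    return sum(pi * fi for pi, fi in zip(p, f))

# The maximising set from the proof: A* = {x : p1(x) > p2(x)}.
A_star = [x for x in range(3) if p1[x] > p2[x]]
gain = sum(p1[x] - p2[x] for x in A_star)

# Maximum over indicators of non-empty proper subsets.
best_over_indicators = max(
    E(p1, [float(x in A) for x in range(3)])
    - E(p2, [float(x in A) for x in range(3)])
    for r in range(1, 3) for A in map(set, combinations(range(3), r))
)
assert abs(best_over_indicators - gain) < 1e-12

# No f with 0 <= f <= 1 does better than the best indicator.
rng = random.Random(1)
for _ in range(200):
    f = [rng.random() for _ in range(3)]
    assert E(p1, f) - E(p2, f) <= best_over_indicators + 1e-12
```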
\begin{lemma} \label{lem:MaxDifferenceBetweenLowerPrevisionsUpperBoundWithIndicators}
If\/ $\underline{\prev}_{1}$ and $\underline{\prev}_{2}$ are two coherent lower expectations on $\setoffna$, then
\[
\max \{ \underline{\prev}_1(f) - \underline{\prev}_2(f) \colon f \in \setoffna, 0 \leq f \leq 1 \}
\leq \max \{ \overline{\prev}_1(\indic{A}) - \underline{\prev}_2(\indic{A}) \colon \emptyset \neq A \subset \mathcal{X} \}.
\] \end{lemma} \begin{proof}
Define $\mathcal{M}_1 \coloneqq \credseta{\underline{\prev}_1}$ and $\mathcal{M}_2 \coloneqq \credseta{\underline{\prev}_2}$.
Note that for all $f \in \setoffna$,
\[
0 =\underline{\prev}_{1}(0) = \underline{\prev}_{1}(f - f) \geq \underline{\prev}_{1}(f) + \underline{\prev}_{1}(-f),
\]
where the first equality follows from \ref{def:CLP:PositiveHomogeneity}---with $\mu = 0$ and $f = 0$---and the first inequality follows from \ref{def:CLP:subadditive}.
Bringing the second term to the left hand side and using the conjugacy relation between $\underline{\prev}_{1}$ and $\overline{\prev}_{1}$, we find $\overline{\prev}_{1}(f) \geq \underline{\prev}_{1}(f)$.
Hence
\[
\underline{\prev}_1(f) - \underline{\prev}_2(f)
\leq \overline{\prev}_1(f) - \underline{\prev}_2(f),
\]
and consequently
\[
\max \{ \underline{\prev}_1(f) - \underline{\prev}_2(f) \colon f \in \setoffna, 0 \leq f \leq 1 \}
\leq \max \{ \overline{\prev}_1(f) - \underline{\prev}_2(f) \colon f \in \setoffna, 0 \leq f \leq 1 \}.
\]
Recall that for any $f \in \setoffna$, $\underline{\prev}_i(f) = \min_{\mathrm{E}_i \in \mathcal{M}_{i}} \mathrm{E}_i(f)$, so
\begin{align*}
\overline{\prev}_1(f) - \underline{\prev}_2(f)
&= \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \mathrm{E}_1(f) - \min_{\mathrm{E}_2 \in \mathcal{M}_{2}} \mathrm{E}_2(f)
= \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \max_{\mathrm{E}_2 \in \mathcal{M}_{2}} \mathrm{E}_1(f)-\mathrm{E}_2(f).
\end{align*}
We use this equality to rewrite the right-hand side of the inequality above:
\begin{align*}
&\max \{ \overline{\prev}_1(f) - \underline{\prev}_2(f) \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&\qquad = \max \big\{ \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \max_{\mathrm{E}_2 \in \mathcal{M}_{2}} (\mathrm{E}_{1}(f) - \mathrm{E}_{2}(f)) \colon f \in \setoffna, 0 \leq f \leq 1 \big\} \\
&\qquad = \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \max_{\mathrm{E}_2 \in \mathcal{M}_{2}} \max \big\{ (\mathrm{E}_{1}(f) - \mathrm{E}_{2}(f)) \colon f \in \setoffna, 0 \leq f \leq 1 \big\}.
\end{align*}
Next, applying Lemma~\ref{lem:MaxDifferenceBetweenLinearPrevisionsForIndicators} yields
\begin{align*}
&\max \{ \underline{\prev}_1(f) - \underline{\prev}_2(f) \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&\qquad \leq \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \max_{\mathrm{E}_2 \in \mathcal{M}_{2}} \max \big\{ (\mathrm{E}_{1}(f) - \mathrm{E}_{2}(f)) \colon f \in \setoffna, 0 \leq f \leq 1 \big\} \\
&\qquad = \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \max_{\mathrm{E}_2 \in \mathcal{M}_{2}} \max \big\{ (\mathrm{E}_{1}(\indic{A}) - \mathrm{E}_{2}(\indic{A})) \colon \emptyset \neq A \subset \mathcal{X} \big\} \\
&\qquad = \max \big\{ \max_{\mathrm{E}_1 \in \mathcal{M}_{1}} \max_{\mathrm{E}_2 \in \mathcal{M}_{2}} (\mathrm{E}_{1}(\indic{A}) - \mathrm{E}_{2}(\indic{A})) \colon \emptyset \neq A \subset \mathcal{X} \big\} \\
&\qquad = \max \{ \overline{\prev}_{1}(\indic{A}) - \underline{\prev}_{2}(\indic{A}) \colon \emptyset \neq A \subset \mathcal{X} \}. \qedhere
\end{align*} \end{proof}
\begin{proof}[Proof of \cref{the:CoeffOfErgod:Approximation}]
Fix some lower transition operator $\underline{T}$.
The lower bound on $\coefferga{\underline{T}}$ follows from the fact that for any $\emptyset \neq A \subset \mathcal{X}$, $0 \leq \indic{A} \leq 1$.
Recall from Corollary~\ref{cor:LTOisCLP} that for any $x\in\mathcal{X}$, $[\underline{T} \cdot](x)$ is a coherent lower prevision.
Therefore, we can apply Lemma~\ref{lem:MaxDifferenceBetweenLowerPrevisionsUpperBoundWithIndicators} to obtain the upper bound:
\begin{align*}
\coefferga{\underline{T}}
&= \max \{ \norm{\underline{T} f}_{v} \colon f \in \setoffna, 0 \leq f \leq 1 \} \\
&= \max \big\{ \max \{ [\underline{T} f](x) - [\underline{T} f](y) \colon x,y \in \mathcal{X} \} \colon f \in \setoffna, 0 \leq f \leq 1 \big\} \\
&= \max \big\{ \max \{ [\underline{T} f](x) - [\underline{T} f](y) \colon f \in \setoffna, 0 \leq f \leq 1 \} \colon x,y \in \mathcal{X} \big\} \\
&\leq \max \big\{ \max \{ [\overline{T} \indic{A}](x) - [\underline{T} \indic{A}](y) \colon \emptyset \neq A \subset \mathcal{X} \} \colon x,y \in \mathcal{X} \big\} \\
&= \max \big\{ \max \{ [\overline{T} \indic{A}](x) - [\underline{T} \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X}\big\}. \qedhere
\end{align*} \end{proof}
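As an aside, both bounds are easy to evaluate when the lower transition operator is given explicitly. The sketch below (ours, not from the paper; representing $\underline{T}$ as the lower envelope of a finite set of transition matrices, with all names illustrative assumptions) computes the two bounds of the theorem and confirms that the lower one does not exceed the upper one:

```python
# Bounds of the theorem for a lower transition operator given as the
# lower envelope of a finite set of transition matrices (illustrative).
from itertools import combinations

def low_T(f, x, mats):
    """[lower-T f](x) = min over the matrices of sum_z T[x][z] f(z)."""
    return min(sum(T[x][z] * f[z] for z in range(len(f))) for T in mats)

def upp_T(f, x, mats):
    """Conjugate upper transition operator."""
    return max(sum(T[x][z] * f[z] for z in range(len(f))) for T in mats)

def bounds(mats):
    """Lower and upper bound on the ergodic coefficient, via indicators."""
    n = len(mats[0])
    states = range(n)
    subsets = [set(c) for r in range(1, n) for c in combinations(states, r)]
    inds = [[1.0 if z in A else 0.0 for z in states] for A in subsets]
    lower = max(low_T(f, x, mats) - low_T(f, y, mats)
                for f in inds for x in states for y in states)
    upper = max(upp_T(f, x, mats) - low_T(f, y, mats)
                for f in inds for x in states for y in states)
    return lower, upper

mats = [[[0.9, 0.1], [0.2, 0.8]],
        [[0.7, 0.3], [0.4, 0.6]]]
lo, hi = bounds(mats)
assert lo <= hi
```

For a single transition matrix the two bounds coincide, in line with the linear special case discussed below.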
\begin{proof}[Proof of the counterexample for \cref{prop:CoeffOfErgod:ErgodicUpperBound}]
We first verify that \(\underline{Q}\) is ergodic.
Since \([\overline{Q} \indic{1}](0) = \upq{0} > 0\) and \([\overline{Q} \indic{0}](1) = \upq{1} > 0\), it follows from \cref{def:LowRateOp:UpperReachable,def:LowRateOp:RegularlyAbsorbing} that \(\statespacesub{R} = \mathcal{X}\), such that \(\underline{Q}\) is regularly absorbing.
Hence, by also invoking \cref{the:ContinuousErgodicity:NecessaryAndSufficient} we can conclude that \(\underline{Q}\) is ergodic.
Next, we fix some \(\delta \in \reals_{> 0}{}\) such that \(\delta \norm{\underline{Q}} < 2\).
Recall from \cref{prop:IPlusDeltaQLowTranOp} that \((I + \delta \underline{Q})\) is a lower transition operator.
Consequently, we can use \cref{eqn:CoeffOfErgod:UpperBound} to compute an upper bound for \(\coefferga{I + \delta \underline{Q}}\).
In this case, there are clearly only two possibilities for \(A\) in the optimisation of \cref{eqn:CoeffOfErgod:UpperBound}: \(A = \{0\}\) and \(A = \{1\}\).
For \(A = \{0\}\), some straightforward calculations yield
\begin{align*}
[(I + \delta \overline{Q}) \indic{0}](0)
&= 1, &
[(I + \delta \overline{Q}) \indic{0}](1)
&= \delta \upq{1}, \\
[(I + \delta \underline{Q}) \indic{0}](0)
&= 1 - \delta \upq{0}, &
[(I + \delta \underline{Q}) \indic{0}](1)
&= 0.
\end{align*}
This implies that
\begin{multline*}
\max \big\{ \max \{ [(I + \delta \overline{Q}) \indic{A}](x) - [(I + \delta \underline{Q}) \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X}\big\} \\
\geq [(I + \delta \overline{Q}) \indic{0}](0) - [(I + \delta \underline{Q}) \indic{0}](1)
= 1.
\qedhere
\end{multline*} \end{proof}
If the lower transition operator is linear, then the lower and upper bounds of Theorem~\ref{the:CoeffOfErgod:Approximation} are equal. Moreover, from this special case we can immediately verify that the ergodic coefficient we use is a proper generalisation of an ergodic coefficient---the delta coefficient $\delta$ of \cite{1991Anderson}, which is equivalent to $\tau_1$, one of the proper coefficients of ergodicity discussed by \cite{1981Seneta}---used in the study of precise Markov chains. \begin{corollary} \label{the:LowTranOp:ApproximationOfCoeffOfErgod}
Let $T$ be a transition matrix. Then
\begin{align*}
\coefferga{T}
&= \max \left\{ \frac{1}{2} \sum_{z \in \mathcal{X}} \abs{T(x,z) - T(y,z)} \colon x,y \in \mathcal{X} \right\}.
\end{align*} \end{corollary} \begin{proof}
For a transition matrix, the upper bound of Eqn.~\eqref{eqn:CoeffOfErgod:UpperBound} and the lower bound of Eqn.~\eqref{eqn:CoeffOfErgod:LowerBound} in Theorem~\ref{the:CoeffOfErgod:Approximation} are equal.
Therefore
\begin{align*}
\coefferga{T}
&= \max \left\{ \max \{ [T \indic{A}](x) - [T \indic{A}](y) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X} \right\} \\
&= \max \left\{ \max \left\{ \frac{1}{2} [T (2 \indic{A})](x) - \frac{1}{2} [T (2 \indic{A})](y) \colon x,y \in \mathcal{X} \right\} \colon \emptyset \neq A \subset \mathcal{X} \right\} \\
&= \max \left\{ \max \left\{ \frac{1}{2} \left( [T (2 \indic{A} - 1)](x) - [T (2 \indic{A} - 1)](y) \right) \colon x,y \in \mathcal{X} \right\} \colon \emptyset \neq A \subset \mathcal{X} \right\},
\intertext{
where the first equality follows from Theorem~\ref{the:CoeffOfErgod:Approximation}, the second equality follows from \ref{def:LTO:NonNegativelyHom} and the third equality follows from \ref{prop:LTO:AdditionOfConstant}.
From the linearity of $T$, it follows that $[T f](x) = \sum_{z \in \mathcal{X}} f(z) [T \indic{z}](x) = \sum_{z \in \mathcal{X}} f(z) T(x,z)$, such that
}
\coefferga{T}
&= \max \left\{ \max \left\{ \frac{1}{2} \sum_{z \in \mathcal{X}} [2 \indic{A} - 1](z) \left(T(x,z) - T(y,z) \right) \colon x,y \in \mathcal{X} \right\} \colon \emptyset \neq A \subset \mathcal{X} \right\}\\
&= \max \left\{ \max \left\{ \frac{1}{2} \sum_{z \in \mathcal{X}} [2 \indic{A} - 1](z) \left(T(x,z) - T(y,z) \right) \colon \emptyset \neq A \subset \mathcal{X} \right\} \colon x,y \in \mathcal{X} \right\}.
\intertext{
Solving the inner maximisation problem for some fixed $x, y \in \mathcal{X}$ is trivial: the maximising $A$ is $\{ z \in \mathcal{X} \colon T(x,z) \geq T(y,z) \}$, as for all $z \in \mathcal{X}$, $[2\indic{A} - 1](z)$ is $1$ if $z \in A$ and $-1$ if $z \notin A$.
This results in
}
\coefferga{T}
&= \max \left\{ \frac{1}{2} \sum_{z \in \mathcal{X}} \abs{T(x,z) - T(y,z)} \colon x,y \in \mathcal{X} \right\},
\end{align*}
which proves that $\coefferga{T}$ is indeed equal to $\delta(T)$ of \cite{1991Anderson} or $\tau_1(T)$ of \cite{1981Seneta}. \end{proof}
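For a concrete transition matrix, this formula is immediate to evaluate. A minimal sketch (ours; the matrix and the function name are illustrative):

```python
# Ergodic (delta) coefficient of a row-stochastic matrix T, i.e.
# max over x, y of (1/2) * sum_z |T[x][z] - T[y][z]|, as in the corollary.
from itertools import product

def ergodic_coefficient(T):
    """Delta coefficient of a transition matrix given as a list of rows."""
    n = len(T)
    return max(
        0.5 * sum(abs(T[x][z] - T[y][z]) for z in range(n))
        for x, y in product(range(n), repeat=2)
    )

T = [[0.9, 0.1],
     [0.2, 0.8]]
print(ergodic_coefficient(T))  # approximately 0.7 = (1/2)(|0.9-0.2| + |0.1-0.8|)
```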
Linear transition operators are not the only lower transition operators for which the lower bound of Theorem~\ref{the:CoeffOfErgod:Approximation} is the actual value of the coefficient of ergodicity. \cite{2013Skulj} show that this is also the case for lower transition operators defined using Choquet integrals. Let $\{ L_x \}_{x\in\mathcal{X}}$ be a family of Choquet capacities, and assume that for all $x\in\mathcal{X}$, $[\underline{T} \cdot ](x)$ is the Choquet integral with respect to $L_x$. By \cite[Corollary~23]{2013Skulj}, \begin{align}
\coefferga{\underline{T}}
&= \max \big\{ \max\{ L_{x}(A) - L_{y}(A) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X} \big\}.
\label{eqn:CoeffOfErgod:Choquet} \end{align} This result allows us to exactly compute $\coefferga{\underline{T}}$. However, we are often interested in $\coefferga{\underline{T}^{k}}$, where $k > 1$ is an integer. For $k\in\mathbb{N}$ and $x\in\mathcal{X}$, we define the Choquet capacity $L^{k}_{x}$ for all $A \subseteq \mathcal{X}$ as $L^{k}_{x}(A) \coloneqq [\underline{T}^k \indic{A}](x)$. In general, the coherent lower expectation $[\underline{T}^k \cdot](x)$ is \emph{not} a Choquet integral with respect to the Choquet capacity $L^{k}_{x}$, a fact that is seemingly overlooked in \cite[Section~5.5]{2013Skulj}. What does hold is that \[
\max \{ \max \{ L^{k}_{x}(A) - L^{k}_{y}(A) \colon x,y \in \mathcal{X} \} \colon \emptyset \neq A \subset \mathcal{X} \}
\] is a lower bound of $\coefferga{\underline{T}^{k}}$, as it is equal to the lower bound of Theorem~\ref{the:CoeffOfErgod:Approximation}.
\begin{lemma} \label{lem:StoppigCriterionWithConvergence}
Let $\underline{Q}$ be a lower transition rate operator and assume that $f$ is an element of $\setoffna$ such that $\lowtranopa{\infty} f \coloneqq \lim_{t \to \infty} \lowtranopa{t} f$ is a constant function.
We let $t \in \reals_{\geq 0}$, $\epsilon \in \reals_{> 0}$ and $\delta_1, \dots, \delta_k \in \reals_{\geq 0}$ such that $\sum_{i=1}^{k} \delta_i = t$ and for all $i \in \{ 1, \dots, k \}$, $\delta_{i} \norm{\underline{Q}} \leq 2$, and define $g \coloneqq \Phi(\delta_1,\dots,\delta_k) f$.
If $\norm{\lowtranopa{t} f - g} \leq \epsilon$ and $\norm{g}_{c} \leq \epsilon$, then
\[
\norm{\lowtranopa{\infty} f - \tilde{g}} \leq 2 \epsilon
\]
and for all $\Delta \in \reals_{\geq 0}$,
\[
\norm{\lowtranopa{t+\Delta} f - \tilde{g}} \leq 2 \epsilon,
\]
where $\tilde{g} \coloneqq (\max \Phi(\delta_1, \dots, \delta_k) f + \min \Phi(\delta_1, \dots, \delta_k) f) / 2$. \end{lemma} \begin{proof} Note that by \ref{prop:LTO:BoundedByMinAndMax},
\[
\min \lowtranopa{t} f \leq \min \lowtranopa{t + \Delta} f \leq \lowtranopa{\infty} f \leq \max \lowtranopa{t + \Delta} f \leq \max \lowtranopa{t} f.
\]
Since by assumption $\norm{\lowtranopa{t} f - g } \leq \epsilon$, it follows that
\[
\min g - \epsilon \leq \min \lowtranopa{t} f \leq \min \lowtranopa{t + \Delta} f \leq \lowtranopa{\infty} f \leq \max \lowtranopa{t + \Delta} f \leq \max \lowtranopa{t} f \leq \max g + \epsilon.
\]
Hence,
\[
\lowtranopa{\infty} f - \tilde{g} = \lowtranopa{\infty} f - \max g + \frac{\max g - \min g}{2} \leq \epsilon + \norm{g}_{c},
\]
and
\[
\lowtranopa{\infty} f - \tilde{g} = \lowtranopa{\infty} f - \min g - \frac{\max g - \min g}{2} \geq - \epsilon- \norm{g}_{c},
\]
where $\tilde{g} \coloneqq (\max g + \min g) / 2$.
Therefore, if $\norm{g}_{c} \leq \epsilon$, then
\[
\norm{\lowtranopa{\infty} f - \tilde{g}} \leq 2 \epsilon,
\]
which proves the first inequality of the statement.
The proof of the second inequality of the statement is almost entirely similar. \end{proof}
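In the precise case the lemma is easy to test numerically: $\Phi(\delta_1, \dots, \delta_k) f$ can then be computed exactly as $T^k f$ for a transition matrix $T$, so one may take $\epsilon = \norm{g}_{c}$. A minimal sketch (ours; the matrix and all names are illustrative assumptions, and we take $\norm{\cdot}_{c}$ to be half the spread $(\max - \min)/2$, as the proof suggests):

```python
# Numeric check of the lemma's conclusion for a precise two-state chain,
# where the approximation g is the exact power T^k f.
def apply_T(T, f):
    """One step of the (linear) transition operator: (T f)(x)."""
    return [sum(T[x][z] * f[z] for z in range(len(f))) for x in range(len(f))]

T = [[0.9, 0.1],
     [0.2, 0.8]]
f = [1.0, 0.0]          # indicator of state 0
g = f
for _ in range(50):     # g = T^50 f, very close to the constant limit
    g = apply_T(T, g)

eps = (max(g) - min(g)) / 2      # here ||g||_c is a valid epsilon
g_tilde = (max(g) + min(g)) / 2
limit = 2.0 / 3.0                # stationary probability of state 0 is 2/3
assert abs(limit - g_tilde) <= 2 * eps + 1e-12
```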
\begin{proof}[Proof of Proposition~\ref{prop:StoppingCriterionWithErgodicity}]
If $\underline{Q}$ is ergodic, then by definition $\lim_{t \to \infty} \lowtranopa{t} f$ is a constant function for all $f \in \setoffna$.
Therefore, the statement follows immediately from Lemma~\ref{lem:StoppigCriterionWithConvergence}. \end{proof}
In Example~\ref{binex:AdaptiveApproximation}, we have observed that keeping track of $\epsilon'$ increases the duration of the computations. The following proposition shows that, even if one is not really interested in the value of $\epsilon'$, there is still a reason why one nevertheless would want to keep track of $\epsilon'$: it could be that we can stop the approximation because we have already attained the desired maximal error. \begin{proposition} \label{prop:StoppingCriterionWithRemainingCost}
Let $\underline{Q}$ be a lower transition rate operator, $f \in \setoffna$ and $t, \epsilon \in \reals_{> 0}$.
Let $s$ denote some sequence $\delta_1, \dots, \delta_k$ in $\reals_{\geq 0}$ such that $\smash{t' \coloneqq \sum_{i = 1}^{k} \delta_i \leq t}$ and, for all $i \in \{ 1, \dots, k \}$, $\smash{\delta_i \norm{\underline{Q}} \leq 2}$.
If $\epsilon' \leq \epsilon$ is an upper bound for $\norm{\lowtranopa{t'} f - \Phi(s) f}$ and $\norm{\Phi(s) f}_{v} \leq \epsilon - \epsilon'$, then
\[
\norm{\lowtranopa{t} f - \Phi(s) f}
\leq \epsilon.
\] \end{proposition} \begin{proof} First, note that by the semi-group property $\lowtranopa{t} f = \lowtranopa{t - t'} \lowtranopa{t'} f$.
Using \ref{prop:LTO:BoundedByMinAndMax} yields
\[
\min \lowtranopa{t'} f \leq \lowtranopa{t} f \leq \max \lowtranopa{t'} f.
\]
Hence
\begin{align*}
\norm{\lowtranopa{t} f - \Phi(s)f}
&= \max \{ \abs{[\lowtranopa{t} f](x) - [\Phi(s)f](x) } \colon x \in \mathcal{X} \} \\
&\leq \max \{ \max \{\abs{\max \lowtranopa{t'} f - [\Phi(s)f](x) }, \abs{\min \lowtranopa{t'} f - [\Phi(s)f](x) } \} \colon x \in \mathcal{X} \},
\end{align*}
where the inequality follows from the obtained bounds on $\lowtranopa{t} f$.
Let $x^{+} \in \mathcal{X}$ such that $[\lowtranopa{t'} f](x^{+}) = \max \lowtranopa{t'} f$.
Then for all $x \in \mathcal{X}$,
\begin{align*}
\abs{\max \lowtranopa{t'} f - [\Phi(s)f](x) }
&= \abs{[\lowtranopa{t'} f](x^{+}) - [\Phi(s)f](x) - [\Phi(s)f](x^{+}) + [\Phi(s)f](x^{+}) } \\
&\leq \abs{[\lowtranopa{t'} f](x^{+}) - [\Phi(s)f](x^{+}) } + \abs{[\Phi(s)f](x) - [\Phi(s)f](x^{+})} \\
&\leq \norm{\lowtranopa{t'} f - \Phi(s) f} + \norm{\Phi(s) f}_{v}.
\intertext{Similarly,
}
\abs{\min \lowtranopa{t'} f - [\Phi(s)f](x) }
&\leq \norm{\lowtranopa{t'} f - \Phi(s) f} + \norm{\Phi(s) f}_{v}.
\end{align*}
Therefore,
\begin{align*}
\norm{\lowtranopa{t} f - \Phi(s)f}
&\leq \max \{ \max \{\abs{\max \lowtranopa{t'} f - [\Phi(s)f](x) }, \abs{\min \lowtranopa{t'} f - [\Phi(s)f](x) } \} \colon x \in \mathcal{X} \} \\
&\leq \max \{ \norm{\lowtranopa{t'} f - \Phi(s) f} + \norm{\Phi(s) f}_{v} \colon x \in \mathcal{X} \} \\
&= \norm{\lowtranopa{t'} f - \Phi(s) f} + \norm{\Phi(s) f}_{v}.
\end{align*}
If we now assume that $\norm{\lowtranopa{t'} f - \Phi(s) f} \leq \epsilon' \leq \epsilon$ and $\norm{\Phi(s) f}_{v} \leq \epsilon - \epsilon'$, then
\[
\norm{\lowtranopa{t} f - \Phi(s)f}
\leq \epsilon. \qedhere
\] \end{proof}
\end{document}
\begin{document}
\title{Law of the Iterated Logarithm for the random walk on the infinite percolation cluster} \maketitle \small\textsc{Abstract:} We show that random walks on the infinite supercritical percolation clusters in $\mathbb{Z}^d$ satisfy the usual Law of the Iterated Logarithm. The proof combines Barlow's Gaussian heat kernel estimates and the ergodicity of the random walk on the environment viewed from the random walker as derived by Berger and Biskup.
\normalsize \section{Introduction}
Asymptotic properties of random walks in $\mathbb {Z}^d$ are very well understood. Their convergence to $d$-dimensional Brownian motions and their almost sure behavior (such as the law of the iterated logarithm) were derived decades ago. A natural question is what happens to random walks on graphs that are, in some sense, perturbations of $\mathbb{Z}^d$. One of the first examples to consider is the random graph obtained by taking the infinite cluster $\mathcal {C}=\mathcal{C}(\omega)$ of a supercritical percolation process: one ``perturbs'' the original lattice by removing some edges independently. Various large-scale properties of this infinite graph have been studied with techniques such as coarse-graining. One of the most natural questions is to study the behavior of the random walk on this cluster.
One can for instance consider the {continuous-time simple random walk} (CTSRW) on $\mathcal{C}$. This is the process $X^{\omega}$ that waits an exponential time of mean 1 at each vertex $x$ and jumps along one of the open edges $e$ adjacent to $x$, with each edge chosen with equal probability. This process has been studied in a number of papers. Grimmett, Kesten, and Zhang (\cite{GrimmettKestenZhang}, 1993) proved that $X^{\omega}$ is almost surely recurrent if $d=2$ and transient if $d \geq 3$. Barlow (\cite{Barlow}, 2004) proved Gaussian estimates for $X^{\omega}$. An invariance principle in every dimension was proved independently by Berger and Biskup (\cite{BergerBiskup}, 2004) and by Mathieu and Piatnitski (\cite{MathieuPiatnitski}, 2004). Before that, Sidoravicius and Sznitman proved this result for $d\geq 4$ (\cite{SidoraviciusSznitman}, 2004). All these results show that a property that holds for random walk on $\mathbb{Z}^d$ still holds for random walk on the infinite supercritical percolation cluster.
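For readers who wish to experiment, here is a minimal simulation sketch of this CTSRW (ours, not from the paper): bond percolation on a finite box of $\mathbb{Z}^2$ stands in for the infinite cluster, and all function names and parameters are illustrative assumptions.

```python
# Illustrative simulation: CTSRW on a bond percolation configuration
# restricted to an n x n box of Z^2, with mean-1 exponential waiting times.
import random

def percolation_edges(n, p, rng):
    """Open edges of bond percolation with parameter p on an n x n box."""
    edges = set()
    for x in range(n):
        for y in range(n):
            if x + 1 < n and rng.random() < p:
                edges.add(((x, y), (x + 1, y)))
            if y + 1 < n and rng.random() < p:
                edges.add(((x, y), (x, y + 1)))
    return edges

def neighbours(v, edges):
    """Vertices joined to v by an open edge."""
    return [b if a == v else a for (a, b) in edges if v in (a, b)]

def ctsrw(start, edges, t_max, rng):
    """Run the CTSRW until time t_max; return the final position."""
    t, v = 0.0, start
    while True:
        t += rng.expovariate(1.0)   # mean-1 exponential waiting time
        if t > t_max:
            return v
        nbrs = neighbours(v, edges)
        if nbrs:
            v = rng.choice(nbrs)    # jump along a uniformly chosen open edge

rng = random.Random(0)
edges = percolation_edges(20, 0.7, rng)   # p = 0.7 > p_c(2) = 1/2
end = ctsrw((10, 10), edges, t_max=5.0, rng=rng)
```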
It is natural to ask if this is still valid if one looks for instance at almost sure properties of the random walk
(recall that almost sure properties often describe the behavior of the walk at exceptional times). Our goal in the present note is to show that it is indeed the case for the law of the iterated logarithm (LIL).
\begin{theorem} \label{law of the iterated logarithm} Consider $d \ge 2$ and suppose that $p>p_c$, where $p_c= p_c (d)$ is the critical bond percolation probability in $\mathbb {Z}^d$. Then, there exists a positive and finite constant $c(p,d)$ such that for almost all realizations of percolation with parameter $p$, for all $x$ in the infinite cluster $\mathcal{C}$, the continuous-time random walk $X^{\omega}$ started from $x$ satisfies almost surely the following LIL:
$$\limsup_{t\rightarrow \infty} \frac{\left|X_t^{\omega}\right|}{\sqrt{t\log \log t}}=c(p,d).$$ \end{theorem}
Here and throughout the paper, $|x| = |x|_1 = \sum_{j=1}^d |x_j|$ stands for the $L^1$ norm of $x = (x_1, \ldots, x_d) \in {\mathbb Z}^d$. Our proof can trivially be adapted to other norms (this would just change the value of the constant). Note also that we are studying almost sure properties of the walk, so that the annealed and quenched statements are identical here (once we say that the constant $c(p,d)$ does not depend on the environment).
The main ingredients of our proof are the Gaussian bounds derived in Barlow \cite{Barlow} and the ergodicity of Kipnis-Varadhan's \cite {KipnisVaradhan} random walk on the environment as seen from the random walker derived by Berger and Biskup in \cite{BergerBiskup}. These two results have in fact been instrumental in the (much more difficult) derivation of the invariance principle for this random walk.
The paper is organized as follows. In Section 2, we will show that one can find positive and finite $c_1 (p,d)$ and $c_2 (p,d)$ such that almost surely
$$ c_1(p,d) \le \limsup_{t\rightarrow \infty} \frac{\left|X_t^{\omega}\right|}{\sqrt{t\log \log t}} \le c_2(p,d).$$ This will be based on the Gaussian estimates derived in \cite{Barlow}.
The upper bound is an easy application of the Borel-Cantelli Lemma whereas the proof of the lower bound will use the Markov property and the fact that one can apply Gaussian bounds uniformly for $x$ in a ball of sufficiently large radius depending on $t$.
In Section 3, we derive a zero-one law for the limsup of a discrete analog of the CTSRW. The main ingredient will be the ergodicity of a certain
shift $T$, related to Kipnis-Varadhan's {random walk on the environment}. It has been proved by Noam Berger and Marek Biskup \cite{BergerBiskup} that this shift $T$ is ergodic. Translating properties of the random walk in terms of this shift will allow us to derive a zero-one law for the limsup in the LIL for this discrete-time random walk.
Finally, in Section 4, we conclude by checking that the time-scales of the discrete-time random walk and of the continuous-time random walk are comparable.
\section{Weak LIL for the continuous random walk}
We will consider Bernoulli bond percolation of parameter $p$ on $\mathbb{Z}^d$ defined on a probability space ($\Omega$,$\mathcal{F}$,$\mathbb{P}_p$). It is well known (Grimmett \cite{grimmett}) that there exists $p_c\in(0,1)$ such that when $p>p_c$ there is a unique infinite open cluster, that we denote by $\mathcal{C}$. For ${\mathbb P}_p$ almost every environment $\omega \in \Omega$ and $x \in \mathcal {C}$, we define a CTSRW $X^{\omega}=(X_t^{\omega},t\geq0)$ started from $x$ under the probability measure $\mathbb{P}_{\omega}^x$. \textbf{In the whole paper, we fix $p>p_c$ and $d\geq2$}.
Because of translation-invariance of our problem (and because we are dealing with almost sure properties), we can restrict ourselves to the case $x=0$ and work with the probability measure
$\tilde{\mathbb{P}}_p=\mathbb{P}_p(\,\cdot\mid 0 \in \mathcal{C})$ and $X^{\omega}_0=0$. We will use the notation $\Phi(t)=\sqrt{t\log \log t}$ for all $t>e$.
\medbreak
We now recall Barlow's Gaussian estimates.
The first one uses the {chemical distance} $d_{\omega}$ (or graph distance) on $\mathcal{C}$. For every $x$ and $y$ in $\mathcal{C}$, $d_{\omega}(x,y)$ is the length of the shortest path between $x$ and $y$ that uses only edges in $\mathcal{C}$. For every integer $n$ and $x\in \mathcal{C}$, $\mathcal{B}_{\omega}(x,n)$ will denote the ball of radius $n$ and of center $x$ for distance $d_{\omega}$.
\begin{proposition} \label{maximum}(Barlow, \cite{Barlow}) There exist two constants $a_1=a_1(p,d)$ and $a_2=a_2(p,d)$ such that for every $\gamma>0$, there exists a finite random variable $M_{\gamma}$ satisfying for almost every environment $\omega$: $$\text{for all } n\geq M_{\gamma}(\omega),\ \mathbb{P}^0_{\omega}(\max_{k\in \left[0,n\right]} d_{\omega} (0,X_k^{\omega}) > \gamma \Phi(n)) \leq a_1 \exp\left(-a_2\frac{(\gamma\Phi(n))^2}{n}\right).$$ \end{proposition} Recall that this statement holds for a general class of graphs (see Proposition 3.7 of Barlow \cite{Barlow}); percolation estimates (see Theorem 2.18 and Lemma 2.19 of Barlow \cite{Barlow}) show that the percolation cluster belongs to this class. The other result that we will use is the Gaussian bound itself: \begin{theorem} (Barlow, \cite{Barlow}) \label{Gaussian bound} There exist finite constants $c_1$, \ldots, $c_8$ and $\epsilon>0$, only depending on $p$ and $d$, that satisfy the following property. There exists a random variable $S_0$ with $\tilde{\mathbb{P}}_p(S_0\geq n)\leq c_7\exp(-c_8 n^{\epsilon})$ such that, for almost every environment $\omega$ with $0 \in \mathcal{C}$, every $y\in \mathcal{C}$ and every $t\geq 1$:
(1) The transition density $p_t^{\omega}(0,y)$ of $X^{\omega}$ satisfies the Gaussian bound
$$c_1t^{-d/2}e^{-c_2{\left|y\right|^2}/{t}} \leq p_t^{\omega}(0,y) \leq c_3t^{-d/2}e^{-c_4{\left|y\right|^2}/{t}}\text{ for }t\geq S_0(\omega)\vee \left|y\right|.$$
(2) $c_5n^d \leq Vol(\mathcal{B}_{\omega}(0,n))\leq c_6 n^d\ \text{for}\ n\geq S_0(\omega).$ \end{theorem}
Note that by translation invariance one can define, for each $x \in \mathbb{Z}^d$, a random variable $S_x$ satisfying the analogous conditions (with the same constants $c_1$, \ldots, $c_8$, $\epsilon$), where one just replaces the origin $0$ by $x$ (and therefore replaces $y$ by $x+y$).
Let us remark that there are no uniform Gaussian bounds valid for all $x,y\in \mathcal{C}$ and all $t>0$, because (almost surely) every finite graph is actually embedded somewhere in the infinite cluster. We can now derive almost sure upper and lower bounds for our limsup.
\begin{proposition} \label{upper bound} \textbf{(Upper bound)} There exists a finite $c_+=c_+(p,d)$ such that for almost every environment $\omega$,
$$\mathbb{P}^0_{\omega}\ a.s.\ \limsup_{t\rightarrow \infty} \frac{\left|X_t^{\omega}\right|}{\Phi(t)}\leq c_+.$$ \end{proposition}
\textbf{Proof:} Fix $\omega$ an environment containing 0. The proof goes along the same lines as in the Brownian case. Let $\gamma>0$, and define the following events: $$A_n^{\omega}=\left\{\max_{k \in \left[0,2^n\right]}d_{\omega}(0,X_k^{\omega}) > \gamma \Phi(2^n) \right\}.$$ Proposition \ref{maximum} shows that for all $n$ large enough, $$\mathbb{P}^0_{\omega}(A_n^{\omega})\leq a_1\exp\left(-a_2\frac{(\gamma\Phi(2^n))^2}{2^n}\right)\leq 2a_1 n^{-a_2\gamma^2}.$$
Provided $\gamma$ is large enough, the Borel-Cantelli Lemma shows that almost surely only finitely many of the events $A_n^{\omega}$ occur. Using the fact that $\left|.\right| \leq d_{\omega}(0,.)$, we get that for $n$ large enough, $\max_{k\in[0,2^n]}\left|X_k^{\omega}\right| < \gamma \Phi(2^{n})$. We conclude that for $n$ large enough, $\left|X_n^{\omega}\right| < 2\gamma \Phi(n)$ (since $\Phi(2^k)\leq 2\Phi(n)$ when $2^{k-1}\leq n\leq 2^k$ and $n$ is large).
\begin{flushright} $\square$ \end{flushright}
\begin{proposition} \label{lower bound} \textbf{(Lower bound)} There exists a positive $c_-=c_-(p,d)$ such that for almost every environment $\omega$,
$$\mathbb{P}^0_{\omega}\ \text{a.s.},\ c_-\leq \limsup_{t\rightarrow \infty} \frac{\left|X_t^{\omega}\right|}{\Phi(t)}.$$ \end{proposition}
Let us first present the outline of the proof. Consider $q>1$ and $\gamma>0$ (we will choose their values later). As in the Brownian case, set
$D_{n}^{\omega}=X_{q^n}^{\omega}-X_{q^{n-1}}^{\omega}$. We have $\left|X_{q^n}^{\omega}\right|\geq
\left|D_n^{\omega}\right|-|X_{q^{n-1}}^{\omega}|$. Using the upper bound, we obtain that almost surely, for $n$ large enough:
\begin{align}\left|X_{q^n}^{\omega}\right|&\geq \left|D_n^{\omega}\right|-2c_+\Phi(q^{n-1}).\end{align}Because $\Phi(q^{n-1})\leq q^{-1/2}\Phi(q^n)$, the second term can be made much smaller than $\Phi(q^n)$, provided $q$ is large enough. Then, in order to prove the result, it is enough to bound $D_n^{\omega}$ from below. Define the events $C_n^{\omega}=\left\{\left|D_n^{\omega}\right|>\gamma \Phi(q^n)\right\}.$ If almost surely these events hold for infinitely many $n$, then we are done. We define the $\sigma$-fields $\mathcal{F}_{n}^{\omega}=\sigma(X^{\omega}_k,k\leq q^{n})$. We will apply the Borel-Cantelli Lemma generalized to dependent events (see Durrett \cite{Durrett}, chapter 4, paragraph 4.3). We therefore need to prove that $$\mathbb{P}_{\omega}^0\text{ a.s. }\sum_{n\geq 1} \mathbb{E}^0_{\omega}[C_n^{\omega}
\left|\mathcal{F}^{\omega}_{n-1}\right.]=\infty.$$ Using the Markov property and Gaussian bounds, we will be able to
find a lower bound for $\mathbb{E}^0_{\omega}[C_n^{\omega}
\left|\mathcal{F}^{\omega}_{n-1}\right.].$
In order to apply these bounds, we need to control not only $S_0$ (from Theorem \ref{Gaussian bound}) but also $S_x$ for $x=X^{\omega}_{q^{n-1}}$. We first prove that it is indeed possible, using Gaussian estimates and the upper bound.
\begin{lemma} \label{environment} Let $\gamma>0$. For almost every environment $\omega$, we have almost surely $S_{X^{\omega}_{n}}\leq \gamma\Phi(n)$ for $n$ large enough. \end{lemma}
\textbf{Proof:} Let $\gamma>0$. Define for each integer $n$ the set $$B_n=\left\{\exists y\in B(0, 2c_+\Phi(n))\text{ s.t. } S_y\geq \gamma\Phi(n)\right\},$$ where $S_y$ is the random variable of Theorem \ref{Gaussian bound}. A union bound combined with the tail estimate of that theorem yields $$\tilde{\mathbb{P}}_p(B_n)\leq Vol\left(B(0,2c_+\Phi(n))\right) c_7 \exp\left(-c_8(\gamma\Phi(n))^{\epsilon}\right).$$ The right-hand side of the inequality is summable, so that (by Borel-Cantelli) $B_n$ holds for only finitely many $n$, for almost every environment. But almost surely, $\left|X_{n}^{\omega}\right|$ is less than $2c_+\Phi(n)$ for $n$ large enough. Combining these two facts, we obtain the claim. \begin{flushright} $\square$ \end{flushright}
\textbf{Proof of Proposition \ref{lower bound}:} Let $q>1$, $\gamma>0$ and $\kappa>0$ be such that $c_5\kappa^d>c_6+1$. Note that $\kappa$ does not depend on $\gamma$ and $q$. Set $t_n=q^n-q^{n-1}$. By the Markov property, we get for $n\geq1$, $$\mathbb{E}^0_{\omega}(C_n^{\omega} \left|\mathcal{F}^{\omega}_{n-1}\right.)=\mathbb{P}^0_{\omega_n}\left[\gamma \Phi(q^n)<\left|X_{t_n}^{\omega_n}\right|\right]\geq \mathbb{P}^0_{\omega_n}\left[\gamma \Phi(q^n)<\left|X_{t_n}^{\omega_n}\right|<\kappa\gamma \Phi(q^n)\right]=G_n(\omega_n)$$ where $\omega_n=\tau_{X_{q^{n-1}}^{\omega}}(\omega)$ ($\tau_x$ is the shift defined by $(\tau_x\omega)_y=\omega_{x+y}$) and $$G_n(\omega)=\mathbb{P}^0_{\omega}\left[\gamma\Phi(q^n)<\left|X^{\omega}_{t_n}\right|<\kappa\gamma\Phi(q^n)\right].$$
The function $G_n$ is well-defined and measurable. If $\mathcal{A}_n(\omega)$ is the annulus $$\mathcal{A}_n(\omega)=\left\{z\in \mathcal{C},\text{ s.t. }\gamma\Phi(q^n)< \left|z\right| <\kappa \gamma \Phi(q^n) \right\},$$ we find by definition of the transition density $G_n(\omega)=\sum_{z\in \mathcal{A}_n(\omega)}p_{t_n}^{\omega}(0,z)$. We deduce:
\begin{align}\mathbb{E}^0_{\omega}(C_n^{\omega} \left|\mathcal{F}^{\omega}_{n-1}\right.)=\sum_{z\in \mathcal{A}_n(\omega_n)}p_{t_n}^{\omega_n}(0,z).\end{align} Using Lemma \ref{environment}, we know that almost surely there exists $N$ large enough such that for every $n$ larger than $N$, $S_0(\omega_n)=S_{X^{\omega}_{q^{n-1}}}(\omega)\leq \gamma\Phi(q^{n-1})\leq t_n$. For $n\geq N$, one can therefore use the Gaussian estimates of Theorem \ref{Gaussian bound} for every $z\in \mathcal{A}_n(\omega_n)$; for such a $z$ we get: \begin{align}p_{t_n}^{\omega_n}(0,z)&\geq c_1t_n^{-d/2}\exp\left(-\frac{c_2(\kappa\gamma\Phi(q^n))^2}{t_n}\right)\geq c_1t_n^{-d/2}n^{-c_2(\kappa\gamma)^2}.\end{align} Using Lemma \ref{environment} again, Theorem \ref{Gaussian bound} yields that the volume growth property holds for $S_0(\omega_n)$. Recalling the definition of $\kappa$, we find: \begin{align}Vol(\mathcal{A}_n(\omega_n))&\geq (\gamma \Phi(q^n))^{d}\geq \gamma^d t_n^{d/2}.\end{align} Combining (2.3) and (2.4) in (2.2), we obtain that there exists a constant $c>0$ such that almost surely, for $n$ large enough, $$G_n(\omega_n)\geq cn^{-c_2(\kappa\gamma)^2}.$$
Provided $\gamma$ is small enough, we can use the generalized Borel-Cantelli Lemma (e.g. \cite{Durrett}). We get that almost surely, there exist infinitely many integers $n$ such that $\left|D_n^{\omega}\right|>\gamma\Phi(q^n)$. If $q>0$ is taken large enough (note that $\kappa$, $\gamma$ and $c_2$ do not depend on $q$), we can use inequality (2.1) to prove that almost surely:
$$ \left|X_{q^n}^{\omega}\right|\geq \gamma \Phi(q^n)-2c_+q^{-1/2}\Phi(q^n)>\frac{\gamma}{2}\Phi(q^n)$$ for infinitely many $n$, which is the claim. \begin{flushright} $\square$ \end{flushright}
$ $\\ \textbf{Remark 1:} In order to bound the sum in (2.2) from below, Gaussian bounds alone were not sufficient. Without the volume growth property, the annulus could contain only a few elements. Even if the exponential term is not too small (typically of order $n^{-s}$ for $s$ small), the term $t_n^{-d/2}$ (which corresponds to $t^{-d/2}$ for the Brownian motion) could be very small and make the series summable. The cardinality of the annulus was critical in order to balance out this term.
$ $\\
\textbf{Remark 2:} Our goal was to obtain a result in the ${L}^1$ norm. Unfortunately, the natural distance on graphs is the chemical distance $d_{\omega}(.,.)$. In the lower bound, this does not create any trouble because of the trivial inequality $\left|x\right|\leq d_{\omega}(0,x)$. But it could happen that the chemical distance is much bigger than the $L^1$ norm. The proof of Theorem 2.18 in Barlow \cite{Barlow} deals precisely with this issue, thanks to a result by Antal and Pisztora \cite{AntalPisztora} showing that the chemical distance on $\mathcal{C}$ and the ${L}^1$ norm are not that different on a supercritical percolation cluster.
\section{Zero-one Law for the blind random walk}
In the present section, we consider discrete-time random walks. We first introduce the two random walks we will use. Then we recall an ergodicity result proved in \cite{BergerBiskup} and derive the Zero-one Law. Our proofs are rather direct applications of the ergodicity statement of \cite{BergerBiskup}.
For each $x\in \mathbb{Z}^d$, let $\tau_x$ be the shift from $\Omega$ to $\Omega$ defined by: $(\tau_x\omega)_y=\omega_{y+x}$. For each $\omega$, let $Y_n^{\omega}$ be the simple random walk (called the \textbf{blind random walk}) on $\mathcal{C}$ started at the origin. At each unit of time, the walk picks one of its $2d$ neighbors uniformly at random; if the corresponding edge is open, the walk moves to this neighbor, otherwise it stays put. This random walk may seem less natural than the walk that chooses uniformly one of the accessible neighbors and jumps to it, but the blind random walk preserves the uniform measure on $\mathcal {C}$, so that the stationary measure of the environment as seen from the walker turns out to be simpler.
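As a purely illustrative aside (not part of the argument), the transition rule of the blind walk can be sketched in a few lines of Python; here the environment is supplied through a hypothetical \texttt{open\_edge} predicate and the walk lives on $\mathbb{Z}^2$.

```python
import random

def blind_walk(open_edge, start, steps, rng):
    """Run the blind random walk on Z^2 for a given number of steps.

    open_edge(x, y) is a (hypothetical) predicate telling whether the bond
    between the neighboring sites x and y is open in the environment.
    At each unit of time the walk picks one of its 2d = 4 neighbors
    uniformly at random; it moves there only if the connecting edge is
    open, and otherwise it stays put.
    """
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    pos, path = start, [start]
    for _ in range(steps):
        dx, dy = rng.choice(directions)
        nxt = (pos[0] + dx, pos[1] + dy)
        if open_edge(pos, nxt):
            pos = nxt
        path.append(pos)
    return path
```

With every edge open the blind walk reduces to the simple random walk on $\mathbb{Z}^2$, and with every edge closed it never leaves its starting point; this is the coupling intuition exploited later when comparing time scales.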
It is well known (cf.\ Kipnis and Varadhan \cite{KipnisVaradhan}) that the Markov chain $(Y_n^{\omega})_{n\geq 0}$ induces a Markov chain on $\Omega$ (the so-called \textbf{Markov chain on the environment}), which can be interpreted as the trajectory of the ``environment viewed from the perspective of the walk''. It is defined as $$\omega_n ( \cdot) = \omega ( \cdot + Y_n^{\omega})=\tau_{Y_n^{\omega}}\omega(.).$$ One can describe the chain $(\omega_n)$ as follows. At each step $n$, one chooses one of the $2d$ neighbors of the origin at random and calls it $e$. If the corresponding edge is closed for $\omega_n$, then $\omega_{n+1} = \omega_n$; otherwise $\omega_{n+1} (\cdot) = \tau_e \circ \omega_n$, where $\tau_e \circ \omega (\cdot) = \omega ( \cdot + e)$.
It is straightforward to check that the probability measure $\tilde{\mathbb{P}}_p$ is a reversible and therefore stationary measure for the Markov chain $(\omega_n)$. This allows us to extend our probability space to $\Xi= \Omega^{\mathbb{Z}}$ (endowed with the product $\sigma$-algebra $\mathcal {H}= \mathcal{F}^{\otimes\mathbb{Z}}$) and to define $\omega_n$ also for negative $n$'s in such a way that the family $(\omega_n, n \in \mathbb {Z})$ is stationary. Let $\mu$ denote the probability measure associated to the Markov chain.
Note that under the measure $\mu$, and for all $n \in \mathbb {Z}$, the law of $(\omega_n, \omega_{n+1}, \ldots)$ is identical to that of $(\omega_0, \omega_1, \ldots) $. On the other hand, the marginal law of $\omega_0$ (still under $\mu$) is $\tilde{\mathbb{P}}_p$. One then defines $T:\Xi \rightarrow \Xi,\bar{\omega}\mapsto T\bar{\omega}$ to be $(T\bar{\omega})_n=\bar{\omega}_{n+1}$.
\begin{theorem}(Berger, Biskup, \cite {BergerBiskup}) \label{ergodicity} $T$ is ergodic with respect to $\mu$. In other words, for all $A\in \mathcal{H}$, if $T^{-1}(A) =A$, then $\mu(A)$ is equal to 0 or 1. \end{theorem}
We refer to the paper of Berger and Biskup \cite{BergerBiskup} for the proof. Define for every $a>0$ and $\omega$ the event: $$A_{\omega}(a)=\left\{\limsup_{n\rightarrow \infty}\frac{\left|Y_n^{\omega}\right|}{\Phi(n)}>a\right\}.$$ Let us now state and prove a consequence of this ergodicity for our law of the iterated logarithm:
\begin{corollary} \label{loi du tout ou rien} \textbf{(Zero-one Law)} Let $a\geq 0$. The probability of the event $$B_a=\left\{\mathbb{P}_{\omega}^x\text{ a.s. }A_{\omega}(a)\text{ holds for all }x\in \mathcal{C}\right\}$$ is equal to $0$ or to $1$. \end{corollary}
\textbf{Proof:} Our goal is to use the ergodicity of the environment and to note that the considered event corresponds to a $T$-invariant set in $\Xi$. Let $a \geq 0$ and define the function $F$ on $\Omega$ by: $$F(\omega)=\mathbb{P}_{\omega}^0(A_{\omega}(a)).$$ This function is well-defined and measurable. Let us fix the environment $\omega$ for a little while and denote $\omega_n=\tau_{Y_n^{\omega}}\omega$. We claim that $(F(\omega_n))_n$ is a martingale with respect to the filtration $\mathcal{F}_n$ associated to the process $Y_n^{\omega}$. Indeed, the Markov property yields
$$F(\omega_n)=\mathbb{P}_{\omega}^0(A_{\omega}(a)\left|\mathcal{F}_n\right.).$$ This martingale is bounded and therefore converges almost surely as $n \to \infty$.
Moreover, it converges to the indicator function of $A_{\omega}(a)$ because this event is clearly in $\mathcal{F}_{\infty}=\sigma (\cup_{n\geq 0}\mathcal{F}_n)$.
By taking the Ces\`aro mean and then integrating with respect to $\omega$ (using the fact that the probabilities are bounded by $1$), we get that $$
\tilde{\mathbb{E}}_0\left[
\mathbb{E}_{\omega}^0 \left( \left| \lim_{N\rightarrow \infty} \frac{1}{N} \sum_{n=0}^{N-1}F(\omega_n) - 1_{A_{\omega}(a)} \right| \right) \right] = 0. $$
On the other hand, $F$ can be viewed as a measurable function on $\Xi$. The ergodicity of $\mu$ implies that for $\mu$-almost every $\bar{\omega}$: $$\lim_{N\rightarrow \infty} \frac{1}{N} \sum_{n=1}^{N}F(\bar{\omega}_n)= \int Fd\mu.$$ Let us recall that $\bar{\omega}$ has the same law under $\mu$ as $(\omega_n)_n$ under $\tilde{\mathbb{E}}_0[\mathbb{P}_{\omega}^0(.)]$. We deduce that the limit $1_{A_{\omega}(a)}$ is (up to a set of measure zero) constant. Since it is an indicator function, this means that either the corresponding event is almost surely true, or almost surely false. \begin{flushright} $\square$ \end{flushright}
\section {The Law of the Iterated Logarithm}
We can now derive the Law of the Iterated Logarithm. Let us first note that the previous corollary immediately implies that for a fixed $p > p_c$, there exists a constant $c'(p,d) \in [0, \infty]$ such that for almost every environment $\omega$, the blind random walk satisfies
$$\limsup_{n\rightarrow \infty} \frac{\left|Y_n^{\omega}\right|_1}{\Phi(n)}=c'(p,d)$$ almost surely (just choose $c'(p,d)$ to be the supremum of the set of $a$'s such that the event $B_a$ is almost surely satisfied).
Our next goal is to show that the time scales for the two random walks are comparable. Let $\omega \in \Omega_0$, and define the random variables $(T_n^{\omega})_n$ by $T_0^{\omega}=0$ and $$T_{n+1}^{\omega}=\inf\left\{t>T_n^{\omega}, X_t^{\omega} \neq X_{T_n^{\omega}}^{\omega}\right\}.$$ Clearly, the Law of Large Numbers implies that for all $\omega \in \Omega_{0}$, $T_{n}^{\omega} \sim n $ almost surely.
Let $\omega \in \Omega_0$, and define in the same way the random variables $(U_n^{\omega})$ by $U_0^{\omega}=0$ and $$U_{n+1}^{\omega}=\inf\left\{p>U_n^{\omega}, Y_p^{\omega} \neq Y_{U_n^{\omega}}^{\omega}\right\}.$$ The increments $(U_{n+1}^{\omega}-U_n^{\omega})_n$ are no longer i.i.d. Conditionally on the environment and on the past up to the $n$-th jump of $Y^{\omega}$, the law of $U_{n+1}^{\omega}- U_n^{\omega}$ is geometric and depends on the number $I(n)$ of incoming open edges at $Y^{\omega}_{U_n^{\omega}}$ (its mean is some function $f(I(n))$).
Ergodicity ensures that almost surely and for each $k \le 2d$, $$ \frac {1}{n} \sum_{j=1}^n 1_{ I(j) = k } \to i(k)$$ where $i(k)$ denotes the $\mu$-probability that $\omega_0$ has $k$ incoming open edges at the origin.
Using the Law of Large Numbers for sums of independent geometric random variables of mean $f(k)$ for each $k$, we readily get that for almost all $\omega\in \Omega_0$, $$\mathbb{P}_{\omega}^0\text{ a.s. }U_{n}^\omega / n \to \sum_{k=1}^{2d} i(k) f(k)=\alpha_p^{-1}.$$ This last quantity is clearly positive and finite.
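This law-of-large-numbers effect can also be checked numerically. The Python sketch below (an illustration, not part of the proof) draws the number $k$ of open edges at the current site from a made-up distribution $i(k)$ and waits a Geometric$(k/(2d))$ number of attempts, so that $f(k)=2d/k$ and the empirical mean of the waiting times should approach $\sum_k i(k)\,2d/k$.

```python
import random

def mean_waiting_time(i_dist, d, n, rng):
    """Empirical mean of n geometric waiting times of the blind walk.

    i_dist maps k (the number of open edges at the current site) to a
    hypothetical frequency i(k).  Between two jumps the blind walk waits
    a Geometric(k / (2d)) number of attempts, of mean f(k) = 2d / k, so
    the returned average should approach sum_k i(k) * 2d / k.
    """
    ks = list(i_dist)
    weights = [i_dist[k] for k in ks]
    total = 0
    for _ in range(n):
        k = rng.choices(ks, weights=weights)[0]
        attempts = 1  # trials until one of the k open edges is picked
        while rng.random() >= k / (2 * d):
            attempts += 1
        total += attempts
    return total / n
```

For instance, with $d=2$ and the (invented) frequencies $i(1)=i(2)=1/4$, $i(4)=1/2$, the limit is $\tfrac14\cdot 4+\tfrac14\cdot 2+\tfrac12\cdot 1 = 2$.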
We can now conclude the proof of the Law of the Iterated Logarithm for the continuous time random walk.
\medbreak
\textbf{Proof of Theorem \ref{law of the iterated logarithm}:} Consider the natural coupling for which $X_t^{\omega}$ and $Y_n^{\omega}$ have the same trajectories. More precisely, consider the \textbf{myopic random walk} $(Z_n^{\omega})_n$ that jumps at each unit of time, choosing uniformly one of the accessible neighbors, defined on a probability space $(\Omega_{\omega},\mathcal{F}_{\omega},\mathbb{P}^0_{\omega})$. Assume there exists a family $(T_i)_{i\in\mathbb{Z}_+}$ of i.i.d.\ exponential random variables with mean $1$, and an independent family $(S^{\omega}_x)_{x\in \mathbb{Z}^d}$ of independent random variables such that $S_x^{\omega}$ is geometric with parameter ${n_x^{\omega}}/{(2d)}$, where $n_x^{\omega}$ is the number of open edges adjacent to $x$ in the configuration $\omega$.
Define $T_p^{\omega}=\sum_{i=0}^{p-1}T_i$ and $n^{\omega}(t)=\sup\left\{p, T_p^{\omega} \leq t\right\}$. Then we can write the continuous time random walk as follows: $$X_t^{\omega}=Z_{n^{\omega}(t)}^{\omega}\ \ \forall t \geq 0.$$ Now, consider $U_p^{\omega}=\sum_{k=0}^{p-1}S_{Z_k^{\omega}}^{\omega}$ and $m^{\omega}(n)=\sup\left\{p, U_p^{\omega} \leq n\right\}$. Then we can write the blind random walk as follows: $$Y_n^{\omega}=Z_{m^{\omega}(n)}^{\omega}\ \ \forall n \geq 0.$$ Because of the estimates of the time scales of our two walks, we get that $$
\limsup_{t \rightarrow \infty}\frac{\left|X_t^{\omega}\right|}{\Phi(t)} = \limsup_{t \rightarrow \infty}\frac{\left|Z_{n^{\omega}(t)}^{\omega}\right|}{\Phi(n^{\omega}(t))}=\limsup_{n \rightarrow \infty}\frac{\left|Z_{n}^{\omega}\right|}{\Phi(n)}$$ and that $$
\limsup_{n \rightarrow \infty}\frac{\left|Y_n^{\omega}\right|}{\Phi(n)} =\limsup_{n \rightarrow \infty}\frac{\left|Z_{m^{\omega}(n)}^{\omega}\right|}{\Phi(m^{\omega}(n)/\alpha_p)}=\sqrt{\alpha_p}\limsup_{n \rightarrow \infty}\frac{\left|Z_{n}^{\omega}\right|}{\Phi(n)}.$$ From these two equalities, we deduce that
$$\limsup_{t \rightarrow \infty}\frac{\left|X_t^{\omega}\right|}{\Phi(t)}=\frac{1}{\sqrt{\alpha_p}}\limsup_{n \rightarrow \infty}\frac{\left|Y_n^{\omega}\right|}{\Phi(n)}\text{ a.s.}$$ The theorem follows readily. \begin{flushright} $\square$ \end{flushright}
Note that this also shows that the Law of the Iterated Logarithm holds for the blind and the myopic random walks.
\medbreak
\textbf{Acknowledgements:} This paper was written during my stay at the University of British Columbia. I would like to thank M.T. Barlow, who first taught me about this question, for his availability and the advice he gave me during my whole stay. I would also like to thank W. Werner for his careful reading of this paper and his numerous suggestions.
\begin{flushright}
\footnotesize \textsc{Department of Mathematics}
\textsc{University of British Columbia}
\textsc{Vancouver, British Columbia, Canada}
\medbreak
\textsc{DMA, Ecole Normale Sup\'erieure}
\textsc{45 rue d'Ulm, 75230 Paris cedex 05, France}
\textsc{E-mail:} hugo.duminil@ens.fr\end{flushright}
\end{document} |
\begin{document}
\begin{abstract} Based on results by Dani\v{l}\v{c}enko, Burris and Willard conjectured in 1987 that on any \nbdd{k}element domain with $k\geq 3$ it is possible to bicentrically generate every centraliser clone from its \nbdd{k}ary part.
Later, for every $k\geq 3$, Snow constructed algebras with a \nbdd{k}element carrier set where the minimum arity of the clone of term operations from which the bicentraliser can be generated is at least $(k-1)^2$, which is larger than~$k$ for $k\geq 3$.
We prove that Snow's examples neither violate the Burris-Willard conjecture nor invalidate the results by Dani\v{l}\v{c}enko on which the latter is based. We also complement our results with some computational evidence for $k=3$, obtained by an algorithm that computes a primitive positive definition for a relation in a finitely generated relational clone over a finite set. \end{abstract} \keywords{centraliser clone,
bicentraliser,
bicentrical closure,
Burris-Willard conjecture,
commutation,
primitive positive formula,
primitive positive definition}
\title{A note on the Burris-Willard conjecture}
\section{Introduction} Centraliser clones are collections of homomorphisms of finite powers of algebras into themselves. That is, if $\alg{A}$ is an algebra and $F$ is the set of fundamental operations of~$\alg{A}$, then the centraliser~$\cent{F}$ of~$F$ is the set $\bigcup_{n<\omega} \Hom\apply{\alg{A}^n,\alg{A}}$. From a categorical perspective, this is a very natural construction that makes sense in every category~$\mathscr{C}$ with arbitrary finite powers. If~$A$ is an object in such a category~$\mathscr{C}$ we call $\bigcup_{n<\omega} \Hom_{\mathscr{C}}\apply{A^n,A}$ the clone over the object~$A$. With this understanding centraliser clones are simply the clones over algebras in the category of algebras of a certain type. If we change the signature of the structures to allow relation symbols (that is, we change the category to relational structures of a certain signature), we obtain clones over some relational structure~$\mathbb{A}$ with set of fundamental relations~$Q$: $\bigcup_{n<\omega} \Hom\apply{\mathbb{A}^n,\mathbb{A}}$. This clone is called the clone $\Pol{Q}$ of polymorphisms of~$Q$ (or just the polymorphism clone of the structure~$\mathbb{A}$), and it is well known by results of Bodnar\v{c}uk, Kalu\v{z}nin, Kotov, Romov~\cite{BodnarcukKaluzninKotovRomovGaloisTheoryForPostAlgebras} and Geiger~\cite{GeigerClosedSystemsOfFunctionsAndPredicates} on the classical $\PolOp$-$\InvOp$ Galois correspondence that every clone on a finite carrier set~$A$ arises as a polymorphism clone of some relational structure~$\mathbb{A}$. \par
As every algebraic structure~$\alg{A}$ can also be understood as a relational one (by taking the graphs of the fundamental operations as the fundamental relations), it is clear that the centraliser clones on a given set~$A$ form a subcollection of the polymorphism clones on that set. This fact is very closely related to restricting the $\PolOp\text{-}\InvOp$ Galois correspondence on the relational side in such a way that the only relations taken into consideration are those which are graphs of a function. This restriction of the preservation relation (underlying $\PolOp\text{-}\InvOp$) between functions and relations to functions and function graphs leads to the notion of commutation of functions, which is exactly the homomorphism property between finite powers of algebras that was used above to introduce the concept of centraliser clone. As the Galois correspondence is restricted on one side only (the relational one), there is a connection between the associated Galois closures: the $\PolOp\text{-}\InvOp$ closure~$\Pol{\Inv{F}}$ of a set of operations~$F$ (which for finite~$A$ agrees with the generated clone~$\genClone{F}$) is weaker than the bicentrical closure~$\bicent{F}$, that is, the double centraliser of~$F$, or, equivalently, all functions commuting with all those functions that commute with the functions in~$F$. 
The strength of the bicentrical closure in comparison to $\Pol{\Inv{}}$ manifests itself in the following way: while $\Pol{\Inv{F}}$ closes~$F$ against all compositions of \nbdd{F}functions with themselves and projections (i.e.\ one iteratively substitutes functions and variables until nothing new appears), $\bicent{F}$ computes all functions that are primitive positively definable from the function graphs of~$F$ (i.e.\ one interprets all existentially quantified finite conjunctions of predicates of the form $f(\bfa{v}) = x$ and equality predicates $y=z$ (where $f\in F$, $\bfa{v}$ is a tuple of variables and $x,y,z$ are variables) and among these interpretations selects those relations that are function graphs). Functions whose graphs are constructible via such primitive positive formul\ae{} from~$F$ have been called \emph{parametrically expressible} through~$F$~\cite[p.~26]{KuznecovCentralisers1979} (in contrast to functions in the clone~$\genClone{F}$ that are \emph{explicitly expressible} via~$F$), and also the connection of this construction with the preservation of function graphs and the commutation of operations has first been noted in~\cite[p.~27 et seq.]{KuznecovCentralisers1979}. For this reason centraliser clones have also been studied under the name \emph{parametrically closed classes} (see e.g.~\cite{Danilcenko1978ParametricallyClosedClasses3ValuedLogic}) or~\emph{primitive positive clones} (e.g.~\cite{BurrisWillardFinitelyManyPPClones}). \par
It may not seem so at first glance, but the parametrical (primitive positive, bicentrical) closure is notably much stronger than closure under substitution. Namely, it has the remarkable consequence that on every finite set~$A$ there are only finitely many centraliser clones~\cite[Corollary~4, p.~429]{BurrisWillardFinitelyManyPPClones}, which is in sharp contrast to the situation for polymorphism clones, of which there is a continuum whenever $\abs{A}\geq 3$~\cite{JanovMucnik1959}. If $F$ is a centraliser clone (i.e. $\bicent{F} = F$), then $\bicentn[1]{F}\subs\bicentn[2]{F}\subs \dotsm \subs \bicentn{F} \subs F$ holds for all $n<\omega$ and $\bigcup_{n<\omega} \bicentn{F} =\bicent{F} = F$. Since there are only finitely many centraliser clones on a given finite set there must be some $n<\omega$ such that for arities larger than~$n$ none of the inclusions is strict any more, that is, $\bicentn{F} = \bicentn[m]{F}$ for all $n\leq m<\omega$. Hence, $F = \bigcup_{j\leq n}\bicentn[j]{F} = \bicentn{F}$; so there is some arity $n$ such that $F$ is bicentrically generated by its \nbdd{n}ary part. Take this $n_F$ to be minimal and then take the maximum over all (finitely many) $n_F$: \[\cdeg(k)\defeq \max\lset{n_F}{F=\bicent{F} \text{ on } A,\abs{A}=k}.\] We shall refer to this number as the \emph{uniform centraliser degree} for a \nbdd{k}element set, since every centraliser clone~$F$ on a carrier set of size~$k$ satisfies $F = \bicentn[\cdeg(k)]{F}$. \par
With the help of Post's lattice, one can show that $\cdeg(2)=3$. Burris and Willard explain in~\cite[p.~429]{BurrisWillardFinitelyManyPPClones} that $\cdeg(k)\leq 4+k^{k^4-k^3+k^2}$ and they claim that `[b]y slightly different methods [one] can show that any primitive positive clone on a \nbdd{k}element set is [bicentrically] generated by its members of arity at most~$k^k$', which implies $\cdeg(k)\leq k^k$. No written account of the details of this argument has appeared in the literature so far. However, at the end of the sentence cited above Burris and Willard conjecture that $\cdeg(k)\leq k$ for every $k\geq 3$. Besides intuition the only support for this conjecture is a series of works by A.\,F.\ Dani\v{l}\v{c}enko on the case $k=3$~(\cite{Danilcenko1974ParametricallyClosedClasses3ValuedLogic,
Danilcenko1976ParametricallyIndecomposables,
Danilcenko1977ParametricExpressibility3ValuedLogic,
Danilcenko1978ParametricallyClosedClasses3ValuedLogic,
Danilcenko1979-thesis}, all of these are in Russian, \cite{Danilcenko1977ParametricExpressibility3ValuedLogic} has been translated in~\cite{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated}; \cite{Danilcenko1979ParametricalExpressibilitykValuedLogic} is written in English). As a side note we remark that a \nbdd{k}ary example function, stated in~\cite[p.~269]{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated} for a different proof, can be used to show that $\cdeg(k)\geq k$ for $k\geq 3$; so if the Burris-Willard conjecture is true, then it certainly is sharp. \par
In her thesis~\cite[Section~6, p.~125 et seqq.]{Danilcenko1979-thesis} Dani\v{l}\v{c}enko gives a complete description of all $2\,986$ centraliser clones on the three\dash{}element domain. A central step in this process is to identify a set~$\Gamma$ of 197 parametrically indecomposable functions~\cite[Theorem~4, p.~103]{Danilcenko1979-thesis} such that every centraliser clone~$F$ is the centraliser of a subset of~$\Gamma$~\cite[Theorem~5, p.~105]{Danilcenko1979-thesis}. The maximum arity of functions in~$\Gamma$ is three, so Dani\v{l}\v{c}enko's theorems imply that $F = \bicentn[3]{F}$ for every centraliser clone on three elements (cf.\ Proposition~\ref{prop:char-cdeg}\eqref{item:bicent-n},\eqref{item:cent-leq-n}), that is, $\cdeg(3)\leq3$. The results of Theorems~4 and~5 of~\cite{Danilcenko1979-thesis}, which make the Burris-Willard conjecture true for $k=3$, are also mentioned in~\cite[p.~155 et seq.]{Danilcenko1979ParametricalExpressibilitykValuedLogic} and~\cite[Section~5, p.~414 et seq.]{Danilcenko1977ParametricExpressibility3ValuedLogic} (\cite[Section~5, p.~279]{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated}, respectively), but no proofs are given there. \par
Drastically cut down versions of this work have been published in~\cite{Danilcenko1974ParametricallyClosedClasses3ValuedLogic,
Danilcenko1976ParametricallyIndecomposables,
Danilcenko1977ParametricExpressibility3ValuedLogic,
Danilcenko1978ParametricallyClosedClasses3ValuedLogic,
Danilcenko1979ParametricalExpressibilitykValuedLogic}, of which only~\cite{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated,Danilcenko1979ParametricalExpressibilitykValuedLogic} are accessible without difficulties. Given that the whole thesis comprises~141 pages, these excerpts are rough sketches of the classification at best (sometimes containing mistakes, many but not all of which have been corrected in~\cite{Danilcenko1979-thesis}), and leading experts in the field agree that it is very hard if not impossible to reconstruct the proof of the description of all centralisers on three\dash{}element sets from the readily available resources. For example, Theorem~4 of~\cite{Danilcenko1979-thesis} has appeared as part of~\cite[Proposition~2.2, p.~16]{Danilcenko1978ParametricallyClosedClasses3ValuedLogic} with a proof sketch of less than two pages, while the proof from~\cite{Danilcenko1979-thesis} goes through technical calculations and case distinctions for several pages (however from Propositions~2.2, 2.3 and~2.4 of~\cite{Danilcenko1978ParametricallyClosedClasses3ValuedLogic}, a proof of Theorem~5 of~\cite{Danilcenko1979-thesis} \emph{can} be obtained). The chances of understanding might be better using the thesis as a primary source, but for unknown reasons Moldovan librarians seem to be rather reluctant to grant full access to it. In the light of this discussion, Dani\v{l}\v{c}enko's classification is a result that one may believe in, but that should not be trusted unconditionally to build further theory on as it remains not easily verifiable at the moment. This of course also casts some doubts on the basis of the Burris-Willard conjecture. \par
Another possible challenge to the conjecture (and likewise to the correctness of Dani\v{l}\v{c}enko's list of parametrically indecomposable functions) is presented by much later results of Snow~\cite{SnowGeneratingPrimitivePositiveClones}. In this article the minimum arity needed to generate the bicentraliser clone of a finite algebra from its term operations is investigated, and, under certain assumptions on the algebra, quite satisfactory upper bounds for that number are produced. These sometimes match (or almost match) the number~$k$ predicted by the Burris-Willard conjecture, and sometimes even fall below~$k$. This is possible (and supports the conjecture) since the bounds given by Snow do not apply to \emph{all} algebras on a \nbdd{k}element set, but only to some specific subclass. Hence, they are not in contradiction with the \nbdd{k}ary function from~\cite[p.~269]{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated}. Even more interestingly, in Section~3 of~\cite{SnowGeneratingPrimitivePositiveClones} a class of examples of algebras on $k$-element carrier sets is given, for which Snow proves $(k-1)^2$ to be a lower bound for the minimum arity of term functions from which the bicentraliser can be generated. This number is larger than~$k$ whenever $k\geq 3$. Explicitly, when $k=3$, the lower bound is equal to~$4$, which means that arity three or less does not suffice to generate the bicentraliser clone of that specific algebra. \par
In more detail, Snow defines for an algebra \m{\alg{A}} with set~$F$ of fundamental operations the number $\ppc(\alg{A})= \min\lset{n\in\N}{ \bicent{\genClone{F}} = \bicent{{\genClone[n]{F}}}}$. This number certainly only depends on the clone \m{\genClone{F}} of term operations of the algebra, hence no generality is lost in simply considering the number \[ \mu_F \defeq \min\lset{n\in\N}{ \bicent{\genClone{F}} = \bicent{{\genClone[n]{F}}}} =\min\lset{n\in\N}{ \bicent{F} =\bicentn{F}} \] associated with clones~$F$ on a \nbdd{k}element set~$A$. If \m{F} happens to be a centraliser clone, the definition clearly simplifies to \m{\mu_F = \min\lset{n\in\N}{F = \bicentn{F}} = n_F}, which is bounded above by \m{\cdeg(k)}. However, if now~$F$ is the clone of term operations of the example constructed by~Snow, then the lower bound on~$\mu_F$ from~\cite[Theorem~3.1, p.~171]{SnowGeneratingPrimitivePositiveClones} implies the following contradiction \[k<(k-1)^2\leq \mu_F \stackrel{?_2}{=} n_F\leq \cdeg(k)
\stackrel{?_1}{\leq} k.\] This offers two conclusions: either $?_1$ does not hold, which means that the Burris-Willard conjecture and, in particular, the Dani\v{l}\v{c}enko classification on three\dash{}element domains fail, or $?_2$ is false, which simply means that~$F$ is not a centraliser clone. If we are to believe in Dani\v{l}\v{c}enko's theorems, then (for $k=3$) the latter is the only possible consequence. However, for the reasons mentioned above, it would be desirable to obtain such a conclusion independently of Dani\v{l}\v{c}enko's \oe{}uvre. \par
Such is the aim of the present article. We are going to give a proof that the clone~$F$ of term operations of the algebra given in~\cite[Theorem~3.1, p.~171]{SnowGeneratingPrimitivePositiveClones} is not bicentrically closed and hence poses no threat to the Burris-Willard conjecture. To do this, for every $k\geq 3$ we exhibit a \nbdd{(k-1)}ary function in~$\bicent{F}$ that cannot be obtained by composition of the fundamental operation(s) of Snow's algebra. In doing so we use the case $k=3$ as a guideline, where we show, for example, that~$F$ and~$\bicent{F}$ cannot be separated by unary functions, and that the mentioned operation is the only separating binary function.
\section{Notation and preliminaries} Throughout we use \m{\N = \set{0,1,2,\dotsc}} to denote the set of natural numbers, and we write \m{\Np} for~$\N\setminus\set{0}$. It will be convenient for us to understand the elements $n\in\N$ as \nbdd{n}element sets $n= \set{0,1,\dotsc,n-1}$ as originally suggested by John von Neumann in his model of natural numbers as finite ordinals. \par Among the central concepts for this paper are functions, such as $f\colon A\to B$ and $g\colon B\to C$, and we use a left\dash{}to\dash{}right notation for composition. That is, \m{g\circ f\colon A\to C} sends any $x\in A$ to $g(f(x))$. The set of all functions from~$A$ to~$B$ is written as~$B^A$. Moreover, if \m{f\in B^A} and \m{U\subs A} and \m{V\subs B} we denote by \m{f\fapply{U} = \set{f(x)\mid x\in U}} the \emph{image of~$U$ under~$f$} and by \m{f^{-1}\fapply{V} = \set{x\in A \mid f(x)\in V}} the \emph{preimage of~$V$ under~$f$}. We also use the symbol~$\im f$ to denote the full \emph{image} $f\fapply{A}$ of~$f$. All these notational conventions will apply in particular to tuples \m{\bfa{x}\in A^n}, \m{n\in\N}, that we formally understand as maps \m{\bfa{x}\colon \set{0,\dotsc,n-1}\to A}. This does, of course, not preclude us from using a different indexing for the entries of \m{\bfa{x}=(x_1,\dotsc,x_n)}, if that seems more handy. So, e.g., we have \m{\im \bfa{x} = \set{x_1,\dotsc,x_n}} and \m{f\circ \bfa{x} = (f(x_1),\dotsc, f(x_n))\in B^n}.
Notably, we are interested in functions of the form $f\colon A^n\to A$ that we call \emph{\nbdd{n}ary operations} on~$A$. All such operations form the set $A^{A^n}$, and if we let the parameter~$n$ vary in~$\Np$, then we obtain the set \m{\Op{A}= \bigcup_{0<n<\omega} A^{A^n}} of all \emph{finitary (non\dash{}nullary) operations} over~$A$. If \m{F\subs\Op{A}} is any set of finitary operations, we denote by \m{\Fn{F}\defeq A^{A^n}\cap F} its \emph{\nbdd{n}ary part}. In particular, $\Op[n]{A} = A^{A^n}$. Some specific \nbdd{n}ary operations will be needed: for $a\in A$ we denote the constant \nbdd{n}ary function with value~$a$ by $\cna[n]{a}\colon A^n\to A$. Moreover, if \m{n\in\N} and \m{1\leq i\leq n} we call \m{\eni[n]{i}\colon A^n\to A}, given by \m{\eni[n]{i}(x_1,\dotsc,x_n)\defeq x_i} for all \m{(x_1,\dotsc,x_n)\in A^n}, the \nbdd{i}th \nbdd{n}variable \emph{projection} on~$A$. Collecting all projections on~$A$ in one set, we obtain \m{\J{A} = \lset{\eni[n]{i}}{1\leq i\leq n, n\in\N}}. \par We call a set $F\subs\Op{A}$ a \emph{(concrete) clone} on~$A$ if $\J{A}\subs F$ and if~$F$ is closed under composition, i.e., whenever \m{m,n\in\N} and \m{f\in\Fn{F}}, \m{g_1,\dotsc,g_n\in\Fn[m]{F}}, then also the composition \m{f\circ(g_1,\dotsc,g_n)}, given by \m{(f\circ(g_1,\dotsc,g_n))(\bfa{x}) \defeq f(g_1(\bfa{x}),\dotsc,g_n(\bfa{x}))} for any \m{\bfa{x}\in A^m}, belongs to the set~$F$. All sets of operations that were named `clone' in the introduction are indeed clones in this sense (except for the fact that they were allowed to contain nullary operations, which we want to exclude to avoid unnecessary technicalities). Clones are closed under intersections, and hence for any set $G\subs\Op{A}$ there is a least clone~$F$ under inclusion with the property \m{G\subs F}. This clone~$F$ is called the \emph{clone generated by~$G$} and is denoted as~$\genClone{G}$. 
It is computed by adding all projections to~$G$ and then closing under composition, that is, by forming all term operations (of any positive arity) over the algebra~$\algwops{A}{G}$. \par A function \m{f\in\Op[n]{A}} \emph{preserves} a relation \m{\rho\subs A^m} (with \m{m,n\in\N}) if for every \m{\bfa{r}=(r_1,\dotsc,r_n)\in\rho^n} the tuple \m{f\circ\bfa{r}\defeq (f(r_1(i),\dotsc,r_n(i)))_{1\leq i\leq m}} belongs to~$\rho$. For a set~$Q$ of finitary relations, the set \m{\Pol{Q}} of polymorphisms of~$Q$ consists of all functions preserving all relations belonging to~$Q$. Every polymorphism set is a clone. Dually, for a set~$F\subs\Op{A}$, the set \m{\Inv{F}} contains all \emph{invariant} relations of~$F$, that is, all relations being preserved by all functions in~$F$. \par For the convenience of the reader we now give a perhaps more accessible characterisation of the (non\dash{}nullary part of the) centraliser \m{\cent{F}} of some set of operations~$F\subs\Op{A}$, which was already defined at the beginning of the introduction. A function \m{g\colon A^m\to A} belongs to the centraliser~$\cent{F}$ (\emph{commutes} with all functions from~$F$) if for every function \m{f\in F} the following holds (where~$n$ is the arity of~$f$): for every matrix \m{X\in A^{m\times n}} applying~$g$ to the \nbdd{m}tuple obtained from applying~$f$ to the rows of~$X$ gives the same result as evaluating~$f$ on the \nbdd{n}tuple obtained from applying~$g$ to the columns of the matrix. In symbols: \m{g((f((x_{ij})_{j\in n}))_{i\in m})
= f((g((x_{ij})_{i\in m}))_{j\in n})} has to hold for all \m{(x_{ij})_{(i,j)\in m\times n}\in A^{m\times n}} (and all $f\in F$). A brief moment of reflection shows that this condition is the same as saying that \m{g\colon \algwops{A}{F}^m\to \algwops{A}{F}} is a homomorphism. A yet different way of saying this is that~$g$ is a polymorphism of~$\mathbb{A}=\algwops{A}{\graph{F}}$, that is, $g\in\Pol{\graph{F}}$ preserves all graphs \m{\graph{f}=\lset{(\bfa{x},f(\bfa{x}))}{\bfa{x}\in A^n}\subs A^{n+1}} of all functions \m{f\in F} of any arity \m{n\in \N}. From this, it is again clear that \m{\cent{F}} must always be a clone. On the other hand, it is obvious from the matrix formulation that centralisation is a symmetric condition: for all \m{F,G\subs\Op{A}} we have \m{G\subs \cent{F}} if and only if \m{F\subs \cent{G}}. Hence, we see that \begin{align*} \cent{F} &= \lset{g\in\Op{A}}{g\in\cent{F}} =\lset{g\in\Op{A}}{F\subs\cent{\set{g}}}\\ &=\lset{g\in\Op{A}}{\genClone{F}\subs\cent{\set{g}}} =\lset{g\in\Op{A}}{g\in\cent{\genClone{F}}} = \cent{\genClone{F}} \end{align*} for every \m{F\subs\Op{A}}, so the centraliser of a whole clone coincides with the centraliser of any of its generating sets. Since the clone constructed in Snow's paper is given in terms of a single generator function, we can thus study its centraliser as the set of all operations commuting with this one generating function. \par
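\par For finite carrier sets the matrix condition above can be checked mechanically. The following Python sketch (the function name \texttt{commutes} and the encoding of operations as callables are our own illustration, not part of the paper's formal development) tests whether an \nbdd{m}ary~$g$ commutes with an \nbdd{n}ary~$f$ by enumerating all \m{m\times n} matrices over~$A$.

```python
from itertools import product

def commutes(g, m, f, n, A):
    """Check that an m-ary g commutes with an n-ary f over the finite
    set A: for every m-by-n matrix X over A, applying g to the tuple
    of f-values of the rows of X must equal applying f to the tuple
    of g-values of the columns of X."""
    for flat in product(A, repeat=m * n):
        X = [flat[i * n:(i + 1) * n] for i in range(m)]   # the rows of X
        lhs = g(*(f(*row) for row in X))                  # g after row-wise f
        rhs = f(*(g(*(X[i][j] for i in range(m)))         # f after column-wise g
                  for j in range(n)))
        if lhs != rhs:
            return False
    return True
```

For example, every projection commutes with every operation, whereas \m{\min} and \m{\max} on the two-element chain do not commute with one another.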
In the introduction the uniform centraliser degree was defined as the least arity~$n$ such that every centraliser clone~$F$ on a given finite set can be bicentrically generated as \m{F=\bicentn{F}}. The following result shows that the search for this number is likewise a search for an arity~$n$ such that every centraliser clone is a centraliser of a set of functions of arity at most~$n$. \begin{proposition}\label{prop:char-cdeg} For any carrier set~$A$ and an integer~$n\in\N$ the following facts are equivalent: \begin{enumerate}[(a)] \item\label{item:bicent-n}
For every centraliser clone~$F$ we have \m{F=\bicentn{F}}. \item\label{item:n-cent}
For every centraliser clone~$F$ we have
\m{\centn{F}=\cent{F}}. \item\label{item:n-cent-n-cent}
For every centraliser clone~$F$ we have
$F^{(n)*(n)*}=F$. \item\label{item:cent-leq-n}
For every centraliser clone~$F$ there is some
\m{G\subs \bigcup_{\ell\leq n} \Op[\ell]{A}} such that
\m{F=\cent{G}}. \item\label{item:cent-n}
For every centraliser clone~$F$ there is some
\m{G\subs \Op[n]{A}} such that
\m{F=\cent{G}}. \item\label{item:cent-n-cent-bicent}
For every set~$F\subs\Op{A}$ we have
$F^{*(n)*}=\bicent{F}$. \item\label{item:cent-n-cent}
For every centraliser clone~$F$ we have
$F^{*(n)*}=F$. \end{enumerate} \end{proposition} \begin{proof} If~\eqref{item:bicent-n} holds and~$F$ is a centraliser clone, then \m{\cent{F}=F^{(n)***}=\centn{F}}, so~\eqref{item:n-cent} is true. If~\eqref{item:n-cent} holds, then \m{F=\bicent{F}=\bicentn{F}} for any centraliser clone~$F$, so \m{\eqref{item:bicent-n}\Leftrightarrow\eqref{item:n-cent}}. \par Suppose now that~\eqref{item:bicent-n}, and thus~\eqref{item:n-cent}, hold. Letting \m{G\defeq\centn{F}} for a centraliser clone~$F$, we have \m{F=\bicentn{F}=\cent{G}} from~\eqref{item:bicent-n}. Applying now~\eqref{item:n-cent} to the centraliser~$G$ gives \m{F=\cent{G}=\centn{G}=F^{(n)*(n)*}}, so~\eqref{item:bicent-n} implies~\eqref{item:n-cent-n-cent}. \par From~\eqref{item:n-cent-n-cent} we get~\eqref{item:cent-n} by letting \m{G=F^{(n)*(n)}\subs\Op[n]{A}}, and~\eqref{item:cent-n} directly gives~\eqref{item:cent-leq-n}. \par Now, suppose that~\eqref{item:cent-leq-n} holds for~$F$ with functions~$G$ of arity at most~$n$. Since we have excluded nullary operations, this implies that \m{G\subs \genClone{\genClone[n]{G}}}, so we obtain \m{\cent{G}\sups\cent{\genClone{\genClone[n]{G}}} = \cent{{\genClone[n]{G}}}\sups\cent{\genClone{G}}=\cent{G}}, which means that \m{F=\cent{G} = \cent{H}} where \m{H\defeq\genClone[n]{G}\subs\Op[n]{A}}. Thus \m{\eqref{item:cent-n}\Leftrightarrow\eqref{item:cent-leq-n}}. \par From~\eqref{item:cent-n}, for every \m{F\subs\Op{A}}, we can express the bicentraliser \m{\bicent{F}=\cent{G}} with some \m{G\subs\Op[n]{A}}. Clearly, \m{G\subs\bicent{G}=\cent{F}}, so \m{G\subs\ncent{F}\subs\cent{F}}. Therefore, we obtain \m{\bicent{F}=\cent{G}\sups F^{*(n)*}\sups\bicent{F}}, i.e.~\eqref{item:cent-n-cent-bicent}. The latter entails~\eqref{item:cent-n-cent} as a special case, for every centraliser clone~$F$ satisfies \m{\bicent{F}=F}. Moreover, \eqref{item:cent-n-cent} directly gives~\eqref{item:cent-n} by letting \m{G\defeq \ncent{F}\subs\Op[n]{A}}. 
\par It remains to show that~\eqref{item:cent-n-cent} implies~\eqref{item:bicent-n}. Namely, for a centraliser clone~$F$, applying~\eqref{item:cent-n-cent} to \m{G=\cent{F}}, we get \m{G=G^{*(n)*}=F^{**(n)*} = \centn{F}}, so \m{\bicentn{F}=\cent{G}=F}. \end{proof}
\begin{remark}\label{rem:equivalences-for-one-F} A closer inspection of the proof of Proposition~\ref{prop:char-cdeg} reveals that for an individual centraliser clone~$F$ the conditions in statements~\eqref{item:bicent-n} and~\eqref{item:n-cent} are equivalent without the universal quantifier. The same holds for the facts~\eqref{item:cent-leq-n}, \eqref{item:cent-n} and~\eqref{item:cent-n-cent}. \end{remark}
Let us now assume that~$F$ denotes the clone constructed by Snow in~\cite{SnowGeneratingPrimitivePositiveClones}. It is our aim to show that there is a separating function \m{f\in\bicent{F}\setminus F}. Since the clone~$F$ is given in~\cite{SnowGeneratingPrimitivePositiveClones} as \m{F=\genClone{\set{T}}} by means of a generating function~$T$, once we have selected an \nbdd{n}ary candidate function~$f$, it is not too hard to show that \m{f\notin F}. One simply has to describe the \nbdd{n}ary term operations of~$T$ and to show that~$f$ is not among them. The harder part is to choose a suitable function \m{f\in\bicent{F}}: by the definition of the bicentraliser one first has to understand the whole set~$\cent{F}$ in order to calculate~$\bicent{F}$. As~$\cent{F}$ contains functions of all arities, this task may require infinitely many steps. Admittedly, there is an upper bound on the arities that have to be considered, but this bound is connected to~$\cdeg(\abs{A})$ (see \m{\eqref{item:bicent-n}\Leftrightarrow\eqref{item:cent-n-cent-bicent}} in Proposition~\ref{prop:char-cdeg}) and hence under current knowledge the number of steps is at least exponentially large. \par As a way out of this dilemma, we can however consider upper approximations of~$\bicent{F}$. Namely, if we cut down the centraliser at some arity~$\ell$, then \m{F^{*(\ell)*}\sups\bicent{F}}. The smaller~$\ell$, the coarser these approximations are, but also the easier it becomes to describe \m{\ncent[\ell]{F}}. In the subsequent section we shall employ a strategy in which we always start with the lowest arity~$\ell=1$; it turns out that this already produces good results by ruling out many functions that cannot belong to~$\bicent{F}$. \par
To obtain more information about \m{\ncent[\ell]{F}} for some fixed~$\ell$, it will be important to derive as many necessary conditions as possible to help to narrow down the possible candidate functions in the centraliser. This is done by observing that any \m{g\in \cent{F}=\Pol{\graph{F}}} belongs to \m{\Pol{\Inv{\Pol{\graph{F}}}}} and thus has to preserve all relations in the relational clone \m{\Inv{\Pol{\graph{F}}}} generated by the graphs of the functions from~$F$. This set contains all relations that can be defined via primitive positive formul\ae{} from \m{\graph{F}=\lset{\graph{f}}{f\in F}}, and among these there are a few notorious candidates: the image, the set of fixed points and the kernel of any function \m{f\in\Fn{F}}: \begin{align*} \im(f)&= \lset{z\in A}{\exists x_1,\dotsc,x_n\in A\colon z = f(x_1,\dotsc,x_n)},\\ \fix(f) &= \lset{z\in A}{f(z,\dotsc,z)=z},\\ \ker(f) &= \lset{(x_1,\dotsc,x_{2n})\in A^{2n}}{\exists z\in A\colon f(x_1,\dotsc,x_n)=z =f(x_{n+1},\dotsc,x_{2n})}. \end{align*} \par To make this more concrete, we now give the generating function~\m{T\in\Op[n^2]{A}} for the clone \m{F=\genClone{\set{T}}} where \m{\mu_F\geq n^2} on \m{A=\set{0,\dotsc,n}}, \m{n\geq 2} (see p.~172 of~\cite{SnowGeneratingPrimitivePositiveClones}): \m{T\apply{x_{11},\dotsc,x_{1n},x_{21},\dotsc,x_{2n},\dotsc,x_{n1},\dotsc,x_{nn}}=1} if \m{x_{ij}=i} for all \m{i,j\in\set{1,\dotsc,n}} or \m{x_{ij}=j} for all \m{i,j\in\set{1,\dotsc,n}}, and it is zero for all other arguments. Hence, \m{\im(T) = \set{0,1}}, \m{\fix(T) = \set{0}} and \m{\ker(T)} identifies \m{(1,2,\dotsc,n,\dotsc,1,2,\dotsc,n)} with \m{(1,1,\dotsc,1,\dotsc,n,n,\dotsc,n)} in one block, and all other \nbdd{n^2}tuples in a second block. 
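Since~$T$ admits such a compact description, its basic invariants can be recomputed by direct enumeration. The following Python sketch (our own encoding; the helper \texttt{make\_T} builds~$T$ as a function of \m{(k-1)^2} arguments) confirms \m{\im(T)}, \m{\fix(T)} and the two tuples forming the nontrivial kernel class in the case \m{k=3}.

```python
from itertools import product

def make_T(k):
    """Build Snow's generating operation T of arity (k-1)^2 on
    A = {0, ..., k-1}: the value is 1 exactly on the 'row pattern'
    (x_ij = i for all i, j) and the 'column pattern' (x_ij = j for
    all i, j), and 0 on every other argument tuple."""
    n = k - 1
    row_pattern = tuple(i for i in range(1, n + 1) for _ in range(n))
    col_pattern = tuple(range(1, n + 1)) * n
    def T(*x):
        return 1 if x in (row_pattern, col_pattern) else 0
    return T

k = 3
T = make_T(k)
A = range(k)
arity = (k - 1) ** 2
image = {T(*x) for x in product(A, repeat=arity)}            # im(T)
fixed = {z for z in A if T(*([z] * arity)) == z}             # fix(T)
ones  = [x for x in product(A, repeat=arity) if T(*x) == 1]  # the 1-preimage of T
```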
\par Eventually, after we have found a suitable candidate function \m{f\notin F=\genClone{\set{T}}}, upper approximations \m{F^{*(\ell)*}} will not any more be enough to prove that \m{f\in\bicent{F}} (unless we use an exponentially high value for~$\ell$, cf.\ Proposition~\ref{prop:char-cdeg}\eqref{item:bicent-n},\eqref{item:cent-n-cent-bicent}). Instead, we can apply a Galois theoretic trick. Namely, \m{f\in\bicent{F}} if and only if \m{\cent{F}\subs\cent{\set{f}}=\Pol{\set{\graph f}}}, which is equivalent to \m{\graph f\in\Inv{\cent{F}} = \Inv{\Pol{\graph{F}}}}. As the carrier set is finite, this means that the graph of~$f$ must belong to the relational clone generated from the graphs of functions in~$F$, i.e., that it is primitive positively definable from those graphs. Finding a primitive positive formula, which does the job, requires some creativity, and we will try our best to give some intuition how it can be found in the case where $\abs{A}=3$. For the general case $\abs{A}=k\geq 3$ we shall only state the generalisation of the respective formula and verify that it suffices to define the graph of a \nbdd{(k-1)}ary function that does not belong to~$F$.
\section{Separating a clone from its bicentraliser} For the remainder of the paper we let \m{A=\set{0,1,\dotsc,k-1}} where \m{k\geq 3}, and we consider the clone \m{F=\genClone{\set{T}}} constructed by Snow in~\cite[Section~3]{SnowGeneratingPrimitivePositiveClones}. For the definition of the \nbdd{(k-1)^2}ary generating function~$T$, see the end of the preceding section. \par It is our task to identify some arity~$n$ and some \nbdd{n}ary operation \m{f\in\Op[n]{A}} such that \m{f\in\bicent{F} = \bicent{\set{T}}}, but \m{f\notin F=\genClone{\set{T}}}. In order to avoid a combinatorial explosion of the structure of the involved clones, it is of course desirable to keep the arity~$n$ as low as possible. Hence, we shall start with a description of~\m{\genClone[n]{\set{T}}} for \m{n<k-1}. Then, using the method of upper approximations, we shall show that it is impossible to find a separating \m{f\in\bicent{\set{T}}} of such a low arity. So the next step will be to consider $n=k-1$. Here, we will first study the case $k=3$, where we can show that there is a unique function of arity \m{n=k-1=2}, for which we can prove \m{f\in\bicent{\set{T}}}, but \m{f\notin \genClone[2]{\set{T}}}. Subsequently, we shall demonstrate that the construction of this particular~$f$ (and the proof of \m{f\in\bicent{\set{T}}}) can be generalised to any $k\geq 3$.
\begin{lemma}\label{lem:Tclone-small} For any $k\geq 3$ we have \m{\genClone[n]{\set{T}} = \J[n]{A} \cup\set{\cna[n]{0}}} for all $1\leq n<k-1$ where $\cna[n]{0}$ denotes the \nbdd{n}ary constant zero function. \end{lemma} \begin{proof} We have $\cna[n]{0}= \composition{T}{\eni{1},\dotsc,\eni{1}}$ since~$T$ maps every constant tuple to~$0$. Thus the mentioned functions belong to the \nbdd{n}ary part of~$\genClone{\set{T}}$. Moreover, the given set is a subalgebra of \m{\algwops{A}{T}^{A^n}}: namely every composition of~$T$ with functions at least one of which is $\cna[n]{0}$ is $\cna[n]{0}$. This is so since~$T$ maps every tuple containing a zero entry to zero. Furthermore, every composition of~$T$ involving only (some of) the~$n$ projections is also~$\cna[n]{0}$ as~$T$ maps every tuple with at most~$n<k-1$ distinct entries to zero. \end{proof}
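The crucial observation in the proof, namely that~$T$ vanishes on every tuple with fewer than \m{k-1} distinct entries, is easy to confirm by exhaustive search; the Python sketch below (our own encoding of~$T$) does so for \m{k=4}, where~$T$ has arity~$9$.

```python
from itertools import product

# Snow's generator for k = 4: arity (k - 1)^2 = 9 on A = {0, 1, 2, 3}
# (our encoding; T is 1 exactly on the row and column patterns below).
k, n = 4, 3
row_pattern = tuple(i for i in range(1, n + 1) for _ in range(n))  # (1,1,1,2,2,2,3,3,3)
col_pattern = tuple(range(1, n + 1)) * n                           # (1,2,3,1,2,3,1,2,3)

def T(*x):
    return 1 if x in (row_pattern, col_pattern) else 0

# Key step of the proof: T maps every tuple with fewer than k - 1
# distinct entries to 0, so identifying T's variables down to fewer
# than k - 1 of them can only yield the constant zero function.
violations = [x for x in product(range(k), repeat=n * n)
              if len(set(x)) < k - 1 and T(*x) != 0]
```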
To describe \m{\nbicent{\set{T}}} for \m{0<n<k-1}, we shall study lower approximations of~\m{\cent{\set{T}}}. We begin by cutting the arity at the level \m{\ell=1}.
\begin{lemma}\label{lem:Tstar1} For~$A=\set{0,\dotsc,k-1}$ of size $k\geq 3$ we have\footnote{ For $k=3$ the correctness of this lemma can be checked with the Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the ancillary file \texttt{unaryfunccommutingT.z3}. It can also be seen from the file \texttt{Tcent1.txt} produced by the function \texttt{findallunaries()} from the ancillary file \texttt{commutationTs.cpp}.} \[\ncent[1]{\set{T}}
= \set{\id_A} \cup \lset{f\in\Op[1]{A}}{f(0)=f(1)=0}.\] \end{lemma} \begin{proof} Let us fix \m{f\in\Op[1]{A}}, commuting with \m{T}. Since \m{f\in \Pol{\set{\fix(T)}}}, we have \m{f(0) = 0}. Moreover, since $f$ preserves the image of~$T$, we must have \m{f(1)\in\set{0,1}}. If $f(1)=0$, we are done. Otherwise, if $f(1)=1$, we shall show that $f=\id_A$. Namely, since $f$ and~$T$ commute, we have \begin{align*} 1= f(1) &= f(T(1,\dotsc,1,2,\dotsc,2,\dotsc,k-1,\dotsc,k-1))\\
&=T(f(1),\dotsc,f(1),f(2),\dotsc,f(2),\dotsc,f(k-1),\dotsc,f(k-1)), \end{align*} which implies that \begin{align*} (f(1),\dotsc,f(1),&f(2),\dotsc,f(2),\dotsc,f(k-1),\dotsc,f(k-1))\\ &\in T^{-1}[\set{1}]\setminus\set{(1,2,\dotsc,k-1,1,2,\dotsc,k-1,\dotsc,1,2,\dotsc,k-1)}\\ &=\set{(1,\dotsc,1,2,\dotsc,2,\dotsc,k-1,\dotsc,k-1)}, \end{align*} whence clearly $f(x) = x$ for all $0<x<k$, i.e.\ \m{f=\id_A}. \par Conversely, we prove that every $f\in\Op[1]{A}$ with $f(0)=f(1)=0$ commutes with~$T$. Assume, for a contradiction, that for some \m{\bfa{x}\in A^{(k-1)^2}} we had $T(f\circ\bfa{x})=1$; then \m{\set{1,\dotsc,k-1} = \im f\circ\bfa{x} \subs \im f}, so~$f$ would be surjective, and, by finiteness of~$A$, bijective. This would contradict $f(0)=f(1)=0$, so for every \m{\bfa{x}\in A^{(k-1)^2}} we have $T(f\circ \bfa{x}) = 0 = f(0) = f(1) = f(T(\bfa{x}))$, since \m{T(\bfa{x})\in \set{0,1}}. Thus~$f\in\cent{\set{T}}$. \end{proof}
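For \m{k=3} the statement of the lemma can also be confirmed by enumerating all \m{3^3=27} unary value tables, in the spirit of the ancillary files mentioned in the footnote; the following Python sketch (our own encoding of unary functions as value tables) recovers exactly the four functions \m{\set{\id_A}\cup\lset{f\in\Op[1]{A}}{f(0)=f(1)=0}}.

```python
from itertools import product

# Snow's generator for k = 3 (our encoding): T has arity 4 on
# A = {0, 1, 2} and takes value 1 exactly on (1,1,2,2) and (1,2,1,2).
A = (0, 1, 2)

def T(*x):
    return 1 if x in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

def commutes_with_T(f):
    """A unary value table f = (f(0), f(1), f(2)) commutes with T iff
    f(T(x)) = T(f(x1), ..., f(x4)) for every x in A^4."""
    return all(f[T(*x)] == T(*(f[xi] for xi in x))
               for x in product(A, repeat=4))

# The unary centraliser of T: brute force over all 27 value tables.
cent1 = [f for f in product(A, repeat=3) if commutes_with_T(f)]
```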
\begin{corollary}\label{cor:Tstar1} For $A=\set{0,\dotsc,k-1}$ of cardinality~$k\geq 3$, we have the inclusion \[\ncent[1]{\set{T}}
\sups \lset{u_{j,a}}{a\in A\land j\in A\setminus\set{0,1}},\] where \m{u_{j,a}} is given by the rule \[u_{j,a}(x) = \begin{cases} a&\text{if }x=j,\\
0&\text{otherwise.}
\end{cases}\] \end{corollary}
The binary part of the centraliser already becomes rather unwieldy in the general case, so we only give a description for the case $k=3$ (which can also be verified by a brute-force enumeration using a computer).
\begin{lemma}\label{lem:Tstar2} For \m{A=\set{0,1,2}} the set \m{\ncent[2]{\set{T}}} consists of exactly the following 65 functions \begin{align*} \ncent[2]{\set{T}}= \set{\eni[2]{1},\eni[2]{2}} &{}\disjointunion \lset{z_a}{a\in \set{0,1,2}}\\ &{}\disjointunion \bigcup_{c\in\set{1,2}} \lset{f_{a,\bfa{x}}}{a\in\set{0,c}\land \bfa{x}\in\set{0,c}^4\setminus\set{(0,0,0,0)}} \end{align*} given by the following tables\footnote{ The correctness of this lemma (and its proof) can be checked with the Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the ancillary file \texttt{binaryfunccommutingT.z3}. The completeness of the list of 65 operations can also be verified with the function \texttt{findallbinaries()} from the ancillary file \texttt{commutationTs.cpp}, resulting in the file \texttt{Tcent2.txt}.}: \begin{align*}
&\begin{array}{r|*{3}{c}} z_a(x\backslash y)& 0&1&2\\\hline 0& 0& 0& 0\\ 1& 0& 0& 0\\ 2& 0& 0& a \end{array}&
&\begin{array}{r|*{3}{c}} f_{a,(b,c,d,e)}(x\backslash y)& 0&1&2\\\hline 0& 0& 0& b\\ 1& 0& 0& c\\ 2& d& e& a \end{array} \end{align*} \end{lemma} \begin{proof} Using a case distinction, one can verify that every function \m{f\in \ncent[2]{\set{T}}} must be among the ones mentioned in the lemma. \begin{enumerate}[1.] \item Assume \m{f(1,1) = 1}. We can show that \m{f(2,2)=2} and
\m{\set{f(1,2),f(2,1)} = \set{1,2}}. Namely, \m{f\in\cent{\set{T}}}
implies
\begin{multline*}
1=f(1,1) = f(T(1,1,2,2),T(1,2,1,2))\\
=T(f(1,1),f(1,2),f(2,1),f(2,2)) = T(1,f(1,2),f(2,1),f(2,2)),
\end{multline*}
which is only possible if \m{f(2,2)=2} and
\m{(f(1,2),f(2,1))\in\set{(1,2),(2,1)}}. \begin{enumerate}[{1.}1.] \item Assume \m{f(1,2) =1} and \m{f(2,1) =2}. It follows that
\m{f=\eni[2]{1}}.
In fact, our assumption \m{f\in\cent{\set{T}}} implies
\begin{multline*}
f(1,0) = f(T(1,1,2,2),T(1,1,1,2))\\
=T(f(1,1),f(1,1),f(2,1),f(2,2)) = T(1,1,2,2)=1,
\end{multline*}
moreover
\begin{multline*}
f(0,1) = f(T(1,1,1,2),T(1,1,2,2))\\
=T(f(1,1),f(1,1),f(1,2),f(2,2)) = T(1,1,1,2)=0,
\end{multline*}
and
\begin{multline*}
1 = f(1,0) = f(T(1,2,1,2),T(1,0,1,2))\\
=T(f(1,1),f(2,0),f(1,1),f(2,2)) = T(1,f(2,0),1,2),
\end{multline*}
which is only possible if \m{f(2,0) = 2}.
Finally, we have
\begin{multline*}
0 = f(0,1) = f(T(1,0,1,2),T(1,2,1,2))\\
=T(f(1,1),f(0,2),f(1,1),f(2,2)) = T(1,f(0,2),1,2),
\end{multline*}
which means \m{f(0,2)\neq 2}, and
\begin{multline*}
0 = f(0,0) = f(T(0,2,1,2),T(2,2,1,2))\\
=T(f(0,2),f(2,2),f(1,1),f(2,2)) = T(f(0,2),2,1,2),
\end{multline*}
which gives \m{f(0,2)\neq 1}.
Thus \m{f(0,2)\in A\setminus\set{1,2}=\set{0}}. \item Assume \m{f(1,2) =2} and \m{f(2,1) =1}. It follows that
\m{f=\eni[2]{2}} by a dual argument.
In fact, \m{f\in\cent{\set{T}}} implies
\begin{multline*}
f(1,0) = f(T(1,1,2,2),T(1,1,1,2))\\
=T(f(1,1),f(1,1),f(2,1),f(2,2)) = T(1,1,1,2)=0,
\end{multline*}
moreover
\begin{multline*}
f(0,1) = f(T(1,1,1,2),T(1,1,2,2))\\
=T(f(1,1),f(1,1),f(1,2),f(2,2)) = T(1,1,2,2)=1,
\end{multline*}
and
\begin{multline*}
1 = f(0,1) = f(T(1,0,1,2),T(1,2,1,2))\\
=T(f(1,1),f(0,2),f(1,1),f(2,2)) = T(1,f(0,2),1,2),
\end{multline*}
which is only possible if \m{f(0,2) = 2}.
Finally, we have
\begin{multline*}
0 = f(1,0) = f(T(1,2,1,2),T(1,0,1,2))\\
=T(f(1,1),f(2,0),f(1,1),f(2,2)) = T(1,f(2,0),1,2),
\end{multline*}
which means \m{f(2,0)\neq 2}, and
\begin{multline*}
0 = f(0,0) = f(T(2,2,1,2),T(0,2,1,2))\\
=T(f(2,0),f(2,2),f(1,1),f(2,2)) = T(f(2,0),2,1,2),
\end{multline*}
which gives \m{f(2,0)\neq 1}.
Thus \m{f(2,0)\in A\setminus\set{1,2}=\set{0}}. \end{enumerate} \item Now assume that \m{f(1,1)\neq 1}. Since \m{f\in\Pol{\im(T)}}, we
must have \m{f(1,1)=0}.
We can show that \m{f(0,1) = 0= f(1,0)}.
Indeed, we have
\begin{multline*}
f(0,1) = f(T(1,1,0,0),T(1,1,2,2))\\
=T(f(1,1),f(1,1),f(0,2),f(0,2)) = T(0,0,f(0,2),f(0,2)) = 0,
\end{multline*}
and for \m{f(1,0)=0} we argue by swapping the arguments of~\m{f}.
\par
Moreover, if \m{\set{1,2}\subs\set{f(0,2),f(2,0),f(1,2),f(2,1)}},
then \m{f\notin\cent{\set{T}}}.
Indeed, if there are \m{x,y\in \set{0,1}} such that
\begin{enumerate}[(a)]
\item \m{f(2,x) = 1}, \m{f(2,y)=2}, then
\begin{multline*}
f(T(2,2,2,2),T(x,x,y,y))= f(0,z)=0 \\
\neq 1 =T(1,1,2,2) =T(f(2,x),f(2,x),f(2,y),f(2,y)),
\end{multline*}
where \m{z\in \set{0,1}}.
\item \m{f(x,2) = 1}, \m{f(y,2)=2}, then we argue with swapped
arguments for \m{f}.
\item \m{f(2,x) = 1}, \m{f(y,2)=2}, then
\begin{multline*}
f(T(2,2,y,y),T(x,x,2,2)) = f(0,z) = 0\\
\neq 1 = T(1,1,2,2) = T(f(2,x),f(2,x),f(y,2),f(y,2)),
\end{multline*}
where \m{z\in \set{0,1}}.
\item \m{f(x,2) = 1}, \m{f(2,y)=2}, then we argue with swapped
arguments for~\m{f}.
\end{enumerate}
Hence, we know that
\m{\set{1,2}\not\subs\set{f(0,2),f(2,0),f(1,2),f(2,1)}} for
\m{f\in\cent{\set{T}}}. \begin{enumerate}[{2.}1.] \item Suppose that \m{f(2,2)=0}. There is nothing left to prove: we
already have
\m{f\in\set{z_0}\cup\bigcup_{c\in\set{1,2}}
\lset{f_{0,\bfa{x}}}{\bfa{x}\in\set{0,c}^4\setminus\set{\bfa{0}}}}. \item Suppose that \m{f(2,2)=c\in\set{1,2}} and let \m{d} be such that
\m{\set{c,d}=\set{1,2}}.
We prove that \m{d\notin \set{f(0,2),f(2,0),f(1,2),f(2,1)}},
as otherwise \m{f\notin\cent{\set{T}}}. This demonstrates that
\m{\set{f(0,2),f(2,0),f(1,2),f(2,1)}\subs\set{0,c}}, so we have
\m{f\in\set{z_c}\cup\lset{f_{c,\bfa{x}}}{\bfa{x}\in\set{0,c}^4\setminus\set{\bfa{0}}}}.
\par
For a contradiction suppose that there is some argument \m{x\in \set{0,1}}
such that \m{f(x,2)=d}. Then for some \m{z\in\set{0,1}} we have
\begin{multline*}
f(T(x,x,2,2),T(2,2,2,2)) = f(z,0) = 0\\
\neq 1 = T(d,d,c,c) =T(f(x,2),f(x,2),f(2,2),f(2,2)),
\end{multline*}
when \m{(c,d)=(2,1)}, and
\begin{multline*}
f(T(2,2,x,x),T(2,2,2,2)) = f(z,0) = 0\\
\neq 1 = T(c,c,d,d) =T(f(2,2),f(2,2),f(x,2),f(x,2)),
\end{multline*}
when \m{(c,d)=(1,2)}.
In the case where \m{f(2,x)=d} for some \m{x\in\set{0,1}} we argue
similarly, by swapping the arguments of~\m{f}. \end{enumerate} \end{enumerate} \par For the converse inclusion, we have to check that all mentioned functions commute with~$T$. So let $g=z_a$ for some $a\in A$ or $g=f_{a,(b,c,d,e)}$ and consider $x_1,\dotsc,x_4,y_1,\dotsc,y_4\in A$ to verify that~$g$ commutes with~$T$. Put $u\defeq T(x_1,\dotsc,x_4)$ and $v\defeq T(y_1,\dotsc,y_4)$. Since \m{(u,v)\in \im(T)^2 = \set{0,1}^2}, we have $g(u,v) = 0$. On the other hand, the values $w_i\defeq g(x_i,y_i)$ for $1\leq i\leq 4$ belong to $\im(g)\subs \set{0,a,b,c,d,e}$. If at least one of them equals~$0$, then $T(w_1,\dotsc,w_4)=0$ as needed. Otherwise, all of them belong to \m{\set{a,b,c,d,e}\setminus\set{0}}. If $g=z_a$, then they are all equal to~$a$ and we thus have $T(w_1,\dotsc,w_4)=0$, too. In the case that $g=f_{a,(b,c,d,e)}$, we know from the definition of~$g$ that \m{\set{a,b,c,d,e}\subs\set{0,j}} for some \m{j\in \set{1,2}}. Thus, $w_1=\dots =w_4=j$, and again $T(w_1,\dotsc,w_4)=0$. In any case, we have shown \m{g\in\cent{\set{T}}}. \end{proof}
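The converse inclusion of the lemma, i.e.\ that all $65$ listed operations indeed commute with~$T$, can also be confirmed quickly by machine; the Python sketch below (our own encoding of binary operations as \m{3\times 3} value tables) rebuilds the list from the two tables of the lemma and checks the commutation condition exhaustively.

```python
from itertools import product

A = (0, 1, 2)

def T(*x):
    """Snow's generator for k = 3 (our encoding): value 1 exactly
    on the tuples (1,1,2,2) and (1,2,1,2)."""
    return 1 if x in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

QUADS = list(product(A, repeat=4))
TVAL = {x: T(*x) for x in QUADS}  # precomputed values of T

def commutes_with_T(g):
    """g is a 3x3 value table; check that g(T(x), T(y)) equals
    T(g(x1, y1), ..., g(x4, y4)) for all x, y in A^4."""
    return all(g[TVAL[x]][TVAL[y]] == T(*(g[a][b] for a, b in zip(x, y)))
               for x in QUADS for y in QUADS)

# The 65 value tables from the lemma: the two projections, the z_a,
# and the f_{a,(b,c,d,e)} with all entries taken from {0, j}, j in {1, 2}.
proj1 = ((0, 0, 0), (1, 1, 1), (2, 2, 2))
proj2 = ((0, 1, 2), (0, 1, 2), (0, 1, 2))
zs = [((0, 0, 0), (0, 0, 0), (0, 0, a)) for a in A]
fs = [((0, 0, b), (0, 0, c), (d, e, a))
      for j in (1, 2) for a in (0, j)
      for b, c, d, e in product((0, j), repeat=4)
      if (b, c, d, e) != (0, 0, 0, 0)]
listed = {proj1, proj2, *zs, *fs}
```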
Next, with the help of the coarse approximations from Lemma~\ref{lem:Tstar1}, we observe that the bicentraliser of~$T$ only contains functions that are close to being conservative and that preserve many equivalence relations. \begin{lemma}\label{lem:almost-conservative} For $A=\set{0,\dotsc,k-1}$ of size \m{k\geq 3} we have \begin{multline*} \genClone{\set{T}}\subs \bicent{\set{T}} \subs \set{T}^{*(1)*}\\ {}\subs \Pol{\lset{U\subs A}{0\in U}} \cap\Pol{\lset{\theta\in\Eq(A)}{(0,1)\in\theta}}, \end{multline*} where $\Eq(A)$ denotes the set of all equivalence relations on~$A$. \end{lemma} \begin{proof} It is clear that \m{\set{T}^{*(1)*} \subs\Pol{\lset{\im(f)}{f\in \ncent[1]{\set{T}}}}} since the image of a function is primitive positively definable from its graph. If \m{0\in U\subsetneq A}, then~$U$ contains fewer than \m{k-1} elements distinct from~$0$. According to the description of the functions in~\m{\ncent[1]{\set{T}}} given in Lemma~\ref{lem:Tstar1}, there is some \m{f\in \ncent[1]{\set{T}}} whose image is~\m{U}. \par Likewise we have \m{\set{T}^{*(1)*} \subs\Pol{\lset{\ker(f)}{f\in \ncent[1]{\set{T}}}}} since the kernel of a function is primitive positively definable from its graph. Any partition of~$A$ having a class containing the set \m{\set{0,1}} can again be realised as the kernel of a function \m{f\in \ncent[1]{\set{T}}} since the value $f(x)$ can be chosen arbitrarily for every \m{x\in A\setminus\set{0,1}}. \end{proof}
Based on this lemma we can show that the \nbdd{n}ary part of the bicentraliser of~$T$ is no bigger than \m{\genClone[n]{\set{T}}} when $n<k-1$. \begin{lemma}\label{lem:T*1*-small} For~$k=\abs{A}\geq 3$ we have \m{\nbicent{\set{T}} = \set{T}^{*(1)*(n)}= \J[n]{A} \cup\set{\cna[n]{0}}} for all $1\leq n<k-1$ where $\cna[n]{0}$ denotes the \nbdd{n}ary constant zero function. \end{lemma} \begin{proof} We shall prove that \m{\set{T}^{*(1)*(n)}\subs \J[n]{A} \cup\set{\cna[n]{0}} = \genClone[n]{\set{T}}} (cf.\ Lemma~\ref{lem:Tclone-small}). From this it will follow that \m{\genClone[n]{\set{T}}\subs \nbicent{\set{T}} \subs \set{T}^{*(1)*(n)}\subs\genClone[n]{\set{T}}} since \m{\ncent[1]{\set{T}} \subs\cent{\set{T}}} is always true. \par Given that~\m{A} has size \m{k\geq 3}, the set \m{A\setminus\set{0,1}} has \m{k-2\geq n} distinct values. Let \m{f\in \set{T}^{*(1)*(n)}}. By Lemma~\ref{lem:almost-conservative} we know that \m{f\in\Pol{\set{0,2,\dotsc,n+1}}}, so we obtain \m{b\defeq f(2,\dotsc,n+1)\in\set{0,2,\dotsc,n+1}}. Now for any \m{(a_1,\dotsc,a_n)\in A^{n}} we consider the unary map~\m{u} sending \m{j \mapsto a_{j-1}} for \m{2\leq j\leq n+1} and \m{j\mapsto 0} otherwise. Since \m{u\in\ncent[1]{\set{T}}} by Lemma~\ref{lem:Tstar1}, we have \m{f\in \cent{\set{u}}} and thus \[f(a_1,\dotsc,a_{n}) = f(u(2),\dotsc,u(n+1)) = u(f(2,\dotsc,n+1))=u(b).\] If \m{b=0}, then \m{f(a_1,\dotsc,a_n)=u(b)=0}, so \m{f=\cna[n]{0}}. If \m{b\neq 0}, then it follows that \m{2\leq b\leq n+1}. Thus, we have $f(a_1,\dotsc,a_{n})=u(b)=a_{b-1}$ for all \m{(a_1,\dotsc,a_{n})\in A^n}, which shows that \m{f=\eni{b-1}}. \end{proof}
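For the smallest instance \m{k=3}, \m{n=1} the lemma can be confirmed by direct enumeration: starting from the description of \m{\ncent[1]{\set{T}}} in Lemma~\ref{lem:Tstar1}, the following Python sketch (our own value-table encoding) computes the unary part of \m{\set{T}^{*(1)*}} and recovers exactly the identity and the constant zero function.

```python
from itertools import product

A = (0, 1, 2)
# The unary centraliser of T for k = 3, taken from the lemma on
# unary polymorphisms: the identity together with all unary maps
# sending 0 and 1 to 0 (encoded as value tables (f(0), f(1), f(2))).
cent1 = [(0, 1, 2)] + [(0, 0, a) for a in A]

def commute(f, u):
    """Two unary value tables commute iff f(u(x)) = u(f(x)) for all x."""
    return all(f[u[x]] == u[f[x]] for x in A)

# The unary part of {T}^{*(1)*}: all unary maps commuting with
# every member of cent1.
bicent1 = [f for f in product(A, repeat=3)
           if all(commute(f, u) for u in cent1)]
```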
According to Lemmata~\ref{lem:Tclone-small} and~\ref{lem:T*1*-small}, it is impossible to find \m{f\in\nbicent{\set{T}}\setminus\genClone[n]{\set{T}}} for \m{n<k-1} where \m{k=\abs{A}}. Next, we thus turn our attention to \m{n=k-1}, where we will first describe \m{\genClone[k-1]{\set{T}}}: besides projections, the \nbdd{(k-1)}ary part of~\m{\genClone{\set{T}}} contains only functions that are zero everywhere except possibly at a single argument tuple, which may be mapped to one. After that we shall focus for a while on the case \m{k=3} to develop the right ideas in connection with \m{\nbicent[k-1]{\set{T}}=\nbicent[2]{\set{T}}}, which can eventually be generalised to any \m{k\geq 3}.
\begin{lemma}\label{lem:Tclone2-general} Given a set~$A$ of cardinality~$k\geq 3$, put $n=k-1$. We then have $\genClone[n]{\set{T}}=\J[n]{A}\cup\set{\cna[n]{0}}\cup F$ where $F\subs\Op[n]{A}$ is the set of \nbdd{n}ary functions in $\genClone{\set{T}}$ which map exactly one \nbdd{n}tuple to~$1$ and everything else to~$0$. \end{lemma} \begin{proof} We have $\cna[n]{0}= \composition{T}{\eni{1},\dotsc,\eni{1}}\in\genClone[n]{\set{T}}$ as in Lemma~\ref{lem:Tclone-small}, so the inclusion $G\defeq \J[n]{A}\cup \set{\cna[n]{0}}\cup F\subs\genClone[n]{\set{T}}$ is clear. For the opposite inclusion, we prove that~$G$ is a subuniverse of \m{\algwops{A}{T}^{A^n}}. The first step is to check that any variable identification of~$T$ with at most~$n$ variables ends up in~$F\cup\set{\cna[n]{0}}$. \par Let $i\colon\set{1,\dotsc,n^2}\to\set{1,\dotsc,n}$, $j\mapsto i_j$ be a map describing an \nbdd{n}variable identification \m{f = \composition{T}{\eni{i_1},\dotsc,\eni{i_{n^2}}}} of~$T$. Clearly, $\im(f)\subs\im(T)=\set{0,1}$, so every tuple that is not mapped to one by~$f$ will be sent to zero. To obtain a contradiction, let us assume that $\abs{f^{-1}\fapply{\set{1}}}\geq 2$. So there are tuples $\bfa{x}\neq \bfa{y}\in A^n$ such that \m{T\apply{x_{i_1},\dotsc,x_{i_{n^2}}} = 1 =
T\apply{y_{i_1},\dotsc,y_{i_{n^2}}}}. The preimage $T^{-1}\fapply{\set{1}}$ contains only two tuples, and these mention~$n$ distinct elements. To obtain one of them in the form $\apply{x_{i_1},\dotsc,x_{i_{n^2}}}$ or $\apply{y_{i_1},\dotsc,y_{i_{n^2}}}$ one has to use at least~$n$ distinct variable indices, so the map~$i$ has to be surjective. It is therefore impossible that the distinct tuples~$\bfa{x}$ and~$\bfa{y}$ produce the same tuple \m{\apply{x_{i_1},\dotsc,x_{i_{n^2}}} =
\apply{y_{i_1},\dotsc,y_{i_{n^2}}} \in T^{-1}\fapply{\set{1}}}. This means, one of them, say~$\bfa{x}$, gives \m{\apply{x_{i_1},\dotsc,x_{i_{n^2}}}
=\apply{1,\dotsc,n,1,\dotsc,n,\dotsc,1,\dotsc,n}}, from which it follows that $i_1,\dotsc,i_n$ are all distinct (so \m{\set{i_1,\dotsc,i_n} = \set{1,\dotsc,n}}); the other one however produces \m{\apply{y_{i_1},\dotsc,y_{i_{n^2}}}
=\apply{1,\dotsc,1,2,\dotsc,2,\dotsc,n,\dotsc,n}}. This implies that $\set{y_1,\dotsc,y_n} =\set{y_{i_1},\dotsc,y_{i_n}}=\set{1}$, so $\apply{y_{i_1},\dotsc,y_{i_{n^2}}} = (1,\dotsc,1)$, which is a contradiction for $n\geq 2$. \par To prove that~$G$ is closed under application of~$T$, we take functions $f_1,\dotsc,f_{n^2}$ from~$G$ and show that $f=\composition{T}{f_1,\dotsc,f_{n^2}}\in G$. If $f_1,\dotsc,f_{n^2}\in\J{A}$, then the composition is a variable identification of~$T$ that belongs to~$F\cup\set{\cna[n]{0}}\subs G$. Otherwise, suppose that (for some $1\leq j\leq n^2$) $f_j$ is a non\dash{}projection in $F\cup\set{\cna[n]{0}}$. For every $\bfa{x}\in A^n$ with possibly one exception we have $f_j(\bfa{x})=0$. So for all those arguments $\bfa{x}\in A^n$, the \nbdd{j}th component of \m{\apply{f_1(\bfa{x}),\dotsc,f_{n^2}(\bfa{x})}} contains a zero, whence this tuple is mapped to zero by~$T$. Consequently, $f(\bfa{x})=0$ for all but possibly one $\bfa{x}\in A^n$, so \m{f\in F\cup\set{\cna[n]{0}}\subs G}. \end{proof}
For three\dash{}element domains we obtain a more specific result. \begin{lemma}\label{lem:Tclone2} For $A=\set{0,1,2}$ we have \m{\genClone[2]{\set{T}} = \set{\eni[2]{1},\eni[2]{2}, \cna[2]{0}, \delta_{(1,2)}, \delta_{(2,1)}}}, where \m{\cna[2]{0}} is the constant zero function and \m{\delta_a(x) = 1} if \m{x=a} and \m{\delta_a(x)=0} otherwise. \end{lemma} \begin{proof} It is easy to see that the listed binary functions belong to the clone, namely \begin{align*} \cna[2]{0} &= \composition{T}{\eni[2]{1},\eni[2]{1},\eni[2]{1},\eni[2]{1}},\\ \delta_{(1,2)} &= \composition{T}{\eni[2]{1},\eni[2]{1},\eni[2]{2},\eni[2]{2}}
= \composition{T}{\eni[2]{1},\eni[2]{2},\eni[2]{1},\eni[2]{2}},\\ \delta_{(2,1)} &= \composition{T}{\eni[2]{2},\eni[2]{1},\eni[2]{2},\eni[2]{1}}
= \composition{T}{\eni[2]{2},\eni[2]{2},\eni[2]{1},\eni[2]{1}}. \end{align*} It is not hard to verify that the given subset is a subuniverse of \m{\algwops{A}{T}^{A^2}}. Any \nbdd{T}composition involving only projections except for the ones shown to yield~\m{\delta_{(1,2)}} or~\m{\delta_{(2,1)}} produces~\m{\cna[2]{0}}. Any composition involving~\m{\cna[2]{0}}, or just the \m{\delta_a}~functions yields again the constant map~\m{\cna[2]{0}}. Therefore, only compositions involving the \m{\delta_a}~functions \emph{and} projections have to be checked. If all four of them are substituted into~\m{T} (in any order), the result is~\m{\cna[2]{0}}. If only one projection (and possibly some non\dash{}projections) are substituted, then in most cases, the result is~\m{\cna[2]{0}}, and for a few substitutions it is one of the \m{\delta_a}~functions. If both projections and only one of the \m{\delta_a}~functions are substituted, the result is either the substituted function~\m{\delta_a} or~\m{\cna[2]{0}}. \end{proof}
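The defining compositions in the proof can be checked by direct enumeration; the Python sketch below (with our own encoding of~$T$ and of the \m{\delta_a}~functions) confirms them for \m{k=3}.

```python
from itertools import product

A = (0, 1, 2)

def T(*x):
    """Snow's generator for k = 3 (our encoding): value 1 exactly
    on the tuples (1,1,2,2) and (1,2,1,2)."""
    return 1 if x in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

def delta(a):
    """delta_a(x, y) = 1 if (x, y) = a and 0 otherwise."""
    return lambda x, y: 1 if (x, y) == a else 0

d12, d21 = delta((1, 2)), delta((2, 1))

# The variable identifications of T used in the proof, plus the
# diagonal identification yielding the constant zero function.
checks = all(
    T(x, x, y, y) == T(x, y, x, y) == d12(x, y)
    and T(y, x, y, x) == T(y, y, x, x) == d21(x, y)
    and T(x, x, x, x) == 0
    for x, y in product(A, repeat=2))
```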
With the aim of finding separating binary functions in \m{\bicent{\set{T}}} for \m{\abs{A}=3}, we collect some properties of binary operations in upper approximations of~\m{\bicent{\set{T}}}.
\begin{lemma}\label{lem:observations-T*1*} Let \m{A=\set{0,1,2}} and \m{g\in \set{T}^{*(1)*(2)}}, then the following implications hold: \begin{enumerate}[(a)] \item \m{g(1,2)=2 \implies \forall a\in A\colon g(0,a)=a}. \item \m{g(2,1)=2 \implies \forall a\in A\colon g(a,0)=a}. \item \m{g(1,2)\in\set{0,1} \implies \forall a\in A\colon g(0,a)=0}. \item \m{g(2,1)\in\set{0,1} \implies \forall a\in A\colon g(a,0)=0}. \end{enumerate} \end{lemma} \begin{proof} By Corollary~\ref{cor:Tstar1} we have \m{g\in\cent{\lset{u_{2,a}}{a\in A}}}. This implies for all \m{a\in A} that \m{a=u_{2,a}(2)=u_{2,a}(g(1,2)) = g(u_{2,a}(1),u_{2,a}(2))=g(0,a)} provided~\m{g(1,2)=2}. A symmetric argument works for \m{g(2,1)=2}. Similarly, if \m{g(1,2)\in\set{0,1}}, then \m{0=u_{2,a}(g(1,2)) = g(u_{2,a}(1),u_{2,a}(2))=g(0,a)} for all \m{a\in A}, and symmetrically, if \m{g(2,1)\in\set{0,1}}. \end{proof}
Not very surprisingly, \m{\ncent[1]{\set{T}}} does not encode enough information about \m{\cent{\set{T}}} to determine functions in \m{\bicent{\set{T}}} sufficiently well. However, using the description of \m{\ncent[2]{\set{T}}} available for \m{\abs{A}=3} from Lemma~\ref{lem:Tstar2}, we are able to derive a more promising result: for \m{\abs{A}=3} there is a unique binary function in \m{\set{T}^{*(2)*(2)}\setminus\genClone{\set{T}}}. This function might---and although we do not know it yet at this point, it actually will---serve to distinguish \m{\bicent{\set{T}}} and \m{\genClone{\set{T}}}. \begin{lemma}\label{lem:unique-bin-func} For \m{A=\set{0,1,2}} we have\footnote{ The correctness of this lemma can be checked with the Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the ancillary file \texttt{func\_Tc2c2.z3}.} \m{\set{T}^{*(2)*(2)}=\genClone[2]{\set{T}}\mathbin{\dot{\cup}}\set{f}} where for all \m{x,y\in A} \[f(x,y) = \begin{cases}
1 &\text{if } \set{x,y} = \set{1,2},\\
0 &\text{else.}
\end{cases}\] \end{lemma} \begin{proof} The proof is by a systematic case distinction. Let \m{g\in\set{T}^{*(2)*(2)}}, which implies that \m{g\in\set{T}^{*(2)*}=\cent{\genClone{\ncent[2]{\set{T}}}} \subs \set{T}^{*(1)*}} since \m{\ncent[1]{\set{T}}\subs\genClone{\ncent[2]{\set{T}}}}. Hence, we can apply the implications from Lemma~\ref{lem:observations-T*1*} to~$g$. \par Assume \m{g(1,2) = 2}. It follows by Lemma~\ref{lem:observations-T*1*} that \m{g(0,a)=a} for all \m{a\in A}. Our goal is to show that \m{g=\eni[2]{2}}. For a contradiction, suppose that \m{g(2,1)=2}. Since \m{g\in\cent{\set{z_1}}} by Lemma~\ref{lem:Tstar2}, we obtain \[ 1 = z_1(2,2) = z_1(g(1,2),g(2,1)) = g(z_1(1,2),z_1(2,1))
= g(0,0), \] in contradiction to $g(0,0)=0$ derived above. Hence \m{g(2,1)\in\set{0,1}}. Using again Lemma~\ref{lem:observations-T*1*}, this implies \m{g(a,0)=0} for all \m{a\in A}. \par Again, for a contradiction, we suppose that \m{g(2,1)=0}. Since \m{g\in\cent{\set{f_{0,(1,1,1,0)}}}} by Lemma~\ref{lem:Tstar2}, we get \begin{align*} 1&=f_{0,(1,1,1,0)}(2,0)=f_{0,(1,1,1,0)}(g(1,2),g(2,1))\\
&= g(f_{0,(1,1,1,0)}(1,2),f_{0,(1,1,1,0)}(2,1)) = g(1,0), \end{align*} which contradicts \m{g(1,0)=0}. \par Hence \m{g(2,1)=1}. Then, since \m{g\in\cent{\set{f_{0,(c,c,c,c)}}}} for \m{c\in\set{1,2}} by Lemma~\ref{lem:Tstar2}, we get \begin{align*} c&=f_{0,(c,c,c,c)}(2,1)=f_{0,(c,c,c,c)}(g(1,2),g(2,1))\\
&= g(f_{0,(c,c,c,c)}(1,2),f_{0,(c,c,c,c)}(2,1)) = g(c,c), \end{align*} which shows that \m{g=\eni[2]{2}}. Note that a symmetric argument shows that the assumption \m{g(2,1)=2} implies \m{g=\eni[2]{1}}. \par Assume \m{\set{g(1,2),g(2,1)}\subs\set{0,1}}. By Lemma~\ref{lem:observations-T*1*} we get that \m{g(0,a)=g(a,0)=0} for all \m{a\in A}. Clearly, \m{h=\composition{g}{\id_A,\id_A}\in \set{T}^{*(2)*(1)}\subs\set{T}^{*(1)*(1)}= \set{\id_A,\cna[1]{0}}}, see Lemma~\ref{lem:T*1*-small}. For a contradiction, suppose that \m{h=\id_A}, whence \m{g(2,2)=2}. As \m{g\in\cent{\set{f_{0,(2,2,2,2)}}}} by Lemma~\ref{lem:Tstar2}, we get \begin{align*}
2&=f_{0,(2,2,2,2)}(2,g(1,2))=f_{0,(2,2,2,2)}(g(2,2),g(1,2))\\
&= g(f_{0,(2,2,2,2)}(2,1),f_{0,(2,2,2,2)}(2,2)) = g(2,0), \end{align*} which contradicts \m{g(2,0)=0} from before. Therefore, \m{h=\cna[1]{0}}, which shows that \m{g\in\set{\cna[2]{0},\delta_{(1,2)},\delta_{(2,1)},f}}. \par Hence, according to Lemma~\ref{lem:Tclone2}, \m{g\in\genClone[2]{\set{T}}\cup\set{f}}. For the converse inclusion, one uses that the containment \m{\genClone{\set{T}}\subs\bicent{\set{T}}\subs\set{T}^{*(2)*}} is trivially true and one verifies that, indeed, \m{f\in\set{T}^{*(2)*}}. We postpone the latter until Lemma~\ref{lem:pp-formula}, where we shall show more generally that even \m{f\in\bicent{\set{T}}\subs\set{T}^{*(2)*}}. Alternatively, one may ask a computer to check that~$f$ commutes with all the $65$~functions given in Lemma~\ref{lem:Tstar2}, immediately giving a positive answer.\footnote{ This can, for example, be done with the Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the ancillary file \texttt{func\_Tc2c2.z3}.} \end{proof}
So far, for the binary operation~$f$ exhibited in Lemma~\ref{lem:unique-bin-func} we do not know whether it actually belongs to~\m{\bicent{\set{T}}} as we have only worked with upper approximations of this bicentraliser, not with~\m{\bicent{\set{T}}} itself. \begin{remark}\label{rem:f-in-cent-3-cent-T} Without much more ingenuity but some additional computational effort, it is possible to show that the unique binary operation~$f$ from Lemma~\ref{lem:unique-bin-func} belongs to \m{\set{T}^{*(3)*}}, which is even closer to \m{\bicent{\set{T}}}. \par To do this one needs to enumerate~\m{\ncent[3]{\set{T}}}. Since \m{\cent{\set{T}}} is a clone, for every ternary $g\in\cent{\set{T}}$ each of its identification minors $\composition{g}{\eni[2]{1},\eni[2]{1},\eni[2]{2}}$, $\composition{g}{\eni[2]{1},\eni[2]{2},\eni[2]{1}}$ and \m{\composition{g}{\eni[2]{2},\eni[2]{1},\eni[2]{1}}} must also belong to the same clone, i.e.\ to \m{\ncent[2]{\set{T}}}. However, the latter set has been completely described in Lemma~\ref{lem:Tstar2} above; it contains precisely~65 functions. Thus, the behaviour of~$g$ on tuples of the form~\m{(x,x,y)} has to coincide with one of these 65 functions; likewise, the results on tuples of the form~\m{(x,y,x)} and of the form~\m{(y,x,x)} are each determined by one of these functions. Moreover, on the three tuples of the form \m{(x,x,x)}, the three binary operations from \m{\ncent[2]{\set{T}}} have to prescribe non\dash{}contradictory values. Therefore, except for the six tuples that are permutations of\/ \m{(0,1,2)}, the values of \m{g} are determined by one of at most \m{65^3} choices. Altogether no more than \m{65^3\cdot 3^6 = 200\,201\,625} ternary functions have to be considered. 
\par This can be done by a computer, resulting in a list\footnote{ This list can be computed using the function \texttt{findallternaries\_optimised()} from the ancillary file \texttt{commutationTs.cpp}, and it is given in the file \texttt{Tcent3\_sorted.txt}.} of exactly $1\,048\,578$~functions belonging to~\m{\ncent[3]{\set{T}}}. Again for each of these ternary operations it is readily verified by a computer that they commute\footnote{ This verification can be carried out using the function \texttt{readTcent3("Tcent3\_sorted.txt")} from the ancillary file \texttt{commutationTs.cpp} and confirms once more the concluding sentence in the proof of Lemma~\ref{lem:unique-bin-func}.} with the binary operation~\m{f} given in Lemma~\ref{lem:unique-bin-func}. Consequently, by a complete case distinction, we have indeed that \m{f\in\set{T}^{*(3)*}}. Together with Lemma~\ref{lem:Tclone2}, this proves \m{f\in\set{T}^{*(3)*(2)}\setminus\genClone[2]{\set{T}}} for \m{A=\set{0,1,2}}.
\end{remark}
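The two-variable part of the centraliser itself can be enumerated directly. The following Python sketch is an independent illustration of ours (the paper's own computations use C++ and Z3 ancillary files instead): it tests all $3^9$ binary operations on $A=\set{0,1,2}$ for commutation with~$T$, confirming the count of 65 functions quoted above from Lemma~\ref{lem:Tstar2}, and also that the separating function~$f$ commutes with~$T$.

```python
from itertools import product

A = (0, 1, 2)
S = ((1, 1, 2, 2), (1, 2, 1, 2))        # the only quadruples T maps to 1
QUADS = list(product(A, repeat=4))
Ttab = {q: 1 if q in S else 0 for q in QUADS}

def commutes(g, pairs):
    """g is the value table of a binary operation, indexed by 3*x + y.
    Check g(T(a), T(b)) == T(g applied columnwise) for all listed (a, b)."""
    for a, b in pairs:
        lhs = g[3 * Ttab[a] + Ttab[b]]
        rhs = Ttab[(g[3 * a[0] + b[0]], g[3 * a[1] + b[1]],
                    g[3 * a[2] + b[2]], g[3 * a[3] + b[3]])]
        if lhs != rhs:
            return False
    return True

ALL_PAIRS = [(a, b) for a in QUADS for b in QUADS]
# Pairs with a row from S impose strong constraints; checking them first
# rejects almost all candidates cheaply before the full quadratic check.
QUICK = [(a, b) for a, b in ALL_PAIRS if a in S or b in S]

# g(0,0) = 0 is necessary: take a = b = (0,0,0,0) in the commutation identity.
cent2 = [g for g in product(A, repeat=9)
         if g[0] == 0 and commutes(g, QUICK) and commutes(g, ALL_PAIRS)]

assert len(cent2) == 65                 # the count stated for Lemma Tstar2
f_table = tuple(1 if {x, y} == {1, 2} else 0 for x, y in product(A, A))
assert f_table in cent2                 # the separating function commutes with T
```

That $f$ commutes with~$T$ can also be seen directly: both sides of the commutation identity are always zero, since $\im(T)=\set{0,1}$ and $f$ never outputs a~$2$.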
It is not a suitable strategy to continue indefinitely with individual verifications that the unique binary operation~$f$ from Lemma~\ref{lem:unique-bin-func} belongs to more and more accurate upper approximations \m{\set{T}^{*(\ell)*}}, \m{\ell\rightarrow\infty}, of \m{\bicent{\set{T}}}. Instead we need a more creative Galois-theoretic argument to be sure that \m{f\in\bicent{\set{T}}}. This confirmation is given in the following lemma in the form of a primitive positive definition. As it turns out, the argument used there for \m{k=\abs{A}=3} and the definition of~$f$ from Lemma~\ref{lem:unique-bin-func} can then be generalised to any \m{k\geq 3}, see Theorem~\ref{thm:pp-defining-separating-function}. However, we think it is instructive to first show where the idea for the theorem originates.
\begin{lemma}\label{lem:pp-formula} The binary function \m{f \in\set{T}^{*(2)*(2)}} defined in Lemma~\ref{lem:unique-bin-func} indeed belongs to~\m{\bicent{\set{T}}}, for its graph is definable by a primitive positive formula \footnote{The correctness of this formula has been checked with the Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3}; see the script \texttt{checkformulaforbinfunc.z3} available as an ancillary file.} over \m{A=\set{0,1,2}} involving only the graph of\/~$T$:
\begin{align*} \bigl\{(&x_2,x_3,x_5)\in A^3 \mathrel{\big\vert} f(x_2,x_3)=x_5\bigr\}\\ &=\lset{(x_2,x_3,x_5)\in A^3}{\exists x_1,x_4\in A\colon \begin{aligned}[c]
T(x_1,x_2,x_3,x_4) &= x_5 \land{}\\
(x_2,x_3,x_2,x_3,x_1,x_2,x_4,x_3) &\in \ker(T) \land{}\\
(x_3,x_2,x_3,x_2,x_1,x_3,x_4,x_2) &\in \ker(T)
\end{aligned}}\\ &=\lset{(x_2,x_3,x_5)\in A^3}{\exists x_1,x_4,u,v\in A\colon
\begin{aligned}[c]
T(x_1,x_2,x_3,x_4) &= x_5 \land{}\\
T(x_2,x_3,x_2,x_3) &= u \land{}\\
T(x_1,x_2,x_4,x_3) &= u \land{}\\
T(x_3,x_2,x_3,x_2) &= v \land{}\\
T(x_1,x_3,x_4,x_2) &= v
\end{aligned}} \end{align*} \end{lemma} \begin{proof} The idea for constructing the graph of~$f$ is to consider the full graph of~$T$, that is, the relation \[\lset{(x_1,x_2,x_3,x_4,x_5)\in A^5}{T(x_1,x_2,x_3,x_4) = x_5},\] and to project it to the second, third and fifth coordinate. This is motivated by the fact that~$T$ sends only two arguments, $(1,1,2,2)$ and $(1,2,1,2)$, to one and every other quadruple to zero, and the middle two components of the two mentioned quadruples coincide with those pairs that are mapped to one by~$f$. Of course, such a projection will not result in a function graph, but it almost does. The pairs $(1,2)$ and $(2,1)$ will be assigned two values each: the value one (as desired for~$f$) and an erroneous value zero caused by some other quadruples $(x_1,x_2,x_3,x_4)$ with the same middle component $(1,2)$ or $(2,1)$. Hence, the goal is to remove those quadruples from the relation before projecting. There are 16 disturbing argument tuples in the graph of~$T$ altogether: \[ \lset{(u,a,b,v)}{\set{a,b}=\set{1,2}, u,v\in A}\setminus\set{(1,1,2,2),(1,2,1,2)}.\] They need to be removed by imposing additional conditions that have to be satisfied by the quadruples $(1,2,1,2)$ and $(1,1,2,2)$ since we have to ensure that these are kept in the relation.
It turns out that this is possible by imposing just two additional requirements involving the kernel of~$T$. The kernel is an equivalence relation on quadruples that we interpret as an octonary relation on~$A$, and it partitions $A^4$ into two classes: $\set{(1,2,1,2),(1,1,2,2)}$ and the complement~$B$ of this set in~$A^4$. In particular~$B$ includes all tuples containing a zero or three ones or three twos or a two in the first position or a one in the last position. Using this observation it is easy to verify that the following two sets jointly (i.e.\ their intersection) exclude all 16 undesired quadruples. So these two sets represent the restrictions that we are going to apply to the graph of~$T$: \begin{align*}
\lset{(x_1,\dots,x_4)\in A^4}{T(x_2,x_3,x_2,x_3) = T(x_1,x_2,x_4,x_3)} &=A^4\setminus\set{ \begin{array}{@{}*{8}{c@{\,}}c@{}} 0&0&0&1&1&1&2&2&2\\ 1&1&1&1&1&2&1&1&1\\ 2&2&2&2&2&2&2&2&2\\ 0&1&2&0&1&1&0&1&2 \end{array}}\\
\lset{(x_1,\dots,x_4)\in A^4}{T(x_3,x_2,x_3,x_2) = T(x_1,x_3,x_4,x_2)} &=A^4\setminus\set{ \begin{array}{@{}*{8}{c@{\,}}c@{}} 0&0&0&1&1&1&2&2&2\\ 2&2&2&2&2&2&2&2&2\\ 1&1&1&1&1&2&1&1&1\\ 0&1&2&0&1&1&0&1&2 \end{array}} \end{align*} Both sets also exclude the tuple $(1,2,2,1)$, but this is not harmful, as there are sufficiently many other quadruples left having $(2,2)$ as their middle component, for example $(0,2,2,0)$.
\end{proof}
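The formula of the lemma can also be verified exhaustively over the 81 choices of $(x_1,x_2,x_3,x_4)$. The following Python sketch is our own check (the paper's verification uses the Z3 script \texttt{checkformulaforbinfunc.z3}); it assumes only the stated behaviour of~$T$ and the definition of~$f$.

```python
from itertools import product

A = (0, 1, 2)

def T(a, b, c, d):
    # T maps exactly (1,1,2,2) and (1,2,1,2) to 1, every other quadruple to 0
    return 1 if (a, b, c, d) in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

def f(x, y):
    return 1 if {x, y} == {1, 2} else 0

graph_f = {(x2, x3, f(x2, x3)) for x2, x3 in product(A, A)}

# Project the constrained graph of T to (x2, x3, x5), quantifying x1, x4 away.
defined = set()
for x1, x2, x3, x4 in product(A, repeat=4):
    if (T(x2, x3, x2, x3) == T(x1, x2, x4, x3)          # first kernel condition
            and T(x3, x2, x3, x2) == T(x1, x3, x4, x2)):  # second kernel condition
        defined.add((x2, x3, T(x1, x2, x3, x4)))

assert defined == graph_f
```

In particular, the two kernel conditions force $x_1=1$, $x_4=2$ whenever $(x_2,x_3)\in\set{(1,2),(2,1)}$, so the erroneous zero values discussed in the proof indeed disappear.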
As the arity of~$T$ is \m{(k-1)^2} where \m{k=\abs{A}}, it is perhaps helpful to arrange the arguments of~$T$ in a \nbdd{((k-1)\times(k-1))}square. Expressing the primitive positive formula from Lemma~\ref{lem:pp-formula} using such \nbdd{(2\times2)}squares then yields \[ \exists\,x_1,x_4\in \set{0,1,2}\colon T\apply{\begin{smallmatrix}x_1&x_2\\x_3&x_4\end{smallmatrix}} = x_5 \land T\apply{\begin{smallmatrix}x_2&x_3\\x_2&x_3\end{smallmatrix}} = T\apply{\begin{smallmatrix}x_1&x_2\\x_4&x_3\end{smallmatrix}} \land T\apply{\begin{smallmatrix}x_3&x_2\\x_3&x_2\end{smallmatrix}} = T\apply{\begin{smallmatrix}x_1&x_3\\x_4&x_2\end{smallmatrix}}. \] This kind of interpretation is key for the understanding of the following main result.
\begin{theorem}\label{thm:pp-defining-separating-function} Let $A=\set{0,\dotsc,k-1}$ where $k\geq 3$ and put $n=k-1$. Let the function $f\colon A^n\to A$ be defined by \[f(\bfa{x}) = \begin{cases}
1 &\text{if } \bfa{x}\in\set{\asc,\desc},\\
0 &\text{else},
\end{cases}\] where $\asc=(1,\dotsc,n)$ and $\desc=(n,\dotsc,1)$. The graph of~$f$ can be defined by a primitive positive formula using the graph of\/~$T$ as follows:
\begin{align*} \bigl\{(&\mathord{\swarrow},y)\in A^k \mathrel{\big\vert} f(\mathord{\swarrow})=y\bigr\}\\ &=\lset{(\mathord{\swarrow},y)\in A^k}{\apply{\exists x_{ij}\in A}_{\substack{1\leq i,j\leq n\\i+j\neq k}}\colon \begin{aligned} T(\rightarrow_1,\rightarrow_2,\dotsc,\rightarrow_n) &= y\\ T(\mathord{\swarrow},\mathord{\swarrow},\dotsc,\mathord{\swarrow}) &=T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)\\ T(\mathord{\nearrow},\mathord{\nearrow},\dotsc,\mathord{\nearrow}) &=T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n) \end{aligned}}\\ &=\lset{(\mathord{\swarrow},y)\in A^k}{
\apply{\exists x_{ij}\in A}_{\substack{1\leq i,j\leq n\\i+j\neq k}}\,
\exists u,v\in A\colon
\begin{aligned}[c]
T(\rightarrow_1,\rightarrow_2,\dotsc,\rightarrow_n) &= y \land{}\\
T(\mathord{\swarrow},\mathord{\swarrow},\dotsc,\mathord{\swarrow}) &= u \land{}\\
T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n) &= u \land{}\\
T(\mathord{\nearrow},\mathord{\nearrow},\dotsc,\mathord{\nearrow}) &= v \land{}\\
T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n) &= v
\end{aligned}}, \end{align*} where the arrows represent the following sequences of variables for $1\leq i\leq n$: \begin{align*} \mathord{\swarrow} &= x_{1,n},x_{2,n-1},\dotsc,x_{n-1,2},x_{n,1}\\ \mathord{\nearrow} &= x_{n,1},x_{n-1,2},\dotsc,x_{2,n-1},x_{1,n}\\ \rightarrow_i &= x_{i,1},\dotsc,x_{i,n}\\ \leftarrow_i &= x_{i,n},\dotsc,x_{i,1}\\ \downarrow_i &= x_{1,i},\dotsc,x_{n,i}\\ \uparrow_i &= x_{n,i},\dotsc,x_{1,i} \end{align*} \end{theorem} \begin{proof} We imagine the $n^2$ variables of~$T$ arranged in a square as follows \[\Sqre = \begin{matrix}x_{1,1},\dotsc,x_{1,n}\\ \vdots\\ x_{n,1},\dotsc,x_{n,n}\end{matrix}, \] which we feed row-wise into~$T$, that is, as a notational convention we identify $\Sqre$ with $\rightarrow_1,\dotsc,\rightarrow_n$ and thus stipulate $T(\Sqre)\defeq T(\rightarrow_1,\dotsc,\rightarrow_n) = T(x_{1,1},\dotsc,x_{n,n})$. Reversing this line of thought, we can as well start with some square~$\Sqre$ of variables, feed its elements into~$f$ in some order (indicated, for instance, by certain arrows) and then interpret this sequence of variables as rows of a new square. For example, given~$\Sqre$, the value $T(\downarrow_1,\dotsc,\downarrow_n)$ is the result of~$T$ applied to a square whose rows are the columns of~$\Sqre$; so we apply~$T$ to the transposed~$\Sqre$. Subsequently, we shall often consider sequences as squares where the rows are connected to the ordering of the given sequence and the meaning of columns, diagonals etc.\ is tied to this particular square interpretation. \par Two squares play a special role for~$T$, namely those where~$T$ outputs~$1$. First, we have $T(p_1) = 1$ where $p_1$ is given by ${\rightarrow_i} = (i,\dotsc,i)$ for all $1\leq i\leq n$ (that is, ${\downarrow_i}=\asc$ for all $1\leq i\leq n$ and also $\mathord{\swarrow}=\asc$). 
Second we have $T(\asc,\dotsc,\asc) = 1$, and we denote the square all of whose rows $\rightarrow_i$ are $\asc$ by~$p_2$ (this means ${\downarrow_i} = (i,\dotsc,i)$ for all $1\leq i\leq n$ and $\mathord{\swarrow}=\desc$). \par With the square interpretation in mind we form the set \[\theta = \lset{(\Sqre,y)\in A^{n^2+1}}{ \begin{aligned} T(\Sqre) &= y\\ T(\mathord{\swarrow},\mathord{\swarrow},\dotsc,\mathord{\swarrow}) &=T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)\\ T(\mathord{\nearrow},\mathord{\nearrow},\dotsc,\mathord{\nearrow}) &=T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n) \end{aligned}}\] and then project it to the diagonal $\mathord{\swarrow}$ and the last coordinate~$y$, representing the image value of~$T$. To show that this projection coincides with the graph of~$f$, we shall prove the following statements: \begin{enumerate}[(i)] \item\label{item:p1}
For every $(\Sqre,y)\in \theta$ where $\mathord{\swarrow} = \asc$, it
follows that $y=1$. This means that $\mathord{\swarrow}=\asc$ implies
$\Sqre=p_1$. \item\label{item:p2}
For every $(\Sqre,y)\in \theta$ where $\mathord{\swarrow} = \desc$, it
follows that $y=1$. This means that $\mathord{\swarrow}=\desc$ implies that
$\Sqre=p_2$. \item\label{item:whole-graph-present}
For every $\bfa{x}\in A^n\setminus\set{\asc,\desc}$ there is
some~$\Sqre$ such that $(\Sqre,0)\in\theta$ and
$\mathord{\swarrow} = \bfa{x}$. Moreover, $(p_1,1),(p_2,1)\in\theta$. \end{enumerate} Now, if $(\Sqre,y)\in\theta$ then $y\in\im(T) = \set{0,1}$. If $y=1$, then $\Sqre=p_1$ or $\Sqre=p_2$, whence $\mathord{\swarrow}=\asc$ or $\mathord{\swarrow}=\desc$ and both $(\asc,1),(\desc,1)\in\graph{f}$. If $y=0$, then $\Sqre \neq p_1$, so statement~\eqref{item:p1} yields $\mathord{\swarrow}\neq \asc$; similarly, $\Sqre\neq p_2$ and so $\mathord{\swarrow}\neq\desc$ by statement~\eqref{item:p2}. Hence in each case we have $(\mathord{\swarrow},y)\in\graph{f}$ which shows that the projection of~$\theta$ is a subset of the graph of~$f$. Conversely, statement~\eqref{item:whole-graph-present} shows that the full graph of~$f$ is obtainable as a projection of~$\theta$. \par We proceed with the proof of the three statements. \begin{enumerate}[(i)] \item If $(\Sqre,y)\in\theta$ and $\mathord{\swarrow}=\asc$, then
$1= T(\asc,\asc,\dotsc,\asc) =
T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$.
This means $(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)
\in\set{p_1,p_2}$. Because $\mathord{\swarrow}=\asc$, we have $x_{1,n}=1$ and
$x_{n,1}=n$, so the \nbdd{n}th column of
$(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$ is not constant
and hence the latter cannot be equal to~$p_2$. Thus it is~$p_1$ and
therefore also $\Sqre = p_1$. \item If $(\Sqre,y)\in\theta$ and $\mathord{\swarrow}=\desc$, then reading
backwards we have $\mathord{\nearrow}=\asc$, and therefore
\m{1= T(\asc,\asc,\dotsc,\asc) =
T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)}, whence
\m{(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)\in\set{p_1,p_2}}.
As $\mathord{\swarrow}=\desc$, we have $x_{1,n}=n$ and $x_{n,1}=1$, so
the \nbdd{n}th column of
\m{(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)} is not constant
(recall that, by our convention, these tuples are fed as rows
into~$T$). This means that
\m{(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)} must have constant
rows (be equal to~$p_1$), so \m{{\downarrow_1} = (1,\dotsc,1)}, and
\m{{\downarrow_i} = (i,\dotsc,i)} for $2\leq i\leq n$. This means
that~$\Sqre$ has constant columns with values $1,\dotsc,n$, which
means that $\Sqre=p_2$. \item First we check that $(p_1,1)\in\theta$. Clearly, $T(p_1)=1$. For
$p_1$ we have $\mathord{\swarrow}=\asc$ and $\mathord{\nearrow}=\desc$, so
$T(\asc,\dotsc,\asc) = 1 = T(p_1) =
T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$ holds as $p_1$
has constant rows, and
$T(\desc,\dotsc,\desc) = 0=
T(\asc,\desc,\dotsc,\desc)=
T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)$ is true, as
well.
\par
Next we verify that $(p_2,1)\in\theta$. Again, $T(p_2)=1$.
This time we have $\mathord{\swarrow}=\desc$ and $\mathord{\nearrow}=\asc$, so
\m{T(\desc,\dotsc,\desc)=0 = T(\asc,\desc,\dotsc,\desc)
=T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)}. Furthermore,
\m{T(\asc,\dotsc,\asc)=1 =T(p_1)
=T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)} because the columns
of~$p_2$ have constant values $1,\dotsc,n$.
\par
Finally, consider some $\bfa{x}\in A^n\setminus\set{\asc,\desc}$
and $\Sqre$ with $\mathord{\swarrow}=\bfa{x}$ and having zeros everywhere
else.
All rows of
$(\mathord{\swarrow},\dotsc,\mathord{\swarrow})$ and of $(\mathord{\nearrow},\dotsc,\mathord{\nearrow})$
are identical, so none of these two squares is~$p_1$. If one of
these were~$p_2$, then $\bfa{x}=\mathord{\swarrow}=\asc$ or $\mathord{\nearrow}=\asc$,
which would mean $\bfa{x}=\mathord{\swarrow}=\desc$. Both options are
excluded by the choice of~$\bfa{x}$. Since neither of these two
squares is $p_1$ or~$p_2$, we have
$T(\mathord{\swarrow},\dotsc,\mathord{\swarrow})=0=T(\mathord{\nearrow},\dotsc,\mathord{\nearrow})$.
As $\Sqre$ has zeros outside the \nbdd{\mathord{\swarrow}}diagonal, it
follows that also
$(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$ and
$(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)$ have zeros somewhere
and are hence mapped to zero by~$T$. Thus~$\Sqre$ satisfies the
two conditions regarding the kernel of~$T$. As~$\Sqre$ contains
zeros, we also have $T(\Sqre) = 0 = y$, concluding the argument.
\qedhere \end{enumerate} \end{proof}
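For small carriers the primitive positive definition of the theorem can be checked by brute force over all squares of variables. The following Python sketch is our own test, built only from the stated definitions of~$T$ (value~$1$ exactly on~$p_1$ and~$p_2$) and of~$f$; it confirms the theorem for $k=3$ and $k=4$.

```python
from itertools import product

def check(k):
    """Verify that projecting theta to (anti-diagonal, y) yields graph(f)."""
    n = k - 1
    A = range(k)
    asc = tuple(range(1, n + 1))
    desc = asc[::-1]
    p1 = tuple((i,) * n for i in asc)   # square with constant rows 1, ..., n
    p2 = (asc,) * n                     # square with every row equal to asc

    def T(rows):                        # rows: an n-tuple of n-tuples over A
        return 1 if rows in (p1, p2) else 0

    def f(x):
        return 1 if x in (asc, desc) else 0

    graph_f = {x + (f(x),) for x in product(A, repeat=n)}
    projection = set()
    for flat in product(A, repeat=n * n):
        sq = tuple(flat[i * n:(i + 1) * n] for i in range(n))
        sw = tuple(sq[i][n - 1 - i] for i in range(n))      # anti-diagonal
        ne = sw[::-1]
        rows_rev = (sq[0],) + tuple(r[::-1] for r in sq[1:])   # ->1, <-2, ..., <-n
        cols = tuple(zip(*sq))
        cols_rev = (cols[0],) + tuple(c[::-1] for c in cols[1:])  # v1, ^2, ..., ^n
        if T((sw,) * n) == T(rows_rev) and T((ne,) * n) == T(cols_rev):
            projection.add(sw + (T(sq),))
    assert projection == graph_f
    return True

assert check(3) and check(4)
```

For $k=3$ this reproduces the verification of Lemma~\ref{lem:pp-formula}; the case $k=4$ already exercises the general pattern of the two kernel conditions.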
As a corollary we obtain that the clones of term operations of the example algebras $\algwops{A}{T}$ constructed by Snow in~\cite{SnowGeneratingPrimitivePositiveClones} are not centraliser clones, so these algebras provide no counterexample to the Burris-Willard conjecture or to Dani\v{l}\v{c}enko's results.
\begin{corollary}\label{cor:f-separating-clones} For every carrier~$A$ of cardinality $k\geq 3$ the \nbdd{(k-1)}ary function~$f$ defined in Theorem~\ref{thm:pp-defining-separating-function} satisfies $f\in \bicent{\set{T}}\setminus\genClone{\set{T}}$. \end{corollary} \begin{proof} By Theorem~\ref{thm:pp-defining-separating-function} we have $f\in\bicent{\set{T}}$; since~$f$ is \nbdd{(k-1)}ary, it cannot belong to the clone generated by~$T$ as it maps two distinct tuples to one and is not a projection (cf.\ Lemma~\ref{lem:Tclone2-general}). \end{proof}
\section{Some computational remarks}\label{sect:computations} We conclude with a few comments on computational aspects related to verifying that for \m{A=\set{0,1,2}}, the simplest case in question, the binary function \m{f\in\set{T}^{*(2)*(2)}\setminus\genClone[2]{\set{T}}} found in Lemma~\ref{lem:unique-bin-func} actually belongs to \m{\bicent{\set{T}}}. \par
The first possibility is based on trusting the classification results shown by Da\-ni\v{l}\-\v{c}en\-ko in~\cite[Theorems~4, 5, pp.~103, 105]{Danilcenko1979-thesis}. Using the equivalence of statements~\eqref{item:cent-leq-n} and~\eqref{item:cent-n-cent-bicent} in Proposition~\ref{prop:char-cdeg}, these theorems imply that \m{\bicent{\set{T}}=\set{T}^{*(3)*}}, which contains~$f$ by the calculations described in Remark~\ref{rem:f-in-cent-3-cent-T}. Believing in Dani\v{l}\v{c}enko's thesis obviously does not render Theorem~\ref{thm:pp-defining-separating-function} obsolete, as the latter also covers the cases where \m{\abs{A}>3}. \par
The second option we would like to discuss is whether it is feasible to compute a primitive positive formula over~\m{\graph{T}} that allows one to define~\m{\graph{f}}. The formula shown in Lemma~\ref{lem:pp-formula} uses five \nbdd{\graph{T}}atoms and four existentially quantified variables. Of course, these bounds are not known beforehand, and even if they were, simply trying to produce all formul\ae{} with $\ell=1,2,3,\dots$ \nbdd{\graph{T}}atoms and trying to find a \nbdd{3}variable projection that gives~$\graph{f}$ becomes unwieldy very quickly. Indeed, before even dealing with projections, there are \m{\kappa^\kappa} possible variable substitutions, where \m{\kappa\defeq\ell\cdot\arity(\graph{T})} for a primitive positive formula with $\ell$~atoms of type~$\graph{T}$ and at most~$\kappa$ variables. More concretely, to get the formula from Lemma~\ref{lem:pp-formula}, we would have \m{\ell=5} and \m{\arity(\graph{T})=5}, so \m{\kappa=25}, and \m{25^{25}\approx 10^{35}} substitutions are currently too many to check in a reasonable amount of time. \par
However, if \m{f\in\bicent{\set{T}}}, then \m{\graph{f}\in\Inv{\Pol{\set{\graph{T}}}}}, and there is a more systematic method to compute a primitive positive formula for a relation \m{\rho_0\in\Inv{F}} on a finite set~$A$ where \m{F=\Pol{\set{\rho_1,\dotsc,\rho_t}}}, \m{t\in\N}. It comes from an algorithm to compute~$\Fn{F}$ interpreted as a relation~\m{\Gamma_F(\chi_n)} of arity~$\abs{A}^n$, which is given in~\cite[4.2.5., p.~100 et seq.]{PoeKal}, combined with the proof of the second part of the main theorem on the $\PolOp\text{-}\InvOp$ Galois connection, showing that any \m{\rho_0\in\Inv{\Pol{Q}}} belongs to the relational clone generated by~$Q$, as it is primitive positively definable from \m{\Gamma_{F}(\chi_n)} where \m{F=\Pol{Q}} and \m{n=\abs{\rho_0}} (cf.~\cite[1.2.2.~Lemma, p.~53 et seq.]{PoeKal}). \par
The following is slightly more general than what is described in~\cite[4.2.5]{PoeKal} for we can deal with finitely many describing relations \m{\rho_1,\dotsc,\rho_t} for the polymorphism clone~$F$, while only one is used in~\cite{PoeKal}. Taking \m{Q=\set{\rho_1\times\dotsm\times\rho_t}} as a singleton in~\cite{PoeKal} is inefficient from a computational point of view, so we give a proof of this not very original modification. Additionally, we allow for a generating system~$\gamma_0$ of the relation~$\rho_0$ for which a formula is sought (although this is somehow implicit in~\cite[4.2.5]{PoeKal} as \m{\Gamma_{F}(\chi_n)} is generated by the \nbdd{n}element subrelation~$\chi_n$).
\begin{proposition}\label{prop:gen-rel-clone} Assume \m{Q\defeq\lset{\rho_\ell}{1\leq \ell\leq t}}, \m{F\defeq \Pol{Q}}, \m{\rho_0\in\Inv{F}} where \m{\rho_\ell\subs A^{m_\ell}} for \m{0\leq \ell\leq t}. Let \m{\gamma_0\subs\rho_0} with \m{n\defeq \abs{\gamma_0}} be a generating system of~$\rho_0$, that is, \m{\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}}. There is \m{m'\leq m_0} and \m{\gamma\subs\rho\in\Inv[m']{F}} and \m{\alpha\colon m_0\to m'} such that \m{\gamma_0=\lset{x\circ\alpha}{x\in\gamma}} where~$\gamma$ does not have any duplicate coordinates. If we imagine the tuples in~\m{\gamma_0} written as columns of an \nbdd{(m_0\times n)}matrix, then the distinct rows of this matrix are precisely the rows of the matrix whose columns form the tuples of~$\gamma$. Some of these $m'$~rows will be found as rows of a relation \m{\mu\subs A^L} with \m{\abs{\mu}=n} defined below. For notational simplicity we choose~$\alpha$ such that the rows with indices \m{1,\dotsc,m} have this property and put \m{p\defeq m'-m\geq0}. \par The matrix representation of the relation~$\mu$ has~$n$ columns (tuples) and~$L$ rows \m{\apply{\bfa{z}_i}_{0\leq i<L}} where \m{L=\sum_{\ell=1}^t s_\ell^n\cdot m_\ell} with \m{s_\ell=\abs{\rho_\ell}}. Let the columns of~$\mu$ arise by stacking on top of each other all possible submatrices of~$\rho_1$ with~$n$ columns, followed by all possible submatrices of~$\rho_2$, and so forth, finishing with all submatrices obtained by choosing~$n$ of the~$s_t$ columns of~$\rho_t$. Thus \m{\mu\subs\pi\defeq\rho_1^{s_1^n}\times\dotsm\times\rho_t^{s_t^n}}. Define the kernel relation \m{\epsilon\defeq \lset{(i,j)\in L^2}{\bfa{z}_i = \bfa{z}_j}} and identify variables in~\m{\pi} accordingly with \m{\delta_\epsilon = \lset{x\in A^L}{\forall\, (i,j)\in\epsilon\colon x_i=x_j}}. This gives \m{\sigma\defeq\pi\cap\delta_{\epsilon}} having the same row kernel as~$\mu$. 
By finding the first~$m$ rows of~$\gamma$ among the rows of~$\mu$, we find a projection~$\pr$ to an \nbdd{m}element set of indices such that \m{\gamma\subs\pr(\mu)\times A^p}. It follows that \m{\rho=\pr(\sigma)\times A^p
=\pr\apply{\apply{\rho_1^{s_1^n}\times
\dotsm\times \rho_t^{s_t^n}}\cap\delta_\epsilon}\times A^p}. \end{proposition} \begin{proof} To show that \m{\rho\subs\pr(\sigma)\times A^p} we note that \m{\pr(\sigma)\times A^p\in \Inv{F}} since for every \m{1\leq\ell\leq t} we have \m{\rho_\ell\in Q\subs\Inv{F}}. Moreover, as~$\rho$ is a projection of~$\rho_0$ in the same way as~$\gamma$ is a projection of~$\gamma_0$, and since \m{\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}}, we have \m{\rho=\gapply{\gamma}_{\algwops{A}{F}^{m'}}}. Due to \m{\gamma\subs\pr(\mu)\times A^p \subs \pr(\sigma)\times A^p}, the generating set~$\gamma$ is a subset of the invariant \m{\pr(\sigma)\times A^p}, and so is the generated invariant~$\rho$. \par For the converse inclusion we take a tuple $x\in\sigma$ and some \m{a\in A^p} and denote by~$y$ the tuple obtained from~$x$ by projection to the~$m$ indices that relate the first~$m$ rows \m{\bfa{v}_1,\dotsc,\bfa{v}_m} of~$\gamma$ to a certain section of~$x$. Since \m{x\in\sigma\subs\delta_\epsilon}, this tuple defines a function \m{f_x\colon B\to A} where \m{B\defeq\lset{\bfa{z}_i}{0\leq i<L}\sups
C\defeq\lset{\bfa{v}_i}{1\leq i\leq m}} by finding for any \m{\bfa{z}\in B} some \m{0\leq i<L} such that \m{\bfa{z}=\bfa{z}_i} and letting \m{f_x(\bfa{z}) \defeq x_i} (the choice of \m{i<L} is inconsequential as \m{x\in\delta_\epsilon}). The remaining rows \m{\bfa{v}_i} with \m{m<i\leq m+p} do not belong to~$B$ by the choice of~$m$ and~$p$. This means that $(y,a)$ defines a function \m{f_{y,a}\colon\lset{\bfa{v}_i}{1\leq i\leq m'}\to A}, where \m{f_x\restriction_C=f_{y,a}\restriction_C}. Moreover, it is possible to extend~$f_{y,a}$ to a globally defined function \m{f\colon A^n\to A} such that \m{f\restriction_{\set{\bfa{v}_i\mid 1\leq i\leq m'}}=f_{y,a}} and \m{f\restriction_B=f_x} without contradictory value assignments. We pick one particular such~$f$, no matter which one, and we show below that~$f\in F=\Pol{Q}$. By the hypothesis of the proposition, $\rho$ belongs to~$\Inv{F}$, so~$f$ preserves~$\rho$. Thus, applying~$f$ to the tuples in~$\gamma$ gives \m{(y,a)=\apply{f_{y,a}(\bfa{v}_i)}_{1\leq i\leq m'}
=\apply{f(\bfa{v}_i)}_{1\leq i\leq m'}\in\rho} as needed. \par It remains to argue that~$f\in\Pol{Q}$. Hence, take any \m{1\leq \ell\leq t} and any matrix of $n$~columns taken from~$\rho_\ell$. By the construction of~$\mu$ there are $m_\ell$~consecutive indices \m{0\leq i,i+1,\dotsc,i+m_\ell-1<L} such that the rows of this matrix are \m{\bfa{z}_i,\dotsc,\bfa{z}_{i+m_\ell-1}}. Now \m{\apply{f(\bfa{z}_{i+\nu})}_{0\leq \nu<m_\ell}= \apply{f_x(\bfa{z}_{i+\nu})}_{0\leq \nu<m_\ell}}, and this tuple is in~$\rho_\ell$ because~$f_x$ is defined via \m{x\in\sigma=\pi\cap\delta_{\epsilon}}. \end{proof}
The expression \m{\rho=\pr\apply{\apply{\rho_1^{s_1^n}\times
\dotsm\times \rho_t^{s_t^n}}\cap\delta_\epsilon}\times A^p} in Proposition~\ref{prop:gen-rel-clone} gives a primitive positive definition of~$\rho$ in terms of~$\rho_1,\dotsc,\rho_t$. Duplicating variables as indicated by~$\alpha$, one can then give a primitive positive formula for the original relation~$\rho_0$. The inclusion \m{\rho\subs\pr\apply{\apply{\rho_1^{s_1^n}\times
\dotsm\times \rho_t^{s_t^n}}\cap\delta_\epsilon}\times A^p} holds in any case, regardless of the assumption that \m{\rho\in\Inv{F}}. It can be seen from the proof of Proposition~\ref{prop:gen-rel-clone} that the latter condition is only needed for the opposite inclusion. \par That is, the formula computed by the following algorithm will always be satisfied by all tuples from~$\rho$ (or~$\rho_0$), but if the containment \m{\rho\in\Inv{F}} is only suspected but not known in advance, then one needs to check afterwards that the tuples satisfying the generated primitive positive formula really belong to~$\rho$ (or to~$\rho_0$, respectively).
\begin{algo}\label{alg:gen-rel-clone} Compute a primitive positive definition\footnote{ An implementation is available in the file \texttt{ppdefinitions.cpp}, which can be compiled using \texttt{compile.sh}, resulting in an executable \texttt{getppformula}. This executable expects a file \texttt{input.txt}, the formatting of which is explained in \texttt{input\_template.txt}, which can also be used as \texttt{input.txt}. After a successful run the programme will produce files \texttt{ppoutput.out}, an ascii text file containing the computed primitive positive formula, and \texttt{checkppoutput.z3}, a script to verify the correctness of the formula using the Z3 theorem prover~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3}. \par There are two caveats with the implementation added as ancillary file to this submission: first, the initial preprocessing step turning~$\gamma_0$ into~$\gamma$ has not been implemented. Hence, \texttt{ppdefinitions.cpp} expects a goal relation~$\gamma$ (relation~\texttt{S} in \texttt{input.txt}) without duplicate coordinates. If $\gamma_0$ has duplicate rows, the initial massaging and the final adjustment of the formula by duplicating the respective variables has to be done by hand. Second, it is possible to use a proper generating set $\gamma_0\subs\rho_0$ in the input (provided it does not contain duplicate rows), but then in the output file \texttt{checkppoutput.z3} the goal relation~\texttt{S} has to be completed manually with all tuples from~$\rho_0$, since the closure \m{\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}} is not computed.} \newline (Pseudocode is given on page~\pageref{code:compute-ppdefinition}, line numbers in the description refer to this code.) \begin{description} \item[Input]
finitary relations \m{\rho_1\subs A^{m_1},\dotsc,\rho_t\subs A^{m_t}}
defining \m{F\defeq \Pol{Q}} where
\m{Q=\set{\rho_\ell\mid 1\leq \ell\leq t}}
\par
a generating system \m{\gamma_0\subs A^{m_0}} for a relation
\m{\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}\in \Inv{F}} \item[Output]
a primitive positive formula describing \m{\rho_0} in terms of
\m{\rho_1,\dotsc,\rho_t} \item[Description]
We assume that \m{\gamma_0=\set{r_1,\dotsc,r_n}}, the tuples of which
we represent as a matrix with columns \m{r_1,\dotsc,r_n} and rows
\m{v_1,\dotsc,v_{m_0}}.
We first define a map
\m{\alpha\colon\set{1,\dotsc,m_0}\to\set{1,\dotsc,m'}} to a
transversal of the equivalence relation
\m{\lset{(i,j)\in\set{1,\dotsc,m_0}^2}{v_i=v_j}} (lines~1--9).
For this we iterate over all rows, and, if~$v_j$ has been seen
previously among \m{v_1,\dotsc,v_{j-1}}, we assign to \m{\alpha(j)}
the same index \m{\iota(v_j)} as previously, and if~$v_j$ is a fresh
row, we assign to~\m{\alpha(j)} the least index \m{i\eqdef\iota(v_j)}
not used before (lines~4--9). When this is finished,
\m{\gamma_0=\lset{(x_{\alpha(1)},\dotsc,x_{\alpha(m_0)})}{(x_1,\dotsc,x_{m'})\in\gamma}}
where \m{\gamma\subs A^{m'}} is the projection of~$\gamma_0$ to its
distinct rows, and~$m'$ is the last used value of~$i$.
\par
Next we iterate over all \m{1\leq \ell\leq t} and for each
relation~$\rho_\ell$ we iteratively extend the set~$\mathcal{L}_\ell$
of \nbdd{\rho_\ell}atoms for the final formula, starting from
\m{\mathcal{L}_\ell=\emptyset} (lines~10--13).
We iterate over the rows \m{z_1,\dotsc,z_{m_\ell}} of all possible
matrices with~$n$ columns chosen from~$\rho_\ell$ (lines~14--16). For any of these
matrices we construct an \nbdd{m_\ell}tuple~$a$ of variable symbols (lines~17--24),
which will represent a \nbdd{\rho_\ell}atom and will be added
to~$\mathcal{L}_\ell$ if it is not already present in the list of
atoms (lines~25--26). The atoms have to be constructed in such a way that any two
identical rows occurring within all possible matrices get the same
variable symbol. This ensures that the variable identification
represented in Proposition~\ref{prop:gen-rel-clone} by intersection
with~$\delta_\epsilon$ takes place. Moreover, if a row in the
matrices occurs as a row of~$\gamma$ (or equivalently of~$\gamma_0$),
then the corresponding variable is not going to be existentially
quantified, while all others are. This takes care of the projection
in the formula for~$\rho$ from Proposition~\ref{prop:gen-rel-clone}.
\par
In more detail, if a row~$z_j$ with \m{1\leq j\leq m_\ell} has not occurred
previously (line~17), we have to define its variable
symbol~$u(z_j)$. If \m{z_j\in\set{v_1,\dotsc,v_{m_0}}}, that is,
$z_j$ is among the rows of~$\gamma_0$, we use the
variable \m{u(z_j)\defeq x_{\iota(z_j)}} (lines~19--20).
Otherwise, the fresh row~$z_j$ needs to be projected away by
existential quantification, and we use a different symbol
\m{u(z_j)\defeq y_k} where~$k>0$ is the least previously unused index
for existentially quantified variables (lines~21--23). Regardless of whether~$z_j$
is fresh or not, we define the \nbdd{j}th entry of the current
atom~$a$ as \m{a(j)\defeq u(z_j)} (line~24). Only if the resulting string
$a=(a(1),\dotsc,a(m_\ell))\notin\mathcal{L}_\ell$, that is, only if~$a$ is
a new atom, will it be added to~$\mathcal{L}_\ell$ (lines~25--26).
\par
After all iterations, we state that all variables \m{x_1,\dotsc,x_i}
occurring in \m{\set{x_{\alpha(1)},\dotsc,x_{\alpha(m_0)}}} come from
the base set~$A$, we existentially quantify all variables
\m{y_1,\dotsc,y_k} and write out (line~27) a long conjunction over all
relations \m{\rho_1,\dotsc,\rho_t} and over all \nbdd{\rho_\ell}atoms
\m{a\in \mathcal{L}_\ell}
(cf.\ the direct product in the formula for~$\rho$ in
Proposition~\ref{prop:gen-rel-clone}). \end{description} \begin{algorithm}[htp]
\SetAlgoVlined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output} \Input{
finitary relations \m{\rho_1\subs A^{m_1},\dotsc,\rho_t\subs A^{m_t}}\\
\texttt{// defining \m{F\defeq \Pol{Q}} where
\m{Q=\set{\rho_\ell\mid 1\leq \ell\leq t}}}\\
generating system $\gamma_0\subs A^{m_0}$ for a relation
\rlap{$\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}\in\Inv{F}$}\\
\tcp{where \m{\gamma_0=\set{r_1,\dotsc,r_n}}, i.e.,
\m{\abs{\gamma_0}\leq n}
\newline
written as a matrix
$\apply{r_1,\dotsc,r_n} = \apply{\begin{smallmatrix}
v_1\\\vdots\\v_{m_0}
\end{smallmatrix}}$
with rows $v_j\in A^n$}
} \Output{a primitive positive presentation of~$\rho_0$ in terms of
\m{\rho_1,\dotsc,\rho_t}} \Begin{ $i\gets 0$ \tcp*[r]{initialise index for distinct rows of~$\gamma_0$} $D_0\gets\emptyset$ \tcp*[r]{initialise domain of distinct rows of~$\gamma_0$} \tcp{Define $\iota\colon D_0\to \set{1,\dotsc,\abs{D_0}}$,
$\alpha\colon \set{1,\dotsc,m_0}\to\set{1,\dotsc,\abs{D_0}}$} \ForAll{$1\leq j\leq m_0$}{
\If{$v_j\notin D_0$}{
$D_0\gets D_0\cup\set{v_j}$\;
$i\gets i+1$\;
$\iota(v_j)\gets i$\;
}
$\alpha(j)\gets\iota(v_j)$ } \tcp{Now \m{D_0=\set{v_1,\dotsc,v_{m_0}}}} $k\gets 0$ \tcp*[r]{initialise index for $\exists$-quantified variables} $D\gets \emptyset$ \tcp*[r]{\mbox{initialise domain of distinct rows from submatrices}
of~$\rho_1,\dotsc,\rho_t$ to define
$u\colon D\to \set{x_{1},\dotsc,
x_{\abs{D_0}}}\cup\set{y_1,\dotsc,y_{k}}$} \ForAll{$1\leq \ell\leq t$}{ \m{\mathcal{L}_\ell\gets \emptyset} \tcp*[r]{initialise list of atoms pertaining to~$\rho_\ell$} \ForAll{$c\colon n\to\rho_\ell$}{ Form a matrix \m{\apply{c_0,\dotsc,c_{n-1}}=\apply{\begin{smallmatrix}z_1\\\vdots\\z_{m_\ell}\end{smallmatrix}}} with rows \m{z_j\in A^n}\; \tcp{Iterate over its rows and form a possibly new atom~$a$} \ForAll{$1\leq j\leq m_\ell$}{ \If(\tcp*[f]{A previously unseen row $z_j$ appears.}){$z_j\notin D$}{
$D\gets D\cup\set{z_j}$\;
\eIf(\tcp*[f]{It is a row of~$\gamma_0$.}){$z_j\in D_0$}{
$u(z_j)\gets x_{\iota(z_j)}$}
{
$k\gets k+1$\;
$u(z_j)\gets y_{k}$} } $a(j)\gets u(z_j)$ \tcp*[r]{extend current atom with the appropriate variable symbol} } \If(\tcp*[f]{If it is really new\dots}){$a=(a(1),\dotsc,a(m_\ell))\notin \mathcal{L}_\ell$}{
$\mathcal{L}_\ell \gets \mathcal{L}_\ell\cup\set{a}$
\tcp*[r]{\dots add current atom~$a$ to the list.} } }} \Return{String \m{\rho_0=\Bigl\{\apply{x_{\alpha(1)},\dotsc,x_{\alpha(m_0)}} \ \Big\vert\ x_1,\dotsc,x_i\in A\land \exists y_1\dotsm \exists y_k\colon \bigwedge\limits_{1\leq \ell\leq t}\bigwedge\limits_{a\in\mathcal{L}_\ell}\rho_\ell(a)\Bigr\}}} } \NoCaptionOfAlgo \caption{Compute a primitive positive definition} \label{code:compute-ppdefinition} \end{algorithm} \end{algo}
\begin{example}\label{ex:computing-ternary-f.-from-T.} In the case discussed in this section, we have \m{A=\set{0,1,2}}, \m{t=1}, \m{Q=\set{\graph{T}}}, \m{\rho_0=\graph{f}}, \m{m_0=3} and \m{m_1=5}. Moreover, \m{F=\Pol{Q}=\cent{\set{T}}}. As the size~$s_1$ of~$\graph{T}$ is \m{\abs{A}^4=81}, it is crucial for the applicability of Algorithm~\ref{alg:gen-rel-clone} to find a small generating system~$\gamma_0$ of~$\graph{f}$ with respect to \m{\alg{A}^3} where \m{\alg[\cent{\set{T}}]{A}}. Given \m{\abs{\gamma_0}=n}, the algorithm has to iterate over \m{s_1^n=81^n} matrices and thus over \m{m_1\cdot s_1^n = 5\cdot 81^n} rows. Experiments show that if we blindly took \m{\gamma_0=\graph{f}}, i.e., \m{n=\abs{A}^2=9}, the algorithm would need more than eighteen thousand years to finish, perhaps less by a factor of ten if run on a computer much faster than the author's. Fortunately, the number~$n$ can be reduced significantly to a value far below~$9$. \par Indeed, listing the tuples of~$\graph{f}$ as columns, we have \begin{align*} \graph{f}&=\set{ \apply{\begin{smallmatrix}0\\0\\0\end{smallmatrix}}, \apply{\begin{smallmatrix}0\\1\\0\end{smallmatrix}}, \apply{\begin{smallmatrix}0\\2\\0\end{smallmatrix}}, \apply{\begin{smallmatrix}1\\0\\0\end{smallmatrix}}, \apply{\begin{smallmatrix}1\\1\\0\end{smallmatrix}}, \apply{\begin{smallmatrix}1\\2\\1\end{smallmatrix}}, \apply{\begin{smallmatrix}2\\0\\0\end{smallmatrix}}, \apply{\begin{smallmatrix}2\\1\\1\end{smallmatrix}}, \apply{\begin{smallmatrix}2\\2\\0\end{smallmatrix}}}\\ &=\gapply{\set{ \apply{\begin{smallmatrix}1\\2\\1\end{smallmatrix}}, \apply{\begin{smallmatrix}2\\1\\1\end{smallmatrix}} }}_{\alg{A}^3}. \end{align*} To see this, we can take advantage of the unary operations \m{u_{2,a}\in \ncent[1]{\set{T}}} with \m{a\in A}, described in Corollary~\ref{cor:Tstar1}, and \m{f_{0,(2,2,2,2)}\in\ncent[2]{\set{T}}} from Lemma~\ref{lem:Tstar2}. 
Namely, for \m{a\in\set{0,1,2}}, we have \begin{align*} u_{2,a}(1)&=0& u_{2,a}(2)&=a& f_{0,(2,2,2,2)}(1,2)&=2& u_{2,1}(2)&=1\\ u_{2,a}(2)&=a& u_{2,a}(1)&=0& f_{0,(2,2,2,2)}(2,1)&=2& u_{2,1}(2)&=1\\ u_{2,a}(1)&=0,& u_{2,a}(1)&=0,& f_{0,(2,2,2,2)}(1,1)&=0,& u_{2,1}(0)&=0. \end{align*} \par Hence, we can use the \nbdd{2}element generating set \m{\gamma_0=\set{(1,2,1),(2,1,1)}\subs\graph{f}} and thus we only have to enumerate \m{5\cdot 81^2 = 32\,805} rows. This can be done in a fraction of a second\footnote{ After compilation the programme \texttt{ppdefinitions.cpp} may be run on \texttt{input\_2generated.txt} copied to \texttt{input.txt}. The mentioned files can be found in the ancillary directory of this submission.} and results in a primitive positive formula\footnote{ Running \texttt{ppdefinitions.cpp} on \texttt{input\_2generated.txt} (see the ancillary directory) produces the content of \texttt{ppoutput\_2generated.out} and \texttt{checkppoutput\_2generated.z3}, which both contain the resulting primitive positive formula (as plain text and in \texttt{SMT-LIB2.0}-syntax).} with $6$~existentially quantified variables and $6\,561$~\nbdd{\graph{T}}atoms, the correctness of which can be verified by a sat\dash{}solver in a few minutes\footnote{ This can, for example, be done with the Z3 theorem prover~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the ancillary file \texttt{checkppoutput\_2generated.z3}. This file also contains the computed primitive positive formula for~$\graph{f}$ expressed in the \texttt{SMT-LIB2.0}-format.}. \end{example}
We conclude that it is possible to computationally find a proof that the graph of~$f$ is primitive positively definable from~$\graph{T}$ for \m{A=\set{0,1,2}}. However, unlike the formula from Lemma~\ref{lem:pp-formula}, the resulting formula does not lend itself to a generalisation to larger carrier sets.
\end{document}
\begin{document}
\title{Global Riemann Solvers for Several $3\times3$ Systems of Conservation Laws with Degeneracies}
\begin{abstract} We study several $3\times 3$ systems of conservation laws arising in the modeling of two-phase flow through rough porous media and of traffic flow under rough road conditions. These systems share several features. The systems are of mixed type, with various degeneracies.
Some families are linearly degenerate, while others are not genuinely nonlinear. Furthermore, along certain curves in the domain the eigenvalues and eigenvectors of different families coincide. Most interestingly, in some suitable Lagrangian coordinate the systems are partially decoupled, where some unknowns can be solved independently of the others. Finally, in special cases the systems reduce to some $2\times2$ models, which have been studied in the literature. Utilizing the insights gained from these features,
we construct global Riemann solvers for all these models. Possible treatments of the Cauchy problems are also discussed. \end{abstract}
\section{Introduction} \label{sec0}
Scalar conservation laws with discontinuous flux functions have attracted significant research interest in recent years, and exciting progress has been made. See for example the survey paper \cite{And} and references therein. In a general setting, a scalar conservation law \begin{equation}\label{a.1}
u_t + g(a(x),u)_x=0
\end{equation} where $a(x)$ contains discontinuities, can be written as a $2\times 2$ system by adding a trivial equation for $a(x)$: \begin{equation}\label{a.2}
\begin{cases}
u_t + g(a,u)_x =0,\\
a_t=0.
\end{cases} \end{equation}
For the general triangular system~\eqref{a.2}, when $g_u(a,u)=0$, the two eigenvalues and eigenvectors of the two families coincide, and the system is not hyperbolic. In the literature this is referred to as parabolic degeneracy. Utilizing the vanishing viscosity solution of \begin{equation}\label{a.5}
\begin{cases}
u_t + g(a,u)_x =\varepsilon u_{xx},\\
a_t=0,
\end{cases} \end{equation} as $\varepsilon\to 0+$, solutions of Riemann problems can be uniquely determined. The resulting admissibility condition for jumps in $a(x)$ leads to the {\em minimum jump condition}. See~\cite{GimseRisebro} and some more recent works~\cite{GS2017, ShenN}.
Triangular systems of the form~\eqref{a.2} arise in many physical models. Take for example the two phase flow models in reservoir simulations. Consider a simple polymer flooding model with a single component \begin{equation}\label{a.3}
\begin{cases}
s_t + f(s,c)_x =0,\\
(cs)_t + (c f(s,c))_x=0.
\end{cases} \end{equation} Here, $s\in[0,1]$ is the saturation of the water phase, $c\in[0,1]$ is the fraction of polymer dissolved in water, and $f(s,c)$ is the fractional flow for the water phase. One assumes uniform porous media, no gravitation force, and no adsorption of the polymer into the porous media. Introducing a Lagrangian coordinate $(\tau,y)$ (see~\cite{Wagner}), with $$ y_x =s, \quad y_t =-f, \quad y(0,0)=0, \qquad \tau=t,$$ the system~\eqref{a.3} can be written as a triangular system \begin{equation}\label{a.4}
\begin{cases}
(1/s)_\tau - (f(s,c)/s)_y =0 , \\
c_\tau=0.
\end{cases} \end{equation}
In this paper, we consider two-phase polymer flooding in rough media, where the permeability function of the porous media may be discontinuous. Letting $k(x)$ be the absolute permeability of the rock, system~\eqref{a.3} is extended to the following $3\times 3$ system of conservation laws, where we also take into account the adsorption of polymers into the rock: \begin{align}
s_t + f(s,c,k)_x ~=~0 ,\label{0.1} \\
(m(c)+cs)_t + (cf(s,c,k))_x ~=~0,\label{0.2} \\
k_t ~=~0.\label{0.3} \end{align} Here, the unknown vector is $(s,c,k)$, and the function $m(c)$ describes the adsorption of polymer in the porous media. We assume that $m$ depends only on $c$.
The main objective of this paper is the construction of global Riemann solvers for~\eqref{0.1}-\eqref{0.3} in three different situations: \begin{itemize} \item We neglect the gravity force and the adsorption. See section~\ref{sec2}; \item We consider the adsorption and neglect the gravity force. See section~\ref{sec3}; \item We consider the gravity force and neglect the adsorption. See section~\ref{sec5}. \end{itemize}
As an additional model, we also treat a $3\times 3$ system modeling traffic flow in section~\ref{sec4}. The traffic flow system has some similar features to the polymer flooding models.
We remark that global Riemann solvers for general nonlinear systems of hyperbolic conservation laws cannot always be constructed, due to the nonlinearity of the flux function. Such a task is possible here thanks to the special properties of the models. Once a Riemann solver is available, remarks are given on possible approaches to establishing existence of solutions for the Cauchy problem, for some of the cases. Finally, concluding remarks are given in section~\ref{sec6}, where further future work is suggested.
\section{A simple model for polymer flooding in two phase flow with rough media}\label{sec2} \setcounter{equation}{0}
We first consider the two-phase flow model of polymer flooding~\eqref{0.1}-\eqref{0.3}, where we neglect the adsorption effect and the gravitational effect, i.e., \begin{align}
&s_t + f(s,c,k)_x =0, \label{1.1} \\ & (cs)_t + (cf(s,c,k))_x =0,\label{1.2} \\
&k_t =0.\label{1.3} \end{align} The flux function $f(s,c,k)$ has the following properties. For any given $(c,k)$, the mapping $s\mapsto f$ is the famous S-shaped Buckley-Leverett function~\cite{BL} with a single inflection point. We have $$f(s,c,k)\in[0,1], \qquad f_s(s,c,k)\ge0, \qquad \mbox{for all } (s,c,k),$$ and \begin{equation}\label{fconds}
f(0,c,k)=0, \qquad f(1,c,k)=1, \qquad f_s(0,c,k)=0, \qquad f_s(1,c,k) =0,\qquad \forall(c,k). \end{equation} Furthermore, it is physically reasonable to assume that the flux decreases with more dissolved polymer, and increases with increasing permeability, i.e., \begin{equation}\label{fconds2}
f_c(s,c,k) < 0, \qquad f_k(s,c,k) >0, \qquad \forall (s,c,k). \end{equation} The assumptions~\eqref{fconds2} simplify the analysis, allowing a clearer presentation of the main ideas. We remark that, if we remove the assumptions~\eqref{fconds2}, a similar analysis can be carried out, but with more technical details.
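For concreteness, one hypothetical flux satisfying~\eqref{fconds}, and satisfying~\eqref{fconds2} in the interior $0<s<1$, is the classical two-mobility form sketched below. The paper keeps $f$ general, so this model choice serves only as an illustration; note that $f_c$ and $f_k$ vanish at $s=0$ and $s=1$, so the strict inequalities~\eqref{fconds2} hold only in the interior.

```python
def flux(s, c, k):
    """A hypothetical S-shaped Buckley-Leverett-type flux f(s,c,k):
    f(0)=0, f(1)=1, f_s >= 0, and on 0<s<1 also f_c < 0 (more polymer
    means higher water viscosity, hence lower flux) and f_k > 0 (higher
    permeability means higher flux).  Illustration only."""
    a = 1.0 + c                   # viscosity ratio, increasing in c
    lam_w = k * s * s             # water mobility, scaled by permeability
    lam_o = a * (1.0 - s) ** 2    # oil mobility
    return lam_w / (lam_w + lam_o)
```

One checks directly that $f_s = 2aks(1-s)/(ks^2+a(1-s)^2)^2 \ge 0$, with equality exactly at $s=0$ and $s=1$.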
\subsection{Riemann solver for the reduced $2\times2$ model}\label{sec2.1}
Observe that when $k$ is constant, the system~\eqref{1.1}-\eqref{1.3} reduces to a $2\times 2$ system \begin{align}
&s_t + f(s,c)_x =0, \label{1.10} \\ & (cs)_t + (cf(s,c))_x =0.\label{1.20} \end{align} With a slight abuse of notation we write $f(s,c)=f(s,c,k)$ when $k$ is a constant. The Jacobian matrix of the flux function is triangular $$ J = \begin{pmatrix} f_s(s,c) & f_c(s,c) \\ 0 & f(s,c)/s \end{pmatrix}. $$ The two eigenvalues and the corresponding right eigenvectors of $J$ are $$\lambda_s= f_s(s,c), \quad \lambda_c= f(s,c)/s, \qquad r_s=\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad r_c=\begin{pmatrix} -f_c(s,c) \\ \lambda_s-\lambda_c \end{pmatrix}. $$ When $\lambda_s=\lambda_c$, the two eigenvectors also coincide, and therefore the system becomes parabolic degenerate. Since the difference $\lambda_s-\lambda_c$ can change sign, nonlinear resonance occurs, and the total variation of the unknown can blow up in finite time, see~\cite{Temple, Temple2}. Therefore weak solutions $(s,c)$ cannot, in general, be sought in the class of BV functions.
System~\eqref{1.10}-\eqref{1.20} has been studied in quite some detail in the literature. It is known that Riemann problems for~\eqref{1.10}-\eqref{1.20} can be solved globally, generating entropy solutions that are the vanishing viscosity limit, see~\cite{GS2017}. See also a recent work~\cite{WS}, where Riemann problems, as well as the existence of solutions for the Cauchy problems are treated, with the consideration of the gravity force.
We briefly summarize the Riemann solver for~\eqref{1.10}-\eqref{1.20}, which will be useful for the solution of the full $3\times3$ system. Given the Riemann data $(s_l,c_l), (s_r,c_r)$, we define the functions $$g(s,c) ~\dot=~ f(s,c)/s,\qquad g_l(s) ~\dot=~ g(s,c_l), \qquad g_r(s) ~\dot=~ g(s,c_r), $$ and the monotone functions \begin{align} G^\sharp(s;s_r) &~\dot=~ \begin{cases} \max\left\{ g_r(\xi); \xi\in[s_r,s]\right\}, & \mbox{if}~s\ge s_r,\\ \min\left\{ g_r(\xi); \xi\in[s,s_r]\right\}, & \mbox{if}~s\le s_r, \end{cases}\label{Gsharp} \\[1mm] G^\flat(s;s_l) &~\dot=~ \begin{cases} \min\left\{ g_l(\xi); \xi\in[s_l,s]\right\}, & \mbox{if}~s\ge s_l,\\ \max\left\{ g_l(\xi); \xi\in[s,s_l]\right\}, & \mbox{if}~s\le s_l. \end{cases} \label{Gflat} \end{align} See Figure~\ref{fig:G} for illustrations.
\begin{figure}
\caption{The functions $G^\sharp(s;s_r)$ and $G^\flat(s;s_l)$ for various cases of $s_r$ and $s_l$. The dotted curve is the graph of $s\mapsto g(s,c)$ for a fixed $c$. }
\label{fig:G}
\end{figure}
Note that the mapping $s\mapsto G^\sharp$ is increasing, while $s\mapsto G^\flat$ is decreasing. For any given $(s_l,s_r)$, there exists a unique $\hat G$ value where the graphs of the two mappings cross each other. Let $\sigma$ denote the speed of the $c$~jump, and let $s_\pm$ denote the traces $s(t,\sigma t\pm)$ in the Riemann solution. Then, $s_\pm$ are determined as the {\em minimum jump path that connects $G^\sharp$ and $G^\flat$}, (see also~\cite{GimseRisebro}), where we have \begin{equation}\label{gGGg} g_l(s_-)=G^\flat(s_-;s_l)=G^\sharp(s_+;s_r)=g_r(s_+) ~\dot=~\sigma. \end{equation} Then, the solution of the Riemann problem is obtained by patching together the solution of \begin{equation}\label{Rl}
s_t + f(s,c_l)_x=0, \qquad s(0,x)=\begin{cases} s_l & (x<0),\\ s_- & (x>0), \end{cases} \end{equation} for $x<\sigma t$, and for $x> \sigma t$ the solution of \begin{equation}\label{Rr}
s_t + f(s,c_r)_x=0, \qquad s(0,x)=\begin{cases} s_+ & (x<0),\\ s_r & (x>0) .\end{cases}\end{equation}
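The envelope-and-crossing construction of this subsection can be prototyped numerically. The Python sketch below uses a hypothetical model flux and a coarse grid; it exploits that $s\mapsto G^\sharp$ is nondecreasing and $s\mapsto G^\flat$ is nonincreasing. Plateaus of $g$ would require extra care in a production implementation.

```python
def envelope(grid, g, s0, sharp):
    """G#(s;s0) (sharp=True, nondecreasing) or Gb(s;s0) (sharp=False,
    nonincreasing): running max/min of g between s0 and s, evaluated on a
    sorted grid (naive O(n^2) sketch)."""
    out = []
    for s in grid:
        lo, hi = min(s, s0), max(s, s0)
        window = [g(x) for x in grid if lo <= x <= hi] + [g(s0)]
        if sharp:
            out.append(max(window) if s >= s0 else min(window))
        else:
            out.append(min(window) if s >= s0 else max(window))
    return out

def c_jump_speed(grid, g_l, g_r, s_l, s_r):
    """Approximate crossing value of s -> Gb(s;s_l) (built from g_l) and
    s -> G#(s;s_r) (built from g_r), i.e. the speed sigma of the c jump."""
    Gf = envelope(grid, g_l, s_l, sharp=False)
    Gs = envelope(grid, g_r, s_r, sharp=True)
    for i in range(len(grid)):
        if Gf[i] <= Gs[i]:            # first grid point past the crossing
            return 0.5 * (Gf[i] + Gs[i])
    return Gf[-1]                     # crossing right of the grid (degenerate)
```

Here the crossing value approximates $\hat G=\sigma$; recovering the traces $s_\pm$ then amounts to inverting $g_l$ and $g_r$ at~$\sigma$ along the respective envelopes.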
\subsection{The Lagrangian coordinate}
Define the Lagrangian coordinate $(\phi,\psi)$ (introduced in \cite{Splitting}) \begin{equation}\label{eq:c}
\phi_x = -s, \quad \phi_t = f(s,c,k), \quad \phi(0,0)=0, \qquad \psi=x. \end{equation} Here one can interpret $\phi$ as the potential for the first equation. In fact, for any $(t,x)$, the value $\phi(t,x)$ denotes the line integral $$ \phi(t,x) = \int_{(0,0)}^{(t,x)} f (s,c,k)\, dt - s \, dx.$$ Thanks to \eqref{1.1}, this line integral is path independent. Assuming $s>0$ so that $f>0$, the coordinate change is well defined. In this Lagrangian coordinate, the system \eqref{1.1}-\eqref{1.3} takes the form \begin{align}
\left(\frac{1}{f}\right)_\psi-\left(\frac{s}{f}\right)_\phi =0, \label{1.6}\\ c_\psi =0,\label{1.7}\\ k_\phi=0.\label{1.8} \end{align} Note that the second and third equations are decoupled. Since $k$ is a material parameter, the decoupling is not surprising. The decoupling for $c$ indicates that the thermodynamics (governed by~\eqref{1.7}) is independent of the hydrodynamics (governed by~\eqref{1.6}). This is the most interesting feature of the model. It implies that, in the $(\phi,\psi)$ coordinates, $k$ is constant along lines parallel to the $\phi$-axis, and $c$ is constant along lines parallel to the $\psi$-axis.
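As a consistency check, the decoupled equation~\eqref{1.7} can be derived directly from~\eqref{1.1}-\eqref{1.2}, using only the chain rule implied by~\eqref{eq:c}, namely $\partial_t = f\,\partial_\phi$ and $\partial_x = \partial_\psi - s\,\partial_\phi$:

```latex
\begin{align*}
0 &= (cs)_t + (cf)_x - c\,(s_t + f_x)
   = s\,c_t + f\,c_x
   = s\,f\,c_\phi + f\,\bigl(c_\psi - s\,c_\phi\bigr)
   = f\,c_\psi ,
\end{align*}
```

and since $f>0$ this gives $c_\psi=0$, i.e.~\eqref{1.7}; in the same way $0=k_t=f\,k_\phi$ gives~\eqref{1.8}.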
We illustrate the coordinate change in Figure~\ref{fig:1}, with Riemann data $s_l,s_r>0$. In the Lagrangian coordinates $(\phi,\psi)$, the line $t=0$ now consists of two rays from the origin, indicated in blue in Figure~\ref{fig:1}. For general initial data, the line $t=0$ will be replaced by a continuous and decreasing curve, which might not be differentiable everywhere. Here we see clearly how the values of $k$ and $c$ are ``brought into'' the region $t>0$ from the initial condition.
\begin{figure}
\caption{The connection between the two coordinate systems for Riemann solutions.}
\label{fig:1}
\end{figure}
Note that when $s=0$ then $f=0$, and the conserved quantity for the equation in the Lagrangian coordinates blows up to infinity. This is when we have ``vacuum''. If $s(0,x)=0$ on an interval $[x_1,x_2]$, then the blue curve in Figure~\ref{fig:1} will have a horizontal line segment. Since $c$ has no meaning when $s=0$, we may assign the $c$ value along this segment as a linear function connecting $c_1=c(0,x_1)$ and $c_2=c(0,x_2)$. If $c_1\not=c_2$, then the solution of $c$ has a jump.
This illustration shows that, once the initial data is given, the values of $(k,c)$ are known at every point $(\psi,\phi)$. In particular, if $(c,k)$ are initially smooth and $s(0,x)>0$, then $(c,k)$ remain smooth forever; if they contain discontinuities initially, then they will remain discontinuous for $t>0$.
We remark that this Lagrangian coordinate is different from the one used by Wagner in his seminal paper~\cite{Wagner} for the Euler equations. If we applied Wagner's Lagrangian coordinate to~\eqref{1.1}-\eqref{1.3}, only the equation for $c$ would be decoupled, which does not offer the same insight.
We now consider a scalar conservation law with possibly discontinuous coefficients \begin{equation}\label{eqL}
\left(\frac{1}{f(s;c,k)}\right)_\psi- \left(\frac{s}{f(s;c,k)}\right)_\phi=0,
\end{equation} where $c,k$ are given functions, possibly discontinuous. A typical plot of the ``flux'' function, i.e., the graph for the mapping $(1/f )\mapsto(- s/f)$ is shown in Figure~\ref{fig:fs}.
\begin{figure}
\caption{The graph for the mapping $(1/f)\mapsto (-s/f)$.}
\label{fig:fs}
\end{figure}
We remark that scalar conservation laws with horizontal and vertical discontinuities were studied in~\cite{CocliteRisebro}, where the Riemann problem and the Cauchy problem are treated under suitable assumptions on the flux function. We speculate that an extension of~\cite{CocliteRisebro} could provide existence and well-posedness for the Lagrangian system~\eqref{1.6}-\eqref{1.8}. Details may be worked out in future works. Furthermore, it would also be interesting to obtain equivalent results directly for the Eulerian system, see Remark~\ref{rm1}.
\subsection{Wave properties and a global Riemann solver} \label{sec2.3}
We now return to the full Eulerian system~\eqref{1.1}-\eqref{1.3}. Treating $(s,c,k)$ as the unknown vector, the Jacobian matrix of the flux function is triangular: $$ J=\begin{pmatrix} f_s & f_c & f_k \\ 0 & f/s & 0 \\ 0 & 0 &0 \end{pmatrix}. $$ Naming the three families as the $s$, $c$ and $k$ families, we have the following 3 eigenvalues $$ \lambda_s= f_s, \qquad \lambda_c=f/s,\qquad \lambda_k=0, $$ and three corresponding right eigenvectors, $$ r_s=\begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}, \qquad r_c= \begin{pmatrix} -f_c \\ f_s-f/s \\ 0 \end{pmatrix}, \qquad r_k=\begin{pmatrix} -f_k\\ 0 \\ f_s\end{pmatrix}. $$
Direct computations give: $$ \nabla \lambda_s \cdot r_s = f_{ss} , \qquad \nabla \lambda_c\cdot r_c =0, \qquad \nabla \lambda_k \cdot r_k =0. $$ Thus, the $c$ and $k$ families are linearly degenerate, where discontinuities are all contacts, and shock curves coincide with rarefaction curves. For the $s$ family, since $f_{ss}$ changes sign, the family is not genuinely nonlinear. However, the integral curves of the $s$~family are straight lines along which $c,k$ are both constant. These integral curves also coincide with the $s$~shock curves, making it easier to find waves of the $s$~family. Note also that, when $f_s=f/s$, we have $\lambda_s=\lambda_c$ and $r_s=r_c$, and the system is parabolic degenerate, where nonlinear resonance occurs. In summary, the system~\eqref{1.1}-\eqref{1.3} is of Temple class, but of mixed type with degeneracies.
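The eigenpair relations $Jr=\lambda r$ can be verified numerically. The Python sketch below assembles the triangular matrix $J$ with central finite differences for a hypothetical model flux (the paper keeps $f$ general) and checks all three eigenpairs.

```python
def check_eigenpairs(s=0.4, c=0.3, k=0.8, h=1e-6, tol=1e-5):
    """Finite-difference sanity check that (lambda_s, r_s), (lambda_c, r_c),
    (lambda_k, r_k) are eigenpairs of the triangular quasilinear matrix J.
    Uses a hypothetical model flux; illustration only."""
    f = lambda s, c, k: k * s * s / (k * s * s + (1.0 + c) * (1.0 - s) ** 2)
    # central differences for f_s, f_c, f_k
    fs = (f(s + h, c, k) - f(s - h, c, k)) / (2 * h)
    fc = (f(s, c + h, k) - f(s, c - h, k)) / (2 * h)
    fk = (f(s, c, k + h) - f(s, c, k - h)) / (2 * h)
    J = [[fs, fc, fk],
         [0.0, f(s, c, k) / s, 0.0],
         [0.0, 0.0, 0.0]]
    lam_s, lam_c, lam_k = fs, f(s, c, k) / s, 0.0
    pairs = [(lam_s, [1.0, 0.0, 0.0]),          # r_s
             (lam_c, [-fc, lam_s - lam_c, 0.0]),  # r_c
             (lam_k, [-fk, 0.0, fs])]             # r_k
    matvec = lambda A, v: [sum(A[i][j] * v[j] for j in range(3))
                           for i in range(3)]
    return all(abs(y - lam * v[i]) < tol
               for lam, v in pairs
               for i, y in enumerate(matvec(J, v)))
```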
By the Rankine-Hugoniot jump conditions, we have the following wave properties: \begin{itemize} \item The $k$~wave is the slowest, traveling with speed $0$. Along any $k$~wave, the functions $f, c$ are continuous; \item The $c$~wave travels with positive speed. Crossing it, $f/s$ and $k$ remain continuous; \item The $s$~wave travels with positive speed. Crossing it, $c$ and $k$ remain continuous. \end{itemize}
Thanks to these wave properties, the solution of the Riemann problem for~\eqref{1.1}-\eqref{1.3} is rather simple. Given the left and right states $(s_l,c_l,k_l)$ and $(s_r,c_r,k_r)$, we have the following global Riemann solver: \begin{itemize} \item Let $(s_m,c_l, k_r)$ denote the right state of the $k$-wave. The value $s_m$ is uniquely determined by the condition $$ f(s_m,c_l,k_r) = f(s_l,c_l,k_l).$$
\item For the remaining waves, we have $k\equiv k_r$ throughout. We then solve the Riemann problem for the $2\times 2$ system \eqref{1.1}-\eqref{1.2} with Riemann data $(s_m,c_l)$ and $(s_r,c_r)$ as the left and right states. We use the Riemann solver in section~\ref{sec2.1}. The solution consists of waves with non-negative speed. \end{itemize}
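Since $s\mapsto f$ is nondecreasing, the intermediate state $s_m$ across the $k$-wave can be computed by bisection. The sketch below passes the flux in as an argument; the model flux \texttt{model\_f} is a hypothetical choice for the demonstration, and uniqueness of $s_m$ presumes that the target value avoids flat parts of $s\mapsto f$.

```python
def k_wave_right_state(s_l, c_l, k_l, k_r, f, tol=1e-12):
    """Solve f(s_m, c_l, k_r) = f(s_l, c_l, k_l) for s_m in [0,1] by
    bisection, using that s -> f(s,c,k) is nondecreasing with
    f(0,c,k)=0 and f(1,c,k)=1."""
    target = f(s_l, c_l, k_l)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, c_l, k_r) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical model flux, used only for the demonstration
model_f = lambda s, c, k: k * s * s / (k * s * s + (1.0 + c) * (1.0 - s) ** 2)
```

Since $f_k>0$, lowering the permeability ($k_r<k_l$) forces a larger saturation $s_m>s_l$ to carry the same flux across the $k$-contact.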
\begin{remark}\label{rm1} It would be of interest to prove existence and well-posedness results for the Cauchy problem directly for the Eulerian system~\eqref{1.1}-\eqref{1.3}. The key estimate is the bound on the total wave strength, suitably defined. We expect that the resonance between the $s$~and $c$~families can be controlled by Temple-style functionals for the wave strength, so that the total wave strength is non-increasing at interactions between $s$ and $c$ waves. However, there are additional difficulties caused by the interactions between $s$ and $k$~waves. In strictly hyperbolic cases, these can be controlled by adding a suitable interaction potential functional. However, the interaction potential functional here must take into account the difficulties caused by the vacuum state where $s=0$. \end{remark}
\section{Polymer flooding with adsorption in rough media}\label{sec3} \setcounter{equation}{0}
In this section we consider polymer flooding with the adsorption effect, but neglect the gravity force: \begin{align}
s_t + f(s,c,k)_x =0, \label{2.1} \\
(m(c)+cs)_t + (cf(s,c,k))_x =0,\label{2.2} \\
k_t =0.\label{2.3} \end{align} The new term $m(c)$ denotes the adsorption of polymers into the porous media, which we assume to be a function of $c$. Typically one assumes that $$ m'(c)>0, \qquad m''(c)<0, \qquad \forall c. $$
The same coordinate change as in \eqref{eq:c} leads to the following Lagrangian system: \begin{align}
\left(\frac{s}{f}\right)_\phi -\left(\frac{1}{f}\right)_\psi =0 ,\label{2.4}\\
m(c)_\phi + c_\psi =0,\label{2.5}\\ k_\phi=0.\label{2.6} \end{align} The equation~\eqref{2.6} for $k$ is unchanged. The equation~\eqref{2.5} for $c$ is still decoupled, but now $c$ solves a scalar conservation law, and the $c$~family is genuinely nonlinear. Nevertheless, $c$ can be solved independently. Given initial data for $k,c$, their values at any point $(\phi,\psi)$ can be computed first. Then, $s$ solves the scalar conservation law~\eqref{eqL} with discontinuous coefficients. The discontinuities include the jumps in $k$ and the shocks in the solution of $c$, which leads to a more complex structure, see Remark~\ref{rm2}.
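The scalar law~\eqref{2.5} can be checked in the same way as in section~\ref{sec2}: using the chain rule $\partial_t = f\,\partial_\phi$ and $\partial_x=\partial_\psi - s\,\partial_\phi$ from~\eqref{eq:c}, and subtracting $c$ times~\eqref{2.1} from~\eqref{2.2},

```latex
\begin{align*}
0 &= \bigl(m(c)+cs\bigr)_t + (cf)_x - c\,(s_t+f_x)
   = m'(c)\,c_t + s\,c_t + f\,c_x\\
  &= m'(c)\,f\,c_\phi + s\,f\,c_\phi + f\,\bigl(c_\psi - s\,c_\phi\bigr)
   = f\,\bigl(m(c)_\phi + c_\psi\bigr),
\end{align*}
```

so for $f>0$ we recover~\eqref{2.5}.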
\subsection{The reduced $2\times2$ model}\label{sec3.1}
When $k$ is constant, one has the reduced $2\times2$ system \begin{equation}\label{eq:red} \begin{cases} s_t + f(s,c)_x =0, \\
(m(c)+cs)_t + (cf(s,c))_x =0.\end{cases} \end{equation} Given Riemann data $(s_l,c_l),(s_r,c_r)$, the Riemann problem has been studied in the literature, even for multi-component polymers, see~\cite{JW1, MR1000728, Dahl, JTW}. In particular, for any $(s_l,s_r)$, if $c_l>c_r$ the solution contains a $c$~shock, and if $c_l<c_r$ we have a $c$~rarefaction.
Consider the case $c_l>c_r$ where we have a $c$~shock. We define \begin{equation}\label{agg}
a\,\dot=\,\frac{m(c_l)-m(c_r)}{c_l-c_r},\qquad g_l(s) ~\dot=~\frac{f(s,c_l)}{s+a}, \qquad
g_r(s) ~\dot=~\frac{f(s,c_r)}{s+a}.
\end{equation} Define the functions $G^\flat,G^\sharp$ as in \eqref{Gsharp}-\eqref{Gflat}, and let $s_\pm$ be the {\em minimum jump path} that connects $G^\sharp$ and $G^\flat$. Then, $s_\pm$ are the traces along the $c$~shock, which travels with speed $$\sigma = g_l(s_-)=g_r(s_+).$$ Once the $c$~shock is located, we patch up the solutions of regular conservation laws on the left and on the right of the $c$-shock, as described in section~\ref{sec2.1}.
When $c_l<c_r$ and we have a $c$ rarefaction wave, the path of the rarefaction wave goes along the integral curves of the $c$~eigenvectors. See Figure~\ref{fig:r2} for a typical graph of these integral curves. The resonance point occurs where the integral curves have a horizontal tangent. Thus, the $c$ rarefaction path must lie entirely either on the left or on the right of the resonance point. A unique path can be chosen that yields a feasible solution with increasing wave speeds from left to right. See for example~\cite{JW1}. We omit further details.
\begin{figure}
\caption{Integral curve for $c$ family in the $(s,c)$-plane. }
\label{fig:r2}
\end{figure}
\subsection{Wave properties and a global Riemann solver}
We go back to the full $3\times3$ system~\eqref{2.1}-\eqref{2.3}. The Jacobian matrix for the flux function is again triangular $$ J=\begin{pmatrix} f_s & f_c & f_k \\ 0 & f/(s+m'(c)) & 0 \\ 0&0&0 \end{pmatrix} $$ with three eigenvalues $$ \lambda_s = f_s(s,c,k), \qquad \lambda_c =\frac{f(s,c,k)}{s+m'(c)} , \qquad \lambda_k=0, $$ and three corresponding right-eigenvectors $$ r_s=\begin{pmatrix} 1 \\ 0 \\ 0\end{pmatrix}, \qquad r_c=\begin{pmatrix} -f_c(s,c,k) \\ \lambda_s-\lambda_c \\ 0 \end{pmatrix}, \qquad r_k=\begin{pmatrix} -f_k(s,c,k) \\ 0 \\ f_s(s,c,k)\end{pmatrix}. $$ Direct computations give the following directional derivatives: \begin{align} \nabla \lambda_s \cdot r_s &~=~ f_{ss}(s,c,k), \nonumber \\ \nabla \lambda_c \cdot r_c &~=~ \frac{-f(s,c,k) m''(c) (\lambda_s-\lambda_c)}{(s+m'(c))^2}, \label{eig} \\ \nabla \lambda_k \cdot r_k &~=~0.\nonumber \end{align}
We observe the following properties. \begin{itemize} \item The $k$ family is linearly degenerate and travels with speed 0, which is the slowest family. Crossing a $k$ contact, both $f$ and $c$ remain continuous. \item The $c$ family is genuinely nonlinear, as indicated by \eqref{2.5} in the Lagrangian system. We can have either a single $c$ shock or a single $c$ rarefaction fan in the solution.
This fact is not clear from the directional derivatives~\eqref{eig} in the Eulerian system. When $\lambda_s\not=\lambda_c$, we can rewrite the $c$~eigenvector as $$ \tilde r_c=\begin{pmatrix} \displaystyle \frac{f_c(s,c,k)}{ \lambda_c-\lambda_s} \\[3mm] 1 \\[2mm] 0 \end{pmatrix}. $$ The integral curves of $\tilde r_c$ are now parametrized by $c$. Direct computations give $$ \nabla \lambda_c \cdot \tilde r_c = -\frac{f(s,c,k) m''(c)}{(s+m'(c))^2} >0. $$ Therefore, we have a $c$ rarefaction when $c_l<c_r$, and the rarefaction curve can never cross the resonant point where $\lambda_s=\lambda_c$. When $c_l>c_r$, we have a $c$ shock. \item The $s$ family is not genuinely nonlinear, but it is a Temple family, where shock curves and rarefaction curves coincide. Crossing an $s$~wave, both $c$ and $k$ remain continuous. \end{itemize}
Given any Riemann data $(s_l,c_l,k_l), (s_r,c_r,k_r)$, we now have the following global Riemann solver, similar to section~\ref{sec2.3}: \begin{itemize} \item The right state of the $k$ wave is $(s_m,c_l,k_r)$, where $s_m$ is uniquely determined by $$f(s_l,c_l,k_l)=f(s_m,c_l,k_r).$$ \item For the remaining waves, $k\equiv k_r$ is constant. Then, we solve the Riemann problem for the reduced model~\eqref{eq:red}, with left and right states $(s_m,c_l)$ and $(s_r,c_r)$ respectively, following the Riemann solver in section~\ref{sec3.1}. \end{itemize}
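The first item above is a scalar root-finding problem, since $f$ is increasing in $s$ under the monotonicity assumptions. A minimal sketch with a hypothetical flux $f(s,c,k)=k\,s^2/(s^2+(1+c)(1-s)^2)$ (an illustrative choice, not the paper's):

```python
# Sketch of the first step of the global Riemann solver: find the right state
# s_m of the stationary k-contact from f(s_l, c_l, k_l) = f(s_m, c_l, k_r).
# Hypothetical flux: k times a Corey-type fractional flow, increasing in s,
# so the equation has a unique root, solvable by bisection.

def f(s, c, k):
    return k * s**2 / (s**2 + (1.0 + c) * (1.0 - s)**2)

s_l, c_l, k_l, k_r = 0.6, 1.0, 0.8, 1.0
target = f(s_l, c_l, k_l)          # flux is continuous across the k-contact

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid, c_l, k_r) < target:
        lo = mid
    else:
        hi = mid
s_m = 0.5 * (lo + hi)
print(s_m)
```

With these sample values $s_m\approx 0.548 < s_l$, consistent with the flux jumping up from $k_l$ to $k_r$.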
\begin{remark}\label{rm2} We remark that wave interaction estimates for this system remain very complicated, and the control of the total wave strength is not available in the literature. In a recent work~\cite{GS-preprint}, a scalar conservation law with general discontinuous flux is studied: $$ u_t + f(\alpha(t,x),u)_x =0, \qquad u(0,x)=\bar u(x).$$ Here $\alpha(t,x)$ is discontinuous w.r.t.~both variables $t,x$. Recall that a function of a single variable $\alpha: I\!\!R \mapsto I\!\!R$ is {\em regulated} if it admits left and right limits at every point. Such a concept can be extended to functions of two variables. Assuming that $\alpha(t,x)$ is regulated, in~\cite{GS-preprint} we prove that the vanishing viscosity solutions of $$ u_t + f(\alpha(t,x),u)_x =\varepsilon u_{xx}, \qquad u(0,x)=\bar u(x)$$ converge to a unique limit solution.
One can show that solutions of scalar conservation laws with convex flux functions are regulated functions. An extension of the result in~\cite{GS-preprint} could yield a similar result, at least for the Lagrangian system~\eqref{2.4}-\eqref{2.6}. We plan to address the details in future work. \end{remark}
\section{A second order traffic flow model with rough road condition}\label{sec4} \setcounter{equation}{0}
As a model of intermediate level of complexity, we consider a $3\times3$ system for traffic flow \begin{align} \rho_t + (\rho v)_x &=0, \label{eq:1}\\ [\rho(v+k \rho^\gamma)]_t + [\rho v(v+k \rho^\gamma)]_x &=0,\label{eq:2}\\ k_t &=0.\label{eq:3} \end{align} Here $\rho \ge 0$ denotes the car density, $v \ge 0$ is the car velocity, and $k(x) >0$ describes the road condition. Furthermore, $\gamma\in(1,2)$ is a constant. We consider a rough road condition, where $k(x)$ is discontinuous.
When $k$ is constant, the reduced system \eqref{eq:1}-\eqref{eq:2} was proposed in~\cite{AR}. Equation~\eqref{eq:1} expresses the conservation of mass. In~\eqref{eq:2}, the quantity $ k \rho^\gamma$ plays the role of a ``pressure''. The physical modeling leads to a non-conservative formulation \begin{equation}\label{eq:VP} (v+k \rho^\gamma)_t + v \cdot (v+k \rho^\gamma)_x=0. \end{equation} With some algebraic manipulation and utilizing \eqref{eq:1}, one can rewrite \eqref{eq:VP} in the conservative form of~\eqref{eq:2}. Although equation~\eqref{eq:2} resembles the conservation of momentum, there is no physical meaning for the conserved quantity $\rho (v+k \rho^\gamma)$.
For notational convenience, we write \begin{equation}\label{eq:defw} w~\dot=~v+ k \rho^\gamma. \end{equation} Note that if $w$ is constant and $\rho>0$, then \eqref{eq:2} reduces to \eqref{eq:1}.
\subsection{A Lagrangian system and the decoupling feature}
Consider a Lagrangian coordinate $(\phi,\psi)$ defined as $$ \phi_x = -\rho, \quad \phi_t = \rho v, \qquad \phi(0,0)=0,\qquad \psi=x. $$ When $\rho v >0$, the coordinate change is well-defined. Direct computation leads to the following Lagrangian system \begin{align} \left( \frac{1}{\rho v}\right)_\psi - \left( \frac{1}{v}\right)_\phi &= 0, \label{b1.4}\\ w_\psi &=0,\label{b1.5}\\
k _\phi &=0.\label{b1.6} \end{align}
We observe the decoupling feature for $k$ and $w$ in this Lagrangian coordinate. Here $k$ is constant in $\phi$, and $w$ is constant in $\psi$. These features are very similar to those of the system~\eqref{1.6}-\eqref{1.8}. Given the initial data at $t=0$, the values of $(w,k)$ at any coordinate point $(\phi,\psi)$ are determined trivially, see Figure~\ref{fig:1}. Once $( k (\phi,\psi) , w(\phi,\psi))$ are given, we can express $v$ in terms of $\rho$, i.e., \begin{equation}\label{eq:v}
v= v(\rho; w, k )= w- k \rho^\gamma. \end{equation} Then it remains to solve for $\rho$ using the scalar conservation law \eqref{b1.4} with variable coefficients, \begin{equation}\label{LL} \left(\frac{1}{\rho \cdot (w- k \rho^\gamma)} \right) _\psi - \left(\frac{1}{ w- k \rho^\gamma} \right)_\phi =0, \end{equation} where $(w,k)$ are given functions, possibly discontinuous, and $\rho$ is the unknown. The discontinuities in the flux function occur along horizontal and vertical lines in the $(\phi,\psi)$ coordinates.
It would be interesting to explore possible ways of extending the result in~\cite{CocliteRisebro} for this case, taking extra care of the vacuum state.
\subsection{Some basic analysis}
For the Eulerian system~\eqref{eq:1}-\eqref{eq:3}, treating $(\rho, v,k)$ as the unknown vector, the Jacobian matrix for the flux function is triangular $$ J=\begin{pmatrix} v & \rho & 0 \\ 0 & v-\gamma k \rho^{\gamma} & v \rho^\gamma\\ 0&0&0 \end{pmatrix}. $$ Denoting the three families as the $\rho$, $v$, and $k$ families, we have three eigenvalues $$ \lambda_\rho = v, \qquad \lambda_v = v-\gamma k \rho^{\gamma} , \qquad \lambda_k =0, $$ with three corresponding right-eigenvectors $$ r_\rho =\begin{pmatrix} 1\\ 0\\ 0\end{pmatrix}, \qquad
r_v=\begin{pmatrix} -1 \\ \gamma k \rho^{\gamma-1} \\ 0\end{pmatrix}, \qquad
r_ k =\begin{pmatrix} \rho^{\gamma+1}\\ -v \rho^\gamma \\
v-\gamma k \rho^{\gamma} \end{pmatrix}. $$
Direct computations give \begin{eqnarray*} \nabla \lambda_\rho \cdot r_\rho &=& 0, \\ \nabla \lambda_v \cdot r_v &=& (\gamma^2+\gamma) k \rho^{\gamma-1} >0,\\ \nabla \lambda_k \cdot r_k &=& 0. \end{eqnarray*}
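These directional derivatives can be double-checked by finite differences; the sketch below verifies $\nabla\lambda_v\cdot r_v=(\gamma^2+\gamma)k\rho^{\gamma-1}$ at an arbitrarily chosen sample state.

```python
# Numerical check (sketch) of the directional derivative of the v family:
# grad(lambda_v) . r_v should equal (gamma^2 + gamma) * k * rho^(gamma-1).
# The sample state (rho, v, k) is chosen arbitrarily for illustration.

gamma = 1.5
rho, v, k = 0.8, 1.2, 0.5

def lam_v(rho, v, k):
    return v - gamma * k * rho**gamma

# central finite differences for the gradient in (rho, v, k)
h = 1e-6
d_rho = (lam_v(rho + h, v, k) - lam_v(rho - h, v, k)) / (2 * h)
d_v   = (lam_v(rho, v + h, k) - lam_v(rho, v - h, k)) / (2 * h)
d_k   = (lam_v(rho, v, k + h) - lam_v(rho, v, k - h)) / (2 * h)

r_v = (-1.0, gamma * k * rho**(gamma - 1), 0.0)   # right-eigenvector of the v family
numeric = d_rho * r_v[0] + d_v * r_v[1] + d_k * r_v[2]
formula = (gamma**2 + gamma) * k * rho**(gamma - 1)
print(abs(numeric - formula) < 1e-6)
```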
Thus, the $\rho$ and $k$ families are linearly degenerate, where all discontinuities are contacts. The $v$ family is genuinely nonlinear, where we have either a $v$ shock or a $v$ rarefaction in the solution of a Riemann problem. The $v$ rarefaction curves are integral curves of $r_v$.
We consider the $\rho$ and $v$ jumps. Observe that crossing both $\rho$ and $v$ waves, the value $ k $ remains constant. Fixing a $ k $ value, we consider a jump initiated from $(\rho_0,v_0)$. Writing $w_0= v_0+ k \rho_0^\gamma$, and letting $\sigma$ be the jump speed, the RH jump conditions require \begin{align} \sigma (\rho - \rho_0) &= \rho v - \rho_0 v_0, \label{eq:a}\\ \sigma (\rho w-\rho_0 w_0) &= \rho v w - \rho_0 v_0 w_0.\label{eq:a2} \end{align}
We first consider the case with vacuum. If $\rho_0=0$, then for any values of $ v_0$ we have $$ \mbox{either} ~ \Big\{\rho=0, ~ \sigma~\mbox{arbitrary}\Big\} \qquad \mbox{or}\qquad \Big\{\sigma=v, ~ \rho~\mbox{arbitrary} \Big\}. $$ On the other hand, if $\rho=0$, then for any values of $v$, we have $$ \mbox{either} ~ \Big\{\rho_0=0, ~ \sigma~\mbox{arbitrary}\Big\} \qquad \mbox{or}\qquad \Big\{\sigma=v, ~ \rho_0~\mbox{arbitrary} \Big\}. $$ We remark that the vacuum state is special, where the $v$ and $\rho$ families have the same eigenvalue and eigenvector, so the system is both parabolic degenerate and linearly degenerate.
For the rest, we assume $\rho,\rho_0>0$. If $v=v_0$, \eqref{eq:a} gives $\sigma=v_0=v$, and \eqref{eq:a2} trivially holds. This gives a $\rho$ contact discontinuity. Note that $\sigma=v_0$ or $\sigma=v$ leads to the same wave. We conclude that, crossing a $\rho$~wave, both $ k , v$ remain continuous.
Now we consider the $v$~shocks, assuming $v\not=v_0\not=\sigma$. Multiplying \eqref{eq:a} by $w$ and subtracting from \eqref{eq:a2}, we get $$ \rho_0 (\sigma - v_0) (w-w_0)= 0.$$ Or symmetrically, multiplying \eqref{eq:a} by $w_0$ and subtracting from \eqref{eq:a2}, we get $$
\rho(\sigma - v) ( w-w_0)= 0. $$ Since $\rho,\rho_0>0$, this implies $$ w=w_0.$$ The $v$ shock travels with speed: $$ \sigma_v= \frac{\rho v -\rho_0 v_0}{\rho -\rho_0}. $$
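The computation above can be illustrated numerically: pick a left state, impose $w=w_0$ on the right state, and check that the Rankine-Hugoniot conditions for the conserved pair $(\rho,\rho w)$ hold with the speed $\sigma_v$. The sample state below is hypothetical.

```python
# Sketch: build a v-shock numerically. Pick a left state, impose w = w_0 on
# the right state, compute the shock speed sigma_v, and verify both
# Rankine-Hugoniot conditions of the conservative system (rho, rho*w).

gamma = 1.5
k = 1.0
rho0, v0 = 0.9, 1.0
w0 = v0 + k * rho0**gamma           # Riemann invariant of the v family

rho1 = 0.5                          # right density, chosen arbitrarily
v1 = w0 - k * rho1**gamma           # w is constant across a v-shock
w1 = v1 + k * rho1**gamma           # equals w0 by construction

sigma = (rho1 * v1 - rho0 * v0) / (rho1 - rho0)   # shock speed sigma_v

rh1 = sigma * (rho1 - rho0) - (rho1 * v1 - rho0 * v0)
rh2 = sigma * (rho1 * w1 - rho0 * w0) - (rho1 * v1 * w1 - rho0 * v0 * w0)
print(abs(rh1) < 1e-12, abs(rh2) < 1e-12)
```

The second residual vanishes because $w_1=w_0$ makes \eqref{eq:a2} a multiple of \eqref{eq:a}.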
We further observe that, along a $v$~integral curve, $w$ remains constant. Indeed, we have $$ \nabla w \cdot r_v = \begin{pmatrix} \gamma k \rho^{\gamma-1} \\ 1 \\ \rho^\gamma\end{pmatrix} \cdot \begin{pmatrix} -1 \\ \gamma k \rho^{\gamma-1} \\ 0\end{pmatrix} = 0. $$ We conclude that the $v$ rarefaction curves coincide with $v$ shock curves. Thus, the $3\times3$ system~\eqref{eq:1}-\eqref{eq:3} is a Temple class, where the system is of mixed type.
We remark that $$ \lambda_\rho \ge \lambda_v, \qquad \lambda_\rho \ge \lambda_k,$$ but $\lambda_v-\lambda_k$ may change sign. Thus the possible nonlinear resonance occurs only between the linearly degenerate $k$~family and the genuinely nonlinear $v$~family. This fact should make it possible to control the resonance.
We summarize the wave behaviors: \begin{itemize} \item Crossing a $ k $-contact, both $\rho v$ and $w$ remain continuous. \item Crossing a $v$-front, $w$ and $ k $ remain continuous. \item Crossing a $\rho$-contact, $ k $ and $v$ remain continuous. \end{itemize} The vacuum state can be viewed as a mixture of $v$ and $\rho$ fronts, as will become clear later. Note also that $(w,v)$ serve as the natural Riemann invariants for the $(v,k)$ families, respectively.
\subsection{The reduced model and its global Riemann solver}\label{sec4.2}
When $k$ is constant, say $k=1$, \eqref{eq:1}-\eqref{eq:3} reduces to a $2\times 2$ system \begin{equation}\label{ar} \begin{cases} \rho_t + (\rho v)_x =0, \\ \left[\rho(v+ \rho^\gamma)\right]_t + \left[\rho v(v+ \rho^\gamma)\right]_x =0. \end{cases} \end{equation} The Riemann problem for~\eqref{ar} was studied in much detail in \cite{AR}. However, utilizing the Riemann invariants $(w,v)$, the Riemann solver can be presented in a very compact manner, as illustrated in Figure~\ref{fig:RP2}.
Note that the vacuum state $\rho=0$ is the straight line $w=v$ in the $(w,v)$-plane, indicated by the red line. The physically feasible region $\rho\ge 0$ lies on the right side of the vacuum line.
Let $L=(w_l,v_l)$ and $R=(w_r,v_r)$ be the Riemann data, both lying on the right of the vacuum line. Consider the state $m=(w_l,v_r)$. We have two cases: \begin{itemize} \item If $m$ lies on the right side of the vacuum line, then the solution of the Riemann problem consists of two waves, with a $v$-wave connecting $L$ to $m$, followed by a $\rho$-wave connecting $m$ to $R$. The $v$ wave is a shock if $v_l>v_r$, and a rarefaction if $v_l<v_r$. See the left plot in Figure~\ref{fig:RP2}. \item If $m$ lies on the left of the vacuum line, then vacuum occurs in the solution, see the right plot in Figure~\ref{fig:RP2}. This can only happen when $v_l<v_r$. We have two intermediate states $m_1,m_2$ where $\rho=0$. The solution of the Riemann problem consists of 3 waves: a $v$ rarefaction connecting $L$ to $m_1$, followed by a vacuum wave connecting $m_1$ to $m_2$, and finally a $\rho$ contact from $m_2$ to $R$. \end{itemize}
\begin{figure}
\caption{Riemann solver for the $2\times2$ model. Left: Without vacuum, the solution consists of a $v$-rarefaction and a $\rho$-contact.
Right: With an intermediate vacuum wave.}
\label{fig:RP2}
\end{figure}
The vacuum wave is rather ``fake'', since $\rho\equiv 0$ is always a solution for any values of $v$. To ``assign'' the $v$ values along a vacuum wave in the solution of a Riemann problem, we set $\rho=0$ in \eqref{eq:2} and obtain the Burgers' equation $$ v_t + (v^2/2)_x=0.$$ Since $v$ increases from $m_1$ to $m_2$, the solution for $v$ is a rarefaction wave.
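For Riemann data, where the fan is centered at the origin, this gives the explicit self-similar profile (a sketch, with $v_{m_1}<v_{m_2}$ denoting the traces of the vacuum wave):

```latex
% Explicit self-similar profile of v across the vacuum fan (sketch):
\[
v(t,x)=\begin{cases}
v_{m_1}, & x< v_{m_1}t,\\
x/t, & v_{m_1}t\le x\le v_{m_2}t,\\
v_{m_2}, & x> v_{m_2}t,
\end{cases}
\qquad \rho(t,x)\equiv 0 \ \text{inside the fan}.
\]
```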
\textbf{Continuous dependence.} Viewed in the $(w,v)$ plane, it is clear that the path for the solution of a Riemann problem depends continuously on the data $L$ and $R$.
\textbf{Interaction estimates.} One can define the wave strength as the Manhattan distance in the $(w,v)$-plane, i.e., any wave connecting $(w_l,v_l)$ and $(w_r,v_r)$ has strength $$ \abs{w_l-w_r}+\abs{v_l-v_r}.$$ We claim that the total wave strength is non-increasing at any interaction. Indeed, we have the following observations: \begin{itemize} \item Two $\rho$~waves cannot interact with each other since the family is linearly degenerate. \item When a $\rho$~wave interacts with a $v$~wave, the total wave strength is unchanged. \item For interactions between two $v$~waves, the total wave strength is non-increasing since the family is genuinely nonlinear. \item When a vacuum wave interacts with either a $v$~wave or a $\rho$~wave, cancellation happens and the total wave strength decreases. See Figure~\ref{fig:RP2ex}. \end{itemize}
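The second observation, that a $\rho$-$v$ interaction preserves the total strength, amounts to commuting the two jumps in the $(w,v)$-plane. A small sketch with hypothetical sample states:

```python
# Sketch: a faster rho-contact overtakes a v-wave. In the (w, v)-plane,
# crossing a rho-contact only w jumps (v continuous); crossing a v-wave only
# v jumps (w continuous). We check that the total Manhattan wave strength
# is unchanged after the waves are reordered. Sample states are hypothetical.

L = (2.0, 1.0)   # (w, v) left state
M = (1.5, 1.0)   # after the incoming rho-contact: w jumps, v continuous
R = (1.5, 1.4)   # after the incoming v-wave: v jumps, w continuous

def strength(A, B):
    # Manhattan distance in the (w, v)-plane
    return abs(A[0] - B[0]) + abs(A[1] - B[1])

before = strength(L, M) + strength(M, R)

# outgoing order: the slower v-wave first, then the rho-contact
m = (L[0], R[1])                     # v-wave from L: w constant, v -> v_r
after = strength(L, m) + strength(m, R)
print(abs(before - after) < 1e-12)
```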
\begin{figure}
\caption{Left: interaction of a vacuum wave with a $\rho$ wave. Right: interaction of a vacuum wave with a $v$ wave. Here $M$ is the middle state of the incoming waves, and $m$ is the middle state of the outgoing waves.}
\label{fig:RP2ex}
\end{figure}
\textbf{Front tracking.} In a front tracking approximation, we approximate $v$~rarefaction waves with upward jumps of size $\varepsilon$. It is simple to show that the algorithm is well-posed. The total number of fronts is uniformly bounded, and the total wave strength, measured with the Manhattan distance, is also uniformly bounded. Existence of an entropy weak solution for the Cauchy problem follows from standard theory.
\textbf{Critique of the model.} This second order traffic flow model admits some unreasonable solutions. For example, if $v(0,x)\equiv 0$, then $\rho_t=0$ and we have the solution $\rho(t,x)=\rho(0,x)$ for all $t>0$. This means that if cars are initially stationary on a road, they will remain stationary for all time. This unreasonable behavior is caused by the conservation of the ``momentum'' $\rho(v+ k \rho^\gamma)$, a concept borrowed from gas dynamics. However, moving cars behave differently from gas particles, and the momentum should not be conserved. Higher order models for traffic flow are better formulated with a relaxation parameter, where one considers the reaction/acceleration time of each driver.
\subsection{Riemann solver for the $3\times3$ system}
We now describe a global Riemann solver for~\eqref{eq:1}-\eqref{eq:3}. Let $(\rho_l,v_l,k_l), (\rho_r,v_r,k_r)$ denote the Riemann data, and $w_l,w_r$ the corresponding $w$ values. Since the $\rho$~wave is the fastest one, it will have $(\rho_r,v_r,k_r)$ as its right state. Denote the left state of the $\rho$~wave by $(\rho_m,v_m,k_r)$. Since $w$ is constant across both $k$ and $v$ waves, we have $w\equiv w_l$ on the left of the $\rho$~wave. See Figure~\ref{fig:ww}.
\begin{figure}
\caption{Wave structures in the Riemann solution for the $3\times3$ traffic flow model.}
\label{fig:ww}
\end{figure}
The global Riemann solver consists of two steps.
\textbf{Step 1.} We determine the value $(\rho_m,v_m)$. There are two cases, with and without the vacuum state. \begin{itemize} \item
If $ w_l \ge v_r$ (see left plot in Figure~\ref{fig:RP3x}),
we can compute the unique value of $\rho_m$ using
$$ v_r + k _r \rho_m^\gamma = w_l. $$
This gives
$$
\rho_m = \left(\frac{w_l-v_r}{ k _r}\right)^{1/\gamma} ,
\qquad
v_m = v_r.
$$
\item
Otherwise if $w_l<v_r$, we have a vacuum wave in the solution
(see the right plot in Figure \ref{fig:RP3x}).
From $m_2$ to $R$ we have a $\rho$ wave.
On its left there is a vacuum wave that connects $m_1$ to $m_2$.
Denote the left state of the vacuum wave as $(\rho_m,v_m,k_r)$, we set
$$\rho_m=0, \qquad v_m=w_l.$$
\end{itemize}
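Step 1 can be summarized in a few lines of code. The sketch below implements the two branches with hypothetical sample data:

```python
# Sketch of Step 1 of the 3x3 Riemann solver: determine (rho_m, v_m) from
# w_l, v_r, k_r, with a vacuum branch when w_l < v_r.

def step1(w_l, v_r, k_r, gamma):
    if w_l >= v_r:
        # no vacuum: solve v_r + k_r * rho_m^gamma = w_l for rho_m
        rho_m = ((w_l - v_r) / k_r)**(1.0 / gamma)
        return rho_m, v_r
    else:
        # vacuum: left state of the vacuum wave has rho = 0, v = w_l
        return 0.0, w_l

print(step1(2.0, 1.0, 0.5, 1.5))   # no vacuum
print(step1(1.0, 1.5, 0.5, 1.5))   # vacuum
```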
\begin{figure}
\caption{Riemann solver for the $3\times3$ model: The algorithm that determines the $\rho$-front. Left: No vacuum. Right: With vacuum and a $v$-rarefaction attached on the left of the $\rho$-front.}
\label{fig:RP3x}
\end{figure}
\textbf{Step 2.} In the second step, one solves a Riemann problem for the two states $$ (\rho_l, v_l, k _l; ~w_l) , \qquad (\rho_m, v_m, k _r; ~w_l)$$ with only a $k$~wave and $v$~waves. Since $w\equiv w_l$ throughout the solution, \eqref{eq:2} reduces to \eqref{eq:1}. Furthermore, we also have $$v=v(\rho; k )=w_l- k \rho^\gamma.$$ It remains to solve a scalar conservation law with discontinuous flux: \begin{align}
\rho_t + \left(\rho(w_l- k (x) \rho^\gamma )\right)_x =0,
\end{align} where $ k (x)$ is the jump function connecting $ k _l, k _r$ at $x=0$. We call the flux functions $$ f_l(\rho)=g_l(\rho) = \rho(w_l- k_l \rho^\gamma ), \qquad f_r(\rho)=g_r(\rho) = \rho(w_l- k_r \rho^\gamma ), $$ and define $G^\sharp,G^\flat$ accordingly as in \eqref{Gsharp}-\eqref{Gflat}, replacing $s$ with $\rho$. Then, the $k$~wave is located at the {\em minimum jump path} that connects $G^\sharp$ and $G^\flat$. The remaining waves in this Riemann solver are determined by patching up solutions, as in~\eqref{Rl}-\eqref{Rr}.
In conclusion, we have constructed a global Riemann solver that generates a unique self-similar solution for any given left and right states. In the solution, all the quantities $(\rho, v, k , w)$ are non-negative.
\section{Polymer flooding with gravity and rough media}\label{sec5} \setcounter{equation}{0}
We now consider the polymer flooding model \eqref{1.1}-\eqref{1.3}, taking into account the gravitation force but neglecting the adsorption effect, i.e., \begin{align}
&s_t + f(s,c,k)_x =0, \label{5.1} \\ & (cs)_t + (cf(s,c,k))_x =0,\label{5.2} \\
&k_t =0.\label{5.3} \end{align} An example of the flux function $f(s,c,k)$ was derived in~\cite{GimseRisebro3}; the flux typically becomes negative for small values of $s$, see Figure~\ref{fig:f}. For simplicity of the discussion, we assume the monotone properties~\eqref{fconds}.
\begin{figure}
\caption{A typical flux function $s\mapsto f(s,c,k)$ with gravitation force.}
\label{fig:f}
\end{figure}
\subsection{Lagrangian coordinates}
Using the Lagrangian coordinate \eqref{eq:c} leads to the same system \eqref{1.6}-\eqref{1.8}. Let $A$ be the Jacobian matrix of the coordinate change; we have $$ A=\begin{pmatrix} f & -s \\ 0 & 1 \end{pmatrix}, \qquad \det(A)=f. $$ As $f$ changes sign, so does $\det(A)$, reversing the direction of the ``time'' variable in the Lagrangian system. Since such nonlinear PDEs are not time reversible, this coordinate change is not valid.
In this case, one may introduce a modified Lagrangian coordinate $(\tilde\phi,\tilde\psi)$ as \begin{equation}\label{Lag2} \tilde \phi_x = - \mbox{sign}(f) \cdot s, \quad \tilde\phi_t =\mbox{sign}(f)\cdot f, \quad \tilde\phi(0,0)=0, \qquad \tilde\psi=x. \end{equation} This leads to the following Lagrangian system: \begin{equation}\label{5.6-5.8} \left\{ \begin{array}{l} \displaystyle \left(\frac{1}{f}\right)_{\tilde\psi} - \mbox{sign}(f) \cdot \left(\frac{s}{f}\right)_{\tilde\phi} =0, \\[3mm] \displaystyle c_{\tilde\psi} =0, \\[1mm] \displaystyle k_{\tilde\phi}=0. \end{array}\right. \end{equation}
\subsection{The reduced models}\label{sec5.2}
There are two types of reduced models: one for $k$ constant and one for $c$ constant.
\textbf{Type 1}. When $k$ is constant, we have the reduced system~\eqref{1.10}-\eqref{1.20}, i.e., \begin{equation}\label{m1}
\begin{cases} s_t + f(s,c)_x=0,\\ (cs)_t + (cf(s,c))_x=0, \end{cases} \end{equation} where $s\mapsto f$ is as illustrated in Figure~\ref{fig:f}. The solution of the Riemann problem follows the same Riemann solver as in section~\ref{sec2.1}, now with a different flux function $f(s,c)$. We remark that this reduced model was studied in detail in \cite{WS} for a more general class of flux functions $f(s,c)$, where existence of entropy solutions for the Cauchy problem was established.
The solutions of the Riemann problem for~\eqref{m1} have the following properties. Let $(s_l,c_l)$, $(s_r,c_r)$ be the Riemann data, and denote $f_l=f(s_l,c_l)$. Let $s_0>0 $ be the unique value such that $f(s_0,c_l)=0$. The following properties hold. \begin{itemize} \item If $s_l < s_0$, i.e., $f_l<0$, then the $c$-wave in the solution travels with negative speed. \item If $s_l>s_0$, i.e., $f_l>0$, then the $c$-wave in the solution travels with positive speed. \item If $s_l =s_0$, i.e., $f_l=0$, then the $c$-wave is stationary. \end{itemize} We remark that these properties give us the information on the ordering between the $c$~wave and the $k$~wave in the Riemann solution of the $3\times3$ system, making the Riemann solver in Section~\ref{sec5.3} easier to construct.
\textbf{Type 2}. When $c\equiv$ constant, we have the following $2\times2$ system \begin{equation}\label{m2}
\begin{cases} s_t + f(s,k)_x =0,\\
k_t =0. \end{cases} \end{equation} Since $f_s$ can change sign, the system is parabolic degenerate at $f_s=0$. The Riemann solver follows the same construction as for a scalar conservation law with discontinuous flux, where the key step is to locate the path of $k$ wave. Let $ f_l(s)=f(s,k_l)$ and $f_r(s)=f(s,k_r)$, and define $G^\sharp,G^\flat$ accordingly as in~\eqref{Gsharp}-\eqref{Gflat}. The {\em minimum jump path} connecting $G^\sharp$ and $G^\flat$ is the $k$ wave. The rest follows.
\subsection{The Riemann solver for the $3\times 3$ Eulerian system} \label{sec5.3}
Regardless of the signs of the wave speeds, we have the following properties: \begin{itemize} \item The $k$-wave travels with speed 0. Crossing it, $c$ and $f$ remain continuous; \item Crossing a $c$-wave, $k$ and $f/s$ remain continuous; \item Crossing an $s$-wave, $k$ and $c$ remain continuous. \end{itemize}
The properties in Section~\ref{sec5.2} give us the sign of the speed for the $c$ wave, even for the full system. Let $(s_l,c_l,k_l)$, $(s_r,c_r,k_r)$ be the Riemann data, and let $f_l=f(s_l,c_l,k_l)$. Then, if $f_l<0$ or $s_l=0$, the $c$-wave speed is negative;
if $f_l>0$, the $c$ wave speed is positive. We can now construct the Riemann solver.
\textbf{Case (1): The $c$ wave travels with negative speed.} Let $(s_m, c_r,k_l)$ denote the trace at $x=0-$ in the solution of the Riemann problem. We need to solve two Riemann problems: \begin{itemize} \item[(R1):] Riemann problem between the states $(s_l,c_l,k_l)$ and $(s_m,c_r,k_l)$, i.e., $$ \begin{cases} s_t + f(s,c,k_l)_x=0, \\ (cs)_t + (cf(s,c,k_l))_x=0, \end{cases} \qquad (s,c)(0,x)=\begin{cases} (s_l,c_l), & (x<0) ,\\ (s_m,c_r), & (x>0) .\end{cases} $$ This is a reduced model of type 1, discussed in Section~\ref{sec5.2}. \item[(R2):] Riemann problem between the states $(s_m,c_r,k_l)$ and $(s_r,c_r,k_r)$, i.e., $$ \begin{cases} s_t + f(s,c_r,k)_x=0, \\ k_t=0, \end{cases} \qquad (s,k)(0,x)=\begin{cases} (s_m,k_l), & (x<0) ,\\ (s_r,k_r), & (x>0). \end{cases} $$ This is a reduced model of type 2, discussed in Section~\ref{sec5.2}. \end{itemize}
For the solution to be plausible, the speeds for the waves of (R1) must be $<0$, and the speeds of the waves of (R2) must be $\ge 0$. Here we use the strict ``$<$'' relation for the waves from (R1), to ensure that $s_m $ is the trace at $x=0-$, rather than a middle state of two stationary waves.
We denote various related flux functions as: $$ f_l(s)\; \dot=\; f(s,c_l,k_l), \qquad f_m(s) \;\dot=\; f(s,c_r,k_l), \qquad
f_r(s) \;\dot=\; f(s,c_r,k_r). $$ Given the value $s_l$ and the two flux functions $f_l,f_m$, let $I_1$ denote the set of values for $s_m$ such that the Riemann problem (R1) is solved with waves of negative speed. Since the Riemann solution for the above system is uniquely defined, the set $I_1$ can be uniquely constructed, following this general algorithm. For any given $s_l$, we locate all possible $c$~wave paths that travel with negative speed. Then, for all the $c$~wave paths, we locate all possible $s_m$ that connects to the $c$~wave with $s$~waves of negative speed.
There are 4 cases, illustrated in Figure~\ref{fig:I1}. \begin{itemize} \item Consider $c_l<c_r$, and therefore $f_l>f_m$. Let $\hat s>0$ be the unique value that satisfies $f_l(\hat s)/\hat s= f_l'(\hat s)$. The set $I_1$ depends on the relation between $s_l$ and $\hat s$. \begin{itemize} \item Consider $s_l \ge \hat s$. We discuss only this case in detail, as an explanation of the general algorithm, the other cases being similar. In Figure~\ref{fig:I1detail}, the location of $s_l$ is marked in red. The value $s_2>s_l$ is chosen such that the three points $(0,0), (s_l,f_l(s_l)),(s_2,f_m(s_2))$ are collinear. The points $s_3,s_4$ lie on the same line. We also have $s_1\le s_2$ with $ f_m(s_1)=f_m(s_2)$. There are two sub-cases.
(1) On the right side of $\hat s$, the $c$ wave path can only be the one connecting $s_l$ and $s_2$. Thus $s_2 \in I_1$. To connect it further with $s$~waves of negative speed, one can connect $s_2$ to any $s$ value between $s_4$ and $s_1$ with an $s$~shock. Thus $[s_4,s_1)\subset I_1$.
(2) Now we consider the case where the $c$ wave is on the left side of $\hat s$. Then, $s_l$ can be connected to some $s<s_3$ with an $s$~shock, which then connects to $f_m$ through a $c$~wave, reaching a point on the left of $s_4$. This point can further be connected to even smaller values of $s$ through $s$~waves. Thus $(0, s_4)\subset I_1$.
\begin{figure}
\caption{The set $I_1$ for case 1, with details for the construction. }
\label{fig:I1detail}
\end{figure}
This case is summarized in the top-left plot in Figure~\ref{fig:I1}. The set $I_1$ contains an open interval $(0,s_1)$ and an isolated point $s_2$, i.e., $$ I_1 = (0,s_1)\cup \{s_2\}.$$
\item Consider $s_l \le \hat s$, and see the top-right plot in Figure~\ref{fig:I1}. Here, the three points $(0,0), (\hat s, f_l(\hat s)), (s_2, f_m(s_2))$ are collinear, and we have $s_1<s_2$ and $f_m(s_1)=f_m(s_2)$. This gives $ I_1 = (0,s_1)\cup \{s_2\}$. \end{itemize} \item Consider $c_l>c_r$ and therefore $f_l < f_m$. Let $\hat s >0$ be the point where $f_m'(\hat s)=0$, and let $\tilde s>0$ be the unique point such that the three points $(0,0), (\hat s, f_m(\hat s)), (\tilde s, f_l(\tilde s))$ are collinear. The set $I_1$ depends on the relation between $s_l$ and $\tilde s$. We have 2 cases. \begin{itemize} \item Consider $s_l \ge \tilde s$, and see the bottom-left plot in Figure~\ref{fig:I1}. Here $s_2>0$ is chosen such that the three points $(0,0), (s_l, f_l(s_l)), (s_2,f_m(s_2))$ are collinear. The value $s_1$ is chosen such that $s_1<s_2$ and $f_m(s_1)=f_m(s_2)$.
Then we have $$ I_1 = (0,s_1)\cup \{s_2\}.$$ \item Consider $s_l \le \tilde s$, and see the bottom-right plot in Figure~\ref{fig:I1}. For this case we simply have $ I_1=(0,\hat s]$. Note that $\hat s\in I_1$ since any $s$-wave connecting to $\hat s$ must be a rarefaction. \end{itemize} \end{itemize}
\begin{figure}
\caption{The set $I_1$ for various cases. }
\label{fig:I1}
\end{figure}
Now we consider problem (R2), for $x\ge 0$. Given the value $s_r$ and the flux functions $f_m,f_r$, we denote by $I_2$ the set of values for $s_m$ such that (R2) is solved with waves of non-negative speed. There are 4 cases, illustrated in Figure~\ref{fig:I2}. \begin{itemize} \item If $k_l>k_r$, we have $f_m>f_r$. Let $\hat s$ be the resonant point where $f_m'(\hat s)=0$, and $\tilde s<\hat s$ satisfies $f_r(\tilde s)=f_m(\hat s)$. We have 2 sub-cases.
\begin{itemize}
\item
For $s_r\ge\tilde s$, we illustrate it in the top-left plot in Figure~\ref{fig:I2}.
Here we simply have
$$I_2 = [\hat s, 1].$$
\item
For $s_r\le \tilde s$, we illustrate it in the top-right plot in Figure~\ref{fig:I2}.
Here $s_1<s_2$ are two unique values such that
$ f_m(s_1)=f_m(s_2)=f_r(s_r)$. The set $I_2$ contains a closed set $[s_2,1]$
and an isolated point $s_1$, i.e.,
$$ I_2=[s_2,1]\cup \{s_1\}.$$
\end{itemize}
\item If $k_l<k_r$, we have $f_m < f_r$. Let $\hat s>0$ be the resonant point such that $f_r'(\hat s)=0$. Again we have 2 sub-cases.
\begin{itemize}
\item The case $s_r \ge \hat s$ is illustrated in the bottom-left plot of Figure~\ref{fig:I2}.
Here $s_1<s_2$ are chosen such that
$ f_m(s_1)=f_r(\hat s)=f_m(s_2)$. We get
$$ I_2=[s_2,1]\cup \{s_1\}.$$
\item
The case $s_r \le \hat s$ is illustrated in the bottom-right plot of Figure~\ref{fig:I2}.
Here $s_1<s_2$ are chosen such that
$ f_m(s_1)=f_r(s_r)=f_m(s_2)$.
We get
$$ I_2=[s_2,1]\cup \{s_1\}.$$
\end{itemize} \end{itemize}
\begin{figure}
\caption{Illustrations for the set $I_2$ for various cases.}
\label{fig:I2}
\end{figure}
We now summarize.
For all cases, the flux $f_m$ is decreasing on the set $I_1$ and increasing on the set $I_2$.
Furthermore, there exists exactly one point in the set $I_1$ where $(f_m)' \ge 0$, and exactly one point in the set $I_2$ where $(f_m)' \le 0$. Thus, for any combination of the pair $I_1,I_2$, the intersection $I_1 \cap I_2$ is non-empty and consists of exactly one point. We let the $s$ value of this point be the trace $s_m$. Note that in all cases, we have $f_m(s_m)<0$.
\textbf{Case (2): The $c$ wave travels with positive speed.} Let $(s_m,c_l,k_r)$ be the trace along $x=0+$. We have two Riemann problems: \begin{itemize} \item[(R3):] Riemann problem connecting states $(s_l,c_l,k_l)$ and $(s_m,c_l,k_r)$, which is a reduced model of type 2, and should be solved with waves of speed $\le 0$. \item[(R4):] Riemann problem connecting states $(s_m,c_l,k_r)$ and $(s_r,c_r,k_r)$, which is a reduced model of type 1, and should be solved with waves of speed $> 0$. \end{itemize}
With some abuse of notation, we denote the flux functions $$ f_l(s)\;\dot=\; f(s, c_l,k_l), \qquad f_m(s)\;\dot=\; f(s, c_l,k_r). $$ Let $I_3$ be the set of $s_m$ values such that (R3) is solved with waves of non-positive speed. Recall that $f_l(s_l)>0$. Note that if $f_l>0$ and $f_m>0$, then $f'_l>0$ and $f'_m>0$. Thus, $I_3$ consists of a single point, call it $s_m$, such that $ f_l(s_l)=f_m(s_m)>0$. As a result, the solution to (R3) consists of a single stationary $k$-wave.
It can be easily verified that with this $s_m$, Riemann problem (R4) is solved with waves of positive speed.
\textbf{Case (3): The $c$ wave is stationary.} We have $f(s_l,c_l,k_l)=0$ and $s_l>0$. Let $s_m>0$ be the unique value such that $f(s_m,c_r,k_r)=0$. We have a combined $c$+$k$ stationary wave connecting $(s_l,c_l,k_l)$ to $(s_m,c_r,k_r)$. Then, $(s_m,c_r,k_r)$ can be connected to $(s_r,c_r,k_r)$ by solving a Riemann problem with flux $f_r(s)=f(s,c_r,k_r)$. Thanks to the special location of $s_m$, the solution consists of waves of non-negative speeds.
In summary, we have a global Riemann solver which generates a unique solution for any initial Riemann data. Furthermore, all discontinuities are entropy admissible, i.e., they admit viscous traveling-wave profiles in the vanishing-viscosity limit.
\section{Concluding Remarks}\label{sec6} \setcounter{equation}{0}
In this paper we construct global Riemann solvers for several $3\times3$ systems of conservation laws arising in polymer flooding and traffic flow. We have not treated the polymer flooding model where both the gravitational force and the adsorptive effect are considered. For system~\eqref{0.1}-\eqref{0.3} where $s\mapsto f$ is as in Figure~\ref{fig:f}, the analysis is more complicated, since the Riemann solver for the reduced systems of Type 1 is not available in the literature. Nevertheless, a global Riemann solver can be constructed following a somewhat similar approach. Due to the new details involved, it deserves to be treated in a separate paper in the near future.
It is also interesting to consider the case of multi-component polymer flooding. For the system~\eqref{0.1}-\eqref{0.3}, the equation~\eqref{0.2} is replaced by an $n\times n$ system, where $n$ is the number of different types of polymers. The size of the full system is $(n+2)\times(n+2)$. We denote the families as the $\{s, c_1, \cdots, c_n, k\}$ families. For the non-adsorptive case where $m(c)=\mbox{constant}$, all the $c_i$ families are linearly degenerate; all of their waves travel with non-negative speed and never interact. A global Riemann solver can easily be constructed in a similar way as the one in Section~\ref{sec2} or Section~\ref{sec4}. For the adsorptive model without gravitational force, the analysis depends heavily on the adsorption function $m(c)$, where $c \in R^n$ and the function $m(c)$ is vector-valued. In the literature, various adsorption functions have been studied, leading to very different forms of system~\eqref{0.2}. Using the Langmuir isotherm (cf.~\cite{Rhee}), $$ m_i(c_1,c_2,\cdots, c_n) = \frac{\kappa_ic_i}{1+ \kappa_1c_1+\cdots+\kappa_nc_n},\qquad i=1,2,\cdots,n,$$ system~\eqref{0.2} is of Temple class when $s$ is constant. A rather simple construction for the global Riemann solver can still be achieved following a similar algorithm as in Section~\ref{sec3} and utilizing the reduced Riemann solver in~\cite{Dahl}.
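The Langmuir isotherm above is straightforward to evaluate numerically; a minimal sketch (the $\kappa_i$ and $c_i$ values in the example are illustrative only):

```python
def langmuir(c, kappa):
    # m_i(c) = kappa_i * c_i / (1 + sum_j kappa_j * c_j)  (Langmuir isotherm)
    denom = 1.0 + sum(k * ci for k, ci in zip(kappa, c))
    return [k * ci / denom for k, ci in zip(kappa, c)]
```

Note that the common denominator keeps the total adsorbed amount $\sum_i m_i$ strictly below one for any nonnegative concentrations.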
\end{document}
\begin{document}
\title{Random Distances Associated with Arbitrary Polygons:\\An Algorithmic Approach between Two Random Points}
\author{Fei Tong and Jianping Pan\\ University of Victoria, Victoria, BC, Canada}
\maketitle \begin{abstract} This report presents a new, algorithmic approach to the distributions of the distance between two points distributed uniformly at random in various polygons, based on the extended Kinematic Measure (KM) from integral geometry. We first obtain such random Point Distance Distributions (PDDs) associated with arbitrary triangles (i.e., triangle-PDDs), including the PDD within a triangle, and that between two triangles sharing either a common side or a common vertex. For each case, we provide an algorithmic procedure showing the mathematical derivation process, based on which either the closed-form expressions or the algorithmic results can be obtained. The obtained triangle-PDDs can be utilized for modeling and analyzing wireless communication networks associated with triangle geometries, such as sensor networks with triangle-shaped clusters and triangle-shaped cellular systems with highly directional antennas. Furthermore, based on the obtained triangle-PDDs, we then show how to obtain the PDDs associated with arbitrary polygons through a decomposition and recursion approach, since any polygon can be triangulated, and any geometric shape can be approximated by polygons with the needed precision. Finally, we give the PDDs associated with ring geometries. The results shown in this report can enrich and expand the theory and application of probabilistic distance models for the analysis of wireless communication networks. \end{abstract}
\begin{keywords} Distance distributions; Kinematic Measure; triangles; polygons; ring geometries; wireless communication networks \end{keywords}
\section{PDD within a Triangle}\label{subsec:within-triangle} \begin{figure}
\caption{KM over an arbitrary triangle.}
\label{fig:triangle-theta}
\end{figure}
$\triangle{}ABC$ is an arbitrary triangle with side lengths $|CB|=a$, $|AC|=b$, and $|AB|=c$, internal angles $\angle A=\alpha$, $\angle B=\beta$, and $\angle C=\gamma$, and area $||\triangle ABC||=S$. Without loss of generality (WLOG), $a\geq b\geq c$, and let side $CB$ be on $x$-axis, as shown in \figurename~\ref{fig:triangle-theta}. For simplicity, we can have $a=1$ and other edges normalized correspondingly. The Probability Density Function (PDF) of the PDD, denoted as $f_D(d)$, can be scaled to any size of triangles with $a=s$ by \begin{equation}\label{eq:pdf_scale}
f_{sD}(d)=\frac{1}{s}f_D(\frac{d}{s})~, \end{equation} where $f_{sD}$ is the corresponding PDF of the PDD within the triangle with $a=s$. Such a scaling is applicable to any polygon. When calculating the length of the chord produced by a line intersecting the triangle with orientation $\theta$ with regard to the $x$-axis, there are three cases in terms of the range of $\theta$: (i) $0\leq\theta\leq\gamma$, (ii) $\gamma\leq\theta\leq\pi-\beta$, (iii) $\pi-\beta\leq\theta\leq\pi$. We then design a systematic algorithmic procedure for the numerical integration of the intended PDD, based on which the corresponding closed-form expression is also derived, as shown below in detail.
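The scaling in \eqref{eq:pdf_scale} can be expressed as a one-line transform; a minimal sketch, assuming the unit-size PDF $f_D$ is available as a callable (the linear density in the example is illustrative only):

```python
def scale_pdf(f_D, s):
    # f_{sD}(d) = (1/s) f_D(d/s): the PDF after scaling the geometry by s.
    # The substitution u = d/s shows that normalization is preserved.
    return lambda d: f_D(d / s) / s
```

For example, scaling the valid PDF $f_D(d)=2d$ on $[0,1]$ by $s=2$ gives $d/2$ on $[0,2]$, which still integrates to one.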
\begin{figure}
\caption{Algorithmic procedure for obtaining the PDD within an arbitrary triangle.}
\label{alg1:1}
\label{alg1:2}
\label{alg1:3}
\label{alg1:4}
\label{alg:triangle}
\end{figure} Specifically, let $\theta$ increase from $0$ to $\pi$ with a fixed small step of $\delta\theta$ (e.g., $\delta\theta=\frac{\pi}{180}$). For each $\theta$ with a set of lines intersecting with the triangle, $\mathcal{G}$ is the line which produces the longest chord of length $base$. The distance between the two tangents parallel with $\mathcal{G}$, i.e., the support lines $\mathcal{G}_1$ and $\mathcal{G}_2$ which completely encompass the whole triangle, is $p_m$. The distance between $\mathcal{G}_1$ and $\mathcal{G}$ and that between $\mathcal{G}_2$ and $\mathcal{G}$ are $p_1$ and $p_2$, respectively. Obviously, $p_m=p_1+p_2$. With $p$ increasing from 0 to $p_m$ with a fixed small step $\delta{}p$ (e.g., $\delta{}p=\frac{1}{1,000}$), we obtain $\frac{p_m}{\delta p}$ chords. For each chord of length $l$ calculated based on trigonometry, we obtain $f_{\mathcal{G}}(d)$, based on which the PDF of the PDD can be calculated. The derivation is summarized in \figurename~\ref{alg:triangle}, providing the regularity to help obtain the PDF symbolically as $\delta\theta$ and $\delta p$ go to $0$.
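In the limit $\delta\theta,\delta p\to 0$ the procedure computes $f_D(d)=\frac{1}{S^2}\int_0^\pi\!\!\int 2d\,(l-d)\,\mathbf{1}_{\{l\ge d\}}\,\mathrm{d}p\,\mathrm{d}\theta$, the integrand appearing below in \eqref{H1-x-y} and \eqref{H2-x-y}. A minimal numerical sketch of the double loop; here the chord length is obtained by intersecting the line with the triangle edges rather than by the per-case trigonometry, and all helper names are ours:

```python
import numpy as np

def chord_length(tri, theta, p):
    # Length of the chord cut from the triangle by the line {x : n.x = p},
    # where n is the unit normal to the orientation theta.
    n = np.array([-np.sin(theta), np.cos(theta)])
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = n @ a - p, n @ b - p
        if da == db:
            continue                      # edge parallel to the line
        t = da / (da - db)
        if 0.0 <= t <= 1.0:
            pts.append(a + t * (b - a))
    if len(pts) < 2:
        return 0.0
    return max(np.linalg.norm(q - pts[0]) for q in pts[1:])

def pdd_pdf(tri, d, n_theta=36, n_p=100):
    # f_D(d) ~ (1/S^2) * sum_theta sum_p 2 d (l - d)^+ dp dtheta
    A, B, C = tri
    S = 0.5 * abs((B[0] - A[0]) * (C[1] - A[1]) - (B[1] - A[1]) * (C[0] - A[0]))
    total = 0.0
    dtheta = np.pi / n_theta
    for theta in np.arange(n_theta) * dtheta:
        n = np.array([-np.sin(theta), np.cos(theta)])
        proj = tri @ n                    # offsets of the three vertices
        dp = (proj.max() - proj.min()) / n_p
        for p in proj.min() + (np.arange(n_p) + 0.5) * dp:
            l = chord_length(tri, theta, p)
            if l > d:
                total += 2.0 * d * (l - d) * dp
    return total * dtheta / S ** 2
```

Refining $n_\theta$ and $n_p$ trades run time for accuracy, mirroring the roles of $\delta\theta$ and $\delta p$ in the procedure.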
We investigate the case (i) where $0\leq\theta\leq\gamma$ first, and show below how to obtain the corresponding closed-form expression based on the algorithmic procedure shown in \figurename~\ref{alg:triangle}. Specifically, for $0\leq p\leq p_1$, a chord is determined by ($\theta,p$) with length of $l=\frac{p\cdot base}{p_1}$ as shown in line \ref{alg1:3} of \figurename~\ref{alg:triangle}. From $l\geq d$ (line \ref{alg1:4} of \figurename~\ref{alg:triangle}), the integration range of $p$ is $[\frac{d\cdot p_1}{base},p_1]$, and $\theta\leq\theta^{\rm i}_1=\arcsin\left(\frac{b\sin(\alpha)}{d}\right)-\beta$ or $\theta\geq\theta^{\rm i}_2=\pi-\arcsin\left(\frac{b\sin(\alpha)}{d}\right)-\beta$. Meanwhile, for $p_1\leq p\leq p_m$, the length of the determined chord is $l=\frac{(p_m-p)\cdot base}{p_2}$. Similarly, from $l\geq d$, we have $p\in[p_1,p_m-\frac{d\cdot p_2}{base}]$, and $\theta\leq\theta^{\rm i}_1$ or $\theta\geq\theta^{\rm i}_2$. Therefore, \begin{equation}\label{eq:f_I} f_D^{\rm i}(d)= \left\{ \begin{array}{ll} f^{\rm i}_1 & ~~~ \gamma\leq\frac{\pi}{2}-\beta\\ f^{\rm i}_{21}+f^{\rm i}_{22} & ~~~ \mathrm{otherwise} \end{array} \right., \end{equation} where \begin{align} &f^{\rm i}_1= \left\{
\begin{array}{ll}
H^{\rm i}_1\left(0,\theta^{\rm i}_1\right)+H^{\rm i}_2\left(0,\theta^{\rm i}_1\right) & 0\leq\theta^{\rm i}_1\leq\gamma\\
H^{\rm i}_1(0,\gamma)+H^{\rm i}_2(0,\gamma) & \theta^{\rm i}_1>\gamma\\
0 & \mathrm{otherwise}\\
\end{array} \right.,\nonumber\\
&f^{\rm i}_{21}= \left\{
\begin{array}{ll}
H^{\rm i}_1\left(0,\theta^{\rm i}_1\right)+H^{\rm i}_2\left(0,\theta^{\rm i}_1\right) & 0\leq\theta^{\rm i}_1\leq \frac{\pi}{2}-\beta\\
0 & \mathrm{otherwise}
\end{array} \right.,\nonumber\\ &f^{\rm i}_{22}= \left\{
\begin{array}{ll}
H^{\rm i}_1\left(\theta^{\rm i}_2,\gamma\right)+H^{\rm i}_2\left(\theta^{\rm i}_2,\gamma\right) & \theta^{\rm i}_2\leq\gamma\\
0 & \mathrm{otherwise}
\end{array} \right.,\nonumber\\ &\begin{array}{rl}\label{H1-x-y} H^{\rm i}_1(\mathcal{X},\mathcal{Y})=&\frac{1}{S^2}\int_\mathcal{X}^\mathcal{Y}\int_{\frac{d\cdot p_1}{base}}^{p_1}2d\left(l_1-d\right )~\mathrm{d}p\mathrm{d}\theta~, \end{array}\\ &\begin{array}{rl}\label{H2-x-y} H^{\rm i}_2(\mathcal{X},\mathcal{Y})=&\frac{1}{S^2}\int_\mathcal{X}^\mathcal{Y}\int_{p_1}^{p_m-\frac{d\cdot p_2}{base}}2d\left(l_2-d\right)~\mathrm{d}p\mathrm{d}\theta~. \end{array} \end{align}
Similarly, for case (ii), we have \begin{equation}\label{eq:f_II} f_D^{\rm ii}(d)=f^{\rm ii}_1+f^{\rm ii}_2~, \end{equation} where \begin{align*}
&f^{\rm ii}_1=
\left\{
\begin{array}{ll}
H^{\rm ii}_1(\gamma,\theta^{\rm ii}_1)+H^{\rm ii}_2(\gamma,\theta^{\rm ii}_1) & \gamma\leq\theta^{\rm ii}_1\leq\frac{\pi}{2}\\
0 & \mathrm{otherwise}
\end{array}
\right.,\\
&f^{\rm ii}_2=
\left\{
\begin{array}{ll}
H^{\rm ii}_1(\theta^{\rm ii}_2,\pi-\beta)+H^{\rm ii}_2(\theta^{\rm ii}_2,\pi-\beta) & \theta^{\rm ii}_2\leq\pi-\beta\\
0 & \mathrm{otherwise}
\end{array}
\right.,\\
&\begin{array}{l}
\theta^{\rm ii}_1=\arcsin\left(\frac{c\sin(\beta)}{d}\right)~,\\
\theta^{\rm ii}_2=\pi-\arcsin\left(\frac{c\sin(\beta)}{d}\right)~,
\end{array} \end{align*} and for case (iii), \begin{equation}\label{eq:f_III} f_D^{\rm iii}(d)= \left\{ \begin{array}{ll} f^{\rm iii}_1 & ~~~ \beta\leq\frac{\pi}{2}-\gamma\\ f^{\rm iii}_{21}+f^{\rm iii}_{22} & ~~~ \mathrm{otherwise} \end{array} \right., \end{equation} where \begin{align*} &f^{\rm iii}_1= \left\{
\begin{array}{ll}
H^{\rm iii}_1(\pi-\beta,\pi)+H^{\rm iii}_2(\pi-\beta,\pi) & \theta^{\rm iii}_1<\pi-\beta\\
H^{\rm iii}_1(\theta^{\rm iii}_1,\pi)+H^{\rm iii}_2(\theta^{\rm iii}_1,\pi) & \pi-\beta\leq\theta^{\rm iii}_1\leq\pi\\
0 & \mathrm{otherwise}\\
\end{array} \right.,\nonumber\\
&f^{\rm iii}_{21}= \left\{
\begin{array}{ll}
H^{\rm iii}_1(\pi-\beta,\theta^{\rm iii}_2)+H^{\rm iii}_2(\pi-\beta,\theta^{\rm iii}_2) & \pi-\beta\leq\theta^{\rm iii}_2\leq \frac{\pi}{2}+\gamma\\
0 & \mathrm{otherwise}
\end{array} \right.,\\ &f^{\rm iii}_{22}= \left\{
\begin{array}{ll}
H^{\rm iii}_1(\theta^{\rm iii}_1,\pi)+H^{\rm iii}_2(\theta^{\rm iii}_1,\pi) & \theta^{\rm iii}_1\leq\pi\\
0 & \mathrm{otherwise}
\end{array} \right.,\\ &\begin{array}{l} \theta^{\rm iii}_1=\pi-\arcsin(\frac{c\sin(\alpha)}{d})+\gamma,\\ \theta^{\rm iii}_2=\arcsin(\frac{c\sin(\alpha)}{d})+\gamma. \end{array} \end{align*} Finally, the PDF of the PDD within an arbitrary triangle is \begin{equation}\label{eq:triangle-gdd} \begin{array}{l}
f_D(d)=f_D^{\rm i}(d)+f_D^{\rm ii}(d)+f_D^{\rm iii}(d)~. \end{array} \end{equation} Similar to $H^{\rm i}_1$ and $H^{\rm i}_2$, $H^{\rm ii}_1$ and $H^{\rm iii}_1$ can be calculated using (\ref{H1-x-y}), and $H^{\rm ii}_2$ and $H^{\rm iii}_2$ can be calculated using (\ref{H2-x-y}), but with different $p_1$, $p_2$ and $base$ for different cases (see line \ref{alg1:1}--\ref{alg1:2} of \figurename~\ref{alg:triangle}). $H^{\rm i}_1$, $H^{\rm i}_2$, $H^{\rm ii}_1$, $H^{\rm ii}_2$, $H^{\rm iii}_1$, and $H^{\rm iii}_2$ in (\ref{eq:triangle-gdd}) are \begin{equation*}
\renewcommand{\arraystretch}{1.5}
\left\{\begin{array}{l}
H^{\rm i}_1(\mathcal{X},\mathcal{Y})=h^{\rm i}_1(\mathcal{Y})-h^{\rm i}_1(\mathcal{X}),
H^{\rm i}_2(\mathcal{X},\mathcal{Y})=h^{\rm i}_2(\mathcal{Y})-h^{\rm i}_2(\mathcal{X}),\\
H^{\rm ii}_1(\mathcal{X},\mathcal{Y})=h^{\rm ii}_1(\mathcal{Y})-h^{\rm ii}_1(\mathcal{X}),
H^{\rm ii}_2(\mathcal{X},\mathcal{Y})=h^{\rm ii}_2(\mathcal{Y})-h^{\rm ii}_2(\mathcal{X}),\\
H^{\rm iii}_1(\mathcal{X},\mathcal{Y})=h^{\rm iii}_1(\mathcal{Y})-h^{\rm iii}_1(\mathcal{X}),
H^{\rm iii}_2(\mathcal{X},\mathcal{Y})=h^{\rm iii}_2(\mathcal{Y})-h^{\rm iii}_2(\mathcal{X})~,
\end{array}\right.\\ \end{equation*} where $$ {\allowdisplaybreaks
\renewcommand{\arraystretch}{1.15} \begin{array}{rl}
h^{\rm i}_1(\theta)&=\frac{d}{2\sin ( \alpha ) }( \frac{{d}^{2}}{2}\sin ( \beta-\gamma+2\,\theta ) -d (4\,b\sin ( \alpha )\cos ( \gamma-\theta )+\\
&d\theta\cos( \beta+\gamma ) ) + \frac{{b}^{2}}{2}\ln ( -{\frac {\sin ( \beta+\theta ) }{\cos ( \gamma-\theta ) }} ) ( 2\,\sin ( \beta+\gamma )-\\
&\sin ( 2\,\alpha+\beta+\gamma ) +\sin ( 2\,\alpha-\beta-\gamma ) ) +\sin ^{2}( \alpha )(2\,{b}^{2} \\
&( \gamma-\theta ) \cos ( \beta+\gamma )-{b}^{2}\ln ( \tan^{2} ( \gamma-\theta ) +1 ) \sin ( \beta+\gamma )) )~,\\
h^{\rm i}_2(\theta)&={\frac {ad}{b\sin ( \alpha ) }} ( \frac{{d}^{2}\theta}{2}\cos ( \beta ) -\frac{{d}^{2}}{4}\sin( \beta+2\,\theta )+{b}^{2}\theta\cos ( \beta ) \sin ^{2} ( \alpha )\\
{}&+2\,bd\sin ( \alpha ) \cos ( \theta ) -{b}^{2}\ln ( \sin ( \beta+ \theta ) ) \sin ( \beta ) \sin^2 ( \alpha ) )~,\\
h^{\rm ii}_1(\theta)&=\frac {bd}{4c\sin ( \beta ) } ( {d}^{2}\sin ( \gamma-2\,\theta ) +2\,{d}^{2}\theta\,\cos ( \gamma )-4\,{c}^{2}\sin^{2} ( \beta ) \\
{}&(\ln ( \sin (\theta ) ) \sin ( \gamma ) -\theta\,\cos ( \gamma ) ) +8\,cd\sin ( \beta ) \cos ( \gamma-\theta ) ) ~,\\
h^{\rm ii}_2(\theta)&=\frac {d}{4\sin ( \beta ) } ( 2\,{d}^{2}\theta\,\cos ( \beta ) -{d}^{2}\sin ( \beta+2\,\theta)+4\,{c}^{2}\sin^2 ( \beta )\\
{}&(\ln ( \sin (\theta ) ) \sin ( \beta )+\theta\,\cos ( \beta ))+8\,cd\cos ( \beta+\theta ) \sin ( \beta ) ) ~,\\
h^{\rm iii}_1(\theta)&= \frac {ad}{4c\sin ( \alpha ) } ( {d}^{2}\sin ( \beta-2\,\theta ) +2\,{d}^{2}\theta\,\cos ( \beta )+8\,cd\sin ( \alpha ) \\
{}& \cos ( \theta )+4\,{c}^{2}\sin^{2} ( \alpha )(\theta\,\cos ( \beta )+\sin ( \beta ) \ln ( -\sin ( \beta-\theta ) ) ) ) ~,\\
h^{\rm iii}_2(\theta)&=\frac {2d}{\sin ( \alpha ) } ( \frac{{d}^{2}}{8}\sin ( \beta-\gamma+2\,\theta )-\\
&\frac{\theta}{4}\cos ( \beta+\gamma )( 2\,{c}^{2} \sin^2 ( \alpha ) +{d}^{2} ) - cd\cos ( \beta+\theta ) \sin ( \alpha )-\\
&\frac{{c}^{2}}{2}\ln ( \sin ( \gamma-\theta ) ) \sin ( \beta+\gamma ) \sin^2 ( \alpha ) ) ~. \end{array}} $$ Therefore, the closed-form expression of the PDF of PDD within an arbitrary triangle, i.e., \eqref{eq:triangle-gdd}, has been obtained.
\textcolor{black}{Although the obtained closed-form expression looks tedious, note that, for network performance analysis, we do not use the symbolic expression of \eqref{eq:triangle-gdd} directly but rather the numerical PDF result computed promptly from \eqref{eq:triangle-gdd} given the necessary parameters (e.g., two edges and one angle, or two angles and one edge) of an arbitrary triangle. Simulations can also be utilized for obtaining PDDs. However, conducting simulations for each specific triangle is very time-consuming, and requires a large number of runs to obtain statistically significant results. Moreover, by simulation, only the empirical Cumulative Distribution Function (CDF) of nodal distances can be obtained, while the accurate PDF is also indispensable to the modeling and analysis of wireless communication networks.}
The obtained results are verified in comparison with simulation and the approach in \cite{Fei'arxiv'13} based on Chord Length Distribution (CLD). The simulation is conducted in Matlab as below (the following simulations are all conducted in a similar way): \begin{description}
\item[(1)] Generate a point uniformly at random within a triangle.
\item[(2)] Generate another point uniformly at random within the triangle.
\item[(3)] Compute the Euclidean distance between these two points and append the obtained distance to a matrix.
\item[(4)] Repeat steps (1)--(3) $50,000$ times (the more repetitions, the more accurate the result). Then, using the Matlab function ``ecdf'' with the matrix as its parameter, we obtain the empirical CDF of the PDD within the triangle. \end{description} \begin{figure}
\caption{PDDs within arbitrary triangles.}
\label{fig:pdd-1-triangle}
\end{figure} For simplicity, only numerical CDFs obtained by integrating the corresponding PDFs are shown here and hereafter. Three triangles are selected: {{[a]} $\left(\frac{60\pi}{180},\frac{60\pi}{180},\frac{60\pi}{180}\right)$, {[b]} $\left(\frac{80\pi}{180},\frac{70\pi}{180},\frac{30\pi}{180}\right)$, and {[c]} $\left(\frac{130\pi}{180},\frac{30\pi}{180},\frac{20\pi}{180}\right)$}, all of which have the longest side length of 1, which can be scaled to any nonzero size as introduced in \eqref{eq:pdf_scale}. For the first triangle, i.e., an equilateral triangle (ET), the result is also compared with that obtained in~\cite{Zhuang'arxiv'12}, which is only applicable for ET. Figure~\ref{fig:pdd-1-triangle} shows a close match between the obtained results based on KM and the simulation results. \section{PDD between Two Triangles}\label{subsec:bet-2-triangles} Our approach can also be utilized to obtain the PDD between two disjoint geometries, with which the PDD-based performance metrics associated with two clusters in ad-hoc networks or two cells in cellular systems can be quantified. In this section, we show how to obtain the PDDs between two triangles sharing either a common side or a vertex. For the former, the two triangles can form either a convex or concave quadrangle, as shown in \figurename~\ref{fig:2-triangles-convex} and~\ref{fig:2-triangles-concave}, respectively. For simplicity, hereafter, only the algorithmic procedure showing the derivation process is provided, based on which the closed-form expressions can be derived following the same method as shown in Section~\ref{subsec:within-triangle}.
\begin{figure}
\caption{Two triangles sharing a common side form a quadrangle ($||\triangle ABD||=S_1$, and $||\triangle BCD||=S_2$).}
\label{fig:2-triangles-convex}
\label{fig:2-triangles-concave}
\end{figure} \begin{figure}
\caption{KM over the convex quadrangle shown in \figurename~\ref{fig:2-triangles-convex} ($\angle C=\angle BCD, \angle D=\angle ADC$).}
\label{fig:2-triangles-convex-theta}
\end{figure}
\subsection{Two Triangles Share a Side, Forming a Convex Quadrangle} $\square ABCD$ is a convex quadrangle formed by two arbitrary triangles, $\triangle ABD$ with $|AB|=a$ and $|AD|=d$, and $\triangle BCD$ with $|CB|=b$ and $|CD|=c$, as shown in \figurename~\ref{fig:2-triangles-convex}. $|BD|=e$, $|AC|=f$, $||\triangle ABD||=S_1$, and $||\triangle BCD||=S_2$. WLOG, let $DC$ be on the $x$-axis. The relationship between $\angle 1$ and $\angle 2$ and that between $\angle 3$ and $\angle 4$ determine the shape of the quadrangle, leading to the following four cases:
\begin{align*}
{\rm [a]}:&
\begin{array}{l}
\angle 1\geq\angle 2,~\angle 3\geq\angle 4,
\end{array}
&{\rm [b]}:
\begin{array}{r}
\angle 1\geq\angle 2,~\angle 3\leq\angle 4,
\end{array}\\
{\rm [c]}:&
\begin{array}{l}
\angle 1\leq\angle 2,~\angle 3\geq\angle 4,
\end{array}
&{\rm [d]}:
\begin{array}{r}
\angle 1\leq\angle 2,~\angle 3\leq\angle 4.
\end{array}
\end{align*} Case [a] is the same as case [d] if $BA$ is on the $x$-axis after rotating the quadrangle. Similarly, cases [b] and [c] are essentially the same. Different cases correspond to different ranges of the orientation angle $\theta$. Below, we use case [a] to show how to obtain the PDD between the two triangles.
\begin{figure}
\caption{{Algorithmic procedure for the PDD between two triangles sharing a common side and forming a convex quadrangle}.}
\label{alg:2-triangle-convex}
\end{figure} \begin{figure}
\caption{PDD between two triangles sharing a common side and forming a convex/concave quadrangle.}
\label{fig:pdd-2-triangles-convex-concave}
\end{figure} As shown in \figurename~\ref{fig:2-triangles-convex-theta}, for the convenience of calculation based on trigonometry, there are six subcases for case [a] in terms of the range of the line orientation $\theta$ with regard to the $x$-axis. For a given $\theta$, only the lines intersecting with both triangles are considered, i.e., the parallel lines between $\mathcal{G}_1$ and $\mathcal{G}_3$ for subcases (i) and (iv), those between $\mathcal{G}_2$ and $\mathcal{G}_3$ for (ii) and (iii), and those between $\mathcal{G}_1$ and $\mathcal{G}_4$ for (v) and (vi). For each line, denote the length of the segment in $\triangle ABD$ as $l_1$, and that in $\triangle BCD$ as $l_3$ ($l_2=0$ in this case). The distances between $\mathcal{G}_1$ and $\mathcal{G}_2$, $\mathcal{G}_2$ and $\mathcal{G}_3$, and $\mathcal{G}_3$ and $\mathcal{G}_4$ are $p_1$, $p_2$, and $p_3$, respectively. The complete derivation process is shown in \figurename~\ref{alg:2-triangle-convex}. Figure~\ref{fig:pdd-2-triangles-convex-concave} shows a close match with the simulation results, using as an example the two triangles $(\angle A=\frac{120\pi}{180},\angle4=\frac{35\pi}{180},\angle 2=\frac{25\pi}{180})$ and $(\angle C=\frac{80\pi}{180}, \angle1=\angle3=\frac{50\pi}{180})$ shown in \figurename~\ref{fig:2-triangles-convex}.
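The simulation checks can be reproduced with a short Monte Carlo sketch; a minimal version, assuming uniform sampling inside a triangle via the standard square-root trick (the triangle coordinates in the example are illustrative only, not those of \figurename~\ref{fig:2-triangles-convex}):

```python
import random, math
from bisect import bisect_right

def uniform_in_triangle(A, B, C):
    # (1 - sqrt(r1)) A + sqrt(r1)(1 - r2) B + sqrt(r1) r2 C is uniform in ABC
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)
    return tuple((1 - s) * a + s * (1 - r2) * b + s * r2 * c
                 for a, b, c in zip(A, B, C))

def empirical_cdf_between(tri1, tri2, n=50_000):
    # Sorted sample of distances between one uniform point in each triangle,
    # wrapped as an empirical CDF (cf. Matlab's "ecdf")
    dists = sorted(math.dist(uniform_in_triangle(*tri1),
                             uniform_in_triangle(*tri2)) for _ in range(n))
    return lambda d: bisect_right(dists, d) / n
```

For two disjoint triangles separated by a gap, the empirical CDF is zero below the minimum separation and one beyond the largest vertex-to-vertex distance.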
\subsection{Two Triangles Share a Side, Forming a Concave Quadrangle} \begin{figure}
\caption{KM over the concave quadrangle shown in \figurename~\ref{fig:2-triangles-concave}.}
\label{fig:2-triangles-concave-theta}
\end{figure}
\begin{figure}
\caption{{Algorithmic procedure for the PDD between two triangles sharing a common side and forming a concave quadrangle.}}
\label{alg3:1}
\label{alg3:2}
\label{alg3:3}
\label{alg:2-triangle-concave}
\end{figure} A concave quadrangle formed by two triangles is shown in \figurename~\ref{fig:2-triangles-concave}. Similarly, for the convenience of calculation by using trigonometry, the line orientation $\theta$ with regard to the $x$-axis can be categorized into six cases, as shown in \figurename~\ref{fig:2-triangles-concave-theta}. Given $\theta$, for each line intersecting with both triangles, the length of the segment in $\triangle ABD$ is $l_1$ and that in $\triangle BCD$ is $l_3$. The length of the segment between the above two is $l_2$. Note that $l_2=0$ when the line intersects with $DB$. The distances between $\mathcal{G}_1$ and $\mathcal{G}_2$, and $\mathcal{G}_2$ and $\mathcal{G}_3$ are $p_1$ and $p_2$, respectively. Figure~\ref{alg:2-triangle-concave} summarizes the derivation process. For verification, the following two triangles are used: $(\angle ABD=\frac{110\pi}{180},\angle DAB=\frac{40\pi}{180},\angle ADB=\frac{30\pi}{180})$ and $(\angle CBD=\frac{160\pi}{180},\angle BCD=\frac{15\pi}{180},\angle CDB=\frac{5\pi}{180})$. The analytical results, which closely match the simulation results, are shown in \figurename~\ref{fig:pdd-2-triangles-convex-concave}.
\subsection{Two Triangles Share a Vertex} \begin{figure}
\caption{Two triangles within a polygon sharing a vertex.}
\label{fig:2-triangles-vertex}
\label{fig:hexagon-2-triangles-vertex}
\end{figure} \begin{figure}
\caption{PDD between two triangles in a polygon sharing a vertex.}
\label{fig:pdd-2-triangles-vertex}
\label{fig:gdd-hexagon-2-triangles-vertex}
\end{figure} For the simplicity of dividing the range of the line orientation, two special cases are employed for demonstration: [a] two triangles sharing a common vertex are within a regular pentagon, as shown in \figurename~\ref{fig:2-triangles-vertex}, and [b] two triangles sharing a common vertex are within a regular hexagon, as shown in \figurename~\ref{fig:hexagon-2-triangles-vertex}. Note that our approach is not limited to regular polygons, but also applies to other cases associated with arbitrary triangles.
For case [a], the two triangles are labeled by \textit{R1} and \textit{R3}, respectively. With $DC$ on the $x$-axis, the line orientation $\theta$ with regard to the $x$-axis falls into five subcases: (i) $0\leq\theta\leq\angle1$; (ii) $\angle1\leq\theta\leq\angle2$; (iii) $\angle2\leq\theta\leq\angle3$; (iv) $\angle3\leq\theta\leq\angle4$; (v) $\angle4\leq\theta\leq\pi$. Note that the lines with $\theta$ in (ii) are not considered, since none of them intersects with both of the triangles. The algorithmic procedure shown in \figurename~\ref{alg:2-triangle-concave} can still be used to obtain the PDD between the two triangles, with the only difference being the calculation of $p_m$ (lines \ref{alg3:1}--\ref{alg3:2}) and of $l_1$, $l_2$, and $l_3$ (line \ref{alg3:3}). Figure~\ref{fig:pdd-2-triangles-vertex} shows that the results closely match the simulation. For case [b], where the regular hexagon is triangulated as shown in \figurename~\ref{fig:hexagon-2-triangles-vertex}, the two triangles are either \textit{R1} and \textit{R3}, \textit{R1} and \textit{R4}, or \textit{R2} and \textit{R4}. By the symmetry of the regular hexagon, the PDD between \textit{R1} and \textit{R3} is identical to that between \textit{R2} and \textit{R4}. With the same method, the results are shown in \figurename~\ref{fig:gdd-hexagon-2-triangles-vertex}. \section{PDDs for Arbitrary Polygons} \begin{figure}
\caption{PDD within a polygon.}
\label{fig:gdd-pentagon}
\label{fig:gdd-hexagon}
\label{fig:gdd-pentagon-hexagon}
\end{figure} With the triangle-PDDs obtained above, the PDD associated with arbitrary polygons can be obtained through a Decomposition and Recursion (D\&R) approach, since any polygon can be triangulated. Therefore, the PDD-based performance metrics of wireless networks associated with arbitrary polygons can be quantified accurately.
Taking the regular pentagon with the triangulation shown in \figurename~\ref{fig:2-triangles-vertex} as an example, through D\&R, the CDF of the PDD within the pentagon is given by the probabilistic sum \begin{equation} \begin{array}{l}
F=\frac{2S_\textit{1}}{S}(\frac{S_\textit{1}}{S}F_{\textit{11}}+\frac{S_\textit{3}}{S}F_{\textit{13}}+\frac{S_\textit{2}}{S}F_{\textit{12}})+\frac{S_\textit{3}}{S}(\frac{S_\textit{3}}{S}F_{\textit{33}}+\frac{2S_\textit{1}}{S}F_{\textit{13}})\nonumber~, \end{array} \end{equation} where $S$ is the area of the pentagon. The comparison with simulation is shown in \figurename~\ref{fig:gdd-pentagon}. Likewise, the CDF of the PDD within the regular hexagon of area $S$, with the polygon triangulation shown in \figurename~\ref{fig:hexagon-2-triangles-vertex}, is \begin{equation} \renewcommand{\arraystretch}{1.15} \begin{array}{l}
F=\frac{2S_\textit{1}}{S}(\frac{S_\textit{1}}{S}F_{\textit{11}}+\frac{S_\textit{2}}{S}F_{\textit{12}}+\frac{S_\textit{3}}{S}F_{\textit{13}}+\frac{S_\textit{4}}{S}F_{\textit{14}})+\frac{2S_\textit{2}}{S}(\frac{S_\textit{1}}{S}F_{\textit{12}}+\frac{S_\textit{2}}{S}F_{\textit{22}}+\frac{S_\textit{3}}{S}F_{\textit{23}}+\frac{S_\textit{4}}{S}F_{\textit{24}})\nonumber~. \end{array} \end{equation} The comparison with the existing result in~\cite{hexagons} and with simulation, which shows a close match, is given in \figurename~\ref{fig:gdd-hexagon}.
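Both sums are instances of one generic mixture over the triangulation: $F(d)=\sum_x\sum_y \frac{S_x S_y}{S^2} F_{xy}(d)$ with $S=\sum_x S_x$. A minimal sketch, assuming the component CDFs are callables keyed by sorted piece labels; all names are placeholders:

```python
def dnr_cdf(areas, F):
    # areas: {label: S_x}; F: {(x, y): CDF callable} with x <= y in each key.
    # Returns the D&R mixture F(d) = sum_{x,y} (S_x S_y / S^2) F_xy(d).
    S = sum(areas.values())
    def mixture(d):
        return sum(Sx * Sy * F[tuple(sorted((x, y)))](d)
                   for x, Sx in areas.items()
                   for y, Sy in areas.items()) / S ** 2
    return mixture
```

As a sanity check, if every component CDF is the same function $G$, the mixture equals $G$, since the weights sum to one.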
\section{PDDs for Ring Geometries} \begin{figure}
\caption{Two ring geometries.}
\label{fig:square_ring}
\label{fig:hexagon_ring}
\end{figure} \begin{figure}
\caption{PDDs associated with ring geometries.}
\label{fig:square_ring_cdf}
\label{fig:hexagon_ring_cdf}
\label{fig:ring}
\end{figure} In this section, we show the PDDs associated with ring geometries by applying the D\&R approach. For ease of presentation, we use two examples, one is a square ring as shown in \figurename~\ref{fig:square_ring}, and the other is a hexagon ring as shown in \figurename~\ref{fig:hexagon_ring}.
For the square ring, we consider a unit square (i.e., the side length of the square is $a=1$) labeled by $\mathcal{K}_1$, with a smaller square (its side length is $b=0.6$) located in the center and labeled by $\mathcal{K}_2$. The ring area in grey is labeled by $\mathcal{K}_3$. We use $F_{xx}$ to denote the CDF of the random distances within $\mathcal{K}_x$, and $F_{xy}$ the CDF of the random distances between $\mathcal{K}_x$ and $\mathcal{K}_y$. $F_{22}$ and $F_{23}$ can be obtained with the developed approach directly. Then with a weighted probabilistic sum, \begin{equation}\label{eq:F_holes_prob_sum} \renewcommand{\arraystretch}{1.15} \begin{array}{rl}
F_{\textit{11}}&=\frac{S_\textit{2}}{S_\textit{1}}(\frac{S_\textit{2}}{S_\textit{1}}F_{\textit{22}}+\frac{S_\textit{3}}{S_\textit{1}}F_{\textit{23}})+\frac{S_\textit{3}}{S_\textit{1}}(\frac{S_\textit{2}}{S_\textit{1}}F_{\textit{23}}+\frac{S_\textit{3}}{S_\textit{1}}F_{\textit{33}})=\frac{S_\textit{2}}{S_\textit{1}}F_{\textit{12}}+\frac{S_\textit{3}}{S_\textit{1}}F_{\textit{13}}~, \end{array} \end{equation} based on which $F_{\textit{12}}$, $F_{\textit{13}}$, and $F_{\textit{33}}$ can be obtained. The obtained CDFs of the PDDs of interest are shown in \figurename~\ref{fig:square_ring_cdf}. Similarly, the CDFs of the PDDs associated with the hexagon ring are shown in \figurename~\ref{fig:hexagon_ring_cdf}, where the radius of the circle in the center of the unit hexagon (its side length is $h=1$) is $R=0.7$. Note that if there are nodes deployed in $\mathcal{K}_2$ and $\mathcal{K}_3$ but with different node densities from each other, the above weighted probabilistic sum is still applicable with different weights due to the node density differences, which shows the way of handling nonuniform node distributions.
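Solving the weighted sum \eqref{eq:F_holes_prob_sum} for the CDF within the ring area is a one-line rearrangement; a minimal sketch, assuming $F_{11}$, $F_{22}$, and $F_{23}$ are available as callables:

```python
def ring_cdf(F11, F22, F23, S1, S2, S3):
    # From F11 = (S2/S1)^2 F22 + 2 (S2 S3/S1^2) F23 + (S3/S1)^2 F33,
    # solve for F33, the CDF of the PDD within the ring area K3.
    return lambda d: (S1 ** 2 * F11(d) - S2 ** 2 * F22(d)
                      - 2.0 * S2 * S3 * F23(d)) / S3 ** 2
```

With $a=1$ and $b=0.6$ as above, the areas are $S_1=1$, $S_2=0.36$, and $S_3=0.64$.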
\section{Conclusions} In this report, we first applied the proposed algorithmic approach to obtain the random PDDs associated with arbitrary triangles (triangle-PDDs). Since any polygon can be triangulated, we then used the decomposition and recursion approach for arbitrary polygons based on triangle-PDDs. Finally, the PDDs associated with ring geometries were also shown. The algorithmic procedures were provided to show the mathematical derivation process, based on which either the closed-form expressions or the algorithmic results can be obtained. Together with~\cite{ref2rand} and the decomposition and recursion approach, all random distances associated with random polygons, whether between two random points or from an arbitrary reference point, can be obtained, and likewise for arbitrary geometric shapes with any needed approximation precision.
\section*{Acknowledgment} This work is supported in part by the NSERC, CFI and BCKDF. We also thank Dr. Lin Cai for her constructive suggestions and encouragement, and Drs. Lei Zheng and Maryam Ahmadi for their comments and suggestions to this work.
\end{document}
\begin{document}
\title{Measurement-device-independent randomness from local entangled states} \section{Introduction} Randomness is a valuable resource for various important tasks ranging from cryptographic applications \cite{Crypto} to numerical simulations such as the \emph{Monte Carlo} method \cite{Monte}. Algorithmic information theory shows that true randomness cannot exist from a mathematical point of view \cite{Chaitin,Knuth}. Thus the generation of randomness must be based on the unpredictability of physical phenomena, so that its random nature is guaranteed by the laws of physics. Classical physics, being fundamentally deterministic in nature, cannot guarantee such randomness \cite{Butterfield}. On the other hand, though the outcomes of measurements performed on quantum systems are intrinsically random (due to the Born rule) \cite{Born,Neumann}, real-life implementations of such randomness generation procedures \cite{Jennewein,Stefanov,Atsushi} demand idealized modeling and detailed knowledge of the internal working of the devices used for generating randomness. To overcome this issue, nonlocality-based \cite{Ekert,Barrett_1,Masanes} and device-independent (DI) \cite{Mayers,Acin_1,Colbeck,Pironio_1} techniques have been applied for generating randomness. In Ref.\cite{Pironio_2}, Pironio \emph{et al.} have shown that the correlations obtained from entangled quantum particles can be used to certify the presence of genuine randomness, and they have designed a cryptographically secure random number generator which does not require any assumption on the internal working of the devices. The key point is that randomness in the outcomes of measurements performed on the separated parts of an entangled quantum system can be certified in a DI way if the correlation obtained from the entangled state violates a Bell inequality (BI). It is well known that nonlocality \cite{Brunner} and entanglement \cite{Horodecki} are two distinct concepts.
Not all entangled states violate a BI; rather, there exist entangled states whose measurement statistics can be simulated locally \cite{Werner}. Therefore, such \emph{local} entangled states are not a useful resource for DI randomness certification. In this work we first introduce the concept of a measurement-device-independent (MDI) randomness certification protocol, where the quantum state preparation device behaves quantum mechanically but the measurement device is completely untrusted. In this scenario we show that a class of \emph{local} entangled states becomes a useful resource for the randomness certification task, although these states are not useful in the corresponding DI scenario.
The concept of the MDI information processing scenario was introduced independently in Ref.\cite{Braunstein} and Ref.\cite{Lo}, where the authors presented the idea of the MDI quantum key distribution (MDI-QKD) protocol. The important benefit of an MDI protocol over its conventional quantum counterpart is that it requires no trust in the measurement device, hence the name. In contrast to DI protocols, however, MDI protocols require an almost perfect state preparation device. Recently, Branciard \emph{et al.} have introduced another interesting protocol in the MDI scenario: they have shown that the presence of entanglement can be demonstrated in an MDI way \cite{Branciard}. To arrive at their conclusion, Branciard \emph{et al.} used a recent result of Buscemi, which shows that all entangled states provide an advantage over separable states in some \emph{semi-quantum} game \cite{Buscemi}.
In this work we first introduce the MDI randomness certification task. We then show that entangled states which are not useful for DI randomness certification turn out to be a useful resource in the corresponding MDI scenario. More precisely, we consider the two-qubit entangled Werner states $\varrho^v=v|\psi^-\rangle\langle\psi^-|+(1-v)\frac{\mathbb{I}}{2}\otimes \frac{\mathbb{I}}{2}$. It is known that Werner states with visibility parameter $v>1/3$ are entangled, and a subclass of these states (those with $v>1/\sqrt{2}$) violates a BI and hence is useful for DI randomness certification. On the other hand, Werner states with $v\le 1/2$ and $v\le 5/12$ admit a local description for projective measurements and positive operator valued measurements (POVMs), respectively \cite{Werner}, and thus cannot be used for DI randomness certification. Interestingly, we show that all these entangled Werner states are useful for MDI randomness certification.
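The entanglement threshold $v=1/3$ quoted above can be checked directly with the Peres-Horodecki (positive partial transpose) criterion, which is necessary and sufficient for two qubits. The following NumPy sketch (our own illustration, not part of the protocol) verifies that the smallest eigenvalue of the partially transposed Werner state equals $(1-3v)/4$, turning negative exactly at $v=1/3$:

```python
import numpy as np

def werner(v):
    """Two-qubit Werner state v|psi-><psi-| + (1-v) I/4."""
    psi_m = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    return v * np.outer(psi_m, psi_m) + (1 - v) * np.eye(4) / 4

def min_eig_pt(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit."""
    r = rho.reshape(2, 2, 2, 2)                 # indices (a, b, a', b')
    pt = r.transpose(0, 3, 2, 1).reshape(4, 4)  # transpose the b <-> b' pair
    return np.linalg.eigvalsh(pt).min()

for v in (0.2, 1 / 3, 0.5, 1.0):
    assert np.isclose(min_eig_pt(werner(v)), (1 - 3 * v) / 4)
```

In particular `min_eig_pt(werner(v))` is negative precisely when $v>1/3$, matching the entanglement range used throughout.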
\section{Bell scenario and DI randomness}\label{sec2}
A bipartite Bell scenario with $m$ different measurements per subsystem, each measurement having $d$ possible results, is characterized by the joint probabilities $P_{AB|XY}=\{p(ab|xy)\}$, with measurement results denoted by $a,b\in \{1,2,...,d\}$ and measurements denoted by $x,y\in \{1,2,...,m\}$. The quantum distribution $P_{AB|XY}^Q$ is of the form \begin{equation}\label{q}
p(ab|xy)=\mbox{Tr} [M_{a|x}\otimes M_{b|y} \rho] \end{equation}
where $\rho$ is a quantum state (density operator) in some tensor product Hilbert space $\mathcal{H}_A\otimes \mathcal{H}_B$ and $\{M_{a|x}~|~M_{a|x}\ge 0~\forall a;~\sum_{a}M_{a|x}=\mathbb{I}_{\mathcal{H}_A}\}$, $\{M_{b|y}~|~M_{b|y}\ge 0~\forall b;~\sum_{b}M_{b|y}=\mathbb{I}_{\mathcal{H}_B}\}$ are positive operator valued measures (POVMs) \cite{Nielson}. The set of quantum statistics $P_{AB|XY}^Q$ is referred to as $Q$. A Bell expression $I =\sum_{abxy}c_{abxy}p(ab|xy)$ is a linear combination of the probabilities, specified by the coefficients $\{c_{abxy}\}$ \cite{Bell}. Correlations which can be expressed as $P(ab|xy)=\int_{\lambda}d\lambda\rho(\lambda)P(a|x,\lambda)P(b|y,\lambda)$, with $\lambda$ being the shared random variable, admit a {\em local realistic} description and satisfy the condition $I\le I_L$, where $I_L$ is called the local bound of the BI. Interestingly, there exist entangled quantum states which violate a BI, and the correlations obtained from these states cannot be explained in local realistic form. Such correlations are called nonlocal correlations. Moreover, there exist correlations which are more nonlocal than quantum correlations but compatible with relativistic causality, i.e., the no-signaling (NS) principle. The well-known Popescu-Rohrlich (PR) correlation \cite{Popescu} is an example of this type. If the collections of local, quantum and NS correlations are denoted by $\mathcal{P}^L$, $\mathcal{P}^Q$ and $\mathcal{P}^{NS}$, respectively, then the following strict inclusions hold: $\mathcal{P}^L\subset \mathcal{P}^Q\subset \mathcal{P}^{NS}$ (see \cite{Brunner} for a review of Bell nonlocality). Note that the BI is derived under the conjunction of the assumptions of \emph{reality} and \emph{locality} (along with \emph{measurement independence}). Violation of a BI by quantum correlations implies that quantum mechanics is not reconcilable with these assumptions.
As these assumptions refer to properties of an ontological (hidden-variable) model \cite{Rudolph}, from the observed BI violation it is impossible to conclude which one of them is violated. Interestingly, the BI can also be derived under two operational assumptions, namely \emph{predictability} and \emph{signal locality} \cite{Cavalcanti}. As the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity, BI violation implies that events are unpredictable. This alternative derivation of the BI from operational assumptions plays an important role in the practical question of randomness certification even when the experimental devices are not trusted.
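The strict inclusions $\mathcal{P}^L\subset\mathcal{P}^Q\subset\mathcal{P}^{NS}$ above are conveniently illustrated with the CHSH expression (written here with inputs and outputs relabeled as bits $0,1$, a convention differing from the $1,\ldots,m$ labels used above): its local bound is $2$, the quantum (Tsirelson) bound is $2\sqrt{2}$, and the PR box attains the algebraic maximum $4$ while still obeying no-signaling. A small self-contained check:

```python
# PR box: p(a,b|x,y) = 1/2 if a XOR b = x*y, else 0   (all of a,b,x,y are bits)
def pr(a, b, x, y):
    return 0.5 if (a ^ b) == x * y else 0.0

# Correlators E_xy = sum_ab (-1)^(a+b) p(ab|xy); CHSH S = E00 + E01 + E10 - E11
E = {(x, y): sum((-1) ** (a + b) * pr(a, b, x, y)
                 for a in (0, 1) for b in (0, 1))
     for x in (0, 1) for y in (0, 1)}
S = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]
print(S)  # 4.0: beyond the local bound 2 and the Tsirelson bound 2*sqrt(2)

# No-signaling: Alice's marginal p(a|x) does not depend on Bob's input y
for a in (0, 1):
    for x in (0, 1):
        assert sum(pr(a, b, x, 0) for b in (0, 1)) \
            == sum(pr(a, b, x, 1) for b in (0, 1)) == 0.5
```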
In the DI randomness certification scenario one party (say Alice) has a private place which is completely inaccessible from the outside, i.e., no illegitimate system may enter this place. From a cryptographic point of view the assumption of such a private place is admissible. Alice chooses classical inputs $x\in X$ and $y\in Y$ with probability distributions $\mathcal{P}_X(x)$ and $\mathcal{P}_Y(y)$, respectively, and sends them to two measurement devices ($\mathcal{MD}1$ and $\mathcal{MD}2$, respectively) through secure classical communication channels. The inputs prescribe the measurement devices to perform POVMs $\{M_{a|x}~|~M_{a|x}\ge 0~\forall a;~\sum_{a}M_{a|x}=\mathbb{I}_{\mathcal{H}_A}\}$ and $\{M_{b|y}~|~M_{b|y}\ge 0~\forall b;~\sum_{b}M_{b|y}=\mathbb{I}_{\mathcal{H}_B}\}$ on a quantum state $\rho$ shared between the two devices. Once the inputs are received, no classical communication between the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$ is allowed. Alice collects the input-output statistics $P(AB|XY)=\{p(ab|xy)\}$. \begin{figure}
\caption{(Colour on-line) Setup for DI randomness certification. Classical inputs $x,~y$ are sent from Alice's private place to the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$, respectively, through secure classical channels. The black dots denote the bipartite quantum state $\rho$ shared between the two measurement devices. Classical communication is not allowed between two measurement devices.}
\label{fig1}
\end{figure} Since no communication between the two measurement devices is allowed (i.e., the signal locality assumption is satisfied), BI violation implies that the operational statistics must be unpredictable. Therefore randomness can be certified against an eavesdropper who controls the specification of the experimental devices. The setup for DI randomness certification is depicted in Fig.\ref{fig1}.
The amount of randomness associated with the measurement outcomes is quantified by the guessing probability $G(x,y,\mathcal{K})= \max_{a,b}p(ab|xy,\mathcal{K})$ \cite{Acin_2} of a malicious eavesdropper who prepares the experimental devices. Here $p(ab|xy,\mathcal{K})$ are the joint outcome probabilities and $\mathcal{K}$ denotes the resource shared between the two spatially separated systems. If the eavesdropper is restricted by quantum theory, she prepares $\mathcal{K}$ as a bipartite quantum state; if she is restricted only by the no-signaling (NS) principle, $\mathcal{K}$ can be any correlation satisfying the NS principle. The quantity $G$ corresponds to the eavesdropper's probability to correctly guess the outcome pair $(a, b)$, since the best guess is simply to output the most probable pair. The guessing probability can be expressed in bits and is then known as the min-entropy, $H_{\infty}(x,y,\mathcal{K})=-\log_2G(x,y,\mathcal{K})$ \cite{Koenig}. In \cite{Pironio_2}, Pironio \emph{et al.} have shown that whenever a bipartite input-output probability distribution violates a BI, there is nonzero min-entropy associated with the outputs. To obtain the minimum randomness in quantum theory one has to solve the following optimization problem: \begin{eqnarray}\label{rand_quantum}
p^*_q(ab|xy)&=&~~~~~~\mbox{max}~~~~~~p(ab|xy)\nonumber\\
&&\mbox{subject~to}~~\sum_{abxy}c_{abxy}p(ab|xy)=I\nonumber\\
&& p(ab|xy)\mbox{~is~quantum}, \end{eqnarray}
where the last condition ensures that the obtained correlation is of the form of Eq.(\ref{q}). Adapting the technique for approximating the set of quantum correlations by a hierarchy of semidefinite programs (SDPs) introduced in \cite{Navascues}, one can efficiently lower bound the min-entropy obtainable from a quantum correlation. The minimum number of random bits obtained in quantum theory corresponding to the BI violation $I$ is thus $H_\infty(AB|XY)=-\log_2\max_{ab}p_q^*(ab|xy)$. One may, however, be interested in the amount of randomness obtainable in an NS theory, which means that instead of a quantum state, any correlation satisfying the NS condition is allowed to be shared between the measurement devices (see \cite{Pironio_2} for the NS analysis).
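For the CHSH scenario, Ref.\cite{Pironio_2} also reports a closed-form lower bound on the certified min-entropy as a function of the observed CHSH value $S$, namely $H_\infty\ge 1-\log_2\bigl(1+\sqrt{2-S^2/4}\bigr)$. A small sketch of this bound (the function name is ours; the `max(0.0, ...)` guard only absorbs floating-point round-off at $S=2\sqrt2$):

```python
import math

def min_entropy_bound(S):
    """Analytic lower bound on the min-entropy certified by CHSH value S,
    valid for 2 <= S <= 2*sqrt(2) (reported in Pironio et al.)."""
    return 1 - math.log2(1 + math.sqrt(max(0.0, 2 - S ** 2 / 4)))

print(min_entropy_bound(2.0))               # 0.0: no violation, nothing certified
print(min_entropy_bound(2 * math.sqrt(2)))  # 1.0: maximal violation certifies one bit
```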
\section{Semi-quantum nonlocal game scenario}\label{sec3}
Recently, Buscemi generalized the standard Bell game scenario to a semi-quantum scenario \cite{Buscemi}. In this case Alice chooses classical inputs $x\in X$ and $y\in Y$ with probability distributions $\mathcal{P}_X(x)$ and $\mathcal{P}_Y(y)$, respectively. But instead of sending these classical inputs to the measurement devices, she encodes the information about them into sets of quantum states $\{|\phi^x\rangle_{\alpha'}\}_{x\in X}$ and $\{|\psi^y\rangle_{\beta'}\}_{y\in Y}$, chosen from Hilbert spaces $\mathcal{H}_{\alpha'}$ and $\mathcal{H}_{\beta'}$, respectively. The quantum states $|\phi^x\rangle$ and $|\psi^y\rangle$ are then sent to the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$, respectively, through quantum channels. Given these quantum states, the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$ produce outcomes $a$ and $b$, respectively, by performing POVMs on the composite system, i.e., the system obtained from Alice's state and the corresponding part of a bipartite state $\rho_{\alpha\beta}$ shared between the two measurement devices. The output probability is \begin{eqnarray}\label{defQ}
p_{\rho_{\alpha\beta}}(ab||\phi^x\rangle_{\alpha'},|\psi^y\rangle_{\beta'}) =\mbox{tr}[(\mathcal{M}^{\alpha'\alpha}_a\otimes \mathcal{M}^{\beta\beta'}_b)\nonumber\\
(|\phi^x\rangle_{\alpha'}\langle\phi^x|\otimes \rho_{\alpha\beta}\otimes |\psi^y\rangle_{\beta'}\langle\psi^y|)], \end{eqnarray} where $\mathcal{M}^{\alpha'\alpha}_a$ ($\mathcal{M}^{\beta\beta'}_b$) is the element of the POVM performed on the composite system $\mathcal{H}_{\alpha'}\otimes \mathcal{H}_{\alpha}$ ($\mathcal{H}_{\beta}\otimes \mathcal{H}_{\beta'}$) producing the outcome $a$ ($b$). The expression of Eq.(\ref{defQ}) can also be written as \begin{eqnarray}\label{defQ1}
p_{\rho_{\alpha\beta}}(ab||\phi^x\rangle_{\alpha'},|\psi^y\rangle_{\beta'})
=\mbox{Tr} [M_{a||\phi^x\rangle_{\alpha'}}\otimes M_{b||\psi^y\rangle_{\beta'}} \rho_{\alpha\beta}], \end{eqnarray}
where the operators $M_{a||\phi^x\rangle_{\alpha'}}$ and $M_{b||\psi^y\rangle_{\beta'}}$ describe Alice's and Bob's effective POVMs acting on $\rho_{\alpha\beta}$, given $|\phi^x\rangle_{\alpha'}$ and $|\psi^y\rangle_{\beta'}$. We shall refer to the set of quantum probabilities of the form of Eq.(\ref{defQ1}) as $Q$.
In this generalized framework Buscemi proved that if the state shared between the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$ is entangled, then Alice can choose the input quantum states in such a way that the produced correlation cannot be achieved by local operations and shared randomness (LOSR). Later it was shown that in this scenario any entangled state can generate correlations that cannot be simulated by local operations and classical communication (LOCC) even if there is no restriction on the amount of classical communication \cite{Rosset}, but that such correlations can be simulated if the distribution of the shared variables depends on the input quantum states, i.e., if the measurement independence assumption is relaxed \cite{Banik}. Using this semi-quantum game framework, we explicitly show in the following that all two-qubit entangled Werner states are useful for MDI randomness certification.
We consider the following particular semi-quantum game. The input quantum states are chosen from a regular tetrahedron on the Bloch sphere i.e., \begin{equation}\label{qstate}
|\phi^x\rangle\langle\phi^x|=\frac{\mathbb{I}+\vec{v}_x.\vec{\sigma}}{2},
~~|\psi^y\rangle\langle\psi^y|=\frac{\mathbb{I}+\vec{v}_y.\vec{\sigma}}{2}, \end{equation} where for $x,y=1,\ldots,4$ we have $\vec{v}_1=\frac{(1,1,1)}{\sqrt{3}}$, $\vec{v}_2=\frac{(1,-1,-1)}{\sqrt{3}}$, $\vec{v}_3=\frac{(-1,1,-1)}{\sqrt{3}}$ and $\vec{v}_4=\frac{(-1,-1,1)}{\sqrt{3}}$ (the four vertices of a regular tetrahedron); here $\vec{\sigma}=(\sigma_1,\sigma_2,\sigma_3)$ with $\sigma_i$ ($i=1,2,3$) the Pauli matrices. The POVM $\{\mathcal{M}^{\alpha'\alpha}_a\}_{a\in \{0,1\}}$ is given by \begin{equation}
\mathcal{M}^{\alpha'\alpha}_1=|\phi^+\rangle\langle\phi^+|,~~\mathcal{M}^{\alpha'\alpha}_0=\mathbb{I}-|\phi^+\rangle\langle\phi^+|, \end{equation} \begin{figure}
\caption{(Colour on-line) Setup for MDI randomness certification. Alice has perfect state preparation device (PD) at her private place. Quantum states $|\phi^x\rangle_{\alpha'}$ and $|\psi^y\rangle_{\beta'}$ are sent from Alice's private place to the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$, respectively, through secure quantum channels. Black dots are the quantum state $\rho_{\alpha\beta}$ shared between two devices. Classical communication is allowed between two measurement devices but no quantum state transfer is allowed.}
\label{fig2}
\end{figure}
where $|\phi^+\rangle=\frac{|00\rangle+|11\rangle}{\sqrt{2}}$. The same POVM $\{\mathcal{M}^{\beta\beta'}_b\}_{b\in \{0,1\}}$ is considered at Bob's end. The probability distribution obtained when $\rho_{\alpha\beta}$ is the singlet state is \begin{equation}\label{p1}
p(a,b||\phi^x\rangle,|\psi^y\rangle)= \begin{cases} \frac{2-(a+b)}{4}, & \text{if}\ x=y\\ \frac{7-5a-5b+4ab}{12}, & \text{if}\ x \neq y\\ \end{cases} \end{equation} The probability distribution obtained when $\rho_{\alpha\beta}=\frac{\mathbb{I}}{4}$ is \begin{equation}\label{p2}
p(a,b||\phi^x\rangle,|\psi^y\rangle)= \begin{cases} \frac{9}{16}, & \text{if}\ a=0 \text{ and } b=0\\ \frac{3}{16}, & \text{if}\ a \oplus b=1 \\ \frac{1}{16}, & \text{if}\ a=1 \text{ and } b=1 \\ \end{cases} \end{equation} for all $x,y$. The Werner state is a classical mixture of these two states, and hence its probability distribution is the corresponding mixture of Eqs.(\ref{p1}) and (\ref{p2}).
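The distributions (\ref{p1}) and (\ref{p2}) can be reproduced from Eq.(\ref{defQ}) by a brute-force tensor computation. The sketch below takes the fourth tetrahedron vector as $(-1,-1,1)/\sqrt{3}$ (the vertex completing the regular tetrahedron), builds the four-qubit operator in the order $\alpha',\alpha,\beta,\beta'$, and checks a few entries of both distributions:

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch(v):
    """Qubit density matrix (I + v.sigma)/2 for a Bloch vector v, cf. Eq. (qstate)."""
    return (I2 + sum(c * s for c, s in zip(v, sigma))) / 2

# regular tetrahedron on the Bloch sphere; paper's x,y = 1,...,4 -> indices 0..3
vs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
P = np.outer(phi_plus, phi_plus.conj())
M = [np.eye(4) - P, P]                      # outcome a (or b) = 0, 1

def p(a, b, x, y, rho_ab):
    """p(a,b | |phi^x>, |psi^y>) of Eq. (defQ); qubit order alpha',alpha,beta,beta'."""
    state = np.kron(bloch(vs[x]), np.kron(rho_ab, bloch(vs[y])))
    return np.trace(np.kron(M[a], M[b]) @ state).real

singlet = np.outer(psi_minus, psi_minus.conj())
mixed = np.eye(4) / 4

assert np.isclose(p(1, 1, 0, 0, singlet), 0)       # (2-(a+b))/4 at a=b=1, x=y
assert np.isclose(p(1, 1, 0, 1, singlet), 1 / 12)  # (7-5a-5b+4ab)/12 at a=b=1
assert np.isclose(p(0, 0, 2, 3, mixed), 9 / 16)
```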
It is known that $W=\frac{\mathbb{I}}{2}-|\psi^-\rangle\langle\psi^-|$ is an entanglement witness for the two-qubit Werner state $\varrho^v$ \cite{Toth}. For Werner state $\varrho^v$, $\mbox{tr}[\varrho^vW]=\frac{1-3v}{4}$, which is negative for $v>\frac{1}{3}$ and $\mbox{tr}[\rho W]>0$ for any separable state $\rho$. From this entanglement witness operator Branciard \emph{et al.} have constructed the following MDI-entanglement witness \cite{Branciard}: \begin{equation}\label{mdi-ew}
I(P)=\frac{5}{8}\sum_{x= y}p(1,1||\phi^x\rangle,|\psi^y\rangle)-\frac{1}{8}\sum_{x\ne y}p(1,1||\phi^x\rangle,|\psi^y\rangle). \end{equation}
Here $P$ denotes the probability distribution $\{p(a,b||\phi^x\rangle,|\psi^y\rangle)|a,b=0,1;x,y=1,..,4\}$. For the Werner states the above expression becomes $I(P_{\varrho^v})=\frac{1-3v}{16}$, which is negative for $v>\frac{1}{3}$. For any separable state $\rho$, $I(P_{\rho})\ge 0$, as separable states are the end points of the semi-quantum game relation `$\succcurlyeq_{sq}$' defined in \cite{Buscemi}.
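Since the Werner state's statistics are the mixture $v\,$(\ref{p1})$\,+\,(1-v)\,$(\ref{p2}), the value $I(P_{\varrho^v})=\frac{1-3v}{16}$ follows by direct substitution into Eq.(\ref{mdi-ew}). An exact-arithmetic check of this algebra (function names are ours):

```python
from fractions import Fraction as F

def p_singlet(a, b, x, y):           # Eq. (p1)
    return F(2 - (a + b), 4) if x == y else F(7 - 5*a - 5*b + 4*a*b, 12)

def p_mixed(a, b, x, y):             # Eq. (p2), independent of x, y
    return {(0, 0): F(9, 16), (1, 1): F(1, 16)}.get((a, b), F(3, 16))

def I_witness(v):
    """MDI-EW of Eq. (mdi-ew) evaluated on p_v = v*p1 + (1-v)*p2."""
    pv = lambda a, b, x, y: v * p_singlet(a, b, x, y) + (1 - v) * p_mixed(a, b, x, y)
    xs = range(1, 5)
    same = sum(pv(1, 1, x, x) for x in xs)
    diff = sum(pv(1, 1, x, y) for x in xs for y in xs if x != y)
    return F(5, 8) * same - F(1, 8) * diff

for v in (F(0), F(1, 3), F(1, 2), F(1)):
    assert I_witness(v) == (1 - 3 * v) / 16   # the claimed closed form
```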
\section{MDI randomness certification}\label{sec4} We are now in a position to show that any two-qubit entangled Werner state can certify the presence of randomness when the measurement apparatuses are not trusted. The setup for MDI randomness certification is depicted in Fig.\ref{fig2}. Here, in contrast to the DI randomness certification scenario (Fig.\ref{fig1}), Alice has a perfect state preparation device at her private place. The quantum states, chosen from the set described in Eq.(\ref{qstate}), are prepared by Alice and sent to the measurement devices $\mathcal{MD}1$ and $\mathcal{MD}2$ through quantum channels. No leakage of information about the classical index $x$ (or $y$) is allowed. In the DI scenario, after the classical indices $x$ and $y$ are sent to the respective measurement devices, no classical communication is allowed between the measurement devices. Here no such restriction is required; but after the quantum states are received from Alice, any kind of quantum state transfer between the two measurement devices is prohibited. When the quantum states reach the measurement devices, both devices produce classical outcomes $a,b\in \{0,1\}$. Alice collects the input-output statistics and tests whether the collected data satisfy certain conditions.
{\bf Results}:
To find the minimum randomness associated with the probability distribution $P=\{p(ab|xy)\}$ one has to solve the following optimization problem,
\begin{eqnarray}\label{mdi_rand_quantum}
p^*(ab|xy)&=&~~~~~~\mbox{max}~~~~~~p(ab|xy)\nonumber\\ &&\mbox{subject~to}~~I(P)=\frac{1-3v}{16}\nonumber\\
&& p(ab|xy)\in Q, \end{eqnarray}
where $I(P)$ is the expression of Eq.(\ref{mdi-ew}). The minimum number of random bits obtained in quantum theory for a Werner state of visibility parameter $v$ is thus $H_\infty(AB|XY)=-\log_2\max_{ab}p_q^*(ab|xy)$. While the optimization problem (\ref{mdi_rand_quantum}) is computationally hard, one can solve a relaxed version, with the condition $p(ab|xy)\in Q_{1+AB}$, using an SDP. Alternatively, the condition $p(ab|xy)\in NS$ can be used to quantify the minimum number of random bits obtainable from the no-signaling principle. Our results show that, with the constraint $I(P)=\frac{1-3v}{16}$ alone, the min-entropy is zero against both a $Q_{1+AB}$ and a no-signaling eavesdropper (see Appendix). Changing the visibility parameter $v$ between free runs of the protocol corresponds to a movement along the line joining (\ref{p1}) and (\ref{p2}) in the probability distribution space (two parties, four inputs, binary outputs). Hence we look for characteristics of (\ref{p1}) and (\ref{p2}) that guarantee randomness.
\emph{Additional conditions on statistics:} As the protocol used above is the same up to a relabeling of the outputs, Eq.(\ref{mdi-ew}) can in general be written as \begin{equation}\label{mdi-ew1}
I(P)=\frac{5}{8}\sum_{x= y}p(i,j||\phi^x\rangle,|\psi^y\rangle)-\frac{1}{8}\sum_{x\ne y}p(i,j||\phi^x\rangle,|\psi^y\rangle) \end{equation} where $i,j\in\{0,1\}$. For $i=j$, the following two conditions (which should hold simultaneously) are sufficient to guarantee randomness (positive min-entropy) associated with the distribution $P$ for the parameter range $v\in(\frac{1}{3},1]$ under both the no-signaling and the $Q_{1+AB}$ constraints.\\
Condition (I): \begin{equation}
P(0,1|l,l)=P(0,1|m,m),~ \forall~ l,m\in \{1,2,3,4\}, \end{equation} i.e. when Alice and Bob have the same input the probability of obtaining outcomes $a=0$ and $b=1$ should be the same.\\ Condition (II): \begin{equation}
P(1,0|l,l)=P(1,0|m,m),~ \forall~ l,m\in \{1,2,3,4\}, \end{equation} i.e. when Alice and Bob have the same input the probability of obtaining outcomes $a=1$ and $b=0$ should be the same.
After performing the optimization of Eq.(\ref{mdi_rand_quantum}) with the additional conditions (I) and (II), the min-entropy is plotted in Fig. \ref{plot1}. \begin{figure}
\caption{(Colour on-line) The min-entropy obtained by solving the optimization problem (\ref{mdi_rand_quantum}) using the $Q_{1+AB}$ level of the NPA hierarchy, plotted against the visibility parameter $v$ under conditions (I) and (II). The min-entropy under the NS condition coincides with that obtained under $Q_{1+AB}$.}
\label{plot1}
\end{figure} For the case $i\neq j$, however, the following two conditions produce the same statistics as in Fig. \ref{plot1}.
Condition (III): \begin{equation}
P(0,0|l,l)=P(0,0|m,m),~ \forall~ l,m\in \{1,2,3,4\}, \end{equation} i.e. when Alice and Bob have the same input the probability of obtaining outcomes $a=0$ and $b=0$ should be the same.\\ Condition (IV): \begin{equation}
P(1,1|l,l)=P(1,1|m,m),~ \forall~ l,m\in \{1,2,3,4\}, \end{equation} i.e. when Alice and Bob have the same input the probability of obtaining outcomes $a=1$ and $b=1$ should be the same.
It is important to note that positive min-entropy is obtained for $I(P)<0$, a condition which is satisfied by any two-qubit entangled Werner state. Moreover, no separable state satisfies this condition, hence no cheating strategy is possible by sharing separable correlations. The two-qubit entangled Werner class of states also satisfies the additional conditions, and hence these states are useful for MDI min-entropy (randomness) certification. We also find that the min-entropy curve of the optimization problem (\ref{mdi_rand_quantum}) (along with the additional conditions) in the NS scenario (i.e., $p(ab|xy)\in NS$) is the same as the $Q_{1+AB}$ curve plotted in Fig.\ref{plot1}.
\section{Discussion}\label{sec5} Specifying various device-independent protocols based on the study of quantum nonlocality has important practical implications. Various such protocols have been reported \cite{DI1,DI2,DI3,DI4,DI5}, some with experimental realizations. Among these, one of the most interesting is DI randomness certification and generation. Violation of a Bell inequality guarantees randomness even from uncharacterized experimental devices. Nevertheless, the practical implementation of such protocols is extremely challenging, as it requires a genuine violation of Bell's inequality \cite{DI-Ex}. Different variants of randomness certification protocols have therefore been reported which require some assumptions on the devices. For example, Ref.\cite{Lunghi} designs a practical self-testing QRNG protocol which requires some knowledge about the dimension of the quantum systems used in the protocol. However, in all such DI or semi-DI protocols only those entangled states are useful that exhibit nonlocality, whereas there exist entangled states which are local even under (nonsequential) generalized measurements.
Here we introduce the MDI randomness certification protocol, which requires a trusted quantum state preparation device while the measurement device is completely unspecified, i.e., it can be supplied even by the eavesdropper. In this scenario we show that some \emph{local} entangled states become useful for the task, although they were useless in the corresponding DI scenario. One practical advantage of our protocol over the DI or semi-DI protocols is that in the DI scenario (see Fig.\ref{fig1}) any particle transfer or field interaction with the potential of carrying classical communication between the two measurement devices needs to be blocked. In our MDI scenario this requirement is relaxed: one need not worry about classical communication between the devices, but of course no quantum state transfer between them is allowed.
Our work motivates further research. First of all, note that we have considered a single-shot scenario, and the protocol presented here is not an optimal one. It would be interesting to find the optimal protocol and then compare its rate with that of the DI protocol. It would also be interesting to study whether all entangled states are useful for the MDI randomness certification task. In Ref.\cite{Koh} the authors have shown that relaxation of the `measurement independence' assumption in Bell's theorem potentially enhances the adversary's capabilities in the task of randomness expansion. In Ref.\cite{Banik} one of the authors of this letter has shown that correlations achieved in the semi-quantum nonlocal game scenario can be simulated by reducing `measurement independence'. In light of these two results, it will be interesting to study the effect of reduced `measurement independence' on the MDI randomness certification task.
\section{Acknowledgments} MB likes to thank G. Kar for stimulating discussions. Discussions with D. Rosset at ISI Kolkata and comments of A. Ac\'{i}n in a private communication are gratefully acknowledged by MB. It is a great pleasure to thank T. Chakraborty for help in improving the presentation of the manuscript. MB also acknowledges ISI (Kolkata), as part of this work was done there during his PhD.
\section{Appendix}
We present the results from the perspective of the eavesdropper, Eve, who prepares the measurement apparatus for Alice. The optimization problem (\ref{mdi_rand_quantum}) can be seen as Eve's best strategy to increase the guessing probability $p_q^*(ab|xy)$ of the outcome pair $ab$ given inputs $xy$. The min-entropy is \begin{equation}
H_\infty(AB|XY)=-\log_2\max_{ab}p_q^*(ab|xy). \end{equation}
For all $v\in(1/3,1]$, without any extra condition, Eve can always find $ab$ such that $H(ab|xy)$ is zero for both $Q_{1+AB}$ and no-signaling correlations, which implies zero $H_\infty(AB|XY)$. However, she gets positive $H(00|xy)$ when $x=y$ for some $v\in (1/3,1]$.
Under Conditions (I) and (II), the $H(ab|x=y)$ statistics are given in Fig. \ref{plot2} and the $H(ab|x\neq y)$ statistics are given in Fig. \ref{plot3}. Notice that the best strategy for a (no-signaling or $Q_{1+AB}$) Eve in both cases ($x=y$ or $x\neq y$) is to maximize the guessing probability $p(01|xy)$ or $p(10|xy)$. \begin{figure}
\caption{The $H(ab|x=y)$ statistics obtained by solving the optimization problem (\ref{mdi_rand_quantum}) using the $Q_{1+AB}$ level of the NPA hierarchy (blue) and no-signaling (red), plotted against the visibility parameter $v$, with $i=j=0$ in (\ref{mdi-ew1}).}
\label{plot2}
\end{figure} \begin{figure}
\caption{The $H(ab|x\neq y)$ statistics obtained by solving the optimization problem (\ref{mdi_rand_quantum}) using the $Q_{1+AB}$ level of the NPA hierarchy (blue) and no-signaling (red), plotted against the visibility parameter $v$, with $i=j=0$ in (\ref{mdi-ew1}) and Conditions (I) and (II) imposed.}
\label{plot3}
\end{figure}
\end{document}
\begin{document}
\title{The regularity of quotient paratopological groups}
\author{Taras Banakh} \address{Department of Mathematics, Ivan Franko Lviv National University, Universytetska 1, Lviv, 79000, Ukraine} \email{tbanakh@yahoo.com}
\author{Alex Ravsky} \address{Department of Functional Analysis, Pidstryhach Institute for Applied Problems of Mechanics and Mathematics National Academy of Sciences of Ukraine, Naukova 2-b, Lviv, 79060, Ukraine} \email{oravsky@mail.ru}
\keywords{paratopological group, quotient paratopological group, group reflexion, regularity} \subjclass{22A15, 54H10, 54H11} \begin{abstract} Let $H$ be a closed subgroup of a regular abelian paratopological group $G$. The group reflexion $G^\flat$ of $G$ is the group $G$ endowed with the strongest group topology weaker than the original topology of $G$. We show that the quotient $G/H$ is Hausdorff (and regular) if $H$ is closed (and locally compact) in $G^\flat$. On the other hand, we construct an example of a regular abelian paratopological group $G$ containing a closed discrete subgroup $H$ such that the quotient $G/H$ is Hausdorff but not regular.\end{abstract}
\maketitle
In this paper we study the properties of the quotients of paratopological groups by their normal subgroups.
By a paratopological group $G$ we understand a group $G$ endowed with a topology $\tau$ making the group operation continuous, see \cite{ST}. If, in addition, the operation of taking the inverse is continuous, then the paratopological group $(G,\tau)$ is a topological group. A standard example of a paratopological group failing to be a topological group is the Sorgenfrey line ${\mathbb{L}}$, that is, the real line ${\mathbb{R}}$ endowed with the Sorgenfrey topology (generated by the base consisting of half-intervals $[a,b)$, $a<b$).
Let $(G,\tau)$ be a paratopological group and $H\subset G$ be a closed normal subgroup of $G$. Then the quotient group $G/H$ endowed with the quotient topology is a paratopological group, see \cite{Ra}. As in the case of topological groups, the quotient homomorphism $\pi:G\to G/H$ is open. If the subgroup $H\subset G$ is compact, then the quotient $G/H$ is Hausdorff (and regular) provided so is the group $G$, see \cite{Ra}. The compactness of $H$ in this result cannot be replaced by local compactness, as the following simple example shows.
\begin{example}\label{ex1} The subgroup $H=\{(-x,x):x\in{\mathbb{Q}}\}$ is closed and discrete in the square $G={\mathbb{L}}^2$ of the Sorgenfrey line ${\mathbb{L}}$. Nonetheless, the quotient group $G/H$ fails to be Hausdorff: for any irrational $x$ the coset $(-x,x)+H$ cannot be separated from zero $(0,0)+H$. \end{example}
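In the language of the group reflexion introduced below, the obstruction in this example is that $H$ fails to be closed in the group reflexion: one can check that $({\mathbb{L}}^2)^\flat={\mathbb{R}}^2$ carries the Euclidean topology, in which the closure of $H$ is the whole antidiagonal,
\[
\overline{H}^{\,\flat}=\{(-t,t):t\in{\mathbb{R}}\}\supsetneq H,
\]
in accordance with the necessary and sufficient condition for the Hausdorffness of the quotient stated next.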
A necessary and sufficient condition for the quotient $G/H$ to be Hausdorff is the closedness of $H$ in the topology of the group reflexion $G^\flat$ of $G$.
By {\em the group reflexion} $G^\flat=(G,\tau^\flat)$ of a paratopological group $(G,\tau)$ we understand the group $G$ endowed with the strongest topology $\tau^\flat\subset\tau$ turning $G$ into a topological group. This topology admits a categorical description: $\tau^\flat$ is the unique topology on $G$ such that\begin{itemize} \item $(G,\tau^\flat)$ is a topological group; \item the identity homomorphism ${\operatorname{id}}:(G,\tau)\to(G,\tau^\flat)$ is continuous; \item for each continuous group homomorphism $h:G\to H$ into a topological group $H$ the homomorphism $h\circ {\operatorname{id}}^{-1}:G^\flat\to H$ is continuous. \end{itemize} Observe that the group reflexion of the Sorgenfrey line ${\mathbb{L}}$ is the usual real line ${\mathbb{R}}$.
For so-called 2-oscillating paratopological groups $(G,\tau)$ the topology $\tau^\flat$ admits a very simple description: its base at the origin $e$ of $G$ consists of the sets $UU^{-1}$, where $U$ runs over open neighborhoods of $e$ in $G$. Following \cite{BR} we define a paratopological group $G$ to be {\em 2-oscillating} if for each neighborhood $U\subset G$ of the origin $e$ there is another neighborhood $V\subset G$ of $e$ such that $V^{-1}V\subset UU^{-1}$. The class of 2-oscillating paratopological groups is quite wide: it contains all abelian (more generally all nilpotent) as well as saturated paratopological groups. Following I.Guran we call a paratopological group {\em saturated} if for each neighborhood $U$ of the origin in $G$ its inverse $U^{-1}$ has non-empty interior in $G$.
Given a subset $A$ of a paratopological group $(G,\tau)$ we can talk of its properties in the topology $\tau^\flat$. In particular, we shall say that a subset $A\subset G$ is {\em $\flat$-closed} in $G$ if it is closed in the topology $\tau^\flat$. Also with help of the group reflexion many helpful properties of paratopological groups can be defined.
A paratopological group $G$ is called \begin{itemize} \item {\em $\flat$-separated} if the topology $\tau^\flat$ is Hausdorff; \item {\em $\flat$-regular} if it has a neighborhood base at the origin, consisting of $\flat$-closed sets; \item {\em $\flat$-compact} if $G^\flat$ is compact. \end{itemize}
It is clear that each $\flat$-separated (and $\flat$-regular) paratopological group is functionally Hausdorff (and regular). Conversely, each Hausdorff (resp. regular) 2-oscillating group is $\flat$-separated (resp. $\flat$-regular), see \cite{BR}. On the other hand, there are examples of (nonabelian) Hausdorff paratopological groups $G$ which are not $\flat$-separated, see \cite{Ra}, \cite{BR}. The simplest example of a $\flat$-compact non-compact paratopological group is the Sorgenfrey circle
$\{z\in\mathbb C:|z|=1\}$ endowed with the topology generated by the base consisting of ``half-intervals" $\{e^{i\varphi}:\varphi\in[a,b)\}$, $a<b$.
Now we are able to state our principal positive result.
\begin{theorem}\label{main1} Let $H$ be a normal subgroup of a $\flat$-separated paratopological group $G$. Then the quotient paratopological group $G/H$ is \begin{enumerate} \item $\flat$-separated if and only if $H$ is closed in $G^\flat$; \item $\flat$-regular if $G$ is $\flat$-regular and the set $H$ is locally compact in $G^\flat$.
\end{enumerate} \end{theorem}
\begin{proof} Let $\pi:G\to G/H$ denote the quotient homomorphism.
1. If $H$ is closed in $G^\flat$ then $G^\flat/H$ is Hausdorff as a quotient of a Hausdorff topological group $G^\flat$. Since the identity homomorphism $G/H\to G^\flat/H$ is continuous, the paratopological group $G/H$ is $\flat$-separated.
Now assume conversely that the paratopological group $G/H$ is $\flat$-separated. Since the quotient map $\pi^\flat:G^\flat\to (G/H)^\flat$ is continuous its kernel $H$ is closed in $G^\flat$.
2. Assume that $G$ is $\flat$-regular and $H$ is locally compact in $G^\flat$. It follows that $H$ is closed in $G^\flat$ (this is so because the subgroup $H\subset G^\flat$, being locally compact, is complete). Then there is a closed neighborhood $W_1\subset G^\flat$ of the neutral element $e$ such that the intersection $W_1\cap H$ is compact in $G^\flat$. Take any closed neighborhood $W_2\subset G^\flat$ of $e$ such that $W_2^{-1}W_2\subset W_1$. We claim that $W_2\cap gH$ is compact for each $g\in G$. This is trivial if $W_2\cap gH$ is empty. If not, then $gh=w$ for some $h\in H$ and $w\in W_2$. Hence $W_2\cap gH\subset W_2\cap wh^{-1}H=W_2\cap wH=w(w^{-1}W_2\cap H)\subset w(W_2^{-1}W_2\cap H)\subset w(W_1\cap H)$ and the closed subset $W_2\cap gH$ of $G$ lies in the compact subset $w(W_1\cap H)$ of $G$. Consequently, $W_2\cap gH$ is compact for any $g\in G$. Let $W_3\subset G^\flat$ be a neighborhood of $e$ such that $W_3^{-1}W_3\subset W_2$.
To prove the $\flat$-regularity of the quotient group $G/H$, given any neighborhood $U\subset G$ of $e$ it suffices to find a neighborhood $V\subset U$ of $e$ such that $\pi(V)$ is $\flat$-closed in $G/H$. By the $\flat$-regularity of $G$, we can find a $\flat$-closed neighborhood $V\subset U\cap W_3$. We claim that $\pi(V)$ is $\flat$-closed in $G/H$. Since the identity map $(G/H)^\flat\to G^\flat/H$ is continuous, it suffices to verify that $\pi(V)$ is closed in the topological group $G^\flat/H$.
Take any point $gH\notin\pi(V)$ of $G^\flat/H$. It follows from $gH\cap V=\emptyset$ and the compactness of the set $W_2\cap gH$ that there is an open neighborhood $W_4\subset W_3$ of $e$ in $G^\flat$ such that $W_4(W_2\cap gH)\cap V=\emptyset$. We claim that $W_4z\cap V=\emptyset$ for any $z\in gH$. Assuming the converse, find a point $v\in W_4z\cap V$. It follows that $z\notin W_2$ (otherwise $v$ would belong to the empty set $W_4(W_2\cap gH)\cap V$). On the other hand, $z\in W_4^{-1}v\subset W_4^{-1}V\subset W_3^{-1}W_3\subset W_2$. This contradiction shows that $W_4gH\cap V=\emptyset$ and thus $\pi(W_4g)$ is a neighborhood of $gH$ in $G^\flat/H$, disjoint with $\pi(V)$. \end{proof}
\begin{corollary} If $H$ is a $\flat$-compact normal subgroup of a $\flat$-regular paratopological group $G$, then the quotient paratopological group $G/H$ is $\flat$-regular. \end{corollary}
\begin{proof} It follows that the identity inclusion $H^\flat\to G^\flat$ is continuous and thus $H$ is compact in $G^\flat$. Applying the preceding theorem, we conclude that the quotient group $G/H$ is $\flat$-regular. \end{proof}
\begin{remark} It is interesting to compare the latter corollary with a result of \cite{Ra} asserting that the quotient $G/H$ of a Hausdorff (regular) paratopological group $G$ by a compact normal subgroup $H\subset G$ is Hausdorff (regular). \end{remark}
Since for a 2-oscillating paratopological group $G$ the Hausdorff property (resp. the regularity) of $G$ is equivalent to the $\flat$-separatedness (resp. the $\flat$-regularity), Theorem~\ref{main1} implies
\begin{corollary}\label{cor1} Let $H$ be a normal subgroup of a Hausdorff 2-oscillating paratopological group $G$. Then the quotient paratopological group $G/H$ is \begin{enumerate} \item Hausdorff if $H$ is closed in $G^\flat$; \item regular if $G$ is regular and the set $H$ is locally compact in $G^\flat$.
\end{enumerate} \end{corollary}
Example~\ref{ex1} supplies us with a locally compact closed subgroup $H$ of a $\flat$-regular paratopological group $G={\mathbb{L}}^2$ such that the quotient $G/H$ is not Hausdorff. Next, we construct a $\flat$-regular abelian paratopological group $G$ containing a locally compact $\flat$-closed subgroup $H$ such that the quotient is Hausdorff but not regular. This will show that in Theorem~\ref{main1} and Corollary~\ref{cor1} the local compactness of $H$ in $G^\flat$ cannot be replaced by the local compactness plus $\flat$-closedness of $H$ in $G$.
Our construction is based on the notion of a {\em cone topology} (see the paper~\cite{Ra4} of the second author). Let $G$ be a topological group and $S\subset G$ be a closed subsemigroup of $G$, containing the neutral element $e\in G$. The {\em cone topology} $\tau_S$ on $G$ consists of sets $U\subset G$ such that for each $x\in U$ there is an open neighborhood $W\subset G$ of $e$ such that $x(W\cap S)\subset U$. It is clear that the group $G$ endowed with the cone topology $\tau_S$ is a regular paratopological group and its neighborhood base at $e$ consists of the sets $W\cap S$, where $W$ is a neighborhood of $e$ in $G$. Moreover, the paratopological group $(G,\tau_S)$ is saturated if $e$ is a cluster point of the interior of $S$ in $G$. In the latter case the paratopological group $(G,\tau_S)$ is 2-oscillating and thus $\flat$-regular, see \cite[Theorem 3]{BR}.
In the following example using the cone topology we construct a saturated regular paratopological group $G$ containing a $\flat$-closed discrete subgroup $H$ with non-regular quotient $G/H$.
\begin{example} Consider the group ${\mathbb{Q}}^3$ endowed with the usual (Euclidean) topology. A subsemigroup $S$ of ${\mathbb{Q}}^3$ is called a {\em cone} in ${\mathbb{Q}}^3$ if $q\cdot \vec x\in S$ for any non-negative $q\in{\mathbb{Q}}$ and any vector $\vec x\in S$.
Fix a sequence $(z_n)$ of rational numbers such that $0<\sqrt{2}-z_n<2^{-n}$ for all $n$ and let $S\subset {\mathbb{Q}}^3$ be the smallest closed cone containing the vectors $(1,0,0)$ and $(\frac1n,1,z_n)$ for all $n$. Let $\tau_S$ be the cone topology on the group ${\mathbb{Q}}^3$ determined by $S$. Since the origin of ${\mathbb{Q}}^3$ is a cluster point of the interior of $S$, the paratopological group $G=({\mathbb{Q}}^3,\tau_S)$ is saturated and $\flat$-regular. Moreover, its group reflexion coincides with ${\mathbb{Q}}^3$.
Now consider the $\flat$-closed subgroup $H=\{(0,0,q):q\in{\mathbb{Q}}\}$ of the group $G$. Since $H\cap S=\{(0,0,0)\}$, the subgroup $H$ is discrete (and thus locally compact) in $G$. On the other hand, $H$ fails to be locally compact in ${\mathbb{Q}}^3$, the group reflexion of $G$.
We claim that the quotient group $G/H$ is not regular. Let $\pi:G\to G/H$ denote the quotient homomorphism. We can identify $G/H$ with ${\mathbb{Q}}^2$ endowed with a suitable topology.
Let us show that $(0,1)\notin\pi(S)$. Assuming the converse we would find $x\in{\mathbb{Q}}$ such that $(0,1,x)\in S$. It follows from the definition of $S$ that $x\ge0$ and there is a sequence $(\vec x_i)$ converging to $(0,1,x)$ such that
$$\vec x_i=\sum_n\lambda_{in}(n^{-1},1,z_n)+\lambda_i(1,0,0)$$ where all $\lambda_i,\lambda_{in}\ge 0$ and almost all of them vanish. Taking into account that $\{\vec x_i\}$ converges to $(0,1,x)$ we conclude that \begin{itemize} \item $\lambda_i\to0$ as $i\to\infty$; \item $\lambda_{in}\underset{i\to\infty}\longrightarrow0$ for every $n$; \item $\sum_n\lambda_{in}$ tends to $1$ as $i\to\infty$. \end{itemize}
Let $\varepsilon>0$. Then
$\exists N_1(\forall n>N_1)\{|z_n-\sqrt{2}|<\varepsilon\}$,
$\exists N_2(\forall i>N_2)(\forall n\le N_1)\{\lambda_{in}<\varepsilon/N_1\}$ and
$\exists N_3(\forall i>N_3)\{|\sum_n \lambda_{in}-1|<\varepsilon\}$.
Put $N=\max\{N_2,N_3\}$. Let $i>N$. Then
$$|\sqrt{2}-\sum_n\lambda_{in}z_n|\le
|\sqrt{2}-\sum_n\lambda_{in}\sqrt{2}|+ |\sum_{n\le N_1}\lambda_{in}(\sqrt{2}-z_n)|+|\sum_{n>N_1}\lambda_{in}(\sqrt{2}-z_n)|\le$$ $$\varepsilon\sqrt{2}+\varepsilon+\sum_{n>N_1}\lambda_{in}\varepsilon\le \varepsilon(\sqrt{2}+1+1+\varepsilon).$$
Since $\varepsilon>0$ was arbitrary, the third coordinates $\sum_n\lambda_{in}z_n$ of the vectors $\vec x_i$ converge to $\sqrt{2}$, so $x=\sqrt{2}$, which is impossible as $x\in{\mathbb{Q}}$. This contradiction shows that $(0,1)\notin\pi(S)$ and thus $(0,\frac1n)\notin\pi(S)$ for all $n\in{\mathbb{N}}$ (since $S$ is a cone).
It remains to prove that for each neighborhood $V\subset {\mathbb{Q}}^3$ of the origin we get $\overline{\pi(V\cap S)}\not\subset\pi(S)$, where the closure is taken in $G/H$. This will follow as soon as we show that $(0,\frac1m)\in\overline{\pi(V\cap S)}$ for some $m$. Since $V$ is a (usual) neighborhood of $(0,0,0)$ in ${\mathbb{Q}}^3$, there is $m\in{\mathbb{N}}$ such that $\frac1m(\frac1n,1,z_n)\in V$ for all $n\in{\mathbb{N}}$. Then $\frac1m(\frac1n,1)\in\pi(V\cap S)$ for all $n\in{\mathbb{N}}$. Observe that the sequence $\{(\frac1{nm},\frac1m)\}_n$ converges to $(0,\frac1m)$ in $G/H$ since for each neighborhood $W\subset{\mathbb{Q}}^3$ of $(0,0,0)$ the difference $(\frac1{nm},\frac1m)-(0,\frac1m)=(\frac1{nm},0)$ belongs to $\pi(W\cap S)$ for all sufficiently large $n$. Therefore $(0,\frac1m)\in\overline{\pi(V\cap S)}\setminus\pi(S)$, so $\overline{\pi(V\cap S)}\not\subset\pi(S)$, which means that $G/H$ is not regular. \end{example}
As we have learned, in the submitted version of the paper~\cite{XieLiTu} Li-Hong Xie, Piyu Li, and Jin-Ji Tu proved that if $\mathcal P$ is one of the properties $T_1$, $T_2$, $T_3$, regularity, a paratopological group $G$ has the property $\mathcal P$, and $H$ is a compact normal subgroup of the group $G$, then the quotient group $G/H$ has the property $\mathcal P$ too. The case $\mathcal P=T_0$, however, was left open. We fill this gap here.
\begin{proposition} Let $H$ be a compact normal subgroup of a $T_0$ paratopological group $G$. Then the quotient group $G/H$ is $T_0$ too. \end{proposition} \begin{proof} Let $\mathcal B$ be the family of all open neighborhoods of the unit of the group $G$ and $\mathcal B'$ be the family of all open neighborhoods of the unit of the group $G/H$. Let $\pi:G\to G/H$ be the quotient map. Let $S=\bigcap_{U\in\mathcal B} U$ and $S'=\bigcap_{U'\in\mathcal B'} U'$. Then $S'\subset \bigcap_{U\in\mathcal B}\pi(UH)\subset \pi(\bigcap_{U\in\mathcal B}UH)$. Let $x\in \bigcap_{U\in\mathcal B}UH$ be an arbitrary point and $U\in\mathcal B$ be an arbitrary neighborhood. There exists a neighborhood $V\in\mathcal B$ such that $V^2\subset U$. Then $U^{-1}x\supset \overline{V^{-1}x}\supset V^{-1}x$. So $U^{-1}x\cap H\supset \overline{V^{-1}x}\cap H\ne\emptyset$. Since the set $H$ is compact there exists a point $y\in \bigcap_{U\in\mathcal B}(\overline{U^{-1}x}\cap H)=\bigcap_{U\in\mathcal B}(U^{-1}x\cap H)$. So $x\in Sy\subset SH$. Hence $S'\subset\pi(SH)$ and $S'\cap S'^{-1}\subset\pi(SH)\cap \pi(SH)^{-1} \subset\pi(SH\cap S^{-1}H)$. Let $x\in SH\cap S^{-1}H$ be an arbitrary point. Then there exist elements $s_1,s_2\in S$ and $h_1,h_2\in H$ such that $x=s_1h_1=s_2^{-1}h_2$. Then $s_2s_1=h_2h_1^{-1}\in S\cap H$. But since $H$ is a compact paratopological group, by Lemma 5.4 from~\cite{Ra3}, $H$ is a topological group. Since $H$ is a $T_0$ topological group the space $H$ is $T_1$ (in fact, $T_{31/2}$), so $H\cap S=H\cap \bigcap_{U\in\mathcal B} U=\{e\}$. Thus $s_2s_1=h_2h_1^{-1}=e$, so $s_2=s_1^{-1}$ and $h_2=h_1$. Then $xh_1^{-1}=s_1\in S\cap S^{-1}=\{e\}$. Hence $x\in H$. At last, $S'\cap S'^{-1}\subset \pi(SH\cap S^{-1}H)\subset \pi(H)=\{e\}$ and thus the group $G/H$ is $T_0$.
\end{proof}
\end{document} |
\begin{document}
\title{Compact hyperbolic Coxeter four-dimensional polytopes with eight facets}
\author{Jiming Ma} \address{School of Mathematical Sciences \\Fudan University\\Shanghai 200433, China} \email{majiming@fudan.edu.cn}
\author{Fangting Zheng} \address{Department of Mathematical Sciences \\ Xi'an Jiaotong Liverpool University\\ Suzhou 200433, China } \email{Fangting.Zheng@xjtlu.edu.cn}
\keywords{compact Coxeter polytopes, hyperbolic orbifolds, acute-angled, 4-polytopes with 8 facets} \subjclass[2010]{52B11, 51F15, 51M10} \date{Nov. 22, 2022} \thanks{Jiming Ma was partially supported by NSFC 11771088 and 12171092. Fangting Zheng was supported by NSFC 12101504 and XJTLU Research Development Fund RDF-19-01-29}
\begin{abstract}
In this paper, we obtain the complete classification of compact hyperbolic Coxeter four-dimensional polytopes with eight facets. \end{abstract}
\maketitle
\section{Introduction}
A Coxeter polytope in the spherical, hyperbolic or Euclidean space is a polytope whose dihedral angles are all integer sub-multiples of $\pi$. Let $\mathbb{X}^d$ be $\mathbb{E}^d$, $\mathbb{S}^d$, or $\mathbb{H}^d$. If $\Gamma\subset \operatorname{Isom}(\mathbb{X}^d)$ is a finitely generated discrete reflection group, then its fundamental domain is a Coxeter polytope in $\mathbb{X}^d$. On the other hand, if $\Gamma=\Gamma(P)$ is generated by reflections in the bounding hyperplanes of a Coxeter polytope $P\subset\mathbb{X}^d$, then $\Gamma$ is a discrete group of isometries of $\mathbb{X}^d$ and $P$ is its fundamental domain.
There is an extensive body of literature in this field. In early work, Coxeter \cite{Coxeter:1934} proved that any spherical Coxeter polytope is a simplex and any Euclidean Coxeter polytope is either a simplex or a direct product of simplices. See, for example, \cite{Coxeter:1934,Bourbaki: 1968} for the full lists of spherical and Euclidean Coxeter polytopes.
However, for hyperbolic Coxeter polytopes, the classification remains an active research topic. It was proved by Vinberg \cite{Vinberg:1985} that no compact hyperbolic Coxeter polytope exists in dimensions $d\geq 30$, and no non-compact hyperbolic Coxeter polytope of finite volume exists in dimensions $d \geq 996$ \cite{Prokhorov:1987}. These bounds, however, may not be sharp. Examples of compact polytopes are known up to dimension $8$ \cite{Bugaenko:1984,Bugaenko:1992}; non-compact polytopes of finite volume are known up to dimension $21$ \cite{Vinberg:1972,VK:1978,Borcherds: 1998}. As for the classification, complete results are available only in dimensions at most three. Poincar\'{e} completed the classification of $2$-dimensional hyperbolic polytopes in \cite{Poincare: 1882}. That result was important to the work of Klein and Poincar\'{e} on discrete groups of isometries of the hyperbolic plane. In 1970, Andreev proved an analogous result for $3$-dimensional hyperbolic convex polytopes of finite volume \cite{Andreev1: 1970,Andreev2: 1970}. This theorem played a fundamental role in Thurston's work on the geometrization of $3$-dimensional Haken manifolds.
In higher dimensions, although a complete classification is not available, interesting examples have been exhibited in \cite{Makarov: 1965,Makarov: 1966,Vinberg: 1967,Makarov: 1968,Vinberg: 1969,Rusmanov: 1989,ImH: 1990,Allcock: 2006}. In addition, enumerations have been reported for the cases in which the difference between the number of facets $m$ and the dimension $d$ of the polytope is fixed to some small value. When $m-d=1$, Lann\'{e}r classified all compact hyperbolic Coxeter simplices \cite{Lanner: 1950}. The enumeration of non-compact hyperbolic simplices of finite volume has been reported by several authors; see, e.g., \cite{Bourbaki: 1968,Vinberg: 1967,Koszul:1967}. For $m-d=2$, Kaplinskaja described all compact, as well as non-compact but finite-volume, hyperbolic Coxeter simplicial prisms \cite{Kaplinskaya: 1974}. Esselmann \cite{Esselmann: 1996} later enumerated the other compact possibilities in this family, which are named \emph{Esselmann polytopes}. Tumarkin \cite{Tumarkin: n2} classified all other non-compact but finite-volume hyperbolic Coxeter $d$-dimensional polytopes with $d + 2$ facets. In the case of $m-d=3$, Esselmann proved in 1994 that compact hyperbolic Coxeter $d$-polytopes with $d+3$ facets exist only when $d\leq 8$ \cite{Esselmann}. By extending the techniques developed by Esselmann in \cite{Esselmann} and \cite{Esselmann: 1996}, Tumarkin completed the classification of compact hyperbolic Coxeter $d$-polytopes with $d+3$ facets \cite{Tumarkin: n3}. In the non-compact case, Tumarkin proved in \cite{Tumarkin: n3fv,Tumarkin: n3fvs} that such polytopes do not exist in dimensions greater than or equal to $17$, and that there is a unique such polytope in dimension $16$. Moreover, in the same papers he provided the complete classification of a special family of pyramids over a product of three simplices, which exist only in dimensions $4,5,\dots,9$ and $13$. The classification in the finite-volume case has not been completed yet.
Regarding this sub-family, Roberts provided a list of those with exactly one non-simple vertex \cite{Roberts:15}. In the case of $m-d=4$, Felikson and Tumarkin showed in \cite{FT:08} that no compact hyperbolic Coxeter $d$-polytope with $d + 4$ facets exists when $d\geq 8$. This bound is sharp because of the example constructed by Bugaenko \cite{Bugaenko:1984}. In addition, Felikson and Tumarkin showed that Bugaenko's example is the only $7$-dimensional polytope with $11$ facets. However, complete classifications for $d=4,5,6$ are not presented there.
Besides, some authors have also considered polytopes with a small number of pairs of disjoint facets \cite{FT:08s,FT:09,FT:14} or of certain combinatorial types, such as $d$-pyramids \cite{Tumarkin: n2,Tumarkin: n3fv} and $d$-cubes \cite{Jacquemet2017,JT:2018}. An up-to-date overview of the current knowledge on hyperbolic Coxeter polytopes is available on Anna Felikson's webpage \cite{Annahomepage}.
In this paper, we classify all compact hyperbolic Coxeter $4$-polytopes with $8$ facets. The main theorem is as follows:
\begin{theorem}\label{thm:main}
There are exactly $348$ compact hyperbolic Coxeter $4$-polytopes with $8$ facets. In particular, $P_{21}$ has two dihedral angles of $\displaystyle\frac{\pi}{12}$, and $P_{17,8}$ has a dihedral angle of $\displaystyle\frac{\pi}{7}$. Among hyperbolic Coxeter polytopes of dimension at least $4$, these two values of dihedral angles appear here for the first time, and $\displaystyle\frac{\pi}{12}$ is the smallest dihedral angle known so far. \end{theorem}
We remark that Burcroff independently obtained the same list at almost the same time \cite{Amanda:2022}. Comparing the results benefited both sides: Burcroff contacted us when our preprint appeared and kindly pointed out several typos in the translation of the data into Coxeter diagrams, while our list helped her find two combinatorial types admitting a hyperbolic structure that had been lost or double-counted. We now agree that $348$ is the correct number. The correspondence between our notation and Burcroff's is presented in Section \ref{section:vadilation}.
The paper \cite{JT:2018} is the main inspiration for our recent work on enumerating hyperbolic Coxeter polytopes. In comparison with \cite{JT:2018}, we use a more universal ``block-pasting'' algorithm, first introduced in \cite{MZ:2018}, rather than the ``tracing back'' approach. More geometric obstructions are adopted and programmed to considerably reduce the computational load. Our algorithm efficiently enumerates hyperbolic Coxeter polytopes over an arbitrary combinatorial type rather than merely the $n$-cube.
Last but not least, our main motivation for studying hyperbolic Coxeter polytopes is the construction of high-dimensional hyperbolic manifolds. However, this is not the theme here. Readers may refer to, for example, \cite{KM: 2013} for interesting hyperbolic manifolds built from special hyperbolic Coxeter polytopes.
The paper is organized as follows. In Section $2$ we provide some preliminaries on (compact) hyperbolic (Coxeter) polytopes. In Section $3$, we recall the two-phase procedure and related terminology introduced by Jacquemet and Tschantz \cite{Jacquemet2017,JT:2018} for enumerating all hyperbolic Coxeter $n$-cubes. The $37$ combinatorial types of simple $4$-polytopes with $8$ facets are reported in Section $4$. The enumeration of all ``SEILper''-potential matrices is explained in Section $5$. The ``6-rounds'' procedure is applied to the ``SEILper''-potential matrices to obtain the Gram matrices of actual hyperbolic Coxeter polytopes in Section $6$. Validations and the complete lists of the resulting Coxeter diagrams of Theorem \ref{thm:main} are presented in Section $7$.
\textbf{Acknowledgment}
We would like to thank Amanda Burcroff for communicating with us about her results and for pointing out several confusing drawing typos in the first arXiv version. The computations are delicate and complex, and the list is now much more convincing thanks to the mutual check. We are also grateful to Nikolay Bogachev for his interest and discussions about the results, and for noting a missing hyperparallel distance datum and some textual mistakes in a previous version. The computations throughout this paper were performed on a cluster of servers of PARATERA, engrid12, line priv$\_$para (CPU: Intel(R) Xeon(R) Gold 5218 16 Core v5@2.3GHz).
\section{Preliminaries} \label{section:cchp} In this section, we recall some essential facts about compact hyperbolic Coxeter polytopes, including Gram matrices, Coxeter diagrams, characterization theorems, etc. Readers may refer to, for example, \cite{Vinberg:1993} for more details.
\subsection{Hyperbolic space, hyperplanes and convex polytopes} We first describe the hyperboloid model of the $d$-dimensional hyperbolic space $\mathbb{H}^d$. Let $\mathbb{E}^{d,1}$ be a $(d+1)$-dimensional real vector space equipped with a Lorentzian scalar product $\langle\cdot,\cdot \rangle$ of signature $(d,1)$. We denote by $C_+$ and $C_-$ the connected components of the open cone $$C=\{x=(x_1, ...,x_d,x_{d+1})\in \mathbb{E}^{d,1}:\langle x,x\rangle<0\}$$ with $x_{d+1}>0$ and $x_{d+1}<0$, respectively. Let $R_{+}$ be the group of positive real numbers acting on $\mathbb{E}^{d,1}$ by homothety. The hyperbolic space $\mathbb{H}^d$ can be identified with the quotient set $C_+/R_+$, which is a subset of $P\mathbb{S}^d=(\mathbb{E}^{d,1}\backslash \{0\})/R_+.$ There is a natural projection $$\pi:(\mathbb{E}^{d,1}\backslash \{0\})\rightarrow P\mathbb{S}^d.$$
\noindent We denote by $\overline{\mathbb{H}^d}$ the completion of $\mathbb{H}^d$ in $P\mathbb{S}^d$. The points of the boundary $\partial \mathbb{H}^d= \overline{\mathbb{H}^d}\backslash \mathbb{H}^d$ are called \emph{ideal points}. The affine subspaces of $\mathbb{H}^d$ of dimension $d-1$ are \emph{hyperplanes}. In particular, every hyperplane of $\mathbb{H}^d$ can be represented as $$H_e=\{\pi(x):x\in C_+,\langle x,e\rangle=0\},$$ where $e$ is a vector with $\langle e,e \rangle=1$. The half-spaces separated by $H_e$ are denoted by $H_e^+$ and $H_e^-$, where \begin{equation} H_e^-=\{\pi(x):x\in C_+,\langle x,e\rangle\leq 0\}. \label{1} \end{equation}
The \emph{mutual disposition of hyperplanes} $H_{e}$ and $H_{f}$ can be described in terms of the corresponding two vectors $e$ and $f$ as follows: \begin{itemize}
\item The hyperplanes $H_{e}$ and $H_{f}$ intersect if $\vert\langle e,f \rangle\vert<1$. The value of the dihedral angle of $H_{e}^-\cap H_{f}^-$, denoted by $\angle H_e H_f$, can be obtained via the formula $$\cos \angle H_e H_f=-\langle e,f\rangle;$$
\item The hyperplanes $H_{e}$ and $H_{f}$ are ultra-parallel if $\vert\langle e,f\rangle\vert=1$;
\item The hyperplanes $H_{e}$ and $H_{f}$ diverge if $\vert\langle e,f\rangle\vert>1$. The distance $\rho(H_e,H_f)$ between $H_e$ and $H_f$, when $H_e^+\subset H_f^-$ and $H_f^+\subset H_e^-$, is determined by $$\cosh \rho(H_e,H_f)=-\langle e,f \rangle.$$ \end{itemize}
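For readers who wish to experiment, the trichotomy above can be checked numerically. The following sketch is our illustration, not part of the paper: the function names and test vectors are ours. It computes the Lorentzian product of two unit space-like normals in $\mathbb{E}^{2,1}$ and classifies the mutual disposition of the corresponding hyperplanes.

```python
import math

def lorentz(e, f):
    """Lorentzian scalar product of signature (d, 1) on E^{d,1}:
    the last coordinate carries the minus sign."""
    return sum(a * b for a, b in zip(e[:-1], f[:-1])) - e[-1] * f[-1]

def disposition(e, f, eps=1e-9):
    """Classify the mutual disposition of the hyperplanes H_e and H_f,
    where e and f are unit space-like normals (<e,e> = <f,f> = 1)."""
    p = lorentz(e, f)
    if abs(p) < 1 - eps:
        # intersecting: the dihedral angle of H_e^- \cap H_f^- is arccos(-<e,f>)
        return ("intersect", math.acos(-p))
    if abs(p) <= 1 + eps:
        return ("ultra-parallel", None)
    # diverging: the distance rho between the hyperplanes satisfies
    # cosh(rho) = |<e,f>| (with the orientation convention of the text,
    # <e,f> = -cosh(rho))
    return ("diverge", math.acosh(abs(p)))

# Two coordinate hyperplanes of H^2 meet orthogonally:
print(disposition((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
# A normal "boosted" by t = 1 yields two hyperplanes at distance 1:
t = 1.0
print(disposition((1.0, 0.0, 0.0), (-math.cosh(t), 0.0, math.sinh(t))))
```

Taking the absolute value in the divergent case makes the sketch independent of the orientation convention; the bullet list above fixes signs via $H_e^+\subset H_f^-$ and $H_f^+\subset H_e^-$.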
We say a hyperplane $H_e$ \emph{supports} a closed bounded convex set $S$ if $H_e\cap S \ne \emptyset$ and $S$ lies in one of the two closed half-spaces bounded by $H_e$. If a hyperplane $H_e$ supports $S$, then $H_e \cap S$ is called a \emph{face} of $S$.
\begin{definition}
A $d$-dimensional convex hyperbolic polytope is a subset of the form
\begin{equation}
P=\overline{\mathop{\cap}\limits_{i\in \mathcal{I}} H_{i}^-}\subset\overline{\mathbb{H}^d}, \label{2}
\end{equation}
where $H_i^-$ is the negative half-space bounded by the hyperplane $H_i$ in $\mathbb{H}^d$, and the bar over the intersection denotes the completion in $\overline{\mathbb{H}^d}$,
under the following assumptions:
\begin{itemize}
\item $P$ contains a non-empty open subset of $\mathbb{H}^d$ and is of finite volume;
\item Every bounded subset $S$ of $P$ intersects only finitely many $H_{i}$.
\end{itemize} \end{definition}
A convex polytope of the form (\ref{2}) is called \emph{acute-angled} if for distinct $i,j$, either $\angle H_iH_j\leq \frac{\pi}{2}$ or $H_i^+\cap H_j^+=\emptyset$. It is obvious that Coxeter polytopes are acute-angled. We denote by $e_i$ the unit vector corresponding to $H_i$, namely $e_i$ is orthogonal to $H_i$ and points away from $P$. The polytope $P$ has the following form in the hyperboloid model: $$P=\pi(K)\cap \overline{\mathbb{H}^d},$$ where $K=K(P)$ is the convex polyhedral cone in $\mathbb{E}^{d,1}$ given by $$K=\{x\in \mathbb{E}^{d,1}: \langle x,e_i\rangle \leq 0~ \text{for all}~ i \}.$$
In the sequel, a $d$-dimensional convex polytope $P$ is called a \emph{$d$-polytope}, and a $j$-dimensional face is called a \emph{$j$-face} of $P$. In particular, a $(d-1)$-face is called a \emph{facet} of $P$. We assume that each hyperplane $H_i$ intersects $P$ in a facet; in other words, the hyperplane $H_i$ is uniquely determined by $P$ and is called a \emph{bounding hyperplane} of the polytope $P$. A hyperbolic polytope $P$ is called \emph{compact} if all of its $0$-faces, i.e., vertices, lie in $\mathbb{H}^d$. It is called of \emph{finite volume} if some vertices of $P$ lie on $\partial\mathbb{H}^d$.
\subsection{Gram matrices, Perron-Frobenius Theorem, and Coxeter diagrams}\label{hcp}
Most of the content of this subsection is well known to experts in this field; we present it for the convenience of the reader. In particular, Theorems \ref{thm:signature} and \ref{Vinberg:thm3.1} are of central importance throughout this paper.
For a hyperbolic Coxeter $d$-polytope $P=\overline{\mathop{\cap}\limits_{i\in \mathcal{I}} H_{i}^-}$ with $m$ facets, the Gram matrix of $P$ is the Gram matrix of the system of unit vectors $\{e_i\in \mathbb{E}^{d,1}: i\in\mathcal{I}\}$ that determine the half-spaces $H_i^-$, i.e., the $m\times m$ symmetric matrix $G(P)=(g_{ij})_{1\leq i,j\leq m}$ defined as follows:
\begin{center}
$g_{ij}=
\left\{
\begin{array}{lcl}
1 & {\rm if} & j=i, \\
-\cos\frac{\pi}{k_{ij}} & {\rm if} & H_i~ \text{and}~ H_j~ \text{intersect at a dihedral angle}~ \frac{\pi}{k_{ij}},\\
-1 & {\rm if} & H_i~\text{and}~H_j~\text{are ultra-parallel},\\
-\cosh \rho_{ij} & {\rm if} & H_i~ \text{and}~ H_j~ \text{diverge at a distance}~\rho_{ij}.
\end{array}\right.
$ \end{center}
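As a sanity check of these conventions (an illustrative computation of ours, not taken from the paper), one can evaluate the determinant of the $3\times 3$ Gram matrix of a triangle whose bounding lines meet pairwise at angles $\pi/p$, $\pi/q$, $\pi/r$: its sign distinguishes the spherical, Euclidean, and (compact) hyperbolic cases.

```python
import math

def triangle_gram_det(p, q, r):
    """Determinant of the 3x3 Gram matrix of a triangle whose bounding
    lines meet pairwise at angles pi/p, pi/q, pi/r, so that every
    off-diagonal entry is -cos(pi/k) for the corresponding k."""
    a, b, c = (math.cos(math.pi / k) for k in (p, q, r))
    # Determinant of [[1, -c, -b], [-c, 1, -a], [-b, -a, 1]]:
    return 1 - a * a - b * b - c * c - 2 * a * b * c

# The sign of the determinant detects the geometry of the simplex:
print(triangle_gram_det(2, 3, 5) > 0)           # positive definite: spherical
print(abs(triangle_gram_det(2, 3, 6)) < 1e-12)  # degenerate: Euclidean
print(triangle_gram_det(2, 3, 7) < 0)           # signature (2,1): hyperbolic
```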
Other than by its Gram matrix, a Coxeter polytope $P$ can also be described by its \emph{Coxeter diagram} $\Gamma=\Gamma(P)$. Every node $i$ of $\Gamma$ represents the bounding hyperplane $H_i$ of $P$. Two nodes $i$ and $j$ are joined by an edge of weight $k_{ij}$, $2\leq k_{ij} \leq \infty$, if $H_i$ and $H_j$ intersect in $\mathbb{H}^d$ at angle $\frac{\pi}{k_{ij}}$. If the hyperplanes $H_i$ and $H_j$ have a common perpendicular of length $\rho_{ij}>0$ in $\mathbb{H}^d$, the nodes $i$ and $j$ are joined by a dotted edge, sometimes labelled $\cosh \rho_{ij}$. In the following, an edge of weight $2$ is omitted, and an edge of weight $3$ is drawn without its weight. The rank of $\Gamma$ is defined as the number of its nodes. In the compact case, $k_{ij}$ is never $\infty$, and we have $2\leq k_{ij}< \infty$.
A square matrix $M$ is said to be the direct sum of the matrices $M_1, M_2,\cdots,M_n$ if by some permutation of its rows and columns it can be brought to the form \begin{center}
$ \begin{pmatrix}
M_1&&&&0 \\
&M_2&&&\\
&& \ddots&&\\
&&&\ddots& \\
0&&&&M_n
\end{pmatrix}. $
\end{center}
\noindent A matrix $M$ that cannot be represented as a direct sum of two matrices is said to be \emph{indecomposable}\footnote{It is also referred to as ``irreducible'' in some references.}. Every matrix can be represented uniquely as a direct sum of indecomposable matrices, which are called its (indecomposable) components. We say a polytope is \emph{indecomposable} if its Gram matrix $G(P)$ is indecomposable.
\begin{figure}
\caption{Connected elliptic (left) and connected parabolic (right) Coxeter diagrams.}
\label{figure:coxeter}
\end{figure}
In 1907, Perron found a remarkable property of the eigenvalues and eigenvectors of matrices with positive entries. Frobenius later generalized it by investigating the spectral properties of indecomposable non-negative matrices.
\begin{theorem}[Perron-Frobenius, \cite{G:1959}]\label{thm:PF}
An indecomposable matrix $A=(a_{ij})$ with non-negative entries always has a positive eigenvalue $r$ that is a simple root of the characteristic polynomial. The corresponding eigenvector can be chosen to have positive coordinates. The moduli of all the other eigenvalues do not exceed $r$. \end{theorem}
It is obvious that the Gram matrix $G(P)$ of an indecomposable Coxeter polytope is an indecomposable symmetric matrix with non-positive entries off the diagonal. Since the diagonal elements of $G(P)$ are all $1$s, $G(P)$ is either positive definite, positive semi-definite, or indefinite. According to the Perron--Frobenius theorem, the defect of a connected positive semi-definite matrix $G(P)$ does not exceed $1$, and any proper principal submatrix of it is positive definite. For a Coxeter $n$-polytope $P$, its Coxeter diagram $\Gamma(P)$ is said to be \emph{elliptic} if $G(P)$ is positive definite; $\Gamma (P)$ is called \emph{parabolic} if every indecomposable component of $G(P)$ is degenerate and every proper subdiagram is elliptic. The elliptic and connected parabolic diagrams are exactly the Coxeter diagrams of spherical and Euclidean Coxeter simplices, respectively. They were classified by Coxeter \cite{Coxeter:1934} and are shown in Figure \ref{figure:coxeter}.
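As a quick numerical illustration (a sketch in Python, not part of the classification itself), consider the Gram matrix of the Euclidean triangle group $\widetilde{A}_2$, whose three mirrors pairwise form the angle $\frac{\pi}{3}$: it is positive semi-definite with defect exactly $1$, in line with the Perron--Frobenius argument above.

```python
import numpy as np

# Gram matrix of the parabolic (Euclidean) triangle group \tilde{A}_2:
# three mirrors meeting pairwise at angle pi/3, so every off-diagonal
# entry is -cos(pi/3) = -1/2.
G = np.array([
    [1.0, -0.5, -0.5],
    [-0.5, 1.0, -0.5],
    [-0.5, -0.5, 1.0],
])

eigenvalues = np.sort(np.linalg.eigvalsh(G))  # ascending: [0, 3/2, 3/2]
# The defect (corank) is the multiplicity of the eigenvalue 0.
defect = int(np.sum(np.isclose(eigenvalues, 0.0)))
print(eigenvalues, defect)
```

Every proper principal submatrix here is $\begin{pmatrix}1&-1/2\\-1/2&1\end{pmatrix}$, which is positive definite, as the theory predicts.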
A connected diagram $\Gamma$ is a \emph{Lann\'{e}r diagram} if $\Gamma$ is neither elliptic nor parabolic, while every proper subdiagram of $\Gamma$ is elliptic. These are exactly the Coxeter diagrams of compact hyperbolic Coxeter simplices. All such diagrams, reported by Lann\'{e}r \cite{Lanner: 1950}, are listed in Figure \ref{figure:lanner}.
\begin{figure}
\caption{The Lann\'{e}r diagrams.}
\label{figure:lanner}
\end{figure}
Although the full list of hyperbolic Coxeter polytopes remains incomplete, some powerful algebraic restrictions are known \cite{Vinberg:1985}:
\begin{theorem} \label{thm:signature}
(\cite{Vinberg:1985s}, Th. 2.1). Let $G=(g_{ij})$ be an indecomposable symmetric matrix
of signature $(d,1)$, where $g_{ii}=1$ and $g_{ij}\leq 0$ if $i\ne j$. Then there exists a unique (up to isometry of $\mathbb{H}^d$) convex hyperbolic polytope $P\subset\mathbb{H}^d$, whose Gram matrix coincides with $G$. \end{theorem}
\begin{theorem} \label{Vinberg:thm3.1}
(\cite{Vinberg:1985s}, Th. 3.1, Th. 3.2)
Let $P=\mathop{\cap}\limits_{i\in I} H_i^- \subset \mathbb{H}^d$ be a compact acute-angled polytope and let $G=G(P)$ be its Gram matrix. Denote by $G_J$ the principal submatrix of $G$ formed from the rows and columns whose indices belong to $J\subset I$. Then,
\begin{enumerate}
\item The intersection $\mathop{\cap}\limits_{j\in J}H_j^-$, $J\subset I$, is a face $F$ of $P$ if and only if the matrix $G_J$ is positive definite;
\item For any $J\subset I$, the matrix $G_J$ is not parabolic.
\end{enumerate}
\end{theorem}
A convex polytope is said to be \emph{simple} if each of its faces of codimension $k$ is contained in exactly $k$ facets. By Theorem \ref{Vinberg:thm3.1}, we have the following corollary:
\begin{corollary}
Every compact acute-angled polytope is simple. \end{corollary}
\section{Potential hyperbolic Coxeter matrices}\label{section:potential}
In order to classify all of the compact hyperbolic Coxeter $4$-polytopes with $8$ facets, we first enumerate all Coxeter matrices of simple $4$-polytopes with $8$ facets that satisfy the spherical condition around every vertex. These are named \emph{potential hyperbolic Coxeter matrices} in \cite{JT:2018}. Almost all of the terminology and theorems in this section are due to Jacquemet and Tschantz. We recall them here for reference; readers can refer to \cite{JT:2018} for more details.
\subsection{Coxeter matrices}\label{subsection:coxeterMatrix}
The \emph{Coxeter matrix} of a hyperbolic Coxeter polytope $P$ is a symmetric matrix $M=(m_{ij})_{1\leq i,j\leq N}$ with entries in $\mathbb{N}\cup\{\infty\}$ such that \[m_{ij}=\left\{\begin{array}{cl} 1,&\text{if } j=i,\\ k_{ij}, &\text{if }H_i\text{ and }H_j\text{ intersect in }\mathbb{H}^n\text{ with angle }\frac{\pi}{k_{ij}},\\ \infty, & \text{otherwise}. \end{array}\right.\] Note that, compared with the Gram matrix, the Coxeter matrix does not record the distances between the disjoint pairs of facets.
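The ellipticity test used throughout can be sketched as follows (an illustrative Python fragment; the function names are ours, not taken from \cite{JT:2018}): each finite entry $m_{ij}$ of a Coxeter matrix is replaced by $-\cos\frac{\pi}{m_{ij}}$, and the resulting symmetric matrix is tested for positive definiteness.

```python
import math
import numpy as np

def cosine_matrix(M):
    """Replace each finite entry m_ij of a Coxeter matrix by -cos(pi/m_ij);
    the diagonal 1s become -cos(pi) = 1.  Entries are assumed finite, since
    the submatrices tested at vertices contain no infinity entries."""
    return np.array([[-math.cos(math.pi / m) for m in row] for row in M])

def is_elliptic(M):
    """A Coxeter matrix is elliptic iff its cosine matrix is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(cosine_matrix(M)) > 1e-9))

# Coxeter matrix of the spherical simplex group A_3 (path diagram 3-3):
A3 = [[1, 3, 2], [3, 1, 3], [2, 3, 1]]
print(is_elliptic(A3))  # True: A_3 is elliptic
```

By contrast, the hyperbolic triangle group with angles $\frac{\pi}{7},\frac{\pi}{3},\frac{\pi}{2}$ fails this test, since its cosine matrix is indefinite.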
\begin{remark}
In the subsequent discussions, we refer to \textit{the Coxeter matrix $M$ of a graph $\Gamma$} as the Coxeter matrix $M$ of a Coxeter polytope $P$ such that $\Gamma=\Gamma(P)$.
\end{remark}
\subsection{Partial matrices}\label{subsection:partial}
\begin{definition}
Let $\Omega=\{n\in \mathbb{Z}\,|\,n\geq 2\}\cup\{\infty\}$ and let $\bigstar$ be a symbol representing an undetermined real value. A \textit{partial matrix of size $m\geq 1$} is a symmetric $m\times m$ matrix $M$ whose diagonal entries are $1$, and whose non-diagonal entries belong to $\Omega\cup\{\bigstar\}$. \end{definition}
\begin{definition}
Let $M$ be an arbitrary $m\times m$ matrix, and $s=(s_1,s_2,\cdots,s_k)$, $1\leq s_1<s_2<\cdots<s_k\leq m$. Let $M^{s}$ be the $k\times k$ submatrix of $M$ with $(i,j)$-entry $m_{s_i,s_j}$. \end{definition}
\begin{definition}
We say that a partial matrix $M=(p_{ij})_{1\leq i,j\leq m}$ is a \emph{potential matrix} for a given polytope $P$ if
$\bullet$ there are no entries with the value $\bigstar$;
$\bullet$ the entries $\infty$ occur exactly in the positions of $M$ that correspond to disjoint pairs of facets;
$\bullet$ for every sequence $s$ of indices of facets meeting at a vertex $v$ of $P$, the matrix obtained from the submatrix $M^s$ by replacing each entry $n$ with $-\cos\frac{\pi}{n}$ is elliptic, i.e., positive definite.
\end{definition}
For brevity, we use a \emph{potential vector} $$C=(p_{12},p_{13},\cdots,p_{1m},p_{23},p_{24},\cdots,p_{2m},\cdots,p_{ij},\cdots,p_{m-1,m}),~ p_{ij}\ne \infty,$$ to denote the potential matrix, where $1\leq i<j\leq m$ and the non-infinity entries are placed lexicographically by their subscripts. The potential matrix and the potential vector $C$ can easily be constructed from each other. In general, an arbitrary Coxeter matrix corresponds to a Coxeter vector in the same manner. We mainly use the language of \emph{vectors} to explain the methodology and report the enumeration results. It is worth remarking that, for a given Coxeter diagram, the corresponding (potential / Gram) matrix and vector are not unique, in the sense that they are determined by a given labeling of the facets and may vary when the labeling changes. In Section \ref{chapter:algorithm}, we apply a permutation group to the nodes of the diagram and remove the duplicates to obtain all of the distinct desired vectors.
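The passage between a potential matrix and its potential vector is a lexicographic flattening of the strict upper triangle; a minimal Python sketch (with hypothetical helper names, representing $\infty$ by a float and using $0$-based indices) is:

```python
from itertools import combinations

INF = float("inf")

def matrix_to_vector(M):
    """Flatten the strict upper triangle lexicographically, dropping the
    infinity entries, as in the definition of a potential vector."""
    m = len(M)
    return [M[i][j] for i, j in combinations(range(m), 2) if M[i][j] != INF]

def vector_to_matrix(vec, m, disjoint_pairs):
    """Rebuild the potential matrix from a potential vector, given its size m
    and the (0-based) index pairs carrying the infinity entries."""
    M = [[1] * m for _ in range(m)]
    entries = iter(vec)
    for i, j in combinations(range(m), 2):
        M[i][j] = M[j][i] = INF if (i, j) in disjoint_pairs else next(entries)
    return M

M = [[1, 3, INF], [3, 1, 4], [INF, 4, 1]]
v = matrix_to_vector(M)  # [3, 4]
assert vector_to_matrix(v, 3, {(0, 2)}) == M
```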
For each rank $r\geq 2$, there are infinitely many finite Coxeter groups, because of the infinite 1-parameter family of all dihedral groups, whose graphs consist of two nodes joined by an edge of weight $k\geq 2$. However, a simple but useful truncation can be utilized:
\begin{proposition}
There are finitely many finite Coxeter groups of rank $r$ with Coxeter matrix entries at most seven. \end{proposition}
It thus suffices to enumerate potential matrices with entries at most seven; the other candidates can be obtained by substituting integers greater than seven for the value seven. In other words, besides the length unknowns, we now have more variables, which are restricted to be integers greater than or equal to seven. In the following, we always use the terms ``Coxeter matrix" or ``potential matrix" to mean one with integer entries less than or equal to seven unless otherwise mentioned.
In \cite{JT:2018}, the problem of finding certain hyperbolic Coxeter polytopes is solved in two phases. In the first step, the potential matrices for a particular hyperbolic Coxeter polytope are found; the ``Euclidean-square obstruction" is used to reduce their number. Secondly, the relevant algebraic conditions are solved for the admissible distances between non-adjacent facets. In our setting, additional universal necessary conditions, beyond the vertex spherical restriction and the Euclidean-square obstruction, are adopted and programmed to reduce the number of the potential matrices.
\section{Combinatorial types of simple 4-polytopes with 8 facets}\label{section:4d8f} In 1909, Br\"{u}ckner reported the enumeration of all the different combinatorial types of simple $4$-polytopes with $8$ facets, using Schlegel diagrams to represent them. However, not every one of Br\"{u}ckner's diagrams is actually a Schlegel diagram. Gr\"{u}nbaum and Sreedharan then used the ``beyond-beneath" technique (see, e.g., \cite{G:1967}, Section 5.2) and the Gale diagram developed by M. A. Perles to redo the enumeration, and they corrected some of Br\"{u}ckner's results. Here is the main theorem:
\begin{theorem}[Br\"{u}ckner,Gr\"{u}nbaum-Sreedharan] There are $37$ different combinatorial types of simple $4$-polytopes with $8$ facets. \end{theorem}
We correct some minor errors in their list in Figure \ref{figure:errs}, where the polytope $P^8_i$ of \cite{GS:1967} is now denoted by $P_i$ instead.
\begin{figure}
\caption{Corrections to Table 4 in \cite{GS:1967}.}
\label{figure:errs}
\end{figure}
Each line in the third column of Table $4$ in \cite{GS:1967} corresponds to one simple polytope with $8$ facets. The data on the first line is as follows:
{\color{blue}
\noindent [1,2,4,5] [1,2,3,4] [1,3,4,5] [1,3,5,6] [2,3,5,6] [1,2,3,7] [1,2,6,7] [1,2,5,8] [1,2,6,8] [2,3,4,5] [1,3,6,7] [2,3,6,7] [1,5,6,8] [2,5,6,8]
}
\noindent where the numbers $1$, $2$, $\cdots$, $8$ denote the eight facets and each square bracket corresponds to one vertex that is incident to the enclosed four facets. For example, the above polytope $P_{1}$ has $14$ vertices.
From the original information, we can extract the following \textbf{\emph{data}} for each polytope:
\begin{enumerate}
\item The permutation subgroup $g_k$ of $S_8$ that is isomorphic to the symmetry group of $P_k$;
\item The set $d_k$ of pairs of disjoint facets;
\item The set $l_4$ of sets of four facets whose intersection is of the combinatorial type of a tetrahedron;
\item The set $l_4\_basis$ of sets of four facets that bound a 3-simplex facet. The label of the bounded 3-simplex facet is recorded as well. Note that $l_4\_basis$ is a subset of $l_4$, and it can be non-empty only for a $4$-dimensional polytope;
\item The set $i_2$ of sets of facets whose intersection is of the combinatorial type of a $2$-cube;
\item The set $s_3~/~s_4$ of sets of three/four facets whose intersection is not an edge/a vertex of $P_k$, and which contain no disjoint pairs;
\item The set $e_3~/~e_4$ of sets of three/four facets which contain no disjoint pairs;
\item The set $se_5~/~se_6$ of sets of five/six facets which contain no disjoint pairs.
\end{enumerate}
For example, for the polytope $P_{1}$, the above sets are as shown in Table \ref{table:p1data}.
\begin{table}[h]
{\footnotesize
\begin{tabular}{|c|c|l|}
\Xcline{1-3}{1.2pt}
\multicolumn{3}{|c|}{\textbf{$P_{1}$ }}\\
\hline
\multirow{2}{*}{Vert}& \multirow{2}{*}{14}& $\{\{1, 2, 4, 5\}, \{1, 2, 3, 4\}, \{1, 3, 4, 5\}, \{1, 3, 5, 6\}, \{2, 3, 5, 6\}, \{1, 2, 3, 7\}, \{1, 2, 6, 7\}, \{1, 2, 5, 8\}, $\\
&&$ \{1, 2, 6, 8\}, \{2, 3, 4, 5\}, \{1, 3, 6, 7\}, \{2, 3, 6, 7\}, \{1, 5, 6, 8\}, \{2, 5, 6, 8\}\} $\\
\hline
$d_{1}$ & 6&$\{ \{3, 8\}, \{4, 6\}, \{4, 7\}, \{4, 8\}, \{5, 7\}, \{7, 8\} \}$ \\
\hline
$l_4$ & 3 & $\{\{1, 2, 3, 5\}, \{1, 2, 3, 6\}, \{1, 2, 5, 6\}\}$ \\
\hline
$l_4\_basis$& 3 & $\{\{4, \{1, 2, 3, 5\}\}, \{7, \{1, 2, 3, 6\}\}, \{8, \{1, 2, 5, 6\}\}\}$ \\
\hline
$s_3$ &0& $\emptyset$ \\
\hline
$s_4$& 3 & $\{\{1, 2, 3, 5\}, \{1, 2, 3, 6\}, \{1, 2, 5, 6\}\}$\\
\hline
\multirow{3}{*}{$e_3$}& \multirow{5}{*}{$28$}& $\{\{1, 2, 3\}, \{1, 2, 4\}, \{1, 2, 5\}, \{1, 2, 6\}, \{1, 2, 7\}, \{1, 2, 8\}, \{1, 3, 4\}, \{1, 3, 5\}, \{1, 3, 6\}, \{1, 3, 7\},$\\
&&$\{1, 4, 5\}, \{1, 5, 6\}, \{1, 5, 8\}, \{1, 6, 7\}, \{1, 6, 8\}, \{2, 3, 4\}, \{2, 3, 5\}, \{2, 3, 6\}, \{2, 3, 7\}, \{2, 4, 5\},$\\
&&$ \{2, 5, 6\}, \{2, 5, 8\}, \{2, 6, 7\}, \{2, 6, 8\}, \{3, 4, 5\}, \{3, 5, 6\}, \{3, 6, 7\}, \{5, 6, 8\}\}$\\
\hline
\multirow{2}{*}{$e_4$}& \multirow{2}{*}{$17$}&
$\{\{1, 2, 3, 4\}, \{1, 2, 3, 5\}, \{1, 2, 3, 6\}, \{1, 2, 3, 7\}, \{1, 2, 4, 5\}, \{1, 2, 5, 6\}, \{8, 1, 2, 5\}, \{1, 2, 6, 7\}, \{8, 1, 2, 6\}, $\\
&&$ \{1, 3, 4, 5\}, \{1, 3, 5, 6\}, \{1, 3, 6, 7\}, \{8, 1, 5, 6\}, \{2, 3, 4, 5\}, \{2, 3, 5, 6\}, \{2, 3, 6, 7\}, \{8, 2, 5, 6\}\} $\\
\hline
$i_2$ &0& $\emptyset$\\
\hline
$se_5$ &$4$& $\{\{1, 2, 3, 4, 5\}, \{1, 2, 3, 5, 6\}, \{1, 2, 3, 6, 7\}, \{1, 2, 5, 6, 8\}\}$\\
\hline
$se_6$ &0& $\emptyset$\\
\hline
\multirow{12}{*}{$g_{1}$}& \multirow{12}{*}{12}& \multicolumn{1}{c|}{$(1 2 3 4 5 6 7 8)$}\\
&& \multicolumn{1}{c|}{$( 1 2 3 7 6 5 4 8)$}\\
&& \multicolumn{1}{c|}{$(1 2 5 4 3 6 8 7)$}\\
&& \multicolumn{1}{c|}{$(1 2 5 8 6 3 4 7)$}\\
&& \multicolumn{1}{c|}{$(1 2 6 7 3 5 8 4)$}\\
&& \multicolumn{1}{c|}{$(1 2 6 8 5 3 7 4)$}\\
&& \multicolumn{1}{c|}{$(2 1 3 4 5 6 7 8)$}\\
&& \multicolumn{1}{c|}{$(2 1 3 7 6 5 4 8)$}\\
&& \multicolumn{1}{c|}{$(2 1 5 4 3 6 8 7)$}\\
&& \multicolumn{1}{c|}{$(2 1 5 8 6 3 4 7)$}\\
&& \multicolumn{1}{c|}{$(2 1 6 7 3 5 8 4)$}\\
&& \multicolumn{1}{c|}{$(2 1 6 8 5 3 7 4)$}\\
\Xcline{1-3}{1.2pt}
\end{tabular}
}
\caption{Combinatorics of $P_{1}$.}
\label{table:p1data} \end{table}
It is worth mentioning that the set $l_4\_basis$, if not empty, can help to reduce the computation, since the list of simplicial $4$-prisms is available. For example, suppose the facets $F_1,F_2,F_3,F_5$ (labeled $\{1,2,3,5\}$) bound a 3-simplex facet $F_4$. Then we can assume that $F_4$ is orthogonal to $F_1$, $F_2$, $F_3$, and $F_5$ (i.e., $m_{14}=m_{24}=m_{34}=m_{54}=2$). The vectors obtained this way can be treated as \emph{bases}, named basis vectors, and all of the other potential vectors that may lead to a Gram vector can be realized by gluing the simplicial $4$-prisms, as shown in Figure \ref{figure:prism4}, at their orthogonal ends.
\begin{figure}
\caption{Compact prisms in $\mathbb{H}^4$.}
\label{figure:prism4}
\end{figure}
Moreover, among all of the $37$ polytopes, we only need to study those with at least three pairs of hyperparallel facets, due to the following theorems:
\begin{theorem}[\cite{FT:08}, part of Theorem A]
If $d\leq 4$ and the $d$-polytope $P$ has no pair of disjoint facets, then $P$ is either a simplex
or one of the seven Esselmann polytopes. \end{theorem}
\begin{theorem}[\cite{FT:09}, Main Theorem A]
A compact hyperbolic Coxeter $d$-polytope with exactly one pair of non-intersecting facets has at most $d + 3$ facets. \end{theorem}
\begin{theorem}[\cite{FT:14}, Theorem 7.1]
Compact hyperbolic Coxeter $4$-polytopes with two pairs of disjoint facets have at most $7$ facets. \end{theorem}
There are $24$ polytopes with at least three pairs of hyperparallel facets. We group them by the number $d_k$ of disjoint pairs as illustrated in Table \ref{table: group}.
\begin{table}[h]
{\footnotesize
\begin{tabular}{c|c}
\Xcline{1-2}{1.2pt}
\textbf{$d_k$} & \textbf{labels of polytopes}\\
\hline
$6$ & 1 2 3 \\
\hline
$5$ &4 5 6 7 13 \\
\hline
$4$ & 8 9 10 14 15 16 17 34
\\
\hline
$3$ &11 12 18 19 20 21 22 26
\\
\Xcline{1-2}{1.2pt}
\end{tabular}
}
\hspace*{0.5cm}
\caption{Four groups with respect to different numbers of disjoint pairs.}
\label{table: group} \end{table}
\section{Block-pasting algorithms for enumerating all the candidate matrices over a given combinatorial type}\label{chapter:algorithm}
We now use the \emph{block-pasting algorithm} to determine all of the potential matrices for the $24$ compact combinatorial types reported in Section \ref{section:4d8f}. Recall that the entries have only finitely many options, i.e., $k_{ij}\in\{1,2,3,\cdots, 7\}\cup \{\infty\}$. Compared to the backtracking search algorithm introduced in \cite{JT:2018}, the ``block-pasting" algorithm is more efficient and universal. Generally speaking, the backtracking search algorithm uses the method of ``a series circuit", where the potential matrices are produced one by one. In contrast, the block-pasting algorithm adopts the idea of ``a parallel circuit", where different parts of a potential matrix are generated simultaneously and then pasted together.
For each vertex $v_i$ of a $4$-dimensional hyperbolic Coxeter polytope $P_k$, we define the \emph{$i$-chunk}, denoted by $k_i$, to be the ordered set of the four facets intersecting at the vertex $v_i$, listed with increasing subscripts. For example, the polytope $P_{1}$ discussed above has $14$ chunks, as it has $14$ vertices. We may also use $k_i$ to denote the ordered set of subscripts, i.e., $k_i$ is then regarded as an ordered set of four integers.
Since the compact hyperbolic $4$-dimensional polytopes are simple, each chunk possesses $\tbinom{4}{2}=6$ dihedral angles, namely the angles between every two adjacent facets. For every chunk $k_i$, we define an \emph{$i$-label} set $e_i$ to be the ordered set $\{10a+b\,|\,\{a,b\}\in E_i\}$, where $E_i$, named the \emph{$i$-index} set, is the ordered set of pairs of facet labels. These are formed by choosing every two members from the chunk $k_i$, with the pairs ordered lexicographically. For example, suppose the four facets intersecting at the first vertex are $F_1$, $F_2$, $F_4$, and $F_5$. Then, we have $$k_1=\{F_1,F_2,F_4,F_5\}~(\text{or}~\{1,2,4,5\}),$$ $$E_1= \{\{1,2\},\{1,4\},\{1,5\},\{2,4\},\{2,5\},\{4,5\}\},$$ $$e_1= \{12,14,15,24,25,45\}.$$
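The index and label sets can be produced mechanically; a small Python sketch (the helper name is ours) is the following. Note that encoding a pair $\{a,b\}$ as the number $10a+b$ is unambiguous here precisely because there are at most nine facets:

```python
from itertools import combinations

def label_set(chunk):
    """Index set E_i and label set e_i of a chunk.  Works for at most
    nine facets, since a pair {a, b} is encoded as the number 10a+b."""
    E = [tuple(sorted(p)) for p in combinations(sorted(chunk), 2)]
    e = [10 * a + b for a, b in E]
    return E, e

E1, e1 = label_set([1, 2, 4, 5])
print(e1)  # [12, 14, 15, 24, 25, 45]
```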
Next, we list all of the Coxeter vectors of the elliptic Coxeter diagrams of rank $4$. Note that we have made the convention of considering only the diagrams with integer entries less than or equal to seven. The qualified Coxeter diagrams are shown in Figure \ref{figure:elliptic4}:
\begin{figure}
\caption{Spherical Coxeter diagrams of rank $4$ with labels less than or equal to seven.}
\label{figure:elliptic4}
\end{figure}
We apply the permutation group $S_4$ on four letters to the labels of the nodes of the Coxeter diagrams in Figure \ref{figure:elliptic4}. This produces all of the possible vectors obtained by varying the order of the four facets. For example, there are $4$ vectors for the single diagram $D_4$ as shown in Figure \ref{figure:d4}. There are $242$ distinct such vectors of rank-$4$ elliptic Coxeter diagrams in total. The set of all of these vectors is called the \emph{pre-block}; it is denoted by $\mathcal{S}$. The set $\mathcal{S}$ can be regarded as a $242\times 6$ matrix in the obvious way. In the following, we do not distinguish these two viewpoints and may refer to $\mathcal{S}$ as either a set or a matrix.
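The relabelling step can be sketched in Python as follows (an illustrative fragment, not the production code). For the path diagram $A_4$, the $24$ permutations produce $24/2=12$ distinct vectors, since that diagram has a single non-trivial symmetry:

```python
from itertools import permutations, combinations

def orbit_vectors(M):
    """All distinct Coxeter vectors obtained by relabelling the nodes,
    i.e. by letting every permutation act on the Coxeter matrix."""
    m = len(M)
    seen = set()
    for p in permutations(range(m)):
        seen.add(tuple(M[p[i]][p[j]] for i, j in combinations(range(m), 2)))
    return seen

# Coxeter matrix of the path diagram A_4 (edges 3-3-3):
A4 = [[1, 3, 2, 2],
      [3, 1, 3, 2],
      [2, 3, 1, 3],
      [2, 2, 3, 1]]
print(len(orbit_vectors(A4)))  # 24 permutations / 2 diagram symmetries = 12
```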
\begin{figure}
\caption{Preparing the pre-block.}
\label{figure:d4}
\end{figure}
We then generate a dataframe $B_i$, named the \emph{$i$-block}, of size $242\times 6$, corresponding to each chunk $k_i$ of a given polytope $P_k$, where $1\leq i \leq |V_k|$ and $|V_k|$ is the number of vertices of $P_k$. Firstly, we initialize $B_i$ with $\mathcal{S}$ and take the ordered set $e_i$ defined above as the column names of $B_i$. For example, for $e_1=\{12,14,15,24,25,45\}$, the columns of $B_1$ are referred to as the $(12)$-, $(14)$-, $(15)$-, $(24)$-, $(25)$-, $(45)$-columns.
Let $L$ be the following vector of length $28$:
$$L=\{12,13,...,18,~23,24,...,28,~34,35,...,38,~45,46,...,48,~56,57,58,~67,68,~78\}.$$
Then all of the numbers in the label set of $d_k$ (the set of disjoint pairs of facets of $P_k$) are excluded from $L$ to obtain a new vector. For brevity, the new vector is also denoted by $L$. For example, the numbers excluded for the polytope $P_{1}$ are $38$, $46$, $47$, $48$, $57$, and $78$, as illustrated in Table \ref{table:p1data}. The length of $L$ is denoted by $l$.
Next, we extend every $242\times 6$ dataframe to a $242\times l$ one, with column names $L$, by putting each $(ij)$-column into the position of the correspondingly labeled column and filling the remaining positions with the value zero. We continue to use the same notation $B_i$ for the extended dataframe. In the rest of the paper, we always mean the extended dataframe when using the notation $B_i$.
After preparing all of the blocks $B_i$ for a given polytope $P_k$, we proceed to paste them together. More precisely, when pasting $B_1$ and $B_2$, a row of $B_1$ is matched up with a row of $B_2$ if every two entries specified by the same index $i$, where $i\in e_1\cap e_2$, have the same values. The index set $e_1\cap e_2$ is called a \emph{linking key} for the pasting. The resulting new row is the sum of these two rows in the non-key positions; the values are retained in the key positions. The dataframe of the new data is denoted by $B_1\cup^*B_2$.
We use the following example to explain this process. Suppose
$B_1=\{x_1,x_2\}=\{(1,2,4,4,2,6,0,0,\cdots ,0), (1,2,4,5,2,6,0,0,\cdots,0)\}$,
$B_2=\{y_1,y_2,y_3\}=\{(1,2,4,4,0,0,1,7,0,0,\cdots,0),(1,2,4,4,0,0,6,5,0,0,\cdots,0)$,
\hspace{3.5cm}$(1,2,3,4,0,0,1,7,0,0,\cdots,0)\} .$
In this example, $x_1$ has the same values as $y_1$ and $y_2$ on the $(12)$-, $(13)$-, $(14)$- and $(15)$-positions. In other words, the linking key here is $\{12,13,14,15\}$. Thus, $y_1$ and $y_2$ can be pasted to $x_1$, forming the Coxeter vectors $$(1,2,4,4,2,6,1,7,0,...,0)~\text{and}~(1,2,4,4,2,6,6,5,0,...,0), \text{ respectively}.$$
\noindent In contrast, $x_2$ cannot be pasted to any element of $B_2$ as there are no vectors with entry $5$ on the $(15)$-position. Therefore, $$B_1\cup^* B_2=\{(1,2,4,4,2,6,1,7,0,0,\cdots,0),(1,2,4,4,2,6,6,5,0,0,\cdots,0)\}.$$
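The pasting step of this example can be reproduced by a short Python sketch (the function name is ours; the key positions are given as $0$-based column indices rather than the labels $12, 13, \dots$):

```python
def paste(B1, B2, key):
    """Paste two blocks: rows agreeing on all key positions are merged;
    key positions keep their value, all other positions are summed."""
    out = []
    for x in B1:
        for y in B2:
            if all(x[i] == y[i] for i in key):
                out.append(tuple(x[i] if i in key else x[i] + y[i]
                                 for i in range(len(x))))
    return out

B1 = [(1, 2, 4, 4, 2, 6, 0, 0, 0, 0), (1, 2, 4, 5, 2, 6, 0, 0, 0, 0)]
B2 = [(1, 2, 4, 4, 0, 0, 1, 7, 0, 0), (1, 2, 4, 4, 0, 0, 6, 5, 0, 0),
     (1, 2, 3, 4, 0, 0, 1, 7, 0, 0)]
# Linking key: the (12)-, (13)-, (14)-, (15)-positions.
print(paste(B1, B2, key={0, 1, 2, 3}))  # the two pasted vectors above
```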
We then move on to paste the sets $B_1 \cup^*B_2$ and $B_3$. We follow the same procedure with an updated index set; namely, the linking key is now $(e_1\cup e_2)\cap e_3$. We repeat this procedure until we finish pasting the final set $B_{\vert V_k\vert}$. The set of linking keys used in this procedure is $$\{e_1\cap e_2,~ (e_1\cup e_2)\cap e_3,~\cdots,~(e_1\cup e_2\cup\cdots\cup e_{i-1})\cap e_i,~\cdots ,~(e_1\cup e_2\cup \cdots \cup e_{\vert V_k\vert -1})\cap e_{\vert V_k\vert}\}.$$
After pasting the final block $B_{\vert V_k\vert}$, we obtain all of the potential vectors of the given polytope. This approach has been Python-programmed on a PARATERA server cluster.
When applying this approach, we successfully enumerated all of the truncated candidates for $P_1$. However, the computation usually runs into memory errors in the other cases. For example, for the polytope $P_{14}$, an ordinary computer gets stuck when pasting $B_{11}$. We therefore ran it on a server, where it finally worked out; see Figure \ref{figure:p1p14} for more details. The number of resulting vectors peaks at $180,063,922$, which far exceeds the storage and computational capacity of an ordinary laptop. Moreover, even using the server, we could not solve further cases. A refined algorithm is needed to continue this research.
\begin{figure}
\caption{Use ``block-pasting" algorithm over polytope $P_1$ and $P_{14}$.}
\label{figure:p1p14}
\end{figure}
The philosophy of the refined algorithm is to introduce further necessary conditions, beyond the vertex spherical restriction, in order to reduce the number of vectors produced during the block-pasting. The refined algorithm relies on symmetries of the polytopes and on several additional remarks.
Firstly, we collect data sets $\mathcal{L}_4$, $\mathcal{L}_4\_basis$, $\mathcal{S}_3$, $\mathcal{S}_4$, $\mathcal{S}_5$, $\mathcal{S}_6$, $\mathcal{E}_3$, $\mathcal{E}_4$,
$\mathcal{E}_5$, $\mathcal{E}_6$, $\mathcal{I}_2$, as claimed in Table \ref{table:library}, by the following two steps:
\begin{enumerate}
\item Prepare Coxeter diagrams of rank $r$, as assigned in Table \ref{table:library}, and write down the Coxeter vectors under an arbitrary system of node labeling.
\item Apply the permutation group $S_r$ to the labels of the nodes and produce the desired data set consisting of all of the distinct Coxeter vectors under all of the different labelling systems.
\end{enumerate}
Note that the set $\mathcal{S}_4$ is exactly the pre-block $\mathcal{S}$ we construct before. Readers can refer to the process of producing $\mathcal{S}$ for the details of building these data sets.
\begin{table}[H]
{\footnotesize
\begin{tabular}{c|c|c|c}
\Xcline{1-4}{1.2pt}
\multirow{2}{*}{\textbf{Types of Coxeter diagrams}}
& \textbf{\# Coxeter} & \textbf{\# distinct Coxeter Vectors }
& \multirow{2}{*}{\textbf{data sets}}\\
&\textbf{diagrams}& \textbf{after permutation on nodes} & \\
\hline
Coxeter diagrams of
compact hyperbolic 3-simplex& 9& 108 &$\mathcal{L}_4$\\
\hline
rank $3$ elliptic Coxeter diagrams & 9& 31&$\mathcal{S}_3$ \\
\hline
rank $4$ elliptic Coxeter diagrams & 29& 242 & $\mathcal{S}_4$ \\
\hline
rank $5$ elliptic Coxeter diagrams &47 & 1946 &$\mathcal{S}_5$\\
\hline
rank $6$ elliptic Coxeter diagrams & 117& 20206 &$\mathcal{S}_6$\\
\hline
rank $3$ connected parabolic Coxeter diagrams &3 & 10 &$\mathcal{E}_3$ \\
\hline
rank $4$ connected parabolic Coxeter diagrams & 3 & 27 &$\mathcal{E}_4$\\
\hline
rank $5$ connected parabolic Coxeter diagrams&5&257&
$\mathcal{E}_5$\\
\hline
rank $6$ connected parabolic Coxeter diagrams&4&870& $\mathcal{E}_6$\\
\hline
Coxeter diagrams of Euclidean $2$-cube &4&3&$\mathcal{I}_2$\\
\Xcline{1-4}{1.2pt}
\end{tabular}
}
\caption{Data sets used to reduce the computational load.}
\label{table:library} \end{table}
Next, we modify the block-pasting algorithm by imposing additional metric restrictions. More precisely, Remarks \ref{remark:1}--\ref{remark:4}, which are practically reformulated from Theorem \ref{Vinberg:thm3.1}, must be satisfied.
\begin{remark}(``$l4$-condition") \label{remark:1}
The Coxeter vector of the six dihedral angles formed by the four facets with the labels indicated by the data in $l_4$ is {\color{red} IN} $\mathcal{L}_4$. \end{remark}
\begin{remark}(``$s3$/$s4$/$s5$/$s6$-condition") \label{remark:2}
The Coxeter vector of the three/six/ten/fifteen dihedral angles formed by the three/four/five/six facets with the labels indicated by the data in $s_3$/$s_4$/$se_5$/$se_6$ is {\color{red} NOT IN} $\mathcal{S}_3$/$\mathcal{S}_4$/$\mathcal{S}_5$/$\mathcal{S}_6$. \end{remark}
\begin{remark}(``$e3$/$e4$/$e5$/$e6$-condition") \label{remark:3}
The Coxeter vector of the three/six/ten/fifteen dihedral angles formed by the three/four/five/six facets with the labels indicated by the data in $e_3$/$e_4$/$se_5$/$se_6$ is {\color{red} NOT IN} $\mathcal{E}_3$/$\mathcal{E}_4$/$\mathcal{E}_5$/$\mathcal{E}_6$. \end{remark}
\begin{remark}(``$i2$-condition") \label{remark:4}
The Coxeter vector formed by the four facets with the labels indicated by the data in $i_2$ is {\color{red} NOT IN} $\mathcal{I}_2$. \end{remark}
The {\color{red}``IN"} and {\color{red}``NOT IN"} tests are called the ``saving" and the ``killing" conditions, respectively. The ``saving" conditions are much more efficient than the ``killing" ones, because specifying which vectors are qualified is much more restrictive than specifying which vectors are not. Moreover, we remark that the $l3$-condition and the sets $l_3$ and $\mathcal{L}_3$, which could be defined analogously to the $l4$ setting, are not introduced. This is because the effect of using both the $s3$- and $e3$-conditions is equivalent to adopting the $l3$-condition.
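A ``saving" or ``killing" test is just a membership test of a sub-vector in a pre-computed library; a minimal Python sketch (with a toy stand-in for the library $\mathcal{S}_3$, not the actual data set) is:

```python
def passes(vector, positions, library, saving):
    """Apply one "saving" (IN) or "killing" (NOT IN) condition: extract the
    sub-vector at the given positions and test membership in the library."""
    sub = tuple(vector[i] for i in positions)
    return (sub in library) if saving else (sub not in library)

S3 = {(3, 3, 3)}                 # toy stand-in for a library of Coxeter vectors
v = (3, 3, 3, 2, 5, 7)
print(passes(v, (0, 1, 2), S3, saving=False))  # False: the vector is killed
```

In the actual program, such tests are vectorized over whole dataframes rather than applied row by row.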
We now program these conditions and insert them into the appropriate layers during the pasting to reduce the computational load. Here the ``appropriate layer" means the layer where the relevant dihedral angles become non-zero for the first time. For example, for $\{1,2,3\}\in e_3$, we find that after the $j$-th block pasting, the data in the ($12$-, $13$-, $23$-) columns of the dataframe $B_1\cup^*B_2\cdots \cup^*B_j$ become non-zero. Therefore, the $e3$-condition for $\{1,2,3\}$ is inserted immediately after the $j$-th block pasting. The symmetries of the polytopes are factored out when the pastes are finished. The matrices (or vectors) surviving all these conditions (metric restrictions and symmetry equivalence) are called \emph{``SEILper"-potential matrices (or vectors)} of the given combinatorial types. All of the numbers of the results are reported in Table \ref{table:gall}. The numbers in red indicate that the corresponding polytopes have a non-empty set $l_4\_basis$; therefore, the results obtained are basis SEILper potential vectors. This calculation is called the \emph{basis approach}. We confirm these cases without using the $l4$-condition, called the \emph{direct approach}, in the validation part presented in Section \ref{section:vadilation}.
\begin{table}[h]
{\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\Xcline{1-11}{1.2pt}
~~ ~~\#~$d_k$~~~~ & ~~~~\textbf{label}~~~~ &~~~~ \# \textbf{SEILper} ~~~~& &~~~~$d_k$~~~~&~~~~\textbf{label}~~~~&~~~~\# \textbf{SEILper} ~~~~&&~~~~$d_k$~~~~&~~~~\textbf{label}~~~~& ~~~~\# \textbf{SEILper} ~~~~\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
6& {\color{red}1}& {\color{red} 8} && 5& 13& 88,738 && 3& {\color{red}11}& {\color{red}0}\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
6& {\color{red}2} &{\color{red}12} && 4& {\color{red}9}& {\color{red}142} && 3& {\color{red}12}& {\color{red}1,071}\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
6& {\color{red}3}& {\color{red}18} && 4& {\color{red}10}& {\color{red}2} && 3& 18& 92,886\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
5& {\color{red}4}& {\color{red}231} && 4& 14& 0 && 3& 19& 532\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
5& {\color{red}5}& {\color{red} 398} && 4& 15& 4,723&& 3& 20& 138\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
5& {\color{red}6}& {\color{red}10}&& 4& 16& 73,006 && 3& 21& 193,77\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
5& {\color{red}7}& {\color{red}4,247}&& 4& 17& 325,957 && 3& 22& 150,444\\
\Xcline{1-3}{0.8pt}\Xcline{5-7}{0.8pt}\Xcline{9-11}{0.8pt}
5& {\color{red}8}& {\color{red}2,176}&& 4& 34& 7,608&& 3& 26& 49,599\\
\Xcline{1-11}{1.2pt}
\end{tabular}
}
\hspace*{0.5cm}
\caption{Results of ``SEILper"-potential matrices. Recall that $d_k$ in the table means the number of disjoint pairs, as defined before.}
\label{table:gall} \end{table}
\section{Signature constraints of hyperbolic Coxeter \texorpdfstring{$n$}{n}-polytopes}\label{section:signature}
After preparing all of the SEILper matrices, we proceed to calculate the signatures of the potential Coxeter matrices to determine if they lead to the Gram matrix $G$ of an actual hyperbolic Coxeter polytope.
Firstly, we modify every SEILper matrix $M$ as follows:
\begin{enumerate}
\item Replace $\infty$s by length unknowns $x_i$;
\item Replace $2$, $3$, $4$, $5$, and $6$ by $0$, $-\frac{1}{2}$, $-\frac{l}{2}$, $-\frac{m}{2}$, and $-\frac{n}{2}$, respectively, where $$l^2-2=0,~l>0,~m^2-m-1=0,~m>0,~n^2-3=0,~n>0;$$
\item Replace each $7$ by an angle unknown written as $-\frac{y_i}{2}$. \end{enumerate}
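As a sanity check (a small numeric sketch, assuming the standard values $l=2\cos\frac{\pi}{4}=\sqrt{2}$, $m=2\cos\frac{\pi}{5}$ the golden ratio, and $n=2\cos\frac{\pi}{6}=\sqrt{3}$), the substituted entries agree with $-\cos\frac{\pi}{k}$ and the three irrational values satisfy the stated algebraic relations:

```python
import math

# The substituted Gram entries are -cos(pi/k) for k = 2,...,6, written with
# l = 2*cos(pi/4), m = 2*cos(pi/5), n = 2*cos(pi/6).
l = 2 * math.cos(math.pi / 4)
m = 2 * math.cos(math.pi / 5)
n = 2 * math.cos(math.pi / 6)

for k, value in [(2, 0.0), (3, -0.5), (4, -l / 2), (5, -m / 2), (6, -n / 2)]:
    assert math.isclose(-math.cos(math.pi / k), value, abs_tol=1e-12)

# Defining algebraic relations of the three irrational values:
assert math.isclose(l * l, 2, abs_tol=1e-9)           # l = sqrt(2)
assert math.isclose(m * m - m - 1, 0, abs_tol=1e-9)   # m = golden ratio
assert math.isclose(n * n, 3, abs_tol=1e-9)           # n = sqrt(3)
```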
By Theorem \ref{thm:signature}, the resulting Gram matrix must have signature $(4, 1)$. This implies that the determinant of every $6\times 6$ minor of each modified $8\times 8$ SEILper matrix is zero. Therefore, we have the following system of $28$ equations and inequalities on $x_i$, $l$, $m$, $n$, and $y_i$ to further restrict the candidates and lead to the Gram matrices of the desired polytopes: $$(6.1) ~~~ \begin{cases}
2\det (M_i)=0~\text{for~each~of~the~}\tbinom{8}{2}=28 ~6\times 6~\text{minors}~M_i~\text{of}~ M, \\
1.8<y_i<2 ~\text{for~ all}~y_i,\\
x_i<-1~\text{for~ all}~x_i,\\
l^2-2=0,~l>0,~m^2-m-1=0,~m>0,~n^2-3=0,~n>0. \end{cases}$$
The above conditions were initially stated by Jacquemet and Tschantz in \cite{JT:2018}. Due to practical constraints in \emph{Mathematica}, we denote $2\cos(\frac{\pi}{4})$, $2\cos(\frac{\pi}{5})$, and $2\cos(\frac{\pi}{6})$ by $l$, $m$, and $n$, writing the corresponding entries as $-\frac{l}{2}$, $-\frac{m}{2}$, $-\frac{n}{2}$, and we set $2\det(M_i)=0$ rather than $\det(M_i)=0$. The delicate reasons for doing so can be found in \cite{JT:2018}. Moreover, we first find the \emph{Gr\"{o}bner bases} of the polynomials involved, i.e., $2\det(M_i),~l^2-2,~m^2-m-1,~ n^2-3$, before solving the system. This helps to quickly pass over the cases that have no solution. However, when dealing with some combinatorial types, like $P_{17}$, $P_{22}$, $P_{13}$, $P_{16}$, etc., the computation cannot be accomplished in a reasonable amount of time. In some cases, a single matrix can require more than two hours to compute, which is costly and impedes the validation process. Hence, we introduce the following $6$-round strategy to make the computation much more feasible and efficient.
\begin{enumerate}
\item ``One equation killing"
\noindent Select $2$--$4$ equations, each corresponding to a $6\times 6$ minor whose two deleted rows and columns contain $d_i$ (or $d_i-1$) \footnote{Only when there are fewer than two minors whose deleted rows and columns contain all $d_i$ length unknowns do we settle for those containing $d_i-1$ of them.} length unknowns $x_i$. We use each equation, together with the inequalities for the unknowns appearing in the minor, as a condition set and solve them sequentially with a time constraint of 1s.
\\
\noindent The result splits into an ``out set" (the solution set is non-empty after the killing), a ``left set" (the computation is aborted by the time constraint), and a ``break set" (the solution set is empty). We pass the SEILper matrices whose results are in the ``out set" or the ``left set" to the second round.\\
\item ``Twenty-eight equations killing"
\noindent We now apply the condition system (6.1), with all $28$ equations involved, to the SEILper matrices that pass the first round, with a time constraint of 10s. The result again splits into ``out / left / break" sets. We save the ``out set" to the ``pre-result set" and pass the ``left set" to the third round.\\
\item ``Seven equations killing"
\noindent Select $2$--$4$ groups of $7$ equations, where each group corresponds to a $7\times 7$ minor: such a minor contains $\tbinom{7}{6}=7$ $6\times 6$ minors, and we choose the $7\times 7$ minors whose deleted row and column contain as many length unknowns $x_i$ as possible. We use each group of equations, together with the inequalities for the unknowns left in the minor, as a condition set and solve with a time constraint of 1s.\\
\noindent Note that what is left from the second round are the cases that cannot be solved in 10s with all $28$ equations present. We therefore reduce the number of constraint equations from $28$ to $7$ to move forward. We pass the SEILper matrices whose results are in the ``out set" or the ``left set" to the fourth round.\\
\item ``Non-Gr\"{o}bner killing"
\noindent Not many candidates are left at this point. We find in practice that the function \emph{GroebnerBasis} in \emph{Mathematica} either finishes quite quickly or consumes an unaffordable amount of time. We therefore drop the Gr\"{o}bner-basis step and solve the system (6.1) directly with a time constraint of 300s. We save the ``out set" to the ``pre-result set" and pass the ``left set" to the next round. \\
\item ``Range Analysis"
\noindent The angle unknown $y_i$ is not an arbitrary real number in the interval $(1.8,2)$: it must be twice the cosine of an angle of the form $\displaystyle\frac{\pi}{n}$, where $n\geq 7$; namely, $ \displaystyle \frac{\pi}{\arccos(y_i/2)}\in \mathbb{Z}_{\geq 7}.$ We now choose a small number of equations involving angle unknowns and (as few as possible) length unknowns, for which the upper bound for $y_i$ is strictly less than $2$, e.g.\ $1.99$. We can then enumerate all the possibilities for the angle unknowns thanks to the integrality restriction. Alternatively, we may solve for the unknowns explicitly. We use both methods to analyze the remaining cases; for all of the polytopes, none of the candidates left from the previous step survived.\\
\item ``Pre-result checking"
\noindent The last step is to check, for every matrix in the pre-result set, that the signature is indeed $(4,1)$ and that $\displaystyle \frac{\pi}{\arccos~ (y_i/2)}$ is an integer.
\end{enumerate}
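The integrality restriction used in the ``Range Analysis" and ``Pre-result checking" rounds can be sketched as follows (a Python illustration, not our \emph{Mathematica} code). The point is that an open interval whose upper end is strictly below $2$ contains only finitely many admissible values $2\cos(\pi/n)$:

```python
import math

def admissible(lo, hi):
    """Orders n >= 7 with lo < 2*cos(pi/n) < hi.

    2*cos(pi/n) increases towards 2 as n grows, so as soon as hi < 2 the
    enumeration terminates; with hi = 2 there would be infinitely many values.
    """
    out, n = [], 7
    while True:
        y = 2 * math.cos(math.pi / n)
        if y >= hi:
            break
        if y > lo:
            out.append(n)
        n += 1
    return out

print(admissible(1.8, 1.99))  # [7, 8, ..., 31]: finitely many candidates to test

# The final integrality test: y is qualified iff pi / arccos(y/2) is an integer >= 7.
y = 2 * math.cos(math.pi / 9)
print(round(math.pi / math.acos(y / 2)))  # 9
```

This is why pushing the upper bound for an angle unknown strictly below $2$ lets us run through all remaining possibilities exhaustively.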
\begin{table}[h]
{\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c}
\Xcline{1-8}{1.2pt}
\# \textbf{SEILper} & \textbf{R6}& \#\textbf{Pre-result} & \textbf{R1} &\textbf{R2} & \textbf{R3} & \textbf{R4} & \textbf{R5}\\
&{\color{red}\#\textbf{Result}}&&$\#$~left+out &$\#$out / $\#$left &$\#$~left+out &$\#$out / $\#$left&\\
&&&(excluded rows) && (excluded rows)&&\\
&&&[\#$x_i$ / \#$x_i$ excluded]&&&&\\
\Xcline{1-8}{0.8pt}
325,957 &{\color{red}8} &{\color{red}47+1=48}& 40934~(7,8)~ [4/3]& {\color{red}47}/ 89 &15 (8)&
{\color{red}1}/ {\color{blue}1}& 0\\
&&&
22963~(4,6)~[4/3]&&13 (6)& &\\
&&&9371~(6,8)~[4/3]&&&&\\
&&&8899~(4,8)~[4/3] &&&&\\
\Xcline{1-8}{1.2pt}
\end{tabular}
}
\hspace*{0.5cm}
\caption{The 6-round procedure for the polytope $P_{17}$.}
\label{table:p17M} \end{table}
This approach has been implemented in \emph{Mathematica} and run on a PARATERA server cluster. We illustrate the $6$-round procedure for the polytope $P_{17}$ in Table \ref{table:p17M}. In the fifth round, the doubled Gram matrix $2M$ of the only unsolved case (marked in blue in Table \ref{table:p17M}) is as follows:
{\tiny
\begin{center}
$ 2\begin{pmatrix}
1& -(1/2)& 0& -(1/2)& -(1/2)& 0& 0& {\color{red}x_1}\\
-(1/2)& 1& -(h_1/2)& 0& 0& 0& 0& 0\\
0& -(h_1/2) &1& 0& 0& 0& -(1/\sqrt{2})& 0\\
-(1/2)& 0& 0& 1& {\color{red}x_2}& -(1/\sqrt{2})& -(1/2)& 0\\
-(1/2)& 0& 0& {\color{red}x_2}& 1& -(1/\sqrt{2})& -(1/2)& 0\\
0& 0& 0& -(1/\sqrt{2})& -(1/\sqrt{2})& 1& {\color{red}x_3}& {\color{red}x_4}\\
0& 0& -(1/\sqrt{2})& -(1/2)& -(1/2) &{\color{red}x_3} &1 &0\\
{\color{red}x_1}& 0& 0& 0& 0& {\color{red}x_4}& 0& 1\\
\end{pmatrix}$.
\end{center} }
\noindent We use $M_{i,j}$ to denote the minor of the Gram matrix $2M$ obtained by deleting the $i$-th and $j$-th rows and columns. The minors $M_{7,8}$ and $M_{1,5}$ contain only the angle unknown $h_1$ and the single length unknown $x_2$. Their determinants give:
\begin{center}
$-2 + 5x_2 - 3 x_2^2 - \sqrt{2} h_1 + \sqrt{2} x_2 h_1 - 2 x_2 h_1^2 + 2 x_2^2 h_1^2 = 0$,\\
$-4 + 10 x_2 - 6 x_2^2 + h_1^2 - 3 x_2 h_1^2 + 2 x_2^2 h_1^2 = 0$, respectively.\\ \end{center}
\noindent The graphs of these equations are shown in Figure \ref{figure:analysis}, where $x_2<-1$ and $1.8<h_1<2$.
We have two ways to conclude that no qualified solution exists. On one hand, the only solution satisfying the constraints above is $h_1\approx 1.81129,~ x_2\approx -1.28078$, so there is no qualified value for $h_1$, i.e., the solution is not of the form $2\cos \frac{\pi}{n}$. On the other hand, we find that the bounds for $h_1$ and $x_2$ under the restriction of
$\left\{\begin{array}{l}
-2 + 5x_2 - 3 x_2^2 - \sqrt{2} h_1 + \sqrt{2} x_2 h_1 - 2 x_2 h_1^2 + 2 x_2^2 h_1^2 = 0\\
x_2<-1\\
1.8<h_1<2\\ \end{array} \right.$
\noindent are $1.8<h_1< 1.97374$ and $-1.3062\leq x_2 <-1$. That means $h_1\in \{2\cos\frac{\pi}{n}:n=7,8,9,\ldots,20\}$. We can plug each of these values of $h_1$ into some other determinants containing $h_1$ and finally find that the solution set is empty.
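The numerical solution quoted above can be reproduced outside \emph{Mathematica}. The following Python sketch (illustrative only) runs a plain two-variable Newton iteration on $\det(M_{7,8})=\det(M_{1,5})=0$, started inside the region $x_2<-1$, $1.8<h_1<2$:

```python
import math

S2 = math.sqrt(2)

# det(M_{7,8}) and det(M_{1,5}) as polynomials in the angle unknown h (= h_1)
# and the length unknown x (= x_2), copied from the two displayed equations.
def f1(h, x):
    return -2 + 5*x - 3*x*x - S2*h + S2*x*h - 2*x*h*h + 2*x*x*h*h

def f2(h, x):
    return -4 + 10*x - 6*x*x + h*h - 3*x*h*h + 2*x*x*h*h

# 2x2 Newton iteration (Cramer's rule for the linear solve), started in the
# feasible region 1.8 < h < 2, x < -1.
h, x = 1.81, -1.28
for _ in range(50):
    F1, F2 = f1(h, x), f2(h, x)
    a = -S2 + S2*x - 4*x*h + 4*x*x*h       # df1/dh
    b = 5 - 6*x + S2*h - 2*h*h + 4*x*h*h   # df1/dx
    c = 2*h - 6*x*h + 4*x*x*h              # df2/dh
    d = 10 - 12*x - 3*h*h + 4*x*h*h        # df2/dx
    det = a*d - b*c
    dh = (d*F1 - b*F2) / det
    dx = (a*F2 - c*F1) / det
    h, x = h - dh, x - dx

print(round(h, 5), round(x, 5))    # 1.81129 -1.28078
print(math.pi / math.acos(h / 2))  # not an integer, so h_1 is not 2*cos(pi/n)
```

The ratio $\pi/\arccos(h_1/2)\approx 7.17$ is not an integer, consistent with the conclusion that this case yields no qualified solution.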
\begin{figure}
\caption{Tackling the remaining case of $P_{17}$ in the fifth round.}
\label{figure:analysis}
\end{figure}
After carrying out all of these procedures in \emph{Mathematica}, we find that only fourteen of the simple $4$-polytopes with $8$ facets admit a compact hyperbolic structure. Besides, we can glue $3$-prisms to seven of them. The results are reported in Table \ref{table:glu}. The polytopes with labels in red are the ``basis" polytopes, and the ones on the last line can be obtained by gluing prism ends to those on the second line from the bottom. The Coxeter diagrams and length information are given at the end of this paper (pages 22--55).
\begin{table}[h]
{\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\Xcline{1-15}{1.2pt}
$d_k$ & \multicolumn{3}{c|}{6} & \multicolumn{5}{c|}{5} & \multicolumn{3}{c|}{4}& \multicolumn{3}{c}{3}\\
\hline
polytope&{\color{red}1}\quad& {\color{red}2}\quad & {\color{red}3}\quad&{\color{red}4}\quad&{\color{red}6}\quad&{\color{red}7}\quad&{\color{red}8}\quad&$13$\quad&$16$\quad&$17$\quad&$34$\quad&$18$\quad&$21$\quad&$26$\quad\\
\hline
\# (selected) SEILper potential &8 &12& 18& 1 &1& 11&4&4&12&48&12&75&1&4\\
\hline
\# gram (of basis vectors) & 8& 12& 18& 1 &1& 5&1&3&4&8&12&4&1&2\\
\hline
\# gram after suitably gluing & \multirow{2}{*}{130} &\multirow{2}{*}{49}& \multirow{2}{*}{115}&\multirow{2}{*}{2}&\multirow{2}{*}{1} &\multirow{2}{*}{15}&\multirow{2}{*}{2}&\multirow{2}{*}{N}&\multirow{2}{*}{N}&\multirow{2}{*}{N}&\multirow{2}{*}{N}&\multirow{2}{*}{N}&\multirow{2}{*}{N}&\multirow{2}{*}{N}\\
$3$-prisms to the orthogonal ends&&&&&&&&&&&&&&\\
\Xcline{1-15}{1.2pt}
\end{tabular}
}
\caption{Results for the compact hyperbolic Coxeter $4$-polytopes with 8 facets. The value on the last line for each polytope labeled $13,16,17,34,18,21,$ or $26$ is ``N", which means the $l_4\_basis$ set of $P_{13}$/$P_{16}$/$P_{17}$/ $P_{34}$/$P_{18}$/$P_{21}$/$P_{26}$ is empty and we are not able to glue them with prism ends.}
\label{table:glu}
\end{table}
\section{Validation and Results}\label{section:vadilation}
1. ``Basis approach" vs. ``direct approach".
We compute SEILper potential matrices without using the $l_4\_basis$-conditions for those polytopes having $3$-simplex facets. The numbers of Gram matrices corresponding to all of the possible compact hyperbolic polytopes, together with the results after the \emph{Mathematica} round, are reported in Table \ref{table:direct}. They agree with the results obtained via the ``basis approach".
\begin{table}[H]
{\footnotesize
\begin{tabular}{c|ccccccccc}
\Xcline{1-10}{1.2pt}
grp &\multicolumn{3}{c|}{6} &\multicolumn{6}{c}{5} \\
\Xcline{1-10}{1.2pt}
polytopes& 1& 2 &\multicolumn{1}{c|}{3}& &4& 6& 7& 8 &\\
\#SEILper& 130& 49& \multicolumn{1}{c|}{115}&& 571& 26& 8,579 &5,258&\\
\#Gram& 130& 49& \multicolumn{1}{c|}{115}&& 2& 1& 15& 2&\\
\Xcline{1-10}{1.2pt}
\end{tabular}
}
\hspace*{0.5cm}
\caption{Results obtained by direct approach. }
\label{table:direct}
\end{table}
2. The compact hyperbolic $4$-cubes form a subfamily of the compact hyperbolic Coxeter $4$-polytopes with 8 facets. We recover exactly the $12$ compact hyperbolic $4$-cubes obtained by Jacquemet and Tschantz in \cite{JT:2018} \footnote{It seems that there is a small typo in Table $5$ of \cite{JT:2018}, where the lengths for $\Sigma_2^2$ and $\Sigma_2^3$ should be swapped. }, as shown in Figure \ref{figure:p34}.
3. Felikson and Tumarkin constructed 8 compact hyperbolic Coxeter polytopes with 8 facets in \cite{FT:14}; they are exactly the eight bases of the polytope $P_1$, shown in red in Figures \ref{figure:p810}--\ref{figure:p817}. \begin{figure}
\caption{Known cases from Felikson and Tumarkin}
\label{figure:knowcase}
\end{figure}
4. A. Burcroff independently obtained and confirmed the same result in \cite{Amanda:2022} after our mutual check. The correspondence between our notation and Burcroff's for the polytopes that admit a hyperbolic structure is presented in Table \ref{comparison}.
\begin{table}[h]
{\footnotesize
\begin{tabular}{c|cccccccccccccc}
\Xcline{1-15}{1.2pt}
MZ& 1 &2& 3& 4& 6& 7&8&13&16&17&18&21&26&34\\
\hline
A. Burcroff& $G_1$& $G_3$ &$G_2$& $G_7$& $G_8$& $G_9$&$G_6$& $G_5$ & $G_{13}$ & $G_{11}$& $G_{12}$& $G_{10}$ & $G_{14}$ &$G_{4}$\\
\Xcline{1-15}{1.2pt}
\end{tabular} }
\hspace*{0.5cm} \caption{Notation correspondence between our list and the list in \cite{Amanda:2022}. } \label{comparison} \end{table}
\newgeometry{left=0.5cm,right=0.5cm,top=1.5cm,bottom=2cm}
1. Coxeter diagrams for $P_1$
\begin{figure}
\caption{$P_1$(1/8)}
\label{figure:p810}
\end{figure}
\restoregeometry \begin{figure}
\caption{$P_1$(2/8)}
\label{figure:p811}
\end{figure}
\begin{figure}
\caption{$P_1$(3/8)}
\label{figure:p812}
\end{figure}
\begin{figure}
\caption{$P_1$(4/8)}
\label{figure:p813}
\end{figure}
\begin{figure}
\caption{$P_1$(5/8)}
\label{figure:p814}
\end{figure}
\begin{figure}
\caption{$P_1$(6/8)}
\label{figure:p815}
\end{figure}
\begin{figure}
\caption{$P_1$(7/8)}
\label{figure:p816}
\end{figure}
\begin{figure}
\caption{$P_1$(8/8)}
\label{figure:p817}
\end{figure}
\newgeometry{left=0.5cm,right=0.5cm,top=2cm}
\bgroup \everymath{\displaystyle}
\begin{table}[H]
\resizebox{17cm}{!}{
\renewcommand{\arraystretch}{2.5}
\begin{tabular}{|l|l|l|l|}
\hline
\multirow{2}{*}{} & a & b & c \\ \cline{2-4}
& d & e & f \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$P_{1,1}$}} & $ \frac{1}{2}\sqrt{5+3\sqrt{5}+4\sqrt{3+\sqrt{5}}}$ & $ \sqrt{\frac{1}{14}(11+6\sqrt{2}+\sqrt{5(9-4\sqrt{2})})}$ & $ \sqrt{\frac{1}{7}(10+4\sqrt{10}+\sqrt{253+80\sqrt{10}})}$ \\ \cline{2-4}
\multicolumn{1}{|c|}{} &$ \sqrt{\frac{1}{7}(5+4\sqrt{2})(2+\sqrt{5})}$ & $ \frac{1}{2}\sqrt{\frac{1}{2}(5+3\sqrt{5}+4\sqrt{3+\sqrt{5}})}$ & $ \frac{1}{2}\sqrt{9+4\sqrt{2}+\sqrt{5}}$ \\ \hline
\multirow{2}{*}{$P_{1,19}$} & $ \sqrt{\frac{61}{62}+\frac{5\sqrt{5}}{62}}$ & $ \sqrt{\frac{73}{38}+\frac{25\sqrt{5}}{38}}$ & $ 2\sqrt{\frac{1}{19}(8+3\sqrt{5})}$ \\ \cline{2-4}
& $ 3\sqrt{\frac{1}{589}(63+26\sqrt{5})}$ & $ \frac{1}{2}\sqrt{7+2\sqrt{5}}$ & $ \sqrt{\frac{74}{31}+\frac{33\sqrt{5}}{31}}$ \\ \hline
\multirow{2}{*}{$P_{1,28}$} & $ \sqrt{\frac{20}{11}+\frac{6\sqrt{5}}{11}}$ & $ \sqrt{\frac{20}{11}+\frac{6\sqrt{5}}{11}}$ & $ \sqrt{\frac{16}{11}+\frac{7\sqrt{5}}{11}}$ \\ \cline{2-4}
& $ \frac{3}{11}(3+2\sqrt{5})$ & $ \frac{1}{2}\sqrt{5+\sqrt{5}}$ & $ \sqrt{\frac{16}{11}+ \frac{7\sqrt{5}}{11}}$ \\ \hline
\multirow{2}{*}{$P_{1,54}$} & $ \frac{1}{2}(1+\sqrt{5}) $ & $ \frac{1}{2}(1+\sqrt{5}) $ & $ \frac{1}{2}(1+\sqrt{5})$ \\ \cline{2-4}
& $ \frac{1}{2}(1+\sqrt{5})$ & $ \frac{1}{2}(1+\sqrt{5}) $ & $ \frac{1}{2}(1+\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{1,61}$ } & $ \sqrt{\frac{23}{11}+\frac{8\sqrt{5}}{11}}$ & $ \sqrt{\frac{23}{11}+\frac{8\sqrt{5}}{11}}$ & $ \sqrt{\frac{23}{11}+\frac{8\sqrt{5}}{11}}$ \\ \cline{2-4}
& $ \frac{1}{11}(28+15\sqrt{5})$ & $ \frac{1}{2}(1+\sqrt{5}) $ & $ \sqrt{\frac{23}{11}+\frac{8\sqrt{5}}{11}}$ \\ \hline
\multirow{2}{*}{$P_{1,71}$} & $ \sqrt{\frac{17}{22}+\frac{13}{22\sqrt{5}}}$ & $ \sqrt{\frac{42}{11}+\frac{17\sqrt{5}}{11}}$ & $ \sqrt{\frac{65}{22}+\frac{25\sqrt{5}}{22}}$ \\ \cline{2-4}
& $ \sqrt{\frac{13}{11}+\frac{2}{\sqrt{5}}}$ & $ \sqrt{\frac{19}{8}+\frac{7\sqrt{5}}{8}}$ & $ \sqrt{\frac{2}{55}(25+9\sqrt{5})}$ \\ \hline
\multirow{2}{*}{$P_{1,83}$ } & $ \sqrt{\frac{1}{3}(1+\sqrt{10}-\sqrt{7-2\sqrt{10}})}$ & $ \sqrt{\frac{1}{2}(2+\sqrt{5}+\sqrt{7+3\sqrt{5}})} $ & $ \sqrt{\frac{1}{11}(24+6\sqrt{2}+5\sqrt{5}+4\sqrt{10})}$ \\ \cline{2-4}
& $ (\frac{13}{9}+\frac{4\sqrt{10}}{9})^{1/4}$ & $ \frac{2}{\sqrt{7+6\sqrt{2}-\sqrt{5}-4\sqrt{10}}}$ & $ \sqrt{\frac{1}{33}(1+28\sqrt{2}+4\sqrt{5}(2+\sqrt{2}))}$ \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$P_{1,95}$}} & $ \frac{1}{4}(5+\sqrt{5})$ & $ \sqrt{\frac{65}{22}+\frac{25\sqrt{5}}{22}}$ & $ \sqrt{\frac{431}{341}+\frac{170\sqrt{5}}{341}}$ \\ \cline{2-4}
\multicolumn{1}{|c|}{} & $ \sqrt{\frac{26}{11}+\frac{10\sqrt{5}}{11}}$ & $ \sqrt{\frac{5}{31}(6+\sqrt{5})}$ & $ \sqrt{\frac{57}{62}+\frac{25\sqrt{5}}{62}}$ \\ \hline
\end{tabular}
}
\end{table} \egroup
\newgeometry{left=0.5cm,right=0.5cm,top=1.5cm,bottom=2cm}
2. Coxeter diagrams for $P_2$
\begin{figure}
\caption{$P_2$(1/4)}
\label{figure:p821}
\end{figure}
\restoregeometry
\begin{figure}
\caption{$P_2$(2/4)}
\label{figure:p822}
\end{figure}
\begin{figure}
\caption{$P_2$(3/4)}
\label{figure:p823}
\end{figure}
\begin{figure}
\caption{$P_2$(4/4)}
\label{figure:p824}
\end{figure}
\newgeometry{left=0.5cm,right=0.5cm,top=0.5cm,bottom=0.5cm} \bgroup \everymath{\displaystyle}
\begin{table}[H]
\resizebox*{19cm}{!}{
\renewcommand{\arraystretch}{2.7}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{} &a &b &c \\ \cline{2-4}
&d &e &f \\ \hline
\multirow{2}{*}{$P_{2,1}$} & $ \sqrt{\frac{2}{11}(7+\sqrt{5})}$ & $ \frac{1}{38}(16+6\sqrt{5}+19\sqrt{\frac{1728}{361}+\frac{496\sqrt{5}}{361}})$ & $ \frac{1}{19}(16+6\sqrt{5}+\sqrt{\frac{1}{2}(169-15\sqrt{5})})$ \\ \cline{2-4}
&$ \frac{1}{2}\sqrt{\frac{1}{2}(9+\sqrt{5})}$ & $ \frac{1}{19}\sqrt{\frac{1}{22}(22539+9889\sqrt{5}+4\sqrt{56071390+25075410\sqrt{5}})}$ & $ \frac{1}{19}\sqrt{\frac{1}{22}(76723+34293\sqrt{5}+4\sqrt{526827338+235604346\sqrt{5}})}$ \\ \hline
\multirow{2}{*}{$P_{2,13}$} & $ \frac{1}{2}\sqrt{\frac{1}{2}(9+\sqrt{5})}$ & $ \frac{1}{2}(3+\sqrt{5})$ & $ \sqrt{\frac{39}{8}+\frac{17\sqrt{5}}{8}}$ \\ \cline{2-4}
& $ \frac{1}{2}\sqrt{\frac{1}{2}(9+\sqrt{5})}$ & $ \sqrt{\frac{39}{8}+\frac{17\sqrt{5}}{8}}$ & $ \frac{1}{4}(11+5\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,19}$ } & $ \sqrt{\frac{2}{11}(7+\sqrt{5})}$ & $ \frac{1}{4}(5+\sqrt{5})$ & $ \sqrt{\frac{26}{11}+\frac{10\sqrt{5}}{11}}$ \\ \cline{2-4}
& $ \sqrt{\frac{2}{11}(7+\sqrt{5})}$ & $ \sqrt{\frac{26}{11}+\frac{10\sqrt{5}}{11}}$ & $ \frac{1}{11}(35+16\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,29}$} & $ \frac{1}{2}(1+\sqrt{5})$ & $ 1+\frac{\sqrt{5}}{2} $ & $ \frac{1}{2}\sqrt{5(3+\sqrt{5})}$ \\ \cline{2-4}
& $ \frac{1}{2}(1+\sqrt{5})$ & $ \frac{1}{2}\sqrt{5(3+\sqrt{5})}$ & $ \frac{1}{2}(7+3\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,32}$} & $ \sqrt{\frac{43}{38}+\frac{9\sqrt{5}}{38}}$ & $ \frac{1}{8}(13+3\sqrt{5})$ & $ \frac{1}{2}\sqrt{\frac{5}{38}(87+35\sqrt{5})}$ \\ \cline{2-4}
& $ \sqrt{\frac{43}{38}+\frac{9\sqrt{5}}{38}}$ & $ \frac{1}{2}\sqrt{\frac{5}{38}(87+35\sqrt{5})} $ & $ \frac{1}{38}(83+43\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,33}$} & $ \frac{1}{2}\sqrt{5+\sqrt{5}}$ & $ \frac{1}{2}(3+\sqrt{5})$ & $ \sqrt{5+2\sqrt{5}}$ \\ \cline{2-4}
&$ \frac{1}{2}\sqrt{5+\sqrt{5}}$ & $ \sqrt{5+2\sqrt{5}}$ & $ \frac{1}{2}(7+3\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,36}$} & $ \sqrt{\frac{1}{2}(2+\sqrt{5}+\sqrt{7+3\sqrt{5}})} $ & $ \frac{1}{4}(7+3\sqrt{5}+4\sqrt{3+\sqrt{5}}) $ & $ \frac{1}{2}\sqrt{61+43\sqrt{2}+\sqrt{5(1451+1026\sqrt{2})}} $ \\ \cline{2-4}
& $ \sqrt{\frac{1}{2}(2+\sqrt{5}+\sqrt{7+3\sqrt{5}})}$ & $ \frac{1}{2}\sqrt{61+43\sqrt{2}+27\sqrt{5}+19\sqrt{10}}$ & $ \sqrt{2}(2+\sqrt{5})+\frac{3}{2}(3+\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,39}$} & $ \sqrt{\frac{2}{19}(9+\sqrt{5})}$ & $ \frac{1}{22}\sqrt{1063+419\sqrt{5}+8\sqrt{29530+13204\sqrt{5}}}$ & $ \frac{1}{22}(26+10\sqrt{5}+11\sqrt{\frac{340}{121}+\frac{124\sqrt{5}}{121}})$ \\ \cline{2-4}
& $ \frac{1}{2}\sqrt{5+\sqrt{5}}$ & $ \frac{1}{11}\sqrt{\frac{1}{19}(5426+1707\sqrt{5}+8\sqrt{356050+159164\sqrt{5}})}$ &$ \frac{11}{\sqrt{4628-2011\sqrt{5}-4\sqrt{52445-22999\sqrt{5}}}}$ \\ \hline
\multirow{2}{*}{$P_{2,41}$}& $ \sqrt{\frac{43}{38}+\frac{9\sqrt{5}}{38}}$ & $ \frac{1}{4}\sqrt{23+9\sqrt{5}+2\sqrt{202+90\sqrt{5}}}$ & $\frac{1}{4}(1+\sqrt{5}+2\sqrt{8+3\sqrt{5}}) $ \\ \cline{2-4}
& $\frac{1}{2}(1+\sqrt{5})$ & $ \frac{1}{2}\sqrt{\frac{1}{19}(143+37\sqrt{5}+2\sqrt{2442+1082\sqrt{5}})}$ & $ \sqrt{\frac{1}{38}(319+141\sqrt{5}+2\sqrt{43618+19506\sqrt{5}})}$ \\ \hline
\multirow{2}{*}{$P_{2,43}$} & $ \sqrt{\frac{2}{19}(9+\sqrt{5})} $ & $ 2+\frac{\sqrt{5}}{2} $ & $ \sqrt{\frac{78}{19}+\frac{34\sqrt{5}}{19}} $ \\ \cline{2-4}
& $ \sqrt{\frac{2}{19}(9+\sqrt{5})}$ & $ \sqrt{\frac{78}{19}+\frac{34\sqrt{5}}{19}}$ & $ \frac{1}{19}(47+20\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{2,44}$} & $ 1.9923$ & $4.04306 $ & $ 4.7181$ \\ \cline{2-4}
& $ 1.08754$ & $ 4.60491 $ & $ 8.13984$ \\ \hline
\multirow{2}{*}{$P_{2,46}$} & $ 1.9923$ & $ 3.71598 $ & $ 5.56213$ \\ \cline{2-4}
& $ 1.345$ & $ 4.25068$ & $ 9.72857$ \\ \hline
\end{tabular}
}
\end{table} \egroup
\restoregeometry
\newgeometry{left=0.5cm,right=0.5cm,top=2cm} \begin{remark}
For $P_{2,44}$, the acute solutions are:
$f=\displaystyle \frac{1}{11\sqrt{19}}
\sqrt{22104 + 12082 \sqrt{2} + 9885 \sqrt{5} + 5400 \sqrt{10} +
4 \sqrt{86889872 + 59414261 \sqrt{2} + 38858338 \sqrt{5} +
26570859 \sqrt{10}}}$
$e =\sqrt{1 - 31 f^2 + 14 \sqrt{5} f^2}$
$b=\displaystyle\frac{1}{248} (-42 - 49 \sqrt{2} - 38 \sqrt{5} - 3 \sqrt{10} + (5666 + 679 \sqrt{2} - 2538 \sqrt{5} - 295 \sqrt{10}) f^2)$
$a=\displaystyle\frac{1}{1264} e(584 + 137 \sqrt{2} + 350 \sqrt{5} + 163 \sqrt{10} + (50886 + 13227 \sqrt{2} - 22792 \sqrt{5} - 5903 \sqrt{10}) f^2)$
$d=\displaystyle\frac{1}{1896}e f (30344+ 21847 \sqrt{2} - 12830 \sqrt{5} -
10679 \sqrt{10} + 19 (-203414 - 160301 \sqrt{2} + 90964 \sqrt{5} + 71693 \sqrt{10}) f^2) $
$c=\displaystyle\frac{1}{117552} e f (-1719980 - 1190961 \sqrt{2} + 844320 \sqrt{5} +
495395 \sqrt{10} + (242831436 + 150419207 \sqrt{2} - 108605296 \sqrt{5}$
\hspace{0.6cm}$-67264185 \sqrt{10}) f^2)$
For $P_{2,46}$, the acute solutions are:
$f=\displaystyle\frac{1}{11\sqrt{2}}\sqrt{3358+1857\sqrt{2}+1498\sqrt{5}+831\sqrt{10}+2\sqrt{2(3804500+2601475\sqrt{2}+1701426\sqrt{5}+1163413\sqrt{10})}}$
$e=\displaystyle\sqrt{1-11f^2+5\sqrt{5}f^2}$
$d=\displaystyle\frac{1}{712}ef(-1188-100\sqrt{5}+547\sqrt{2}+85\sqrt{10}-3f^2(-51628+23074\sqrt{5}-39527\sqrt{2}+17687\sqrt{10}))$
$c=\displaystyle\frac{1}{22072}ef((-5147665-3402751\sqrt{2}+2303503\sqrt{5}+1520761\sqrt{10})f^2+(10517-15966\sqrt{2}+2905\sqrt{5}+935\sqrt{10}))$
$b=\displaystyle\frac{1}{124}(-22-36\sqrt{2}-14\sqrt{5}-6\sqrt{10}-69f^2-313\sqrt{2}f^2+35\sqrt{5}f^2+139\sqrt{10}f^2)$
$a=\displaystyle\frac{1}{356}e(171+313\sqrt{2}+9\sqrt{5}-21\sqrt{10}+(-10376-9016\sqrt{2}+4644\sqrt{5}+4027\sqrt{10})f^2)$
\end{remark}
\restoregeometry
\newgeometry{left=0.5cm,right=0.5cm,top=1.5cm,bottom=2cm} 3. Coxeter diagrams for $P_3$
\begin{figure}
\caption{$P_3$(1/8)}
\label{figure:p831}
\end{figure}
\restoregeometry
\begin{figure}
\caption{$P_3$(2/8)}
\label{figure:p832}
\end{figure}
\begin{figure}
\caption{$P_3$(3/8)}
\label{figure:p833}
\end{figure}
\begin{figure}
\caption{$P_3$(4/8)}
\label{figure:p834}
\end{figure}
\begin{figure}
\caption{$P_3$(5/8)}
\label{figure:p835}
\end{figure}
\begin{figure}
\caption{$P_3$(6/8)}
\label{figure:p836}
\end{figure}
\begin{figure}
\caption{$P_3$(7/8)}
\label{figure:p837}
\end{figure}
\begin{figure}
\caption{$P_3$(8/8)}
\label{figure:p838}
\end{figure}
\newgeometry{left=0.5cm,right=0.5cm,top=0.2cm,bottom=0.2cm}
\begin{table}[H]
\resizebox*{\textwidth}{!}{
\renewcommand{\arraystretch}{1.85}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{} &$a$ &$b$ &$c$ \\
\cline{2-4}
&$d$ &$e$ &$f$ \\
\hline
\multirow{2}{*}{$P_{3,1}$} & $\frac{3}{11}(3+2\sqrt{5})$ & $\frac{2}{11}\sqrt{265+118\sqrt{5}}$ &$\frac{1}{2}\sqrt{5+\sqrt{5}}$ \\
\cline{2-4}
&$\frac{1}{2}\sqrt{5+\sqrt{5}}$ &$\frac{2}{11}\sqrt{265+118\sqrt{5}}$ & $\frac{7}{11}(3+2\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{3,4}$} &$\frac{1}{11}(17+4\sqrt{5})$ &$\sqrt{\frac{17208}{2299}+\frac{7688\sqrt{5}}{2299}} $&$\sqrt{\frac{2}{19}(9+\sqrt{5})}$ \\
\cline{2-4}
&$\sqrt{\frac{2}{19}(9+\sqrt{5})}$ &$\sqrt{\frac{17208}{2299}+\frac{7688\sqrt{5}}{2299}} $ &$\frac{1}{209}(257+208\sqrt{5})$ \\
\hline
\multirow{2}{*}{$P_{3,5}$} &$\frac{1}{19}(17+4\sqrt{5})$ &$\sqrt{\frac{20936}{3971}+\frac{9352\sqrt{5}}{3971}}$ &$\sqrt{\frac{2}{11}(7+\sqrt{5})}$ \\
\cline{2-4}
&$\sqrt{\frac{2}{11}(7+\sqrt{5})}$ &$\sqrt{\frac{20936}{3971}+\frac{9352\sqrt{5}}{3971}} $&$\frac{1}{209}(257+208\sqrt{5})$ \\ \hline
\multirow{2}{*}{$P_{3,15}$} &$3.54605$ &$5.88536$ &$1.9923$ \\
\cline{2-4}
&$1.345$ &$5.88536$ &$6.61923$ \\
\hline
\multirow{2}{*}{$P_{3,19}$} &$\frac{1}{11}(8+13\sqrt{2}+9\sqrt{5}+5\sqrt{10})$ &$\frac{1}{11}\sqrt{2074+1462\sqrt{2}+950\sqrt{5}+630\sqrt{10}}$ &$\sqrt{\frac{1}{2}(2+\sqrt{5}+\sqrt{7+3\sqrt{5}})}$ \\
\cline{2-4}
&$\sqrt{\frac{1}{2}(2+\sqrt{5}+\sqrt{7+3\sqrt{5}})}$ &$\frac{1}{11}\sqrt{2074+1462\sqrt{2}+950\sqrt{5}+630\sqrt{10}}$ &$\frac{1}{11}(37+12\sqrt{2}+2\sqrt{5}(5+4\sqrt{2}))$ \\
\hline
\multirow{2}{*}{$P_{3,22}$} &$\frac{1}{2}(3+\sqrt{5})$ &$\sqrt{\frac{115}{11}+\frac{51\sqrt{5}}{11}}$ & $\sqrt{\frac{10}{11}+\frac{3\sqrt{5}}{11}}$ \\
\cline{2-4}
&$\sqrt{\frac{10}{11}+\frac{3\sqrt{5}}{11}}$ &$\sqrt{\frac{115}{11}+\frac{51\sqrt{5}}{11}}$ &$\frac{7}{11}(3+2\sqrt{5})$ \\
\hline
\multirow{2}{*}{$P_{3,32}$} &$4.41427$ &$6.42282$ &$1.82559$ \\
\cline{2-4}
&$1.23245$ &$6.42282$ &$6.61923$ \\
\hline
\multirow{2}{*}{$P_{3,48}$} &$1+\sqrt{5}+\sqrt{7+3\sqrt{5}}$ &$\sqrt{\frac{1}{11}(222+161\sqrt{2}+104\sqrt{5}+67\sqrt{10})}$ &$\sqrt{\frac{1}{22}(19+9\sqrt{5}+2\sqrt{147+65\sqrt{5}})}$ \\
\cline{2-4}
&$\sqrt{\frac{1}{22}(19+9\sqrt{5}+2\sqrt{147+65\sqrt{5}})}$ &$\sqrt{\frac{1}{11}(222+161\sqrt{2}+104\sqrt{5}+67\sqrt{10})}$ &$\frac{1}{11}(37+12\sqrt{2}+2\sqrt{5}(5+4\sqrt{2}))$ \\
\hline
\multirow{2}{*}{$P_{3,58}$} & $\frac{1}{76}(25+7\sqrt{5}+8\sqrt{108+31\sqrt{5}})$ &$\frac{1}{19}\sqrt{\frac{1}{22}(30747+13689\sqrt{5}+8\sqrt{29359122+13129762\sqrt{5}})}$ &$\sqrt{\frac{2}{11}(7+\sqrt{5})}$ \\
\cline{2-4}
&$\frac{1}{2}\sqrt{\frac{1}{2}(9+\sqrt{5})}$ &$\frac{1}{38}(32+12\sqrt{5}+19\sqrt{\frac{1592}{361}+\frac{711\sqrt{5}}{361}})$ &$\frac{1}{19}\sqrt{\frac{1}{22}(26791+8529\sqrt{5}+8\sqrt{11214278+5015082\sqrt{5}})}$ \\
\hline
\multirow{2}{*}{$P_{3,70}$} &$5.67675$ &$7.83841$ & $1.82559$ \\
\cline{2-4}
&$1.14412$ &$6.04875$ &$6.24279$ \\
\hline
\multirow{2}{*}{$P_{3,82}$} &$\frac{1}{11}\sqrt{197+69\sqrt{5}+4\sqrt{2(845+358\sqrt{5})}}$ & $\frac{1}{11}\sqrt{\frac{1}{19}(7186+3203\sqrt{5}+4\sqrt{6387730+2856676\sqrt{5}})}$ &$\sqrt{\frac{2}{19}(9+\sqrt{5})}$ \\
\cline{2-4}
&$\frac{1}{2}\sqrt{5+\sqrt{5}}$ &$\frac{1}{22}(26+10\sqrt{5}+11\sqrt{\frac{1385}{121}+\frac{619\sqrt{5}}{121}})$ & $\frac{1}{11}\sqrt{\frac{1}{19}(11128+3983\sqrt{5}+8\sqrt{2439905+1091149\sqrt{5}})}$ \\
\hline
\multirow{2}{*}{$P_{3,84}$} &$3.84864$ & $4.97946$ &$1.08754$ \\
\cline{2-4}
&$1.9923$ &$6.47518$ &$5.62568$ \\
\hline
\multirow{2}{*}{$P_{3,86}$} &$\frac{3}{4}+\frac{\sqrt{5}}{4}+\sqrt{\frac{5}{2}+\sqrt{5}}$ & $1+\frac{\sqrt{5}}{2}+\sqrt{\frac{5}{2}+\sqrt{5}}$ &$\frac{1}{2}\sqrt{3+\sqrt{5}}$ \\
\cline{2-4}
&$\frac{1}{\sqrt{2-\frac{3}{\sqrt{5}}}}$ &$\sqrt{\frac{1}{22}(173+75\sqrt{5}+4\sqrt{3625+1621\sqrt{5}})}$ & $\sqrt{\frac{1}{22}(123+49\sqrt{5}+4\sqrt{1385+619\sqrt{5}})}$ \\
\hline
\multirow{2}{*}{$P_{3,98}$} &$\frac{3}{19}(8+3\sqrt{5})$ &$\frac{1}{19}\sqrt{2442+1082\sqrt{5}}$ &$\frac{1}{2}\sqrt{\frac{1}{2}(9+\sqrt{5})}$ \\
\cline{2-4}
& $\frac{1}{2}\sqrt{\frac{1}{2}(9+\sqrt{5})}$ & $\frac{1}{19}\sqrt{2442+1082\sqrt{5}}$ & $\frac{1}{19}(20+17\sqrt{5})$ \\
\hline
\multirow{2}{*}{$P_{3,104}$} &$2+\sqrt{5}$ &$3+\sqrt{5}$ &$\frac{1}{2}\sqrt{3+\sqrt{5}}$ \\
\cline{2-4}
&$\frac{1}{2}\sqrt{3+\sqrt{5}}$ & $3+\sqrt{5}$ & $2+\sqrt{5}$ \\
\hline
\multirow{2}{*}{$P_{3,110}$} &$\frac{1}{2}(1+\sqrt{5})$ &$\sqrt{7+3\sqrt{5}}$ &$\frac{1}{2}(1+\sqrt{5})$ \\
\cline{2-4}
&$\frac{1}{2}(1+\sqrt{5})$ &$\sqrt{7+3\sqrt{5}}$ &$2+\sqrt{5}$ \\
\hline
\multirow{2}{*}{$P_{3,113}$} &$\frac{1}{2}\sqrt{\frac{3}{2}(3+\sqrt{5})+\sqrt{8+3\sqrt{5}}}$ & $\sqrt{\frac{1}{38}(94+40\sqrt{5}+\sqrt{2(8331+3725\sqrt{5})})}$ & $\sqrt{\frac{43}{38}+\frac{9\sqrt{5}}{38}}$ \\
\cline{2-4}
& $\frac{1}{2}(1+\sqrt{5})$ &$\frac{1}{2}(2+\sqrt{5}+\sqrt{8+3\sqrt{5}})$ &$\sqrt{\frac{1}{19}(8+3\sqrt{5})(9+2\sqrt{8+3\sqrt{5}})}$ \\
\hline
\multirow{2}{*}{$P_{3,115}$} &$\frac{1}{4}(5+\sqrt{5})$ &$\sqrt{\frac{109}{19}+\frac{48\sqrt{5}}{19}}$ &$\sqrt{\frac{43}{38}+\frac{9\sqrt{5}}{38}}$ \\
\cline{2-4}
& $\sqrt{\frac{43}{38}+\frac{9\sqrt{5}}{38}}$ &$\sqrt{\frac{109}{19}+\frac{48\sqrt{5}}{19}}$ &$\frac{1}{19}(20+17\sqrt{5})$ \\
\hline
\end{tabular}
} \end{table}
\restoregeometry
\newgeometry{left=0.5cm,right=0.5cm,top=2cm} \begin{remark}
For $P_{3,15}$, the acute solutions are:
$f=\frac{1}{11}\sqrt{799+406\sqrt{2}+364\sqrt{5}+168\sqrt{10}+2\sqrt{2(211980+145015\sqrt{2}+94834\sqrt{5}+64817\sqrt{10})}}$
$e=\frac{1}{2}\sqrt{-1-\sqrt{5}+f^2+\sqrt{5}f^2}$
$d={\displaystyle\frac{1}{496}}e(1248+161\sqrt{2}+240\sqrt{5}+67\sqrt{10}+(140+81\sqrt{2}-3\sqrt{5}(36+7\sqrt{2}))f^2)$
$c={\displaystyle\frac{1}{1488}}ef(4396+1661\sqrt{2}+436\sqrt{5}+511\sqrt{10}+(-896-1327\sqrt{2}+304\sqrt{5}+595\sqrt{10})f^2)$
$b={\displaystyle\frac{1}{186}}ef(248+211\sqrt{2}-31\sqrt{5}-94\sqrt{10}+(-1000-443\sqrt{2}+443\sqrt{5}+200\sqrt{10})f^2)$
$a={\displaystyle\frac{1}{248}}(-259+13\sqrt{2}-7\sqrt{5}-3\sqrt{10}+(37+69\sqrt{2}+\sqrt{5}-35\sqrt{10})f^2)$
For $P_{3,32}$, the acute solutions are:
$f=\frac{1}{11}\sqrt{799+406\sqrt{2}+364\sqrt{5}+168\sqrt{10}+2\sqrt{2(211980+145015\sqrt{2}+94834\sqrt{5}+64817\sqrt{10})}}$
$e={\displaystyle\frac{1}{2\sqrt{2}}}\sqrt{-1-3\sqrt{5}+f^2+3\sqrt{5}f^2}$
$d={\displaystyle\frac{1}{2728}}e(4968+731\sqrt{2}+1464\sqrt{5}+315\sqrt{10}+11(20-28\sqrt{5}-3\sqrt{2}(-7+\sqrt{5}))f^2)$
$c={\displaystyle\frac{1}{8184}}ef(16476+7091\sqrt{2}+3724\sqrt{5}+2619\sqrt{10}+11(-216-287\sqrt{2}+56\sqrt{5}+129\sqrt{10})f^2)$
$b={\displaystyle\frac{1}{186}}ef(248+211\sqrt{2}-31\sqrt{5}-94\sqrt{10}+(-1000-443\sqrt{2}+443\sqrt{5}+200\sqrt{10})f^2)$
$a={\displaystyle\frac{1}{496}}(-517+53\sqrt{2}-19\sqrt{5}-17\sqrt{10}+(127+329\sqrt{2}-15\sqrt{5}-157\sqrt{10})f^2)$
For $P_{3,70}$, the acute solutions are:
$f={\displaystyle\frac{1}{\sqrt{22}}}\sqrt{135+64\sqrt{2}+57\sqrt{5}+28\sqrt{10}+4\sqrt{2743+1884\sqrt{2}+1231\sqrt{5}+838\sqrt{10}}}$
$e={\displaystyle\frac{1}{2\sqrt{2}}}\sqrt{-1-3\sqrt{5}+f^2+3\sqrt{5}f^2}$
$d={\displaystyle\frac{1}{23684}}e(36906+2881\sqrt{2}+11886\sqrt{5}+847\sqrt{10}+(4504+3808\sqrt{2}-4044\sqrt{5}-874\sqrt{10})f^2)$
$c={\displaystyle\frac{1}{2913132}}((2545015+877512\sqrt{2}+2315748\sqrt{5}+1194982\sqrt{10})ef+(2442218+1364747\sqrt{2}-1270198\sqrt{5}-$
\hspace{0.6cm}$586721\sqrt{10})ef^3)$
$b={\displaystyle\frac{1}{3813}}((6454+4099\sqrt{2}-743\sqrt{5}-1864\sqrt{10})ef+\frac{1}{2}(-8619-7529\sqrt{2}+3601\sqrt{5}+3483\sqrt{10})ef^3)$
$a={\displaystyle\frac{1}{248}}(-97-15\sqrt{2}-73\sqrt{5}+13\sqrt{10}+(21-53\sqrt{2}+19\sqrt{5}+17\sqrt{10})f^2)$
For $P_{3,84}$, the acute solutions are:
$f={\displaystyle\frac{1}{11\sqrt{19}}}\sqrt{11104+5702\sqrt{2}+5133\sqrt{5}+2276\sqrt{10}+8\sqrt{4842088+3311169\sqrt{2}+2165554\sqrt{5}+1480687\sqrt{10}}}$
$e={\displaystyle\frac{1}{2}}\sqrt{-1-2\sqrt{5}+f^2+2\sqrt{5}f^2}$
$ d={\displaystyle\frac{1}{24216}}e(-17104+5893\sqrt{2}-3602\sqrt{5}-20597\sqrt{10}+(-115066-24517\sqrt{2}+50384\sqrt{5}+12469\sqrt{10})f^2)$
$c={\displaystyle\frac{1}{1937062056}}ef(9466659574+914636627\sqrt{2}-3948733168\sqrt{5}-1618127971\sqrt{10}+(-23059580156-23947802279\sqrt{2}+$
\hspace{0.6cm}$10158748906\sqrt{5}+10850958719\sqrt{10})f^2)$
$b={\displaystyle\frac{1}{9918884}}ef(-31360330-1617047\sqrt{2}+13795674\sqrt{5}+3094731\sqrt{10}+(61953682+76754869\sqrt{2}-5\sqrt{5}(5508882+$
\hspace{0.6cm} $6899345\sqrt{2}))f^2)$
$a={\displaystyle\frac{1}{248}}(-57+\frac{177}{\sqrt{2}}-265\sqrt{\frac{5}{2}}+37\sqrt{5}+(-679-\frac{117}{\sqrt{2}}+89\sqrt{\frac{5}{2}}+295\sqrt{5})f^2)$ \end{remark} \restoregeometry
\begin{figure}
\caption{$P_4,P_6,P_8,P_{13},P_{16}$}
\label{figure:p4681316}
\end{figure}
\newgeometry{left=0.5cm,right=0.5cm,top=2cm,bottom=2cm} \bgroup \everymath{\displaystyle}
\begin{table}[H]
\resizebox*{19cm}{!}{
\renewcommand{\arraystretch}{2.7}
\begin{tabular}{|c |c | c | c | c | c |}
\hline
& a & b & c & d & e \\ \hline
$P_{4,1}$ & $\frac{ 1}{ 2} \sqrt{3(3+\sqrt{5})}$ & $\sqrt{\frac{ 3}{ 19} (8+3\sqrt{5})}$ & $\frac{ 1}{ 2} \sqrt{7 + 3\sqrt{5}}$ & $\sqrt{\frac{ 2}{19} (8 +3\sqrt{5})}$ & $\sqrt{\frac{ 5}{ 19} (8 +3\sqrt{5})}$\\[7pt] \hline
$P_{4,2}$ & $\frac{ 1}{ 4} \sqrt{57 + 23\sqrt{5} + 4\sqrt{30(9 + 4\sqrt{5})}}$ & $\sqrt{\frac{ 24}{ 19} + \frac{ 9\sqrt{5}}{19}}$ & $\frac{ 1}{ 4} (2(2+\sqrt{5})+\sqrt{21+9\sqrt{5}})$ & $\sqrt{\frac{ 16}{ 19} + \frac{ 6\sqrt{5}}{19}}$ & $\frac{ 1}{ 2} \sqrt{\frac{ 1}{ 19} (249+91\sqrt{5} +60\sqrt{6} +32\sqrt{30})}$ \\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{6}{l}{}\\
\hline
$P_{6,1}$ & $ \sqrt{\frac{ 5}{ 19} (8 +3\sqrt{5})} $ & $ \sqrt{\frac{ 5}{ 19} (8 +3\sqrt{5})} $ & $ \sqrt{\frac{ 5}{ 38} (9 +\sqrt{5})} $ & $ \sqrt{\frac{ 5}{ 38} (9 +\sqrt{5})} $ & $ \frac{ 25}{ 19} (9 +\sqrt{5})-9 $ \\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{6}{l}{}\\
\hline
$P_{8,1}$ & $\frac{ 1}{ 2} \sqrt{5+\sqrt{5}}$ & $\frac{ 1}{ 2} (3+\sqrt{5})$ & $\frac{ 1}{ 2} (1+\sqrt{5})$ & $\sqrt{5+2\sqrt{5}}$& \\[7pt]
\hline
$P_{8,2}$ & $\frac{ 1}{2} (1+\sqrt{5+2\sqrt{5}})$ & \footnotesize $\sqrt{7+3\sqrt{5} + \sqrt{85+38\sqrt{5}}}$ & $\frac{ 1}{2} (1+\sqrt{5})$ & \footnotesize $\frac{ 1}{2} \sqrt{43+19\sqrt{5} + 2\sqrt{890 + 398\sqrt{5}}}$& \\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{6}{l}{}\\
\hline
$P_{13,1}$ & $\frac{ 1}{ 2} \sqrt{3+\sqrt{2}} $ & $\frac{ 1}{ 2} \sqrt{3+\sqrt{2}} $ & $\frac{ 1}{ 2} (2+\sqrt{2}) $ & $1+\sqrt{2}$ & $\frac{ 1}{ 2} (2+\sqrt{2}) $ \\[7pt]
\hline
$P_{13,2}$ & $\sqrt{\frac{ 17}{ 23} + \frac{ 8\sqrt{2}}{ 23}}$ & $\frac{ 1}{ 2} +\frac{ 1}{ \sqrt{2}}$ & $\frac{ 3}{ 2}+ \frac{ 1}{ \sqrt{2}}$ & $1 + \frac{ 1}{ \sqrt{2}}$ & $\sqrt{\frac{ 17}{ 23} + \frac{ 8\sqrt{2}}{ 23}}$ \\[7pt]
\hline
$P_{13,3}$ & $\sqrt{\frac{ 17}{ 23} + \frac{ 8\sqrt{2}}{ 23}}$ & $\sqrt{\frac{ 17}{ 23} + \frac{ 8\sqrt{2}}{ 23}}$ & $1+\frac{ 1}{ 2\sqrt{2}}$ & $1+\frac{3}{ 2\sqrt{2}}$ & $1+\frac{ 1}{\sqrt{2}}$ \\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{6}{l}{}\\
\hline
$P_{16,1}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+ \frac{1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$& \\[7pt]
\hline
$P_{16,2}$ & $2+\sqrt{2}$ & $\sqrt{\frac{ 13}{ 7} + \frac{ 9 \sqrt{2}}{ 7}}$ & $\sqrt{\frac{ 2}{ 7}(3+\sqrt{2})}$ & $1+\frac{ 1}{ \sqrt{2}}$& \\[7pt]
\hline
$P_{16,3}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$& \\[7pt]
\hline
$P_{16,4}$ & $\sqrt{\frac{ 13}{ 7} + \frac{ 9 \sqrt{2}}{ 7}}$ & $2+\sqrt{2}$ & $1+\frac{ 1}{ \sqrt{2}}$ & $\sqrt{\frac{2}{ 7}(3+\sqrt{2})}$& \\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{6}{l}{}\\
\end{tabular}
}
\end{table} \egroup \restoregeometry
\begin{figure}
\caption{$P_{7}$}
\label{figure:p7}
\end{figure}
\newgeometry{left=0.5cm,right=0.5cm,top=2cm,bottom=2cm} \bgroup \everymath{\displaystyle}
\begin{table}[H]
\renewcommand{\arraystretch}{2}
\resizebox{17cm}{!}{
\begin{tabular}{|l|l|l|l|}
\hline
\multirow{2}{*}{} &a &b &c \\ \cline{2-4}
&d &e & \\ \hline
\multirow{2}{*}{$P_{7,1}$} &$\sqrt{\frac{23}{8}+\frac{9\sqrt{5}}{8}}$ & $\frac{1}{2}\sqrt{\frac{1}{2}(6+\sqrt{5})}$ & $\frac{1}{4}\sqrt{5(3+\sqrt{5})}$ \\ \cline{2-4}
&$\frac{1}{2}(1+\sqrt{5})$ & $\sqrt{\frac{23}{8}+\frac{9\sqrt{5}}{8}}$ & \\ \hline
\multirow{2}{*}{$P_{7,2}$} & $\frac{1}{8}(3(1+\sqrt{5})+\sqrt{206+86\sqrt{5}})$ & $\frac{1}{2}\sqrt{\frac{1}{2}(6+\sqrt{5})}$ & $\frac{1}{4}\sqrt{22+\frac{17\sqrt{5}}{2}+\sqrt{655+290\sqrt{5}}}$ \\ \cline{2-4}
&$\frac{1}{2}(1+\sqrt{5})$ & $\frac{1}{8}(5+\sqrt{5}+\sqrt{206+86\sqrt{5}})$ & \\ \hline
\multirow{2}{*}{$P_{7,3}$} & $\frac{1}{8}(5+3\sqrt{5}+\sqrt{206+86\sqrt{5}})$ & $\frac{1}{2}\sqrt{\frac{1}{2}(6+\sqrt{5})}$ & $\frac{1}{4}\sqrt{\frac{1}{2}(39+11\sqrt{5}+\sqrt{470+130\sqrt{5}})}$ \\ \cline{2-4}
& $\frac{1}{2}(1+\sqrt{5})$ & $\frac{1}{8}(5+3\sqrt{5}+\sqrt{206+86\sqrt{5}})$ & \\ \hline
\multirow{2}{*}{$P_{7,4}$} & $1+\sqrt{2} $ & $1+\frac{1}{2\sqrt{2}} $ & $1+\frac{1}{\sqrt{2}} $ \\ \cline{2-4}
& $\sqrt{\frac{9}{14}+\frac{2\sqrt{2}}{7}}$ & $\sqrt{\frac{1}{7}(9+4\sqrt{2})} $ & \\ \hline
\multirow{2}{*}{$P_{7,5}$} & $1+\frac{1}{\sqrt{2}}+\sqrt{\frac{26}{7}+\frac{18\sqrt{2}}{7}}$ & $1+\frac{1}{2\sqrt{2}} $ & $\frac{1}{14}(7+7\sqrt{2}+2\sqrt{91+63\sqrt{2}})$ \\ \cline{2-4}
& $\sqrt{\frac{9}{14}+\frac{2\sqrt{2}}{7}}$ & $\sqrt{\frac{2}{7}(6+3\sqrt{2}+2\sqrt{5+3\sqrt{2}})}$ & \\ \hline
\multirow{2}{*}{$P_{7,6}$} & $1+\frac{1}{\sqrt{2}}+\sqrt{\frac{26}{7}+\frac{18\sqrt{2}}{7}}$ & $1+\frac{1}{2\sqrt{2}} $ & $\frac{1}{2}(\sqrt{2}+\sqrt{\frac{52}{7}+\frac{36\sqrt{2}}{7}})$ \\ \cline{2-4}
& $\sqrt{\frac{9}{14}+\frac{2\sqrt{2}}{7}}$ & $\sqrt{\frac{1}{7}(13+8\sqrt{2}+2\sqrt{54+38\sqrt{2}})}$ & \\ \hline
\multirow{2}{*}{$P_{7,7}$} & $\sqrt{\frac{1}{31}(29+10\sqrt{5})}$ & $\frac{1}{2}\sqrt{4+\sqrt{5}}$ & $\sqrt{\frac{1}{31}(23+9\sqrt{5})}$ \\ \cline{2-4}
& $\frac{1}{2}(1+\sqrt{5})$ & $\sqrt{\frac{2}{31}(29+10\sqrt{5})}$ & \\ \hline
\multirow{2}{*}{$P_{7,8}$} & $\frac{1}{62}\sqrt{4347+1577\sqrt{5}+8\sqrt{11(21727+9289\sqrt{5})}}$ & $\frac{1}{2}\sqrt{4+\sqrt{5}}$ & $\frac{1}{62}\sqrt{\frac{1}{2}(7899+2851\sqrt{5}+48\sqrt{29048+12674\sqrt{5}})}$ \\ \cline{2-4}
& $\frac{1}{2}(1+\sqrt{5})$ & $\frac{1}{124}(49+3\sqrt{5}+8\sqrt{448+178\sqrt{5}})$ & \\ \hline
\multirow{2}{*}{$P_{7,9}$} & $\frac{1}{31}\sqrt{1079+433\sqrt{5}+4\sqrt{75257+33535\sqrt{5}}}$ & $\frac{1}{2}\sqrt{4+\sqrt{5}}$ & $\frac{1}{31}\sqrt{968+213\sqrt{5}+4\sqrt{6613+787\sqrt{5}}}$ \\ \cline{2-4}
& $\frac{1}{2}(1+\sqrt{5})$ & $\frac{1}{62}(22+14\sqrt{5}+31\sqrt{\frac{7168}{961}+\frac{2848\sqrt{5}}{961}})$ & \\ \hline
\multirow{2}{*}{$P_{7,10}$} & $\sqrt{\frac{13}{7}+\frac{9\sqrt{2}}{7}}$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $2\sqrt{\frac{1}{7}(3+\sqrt{2})}$ \\ \cline{2-4}
& $1+\frac{1}{\sqrt{2}}$ & $\sqrt{\frac{45}{7}+\frac{29\sqrt{2}}{7}}$ & \\ \hline
\multirow{2}{*}{$P_{7,11}$} & $2+\sqrt{2}$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $1+\sqrt{2}$ \\ \cline{2-4}
& $1+\frac{1}{\sqrt{2}}$ & $3+\frac{5}{\sqrt{2}}$ & \\ \hline
\multirow{2}{*}{$P_{7,12}$} & $\frac{1}{14}(25+13\sqrt{2})$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $\frac{8}{7}+\frac{17}{7\sqrt{2}}$ \\ \cline{2-4}
& $1+\frac{1}{\sqrt{2}}$ & $\frac{1}{14}(41+37\sqrt{2})$ & \\ \hline
\multirow{2}{*}{$P_{7,13}$} & $1+\sqrt{2}$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $1+\sqrt{2}$ \\ \cline{2-4}
& $\sqrt{\frac{9}{14}+\frac{2\sqrt{2}}{7}}$ & $\sqrt{\frac{9}{7}+\frac{4\sqrt{2}}{7}}$ & \\ \hline
\multirow{2}{*}{$P_{7,14}$} & $1+\frac{1}{\sqrt{2}}+\sqrt{\frac{26}{7}+\frac{18\sqrt{2}}{7}}$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}+\sqrt{\frac{26}{7}+\frac{18\sqrt{2}}{7}}$ \\ \cline{2-4}
& $\sqrt{\frac{9}{14}+\frac{2\sqrt{2}}{7}}$ & $\sqrt{\frac{1}{7}(13+8\sqrt{2}+2\sqrt{54+38\sqrt{2}})}$ & \\ \hline
\multirow{2}{*}{$P_{7,15}$} & $1+\frac{1}{\sqrt{2}}+\sqrt{\frac{26}{7}+\frac{18\sqrt{2}}{7}}$ & $\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $1+\frac{1}{\sqrt{2}}+\sqrt{\frac{26}{7}+\frac{18\sqrt{2}}{7}}$ \\ \cline{2-4}
& $\sqrt{\frac{9}{14}+\frac{2\sqrt{2}}{7}}$ & $\sqrt{\frac{2}{7}(6+3\sqrt{2}+2\sqrt{5+3\sqrt{2}})}$ & \\ \hline
\end{tabular}
} \end{table} \egroup \restoregeometry
\begin{figure}
\caption{$P_{17},P_{18},P_{26}$}
\label{figure:p17182621}
\end{figure}
\newgeometry{left=0.5cm,right=0.5cm,top=2cm,bottom=2cm}
\begin{table}[H]
\resizebox*{17cm}{!}{
\renewcommand{\arraystretch}{2.3}
\begin{tabular}{|c |c | c | c | c |}
\hline
& $a$ & $b$ & $c$ & $d$ \\
\hline
$P_{17,1}$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{4} (3+\sqrt{5})$& $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$\\[7pt]
\hline
$P_{17,2}$ & $\displaystyle 1+\frac{1}{ \sqrt{2}}$ & $\displaystyle\frac{1}{2}+\frac{1}{\sqrt{2}}$ & $\displaystyle 1+\frac{1}{ \sqrt{2}}$ & $\displaystyle\frac{3}{2}+\frac{1}{ \sqrt{2}}$ \\[7pt]
\hline
\renewcommand{\arraystretch}{2}
$P_{17,3}$ & $\displaystyle 1+\frac{1}{\sqrt{2}}$ & $\displaystyle\frac{1}{2}+\frac{1}{ \sqrt{2}}$ & $\displaystyle 1+\frac{1}{ \sqrt{2}}$ & $\displaystyle\frac{3}{2}+\frac{1}{\sqrt{2}}$ \\[7pt]
\hline
$P_{17,4}$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2}\sqrt{3+\frac{3}{ \sqrt{5}}}$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ \\[7pt]
\hline
$P_{17,5}$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{10}(5+3\sqrt{5}) $& $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ \\[7pt]
\hline
$P_{17,6}$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle \frac{1}{2} \sqrt{3+\sqrt{5}} $& $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2} (3+\sqrt{5})$\\[7pt]
\hline
$P_{17,7}$ & $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2} (1+\sqrt{5}) $& $\displaystyle\frac{1}{2} (1+\sqrt{5})$ & $\displaystyle\frac{1}{2} (3+\sqrt{5})$\\[7pt]
\hline
$P_{17,8}$ & $\displaystyle\frac{1}{\sqrt{2}}(2 \cos\frac{\pi}{7} (1 + 2 \cos\frac{\pi}{7})-1)$
&
$\displaystyle 2 \cos^2\frac{\pi}{7}-\frac{1}{2}$ &
$\displaystyle \frac{1}{\sqrt{2}}(2 \cos\frac{\pi}{7} (1 + 2 \cos\frac{\pi}{7})-1)$&
$\displaystyle 2 \cos\frac{\pi}{7} (1 + 2 \cos\frac{\pi}{7})$
\\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{5}{l}{}\\
\hline
$P_{18,1}$ & $2+\sqrt{2}$ & $2+\sqrt{2}$ & $5+4\sqrt{2}$& \\[7pt]
\hline
$P_{18,2}$ & $\sqrt{2+\sqrt{2}}$ & $\sqrt{2+\sqrt{2}}$ & $1+\sqrt{2}$& \\[7pt]
\hline
$P_{18,3}$ & $2+\sqrt{2}$ & $2+\sqrt{2}$ & $5+4\sqrt{2}$& \\[7pt]
\hline
$P_{18,4}$ & $\sqrt{2+\sqrt{2}}$ & $\sqrt{2+\sqrt{2}}$ & $1+\sqrt{2}$&
\\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{5}{l}{}\\
\hline
$P_{26,1}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & \\[7pt]
\hline
$P_{26,2}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$&
\\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{5}{l}{}\\
\end{tabular}
} \end{table}
\iffalse
\begin{tabular}{|c |c | c | c |}
\hline
& $a$ & $b$ & $c$ \\
\hline
$P_{26,1}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ \\[7pt]
\hline
$P_{26,2}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ & $\sqrt{1+\frac{ 1}{ \sqrt{2}}}$ \\[7pt]
\hline \end{tabular} \fi
\restoregeometry \begin{figure}
\caption{$P_{34}$}
\label{figure:p34}
\end{figure}
\begin{figure}
\caption{$P_{21}$}
\label{figure:p21}
\end{figure} \restoregeometry
\newgeometry{left=0.5cm,right=0.5cm,top=2cm,bottom=2cm} \begin{table}[H]
\resizebox*{16cm}{!}{
\renewcommand{\arraystretch}{2.7}
\begin{tabular}{|c |c | c | c | c |}
\hline
& $a$ & $b$ & $c$ & $d$ \\
\hline
$P_{34,1}$ & $ \frac{1}{4}(1+\sqrt{13})$ & $ \frac{1}{4}(1+\sqrt{13})$ & $\sqrt{\frac{1}{6}{(5+\sqrt{13})}}$ & $ \frac{1}{4}(1+\sqrt{13})$ \\[7pt]
\hline
$P_{34,2}$ & $\frac{1}{2}\sqrt{3+\sqrt{5}}$ & $\frac{1}{2}\sqrt{3+\sqrt{5}}$ &
$\frac{1}{2}\sqrt{3+\sqrt{5}}$ & $\frac{1}{2}\sqrt{3+\sqrt{5}}$ \\[7pt]
\hline
$P_{34,3}$ & $ \frac{\sqrt{5}}{2}$ & $ \sqrt{\frac{1}{10}(15-\sqrt{5})}$ & $ \frac{\sqrt{5}}{2}$ &$ \frac{\sqrt{5}}{2}$ \\[7pt]
\hline
$P_{34,4}$& $ \frac{1}{2}+\frac{1}{\sqrt{2}}$ & $ \frac{1}{2}+\frac{1}{\sqrt{2}}$ & $ \frac{1}{2}+\frac{1}{\sqrt{2}}$ & $ \frac{1}{2}+\frac{1}{\sqrt{2}}$ \\[7pt]
\hline
$P_{34,5}$ & $ \frac{1}{2}(1+\sqrt{5})$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}} $ \\[7pt]
\hline
$P_{34,6}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}} $ & $ \frac{1}{2}(1+\sqrt{5}) $ & $ \frac{1}{2}(1+\sqrt{5}) $ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ \\[7pt]
\hline
$P_{34,7}$ & $ \frac{1}{4}(1+\sqrt{13})$ & $ \frac{1}{4}(1+\sqrt{13}) $ & $ \frac{1}{4}(5+\sqrt{13})$ & $ \frac{1}{4}(1+\sqrt{13})$ \\[7pt]
\hline
$P_{34,8}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ & $ \frac{1}{4}(3+\sqrt{5}+\sqrt{3+\sqrt{5}})$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ \\[7pt]
\hline
$P_{34,9}$& $ \frac{\sqrt{5}}{2} $ & $ \frac{\sqrt{5}}{2}$ & $ \frac{1}{4}(5+\sqrt{5})$ & $ \frac{\sqrt{5}}{2}$
\\[7pt]
\hline
$P_{34,10}$ & $ \frac{1}{4}(1+\sqrt{13}) $ & $ \frac{1}{4}(1+\sqrt{13}) $ & $ \frac{1}{3}(2+\sqrt{13})$ & $ \frac{1}{4}(1+\sqrt{13})$\\[7pt]
\hline
$P_{34,11}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ & $ \frac{1}{2}(1+\sqrt{5})$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ & $ \frac{1}{2}\sqrt{3+\sqrt{5}}$ \\[7pt]
\hline
$P_{34,12}$& $ \frac{\sqrt{5}}{2}$ & $ 2-\frac{1}{\sqrt{5}}$ & $ \frac{\sqrt{5}}{2}$ & $ \frac{\sqrt{5}}{2}$ \\[7pt]
\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{5}{l}{}\\
\hline
$P_{21,1}$ & $-\frac{ 1}{2} (2 \sqrt{3+\sqrt{3}}-(2+\sqrt{3})^{3/2})$ & $\sqrt{2(2+\sqrt{3})}$ & $-\frac{ 1}{2} (2 \sqrt{3+\sqrt{3}}-(2+\sqrt{3})^{3/2})$& \\[7pt]\hline
\specialrule{0.5em}{0.5pt}{0.5pt} \multicolumn{5}{l}{}\\
\end{tabular}
} \end{table} \restoregeometry
\end{document}
\begin{document}
\title{Analysis of Schwarz methods for a hybridizable discontinuous Galerkin discretization}
\begin{abstract} Schwarz methods are attractive parallel solvers for large-scale linear systems obtained when partial differential equations are discretized. For hybridizable discontinuous Galerkin (HDG) methods, this is a relatively new field of research, because HDG methods impose continuity across elements using a Robin condition, while classical Schwarz solvers use Dirichlet transmission conditions. Robin conditions are used in optimized Schwarz methods to get faster convergence compared to classical Schwarz methods, even without overlap, when the Robin parameter is well chosen. We present in this paper a rigorous convergence analysis of Schwarz methods for the concrete case of the hybridizable interior penalty (IPH) method. We show that the penalization parameter needed for convergence of IPH leads to slow convergence of the classical additive Schwarz method, and propose a modified solver which leads to much faster convergence. Our analysis is entirely at the discrete level, and thus holds for arbitrary interfaces between two subdomains. We then generalize the method to the case of many subdomains, including cross points, and obtain a new class of preconditioners for Krylov subspace methods which exhibit better convergence properties than the classical additive Schwarz preconditioner. We illustrate our results with numerical experiments.
\end{abstract}
\begin{keywords} Additive Schwarz, optimized Schwarz, discontinuous Galerkin methods \end{keywords}
\begin{AMS} 65N22, 65F10, 65F08, 65N55, 65H10 \end{AMS}
\pagestyle{myheadings} \thispagestyle{plain}
\markboth{Martin J.~Gander and Soheil Hajian}{OSM and DG}
\section{Introduction}
We consider the elliptic model problem \begin{equation}\label{eq:pde}
\begin{array}{rcll}
\eta(x) u(x) - \nabla \cdot ( a(x) \nabla u ) &=&
f,\quad & \textrm{in $\Omega \subset \mathbb{R}^2$},\\
u &=& 0, & \textrm{on $\partial \Omega$},
\end{array} \end{equation}
in the weak sense, where $f \in \textrm{L}^2(\Omega)$, $a(x) \in L^\infty(\Omega)$ is uniformly positive, $\eta_0 \geq \eta(x) \geq 0$, and $\Omega$ is assumed for simplicity to be a convex polygon. Any discretization of this problem, for example by a finite element method (FEM) or a discontinuous Galerkin (DG) method, leads to a large sparse linear system \begin{equation} \label{eq:linsys}
A {\boldsymbol u} = {\boldsymbol f} , \end{equation}
where ${\boldsymbol u} $ is the vector of degrees of freedom representing an approximation of $u$ and $A$ represents the discretized differential operator. In this paper we consider a hybridizable interior penalty (IPH\footnote{We use the acronym IPH for {\it hybridizable interior
penalty} because this has become the common abbreviation following
its introduction in \cite{cockburn} as a member of the family of HDG
methods.}) discretization which results in a symmetric positive definite (s.p.d.)~matrix $A$. An IPH discretization seeks $u_h \in \textrm{L}^2(\Omega)$ over a triangulation of the domain, where $u_h$ is not necessarily continuous across elements. As is common for DG methods, IPH imposes the continuity of the solution approximately through penalization, i.e.~by penalizing jumps of $u_h$ across elements in the bilinear form. The penalization is controlled by a penalty parameter $\mu$.
Since the matrix $A$ of IPH is s.p.d.~and sparse, one can use the Conjugate Gradient (CG) method to solve the linear system (\ref{eq:linsys}). The convergence of CG slows down as the condition number $\kappa(A)$ grows. It is not hard to show that $\kappa(A) = O(h^{-2})$, where $h$ is the maximum diameter of the elements in the triangulation, see for instance \cite{castillo}. Therefore preconditioning is unavoidable, and domain decomposition (DD) preconditioners have been developed and studied for such discretizations, see \cite{paola2011, karakashian}. IPH was also used as a local solver to precondition classical IP discretizations \cite{blanca2}. One can also design a substructuring preconditioner for a $p$-version of IPH with poly-logarithmic growth in the condition number, see \cite{joachim1} for details. For a similar discretization where the approximation is continuous inside subdomains but discontinuous across subdomains, a substructuring preconditioner was proposed and analyzed for the $h$-version with logarithmic growth in the condition number, see \cite{dryja}.
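The growth $\kappa(A) = O(h^{-2})$ translates directly into CG iteration counts that grow under mesh refinement. A minimal numerical sketch of this effect, using the 1D finite-difference Dirichlet Laplacian as a stand-in s.p.d.~model (not the actual IPH matrix; all function names are ours):

```python
import numpy as np

def laplacian_1d(n):
    """1D Dirichlet FD Laplacian on n interior nodes, h = 1/(n+1).
    A stand-in s.p.d. model with kappa(A) = O(h^{-2})."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def cg_solve(A, b, rtol=1e-8, maxit=10_000):
    """Plain conjugate gradients; returns (x, iteration count)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    bnorm = np.linalg.norm(b)
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= rtol * bnorm:
            return x, it
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit
```

Halving $h$ roughly doubles $\sqrt{\kappa(A)}$, and the unpreconditioned iteration count grows accordingly.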
A favorite preconditioner is the additive Schwarz preconditioner, for which the set of unknowns is partitioned into overlapping or non-overlapping subsets, corresponding to subdomains with maximum diameter $H$. In this paper we only consider the non-overlapping case\footnote{There is
a subtle difference between overlap at the continuous level of the
subdomains, and the discrete level of unknowns, see
\cite{gander2008schwarz}: no overlap at the level of unknowns means
minimal overlap of one mesh size at the continuous level for
classical discretizations like finite elements or finite
differences. This becomes however even more subtle here with DG
discretizations, since the discrete unknowns are coupled through
Robin conditions, and no overlap at the level of unknowns really
means no overlap at the continuous level, see
\cite{hajian2013block}.} and for simplicity we first study only two subdomains; a generalization is given in Section \ref{sec:multi}. The non-overlapping two-subdomain decomposition results in a natural partitioning of the unknowns ${\boldsymbol u} = ({\boldsymbol u} _1, {\boldsymbol u} _2)^\top$. The solution of the linear system by the additive Schwarz method without overlap is equivalent to the block Jacobi iteration
\begin{equation} \label{eq:blockJacobi}
M \mathbf{u}^{(n+1)} = N \mathbf{u}^{(n)} + \mathbf{f},\quad
M = \MATT{A_{1}}{}{}{A_{2}},\ N = M - A. \end{equation}
The matrix $M$ is also s.p.d.~and can be used as a preconditioner for CG. It can be shown that in this case $\kappa(M^{-1} A) \leq O(h^{-1})$ in the absence of a coarse solver; see \cite{karakashian}. Preconditioned CG then satisfies the convergence factor estimate $\rho \leq \frac{\sqrt{\kappa(M^{-1} A)}-1}{\sqrt{\kappa(M^{-1} A)}+1}= 1 - O(\sqrt{h})$.
On the other hand it has been recently shown in \cite{hajian2013block} that the block Jacobi iteration in (\ref{eq:blockJacobi}) for an IPH discretization can be viewed as a discretization of a non-overlapping Schwarz method with Robin transmission conditions, i.e.~ \begin{equation}\label{eq:DD-DGH}
\begin{array}{rcllrcll}
(\eta-\Delta) u_1^{(n+1)} &=& f & \text{in $\Omega_1$},&
(\eta-\Delta) u_2^{(n+1)} &=& f & \text{in $\Omega_2$},\\
\mathcal{B}_1 u_1^{(n+1)} &=& \mathcal{B}_1 u_2^{(n)} & \text{on $ \Gamma $},&
\mathcal{B}_2 u_2^{(n+1)} &=& \mathcal{B}_2 u_1^{(n)}& \text{on $ \Gamma $},
\end{array} \end{equation} where $\mathcal{B}_i w = \mu\, w + \PDif{w}{{\boldsymbol n} _i} $, $ \Gamma $ is the interface between the two subdomains and $\mu$ is precisely the penalty parameter of the IPH discretization. This parameter $\mu$ has to be chosen such that it ensures coercivity and optimal approximation properties. For an IPH discretization, we must have $\mu=\alpha h^{-1}$ for some constant $\alpha>0$ large enough, independent of $h$; this scaling cannot be weakened, since otherwise coercivity is lost. On the other hand, optimized Schwarz theory suggests that the iteration in (\ref{eq:DD-DGH}) converges faster if $\mu=O(h^{-1/2})$, see \cite{ganderos}. In that case the contraction factor is $\rho = 1 - O(\sqrt{h})$, while with the choice $\mu = O(h^{-1})$ required by IPH we have $\rho = 1 - O(h)$.
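At a model level, the dependence of the contraction factor on $\mu$ can be seen from the classical Fourier symbol of the non-overlapping Robin Schwarz method for two half-planes, $\rho(k)=\bigl|\frac{\sqrt{k^2+\eta}-\mu}{\sqrt{k^2+\eta}+\mu}\bigr|$, maximized over interface frequencies up to $\pi/h$, in the spirit of \cite{ganderos}. The following sketch is our own illustration under these assumptions ($\eta=0$, frequency range $[\pi,\pi/h]$), not the IPH-discrete contraction factor:

```python
import numpy as np

def worst_rho(mu, h, eta=0.0, nk=200_000):
    """Largest contraction factor over interface frequencies k in
    [pi, pi/h] for the two-subdomain Robin-Schwarz symbol
    rho(k) = |(sqrt(k^2 + eta) - mu) / (sqrt(k^2 + eta) + mu)|."""
    k = np.linspace(np.pi, np.pi / h, nk)
    lam = np.sqrt(k**2 + eta)
    return float(np.max(np.abs((lam - mu) / (lam + mu))))
```

With $\mu=h^{-1}$ the gap $1-\rho$ shrinks like $O(h)$, while with $\mu=h^{-1/2}$ it shrinks only like $O(\sqrt{h})$, matching the rates quoted above.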
The challenge is therefore to design a Schwarz algorithm for IPH with convergence factor $\rho = 1 - O(\sqrt{h})$, while having the same fixed point as the original additive Schwarz or block Jacobi method for IPH. An idea for doing this can be found for Maxwell's equation in \cite{dolean}. This approach was also adopted for IPH in \cite{hajian2014}, where numerical experiments show that the convergence factor is indeed $\rho = 1 - O(\sqrt{h})$, while maintaining the same fixed point, but there is no convergence analysis.
We provide in this paper a convergence theory for Schwarz methods applied to IPH discretizations and prove these numerical observations. A similar analysis exists for classical FEM using Schur complement formulations and exploiting eigenvalues of the Dirichlet-to-Neumann (DtN) operator, see \cite{lui}. Our analysis uses similar DtN arguments, but is substantially different from \cite{lui}, since in a DG method continuity conditions are imposed only weakly. We focus in our analysis on the $h$-version with polynomial degree one, and do not study the effect of possible jumps in $a(x)$ or higher polynomial degree.
Our paper is organized as follows: in Section \ref{sec:iph} we describe two different but equivalent formulations of IPH, and construct a Schur complement system. In Section \ref{sec:tech} we provide mathematical tools to analyze Schwarz methods formulated using Schur complements. In Section \ref{sec:schwarz} we present the additive Schwarz and a new Schwarz algorithm for IPH in a two subdomain setting and prove their convergence with concrete contraction factor estimates. Section \ref{sec:multi} contains a generalization of the algorithms to the multi-subdomain case. We show in Section \ref{sec:num} numerical experiments to illustrate our analysis, and also verify numerically that the new algorithm provides a better preconditioner for Krylov subspace methods: we observe that the contraction factor is $\rho = 1 - O(h^{1/4})$ which is much faster than the CG solver preconditioned by one level additive Schwarz.
\section{Hybridizable Interior Penalty method} \label{sec:iph}
This section recalls the definition of IPH in two different but equivalent forms, namely the primal and the hybridizable formulation. In Section \ref{sec:schwarz} we then design and analyze two Schwarz methods for the hybridizable form and show that the first one is slow, being equivalent to a block Jacobi method applied to the primal form, i.e.~(\ref{eq:blockJacobi}). The second Schwarz method, however, takes advantage of the hybridizable formulation and achieves faster convergence.
IPH was first introduced in \cite{ewing} as a stabilized discontinuous finite element method and was later studied as a member of the class of hybridizable DG methods in \cite{cockburn}. It has been shown to be equivalent to the Ultra Weak Variational Formulation (UWVF) for the Helmholtz equation; see \cite{MZA:8194617}. IPH also fits into the framework developed in \cite{dgunified} for a unified analysis of DG methods. IPH is further studied in \cite{lehrenfeld2010hybrid} in the context of incompressible flows.
\subsection{Notation} \label{sec:notation}
We follow the notation introduced in \cite{dgunified}. Let
$\mathcal{T}_h=\{K\}$ be a shape-regular and quasi-uniform triangulation of the domain $\Omega$. Let $h_K$ be the diameter of an element of the triangulation defined by $h_K := \max_{x,y \in K}|x-y|$ and $h = \max_{K \in \mathcal{T}_h} h_K$. If $e$ is an edge of an element, we denote by $h_e$ the length of that edge. The quasi-uniformity of the mesh implies $h \approx h_K \approx h_e$.
We denote by $ \mathcal{E} ^0$ the set of interior edges shared by two elements in $\mathcal{T}_h$, that is \begin{equation*} \mathcal{E} ^0 := \left\lbrace e = \partial K_1 \cap \partial K_2 : K_1, K_2 \in \mathcal{T}_h,\ K_1 \neq K_2 \right\rbrace, \end{equation*}
by $ \mathcal{E} ^\partial $ the set of boundary edges, and all edges by $ \mathcal{E} := \mathcal{E} ^\partial \cup \mathcal{E} ^0$.
We introduce the broken Sobolev space $\textrm{H}^l (\mathcal{T}_h) := \prod_{K \in
\mathcal{T}_h} \textrm{H}^l(K)$
where $\textrm{H}^l(K)$ is the Sobolev space in $K \in \mathcal{T}_h$ and $l$ is a positive integer. Note that $q \in \textrm{H}^l(\mathcal{T}_h)$ is not necessarily continuous across elements. Therefore the element boundary traces of functions in $\textrm{H}^l (\mathcal{T}_h)$ belong to $ \textrm{T}( \mathcal{E} ) = \prod_{K \in \mathcal{T}_h} \textrm{L}^2( \partial K ) $, where $q \in \textrm{T}( \mathcal{E} )$ can be double-valued on $ \mathcal{E} ^0$, but is single-valued on $ \mathcal{E} ^\partial$.
We now define two trace operators: let $q \in \textrm{T}( \mathcal{E} )$ and $q_i :=
\left. q \right|_{\partial K_i}$. Then on $e = \partial K_1 \cap \partial K_2$ we define the average and jump operators \begin{equation*}
\begin{array}{lrlr}
\average{q} := \frac{1}{2} ( q_1 + q_2 ),
&
&
\jump{q} := q_1 \, {\boldsymbol n} _1 + q_2 \, {\boldsymbol n} _2,
&
\end{array} \end{equation*} where ${\boldsymbol n} _i$ is the unit outward normal from $K_i$ on $e \in \mathcal{E} ^0$. It is clear that these operators are independent of the element enumeration. Similarly for a vector-valued function ${\boldsymbol \sigma} \in \left[
\textrm{T}( \mathcal{E} ) \right]^2 $ we define on interior edges \begin{equation*}
\begin{array}{lrlr}
\average{{\boldsymbol \sigma} } := \frac{1}{2} ( {\boldsymbol \sigma} _1 + {\boldsymbol \sigma} _2 ),
&
&
\jump{{\boldsymbol \sigma} } := {\boldsymbol \sigma} _1 \cdot {\boldsymbol n} _1 + {\boldsymbol \sigma} _2 \cdot {\boldsymbol n} _2.
&
\end{array} \end{equation*} On the boundary, we set the average and jump operators to $\average{{\boldsymbol \sigma} } := {\boldsymbol \sigma} $ and $\jump{q} = q \, {\boldsymbol n} $.
We do not need to define $\average{q}$ and $\jump{{\boldsymbol \sigma} }$ on $e \in \mathcal{E} ^\partial$.
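For a scalar trace on a single interior edge in one space dimension (where the normals are $\pm 1$ and the vector-valued jump reduces to a signed number), the average and jump operators are two one-liners. A toy sketch (our own illustration, not the paper's implementation) checking that both are independent of the element enumeration, as claimed above:

```python
def average(q1, q2):
    """{q} = (q1 + q2) / 2 on an interior edge."""
    return 0.5 * (q1 + q2)

def jump(q1, n1, q2, n2):
    """[q] = q1*n1 + q2*n2; in 1D the outward normals n_i are +-1."""
    return q1 * n1 + q2 * n2

# swapping the roles of K_1 and K_2 (and hence the normals) changes nothing
assert average(2.0, 5.0) == average(5.0, 2.0)
assert jump(2.0, +1.0, 5.0, -1.0) == jump(5.0, -1.0, 2.0, +1.0)
# a continuous (single-valued) trace has zero jump
assert jump(3.0, +1.0, 3.0, -1.0) == 0.0
```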
We define a finite dimensional subspace of $\textrm{H}^l(\mathcal{T}_h)$ by \begin{equation}
V_h :=
\left\lbrace v \in \textrm{L}^2(\Omega) :
\left. v \right|_{K} \in \mathbb{P}^k(K), \forall K \in \mathcal{T}_h \right\rbrace, \end{equation} where $\mathbb{P}^k(K)$ is the space of polynomials of degree $\leq k$ in the simplex $K \in \mathcal{T}_h$. We denote boundary integrals on an edge $e \in \mathcal{E} $ by \begin{equation*}
\dotS{a}{b}_{e} := \int_{e} a \, b \quad \textrm{if } a,b \in \textrm{T}(e),
\quad
\dotS{\BO{a}}{\BO{b}}_{e} := \int_{e} \BO{a} \cdot \BO{b} \quad
\textrm{if } \BO{a},\BO{b} \in [\textrm{T}(e)]^2,
\end{equation*} and similarly for volume terms on an element $K \in \mathcal{T}_h$ \begin{equation*}
\dotV{a}{b}_{K} := \int_{K} a \, b \quad \textrm{if } a,b \in
\textrm{H}^l(K), \quad
\dotV{\BO{a}}{\BO{b}}_{K} := \int_{K} \BO{a}
\cdot \BO{b} \quad \textrm{if } \BO{a},\BO{b} \in [\textrm{H}^l(K)]^2.
\end{equation*}
If $ \Gamma $ is a subset of $ \mathcal{E} $, we denote the $\textrm{L}^2$-norm of $q \in \textrm{T}( \mathcal{E} )$ along $ \Gamma $ by $ \norm{q}_{ \Gamma }^{2} := \sum_{e \in \Gamma } \norm{q}^2_{e} $ and $\norm{q}^2_e := \dotS{q}{q}_{e}$. Similarly if $\Ti{i}$ is a subset of $\mathcal{T}_h$, we denote the $\textrm{L}^2$-norm of a $v \in \textrm{H}^l(\Ti{i})$ by $ \norm{v}_{\Ti{i}}^{2} := \sum_{K \in \Ti{i}} \norm{v}_{K}^2 $.
For $v \in \textrm{H}^1(\mathcal{T}_h)$ we define the function whose restriction to each element $K \in \mathcal{T}_h$ equals the gradient of $v$ on that element. In the literature this operator is called the piecewise gradient and is usually denoted by $\nabla_h$. For the sake of simplicity we write $\nabla v$ instead of $\nabla_h v$.
\subsection{Primal formulation} To simplify our presentation, we set $\eta \geq 0 $ to be a constant and $a(x)=1$ in the model problem (\ref{eq:pde}). Let $u,v \in \textrm{H}^{2}(\mathcal{T}_h)$, then the IPH bilinear form of the model problem (\ref{eq:pde}) is defined as
\begin{equation}
\begin{array}{rcl}
a(u,v) &:=& \eta \dotV{u}{v}_{\mathcal{T}_h} + \dotV{\nabla u}{\nabla v}_{\mathcal{T}_h}
- \dotS{\average{\nabla u}}{\jump{v}}_{ \mathcal{E} }
- \dotS{\average{\nabla v}}{\jump{u}}_{ \mathcal{E} }
\\
&& + \dotS{\frac{\mu}{2} \jump{u}}{\jump{v}}_{ \mathcal{E} } -
\dotS{\frac{1}{2\mu} \jump{\nabla u}}{\jump{\nabla v}}_{ \mathcal{E} ^0},
\end{array} \end{equation}
where $\mu \in \textrm{T}( \mathcal{E} )$, $\left. \mu \right|_{e} = {\alpha}{h_e^{-1}}$ and $\alpha>0$. Observe that $a(\cdot,\cdot)$ is symmetric. The definition of the IPH bilinear form is different from the classical Interior Penalty (IP) method only in the last term, i.e.~the last term in $a(\cdot,\cdot)$ is not present in IP.
There are two natural energy norms which are equivalent at the discrete level. Let $u \in V(h) := V_h + \textrm{H}^2(\Omega) \cap \textrm{H}^1_0(\Omega) \subset \textrm{H}^2(\mathcal{T}_h)$ then
\begin{equation}
\begin{array}{lcl}
\DGnorm{u}{}^{2} &:=& \eta \norm{u}_{\mathcal{T}_h}^{2} +
\norm{\nabla u}_{\mathcal{T}_h}^{2} +
\sum_{e \in \mathcal{E} } \mu_e \norm{ \jump{u} }_{e}^{2},
\\
\DGnorm{u}{,\ast}^{2} &:=&
\DGnorm{u}{}^{2} + \sum_{K \in \mathcal{T}_h} h_K^2 |u|_{K,2}^2.
\end{array} \end{equation}
One can show that they are equivalent at the discrete level by a local application of the inverse inequality (\ref{eq:invineq}).
\begin{proposition} Let $u \in V_h$. Then we have \begin{equation*}
\DGnorm{u}{}^{2} \leq \DGnorm{u}{,\ast}^{2} \leq {C}^2 \DGnorm{u}{}^{2}, \end{equation*} where ${C}^2 > 1 $ and independent of $h$ and $\alpha$. \end{proposition}
The norm $\DGnorm{\cdot}{,\ast}$ provides a natural norm for boundedness and $\DGnorm{\cdot}{}$ can be used for showing coercivity. The main ingredients for coercivity are the following inequalities which hold for all $u \in V_h$: \begin{equation}
\label{eq:coerc1}
\begin{array}{rcl}
2 \dotS{\average{\nabla u} }{ \jump{u} }_{ \mathcal{E} } &\leq&
\frac{1}{2} \norm{ \nabla u }^{2}_{\mathcal{T}_h}
+
\sum_{e \in \mathcal{E} } \frac{C_1}{h_e} \norm{ \jump{u} }_{e}^{2},
\\
\dotS{\frac{1}{2\mu} \jump{\nabla u}}{\jump{\nabla u}}_{ \mathcal{E} ^0} &\leq&
\frac{C_2}{\alpha} \norm{ \nabla u }^2_{\mathcal{T}_h},
\end{array} \end{equation} where $C_1$ and $C_2$ are both independent of $h$ and $\alpha$ but depend on the polynomial degree. This can be obtained from the trace inequality \begin{equation}
\label{eq:warburton}
\norm{w}_{\partial K}^2 \leq c \frac{k^2}{h} \norm{w}_K^2, \quad
\forall w \in \mathbb{P}^k(K), \end{equation} where $k$ is the polynomial degree, for details see \cite{hesthaven, dgunified}.
\begin{proposition} If $\mu = \alpha h^{-1}$ with $\alpha>0$ sufficiently large, then we have \begin{equation*}
\begin{array}{rcccll}
&& a(u,v) &\leq&
\overline{C} \DGnorm{u}{,\ast} \DGnorm{v}{,\ast} & \forall u,v \in V(h),
\\
\underline{c} \, C^{-2} \DGnorm{u}{,\ast}^2 \leq
\underline{c} \DGnorm{u}{}^2 & \leq & a(u,u) & & & \forall u \in V_h,
\end{array} \end{equation*} where $\underline{c} = \min\{ \frac{1}{2} - \frac{C_2}{\alpha} , 1 - \frac{C_1}{\alpha} \} < 1$ , $\overline{C} = 1 + \frac{C_3}{\alpha} > 1$ and both constants are independent of $h$. \end{proposition}
Note that coercivity holds only for $u \in V_h$ and that $\alpha>0$ has to be large enough to result in a positive $\underline{c}$. Since $C_1$ and $C_2$ come from the trace inequality, we can choose $\alpha = O(k^2)$ where $k$ is the degree of the polynomials in the simplex. Throughout this paper we assume that $\alpha$ is chosen large enough to ensure that any term of type $1 - \frac{c}{\alpha}$ (with $c>0$, independent of $h$ and $\alpha$) is positive.
Having established that $a(\cdot,\cdot)$ is bounded and coercive, we obtain that the following approximation problem has a unique solution: find $u_h \in V_h$ such that \begin{equation} \label{eq:varIPH}
a(u_h,v) = \dotV{f}{v}_{\mathcal{T}_h}, \quad \forall v \in V_h. \end{equation} Assuming the exact solution is regular enough, it can be shown that \begin{equation*}
\begin{array}{lcl}
\DGnorm{u_h - u}{,\ast} &\leq& c\, h^{k} | u |_{k+1,\Omega},
\\
\norm{u_h - u}_{0} &\leq& c\, h^{k+1} | u |_{k+1,\Omega},
\end{array} \end{equation*}
i.e.~IPH has optimal approximation order \cite{dgunified,lehrenfeld2010hybrid}. We emphasize that without setting $\mu = \alpha h^{-1}$, the coercivity and optimal approximation properties are lost.
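The optimal orders $h^k$ and $h^{k+1}$ above are typically verified numerically by measuring the slope of the error against $h$ on a sequence of refined meshes. A small helper for that (a generic utility of our own, not code from the paper):

```python
import numpy as np

def observed_rate(hs, errs):
    """Least-squares slope of log(err) versus log(h): the observed
    convergence order over a sequence of mesh sizes."""
    return float(np.polyfit(np.log(hs), np.log(errs), 1)[0])
```

For $k=1$ one expects an observed rate close to $1$ in the energy norm $\DGnorm{\cdot}{,\ast}$ and close to $2$ in the $\textrm{L}^2$-norm.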
\subsection{Hybridizable formulation}
In this section we exploit the fact that IPH is a hybridizable method. A method is hybridizable if one can eliminate the degrees of freedom inside each element to obtain a linear system in terms of a single-valued function along the edges, say $\lambda_h$. Not all DG methods have this property, for example classical IP is not hybridizable. A unified hybridization procedure for DG methods has been introduced and studied in \cite{cockburn} where IPH is also included.
We introduce the general setting by decomposing the domain into two non-overlapping subdomains $\Omega_1$ and $\Omega_2$. Denoting the interface by $ \Gamma := \overline{\Omega}_1 \cap \overline{\Omega}_2$, we assume $ \Gamma \subset \mathcal{E} ^0$, i.e.~the cut does not go through any element of the triangulation. This results in a natural partitioning of $\mathcal{T}_h$ into $\Ti{1}$ and $\Ti{2}$, which do not overlap but share $ \Gamma $ as a boundary; see Figure \ref{fig:ddmesh} for an example.
\begin{figure}
\caption{An unstructured mesh with the interface $\Gamma$
(thick-dashed).}
\label{fig:ddmesh}
\end{figure} We denote by $H$ the maximum diameter of the subdomains and by $H_\Omega$ the diameter of the mono-domain $\Omega$. We assume $0 < h \leq H < H_\Omega$.
We introduce local spaces on $\Omega_1$ and $\Omega_2$ by \begin{equation}
V_{h,i} := \big\{ v \in \textrm{L}^2(\Omega_i) : \left. v \right|_{K \in \Ti{i}} \in \mathbb{P}^k(K) \big\},
\text{ for } i=1,2. \end{equation}
Note that this domain decomposition setting implies $V_h = V_{h,1} \oplus V_{h,2}$. We define on the interface the space of broken single-valued functions by \begin{equation}
\Lambda_h :=
\big\{ \varphi \in \textrm{L}^2( \Gamma ) : \left. \varphi \right|_{e \in \Gamma } \in \mathbb{P}^k(e) \big\}. \end{equation}
For the sake of simplicity we denote the restriction of $v \in V_{h}$ to $\Omega_i$ by $v_i \in V_{h,i}$. Observe that the trace of $v_i \in V_{h,i}$ on $ \Gamma $ belongs to $\Lambda_h$.
Let $(u,\lambda), (v,\varphi) \in V_{h} \times \Lambda_h$ and consider the symmetric bilinear form \begin{equation}
\tilde{a}( (u,\lambda), (v,\varphi) ) := \tilde{a}_{ \Gamma }(\lambda,\varphi) +
\sum_{i=1}^{2} \Big( \tilde{a}_i(u_i,v_i)
+ \tilde{a}_{i \Gamma }(v_i,\lambda) + \tilde{a}_{i \Gamma }(u_i,\varphi) \Big), \end{equation} where \begin{equation}
\label{eq:deftildea}
\begin{array}{rcl}
\tilde{a}_{ \Gamma }(\lambda,\varphi) &:=& 2 \dotS{\mu \, \lambda}{\varphi}_{ \Gamma },
\\
\tilde{a}_{i \Gamma }(v_i,\varphi) &:=&
\dotS{ \PDif{v_i}{{\boldsymbol n} _i} - \mu v_i }{\varphi}_{ \Gamma },
\end{array} \end{equation} and \begin{equation}
\label{eq:IPHloc}
\begin{array}{rcl}
\tilde{a}_i(u_i,v_i) &:=&
\eta \dotV{u_i}{v_i}_{\Ti{i}} + \dotV{\nabla u_i}{\nabla v_i}_{\Ti{i}}
- \dotS{\average{\nabla u_i}}{\jump{v_i}}_{ \mathcal{E} _i^0}
- \dotS{\average{\nabla v_i}}{\jump{u_i}}_{ \mathcal{E} _i^0}
\\
&& + \dotS{\frac{\mu}{2} \jump{u_i}}{\jump{v_i}}_{ \mathcal{E} _i^0}
- \dotS{\frac{1}{2\mu} \jump{\nabla u_i}}{\jump{\nabla v_i}}_{ \mathcal{E} _i^0}
\\
&& - \dotS{ \PDif{u_i}{{\boldsymbol n} _i}}{v_i}_{\partial \Omega_i}
- \dotS{ \PDif{v_i}{{\boldsymbol n} _i}}{u_i}_{\partial \Omega_i}
+ \dotS{ {\mu} \, {u_i}}{{v_i}}_{\partial \Omega_i}.
\end{array} \end{equation}
This is an IPH discretization of the model problem in $\Omega_i$, where $\partial \Omega_i$ is treated as a Dirichlet boundary. Therefore $\tilde{a}_i(\cdot,\cdot)$ inherits the coercivity and continuity of the original bilinear form $a(\cdot,\cdot)$.
The global bilinear form $\tilde{a}(\cdot,\cdot)$ is also coercive at the discrete level if $\alpha>0$ is sufficiently large, independently of $h$. To see this we introduce an energy norm, defined for all $(v_i,\varphi) \in V_{h,i} \times \Lambda_h$ by \begin{equation}
\Bnorm{(v_i,\varphi)}{,i}^{2} :=
\eta \Lnorm{ v_i }_{\Ti{i}}^{2} +
\Lnorm{ \nabla v_i }_{\Ti{i}}^{2} +
\mu \Lnorm{ \jump{v_i} }_{ \mathcal{E} _i \setminus \Gamma }^{2}
+ \mu \Lnorm{ v_i - \varphi }_{ \Gamma }^{2}, \quad (i=1,2). \end{equation}
Then, by the definition of $\tilde{a}(\cdot,\cdot)$, for all $(v,\varphi) \in V_h \times \Lambda_h$ we have \begin{equation} \label{eq:coercAtilde}
\begin{array}{rcl}
\tilde{a}( (v,\varphi), (v,\varphi) ) &=&
\tilde{a}_ \Gamma (\varphi,\varphi) + \sum_{i=1}^2
\big( \tilde{a}_i(v_i,v_i) + 2 \tilde{a}_{i \Gamma }(v_i,\varphi) \big),
\\
&=& \sum_{i=1}^2 \big( \tilde{a}_i(v_i,v_i) + 2 \tilde{a}_{i \Gamma }(v_i,\varphi)
+ \half \tilde{a}_ \Gamma (\varphi,\varphi) \big).
\end{array} \end{equation}
We can bound the contribution of each subdomain from below separately: \begin{equation*}
\begin{array}{rcl}
\tilde{a}( (v,\varphi), (v,\varphi) ) &=&
\sum_{i=1}^2 \eta \norm{v_i}_{\Ti{i}}^2 +
\norm{\nabla v_i}_{\Ti{i}}^2
\\
&& \quad
- 2 \dotS{\average{\nabla v_i} }{ \jump{v_i} }_{ \mathcal{E} _i \setminus \Gamma }
+ \frac{\mu}{2} \norm{\jump{v_i}}_{ \mathcal{E} _i \setminus \Gamma }^2
- \frac{1}{2 \mu} \norm{ \jump{\nabla v_i} }_{ \mathcal{E} _i^0}^2
\\
&& \quad
-2 \dotS{\PDif{v_i}{{\boldsymbol n} _i}}{v_i - \varphi}_ \Gamma
+ \mu \norm{ v_i - \varphi }_ \Gamma ^2,
\\
&\geq& c \sum_{i=1}^2 \Bnorm{ (v_i, \varphi) }{,i}^2,
\end{array} \end{equation*}
where we used the inverse inequalities (\ref{eq:warburton}) for terms acting on the interface and (\ref{eq:coerc1}) for terms acting inside the subdomains. Here $0<c<1$ is a constant independent of $h$. Note that we proved the coercivity in a subdomain-by-subdomain fashion by splitting the $\tilde{a}_ \Gamma (\cdot,\cdot)$ term between the two subdomains.
Consider the following discrete problem: find $(u_h,\lambda_h) \in V_h \times \Lambda_h$ such that \begin{equation}
\label{eq:varIPH2}
\tilde{a}( (u_h,\lambda_h), (v,\varphi) ) =
\dotV{f}{v}_{\mathcal{T}_h}, \quad \forall (v,\varphi) \in V_h \times \Lambda_h, \end{equation} which has a unique solution since $\tilde{a}(\cdot,\cdot)$ is coercive on $V_h \times \Lambda_h$. One can eliminate the interface variable, $\lambda_h$, and obtain a variational problem in terms of $u_h$ only. It turns out that this coincides with the variational problem (\ref{eq:varIPH}); for a proof see \cite{lehrenfeld2010hybrid}.
The advantage of the variational problem (\ref{eq:varIPH2}) is that the subproblems communicate only through the auxiliary unknown $\lambda_h$. Therefore we can eliminate the interior unknowns, $u_i$, and obtain a Schur complement system. If we test (\ref{eq:varIPH2}) with $v_i\not=0$, $v_j=0$ $(j\not=i)$, $\varphi = 0$ and assume that $\lambda_h$ is known, we obtain a local problem: find $u_{i} \in V_{h,i}$ such that \begin{equation} \label{eq:harmonicsat}
\tilde{a}_i(u_i,v_i) + \tilde{a}_{i \Gamma }(v_i,\lambda_h) =
\dotV{f}{v_i}_{\Ti{i}}, \quad \forall v_i \in V_{h,i}. \end{equation} This is an IPH discretization of the continuous problem \begin{equation*}
\begin{array}{rcll}
(\eta-\Delta) u &=& f,\quad & \text{in $\Omega_i$},\\
u &=& \lambda_h, \quad & \text{on $ \Gamma $}, \\
u &=& 0, & \text{on $\partial \Omega_i \setminus \Gamma$}.
\end{array} \end{equation*} However the boundary condition on $ \Gamma $ is imposed weakly and therefore
$u_i |_{ \Gamma } \not = \lambda_h$ in the strong sense, see \cite{cockburn,hajian2013block,lehrenfeld2010hybrid}.
\subsection{Schur complement formulation} \label{sec:schur}
We choose nodal basis functions for $\mathbb{P}^k(K)$ and denote the space of degrees of freedom (DOFs) of $V_h$ by $V$, and similarly for the subspaces by $\{ V_i \}$. The variational form in (\ref{eq:varIPH}) is equivalent to the linear system $A {\boldsymbol u} = {\boldsymbol f} $, where $A$ is the system matrix and ${\boldsymbol u} \in V$ collects the DOFs of the approximation $u_h \in V_h$. We can partition ${\boldsymbol u} $ into $\{ {\boldsymbol u} _i\}$ where ${\boldsymbol u} _i$ corresponds to the DOFs of $u_{i} \in V_{h,i}$. Then we can arrange the entries of $A$ and rewrite the linear system as \begin{equation}
\MATT{A_1}{A_{12}}{A_{21}}{A_{2}} \Arr{{\boldsymbol u} _1}{{\boldsymbol u} _2} = \Arr{{\boldsymbol f} _1}{{\boldsymbol f} _2}.
\label{eq:linsyspart} \end{equation} We use nodal basis functions for $\Lambda_h$ and denote by ${\boldsymbol \lambda} $ the corresponding DOFs for $\lambda_h \in \Lambda_h$. Then the variational form (\ref{eq:varIPH2}) can be written as \begin{equation} \label{eq:DDshur}
\MATnine{\tilde{A}_1}{}{\tilde{A}_{1 \Gamma }}{}{\tilde{A}_2}{\tilde{A}_{2 \Gamma }}{\tilde{A}_{ \Gamma 1}}{\tilde{A}_{ \Gamma 2}}{\tilde{A}_{ \Gamma }}
\Arrtri{{\boldsymbol u} _1}{{\boldsymbol u} _2}{{\boldsymbol \lambda} }
= \Arrtri{{\boldsymbol f} _1}{{\boldsymbol f} _2}{0}, \end{equation} where $\tilde{A}_{ \Gamma i}^{} = \tilde{A}_{i \Gamma }^{\top}$. Since this matrix is s.p.d.~and the same holds also for its diagonal blocks, we can form a Schur complement system. We define
$ \tilde{B}_i := \tilde{A}_{ \Gamma i}^{} \tilde{A}_{i}^{-1} \tilde{A}_{i \Gamma }^{}
$ and $
\BO{g}_{ \Gamma }^{} := - \sum_{i=1}^{2} \tilde{A}_{ \Gamma i}^{} \tilde{A}_{i}^{-1} {\boldsymbol f} _i^{}. $
Then the Schur complement system reads
\begin{equation} \label{eq:shur}
\tilde{S}_ \Gamma {\boldsymbol \lambda} := \Big( \tilde{A}_{ \Gamma } - \sum_{i=1}^{2} \tilde{B}_i \Big) {\boldsymbol \lambda} = \BO{g}_{ \Gamma }. \end{equation}
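At the algebraic level, the reduction to (\ref{eq:shur}) can be illustrated with generic s.p.d. stand-in blocks (random matrices below, not the actual IPH matrices): one forms $\tilde{B}_i$ and $\BO{g}_\Gamma$, solves for ${\boldsymbol \lambda}$, and recovers the interior unknowns by back-substitution; the result agrees with a direct solve of the full block system.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 4, 5, 3                       # sizes of u_1, u_2 and lambda

def spd(n):                               # random s.p.d. diagonal block
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A1, A2, AG = spd(n1), spd(n2), spd(m)
A1G = 0.3 * rng.standard_normal((n1, m))  # coupling blocks A_{i,Gamma}
A2G = 0.3 * rng.standard_normal((n2, m))
f1, f2 = rng.standard_normal(n1), rng.standard_normal(n2)

# B_i = A_{Gamma,i} A_i^{-1} A_{i,Gamma} and the reduced right-hand side
B1 = A1G.T @ np.linalg.solve(A1, A1G)
B2 = A2G.T @ np.linalg.solve(A2, A2G)
g = -(A1G.T @ np.linalg.solve(A1, f1) + A2G.T @ np.linalg.solve(A2, f2))

# Schur complement solve for lambda, then local back-substitution
S = AG - B1 - B2
lam = np.linalg.solve(S, g)
u1 = np.linalg.solve(A1, f1 - A1G @ lam)
u2 = np.linalg.solve(A2, f2 - A2G @ lam)
```

The local solves for $u_1$, $u_2$ are independent and can run in parallel, which is the practical appeal of the Schur complement formulation.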
\begin{definition}[discrete harmonic extension]\label{HarmExtDef} For all $\varphi \in \Lambda_h$, we denote by $\mathcal{H}_i(\varphi) \in V_{h,i}$ the discrete harmonic extension into $\Omega_i$, \begin{equation}
\mathcal{H}_i(\varphi) \equiv - \tilde{A}_{i}^{-1} \tilde{A}_{i \Gamma }^{} {\boldsymbol \varphi} . \end{equation} The corresponding $\varphi$ is called {\normalfont generator}. In other words $u_{i} := \mathcal{H}_i(\varphi)$ is an approximation obtained from the IPH discretization in $\Omega_i$ using $\varphi$ as Dirichlet data; i.e.~$ \tilde{A}_{i} {\boldsymbol u} _i + \tilde{A}_{i \Gamma } {\boldsymbol \varphi} = 0 $. \end{definition}
The following result shows that an application of $\tilde{B}_i {\boldsymbol \lambda} $ can be viewed as finding the harmonic extension, $u_i := \mathcal{H}_i(\lambda_h)$, and then evaluating a ``Robin-like trace'' on the interface. \begin{proposition} \label{prop:Bopt} Let $\lambda_h \in \Lambda_h$ and define its harmonic extension by $u_i := \mathcal{H}_i(\lambda_h)$. Then
$ {\boldsymbol \varphi} ^{\top} \tilde{B}_i {\boldsymbol \lambda} = \dotS{ \mu u_i - \PDif{u_i}{{\boldsymbol n} _i} }{\varphi}_ \Gamma $ for all $\varphi \in \Lambda_h$.
\end{proposition} \begin{proof} Let $u_i := \mathcal{H}_i(\lambda_h)$. Then by definition of $\tilde{B}_i$ and $\tilde{a}_{i \Gamma }(\cdot,\cdot)$ we have \begin{equation*}
{\boldsymbol \varphi} ^{\top} \tilde{B}_i^{} {\boldsymbol \lambda} = {\boldsymbol \varphi} ^{\top} \tilde{A}_{ \Gamma i}^{} \tilde{A}_{i}^{-1} \tilde{A}_{i \Gamma }^{} {\boldsymbol \lambda}
= - {\boldsymbol \varphi} ^{\top} \tilde{A}_{ \Gamma i}^{} {\boldsymbol u} _i^{} =
\dotS{ \mu u_i - \PDif{u_i}{{\boldsymbol n} _i} }{\varphi}_ \Gamma , \end{equation*} for all $\varphi \in \Lambda_h$, which completes the proof, since $\tilde{A}_{ \Gamma i}^{} = \tilde{A}_{i \Gamma }^{\top}$. \quad \end{proof}
\section{Properties of the Schur complement and technical tools} \label{sec:tech}
The main goal of this section is to provide estimates for the minimum and maximum eigenvalues of $\tilde{S}_ \Gamma $ and of $\tilde{B}_i$ for $i=1,2$. We use the estimates for the operators $\tilde{B}_i$ to prove convergence of the Schwarz method and to provide the contraction factor later in Section \ref{sec:schwarz}. In particular we prove in this section that the following estimates hold for all $\varphi \in \Lambda_h$: \begin{eqnarray}
\label{eq:Best}
c_B \, \mu \norm{\varphi}_ \Gamma ^2 &\leq {\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi}
\leq& \Big( 1 - C_B \frac{h}{H \alpha} \Big) \mu \norm{\varphi}_ \Gamma ^2,
\\
\label{eq:Sest}
c \frac{H}{H_\Omega^2} \norm{\varphi}_ \Gamma ^2 &\leq {\boldsymbol \varphi} ^\top \tilde{S}_ \Gamma {\boldsymbol \varphi}
\leq& C \frac{\alpha}{h} \norm{\varphi}_ \Gamma ^2, \end{eqnarray} where all constants are positive and independent of $h$, $H$ and $H_\Omega$. Since $\tilde{S}_ \Gamma $ and $\tilde{B}_i$ are symmetric, we can use Rayleigh quotient arguments and obtain an estimate for the minimum and maximum eigenvalues. One can also obtain an estimate with polynomial degree dependency using the techniques of this section.
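The Rayleigh quotient argument invoked above is generic: for a symmetric matrix $M$, the quotient ${\boldsymbol\varphi}^\top M {\boldsymbol\varphi} / {\boldsymbol\varphi}^\top {\boldsymbol\varphi}$ ranges over $[\lambda_{\min}, \lambda_{\max}]$, so two-sided quadratic-form bounds translate directly into eigenvalue bounds. A small numpy illustration (with a generic symmetric matrix, not one of the operators above):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal((6, 6))
M = R + R.T                               # generic symmetric test matrix

w, V = np.linalg.eigh(M)                  # eigenvalues in increasing order

# Rayleigh quotients at many random directions lie in [w[0], w[-1]] ...
phis = rng.standard_normal((6, 1000))
rq = np.einsum('ij,ij->j', phis, M @ phis) / np.einsum('ij,ij->j', phis, phis)

# ... and the endpoints are attained at the extremal eigenvectors.
rq_min = V[:, 0] @ M @ V[:, 0]
rq_max = V[:, -1] @ M @ V[:, -1]
```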
The only constraint on the shape of the subdomains is that they be star-shaped. To prove the above estimates we need trace and Poincar\'e inequalities for totally discontinuous functions. The following trace estimate is due to Feng and Karakashian \cite[Lemma
3.1]{karakashian}. The Poincar\'e inequality is due to Brenner, see \cite{brenner-poincare}.
\begin{lemma}[Trace inequality] \label{lemma:karakashian}
Let $D$ be a star-shaped domain with diameter $H_D$, and
triangulation $\mathcal{T}_h$. Then, for any $u \in {\normalfont
\textrm{H}}^1(\mathcal{T}_h)$, we have
\begin{equation*}
\norm{ u }_{\partial D}^{2} \leq c
\Big[ H_D^{-1} \norm{u}_{D}^{2} + H_D^{} \big( \norm{ \nabla u }_{D}^{2} +
h^{-1} \norm{\jump{u}}_{ \mathcal{E} \setminus \partial D}^{2} \big) \Big].
\end{equation*} \end{lemma}
\begin{lemma}[Poincar\'e inequality]\label{lemma:poincare} Let $D$ be an open connected polygonal domain with diameter $H_D$, and triangulation $\mathcal{T}_h$. Then, for any $u \in {\normalfont \textrm{H}}^1(\mathcal{T}_h)$ we have \begin{equation*}
\norm{ u }_{D}^2 \leq c H_D^2
\Big[ \norm{ \nabla u }_{D}^2
+ h^{-1} \norm{ \jump{ u } }_{ \mathcal{E} \setminus \partial D}^2
+ h^{-1} \norm{ { u } }_{ \nu}^2
\Big], \end{equation*} where $\nu$ is a measurable subset of $\partial D$ with nonzero measure. \end{lemma}
\subsection{Eigenvalue estimates for $\tilde{B}_i$}
In order to obtain estimates for the eigenvalues of the operator $\tilde{B}_i$, we first recall Definition \ref{HarmExtDef} of a harmonic extension: $u_i \in V_{h,i}$ is called the harmonic extension of $\varphi \in \Lambda_h$ if it satisfies $\tilde{A}_i {\boldsymbol u} _i + \tilde{A}_{i \Gamma } {\boldsymbol \varphi} = 0$. Now multiplying this relation by ${\boldsymbol u} _i^\top$ from the left we get
\begin{equation*}
\begin{array}{llcl}
&
{\boldsymbol u} _i^\top \tilde{A}_i^{} {\boldsymbol u} _i^{} + {\boldsymbol u} _i^\top \tilde{A}_{i \Gamma }^{} {\boldsymbol \varphi} ^{}
&=& 0
\\
\Leftrightarrow &
{\boldsymbol u} _i^\top \tilde{A}_i^{} {\boldsymbol u} _i^{} - {\boldsymbol \varphi} ^\top \tilde{A}_{ \Gamma i}^{}
\tilde{A}_i^{-1} \tilde{A}_{i \Gamma }^{} {\boldsymbol \varphi} ^{}
&=& 0
\\
\Leftrightarrow &
{\boldsymbol u} _i^\top \tilde{A}_i^{} {\boldsymbol u} _i^{} - {\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} &=& 0,
\end{array} \end{equation*} where we used ${\boldsymbol u} _i^{} = - \tilde{A}_i^{-1} \tilde{A}_{i \Gamma }^{} {\boldsymbol \varphi} $, $\tilde{A}_{ \Gamma i}^{} = \tilde{A}_{i \Gamma }^\top$ and the definition of $\tilde{B}_i$. Hence if $u_i = \mathcal{H}_i(\varphi)$ then we have \begin{equation} \label{eq:BtoA}
{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} = \tilde{a}_i(u_i, u_i). \end{equation} Now recall that $\tilde{a}_i(\cdot,\cdot)$ is coercive and bounded over $V_{h,i}$, therefore $\underline{c} \DGnorm{u_i}{}^2 \leq \tilde{a}_i(u_i,u_i) \leq \overline{C} \DGnorm{u_i}{}^2$. Thus if we relate the energy norm of the harmonic extension, $u_i := \mathcal{H}_i(\varphi) \in V_{h,i}$, to the $\textrm{L}^2$-norm of $\varphi$ we obtain the desired estimate (\ref{eq:Best}). More precisely we can show that the estimate \begin{equation} \label{eq:harmIneq}
{c}_{\mathcal{H}} \cdot \mu \norm{ \varphi }_{ \Gamma }^{2} \leq
\DGnorm{ u_i }{}^{2}
\leq {C}_{\mathcal{H}} \cdot \mu \norm{ \varphi }_{ \Gamma }^{2} \end{equation} holds, where $0 < c_\mathcal{H} < 1$ and $C_\mathcal{H} > 1$ are constants independent of $h$. Observe that $C_\mathcal{H} > 1$ while the upper bound estimate in (\ref{eq:Best}) is less than one. We show later how one can obtain a sharp upper bound estimate as in (\ref{eq:Best}).
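The identity (\ref{eq:BtoA}) is purely algebraic and holds for any s.p.d. matrix in place of $\tilde{A}_i$; a small numpy check with random stand-in blocks (not the actual IPH matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
R = rng.standard_normal((n, n))
Ai = R @ R.T + n * np.eye(n)              # s.p.d. stand-in for A_i
AiG = rng.standard_normal((n, m))         # stand-in for A_{i,Gamma}
phi = rng.standard_normal(m)

u = -np.linalg.solve(Ai, AiG @ phi)       # discrete harmonic extension of phi
Bi = AiG.T @ np.linalg.solve(Ai, AiG)     # B_i = A_{Gamma,i} A_i^{-1} A_{i,Gamma}

energy = u @ (Ai @ u)                     # a_i(u_i, u_i) in matrix form
quad = phi @ (Bi @ phi)                   # phi^T B_i phi
```

Indeed, substituting ${\boldsymbol u}_i = -\tilde{A}_i^{-1}\tilde{A}_{i\Gamma}{\boldsymbol\varphi}$ into ${\boldsymbol u}_i^\top \tilde{A}_i {\boldsymbol u}_i$ collapses to ${\boldsymbol\varphi}^\top \tilde{B}_i {\boldsymbol\varphi}$.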
Let us start with the lower bound of inequality (\ref{eq:harmIneq}). First we introduce an extension by zero operator $ {\boldsymbol \theta}_i : \Lambda_h \rightarrow V_{h,i} $ which is defined for all $\varphi \in \Lambda_h$ as \begin{equation*}
{\boldsymbol \theta}_i( \varphi ) :=
\left\{
\begin{array}{ll}
\varphi & \text{at nodes on edges belonging to $ \Gamma $},
\\
0 & \text{at all other nodes}.
\end{array}
\right. \end{equation*}
For a graphical illustration see Figure \ref{fig:extzero}.
\begin{figure}
\caption{Illustration of the extension by zero, ${\boldsymbol \theta}_i(\varphi)$,
for elements which share an edge with the interface,
e.g.~$\{K_1,K_3\}$, and those which do not, e.g.~$K_2$.}
\label{fig:extzero}
\end{figure}
Note that there are elements like $K_2$ which physically share a node and not an {\it edge} with the interface, but we leave ${\boldsymbol \theta}_i(\varphi)$ equal to zero in $K_2$. More precisely, ${\boldsymbol \theta}_i(\varphi)$ is non-zero only on those elements which share an edge with the interface.
We show in the Appendix, see also \cite{qin}, that in an element, $K \in \Ti{i}$, with an edge $e \in \Gamma $ we have \begin{equation} \label{eq:zoptineq}
\begin{array}{lcl}
\norm{ {\boldsymbol \theta}_i (\varphi ) }_{K}^2
&\leq& C_3\, h \norm{ \varphi }_{e}^2,
\\
\norm{ \nabla {\boldsymbol \theta}_i( \varphi ) }_{K}^2 &\leq& {C_4}{h^{-1}}
\norm{ \varphi }_{e}^2,
\\ \norm{ \jump{ {\boldsymbol \theta}_i(\varphi ) } }_{ \mathcal{E} _i}^2 &\leq& C_5 \norm{ \varphi
}_{ \Gamma }^{2},
\end{array} \end{equation} where $C_3>0$, $C_4>0$ and $C_5 \geq 1$ are all independent of $h$. This yields the following result, which relates the energy of the extension by zero to its $\textrm{L}^2$-norm on the interface. \begin{lemma} \label{lemma:Zoptphi}
Let $\varphi \in \Lambda_h$ and ${\boldsymbol \theta}_i( \varphi )$ be its extension
by zero into $\Omega_i$. We have
\begin{equation*}
\DGnorm{ {\boldsymbol \theta}_i(\varphi) }{}^{2} \leq \mu\, C_\theta \, \norm{
\varphi }_{ \Gamma }^{2},
\end{equation*}
where $C_\theta = C_3 \eta + C_4 \alpha^{-1} + C_5 > 1$. \end{lemma}
\begin{proof}
First note that by definition ${\boldsymbol \theta}_i(\varphi)$ and $ \nabla
{\boldsymbol \theta}_i(\varphi) $ are non-zero only on those elements which share
an edge with the interface. We call them $\{ K_{ \Gamma } \} \subset
\Ti{i}$. Then we have
\begin{equation*}
\begin{array}{rcll}
\DGnorm{ {\boldsymbol \theta}_i(\varphi) }{}^2 &=&
\displaystyle
\sum_{K \in \{K_{ \Gamma } \}} \eta \norm{ {\boldsymbol \theta}_i( \varphi ) }_{K}^2 +
\norm{ \nabla {\boldsymbol \theta}_i( \varphi ) }_{K}^2 +
\mu \norm{ \jump{ {\boldsymbol \theta}_i(\varphi) } }_{ \mathcal{E} _i}^{2}
\\
&\leq&
C_3 \, \eta \, h \norm{ \varphi }_{ \Gamma }^{2} +
\frac{C_4}{{h}} \norm{ \varphi }_{ \Gamma }^{2} +
C_5 \, \mu \norm{ \varphi }_{ \Gamma }^{2}
\\
&\leq&
\mu \left( C_3 \, \eta + \frac{C_4}{\alpha} + C_5 \right) \norm{ \varphi }_{ \Gamma }^{2},
\end{array} \end{equation*} which completes the proof with $C_\theta := C_3 \, \eta + \frac{C_4}{\alpha} + C_5 > 1$. \quad \end{proof}
Now we are able to relate the energy of a harmonic extension, $u_i := \mathcal{H}_i(\varphi)$, to the $\textrm{L}^2$-norm of $\varphi$ on the interface.
\begin{lemma} \label{lemma:trinvphi}
Let $\varphi \in \Lambda_h$ and $u_i := \mathcal{H}_i(\varphi)$ be its
harmonic extension into $\Omega_i$. Then we have
\begin{equation*}
{c}_{\mathcal{H}} \cdot \mu \norm{ \varphi }_{ \Gamma }^{2} \leq \DGnorm{ u_i }{}^{2},
\end{equation*}
where $c_{\mathcal{H}} = ( 1 - \frac{c'}{\alpha} )^2 \cdot \frac{1}{C_\theta
\overline{C}^2} < 1$, with $c'>0$ independent of $h$. \end{lemma}
\begin{proof}
Since $u_i$ is the harmonic extension of $\varphi$, it satisfies
(\ref{eq:harmonicsat}) (with $f=0$). Let $v =
{\boldsymbol \theta}_i(\varphi)$. Then by definition of
$\tilde{a}_{i \Gamma }(\cdot,\cdot)$ we have
\begin{equation*}
\tilde{a}_i(u_i,{\boldsymbol \theta}_i(\varphi) ) =
\dotS{ \mu\, {\boldsymbol \theta}_i(\varphi) - \PDif{{\boldsymbol \theta}_i(\varphi)}{{\boldsymbol n} _i}}{ \varphi }_{ \Gamma }.
\end{equation*}
Note that ${\boldsymbol \theta}_i(\varphi)|_{ \Gamma } = \varphi$. We can bound the
right-hand side from below, therefore
\begin{equation*}
\begin{array}{rcll}
\tilde{a}_i(u_i,{\boldsymbol \theta}_i(\varphi) ) &\geq& \mu \norm{\varphi}_{ \Gamma }^{2} -
\norm{ \PDif{{\boldsymbol \theta}_i(\varphi)}{{\boldsymbol n} _i} }_{ \Gamma } \,\norm{ \varphi }_{ \Gamma }
&
\\
&\geq&
\mu \norm{\varphi}_{ \Gamma }^{2} -
\frac{c}{\sqrt{h}}\norm{ \nabla {{\boldsymbol \theta}_i(\varphi)} }_{K_ \Gamma }
\,\norm{ \varphi }_{ \Gamma }
&
\quad \textrm{by ineq.~(\ref{eq:warburton})}
\\
&\geq&
\mu \norm{\varphi}_{ \Gamma }^{2} -
\frac{c'}{h} \, \norm{ \varphi }_{ \Gamma }^{2}
& \quad \textrm{by ineq.~(\ref{eq:zoptineq})}
\\
&=&
\mu \left( 1 - \frac{c'}{\alpha} \right) \norm{ \varphi }_{ \Gamma }^{2},
\end{array}
\end{equation*}
which is positive if $\alpha>0$ and sufficiently large. By
continuity of $\tilde{a}_{i}(\cdot,\cdot)$ we have
\begin{equation*}
\mu \left( 1 - \frac{c'}{\alpha} \right) \norm{ \varphi }_{ \Gamma }^{2}
\leq \overline{C} \, \DGnorm{u_i}{} \cdot
\DGnorm{ {\boldsymbol \theta}_i(\varphi)}{}.
\end{equation*}
Note that we are able to use $\DGnorm{ \cdot }{}$ instead of
$\DGnorm{ \cdot }{,\ast}$ since we work with discrete spaces. An
application of Lemma \ref{lemma:Zoptphi} completes the proof with
$c_{\mathcal{H}} = ( 1 - \frac{c'}{\alpha} )^2 \cdot \frac{1}{C_\theta
\overline{C}^2} < 1$. \quad \end{proof}
The upper bound in (\ref{eq:harmIneq}) can be obtained much more easily, using the coercivity of $\tilde{a}_i(\cdot,\cdot)$.
\begin{lemma} Let $\varphi \in \Lambda_h$ and $u_i := \mathcal{H}_i(\varphi)$ be its harmonic extension into $\Omega_i$. Then we have \begin{equation*}
\DGnorm{ u_i }{}^{2} \leq {C}_{\mathcal{H}} \cdot \mu \norm{ \varphi }_{ \Gamma }^{2}, \end{equation*} where $C_{\mathcal{H}} = \left( 1 + \frac{C}{\sqrt{\alpha} } \right)^{2} \cdot \frac{1}{\underline{c}^2} > 1$. \end{lemma}
\begin{proof}
Since $u_i$ is the harmonic extension of $\varphi$, it satisfies
(\ref{eq:harmonicsat}) (with $f=0$). Using the fact that
$\tilde{a}_i(\cdot,\cdot)$ is coercive we have
\begin{equation*}
\begin{array}{rcl}
\underline{c} \DGnorm{ u_i }{}^{2} \leq \tilde{a}_i(u_i,u_i) &=&
- \tilde{a}_{i \Gamma }(u_i,\varphi)
\\
&=& \dotS{\mu u_i - \PDif{u_i}{{\boldsymbol n} _i} }{ \varphi }_{ \Gamma }
\\
&\leq& \mu \norm{ u_i }_{ \Gamma } \norm{ \varphi }_{ \Gamma } +
\norm{ \PDif{u_i}{{\boldsymbol n} _i} }_{ \Gamma } \norm{ \varphi }_{ \Gamma }
\\
&\leq& \mu \norm{ u_i }_{ \Gamma } \norm{ \varphi }_{ \Gamma } +
\frac{C}{\sqrt{h}} \norm{ \nabla u_i }_{\Ti{i}} \norm{ \varphi }_{ \Gamma }
\\
&\leq& \mu^{\half} \left( 1 + \frac{C}{\sqrt{\alpha} } \right)
\DGnorm{ u_i }{} \cdot \norm{ \varphi }_{ \Gamma },
\end{array}
\end{equation*}
which completes the proof with $C_{\mathcal{H}} := \left( 1 +
\frac{C}{\sqrt{\alpha} } \right)^{2} \cdot \frac{1}{\underline{c}^2}
> 1 $. \quad \end{proof}
We see that $C_\mathcal{H} > 1$, which does not provide a sharp estimate for the maximum eigenvalue of $\tilde{B}_i$. We now show how to obtain a sharp estimate. Recall that the global matrix $\tilde{A}$ is s.p.d.~and that its positive definiteness was proved in (\ref{eq:coercAtilde}) by attributing $\half \tilde{A}_ \Gamma $ to each subdomain. Therefore we consider the s.p.d.~matrix \begin{equation*}
\hat{A} := \MATT{\tilde{A}_i}{\tilde{A}_{i \Gamma }}{\tilde{A}_{ \Gamma i}}{\half \tilde{A}_ \Gamma }. \end{equation*} To show positive-definiteness, let ${\boldsymbol w} :=({\boldsymbol u} _i, {\boldsymbol \varphi} )^\top$ and observe \begin{equation} \label{eq:hatAspd}
{\boldsymbol w} ^\top \hat{A} {\boldsymbol w} = \tilde{a}_i(u_i,u_i) + 2
\tilde{a}_{i \Gamma }(u_i,\varphi) + \half
\tilde{a}_ \Gamma (\varphi,\varphi) \geq c \Bnorm{(u_i,\varphi)}{,i}^2, \end{equation} for all $u_i \in V_{h,i}$ and $\varphi \in \Lambda_h$. Now let $u_i = \mathcal{H}_i(\varphi)$, then by a simple manipulation we have $ {\boldsymbol \varphi} ^\top \big( \half \tilde{A}_ \Gamma - \tilde{B}_i \big) {\boldsymbol \varphi} = {\boldsymbol w} ^\top \hat{A} {\boldsymbol w} $. Combining with (\ref{eq:hatAspd}) and recalling that $ {\boldsymbol \varphi} ^\top \tilde{A}_ \Gamma {\boldsymbol \varphi} = 2 \mu \norm{\varphi}^2_ \Gamma $ we obtain \begin{equation} \label{eq:Bsharp}
\mu \norm{\varphi}_ \Gamma ^2 - c \Bnorm{(\mathcal{H}_i(\varphi),\varphi)}{,i}^2 \geq
{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} . \end{equation} This gives a sharp estimate for the maximum eigenvalue of $\tilde{B}_i$, provided we can bound the second term from below; this is done in the following lemma.
\begin{lemma} \label{lemma:HDGlowerforB} Let $\varphi \in \Lambda_h$ and $u_i \in V_{h,i}$ for $i=1,2$. Let $H_i$ be the diameter of $\Omega_i$. Then we have \begin{equation*}
\frac{c}{H_i} \norm{ \varphi }_ \Gamma ^2 \leq
\Bnorm{(u_i,\varphi)}{,i}^{2}. \end{equation*}
\end{lemma}
\begin{proof} We first invoke the triangle inequality and then Young's inequality \begin{equation*}
\norm{ \varphi }_ \Gamma ^2 \leq
2 \norm{ u_i - \varphi }_ \Gamma ^2 + 2 \norm{ u_i }_ \Gamma ^2 \leq
2 {H_i} h^{-1} \norm{ u_i - \varphi }_ \Gamma ^2 + 2 \norm{ u_i }_ \Gamma ^2, \end{equation*} where the last inequality is due to the fact that $h \leq {H_i}$. Now for the second term on the right-hand side we apply the trace inequality from Lemma \ref{lemma:karakashian}, and subsequently the Poincar\'e inequality from Lemma \ref{lemma:poincare} with $\nu = \partial \Omega_i \setminus \Gamma $. We obtain
\begin{equation*} \begin{array}{rcl}
\norm{ \varphi }_ \Gamma ^2 &\leq&
2 {H_i} h^{-1} \norm{ u_i - \varphi }_ \Gamma ^2 +
c_1 H_i \big(
\norm{ \nabla u_i }_{\Omega_i}^2 + h^{-1} \norm{ \jump{u_i} }_{ \mathcal{E} _i \setminus
\Gamma }^2
\big)
\\
&\leq& c_2 H_i \Bnorm{(u_i,\varphi)}{,i}^2, \end{array} \end{equation*} which completes the proof. \quad \end{proof}
We are now in the position to prove the estimate for the eigenvalues of $\tilde{B}_i$. \begin{lemma} \label{lemma:Beigen} There exists $\alpha>0$, sufficiently large, such that \begin{equation*}
c_B \mu \norm{\varphi}_ \Gamma ^2 \leq {\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} \leq \Big( 1
- C_B \frac{h}{H \alpha} \Big) \mu \norm{\varphi}_ \Gamma ^2,
\quad \forall \varphi \in \Lambda_h, \end{equation*} where $0<c_B<1$. Therefore $\tilde{B}_i$ is s.p.d.~Moreover $\tilde{A}_ \Gamma - 2 \tilde{B}_i$ is s.p.d. \end{lemma}
\begin{proof} To prove the lower bound we use (\ref{eq:BtoA}), the coercivity of $\tilde{a}_i(\cdot,\cdot)$ and Lemma \ref{lemma:trinvphi} to obtain \begin{equation*}
{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} = \tilde{a}_i(u_i,u_i) \geq \underline{c}
\DGnorm{u_i}{}^2 \geq \underline{c} \cdot c_\mathcal{H} \cdot \mu \norm{\varphi}_ \Gamma ^2. \end{equation*} This completes the lower bound by setting $c_B := \underline{c} \cdot c_\mathcal{H} < 1$. For the upper bound we use inequality (\ref{eq:Bsharp}) and Lemma \ref{lemma:HDGlowerforB} and obtain \begin{equation*}
{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} \leq \Big( 1 - \frac{c}{H} \frac{h}{\alpha}
\Big) \mu \norm{\varphi}_ \Gamma ^2. \end{equation*} Finally from inequality (\ref{eq:Bsharp}) we have that $\tilde{A}_ \Gamma - 2 \tilde{B}_i$ is s.p.d. \end{proof}
\begin{remark} This estimate shows that the condition number satisfies \begin{equation*}
\kappa(\tilde{B}_i) \leq c_B^{-1} \big( 1 - C_B \frac{h}{H \alpha } \big), \end{equation*} which implies that $\tilde{B}_i$ is {\normalfont scalable}: if we keep the ratio $h/H$ constant, the condition number does not change. Geometrically, this is equivalent to scaling the subdomain and the triangulation at the same rate, which changes neither the {\normalfont entries} of $\tilde{B}_i$ nor its {\normalfont size}. Therefore the condition number of $\tilde{B}_i$ is expected not to change. \end{remark}
\subsection{Eigenvalue estimate for $\tilde{S}_ \Gamma $}
Estimating eigenvalues of the Schur complement is similar to estimating eigenvalues of $\tilde{B}_i$. To show the lower bound in estimate (\ref{eq:Sest}), we need the following lemma.
\begin{lemma} \label{lemma:HDGlower} Let $\varphi \in \Lambda_h$ and $u_i \in V_{h,i}$ for $i=1,2$. Let $H_\Omega$ be the diameter of the domain and $H$ be the maximum diameter of the subdomains. Then we have \begin{equation*}
c \frac{H}{H_\Omega^2} \norm{\varphi}_ \Gamma ^2 \leq
\sum_{i=1}^2 \Bnorm{(u_i,\varphi)}{,i}^{2}. \end{equation*} \end{lemma}
\begin{proof} First we invoke the triangle inequality together with Young's inequality \begin{equation*}
H_i \norm{ \varphi }_ \Gamma ^2 \leq
2 H_i \norm{ u_i - \varphi }_ \Gamma ^2 + 2 H_i \norm{ u_i }_ \Gamma ^2 \leq
2 {H_i^2} h^{-1} \norm{ u_i - \varphi }_ \Gamma ^2 + 2 H_i \norm{ u_i }_ \Gamma ^2, \end{equation*} where the last inequality is due to the fact that $h \leq {H_i}$. Now for the second term on the right-hand side, observe that using Lemma \ref{lemma:karakashian} we have \begin{equation*}
c_i H_i \norm{ u_i }_{ \Gamma }^{2} \leq
c_i H_i \norm{ u_i }_{\partial \Omega_i}^{2} \leq
\norm{u_i}_{\Omega_i}^{2} + H_i^{2} \big( \norm{ \nabla u_i }_{\Omega_i}^{2} +
h^{-1} \norm{\jump{u_i}}_{ \mathcal{E} _i \setminus \partial \Omega_i}^{2} \big). \end{equation*} We sum over both subdomains and invoke Lemma \ref{lemma:poincare} for the $\textrm{L}^2$-norm of $u$ over $\Omega$ \begin{equation*} \begin{array}{rcl}
c H \sum_{i=1}^2 \norm{u_i}_{ \Gamma }^2
&\leq& \norm{ u }_\Omega^2 + H^2 \sum_{i=1}^2
\big(
\norm{ \nabla u_i }_{\Omega_i}^{2} + h^{-1} \norm{\jump{u_i}}_{ \mathcal{E} _i
\setminus \partial \Omega_i}^{2}
\big)
\\
&\leq& C H_\Omega^2 \big(
\norm{ \nabla u }_{\Omega}^2
+ h^{-1} \norm{ \jump{ u } }_{ \mathcal{E} \setminus \partial \Omega}^2
+ h^{-1} \norm{ { u } }_{ \partial \Omega}^2 \big)
\\
&& \quad
+
H^2 \sum_{i=1}^2 \big(
\norm{ \nabla u_i }_{\Omega_i}^{2} + h^{-1} \norm{\jump{u_i}}_{ \mathcal{E} _i
\setminus \partial \Omega_i}^{2}
\big). \end{array} \end{equation*} Noting that $H \leq H_\Omega$ and by definition of $\Bnorm{(u_i,\varphi)}{,i}$ we obtain \begin{equation*} \begin{array}{rcl}
c H \sum_{i=1}^2 \norm{u_i}_{ \Gamma }^2
&\leq& H_\Omega^{2}
\sum_{i=1}^2 \big(
\norm{ \nabla u_i }_{\Omega_i}^2
+ h^{-1} \norm{\jump{u_i}}_{ \mathcal{E} _i
\setminus \partial \Omega_i}^{2}
+ h^{-1} \norm{{u_i}}_{ \partial \Omega
\cap \partial \Omega_i}^{2}
\big)
\\
&& + H_\Omega^{2} h^{-1} \norm{\jump{u}}_{ \Gamma }^{2}
\\
&\leq& H_\Omega^{2}
\sum_{i=1}^2 \big(
\norm{ \nabla u_i }_{\Omega_i}^2
+ h^{-1} \norm{\jump{u_i}}_{ \mathcal{E} _i
\setminus \partial \Omega_i}^{2}
+ h^{-1} \norm{{u_i}}_{ \partial \Omega
\cap \partial \Omega_i}^{2}
\big)
\\
&& + H_\Omega^{2} h^{-1} \big(
\norm{u_1 - \varphi }_{ \Gamma }^{2}
+ \norm{u_2 - \varphi }_{ \Gamma }^{2}
\big)
\\
&\leq& H_\Omega^2 \sum_{i=1}^2 \Bnorm{(u_i,\varphi)}{,i}^2. \end{array} \end{equation*} Substituting back into the first inequality completes the proof. \quad \end{proof}
\begin{lemma} \label{lemma:shurpos} There exists $\alpha>0$, sufficiently large, such that
\begin{equation*}
c \frac{H}{H_\Omega^2} \norm{\varphi}_{ \Gamma }^{2} \leq {\boldsymbol \varphi} ^\top \tilde{S}_{ \Gamma } {\boldsymbol \varphi}
\leq \frac{2 \alpha}{h} \norm{\varphi}_{ \Gamma }^2.
\end{equation*} Therefore $\tilde{S}_ \Gamma $ is s.p.d. Moreover $\tilde{A}_ \Gamma - \tilde{B}_i$ is s.p.d. \end{lemma} \begin{proof} The symmetry is easy to check since $\tilde{A}_ \Gamma $ and $\tilde{B}_1$, $\tilde{B}_2$ are symmetric. For the upper bound in the estimate we recall that $\tilde{B}_1$, $\tilde{B}_2$ are positive definite and hence \begin{equation*} {\boldsymbol \varphi} ^\top \tilde{S}_{ \Gamma } {\boldsymbol \varphi} = {\boldsymbol \varphi} ^\top (\tilde{A}_ \Gamma - \sum_{i=1}^2 \tilde{B}_i ) {\boldsymbol \varphi} \leq {\boldsymbol \varphi} ^\top \tilde{A}_{ \Gamma } {\boldsymbol \varphi} = 2\mu \norm{\varphi}_ \Gamma ^{2}. \end{equation*} Now let $u_i := \mathcal{H}_i(\varphi)$ and ${\boldsymbol v} := ( {\boldsymbol u} _1, {\boldsymbol u} _2, {\boldsymbol \varphi} )^\top$. A straightforward calculation shows that $ {\boldsymbol \varphi} ^\top \tilde{S}_ \Gamma {\boldsymbol \varphi} = {\boldsymbol v} ^\top \tilde{A} {\boldsymbol v} $. Then the coercivity of the bilinear form $\tilde{a}(\cdot,\cdot)$ and an application of Lemma \ref{lemma:HDGlower} yields \begin{equation*}
{\boldsymbol \varphi} ^\top \tilde{S}_ \Gamma {\boldsymbol \varphi} = {\boldsymbol v} ^\top \tilde{A} {\boldsymbol v}
\equiv \tilde{a}( (u,\varphi), (u,\varphi) )
\geq c \sum_{i=1}^2 \Bnorm{(u_i,\varphi)}{,i}^{2}
\geq c \frac{H}{H_\Omega^2} \norm{ \varphi }_ \Gamma ^2. \end{equation*} For the final statement, observe that for all $\varphi \not = 0$ we have \begin{equation*}
{\boldsymbol \varphi} ^\top ( \tilde{A}_ \Gamma - \tilde{B}_i ) {\boldsymbol \varphi} > {\boldsymbol \varphi} ^\top (\tilde{A}_ \Gamma - \sum_{j=1}^2 \tilde{B}_j) {\boldsymbol \varphi}
> 0, \end{equation*} since the $\{ \tilde{B}_i \}$ are positive definite. This completes the proof. \quad \end{proof}
\begin{remark}
Note that Lemma \ref{lemma:shurpos} provides an upper bound for the
condition number:
$
\kappa(\tilde{S}_ \Gamma ) = O(\frac{\alpha}{h})
$.
A similar result also holds for classical FEM,
see \cite{brenner} and \cite[Lemma 4.11]{widlund}. \end{remark}
\section{Schwarz methods and the Schur complement} \label{sec:schwarz}
In order to solve the Schur complement system we can devise a Schwarz method to obtain $\lambda_h$. We will prove that a natural Schwarz method for the Schur complement is equivalent to the block Jacobi iteration in (\ref{eq:blockJacobi}), but it suffers from slow convergence. Later we show how to obtain an optimized Schwarz method for the Schur complement which converges much faster to the same fixed point.
Let us relax the constraint that $\lambda_h$ is single-valued. Let $\lambda_{h,1}, \lambda_{h,2} \in \Lambda_h $. Assume $\lambda_{h,2}$ is known; that is we know $u_2 \in V_{h,2}$. Then we can split the Schur complement system (\ref{eq:shur}) and obtain an approximation for $\lambda_{h,1}$ and consequently $u_{1}\in V_{h,1}$ from \begin{equation*}
(\tilde{A}_ \Gamma - \tilde{B}_1 ) {\boldsymbol \lambda} _1 = \tilde{B}_2 {\boldsymbol \lambda} _2 + \BO{g}_{ \Gamma }. \end{equation*} As a consequence of Lemma \ref{lemma:shurpos}, $(\tilde{A}_ \Gamma - \tilde{B}_1 )$ is invertible and we can obtain $\lambda_{h,1}$. This suggests an iterative method to obtain $\lambda_h$. We will see that this produces identical iterates as the block Jacobi method.
\begin{algorithm} \label{algo:addschwarz} Let $\lambda_{h,1}^{(0)}, \lambda_{h,2}^{(0)} \in \Lambda_h$ be two random initial guesses. Then for $n=1,2,\hdots$ find $\big\{ \lambda_{h,i}^{(n)} \big\}$ such that \begin{equation} \label{eq:shurITE}
\begin{array}{rcl}
(\tilde{A}_ \Gamma ^{} - \tilde{B}_1^{} ) {\boldsymbol \lambda} _1^{(n)} = \tilde{B}_2^{} {\boldsymbol \lambda} _2^{(n-1)} + \BO{g}_{ \Gamma }^{},
\\
(\tilde{A}_ \Gamma ^{} - \tilde{B}_2^{} ) {\boldsymbol \lambda} _2^{(n)} = \tilde{B}_1^{} {\boldsymbol \lambda} _1^{(n-1)} + \BO{g}_{ \Gamma }^{}.
\end{array} \end{equation} \end{algorithm}
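The fixed point of this splitting can be sketched numerically. The following Python snippet is purely illustrative: the matrices are random s.p.d.~stand-ins scaled so that $\sigma(C_i) < 1/2$ (the sufficient convergence condition derived in the next subsection), not the actual IPH assembly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 8, 10.0                         # mu plays the role of alpha/h

def spd(tau):
    # Random s.p.d. matrix with spectral norm tau.
    M = rng.standard_normal((n, n))
    W = M @ M.T + 1e-3 * np.eye(n)
    return tau * W / np.linalg.norm(W, 2)

# Toy surrogates (assumptions, not the IPH matrices): A_G ~ 2*mu*(interface
# mass matrix), and B_i scaled so that the eigenvalues of C_i stay below 1/2.
A_G = 2 * mu * np.eye(n)
B1, B2 = spd(0.8 * mu), spd(0.8 * mu)
S = A_G - B1 - B2                       # Schur complement, s.p.d. here
g = rng.standard_normal(n)
lam = np.linalg.solve(S, g)             # fixed point S^{-1} g

l1, l2 = np.zeros(n), np.zeros(n)
for _ in range(400):                    # the splitting iteration of the algorithm
    l1, l2 = (np.linalg.solve(A_G - B1, B2 @ l2 + g),
              np.linalg.solve(A_G - B2, B1 @ l1 + g))
```

Both iterates approach the single Schur complement solution, matching the fixed-point argument above.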
At convergence, we have $\tilde{A}_{ \Gamma }({\boldsymbol \lambda} _1 - {\boldsymbol \lambda} _2)=0$ which implies $ {\boldsymbol \lambda} _1 = {\boldsymbol \lambda} _2 = \tilde{S}_{ \Gamma }^{-1} \BO{g}_{ \Gamma }$.
The following result shows that the above method generates the same iterates as the block Jacobi iteration (\ref{eq:blockJacobi}). By linearity it suffices to consider the error equation, $f=0$, which implies $\BO{g}_ \Gamma = 0$.
\begin{proposition} Let $\lambda_{h,1}^{(0)}, \lambda_{h,2}^{(0)}$ be two random initial guesses of Algorithm \ref{algo:addschwarz} and without loss of generality suppose $f=0$. Set the initial guess of the block Jacobi iteration (\ref{eq:blockJacobi}) to be $ u_i^{(0)} = \mathcal{H}_i^{}(\lambda_{h,i}^{(0)}) $. Then $ u_i^{(n)} = \mathcal{H}_i^{}(\lambda_{h,i}^{(n)}) $ for all $n>0$, i.e.~both methods produce the same iterates. \end{proposition} \begin{proof}
See \cite{hajian2014}. \end{proof}
\subsection{Analysis of classical Schwarz for the Schur complement}
By linearity we consider the error equations and we denote by $ \BO{e}_{i}^{(n)} := {\boldsymbol \lambda} _{i}^{(n)} - {\boldsymbol \lambda} $. The iterations in \eqref{eq:shurITE} can be rewritten in a more suitable form for analysis. Since $\tilde{A}_{ \Gamma }$ is s.p.d.~(it is just a scaled mass matrix), the square-root $\tilde{A}_{ \Gamma }^{1/2}$ exists and is also s.p.d. Therefore, for $i,j \in \{1,2\}$ and $i\not = j$ we can write equivalently \begin{equation*}
\begin{array}{lrcl}
&
(\tilde{A}_ \Gamma ^{} - \tilde{B}_i^{} ) \BO{e}_i^{(n)} &=& \tilde{B}_j^{} \BO{e}_j^{(n-1)}
\\
\Leftrightarrow &
\tilde{A}_{ \Gamma }^{1/2} ( I - \tilde{A}_{ \Gamma }^{-1/2} \tilde{B}_i^{} \tilde{A}_{ \Gamma }^{-1/2} )
\tilde{A}_{ \Gamma }^{1/2} \BO{e}_i^{(n)}
&=& \tilde{B}_j^{} \BO{e}_j^{(n-1)}
\\
\Leftrightarrow &
( I - \tilde{A}_{ \Gamma }^{-1/2} \tilde{B}_i^{} \tilde{A}_{ \Gamma }^{-1/2} ) \tilde{\BO{e}}_i^{(n)} &=&
(\tilde{A}_{ \Gamma }^{-1/2} \tilde{B}_j^{} \tilde{A}_{ \Gamma }^{-1/2}) \tilde{\BO{e}}_j^{(n-1)},
\end{array} \end{equation*} where $ \tilde{\BO{e}}_{i} = \tilde{A}_{ \Gamma }^{1/2} \BO{e}_i^{} $. We define \begin{equation}
C_i := \tilde{A}_{ \Gamma }^{-1/2} \tilde{B}_i^{} \tilde{A}_{ \Gamma }^{-1/2}, \end{equation} which is symmetric and invertible. Since $I-C_i = \tilde{A}_{ \Gamma }^{-1/2} (\tilde{A}_ \Gamma - \tilde{B}_i) \tilde{A}_{ \Gamma }^{-1/2}$ and $\tilde{A}_ \Gamma - \tilde{B}_i$ is invertible, the matrix $I-C_i$ is also invertible. Therefore we have
\begin{equation*}
( I - C_i^{} ) \tilde{\BO{e}}_{i}^{(n)} = C_j^{} \tilde{\BO{e}}_{j}^{(n-1)}
= C_j^{} ( I - C_j^{} )^{-1} C_i^{} \tilde{\BO{e}}_{i}^{(n-2)}, \end{equation*} or \begin{equation*}
{\boldsymbol \varphi} _{i}^{(n)} =
C_j^{} ( I - C_j^{} )^{-1} \cdot C_i^{} ( I - C_i^{} )^{-1} {\boldsymbol \varphi} _{i}^{(n-2)}, \end{equation*} where ${\boldsymbol \varphi} _i = ( I - C_i ) \tilde{\BO{e}}_{i}$. Finally the iterations can be rewritten as \begin{equation} \label{eq:shurITEan}
{\boldsymbol \varphi} _{i}^{(n)} =
( C_j^{-1} - I )^{-1} \cdot ( C_i^{-1} - I )^{-1} {\boldsymbol \varphi} _{i}^{(n-2)}. \end{equation}
We show how the contraction factor of the iteration in \eqref{eq:shurITEan} is related to the eigenvalues of $\{C_i\}$. Let $\norm{\cdot}_{2}$ be the usual 2-norm in $\mathbb{R}^{n}$, and denote by $D_i := ( C_i^{-1} - I )^{-1}$. Then we can estimate
\begin{equation*}
\norm{{\boldsymbol \varphi} _{i}^{(n)}}_2 \leq
\norm{D_j D_i}_2 \, \norm{{\boldsymbol \varphi} _{i}^{(n-2)} }_2 \leq
\norm{D_j}_{2} \, \norm{D_i}_2 \, \norm{{\boldsymbol \varphi} _{i}^{(n-2)} }_2 =
\rho( D_j ) \, \rho( D_i ) \, \norm{{\boldsymbol \varphi} _{i}^{(n-2)} }_2, \end{equation*} since $\{ D_i \}$ are symmetric. In other words we have used a different norm for the error: with $E_i := (I - C_i^{} ) \tilde{A}_{ \Gamma }^{1/2}$, which is invertible, we have \begin{equation*}
\norm{ \BO{e}_i }_{E_i^\top E_i^{}} = \norm{ E_i \BO{e}_i }_{2}
= \norm{ {\boldsymbol \varphi} _i }_{2}. \end{equation*} Let $\sigma(M)$ denote an eigenvalue of a given matrix $M$. Then we have \begin{equation*}
\rho(D_i) := \max_{\sigma(D_i)} | \sigma(D_i) | =
\max_{\sigma(C_i)} \left| \frac{\sigma(C_i)}{1- \sigma(C_i)} \right|. \end{equation*} Hence a sufficient condition for convergence is that $ \sigma(C_i) \in (-\infty, 1/2) $. On the other hand by definition of $C_i$ we know that $\sigma(C_i)$ are the eigenvalues of the generalized eigenvalue problem $\tilde{B}_{i} {\boldsymbol \varphi} = \sigma \, \tilde{A}_ \Gamma {\boldsymbol \varphi} $. Since both $\tilde{A}_ \Gamma $ and $\tilde{B}_i$ are s.p.d.,~$\sigma(C_i)$ is positive.
Therefore a sufficient condition for convergence is to show that $ \sigma(C_i) \in (0, 1/2) $.
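The eigenvalue map $\sigma \mapsto \sigma/(1-\sigma)$ used above is easy to check numerically; the snippet below builds a random symmetric matrix with spectrum in $(0,1/2)$ (an illustrative stand-in for $C_i$, not the actual operator) and compares the eigenvalues of $D=(C^{-1}-I)^{-1}$ with the mapped values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Random symmetric C with spectrum in (0, 1/2), the convergence region.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
ev = np.sort(rng.uniform(0.05, 0.45, n))
Cmat = Q @ np.diag(ev) @ Q.T

# D = (C^{-1} - I)^{-1} shares eigenvectors with C and has eigenvalues
# sigma/(1 - sigma); the map is increasing, so sorted spectra line up.
D = np.linalg.inv(np.linalg.inv(Cmat) - np.eye(n))
sd = np.sort(np.linalg.eigvalsh(D))
expected = ev / (1 - ev)
```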
Recall that since $C_i$ is symmetric we have \begin{equation}
\label{eq:CestL}
\sigma_{\text{min}}(C_i) =
\inf_{{\boldsymbol \varphi} \not = 0} \frac{{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} }{{\boldsymbol \varphi} ^\top \tilde{A}_{ \Gamma } {\boldsymbol \varphi} }
=
\inf_{{\boldsymbol \varphi} \not = 0} \frac{{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} }{2\mu
\norm{\varphi}_{ \Gamma }^{2}}
\geq \frac{c_B}{2}, \end{equation} where we have used the lower bound estimate of Lemma \ref{lemma:Beigen}. Here $0 < c_B < 1$.
The upper bound for $\sigma_{\max}(C_i)$ can also be obtained using Lemma \ref{lemma:Beigen}. Hence \begin{equation}
\label{eq:CestU}
\sigma_{\max}(C_i) =
\sup_{{\boldsymbol \varphi} \not = 0} \frac{{\boldsymbol \varphi} ^\top \tilde{B}_i {\boldsymbol \varphi} }{2\mu \norm{\varphi}_{ \Gamma }^{2}}
\leq \frac{1}{2} \Big(1 - C \frac{h}{\alpha H} \Big), \end{equation} which is strictly less than $\frac{1}{2}$. Consequently for the eigenvalues of $D_i$, we obtain the estimate \begin{equation*}
0 < \frac{c_B}{2 - c_B} \leq \sigma(D_i) \leq
1 - C \frac{h}{\alpha H} < 1. \end{equation*} We summarize the convergence result in the following theorem. \begin{theorem} There exists an $\alpha>0$ independent of $H$ and $h$ such that Algorithm \ref{algo:addschwarz} converges and the contraction factor is bounded by \begin{equation}
\rho \leq 1 - O(h). \end{equation} \end{theorem}
\subsection{Analysis of an optimized Schwarz method for the Schur complement}
As shown in \cite{hajian2013block}, the IPH discretization imposes Robin transmission conditions between subdomains, and the Robin parameter is precisely the penalty parameter $\mu$ of the DG method. To ensure coercivity and optimal approximation, $\mu$ is set to $\alpha h^{-1}$ for some $\alpha>0$ large and independent of $h$.
In the Schwarz theory with Robin transmission conditions this choice of $\mu$ corresponds to damping high frequencies of the DtN operator. In other words, the low frequencies are responsible for the slow convergence of the algorithm analyzed in the previous subsection; as we have shown, the contraction factor is $\rho = 1 - O(h)$. Optimized Schwarz theory suggests choosing the Robin parameter of order $O(h^{-1/2})$, see \cite{ganderos}, but this is not possible for an IPH discretization, since we would lose coercivity and optimal approximation properties.
The remedy comes from an idea first introduced in \cite{discacciati2004operator} and later independently in \cite{dolean} for Maxwell's equations. The idea is to perturb the transmission conditions so that the iteration produces a different sequence but has the same fixed point as the original Schwarz algorithm.
Let us introduce two new unknowns along the interface, one for each subdomain, $\{ r_{12}, r_{21} \}$ with $r_{ij} \in \Lambda_h$. Recall that by Proposition \ref{prop:Bopt} an application of $\tilde{B}_i {\boldsymbol \lambda} _i$ is equivalent to $\mu u_i - \PDif{u_i}{{\boldsymbol n} _i}$ on the interface, where $u_i := \mathcal{H}_i(\lambda_{h,i})$. Now let $r_{ij}
= (\mu u_j - \PDif{u_j}{{\boldsymbol n} _j} ) |_{ \Gamma }$. Let us denote by $M_ \Gamma $ the mass matrix along the interface and $\BO{r}_{ij}$ the corresponding DOFs of $r_{ij}$. Then we observe that \begin{equation*} {\boldsymbol \varphi} ^\top M_ \Gamma \BO{r}_{ij} = \dotS{r_{ij} }{\varphi}_{ \Gamma } = \dotS{ \mu u_j - \PDif{u_j}{{\boldsymbol n} _j} }{\varphi}_{ \Gamma } = {\boldsymbol \varphi} ^{\top} \tilde{B}_{j} {\boldsymbol \lambda} _{j}, \quad \forall \varphi \in \Lambda_h. \end{equation*} Therefore we conclude that \begin{equation*}
M_ \Gamma \BO{r}_{ij} = \tilde{B}_j {\boldsymbol \lambda} _j, \end{equation*} and the Schwarz iteration \eqref{eq:shurITE} can be rewritten as \begin{equation*}
\begin{array}{rcl}
(\tilde{A}_ \Gamma - \tilde{B}_i ) {\boldsymbol \lambda} _i^{(n)} &=& M_ \Gamma \BO{r}_{ij}^{(n)} + \BO{g}_{ \Gamma },
\\
M_ \Gamma \BO{r}_{ij}^{(n)} &=& \tilde{B}_j {\boldsymbol \lambda} _j^{(n-1)}.
\end{array} \end{equation*}
We modify the second equation as suggested in \cite{dolean} and \cite{hajian2014} to the form \begin{equation*}
M_ \Gamma ^{} \BO{r}_{ij}^{(n)} - \hat{p}\, \tilde{B}_i^{} {\boldsymbol \lambda} _i^{(n)} =
\tilde{B}_j^{} {\boldsymbol \lambda} _j^{(n-1)} - \hat{p}\, M_ \Gamma ^{} \BO{r}_{ji}^{(n-1)}, \end{equation*} for $i,j \in \{1,2\}$ and $i\not = j$. Here $0 \leq \hat{p} < 1$ is a parameter which we use for optimization. At convergence one recovers the original equations and therefore the fixed point of the iteration is the same as for the original method.
\begin{remark} \label{remark:phat} The above modification is shown in \cite{hajian2014} to be equivalent (at the continuous level) to imposing \begin{equation}
\Big( \frac{1 - \hat{p}}{1 + \hat{p}} \, \mu + \PDif{}{{\boldsymbol n} _i} \Big) u_i^{(n)} =
\Big( \frac{1 - \hat{p}}{1 + \hat{p}} \, \mu + \PDif{}{{\boldsymbol n} _i} \Big) u_j^{(n-1)} \end{equation} for $i,j \in \{1,2\}$ and $i\not=j$. Note that if $\hat{p} = \frac{1-\sqrt{h}}{1+\sqrt{h}}$ then $\frac{1 - \hat{p}}{1 + \hat{p}} \mu \propto \frac{1}{\sqrt{h}}$, which is the Robin parameter recommended by optimized Schwarz theory. We will see that this is also exactly the right choice for $\hat{p}$ at the discrete level. \end{remark}
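The scaling in the remark is an elementary identity, $\frac{1-\hat p}{1+\hat p} = \sqrt{h}$ for $\hat p = \frac{1-\sqrt h}{1+\sqrt h}$, which a one-line check confirms; the values of $h$ and $\alpha$ below are arbitrary illustrative choices.

```python
import math

h, alpha = 1e-4, 3.0        # illustrative values
mu = alpha / h              # IPH penalty parameter mu = alpha * h^{-1}
s = math.sqrt(h)
p_hat = (1 - s) / (1 + s)

# (1 - p_hat)/(1 + p_hat) collapses to sqrt(h), so the effective Robin
# parameter is alpha / sqrt(h) = O(h^{-1/2}).
robin = (1 - p_hat) / (1 + p_hat) * mu
```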
The analysis of this algorithm is possible using the framework established for the original method. We can eliminate the $\{ r_{ij} \}$ as follows: \begin{equation*}
\begin{array}{rcll}
(\tilde{A}_ \Gamma ^{} - \tilde{B}_i^{} ) {\boldsymbol \lambda} _i^{(n)} &=&
\hat{p} \tilde{B}_i^{} {\boldsymbol \lambda} _i^{(n)} + \tilde{B}_j^{} {\boldsymbol \lambda} _j^{(n-1)} -
\hat{p} M_ \Gamma ^{} \BO{r}_{ji}^{(n-1)}
& + \BO{g}_{ \Gamma }^{}
\\
&=&
\hat{p} \tilde{B}_i^{} {\boldsymbol \lambda} _i^{(n)} + \tilde{B}_j^{} {\boldsymbol \lambda} _j^{(n-1)}
- \hat{p} (\tilde{A}_ \Gamma ^{} - \tilde{B}_j^{} ) {\boldsymbol \lambda} _j^{(n-1)}
& + (1+\hat{p})\BO{g}_{ \Gamma }^{} ,
\end{array} \end{equation*} which simplifies to \begin{equation*}
(\tilde{A}_ \Gamma ^{} - (1+\hat{p})\tilde{B}_i^{} ) {\boldsymbol \lambda} _i^{(n)} =
- ( \hat{p} \tilde{A}_ \Gamma ^{} - (1+\hat{p})\tilde{B}_j^{} ) {\boldsymbol \lambda} _j^{(n-1)}
+ (1+\hat{p}) \BO{g}_{ \Gamma }^{}. \end{equation*}
\begin{algorithm} \label{algo:OS} Let $\lambda_{h,1}^{(0)}, \lambda_{h,2}^{(0)} \in \Lambda_h$ be two random initial guesses. Then for $n=1,2,\hdots$ find $\big\{ \lambda_{h,i}^{(n)} \big\}$ such that \begin{equation} \label{eq:OSITE}
\begin{array}{rcl}
(\tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_1 ) {\boldsymbol \lambda} _1^{(n)} &=&
- ( \hat{p} \tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_2 ) {\boldsymbol \lambda} _2^{(n-1)}
+ (1+\hat{p}) \BO{g}_{ \Gamma } ,
\\
(\tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_2 ) {\boldsymbol \lambda} _2^{(n)} &=&
- ( \hat{p} \tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_1 ) {\boldsymbol \lambda} _1^{(n-1)}
+ (1+\hat{p}) \BO{g}_{ \Gamma } .
\end{array} \end{equation} \end{algorithm}
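As a sanity check of Algorithm \ref{algo:OS}, a toy realization with random s.p.d.~stand-ins (again an assumption, not the IPH matrices) confirms that for $\hat p < 1$ the iteration converges to the same Schur complement solution as the classical splitting.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, p = 8, 10.0, 0.7     # p stands for the parameter p-hat < 1

def spd(tau):
    # Random s.p.d. matrix with spectral norm tau.
    M = rng.standard_normal((n, n))
    W = M @ M.T + 1e-3 * np.eye(n)
    return tau * W / np.linalg.norm(W, 2)

# Toy stand-ins for the interface matrices (not the IPH assembly).
A_G = 2 * mu * np.eye(n)
B1, B2 = spd(0.8 * mu), spd(0.8 * mu)
g = rng.standard_normal(n)
lam = np.linalg.solve(A_G - B1 - B2, g)     # Schur complement solution

l1, l2 = np.zeros(n), np.zeros(n)
for _ in range(400):                        # optimized Schwarz iteration
    l1, l2 = (np.linalg.solve(A_G - (1 + p) * B1,
                              -(p * A_G - (1 + p) * B2) @ l2 + (1 + p) * g),
              np.linalg.solve(A_G - (1 + p) * B2,
                              -(p * A_G - (1 + p) * B1) @ l1 + (1 + p) * g))
```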
Since $\hat{p}<1$, we can use Lemma \ref{lemma:Beigen} and conclude that the left-hand side is positive definite and therefore invertible. At convergence we have $(1-\hat{p})\tilde{A}_ \Gamma ( {\boldsymbol \lambda} _1 - {\boldsymbol \lambda} _2 ) = 0 $, which implies ${\boldsymbol \lambda} _1={\boldsymbol \lambda} _2 = \tilde{S}_ \Gamma ^{-1} \BO{g}_ \Gamma $ since $\hat{p} \not = 1$.
Compared with the original Schwarz method, Algorithm \ref{algo:addschwarz}, we have weakened the positive definiteness of the left-hand side, which plays a key role in the faster convergence. The optimized algorithm can be viewed as a different splitting of the Schur complement: we multiplied it by $(1+\hat{p})$, and this time a fraction of $\tilde{A}_ \Gamma $, namely $\hat{p} \tilde{A}_ \Gamma $, has been moved to the right-hand side.
We consider the error equation and we can proceed as before to obtain an iteration for $\BO{e}_i$ only, \begin{equation*}
\begin{array}{ll}
& (\tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_i ) \BO{e}_i^{(n)} =
\\
& \qquad
( \hat{p} \tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_j ) \cdot
(\tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_j )^{-1} \cdot
( \hat{p} \tilde{A}_ \Gamma - (1+\hat{p})\tilde{B}_i )
\BO{e}_i^{(n-2)}.
\end{array} \end{equation*} With ${\boldsymbol \varphi} _i = ( I - (1+\hat{p})C_i ) \tilde{A}_ \Gamma ^{1/2} \BO{e}_i$, we have \begin{equation*}
\begin{array}{ll}
&{\boldsymbol \varphi} _i^{(n)} = ( \hat{p} I - (1+\hat{p}) C_j ) \cdot
( I - (1+\hat{p})C_j )^{-1}
\\
&
\qquad \qquad
\cdot
( \hat{p} I - (1+\hat{p}) C_i ) \cdot
( I - (1+\hat{p})C_i )^{-1} {\boldsymbol \varphi} _{i}^{(n-2)}.
\end{array} \end{equation*} Denoting by $\hat{D}_i := ( \hat{p} I - (1+\hat{p}) C_i ) \cdot ( I - (1+\hat{p})C_i )^{-1} $ and simplifying, we get \begin{equation}
\hat{D}_i = I - (1-\hat{p}) \big( I - (1+\hat{p})C_i \big)^{-1}, \end{equation} which shows that $\hat{D}_i$ is symmetric. Therefore we have \begin{equation*}
\norm{{\boldsymbol \varphi} _{i}^{(n)}}_2 \leq
\rho( \hat{D}_j ) \, \rho( \hat{D}_i ) \, \norm{{\boldsymbol \varphi} _{i}^{(n-2)} }_2. \end{equation*}
The estimate for the eigenvalues of $\hat{D}_i$ can be obtained as before. More precisely we have \begin{equation*}
\sigma( \hat{D}_i ) = 1 - \frac{1-\hat{p}}{1 - (1+\hat{p}) \,
\sigma(C_i)}. \end{equation*} Recall that $\sigma(C_i) \in \left[\frac{1}{2} - c, \frac{1}{2} - C
\frac{h}{\alpha H} \right]$ for $0< c < \frac{1}{2}$ and $C>0$ and independent of $h$, $H$. We can use $\hat{p}$ to optimize $\rho( \hat{D}_i )$. Following Remark \ref{remark:phat}, let us make the ansatz \begin{equation*}
\hat{p} = \frac{1 - (\frac{h}{\alpha})^{\gamma}}{1 + (\frac{h}{\alpha})^{\gamma}} < 1,
\quad \gamma \in \mathbb{R}^+. \end{equation*} This implies that \begin{equation}
1 - \frac{1}{\half + { \frac{C}{H} } (\frac{h}{\alpha})^{1-\gamma} }
\leq \sigma(\hat{D}_i) \leq
1 - \frac{1}{\half + c (\frac{h}{\alpha})^{-\gamma} }. \end{equation}
The best performance is achieved for $\gamma := \half$, which as $h \rightarrow 0$ leads to
\begin{equation}
-1 + c_1 \sqrt{ \frac{h}{\alpha} } \leq \sigma(\hat{D}_i) \leq
1 - c_2 \sqrt{ \frac{ h}{\alpha} }. \end{equation} Note that the iteration matrix $\hat{D}_i$ is no longer positive definite, but its spectrum is contained in $(-1,1)$, and the contraction factor is much better than that of Algorithm \ref{algo:addschwarz}. We summarize our results in the following theorem. \begin{theorem} There exists an $\alpha>0$ independent of $H$ and $h$ such that Algorithm \ref{algo:OS} converges and the contraction factor is bounded by \begin{equation}
\rho \leq 1 - O(\sqrt{h}). \end{equation} \end{theorem}
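The balancing argument behind the choice $\gamma = \half$ can be reproduced by a scalar scan of the two endpoint bounds on $\sigma(\hat D_i)$; the constants $c$, $C$, $H$, $\alpha$ below are hypothetical illustrative values, not those of the lemmas.

```python
import numpy as np

# Hypothetical constants in the spectral bounds (illustrative only):
c, C, H, alpha = 0.3, 1.0, 0.5, 10.0

def rho(gamma, h):
    # Endpoint values of sigma(D_hat) for the ansatz
    # p_hat = (1 - (h/alpha)^gamma) / (1 + (h/alpha)^gamma).
    lo = 1 - 1 / (0.5 + (C / H) * (h / alpha) ** (1 - gamma))
    hi = 1 - 1 / (0.5 + c * (h / alpha) ** (-gamma))
    return max(abs(lo), abs(hi))

h = 1e-4
gammas = np.linspace(0.1, 0.9, 81)
best = gammas[np.argmin([rho(g, h) for g in gammas])]
# best settles near 1/2, the exponent that equilibrates the two endpoints
```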
\section{A multi subdomain algorithm}\label{sec:multi}
So far we have introduced and analyzed a two-subdomain optimized Schwarz method (OSM). In this section we introduce a multi-subdomain algorithm for the IPH discretization, a natural generalization of the two-subdomain method. In OSMs for classical FEM discretizations, special care often has to be taken at cross-points, that is, nodes shared by more than two subdomains; see \cite{gander2012best,gander2013applicability,gander2014CPDD}. This is not the case for a DG discretization, because subdomains communicate only through intersections of non-zero measure. Therefore the problem with cross-points does not arise, just as at the continuous level, since a cross-point has measure zero.
Let us start by defining the multi-subdomain geometry. We first partition the mono-domain $\Omega$ into $N_s$ subdomains such that the interface $ \Gamma $ between them is a subset of the internal edges $ \mathcal{E} ^0$. More precisely, we denote the subdomains by $\{ \Omega_i^{} \}_{i=1}^{N_s}$ and the interface between two subdomains by \begin{equation*}
\Gamma _{ij}^{} := \partial \Omega_i \cap \partial \Omega_j,
\quad (i \not= j), \end{equation*} and the global interface by \begin{equation*}
\Gamma := \bigcup_{i\not=j} \Gamma _{ij} \subset \mathcal{E} ^0. \end{equation*}
Now the hybridizable formulation of IPH can be written as: find $(u_h,\lambda_h) \in V_h \times \Lambda_h$ such that \begin{equation}
\tilde{a}( (u_h,\lambda_h),(v,\varphi) ) = \dotV{f}{v}_{\mathcal{T}_h}, \quad
\forall (v, \varphi) \in V_h \times \Lambda_h, \end{equation}
where the bilinear form is defined as \begin{equation}
\tilde{a}( (u,\lambda), (v,\varphi) ) := \tilde{a}_ \Gamma (\lambda,
\varphi) + \sum_{i=1}^{N_s} \big( \tilde{a}_i(u_i,v_i) +
\tilde{a}_{i \Gamma }(u_i,\varphi) + \tilde{a}_{i \Gamma }(v_i,\lambda) \big). \end{equation} The only modified bilinear form is $\tilde{a}_{i \Gamma }(\cdot,\cdot)$
since it acts now on $\partial \Omega_i$, that is \begin{equation}
\tilde{a}_{i \Gamma }(u_i,\varphi) :=
\dotS{ \PDif{u_i}{{\boldsymbol n} _i} - \mu u_i }{\varphi}_{\partial \Omega_i}. \end{equation}
\begin{figure}\caption{Example of the interface unknowns $\lambda_{h,i}$, each defined on $\partial \Omega_i \setminus \partial \Omega$.}\label{fig:multiabs}
\end{figure}
Let us focus on two subdomains which share an interface, $ \Gamma _{ij}$. We observe that there are two sub-problems which are communicating through $\lambda_h$ on $ \Gamma _{ij}$. That is \begin{equation*}
\begin{array}{rcl}
\tilde{a}_i(u_i,v_i) + \tilde{a}_{i \Gamma }(v_i,\lambda_h) &=&
\dotV{f}{v_i}_{\Ti{i}}, \quad \forall v_i \in V_{h,i},
\\
\tilde{a}_j(u_j,v_j) + \tilde{a}_{j \Gamma }(v_j,\lambda_h) &=&
\dotV{f}{v_j}_{\Ti{j}}, \quad \forall v_j \in V_{h,j},
\end{array} \end{equation*} and the continuity is imposed using \begin{equation} \label{eq:contII}
\lambda_h = \frac{1}{2\mu} \Big(\mu u_i - \PDif{u_i}{{\boldsymbol n} _i} \Big) +
\frac{1}{2\mu} \Big(\mu u_j - \PDif{u_j}{{\boldsymbol n} _j} \Big),
\quad \text{on} \, \Gamma _{ij}. \end{equation} Now we relax the constraint that $\lambda_h$ is single-valued on $ \Gamma $ and allocate $\lambda_{h,i}$ to each subdomain $\Omega_i$. Each $\lambda_{h,i}$ is defined on $\partial \Omega_i \setminus \partial \Omega$; for an example see Figure \ref{fig:multiabs}. We therefore have twice as many DOFs along $ \Gamma _{ij}$, so the continuity equation (\ref{eq:contII}) must be split into two conditions, one for each $\lambda_{h,i}$. We use the same idea as in Algorithm \ref{algo:OS} and relax the continuity equation in the same fashion:
\begin{equation*}
\frac{1}{1 + \hat{p}} \lambda_{h,i} +
\frac{\hat{p}}{1 + \hat{p}} \lambda_{h,j} =
\frac{1}{2\mu} \Big(\mu u_i - \PDif{u_i}{{\boldsymbol n} _i} \Big) +
\frac{1}{2\mu} \Big(\mu u_j - \PDif{u_j}{{\boldsymbol n} _j} \Big),
\quad (i\not=j). \end{equation*} Here $\hat{p}$ is a parameter which is used for optimization purposes. This suggests the following iterative method to find the pairs $\big\{(u_i,\lambda_{h,i})\big\}_{i=1}^{N_s}$ in parallel:
\begin{algorithm} \label{algo:multi} Let $\big\{(u_i^{(0)},\lambda_{h,i}^{(0)})\big\}_{i=1}^{N_s}$ be a set of initial guesses for all subdomains. Then for $n=1,2,\hdots$ find $\big\{(u_i^{(n)},\lambda_{h,i}^{(n)})\big\}_{i=1}^{N_s}$ such that
\begin{equation} \label{eq:multi-loc}
\begin{array}{rcl}
\tilde{a}_i(u_i^{(n)},v_i) +
\tilde{a}_{i \Gamma }(v_i,\lambda_{h,i}^{(n)} ) &=&
\dotV{f}{v_i}_{\Ti{i}}, \quad \forall v_i \in V_{h,i},
\end{array} \end{equation} and the continuity condition on $ \Gamma _{ij}$ reads \begin{equation} \label{eq:multi-cont}
\frac{1}{1 + \hat{p}} \lambda_{h,i}^{(n)} -
\frac{1}{2\mu} \Big(\mu u_i - \PDif{u_i}{{\boldsymbol n} _i} \Big)^{(n)}
=
- \frac{\hat{p}}{1 + \hat{p}} \lambda_{h,j}^{(n-1)}
+ \frac{1}{2\mu} \Big(\mu u_j - \PDif{u_j}{{\boldsymbol n} _j} \Big)^{(n-1)}. \end{equation} \end{algorithm}
At convergence we obtain $(1- \hat{p})(\lambda_{h,i} - \lambda_{h,j}) = 0$. Therefore if $\hat{p} \not = 1$, we recover that $\lambda_h$ is single valued.
\begin{remark}
We can make an ansatz for the optimal choice of $\hat{p}$ similar
to the two-subdomain case. The transmission condition
(\ref{eq:contII}) can be viewed as a Robin transmission condition at
the continuous level. The Robin parameter is $\mu^\star :=
\frac{1-\hat{p}}{1+\hat{p}} \mu$. In order to converge fast we
should set $\mu^\star = O(h^{-1/2})$. This corresponds to the choice
$\hat{p} := \frac{1-\sqrt{h}}{1+\sqrt{h}} < 1$. \end{remark}
\subsection{OSM as a preconditioner}
We show now how one can use OSM as a preconditioner for a Krylov subspace method. We start by writing Algorithm \ref{algo:multi} at the algebraic level. We first partition the DOFs associated with $u_h \in V_h$ into \begin{equation*}
{\boldsymbol u} := ( {\boldsymbol u} _1, {\boldsymbol u} _2, \hdots, {\boldsymbol u} _{N_s} )^\top. \end{equation*} Then we form the DOFs associated with the interface unknowns $\{ \lambda_{h,i} \}_{i=1}^{N_s}$ by \begin{equation*}
\boldsymbol \ell := ( {\boldsymbol \lambda} _{1}, {\boldsymbol \lambda} _{2}, \hdots, {\boldsymbol \lambda} _{N_s} )^\top, \end{equation*} and define the augmented DOFs by ${\boldsymbol w} := ( {\boldsymbol u} , \boldsymbol \ell )^\top $.
Algorithm \ref{algo:multi} can be written at the algebraic level as \begin{equation}
\begin{array}{rcl}
\underbrace{
\MATT{K_{u u}}{K_{u \ell}}{K_{\ell u}}{K_{\ell \ell}}
}_{K}
{\boldsymbol w} ^{(n)}
=
\underbrace{
\MATT{0}{0}{L_{\ell u}}{L_{\ell \ell}}
}_{L}
{\boldsymbol w} ^{(n-1)}
+
\underbrace{\Arr{{\boldsymbol f} }{0}}_{\BO{g}}.
\end{array}
\label{eq:multi-alg} \end{equation} Note that the left-hand side matrix $K$ consists of decoupled blocks, one for each pair $(u_{h,i},\lambda_{h,i})$. Therefore we can ``invert'' the subdomain blocks independently and in parallel. This gives a parallel preconditioner for a Krylov subspace method applied to the system $ (K-L) {\boldsymbol w} = \BO{g} $.
Since the stationary iterates (\ref{eq:multi-alg}) converge with contraction factor $\rho \leq 1 - O(\sqrt{h})$, we expect a preconditioned Krylov subspace method to gain another square root in the contraction factor, that is, $\rho \leq 1 - O(h^{1/4})$. This is indeed observed in the numerical experiments. The method is therefore more attractive than the CG method with an additive Schwarz preconditioner, whose contraction factor is $\rho \leq 1 - O(\sqrt{h})$.
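The preconditioning step can be sketched with SciPy's GMRES, applying the inverse of the block matrix $K$ as preconditioner for the system $(K-L){\boldsymbol w} = \BO{g}$. The matrices below are synthetic stand-ins chosen so that the splitting contracts, not the assembly from (\ref{eq:multi-alg}).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(3)
n = 40
# Synthetic stand-ins (assumptions): K a well-conditioned s.p.d.-dominant
# matrix playing the role of the decoupled subdomain blocks, L a weak coupling.
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)
L = 0.1 * rng.standard_normal((n, n))
g = rng.standard_normal(n)

# "Inverting" K stands for the parallel subdomain solves; in practice this
# would be a block-wise factorization, never a dense inverse.
K_inv = np.linalg.inv(K)
P = LinearOperator((n, n), matvec=lambda v: K_inv @ v)

w, info = gmres(K - L, g, M=P)          # Krylov method on (K - L) w = g
res = np.linalg.norm((K - L) @ w - g) / np.linalg.norm(g)
```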
\section{Numerical experiments} \label{sec:num}
We perform numerical experiments on the model problem \begin{equation}
\begin{array}{rcll}
(\eta-\Delta) u &=& f,\quad & \textrm{in $\Omega$},\\
u &=& 0, & \textrm{on $\partial \Omega$},
\end{array} \end{equation} where $\eta=1$ and $\Omega$ is either a unit square, i.e.~$(0,1)^2$, or an L-shaped (non-convex) domain. The interface is such that it does not cut through any element, therefore $ \Gamma \subset \mathcal{E} $. We use $\mathbb{P}^1$ elements and $\alpha = c {(k+1)(k+2)}$ where $c>0$ is a constant independent of $h$ and $k=1$ (polynomial degree). The algorithms are implemented using a FORTRAN90 library for DG methods called {\tt
GDG90}. The codes are accessible at \begin{center}
\url{http://unige.ch/~hajian/gdg90/} \end{center}
\subsection{Minimum and maximum eigenvalues of $B_i$}
Before performing convergence experiments with Algorithms \ref{algo:addschwarz} and \ref{algo:OS}, let us validate numerically the asymptotic behavior of the minimum and maximum eigenvalues of the operator $B_i$, i.e.~inequality (\ref{eq:Best}). To do so, we measure the minimum and maximum eigenvalues of $C_i^{} := \tilde{A}_ \Gamma ^{-1/2} B_i^{} \tilde{A}_ \Gamma ^{-1/2}$. We generate a sequence of quasi-uniform triangulations and construct the operators $B_i$ and $\tilde{A}_{ \Gamma }$ for each triangulation. We denote the size of each operator by $N$, i.e.~$B_i \in \mathbb{R}^{N \times N}$; note that $1/h \propto \sqrt{N}$ as $h$ goes to zero.
According to (\ref{eq:CestL}), the minimum eigenvalue of $C_i$ is bounded from below independently of the mesh size. This can be seen from Table \ref{tab:eigC}. \begin{table}
\caption{Minimum and maximum eigenvalues of $C_i$.}
\label{tab:eigC}
\centering
\begin{tabular}{l|cccccc}
$\sqrt{N}$ & 6 & 13 & 26 & 55 & 112 & 225
\\
\hline
$\sigma_{\min}$ & 0.295 & 0.288 & 0.286 & 0.286 & 0.286 & 0.286
\\
$\sigma_{\max}$ & 0.335 & 0.415 & 0.457 & 0.478 & 0.489 & 0.494
\end{tabular} \end{table} For the maximum eigenvalue of $C_i$, observe that $\sigma_{\max}$ is less than $\frac{1}{2}$ and increasing. To see the growth rate we plot $\frac{1}{2} - \sigma_{\max}$ in Figure \ref{fig:eigC}; it decreases like $1/\sqrt{N} = O(h)$ as $N$ goes to infinity. \begin{figure}
\caption{Behavior of $(\frac{1}{2} - \sigma_{\max})$ versus total
number of unknowns, $N$.}
\label{fig:eigC}
\end{figure} This is in agreement with (\ref{eq:CestU}).
\subsection{Two subdomain case}
In this section we compare the contraction factors of the two Schwarz algorithms with respect to their dependence on $h$. We run both algorithms on a sequence of unstructured meshes and measure the number of iterations required to reduce the relative error to $tol := 1\mbox{\sc{e}-}10$ while refining the mesh, that is \begin{equation*}
{\norm{ u_h^{(n)} - u_h }_{0}} \leq tol \, \norm{f}_{0}. \end{equation*} This level of accuracy is not necessary in practice, since the error between the exact and approximate solution, $\norm{u-u_h}_0$, is much larger, and one can usually terminate the iteration once the accuracy level of the method is reached. The domain is partitioned into two by a non-straight interface; see Figure \ref{fig:ddmesh} (left).
As we see in Figure \ref{fig:h-dep} (left),
\begin{figure}
\caption{Convergence of Schwarz methods on a square domain
(left) and L-shape domain (right).}
\label{fig:h-dep}
\end{figure}
on a square domain the number of iterations for Algorithm \ref{algo:addschwarz} grows like $1/h$, which is equivalent to $\rho \leq 1 - O(h)$, while for Algorithm \ref{algo:OS} it behaves like $1/\sqrt{h}$, or in other words $\rho \leq 1 - O(\sqrt{h})$, which illustrates our analysis well. The same holds for the L-shaped domain; see Figure \ref{fig:h-dep} (right).
\subsection{Multi-subdomain case}
We now show some numerical results for the multi-subdomain algorithm. The subdomains are formed by a coarse triangulation of the domain, which we call $\Ti{H}$. We consider a nested fine mesh, so that $\Ti{H} \subset \mathcal{T}_h$; an example is given in Figure \ref{fig:ddmesh} (right). We consider here {\it four} subdomains which share a cross-point, and as in the two-subdomain case we measure the number of iterations necessary to reach the desired tolerance. We observe in Table \ref{tab:conv4sub} \begin{table}
\caption{Convergence of OSM for four subdomains} \centering
\label{tab:conv4sub}
\begin{tabular}{l|ccccc}
Mesh size & $h_0$ & $h_0/2$ & $h_0/4$ & $h_0/8$ & $h_0/16$
\\
\hline
\# iterations & 25 & 35 & 57 & 82 & 117
\end{tabular} \end{table}
that the contraction factor is asymptotically $\rho = 1 - O(\sqrt{h})$: the iteration-count ratios $82/57\approx 1.43$ and $117/82 \approx 1.42$ are close to $\sqrt{2}$.
\subsection{OSM as a preconditioner}
We now use the optimized Schwarz method as a preconditioner for GMRES with the tolerance $tol := 1\mbox{\sc{e}-}6$. To provide a qualitative comparison we also consider the widely used conjugate gradient method with a one-level additive Schwarz preconditioner applied to the original system (\ref{eq:linsys}). We consider the 16 subdomains illustrated in Figure \ref{fig:ddmesh} (right). We observe in Table \ref{table:gmres2} \begin{table}
\caption{Number of iterations required by OSM-GMRES and PCG
to reach the desired tolerance.}
\label{table:gmres2}
\centering
\begin{tabular}{l|ccccc}
Mesh size & $h_0$ & $h_0/2$ & $h_0/4$ & $h_0/8$ & $h_0/16$
\\
\hline
OSM-GMRES & 20 & 52 & 60 & 72 & 87
\\
PCG & 14 & 38 & 55 & 104 & 154
\end{tabular} \end{table}
that the number of iterations for OSM-GMRES grows like $O(h^{-1/4})$. This is because Krylov methods often gain another square root in their contraction factor compared with the stationary iteration. Therefore the contraction factor of OSM-GMRES is $\rho = 1 - O(h^{1/4})$: the ratios $72/60 \approx 1.2$ and $87/72 \approx 1.2$ are close to $2^{1/4}$. For the conjugate gradient method with additive Schwarz preconditioner, we have $\rho = 1 - O(\sqrt{h})$.
We would like to comment on the size of the augmented system. For the mesh size $h_0/16$ we have 19,032 DOFs for the primal variable $u_h$ and 1,296 DOFs for the interface unknowns. The augmented system is therefore only slightly larger than the original system.
\section{Conclusion}
We have presented and analyzed classical and optimized Schwarz methods for IPH discretizations. The interesting fact is that both use Robin transmission conditions, but we proved that for an arbitrary two-subdomain decomposition the classical Schwarz algorithm has a convergence factor $1 - O(h)$, while the optimized one has a contraction factor $1-O(\sqrt{h})$. This is because the IPH discretization imposes a bad choice of the Robin parameter on the method. We then generalized the definition of the algorithms to the multi-subdomain case, and showed by numerical experiments that our theoretical results still hold. We finally illustrated the potential benefit that one obtains using OSM as a preconditioner compared to PCG.
\Appendix \label{appendix} \section*{Proof of inequalities for ${\boldsymbol \theta}(\cdot)$} In this part we provide some proofs regarding the extension-by-zero operator ${\boldsymbol \theta}_i(\cdot)$. First we recall inverse and mass-matrix inequalities; see \cite[Appendix B]{widlund} and the references therein. All constants are independent of $h$. Let $w \in \mathbb{P}^1(K)$, where $K$ is a simplex in $\mathbb{R}^d$. Then the inverse inequality \begin{equation} \label{eq:invineq}
\norm{ \nabla w }_{K} \leq \frac{c}{h} \norm{ w }_{K} \end{equation} holds. Let $\BO{w}$ be the DOFs of $w$ and $M_d$ be the corresponding mass matrix. Then we have \begin{equation*}
c_1 \, h^{d} \, \BO{w}^{\top} \BO{w} \leq \BO{w}^{\top} M_d \BO{w} \leq
c_2 \, h^{d} \, \BO{w}^{\top} \BO{w}. \end{equation*} \begin{lemma} Let $\varphi \in \Lambda_h$ and ${\boldsymbol \theta}_i(\varphi)$ be its extension by zero into $\Omega_i$. For an element $K$ which shares an edge with the interface, we have \begin{equation*}
\begin{array}{lcl}
\norm{ \nabla {\boldsymbol \theta}_i( \varphi ) }_{K}^{2} &\leq& {C_1}{h^{-1}}
\norm{\varphi}_{e}^{2}, \\ \norm{ {\boldsymbol \theta}_i( \varphi ) }_{K}^{2}
&\leq& C_2 \, h \norm{\varphi}_{e}^{2}.
\end{array} \end{equation*} \end{lemma}
\begin{proof}
Let ${\boldsymbol \varphi} _e := (\varphi_1, \varphi_2)$ be the DOFs of $\varphi$ on
the edge shared with the interface. Moreover let $w =
{\boldsymbol \theta}_i(\varphi)|_{K}$. Then we have $\BO{w} = (\varphi_1,
\varphi_2, 0)$. For the first inequality we invoke the inverse
inequality. Assuming the mesh is quasi-uniform, i.e.~$h_e \approx
h_K \approx h$, we get
\begin{equation*}
\begin{array}{rcl}
\norm{ \nabla w }_{K}^{2} \leq \frac{c^2}{h^2} \norm{ w }^{2}_{K}
&\leq& c_1 h^{d-2} ( \varphi_1^2 + \varphi_2^2 + 0)
\\
&\leq& c_2 h^{d-2} h_e^{-(d-1)} {\boldsymbol \varphi} _e^\top M_{d-1} {\boldsymbol \varphi} _e
\\
&\leq& c_3 h^{-1} {\boldsymbol \varphi} _e^\top M_{d-1} {\boldsymbol \varphi} _e
\\
&=& c_3 h^{-1} \norm{ \varphi }_{e}^{2}.
\end{array}
\end{equation*}
The proof for the second inequality follows the same steps. \quad \end{proof}
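The two element inequalities of the lemma above can be checked concretely. The following sketch is our own construction, not part of the paper: it uses the standard $\mathbb{P}^1$ element matrices on a right triangle with legs of length $h$ along the axes (the 2D stiffness matrix is $h$-independent, the element mass matrix is $|K|/12$ times the usual pattern, and the edge mass matrix on the interface edge of length $h$ is $\frac h6\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$). The constants $C_1=20$ and $C_2=1$ are generous choices for this geometry, not sharp values.

```python
import random

def norms(p, q, h):
    """Squared norms for w with DOFs (p, q, 0) on the right triangle with
    vertices (0,0), (h,0), (0,h); the interface edge e joins the first two
    vertices.  Quadratic forms of the P1 stiffness matrix S (h-independent
    in 2D), the element mass matrix M, and the edge mass matrix M_e."""
    grad_sq = p * p - p * q + 0.5 * q * q               # w^T S w
    mass_K = (h * h / 24.0) * (2*p*p + 2*p*q + 2*q*q)   # w^T M w
    mass_e = (h / 6.0) * (2*p*p + 2*p*q + 2*q*q)        # phi^T M_e phi
    return grad_sq, mass_K, mass_e

rng = random.Random(1)
samples = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(1000)]
```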
\begin{lemma} Let $\varphi \in \Lambda_h$ and ${\boldsymbol \theta}_i(\varphi)$ be its extension by zero into $\Omega_i$. Then \begin{equation*}
\norm{ \jump{ {\boldsymbol \theta}_i( \varphi ) } }_{ \mathcal{E} _i}^{2} \leq
C \norm{ \varphi }_{ \Gamma }^{2}, \end{equation*} where $C \geq 1$. \end{lemma}
\begin{proof} We start by those edges which are part of the interface, see Figure \ref{fig:extzero}, e.g.~$e_1$ and $e_3$. We have $$
\sum_{e \in \Gamma } \norm{ \jump{ {\boldsymbol \theta}_i( \varphi ) } }_{e}^2 =
\sum_{e \in \Gamma } \norm{ {\boldsymbol \theta}_i( \varphi ) }_{e}^{2}
= \sum_{e \in \Gamma } \norm{ \varphi }_{e}^2 = \norm{ \varphi }_{ \Gamma }^2, $$
which shows already that $C \geq 1$. Consider those edges $e \in \mathcal{E} _i$ that are not on the interface but belong to an element which shares an edge with the interface, e.g.~$e^\ast := \partial K_1 \cap \partial K_2$ in Figure \ref{fig:extzero}. Let ${\boldsymbol \varphi} _e := (\varphi_1, \varphi_2)$ be the DOFs of $\varphi$ on $e_2$ and assume $\varphi_2$ is the DOF which is also located on $e^\ast$. Then we have \begin{equation*}
\norm{ \jump{ {\boldsymbol \theta}_i( \varphi ) } }^{2}_{e^\ast}
= (\varphi_2,0) M_{d-1} (\varphi_2,0)^{\top}
\leq c h_{e^{\ast}}^{d-1} \varphi_2^{2}
\leq c h_{e^{\ast}}^{d-1} ( \varphi_2^{2} + \varphi_1^{2} )
\leq c_1 \norm{ \varphi }_{e}^{2}, \end{equation*} where we again used the quasi-uniformity of the mesh ($h_e \approx h \approx h_{e^\ast}$). The other case would be $K_1$ and $K_3$ share an edge, for which we can use the same argument. For other edges $ \jump{
{\boldsymbol \theta}_i(\varphi) } $ is simply zero. \quad \end{proof}
\end{document}
\begin{document}
\begin{abstract} We deal with a random graph model where at each step, a vertex is chosen uniformly at random, and it is either duplicated or its edges are deleted. Duplication occurs with a fixed probability. We analyse the limit distribution of the degree of a fixed vertex, and derive a.s.\ asymptotic bounds for the maximal degree. The model shows a phase transition phenomenon with respect to the probabilities of duplication and deletion. \end{abstract}
\maketitle \thispagestyle{empty}
\noindent {\small {\it Keywords:} scale free, duplication, deletion,
random graphs, maximal degree. }
\section{Introduction}
For researchers in mathematical biology it is evident that duplication of the information in the genome is a dominant evolutionary force in shaping biological networks. On the other hand, due to injuries, deletion of edges or vertices is also a phenomenon which is natural to consider. We analyse the random graph model that was described in a recent paper of Th\"ornblad \cite[2014+]{Th}. This model evolves in discrete time steps, and it has a parameter $0<\theta<1$.
We start from a single vertex without edges. At each step, we choose a vertex $v$ uniformly at random. With probability $\theta$ we duplicate $v$; that is, we add a new vertex and connect it to the neighbours of $v$ and to $v$ itself with single edges. Otherwise, with probability $1-\theta$, all edges of $v$ are deleted (the vertex itself stays in the graph, and has the chance to get new edges later on).
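For concreteness, one realization of these dynamics can be sketched in code. The following Python snippet is ours, not from the paper, and its names are arbitrary; it simulates the model with an explicit adjacency structure.

```python
import random

def evolve(n_steps, theta, rng):
    """Simulate the duplication-deletion dynamics for n_steps steps,
    starting from a single isolated vertex.  The graph is stored as an
    adjacency dict {vertex: set of neighbours}."""
    adj = {0: set()}
    next_label = 1
    for _ in range(n_steps):
        v = rng.choice(list(adj))          # uniformly chosen vertex
        if rng.random() < theta:
            # duplication: join the new vertex to v and to all neighbours of v
            u = next_label
            next_label += 1
            adj[u] = adj[v] | {v}
            for w in adj[u]:
                adj[w].add(u)
        else:
            # deletion: all edges of v are removed, v itself stays
            for w in adj[v]:
                adj[w].discard(v)
            adj[v] = set()
    return adj

adj = evolve(300, 0.6, random.Random(7))
```

A quick invariant one can check on the output: the graph is always a disjoint union of cliques, so the closed neighbourhoods of any two adjacent vertices coincide.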
As presented in \cite{Th}, and as we will see below, the asymptotic behaviour of the model depends on the value of $\theta$, and a phase transition occurs. Naturally, our results will also differ across the regimes of the duplication probability. The case $0<\theta<1/2$ is the subcritical case, where deletion is more likely; here, as we will prove, the maximal degree is almost surely of the order of the logarithm of the current number of vertices. In the critical case ($\theta=1/2$) the maximal degree grows faster, namely like the square of the logarithm of the number of vertices. Finally, in the supercritical case, when $1/2<\theta<1$ and duplication is dominant, the maximal degree will be compared, in a certain sense, to the number of vertices itself (without logarithm). We remark that a similar phase transition is present in some other random graph models where deletion (but no duplication) is introduced; see e.g.\ the work of Vallier \cite[2013]{vallier}.
The random graph defined above consists of disjoint cliques. The vertices form separate clusters, and two vertices are connected if and only if they belong to the same cluster. Due to this simple structure, it may be interesting from the point of view of coagulation--fragmentation models or population dynamics. Champagnat, Lambert and Richard described certain properties of a continuous version of this model, the so-called splitting trees with mutations, where the clusters correspond to different alleles of a gene \cite[2012]{amaury1, amaury3}, \cite[2013]{amaury2}.
This shows that the current model has no fine structure as a graph. In the paper \cite[2014+]{dup} we described the connection of the critical case of the model analysed here to a model which is also built on duplication and deletion, but has a nontrivial graph structure. That model also turns out to be highly clustered: as one can expect, due to the duplication steps some dense clusters evolve, while edges between clusters are rare. Such highly clustered networks come up in mathematical biology, e.g.\ for modelling protein--protein interaction networks. Thus graph models with duplication (but with a richer graph structure than the present one) may also be found in the literature. In most of those models edges are never deleted; instead, only some randomly chosen edges of the chosen vertex are duplicated, and some extra random edges may be added to the new vertex (Kim et al. \cite[2002]{kim}, Pastor-Satorras, Smith and Sol\'e \cite[2003]{pastor}, Chung et al. \cite[2003]{chung}, Bebek et al. \cite[2006]{bebek}). However, these papers did not contain mathematically rigorous arguments, and some results of the earlier ones (stating that the degree distribution is polynomially decaying with an exponential cutoff) were refuted by the later ones. Recently, Hermann and Pfaffelhuber \cite[2014+]{pfaff} have proved several results on the frequency of isolated vertices and cliques, and also on the evolution of the degree of a fixed vertex in the initial graph. Various other models were also introduced, where the choice of the duplicated vertex is not uniform but depends on the degrees (Jordan \cite[2011]{jordan}, Cohen, Jordan and Voliotis \cite[2010]{cohen}, Farczadi and Wormald \cite[2014+]{farczadi}) or on the state of a hidden Markov chain (Hamdi, Krishnamurthy and Yin \cite[2013+]{hamdi}).
Notice that in this duplication--deletion model vertices of larger degree are more likely to increase their degree, because the probability that one of their neighbours is duplicated is larger. On the other hand, the probability of decreasing their degree is also larger. Nevertheless, this is a kind of preferential attachment phenomenon. Preferential attachment models have been popular for modelling web graphs and biological networks since the seminal paper of Albert and Barab\'asi \cite[1999]{ba}, and also from a theoretical point of view. It is worth mentioning the model-free approaches of Ostroumova, Ryabchenko and Samosvat \cite[2013]{ostr} and Dereich and Ortgiese \cite[2014]{marcel}. However, for example, since the degree of a vertex can decrease in a single deletion step, our model does not fit into those frameworks, in which the degree of a fixed vertex cannot decrease.
The maximal degree was also investigated in certain preferential attachment models; see e.g.\ \cite[M\'ori, 2005]{M05}. Bubeck, Mossel and R\'acz \cite[2014]{bubeck} showed that the seed graph may have an influence on the limiting distribution of the maximal degree in some kinds of preferential attachment models. Since every vertex of the initial configuration has all its edges deleted after finitely many steps, this phenomenon does not occur in the current model.
Preferential attachment models are often investigated due to their scale-free property: the proportion of vertices of degree $d$ tends to some constant $c_d$ in some sense, and $c_d$ decays polynomially as $d\rightarrow \infty$. The current model has a rather different asymptotic degree distribution, as the results of Th\"ornblad \cite{Th} and \cite{dup} show. Since we will use them later on, we summarize these results as follows. In accordance with \cite{Th} we introduce \[ \beta=\frac{\theta}{2\theta-1}\,,\quad \gamma=\frac{1-\theta}{\theta}\,. \]
\begin{thmcite}\cite[B--M, 2014+, for $\theta=1/2$]{dup}
\cite[Th\"ornblad, 2014+, for $\theta\neq 1/2$]{Th}. Let $X[n,k]$ denote the proportion of vertices of degree $k$ after $n$ steps in the duplication--deletion model defined above. Then we have \[ X[n,k]\rightarrow c_k \quad \text{almost surely as } n\rightarrow \infty, \] where $(c_k)_{k=1}^{\infty}$ is the unique sequence of positive numbers satisfying the following equations. \[ c_0=\frac{1-\theta}{1+\theta}(1+c_1); \qquad c_k=\frac{k+1}{k+1+\theta}\big(\theta c_{k-1}+(1-\theta)c_{k+1}\big) \quad (k\geq 1). \] Furthermore, $\sum_{k=0}^{\infty} c_k=1$. \begin{itemize} \item Subcritical case. If $0<\theta<1/2$, then \[ c_k=\gamma^{-k-1}\int_0^1\frac{t^{k+1}(1-t)^{-1-\beta}} {(1-\gamma^{-1}t)^{1-\beta}}\,dt\quad (k\geq 0), \text{ and } \] \[ c_k\sim(-\beta)^{-1}(1-\beta)^{-\beta}\,\Gamma(1-\beta)\gamma^{-k} k^{\beta}\quad\text{ as } k\rightarrow\infty. \] \item Critical case. For $\theta=1/2$ we have \[ c_k=(k+1)\int_0^\infty\frac{t^k e^{-t}}{(1+t)^{k+2}}\,dt \quad (k\geq 0), \text{ and } \] \[ c_k\sim (e\pi)^{1/2}\,k^{1/4}\,e^{-2\sqrt{k}}\quad \text{ as } k\rightarrow\infty. \] \item Supercritical case. If $1/2<\theta<1$, then \[ c_k=\gamma\int_0^1\frac{t^{k+1}(1-t)^{\beta-1}}{(1-\gamma t)^{1+\beta}}\,dt \quad (k\geq 0), \text{ and } \] \[ c_k\sim\gamma\,\beta^{\beta}\,\Gamma(\beta+1)k^{-\beta}\quad \text{ as } k\rightarrow\infty. \] \end{itemize} \end{thmcite} (Note that our $c_k$ corresponds to Th\"ornblad's $d_{k+1}$.)
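The integral representations in Theorem A can be checked numerically against the defining recursion. The sketch below is ours (not from the papers cited): it evaluates the critical-case formula for $c_k$ by composite Simpson quadrature with a truncated upper limit, and verifies the recursion for $\theta=1/2$; all names are arbitrary.

```python
import math

def c_crit(k, T=60.0, n=60000):
    """c_k in the critical case theta = 1/2, via Simpson's rule on
    c_k = (k+1) * int_0^inf t^k e^{-t} / (1+t)^{k+2} dt,
    truncated at t = T (the neglected tail is below e^{-T})."""
    h = T / n
    def f(t):
        return t**k * math.exp(-t) / (1.0 + t)**(k + 2)
    s = f(0.0) + f(T)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return (k + 1) * s * h / 3.0

theta = 0.5
c = [c_crit(k) for k in range(8)]
```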
Therefore the asymptotic degree distribution decays exponentially in the subcritical case, polynomially in the supercritical case, and slower than exponential but faster than polynomial in the critical case.
The paper is built up as follows. In Section \ref{sth} we describe useful variants of the model and analyse the evolution of the number of vertices. Section \ref{fixed} contains our results on the asymptotic behaviour of the degree of a fixed vertex. This is used in Section \ref{maximal}, where we give bounds for the maximal degree, which are valid with probability $1$. It will follow that the index of the vertex with maximal degree tends to infinity, that is, there is no persistent hub in this model.
\section{Variants of Th\"ornblad's model} \label{sth}
This is a discrete time model. Let us start from a single vertex. The graph is modified in two ways: at every step a vertex is selected at random, with equal probability, then this vertex is either duplicated or deleted. Duplication means that a new vertex is added to the graph, and it is connected to the selected vertex and its neighbours. Deletion means that the edges of the selected vertex get deleted, but not the vertex itself. Every step, independently of the past, is a duplication with probability $\theta$, and a deletion with probability $1-\theta$ $(0<\theta<1)$. This model is called Version 1 or Th\"ornblad's model \cite{Th}.
Further versions differ from Version 1 only by time transforms.
In Version 2 the development of the graph is slowed down. Let $N_{n-1}$ be the number of vertices of the graph after the $(n-1)$th step. At the $n$th step, the graph is not modified with probability $1-N_{n-1}/n$. Otherwise each existing vertex has equal probability to be selected; the selected vertex (exactly one) is duplicated or deleted with probabilities $\theta$ and $1-\theta$, resp. All the above randomizations are independent of each other. As we will see, in this way the graph does not change in the majority of the steps.
Version 3 is defined in continuous time. Every vertex is given two clocks at its birth, which ring according to independent homogeneous Poisson processes with rates $\theta$ and $1-\theta$. When the first clock rings, the vertex is duplicated, while the second clock determines the deletion times. In this model, steps occur at an exponentially accelerating pace.
If we focus only on the moments when something happens, all versions look identical. This makes it possible to choose the most convenient version for each proof.
In all versions, at every moment, the graph is a disjoint union of complete graphs (cliques). This is obviously true in the beginning, and it is easy to see that neither duplication nor deletion can break this property.
\begin{lemma}\label{size} Let us denote the number of vertices after $n$ steps by $N_n$, and in Version 3, the size of the graph at time $t$ will be denoted by $N(t)$. Then a.s. \begin{align*} N_n&\sim \theta n &&\text{in Version $1$};&&\\ N_n&\sim \zeta\,n^{\theta}&& \text{in Version $2$, where $\zeta$ is a positive random variable};&&\\ N(t)&\sim \eta\,e^{\theta t}&& \text{in Version $3$, where $\eta$ is a positive random variable}.&& \end{align*} \end{lemma} \textbf{Proof.} The proof for Version 1 is obvious.
Version 2. Let $\mathcal F_n$ denote the $\sigma$-field generated by the first $n$ steps. At step $n$ the number of vertices increases by $1$ with (conditional) probability $\theta N_{n-1}/n$. Thus \[ E\bigl(N_n\bigm\vert\mathcal F_{n-1}\bigr)=\Big(1+\frac{\theta}{n} \Big)N_{n-1}. \] Introduce \[ \varkappa_n=\prod_{i=1}^n \Big(1+\frac{\theta}{i}\Big)^{-1}= \frac{\Gamma(1+\theta)\Gamma(n+1)}{\Gamma(n+1+\theta)} \sim \Gamma(1+\theta)n^{-\theta}. \] Then $(\varkappa_n N_n,\,\mathcal F_n)$ is a nonnegative martingale, which is known to be a.s. convergent. Let $\zeta=\lim_{n\to\infty}n^{-\theta}N_n$.
We still have to show that $\zeta>0$. Let $R_n=(N_n-1)^{-1}$, if $N_n\ge 2$, and $R_n=1$ otherwise. This time let \[ \kappa_n=\prod_{i=1}^n\left(1-\frac{\theta}{i}I(N_{i-1}\ge 2)\right)^{\!-1}\asymp n^\theta \] as $n\to\infty$. Here $I(\,\cdot\,)$ stands for the indicator of the event in brackets. Clearly, \[
E\bigl(R_n\bigm|\mathcal F_{n-1}\bigr)=\frac{1}{N_{n-1}}\cdot\frac{\theta N_{n-1}}{n}+\frac{1}{N_{n-1}-1}\Big(1-\frac{\theta N_{n-1}}{n}\Big) =\frac{1}{N_{n-1}-1}\Big(1-\frac{\theta}{n}\Big) \] on the event $\{N_{n-1}\ge 2\}\in\mathcal F_{n-1}$, and $=1$ on its complement. Hence $(\kappa_nR_n,\,\mathcal F_n)$ is a nonnegative martingale. Consequently, $n^{\theta}/N_n$ converges a.s., and its limit is obviously $1/\zeta$.
Version 3. $\big(N(\theta^{-1}t)\big)_{t\ge 0}$ is a rate-one Yule process (see e.g. \cite{karlin, yule}), thus $N(t)$ is geometrically distributed, namely $\mathrm{Geom} \big(e^{-\theta t}\big)$, and $N(t)\sim\eta\,e^{\theta t}$, where $\eta$ is an exponential random variable of expected value $1$. \qed
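In Version 1 the vertex count increases by one exactly at the duplication steps, so $N_n-1$ is binomially distributed with parameters $n$ and $\theta$, which makes $N_n\sim\theta n$ transparent. A minimal simulation sketch (ours, with hypothetical names) illustrating this:

```python
import random

def version1_size(n_steps, theta, rng):
    """Number of vertices after n_steps of Version 1: the count grows by
    one precisely when the step is a duplication (deletion only removes
    edges, never vertices)."""
    n_vertices = 1
    for _ in range(n_steps):
        if rng.random() < theta:
            n_vertices += 1
    return n_vertices

rng = random.Random(2014)
n = 200_000
ratios = {theta: version1_size(n, theta, rng) / n for theta in (0.3, 0.5, 0.75)}
```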
\section{The degree process of a fixed vertex} \label{fixed}
In this section we consider Version 3, because, as we will see, that is the most natural choice for the individual degree processes. Of course, the results of this section can easily be transferred to Version 1, by using Lemma \ref{size}.
Let the vertices be labelled by $1,\,2,\,\dots$ in the order of birth, and let $d_i(t)$ denote the degree of vertex $i$ at time $t$.
\begin{theorem}\label{degstac} For every $i=1,2,\dots$ we have \[ \lim_{t\to\infty}P\big(d_i(t)=k\big)=q_k,\quad k=0,\,1,\,\dots, \] where \begin{equation}\label{stacmego} q_0=\gamma(1-c_0),\quad q_k=c_{k-1}-\gamma c_k,\ k=1,2,\dots, \end{equation} and the sequence $(c_k)$ is defined in Theorem A. Here $q_k>0$, $k=0,1,\dots$, and $\sum_{k=0}^\infty q_k=1$. \end{theorem} \textbf{Proof.} Let us fix a vertex. Clearly, its degree is a continuous time Markov process with infinitesimal generator \[ \mathbf{M}=\left[ \begin{matrix} 1-\theta&0&\dots\\ 1-\theta&0&\dots\\ \vdots&\vdots \end{matrix}\right] +\left[ \begin{matrix} -1 & \theta \\ 1-\theta & -2 & 2\theta\\
& 2(1-\theta) & -3 & 3\theta\\
& & 3(1-\theta) & -4 & 4\theta \\
& & &\ddots&\ddots&\ddots \end{matrix}\right]. \] The process is positive recurrent, because deletion cuts the degree back to $0$ at a constant rate. Hence it has a stationary distribution $q=(q_0,\,q_1,\,\dots)$, which is the unique discrete distribution satisfying $q\mathbf M=0$ \cite{karlin}. Thus, \begin{equation}\label{stac} q_0=(1-\theta)(1+q_1),\quad q_k=\frac{k\theta q_{k-1}+(k+1)(1-\theta)q_{k+1}}{k+1}\,,\ k\ge 1. \end{equation}
From Theorem A it follows that the numbers $q_k$ in \eqref{stacmego} satisfy \eqref{stac}. They sum up to $1$, because \[ \sum_{k=0}^\infty q_k=\gamma(1-c_0)+\sum_{k=1}^\infty(c_{k-1}- \gamma c_k)=\gamma+(1-\gamma)\sum_{k=0}^\infty c_k=1. \] Finally, their positivity follows from the integral form of $c_k$, which can be found in \cite{dup} for the critical case, and in \cite{Th} for the subcritical and supercritical cases; see Theorem A. Namely, we immediately obtain in the subcritical case \[ q_k=\gamma^{-k}\int_0^1\frac{t^k (1-t)^{-\beta}}{(1-\gamma^{-1}t)^{1-\beta}}\,dt; \] and in the supercritical case \[ q_k=\gamma\int_0^1\frac{t^k(1-t)^{\beta-1}} {(1-\gamma t)^\beta}\,dt \] for $k\ge 1$. In the critical case, by equation \eqref{stac} and partial integration we get \begin{align*} q_k&=k\int_0^\infty\frac{t^{k-1} e^{-t}}{(1+t)^{k+1}}\,dt- (k+1)\int_0^\infty\frac{t^k e^{-t}}{(1+t)^{k+2}}\,dt\\ &=k\int_0^\infty\frac{t^{k-1} e^{-t}}{(1+t)^{k+1}}\,dt+ \left[\frac{t^k e^{-t}}{(1+t)^{k+1}}\right]_0^\infty -\int_0^\infty\frac{kt^{k-1} e^{-t}-t^k e^{-t}}{(1+t)^{k+1}}\,dt\\ &=\int_0^\infty\frac{t^k e^{-t}}{(1+t)^{k+1}}\,dt.\qed \end{align*}
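The relation between $q_k$ and $c_k$ can also be verified numerically from the integral forms. The sketch below is ours: it takes the supercritical case $\theta=3/4$ (so $\gamma=1/3$, $\beta=3/2$), evaluates the integrals by composite Simpson quadrature, and checks $q_k=c_{k-1}-\gamma c_k$; function names are arbitrary.

```python
theta = 0.75
gamma = (1 - theta) / theta          # = 1/3
beta = theta / (2 * theta - 1)       # = 3/2

def simpson(f, a, b, n=50_000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def c_sup(k):
    # supercritical c_k from Theorem A
    return gamma * simpson(
        lambda t: t**(k + 1) * (1 - t)**(beta - 1) / (1 - gamma * t)**(1 + beta),
        0.0, 1.0)

def q_sup(k):
    # supercritical q_k (k >= 1) from the proof above
    return gamma * simpson(
        lambda t: t**k * (1 - t)**(beta - 1) / (1 - gamma * t)**beta,
        0.0, 1.0)
```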
This is somewhat surprising: although the limit distribution of the degree is the same \textit{for every vertex}, the asymptotic degree distribution of the graph is different. If the degrees were independent and identically distributed, the proportion of vertices with a fixed degree would converge to the corresponding probability. In our model neither condition is satisfied, not even approximately. On the one hand, in Version 3 the size of the graph grows exponentially; consequently, at every moment the vast majority of the vertices are relatively young, so the limit distribution does not yet apply to them. On the other hand, if the vertices were nearly independent, the number of vertices with high degree would follow a Poisson distribution; but in our model, if there exists at least one vertex of a large degree $d$, then all its neighbours have the same degree, therefore many such vertices coexist.
This phenomenon can be better understood by considering the degree process of an arbitrary vertex. The higher the degree is, the shorter it persists. Therefore a reversed size-biased sampling can be observed: at a given moment the probability that a given vertex has degree $d$ is less than the proportion of degree-$d$ vertices among all vertices.
In Section \ref{maximal} we shall need asymptotics for the tail of the stationary distribution.
\begin{theorem}\label{stacasymp} \begin{align} \label{substac} &\text{Subcritical case.}&&q_k+q_{k+1}+\dots\sim (1-\beta)^{2-\beta}\Gamma(1-\beta)\,k^{\beta-1}\gamma^{-k}\,;&&\\ \label{critstac} &\text{Critical case.}&&q_k+q_{k+1}+\dots\sim (e\pi)^{1/2}\,k^{1/4}\,e^{-2\sqrt{k}}\,;&&\\ \label{supstac} &\text{Supercritical case.}&&q_k+q_{k+1}+\dots\sim \gamma\,\beta^{\beta}\Gamma(\beta-1)\,k^{1-\beta},&& \end{align} as $k\to\infty$. \end{theorem} \textbf{Proof.} In the subcritical case we have \[ \sum_{j=k}^\infty q_j=\gamma^{-k}\int_0^1\frac{t^k(1-t)^{-\beta}} {(1-\gamma^{-1}t)^{2-\beta}}\,dt, \] while in the supercritical case \[ \sum_{j=k}^\infty q_j=\gamma\int_0^1\frac{t^k(1-t)^{-2+\beta}} {(1-\gamma t)^\beta}\,dt. \] If $k\to\infty$, both integrals become relatively negligible over any interval $[0,\,1-\varepsilon]$, compared to those over $[1-\varepsilon,\,1]$. Hence in the denominators we can replace $t$ with $1$, thus reducing to complete beta integrals.
In the critical case $q_k+q_{k+1}+\dots=c_{k-1}$, hence Theorem A can be applied.\qed
Obviously, every vertex becomes isolated infinitely many times, due to deletion. What can be said about the extremely high degrees?
\begin{theorem}\label{vertmax} \begin{align*} &\text{Subcritical case.}&& \limsup_{t\to\infty}\frac{d_i(t)}{\log\log N(t)}=\frac{1}{\log\gamma} \,;&&\\ &\text{Critical case.}&& \limsup_{t\to\infty}\frac{d_i(t)}{(\log\log N(t))^2}=1\,;&&\\ &\text{Supercritical case.}&& \limsup_{t\to\infty}\frac{\log d_i(t)}{\log\log N(t)}= \frac{1}{\beta-1}\,,&& \end{align*} for $i=1,2,\dots$ . \end{theorem} \textbf{Proof.} First we investigate how large the degree can grow between two consecutive deletions. Let $p_i(r)$ denote the probability that a vertex of degree $i$ reaches degree $r$ before it is selected for deletion. Then we clearly have \begin{align*} p_0(r)&=\theta p_1(r);\\ p_i(r)&=\frac{(i+1)\theta p_{i+1}(r)+i(1-\theta)p_{i-1}(r)}{i+1}\,,\quad i=1,2,\dots,r-1;\\ p_r(r)&=1. \end{align*} Introduce $a_i=p_i(r)/p_0(r)$; it does not depend on $r$ provided $r>i$. The probability we are interested in is $p_0(r)=1/a_r$. The sequence $(a_r)_{r\ge 0}$ satisfies the following recursion. \begin{equation}\label{arec} a_0=1,\quad a_1=\frac{1}{\theta}\,,\quad a_i=\theta a_{i+1}+\frac{i}{i+1}\,(1-\theta)a_{i-1},\quad i=1,2,\dots\ . \end{equation} In the critical case we can use some well-known facts about Laguerre polynomials $L_r(x)$ \cite{Szego}. They can be defined by the following recursion formula. \begin{multline}\label{legrec} L_0(x)=1,\ L_1(x)=1-x,\\ L_{r+1}(x)=\frac{(2r+1-x)L_r(x)-rL_{r-1}(x)}{r+1}\,,\ r=1,2,\dots\ . \end{multline} Their asymptotic behaviour for large $r$ and fixed $y>0$ is given by \begin{equation}\label{laguerre} L_r(-y)\sim 2^{-1}\pi^{-1/2}r^{-1/4}e^{-y/2}y^{-1/4}e^{2\sqrt{yr}}. \end{equation} Recursions \eqref{arec} and \eqref{legrec} coincide if $\theta=1/2$ and $x=-1$. Hence we obtain \begin{equation}\label{critepoch} p_0(r)=\frac{1}{a_r}=\frac{1}{L_r(-1)}\sim 2\sqrt{e\pi}\,r^{1/4}e^{-2\sqrt{r}} \end{equation} in the critical case.
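The coincidence of the two recursions in the critical case is easy to confirm in code. The sketch below (ours, with arbitrary names) computes both sequences independently and compares them:

```python
def a_seq(theta, r_max):
    """The sequence a_i of the proof: a_0 = 1, a_1 = 1/theta, and
    a_i = theta*a_{i+1} + (i/(i+1))*(1-theta)*a_{i-1},
    solved forward for a_{i+1}."""
    a = [1.0, 1.0 / theta]
    for i in range(1, r_max):
        a.append((a[i] - i / (i + 1) * (1 - theta) * a[i - 1]) / theta)
    return a

def laguerre_at(x, r_max):
    """Laguerre polynomials L_r(x) via the three-term recursion."""
    L = [1.0, 1.0 - x]
    for r in range(1, r_max):
        L.append(((2 * r + 1 - x) * L[r] - r * L[r - 1]) / (r + 1))
    return L

# critical case theta = 1/2: the two recursions coincide at x = -1
a = a_seq(0.5, 40)
L = laguerre_at(-1.0, 40)
```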
If $\theta\ne 1/2$, we can analyse the asymptotic behaviour of the sequence $(a_r)$ by computing its generating function $G(z)=\sum_{r=0}^\infty a_rz^r$. From \eqref{arec} it follows that \[ \sum_{r=1}^\infty (r+1)a_rz^r=(1-\theta)\sum_{r=1}^\infty ra_{r-1}z^r +\theta\sum_{r=1}^\infty(r+1)a_{r+1}z^r, \] that is, \[ \big(zG(z)\big)'-1=(1-\theta)z\big(zG(z)\big)'+ \theta\Big(G'(z)-\frac{1}{\theta}\Big). \] This leads to the following homogeneous linear ODE. \[ \big(\theta-z+(1-\theta)z^2\big)G'(z)=\big(1-(1-\theta)z\big)G(z), \quad G(0)=1. \] Its solution can easily be expanded into a power series. \[ G(z)=(1-z)^{-\beta}(1-\gamma z)^{\beta-1}= \sum_{r=0}^\infty (-z)^r\sum_{i=0}^r\binom{\beta-1}{i} \binom{-\beta}{r-i}\gamma^i. \] Thus, \begin{equation*} a_r=(-1)^r\sum_{i=0}^r\binom{\beta-1}{i} \binom{-\beta}{r-i}\gamma^i. \end{equation*}
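The explicit coefficient formula can be checked against the recursion \eqref{arec}; the sketch below is ours (supercritical parameter $\theta=3/4$, arbitrary names) and compares the first terms:

```python
def binom(alpha, k):
    """Generalized binomial coefficient C(alpha, k) for real alpha."""
    out = 1.0
    for j in range(k):
        out *= (alpha - j) / (j + 1)
    return out

theta = 0.75
gamma = (1 - theta) / theta
beta = theta / (2 * theta - 1)

def a_explicit(r):
    # coefficient of z^r in (1-z)^{-beta} * (1-gamma*z)^{beta-1}
    return (-1)**r * sum(
        binom(beta - 1, i) * binom(-beta, r - i) * gamma**i
        for i in range(r + 1))

# independent computation via the recursion, solved forward
a = [1.0, 1.0 / theta]
for i in range(1, 25):
    a.append((a[i] - i / (i + 1) * (1 - theta) * a[i - 1]) / theta)
```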
Suppose first that $\theta>1/2$. Then $\gamma<1$ and $\beta>1$. Since \[ (-1)^r r^{1-\beta}\binom{-\beta}{r}=\frac{r^{1-\beta}\Gamma(r+\beta)} {\Gamma(r+1)\Gamma(\beta)} \] converges as $r\to\infty$, therefore it is bounded. Consequently, we have \[
\left|(-1)^r r^{1-\beta}\binom{\beta-1}{i}\binom{-\beta}{r-i}
\gamma^i\right|\le b_i, \] uniformly in $r\ge i$, where the infinite series $\sum b_i$ converges. Hence \begin{multline*} \lim_{r\to\infty}r^{1-\beta}a_r= \sum_{i=0}^\infty \lim_{r\to\infty}(-1)^r r^{1-\beta}\binom{\beta-1}{i} \binom{-\beta}{r-i}\gamma^i\\ =\frac{1}{\Gamma(\beta)} \sum_{i=0}^\infty \binom{\beta-1}{i}(-\gamma)^i=\frac{ (1-\gamma)^{\beta-1}}{\Gamma(\beta)}=\frac{\beta^{1-\beta}}{\Gamma(\beta)}\,, \end{multline*} and, by this, \begin{equation}\label{supepoch} p_0(r)\sim\beta^{\beta-1}\Gamma(\beta)r^{1-\beta} \end{equation} in the supercritical case.
The subcritical case is easy to reduce to the supercritical one. Let $a_r'=\gamma^{-r} a_r$. Then $a_r'$ satisfies the same recursion that $a_r$ does when $\theta$ is replaced by $1-\theta$. This substitution transforms the subcritical case into the supercritical one, furthermore, $\beta$ changes to $1-\beta$. Hence we get \begin{equation}\label{subepoch} p_0(r)=\gamma^{-r}(a_r')^{-1}\sim(1-\beta)^{-\beta}\Gamma(1-\beta)\, r^{\beta}\gamma^{-r}. \end{equation}
Up to time $t$ there are $(1-\theta)t(1+o(1))\sim \gamma\log N(t)$ epochs (time intervals between consecutive deletions), hence $\max\{d_i(s):s\le t\}$ is asymptotically equal to the maximum of $(1+o(1))\gamma\log N(t)$ i.i.d. random variables with distribution $P(\xi\ge r)=p_0(r)$. Starting from \eqref{subepoch}, \eqref{critepoch} and \eqref{supepoch}, standard Borel--Cantelli arguments yield \[ \max\{d_i(s):s\le t\}\sim\frac{\log\log N(t)}{\log\gamma} \] in the subcritical case, \[ \max\{d_i(s):s\le t\}\sim\log^2t\sim(\log\log N(t))^2 \] in the critical case, and \[ \log\max\{d_i(s):s\le t\}\sim\frac{\log\log N(t)}{\beta-1} \] in the supercritical case, completing the proof. (Alternatively, one can apply \cite[Theorem 4.4.4]{Galambos}.)\qed
\section{Maximal degree}
\label{maximal}
Let $M_n$ denote the maximal degree in Version 1 after $n$ steps. From Theorem A it is clear that $M_n\to\infty$. In many scale-free random graph
processes the order of magnitude of the maximal degree $M_n$ can be
characterized in the following way: $M_n\asymp \min\{d: N_n c_d<1\}$,
where $(c_d)$ is the asymptotic degree distribution, and $N_n$ is
the size of the graph, see e.g. \cite[2005]{M05}, \cite[2007]{M07}, \cite[2010]{M09},
\cite[2014]{harom}. This would give $M_n\asymp\log N_n$ in the subcritical case, $M_n\asymp\log^2 N_n$ in the critical one, and $M_n\asymp N_n^{1/\beta}$ in the supercritical one. We will show that this estimate is valid in the subcritical and critical cases, but in the supercritical case we can prove less.
\begin{theorem}\label{maxup} \begin{align*} &\text{Subcritical case.}&&\frac{1-\theta}{\log\gamma}\le \liminf_{n\to\infty}\frac{M_n}{\log N_n}\le \limsup_{n\to\infty}\frac{M_n}{\log N_n} \le\frac{1+\theta}{\theta\log\gamma}\,.\\ &\text{Critical case.}&&\frac{1}{16}\le\liminf_{n\to\infty} \frac{M_n}{\log^2N_n}\le\limsup_{n\to\infty}\frac{M_n}{\log^2 N_n} \le\frac{9}{4}\,.\\ &\text{Supercritical case.}&& \frac{\theta}{\beta}\le \liminf_{n\to\infty}\frac{\log M_n}{\log N_n}\le\limsup_{n\to\infty} \frac{\log M_n}{\log N_n}\le\frac{1}{\beta}\,. \end{align*}
\end{theorem}
\textbf{Proof of the upper bounds.}
The proof will be given for Version 2.
Let $d_i(n)$ denote the degree of vertex $i$ after step $n$, $i=1,\dots, N_n$, where $N_n$ is the size of the graph after $n$ steps. Introduce \[ S_n(r)=\sum_{i=1}^{N_n}\binom{d_i(n)}{r}. \] \begin{lemma}\label{esn2} For every $n=1,2,\dots$ and $r=0,1,2,\dots$ we have \begin{align} \label{subbound} &\text{Subcritical case.}&& ES_n(r)\le 2(r+1)(-\beta)^r n^{\theta};&&\\ \label{critbound} &\text{Critical case.}&& ES_n(r)\le 2(r+1)!\sqrt{n}\,;&&\\ \label{supbound} &\text{Supercritical case.}&& ES_n(r)\le\left\{ \begin{array}{ll} C_r(\theta)\,n^\theta,&\text{if } r<\beta-1; \rule[-6pt]{0pt}{18pt}\\ C_r(\theta)\,n^\theta(1+\log n),&\text{if }r=\beta-1; \rule[-6pt]{0pt}{18pt}\\ C_r(\theta)\,n^{(r+1)(2\theta-1)},&\text{if }r>\beta-1, \rule[-6pt]{0pt}{18pt}\\ \end{array} \right. \end{align} where $C_r(\theta)$ is a constant depending on $r$ and $\theta$ but independent of $n$. \end{lemma} \textbf{Proof.} First we verify the following recursion. For every $n=1,2,\dots$ and $r=1,2,\dots$ we have \begin{equation}\label{esnrec} ES_n(r)=\Big(1+\frac{(2\theta-1)(r+1)}{n}\Big)ES_{n-1}(r)+ \frac{\theta(r+1)}{n}\,ES_{n-1}(r-1). \end{equation}
At the $n$th step the $i$th term of $S_n(r)$ can change in the following way. With the notation $d=d_i(n-1)$, \[ \binom{d_i(n)}{r}= \left\{ \begin{array}{cll} \dbinom{d+1}{r}&\text{with conditional probability}& \dfrac{d+1}{n}\,\theta\\ &\multicolumn{2}{l}{\text{(vertex $i$ or one of its neighbours is
duplicated);}}\\ \dbinom{d-1}{r}&\text{with conditional probability}& \dfrac{d}{n}(1-\theta)\\ &\multicolumn{2}{l}{\text{(a neighbour of vertex $i$ is deleted);}}\\ 0&\text{with conditional probability}& \dfrac{1-\theta}{n}\\ &\multicolumn{2}{l}{\text{(vertex $i$ is deleted);}}\\ \dbinom{d}{r}&\text{otherwise.}& \end{array} \right. \] Thus, \[
E\biggl(\binom{d_i(n)}{r}\biggm|\mathcal F_{n-1}\biggr) =\binom{d}{r}\left(1-\frac{d+1}{n}\right) +\binom{d+1}{r}\frac{d+1}{n}\,\theta +\binom{d-1}{r}\frac{d}{n}(1-\theta). \] Besides, when vertex $i$ is duplicated, an additional term $\binom{d+1}{r}$ also appears as the yield of the new vertex. Hence the total contribution of vertex $i$ in
$E\bigl(S_n(r)\bigm|\mathcal F_{n-1}\bigr)$ is \begin{multline*} \binom{d}{r}\left(1-\frac{d+1}{n}\right) +\binom{d+1}{r}\frac{d+2}{n}\,\theta +\binom{d-1}{r}\frac{d}{n}(1-\theta)\\ =\binom{d}{r}\left(1+\frac{(2\theta-1)(r+1)}{n}\right) +\binom{d}{r-1}\frac{\theta(r+1)}{n}\,. \end{multline*} This implies that \[
E\bigl(S_n(r)\bigm|\mathcal F_{n-1}\bigr)= \left(1+\frac{(2\theta-1)(r+1)}{n}\right)S_{n-1}(r)+ \frac{\theta(r+1)}{n}\,S_{n-1}(r-1), \] as needed.
Next, we prove the lemma by double induction over $r$ and $n$, based on the recursion \eqref{esnrec}.
Clearly, $S_n(0)=N_n$. From the proof of Lemma \ref{size} we know that \[ EN_n=\prod_{i=1}^n\Big(1+\frac{\theta}{i}\Big)\le 2 \prod_{i=1}^{n-1}\Big(1+\frac{\theta}{i}\Big)\le 2\prod_{i=1}^{n-1}\Big(1+\frac 1i\Big)^{\theta}=2n^\theta \] in all three cases. Furthermore, $ES_1(1)=2\theta$ and $ES_1(r)=0$ if $r>1$. Thus, for all pairs $(n,0)$ and $(1,r)$ Lemma \ref{esn2} holds true.
Let us check the induction step.
In the subcritical case, by using the induction hypothesis we can write \begin{multline*} ES_n(r)\leq 2(r+1)(-\beta)^r(n-1)^\theta\Big(1+\frac{(r+1)\theta}{n\beta} \Big)\\ \qquad+2r(-\beta)^{r-1}(n-1)^\theta\,\frac{\theta(r+1)}{n} \le 2(r+1)(-\beta)^{r}\,n^\theta, \end{multline*} as needed.
In the critical case we have \begin{multline*} ES_n(r)=ES_{n-1}(r)+\frac{r+1}{2n}\,ES_{n-1}(r-1)\\ \le 2(r+1)!\,\sqrt{n-1}\,\Big(1+\frac{1}{2n}\Big)\le 2(r+1)!\,\sqrt{n}. \end{multline*}
Finally, in the supercritical case $C_0(\theta)=2$ will do. Suppose we have proved inequality \eqref{supbound} for $r-1$ (and all $n$). Introduce $s=(r+1)(2\theta-1)$ and \[ \kappa_n=\frac{\Gamma(n+1+s)}{\Gamma(n+1)}\,, \] then \[ \frac{\kappa_n}{\kappa_{n-1}}=1+\frac sn\le\Big(\frac {n}{n-1}\Big)^s, \] because $1+\frac sn <\big(1+\frac 1n\big)^s\le \big(1+\frac{1}{n-1}\big)^s=\big(\frac{n}{n-1}\big)^s$, if $s\ge 1$, and $1+\frac sn\le\big(1-\frac sn\big)^{-1}\le\big(1-\frac 1n\Big)^{-s} =\big(\frac{n}{n-1}\big)^s$, if $0<s<1$. Hence, for $1\le j\le n$ we have \begin{equation}\label{kappa} \frac{\kappa_n}{\kappa_j}\le\Big(\frac nj\Big)^s. \end{equation}
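Inequality \eqref{kappa} can be spot-checked numerically; the following sketch is ours and uses the Gamma function from the standard library:

```python
import math

def kappa(n, s):
    """kappa_n = Gamma(n+1+s) / Gamma(n+1)."""
    return math.gamma(n + 1 + s) / math.gamma(n + 1)
```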
By iterating equation \eqref{esnrec} we get \begin{multline*} \frac{ES_n(r)}{\kappa_n}=\frac{ES_{n-1}(r)}{\kappa_{n-1}}+ \frac{(r+1)\theta}{n\kappa_n}\,ES_{n-1}(r-1)=\dots\\ =\frac{ES_1(r)}{\kappa_1}+(r+1)\theta\sum_{j=2}^n \frac{ES_{j-1}(r-1)}{j\kappa_j}\,, \end{multline*} hence, by \eqref{kappa}, since $\kappa_n/(j\kappa_j)\le n^s j^{-s-1}$, after shifting the summation index we obtain \[ ES_n(r)\le n^s ES_1(r) +(r+1)\theta\,n^s\sum_{j=1}^{n-1}j^{-s-1}ES_j(r-1)=A+B. \] In the right-hand side $A$ vanishes if $r>1$. For $r=1$ it is equal to $2\theta n^{2(2\theta-1)}$, which satisfies \eqref{supbound} in all three cases. Let us turn to $B$.
First, suppose $r<\beta-1$, that is, $\theta>s$. Then, by the induction hypothesis we have \[ B\le (r+1)\theta\,C_{r-1}(\theta)\,n^s\sum_{j=1}^{n-1} j^{\theta-s-1} \le \frac{(r+1)\theta}{\theta-s}\,C_{r-1}(\theta)\,n^{\theta}. \]
Next, let $r=\beta-1$, that is, $\theta=s$. Then again \[ B\le (r+1)\theta\,C_{r-1}(\theta)\,n^{\theta}\sum_{j=1}^{n-1}\frac 1j \le (r+1)\theta\,C_{r-1}(\theta)\,n^{\theta}(1+\log n). \]
If $\beta-1<r\le\beta$, that is, $r(2\theta-1)\le\theta<s$, then \[ B\le (r+1)\theta\,C_{r-1}(\theta)\,n^s\sum_{j=1}^{n-1} j^{\theta-s-1}(1+\log j) \le (r+1)\theta\,C_{r-1}(\theta)\,Q\,n^s, \] where \[ Q=\sum_{j=1}^\infty j^{\theta-s-1}(1+\log j)<\infty. \]
Finally, if $\beta<r$, then \[ B\le (r+1)\theta\,C_{r-1}(\theta)\,n^s\sum_{j=1}^{n-1} j^{-2\theta}\le (r+1)\theta\,C_{r-1}(\theta)\,\zeta(2\theta)\,n^s, \] where $\zeta(\,.\,)$ is the Riemann zeta function.\qed
Let us continue the proof of Theorem \ref{maxup}.
In the subcritical case, let us fix $z$ and $a$ in such a way that $0<z<-1/\beta$, and $a\log(1+z)>1+\theta$. Then, by Lemma \ref{esn2} we have \begin{multline*} E\bigg(\sum_{i=1}^{N_n}(1+z)^{d_i(n)}\bigg) =E\bigg(\sum_{i=1}^{N_n}\sum_{r=0}^n\binom{d_i(n)}{r}z^r\bigg) =E\bigg(\sum_{r=0}^n\sum_{i=1}^{N_n}\binom{d_i(n)}{r}z^r\bigg)\\ =\sum_{r=0}^n ES_n(r)z^r\le 2\sum_{r=0}^\infty(r+1)(-\beta z)^rn^\theta =\frac{2n^\theta}{(1+\beta z)^2}=K\,n^\theta. \end{multline*}
By the Markov inequality, \begin{multline*} P(M_n\ge a\log n)= P\left((1+z)^{M_n}\ge (1+z)^{a\log n}\right)\\ \le n^{-a\log(1+z)}\,E\bigg(\sum_{i=1}^{N_n}(1+z)^{d_i(n)}\bigg)\le K\,n^{\theta-a\log(1+z)}. \end{multline*} Since $\theta-a\log(1+z)<-1$, the right-hand side is summable in $n$, thus the Borel--Cantelli lemma implies $M_n<a\log n$ a.s. for every sufficiently large $n$. Since $z$ can be chosen arbitrarily close to $-1/\beta$, this holds for every $a>(1+\theta)/\log(1-\beta^{-1})$; consequently, \[ \limsup_{n\to\infty}\frac{M_n}{\log n}\le \frac{1+\theta}{\log(1-\beta^{-1})}=\frac{1+\theta}{\log\gamma}\,. \] From Lemma \ref{size} we know that $\log N_n\sim\theta\log n$ as $n\to\infty$.
In the critical case we can make use of Laguerre polynomials again. Their explicit form is \begin{equation*} L_k(y)=\sum_{r=0}^k\binom kr\frac{(-y)^r}{r!}\,. \end{equation*}
Since the multiplicity of the maximal degree is at least $M_n+1$, we have \[ S_n(r-1)\ge(M_n+1)\binom{M_n}{r-1}=r\binom{M_n+1}{r}. \] Therefore, by Lemma \ref{esn2}, \[ E\binom{M_n}{r}\le\frac 1r\,ES_n(r-1)\le 2r!\sqrt{n}\,, \] for $r=1,2,\dots\ $, hence \[ E\big(L_{M_n}(-y)\big)=E\bigg(\sum_{r=0}^n\binom{M_n}{r}\frac{y^r}{r!} \bigg)\le 1+2\sqrt{n}\sum_{r=1}^n y^r\le\frac{2\sqrt{n}}{1-y} \] if $0<y<1$. Let $k=k_n\ge a\log^2 n$, where $ya>9/16$. Then the Markov inequality, combined with \eqref{laguerre}, implies \begin{multline*} P(M_n\ge k)=P\big(L_{M_n}(-y)\ge L_k(-y)\big)\le \frac {E\big(L_{M_n}(-y)\big)}{L_k(-y)}\\ =O\Big(\sqrt{n}\,e^{-2(1+o(1))\sqrt {yk}}\Big)=O\big(n^{1/2-2\sqrt{ya}}\big). \end{multline*} The exponent in $O(\,.\,)$ is less than $-1$, hence it makes a convergent series again, and from the Borel--Cantelli lemma \[ \limsup_{n\to\infty}\frac{M_n}{\log^2n}\le a \] follows for every $a>9/16$. This time $\log n\sim 2\log N_n$ by Lemma \ref{size}.
Finally, let us turn to the supercritical case. Let $a>2\theta-1$ and $r$ so large that $r>\beta-1$ and $r(2\theta-1-a)<-1$ hold. Then by the Markov inequality and \eqref{supbound} we have \begin{multline*} P\big(M_n\ge n^a\big)=P\bigg(\binom{M_n}{r}\ge\binom{n^a}{r}\bigg)\le \frac{E\binom{M_n}{r}}{\binom{n^a}{r}}\\ \le\frac{ES_n(r-1)}{r\binom{n^a}{r}}=O\big(n^{r(2\theta-1-a)}\big). \end{multline*}
The proof can be completed with the help of the Borel--Cantelli lemma and Lemma \ref{size}.\qed
\textbf{Proof of the lower bounds.}
The proof will be performed for Version 3. Let $\varepsilon$ be an arbitrarily small fixed positive number. The proof consists of the following steps.
We first show that at time $(1-\theta)\sqrt{n}$ the graph contains many isolated vertices. Clearly, they behave independently of each other after time $(1-\theta)\sqrt{n}$.
Then we give a lower bound for the probability that such an isolated vertex increases its degree above $k_n$ by time $\sqrt{n}$, where $k_n$ is an increasing positive sequence depending on $\theta$. It will follow that the probability that none of them can do it is so small that its sum over $n$ is convergent. Hence the Borel--Cantelli lemma implies that a.s. $M\big(\sqrt{n}\big)\ge k_n$ if $n$ is large enough.
Finally, we will show that the probability that a vertex of such a high degree at time $\sqrt{n}$ loses at least $\varepsilon k_n$ of its degree in the interval $[\sqrt{n},\,\sqrt{n+1}]$ is so small that it is also finitely summable. Thus, with $n=\lfloor t^2\rfloor$, we have a.s. $M(t)\ge (1-\varepsilon)k_n$ if $t$ is large enough.
In more detail, let us start with the number of isolated vertices. For the sake of brevity, denote $N\big((1-\theta)\sqrt{n}-1\big)$ by $N_n$. For each vertex present at time $(1-\theta)\sqrt{n}-1$, the probability that, during the next time unit, it is deleted at some point and not duplicated afterwards is \[ \sigma=(1-\theta)\big(1-e^{-1}\big). \]
(Something happens to the vertex, and the last event is a deletion.) Therefore the number of isolated vertices at time $(1-\theta)\sqrt{n}$ is at least as big as a binomial random variable $Q_n$ with parameters $N_n$ and $\sigma$. Since $N_n$ itself is $\mathrm{Geom}(p_n)$ distributed with parameter $p_n=\exp(-\theta(1-\theta)\sqrt{n}+\theta)$, straightforward calculation gives that the distribution of $Q_n$ is a mixture: \[ Q_n=\left\{ \begin{array}{cll} \mathrm{Geom}\bigg(\dfrac{p_n}{\sigma+(1-\sigma)p_n}\bigg) &\text{ with weight }&\dfrac{\sigma}{\sigma+(1-\sigma)p_n}\,;\\ 0&\text{ with weight }&\dfrac{(1-\sigma)p_n}{\sigma+(1-\sigma)p_n}\,. \end{array} \right. \]
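The mixture form can be checked via probability generating functions: with the convention that a $\mathrm{Geom}(p)$ variable takes the values $1,2,\dots$, conditioning on $N_n$ gives \begin{multline*} E\,t^{Q_n}=E\big[(1-\sigma+\sigma t)^{N_n}\big] =\frac{p_n(1-\sigma+\sigma t)}{1-(1-p_n)(1-\sigma+\sigma t)}\\ =\frac{(1-\sigma)p_n}{\sigma+(1-\sigma)p_n} +\frac{\sigma}{\sigma+(1-\sigma)p_n}\cdot \frac{\dfrac{p_n}{\sigma+(1-\sigma)p_n}\,t} {1-\dfrac{\sigma(1-p_n)}{\sigma+(1-\sigma)p_n}\,t}\,, \end{multline*} which is exactly the generating function of the mixture above.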
Let us turn to the estimation of the probability that a fixed isolated vertex can considerably increase its degree in a time interval of length $\theta\sqrt{n}$.
How fast is the convergence to the stationary distribution? This can be answered easily by coupling. Let us start two degree processes, one from the stationary distribution, and another one from an isolated vertex (i.e., from degree $=0$). Let the deletion of the vertex in question be governed by the same Poisson process in both cases. After the first deletion stick the two processes together. Then the probability that the two processes differ after time $t$ is at most $e^{-(1-\theta)t}$. Hence the same bound is valid for the total variation distance of the degree distribution at time $t$ from the stationary one.
Consequently, if a vertex is isolated at time $(1-\theta)\sqrt{n}$, then the probability that its degree at time $\sqrt{n}$ is larger than $k_n$, is at least \[ \pi_n=\sum_{k=k_n}^\infty q_k-\exp\big(-\theta(1-\theta)\sqrt{n}\big). \] If $k_n$ is specified in such a way that \begin{equation}\label{suppose1} \sum_{n=1}^\infty\frac{p_n}{\pi_n}<\infty \end{equation} holds, then $p_n=o(\pi_n)$, and we have \begin{multline*} P\left(M\big(\sqrt{n}\big)\le k_n\right)\le E\left((1-\pi_n)^{Q_n}\right)\\ =\frac{(1-\sigma)p_n}{\sigma+(1-\sigma)p_n}+\frac{\sigma} {\sigma+(1-\sigma)p_n}\cdot\frac{\dfrac{p_n}{\sigma+(1-\sigma)p_n} (1-\pi_n)}{1-\dfrac{\sigma(1-p_n)}{\sigma+(1-\sigma)p_n}(1-\pi_n)} \sim\frac{p_n}{\sigma\pi_n}\,. \end{multline*} The sum of these probabilities is convergent by supposition. Hence the
Borel--Cantelli lemma implies that, almost surely, $M\big(\sqrt{n}\big)>
k_n$ if $n$ is large enough.
Finally, we have to show that $M(t)$ cannot decrease significantly between $\sqrt{n}$ and $\sqrt{n+1}$. Suppose $M\big(\sqrt{n}\big)>k_n$. Choose a vertex with maximal degree and select $k_n$ of its neighbours. Let us compute the probability that more than $\varepsilon k_n$ of them will be deleted between $\sqrt{n}$ and $\sqrt{n+1}$. The number $Z$ of deleted vertices is binomial with parameters $k_n$ and \[ 1-\exp\big(-(1-\theta)\big(\sqrt{n+1}-\sqrt{n}\big)\big)\le (1-\theta)\big(\sqrt{n+1}-\sqrt{n}\big)\le 2(1-\theta)n^{-1/2}. \] By Hoeffding's inequality, \[ P\big(Z>\varepsilon k_n\big)\le \exp\left(-2\big(\varepsilon- 2(1-\theta)n^{-1/2}\big)^2 k_n\right). \] If, in addition to \eqref{suppose1}, the sequence $(k_n)$
satisfies \begin{equation}\label{suppose2} \sum_{n=1}^\infty \exp\left(-\varepsilon^2 k_n\right)<\infty, \end{equation} then the Borel--Cantelli lemma gives \[ \min\big\{M(t): \sqrt{n}\le t\le \sqrt{n+1}\big\}> (1-\varepsilon)k_n \] if $n$ is sufficiently large. Consequently, with $n=\lfloor t^2\rfloor$ we have a.s. $M(t)\ge (1-\varepsilon)k_n$ for all sufficiently large $t$.
Let us specify $k_n$ in all three cases to meet conditions \eqref{suppose1} and \eqref{suppose2}.
In the subcritical case let \[ k_n=\frac{\theta(1-\theta)}{\log\gamma}(1-\varepsilon)\sqrt{n}. \] Then condition \eqref{suppose2} is satisfied. Moreover, by \eqref{substac} we have \[ \pi_n=\exp\big(-(1+o(1))\log\gamma\cdot k_n\big). \] Hence \[ \frac{p_n}{\pi_n}=\exp\big(-(1+o(1))\,\theta(1-\theta) \varepsilon \sqrt{n}\big), \] thus condition \eqref{suppose1} is satisfied as well. Consequently, \[ M(t)>(1+o(1))(1-\varepsilon)^2\frac{\theta(1-\theta)}{\log\gamma}t =(1+o(1))(1-\varepsilon)^2\frac{1-\theta}{\log\gamma}\log N(t) \] if $t$ is sufficiently large.
In the critical case let \[ k_n=\frac{(1-\varepsilon)^2}{64}\,n, \] then \eqref{suppose2} is fulfilled. In addition, \eqref{critstac} implies \[ \pi_n=\exp\left(-(1+o(1))\frac{1-\varepsilon}{4}\sqrt{n}\right). \] Therefore \[ \frac{p_n}{\pi_n}=\exp\left(-(1+o(1))\frac{\varepsilon}{4}\sqrt{n}\right), \] and requirement \eqref{suppose1} is also met. Hence \[ M(t)>(1+o(1))\frac{(1-\varepsilon)^3}{64}\,t^2= (1+o(1))\frac{(1-\varepsilon)^3}{16}\,\log^2N(t) \] if $t$ is large enough.
Finally, in the supercritical case set \[ k_n=\exp\big((1-\varepsilon)\theta(2\theta-1)\sqrt{n}\big). \] Then \eqref{suppose2} is satisfied. By \eqref{supstac} we can write \[ \pi_n=k_n^{(1+o(1))(1-\beta)}=\exp\big(-(1+o(1))(1-\varepsilon)\theta (1-\theta)\sqrt{n}\big), \] from which it follows that \[ \frac{p_n}{\pi_n}=\exp\big(-(1+o(1))\varepsilon\theta(1-\theta)\sqrt{n} \big). \] This produces a finite sum, thus \eqref{suppose1} holds true. Consequently, with $n=\lfloor t^2\rfloor$, \begin{multline*} M(t)>(1-\varepsilon)k_n=\exp\big((1+o(1))(1-\varepsilon) \theta(2\theta-1)t\big)\\ =\exp\big((1+o(1))(1-\varepsilon)(2\theta-1)\log N(t)\big), \end{multline*} if $t$ is sufficiently large.\qed
Due to deletions, our graph contains no persistent hub in the sense of Krapivsky and Redner \cite[2001]{krap} or Galashin \cite[2014+]{galashin}, that is, a single vertex that remains the vertex of maximal degree forever, unlike in certain preferential attachment models \cite[Dereich and M\"orters, 2009]{DM}, \cite[M\'ori, 2005]{M05}. As a corollary to Theorems \ref{vertmax} and \ref{maxup}, it follows that the index of the vertex of maximal degree tends to infinity with time.
\end{document} |
\begin{document}
\begin{abstract} We give bilateral pointwise estimates for positive solutions of the equation \begin{equation*} \left\{ \begin{aligned} -\triangle u & = \omega u \, \,& & \mbox{in} \, \, \Omega, \quad u \ge 0, \\ u & = f \, \, & &\mbox{on} \, \, \partial \Omega , \end{aligned} \right. \end{equation*} in a bounded uniform domain $\Omega\subset \mathbb{R}^n$, where $\omega$ is a locally finite Borel measure in $\Omega$, and $f\ge 0$ is integrable with respect to harmonic measure $d H^{x}$ on $\partial\Omega$.
We also give sufficient and matching necessary conditions for the existence of a positive solution in terms of the exponential integrability of $M^{*} (m \omega)(z)=\int_\Omega M(x, z) m(x)\, d \omega (x)$ on $\partial\Omega$ with respect to $f \, d H^{x_0}$, where $M(x, \cdot)$ is Martin's function with pole at $x_0\in \Omega$, $m(x)=\min (1, G(x, x_0))$, and $G$ is Green's function.
These results give bilateral bounds for the harmonic measure associated with the Schr\"{o}dinger operator $-\triangle - \omega $ on $\Omega$, and in the case $f=1$, a criterion for the existence of the gauge function. Applications to elliptic equations of Riccati type with quadratic growth in the gradient are given. \end{abstract}
\maketitle
\eject
\tableofcontents
\section{Introduction}
Let $\Omega \subset \mathbb{R}^n$ ($n \geq 3$) be a nonempty, connected, open set (a domain). It is called a non-tangentially accessible (NTA) domain if it is bounded, and satisfies both the interior and exterior corkscrew conditions, and the Harnack chain condition (\cite{JK}). For instance, any bounded Lipschitz domain is an NTA domain. The exterior corkscrew condition yields that any NTA domain is regular (in the sense of Wiener).
More generally, a uniform domain is defined as a bounded domain which satisfies the interior corkscrew condition and the Harnack chain condition. Uniform domains satisfy the local (or uniform) boundary Harnack principle (\cite{Aik}; see \cite{An1}, \cite{JK} for Lipschitz and NTA domains). However, they are not necessarily regular.
Our main results hold for bounded uniform domains, and the regularity of $\Omega$ is not used below.
In \cite{Ken}, a slightly more general version of an NTA domain $\Omega$ is defined as a uniform domain
of class $\mathcal{S}$ (Definition 1.1.20), i.e., satisfying the volume density condition, which ensures that $\Omega$ is a regular domain.
Most of our results, including Theorem \ref{mainufest} and Theorem \ref{gaugecrit} below, hold in this setup for uniformly elliptic operators in divergence form
$\mathcal{L} = \text{div} (A \nabla \cdot)$, with bounded measurable symmetric $A$,
in place of the Laplacian $\triangle$, as in \cite{JK}, p. 138 and \cite{Ken}, Sec. 1.3. The same class of operators $\mathcal{L}$ in uniform domains with Ahlfors regular boundary
can be covered as well (see \cite{Zha}).
In this paper, for simplicity, we consider mostly the case $n\ge 3$. In two dimensions, our results hold if $\Omega$ is a bounded finitely connected domain in $\mathbb{R}^2$, in particular, a bounded Lipschitz domain (see
\cite{CZ}, Theorem 6.23; \cite{Han1}, Remark 3.5).
Let $\omega$ be a locally finite Borel measure on $\Omega$ and let $f$ be a non-negative Borel measurable function on $\partial \Omega$. We consider the equation \begin{equation}\label{ufeqn} \left\{ \begin{aligned} -\triangle u & = \omega u \, \,& & \mbox{in} \, \, \Omega, \quad u \ge 0, \\ u & = f \, \, & &\mbox{on} \, \, \partial \Omega. \end{aligned} \right. \end{equation} Solutions of \eqref{ufeqn} are understood either in the potential theoretic sense, or $d \omega$-a.e. The precise definitions are discussed in \S \ref{sec2} below. In the case of $C^2$ domains, or bounded Lipschitz domains $\Omega$, they coincide with ``very weak'' solutions in the sense of Brezis (see \cite{BCMR}, \cite{FV2}, \cite{MV}, Sec. 1.2, \cite{MR}).
Let $\Omega \subseteq \mathbb{R}^n$ be a bounded domain with Green's function $G(x,y)$; then $G$ is symmetric and strictly positive on $\Omega \times \Omega$. For a Borel measure $\nu$ on $\Omega$, \begin{equation} \label{Greenop} G \nu (x) = \int_{\Omega} G(x,y) \, d\nu(y), \qquad x\in\Omega, \end{equation} is Green's operator. We call $G \nu$ the Green's potential if $G \nu\not\equiv +\infty$.
For a Borel measurable function $f$ on $\partial \Omega$, define the harmonic extension $Pf$ of $f$ into $\Omega$ (the generalized solution to the Dirichlet problem) by \begin{equation} \label{harm-rep} Pf (x) = \int_{\partial \Omega} f(z) \, dH^x (z), \qquad x \in \Omega, \end{equation} where $dH^x$ is the harmonic measure at $x$, if the integral in \eqref{harm-rep} exists.
A solution $u$ to \eqref{ufeqn} satisfies, formally, the equation \begin{equation} \label{form-sol} u (x) = G(u \omega) (x) + Pf (x), \qquad x \in \Omega . \end{equation} We remark that if $u\not\equiv +\infty$ satisfies \eqref{form-sol}, then it is a superharmonic function in $\Omega$, and $Pf$ is its greatest harmonic minorant. In particular, $u\in L^1_{loc}(\Omega, dx)\cap L^1_{loc} (\Omega, d\omega)$, and $u< +\infty$ q.e., that is, quasi-everywhere with respect to the Green capacity, see \cite{AG}, \cite{Lan}.
We also consider more general equations with an arbitrary positive harmonic function $h$ in place of $Pf$ (see \S \ref{sec2} and \S \ref{sec3}), when irregular boundary points may come into play.
For an appropriate function $g$ on $\Omega$, we define \begin{equation} \label{defT}
T g (x) = G(g \omega)(x) = \int_{\Omega} G(x,y) \, g(y) \, d \omega(y) , \end{equation} for $x \in \Omega$, so that equation \eqref{form-sol} becomes $(I-T)u = Pf$, with formal solution \begin{equation} \label{ufsolndef}
u_f = \sum_{j=0}^{\infty} T^j (Pf) . \end{equation} This \textit{minimal} solution $u_f$ of \eqref{ufeqn} satisfies \begin{equation} \label{minsolnform} u_f (x) = G(u_f \omega) (x) + Pf (x), \qquad x \in \Omega , \end{equation} if $u_f\not\equiv +\infty$. Under conditions which guarantee the finiteness of the right side of equation (\ref{ufsolndef}) (see Theorem \ref{mainufest} and Theorem \ref{gaugecrit}), we will see that $u_f$ defined by (\ref{ufsolndef}) gives a (generalized) solution of (\ref{ufeqn}).
It was shown in \cite{FV3}, Lemma 2.5, that the following are equivalent: for $\beta >0$, \begin{equation} \label{normTless1} T \,\, \mbox{is bounded on} \,\, L^2 (\Omega, \omega) \,\, \mbox{with} \,\, \Vert T \Vert = \Vert T \Vert_{L^2 (\Omega, \omega) \rightarrow L^2 (\Omega, \omega)} \le \beta^2 \end{equation} and \begin{equation} \label{equivnormTless1} \Vert \varphi \Vert_{L^2 (\omega)} \leq \beta \, \Vert \nabla \varphi \Vert_{L^2 (dx)}, \,\, \mbox{for all} \,\, \varphi \in C^{\infty}_0 (\Omega) . \end{equation}
Our results are expressed in terms of the \textit{Martin kernel} $M(x,z)$. In a bounded uniform domain $\Omega\subset\mathbb{R}^n$, the Martin boundary $\triangle$ is homeomorphic to the Euclidean boundary $\partial \Omega$ (\cite{Aik}, Corollary 3; see \cite{HW}, \cite{AG}, for a bounded Lipschitz domain, and \cite{JK}, \cite{Ken}
for an NTA domain.) Martin's kernel, defined with respect to a reference point $x_0 \in \Omega$, is given by \begin{equation} \label{martin-K} M(x, z)=\lim_{y\to z, \, \, y\in \Omega} \frac{G(x, y)}{G(x_0, y)}, \quad x \in \Omega, \, \, z \in \partial \Omega, \end{equation} where the limit exists, and is a minimal harmonic function in $x \in \Omega$. We will see in \S \ref{sec2} that \begin{equation} \label{harmmeasiden}
dH^x (z) = M(x,z) \, dH^{x_0} (z) , \qquad (x, z) \in \Omega\times\partial \Omega , \end{equation} for uniform domains (see \cite{HW}, p. 519; \cite{CZ}, p. 137 for Lipschitz domains;
\cite{JK}, pp. 104, 115 for NTA domains). Combining \eqref{harm-rep} and \eqref{harmmeasiden} yields \begin{equation} \label{pf-rep-martin} Pf (x) = \int_{\partial \Omega} M(x,z) \, f(z) \, dH^{x_0} (z), \quad x \in \Omega, \end{equation} for Borel measurable $f\ge 0$, whenever the integral exists. Hence, \eqref{ufsolndef} yields \begin{equation} \label{ufrepresent}
u_f (x) = \int_{\partial \Omega} \sum_{j=0}^{\infty} T^j M(\cdot,z) (x)\, f(z) \, dH^{x_0} (z), \quad x \in \Omega.
\end{equation} We define \begin{equation} \label{Martin-schro} \mathcal{M}(x, z)= \sum_{j=0}^{\infty} T^j M(\cdot, z) (x), \qquad (x, z) \in \Omega\times\partial \Omega , \end{equation} and \begin{equation} \label{harm-schro} d \mathcal{H}^x(z) =\mathcal{M}(x, z) \, dH^{x_0} (z), \qquad (x, z) \in \Omega\times\partial \Omega. \end{equation} Then \eqref{ufrepresent} gives \begin{equation} \label{ufrepresent2} \begin{aligned}
u_f(x) &= \int_{\partial \Omega} \mathcal{M}(x,z)\, f(z) \, dH^{x_0} (z) \\ & = \int_{\partial\Omega} f(z) \, d \mathcal{H}^x(z), \qquad x\in \Omega.
\end{aligned} \end{equation}
Comparing this last equation with equation \eqref{harm-rep}, we see that $ d \mathcal{H}^x$ is harmonic measure for the Schr\"{o}dinger operator $-\triangle -\omega$.
By \eqref{Martin-schro}, \begin{align*} \mathcal{M}(x,z) &= M(x,z) + \sum_{j=1}^{\infty} T^j M (\cdot, z) (x) \\ & = M(x,z) + T \mathcal{M}(\cdot, z)(x)\\ & = M(x,z) + G (\mathcal{M}(\cdot, z) \omega)(x). \end{align*} Hence $\mathcal{M}(x, z)$ is a superharmonic function of $x \in \Omega$, and $M(x,z)$ is its greatest harmonic minorant, for every $z\in \partial \Omega$, provided $M(\cdot,z)\not\equiv\infty$. In fact, $\mathcal{M}(\cdot, z)$ is $\omega$-harmonic, i.e., it satisfies the Schr\"odinger equation $-\triangle u=\omega \, u$ in $\Omega$.
Notice that $\mathcal{H}^x$ defined by \eqref{harm-schro} is not a probability measure on $\partial \Omega$ unless $\omega=0$. Letting $f\equiv 1$ on $\partial\Omega$, we see by \eqref{ufrepresent2} that $\mathcal{H}^x$ is a finite measure on $\partial \Omega$ if and only if $u_1(x)<\infty$, where $u_1$ is the so-called gauge function defined by \eqref{gauge-def} below (see Corollary \ref{cor} for conditions under which $u_1<\infty$ $d\omega$-a.e.).
We remark that for the normalized version of $\mathcal{M}(x, z)$ defined by \[
\widetilde{\mathcal{M}}(x, z)=\frac{\mathcal{M}(x, z)}{\mathcal{M}(x_0, z)}, \qquad (x, z) \in \Omega\times\partial \Omega , \] where $x_0\in \Omega$ is to be chosen so that $\mathcal{M}(x_0, z)<\infty$ for every $z \in \partial \Omega$, we have \[ d \mathcal{H}^x(z) = \widetilde{\mathcal{M}}(x, z)
\, d \mathcal{H}^{x_0} (z), \qquad (x, z) \in \Omega\times\partial \Omega , \] which is analogous to \eqref{harmmeasiden}. Obviously, $\widetilde{\mathcal{M}}(x_0, z)=1$, as for the unperturbed Martin's kernel $M(x,z)$. Moreover, \textit{formally} we have \[
\widetilde{\mathcal{M}}(x, z)=\lim_{y\to z, \, y \in \Omega}\frac{\mathcal{G}(x, y)}{\mathcal{G}(x_0, y)}, \qquad (x, z) \in \Omega\times\partial \Omega , \] where $\mathcal{G}(x, y)$ is the minimal Green's function associated with the Schr\"{o}dinger operator $-\triangle -\omega$ (see \cite{FNV}). Thus, $\widetilde{\mathcal{M}}(x, z)$ serves the role of the (normalized) Martin kernel associated with the Schr\"{o}dinger operator $-\triangle -\omega$.
Nevertheless, we prefer to use the kernel $\mathcal{M}(x, z)$, since it does not exclude the case $\mathcal{M}(x_0, z)=\infty$, and is more convenient in applications. Pointwise estimates of $\widetilde{\mathcal{M}}(x, z)$ are deduced easily from the estimates of $\mathcal{M}(x, z)$ discussed below.
Our bilateral estimates of $\mathcal{M}(x, z)$ (see \eqref{upperTM} and \eqref{lowerTM} below) are stated in terms of exponentials: \begin{equation} \label{exp-term} \begin{aligned}
M(x,z) & \, e^{\int_{\Omega} G(x,y) \, \frac{M(y,z)}{M(x,z)} \, d \omega (y)} \le \mathcal{M}(x, z)
\\ & \le M(x,z) \, e^{C\int_{\Omega} G(x,y) \, \frac{M(y,z)}{M(x,z)} \, d \omega (y)} ,
\end{aligned}
\end{equation} for all $(x, z) \in \Omega\times\partial \Omega$, with an appropriate constant $C>0$. We remark that \[ \mathcal{M}(x, z) = U(x, z) \, M(x,z), \qquad (x, z) \in \Omega\times\partial \Omega , \] where \begin{equation} \label{cond-gauge-def} U(x, z) = 1+ \frac{1}{M(x,z)}\sum_{j=1}^\infty T^j M(\cdot, z) (x) , \qquad (x, z) \in \Omega\times\partial \Omega , \end{equation} is the so-called \textit{conditional gauge} (\cite{CZ}, Sec. 4.3).
From \eqref{exp-term} it is immediate that \begin{equation} \label{exp-gauge}
e^{\int_{\Omega} G(x,y) \, \frac{M(y,z)}{M(x,z)} \, d \omega (y)} \le U(x, z)
\le e^{C\int_{\Omega} G(x,y) \, \frac{M(y,z)}{M(x,z)} \, d \omega (y)} ,
\end{equation}
for all $(x, z) \in \Omega\times\partial \Omega$. We emphasize that in the exponents of \eqref{exp-gauge} we only use the \textit{first term} in the sum on the right-hand side of \eqref{cond-gauge-def}.
A probabilistic definition of the conditional gauge in the case $d\omega =q \, dx$ ($q \in L^1_{loc} (\Omega)$) is provided by \[ U(x, z)= {E}_z^{x} \left[ e^{\int_0^{\zeta} q(X_t) \, dt}\right], \qquad (x, z) \in \Omega\times\partial \Omega ,
\] where $X_t$ is a path of the Brownian motion (properly scaled to replace $\frac{1}{2} \triangle$ used in the probabilistic literature with $\triangle$) starting at $x$, ${E}_z^{x}$ is the expectation conditioned on the event that $X_t$ exits $\Omega$ at $z\in \partial\Omega$, and $\zeta$ is the time when $X_t$ first hits $z$. Properties of the conditional gauge for potentials $q$ in Kato's class in a bounded Lipschitz domain $\Omega$ are discussed in \cite{CZ}, Ch. 7; in particular, $U(x, z)\approx 1$ if $U(x, z)\not\equiv +\infty$.
For general $\omega \ge 0$, we clearly have $U(x, z)\ge 1$, but $U(x, z)$
is no longer uniformly bounded from above, even if $U(x, z) \not\equiv +\infty$ and $||T||<1$. Consequently,
the so-called Conditional Gauge Theorem fails in this setup.
\begin{Thm} \label{mainufest} Let $\Omega\subset \mathbb{R}^n$ be a bounded uniform domain, $\omega$ a locally finite Borel measure on $\Omega$, and $f \geq 0$ a Borel measurable function on $\partial \Omega$.
(A) If $\Vert T \Vert <1$ (equivalently, (\ref{equivnormTless1}) holds with $\beta <1$), then there exists a positive constant $C$ depending only on $\Omega$ and $\Vert T \Vert$ such that \begin{equation} \label{ptwiseupperbnd} u_f (x) \leq \int_{\partial \Omega} e^{C \int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega (y)} f(z) \, dH^x(z), \quad x \in \Omega . \end{equation}
(B) If $u$ is a positive solution of \eqref{ufeqn}, then $\Vert T \Vert \leq 1$ (equivalently, (\ref{equivnormTless1}) holds for some $\beta \leq 1$) and \begin{equation} \label{ptwiselowerbnd}
u (x) \geq \int_{\partial \Omega} e^{\int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega (y)} f(z) \, dH^x(z), \quad x \in \Omega . \end{equation} \end{Thm}
In view of \eqref{ufrepresent2}, Theorem \ref{mainufest} gives estimates for the Schr\"{o}dinger harmonic measure $d \mathcal{H}^x$ in terms of the harmonic measure $dH^{x}$ for the Laplacian.
The solution $u_1$ of (\ref{ufeqn}), in the case where $f$ is identically $1$ on $\partial \Omega$, is called the (Feynman-Kac) \textit{gauge}: \begin{equation} \label{gauge-def}
u_1 = 1 + \sum_{j=1}^{\infty} T^j 1 , \end{equation} provided $u_1\not\equiv +\infty$. An equivalent probabilistic interpretation of the gauge when $d\omega =q(x) \, dx$ ($q \in L^1_{loc} (\Omega)$, $q \ge 0$) is given by (see \cite{CZ}, Sec. 4.3) \[ u_1(x)= {E}^{x} \left[ e^{\int_0^{\tau_\Omega} q(X_t) \, dt}\right], \qquad x \in \Omega , \] where $X_t$ is the Brownian path (properly scaled as above) starting at $x$, ${E}^{x} $ is the expectation operator, and $\tau_\Omega$ is the exit time from $\Omega$. Notice that $u_1$ given by \eqref{gauge-def} is related to the conditional gauge $U(x,z)$ defined by \eqref{cond-gauge-def} via the equation \[ u_1(x)= \int_{\partial\Omega} U(x, z) \, dH^x(z), \qquad x \in \Omega . \] In particular, \[ \inf_{z \in \partial \Omega} U(x, z) \le u_1(x ) \le\sup_{z \in \partial \Omega} U(x, z), \qquad x \in \Omega . \]
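The relation between $u_1$ and the conditional gauge stated above follows at once from \eqref{ufrepresent2} with $f\equiv 1$, combined with $\mathcal{M}(x,z)=U(x,z)\,M(x,z)$ and \eqref{harmmeasiden}: \[ u_1(x)=\int_{\partial\Omega}\mathcal{M}(x,z)\,dH^{x_0}(z) =\int_{\partial\Omega}U(x,z)\,M(x,z)\,dH^{x_0}(z) =\int_{\partial\Omega}U(x,z)\,dH^{x}(z), \qquad x\in\Omega. \]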
The following theorem gives sufficient and matching necessary criteria for the existence of $u_f$. For Martin's kernel $M(x,z)$, we define the adjoint operator $M^*$ for a Borel measure $\mu$ on $\Omega$ by \begin{equation} \label{defMstar} M^* \mu (z) = \int_{\Omega} M(x,z) \, d \mu (x), \quad \mbox{for} \,\, z \in \partial \Omega. \end{equation} The role of $M^*$ in the following theorem is analogous to the role of the balayage operator $P^*$ in \cite{FV2} for $C^{1,1}$ domains $\Omega$, where all integrals over $\partial \Omega$ are taken with respect to surface area in place of harmonic measure.
\begin{Thm} \label{gaugecrit} Suppose $ \Omega \subset \mathbb{R}^n$ is a bounded uniform domain, $\omega$ is a locally finite Borel measure on $\Omega$, and $f \geq 0$ ($f$ not a.e. $0$ with respect to harmonic measure) is a Borel measurable function on $\partial \Omega$. Let $x_0 \in \Omega$ be the reference point in the definition of Martin's kernel. Let $m(x) = \min (1, G(x,x_0))$.
(A) There exists $C>0$ ($C$ depending only on $\Omega$ and $\Vert T \Vert$) such that if $\Vert T \Vert <1$ (equivalently, (\ref{equivnormTless1}) holds with $\beta <1$) and \begin{equation} \label{martincritsuff}
\int_{\partial \Omega} e^{C M^* (m \omega)} \, f \, dH^{x_0} < \infty , \end{equation} then $u_f \in L^1_{loc} (\Omega, dx)$.
(B) If $u_f \in L^1_{loc} (\Omega, dx) $, then $\Vert T \Vert \leq 1$ and \begin{equation} \label{martincritnec}
\int_{\partial \Omega} e^{M^* (m \omega)} \, f \, dH^{x_0} < \infty . \end{equation}
\end{Thm}
\noindent {\bf Remark.}
More general results for equation (\ref{form-sol}) with an arbitrary positive harmonic function $h$ in place of $Pf$, in terms of
Martin's representation, are given in Theorem \ref{mainu-harm} and Theorem \ref{u_h-exist} below.
For $C^{1,1}$ domains $\Omega$ and absolutely continuous $\omega$, Theorem \ref{mainufest} and an analogue of Theorem \ref{gaugecrit} were proved in the special case $f=1$ in \cite{FV2}, Theorem 1.2. To see this, note that for a $C^{1,1}$ domain, $M(x, z) = P(x,z)/P(x_0, z)$, by \eqref{harmmeasiden}, which shows that inequalities (1.12) and (1.14) in \cite{FV2} follow from Theorem \ref{mainufest} above. To see that (1.10) and (1.13) in \cite{FV2} follow from Theorem \ref{gaugecrit}, choose $x_0$ with $\mathrm{dist}(x_0, \partial \Omega) > \delta$, where $0< \delta < \mbox{diam} ( \Omega) /2$, so that $P(x_0, z)$ is comparable to a constant depending only on $\Omega$. An extension to the case of uniform domains of the criteria in \cite{FV2} for the existence of the nontrivial gauge ($u_1\not\equiv +\infty$) is provided by the following corollary.
\begin{Cor} \label{cor} Suppose $ \Omega \subset \mathbb{R}^n$ is a bounded uniform domain, and $\omega$ is a locally finite Borel measure on $\Omega$. Let $x_0 \in \Omega$ be the reference point in the definition of Martin's kernel, and $m(x) = \min (1, G(x,x_0))$.
(A) There exists $C>0$ ($C$ depending only on $\Omega$ and $\Vert T \Vert$) such that if $\Vert T \Vert <1$ and \begin{equation} \label{martincritsuff-g}
\int_{\partial \Omega} e^{C M^* (m \omega)} \, dH^{x_0} < \infty , \end{equation} then the gauge $u_1$ is nontrivial.
(B) If the gauge $u_1$ is nontrivial, then $\Vert T \Vert \leq 1$ and \begin{equation} \label{martincritnec-g}
\int_{\partial \Omega} e^{M^* (m \omega)} \, dH^{x_0} < \infty . \end{equation}
\end{Cor}
As an application of Corollary \ref{cor}, we consider elliptic equations of Riccati type with quadratic growth in the gradient, \begin{equation}\label{nonlineareqn-1} \left\{ \begin{aligned}
-\triangle v & = |\nabla v|\,^2 + \omega \, \, & \mbox{in} \, \, \Omega \\ v & = 0 \quad & \mbox{on} \, \, \partial \Omega \end{aligned} \right. \end{equation} for locally finite Borel measures $\omega$, in bounded uniform domains $\Omega \subset \mathbb{R}^n$. Although \eqref{nonlineareqn-1} is formally related to equation \eqref{ufeqn} with $f=1$ by the relation $v = \log u$, it is well-known that this formal relation is not sufficient to guarantee equivalence of the two equations (see \S4). Nevertheless we obtain the following result.
\begin{Thm}\label{riccatithm} Suppose $\Omega \subset \mathbb{R}^n$ is a bounded uniform domain, and $\omega$ is a locally finite Borel measure in $\Omega$.
(A) Suppose $||T||<1$, or equivalently (\ref{equivnormTless1}) holds with $\beta<1$, and (\ref{martincritsuff-g}) holds with a large enough constant $C>0$ (depending only on $\Omega$ and $||T||$). Then $v= \log u_1 \in W^{1,2}_{loc} (\Omega)$ is a weak solution of (\ref{nonlineareqn-1}).
(B) Conversely, if (\ref{nonlineareqn-1}) has a weak solution $v\in W^{1,2}_{loc} (\Omega)$, then $u=e^v$ is a supersolution to
(\ref{ufeqn}) with $f=1$, i.e., $u\ge G(\omega u) +1$.
Moreover, $||T||\le 1$, or equivalently (\ref{equivnormTless1}) holds with $\beta = 1$, and (\ref{martincritnec-g}) holds. \end{Thm}
\noindent{\bf Remarks.} 1. In Theorem \ref{gaugecrit}, $u_f \in L^1_{loc} (\Omega, dx)$ actually yields $u_f \in L^1(\Omega, m dx)\cap L^1(\Omega, m d \omega)$, or equivalently $G (u_f \omega) \not\equiv+\infty$.
2. For bounded Lipschitz domains $\Omega$, $u_1$ is a ``very weak'' solution in the sense of \cite{MR}. More precisely, $u=u_1 -1 $ is a ``very weak'' solution to $-\triangle u = \omega u + \omega$ with $u=0$ on $\partial \Omega$. Here one can use $\phi_1$ in place of $m$, where $\phi_1$ is the first eigenfunction of the Dirichlet Laplacian in $\Omega$ (see \cite{AAC}, Lemma 3.2). Then $u_1 \in L^1(\Omega, \phi_1 dx)$ and $\int_\Omega \phi_1 \, d \omega<+\infty$.
3. Our main results for uniform domains $\Omega$ are based on the exponential bounds for Green's function $\mathcal{G}(x,y)$ (see Theorem \ref{FNVTheorem} below) obtained in \cite{FNV}. Here $\mathcal{G}(x,y)$ is the kernel of
the operator $(I-T)^{-1}$ defined by \eqref{defGreenSchr}, where $T$ is an integral operator with positive quasi-metric kernel. The case of $C^{1,1}$ domains $\Omega$ and $d \omega =q \, dx$, where $q\in L^1_{loc}(\Omega, dx)$, was treated earlier in \cite{FV1} for
small $||T||$, and in \cite{FV2} for $||T||<1$.
4. In the special case of Kato class potentials, or more generally, $G$-bounded perturbations $\omega$ for the Schr\"{o}dinger operator $-\triangle -\omega$, it is known that $\mathcal{G}(x,y)\approx G(x, y)$. In this case,
the gauge $u_1$ exists, and is uniformly bounded, if and only if $||T||<1$ (see \cite{CZ}, \cite{Han1}, \cite{Pin}).
5. For the fractional Schr\"{o}dinger operator $(-\triangle)^{\frac{\alpha}{2}} -\omega$, criteria of the existence of the gauge $u_1$ in the case $0<\alpha<2$ were obtained in
\cite{FV3}. They are quite different from Corollary \ref{cor}
and require no extra boundary restrictions on $\Omega$ like \eqref{martincritsuff-g},
\eqref{martincritnec-g} in the case $\alpha=2$.
\section{Pointwise estimates for $u_f$}\label{sec2}
Recall that the Martin kernel is defined by \eqref{martin-K}. Then $M(x, z)$ is a H\"older continuous function in $z\in \partial \Omega$ (\cite{Aik}, Theorem 3). It is worth mentioning that in uniform domains, harmonic measure may vanish on some surface balls, and so the Radon--Nikodym derivative formula $M(x,z)=\frac{dH^x }{dH^{x_0}}(z)$, which holds for NTA domains, is no longer available as a means to recover $M(x,z)$ at every point $z \in \partial \Omega$. Instead, $M(x,z)$ can be determined via \eqref{martin-K}, so that \eqref{harmmeasiden} still holds (see \cite{Aik}, p. 122).
In this case, the Martin representation for every nonnegative harmonic function $h$ in $\Omega$ can be expressed in the form \begin{equation} \label{martin-rep} h(x)= \int_{\partial \Omega} M(x,z) \, d \mu_h (z), \qquad x \in \Omega, \end{equation} where $\mu_h$ is a finite Borel measure on $\partial \Omega$ uniquely determined by $h$.
The connection between Martin's kernel and harmonic measure in a uniform domain is provided by the equation (see \cite{Aik}, p. 142): \begin{equation} \label{hm-mu1} dH^x (z) = M(x,z) \, d \mu_1 (z), \qquad x \in \Omega, \, z \in \partial \Omega. \end{equation} Here $\mu_1$ is the representing measure in \eqref{martin-rep} for the function $h \equiv 1$.
Equation \eqref{hm-mu1} can be justified using \cite{AG}, Theorem 9.1.7 (in the special case $h \equiv 1$) for a bounded domain whose Martin boundary $\triangle$ is identified with $\partial \Omega$. It yields that, for every $f \in C(\partial \Omega)$, its harmonic extension $Pf$
via harmonic measure \eqref{harm-rep} can be represented in the form \begin{equation} \label{pf-rep} Pf (x) = \int_{\partial \Omega} M(x,z) \, f(z) \, d \mu_1 (z), \quad x \in \Omega. \end{equation} By the uniqueness of the representing measure in \eqref{harm-rep} for all $f \in C(\partial \Omega)$, it follows that \eqref{hm-mu1} holds.
In particular, since $M(x_0,z) =1$ for all $z \in \partial \Omega$, letting $x=x_0$ in \eqref{hm-mu1} yields $dH^{x_0}= d \mu_1$, and consequently \eqref{harmmeasiden} holds.
Let $\Omega$ be a bounded uniform domain in $\mathbb{R}^n$, $\omega$ a finite Borel measure in $\Omega$, and $f\ge 0$ a Borel measurable function in $\partial \Omega$ integrable with respect to harmonic measure. We consider solutions $u$ to \eqref{ufeqn} understood in the \textit{potential theoretic} sense. Namely, a function $u: \Omega \rightarrow [0, +\infty]$ is said to be a solution to \eqref{ufeqn} if $u$ is \textit{superharmonic} in $\Omega$ ($u\not\equiv+\infty$), and \begin{equation} \label{u-def}
u(x) = G(u \omega)(x) + Pf (x), \qquad \text{for all} \, \, x \in \Omega , \end{equation} where $Pf$ is the harmonic function defined by \eqref{harm-rep}. Then $Pf$ is the greatest harmonic minorant of $u$, and $u \in L_{loc}^1(\Omega, \omega)$, so that $u \, d \omega$ is the associated Riesz measure of $u$, where $-\triangle u = \omega u$ in the distributional sense. In fact, if a potential theoretic solution to \eqref{u-def} exists, then $u \in L^1(\Omega, m \omega)$, where $m(x)=\min (1, G(x, x_0))$ for some $x_0 \in \Omega$; otherwise $G(u \omega)\equiv +\infty$ (see \cite{AG}, Theorem 4.2.4).
We note that all potential theoretic solutions are by definition lower semicontinuous functions in $\Omega$. For a superharmonic function $u$, it is enough to require that equation \eqref{u-def} holds $dx$-a.e. Moreover, in a bounded uniform domain, any potential theoretic solution satisfies $u\in L^1(\Omega, m \, dx)$. This is not difficult to see using the estimate $G (m \, dx) \le C \, m$ in $\Omega$, which is a consequence of the so-called $3$-G inequality (see \cite{CZ}, \cite{Han1}, \cite{Han2}, \cite{Pin}). We remark that in $C^2$ domains $Pf$ is the Poisson integral, and in fact $u\in L^1(\Omega, dx)$ (see \cite{FV2}; \cite{MV}, Theorem 1.2). The latter
is no longer true for bounded Lipschitz domains (see, e.g., \cite{MR}).
Another useful way to define a solution of \eqref{ufeqn} is to require that \eqref{u-def} hold
$d \omega$-a.e. More precisely, a measurable function $0\le u <+\infty$ $d\omega$-a.e. is said to be a solution of \eqref{ufeqn} with respect to $\omega$ if \begin{equation} \label{u-omega}
u = G(u \omega) + Pf \qquad d \omega\text{-a.e.} \, \, \text{in}\,\, \Omega .
\end{equation} If such a solution exists, then obviously $u \in L^1_{loc} (\Omega, \omega)$, and in fact, as above, $u \in L^1(\Omega, m \omega)$.
We remark that if $f \not= 0$ (with respect to $d H^{x}$), and \eqref{u-omega} has a positive solution in this sense, then $||T||\le 1$ by Schur's lemma, and consequently \eqref{equivnormTless1} holds for $\beta=1$. It follows that $\omega (K) \le \text{cap} (K)$ for any compact set $K \subset \Omega$. In particular, $\omega$ must be absolutely continuous with respect to the Green (or Wiener) capacity, i.e., \begin{equation} \label{abs-cap} \text{cap} (K)=0 \, \Longrightarrow \, \omega (K)=0. \end{equation} (See details in \cite{FNV}, \cite{FV2}.)
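For the reader's convenience, we sketch the standard Schur-test computation behind the bound $||T||\le 1$ (ignoring measurability and finiteness technicalities): if $0<u<\infty$ $d \omega$-a.e. and $Tu\le u$, then for $g \ge 0$, by the Cauchy--Schwarz inequality with respect to the measure $G(x,y) \, d \omega(y)$, \[ (Tg(x))^2 \le Tu(x) \int_{\Omega} G(x,y) \, \frac{g(y)^2}{u(y)} \, d \omega(y) \le u(x) \int_{\Omega} G(x,y) \, \frac{g(y)^2}{u(y)} \, d \omega(y) . \] Integrating in $d\omega(x)$, using Fubini's theorem and the symmetry of $G$, and applying $Tu \le u$ once more, we obtain \[ \int_{\Omega} (Tg)^2 \, d \omega \le \int_{\Omega} \frac{g^2}{u} \, Tu \, d \omega \le \int_{\Omega} g^2 \, d \omega , \] i.e., $||T||_{L^2(\omega) \rightarrow L^2(\omega)} \le 1$.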
A connection between these two approaches is provided by the following claim used below. If $u$ is a solution of \eqref{u-omega} (with respect to $\omega$), then there exists a unique superharmonic function $\hat u \ge 0$ in $\Omega$ such that $\hat u=u$ $d \omega$-a.e. in $\Omega$, and
$\hat u \in L^1_{loc} (\Omega, \omega)$ is a potential theoretic solution that satisfies \eqref{u-def}.
Indeed, let $\hat u:= G(u \omega) + Pf $ everywhere in $\Omega$. Then $\hat u= u$ $d \omega$-a.e. by \eqref{u-omega}, $\hat u \in L_{loc}^1(\Omega, \omega)$, and consequently \[ \hat u(x) =G(u \omega) (x) + Pf (x) = G(\hat u \omega) (x)+ Pf (x) \quad \text{for all} \,\, x \in \Omega . \] Clearly, $\hat u$ is superharmonic since $G(u \omega)<+\infty$ $d \omega$-a.e., and hence $G(u \omega)$ is a Green potential, and $Pf$ is the greatest harmonic minorant of $\hat u$. Thus, $\hat u$ is a potential theoretic solution.
Moreover, such a superharmonic solution $\hat u$ is unique: if $\hat v$ is a superharmonic solution to \eqref{u-def} for which $\hat v = u$ $d \omega$-a.e., it follows that \[ \hat v = G(\hat v \omega) + Pf =G(u \omega) + Pf = \hat u \]
everywhere in $\Omega$.
If $\omega$ satisfies \eqref{abs-cap}, then it is enough to require that $u<+\infty$ and \eqref{u-def} hold q.e. Then $u$ is a solution of \eqref{u-omega} with respect to $\omega$, and $\hat u:=G(u \omega) + Pf $ is a potential theoretic solution to \eqref{ufeqn}, and $\hat u$ is a quasicontinuous representative of $u$, so that $\hat u = u$ q.e.
From now on, we will not distinguish between a solution $u$ to \eqref{u-omega} understood $d \omega$-a.e., and its superharmonic representative $\hat u= u$ $d \omega$-a.e. which satisfies \eqref{u-def} everywhere in $\Omega$.
In particular, the solution $u_f$ of \eqref{u-def} defined by \eqref{ufsolndef} \textit{everywhere} in $\Omega$ is a potential theoretic (superharmonic) solution of \eqref{ufeqn} provided $u_f\not\equiv +\infty$. Indeed, for $m \in \mathbb{N}$, \[ \sum_{j=0}^m T^j(Pf) (x) = Pf(x) + T \Big( \sum_{j=0}^{m-1} T^j(Pf) \Big) (x), \quad \text{for all} \, \, x \in \Omega . \] Letting $m \to \infty$, by the monotone convergence theorem we have \begin{align*} u_f & := \sum_{j=0}^\infty T^j(Pf)\\ & = Pf + T\Big(\sum_{j=0}^\infty T^j(Pf)\Big)\\& = Pf + G(u_f \omega) \end{align*}
everywhere in $\Omega$.
Clearly, $u_f$ is a superharmonic function provided $u_f \not\equiv+\infty$
in $\Omega$, which occurs if and only if $G (u_f \, \omega)\not\equiv +\infty$ in $\Omega$, or equivalently $u_f \in L^1(\Omega, m \omega)$.
Moreover, $u_f$ is a \textit{minimal} solution since, for every other
solution $u$, we obviously have, for every $m \in \mathbb{N}$,
\[
u=G(u \omega) + Pf = G(u \omega) +\sum_{j=0}^m T^j(Pf) \ge \sum_{j=0}^m T^j(Pf).
\] Letting $m \rightarrow \infty$, we see that $u \ge u_f$.
\begin{Def} \label{qmkernel} Let $(\Omega, \omega)$ be a measure space. A quasi-metric kernel $K$ is a measurable function $K: \Omega \times \Omega \rightarrow (0, +\infty]$ such that $K$ is symmetric ($K(x,y)=K(y,x)$) and $d= \frac{1}{K}$ satisfies \[ d(x,y) \leq \kappa (d(x,z) + d(z,y)) \quad \mbox{for all} \quad x,y,z \in \Omega, \] for some $\kappa >0$, called the \textbf{quasi-metric constant} for $K$.
A measurable function $K: \Omega \times \Omega \rightarrow (0, +\infty]$ is called \textbf{quasi-metrically modifiable} if there exists a measurable function $m: \Omega \rightarrow (0, \infty)$ such that $\tilde{K} (x,y) = \frac{K(x,y)}{m(x)m(y)}$ is a quasi-metric kernel. The function $m$ is called a \textbf{modifier} for $K$. \end{Def}
We will use the following result, from \cite{FNV}, Corollary 3.5.
\begin{Thm} \label{FNVTheorem} Let $(\Omega, \omega)$ be a measure space. Suppose $K$ is a quasi-metrically modifiable kernel on $\Omega$ with modifier $m$. Let $\kappa$ be the quasi-metric constant for $\frac{K(x,y)}{m(x)m(y)}$. For a non-negative, measurable function $h$ on $\Omega$, define \[ Th (x) = \int_{\Omega} K(x,y) h(y) \, d \omega (y), \quad \mbox{for} \,\, x \in \Omega. \] For $j \in \mathbb{N}$, let $T^j$ be the $j^{th}$ iterate of $T$, and let $T^0 h =h$.
(A) If $\Vert T \Vert <1$, then there exists a positive constant $C$, depending only on $\kappa$ and $\Vert T \Vert$, such that \begin{equation} \label{qmkernelupperbnd} \sum_{j=0}^{\infty} T^j m (x) \leq m(x) e^{C (Tm(x))/m(x)}, \qquad \text{for all} \, \, x \in \Omega . \end{equation}
(B) There exists a positive constant $c$, depending only on $\kappa$, such that \begin{equation} \label{qmkernellowerbnd} \sum_{j=0}^{\infty} T^j m (x)\geq m(x) e^{c (Tm(x))/m(x)}, \qquad \text{for all} \, \, x \in \Omega . \end{equation}
\end{Thm}
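A scalar analogue may help the reader gauge the exponential bounds \eqref{qmkernelupperbnd} and \eqref{qmkernellowerbnd}: if $T$ is replaced by multiplication by a number $t$ with $0 \le t \le t_0 <1$, then \[ m \, e^{t} \le \sum_{j=0}^{\infty} t^j m = \frac{m}{1-t} \le m \, e^{\frac{t}{1-t_0}} , \] since $1-t \le e^{-t}$ and $-\log (1-t) = \int_0^t \frac{ds}{1-s} \le \frac{t}{1-t_0}$. Theorem \ref{FNVTheorem} asserts that the same exponential behavior, with $t$ replaced by $Tm/m$, survives for quasi-metrically modifiable kernels.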
It is known (\cite{An2}, \cite{Han1}) that in a bounded uniform domain $\Omega$ (in particular, an NTA domain), the Green's kernel $G(x,y)$ is quasi-metrically modifiable, with modifier $m(x) = \min (1, G(x,x_0))$, where $x_0$ is any fixed point in $\Omega$, and the quasi-metric constant of the modified kernel $G(x,y)/(m (x) \, m(y))$ is independent of $x_0$.
In fact, in a bounded uniform domain $\Omega\subset \mathbb{R}^n$ ($n\ge 3$), the following slightly stronger property (called the strong generalized triangle property) holds (\cite{Han1}, p. 465): \begin{equation}\label{strong-quasi}
|x_1-x_2| \le |x_1-y| \Longrightarrow \frac{G(x_1, y)}{m(x_1)} \le \kappa \, \frac{G(x_2, y)}{m(x_2)}, \end{equation} for all $x_1, x_2, y \in \Omega$, where $\kappa$ depends only on $\Omega$. It is known (\cite{Han1}, Corollary 2.8) that \eqref{strong-quasi} is equivalent to the uniform boundary Harnack principle established for uniform domains in (\cite{Aik}, Theorem 1). By (\ref{strong-quasi}), \begin{equation}\label{liminf-limsup} \limsup_{x_1\rightarrow z, \, x_1 \in \Omega} \frac{G(x_1, y)}{m(x_1)} \le \kappa \, \liminf_{x_2\rightarrow z, \, x_2 \in \Omega} \frac{G(x_2, y)}{m(x_2)} , \end{equation}
for all $y\in\Omega$ and $z \in \partial \Omega$, where $\kappa$ depends only on $\Omega$, because the condition $|x_1-x_2| \le |x_1-y|$ is satisfied for $x_1$ and $x_2$ sufficiently close to $z$.
We will need the following lemma for punctured quasi-metric spaces due to Hansen and Netuka (\cite{HN}, Proposition 8.1 and Corollary 8.2); it originated in (Pinchover \cite{Pin}, Lemma A.1) for normed spaces.
\begin{Lemma}\label{hansen} Suppose $d$ is a quasi-metric on a set $\Omega$ with quasi-metric constant $\kappa$. Suppose $x_1 \in \Omega$. Then \begin{equation}\label{quasi-cond} \tilde d(x,y) = \frac{d(x,y)}{d(x,x_1) \cdot d(y,x_1)}, \qquad x, y \in \Omega \setminus\{x_1\}, \end{equation} is a quasi-metric on $\Omega\setminus\{x_1\}$ with quasi-metric constant $4 \kappa^2$. \end{Lemma}
\begin{Lemma} \label{Martinkernelquasimetric} Let $\Omega$ be a bounded uniform domain with Green's function $G(x,y)$. Fix some $x_0 \in \Omega$ and define Martin's kernel $M(x,z)$ for $x \in \Omega$ and $z \in \partial \Omega$ by (\ref{martin-K}). Then for each $z \in \partial \Omega$, the function $\tilde{m}(x) = M(x,z)$ is a quasi-metric modifier for $G$, with quasi-metric constant $\kappa$ independent of $z \in \partial \Omega$. \end{Lemma}
\begin{proof} Fix $x_0\in \Omega$, $z\in \partial \Omega$. As noted above, $m (x) = \min (1, G(x,x_0))$ is a modifier for $G$, so that $d(x, y)=\frac{m (x) \, m(y)}{G(x,y)}$ is a quasi-metric on $\Omega$ with positive constant $\kappa$ independent of $x_0$, so that \[ \frac{m (x) \, m(y)}{G(x,y)} \leq \kappa \left( \frac{m (x) \, m(w)}{G(x,w)} + \frac{m (w) \, m(y)}{G(w,y)} \right) ,\] for all points $x,y,w \in \Omega$. Suppose $x_1 \in \Omega$ with $x_1 \neq x_0$. Clearly, for $\tilde d$ defined by \eqref{quasi-cond}, we have \[ \tilde d(x,y) = \frac{1}{m(x_1)^2} \, \frac{G(x, x_1) \, G(y, x_1)}{G(x, y)}, \qquad x, y\in \Omega\setminus\{x_1\} . \] Then by Lemma \ref{hansen} it follows that $\tilde d$ is a quasi-metric on $\Omega\setminus\{x_1\}$ with quasi-metric constant $4 \kappa^2$. Assuming that $x, y, w \in \Omega\setminus\{x_1\}$, from the inequality $\tilde d(x,y) \le 4 \kappa^2 [\tilde d(x,w) + \tilde d(y,w)]$, we deduce \begin{align*} \frac{1}{m(x_1)^2} \, \frac{G(x, x_1) \, G(y, x_1)}{G(x, y)} &\le \frac{4 \kappa^2}{m(x_1)^2} \, \\ \times & \left[ \frac{G(x, x_1) \, G(w, x_1)}{G(x, w)} + \frac{G(y, x_1) \, G(w, x_1)}{G(y, w)} \right]. \end{align*} Multiplying both sides of the preceding inequality by $\frac{m(x_1)^2}{[G(x_0, x_1)]^2}$ yields \begin{align*}
& \frac{G(x, x_1) \, G(y, x_1)}{G(x_0, x_1) \, G(x, y) \, G(x_0, x_1)} \le 4 \kappa^2 \, \\
& \times \left[ \frac{G(x, x_1) \, G(w, x_1)}{G(x_0, x_1) \, G(x, w) \, G(x_0, x_1)} + \frac{G(y, x_1) \, G(w, x_1)}{G(x_0, x_1) \, G(y, w) \, G(x_0, x_1)} \right]. \end{align*}
Letting $x_1 \rightarrow z$, with $x_1 \in \Omega$, we have \[ \lim_{x_1 \rightarrow z, \, x_1 \in \Omega} \frac{G(x, x_1)}{G(x_0, x_1)}= M(x,z)=\tilde{m} (x), \] by (\ref{martin-K}), and similarly with $x$ replaced by $y$ or $w$. We obtain \[ \frac{\tilde{m} (x)\tilde{m}(y)}{G(x,y)} \leq 4 \kappa^2 \left( \frac{\tilde{m} (x)\tilde{m}(w)}{G(x,w)} + \frac{\tilde{m}(w)\tilde{m}(y)}{G(w,y)} \right) . \] Thus $\tilde{m}$ is a quasi-metric modifier for $G$, with quasi-metric constant $4 \kappa^2$ independent of $z \in \partial \Omega$. \end{proof}
\begin{proofof} Theorem \ref{mainufest}. By Lemma \ref{Martinkernelquasimetric}, $\tilde{m}(x)=M(x,z)$ is a quasi-metric modifier for $G$, for all $z \in \partial \Omega$, with quasi-metric constant independent of $z$. Hence, by part (A) of Theorem \ref{FNVTheorem} with $\tilde m$ in place of $m$ (note that the estimates in Theorem \ref{FNVTheorem} hold everywhere), under the assumption that $\Vert T \Vert <1$ we have \begin{equation}\label{upperTM} \begin{aligned}
\mathcal{M}(x, z) & = \sum_{j=0}^{\infty} T^j M(\cdot, z) (x) \leq M(x,z) e^{C \, (TM(\cdot,z))(x)/M(x, z)}\\ & = M(x,z) e^{C \int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega(y)} , \qquad (x, z) \in \Omega\times\partial \Omega ,
\end{aligned}
\end{equation}
with $C$ depending only on $\Omega$ and $||T||$. Substituting this estimate in \eqref{ufrepresent} and using equation (\ref{harmmeasiden}) gives (\ref{ptwiseupperbnd}). This proves part (A) of Theorem \ref{mainufest}.
Suppose now that $u$ is a solution to \eqref{ufeqn}. Assuming without loss of generality that $f \not=0$ $d H^{x}$-a.e., so that $u \geq Pf>0$ is a positive solution, we see that $T u \leq u$, where $0<u<\infty$ $d \omega$-a.e. Hence, $\Vert T \Vert \leq 1$, and consequently \eqref{equivnormTless1} holds with $\beta=1$, by Schur's lemma (see \cite{FNV}, \cite{FV2}). In particular, \eqref{abs-cap} holds.
Since $Pf$ is a positive harmonic function, obviously $Pf\ge c_K>0$ on every compact set $K\subset \Omega$,
and consequently \begin{equation}\label{F-M-dom}
c_K \, G(\chi_K \omega)\le G(Pf \omega) \le G (u \omega) \le u < \infty \quad d \omega\text{-a.e.}
\end{equation}
This simple observation will be used below.
For the minimal solution $u_f$ to \eqref{ufeqn} given by \eqref{ufsolndef} we have $u\ge u_f$. Applying part (B) of Theorem \ref{FNVTheorem} with $\tilde m=M(\cdot, z)$ in place of $m$ gives \begin{equation}\label{lowerTM} \begin{aligned} \mathcal{M}(x, z) & = \sum_{j=0}^{\infty} T^j M(\cdot, z) (x) \geq M(x,z) e^{c \, (TM(\cdot,z))(x)/M(x, z)}\\ & = M(x,z) e^{c \int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega(y)} , \qquad (x, z) \in \Omega\times\partial \Omega ,
\end{aligned}
\end{equation} with $c$ depending only on $\Omega$.
In fact, we can let $c=1$ in \eqref{lowerTM} if instead of statement (B) of Theorem \ref{FNVTheorem} we use a recent lower estimate of solutions obtained in \cite{GV2}, Theorem 1.2, with $q=1$, $\mathfrak{b}=1$, and $h=\tilde m$. Here $\mathfrak{b}$ is the constant in the so-called weak domination principle, which states that, for any bounded measurable function $g$ with compact support, \begin{equation}\label{weak-dom} G (g \omega)(x)\leq h(x)\ \text{in }\mathrm{supp} (g)\ \ \Longrightarrow \ \ G(g \omega)(x) \leq \mathfrak{b}\ h(x) \, \, \text{in\ }\Omega , \end{equation} where $h$ is a given positive lower semicontinuous function on $\Omega$.
For Green's kernel $G$, this property with $\mathfrak{b}=1$ is a consequence of the classical Maria--Frostman
domination principle (see \cite{Hel}, Theorem 5.4.8), for any positive superharmonic function $h$. We only need to verify that
$G (g \omega)<\infty$ $d \omega$-a.e., which is immediate from \eqref{F-M-dom}. Hence,
\eqref{weak-dom} holds with $\mathfrak{b}=1$, and so
\eqref{lowerTM} holds with $c=1$ by \cite{GV2}, Theorem 1.2.
Consequently, by the same argument as above,
\begin{align*}
u_f (x)& = \int_{\partial \Omega} f(z) \,
\sum_{j=0}^{\infty} T^j M(\cdot, z) (x) \, d H^{x_0}(z)\\ & \geq
\int_{\partial \Omega} e^{\int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega(y)}\, f(z) \, M(x,z) \, d H^{x_0}(z), \quad \text{for all} \, \, x \in \Omega,
\end{align*} where $M(x,z) \, d H^{x_0}(z)= d H^{x}(z)$. This yields the lower bound (\ref{ptwiselowerbnd}).
The proof of part (B) of Theorem \ref{mainufest} is complete. \end{proofof}
We complete this section with an extension of Theorem \ref{mainufest} which covers solutions of \eqref{form-sol} with an arbitrary positive harmonic function $h$ in place of $Pf$. Such solutions arise naturally: if $u$ is a positive superharmonic function in $\Omega$ such that \begin{equation}\label{u-harm} -\triangle u = \omega u, \quad u\ge 0, \, \, \mbox{in} \, \, \Omega , \end{equation} and if the greatest harmonic minorant of $u$ is $h>0$, then by the Riesz decomposition theorem, \begin{equation}\label{u-harm-int} u= G(u \omega)+ h \quad \mbox{in} \, \, \Omega , \end{equation} where $G(u \omega)\not\equiv +\infty$, and $u \, d \omega$ is the corresponding Riesz measure, a locally finite Borel measure in $\Omega$.
Given a positive harmonic function $h$ on $\Omega$, we will estimate the minimal solution \[ u_h = h + \sum_{j=1}^\infty T^j h \] of \eqref{u-harm-int} and in particular give conditions for $u_h$ to exist, i.e., such that $u_h \not\equiv +\infty$. The proof is based on Martin's representation \eqref{martin-rep}, which takes the place of \eqref{harm-rep} in the proof of Theorem \ref{mainufest}.
\begin{Thm} \label{mainu-harm} Let $\Omega\subset \mathbb{R}^n$ be a bounded uniform domain,
$\omega$ a locally finite Borel measure on $\Omega$, and $h$ a positive harmonic
function in $\Omega$.
(A) If $\Vert T \Vert <1$, then there exists a positive constant $C$ depending only on $\Omega$ and $\Vert T \Vert$ such that \begin{equation} \label{upperbnd-harm} u_h (x) \leq \int_{\partial \Omega} e^{C \int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega (y)} M(x, z) \, d \mu_h(z), \quad x \in \Omega . \end{equation}
(B) If $u$ is a positive solution of \eqref{u-harm-int}, then $\Vert T \Vert \leq 1$, and \begin{equation} \label{lowerbnd-harm}
u (x) \geq \int_{\partial \Omega} e^{\int_{\Omega} G(x,y) \frac{M(y,z)}{M(x,z)} d \omega (y)} M(x, z) \,
d \mu_h(z), \quad x \in \Omega . \end{equation} \end{Thm}
The proof of Theorem \ref{mainu-harm} is very similar to that of Theorem \ref{mainufest} above. We only need to integrate both sides of estimates \eqref{upperTM} and \eqref{lowerTM} over $\partial\Omega$ against $d \mu_h(z)$ in place of $f(z) \, dH^{x_0} (z)$.
\section{Existence criteria for $u_f$}\label{sec3}
We require a few results prior to giving the proof of Theorem \ref{gaugecrit}. The following lemma is well-known (see, for instance, \cite{AG}, Lemma 4.1.8 and Theorem 5.7.4), but we include a proof for the sake of completeness. Recall that $x_0 \in \Omega$ is a fixed reference point and $m(x) = \min (1, G(x,x_0))$.
\begin{Lemma} \label{estGchiK} Let $\Omega \subseteq{\mathbb{R}^n}$ ($n\ge 2$) be a domain with nontrivial Green's function $G$. Let $K$ be a compact subset of $\Omega$ and let $\chi_K$ be the characteristic function of $K$. There exists a constant $C_K$ depending on $\Omega$, $K$, and the choice of $x_0$, such that \begin{equation}\label{GchiKest}
G \chi_K (x) \leq C_K \, m (x), \quad x \in \Omega. \end{equation}
Also, if $|K|>0$, there exists a constant $c_K>0$ depending on $\Omega$, $K$ and $x_0$ such that \begin{equation}\label{lowerGchiKest}
G \chi_K (x) \geq c_K \, m (x), \quad x \in \Omega. \end{equation} \end{Lemma}
\begin{proof} We first prove inequality (\ref{GchiKest}). Suppose $n \geq 3$ (the case $n=2$ is handled in a similar way with obvious modifications). We assume $|K|>0$, else the result is trivial. We also assume that $x_0 \in K$; if not, replacing $K$ with $K \cup \{x_0\}$ does not change $G \chi_K$. We first claim that there exists a constant $C_1(K)$ depending on $K$ and $x_0$ such that \begin{equation} \label{GchiKbnd1} G \chi_K (x) \leq C_1 (K) , \end{equation}
for all $x \in \Omega$. To prove this claim, we recall the standard fact that $G(x,y) \leq C|x-y|^{2-n}$ for all $x, y \in \Omega$. Let $R$ be the diameter of $K$. Then there exists $y_0 \in K$ such that $K \subseteq \overline{B(y_0, R)}$. If $x \in B(y_0, 2R)$, then $K \subseteq B(x, 3R)$ and
\[ \int_K G(x,y) \, dy \leq \int_{B(x, 3R)} \frac{c}{|x-y|^{n-2}} \, dy \leq c \int_0^{3R} \frac{r^{n-1}}{r^{n-2}} \, dr = c R^2. \]
If $x \not\in B(y_0, 2R)$ then $|x-y|^{2-n} \leq R^{2-n}$ for all $y \in K$, so
\[ \int_K G(x,y) \, dy \leq C R^{2-n} |K| \leq c R^2 . \]
Next we claim that there exists a constant $C_2$ depending on $\Omega$, $K$ and $x_0$ such that \begin{equation} \label{GchiKbnd2} G \chi_K (x) \leq C_2 \, G(x, x_0) , \end{equation} for all $x \in \Omega$. For this claim, let $U$ be a subdomain of $\Omega$ such that $x_0\in U$, $K \subseteq U$ and $\overline{U} \subseteq \Omega$. If $x \in \Omega \setminus U$, then $G(x,y)$ is a positive harmonic function of $y$ in $U$, so by Harnack's inequality (e.g., see \cite{AG}, Corollary 1.4.4), there exists a constant $C(K, U)$ such that $G(x,y) \leq C(K, U) \, G(x, x_0)$ for all $y \in K$. Hence
\[ \int_K G(x,y) \, dy \leq C(K, U) \, |K| \, G(x, x_0).\]
Since $U$ can be chosen depending only on $x_0$, $K$, and $\Omega$, we may replace $C(K, U)$ with $C(x_0, K, \Omega)$. On the other hand, suppose $x \in U$. Note that $G(z, x_0)$ is a strictly positive lower semi-continuous function of $z \in \Omega$ and hence $M = \min \{ G(z, x_0): \, z \in \overline{U} \} >0$, where $M$ depends on $\Omega, x_0$ and $U$, hence $K$. Hence by equation (\ref{GchiKbnd1}), \[ G \chi_K (x) \leq C_1 (K) \leq \frac{C_1(K)}{M} \, G(x, x_0). \]
Since $m(x) = \min (1, G(x, x_0))$, inequalities (\ref{GchiKbnd1}) and (\ref{GchiKbnd2}) imply inequality (\ref{GchiKest}).
To prove inequality (\ref{lowerGchiKest}), let $U$ be as above. For $x \in \Omega \setminus U$, the same application of Harnack's inequality as above gives that $G(x,y) \geq C (x_0, K, \Omega)^{-1} G(x, x_0)$ for all $y \in K$. Hence
\[ \int_K G(x,y) \, dy \geq C(x_0, K, \Omega)^{-1} \, |K| \, G(x, x_0) \geq C(x_0, K, \Omega)^{-1} |K| \, m(x). \] Now suppose $x \in \overline{U}$. Note that $G(z,y)$ is a strictly positive lower semi-continuous function of $(z,y)$ in $\Omega\times \Omega$ (see \cite{AG}, Theorem 4.1.9). Hence the minimum $C_3 (\overline{U}) = \min \{ G(z,y) \,: \, (z, y) \in \overline{U}\times \overline{U} \}$ is attained at some point of the compact set $\overline{U}\times \overline{U}$.
In particular, $C_3(\overline{U})>0$. Since $m(x) \leq 1$,
\[ \int_K G(x,y) \, dy \geq C_3 (\overline{U}) \, |K| = C_3 (x_0, K, \Omega) \, m(x) . \] \end{proof}
\begin{Lemma}\label{low-M-est} Suppose $\Omega \subset \mathbb{R}^n$ ($n\ge 2$) is a bounded uniform domain. Suppose $x_0\in \Omega$ is a reference point for the Martin kernel. Then there exists a positive constant $c$ depending only on $x_0$ and $\Omega$ such that \begin{equation}\label{martin-low} M(x, z) \ge c \, m(x), \qquad \text{for all} \, \, (x, z) \in \Omega\times\partial \Omega , \end{equation} where $m(x)=\min (1, G(x, x_0))$.
In particular, if $\omega$ is a locally finite Borel measure in $\Omega$ such that $M^{*} (m \, \omega) \not\equiv+\infty$, then $m \in L^2 (\Omega, \omega)$. \end{Lemma}
\begin{proof} Fix $z \in \partial \Omega$. Let $B(x_0, r) \subset \Omega$, where $0<r\le \frac{1}{2} \, \text{dist} \, (x_0, \partial \Omega)$. Since $M(\cdot, z)$ is a positive harmonic function in $\Omega$, by Harnack's inequality in $B(x_0, 2r)$, there exists a constant $c>0$ depending only on $x_0$ and $r$ such that $M(x, z) \ge c \, M(x_0, z)$ for all $x \in B(x_0, r)$. Since $M(x_0, z)=1$, \begin{equation}\label{m-c} M(x, z) \geq c >0, \quad \text{for all} \, \, x \in B(x_0, r) . \end{equation}
For $x \in \Omega\setminus B(x_0, r)$, we argue that by the $3$-G inequality in a bounded uniform domain ($n\ge 3$), \[
\frac{G(x, x_0) \, G(x_0, y)}{G(x, y)}\le C \, \left( |x-x_0|^{2-n} + |y-x_0|^{2-n} \right) , \] for all $y \in \Omega$, where $C$ depends only on $\Omega$, see \cite{Han1}. Hence, for $x, y \in \Omega\setminus B(x_0, r)$, \[
\frac{G(x, y)}{G(x_0, y)}\ge C^{-1} \, \frac{G(x, x_0)}{|x-x_0|^{2-n} + |y-x_0|^{2-n} }\ge \frac{C^{-1} r^{n-2}}{2} \, G(x, x_0), \] since $|x-x_0|^{2-n} + |y-x_0|^{2-n} \le 2 r^{2-n}$. (For $n=2$, an analogue of the $3$-G inequality holds in any bounded domain \cite{Han2}.) Letting $y \rightarrow z$, where without loss of generality we may assume that $y \in \Omega\setminus B(x_0, r)$, we deduce \begin{equation}\label{m-g} M(x, z) \ge \frac{C^{-1} r^{n-2}}{2}\, G(x, x_0) , \quad \text{for all} \, \, x \in \Omega\setminus B(x_0, r) . \end{equation} Combining estimates \eqref{m-c} and \eqref{m-g} yields \eqref{martin-low}.
If $\omega$ is a locally finite Borel measure in $\Omega$ such that $M^{*} (m \, \omega)\not\equiv +\infty$, then for some $z\in \partial \Omega$, by \eqref{martin-low}, $\int_\Omega m^2 \, d \omega \le c^{-1} M^* (m \omega)(z) <+\infty$, i.e., $m \in L^2(\Omega, \omega)$. \end{proof}
\begin{Lemma}\label{conv-lemma} Suppose $\Omega \subset \mathbb{R}^n$ ($n\ge 2$) is a bounded uniform domain. Suppose $\mu$ is a finite Borel measure with compact support in $\Omega$. Let $z \in \partial \Omega$. Then \begin{equation}\label{min-thin2}
\lim_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{G(x, x_0)} = \int_{\Omega} M(y,z) \, d\mu(y) = M^* \mu (z).
\end{equation} In addition, if $z$ is a regular point of $\partial \Omega$, then \begin{equation}\label{min-thin}
\lim_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{m(x)} = \int_{\Omega} M(y,z) \, d\mu(y) = M^* \mu (z).
\end{equation}
\end{Lemma}
\begin{proof} By \eqref{martin-K}, if $y\in\Omega$ and $x_j\rightarrow z$ ($x_j\in \Omega$), then \[ \lim_{j \rightarrow \infty} G(y, x_j) / G (x_j, x_0) = M(y,z) . \] As in the proof of Lemma \ref{estGchiK}, we denote by $U$ a relatively compact domain in $\Omega$ that contains both $x_0$ and $K := \operatorname{supp} \mu$. Since $x_j \rightarrow z$, where $z\in \partial\Omega$, we have that $x_j \not \in \overline{U}$ for $j \ge j_0$. Then $G(y, x_j)$ is a harmonic function of $y\in U$, and for each $j \ge j_0$, by Harnack's inequality, \[ G(y, x_j) \le C(K, U) \, G(x_0, x_j), \qquad \text{for all} \, \, y \in K . \]
Since $\mu$ is a finite measure, we obtain \eqref{min-thin2} by the dominated convergence theorem.
If $z$ is a regular point of $\partial \Omega$, then $G (x_j, x_0)\rightarrow 0$ as $j \rightarrow \infty$, and consequently $m(x_j)=G (x_j, x_0)$ for $j$ large enough. Hence, \eqref{min-thin} follows from
\eqref{min-thin2}.
\end{proof}
In Lemma \ref{conv-lemma}, $\mu$ is a finite Borel measure with compact support in $\Omega$. We remark that more generally, for $\mu$ only locally finite, \begin{equation}\label{martin1} \liminf_{x\rightarrow z, \, \, x\in \Omega} \frac{G \mu(x)}{G(x_0, x)}\ge \int_\Omega M(x, z) \, d \mu(x), \end{equation} for $z\in \triangle$ (a Martin boundary point), by Fatou's Lemma. In fact, by \cite{AG}, Theorem 9.2.7, for any Green's potential $G \mu$ and $z\in \triangle_1$ (a Martin boundary point where $\Omega$ is not minimally thin), \begin{equation}\label{martin2} \liminf_{x\rightarrow z, \, \, x\in \Omega} \frac{G \mu(x)}{G(x_0, x)}= \int_\Omega M(x, z) \, d \mu(x). \end{equation} For uniform domains, $\triangle=\triangle_1=\partial\Omega$, so that \eqref{martin2} holds for all $z\in \partial\Omega$. We could use this fact in our proof below, but we prefer the more elementary approach in Lemma \ref{conv-lemma}. The compact support restriction can be removed later in the proof by exhausting $\Omega$ with a sequence of nested domains $\Omega_j$, and using the monotone convergence theorem.
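To illustrate the exhaustion just mentioned, \eqref{martin1} for a locally finite $\mu$ follows from Lemma \ref{conv-lemma}: fix relatively compact open sets $\Omega_j$ with $\overline{\Omega}_j \subset \Omega_{j+1}$ and $\bigcup_j \Omega_j = \Omega$, and set $\mu_j := \mu|_{\overline{\Omega}_j}$, a finite Borel measure with compact support in $\Omega$. Since $G \mu \ge G \mu_j$, equation \eqref{min-thin2} applied to $\mu_j$ gives \[ \liminf_{x\rightarrow z, \, \, x\in \Omega} \frac{G \mu(x)}{G(x_0, x)} \ge \lim_{x\rightarrow z, \, \, x\in \Omega} \frac{G \mu_j(x)}{G(x_0, x)} = \int_\Omega M(x, z) \, d \mu_j(x) , \] and, letting $j \rightarrow \infty$, the right-hand side increases to $\int_\Omega M(x, z) \, d \mu(x)$ by the monotone convergence theorem.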
\begin{proofof} Theorem \ref{gaugecrit}.
(A) Suppose $\Vert T \Vert <1$ and \eqref{martincritsuff} holds. Define \begin{equation}\label{defngreenpot}
Gf (x) = \int_{\Omega} G(x,y) f(y) \, dy, \,\,\, x \in \Omega. \end{equation} Let $G_1 =G$, and let $G_j (x,y)$ be the kernel of the $j^{th}$ iterate $T^j$ of $T$ defined by (\ref{defT}), so that \begin{equation} \label{defGj} T^j h(x) = \int_{\Omega} G_j(x,y) h(y)\, d \omega (y). \end{equation} Then $G_j$ in \eqref{defGj} is determined inductively for $j \geq 2$ by \begin{equation}\label{new-defGj}
G_j (x,y) = \int_{\Omega} G_{j-1} (x, w) G(w,y) \, d\omega(w). \end{equation} We define the minimal Green's function associated with the Schr\"{o}dinger operator $-\triangle - \omega$ to be \begin{equation}\label{defGreenSchr}
\mathcal{G} (x,y) = \sum_{j=1}^{\infty} G_j (x,y), \qquad \text{for all} \, \, x, y \in \Omega.
\end{equation}
The corresponding Green's operator is \[ \mathcal{G}f(x) = \int_{\Omega} \mathcal{G}(x,y) f(y) \, dy, \quad x \in \Omega .\]
Let $K$ be a compact set in $\Omega$. Denote by $u_K$ a solution to the equation \begin{equation}\label{K-eqn} \left\{ \begin{aligned} -\triangle u & = \omega u + \chi_K\, \,& & \mbox{in} \, \, \Omega, \quad u \ge 0, \\ u & = 0 \, \, & &\mbox{on} \, \, \partial \Omega . \end{aligned} \right. \end{equation}
In other words, \begin{equation} \label{eqnforuK} u_K = G(u_K \omega) + G\chi_K . \end{equation}
By Lemma \ref{estGchiK}, $G\chi_K (x)\approx m(x)$ in $\Omega$, where
$m(x)=\min(1, G(x, x_0))$.
Without loss of generality we may assume that $m \in L^2 (\Omega, \omega)$; otherwise
$M^*(m \, \omega) \equiv +\infty$ by Lemma \ref{low-M-est}, and
condition \eqref{martincritsuff} is not valid. It follows that $G \chi_K \in L^2 (\Omega, \omega)$.
But $\Vert T \Vert<1$, so that $u_K=(I-T)^{-1} G \chi_K\in L^2(\Omega, \omega)$, and the series in \eqref{defuK} converges in
$L^2(\Omega, \omega)$ (and hence $d \omega$-a.e.). In particular, $G(u_K \omega)\not\equiv \infty$.
From this fact it is immediate that the minimal superharmonic solution to \eqref{eqnforuK} is given by \begin{equation} \label{defuK} \begin{aligned} u_K (x) & := G(u_K \omega) + G\chi_K = (I-T)^{-1} G \chi_K (x) \\& = \sum_{j=0}^{\infty} T^j (G \chi_K) (x)= \int_{K} \mathcal{G} (x,y) \, dy , \end{aligned} \end{equation} for all $x \in \Omega$.
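For the record, the $L^2(\Omega, \omega)$ convergence invoked above is just the geometric series bound, under the standing assumption $\Vert T \Vert <1$:

```latex
\[
\Big\Vert \sum_{j=0}^{\infty} T^j (G \chi_K) \Big\Vert_{L^2(\Omega, \omega)}
\le \sum_{j=0}^{\infty} \Vert T \Vert^j \, \Vert G \chi_K \Vert_{L^2(\Omega, \omega)}
= \frac{\Vert G \chi_K \Vert_{L^2(\Omega, \omega)}}{1-\Vert T \Vert} < \infty .
\]
```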
By equation (\ref{ufsolndef}), \begin{align*}
u_f (x) & = Pf(x) + \sum_{j=1}^{\infty} T^j (Pf)(x) \\
& = Pf(x)+ \int_{\Omega} \mathcal{G} (x,y) \, Pf (y) \, d\omega(y),
\end{align*} for all $x \in \Omega$. Integrating both sides of this equation over $K$ with respect to $dx$, \begin{equation}\label{intKuf} \begin{aligned} \int_{K} u_f (x) \, dx & = \int_K Pf(x) \, dx + \int_{K} \int_{\Omega} \mathcal{G} (x,y) \, Pf(y) \, d \omega (y) \, dx \\
& = \int_K Pf(x) \, dx + \int_{\Omega} \int_{K} \mathcal{G} (x,y) \, dx \, Pf (y) \, d\omega(y) \\ & = \int_K Pf(x) \, dx + \int_{\Omega} u_K (y)\, Pf (y) \, d\omega(y) , \end{aligned} \end{equation} by Fubini's theorem, equation (\ref{defuK}) and the symmetry of $\mathcal{G}$.
The term $\int_K Pf(x) \, dx $ is finite because \eqref{martincritsuff} guarantees that $f$ is integrable with respect to harmonic measure, so $Pf$ is not identically infinite and hence is harmonic. Thus, to prove that $u_f \in L^1 (K, dx)$, it suffices to show that $u_K Pf \in L^1 (\Omega, \omega)$.
By \eqref{pf-rep-martin} and Fubini's theorem, \[ \int_{\Omega} u_K(y) \, Pf(y) \, d\omega(y) = \int_{\partial \Omega} \int_{\Omega} M(y,z) u_K (y) \, d \omega (y) \, f(z) \, dH^{x_0} (z) . \]
We claim that \begin{equation} \label{Mu0est} \int_{\Omega} M(y,z) u_K (y) \, d \omega (y) \leq C_K \, e^{C \, M^* (m\omega) (z)}, \end{equation} if $z$ is a regular point of $\partial\Omega$. Assume \eqref{Mu0est} for the moment. The set of irregular boundary points $E \subset \partial\Omega$
is known to be Borel and polar, i.e., $\text{cap}(E)=0$ (\cite{AG}, Theorem 6.6.8), and consequently
negligible, i.e., of harmonic measure zero (\cite{AG}, Theorem 6.5.5). Therefore \eqref{Mu0est} yields \begin{equation} \label{u_K-Pf}
\int_{\Omega} u_K(y) \, Pf(y) \, d\omega(y)
\leq C_K \int_{\partial \Omega} e^{C \, M^* (m\omega) (z)} \, f(z) \, d H^{x_0} (z).
\end{equation} Hence our assumption \eqref{martincritsuff} guarantees that $u_K \, Pf \in L^1 (\Omega, \omega)$.
To prove (\ref{Mu0est}), let us assume first that $\omega$ is compactly supported. Then as mentioned above after \eqref{eqnforuK}, $u_K \in L^2 (\Omega, \omega)$. Hence, by the Cauchy--Schwarz inequality, $d \mu= u_K \, d \omega$ is a finite compactly supported measure. By equation \eqref{defuK}, Lemma \ref{estGchiK}, and Theorem \ref{FNVTheorem}, \begin{equation} \label{uKexpest}
u_K (x) \leq C_K \sum_{j=0}^{\infty} T^j m (x) \leq C_K \, m(x) \, e^{C G(m\omega)(x) /m(x) },
\end{equation} since $Tm= G(m \omega)$. Using the trivial estimate $m(\cdot) \le G(x_0, \cdot)$, followed by \eqref{eqnforuK} and then \eqref{uKexpest}, \begin{equation} \label{GuKomegaest}
\frac{G(u_K\omega)(x)}{G(x, x_0)} \leq \frac{G(u_K\omega)(x)}{m(x)} \leq \frac{u_K(x)}{m(x)} \leq C_K e^{C G(m\omega)(x)/m(x)} , \end{equation} for $x \in \Omega$. Applying \eqref{min-thin2} with $d\mu = u_K d\omega$ and then \eqref{min-thin} with $d\mu = m d\omega$, \begin{align*}
\int_{\Omega} M(y,z) u_K (y) \, d \omega (y) & = \lim_{x \rightarrow z, x \in \Omega} \frac{G(u_K\omega)(x)}{G(x, x_0)} \\
& \leq \lim_{x \rightarrow z, x \in \Omega} C_K e^{C G(m\omega)(x)/m(x)} \\
& = C_K e^{C \, M^* (m\omega) (z) } ,
\end{align*} where the regularity of $z \in \partial \Omega$ is used only at the last step. Hence \eqref{Mu0est} is established for compactly supported measures $\omega$.
In the general case, consider an exhaustion $\Omega=\cup_{k=1}^{\infty} \Omega_k$, where $\{\Omega_k\}$ is a family of nested, relatively compact subdomains of $\Omega$. Without loss of generality we may assume that $x_0\in \Omega_k$, for all $k \in \mathbb{N}$.
In $\Omega\times\Omega$, define the iterated Green's kernels $G_j^{(k)} (x,y)$ for $j\in \mathbb{N}$, and $\mathcal{G}^{(k)}(x,y)=\sum_{j=1}^\infty G_j^{(k)} (x,y)$, as in (\ref{new-defGj}), (\ref{defGreenSchr}), except with $\omega$ replaced by $\omega_k = \chi_{\Omega_k}\, \omega$, $k \in \mathbb{N}$. Let $u_K^{(k)} = \mathcal{G}^{(k)} \chi_K$. By repeated use of the monotone convergence theorem, we see that $G_j^{(k)} (x,y)$ increases monotonically as $k \rightarrow \infty$ to $G_j (x,y)$ for each $j$, $\mathcal{G}^{(k)}(x,y)$ increases monotonically to $\mathcal{G} (x,y)$, and $u_K^{(k)}$ increases monotonically to $u_K$. Applying the compact support case gives \begin{align*} \int_{\Omega} M(y,z) u_K^{(k)} (y) \ \chi_{\Omega_k}(y) \, d\omega(y) & \leq C_K e^{C \, M^* (m \, \omega_k) (z)} \\ & \leq C_K e^{C \, M^* (m \, \omega) (z)} . \end{align*} Then, as $k \rightarrow \infty$, the monotone convergence theorem yields (\ref{Mu0est}).
(B) Suppose $u_f \in L^1_{loc} (\Omega, dx)$, where $f \not= 0$ a.e. relative to harmonic measure, and \[ u_f = Tu_f + Pf \qquad \,\, \text{on} \, \, \Omega .\] So $Tu_f \leq u_f$, where $0<u_f<\infty$ $d \omega$-a.e. It follows by Schur's lemma that $\Vert T \Vert_{L^2 (\omega)\rightarrow L^2 (\omega)} \leq 1$.
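The Schur test behind this norm estimate is standard; a sketch, using only the symmetry of the kernel $G$, the inequality $Tu_f \leq u_f$, and $0<u_f<\infty$ $d\omega$-a.e.: by the Cauchy--Schwarz inequality,

```latex
\[
|Th(x)|^2 \leq Tu_f(x) \, T\Big(\frac{|h|^2}{u_f}\Big)(x)
          \leq u_f(x) \, T\Big(\frac{|h|^2}{u_f}\Big)(x) ,
\]
% Integrating in d\omega, and using Fubini's theorem together with the
% symmetry of G to move T from one factor to the other:
\[
\int_\Omega |Th|^2 \, d\omega
\leq \int_\Omega u_f \, T\Big(\frac{|h|^2}{u_f}\Big) \, d\omega
= \int_\Omega Tu_f \, \frac{|h|^2}{u_f} \, d\omega
\leq \int_\Omega |h|^2 \, d\omega .
\]
```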
It remains to show that \eqref{martincritnec} holds. We remark that this condition follows immediately from \eqref{ptwiselowerbnd} with
$x=x_0$ provided $u_f(x_0) < \infty$. Since this is not necessarily the case, we proceed as follows.
Choose any compact set $K \subseteq \Omega$ with $|K|>0$. By Lemma \ref{estGchiK} and Theorem \ref{FNVTheorem}, \begin{equation} \label{lowestuK}
u_K (x) = \sum_{j=0}^{\infty} T^j G\chi_K (x) \geq c_K \sum_{j=0}^{\infty} T^j m (x) \geq c_K m(x) e^{c(Tm(x))/(m(x))}, \end{equation} for all $x \in \Omega$. In fact, we can let $c=1$ in the preceding
estimate, exactly as in the proof of \eqref{lowerTM} above, by using \cite{GV2}, Theorem 1.2 with $q=1$, $h=m$, and $\mathfrak{b}=1$. Notice
that $m$ is a superharmonic function in $\Omega$, and so the Maria-Frostman domination
principle yields \eqref{weak-dom} with $\mathfrak{b}=1$ and $h=m$.
By inequality \eqref{lowestuK}, equation (\ref{eqnforuK}) and inequality (\ref{GchiKest}), \begin{equation} \label{Tm-m} \begin{aligned}
e^{Tm(x)/(m(x))} & \leq c_K^{-1} \frac{u_K(x)}{m(x)} = c_K^{-1} \left( \frac{G(u_K \omega)(x)}{m(x)} + \frac{G\chi_K (x)}{m(x)} \right) \\ & \leq c_K^{-1} \frac{G(u_K \omega)(x)}{m(x)} + C_K c_K^{-1} .
\end{aligned}
\end{equation}
Let $z \in \partial \Omega$ be a regular point. Applying Lemma \ref{conv-lemma} with $d\mu= m \, d\omega$ on the left side of (\ref{Tm-m}) (recalling that $Tm= G(m \omega)$), and with $d\mu = u_K \, d\omega$ on the right side, we obtain \begin{equation} \label{M-claim}
e^{ M^* (m \omega) (z)} \leq c_K^{-1} \, \int_{\Omega} M(y,z) u_K (y) \, d \omega (y) + C_K c_K^{-1}, \end{equation} if $\omega$ has compact support in $\Omega$. By the same exhaustion process that was used in the opposite direction, (\ref{M-claim}) holds for $\omega$ locally finite in $\Omega$.
Since the set of irregular points in $\partial \Omega$ has harmonic measure $0$, as noted above, we can integrate \eqref{M-claim} over $\partial \Omega$ with respect to $f \, dH^{x_0}$ and apply Fubini's theorem to obtain \begin{align*}
& \int_{\partial \Omega} e^{M^* (m \omega) (z)} \, f(z) \, dH^{x_0} (z) \\ &
\leq C_1 c_K^{-1} \int_{\Omega} \int_{\partial \Omega} M(y,z) \, f(z) \, dH^{x_0} (z) u_K (y) \, d \omega (y) \\& + C_K c_K^{-1} \int_{\partial \Omega} \, f(z) \, dH^{x_0} (z) \\ & = C_1 c_K^{-1} \int_{\Omega} u_K (y) \, Pf(y) \, d \omega (y) + C_K c_K^{-1} \int_{\partial \Omega} \, f(z) \, dH^{x_0} (z) , \end{align*} using equation \eqref{pf-rep-martin}. Since $u_K \, Pf \in L^1 (\Omega, \omega)$ by \eqref{intKuf}, we have condition \eqref{martincritnec}.
\end{proofof}
\noindent{\bf Remark.} For part (A) of Theorem \ref{gaugecrit} and Corollary \ref{cor}, if $\Omega$ is a bounded $C^{1,1}$ domain, or a bounded Lipschitz domain with sufficiently small Lipschitz constant, then $G\chi_{\Omega} \approx m$ (see, for instance, \cite{AAC}, Theorem 1.1 and Remark 1.2(i)). Hence, $\int_\Omega M(x, z) \, dx\le C$, where $C$ does not depend on $z \in\partial \Omega$. Then one can replace $\chi_K$ above with $\chi_{\Omega}$ and obtain that $u_f \in L^1 (\Omega, dx)$ with \[ \int_{\Omega} u_f (x) \, dx \leq C \int_{\partial \Omega} f(z) \, dH^{x_0} (z)
+ C \int_{\partial \Omega} e^{CM^* (m \omega) (z)} \, f(z) \, dH^{x_0} (z) . \]
In the same way that Theorem \ref{mainu-harm} generalizes Theorem \ref{mainufest}, there is a complete analogue of Theorem \ref{gaugecrit} for solutions of equation \eqref{u-harm-int}, with an arbitrary positive harmonic function $h$ in place of $Pf$. It gives sufficient and matching necessary conditions for the existence of solutions whose pointwise estimates are provided in Theorem \ref{mainu-harm}. The primary difference in this case is that $\mu_h$ is not necessarily zero on the set of irregular points of $\partial\Omega$. Hence we need to consider \begin{equation}\label{phi-def} \begin{aligned} \varphi(z) & = \liminf_{x \rightarrow z, \, x\in\Omega} \max(1, G(x, x_0)), \\ \psi(z) & = \limsup_{x \rightarrow z, \, x\in\Omega} \, \max(1, G(x, x_0)) ,
\end{aligned}
\end{equation} for $z \in \partial \Omega$. Note that $\varphi = \psi =1$ at regular boundary points. The following result is a generalization of Lemma \ref{conv-lemma}, which allows us to control the behavior of $\varphi $ and $\psi$ at irregular points in a uniform domain.
\begin{Lemma}\label{conv-lemma-alt} Suppose $\Omega \subset \mathbb{R}^n$ is a bounded uniform domain, for $n \geq 2$. Suppose $\mu$ is a finite Borel measure with compact support in $\Omega$. Let $z \in \partial \Omega$. Then \begin{equation}\label{min-thin3a} 1\le \varphi(z) \le \psi(z) \le \kappa \, \varphi(z) \le \kappa \, C_1 , \qquad z \in \partial \Omega ,
\end{equation} for constants $\kappa$ and $C_1$, where $\kappa$ depends only on $\Omega$ and $C_1$ depends only on ${\rm dist} (x_0, \partial \Omega)$. Moreover, for all $z \in \partial\Omega$, \begin{equation} \label{min-thin3} \begin{aligned}
\limsup_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{m(x)} = \psi(z) M^*\mu(z)
& \leq \kappa \varphi(z) M^*\mu(z) \\ &= \kappa \liminf_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{m(x)} .
\end{aligned} \end{equation} \end{Lemma}
\begin{proof} The inequalities $1 \leq \varphi (z) \leq \psi(z)$ are trivial. The inequality $\psi (z) \leq \kappa \varphi(z)$ follows from inequality (\ref{liminf-limsup}) with $y=x_0$ and the observation that $\max(1, G(x, x_0)) = G(x, x_0)/m(x)$. Since $x\rightarrow z \in \partial\Omega$,
we have $|x-x_{0} | \geq c_1$ for any $c_1 < \text{dist} \, (x_0, \partial \Omega)$, provided $x$ is close enough to $z$. Then
\[
G(x, x_0) \le c(n) \, |x-x_0|^{2-n} \le c(n) \, c_1^{2-n} , \] where we suppose again that $n\ge 3$ (the case $n=2$ is treated in a similar way). Hence, \[ \psi (z) \le C_1=\max \left(1, c(n) \, [\text{dist} \, (x_0, \partial \Omega)]^{2-n}\right) , \quad \text{for all} \, \, z\in \partial \Omega , \] and consequently \eqref{min-thin3a} holds.
To prove \eqref{min-thin3},
note that by \eqref{min-thin2},
\[ \limsup_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{m(x)} = \limsup_{x \rightarrow z, \, x\in\Omega} \frac{G(x, x_0)}{m(x)} \, \lim_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{G(x, x_0)} = \psi(z) M^* \mu (z) \]
and \[ \liminf_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{m(x)} =
\liminf_{x \rightarrow z, \, x\in\Omega} \frac{G(x, x_0)}{m(x)} \, \lim_{x \rightarrow z, \, x\in\Omega} \frac{G \mu (x)}{G(x, x_0)} = \varphi(z) M^* \mu (z) . \] Hence, \eqref{min-thin3} is immediate from \eqref{min-thin3a}. \end{proof}
\begin{Thm} \label{u_h-exist} Suppose $ \Omega \subset \mathbb{R}^n$ is a bounded uniform domain, $\omega$ is a locally finite Borel measure on $\Omega$, and $h$ is a positive
harmonic function in $\Omega$. Let $x_0 \in \Omega$ be the reference point in the definition of Martin's kernel. Let $m(x) = \min (1, G(x,x_0))$, and let $\mu_h$ be the Martin's representing measure for $h$.
(A) There exists $C>0$ ($C$ depending only on $\Omega$ and $\Vert T \Vert$) such that if $\Vert T \Vert <1$ (equivalently, (\ref{equivnormTless1}) holds with $\beta <1$) and \begin{equation} \label{martin-suff}
\int_{\partial \Omega} e^{C \, \varphi (z)\, M^* (m \omega)(z)} \, d\mu_h (z)< \infty , \end{equation} then $u_h = \sum_{j=0}^{\infty} T^j h \in L^1_{loc} (\Omega, dx)$ is a positive solution to (\ref{u-harm-int}).
(B) If $u \in L^1_{loc} (\Omega, dx) $ is a positive solution of \eqref{u-harm-int}, then $\Vert T \Vert \leq 1$ and \begin{equation} \label{martin-nec}
\int_{\partial \Omega} e^{\psi(z) \, M^* (m \omega)(z)} \, d\mu_h(z) < \infty . \end{equation} \end{Thm}
\begin{proof}
The proof follows the lines of the proof of Theorem \ref{gaugecrit}, so we only sketch the differences. Let $K \subseteq \Omega$ be compact with $|K|>0$. Replacing $Pf$ with $h$, we obtain \begin{equation} \label{martin2-alt}
\int_K u_h (x) \, dx = \int_K h (x) \, dx + \int_{\Omega} u_K (y) h(y) \, d\omega (y)\end{equation} instead of \eqref{intKuf}. Using Martin's representation \eqref{martin-rep} instead of \eqref{pf-rep-martin}, \begin{equation} \label{uKhdomega}
\int_{\Omega} u_K (y) h(y) \, d\omega (y) = \int_{\partial \Omega} \int_{\Omega} M(y,z) u_K (y) \, d \omega (y) \, d \mu_h (z) . \end{equation}
For part (A), it suffices to show that $u_K h \in L^1 (\Omega, d\omega)$. We claim that \begin{equation} \label{Mu0est-alt} \int_{\Omega} M(y,z) u_K (y) \, d \omega (y) \leq C_K \, e^{C \, \varphi(z) \, M^* (m\omega) (z)}, \quad z \in \partial \Omega, \end{equation} which replaces \eqref{Mu0est}, and completes the proof of (A). To prove \eqref{Mu0est-alt}, we can assume $\omega$ is compactly supported by the exhaustion process above. Choose a sequence of points $x_j$ in $\Omega$ converging to $z$, such that \[ \lim_{j \rightarrow \infty} \frac{G(m \omega)(x_j)}{m(x_j)} = \liminf_{w \rightarrow z, \, w \in \Omega} \frac{G(m\omega)(w)}{m(w)}. \] Then by \eqref{min-thin} with $d \mu= u_K d \omega$, \eqref{GuKomegaest}, and \eqref{min-thin3} with $\mu = m \omega$, \begin{align*}
\int_{\Omega} M(y,z) u_K (y) \, d \omega (y) & = \lim_{j \rightarrow \infty} \frac{G(u_K\omega)(x_j)}{G(x_j, x_0)} \\
& \leq \liminf_{j \rightarrow \infty} C_K e^{C G(m\omega)(x_j)/m(x_j)} \\
& = C_K e^{C \varphi(z) M^* (m\omega) (z) } .
\end{align*}
For part (B), equation \eqref{u-harm-int} and Schur's Lemma show that $\Vert T \Vert \leq 1$, as in Theorem \ref{gaugecrit}. If $u \in L^1_{loc} (\Omega, dx)$, then the minimal solution $u_h$ also belongs to $L^1_{loc} (\Omega, dx)$ (see the remarks before Definition \ref{qmkernel}). We claim that the following analogue of \eqref{M-claim} holds: \begin{equation} \label{M-claim-alt} e^{\psi(z) M^* (m \omega) (z)} \leq c_K^{-1} \kappa C_1 \, \int_{\Omega} M(y,z) u_K (y) \, d \omega (y) + C_K c_K^{-1}, \end{equation} for all $z \in \partial \Omega$, where $C_1\ge 1$ is the constant in \eqref{min-thin3a}, which depends only on $x_0$ and $\Omega$. Assuming this claim, then \eqref{uKhdomega} implies \eqref{martin-nec} since $u_K h \in L^1 (\Omega, d \omega)$ by \eqref{martin2-alt}. To prove \eqref{M-claim-alt}, let $x_j$ be a sequence of points such that \[ \lim_{j \rightarrow \infty} \frac{G(m \omega)(x_j)}{m(x_j)} = \limsup_{w \rightarrow z, \, w \in \Omega} \frac{G(m\omega)(w)}{m(w)}. \] By \eqref{min-thin3} with $d\mu = m\, d\omega$, and recalling that $G(m \omega) = Tm$, \begin{align*}
e^{\psi(z) M^* (m \omega) (z)} & \leq \lim_{j \rightarrow \infty} e^{Tm(x_j)/m(x_j)} \\
& \leq \limsup_{j \rightarrow \infty} c_K^{-1} \frac{G(u_K \omega) (x_j)}{m(x_j)} + C_K c_K^{-1},
\end{align*} by \eqref{Tm-m} with $c=1$. By \eqref{min-thin3} with $\mu= u_K \omega$, \[ \limsup_{j \rightarrow \infty} \frac{G(u_K \omega) (x_j)}{m(x_j)}= \psi (z) M^* (u_K \omega) (z) \leq \kappa C_1 M^* (u_K \omega) (z) , \] which establishes \eqref{M-claim-alt}. \end{proof}
\section{Nonlinear elliptic equations of Riccati type}\label{riccati}
In this section we treat equation \eqref{nonlineareqn-1}. The definition of solutions of \eqref{nonlineareqn-1} is consistent with our approach in the previous sections.
\begin{Def}\label{defveryweakriccati} A nonnegative function $v \in W^{1,2}_{loc} (\Omega) $ is a solution of (\ref{nonlineareqn-1}) if $v$ is a weak solution in $\Omega$, i.e., \begin{equation}\label{weakriccati}
\int_{\Omega} \nabla v \cdot \nabla h \, dx = \int_{\Omega} |\nabla v|^2 h \, dx + \int_{\Omega} h \, d\omega, \,\,\, \mbox{for all} \,\,\, h \in C^\infty_0 (\Omega), \end{equation} and $v$ has a superharmonic representative (denoted also by $v$) in $\Omega$ whose greatest harmonic minorant is the zero function. \end{Def}
Since $v \in W^{1,2}_{loc} (\Omega) $, it is easy to see that \eqref{weakriccati} is equivalent to \begin{equation}\label{ric-eq-1}
- \triangle v = |\nabla v|^2 + \omega \quad \mbox{in} \, \, \, \, D^{\, \prime}(\Omega), \end{equation} i.e., $v$ is a distributional solution in $\Omega$. In other words, by the Riesz decomposition theorem (\cite{AG}, Sec. 4.4),
$|\nabla v|^2 + \omega$ is the Riesz measure associated with
$-\triangle v$, and $v$ satisfies the integral equation \begin{equation} \label{integralformmeasure}
v = G (|\nabla v|^2 + \omega) \,\, \hbox{in} \,\, \Omega. \end{equation} In bounded Lipschitz domains, \eqref{integralformmeasure} is equivalent to $v$ being a very weak solution of \eqref{nonlineareqn-1} in the sense of \cite{MR}.
Via the relation $v=\log u$, solutions $v$ of \eqref{nonlineareqn-1} correspond formally to solutions $u$ of \eqref{ufeqn} with $f=1$, i.e., \begin{equation}\label{dirichlet} \left\{ \begin{aligned} -\triangle u & = \omega \, u, \, \, & u > 0 \quad &\mbox{in} \, \, \Omega, \\ u & = 1 \, \, &\mbox{on} \, \, \partial \Omega. \end{aligned} \right. \end{equation} The minimal solution $u_1$ to \eqref{dirichlet} (the gauge) is given by \eqref{gauge-def}.
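The formal correspondence can be checked in one line: if $u>0$ is smooth and $-\triangle u = \omega u$, then with $v = \log u$ (so $\nabla v = \nabla u / u$),

```latex
\[
-\triangle v = -\frac{\triangle u}{u} + \frac{|\nabla u|^2}{u^2}
             = \omega + |\nabla v|^2 ,
\]
```

which is \eqref{nonlineareqn-1}; the rigorous passage between the two equations for general measures $\omega$ is what the rest of this section carries out.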
Earlier results on \eqref{nonlineareqn-1}
were obtained in \cite{HMV}, where the problem was posed of
finding precise conditions on the boundary behavior of $\omega$ that ensure the existence of solutions.
The precise relation between solutions to \eqref{dirichlet} and \eqref{nonlineareqn-1} is complicated, as discovered by Ferone and Murat (see \cite{FM1}--\cite{FM3} or Remark 4.2 in \cite{FV2}). In the special case of smooth domains and absolutely continuous $\omega$, the problem was studied by the authors in \cite{FV2}, where the condition of the exponential integrability of the balayage of $m \, \omega$ appeared for the first time. In that setup, it was shown that if $u_1$ is the minimal solution of (\ref{dirichlet}), then $v = \log u_1$ is a solution of (\ref{nonlineareqn-1}). However, if $v$ is a solution to (\ref{nonlineareqn-1}) then $u=e^v$ is in general only a supersolution to (\ref{dirichlet}).
In Theorem \ref{riccatithm}, we treat general measures $\omega$ and uniform domains $\Omega$ based on the results of the previous sections. We take this opportunity to give further details on some points in the arguments presented in \cite{FV2}, Sec. 4. We also improve the constant in the exponent of the necessary condition (exponential integrability of the balayage).
\begin{proofof} Theorem \ref{riccatithm}. First suppose that $\Vert T \Vert<1$ and (\ref{martincritsuff-g}) holds with sufficiently large $C>0$. By Corollary~\ref{cor}, the Schr\"odinger equation (\ref{dirichlet}) has a positive solution $ u= 1 + \mathcal{G} \omega$. (This solution was called $u_1$ in the statement of Corollary~\ref{cor}.) Then $u \in L^1_{loc} (\Omega, d\omega)$ and $u$ satisfies the integral equation $u = 1 + G(\omega u)$. Therefore $u: \Omega \to [1, +\infty]$ is defined everywhere as a positive superharmonic function in $\Omega$ and hence is quasi-continuous by the known properties of superharmonic functions.
In particular, the infinity set $E=\{x\in \Omega: \, u(x)=+\infty\}$ has zero capacity, $\text{cap}(E)=0$, and
$u \in W^{1,p}_{loc}(\Omega)$ when $p< \frac{n}{n-1}$. In fact, $u \in W^{1,2}_{loc}(\Omega)$ as shown in \cite{JMV}, Theorem 6.2, but the proof of this stronger property is more involved, and it will not be used below.
Define $d \mu = -\triangle u = \omega \, u$, where a solution $u \in L^1_{loc}(\Omega, \omega)$ to \eqref{dirichlet} is understood as in \S \ref{sec2} above. Notice that $u = \frac{d\mu}{d \omega}$ is the Radon--Nikodym derivative defined $d\omega$-a.e. Let $v=\log u$. Then $0\leq v < +\infty$ $d\omega$-a.e., $v$ is superharmonic in $\Omega$ by Jensen's inequality, and
$v \in W^{1,2}_{loc}(\Omega)$ (see \cite{HKM}, Theorem 7.48; \cite{MZ}, Sec. 2.2).
We claim that
\eqref{weakriccati} holds. We will apply the integration by parts formula
\begin{equation}\label{by-parts} \int_\Omega g \, d \rho = - \langle g, \triangle r \rangle= \int_\Omega \nabla g \cdot \nabla r \, dx, \end{equation} where $g\in W^{1,2}(\Omega)$ is compactly supported and quasi-continuous in $\Omega$, and $\rho = -\triangle r$ where $r \in W^{1,2}_{loc}(\Omega)$ is superharmonic (see, e.g., \cite{MZ}, Theorem 2.39 and Lemma 2.33). This proof would simplify if we could apply (\ref{by-parts}) with $g = \frac h u, \rho = \mu$, and $r=u$, for $h \in C^{\infty}_0 (\Omega)$. However, we do not use the property $u \in W^{1,2}_{loc}(\Omega)$, so we need an approximation argument. For $k \in \mathbb{N}$, let \[ u_k = \min (u, \, e^k), \quad v_k=\min (v, \, k), \quad \mbox{and} \quad \mu_k = - \triangle u_k.\] Clearly $u_k$ and $v_k$ are superharmonic, hence $\mu_k $ is a positive measure. Moreover, $u_k$ and $v_k$ belong to $W^{1,2}_{loc}(\Omega)\bigcap L^\infty(\Omega)$ (see \cite{HKM}, Corollary 7.20).
Let $h\in C^\infty_0(\Omega)$. We invoke (\ref{by-parts}) with $g= \frac {h}{u_k}, \rho=\mu_k$, and $r=u_k$. Note that $u_k\ge 1$, $g $ is compactly supported since $h$ is, and $ g \in W^{1,2} (\Omega)$ since $u_k\in W^{1,2}_{loc} (\Omega)$ and $h \in W^{1,\infty}(\Omega)$ is compactly supported. Then by (\ref{by-parts}), we have \begin{equation}\label{approx-v_k} \begin{aligned} \int_\Omega \frac {h}{u_k} \, d \mu_k & = \int_{\Omega} \nabla \left(\frac {h}{u_k}\right) \cdot \nabla u_k \, dx\\
& = \int_\Omega \frac {\nabla h}{u_k} \cdot \nabla u_k \, dx - \int_\Omega \frac {|\nabla u_k|^2}{u_k^2} h \, dx \\
& = \int_\Omega \nabla h \cdot \nabla v_k \, dx - \int_\Omega |\nabla v_k|^2 \, h \, dx. \end{aligned} \end{equation}
As mentioned above, $v \in W^{1,2}_{loc} (\Omega)$, and consequently $\nabla v_k = \nabla v$ a.e. on $\{ v<k\}$, and $\nabla v_k = 0$ a.e. on $\{ v\ge k\}$ (see \cite{MZ}, Corollary 1.43). Hence, \begin{equation*} \begin{aligned} \lim_{k \rightarrow \infty} \int_\Omega \nabla h \cdot \nabla v_k \, dx &= \int_\Omega \nabla h \cdot \nabla v \, dx , \\
\lim_{k \rightarrow \infty} \int_\Omega |\nabla v_k|^2 \, h \, dx & = \int_\Omega |\nabla v|^2 \, h \, dx \end{aligned} \end{equation*} by the dominated convergence theorem.
Since $u$ is superharmonic, $u$ is lower semi-continuous, so the set $\{ x \in \Omega: u(x) > e^k \} \equiv \{u>e^k\}$ is open, and the measure $\mu_k = - \triangle u_k$ is supported on the closed set $\{u \le e^k\}$ where $u=u_k$. Hence $u=u_k$ $d \mu_k$-a.e., and \[ \int_\Omega \frac {h}{u_k} \, d \mu_k=\int_\Omega \frac {h}{u} \, d \mu_k. \]
We next show that, for any continuous function $h$ with compact support in $\Omega$, \begin{equation}\label{claim} \lim_{k\to \infty} \int_{\Omega} \frac{h}{ u} \, d \mu_k = \int_\Omega \frac h u \, d \mu. \end{equation}
Without loss of generality we assume here that $h\ge 0$. Otherwise we apply the argument below to $h_{+}$ and $h_{-}$ separately.
Notice that $u_k \uparrow u$, and consequently $\mu_k \to \mu$ weakly in $\Omega$, by the weak continuity property (see, for instance, \cite{TW} in a rather more general setting), i.e., \begin{equation*} \lim_{k\to \infty} \int_\Omega \phi \, d \mu_k = \int_\Omega \phi \, d \mu \end{equation*} for all continuous functions $\phi$ with compact support in $\Omega$. It follows (see \cite{Lan}, Lemma 0.1) that \begin{equation}\label{lsc} \liminf_{k\to \infty} \int_\Omega \phi \, d \mu_k \ge \int_\Omega \phi \, d \mu \end{equation} for all lower semicontinuous functions $\phi$ with compact support in $\Omega$. The function $\frac{h}{ u}$ is obviously upper semicontinuous with compact support, so by \eqref{lsc} applied to $-\frac{h}{ u}$, we deduce \begin{equation}\label{upper} \limsup_{k\to \infty} \int_{\Omega} \frac{h}{ u} \, d \mu_k \le \int_\Omega \frac h u \, d \mu. \end{equation}
To prove an estimate in the opposite direction, we claim that $ \mu_k \ge \mu$ on the closed set $F_k=\{ x \in \Omega: \, u(x)\le e^k\}$. It is enough to prove that \begin{equation}\label{claim-u_k} \mu_k (K) \ge \mu(K), \quad \text{for every compact set} \, \, K \subset F_k. \end{equation}
We verify \eqref{claim-u_k} by using another approximation argument based on a version of Lusin's theorem for certain Green potentials (the so-called semibounded potentials, see \cite{Fug}, Sec. 2.6). Notice that $u = G \mu+1$, where $d \mu =u \, d \omega$, and $u<\infty$ $d \omega$-a.e., as discussed in
\S \ref{sec2}. Moreover, $u<\infty$ on $\Omega\setminus\!E$, i.e., outside the infinity set $E$, which is obviously a Borel set such that $\mu(E)=0$ since $\omega(E)=0$.
This is also a consequence of the fact that $E$ is a set of zero capacity, and $\omega(E)\le \text{cap}(E)$, which follows immediately from \eqref{equivnormTless1}. In fact, the condition $\mu(E)=0$ is equivalent to absolute continuity of $\mu$ with respect to capacity, i.e., $\text{cap}(K)=0 \Longrightarrow \mu(K)=0$ for all compact sets $K\subset \Omega$.
Consequently (see \cite{Fug}, Theorem 2.6; \cite{Hel}, Theorem 4.6.3), there exists an increasing sequence of compactly supported measures $\mu^j$ such that $u^j=G \mu^j +1\in C(\Omega)$, so that $\mu^j(K) \uparrow \mu(K)$, for every compact set $K\subset\Omega$, and $G \mu^j\uparrow G \mu$ on $\Omega$, as $j\to \infty$ . It follows that $u^j\uparrow u$, and so $\min(u^j, e^k) \uparrow \min(u, e^k)=u_k$ as $j\to \infty$, which yields that the corresponding Riesz measures $\mu_k^j$ associated with the superharmonic functions $\min(u^j, e^k)$ have the property $\mu_k^j \to \mu_k$ weakly in $\Omega$ as $j\to \infty$.
Without loss of generality we may assume that actually $u^j(x)<u(x)$ for all $x\in \Omega$. Otherwise we replace $u^j $ with $\epsilon_j \, u^j$, where $\epsilon_j\uparrow 1$ is a strictly increasing sequence of positive numbers. Then all the properties of $u^j$ remain true.
Obviously, $F_k\subset G^j_k$ where $G^j_k=\{ x \in \Omega: \, u^j(x)< e^k\}$ is an open set for every $j, k \in \mathbb{N}$, since $u^j \in C(\Omega)$. Clearly, $u^j=\min(u^j, e^k)$ on $G^j_k$, and so $\mu^j$ coincides with $\mu_k^j$ on $G^j_k$. In particular, $\mu_k^j(K)=\mu^j(K)$ for every compact set $K \subseteq F_k \subset G_k^j$.
Since $\mu_k^j \to \mu_k$ weakly, it follows
by \eqref{lsc} applied to the lower semicontinuous function
$-\chi_K$ that
\[ \limsup_{j\to \infty} \mu_k^j(K) \le \mu_k(K). \]
Hence,
\[
\mu(K)=\lim_{j \to \infty} \mu^j(K) =\limsup_{j\to \infty} \mu_k^j(K) \le \mu_k(K),
\]
which proves \eqref{claim-u_k}. Consequently,
\begin{equation}\label{usc-appl}
\begin{aligned}
\liminf_{k \to \infty} \int_{\Omega} \frac{h}{ u} \, d \mu_k & \ge \liminf_{k \to \infty} \int_{F_k} \frac{h}{ u} \, d \mu_k \\ & \ge \liminf_{k \to \infty} \int_{F_k} \frac{h}{ u} \, d \mu = \int_{\Omega\setminus E} \frac{h}{ u} \, d \mu,
\end{aligned} \end{equation} where $E$ is the infinity set of $u$. As mentioned above, $\mu(E)=0$, so \eqref{usc-appl} actually yields \[
\liminf_{k \to \infty} \int_{\Omega} \frac{h}{ u} \, d \mu_k \ge \int_{\Omega} \frac{h}{ u} \, d \mu. \] Combining the preceding inequality with \eqref{upper} proves \eqref{claim}.
In fact, $\mu_k$ coincides with $\mu$ on the set $G_k=\{x\in \Omega: \, \, u(x)<e^k\}$, i.e., \begin{equation}\label{claim-great} \mu_k (K) = \mu(K), \quad \text{for every compact set} \, \, K \subset G_k. \end{equation}
To prove \eqref{claim-great}, notice that the set $G_k$ is finely open (see \cite{AG}, Sec. 7.1). Let $U_k=\{x\in \Omega: \, \, u(x)>e^k\}$, and $\lambda=\chi_{U_k} \mu$. Then clearly $G \lambda\le G \mu=u$ in $\Omega$, and so $G \lambda <e^k$ on $G_k$. Moreover, $\lambda (G_k)=0$ since $U_k$ and $G_k$ are disjoint. Hence by \cite{Fug}, Theorem 8.10, $G \lambda$ is finely harmonic on $G_k$.
On the other hand, let
\[
\tilde \mu = \mu_k -\mu|_{F_k}, \] where $\mu_k$ is supported on the closed set $F_k=\Omega\setminus U_k$. By \eqref{claim-u_k}, $\tilde \mu$ is a nonnegative measure on $\Omega$. Clearly, $G \tilde \mu \le G \mu_k=u_k\le e^k$ in $\Omega$.
Since $u_k-u=0$ on $G_k$, it follows that \[ G \tilde \mu=u_k-u + G \lambda \] is finely harmonic on $G_k$. Hence applying
\cite{Fug}, Theorem 8.10 in the opposite direction, we deduce that $\tilde\mu(G_k)=0$, so $\tilde \mu(K)=\mu_k(K) - \mu(K) =0$ for every compact set $K\subset G_k$. The proof of \eqref{claim-great} is complete.
As noted above, $u = \frac{d\mu}{d \omega}$ is the Radon--Nikodym derivative defined $d\omega$-a.e., and $\mu(E)=\omega(E)=0$, where $E= \{ x \in \Omega: u(x)=\infty\}$, hence \[ \int_\Omega h \, d\omega = \int_\Omega \frac{h}{u} \, d \mu = \lim_{k \to \infty} \, \int_\Omega \frac {h}{u_k} \, d \mu_k. \]
Passing to the limit as $k \to \infty$ in \eqref{approx-v_k}, we obtain \begin{equation*} \begin{aligned}
\int_\Omega h \, d\omega & = \int_\Omega \nabla h \cdot \nabla v \, dx - \int_\Omega |\nabla v|^2 \, h \, dx , \end{aligned} \end{equation*}
for all $h \in C^\infty_0(\Omega)$, which justifies equation \eqref{weakriccati}.
By the Riesz decomposition theorem, \begin{equation}\label{integral-form}
v = G(-\triangle v) + g = G(|\nabla v|^2 +\omega) + g, \end{equation} where $g$ is the greatest harmonic minorant of $v$. Since $v \geq 0$, a harmonic minorant of $v$ is $0$, so $g \ge 0$. It follows from (\ref{integral-form}) and the equation $u = G(u\omega) + 1$ that $$ g \le v=\log u = \log \left (G (u \omega) + 1\right)\le G (u \omega). $$ Since $G(u \omega)$ is a Green potential, the greatest harmonic minorant of $G (u\omega)$ is $0$, therefore $g=0$. Hence $v$ is a solution of (\ref{nonlineareqn-1}). This completes the proof of Theorem \ref{riccatithm} (A).
Conversely, suppose $v\in W^{1,2}_{loc}(\Omega)$ is a solution of equation (\ref{nonlineareqn-1}), that is, $v = G (|\nabla v|^2 + \omega)$. Then $v \geq 0$ is superharmonic,
$d\nu = |\nabla v|^2 dx + d\omega$ is the corresponding Riesz measure, and \eqref{ric-eq-1} holds. Let $v_k = \min\, (v, \, k)$ and $\nu_k = -\triangle v_k$, for $k=1,2, \ldots$. Clearly, $v_k\in W^{1,2}_{loc}(\Omega)\bigcap L^\infty(\Omega)$ is superharmonic.
Next, as in the proof of \eqref{claim-u_k} above, we observe that $\nu_k \ge \nu$ on the set $F_k=\{x\in \Omega: \, v(x)\le k\}$. To verify this claim, it is enough to check that \begin{equation}\label{claim-v_k} \nu_k (K) \ge \nu(K), \quad \text{for every compact set} \, \, K \subseteq F_k. \end{equation} The preceding inequality is deduced again using the approximation argument based on \cite{Hel}, Theorem 4.6.3. It requires the existence of a Borel set $E\subset \Omega$ such that $G \nu<\infty$ on $\Omega\!\setminus\!E$, and $\nu(E)=0$. Let $E=\{x \in \Omega: \, v(x)=\infty\}$. Then $E$ is a Borel set and $\text{cap}(E)=0$. We need to show that $\nu(E)=0$.
It is known (see \cite{HMV}, Lemma 2.1) that since $v\in W^{1,2}_{loc}(\Omega)$ is a solution to \eqref{ric-eq-1}, then \[
\int_\Omega h^2 d\nu=\int_\Omega |\nabla v|^2 h^2 dx + \int_\Omega h^2 d\omega \le 4 \int_\Omega |\nabla h|^2 dx, \] for all $h \in C^\infty_0(\Omega)$. It follows immediately that $\nu(F)\le 4 \, \text{cap}(F)$ for all compact (and hence Borel) sets $F$. Since $\text{cap}(E)=0$, we see that $\nu(E)=0$, which completes the proof of \eqref{claim-v_k}.
We remark that actually $\nu_k = \nu$ on $G_k$, where $G_k=\{x \in \Omega: \, v(x)<k\}$, exactly as was shown above for $\mu_k = \mu$ on $G_k$ (with $e^k$ in place of $k$). However, we do not need this fact in the remaining part of the proof.
Since $\nabla v = \nabla v_k$ $dx$-a.e. on $F_k$, and $\nabla v_k=0$ $dx$-a.e. outside $F_k$, it follows from \eqref{claim-v_k} that \begin{equation}\label{just0}
-\triangle v_k =\nu_k \ge \chi_{F_k} \nu = |\nabla v_k|^2 + \chi_{F_k}\, \omega,
\end{equation} as measures. In other words, \begin{equation}\label{just1}
-\triangle v_k=\nu_k=|\nabla v_k|^2 + \chi_{F_k}\, \omega +\lambda_k,
\end{equation} where $\lambda_k$ is a nonnegative measure in $\Omega$ supported on $F_k$. In fact, as discussed above, $\lambda_k=0$ outside the set $\{x\in \Omega: \, v(x)=k\}$.
Let $u = e^v \geq 1$, $u_k=e^{v_k}$ and $\mu_k=-\triangle u_k$. Clearly, $\nabla u_k=\nabla v_k \, e^{v_k}$, so $u_k\in W^{1,2}_{loc}(\Omega)\bigcap L^\infty(\Omega)$.
We claim that \begin{equation}\label{just2}
\mu_k=-\triangle u_k = -\triangle v_k \, e^{v_k} -|\nabla v_k|^2 \, e^{v_k} \ge 0. \end{equation} To prove (\ref{just2}), we use integration by parts (\ref{by-parts}) with $g=h e^{v_k}$, where $h \in C^\infty_0(\Omega)$, and
$v_k$ in place of $r$: \begin{equation*} \begin{aligned} \int_\Omega h \, e^{v_k} \, d \nu_k & = \int_\Omega \nabla (h \, e^{v_k}) \cdot \nabla v_k \, dx
\\ & = \int_\Omega e^{v_k} \, \nabla h \cdot \nabla v_k \, dx + \int_\Omega h \, |\nabla v_k|^2 \, e^{v_k} \,dx \\
& = \int_\Omega \, \nabla h \cdot \nabla u_k \, dx + \int_\Omega h \, |\nabla v_k|^2 \, e^{v_k} \,dx \\ &
= \int_\Omega h \, d \mu_k + \int_\Omega h \, |\nabla v_k|^2 \, e^{v_k} \,dx. \end{aligned} \end{equation*} Hence, first applying \eqref{just2} and then \eqref{just1}, we obtain \begin{equation*} \begin{aligned} \langle h, \mu_k\rangle & =\int_\Omega h \, d \mu_k \\
& = \int_\Omega h \, e^{v_k} \, d \nu_k
- \int_\Omega h \, | \nabla v_k |^2 \, e^{v_k} \, dx \\ & =\int_\Omega h \, e^{v_k} \ \chi_{F_k} \, d \omega + \int_\Omega h \, e^{v_k} \ d \lambda_k \\ & =\int_\Omega h \, e^{v} \ \chi_{F_k} \, d \omega + \int_\Omega h \, e^{v} \ d \lambda_k. \end{aligned} \end{equation*}
From the preceding equation it follows that, for all $h\in C^\infty_0(\Omega)$, $h \ge 0$, \begin{equation}\label{mu_k-G_k} \langle h, \mu_k\rangle \ge \int_\Omega h \, u \ \chi_{F_k} \, d \omega \ge 0. \end{equation} Since $\mu_k = -\triangle u_k \ge 0$ by \eqref{just2}, and $v_k$, and hence $u_k$, is lower semicontinuous, it follows that $u_k$ is superharmonic in $\Omega$.
Clearly,
$u= \lim _{k \to +\infty} u_k$ is a superharmonic function in $\Omega$ as the limit of the increasing sequence of superharmonic functions $u_k$, since $u=e^v\not\equiv\infty$. Moreover, as mentioned above, the infinity set $E$ on which $u=e^v=\infty$ has zero capacity, and $\omega(E)\le \nu(E)\le 4 \, \text{cap}(E)$, so $\omega(E)=0$.
Since $-\triangle u_k=\mu_k \to \mu$ weakly in $\Omega$, where $\mu = - \triangle u$, passing to the limit as $k \to \infty$ in \eqref{mu_k-G_k} and using the monotone convergence theorem on the right-hand side yields \[ \langle h, \mu\rangle \ge \int_{\Omega\setminus E} h \, u \, d \omega =\int_{\Omega} h \, u \, d \omega \ge 0. \]
Hence $u$ is superharmonic, and \begin{equation}\label{just3} -\triangle u \ge \omega \, u \quad \text{in} \, \, \Omega \end{equation}
in the sense of measures.
It follows from (\ref{just3}) that
$\tilde \omega = - \triangle u-\omega u$ is a non-negative measure in $\Omega$, so by the Riesz decomposition theorem $$ u = G(-\triangle u) + g = G(\omega u) + G \tilde \omega+ g \geq G(\omega u) + g, $$ where $g$ is the greatest harmonic minorant of $u$. Since $u \ge 1$, i.e., $1$ is a harmonic minorant of $u$, it follows that $g \ge 1$, and consequently, \begin{equation}\label{iter} u \ge G(\omega u ) + 1 = Tu + 1, \end{equation} for $T$ defined by (\ref{defT}). Since $u \ge Tu$, it follows by Schur's test that
$||T||_{L^2(\Omega, \omega) \to L^2(\Omega, \omega)} \le 1$, and hence
(\ref{equivnormTless1}) holds with $\beta =1$.
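For completeness, here is the standard Schur-test computation behind this step, with weight $u$ (using $Tu \le u$, $u \ge 1$, and the symmetry of the Green kernel $G(x,y)$): by the Cauchy--Schwarz inequality, for $f \in L^2(\Omega, \omega)$,
\[
|Tf(x)|^{2} \le \left( \int_\Omega G(x,y)\, u(y)\, d\omega(y)\right) \int_\Omega G(x,y)\, \frac{|f(y)|^{2}}{u(y)}\, d\omega(y)
\le u(x) \int_\Omega G(x,y)\, \frac{|f(y)|^{2}}{u(y)}\, d\omega(y),
\]
and integrating in $d\omega(x)$, then applying Fubini's theorem, the symmetry of $G$, and $Tu \le u$ once more, yields $\|Tf\|^{2}_{L^2(\Omega,\omega)} \le \|f\|^{2}_{L^2(\Omega,\omega)}$.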
Iterating (\ref{iter}) and taking the limit, we see that $$ \phi \equiv 1 + \mathcal{G} \omega = 1 + \sum_{j=1}^{\infty} G_j \omega= 1 + \sum_{j=1}^{\infty} T^j 1 \le u < +\infty \, \, \text{a.e.}, $$ and $$ \phi = G(\omega \phi ) + 1. $$ Hence $\phi$ is a positive solution of (\ref{dirichlet}). Thus (\ref{martincritsuff-g}) holds by Corollary \ref{cor} (B). This completes the proof of Theorem \ref{riccatithm} (B).
\end{proofof}
\noindent{\bf Remarks.} 1. As in \cite{FV2} for smooth domains and $\omega \in L^1_{loc} (\Omega)$, our sufficiency results hold in uniform domains for signed measures $\omega$, if $\omega$ is
replaced with $|\omega|$ both in the spectral conditions (\ref{normTless1}), (\ref{equivnormTless1}), and
conditions (\ref{martincritsuff-g}), (\ref{martincritnec-g}).
2. The lower pointwise estimates of solutions in Theorem \ref{mainufest}(B)
are still true for signed measures $\omega$, under some additional assumptions
(see \cite{GV1}). However, the upper pointwise estimates Theorem \ref{mainufest}(A)
are no longer true in general, unless we replace
$\omega$ with $|\omega|$.
3. It is still unclear under which (precise) additional assumptions on the quadratic form of $\omega$ the main existence results and upper estimates of solutions remain valid. Some results of this type are discussed in \cite{JMV}, but without the prescribed boundary conditions.
\end{document}
\begin{document}
\sloppy \title{Hitting and Harvesting Pumpkins}
\begin{abstract} The \emph{$c$-pumpkin} is the graph with two vertices linked by $c \geq 1$ parallel edges. A \emph{$c$-pumpkin-model} in a graph $G$ is a pair $\{A, B\}$ of disjoint subsets of vertices of $G$, each inducing a connected subgraph of $G$, such that there are at least $c$ edges in $G$ between $A$ and $B$. We focus on hitting and packing $c$-pumpkin-models in a given graph in the realm of approximation algorithms and parameterized algorithms. We give an FPT algorithm running in time $2^{\mathcal{O}(k)} n^{\mathcal{O}(1)}$ deciding, for any fixed $c \geq 1$, whether all $c$-pumpkin-models can be hit by at most $k$ vertices. This generalizes known single-exponential FPT algorithms for \textsc{Vertex Cover} and \textsc{Feedback Vertex Set}, which correspond to the cases $c=1,2$ respectively. Finally, we present an $\mathcal{O}(\log n)$-approximation algorithm for both the problems of hitting all $c$-pumpkin-models with a smallest number of vertices, and packing a maximum number of vertex-disjoint $c$-pumpkin-models.
\textbf{Keywords:} Hitting and packing; parameterized complexity; approximation algorithm; single-exponential algorithm; iterative compression; graph minors. \end{abstract}
\section{Introduction} \label{sec:intro}
The \emph{$c$-pumpkin} is the graph with two vertices linked by $c \geq 1$ parallel edges. A \emph{$c$-pumpkin-model} in a graph $G$ is a pair $\{A, B\}$ of disjoint subsets of vertices of $G$, each inducing a connected subgraph of $G$, such that there are at least $c$ edges in $G$ between $A$ and $B$. In this article we study the problems of hitting all $c$-pumpkin-models of a given graph $G$ with as few vertices as possible, and packing as many disjoint $c$-pumpkin-models in $G$ as possible. As discussed below, these problems generalize several well-studied problems in algorithmic graph theory. We focus on FPT algorithms for the parameterized version of the hitting problem, as well as on polynomial-time approximation algorithms for the optimization version of both the packing and hitting problems.
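To make the definition concrete, the following brute-force checker (an illustrative sketch only; all function names are ours, and the exhaustive search is exponential, unlike the efficient detection discussed later) decides whether a small multigraph, given as a vertex set and an edge list with repetitions for parallel edges, contains a $c$-pumpkin-model.

```python
from itertools import combinations

def induced_connected(edges, S):
    """Is the subgraph induced by vertex set S connected?  (DFS on S.)"""
    S = set(S)
    if not S:
        return False
    adj = {v: set() for v in S}
    for u, v in edges:
        if u in S and v in S and u != v:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == S

def has_pumpkin_model(vertices, edges, c):
    """Exhaustively search for a c-pumpkin-model {A, B}: disjoint vertex
    sets, each inducing a connected subgraph, with >= c edges between
    them (parallel edges are repeated entries of `edges`)."""
    verts = sorted(vertices)
    candidates = [set(s) for r in range(1, len(verts))
                  for s in combinations(verts, r)]
    for A in candidates:
        if not induced_connected(edges, A):
            continue
        for B in candidates:
            if A & B or not induced_connected(edges, B):
                continue
            cross = sum(1 for u, v in edges
                        if (u in A and v in B) or (u in B and v in A))
            if cross >= c:
                return True
    return False
```

For instance, a triangle contains a $2$-pumpkin-model (one vertex against the opposite edge) but no $3$-pumpkin-model.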
\subsection*{FPT algorithms} From the parameterized complexity perspective, we study the following problem for every fixed integer $c \geq 1$.
\begin{center} \begin{boxedminipage}{.8\textwidth}
$p$-$c$-\textsc{Pumpkin}-\textsc{Hitting} ($p$-$c$-\textsc{Hit})
\begin{tabular}{ r l } \textit{~~~~Instance:} & A graph $G$ and a non-negative integer $k$. \\ \textit{Parameter:} & $k$.\\
\textit{Question:} & Does there exist $S \subseteq V(G)$, $|S| \leq k$, such that\\ ~~~~~~~~~~~~~~~~~~ & $G \setminus S$ does not contain the $c$-pumpkin as a minor?
\\ \end{tabular} \end{boxedminipage}
\end{center}
Several special cases of this problem are well studied in parameterized complexity. When $c=1$, the $p$-$c$-\textsc{Hit} problem is the {\sc $p$-Vertex Cover} problem~\cite{BFR98,CKX10}. For $c=2$, it is the {\sc $p$-Feedback Vertex Set} problem~\cite{GGH+06,DFL+07,CaoCL10}. When $c=3$, this corresponds to the recently introduced {\sc $p$-Diamond Hitting Set} problem~\cite{FJP10}.
The $p$-$c$-\textsc{Hit} problem can also be seen as a particular case of the following problem, recently introduced by Fomin \emph{et al}.~\cite{FLM+11} and studied from the kernelization perspective: Let $\mathcal F$ be a finite set of graphs. In the $p$-$\mathcal{F}$-\textsc{Hit} problem, we are given an $n$-vertex graph $G$ and an integer $k$ as input, and asked whether at most $k$ vertices can be deleted from $G$ such that the resulting graph does not contain any graph from ${\mathcal F}$ as a minor.
Among other results, it is proved in~\cite{FLM+11} that if $\mathcal{F}$ contains a $c$-pumpkin for some $c \geq 1$, then $p$-$\mathcal{F}$-\textsc{Hit} admits a kernel of size $\mathcal{O}(k^2 \log^{3/2}k)$. As discussed in Section~\ref{sec:algorithm}, this kernel leads to a simple FPT algorithm for $p$-$\mathcal{F}$-\textsc{Hit} in this case, and in particular for $p$-$c$-\textsc{Hit}, with running time $2^{\mathcal{O}(k \log k)} \cdot n^{\mathcal{O}(1)}$. A natural question is whether there exists an algorithm for $p$-$c$-\textsc{Hit} with running time $2^{\mathcal{O}(k)} \cdot n^{\mathcal{O}(1)}$ for every fixed $c \geq 1$. Such algorithms are called {\sl single-exponential}.
For the \textsc{$p$-Vertex Cover} problem the existence of single-exponential algorithms has been well known since almost the beginning of the field of Parameterized Complexity~\cite{BFR98}, the best current algorithm being by Chen \emph{et al}.~\cite{CKX10}. On the other hand, the question about the existence of single-exponential algorithms for \textsc{$p$-Feedback Vertex Set} was open for a long time and was finally settled independently by Guo \emph{et al}.~\cite{GGH+06} (using iterative compression) and by Dehne \emph{et al}.~\cite{DFL+07}. The current champion {\sl deterministic} algorithm for \textsc{$p$-Feedback Vertex Set} runs in time $\mathcal{O}(3.83^kk\cdot n^2)$~\cite{CaoCL10}, whereas the fastest {\sl randomized} one runs in time $\mathcal{O}(3^k) \cdot \text{poly}(n)$~\cite{CNP+11}.
We present in Section~\ref{sec:algorithm} a single-exponential algorithm for $p$-$c$-\textsc{Hit} for every fixed $c \geq 1$, using a combination of a kernelization-like technique and iterative compression. (In fact, we will solve the {\sl constructive} version of the problem.) Notice that this generalizes the above results for \textsc{$p$-Vertex Cover} and \textsc{$p$-Feedback Vertex Set}. We remark that asymptotically these algorithms are optimal, that is, it is known that unless the ETH fails, neither \textsc{$p$-Vertex Cover} nor \textsc{$p$-Feedback Vertex Set} admits an algorithm with running time $2^{o(k)}\cdot n^{\mathcal{O}(1)}$~\cite{CCF+05,ImpagliazzoPZ01}. It is worth mentioning here that a similar quantitative approach was taken by Lampis~\cite{Lampis10} for graph problems expressible in MSOL parameterized by the sum of the formula size and the size of a minimum vertex cover of the input graph.
\subsection*{Approximation algorithms} For a fixed integer $c \geq 1$, we define the following two optimization problems.
\begin{center} \begin{boxedminipage}{.8\textwidth}
\textsc{Minimum} $c$-\textsc{Pumpkin}-\textsc{Hitting} (\textsc{Min} $c$-\textsc{Hit})
\begin{tabular}{ r l } \textit{~~~~Input:} & A graph $G$. \\ \textit{Output:} & A subset $S \subseteq V(G)$ such that $G \setminus S$\\ ~~~~~~~~~~~~~~~~~~ & does not contain the $c$-pumpkin as a minor.\\
\textit{Objective:} & Minimize $|S|$.\\ \end{tabular} \end{boxedminipage} \end{center}
\begin{center} \begin{boxedminipage}{.8\textwidth}
\textsc{Maximum} $c$-\textsc{Pumpkin}-\textsc{Packing} (\textsc{Max} $c$-\textsc{Pack})
\begin{tabular}{ r l } \textit{~~~~Input:} & A graph $G$. \\ \textit{Output:} & A collection $\mathcal{M}$ of vertex-disjoint subgraphs of $G$,\\ ~~~~~~~~~~~~~~~~~~ & each containing the $c$-pumpkin as a minor.\\
\textit{Objective:} & Maximize $|\mathcal{M}|$.\\ \end{tabular} \end{boxedminipage} \end{center}
Let us now discuss how the above problems encompass several well-known problems. For $c=1$, \textsc{Min} $1$-\textsc{Hit} is the \textsc{Minimum Vertex Cover} problem, which can be easily 2-approximated by finding any maximal matching, whereas \textsc{Max} $1$-\textsc{Pack} corresponds to finding a \textsc{Maximum Matching}, which can be done in polynomial time. For $c=2$, \textsc{Min} $2$-\textsc{Hit} is the \textsc{Minimum Feedback Vertex Set} problem, which can also be 2-approximated~\cite{BBF99,BeGe96}, whereas \textsc{Max} $2$-\textsc{Pack} corresponds to \textsc{Maximum Vertex-Disjoint Cycle Packing}, which can be approximated to within a $\mathcal{O}(\log n)$ factor~\cite{KNS+07}. For $c=3$, \textsc{Min} $3$-\textsc{Hit} is the \textsc{Diamond Hitting Set} problem studied by Fiorini \emph{et al.} in~\cite{FJP10}, where a $9$-approximation algorithm is given.
We provide in Section~\ref{sec:combinatorial} an algorithm that approximates both the \textsc{Min} $c$-\textsc{Hit} and the \textsc{Max} $c$-\textsc{Pack} problems to within a $\mathcal{O}(\log n)$ factor for every fixed $c \geq 1$. Note that this algorithm matches the best existing algorithms for \textsc{Max} $c$-\textsc{Pack} for $c=2$~\cite{KNS+07}. For the \textsc{Min} $c$-\textsc{Hit} problem, our result is only a slight improvement on the $\mathcal{O}(\log^{3/2} n)$-approximation algorithm given in~\cite{FLM+11}. However, for the \textsc{Max} $c$-\textsc{Pack} problem, there was no approximation algorithm known before except for the $c\leq 2$ case. Also, let us remark that, for $c \geq2$ and every fixed $\varepsilon > 0$, \textsc{Max} $c$-\textsc{Pack} is quasi-NP-hard to approximate to within a $\mathcal{O}(\log^{1/2 - \varepsilon} n)$ factor. For $c=2$ this was shown by Friggstad and Salavatipour~\cite{FrSa11}, and their result can be extended to the case $c>2$ in the following straightforward way. Given an instance $G$ of \textsc{Max} $2$-\textsc{Pack}, we build an instance of \textsc{Max} $c$-\textsc{Pack} by replacing each edge of $G$ with $c-1$ parallel edges. This approximation preserving transformation shows that \textsc{Max} $c$-\textsc{Pack} is quasi-NP-hard to approximate to within a $\mathcal{O}(\log^{1/2 - \varepsilon} n)$ factor for any $c\geq 2$.
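The edge-replacement step in the hardness argument above is simple enough to state as code (an illustrative sketch; the function name is ours). Any $2$-pumpkin-model $\{A,B\}$ of $G$, having at least $2$ cross edges, has at least $2(c-1) \geq c$ cross edges in the new multigraph for $c \geq 2$, so it is a $c$-pumpkin-model there, and conversely.

```python
def edge_multiplication_reduction(edges, c):
    """Approximation-preserving reduction from Max 2-Pack to Max c-Pack
    (c >= 2): replace every edge of G by c-1 parallel copies, where
    parallel edges are represented by repeated entries in the list."""
    assert c >= 2
    return [e for e in edges for _ in range(c - 1)]
```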
The main ingredient of our approximation algorithm is the following combinatorial result: We show that every $n$-vertex graph $G$ either contains a small $c$-pumpkin-model or has a structure that can be reduced in polynomial time, giving a smaller equivalent instance for both the \textsc{Min} $c$-\textsc{Hit} and the \textsc{Max} $c$-\textsc{Pack} problems. Here by a ``small'' $c$-pumpkin-model, we mean a model of size at most $f(c)\cdot \log n$ for some function $f$ independent of $n$. This result extends one of Fiorini {\it et al.}~\cite{FJP10}, who dealt with the case $c=3$.
\section{Preliminaries} \label{sec:prelim}
\subsection*{Graphs} We use standard graph terminology, see for instance~\cite{Die05}. All graphs in this article are finite and undirected, and may have parallel edges but no loops. We will sometimes restrict our attention to simple graphs, that is, graphs without parallel edges.
Given a graph $G$, we denote the vertex set of $G$ by $V(G)$ and the edge set of $G$ by $E(G)$. We use the shorthand $|G|$ for the number of vertices in $G$. For a subset $X\subseteq V(G)$, we use $G[X]$ to denote the subgraph of $G$ induced by $X$. For a subset $Y\subseteq E(G)$ we let $G[Y]$ be the graph with $E(G[Y]):=Y$ and with $V(G[Y])$ being the set of vertices of $G$ incident with some edge in $Y$. For a subset $X \subseteq V(G)$, we may use the notation $G \setminus X$ to denote the graph $G[V(G)\setminus X]$.
The set of neighbors of a vertex $v$ of a graph $G$ is denoted by $N_{G}(v)$. The \emph{degree} $\text{deg}_{G}(v)$ of a vertex $v\in V(G)$ is defined as the number of edges incident with $v$ (thus parallel edges are counted). We write $\text{deg}^{*}_{G}(v)$ for the number of neighbors of $v$, that is, $\text{deg}^{*}_{G}(v)
:= |N_G(v)|$. Similarly, given a subgraph $H \subseteq G$ with $v \in V(H)$, we can define in the natural way $N_{H}(v)$, $\text{deg}_{H}(v)$, and $\text{deg}^{*}_{H}(v)$, that is, $N_{H}(v)=N_{G}(v)\cap V(H)$, $\text{deg}_{H}(v)$ is the number of edges incident with $v$ with both endpoints in $H$, and $\text{deg}^{*}_{H}(v)=|N_{H}(v)|$. In these notations, we may drop the subscript if the graph is clear from the context. By the \emph{neighbors of a subgraph} $H \subseteq G$ we mean the set of vertices in $V(G) \setminus V(H)$ that have at least one neighbor in $H$. The minimum degree of a vertex in a graph $G$ is denoted $\delta(G)$, and the maximum degree of a vertex in $G$ is denoted $\Delta(G)$. We use the notation ${\mathbf{cc}}(G)$ to denote the number of connected components of $G$. Also, we let $\mu(G)$ denote the maximum multiplicity of an edge in $G$. A graph is said to be a {\em multipath} if its underlying simple graph (without parallel edges) is isomorphic to a path.
\subsection*{Minors and models} Given a graph $G$ and an edge $e\in E(G)$, let $G\backslash e$ be the graph obtained from $G$ by removing the edge $e$, and let $G\slash e$ be the graph obtained from $G$ by contracting $e$ (we note that parallel edges resulting from the contraction are kept but self loops are deleted). If $H$ can be obtained from a subgraph of $G$ by a (possibly empty) sequence of edge contractions, we say that $H$ is a \emph{minor} of $G$, and we denote it by $H \preceq_m G$. A graph $G$ is {\em $H$-minor-free}, or simply {\em $H$-free}, if $G$ does not contain $H$ as a minor. A \emph{model} of a graph $H$, or simply \emph{$H$-model}, in a graph $G$ is a collection $\{S_{v} \subseteq V(G) \mid v \in V(H)\}$ such that
\begin{itemize} \item[(i)] $G[S_{v}]$ is connected for every $v\in V(H)$;
\item[(ii)] $S_{v}$ and $S_{w}$ are disjoint for every two distinct vertices $v, w$ of $H$; and
\item[(iii)] there are at least as many edges between $S_{v}$ and $S_{w}$ in $G$ as between $v$ and $w$ in $H$, for every $vw \in E(H)$. \end{itemize}
The {\em size} of the model is defined as $\sum_{v \in V(H)} |S_{v}|$. Clearly, $H$ is a minor of $G$ if and only if there exists a model of $H$ in $G$. In this paper, we will almost exclusively consider $H$-models with $H$ being isomorphic to a $c$-pumpkin for some $c\geq 1$. Thus a $c$-pumpkin-model in a graph $G$ is specified by an unordered pair $\{A, B\}$ of disjoint subsets of vertices of $G$, each inducing a connected subgraph of $G$, such that there are at least $c$ edges in $G$ between $A$ and $B$. A $c$-pumpkin-model $\{A, B\}$
of $G$ is said to be {\em minimal} if there is no $c$-pumpkin-model $\{A', B'\}$ of $G$ with $A' \subseteq A$, $B' \subseteq B$, and $|A'| + |B'| < |A| +
|B|$.
A subset $X$ of vertices of a graph $G$ such that $G \setminus X$ has no $c$-pumpkin-minor is called a {\em $c$-pumpkin-hitting set}, or simply {\em $c$-hitting set}.
We denote by $\tau_{c}(G)$ the minimum size of a $c$-pumpkin-hitting set in $G$. A collection $\mathcal{M}$ of vertex-disjoint subgraphs of a graph $G$, each containing a $c$-pumpkin-model, is called a {\em $c$-pumpkin-packing}, or simply {\em $c$-packing}. We denote by $\nu_{c}(G)$ the maximum size of a $c$-pumpkin-packing in $G$. Obviously, for any graph $G$ it holds that $\nu_{c}(G) \leq \tau_{c}(G)$, but the reverse inequality does not hold in general.
The following lemma on models will be useful in our algorithms. The proof is straightforward and hence is omitted.
\begin{lemma} \label{lem:models_diameter} Suppose $G'$ is obtained from a graph $G$ by contracting some vertex-disjoint subgraphs of $G$, each of diameter at most $k$. Then, given an $H$-model in $G'$ of size $s$, one can compute in polynomial time an $H$-model in $G$ of size at most $k\cdot \Delta(H)\cdot s$. \end{lemma}
\subsection*{Parameterized algorithms}
A \emph{parameterized problem} $\Pi$ is a subset of $\Gamma^{*}\times \mathbb{N}$ for some finite alphabet $\Gamma$. An instance of a parameterized problem consists of a pair $(x,k)$, where $k$ is called the
\emph{parameter}. A central notion in parameterized complexity is {\em fixed parameter tractability (FPT)}, which means, for a given instance $(x,k)$, solvability in time $f(k)\cdot p(|x|)$, where $f$ is some computable function of $k$ and $p$ is a polynomial in the input size.
A \emph{kernelization algorithm} or, in short, a \emph{kernel} for a parameterized problem $\Pi\subseteq \Gamma^{*}\times \mathbb{N}$ is an algorithm that given $(x,k)\in \Gamma^{*}\times \mathbb{N} $ outputs in time polynomial in $|x|+k$ a pair $(x',k')\in \Gamma^{*}\times \mathbb{N}$ such that \begin{itemize} \item[(i)] $(x,k)\in \Pi$ if and only if $(x',k')\in \Pi$; and
\item[(ii)] $\max\{|x'|, k' \}\leq g(k)$, \end{itemize}
where $g$ is some computable function. The function $g$ is referred to as the \emph{size} of the kernel. If~$g(k)=k^{\mathcal{O}(1)}$ or $g(k)=\mathcal{O}(k)$,
then we say that $\Pi$ admits a polynomial kernel and a linear kernel, respectively.
\emph{Iterative compression} is a tool that has been used successfully in finding fast FPT algorithms for a number of parameterized problems. The main idea behind iterative compression is an algorithm which, given a solution of size $k+1$ for a problem, either compresses it to a solution of size $k$ or proves that there is no solution of size $k$. This technique was first introduced by Reed \emph{et al}. to solve the \textsc{Odd Cycle Transversal} problem, where one is interested in finding a set of at most $k$ vertices whose deletion makes the graph bipartite~\cite{RSV04}. Since then, it has been extensively used in the literature, see for instance~\cite{FGK+10,CLL+10,GGH+06}.
See~\cite{DowneyF99} for a detailed introduction to Parameterized Complexity.
\subsection*{Tree-width} We briefly recall the definition of the tree-width of a graph. A \emph{tree decomposition} of a graph $G$ is an ordered pair $(T, \{W_{x} \mid x \in V(T)\})$ where $T$ is a tree and $\{W_{x} \mid x \in V(T)\}$ a family of subsets of $V(G)$ (called {\em bags}) such that \begin{itemize} \item[(i)] $\bigcup_{x \in V(T)} W_{x} = V(G)$; \item[(ii)] for every edge $uv \in E(G)$, there exists $x \in V(T)$ with $u,v \in W_{x}$; and \item[(iii)] for every vertex $u\in V(G)$, the set $\{x\in V(T) \mid u \in W_{x}\}$ induces a subtree of $T$. \end{itemize} The {\em width} of tree decomposition $(T, \{W_{x} \mid x \in V(T)\})$ is
$\max \{ |W_{x}| - 1 \mid x \in V(T)\}$. The {\em tree-width} ${\mathbf{tw}}(G)$ of $G$ is the minimum width among all tree decompositions of $G$. We refer the reader to Diestel's book~\cite{Die05} for an introduction to the theory of tree-width. It is an easy exercise to check that the tree-width of a simple graph is an upper bound on its minimum degree. This implies the following lemma.
\begin{lemma} \label{lem:edges_tw} Every $n$-vertex simple graph with tree-width $k$ has at most $k \cdot n$ edges. \end{lemma}
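The lemma follows by peeling off minimum-degree vertices: writing $G_n = G$ and $G_{i-1} = G_i \setminus v_i$, where $v_i$ is a vertex of minimum degree in $G_i$, every edge of $G$ is removed exactly once, so
\[
|E(G)| \;=\; \sum_{i=1}^{n} \deg_{G_i}(v_i) \;=\; \sum_{i=1}^{n} \delta(G_i) \;\le\; \sum_{i=1}^{n} {\mathbf{tw}}(G_i) \;\le\; k \cdot n,
\]
since tree-width does not increase under vertex deletions.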
We will need the following result of Bodlaender \emph{et al}.~\cite{BodlaenderLTT97}.
\begin{theorem}[Bodlaender \emph{et al}.~\cite{BodlaenderLTT97}] \label{cor:ExcludeTheta} Every graph not containing a $c$-pumpkin as a minor has tree-width at most $2c-1$. \end{theorem}
The following corollary is an immediate consequence of the above theorem.
\begin{corollary} \label{cor:MultiExcludeTheta} Every $n$-vertex graph (possibly with parallel edges) with no minor isomorphic to a $c$-pumpkin has at most $(c-1)\cdot (2c-1) \cdot n$ edges. \end{corollary}
Note that the existence of a $c$-pumpkin-minor in a graph can be tested in polynomial time by using the polynomial-time algorithm of Robertson and Seymour~\cite{RS95}. The following proposition states that $c$-pumpkin-minors can be found in {\sl linear time}.
\begin{proposition} \label{prop:FindingTheta} For each fixed positive integer $c$, the existence of a $c$-pumpkin-minor in an $n$-vertex graph $G$ can be tested in time $\mathcal{O}(n)$. \end{proposition} \begin{proof} We first check whether the treewidth of $G$ is at most $2c-1$, by using the linear-time algorithm of Bodlaender~\cite{Bod96}. If the treewidth of $G$ is strictly larger than $2c-1$, then by Theorem~\ref{cor:ExcludeTheta} we can conclude that $G$ contains a $c$-pumpkin-minor. Otherwise, the treewidth of $G$ is bounded, and we can test for the existence of a $c$-pumpkin-minor by using the linear-time algorithm of Courcelle~\cite{Courcelle92}. \end{proof}
\section{A single-exponential FPT algorithm} \label{sec:algorithm}
As mentioned in the introduction, it is proved in~\cite{FLM+11} that given an instance $(G,k)$ of $p$-$\mathcal{F}$-\textsc{Hit} such that $\mathcal{F}$ consists of only a $c$-pumpkin for some $c \geq 1$, one can obtain in polynomial time an equivalent instance with
$\mathcal{O}(k^2 \log^{3/2}k)$ vertices.
This kernel leads to the following simple FPT algorithm for $p$-$c$-\textsc{Hit}: First compute the kernel $K$ in polynomial time, and then for every subset $S \subseteq V(K)$ of size $k$, test whether $K[V(K) \setminus S]$ contains a $c$-pumpkin as a minor, using for instance the linear-time algorithm given by Proposition~\ref{prop:FindingTheta}. If for some $S$ we have that $K[V(K) \setminus S]$ does not contain the $c$-pumpkin as a minor, we answer \textsc{Yes}; otherwise the answer is \textsc{No}. The running time of this algorithm is clearly bounded by $ {k^2 \log^{3/2}k \choose k}\cdot n^{\mathcal{O}(1)} = 2^{\mathcal{O}(k \log k)} \cdot n^{\mathcal{O}(1)}$.
In this section we give an algorithm for $p$-$c$-\textsc{Pumpkin}-\textsc{Hitting} that runs in time $d^k \cdot n^{\mathcal{O}(1)}$ for any fixed $c \geq 1$, where $d$ only depends on the fixed constant $c$.
Towards this, we first introduce a variant of $p$-$c$-\textsc{Pumpkin}-\textsc{Hitting}, namely $p$-$c$-\textsc{Pumpkin}-\textsc{Disjoint Hitting}, formally defined as follows.
\begin{center} \begin{boxedminipage}{.9\textwidth}
$p$-$c$-\textsc{Pumpkin}-\textsc{Disjoint Hitting} ($p$-$c$-\textsc{Disjoint Hit} for short)
\begin{tabular}{ r l } \textit{~~~~Instance:} & A graph $G$, a non-negative integer $k$, and a set $S \subseteq V(G)$\\
& with $|S| \leq k+1$, such that $G \setminus S$ does not contain the\\ & $c$-pumpkin as a minor. \\ \textit{Parameter:} & $k$.\\
\textit{Question:} & Does there exist $S' \subseteq V(G) \setminus S$, with $|S'| \leq k$, such that\\ ~~~~~~~~~~~~~~~~~~ & $G \setminus S'$ does not contain the $c$-pumpkin as a minor?
\\ \end{tabular} \end{boxedminipage}
\end{center}
We would like to note that we will focus on solving the {\sl constructive} version of the $p$-$c$-\textsc{Disjoint Hit} problem. That is, we will be interested in {\sl finding} such a set $S' \subseteq V(G) \setminus S$, as we will need it in our algorithm. Next we show a lemma that allows us to relate the two problems mentioned above.
\setcounter{footnote}{0}
\begin{lemma}
\label{lem:disjointtonondisjoint} If $p$-$c$-\textsc{Disjoint Hit} can be solved in time $\eta(c)^k \cdot n^{\mathcal{O}(1)}$, then $p$-$c$-\textsc{Hit} can be solved in time $(\eta(c)+1)^k \cdot n^{\mathcal{O}(1)}$. \end{lemma} \begin{proof}
Let \(\mathcal{A}\) be an \textsc{FPT} algorithm which solves the $p$-$c$-\textsc{Disjoint Hit} problem in time $\eta(c)^k \cdot n^{\mathcal{O}(1)}$. Let $(G,k)$ be an input graph for the $p$-$c$-\textsc{Hit} problem, and let $v_1,\ldots,v_n$ be an arbitrary ordering
of $V(G)$. Let $V_i$ and $G_i$, respectively, denote the subset $\{v_1,\ldots, v_i\}$ of vertices and the induced subgraph $G[V_i]$. We iterate over $i=1,\ldots ,n$ in the following way. At the $i$-th iteration, suppose we have a $c$-hitting set $S_i \subseteq V_i$ of \(G_{i}\) of size at most $k$. At the next iteration, we set $S_{i+1}:=S_i\cup \{v_{i+1}\}$ (note that \(S_{i+1}\) is a
$c$-hitting set for $G_{i+1}$ of size at most $k+1$). If $|S_{i+1}|\leq k$, we can safely move on to the $(i+2)$-th iteration. If $|S_{i+1}|=k+1$, we look at every subset $S \subseteq S_{i+1}$ and check whether there is a $c$-hitting set $W$ of size at most $k$ such that $W\cap S_{i+1}=S_{i+1}\setminus S$. To do this, we use the FPT algorithm $\mathcal{A}$ for $p$-$c$-\textsc{Disjoint Hit} on the instance $(H,S)$, with $H=G_{i+1} \setminus (S_{i+1}\setminus S)$. If
$\mathcal{A}$ returns a $c$-hitting set $W$ of $H$ with $|W|< |S|$, then observe that the vertex set $(S_{i+1}\setminus S) \cup W$ is a $c$-hitting set of $G_{i+1}$ of size strictly smaller than $|S_{i+1}|$. If there is a $c$-hitting set of $G_{i+1}$ of size strictly smaller than $|S_{i+1}|$, then for some $S\subseteq S_{i+1}$, there is a small $c$-hitting set in $G_{i+1} \setminus (S_{i+1}\setminus S)$ disjoint from $S$, and $\mathcal{A}$ correctly returns a solution. If no such small $c$-hitting set exists, the algorithm returns \textsc{No}. Let us now argue about the running time of this algorithm. The time required to execute \(\mathcal{A}\) for every subset $S$ at the $i$-th iteration is $\sum_{i=0}^{k+1}{k+1 \choose i} \cdot \eta(c)^i \cdot n^{\mathcal{O}(1)} = (\eta(c)+1)^{k+1} \cdot n^{\mathcal{O}(1)}$. That is, we have an algorithm for $p$-$c$-\textsc{Hit} running in time $(\eta(c)+1)^k \cdot n^{\mathcal{O}(1)}$, as we wanted to prove.\end{proof}
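As an illustration of the iterative-compression loop in the proof above, here is a self-contained sketch for the simplest case $c=1$, where a $c$-hitting set is just a vertex cover. All names are ours, and a brute-force routine stands in for the algorithm $\mathcal{A}$; the point is the structure of the outer loop, not efficiency.

```python
from itertools import combinations

def is_vertex_cover(edges, S):
    """For c = 1 a pumpkin-model is a single edge, so a 1-hitting set
    is exactly a vertex cover."""
    return all(u in S or v in S for u, v in edges)

def disjoint_compress(edges, forbidden, k):
    """Brute-force stand-in for the algorithm A: a vertex cover of size
    at most k that avoids the set `forbidden`, or None."""
    allowed = sorted({v for e in edges for v in e} - set(forbidden))
    for r in range(k + 1):
        for S in combinations(allowed, r):
            if is_vertex_cover(edges, set(S)):
                return set(S)
    return None

def hit_by_iterative_compression(vertices, edges, k):
    """Outer loop of the lemma's proof: insert vertices one at a time,
    compressing the running solution whenever it reaches size k + 1."""
    order = list(vertices)
    S = set()
    for i, v in enumerate(order):
        S = S | {v}          # a cover of G[V_{i+1}] of size <= k + 1
        if len(S) <= k:
            continue
        Vi = set(order[:i + 1])
        Ei = [e for e in edges if e[0] in Vi and e[1] in Vi]
        best = None
        # Guess the part `keep` = S_{i+1} \ S that stays in the solution.
        for r in range(len(S) + 1):
            for keep in combinations(sorted(S), r):
                keep = set(keep)
                rest = [e for e in Ei
                        if e[0] not in keep and e[1] not in keep]
                W = disjoint_compress(rest, S - keep, k - len(keep))
                if W is not None:
                    best = keep | W
                    break
            if best is not None:
                break
        if best is None:
            return None      # no hitting set of size <= k exists
        S = best
    return S
```

On the path $1\!-\!2\!-\!3\!-\!4$, for example, the routine finds a cover of size $2$ for $k=2$ and correctly reports failure for $k=1$.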
Lemma~\ref{lem:disjointtonondisjoint} allows us to focus on $p$-$c$-\textsc{Disjoint Hit}. In what follows we give an algorithm for $p$-$c$-\textsc{Disjoint Hit} that runs in single-exponential time. In fact, we obtain a linear kernel for $p$-$c$-\textsc{Disjoint Hit}, which clearly yields a single-exponential algorithm.
\subsection*{Overview of the algorithm} The algorithm for $p$-$c$-\textsc{Disjoint Hit} is based on a combination of polynomial-time preprocessing and a protrusion-based reduction rule. Let $(G,S,k)$ be the given instance of $p$-$c$-\textsc{Disjoint Hit} and let $V:=V(G)$. Our main objective is to show that, after applying some simple polynomial-time reduction rules, $\{v \in V \setminus S : N_G(v)\cap S \neq \emptyset\}$ has cardinality $\mathcal{O}(k)$; the proof of this fact, especially Lemma~\ref{lem:1nbr}, is the most technical part of this section.
Once we have the desired upper bound, we use a protrusion-based reduction rule adapted from~\cite{FLM+11} to give a polynomial-time procedure that, given an instance $(G,S,k)$ of $p$-$c$-\textsc{Disjoint Hit}, returns an equivalent instance $(G',S,k')$ such that $G'$ has $\mathcal{O}(k)$ vertices. That is, we obtain a linear vertex kernel for $p$-$c$-\textsc{Disjoint Hit}\footnote{This was not the case in the conference version of this article, in which the algorithm for $p$-$c$-\textsc{Disjoint Hit} was considerably more complicated, involving in particular a branching procedure and a more extensive usage of the protrusion-based reduction rule.}.
Notice that once we have a linear vertex kernel of size $\alpha k$ for $p$-$c$-\textsc{Disjoint Hit}, we can solve the problem in time ${\alpha k \choose k } \cdot k^{\mathcal{O}(1)} = 2^{\mathcal{O}(k)}$. We can now proceed to the detailed description of the algorithm.
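Concretely, once the kernel is in hand, the enumeration is a plain brute force over small subsets. A Python sketch, where the validity oracle \texttt{is\_hitting\_set} is an assumption of the sketch rather than part of the algorithm:

```python
from itertools import combinations

def solve_on_kernel(kernel_vertices, k, is_hitting_set):
    """Brute force over a linear kernel (sketch): with |kernel| = alpha*k,
    trying all subsets of size at most k costs binom(alpha*k, k) validity
    checks, i.e. 2^{O(k)} many for fixed alpha."""
    for r in range(k + 1):
        for cand in combinations(sorted(kernel_vertices), r):
            if is_hitting_set(set(cand)):
                return set(cand)          # smallest solutions found first
    return None
```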
\subsection*{Pre-processing step} We start by defining two sets. Our first objective is to upper bound the cardinality of these two sets by $\mathcal{O}(k)$.
\begin{eqnarray*}
V_1& :=& \{v \in V \setminus S : |N_G(v)\cap S| = 1\}\\
V_{\geq 2} & := & \{v \in V \setminus S : |N_G(v)\cap S| \geq 2\}. \end{eqnarray*} We now present some simple polynomial-time reduction rules (depending on $c$) that will be applied in the compression routine. We also prove, together with the description of each rule, that they are valid for our problem.
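The two sets just defined depend only on local neighborhood information; a minimal Python sketch of their computation (the adjacency-dictionary representation of $G$ is an assumption of the sketch):

```python
def split_by_neighbors_in_s(adj, s):
    """Partition the vertices outside S by how many neighbors they have
    in S, giving the sets V_1 and V_{>=2} defined above.
    `adj` maps each vertex to its set of neighbors."""
    v1, v_ge2 = set(), set()
    for v, nbrs in adj.items():
        if v in s:
            continue
        d = len(nbrs & s)                 # |N_G(v) ∩ S|
        if d == 1:
            v1.add(v)
        elif d >= 2:
            v_ge2.add(v)
    return v1, v_ge2
```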
\begin{itemize} \item[{\bf R1}] Suppose that $C$ is a connected component of $G \setminus S$ with no neighbor in $S$. Then delete $C$.
\noindent \emph{Proof of correctness}. The deletion of $C$ can be safely done, as its vertices will never participate in a $c$-pumpkin-model. $\square$
\item[{\bf R2}] Suppose that $C$ is a connected component of $G\setminus S$ with exactly one neighbor $v$ in $S$, and such that $G[V(C) \cup \{v\}]$ is $c$-pumpkin-free. Then delete $C$.
\noindent \emph{Proof of correctness}. In this case $C$ can also be safely deleted, as its vertices will never participate in a minimal $c$-pumpkin-model. $\square$
\item[{\bf R3}] Let $u \in S$, let $v \in V(G) \setminus S$, let $P$ be a (non-empty) connected component of $G \setminus \{u,v\}$ entirely contained in $G \setminus S$, and suppose that $P$ is $c$-pumpkin-free. Let $H_p$ be the graph obtained from $G[V(P) \cup \{u,v\}]$ by adding $p$ parallel edges between $u$ and $v$, and let $p_c$ be the smallest non-negative integer $p$ such that $H_p$ contains a $c$-pumpkin-minor (note that it is possible that $p_c=0$). Then replace $P$ with $c-p_c$ parallel edges between $u$ and $v$. See Figure~\ref{fig:R3} for some small examples for $c=4$.
\noindent \emph{Proof of correctness}. Note that in order to hit all $c$-pumpkin-models intersecting $P$, there is no need to include any vertex of $P$ in the solution, as any such vertex could be replaced with $v$, yielding another solution of equal or smaller size. We say that two $c$-pumpkin-models in $G$ are {\sl $P$-equivalent} if they coincide except, possibly, for vertices in $P$. Let $G'$ be the graph obtained from $G$ by replacing $P$ with $c-p_c$ parallel edges between $u$ and $v$. By construction, $G$ and $G'$ contain exactly the same $c$-pumpkin-models modulo the $P$-equivalence relation. As we can assume that no vertex of $P$ is included in the solution, we conclude that the reduction rule yields an equivalent instance. $\square$ \end{itemize}
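Rules~{\bf R1} and~{\bf R2} are component-local, and {\bf R1} in particular amounts to a graph traversal of $G \setminus S$. The following Python sketch is an illustration only (the adjacency-dictionary representation is an assumption); it collects the vertices that {\bf R1} deletes:

```python
def apply_rule_r1(adj, s):
    """Rule R1 (sketch): find every connected component of G \\ S that has
    no neighbor in S. Returns the set of vertices such components span."""
    outside = set(adj) - s
    seen, to_delete = set(), set()
    for root in outside:
        if root in seen:
            continue
        # traverse one component of G \ S, remembering whether it touches S
        comp, stack, touches_s = [], [root], False
        seen.add(root)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if w in s:
                    touches_s = True
                elif w not in seen:
                    seen.add(w)
                    stack.append(w)
        if not touches_s:
            to_delete.update(comp)
    return to_delete
```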
\begin{figure}
\caption{Four examples of reduction rule~{\bf R3} for $c=4$.}
\label{fig:R3}
\end{figure}
We would like to note that under the hypothesis of Rule~{\bf R3}, if in addition it holds that $G[V(P) \cup \{u,v\}]$ contains a $c$-pumpkin-minor, then we could safely delete vertex $v$ from $G$ and decrease the parameter $k$ by one. Nevertheless, it turns out that keeping $c$ parallel edges between $u$ and $v$ makes the analysis of the algorithm simpler.
We say that the instance $(G,S,k)$ is \emph{$(S,c)$-reduced} if rules~{\bf R1}, {\bf R2}, or~{\bf R3} cannot be applied anymore. Note that reduction rule~{\bf R1} can easily be applied in polynomial time. For reduction rules~{\bf R2} and~{\bf R3}, we have to test whether the considered graph contains a $c$-pumpkin-minor, which can be done in linear time by Proposition~\ref{prop:FindingTheta}.
The following Lemmas~\ref{lem:2nbr} and~\ref{lem:1nbr} are key to the analysis of our algorithm. We also need two intermediate technical results stated in Lemmas~\ref{lem:atLeast2} and~\ref{cl:FewConnectedComponents}, which will be used in the proof of Lemma~\ref{lem:1nbr}.
\begin{lemma}
\label{lem:2nbr} There is a function $f\colon \mathbb{N} \to \mathbb{N}$ such that if $(G,S,k)$ is a \textsc{Yes}-instance to the $p$-$c$-\textsc{Disjoint Hit} problem, then $|V_{\geq 2}| \leq f(c) \cdot k$. \end{lemma} \begin{proof}
In order to upper-bound $|V_{\geq 2}|$, we build from $G[S]$ the following auxiliary graph $H$: we start with $H=(S,E(G[S]))$, and for each vertex $v \in V_{\geq 2}$ with neighbors $u_1,\ldots,u_{\ell}$, $\ell \geq 2$, we add to $H$ an edge $e_v$ between two arbitrary neighbors $u_1,u_2$ of $v$. Note that $H \preceq_m G$, and that for each vertex $v \in V_{\geq 2}$, $H \setminus e_v
\preceq_m G \setminus v$. We now argue that $|E(H)|$ is linearly bounded by
$k$, which implies the desired result as by construction $|V_{\geq 2}| \leq
|E(H)|$. If $(G,S,k)$ is a \textsc{Yes}-instance, there must be a set $S' \subseteq V
\setminus S$, $|S'| \leq k$, such that $G \setminus S'$ is $c$-pumpkin-free. By construction of $H$, the removal of each vertex $v \in S' \cap V_{\geq 2}$ corresponds to the removal of the edge $e_v \in E(H)$. Let $H' = H \setminus
\{e_v : v \in S' \cap V_{\geq 2}\}$, and note that $H' \preceq_m G \setminus S'$, so $H'$ is $c$-pumpkin-free. Therefore, by Corollary~\ref{cor:MultiExcludeTheta} it follows that $|E(H')| \leq (c-1)\cdot
(2c-1)\cdot |V(H')| \leq (c-1)\cdot (2c-1)\cdot (k+1)$. As $|E(H)| \leq
|E(H')|+k$, we conclude that $|V_{\geq 2}| \leq |E(H)| \leq (c-1)\cdot (2c-1)\cdot (k+1) +k$.\end{proof}
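The construction of the auxiliary graph $H$ in the proof above only needs, for each $v \in V_{\geq 2}$, one edge $e_v$ between two arbitrary neighbors of $v$ in $S$. A minimal Python sketch of this step (adjacency dictionaries are an assumed representation):

```python
def auxiliary_edges(adj, s, v_ge2):
    """Build the edges added to H = (S, E(G[S])) in the proof sketch: for
    each v with at least two neighbors in S, one edge e_v between two
    arbitrary such neighbors (here: the two smallest, for determinism)."""
    extra = {}
    for v in v_ge2:
        u1, u2, *_ = sorted(adj[v] & s)   # two arbitrary neighbors in S
        extra[v] = (u1, u2)
    return extra
```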
\begin{lemma} \label{lem:atLeast2} There is a function $g\colon \mathbb{N} \to \mathbb{N}$ such that if $(G,S,k)$ is a \textsc{Yes}-instance to the
$p$-$c$-\textsc{Disjoint Hit} problem and $\mathcal{C}$ is a collection of disjoint connected subgraphs of $G \setminus S$ such that each subgraph has at least two distinct neighbors in $S$, then $|\mathcal{C}| \leq g(c) \cdot k$. \end{lemma}
\begin{proof}The proof is very similar to the proof of Lemma~\ref{lem:2nbr}. In order to upper-bound $|\mathcal{C}|$, we build from $G[S]$ the following auxiliary graph $H$: we start with $H=(S,E(G[S]))$, and for each subgraph $C \in \mathcal{C}$ with neighbors $u_1,\ldots,u_{\ell}$, $\ell \geq 2$, we add to $H$ an edge $e_C$ between two arbitrary neighbors $u_1,u_2$ of $C$. Note that $H \preceq_m G$, and that for each subgraph $C \in \mathcal{C}$, $H \setminus e_C
\preceq_m G \setminus C$. We now argue that $|E(H)|$ is linearly bounded by
$k$, which implies the desired result as by construction $|\mathcal{C}| \leq
|E(H)|$. If $(G,S,k)$ is a \textsc{Yes}-instance, there must be a set $S' \subseteq V
\setminus S$, $|S'| \leq k$, such that $G \setminus S'$ is $c$-pumpkin-free. By construction of $H$, the removal of a vertex $v$ in a subgraph $C \in
\mathcal{C}$ corresponds to the removal of at most one edge $e_C \in E(H)$ (as maybe the edge $e_C$ can still be {\sl simulated} after the removal of $v$). Let $H'$ be the subgraph obtained from $H$ after the removal of those edges, and note that $H' \preceq_m G \setminus S'$, so $H'$ is $c$-pumpkin-free. Therefore, by Corollary~\ref{cor:MultiExcludeTheta} it follows that $|E(H')|
\leq (c-1)\cdot (2c-1)\cdot |V(H')| \leq (c-1)\cdot (2c-1)\cdot (k+1)$. As
$|E(H)| \leq |E(H')|+k$, we conclude that $|\mathcal{C}| \leq |E(H)| \leq (c-1)\cdot (2c-1)\cdot (k+1) +k$. \end{proof}
\begin{lemma} \label{cl:FewConnectedComponents} In an $(S,c)$-reduced \textsc{Yes}-instance $(G,S,k)$ to the $p$-$c$-\textsc{Disjoint Hit} problem, the number of connected components of $G\setminus S$ is $\mathcal{O}(k)$. \end{lemma} \begin{proof} Note that by reduction rules~{\bf R1} and~{\bf R2}, we can assume that each connected component $C$ of $G\setminus S$ has some neighbor in $S$, and that if $C$ has exactly one neighbor $v$ in $S$, then $G[V(C) \cup \{v\}]$ contains a $c$-pumpkin-minor. On the one hand, the number of components $C$ that have exactly one neighbor $v$ in $S$ and such that $G[V(C) \cup \{v\}]$ contains the $c$-pumpkin as a minor is at most $k$, as any solution needs to contain at least one vertex from each such connected component. On the other hand, the number of components that have at least two neighbors in $S$ is $\mathcal{O}(k)$ by Lemma~\ref{lem:atLeast2}.\end{proof}
Now we prove our key structural lemma.
\begin{lemma} \label{lem:1nbr} There is a function $h\colon \mathbb{N} \to \mathbb{N}$ such that if $(G,S,k)$ is an $(S,c)$-reduced \textsc{Yes}-instance to the
$p$-$c$-\textsc{Disjoint Hit} problem, then $|V_{1}| \leq h(c) \cdot k$.
\end{lemma} \begin{proof} For simplicity we call the vertices in $V_{1}$ {\sl white}. We proceed to find a packing of disjoint connected subgraphs $\mathcal{P}=\{B_1,\ldots,B_{\ell}\}$
of $G \setminus S$ containing all white vertices except for $\mathcal{O}(k)$ of them. This will help us in bounding $|V_{1}|$. We call the subgraphs in $\mathcal{P}$ \emph{blocks}. For a graph $H \subseteq G\setminus S$, let $w(H)$ be the number of white vertices in $H$. The idea is to obtain, as far as possible, blocks $B$ with $c \leq w(B) \leq c^3$; we call these blocks \emph{suitable}, and the other blocks are called \emph{unsuitable}. If at some point we cannot refine the packing anymore in order to obtain suitable blocks, we will argue about its structural properties,
which will allow us to bound the number of white vertices.
We start with $\mathcal{P}$ containing all the connected components $C$ of $G \setminus S$ such that $w(C)> c^3$, and we recursively try to refine the current packing. By Lemma~\ref{cl:FewConnectedComponents}, we know that the number of connected components is $\mathcal{O}(k)$, and hence the number of white vertices that are not included in $\mathcal{P}$ is $\mathcal{O}(c^3k)=\mathcal{O}(k)$.
More precisely, for each block $B$ with $w(B) > c^3$, we build a spanning tree $T$ of $B$, and we orient each edge $e \in E(T)$ towards the components of $T \setminus \{e\}$ containing at least $c$ white vertices. Note that, as $w(B) > c^3$, each edge gets at least one orientation, and that an edge may be oriented in both directions. If some edge $e \in E(T)$ is oriented in both directions, we replace block $B$ in $\mathcal{P}$ with the subgraphs induced by the vertices in each of the two subtrees. We stop this recursive procedure whenever we cannot find more suitable blocks using this orientation trick. Let $\mathcal{P}$ be the current packing.
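The orientation trick above can be implemented by computing, for every edge of the spanning tree, the number of white vertices on each side; an edge is oriented in both directions exactly when both sides contain at least $c$ white vertices. A Python sketch under the assumption that the tree is given by adjacency sets:

```python
def find_split_edge(tree_adj, white, c):
    """Find an edge of the spanning tree whose removal leaves at least c
    white vertices on both sides (i.e. an edge oriented in both directions),
    or None if no such edge exists.  `white` is the set of white vertices."""
    root = next(iter(tree_adj))
    order, parent = [], {root: None}
    stack = [root]
    while stack:                       # iterative DFS, preorder
        v = stack.pop()
        order.append(v)
        for w in tree_adj[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    sub = {v: (1 if v in white else 0) for v in tree_adj}
    for v in reversed(order):          # accumulate subtree white counts
        if parent[v] is not None:
            sub[parent[v]] += sub[v]
    total = sub[root]
    for v in order:
        p = parent[v]
        if p is not None and sub[v] >= c and total - sub[v] >= c:
            return (p, v)              # both sides have >= c white vertices
    return None
```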
Now let $B$ be an unsuitable block in $\mathcal{P}$, that is, $w(B)>c^3$ and no edge of its spanning tree $T$ is oriented in both directions. This implies that there exists a vertex $v \in V(T)$ with all its incident edges pointing towards it. We call such a vertex $v$ a \emph{sink}. Let $T_1,\ldots,T_p$ be the connected components of $T \setminus \{v\}$. Note that, as $v$ is a sink, $w(T_i) < c$ for $1 \leq i \leq p$; using the fact that $w(B) > c^3$, we conclude that $p \geq c^2$. Now let $P_1,\ldots,P_{\ell}$ be the connected components of $G[V(T_1)\cup \cdots \cup V(T_p)]= G[V(B) \setminus \{v\}]$, and note that $\ell \leq p$. We call these subgraphs $P_i$ the \emph{pieces} of the unsuitable block $B$. For each unsuitable block, we delete the pieces with no white vertex. This completes the construction of $\mathcal{P}$. The next claim bounds the number of white vertices in each piece of an unsuitable block in $\mathcal{P}$.
\begin{claimN} \label{cl:pizza} Each of the pieces of an unsuitable block contains less than $c^2$ white vertices. \end{claimN} \begin{proof} Assume for contradiction that there exists a piece $P$ of an unsuitable block with $w(P) \geq c^2$, and let $v$ be the sink of the unsuitable block obtained from tree $T$. By construction, $V(P)$ is the union of the vertices in some of the trees in $T \setminus \{v\}$; assume without loss of generality that these trees are $T_1,\ldots,T_{q}$. As $w(P) \geq c^2$ and $w(T_i) < c$ for $1 \leq i \leq q$, it follows that $q \geq c$. As $v$ has at least one neighbor in each of the trees $T_i$, $1 \leq i \leq q$, and $P$ is a connected subgraph of $G$, we can obtain a $c$-pumpkin-model $\{A,B\}$ in $G \setminus S$ by setting $A:=\{v\}$ and $B:=V(P)$, contradicting the fact that $G \setminus S$ is $c$-pumpkin-free. See Figure~\ref{fig:packing}(a) for an example for $c=3$.\end{proof}
\begin{figure}
\caption{(a) Example for $c=3$ of the contradiction in the proof of Claim~\ref{cl:pizza}. The trees $T_1$, $T_2$, and $T_3$ are defined by the dashed edges. (b) Example for $c=3$ of an unsuitable block $B$ in the packing $\mathcal{P}$ built in the proof of Lemma~\ref{lem:1nbr}.}
\label{fig:packing}
\end{figure}
Hence in the packing $\mathcal{P}$ we are left with a set of suitable blocks with at most $c^3$ white vertices each, and a set of unsuitable blocks, each one broken up into pieces linked by a sink in a star-like structure. By Claim~\ref{cl:pizza}, each piece of the remaining unsuitable blocks contains at most $c^2$ white vertices. See Figure~\ref{fig:packing}(b) for an example of an unsuitable block $B$ for $c=3$.
Now we need two claims about the properties of the constructed packing.
\begin{claimN} \label{cl:suitable} In the packing $\mathcal{P}$ constructed above, the number of suitable or unsuitable blocks is $\mathcal{O}(k)$. \end{claimN} \begin{proof} We first bound the number of suitable blocks, and for this we distinguish between two types of suitable blocks. \begin{itemize} \item[$\bullet$] Type 1: blocks that have exactly one neighbor in $S$. For each such block we need to include a vertex of it in the $c$-hitting set (as each suitable block contains at least $c$ white vertices), so their number is at most $k$.
\item[$\bullet$] Type 2: blocks that have at least two distinct neighbors in $S$. The number of such blocks is $\mathcal{O}(k)$ by Lemma~\ref{lem:atLeast2}. \end{itemize} The proof for unsuitable blocks is similar. Namely, we distinguish between the same two types of blocks, and use the fact that each unsuitable block contains at least $c^3\geq c$ white vertices. This concludes the proof.\end{proof}
\begin{claimN} \label{cl:numberPieces} In an $(S,c)$-{\sl reduced} \textsc{Yes}-instance, the total number of pieces in the packing $\mathcal{P}$ constructed above is $\mathcal{O}(k)$. \end{claimN} \begin{proof} We distinguish between three types of pieces. \begin{itemize} \item[$\bullet$] Type 1: pieces that have at least two distinct neighbors in $S$ (see piece $P_3$ in Figure~\ref{fig:packing}(b) for $c=3$). The number of pieces of this type is $\mathcal{O}(k)$ by Lemma~\ref{lem:atLeast2}.
\item[$\bullet$] Type 2: pieces that are not of Type 1 and that have at least one neighbor in some suitable block or in another unsuitable block (note that by construction a piece cannot have any neighbor in other pieces of the same unsuitable block). We construct an auxiliary graph $H$ as follows: we start with the packing $\mathcal{P}$, and we add all the edges in $G \setminus S$
between vertices in different blocks of $\mathcal{P}$ (suitable or unsuitable). Then we contract each block to a single vertex, and let $H$ be the resulting graph. By Claim~\ref{cl:suitable}, $|V(H)|=\mathcal{O}(k)$. As $H \preceq_m G
\setminus S$ and $G \setminus S$ is $c$-pumpkin-free, by Corollary~\ref{cor:MultiExcludeTheta} we have that $|E(H)|=\mathcal{O}(k)$. Since each piece of Type~2 is incident with at least one edge of $H$ after uncontracting the blocks, it follows that the number of pieces of Type~2 is at most $2|E(H)|=\mathcal{O}(k)$.
\item[$\bullet$] Type 3: the remaining pieces. That is, these are pieces $P$ that have exactly one neighbor $u$ in $S$, and that are connected to the rest of $G \setminus S$ only through the corresponding sink $v$. In other words, such a piece $P$ is a connected component of $G \setminus \{u,v\}$. We distinguish two subcases. \begin{itemize} \item[$\circ$] If $G[V(P) \cup \{u\}]$ contains a $c$-pumpkin-minor, then any $c$-hitting set needs to contain at least one vertex in $P$ (see piece $P_1$ in Figure~\ref{fig:packing}(b) for $c=3$). We conclude that the number of pieces of this subtype is at most $k$. \item[$\circ$] Otherwise, all the conditions of reduction rule~{\bf R3} are fulfilled, and we conclude that such a piece cannot exist in an $(S,c)$-{\sl reduced} instance. \end{itemize} \end{itemize}
\end{proof}
To conclude, recall that the constructed packing $\mathcal{P}$ contains all but $\mathcal{O}(k)$ white vertices, either in suitable blocks or in pieces of unsuitable blocks. As by construction each suitable block has at most $c^3$ white vertices and by Claim~\ref{cl:suitable} the number of such blocks is $\mathcal{O}(k)$, it follows that the number of white vertices contained in suitable blocks is $\mathcal{O}(k)$. Similarly, by Claim~\ref{cl:pizza} each piece contains at most $c^2$ white vertices, and the total number of pieces is $\mathcal{O}(k)$ by Claim~\ref{cl:numberPieces}, so the number of white vertices contained in pieces of unsuitable blocks is also $\mathcal{O}(k)$.\end{proof}
\subsection*{Linear kernel}
We now proceed to describe a procedure, called \emph{protrusion rule}, that bounds the size of our graph when $|V_1\cup V_{\geq 2}|=\mathcal{O}(k)$. We first need some definitions.
Many graph optimization problems can be expressed as finding an optimal set of vertices or edges satisfying a property expressible in Monadic Second Order (MSO) logic. A parameterized graph problem $\Pi \subseteq \Sigma^* \times \mathbb{N}$ is given with a graph $G$ and an integer $k$ as input. When the goal is to decide whether there exists a subset $W$ of at most $k$ vertices for which an MSO-expressible property $P_{\Pi}(G,W)$ holds, we say that $\Pi$ is a \mbox{$p$-min-MSO}~graph problem. One can easily check that the $p$-$c$-\textsc{Hit} problem is \mbox{$p$-min-MSO}. In the (parameterized) \emph{disjoint version} $\Pi^d$ of a \mbox{$p$-min-MSO}~problem $\Pi$, we are given a triple $(G,S,k)$, where $G$ is a graph, $S$ is a subset of $V(G)$, and $k$ is the parameter, and we seek a solution set $W$ that is disjoint from $S$ and whose size is at most $k$.
Given $R\subseteq V(G)$, we define $\partial_G(R)$ as the set of vertices in $R$ that have a neighbor in $V(G)\setminus R$. Thus the neighborhood of $R$ is $N_G(R) = \partial_G(V(G) \setminus R)$. We say that a set $X \subseteq V(G)$
is an {\em $r$-protrusion} of $G$ if ${\mathbf{tw}}(G[X])\leq r$ and $|\partial_G(X)| \leq r$.
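In concrete terms, $\partial_G(R)$ and $N_G(R)$ can be computed by a single pass over the relevant vertex set; a small Python sketch (adjacency dictionaries are an assumed representation):

```python
def boundary(adj, r):
    """∂_G(R): vertices of R with at least one neighbor outside R."""
    return {v for v in r if adj[v] - r}

def neighborhood(adj, r):
    """N_G(R) = ∂_G(V(G) \\ R): vertices outside R with a neighbor in R."""
    return boundary(adj, set(adj) - r)
```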
An important concept when applying protrusion-based reduction rules is \emph{strong monotonicity} of a problem, which we do not define here (see for instance~\cite{BodlaenderFLPST09}). What we will need is the following fact, which can be found in~\cite[proof of Lemma 13]{BodlaenderFLPST09}: if $\mathcal{F}$ is a finite set of connected planar graphs, then the $p$-$\mathcal{F}$-\textsc{Hit} problem is strongly monotone. As the $c$-pumpkin is a connected planar graph for any $c \geq 1$, we immediately have the following lemma.
\begin{lemma} \label{lem:strongMon} The $p$-$c$-\textsc{Hit} problem is strongly monotone. \end{lemma}
The following lemma is key to our protrusion-based reduction rule. Its proof follows basically from the framework introduced in~\cite{BodlaenderFLPST09}, although some details need to be modified when dealing with the disjoint version of a problem, as is the case here. A self-contained proof for the specific case of disjoint problems can be found in~\cite[Lemma 12]{KPP11}. The general statement deals with the disjoint version of a general (parameterized) strongly monotone \mbox{$p$-min-MSO}~problem. As the $p$-$c$-\textsc{Hit} problem is \mbox{$p$-min-MSO}, and it is strongly monotone by Lemma~\ref{lem:strongMon}, we only state the lemma for the specific case of our problem.
\begin{lemma}[Kim \emph{et al}.~\cite{KPP11}] \label{safe:protrusion} Let $\Pi^d$ be the disjoint version of $p$-$c$-\textsc{Hit}. There exists a computable function $\gamma:\mathbb{N}\rightarrow \mathbb{N}$ and an algorithm that given:
\begin{itemize} \item an instance $(G,S,k)$ of $\Pi^d$ such that $G \setminus S$ is $c$-pumpkin-minor-free, and
\item a $t$-protrusion $X$ of \(G\) such that $|X|>\gamma(2t+1)$ and
$X\cap S=\emptyset$, \end{itemize}
in time $\mathcal{O}(|X|)$ outputs an instance $(G',S,k')$ such that
$|V(G')|<|V(G)|$, $k'\leq k$, $(G',S,k')\in\Pi^d$ if and only if $(G,S,k)\in\Pi^d$, and $G' \setminus S$ is $c$-pumpkin-minor-free. \end{lemma}
We are now ready to state the protrusion rule. It follows as a corollary of Lemma~\ref{safe:protrusion} that the following reduction rule for $p$-$c$-\textsc{Disjoint Hit} is safe.
\begin{itemize} \item[{\bf P}] Let $(G,S,k)$ be an instance of $p$-$c$-\textsc{Disjoint Hit} and let $\gamma:\mathbb{N}\rightarrow \mathbb{N}$ be the computable function given by Lemma~\ref{safe:protrusion}. Let $X$ be a $4c$-protrusion of $G$ with
$|X|>\gamma(8c+1)$ and such that $X\cap S=\emptyset$. Then use the algorithm given by Lemma~\ref{safe:protrusion} to compute in time $\mathcal{O}(|X|)$ an equivalent instance $(G',S,k')$ such that $G[S]$ and $G'[S]$ are isomorphic,
$G' \setminus S$ is $c$-pumpkin-minor-free, $|V(G')|<|V(G)|$, and $k'\leq k$. \end{itemize}
Before describing how to obtain the linear kernel for $p$-$c$-\textsc{Disjoint Hit}, we need the following lemma, corresponding to~\cite[Lemma~$6$]{FLM+11}.
\begin{lemma}[Fomin \emph{et al}.~\cite{FLM+11}] \label{lem:FindProtrusions} There is a linear-time algorithm that given an $n$-vertex graph $G$ and a set $S \subseteq V(G)$ such that ${\mathbf{tw}}(G \setminus S) \leq d$, outputs a $2(d + 1)$-protrusion in $G \setminus S$ of size at least
$\frac{n-|S|}{4|N(S)|+1}$, where $N(S)$ is the set of vertices in $V(G) \setminus S$ with at least one neighbor in $S$ and $d$ is a fixed constant. \end{lemma}
The proof of the next lemma is similar to the proof of~\cite[Theorem~$1$]{FLM+11}.
\begin{lemma}
\label{lemma:linearkernel} If $|V_1\cup V_{\geq 2}|=\mathcal{O}(k)$, then $p$-$c$-\textsc{Disjoint Hit} has a kernel with $\mathcal{O}(k)$ vertices. \end{lemma} \begin{proof} For this proof, let $V_{\star}=V_1\cup V_{\geq 2}$, and let $(G,S,k)$ be an instance of $p$-$c$-\textsc{Disjoint Hit} such that
$|V_{\star}|=\mathcal{O}(k)$. As by Theorem~\ref{cor:ExcludeTheta} we have that ${\mathbf{tw}}(G \setminus S)\leq 2c-1$, we can apply Lemma~\ref{lem:FindProtrusions} and obtain in linear time a $2((2c-1)+1)=4c$-protrusion $Y$ of size at least
$\frac{|V(G)|-|S|}{4|V_{\star}|+1}$ in $G\setminus S$. Let $\gamma : \mathbb{N}
\rightarrow \mathbb{N}$ be the function defined in Lemma~\ref{safe:protrusion}. If $\frac{|V(G)|-|S|}{4|V_{\star}|+1} > \gamma(8c+1)$, then using protrusion rule {\bf P} we obtain in time
$\mathcal{O}(|Y|)$ an instance $(G',S,k')$ such that $G[S]$ and $G'[S]$ are isomorphic, $G' \setminus S$ is $c$-pumpkin-minor-free, $|V(G')|<|V(G)|$, $k'\leq k$, and such that $(G',S,k')$ is a \textsc{Yes}-instance of $p$-$c$-\textsc{Disjoint Hit} if and only if $(G,S,k)$ is a \textsc{Yes}-instance of $p$-$c$-\textsc{Disjoint Hit}. We continue applying Lemma~\ref{lem:FindProtrusions} and protrusion rule {\bf P} to the newly obtained instance as long as there is a $4c$-protrusion of size strictly greater than $\gamma(8c+1)$. We would like to note that in the whole process we never delete the vertices of $S$ or those of $V_{\star}$.
Let $(G^*,S,k^*)$ be the reduced instance obtained after this procedure. It follows that there is no $4c$-protrusion of size greater than $\gamma(8c+1)$ in $G^*\setminus S$, so protrusion rule {\bf P} no longer applies. Note that
$k^*\leq k$. We claim that the number of vertices in this graph $G^*$ is bounded by $\mathcal{O}(k)$. Indeed, since we cannot apply protrusion rule {\bf P}, we have that $\frac{|V(G^*)|-|S|}{4|V_{\star}|+1}\leq \gamma(8c+1)$. Therefore, we have that $ |V(G^*)| \leq \gamma(8c+1)
\cdot(4|V_{\star}|+1)+|S|$. Since $|S|= \mathcal{O}(k)$ and by hypothesis
$|V_{\star}|=\mathcal{O}(k)$, we have that $ |V(G^*)|= \mathcal{O}(k)$. This completes the proof.\end{proof}
Lemma~\ref{lemma:linearkernel} clearly implies that, if $|V_1\cup V_{\geq 2}|=\mathcal{O}(k)$, then $p$-$c$-\textsc{Disjoint Hit} can be solved in time
$2^{\mathcal{O}(k)} \cdot n^{\mathcal{O}(1)}$. Nevertheless, the above proof only shows that the {\sl decision version} of $p$-$c$-\textsc{Disjoint Hit} can be solved in single-exponential time, as we have applied reduction rules that may modify the instance. But in the iterative compression routine (see proof of Lemma~\ref{lem:disjointtonondisjoint}), we need to be able to obtain an {\sl explicit solution} $S'\subseteq V(G)\setminus S$ of $p$-$c$-\textsc{Disjoint Hit} in the original instance, with $|S'|\leq k$, if it exists.
We can get this explicit solution (if it exists) by repeatedly applying the single-exponential algorithm for the decision version as follows. Suppose that $(G,S,k)$ is a \textsc{Yes}-instance of $p$-$c$-\textsc{Disjoint Hit}, and let $u_1,u_2,\ldots,u_q$ be an ordering of the vertices of $V(G)\setminus S$. Set $i:=1$ and $U:=\emptyset$. Repeat the following two steps for $i=1,\ldots,q$.
\begin{enumerate} \item Check whether $G\setminus U$ is $c$-pumpkin-free in linear time, using Proposition~\ref{prop:FindingTheta}. If it is the case, then return $U$ as the desired solution. Otherwise, go to the next step. \item Using the single-exponential algorithm for the decision version of $p$-$c$-\textsc{Disjoint Hit}, check whether $G\setminus (\{u_i\}\cup U)$
contains a solution $S^* \subseteq V(G)\setminus (S \cup U \cup \{u_i\})$ of size $k-(|U|+1)$. If it is the case, then set $U:=U\cup \{u_i\}$. Set $i:=i+1$. \end{enumerate} This concludes the description of the algorithm to obtain the desired explicit solution $U$ in the compression step.
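The two-step loop above can be sketched as follows; here \texttt{decide} and \texttt{is\_target\_free} are assumed oracles standing for the single-exponential decision algorithm and the linear-time $c$-pumpkin-freeness test, respectively (the sketch is illustrative and abstracts away the concrete graph structure):

```python
def extract_solution(candidates, decide, is_target_free, k):
    """Self-reduction sketch: turn a decision algorithm into one producing
    an explicit solution U.

    decide(committed, budget): does the instance, with `committed` removed,
    still admit a solution of size <= budget avoiding S and `committed`?
    is_target_free(deleted): is the instance minus `deleted` already clean?
    """
    u = set()
    for v in candidates:
        if is_target_free(u):                # step 1: U already suffices
            return u
        if decide(u | {v}, k - len(u) - 1):  # step 2: can v be committed?
            u.add(v)
    return u if is_target_free(u) else None
```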
\subsection*{Final algorithm} Finally we combine everything to obtain the following result. \begin{theorem} \label{prop:runningTime} For any fixed $c \geq 1$, the $p$-$c$-\textsc{Pumpkin}-\textsc{Hitting} problem can be solved in time $2^{\mathcal{O}(k)} \cdot n^{\mathcal{O}(1)}$. \end{theorem} \begin{proof} To obtain the desired result, by Lemma~\ref{lem:disjointtonondisjoint} and the procedure described after the proof of Lemma~\ref{lemma:linearkernel}, it is sufficient to obtain a single-exponential algorithm for $p$-$c$-\textsc{Disjoint Hit}. To this end, after applying reduction rules {\bf R1}, {\bf R2}, and {\bf R3} in polynomial time, by Lemmas~\ref{lem:2nbr}
and~\ref{lem:1nbr} we have that $|V_1 \cup V_{\geq 2}|= \mathcal{O}(k)$. Thus, using Lemma~\ref{lemma:linearkernel} we get, also in polynomial time, an equivalent instance $(G^*,S,k^*)$ with $\mathcal{O}(k)$ vertices, and hence the problem can be solved by enumerating all subsets of size at most $k^*$ of $G^*\setminus S$. This completes the proof.\end{proof}
\subsection*{Running time analysis.} To conclude this section, we provide a running time analysis of the algorithm given by Theorem~\ref{prop:runningTime} above. We would like to note that we did not focus at all on optimizing the hidden constant in the term $2^{\mathcal{O}(k)}$, so we will just concentrate on the term $n^{\mathcal{O}(1)}$. From the proof of Lemma~\ref{lem:disjointtonondisjoint} it follows that if $p$-$c$-\textsc{Disjoint Hit} can be solved in time $a^k \cdot n^b$ for two constants $a,b$, then $p$-$c$-\textsc{Pumpkin}-\textsc{Hitting}
can be solved in time $(a+1)^k \cdot n^{b+1}$. Let us now focus on $p$-$c$-\textsc{Disjoint Hit}. First note that reduction rules {\bf R1}, {\bf R2} and {\bf R3} can be applied in linear time. Indeed, the connected components of $G \setminus S$ can be listed by successively performing BFS in time $\mathcal{O}(|V(G \setminus S)| + |E(G \setminus S)|)$, which equals
$\mathcal{O}(|V(G \setminus S)|)$ as the graph $G \setminus S$ has bounded treewidth by Theorem~\ref{cor:ExcludeTheta}. By Proposition~\ref{prop:FindingTheta}, testing for the existence of a $c$-pumpkin-minor can also be performed in linear time. As for protrusion rule {\bf P}, it can also be performed in linear time by Lemmas~\ref{safe:protrusion} and~\ref{lem:FindProtrusions}. As each of these reduction rules is applied at most $\mathcal{O}(k \cdot n)$ times, and as once we have a linear kernel the problem can be solved exhaustively in time $2^{\mathcal{O}(k)}$, we conclude that $p$-$c$-\textsc{Disjoint Hit} can be solved in time $2^{\mathcal{O}(k)} \cdot n^2$, and therefore the algorithm given by Theorem~\ref{prop:runningTime} solves the $p$-$c$-\textsc{Pumpkin}-\textsc{Hitting} problem in time $2^{\mathcal{O}(k)} \cdot n^3$. We feel that there is room for improvement in this running time, as it was not our main objective to optimize it.
\section{An approximation algorithm for hitting and packing pumpkins} \label{sec:combinatorial}
In this section we show that every $n$-vertex graph $G$ either contains a small $c$-pumpkin-model or has a structure that can be reduced, giving a smaller equivalent instance for both the \textsc{Minimum} $c$-\textsc{Pumpkin}-\textsc{Hitting} and the \textsc{Maximum} $c$-\textsc{Pumpkin}-\textsc{Packing} problems. Here by a ``small'' $c$-pumpkin-model, we mean a model of size at most $f(c)\cdot \log n$ for some function $f$ independent of $n$. We finally use this result to derive an $\mathcal{O}(\log n)$-approximation algorithm for both problems.
This section is organized as follows. We first describe in Section~\ref{sec:reduction_rules} our reduction rules and prove their validity for both hitting and packing problems (see Lemma~\ref{lem:both}). The existence of small $c$-pumpkin-models in $c$-reduced graphs is provided in Section~\ref{sec:smallPumpkins} (see Lemma~\ref{lem:smallmodelsreducedgraphs}); its proof strongly relies on a graph structure that we call \emph{hedgehog}, which we study in Section~\ref{sec:hedgehogs}. We finally focus in Section~\ref{sec:1stAlgo} on the algorithmic consequences of our results (see Theorem~\ref{thm:logn_packing_covering}).
\subsection{Reduction rules} \label{sec:reduction_rules}
We describe two reduction rules for hitting/packing $c$-pumpkin-models which, given an input graph $G$ satisfying some specific conditions, produce a graph $H$ with fewer vertices than $G$ satisfying $\tau_{c}(G)= \tau_{c}(H)$ and $\nu_{c}(G)= \nu_{c}(H)$. Moreover, these operations are defined in such a way that, for both problems, an optimal (resp.\ approximate) solution for $G$ can be retrieved in polynomial time from an optimal (resp.\ approximate) solution for $H$.
Given a graph $G$ and two distinct vertices $u, v$ of $G$, we write $G +_{k} uv$ for the graph obtained from $G$ by adding $k$ parallel edges linking $u$ to $v$. A {\em $c$-outgrowth} of a graph $G$ is a triple $(K, u, v)$ such that
\begin{itemize} \item[(i)] $u,v$ are two distinct vertices of $G$;
\item[(ii)] $K$ is a connected component of $G \setminus \{u, v\}$ with $|V(K)| \geq 1$;
\item[(iii)] $u$ and $v$ both have at least one neighbor in $K$ in the graph $G$; and \item[(iv)] the graph $\Gamma(K, u, v)$ obtained from $G[V(K) \cup \{u,v\}]$ by removing all the edges between $u$ and $v$ has no $c$-pumpkin-minor. \end{itemize}
Given a $c$-outgrowth $(K, u, v)$, we let $\gamma(K, u, v) := c - k$, where $k$ is the smallest integer such that $\Gamma(K, u, v) +_{k} uv$ has a $c$-pumpkin-minor. Observe that, when adding $k$ parallel edges between $u$ and $v$ to $\Gamma(K, u, v)$, there are two distinct ``types'' of $c$-pumpkin-models $\{A, B\}$ that could appear: Exchanging $A$ and $B$ if necessary, we either have $u \in A$ and $v\in B$ (first type), or $u, v\in A$ (second type). (Possibly both types of models coexist.) Note that we always have $\gamma(K, u, v) = c - 1$ when $\Gamma(K, u, v) +_{k} uv$ contains a $c$-pumpkin-model of the second type. See Figure~\ref{fig:outgrowth} for an illustration.
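Since adding $c$ parallel edges between $u$ and $v$ always creates a $c$-pumpkin-model (take $A=\{u\}$ and $B=\{v\}$), the integer $k$ in the definition of $\gamma(K, u, v)$ satisfies $k \leq c$, so $\gamma$ can be computed with at most $c+1$ minor tests. A Python sketch, where the oracle \texttt{has\_minor\_with\_extra\_edges} is an assumption standing for the $c$-pumpkin-minor test on $\Gamma(K, u, v) +_{k} uv$:

```python
def compute_gamma(has_minor_with_extra_edges, c):
    """Compute gamma(K, u, v) = c - k, where k is the least number of extra
    parallel uv-edges making Gamma(K, u, v) contain a c-pumpkin-minor.
    The loop terminates by k = c at the latest, since c parallel edges
    alone already form a c-pumpkin."""
    for k in range(c + 1):
        if has_minor_with_extra_edges(k):
            return c - k
    raise ValueError("oracle inconsistent: k = c must succeed")
```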
\begin{figure}
\caption{The graph $\Gamma(K, u, v)$ of two $c$-outgrowths $(K, u, v)$ for $c=11$ (left) and $c=7$ (right), respectively. We have $\gamma(K, u, v) = 9$ for the left one, and $\gamma(K, u, v) = 6$ for the right one. }
\label{fig:outgrowth}
\end{figure}
Now that we are equipped with these definitions and notations, we may describe the two reduction rules, which depend on the value of the positive integer $c$.
\begin{enumerate} \item[{\bf Z1}] Suppose $v$ is a vertex of $G$ such that no block
of $G$ containing $v$ has a $c$-pumpkin-minor. Then define $H$ as the graph obtained from $G$ by removing $v$.
\item[{\bf Z2}] Suppose that $(K, u, v)$ is a $c$-outgrowth of $G$.
Then define $H$ as the graph obtained from $G \setminus V(K)$ by adding $\gamma(K, u, v)$ parallel edges between $u$ and $v$. \end{enumerate}
\begin{figure}
\caption{Illustration of reduction rule {\bf Z2} on the two outgrowths from Figure~\ref{fig:outgrowth}.}
\label{fig:Z2}
\end{figure}
See Figure~\ref{fig:Z2} for an illustration of {\bf Z2}.
We note that testing for the existence of a $c$-pumpkin-minor in a graph $G$ can be done in polynomial time when $c$ is fixed by Proposition~\ref{prop:FindingTheta}. Moreover, if there is one, an explicit $c$-pumpkin-model can be computed. This follows from classical results of Robertson and Seymour~\cite{RS95}, and will be used implicitly in the subsequent proofs. Note that, in particular, testing whether a vertex $v$ is in a block containing a $c$-pumpkin-minor can be done in polynomial time. Similarly, testing whether a triple $(K, u, v)$ with $K$ a component of $G - \{u,v\}$ is a $c$-outgrowth can be done in polynomial time, and the parameter $\gamma(K, u, v)$ can be computed in polynomial time as well. Therefore, we can check in polynomial time if {\bf Z1} or {\bf Z2} can be applied to a given graph, and each of these two reduction rules can be realized in polynomial time.
A graph $G$ is said to be {\em $c$-reduced} if neither {\bf Z1} nor {\bf Z2} can be applied to $G$. The next lemma shows the validity of these reduction rules.
\begin{lemma} \label{lem:both} Let $c$ be a fixed positive integer. Suppose that $H$ results from the application of {\bf Z1} or {\bf Z2} on a graph $G$. Then \begin{itemize} \item[(a)] $\tau_{c}(G) = \tau_{c}(H)$ and moreover, given a $c$-hitting set $X'$ of $H$, one can compute in polynomial time a
$c$-hitting set $X$ of $G$ with $|X| \leq |X'|$.
\item[(b)] $\nu_{c}(G) = \nu_{c}(H)$ and moreover, given a $c$-packing $\mathcal{M}'$ of $H$, one can compute in polynomial time a $c$-packing
$\mathcal{M}$ of $G$ with $|\mathcal{M}| = |\mathcal{M}'|$. \end{itemize} \end{lemma}
In order to prove Lemma~\ref{lem:both}, we first need to introduce a few technical lemmas; the validity of the reduction rules is shown in Lemmas~\ref{lem:valid_covering} and~\ref{lem:valid_packing} at the end of this section, which correspond to Lemma~\ref{lem:both}(a) and Lemma~\ref{lem:both}(b), respectively.
\begin{lemma} \label{lem:intermediate1} Let $c$ be a fixed positive integer. Suppose $H$ is obtained by applying rule {\bf Z2} on a $c$-outgrowth $(K, u, v)$ of a graph $G$. Let $X$ be an arbitrary subset of vertices of $V(G) \setminus (V(K) \cup \{u,v\})$. Then, given a $c$-pumpkin-model of $H \setminus X$, one can find in polynomial time a $c$-pumpkin-model of $G \setminus X$. \end{lemma} \begin{proof} Let $\Gamma:= \Gamma(K, u, v)$, $\gamma :=\gamma(K, u, v)$, and $k:= c - \gamma$. Let $\{A, B\}$ denote the given $c$-pumpkin-model of $H \setminus X$.
If $u \notin A \cup B$ or $v \notin A \cup B$, then $\{A, B\}$ is a $c$-pumpkin-model in $G \setminus X$ and we are done. Thus, exchanging $A$ and $B$ if necessary, we may assume that either $u, v \in A$, or $u\in A$ and $v \in B$. In the first case, since $G[A \cup V(K)]$ is connected, $\{A \cup V(K), B\}$ is a $c$-pumpkin-model in $G \setminus X$. Now suppose that $u\in A$ and $v \in B$. We need to consider which types of $c$-pumpkin-models appear in $\Gamma +_{k} uv$.
If $\Gamma +_{k} uv$ contains a $c$-pumpkin-model $\{A', B'\}$ with $u \in A'$ and $v \in B'$ then there are exactly $\gamma$ edges linking $A'$ to $B'$ in the graph $\Gamma$, and hence $\{A \cup A', B\cup B'\}$ is a $c$-pumpkin-model in $G\setminus X$, as desired.
If $\Gamma +_{k} uv$ has a $c$-pumpkin-model $\{A', B'\}$ with $u, v \in A'$, then $k=1$ and $\gamma = c - 1$. In $H[A \cup B]$ there is a path $P$ linking $u$ to $v$ that avoids the $c-1$ edges that resulted from the application of {\bf Z2} on the $c$-outgrowth $(K, u, v)$: indeed, at least $c$ edges link $A$ to $B$ in $H$, so at least one of them is not an added parallel edge. (Note that $P$ could possibly consist of a single edge linking $u$ to $v$.) Then $\{A' \cup V(P), B'\}$ is a $c$-pumpkin-model in $G\setminus X$.
\end{proof}
Next we show that the converse of the above lemma also holds.
\begin{lemma} \label{lem:intermediate2}
Let $c$ be a fixed positive integer. Suppose $H$ is obtained by applying rule {\bf Z2} on a $c$-outgrowth $(K, u, v)$ of a graph $G$. Let $X$ be an arbitrary subset of vertices of $V(G) \setminus (V(K) \cup \{u,v\})$. Then, given a $c$-pumpkin-model of $G \setminus X$, one can find in polynomial time a $c$-pumpkin-model of $H \setminus X$. \end{lemma} \begin{proof} Let $\Gamma:= \Gamma(K, u, v)$, $\gamma :=\gamma(K, u, v)$, and $k:= c - \gamma$. Let $\{A, B\}$ denote the given $c$-pumpkin-model of $G \setminus X$. We may assume that this model is minimal (if not, one can obviously make it minimal in polynomial time).
If $u \notin A \cup B$ or $v \notin A \cup B$, then by minimality of $\{A, B\}$ both $A$ and $B$ avoid $V(K)$. Thus $\{A, B\}$ is a $c$-pumpkin-model in $H \setminus X$, and we are done. Hence, exchanging $A$ and $B$ if necessary, we may assume that either $u, v \in A$, or $u\in A$ and $v \in B$. In the second case, at most $\gamma$ edges between $A$ and $B$ in $G \setminus X$ are included in $\Gamma$. Since there are $\gamma$ extra edges between $u$ and $v$ in $H$ compared to $G$, it follows that $\{A \setminus V(K), B \setminus V(K)\}$ is a $c$-pumpkin-model in $H \setminus X$.
Now suppose that $u, v\in A$. If $B \subseteq V(K)$ then all edges between $A$ and $B$ in $G \setminus X$ are in $\Gamma$. Let $P$ be a path in $G[A]$ linking $u$ to $v$. (Note that the path $P$ possibly consists of a single edge.) Then $P$ is disjoint from $V(K)$, as otherwise $P \subseteq \Gamma$ and $\{A \cap V(\Gamma), B\}$ would be a $c$-pumpkin-model in $\Gamma$. Thus in particular $k=1$ and $\gamma = c - 1$. Since there are $c-1$ extra edges between $u$ and $v$ in $H$ compared to $G$, and $P$ avoids all these edges, $\{ \{u\}, \{v\} \cup (V(P) \setminus \{u\}) \}$ is a $c$-pumpkin-model in $H \setminus X$.
If $B \not \subseteq V(K)$ then $B$ is disjoint from $V(K)$. Since $u$ and $v$ are linked by at least $\gamma \geq 1$ edges in $H$, the graph $H[A \setminus V(K)]$ is connected, and it follows that $\{A \setminus V(K), B\}$ is a $c$-pumpkin-model in $H \setminus X$. \end{proof}
\begin{lemma} \label{lem:valid_covering} Let $c$ be a fixed positive integer. Suppose $H$ results from the application of {\bf Z1} or {\bf Z2} on a graph $G$. Then $\tau_{c}(G) = \tau_{c}(H)$. Moreover, every $c$-hitting set $X'$ of $H$ is also a $c$-hitting set of $G$. \end{lemma} This lemma implies that an optimal solution to the \textsc{Minimum} $c$-\textsc{Pumpkin}-\textsc{Hitting} problem on $G$ can be computed given one for $H$, and similarly that an approximate solution for $G$ can be obtained from an approximate solution for $H$. This will be used in our approximation algorithms in Section~\ref{sec:1stAlgo}.
\begin{proof}[Proof of Lemma~\ref{lem:valid_covering}] First suppose $H$ results from the application of {\bf Z1} on $G$ with vertex $v$. We trivially have $\tau_{c}(G) \geq \tau_{c}(H)$. Let $X'$ be a given $c$-hitting set of $H$. If $X'$ is not a $c$-hitting set of $G$, then $G\setminus X'$ has a $c$-pumpkin-model; let $\{A, B\}$ be a minimal one. We have $v\in A \cup B$ since otherwise $\{A, B\}$ would be a $c$-pumpkin-model in $H \setminus X'$. By the minimality of $\{A, B\}$, we must have $A \cup B \subseteq V(K)$ for some block $K$ of $G$. But then $K$ is a block of $G$ including $v$ and containing a $c$-pumpkin-minor, contradicting the assumptions of {\bf Z1}. Therefore $X'$ is a $c$-hitting set of $G$, and $\tau_{c}(G) \leq \tau_{c}(H)$ also holds, implying $\tau_{c}(G) = \tau_{c}(H)$.
Now assume $H$ has been obtained by applying {\bf Z2} on $G$ with $c$-outgrowth $(K, u, v)$, and let $\Gamma:= \Gamma(K, u, v)$.
First we show $\tau_{c}(G) \geq \tau_{c}(H)$. Let $X$ be a minimum $c$-hitting set of $G$. If $u\in X$ or $v\in X$, then $X$ is trivially a $c$-hitting set of $H$, so let us assume $u,v\notin X$. Moreover, we may suppose that $X$ has no vertex in $K$, since otherwise we could replace all such vertices with the vertex $u$ (or equivalently $v$). Since $X \subseteq V(G) \setminus V(\Gamma)$ and $G \setminus X$ has no $c$-pumpkin-minor, it follows from Lemma~\ref{lem:intermediate1} that $H \setminus X$ has no $c$-pumpkin-minor either, that is, $X$ is a $c$-hitting set of $H$. This shows $\tau_{c}(G) \geq \tau_{c}(H)$.
Now we prove that $\tau_{c}(G) \leq \tau_{c}(H)$ also holds. Here we show that, given a $c$-hitting set $X'$ of $H$, the set $X'$ is also a $c$-hitting set of $G$. Hence, this will also prove the second part of the lemma. If $u \in X'$ or $v \in X'$, then $X'$ is trivially a $c$-hitting set of $G$. If $u, v \notin X'$, then Lemma~\ref{lem:intermediate2} implies that $G \setminus X'$ has no $c$-pumpkin-minor, that is, that $X'$ is a $c$-hitting set of $G$. This shows $\tau_{c}(G) \leq \tau_{c}(H)$, and therefore $\tau_{c}(G) = \tau_{c}(H)$. \end{proof}
We conclude this section with a lemma similar to Lemma~\ref{lem:valid_covering} for $c$-packings.
\begin{lemma} \label{lem:valid_packing} Let $c$ be a fixed positive integer. Suppose $H$ results from the application of {\bf Z1} or {\bf Z2} on a graph $G$. Then $\nu_{c}(G) = \nu_{c}(H)$. Moreover, given a $c$-packing $\mathcal{M}'$ of $H$ one can compute in polynomial time a $c$-packing $\mathcal{M}$ of $G$ with
$|\mathcal{M}| = |\mathcal{M}'|$. \end{lemma} \begin{proof} First suppose $H$ results from the application of {\bf Z1} on $G$ with vertex $v$. Clearly, every $c$-packing of $H$ is a $c$-packing for $G$. Thus $\nu_{c}(G) \geq \nu_{c}(H)$, and it is enough to show the reverse inequality. Consider a $c$-packing of $G$. We may assume that every $c$-pumpkin-model in that packing is minimal. Thus each such model is contained in some block of $G$, and hence
avoids the vertex $v$. Therefore the packing also exists in $H$, implying $\nu_{c}(G) \leq \nu_{c}(H)$ and $\nu_{c}(G) = \nu_{c}(H)$, as desired.
Now assume $H$ has been obtained by applying {\bf Z2} on $G$ with outgrowth $(K, u, v)$.
First we show $\nu_{c}(G) \geq \nu_{c}(H)$. Let $\mathcal{M}'=\{M'_{1}, \dots, M'_{k}\}$ be a given $c$-packing of $H$. We show that a packing of the same size in $G$ can be computed in polynomial time, which will prove the second part of the lemma.
If every $M'_{i}$ avoids at least one of $u, v$ then the packing $\mathcal{M} := \mathcal{M}'$ is a $c$-packing in $G$ and we are done. So assume one model in the collection, say without loss of generality $M'_{1}$, includes both $u$ and $v$. Let $X$ be the union of the vertices in $M'_{2}, \dots, M'_{k}$. Since $M'_{1}$ is a $c$-pumpkin-model in $H\setminus X$, using Lemma~\ref{lem:intermediate1} we can compute in polynomial time a $c$-pumpkin-model $M_{1}$ in $G\setminus X$. Hence $\mathcal{M} :=\{M_{1},M'_{2}, \dots, M'_{k}\}$ is a $c$-packing of the desired size in $G$.
In order to prove $\nu_{c}(G) = \nu_{c}(H)$ it remains to show $\nu_{c}(G) \leq \nu_{c}(H)$. Let $\{M_{1}, \dots, M_{k}\}$ be a $c$-packing of $G$. We may assume that each $M_{i}$ is minimal. Thus if some $M_{i}$ contains some vertex of $K$ then $M_{i}$ contains both $u$ and $v$. If there is no such model in the packing then $\{M_{1}, \dots, M_{k}\}$ is also a $c$-packing of $H$ and we are done. We may thus assume that some model in the packing, say without loss of generality $M_{1}$, contains both $u$ and $v$. As before, let $X$ be the union of the vertices in $M_{2}, \dots, M_{k}$. Using Lemma~\ref{lem:intermediate2} with $M_{1}$ and $X$ we find a $c$-pumpkin-model $M'_{1}$ in $H\setminus X$. Thus $\{M'_{1},M_{2}, \dots, M_{k}\}$ is a $c$-packing of size $k$ in $H$, as desired. \end{proof}
\subsection{Hedgehogs} \label{sec:hedgehogs}
Recall that a graph is said to be a {\em multipath} if its underlying simple graph is isomorphic to a path. If $P$ is a multipath and $u,v \in V(P)$, we write $uPv$ for the subgraph of $P$ induced by the vertices on a $u$--$v$ path in $P$ (thus edges in $uPv$ have the same multiplicities as in $P$).
A {\em hedgehog} is a pair $(H, P)$, where $H$ is a graph and $P$ is an induced multipath of $H$ with $|P| \geq 2$ and such that \begin{itemize} \item[(i)] the (possibly empty) set $S := V(H) \setminus V(P)$ is a stable set of $H$; and
\item[(ii)] every vertex in $S$ has at least two neighbors in $P$. \end{itemize} (Let us recall that a {\em stable set} is a set of vertices such that no two of them are adjacent.)
Consider a hedgehog $(H, P)$. Its {\em size} is defined as $|P|$, the number of vertices in $P$. A {\em bad cutset} of $(H, P)$ is a set $X=\{u,v\}$ of two {\em internal} vertices of $P$ such that $H \setminus X$ has a connected component $K$ avoiding both endpoints of $P$.
This definition is motivated by reduction rule {\bf Z2}: First, if $K$ is such a component, then $u$ and $v$ each have at least one neighbor in $K$. This is because either $K$ contains the subpath of $P$ strictly between $u$ and $v$, or $K$ consists of a unique vertex of $V(H) \setminus V(P)$ which is then adjacent to $u$ and $v$ (by condition (ii) in the definition of hedgehogs). Hence either $(K, u, v)$ is a $c$-outgrowth of $H$, or one can find a $c$-pumpkin-minor in $H[V(K) \cup \{u,v\}]$.
A {\em rooted} $c$-pumpkin-model of $(H, P)$ is a $c$-pumpkin-model $\{A, B\}$ of $H$ with the extra property that $A$ and $B$ both contain an endpoint of $P$.
Given a hedgehog $(H, P)$ and a connected induced subgraph $Q$ of $P$ with $|Q| \geq 2$, one can define a hedgehog $(H', Q)$ as follows: First, remove from $H$ every vertex not in $P$ that has no neighbor in $Q$. Then contract every edge of $P$ not included in $Q$. Finally, remove from the graph every vertex not in $Q$ that has only one neighbor in $Q$. This defines the graph $H'$. We leave it to the reader to check that $(H', Q)$ is indeed a hedgehog; we say that $(H', Q)$ is the {\em contraction} of $(H, P)$ on the multipath $Q$. See Figure~\ref{fig:hedgehog} for an illustration of this operation. The following lemma is a direct consequence of the definition.
\begin{figure}
\caption{A hedgehog $(H, P)$ (left) and a contraction $(H', Q)$ of $(H, P)$ (right). The multipaths $P$ and $Q$ are drawn in bold.}
\label{fig:hedgehog}
\end{figure}
\begin{lemma} \label{lem:hedgehog_cutset} If $(H', Q)$ is a contraction of a hedgehog $(H, P)$ and $X$ is a bad cutset of $(H', Q)$, then $X$ is also a bad cutset of $(H, P)$. \end{lemma}
We show that every sufficiently large hedgehog has a rooted $c$-pumpkin-model or a bad cutset. This fact will be useful in the subsequent proofs.
\begin{lemma} \label{lem:hedgehog} Let $c$ be a fixed positive integer. Then every hedgehog $(H, P)$ of size at least $(2c)^{2c}$ contains a rooted $c$-pumpkin-model or a bad cutset, either of which can be found in polynomial time. \end{lemma} \begin{proof} The proof is by induction on $c$. The base case $c=1$ is trivial since $P$ directly gives a rooted $1$-pumpkin-model. For the inductive step, assume $c > 1$. Define $f(k)$, for a positive integer $k$, as $f(k) := (2k)^{2k}$. Let $S:= V(H) \setminus V(P)$. Let $a, b$ be the endpoints of $P$.
If a vertex $v\in S$ has at least $c$ neighbors in $P$, then let $w$ be the neighbor of $v$ that is closest to $a$ on $P$. Then $A:= V(aPw) \cup \{v\}$ and $B:= V(P) \setminus A$ both induce a connected subgraph of $H$. Moreover, there are at least $c-1$ edges from $v$ to $B$, and at least one from $A \setminus \{v\}$ to $B$ (because of $P$). Since $a\in A$ and $b\in B$, we deduce that $\{A, B\}$ is a rooted $c$-pumpkin-model of $(H, P)$. Thus we may assume that every vertex in $S$ has at most $c-1$ neighbors in $P$. In particular we have $c \geq 3$, since every vertex in $S$ has at least two neighbors in $P$.
The multipath $P$, seen from its endpoint $a$, induces a natural linear ordering of the neighbors of a given vertex in $S$; we say that two such neighbors are {\em consecutive} if they are consecutive in that ordering.
Suppose that there exists a vertex $v\in S$ with two consecutive neighbors $x, y$ such that $|xPy| \geq f(c-1) + 2$. Consider the contraction $(H', Q)$ of
$(H, P)$ on the multipath $Q:= xPy \setminus \{x, y\}$. Since $|Q| \geq f(c-1)$, by induction $(H', Q)$ has a rooted $(c-1)$-pumpkin-model $\{A' , B'\}$ or a bad cutset $X$. If the latter holds, then by Lemma~\ref{lem:hedgehog_cutset} the set $X$ is also a bad cutset of $(H, P)$ and we are done. Thus we may assume the former holds. In the graph $H$, the vertex $v$ has no neighbor in $Q$, thus $v$ is not included in $H'$. Hence, we can obtain a rooted $c$-pumpkin-model $\{A, B\}$ in $(H, P)$ by setting $A:= A' \cup V(aPx) \cup \{v\}$ and $B:= B' \cup V(yPb)$. Therefore we can assume that, for every vertex $v \in S$, every two consecutive neighbors of $v$ are at distance at most $f(c-1)$ on $P$.
Let us enumerate the vertices of $P$ in order as $p_{1}, p_{2}, \dots, p_{k}$, with $p_{1}=a$ and $p_{k}=b$. We may assume that, for every $i\in \{3, \dots, k-2\}$, \begin{equation} \label{eq:exposed} \textrm{$p_{i}$ is adjacent to some vertex in $S$.} \end{equation}
Indeed, if not then $\{p_{i-1}, p_{i+1}\}$ would be a bad cutset of $(H, P)$. Since $k = |P| \geq f(c) \geq f(3) \geq 5$, this implies in particular that $S$ is not empty.
Define an {\em open} interval $I_{v}=(i, j)$ for every vertex $v\in S$, where $i$ (resp.\ $j$) is the smallest (resp.\ largest) index $t$ such that $p_{t}$ is a neighbor of $v$ in $H$. (Observe that $i < j$ since $v$ has at least two neighbors.) Now, let $G$ be the interval graph defined by these open intervals, that is, let $V(G) := S$, and for every two distinct vertices $v, w\in S$, make $v$ adjacent to $w$ in $G$ if and only if $I_{v} \cap I_{w} \neq \emptyset$.
For a connected subgraph $G'$ of $G$, we define $I(G')$ as the union of the intervals of vertices in $G'$, that is, $I(G') := \bigcup\{I_{v}: v\in V(G')\}$. Observe that, since $G'$ is connected, we have $I(G') = (i,j)$ for some integers $i,j$ with $1 \leq i < j \leq k$.
First suppose that $G$ has at least three connected components. The ordering $p_{1}, \dots, p_{k}$ of the vertices of $P$ induces an ordering of these components; let $C$, $C'$, $C''$ be three consecutive connected components in that ordering. Let $(i, j):= I(C)$, $(i', j'):= I(C')$, and $(i'', j''):= I(C'')$. Then we have $1 \leq i < j \leq i' < j' \leq i'' < j'' \leq k$, and every vertex of $S$ that is adjacent to some vertex strictly between $p_{i'}$ and $p_{j'}$ on $P$ has all its neighbors in the set $\{p_{i'}, p_{i' + 1}, \dots, p_{j'}\}$. Thus, for each $w\in V(C')$, the component $K$ of $H - \{p_{i'}, p_{j'}\}$ that contains $w$ avoids both endpoints of $P$. It follows that $\{p_{i'}, p_{j'}\}$ is a bad cutset of $(H, P)$. Hence, we may assume that $G$ has at most two connected components.
Since $G$ has at most two connected components, using~\eqref{eq:exposed} we deduce that $G$ has a connected component $C$ with $I(C)= (x, y)$ such that
\begin{equation}
\label{eq:Q} y - x + 1 \geq \frac{|P| - 4}{2} \geq \frac{f(c) - 4}{2}. \end{equation}
Let $Q := p_{x}Pp_{y}$ and let $(H', Q)$ be the contraction of $(H, P)$ on $Q$. (Note that possibly $Q=P$, in which case $(H', Q)=(H, P)$.) We will show that $(H', Q)$ contains a rooted $c$-pumpkin-model. The lemma will then follow, since such a model can be extended straightforwardly to one of $(H, P)$.
First let us observe that $H'$ is an induced subgraph of $H$. This is because, by our choice of $Q$, every vertex of $S$ that is adjacent to at least two vertices of $Q$, or to at least one internal vertex of $Q$, has all its neighbors in $Q$. Let $S' := V(H') \setminus V(Q) = V(C)$. For a vertex $u\in S'$, let us denote by $\ell(u)$ and $r(u)$ the two integers such that $I_{u} = (\ell(u), r(u))$.
It follows from our assumptions on $(H, P)$ that, for each $u \in S'$, the vertex $u$ has at most $c-1$ neighbors in $Q$ and every two consecutive neighbors of $u$ are at distance at most $f(c-1)$ on $Q$. This implies \begin{equation} \label{eq:length} r(u) - \ell(u) \leq (c-2)f(c-1) \end{equation} for each $u \in S'$.
In $H'$, the vertices $p_{x}$ and $p_{y}$ each have at least one neighbor in $S'$. Let $v \in S'$ be a neighbor of $p_{x}$ maximizing $r(v)$, and let $w\in S'$ be a neighbor of $p_{y}$ minimizing $\ell(w)$. Let $Z$ be a shortest $v$--$w$ path in the interval graph $G$; enumerate the vertices of $Z$ as $z_{1}, z_{2}, \dots, z_{m}$ with $z_{1} = v$ and $z_{m}=w$.
By our choice of $v,w$ and the fact that $Z$ is a shortest $v$--$w$ path, we have \begin{align} \label{eq:endpoints1} \ell(z_{j}) &< \ell(z_{j + 1}) \\ \label{eq:endpoints2} r(z_{j}) &< r(z_{j+1}) \end{align} for each $j \in \{1, \dots, m-1\}$, and \begin{equation} \label{eq:endpoints3} r(z_{j}) \leq \ell(z_{j + 2}) \end{equation} for each $j \in \{1, \dots, m-2\}$.
Since $I(Z)=I(C)=(x,y)$ we have $y \leq x + \sum_{j=1}^{m} (r(z_{j}) - \ell(z_{j}))$. Hence $y - x \leq m(c-2)f(c-1)$ by~\eqref{eq:length}. Using~\eqref{eq:Q} we then obtain \begin{equation} m \geq \frac{y - x}{(c-2)f(c-1)} \geq \frac{f(c) - 6}{2(c-2)f(c-1)} = \frac{(2c)^{2c} - 6}{2(c-2)(2c - 2)^{2c-2}} \geq c. \end{equation}
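The final inequality in this chain is stated without detail; for completeness (recall that $c \geq 3$ at this point of the proof), since $(2c)^{2c} = 4c^{2}\,(2c)^{2c-2} \geq 4c^{2}\,(2c-2)^{2c-2}$, we have
$$(2c)^{2c} - 2c(c-2)(2c - 2)^{2c-2} \geq \bigl(4c^{2} - 2c^{2} + 4c\bigr)(2c-2)^{2c-2} = (2c^{2} + 4c)(2c-2)^{2c-2} \geq 6,$$
and dividing the resulting inequality $(2c)^{2c} - 6 \geq 2c(c-2)(2c-2)^{2c-2}$ by $2(c-2)(2c-2)^{2c-2}$ yields $m \geq c$.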
Let $d := \lfloor m/2 \rfloor$. Define, for $i\in \{x, \dots, y\}$, the set $J(p_{i})$ as the set of indices $j\in \{1, \dots, 2d\}$ such that $i \in \{\ell(z_{j}), r(z_{j})\}$ (let us emphasize that the latter set is not an interval but just a $2$-element set). We say that $p_{i}$ is a {\em breakpoint} of $Q$ if $J(p_{i})$ is not empty. (Thus $p_{x}$ is a breakpoint in particular.) It is a consequence of \eqref{eq:endpoints1}, \eqref{eq:endpoints2}, and
\eqref{eq:endpoints3} that, if $|J(p_{i})| > 1$, then $J(p_{i}) = \{j, j+2\}$ for some $j\in \{1, \dots, 2d - 2\}$. In particular, the numbers in $J(p_{i})$ always have the same parity.
We color the vertices in $V(Q) \cup \{z_{1},\dots, z_{2d}\}$ in {\em black} or {\em white} as follows. First, for every $j \in \{1, \dots, 2d\}$, color $z_{j}$ black if $j$ is odd, white if $j$ is even. Next color every breakpoint $p_{i}$ of $Q$ with the color corresponding to the parity of the numbers in $J(p_{i})$ (namely, black for odd and white for even). Finally, color every uncolored vertex of $Q$ with the color of the closest breakpoint of $Q$ in the direction of $p_{x}$. See Figure~\ref{fig:coloring} for an illustration of the coloring.
\begin{figure}
\caption{Illustration of the black/white coloring of the vertices in $V(Q) \cup \{z_{1},\dots, z_{2d}\}$.}
\label{fig:coloring}
\end{figure}
Let $A$ and $B$ be the set of black and white vertices, respectively. By construction, $p_{x} \in A$ and $p_{y} \in B$, and each of $A,B$ induces a connected subgraph of $H'$. Moreover, there are $2d+1 \geq m \geq c$ edges of $Q$ whose endpoints received distinct colors. It follows that $\{A, B\}$ is a rooted $c$-pumpkin-model of $(H', Q)$, as desired.
We have shown that $(H, P)$ always has a rooted $c$-pumpkin-model or a bad cutset. Moreover, it is easily seen from the proof given above that each of these can be found in polynomial time. This concludes the proof of the lemma. \end{proof}
We note that no effort has been made in order to optimize the constants in Lemma~\ref{lem:hedgehog}.
\subsection{Small pumpkins in reduced graphs} \label{sec:smallPumpkins}
Our goal is to prove that every $n$-vertex $c$-reduced graph $G$ has a $c$-pumpkin-model of size $\mathcal{O}(\log n)$, where $c$ is a fixed constant. We will use the following recent result by Fiorini {\it et al}.~\cite{FJTW10} about the existence of small minors in {\sl simple} graphs with large average degree.
\begin{theorem}[Fiorini {\it et al}.~\cite{FJTW10}] \label{thm:smallminors} There is a function $h$ such that every $n$-vertex simple graph $G$ with average degree at least $2^{t}$ contains a $K_t$-model with at most $h(t) \cdot \log n$ vertices. Moreover, such a model can be computed in polynomial time. \end{theorem}
Since a $K_{t}$-model in a graph directly gives a $c$-pumpkin-model of the same size for $c = (\lfloor t/2 \rfloor)^{2}$, we have the following corollary from Theorem~\ref{thm:smallminors}, which is central in the proof of Lemma~\ref{lem:smallmodelsreducedgraphs}.
\begin{corollary} \label{cor:smallminors} There is a function $h$ such that every $n$-vertex simple graph $G$ with average degree at least $2^{2\sqrt{c} + 1}$ contains a $c$-pumpkin-model with at most $h(c) \cdot \log n$ vertices. Moreover, such a model can be computed in polynomial time. \end{corollary}
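The reduction behind Corollary~\ref{cor:smallminors} can be spelled out as follows. Given a $K_{t}$-model, split its $t$ branch sets into two groups of $\lfloor t/2 \rfloor$ and $\lceil t/2 \rceil$ sets, and let $A$ and $B$ be the unions of the two groups. Since every two branch sets of a $K_{t}$-model are linked by an edge, both $A$ and $B$ induce connected subgraphs, and at least
$$\lfloor t/2 \rfloor \cdot \lceil t/2 \rceil \geq (\lfloor t/2 \rfloor)^{2} = c$$
edges link $A$ to $B$, one for each pair of branch sets on opposite sides. Thus $\{A, B\}$ is a $c$-pumpkin-model on the same vertex set as the $K_{t}$-model, and choosing the smallest $t$ with $(\lfloor t/2 \rfloor)^{2} \geq c$ gives a degree threshold of the form $2^{\mathcal{O}(\sqrt{c})}$, as stated in the corollary.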
The next lemma states the existence of small $c$-pumpkin-models in a $c$-reduced graph $G$. Its proof relies on Lemma~\ref{lem:hedgehog} on hedgehogs and on Corollary~\ref{cor:smallminors}, and can be briefly summarized as follows. A hedgehog in $G$ that is large enough for Lemma~\ref{lem:hedgehog} to apply, but at the same time not too big, witnesses the existence of either a small $c$-pumpkin-model or a $c$-outgrowth. Hence we may assume that no such hedgehog exists in $G$. The latter fact is then used either to find a small $c$-pumpkin-model directly, or to find a dense-enough minor that is not ``too far'' from $G$, in the sense that it is obtained by contracting disjoint connected subgraphs of $G$ of bounded radius. In the latter case, we apply Corollary~\ref{cor:smallminors} to the minor, yielding a small $c$-pumpkin-model which we then lift back to $G$, incurring only a constant-factor increase in its size.
\begin{lemma} \label{lem:smallmodelsreducedgraphs} There is a function $f$ such that every $n$-vertex $c$-reduced graph $G$ contains a $c$-pumpkin-model of size at most $f(c)\cdot \log n$. Moreover, such a model can be computed in polynomial time. \end{lemma} \begin{proof} Let \begin{align*} k &:= c^{2} \left\lceil 2^{2\sqrt{c} + 1} \right\rceil \\ r &:= (2c)^{2c}k \\ b &:= k^{r}. \end{align*}
We will prove the lemma with $f$ defined as $$ f(c) := \max\{krb, 3rc\cdot h(c)\}, $$ where $h$ is the function in Corollary~\ref{cor:smallminors}. Throughout the proof, a $c$-pumpkin-model of $G$ is said to be {\em small} if it has the required size, that is, if it has at most $f(c)\log n$ vertices.
Recall that $\mu(G)$ denotes the maximum multiplicity of any edge in $G$. The lemma trivially holds if $\mu(G) \geq c$, so we may assume $\mu(G) < c$. Let $W$ be the (possibly empty) subset of vertices of $G$ having degree at least $k$.
We build a collection $\mathcal{P}$ of vertex-disjoint induced subgraphs of $G\setminus W$, each isomorphic to a multipath on $r$ vertices. Initially, we let $\mathcal{P} := \emptyset$ and $G' := G \setminus W$. Then, as long as $G'$ has a connected component $C$ with diameter at least $r-1$, we do the following: First, we consider two vertices at distance $r-1$ in $C$ and compute a shortest path $Q$ between these two vertices. Note that the subgraph $P$ of $G$ induced by $V(Q)$ is a multipath on $r$ vertices. Next, we add $P$ to $\mathcal{P}$. Finally, we remove from $G'$ the $r$ vertices in $P$.
When the above procedure is finished, every connected component of $G'$ has diameter less than $r-1$ and maximum degree less than $k$. Hence each such component has bounded size: at most $k^{r} = b$ vertices. Let $\mathcal{C}$ denote the collection of connected components of $G'$.
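The extraction loop above is straightforward to implement. The following Python sketch is our illustrative code, not part of the paper: the function and variable names are ours, and a simple-graph adjacency list suffices as input since parallel edges do not affect distances. It uses the fact that breadth-first-search layers are contiguous, so a component contains a vertex at distance exactly $r-1$ from $s$ as soon as the eccentricity of $s$ is at least $r-1$.

```python
from collections import deque

def bfs(adj, src, removed):
    """BFS in the graph restricted to vertices not in `removed`.
    Returns (dist, parent) dictionaries."""
    dist, parent = {src: 0}, {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in removed and v not in dist:
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent

def extract_multipaths(adj, W, r):
    """Greedily remove the vertex set of a shortest path on r vertices
    (length r - 1) from G - W, as long as some component has diameter
    at least r - 1. Returns the list of extracted paths (vertex lists)."""
    removed = set(W)
    paths = []
    progress = True
    while progress:
        progress = False
        for s in adj:
            if s in removed:
                continue
            dist, parent = bfs(adj, s, removed)
            # BFS layers are contiguous: if the eccentricity of s is at
            # least r - 1, some vertex lies at distance exactly r - 1.
            target = next((v for v, d in dist.items() if d == r - 1), None)
            if target is None:
                continue
            # Reconstruct the shortest s--target path; it has r vertices.
            path = []
            v = target
            while v is not None:
                path.append(v)
                v = parent[v]
            path.reverse()
            paths.append(path)
            removed.update(path)
            progress = True
            break
    return paths
```

On a path with $10$ vertices and $r = 4$, for instance, the procedure extracts two vertex-disjoint paths of $4$ vertices each, after which every remaining component has diameter less than $3$.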
An illustration of the sets $W$, $\mathcal{P}$, and $\mathcal{C}$ in the graph $G$ is given in Figure~\ref{fig:pieces}.
\begin{figure}
\caption{The sets $W$, $\mathcal{P}$, and $\mathcal{C}$ in the graph $G$.}
\label{fig:pieces}
\end{figure}
If some subgraph $C \in \mathcal{C}$ contains a $c$-pumpkin-model, then the size of the model is at most $|C| \leq b \leq f(c) \leq f(c)\log n$, and we are done. (Note that $n\geq2$ since the model has at least two vertices.) Thus we may assume that no subgraph $C \in \mathcal{C}$ contains a $c$-pumpkin-minor.
Let $J$ be the graph obtained from $G$ by contracting each subgraph $C \in \mathcal{C}$ into a single vertex $v_{C}$. Consider a subgraph $C \in \mathcal{C}$. We cannot have $\text{deg}^{*}_{J}(v_{C}) = 0$, because otherwise we could have applied {\bf Z1} on any vertex of $C$ in $G$ (since $C$ has no $c$-pumpkin-minor). If $\text{deg}^{*}_{J}(v_{C}) = 1$, then let $v$ be an arbitrary vertex of $C$, and let $w$ be the unique vertex in $V(G) \setminus V(C)$ having a neighbor in $V(C)$ in the graph $G$. Since {\bf Z1} cannot be applied on $G$ with vertex $v$, there is a block $B$ of $G$ that includes $v$ and contains a
$c$-pumpkin-model. Since $V(B) \subseteq V(C) \cup \{w\}$, this model has size at most $|B| \leq b + 1 \leq f(c)$, that is, we have found a small $c$-pumpkin-model of $G$. Therefore, we may assume \begin{equation} \label{eq:J2} \text{deg}^{*}_{J}(v_{C}) \geq 2 \end{equation} for every $C \in \mathcal{C}$.
Let $K$ be the graph obtained from $J$ by contracting each subgraph $P \in \mathcal{P}$ into a single vertex $v_{P}$. If two vertices of $K$ are linked by at least $c$ parallel edges (note that these two vertices cannot correspond to two components of $\mathcal{C}$, as no such two components are adjacent), then we directly find a $c$-pumpkin-model in $G$ of size at most $\max\{b + 1, r + 1, 2r, b+r\} \leq f(c)$. Thus we may assume \begin{equation} \label{eq:muK} \mu(K) < c. \end{equation}
We have $\text{deg}^{*}_{K}(v_{C}) \geq 1$ for every $C \in \mathcal{C}$. Let us say that a subgraph $C \in \mathcal{C}$ is {\em bad} if $\text{deg}^{*}_{K}(v_{C}) = 1$, and {\em good} otherwise.
We color the vertices of each multipath $P \in \mathcal{P}$ as follows: a vertex $v\in V(P)$ is colored {\em black} if, in the graph $G$, all its neighbors outside $P$ belong to bad subgraphs of $\mathcal{C}$; the vertex $v$ is colored {\em white} otherwise. (We remark that $v$ could possibly have no neighbor outside $P$, in which case $v$ is colored black by our definition.)
\begin{claimN} \label{claim:multipath} If some multipath $P\in \mathcal{P}$ contains $(2c)^{2c}$ consecutive black vertices, then one can find a small $c$-pumpkin-model in $G$. \end{claimN} \begin{proof} Let $Q$ be the subgraph of $P$ induced by these $(2c)^{2c}$ black vertices. Let $\mathcal{C'}$ be the subset of subgraphs $C \in \mathcal{C}$ such that $v_{C}$ is adjacent to an {\em internal} vertex of $Q$ in the graph $J$. Let $S := \{v_{C} : C \in \mathcal{C'}\}$. Since all vertices of $Q$ are colored black, it follows that internal vertices of $Q$ are only adjacent in $J$ to vertices in $V(P) \cup S$, and that every subgraph $C \in \mathcal{C'}$ is bad.
Let $H$ be the graph obtained from $J[V(P) \cup S]$ by contracting every edge of $P$ not included in $Q$. Since every subgraph $C \in \mathcal{C'}$ is bad, it follows from \eqref{eq:J2} that, in $H$, every vertex in $S$ has at least two neighbors in $Q$. Hence $(H, Q)$ is a hedgehog of size $|Q|=(2c)^{2c}$.
The graph $H$ is a minor of the subgraph $G^{*}$ of $G$ induced by $$ V(P) \cup \bigcup\{V(C): C \in \mathcal{C'}\}. $$
Since vertices of $P$ have degree at most $k$ in $G$ and $|C| \leq b$ for every $C \in \mathcal{C'}$, we have \begin{equation}
\label{eq:Gstar} |G^{*}| \leq r + r(k-1)b \leq f(c). \end{equation} We claim that $G^{*}$ contains a $c$-pumpkin-minor. By~\eqref{eq:Gstar}, such a minor directly yields a small $c$-pumpkin-model of $G$. Arguing by contradiction, assume $G^{*}$ has no $c$-pumpkin-minor. Thus $H$ has no $c$-pumpkin-minor either.
Applying Lemma~\ref{lem:hedgehog} on $(H, Q)$, we obtain either a bad cutset $X$ of $(H, Q)$ or a $c$-pumpkin-model of $H$. The latter case cannot happen since $H$ has no $c$-pumpkin-minor, so assume the former holds and let $\{u, v\}:= X$. Consider a connected component $T$ of $H \setminus X$ that avoids both endpoints of $Q$. Let $Z$ be the subgraph of $G$ induced by $(V(T) \cap V(Q)) \cup \bigcup\{V(C) : v_{C} \in V(T)\}$. It follows from the definition of $H$ and our choice of $T$ that $Z$ is a connected component of $G \setminus X$ such that $u$ and $v$ are both adjacent to some vertex in $Z$. Since $G^{*}$ has no $c$-pumpkin-minor, it follows that $(Z, u, v)$ is a $c$-outgrowth of $G$. But this implies that we could have applied {\bf Z2} on $G$ with the $c$-outgrowth $(Z, u, v)$, a contradiction. \end{proof}
By Claim~\ref{claim:multipath}, we may assume that, for every $P\in \mathcal{P}$, the number $w(P)$ of white vertices in $P$ satisfies \begin{equation} \label{eq:wP} w(P) \geq \frac{r}{(2c)^{2c}}=k. \end{equation}
Our aim now is to use \eqref{eq:wP} to define a minor of $K$ with large minimum degree. First, for every good subgraph $C\in \mathcal{C}$, ``assign'' $v_{C}$ to an arbitrary neighbor of $v_{C}$ in $K$. Next, for every $w\in W$, contract all edges $v_{C}w$ of $K$ into the vertex $w$ for all vertices $v_{C}$ assigned to $w$. Similarly, for every $P\in \mathcal{P}$, contract all edges $v_{C}v_{P}$ into the vertex $v_{P}$ for all vertices $v_{C}$ assigned to $v_{P}$. Finally, remove the vertex $v_{C}$ for every bad subgraph $C\in \mathcal{C}$. The resulting graph is denoted $L$.
For every vertex of $L$ there is a natural induced subgraph of $G$ that corresponds to it, namely the subgraph defined by all the edges that were contracted into that vertex. Let $S_{w}$ and $S_{P}$ be the (induced) subgraphs of $G$ that correspond to the vertices $w\in W$ and $v_{P}$ ($P \in \mathcal{P}$) of $L$, respectively. The subgraphs $S_{w}$ ($w\in W$) and $S_{P}$ ($P \in
\mathcal{P}$) of $G$ have diameter at most $2r$ and $3r$, respectively. Thus, by Lemma~\ref{lem:models_diameter}, a $c$-pumpkin-model of $L$ of size $q$ can be turned into one of $G$ of size at most $3rc\cdot q$. Hence, in order to conclude the proof, it is enough to find a $c$-pumpkin-model in $L$ of size at most $h(c) \log |L|$, since $$
h(c) \log |L| \leq \frac{f(c)}{3rc} \log |L| \leq \frac{f(c)}{3rc} \log n. $$ To do so, we will show that $L$ has large minimum degree.
First consider a vertex $w\in W$, and note that $\text{deg}_{K}(w)=\text{deg}_{G}(w) \geq k$. Let $a$ be the number of edges incident with $w$ in $K$ such that the other endpoint is a vertex of the form $v_{C}$ ($C \in \mathcal{C}$) that was assigned to $w$. By \eqref{eq:J2}, $w$ cannot be adjacent in $K$ to a vertex $v_{C}$ corresponding to a bad subgraph $C \in \mathcal{C}$. Thus, it follows from the definitions of good subgraphs and $L$ that $$ \text{deg}_{L}(w) \geq \frac{a}{\mu(K)} + (\text{deg}_{K}(w) - a). $$ (The $\frac{a}{\mu(K)}$ term above comes from the fact that each vertex $v_{C}$ that was assigned to $w$ contributes at least one to the degree of $w$ in $L$, while in $K$ there were at most $\mu(K)$ edges between $v_{C}$ and $w$.) Using \eqref{eq:muK} we obtain \begin{equation} \label{eq:degL-W} \text{deg}_{L}(w) > \frac{a}{c} + (\text{deg}_{K}(w) - a) \geq \frac{\text{deg}_{K}(w)}{c} \geq \frac{k}{c}. \end{equation}
Now consider a multipath $P \in \mathcal{P}$. Let $a'$ be the number of edges incident with $v_{P}$ in $K$ such that the other endpoint is a vertex of the form $v_{C}$ ($C \in \mathcal{C}$) that was assigned to $v_{P}$. Let $b'$ be the number of edges incident with $v_{P}$ in $K$ that are not of the previous form and also not incident with a vertex $v_{C}$ such that $C$ is bad. By the definition of white vertices, we have $a' + b' \geq w(P)$ (recall that $w(P)$ is the number of white vertices in $P$). Using~\eqref{eq:wP}, we obtain \begin{equation} \label{eq:degL-P} \text{deg}_{L}(v_{P}) \geq \frac{a'}{\mu(K)} + b' > \frac{a'}{c} + b' \geq \frac{w(P)}{c} \geq \frac{k}{c}. \end{equation}
It follows from \eqref{eq:degL-W} and \eqref{eq:degL-P} that $L$ has minimum degree at least $k/c$. If $\mu(L) \geq c$, then $L$ has a $c$-pumpkin-model of size two and we are trivially done, so let us assume $\mu(L) < c$. Then the underlying simple graph $L'$ of $L$ has minimum degree at least $k / c^{2} \geq 2^{2\sqrt{c} + 1}$. Using Corollary~\ref{cor:smallminors} on $L'$, we find a $c$-pumpkin-model in $L$ of the desired size, that is, of size at most $h(c)
\log |L|$.
Finally, we note that each step of the proof can easily be realized in polynomial time. Therefore, a small $c$-pumpkin-model of $G$ can be found in polynomial time.\end{proof}
\subsection{Algorithmic consequences} \label{sec:1stAlgo}
Lemma~\ref{lem:smallmodelsreducedgraphs} can be used to obtain $\mathcal{O}(\log n)$-approximation algorithms for both the \textsc{Minimum} $c$-\textsc{Pumpkin}-\textsc{Hitting} and the \textsc{Maximum} $c$-\textsc{Pumpkin}-\textsc{Packing} problems for every fixed $c \geq 1$, as we now show.
\begin{algorithm} \caption{\label{algo:packing_covering}An $\mathcal{O}(\log n)$-approximation algorithm.}
\hspace{-1.4cm} \begin{minipage}{1.08\textwidth} \begin{itemize} \item[] {\bf INPUT:} An arbitrary graph $G$ \item[] {\bf OUTPUT:} A $c$-packing $\mathcal{M}$ of $G$
and a $c$-hitting set $X$ of $G$ s.t.\ $|X| \leq (f(c) \log |G|) \cdot
|\mathcal{M}|$ \item[] $\mathcal{M} \leftarrow \emptyset$; $X \leftarrow \emptyset$
\item[] {\bf If} $|G| \leq 1$: Return $\mathcal{M}$, $X$ $\ \ \ ${\em /* $G$ cannot have a $c$-pumpkin-minor */} \item[] {\bf Else}: \begin{itemize} \item[] {\bf If} $G$ is not $c$-reduced: \begin{itemize} \item[] Apply a reduction rule on $G$, giving a graph $H$ \item[] Call the algorithm on $H$, giving a packing $\mathcal{M}'$ and a $c$-hitting set $X'$ of $H$ \item[] Compute using Lemma~\ref{lem:both}(b)
a $c$-packing $\mathcal{M}$ of $G$ with $|\mathcal{M}| = |\mathcal{M}'|$ \item[] Compute using Lemma~\ref{lem:both}(a)
a $c$-hitting set $X$ of $G$ with $|X| \leq |X'|$ \item[] Return $\mathcal{M}$, $X$ \end{itemize} \item[] {\bf Else}: \begin{itemize} \item[] Compute using Lemma~\ref{lem:smallmodelsreducedgraphs} a $c$-pumpkin-model $M=\{A,B\}$ of $G$ with
\item[] \hspace{0.6cm} $|A\cup B| \leq f(c)\log |G|$
\item[] $H \leftarrow G \setminus (A\cup B)$ \item[] Call the algorithm on $H$, giving a packing $\mathcal{M}'$ and a $c$-hitting set $X'$ of $H$ \item[] $\mathcal{M} \leftarrow \mathcal{M}' \cup \{M\}$ \item[] $X \leftarrow X' \cup A \cup B$ \item[] Return $\mathcal{M}$, $X$ \end{itemize} \end{itemize} \end{itemize} \end{minipage} \end{algorithm}
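As a toy illustration of the recursive scheme (ours, not from the paper), consider the degenerate case $c=1$: a $1$-pumpkin-model is a single edge $\{u\},\{v\}$, so, skipping the reduction branch, the recursion collapses to the classical greedy routine that returns a maximal matching $\mathcal{M}$ together with the set $X$ of its endpoints, with $|X|\le 2|\mathcal{M}|$. The function and variable names below are ours.

```python
# Toy instantiation (ours) of the recursive scheme for c = 1, where a
# 1-pumpkin-model is a single edge ({u}, {v}) and the reduction branch is
# vacuous: the "Else" branch degenerates to the classical greedy routine
# returning a matching M and a vertex cover X with |X| <= 2 * |M|.

def pack_and_hit(edges):
    """Return (M, X): vertex-disjoint 1-pumpkin-models and a hitting set."""
    edges = set(frozenset(e) for e in edges)
    if not edges:                      # no edge: no 1-pumpkin-minor
        return [], set()
    u, v = sorted(next(iter(edges)))   # a 1-pumpkin-model M = ({u}, {v})
    rest = [e for e in edges if u not in e and v not in e]  # H = G - (A u B)
    M, X = pack_and_hit(rest)          # recursive call on H
    return M + [({u}, {v})], X | {u, v}

M, X = pack_and_hit([(1, 2), (2, 3), (3, 4), (4, 5)])
```

Here the model found in each round has size $2$, so the invariant $|X|\le 2|\mathcal{M}|$ mirrors the bound $|X| \leq (f(c)\log |G|) \cdot |\mathcal{M}|$ maintained by Algorithm~\ref{algo:packing_covering}.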
\begin{theorem} \label{thm:logn_packing_covering} Given an $n$-vertex graph $G$, an $\mathcal{O}(\log n)$-approximation for both the \textsc{Minimum} $c$-\textsc{Pumpkin}-\textsc{Hitting} and the \textsc{Maximum} $c$-\textsc{Pumpkin}-\textsc{Packing} problems on $G$ can be computed in polynomial time using Algorithm~\ref{algo:packing_covering}, for any fixed integer $c \geq 1$. \end{theorem} \begin{proof}Consider Algorithm~\ref{algo:packing_covering}, where $f$ is the function in Lemma~\ref{lem:smallmodelsreducedgraphs}. We will show that this algorithm provides an $\mathcal{O}(\log n)$-approximation for the two problems under consideration.
It should be clear that the collection $\mathcal{M}$ returned by Algorithm~\ref{algo:packing_covering} is a $c$-packing of $G$, and similarly that the set $X$ is a $c$-hitting set of $G$. Thus it is enough to show that they satisfy $|X| \leq (f(c)\log |G|) \cdot |\mathcal{M}|$ as claimed in the description of the algorithm. Indeed, since $|\mathcal{M}| \leq \nu_{c}(G) \leq
\tau_{c}(G) \leq |X|$ and $f(c)$ is a constant depending only on $c$, this implies that the approximation factor of Algorithm~\ref{algo:packing_covering} is $\mathcal{O}(\log n)$ for both the \textsc{Minimum} $c$-\textsc{Pumpkin}-\textsc{Hitting} and the \textsc{Maximum} $c$-\textsc{Pumpkin}-\textsc{Packing} problems.
We prove the inequality $|X| \leq (f(c)\log |G|) \cdot |\mathcal{M}|$ by induction on $|G|$. The inequality is obviously true in the base case, namely when $|G| \leq 1$, so let us assume $|G| > 1$.
If $G$ is not $c$-reduced, then by induction the packing $\mathcal{M}'$ and the
$c$-hitting set $X'$ of $H$ considered by the algorithm satisfy $|X'| \leq
(f(c)\log |H|) \cdot |\mathcal{M}'|$, and we obtain $$
|X| \leq |X'| \leq (f(c)\log |H|) \cdot |\mathcal{M}'| = (f(c)\log |H|) \cdot
|\mathcal{M}| \leq (f(c)\log |G|) \cdot |\mathcal{M}| $$ as desired.
If $G$ is $c$-reduced, then by induction the packing $\mathcal{M}'$ and the
$c$-hitting set $X'$ of $H$ satisfy $|X'| \leq (f(c)\log |H|) \cdot
|\mathcal{M}'|$, and we have \begin{align*}
|X| &= |X'| + |A\cup B| \\
&\leq (f(c)\log |H|) \cdot |\mathcal{M}'| + f(c)\log |G| \\
&\leq (f(c)\log |G|) \cdot (|\mathcal{M}'| + 1) \\
&= (f(c)\log |G|) \cdot |\mathcal{M}|. \end{align*}
Thus $|X| \leq (f(c)\log |G|) \cdot |\mathcal{M}|$ holds in all cases.
Finally, we observe that there are at most $n$ recursive calls during the whole execution of the algorithm, which implies that its running time is polynomial in $n$.\end{proof}
\section{Concluding remarks} \label{sec:concl}
On the one hand, we provided an FPT algorithm running in time $2^{\mathcal{O}(k)} \cdot n^{\mathcal{O}(1)}$ deciding, for any fixed $c \geq 1$, whether all $c$-pumpkin-models of a given graph can be hit by at most $k$ vertices. Our algorithms use protrusions, but it may be possible to avoid them by further exploiting the structure of the graphs during the iterative compression routine (for example, a graph excluding the $3$-pumpkin is a forest of cacti). We did not focus on optimizing the constants involved in our algorithms; it may be worth doing so, as well as enumerating all solutions, in the same spirit as~\cite{GGH+06} for \textsc{Feedback Vertex Set}.
It is natural to ask whether there exist faster algorithms for sparse graphs. Also, it would be interesting to have lower bounds for the running time of parameterized algorithms for this problem, in the spirit of those recently provided in~\cite{LMS11b}. One could as well consider other containment relations, like topological minor, induced minor, or contraction minor.
A seemingly more difficult problem is to find single-exponential algorithms for the problem of deleting at most $k$ vertices from a given graph so that the resulting graph has tree-width bounded by some constant $c$. Note that the case $c=0$ (resp.\ $c=1$) corresponds to {\sc $p$-Vertex Cover} (resp.\ {\sc $p$-Feedback Vertex Set}). Very recently, this problem has been solved for $c=2$~\cite{KPP11}, the cases $c\geq 3$ still being open.
One could also consider the parameterized version of packing disjoint $c$-pumpkin-models, as it has been done for $c=2$ in~\cite{BTY09}.
On the other hand, we provided an $\mathcal{O}(\log n)$-approximation for the problems of packing the maximum number of vertex-disjoint $c$-pumpkin-models, and hitting all $c$-pumpkin-models with the smallest number of vertices. It may be that the hitting version admits a constant-factor approximation; so far, such an algorithm is only known for $c \leq 3$.
As mentioned in the introduction, for the packing version there is a lower bound of $\Omega(\log^{1/2 - \varepsilon} n)$ on the approximation ratio (under reasonable complexity-theoretic assumptions). In fact, this lower bound applies to both the vertex-disjoint and the edge-disjoint packing~\cite{FrSa11}. For $c=2$, the problem of packing a maximum number of edge-disjoint cycles admits an $\mathcal{O}(\sqrt{\log n})$-approximation, whereas, to date, $\mathcal{O}(\log n)$ is the best approximation ratio known for vertex-disjoint cycles~\cite{KNS+07}. Therefore, one might expect to get better approximation algorithms for packing edge-disjoint $c$-pumpkin-models.
Our algorithms use as subroutines some steps that are only of theoretical interest. For instance, the FPT algorithm of Section~\ref{sec:algorithm} uses a protrusion replacement rule that involves huge constants, and in the whole paper we repeatedly use Courcelle's theorem~\cite{Courcelle92} to test for the existence of a $c$-pumpkin-model in graphs of bounded treewidth. Turning these steps into routines involving reasonable constants is worth investigating.
Finally, a class of graphs $\mathcal{H}$ has the \emph{Erd\H{o}s-P\'{o}sa property} if there exists a function $f$ such that, for every integer $k$ and every graph $G$, either $G$ contains $k$ vertex-disjoint subgraphs each isomorphic to a graph in $\mathcal{H}$, or there is a set $S \subseteq V(G)$ of at most $f(k)$ vertices such that $G \setminus S$ has no subgraph in $\mathcal{H}$. Given a connected graph $H$, let $\mathcal{M}(H)$ be the class of graphs that can be contracted to $H$. Robertson and Seymour~\cite{RoSe86} proved that $\mathcal{M}(H)$ satisfies the Erd\H{o}s-P\'{o}sa property if and only if $H$ is planar. Therefore, for every $c \geq 1$, the class of graphs that can be contracted to the $c$-pumpkin satisfies the Erd\H{o}s-P\'{o}sa property. But the best known function $f$ is super-exponential (see~\cite{Die05}), so it would be interesting to find a better function for this case. The only known lower bound on $f$ is $\Omega(k \log k)$ when $c \geq 2$, which follows from the $\Omega(k \log k)$ lower bound given by Erd\H{o}s and P\'{o}sa in their seminal paper~\cite{ErPo65} for $c=2$.
\subsection*{Acknowledgement.} We would like to thank the anonymous referees for helpful remarks that improved the presentation of the article.
\end{document}
\begin{document}
\title{Critical branching as a pure death process coming down from infinity
}
\author{ Serik Sagitov \\ Chalmers University of Technology and University of Gothenburg}
\date{} \maketitle
\begin{abstract} We consider the critical Galton-Watson process with overlapping generations stemming from a single founder. Assuming that both the variance of the offspring number and the average generation length are finite, we establish the convergence of the finite-dimensional distributions, conditioned on non-extinction at a remote time of observation. The limiting process is identified as a pure death process coming down from infinity.
This result offers a new perspective on Vatutin's dichotomy, which claims that in the critical regime of age-dependent reproduction, an extant population either contains a large number of short-living individuals or consists of a few long-living individuals.
\end{abstract}
\section{Introduction }\label{sec:int}
Consider a self-replicating system evolving in the discrete time setting according to the following rules:
\begin{description} \item[\ \ \,-] the system is founded by a single individual, the founder born at time 0, \item[\ \ \,-] the founder dies at a random age $L$ and gives a random number $N$ of births at random ages $\tau_j$ satisfying \begin{equation*}
1\le\tau_1\le \ldots\le \tau_N\le L, \end{equation*} \item[\ \ \,-] each new individual lives independently of the others according to the same life law as the founder. \end{description} An individual born at time $t_1$ and dying at time $t_2$ is considered to be alive during the time interval $[t_1,t_2-1]$. Letting $Z(t)$ stand for the number of individuals alive at time $t$, we study the random dynamics of the sequence $$Z(0)=1, Z(1), Z(2),\ldots,$$ which is a natural extension of the well-known Galton-Watson process, or \textit{GW-process} for short, see \cite{WG}. The process $Z(\cdot)$ is the discrete time version of what is usually called the Crump-Mode-Jagers process or the general branching process, see \cite{J}.
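The bookkeeping behind this definition can be illustrated with a minimal simulator (ours, not from the paper): each individual carries a life length $L$ and birth ages $\tau_1\le\ldots\le\tau_N\le L$, and $Z(t)$ counts the individuals whose interval $[t_1,t_2-1]$ covers $t$.

```python
# Minimal simulator (ours, for illustration) of the GWO definition above:
# Z(t) counts individuals whose lifespan [birth, birth + L - 1] covers t.

def Z(t, life_law):
    """Number of individuals alive at time t; the founder is born at time 0.
    life_law() returns a pair (L, [tau_1, ..., tau_N])."""
    count = 0
    stack = [0]                       # birth times still to be processed
    while stack:
        born = stack.pop()
        L, taus = life_law()
        if born <= t <= born + L - 1:
            count += 1                # alive during [born, born + L - 1]
        for tau in taus:              # children are born at ages tau_j >= 1
            if born + tau <= t:       # later-born ones cannot be alive at t
                stack.append(born + tau)
    return count

# Two deterministic life laws for sanity checks:
def one_child():                      # L = 1, one birth at age 1: Z(t) = 1
    return 1, [1]

def childless():                      # L = 2, N = 0: alive at times 0 and 1
    return 2, []
```

With `one_child`, the process reproduces the trivial GW-process with $Z(t)\equiv1$; with `childless`, the founder is alive at times $0$ and $1$ and the population is extinct from time $2$ on.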
To emphasise the discrete time setting, we call it a GW-process with overlapping generations, or \textit{GWO-process} for short.
Put $b:=\frac{1}{2}\mathrm{Var\hspace{0.2mm}}(N)$. This paper deals with the GWO-processes satisfying
\begin{equation}
\label{ir} \mathrm{E\space }(N)=1,\quad 0<b<\infty. \end{equation} Condition $\mathrm{E\space }(N)=1$ says that the reproduction regime is critical, implying $\mathrm{E\space }(Z(t))\equiv1$ and making extinction inevitable, provided $b>0$. According to \cite[Ch I.9]{AN}, given \eqref{ir},
the survival probability
$$Q(t):=\mathrm{P\space }(Z(t)>0)$$
of a GW-process satisfies the asymptotic formula $tQ(t)\to b^{-1}$ as $t\to\infty$ (this was first proven in \cite{K} under a third moment assumption). A direct extension of this classical result for the GWO-processes, $$tQ(ta)\to b^{-1},\quad t\to\infty,\quad a:=\mathrm{E\space }(\tau_1+\ldots+\tau_N),$$ was obtained in \cite{D,H} under conditions \eqref{ir}, $a<\infty$, \begin{equation}
\label{d0} t^2\mathrm{P\space }(L>t)\to 0,\quad t\to\infty, \end{equation} plus an additional condition. (Notice that, by our definition, $a\ge1$, and $a=1$ if and only if $L\equiv1$, that is, when the GWO-process in question is a GW-process.) Treating $a$ as the \textit{mean generation length}, see \cite{J,21}, we may conclude that the asymptotic behaviour of the critical GWO-process with \textit{short-living individuals}, see condition \eqref{d0}, is similar to that of the critical GW-process, provided time is counted generation-wise.
New asymptotic patterns for the critical GWO-processes emerge under the assumption
\begin{equation}
\label{d} t^2\mathrm{P\space }(L>t)\to d,\quad 0\le d< \infty,\quad t\to\infty, \end{equation} which, compared to \eqref{d0}, allows for the existence of \textit{long-living individuals} when $d>0$. Condition \eqref{d} was first introduced in the pioneering paper \cite{V79} dealing with the \textit{Bellman-Harris processes}. In the current discrete time setting, the Bellman-Harris process is a GWO-process subject to two restrictions:
\begin{description} \item[\ \ \,-] $\mathrm{P\space }(\tau_1=\ldots=\tau_N= L)=1$, so that all births occur at the moment of the individual's death, \item[\ \ \,-] the random variables $L$ and $N$ are independent. \end{description} For the Bellman-Harris process, conditions \eqref{ir} and \eqref{d} imply $a=\mathrm{E\space }(L)$, $a<\infty$, and according to \cite[Theorem 3]{V79}, we get
\begin{equation}
\label{ad} tQ(t)\to h,\quad t\to\infty,\qquad h:=\frac{a+\sqrt{a^2+4bd}}{2b}. \end{equation} As was shown in \cite[Corollary B]{T}, see also \cite[Lemma 3.2]{95} for an adaptation to the discrete time setting, relation \eqref{ad} holds even for the GWO-processes satisfying conditions \eqref{ir}, \eqref{d}, and $a<\infty$.
The main result of this paper, Theorem 1 of Section \ref{main}, considers a critical GWO-process under the above-mentioned neat set of assumptions \eqref{ir}, \eqref{d}, $a<\infty$, and establishes the convergence of the finite-dimensional distributions conditioned on survival at a remote time of observation. A remarkable feature of this result is that the limit process is fully described by a single parameter $c:=4bda^{-2}$, regardless of the complicated mutual dependence among the random variables $\tau_j$, $N$, $L$.
For the sake of readability, our proof of Theorem 1, which requires an intricate asymptotic analysis of multi-dimensional probability generating functions, is split into two sections. Section \ref{out} presents a new proof of \eqref{ad} inspired by that of \cite{V79}. The crucial aspect of this approach, compared to the proof of \cite[Lemma 3.2]{95}, is that certain essential steps do not rely on the monotonicity of the function $Q(t)$. In Section \ref{Lp1}, the technique of Section \ref{out} is further developed to finish the proof of Theorem 1.
We conclude this section by mentioning the illuminating family of GWO-processes called the \textit{Sevastyanov processes} \cite{Sev}. The Sevastyanov process is a generalised version of the Bellman-Harris process, with possibly dependent $L$ and $N$. In the critical case, the mean generation length of the Sevastyanov process, $a=\mathrm{E\space }(L N)$, can be represented as $$a=\mathrm{Cov\hspace{0.2mm}}(L,N)+\mathrm{E\space }(L).$$ Thus, if $L$ and $N$ are positively correlated, the average generation length $a$ exceeds the average life length $\mathrm{E\space }(L)$.
Turning to a specific example of the Sevastyanov process, take
\[\mathrm{P\space }(L= t)= p_1 t^{-3}(\ln\ln t)^{-1}, \quad \mathrm{P\space }(N=0|L= t)=1-p_2,\quad \mathrm{P\space }(N=n_t|L= t)=p_2, \ t\ge3,\] where $n_t:=\lfloor t(\ln t)^{-1}\rfloor$ and $(p_1,p_2)$ are such that \[\sum_{t=3}^\infty \mathrm{P\space }(L= t)=p_1 \sum_{t=3}^\infty t^{-3}(\ln\ln t)^{-1}=1,\quad \mathrm{E\space }(N)=p_1p_2\sum_{t=3}^\infty n_t t^{-3}(\ln\ln t)^{-1}=1.\] (The sums start at $t=3$ so that $\ln\ln t>0$.) In this case, for some positive constant $c_1$, \[\mathrm{E\space }(N^2)= p_1p_2\sum_{t=3}^\infty n_t^2 t^{-3}(\ln\ln t)^{-1}< c_1\int_3^\infty \frac{d (\ln t)}{(\ln t)^2\ln\ln t}<\infty,\] implying that condition \eqref{ir} is satisfied. Clearly, condition \eqref{d} holds with $d=0$. At the same time, \[a=\mathrm{E\space }(NL)= p_1p_2\sum_{t=3}^\infty n_t t^{-2}(\ln\ln t)^{-1}> c_2\int_3^\infty \frac{d (\ln t)}{(\ln t)(\ln\ln t)}=\infty,\] where $c_2$ is a positive constant. This example demonstrates that for the GWO-process, unlike the Bellman-Harris process, conditions \eqref{ir} and \eqref{d} do not automatically imply the condition $a<\infty$.
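A hedged numerical illustration of this dichotomy (our own check, not from the paper): comparing the partial sums of the two series between $t=10^3$ and $t=10^6$ shows the first barely moving, while the second, whose partial sums grow like $\ln\ln\ln t$, keeps increasing.

```python
# Numerical illustration (ours) of the dichotomy in the Sevastyanov example:
# the E(N^2)-type terms behave like 1/(t (ln t)^2 lnln t) and are summable,
# while the a = E(NL)-type terms behave like 1/(t ln t lnln t), whose partial
# sums grow like lnlnln t and hence diverge, albeit extremely slowly.
from math import log, floor

def increments(lo, hi):
    """Partial-sum increments of the two series over lo <= t < hi."""
    inc_sq, inc_lin = 0.0, 0.0
    for t in range(lo, hi):
        n_t = floor(t / log(t))           # n_t = floor(t / ln t)
        w = 1.0 / log(log(t))             # 1 / lnln t (positive for t >= 3)
        inc_sq += n_t * n_t * w / t**3    # E(N^2)-type term
        inc_lin += n_t * w / t**2         # a-type term
    return inc_sq, inc_lin

inc_sq, inc_lin = increments(10**3, 10**6)
```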
\section{The main result}\label{main}
\begin{theorem}\label{thL} For a GWO-process satisfying \eqref{ir}, \eqref{d}, and $a<\infty$, we have the weak convergence of the finite-dimensional distributions \begin{align*}
(Z(ty),0<y<\infty|Z(t)>0)\stackrel{\rm fdd\,}{\longrightarrow} (\eta(y),0<y<\infty),\quad t\to\infty. \end{align*} The limiting process is a continuous time pure death process $(\eta(y),0\le y<\infty)$, whose evolution law is determined by a single compound parameter $c=4bda^{-2}$, as specified next. \end{theorem}
The finite-dimensional distributions of the limiting process $\eta(\cdot)$ are given below in terms of the $k$-dimensional probability generating functions $\mathrm{E\space }(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)})$, $k\ge1$, assuming \begin{equation}\label{mansur}
0=y_0< y_1< \ldots< y_{j}<1\le y_{j+1}< \ldots< y_k<y_{k+1}=\infty,\quad 0\le j\le k,\quad 0\le z_1,\ldots,z_k<1. \end{equation} Here the index $j$ highlights the pivotal value 1 corresponding to the time of observation $t$ of the underlying GWO-process.
As will be shown in Section \ref{Send}, if $j=0$, then \begin{align*} \mathrm{E\space }(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)})=1-\frac{1+\sqrt{1+\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i}}{(1+\sqrt{1+c})y_1},\quad \Gamma_i:=c({y_1}/{y_i} )^2, \end{align*} and
if $j\ge1$, \begin{align*} \mathrm{E\space }(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)})=\frac{\sqrt{1+\sum_{i=1}^{j}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i+cz_1\cdots z_{j}y_1^{2} }-\sqrt{1+\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i}}{(1+\sqrt{1+c})y_1}. \end{align*} In particular, for $k=1$, we have \begin{align*} \mathrm{E\space }(z^{\eta(y)})&= \frac{\sqrt{1+c(1-z)+czy^{2}}-\sqrt{1+c(1-z)}}{(1+\sqrt{1+c})y},\quad 0< y<1,\\ \mathrm{E\space }(z^{\eta(y)})&= 1-\frac{1+\sqrt{1+c(1-z)}}{(1+\sqrt{1+c})y},\quad y\ge1. \end{align*} It follows that $\mathrm{P\space }(\eta(y)\ge0)=1$ for $y>0$, and moreover, putting here first $z=1$ and then $z=0$, brings \begin{align*} \mathrm{P\space }(\eta(y)<\infty)&=\frac{\sqrt{1+cy^2}-1}{(1+\sqrt{1+c})y}\cdot1_{\{0< y<1\}}+\Big(1-\frac{2}{(1+\sqrt{1+c})y}\Big)\cdot1_{\{y\ge 1\}},\\ \mathrm{P\space }(\eta(y)=0)&=\frac{y-1}{y}\cdot1_{\{y\ge 1\}}, \end{align*} implying that $\mathrm{P\space }(\eta(y)=\infty)>0$ for all $y>0$, and in fact, letting $y\to0$, we may set $\mathrm{P\space }(\eta(0)=\infty)=1.$
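The one-dimensional formulas can be sanity-checked numerically (a check of ours, not from the paper): the two branches agree at $y=1$, the value at $z=0$ recovers $\mathrm{P\space }(\eta(y)=0)$, and the limit $z\to1$ recovers $\mathrm{P\space }(\eta(y)<\infty)$.

```python
# Numerical sanity check (ours) of the one-dimensional generating functions
# E(z^eta(y)) of the limiting pure death process with compound parameter c.
from math import sqrt

def pgf(z, y, c):
    """E(z^eta(y)) for the limit process with compound parameter c."""
    denom = (1.0 + sqrt(1.0 + c)) * y
    if y < 1.0:
        return (sqrt(1 + c*(1 - z) + c*z*y*y) - sqrt(1 + c*(1 - z))) / denom
    return 1.0 - (1.0 + sqrt(1 + c*(1 - z))) / denom

# z = 0 recovers P(eta(y) = 0) = (y - 1)/y for y >= 1 (independent of c):
p_extinct = pgf(0.0, 2.0, 5.0)
# z -> 1 recovers P(eta(y) < infinity), here for y = 1/2 and c = 5:
p_finite = pgf(1.0, 0.5, 5.0)
```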
To demonstrate that the process $\eta(\cdot)$ is indeed a pure death process, consider the function \[\mathrm{E\space }(z_1^{\eta(y_1)-\eta(y_2)}\cdots z_{k-1}^{\eta(y_{k-1})-\eta(y_{k})}z_k^{\eta(y_k)})\] determined by \begin{align*} \mathrm{E\space }(z_1^{\eta(y_1)-\eta(y_2)}\cdots z_{k-1}^{\eta(y_{k-1})-\eta(y_{k})}z_k^{\eta(y_k)}) &=\mathrm{E\space }(z_1^{\eta(y_1)}(z_2/z_1)^{\eta(y_2)}\cdots (z_k/z_{k-1})^{\eta(y_k)}). \end{align*} This function is given by two expressions \begin{align*} \frac{(1+\sqrt{1+c})y_1-1-\sqrt{1+\sum\nolimits_{i=1}^{k} (1-z_{i})\gamma_i}}{(1+\sqrt{1+c})y_1}, \quad &\text{for }j=0,\\ \frac{\sqrt{1+\sum\nolimits_{i=1}^{j-1}(1-z_{i})\gamma_i+(1-z_{j})\Gamma_j+cz_j y_1^2}-\sqrt{1+\sum\nolimits_{i=1}^{k} (1-z_{i})\gamma_i}}{(1+\sqrt{1+c})y_1}, \quad &\text{for }j\ge1, \end{align*} where $\gamma_i:=\Gamma_i-\Gamma_{i+1}$ and $\Gamma_{k+1}=0$. Setting $k=2$, $z_1=z$, and $z_2=1$, we deduce that the function \begin{equation}\label{lava}
\mathrm{E\space }(z^{\eta(y_1)-\eta(y_2)};\eta(y_1)<\infty),\quad 0<y_1<y_2,\quad 0\le z\le1, \end{equation} is given by one of the following three expressions depending on whether $j=2$, $j=1$, or $j=0$, \begin{align*} \frac{\sqrt{1+c y_1^2+c(1-z)(1-(y_1/y_2)^2)} -\sqrt{1+c (1-z)(1-(y_1/y_2)^2)}}{(1+\sqrt{1+c})y_1},\quad &y_2<1, \\ \frac{\sqrt{1+c y_1^2+c(1-z) (1-y_1^2)} -\sqrt{1+c(1-z)(1-(y_1/y_2)^2)}}{(1+\sqrt{1+c})y_1},\quad &y_1<1\le y_2, \\ 1- \frac{1+\sqrt{1+c(1-z)(1-(y_1/y_2)^2)}}{(1+\sqrt{1+c})y_1},\quad &1\le y_1. \end{align*} Since the generating function \eqref{lava} is finite at $z=0$, we conclude that $$\mathrm{P\space }(\eta(y_1)< \eta(y_2); \eta(y_1)< \infty)=0,\quad 0<y_1<y_2.$$
This implies
$$\mathrm{P\space }(\eta(y_2)\le \eta(y_1))=1,\quad 0<y_1<y_2,$$ meaning that unless the process $\eta(\cdot)$ is sitting at the infinity state, it evolves by negative integer-valued jumps until it gets absorbed at zero.
Consider now the conditional probability generating function \begin{equation}\label{ava}
\mathrm{E\space }(z^{\eta(y_1)-\eta(y_2)}| \eta(y_1)<\infty),\quad 0<y_1<y_2,\quad 0\le z\le1. \end{equation} In accordance with the three expressions for \eqref{lava} given above, the generating function \eqref{ava} is specified by the following three expressions \begin{align*} \frac{\sqrt{1+c y_1^2+c(1-z)(1-(y_1/y_2)^2)} -\sqrt{1+c (1-z)(1-(y_1/y_2)^2)}}{\sqrt{1+c y_1^2}-1},\quad &y_2<1, \\ \frac{\sqrt{1+c y_1^2+c(1-z) (1-y_1^2)} -\sqrt{1+c(1-z)(1-(y_1/y_2)^2)}}{\sqrt{1+c y_1^2}-1},\quad &y_1<1\le y_2, \\ 1- \frac{\sqrt{1+c(1-z)(1-(y_1/y_2)^2)}-1}{(1+\sqrt{1+c})y_1-2},\quad &1\le y_1. \end{align*} In particular, setting here $z=0$, we obtain
\[\mathrm{P\space }(\eta(y_1)-\eta(y_2)=0| \eta(y_1)<\infty)= \left\{ \begin{array}{llr} \frac{\sqrt{1+c(1+y_1^2-(y_1/y_2)^2)}-\sqrt{1+c(1-(y_1/y_2)^2)}}{\sqrt{1+c y_1^2}-1} & \text{for} & 0<y_1< y_2<1, \\ \frac{\sqrt{1+c}-\sqrt{1+c(1-(y_1/y_2)^2)}}{\sqrt{1+c y_1^2}-1} & \text{for} & 0<y_1<1\le y_2, \\ 1- \frac{\sqrt{1+c(1-(y_1/y_2)^2)}-1}{(1+\sqrt{1+c})y_1-2} & \text{for} & 1\le y_1<y_2. \end{array} \right. \] Notice that given $0<y_1\le1$,
\[\mathrm{P\space }(\eta(y_1)-\eta(y_2)=0| \eta(y_1)<\infty)\to 0,\quad y_2\to\infty,\] which is expected, since $\eta(y_1)\ge\eta(1)\ge1$ while $\eta(y_2)\to0$ as $y_2\to\infty$.
\begin{figure}
\caption{The dashed line is the probability density function of $T$, the solid line is the probability density function of $T_0$. The left panel illustrates the case $c=5$, and the right panel illustrates the case $c=15$.}
\label{trump}
\end{figure}
The random times \[T=\sup\{u: \eta(u)=\infty\},\quad T_0=\inf\{u:\eta(u)=0\},\] are major characteristics of a trajectory of the limit pure death process. Since \begin{align*} \mathrm{P\space }(T\le y)=\mathrm{E\space }(z^{\eta(y)})\Big\vert_{z=1},\qquad \mathrm{P\space }(T_0\le y)=\mathrm{E\space }(z^{\eta(y)})\Big\vert_{z=0}, \end{align*} in accordance with the above mentioned formulas for $\mathrm{E\space }(z^{\eta(y)})$, we get the following marginal distributions \begin{align*}
\mathrm{P\space }(T\le y)&=\frac{\sqrt{1+cy^2}-1}{(1+\sqrt{1+c})y}\cdot1_{\{0\le y<1\}}+\Big(1-\frac{2}{(1+\sqrt{1+c})y}\Big)\cdot1_{\{y\ge 1\}},\\
\mathrm{P\space }(T_0\le y)&=\frac{y-1}{y}\cdot1_{\{y\ge 1\}}. \end{align*} The distribution of $T_0$ is free from the parameter $c$ and has the Pareto probability density function \[f_0(y)=y^{-2}1_{\{y>1\}}.\] In the special case \eqref{d0}, that is when \eqref{d} holds with $d=0$, we have $c=0$ and $\mathrm{P\space }(T=T_0)=1$. If $d>0$, then $T\le T_0$, and the distribution of $T$ has the following probability density function \[ f(y)=\left\{ \begin{array}{llr} \frac{1}{(1+\sqrt{1+c})y^2} (1-\frac{1}{\sqrt{1+cy^2}})& \text{for} & 0\le y<1, \\
\frac{2}{(1+\sqrt{1+c})y^2} & \text{for} & y\ge1, \end{array} \right. \] having a positive jump at $y=1$ of size $f(1)-f(1-)=(1+c)^{-1/2}$. Observe that $\frac{f(1-)}{f(1)}\to\frac{1}{2}$ as $c\to\infty$.
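As a numerical cross-check of ours (not from the paper), the trapezoid rule applied to $f$ reproduces the distribution function of $T$, and the jump of $f$ at $y=1$ equals $(1+c)^{-1/2}$; we take $c=5$, as in the left panel of Figure \ref{trump}.

```python
# Numerical cross-check (ours) of the law of T for c = 5: the trapezoid
# rule applied to the density f recovers the distribution function, and
# the jump of f at y = 1 equals (1 + c)^(-1/2).
from math import sqrt

c = 5.0
k = 1.0 + sqrt(1.0 + c)                # the recurring constant 1 + sqrt(1+c)

def f(y):                              # density of T
    if y < 1.0:
        return (1.0 - 1.0 / sqrt(1.0 + c * y * y)) / (k * y * y)
    return 2.0 / (k * y * y)

def F(y):                              # distribution function P(T <= y)
    if y < 1.0:
        return (sqrt(1.0 + c * y * y) - 1.0) / (k * y)
    return 1.0 - 2.0 / (k * y)

def trapezoid(g, lo, hi, n=20000):
    h = (hi - lo) / n
    return h * (sum(g(lo + i * h) for i in range(1, n)) + (g(lo) + g(hi)) / 2)

mass_left = trapezoid(f, 1e-9, 1.0 - 1e-9)    # P(T < 1)
mass_right = trapezoid(f, 1.0, 200.0)         # P(1 <= T <= 200)
jump = f(1.0) - f(1.0 - 1e-9)                 # should be (1 + c)^(-1/2)
```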
Intuitively, the limiting pure death process counts the long-living individuals in the GWO-process, that is, those individuals whose life length is of order $t$. These long-living individuals may have descendants; however, none of them lives long enough to be detected by the finite-dimensional distributions at the relevant time scale, see Lemma 2 below. Theorem \ref{thL} suggests a new perspective on Vatutin's dichotomy, see \cite{V79}, claiming that the long term survival of a critical age-dependent branching process is due to either a large number of short-living individuals or a small number of long-living individuals.
In terms of the random times $T\le T_0$, Vatutin's dichotomy discriminates between two possibilities: if $T>1$, then $\eta(1)=\infty$, meaning that the GWO-process has survived due to a large number of individuals, while if $T\le 1<T_0$, then $1\le \eta(1)<\infty$ meaning that the GWO-process has survived due to a small number of individuals.
\section{Proof of \ $\boldsymbol{tQ(t)\to h}$}\label{out}
This section deals with the survival probability of the critical GWO-process $$Q(t)=1-P(t),\quad P(t):=\mathrm{P\space }(Z(t)=0).$$ By its definition, the GWO-process can be represented as the sum \begin{equation}\label{CD}
Z(t)=1_{\{L>t\}}+\sum\nolimits_{j=1}^{N} Z_j(t-\tau_j),\quad t=0,1,\ldots, \end{equation} involving $N$ independent daughter processes $Z_j(\cdot)$ generated by the founder individual at the birth times $\tau_j$, $j=1,\ldots,N$ (here it is assumed that $Z_j(t)=0$ for all negative $t$). The branching property \eqref{CD} implies the relation \[ 1_{\{Z(t)=0\}}=1_{\{L\le t\}}\prod\nolimits_{j=1}^{N} 1_{\{Z_j (t-\tau_j)=0\}},\] saying that the GWO-process goes extinct by time $t$ if, on the one hand, the founder is dead at time $t$ and, on the other, all daughter processes are extinct by time $t$. After taking expectations of both sides, we can write \begin{equation}\label{ejp} P(t)=\mathrm{E\space }\Big(\prod\nolimits_{j=1}^{N}P(t-\tau_j);L\le t\Big). \end{equation} As shown next, this non-linear equation for $P(\cdot)$ entails the asymptotic formula \eqref{ad} under conditions \eqref{ir}, \eqref{d}, and $a<\infty$.
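In the GW special case ($L\equiv1$, $\tau_j\equiv1$), equation \eqref{ejp} reduces to $P(t)=f(P(t-1))$, where $f$ is the generating function of $N$. For geometric offspring $\mathrm{P\space }(N=k)=2^{-k-1}$, $k\ge0$ (so that $\mathrm{E\space }(N)=1$, $b=1$ and, with $a=1$, $d=0$, $h=1$), one has $f(s)=1/(2-s)$ and the recursion gives $P(t)=t/(t+1)$ exactly, illustrating $tQ(t)\to h$. A quick check of ours:

```python
# Check (ours) of the extinction recursion P(t) = f(P(t-1)) in the GW
# special case L = 1, tau = 1, with offspring pgf f(s) = 1/(2 - s)
# (geometric offspring, mean 1, b = Var(N)/2 = 1): then P(t) = t/(t+1)
# exactly, and t*Q(t) -> h with h = 1.

P = [0.0]                     # P(0) = 0: the founder is alive at time 0
for t in range(1, 2001):
    P.append(1.0 / (2.0 - P[t - 1]))

Q = [1.0 - p for p in P]      # survival probabilities Q(t)
```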
\subsection{Outline of the proof of \eqref{ad}}\label{ou}
We start by stating four lemmas and two propositions. Let
\begin{align} \Phi(z)&:=\mathrm{E\space }((1-z)^ N-1+Nz), \label{AL}\\ W(t)&:=(1-ht^{-1})^{N}+Nht^{-1}-\sum\nolimits_{j=1}^{N}Q(t-\tau_j)-\prod\nolimits_{j=1}^{N} P(t-\tau_j), \label{Wt}\\ D(u,t)&:=\mathrm{E\space }\Big(1-\prod\nolimits_{j=1}^{N}P(t-\tau_j);\,u<L\le t\Big)+\mathrm{E\space }\Big((1-ht^{-1})^{N} -1+Nht^{-1};L> u\Big), \label{Dut}\\ \mathrm{E\space }_u(X)&:=\mathrm{E\space }(X;L\le u ),\label{Et} \end{align} where $0\le z\le 1$, $u>0$, $t\ge h$, and $X$ is an arbitrary random variable.
\begin{lemma}\label{fQd} Given \eqref{AL}, \eqref{Wt}, \eqref{Dut}, and \eqref{Et}, assume that $0< u\le t$ and $t\ge h$. Then
\begin{align*} \Phi(ht^{-1})= \mathrm{P\space }(L> t)+\mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N}Q(t-\tau_j)\Big)-Q(t)+\mathrm{E\space }_u(W(t))+D(u,t). \end{align*} \end{lemma}
\begin{lemma}\label{L3} If \eqref{ir} and \eqref{d} hold, then $\mathrm{E\space }(N;L>ty)=o(t^{-1})$ as $t\to\infty$ for any fixed $y>0$. \end{lemma}
\begin{lemma}\label{L2}
If \eqref{ir}, \eqref{d}, and $a<\infty$ hold, then for any fixed $0<y<1$, \begin{align*} \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{1}{t-\tau_j}-\frac{1}{t}\Big)\Big)\sim at^{-2},\quad t\to\infty. \end{align*} \end{lemma}
\begin{lemma}\label{L4} Let $k\ge1$. If $0\le f_j,g_j\le 1$ for $ j=1,\ldots,k$, then \[ \prod\nolimits_{j=1}^k(1-g_j)-\prod\nolimits_{j=1}^k(1-f_j)=\sum\nolimits_{j=1}^k (f_j-g_j)r_j, \] where $0\le r_j\le1$ and \begin{align*} 1-r_j=\sum\nolimits_{i=1}^{j-1}g_i+\sum\nolimits_{i=j+1}^{k}f_i-R_j, \end{align*} for some $R_j\ge0$. If, moreover, $f_j\le q$ and $g_j\le q$ for some $q>0$, then $$1- r_j\le(k-1)q,\qquad R_j\le kq,\qquad R_j\le k^2q^2.$$ \end{lemma}
\begin{proposition}\label{Lx}
If \eqref{ir}, \eqref{d}, and $a<\infty$ hold, then $\limsup_{t\to\infty} tQ(t)<\infty$.
\end{proposition}
\begin{proposition}\label{Ly}
If \eqref{ir}, \eqref{d}, and $a<\infty$ hold, then $\liminf_{t\to\infty} tQ(t)>0$.
\end{proposition}
According to these two propositions, there exists a triplet of positive numbers $(q_1,q_2,t_0)$ such that \begin{equation}\label{ca} q_1\le tQ(t)\le q_2,\quad t\ge t_0,\quad 0<q_1<h<q_2<\infty. \end{equation} The claim $tQ(t)\to h$ is derived using \eqref{ca} by carefully removing asymptotically negligible terms from the relation for $Q(\cdot)$ stated in Lemma \ref{fQd}, after setting $u=ty$ with a fixed $0<y<1$, and then choosing a sufficiently small $y$. In particular, as an intermediate step, we will show that \begin{align} Q(t)= \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}Q(t-\tau_j)\Big)+\mathrm{E\space }_{ty}(W(t))-aht^{-2}+o(t^{-2}),\quad t\to\infty. \label{rys} \end{align} Then, restating our goal as $\phi(t)\to 0$ in terms of the function $\phi(t)$, defined by \begin{equation}\label{cal} Q(t)=\frac{h +\phi(t)}{t},\quad t\ge1, \end{equation} we rewrite \eqref{rys} as
\begin{align} \frac{h +\phi(t)}{t}&= \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\frac{h +\phi(t-\tau_j)}{t-\tau_j}\Big)+\mathrm{E\space }_{ty}(W(t))-aht^{-2}+o(t^{-2}),\quad t\to\infty. \label{eye} \end{align} It turns out that the three terms involving $h$, outside $W(t)$, effectively cancel each other, yielding \begin{align} \frac{\phi(t)}{t}&= \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\frac{\phi(t-\tau_j)}{t-\tau_j}+W(t)\Big)+o(t^{-2}),\quad t\to\infty.\label{luh} \end{align}
Expressing $W(t)$ by means of Lemma \ref{L4} yields \begin{align}
\phi(t)&= \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\phi(t-\tau_j) r_j(t)\frac{t}{t-\tau_j}\Big)+o(t^{-1}), \label{afo} \end{align} where $r_j(t)$ is a counterpart of $r_j$ in Lemma \ref{L4}. To derive from here the desired convergence $\phi(t)\to0$, we will adapt a clever trick from Chapter 9.1 of \cite{Seva}, which was further developed in \cite{V79} for the Bellman-Harris process, with possibly infinite $\mathrm{Var\hspace{0.2mm}}(N)$. Define a non-negative function $m(t)$ by \begin{align}
m(t):=|\phi(t)|\, \ln t,\quad t\ge 2. \label{mt} \end{align}
Multiplying \eqref{afo} by $\ln t$ and using the triangle inequality, we obtain \begin{align}
m(t)\le \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N} m(t-\tau_j)r_j(t) \frac{t\ln t}{(t-\tau_j)\ln(t-\tau_j)}\Big)+v(t),\label{elin} \end{align} where $v(t)\ge 0$ and $v(t)=o(t^{-1}\ln t)$ as $t\to\infty$. It will be shown that this leads to $m(t)=o(\ln t)$, thereby concluding the proof of \eqref{ad}.
\subsection{Proof of lemmas and propositions}\label{lemmas}
\begin{proof} {\sc of Lemma \ref{fQd}}. For $0<u\le t$, relations \eqref{ejp} and \eqref{Et} give \begin{align}\label{Qln} P(t)=\mathrm{E\space }_u\Big(\prod\nolimits_{j=1}^{N}P(t-\tau_j) \Big)+\mathrm{E\space }\Big(\prod\nolimits_{j=1}^{N}P(t-\tau_j);u<L\le t\Big). \end{align} On the other hand, for $t\ge h$, \begin{align*} \Phi(ht^{-1}) &\stackrel{\eqref{AL}}{=}\mathrm{E\space }_u\Big((1-ht^{-1})^{N}-1+Nht^{-1}\Big)+\mathrm{E\space }\Big((1-ht^{-1})^{N}-1 +Nht^{-1};L> u\Big). \end{align*} Adding the latter relation to \begin{align*} 1 &=\mathrm{P\space }(L\le u)+\mathrm{P\space }(L> t)+\mathrm{P\space }(u<L\le t), \end{align*} and subtracting \eqref{Qln} from the sum, we get \begin{align*} \Phi(ht^{-1})+Q(t)=\mathrm{E\space }_u\Big((1-ht^{-1})^{N} +Nht^{-1}-\prod\nolimits_{j=1}^{N}P(t-\tau_j)\Big)+\mathrm{P\space }(L> t)+D(u,t), \end{align*} with $D(u,t)$ defined by \eqref{Dut}. After a rearrangement, we obtain the statement of the lemma. \end{proof}
\begin{proof} {\sc of Lemma \ref{L3}}. For any fixed $\epsilon>0$,
\begin{align*} \mathrm{E\space }(N;L>t)=\mathrm{E\space }(N;N\le t\epsilon,L>t)+\mathrm{E\space }(N;N> t\epsilon,L>t)\le t\epsilon\mathrm{P\space }(L>t)+(t\epsilon)^{-1}\mathrm{E\space }(N^2;L>t). \end{align*} Thus, by \eqref{ir} and \eqref{d},
\begin{align*} \limsup_{t\to\infty} (t\mathrm{E\space }(N;L>t))\le d\epsilon, \end{align*} and letting $\epsilon\to0$ proves the claim for $y=1$; the case of an arbitrary $y>0$ follows by replacing $t$ with $ty$. \end{proof}
\begin{proof} {\sc of Lemma \ref{L2}}. For $t=1,2,\ldots$ and $y>0$, put \begin{align*} B_t(y)&:= t^2\,\mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{1}{t-\tau_j}-\frac{1}{t}\Big)\Big)-a.
\end{align*} For any $0<u<ty$, using
\[a=\mathrm{E\space }_u(\tau_1+\ldots+\tau_N)+A_u,\quad A_u:=\mathrm{E\space }(\tau_1+\ldots+\tau_N;L> u),\] we get \begin{align*} B_t(y)&= \mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N} \frac{t}{t-\tau_j}\tau_j\Big)+\mathrm{E\space }\Big(\sum\nolimits_{j=1}^{N} \frac{t}{t-\tau_j}\tau_j\,;u<L\le ty\Big)-\mathrm{E\space }_u(\tau_1+\ldots+\tau_N)-A_u \\
&=\mathrm{E\space }\Big(\sum\nolimits_{j=1}^{N}\frac{\tau_j}{1-\tau_j/t};u<L\le ty\Big)+\mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N}\frac{\tau_j^2}{t-\tau_j}\Big)-A_u. \end{align*} For the first term on the right hand side, we have $\tau_j\le L\le ty$, so that \begin{align*} \mathrm{E\space }\Big(\sum\nolimits_{j=1}^{N}\frac{\tau_j}{1-\tau_j/t};u<L\le ty\Big)\le(1-y)^{-1}A_u. \end{align*} For the second term, $\tau_j\le L\le u$ and therefore \begin{align*} \mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N}\frac{\tau_j^2}{t-\tau_j}\Big)\le\frac{u^2}{t-u}\mathrm{E\space }_u(N)\le\frac{u^2}{t-u}. \end{align*}
This yields \[-A_u\le B_t(y)\le (1-y)^{-1}A_u+\frac{u^2}{t-u},\quad 0<u<ty<t,\] implying \[-A_u\le \liminf_{t\to\infty} B_t(y)\le\limsup_{t\to\infty} B_t(y)\le (1-y)^{-1}A_u.\] Since $A_u\to0$ as $u\to\infty$, we conclude that $B_t(y)\to0$ as $t\to\infty$. \end{proof}
\begin{proof} {\sc of Lemma \ref{L4}}. Let \begin{equation*} r_j:=(1-g_1)\ldots (1-g_{j-1})(1-f_{j+1})\ldots (1-f_k),\quad 1\le j\le k. \end{equation*} Then $0\le r_j\le1$ and the first stated equality is obtained by telescopic summation of \begin{align*}
(1-g_1)\prod\nolimits_{j=2}^{k}(1-f_j)-\prod\nolimits_{j=1}^k(1-f_j)&=(f_1-g_1)r_1,\\
(1-g_1)(1-g_2)\prod\nolimits_{j=3}^{k}(1-f_j)- (1-g_1)\prod\nolimits_{j=2}^{k}(1-f_j)&=(f_2-g_2)r_2,\ldots,\\ \prod\nolimits_{j=1}^{k}(1-g_j)-\prod\nolimits_{j=1}^{k-1}(1-g_j)(1-f_k)&=(f_k-g_k)r_k. \end{align*} The second stated equality is obtained with \begin{align*} R_j&:=\sum_{i=j+1}^{k}f_i(1-(1-f_{j+1})\ldots (1-f_{i-1}))+\sum_{i=1}^{j-1}g_i(1-(1-g_1)\ldots (1-g_{i-1})(1-f_{j+1})\ldots (1-f_k)), \end{align*} by performing telescopic summation of \begin{align*}
1-(1-f_{j+1})&=f_{j+1},\\ (1-f_{j+1})-(1-f_{j+1})(1-f_{j+2})&=f_{j+2}(1-f_{j+1}),\ldots,\\
\prod\nolimits_{i=j+1}^{k-1}(1-f_i)- \prod\nolimits_{i=j+1}^{k}(1-f_i)&=f_k\prod\nolimits_{i=j+1}^{k-1}(1-f_i),\\
\prod\nolimits_{i=j+1}^{k}(1-f_i)-(1-g_1)\prod\nolimits_{i=j+1}^{k}(1-f_i)&=g_1\prod\nolimits_{i=j+1}^{k}(1-f_i),\ldots,\\
\prod\nolimits_{i=1}^{j-2}(1-g_i)\prod\nolimits_{i=j+1}^{k}(1-f_i)- \prod\nolimits_{i=1}^{j-1}(1-g_i)\prod\nolimits_{i=j+1}^{k}(1-f_i)&=g_{j-1} \prod\nolimits_{i=1}^{j-2}(1-g_i)\prod\nolimits_{i=j+1}^{k}(1-f_i). \end{align*}
By the above definition of $R_j$, we have $R_j\ge0$. Furthermore, given $f_j\le q$ and $g_j\le q$, we get \[R_j\le \sum\nolimits_{i=1}^{j-1}g_i+\sum\nolimits_{i=j+1}^{k}f_i\le (k-1)q. \] It remains to observe that \begin{align*} 1-r_j\le 1-(1-q)^{k-1}\le (k-1)q, \end{align*} and from the definition of $R_j$, \[R_j\le q\sum\nolimits_{i=1}^{k-j-1}(1-(1-q)^{i})+q\sum\nolimits_{i=1}^{j-1}(1-(1-q)^{k-j+i-1})\le q^2\sum\nolimits_{i=1}^{k-2}i\le k^2q^2.\] \end{proof}
\begin{proof} {\sc of Proposition \ref{Lx}}. By the definition of $\Phi(\cdot)$, we have $$\Phi(Q(t))+P(t)=\mathrm{E\space }_u\Big(P(t)^{N} \Big)+\mathrm{P\space }(L> u)-\mathrm{E\space }\Big(1-P(t)^ N;\,L> u\Big),$$ for any $0<u<t$. This and \eqref{Qln} yield
\begin{align} \Phi(Q(t))&=\mathrm{E\space }_u\Big(P(t)^{N}-\prod\nolimits_{j=1}^{N}P(t-\tau_j)\Big)+\mathrm{P\space }(L> u) \nonumber\\ &-\mathrm{E\space }\Big(1-P(t)^ N;\,L> u\Big)-\mathrm{E\space }\Big(\prod\nolimits_{j=1}^{N}P(t-\tau_j);u<L\le t\Big). \label{Nad} \end{align} An upper bound follows \begin{align*}
\Phi(Q(t))&\le \mathrm{E\space }_u\Big(P(t)^{N}-\prod\nolimits_{j=1}^{N}P(t-\tau_j)\Big)+\mathrm{P\space }(L> u), \end{align*} which together with Lemma \ref{L4} and monotonicity of $Q(\cdot)$ entail \begin{align}\label{13}
\Phi(Q(t))\le \mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N}(Q(t-\tau_j)-Q(t))\Big)+\mathrm{P\space }(L>u). \end{align}
Borrowing an idea from \cite{T}, suppose, on the contrary, that $$t_n:=\min\{t: tQ(t)\ge n\}$$
is finite for any natural $n$. It follows that $$Q(t_n)\ge \frac{n}{t_n},\qquad Q(t_n-u)<\frac{n}{t_n-u},\quad 1\le u\le t_n-1.$$ Putting $t=t_n$ into \eqref{13} and using monotonicity of $\Phi(\cdot)$, we find \begin{eqnarray*}
\Phi(nt_n^{-1})\le \Phi(Q(t_n))\le \mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{n}{t_n-\tau_j}-\frac{n}{t_n}\Big)\Big)+\mathrm{P\space }(L> u). \end{eqnarray*} Setting here $u=t_n/2$ and applying Lemma \ref{L2} together with \eqref{d}, we arrive at the relation $$\Phi(nt_n^{-1})=O(nt_n^{-2}),\quad n\to\infty.$$ Observe that under condition \eqref{ir}, the L'Hospital rule gives \begin{equation}\label{L1} \Phi(z)\sim bz^2,\quad z\to0. \end{equation} The resulting contradiction, $n^{2}t_n^{-2}=O(nt_n^{-2})$ as $n\to\infty$, finishes the proof of the proposition. \end{proof}
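The asymptotic relation \eqref{L1} can also be checked numerically: since $\Phi''(0)=\mathrm{E\space }(N(N-1))$, the constant in \eqref{L1} is $b=\tfrac12\mathrm{E\space }(N(N-1))$. The snippet below (our helper names \texttt{Phi} and \texttt{b\_of}; not part of the proof) confirms $\Phi(z)/(bz^2)\to1$ for a few toy offspring laws with finite support.

```python
def Phi(z, pmf):
    # Phi(z) = E((1-z)^N - 1 + N z) for N with finite-support law pmf = {n: p}
    return sum(p * ((1.0 - z) ** n - 1.0 + n * z) for n, p in pmf.items())

def b_of(pmf):
    # Phi''(0)/2 = E(N(N-1))/2, the constant b in (L1)
    return 0.5 * sum(p * n * (n - 1) for n, p in pmf.items())

for pmf in ({0: 0.25, 1: 0.5, 2: 0.25}, {0: 0.5, 3: 0.5}, {1: 0.9, 5: 0.1}):
    b = b_of(pmf)
    z = 1e-4
    # Phi(z) ~ b z^2 as z -> 0
    assert abs(Phi(z, pmf) / (b * z * z) - 1.0) < 1e-3
```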
\begin{proof} {\sc of Proposition \ref{Ly}}. Relation \eqref{Nad} implies \begin{align*}
\Phi(Q(t))\ge \mathrm{E\space }_u\Big(P(t)^{N}-\prod\nolimits_{j=1}^{N}P(t-\tau_j)\Big)-\mathrm{E\space }\Big(1-P(t)^ N;\,L> u\Big). \end{align*} By Lemma \ref{L4}, \begin{align*} P(t)^{N}-\prod\nolimits_{j=1}^{N}P(t-\tau_j)= \sum_{j=1}^{N}(Q(t-\tau_j)-Q(t))r_j^*(t), \end{align*} where $0\le r_j^*(t)\le 1$ is a counterpart of term $r_j$ in Lemma \ref{L4}. Due to monotonicity of $P(\cdot)$, we have, again referring to Lemma \ref{L4}, $$1-r_j^*(t)\le (N-1)Q(t-L).$$ Thus, for $0<y<1$, \begin{align}\label{cont}
\Phi(Q(t))&\ge \mathrm{E\space }_{ty}\Big(\sum_{j=1}^{N}(Q(t-\tau_j)-Q(t))r_j^*(t) \Big)-\mathrm{E\space }\Big(1-P(t)^ N;\,L> ty\Big).
\end{align}
The assertion $\liminf_{t\to\infty} tQ(t)>0$ is proven by contradiction. Assume that $\liminf_{t\to\infty} tQ(t)=0$, so that $$t_n:=\min\{t: tQ(t)\le n^{-1}\}$$ is finite for any natural $n$. Plugging $t=t_n$ in \eqref{cont} and using $$Q(t_n)\le \frac{1}{nt_n},\quad Q(t_n-u)-Q(t_n)\ge \frac{1}{n(t_n-u)}-\frac{1}{nt_n},\quad 1\le u\le t_n-1,$$ we get $$\Phi\Big(\frac{1}{nt_n}\Big)\ge n^{-1}\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{1}{t_n-\tau_j}-\frac{1}{t_n}\Big)r_j^*(t_n)\Big)-\frac{1}{nt_n}\mathrm{E\space }(N;\,L> t_ny).$$ Given $L\le ty$, we have \begin{align*} 1-r_j^*(t)\le NQ(t(1-y))\le N\frac{q_2}{t(1-y)},
\end{align*}
where the second inequality is based on the already proven part of \eqref{ca}. Therefore, $$\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{1}{t_n-\tau_j}-\frac{1}{t_n}\Big)(1-r_j^*(t_n))\Big)\le \frac{q_2y}{t_n^2(1-y)^2}\mathrm{E\space }(N^2),$$ and we derive \begin{align*}
nt_n^2\Phi(\tfrac{1}{nt_n})&\ge t_n^2\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{1}{t_n-\tau_j}-\frac{1}{t_n}\Big)\Big)
-\frac{\mathrm{E\space }(N^2)q_2y}{(1-y)^2}-t_n\mathrm{E\space }(N;\,L> t_ny). \end{align*} Sending $n\to\infty$ and applying \eqref{L1}, Lemma \ref{L3}, and Lemma \ref{L2}, we arrive at the inequality $$0\ge a-yq_2\mathrm{E\space }(N^2)(1-y)^{-2},\quad 0<y<1,$$ which is false for sufficiently small $y$. \end{proof}
\subsection{Proof of \eqref{luh} and \eqref{afo}}\label{end}
Fix an arbitrary $0<y<1$. Lemma \ref{fQd} with $u=ty$ gives \begin{align} \Phi(h t^{-1})= \mathrm{P\space }(L> t)+\mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}Q(t-\tau_j)\Big)-Q(t)+\mathrm{E\space }_{ty}(W(t))+D(ty,t). \label{mam} \end{align}
Let us show that \begin{align} D(ty,t)=o(t^{-2}),\quad t\to\infty. \label{cov} \end{align} Using Lemma \ref{L3} and \eqref{ca}, we find that for an arbitrarily small $\epsilon>0$, \[\mathrm{E\space }\Big(1-\prod\nolimits_{j=1}^{N}P(t-\tau_j);\,ty<L\le t(1-\epsilon)\Big)=o(t^{-2}),\quad t\to\infty.\] On the other hand,
\begin{align*} \mathrm{E\space }\Big(1-\prod\nolimits_{j=1}^{N}P(t-\tau_j);\,t(1-\epsilon)<L\le t\Big)\le \mathrm{P\space }(t(1-\epsilon)<L\le t), \end{align*} so that in view of \eqref{d}, \[\mathrm{E\space }\Big(1-\prod\nolimits_{j=1}^{N}P(t-\tau_j);\,ty<L\le t\Big)=o(t^{-2}),\quad t\to\infty.\] This, \eqref{Dut} and Lemma \ref{L3} entail \eqref{cov}.
Observe that, by the definition of $h$, \begin{equation}\label{stop} bh^2=ah+d. \end{equation} Combining \eqref{mam}, \eqref{cov}, and $$\mathrm{P\space }(L> t)-\Phi(h t^{-1})\stackrel{\eqref{d}\eqref{L1}}{=}dt^{-2}-bh^2t^{-2}+o(t^{-2})\stackrel{\eqref{stop}}{=}-aht^{-2}+o(t^{-2}),\quad t\to\infty,$$ we derive \eqref{rys}, which in turn gives \eqref{eye}. The latter implies \eqref{luh} since by Lemmas \ref{L2} and \ref{L3}, \[ \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\frac{h }{t-\tau_j}\Big)-\frac{h}{t}=\mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{h }{t-\tau_j}-\frac{h}{t}\Big)\Big) -ht^{-1}\mathrm{E\space }(N;L> ty)=aht^{-2}+o(t^{-2}). \]
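Relation \eqref{stop} and the bound $bh\ge a$ used in Section \ref{end} depend only on $h$ being the positive root of the quadratic $bh^2=ah+d$; a mechanical check over random positive parameters (our helper \texttt{h\_root}; not part of the proof):

```python
import math
import random

def h_root(a, b, d):
    # positive root of the quadratic (stop): b h^2 = a h + d
    return (a + math.sqrt(a * a + 4.0 * b * d)) / (2.0 * b)

random.seed(2)
for _ in range(200):
    a = random.uniform(0.1, 5.0)
    b = random.uniform(0.1, 5.0)
    d = random.uniform(0.1, 5.0)
    h = h_root(a, b, d)
    scale = max(1.0, b * h * h)
    assert abs(b * h * h - (a * h + d)) < 1e-9 * scale   # (stop)
    assert abs((d - b * h * h) + a * h) < 1e-9 * scale   # d - b h^2 = -a h
    assert b * h >= a                                    # b h = a + d/h >= a
```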
Turning to the proof of \eqref{afo}, observe that the random variable $$ W(t)=(1-h t^{-1})^{N}-\prod\nolimits_{j=1}^{N}\Big(1-\frac{h +\phi(t-\tau_j)}{t-\tau_j}\Big)+\sum\nolimits_{j=1}^{N}\Big(\frac{h }{t}-\frac{h +\phi(t-\tau_j)}{t-\tau_j}\Big), $$ can be represented in terms of Lemma \ref{L4} as $$ W(t)=\prod\nolimits_{j=1}^{N}(1-f_j(t))-\prod\nolimits_{j=1}^{N}(1-g_j(t))+\sum\nolimits_{j=1}^{N}(f_j(t)-g_j(t))=\sum\nolimits_{j=1}^{N}(1-r_j(t))(f_j(t)-g_j(t)), $$ by assigning \begin{align}\label{sal} f_j(t):=h t^{-1},\quad g_j(t):=\frac{h +\phi(t-\tau_j)}{t-\tau_j}. \end{align} Here $0\le r_j(t)\le 1$ and for sufficiently large $t$, \begin{align}\label{stal} 1-r_j(t)\stackrel{ \eqref{ca}}{\le} Nq_2t^{-1}. \end{align} After plugging into \eqref{luh} the expression $$ W(t)=\sum\nolimits_{j=1}^{N}\Big(\frac{h }{t}-\frac{h }{t-\tau_j}\Big)(1-r_j(t))-\sum\nolimits_{j=1}^{N}\frac{\phi(t-\tau_j)}{t-\tau_j}(1-r_j(t)), $$ we get \begin{align*} \frac{\phi(t)}{t}&= \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\frac{\phi(t-\tau_j)}{t-\tau_j}r_j(t)\Big)-\mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{h }{t-\tau_j}-\frac{h}{t}\Big)(1-r_j(t) )\Big)+o(t^{-2}),\quad t\to\infty. \end{align*} The latter expectation is non-negative, and for an arbitrary $\epsilon>0$, it has the following upper bound \begin{align*}
\mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{h }{t-\tau_j}-\frac{h}{t}\Big)(1-r_j(t) )\Big) \stackrel{ \eqref{stal}}{\le} q_2\epsilon\mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\Big(\frac{h }{t-\tau_j}-\frac{h}{t} \Big)\Big)+\frac{q_2h}{(1-y)t^2}\mathrm{E\space }(N^2;N> t\epsilon). \end{align*} Thus, in view of Lemma \ref{L2}, \begin{align*} \frac{\phi(t)}{t}&= \mathrm{E\space }_{ty}\Big(\sum\nolimits_{j=1}^{N}\frac{\phi(t-\tau_j)}{t-\tau_j}r_j(t)\Big)+o(t^{-2}),\quad t\to\infty. \end{align*} Multiplying this relation by $t$, we arrive at \eqref{afo}.
\subsection{Proof of $\phi(t)\to 0$}\label{phiend}
Recall \eqref{mt}. If the non-decreasing function $$M(t):=\max_{1\le j\le t} m(j)$$ is bounded from above, then $\phi(t)=O(\frac{1}{\ln t})$ proving that $\phi(t)\to 0$ as $t\to\infty$. If $M(t)\to\infty$ as $t\to\infty$, then there is an integer-valued sequence $0<t_1<t_2<\ldots,$ such that the sequence $M_n:=M(t_n)$ is strictly increasing and converges to infinity. In this case, \begin{equation}\label{liv} m(t)\le M_{n-1}<M_n,\quad 1\le t< t_n,\quad m(t_n)=M_n,\quad n\ge1. \end{equation}
Since $|\phi(t)|\le \frac{M_{n}}{\ln t_{n}}$ for $t_n\le t<t_{n+1}$, to finish the proof of $\phi(t)\to 0$, it remains to verify that \begin{equation}\label{dog}
M_{n}=o(\ln t_{n}),\quad n\to\infty. \end{equation}
Fix an arbitrary $y\in(0,1)$. Putting $t=t_n$ in \eqref{elin} and using \eqref{liv}, we find \begin{align*} M_n\le M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}r_j(t_n)\frac{t_n\ln t_n}{(t_n-\tau_j)\ln(t_n-\tau_j)}\Big)+(t_n^{-1}\ln t_n)o_n. \end{align*} Here and elsewhere, $o_n$ stands for a non-negative sequence such that $o_n\to0$ as $n\to\infty$. In different formulas, the sign $o_n$ represents different such sequences. Since $$ 0\le \frac{t\ln t}{(t-u)\ln (t-u)}-1\le \frac{u(1+\ln t)}{(t-u)\ln (t-u)},\quad 0\le u< t-1, $$ and $r_j(t_n)\in[0,1]$, it follows that \begin{align*} M_n-M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}r_j(t_n)\Big)&\le M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\frac{\tau_j(1+\ln t_n)}{t_n(1-y)\ln (t_n(1-y))}\Big)+(t_n^{-1}\ln t_n)o_n. \end{align*} Recalling that $a=\mathrm{E\space }(\sum_{j=1}^{N}\tau_j)$, observe that \begin{align*} \mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\frac{\tau_j(1+\ln t_n)}{t_n(1-y)\ln (t_n(1-y))}\Big)\le \frac{a(1+\ln t_n)}{t_n(1-y)\ln (t_n(1-y))} = (a(1-y)^{-1}+o_n)t_n^{-1}. \end{align*} Combining the last two relations, we conclude \begin{align}\label{alt} M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}(1-r_j(t_n))\Big)&\le a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n. \end{align}
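The elementary inequality for the ratio $\frac{t\ln t}{(t-u)\ln(t-u)}$ used above follows from the mean value theorem applied to $x\mapsto x\ln x$; a quick numerical confirmation over a range of $t$ and $u$ (our helper \texttt{excess}; not part of the proof):

```python
import math

def excess(t, u):
    # t ln t / ((t-u) ln(t-u)) - 1, for 0 <= u < t - 1
    return t * math.log(t) / ((t - u) * math.log(t - u)) - 1.0

for t in (10.0, 100.0, 1e4):
    for frac in (0.0, 0.1, 0.5, 0.9):
        u = frac * (t - 2.0)          # keeps t - u >= 2
        val = excess(t, u)
        bound = u * (1.0 + math.log(t)) / ((t - u) * math.log(t - u))
        # 0 <= excess <= u (1 + ln t) / ((t-u) ln(t-u))
        assert -1e-12 <= val <= bound + 1e-12
```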
Now it is time to unpack the term $r_j(t)$. By Lemma \ref{L4} with \eqref{sal},
$$ 1-r_j(t)=\sum_{i=1}^{j-1}\frac{h +\phi(t-\tau_i)}{t-\tau_i}+(N-j)\frac{h }{t}-R_j(t), $$ where provided $\tau_j\le ty$, $$ 0\le R_j(t)\le Nq_2t^{-1}(1-y)^{-1},\quad R_j(t)\le N^2q_2^2t^{-2}(1-y)^{-2},\quad t>t^*, $$ for a sufficiently large $t^*$. This allows us to rewrite \eqref{alt} in the form \begin{align*} M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}&\Big(\sum_{i=1}^{j-1}\frac{h +\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-j)\frac{h }{t_n}\Big)\Big)\\ &\le M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N} R_j(t_n)\Big)+a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n. \end{align*} To estimate the last expectation, observe that if $\tau_j\le ty$, then for any $\epsilon>0$, $$ R_j(t)\le Nq_2t^{-1}(1-y)^{-1} 1_{\{N>t\epsilon\}}+ N^2q_2^2t^{-2} (1-y)^{-2}1_{\{N\le t\epsilon\}},\quad t>t^*. $$ implying that for sufficiently large $n$, $$\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}R_j(t_n)\Big) \le q_2t_n^{-1}(1-y)^{-1}\mathrm{E\space }(N^{2} ; N> t_n\epsilon)+ q_2^2\epsilon t_n^{-1}(1-y)^{-2}\mathrm{E\space }(N^2),$$ so that \begin{align*} M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\Big(\sum\nolimits_{i=1}^{j-1}\frac{h +\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-j)\frac{h }{t_n}\Big)\Big) \le a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n. \end{align*} Since \[\sum\nolimits_{j=1}^{N}\sum\nolimits_{i=1}^{j-1}\Big(\frac{h}{t_n-\tau_i}- \frac{h }{t_n}\Big)\ge0,\]
we obtain \begin{align*} M_n\mathrm{E\space }_{t_ny}\Big(\sum\nolimits_{j=1}^{N}\Big(\sum_{i=1}^{j-1}\frac{\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-1)\frac{h }{t_n}\Big)\Big) \le a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n. \end{align*}
By \eqref{cal} and \eqref{ca}, we have $\phi(t)\ge q_1-h$ for $t\ge t_0$. Thus for $\tau_j\le L\le t_ny$ and sufficiently large $n$, $$\frac{\phi(t_n-\tau_i)}{t_n-\tau_i}\stackrel{}{\ge} \frac{q_1-h}{t_n(1-y)}.$$ This gives \[\sum\nolimits_{j=1}^{N}\Big(\sum_{i=1}^{j-1}\frac{\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-1)\frac{h }{t_n}\Big)\ge \Big(h+\frac{q_1-h }{2(1-y)}\Big)t_n^{-1}N(N-1),\] which after multiplying by $t_nM_n$ and taking expectations, yields \begin{align*} \Big(h+\frac{q_1-h }{2(1-y)}\Big)M_n\mathrm{E\space }_{t_ny}(N(N-1)) \le a(1-y)^{-1}M_n +(M_n+\ln t_n)o_n. \end{align*} Finally, since $$ \mathrm{E\space }_{t_ny}(N(N-1))\to2b,\quad n\to\infty,$$ we derive that for any $0<\epsilon<y<1$, there is a finite $n_\epsilon$ such that for all $n>n_\epsilon$, $$M_n\Big(2bh(1-y)+bq_1-bh-a-\epsilon\Big) \le \epsilon\ln t_n.$$
By \eqref{stop}, we have $bh\ge a$, and therefore, $$2bh(1-y)+bq_1-bh-a-\epsilon\ge bq_1-2bhy-y.$$ Thus, choosing $y=y_0$ such that $bq_1-2bhy_0-y_0=\frac{bq_1}{2}$, we see that $$\limsup_{n\to\infty}\frac{M_n}{\ln t_n} \le \frac{2\epsilon}{bq_1},$$ which entails \eqref{dog} as $\epsilon\to0$, concluding the proof of $\phi(t)\to 0$.
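The equation for $y_0$ is linear and admits the explicit solution $y_0=\frac{bq_1}{2(2bh+1)}$, which is automatically in $(0,1)$ for the relevant parameter ranges here; a direct check on sample values (illustrative numbers, not from the paper):

```python
# Sample positive values with q1 < h, as in (ca); names as in the text.
b, h, q1 = 1.3, 2.0, 0.4
y0 = b * q1 / (2.0 * (2.0 * b * h + 1.0))
assert 0.0 < y0 < 1.0
# y0 solves b q1 - 2 b h y - y = b q1 / 2
assert abs((b * q1 - 2.0 * b * h * y0 - y0) - b * q1 / 2.0) < 1e-12
```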
\section{Proof of Theorem \ref{thL}}\label{Lp1} We will use the following notational agreements for the $k$-dimensional probability generating function \[\mathrm{E\space }(z_1^{Z(t_1)}\cdots z_k^{Z(t_k)})=\sum_{i_1=0}^\infty\ldots\sum_{i_k=0}^\infty\mathrm{P\space }(Z(t_1)=i_1,\ldots, Z(t_k)=i_k)z_1^{i_1}\cdots z_k^{i_k},\] with $0< t_1\le \ldots\le t_k$ and $z_1,\ldots,z_k\in[0,1]$. We denote \[P_k(\bar t,\bar z):=P_k(t_1,\ldots,t_{k};z_1,\ldots,z_{k}):=\mathrm{E\space }(z_1^{Z(t_1)}\cdots z_k^{Z(t_k)}),\] and write for $t\ge0$, \[P_k(t+\bar t,\bar z):=P_k(t+t_1,\ldots,t+t_{k};z_1,\ldots,z_{k}).\] Moreover, for $0< y_1<\ldots<y_k$, we write \[P_k(t\bar y,\bar z):=P_k(ty_1,\ldots,ty_{k};z_1,\ldots,z_{k}),\] and assuming $0< y_1<\ldots<y_k<1$, \[P_k^*(t,\bar y,\bar z):=\mathrm{E\space }(z_1^{Z(ty_1)}\cdots z_{k}^{Z(ty_{k})};Z(t)=0)=P_{k+1}(ty_1,\ldots,ty_k,t;z_1,\ldots,z_k,0). \]
These notational agreements will be similarly applied to the functions \begin{equation}\label{Q*} Q_k(\bar t,\bar z):=1-P_k(\bar t,\bar z),\quad Q_k^*(t,\bar y,\bar z):=1-P_k^*(t,\bar y,\bar z). \end{equation} Our special interest is in the function \begin{equation}\label{krik}
Q_k(t):=Q_k(t+\bar t,\bar z),\quad 0= t_1< \ldots< t_k, \quad z_1,\ldots,z_k\in[0,1), \end{equation} to be viewed as a counterpart of the function $Q(t)$ treated by Theorem 2. Recalling the compound parameters $h=\frac{a+\sqrt{a^2+4bd}}{2b}$ and $c=4bda^{-2}$, put \begin{equation}\label{hk} h_k:=h\frac{1+\sqrt{1+cg_k}}{1+\sqrt{1+c}},\quad g_k:= g_k(\bar y,\bar z):=\sum_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})y_{i}^{-2}. \end{equation} The key step of the proof of Theorem 1 is to show that for any given $1=y_1<y_2<\ldots<y_k$, \begin{equation}\label{dm} tQ_k(t)\to h_k,\quad t_i:=t(y_i-1), \quad i=1,\ldots,k,\quad t\to\infty. \end{equation} This is done following the steps of our proof of $tQ(t)\to h$ given in Section \ref{out}.
Unlike $Q(t)$, the function $Q_k(t)$ is not monotone over $t$. However, monotonicity of $Q(t)$ was used in the proof of Theorem 2 only to establish \eqref{ca}. The corresponding statement $$ 0<q_1\le tQ_k(t)\le q_2<\infty,\quad t\ge t_0, $$ follows from the bounds $(1-z_1)Q(t)\le Q_k(t)\le Q(t)$, which hold due to monotonicity of the underlying generating functions over $z_1,\ldots,z_{k}$. Indeed, \[Q_k(t)\le Q_k(t, t+t_2,\ldots,t+t_{k};0,\ldots,0)= Q(t),\] and on the other hand, \[Q_k(t)= Q_k(t,t+t_2,\ldots,t+t_{k};z_1,\ldots,z_k)= \mathrm{E\space }(1-z_1^{Z(t)}z_2^{Z(t+t_2)}\cdots z_k^{Z(t+t_k)})\ge \mathrm{E\space }(1-z_1^{Z(t)}),\] where \[ \mathrm{E\space }(1-z_1^{Z(t)})\ge \mathrm{E\space }(1-z_1^{Z(t)};Z(t)\ge1)\ge (1-z_1)Q(t).\]
\subsection{Proof of \ $\boldsymbol{tQ_k(t)\to h_k}$}\label{Lup}
The branching property \eqref{CD} of the GWO-process gives \[ \prod_{i=1}^{k} z_i^{Z(t_i)}=\prod_{i=1}^{k} z_i^{1_{\{L>t_i\}}}\prod\nolimits_{j=1}^{N} z_i^{Z_j(t_i-\tau_j)}.\] Given $0< t_1<\ldots<t_k< t_{k+1}=\infty$, we use \begin{align*} \prod_{i=1}^{k} z_i^{1_{\{L>t_i\}}}&=1_{\{L\le t_1\}}+\sum_{i=1}^{k}z_1\cdots z_{i}1_{\{t_{i}<L\le t_{i+1}\}}, \end{align*} to deduce the following counterpart of \eqref{ejp} \begin{align*} P_k(\bar t,\bar z)&=\mathrm{E\space }_{t_1}\Big(\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z)\Big)+\sum_{i=1}^{k}z_1\cdots z_{i}\mathrm{E\space }\Big(\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z); t_{i}<L\le t_{i+1}\Big), \end{align*} which entails \begin{align}\label{apes} P_k(\bar t,\bar z)&=\mathrm{E\space }_{t_1}\Big(\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z)\Big)+\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{P\space }(t_{i}<L\le t_{i+1}) \nonumber\\ &-\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{E\space }\Big(1-\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z); t_{i}<L\le t_{i+1}\Big). \end{align} Using this relation we establish the following counterpart of Lemma \ref{fQd}.
\begin{lemma}\label{fad} Consider the function \eqref{krik} and put $P_k(t):=1-Q_k(t)=P_k(t+\bar t,\bar z)$. For $0<u<t$, the relation
\begin{align} \Phi(h_k t^{-1})&= \mathrm{P\space }(L> t)-\sum_{i=1}^{k}z_1\cdots z_{i}\mathrm{P\space }(t+t_i<L\le t+t_{i+1}) \nonumber \\ &+\mathrm{E\space }_u\Big(\sum\nolimits_{j=1}^{N}Q_k(t-\tau_j)\Big)-Q_k(t)+\mathrm{E\space }_u(W_k(t))+D_k(u,t), \label{arr} \end{align} holds with $t_{k+1}=\infty$,
\begin{align} \label{tWt} W_k(t):=(1-h_k t^{-1})^{N}+Nh_k t^{-1}-\sum\nolimits_{j=1}^{N}Q_k(t-\tau_j)-\prod\nolimits_{j=1}^{N}P_k(t-\tau_j) \end{align} and
\begin{align} \label{tDut} D_k(u,t):=\ &\mathrm{E\space }\Big(1-\prod\nolimits_{j=1}^{N}P_k(t-\tau_j);u<L\le t\Big)+\mathrm{E\space }\Big((1-h_k t^{-1})^{N} -1+Nh_k t^{-1};L> u\Big) \nonumber\\ &+\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{E\space }\Big(1-\prod_{j=1}^{N}P_k(t-\tau_j); t+t_{i}<L\le t+t_{i+1}\Big). \end{align}
\end{lemma}
\begin{proof} According to \eqref{apes}, \begin{align*} P_k(t)&=\mathrm{E\space }_u\Big(\prod_{j=1}^{N}P_k(t-\tau_j)\Big)+\mathrm{E\space }\Big(\prod\nolimits_{j=1}^{N}P_k(t-\tau_j);u<L\le t\Big) \\ &+\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{P\space }(t+t_{i}<L\le t+t_{i+1})-\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{E\space }\Big(1-\prod_{j=1}^{N}P_k(t-\tau_j); t+t_{i}<L\le t+t_{i+1}\Big). \end{align*} By the definition of $\Phi(\cdot)$, \begin{align*} \Phi(h_k t^{-1})+1 &=\mathrm{E\space }_u\Big((1-h_k t^{-1})^{N}+Nh_k t^{-1}\Big)+\mathrm{P\space }(L> t)\\ &+\mathrm{E\space }\Big((1-h_k t^{-1})^{N} -1+Nh_k t^{-1};L> u\Big)+\mathrm{P\space }(u<L\le t), \end{align*} and after subtracting the two last equations, we get \begin{align*} \Phi(h_k t^{-1})+Q_k(t)&=\mathrm{E\space }_u\Big((1-h_k t^{-1})^{N} +Nh_k t^{-1}-\prod\nolimits_{j=1}^{N}P_k(t-\tau_j)\Big)+\mathrm{P\space }(L> t)\\ &-\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{P\space }(t+t_{i}<L\le t+t_{i+1})+D_k(u,t) \end{align*} with $D_k(u,t)$ satisfying \eqref{tDut}. After a rearrangement, relation \eqref{arr} follows together with \eqref{tWt}. \end{proof}
With Lemma \ref{fad} in hand, convergence \eqref{dm} is proven applying almost exactly the same argument used in the proof of $tQ(t)\to h$. An important new feature emerges due to the additional term in the asymptotic relation defining the limit $h_k$. Let $1=y_1<y_2<\ldots<y_k<y_{k+1}=\infty$. Since \begin{align*} \sum\nolimits_{i=1}^{k}z_1\cdots z_{i}\mathrm{P\space }(ty_{i}<L\le ty_{i+1})\sim d t^{-2}\sum_{i=1}^{k}z_1\cdots z_{i}(y_{i}^{-2}-y_{i+1}^{-2}), \end{align*} we see that \begin{align*} \mathrm{P\space }(L> t)-\sum\nolimits_{i=1}^{k}z_1\cdots z_{i}\mathrm{P\space }(ty_{i}<L\le ty_{i+1})\sim dg_k t^{-2}, \end{align*} where $g_k$ is defined by \eqref{hk}. Assuming $0\le z_1,\ldots,z_k<1$, we ensure that $g_k>0$, and as a result, we arrive at a counterpart of the quadratic equation \eqref{stop}, \[ bh_k^2=ah_k+dg_k, \] which gives \[ h_k=\frac{a+\sqrt{a^2+4bdg_k}}{2b}=h\frac{1+\sqrt{1+cg_k}}{1+\sqrt{1+c}},\] justifying our definition \eqref{hk}. We conclude that for $k\ge1$, \begin{equation}\label{love} \frac{Q_k(t\bar y,\bar z)}{Q(t)}\to \frac{1+\sqrt{1+c\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})y_{i}^{-2}}}{1+\sqrt{1+c}},\quad 1=y_1<\ldots< y_k,\quad 0\le z_1,\ldots,z_k<1. \end{equation}
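The equivalence of the two expressions for $h_k$, namely the root form $\frac{a+\sqrt{a^2+4bdg_k}}{2b}$ and the definition \eqref{hk}, rests on the identity $\sqrt{a^2+4bd}=a\sqrt{1+c}$; a numerical confirmation over random parameters (our helpers \texttt{h\_of} and \texttt{h\_k\_of}; not part of the proof):

```python
import math
import random

def h_of(a, b, d):
    # h = (a + sqrt(a^2 + 4 b d)) / (2 b)
    return (a + math.sqrt(a * a + 4.0 * b * d)) / (2.0 * b)

def h_k_of(a, b, d, gk):
    # h_k from (hk): h (1 + sqrt(1 + c gk)) / (1 + sqrt(1 + c)), c = 4 b d / a^2
    c = 4.0 * b * d / (a * a)
    return h_of(a, b, d) * (1.0 + math.sqrt(1.0 + c * gk)) / (1.0 + math.sqrt(1.0 + c))

random.seed(3)
for _ in range(200):
    a = random.uniform(0.1, 4.0)
    b = random.uniform(0.1, 4.0)
    d = random.uniform(0.1, 4.0)
    gk = random.uniform(0.01, 10.0)
    hk = h_k_of(a, b, d, gk)
    scale = max(1.0, b * hk * hk)
    # h_k solves the counterpart quadratic b h^2 = a h + d g_k ...
    assert abs(b * hk * hk - (a * hk + d * gk)) < 1e-8 * scale
    # ... i.e. h_k = (a + sqrt(a^2 + 4 b d g_k)) / (2 b)
    assert abs(hk - (a + math.sqrt(a * a + 4.0 * b * d * gk)) / (2.0 * b)) < 1e-9 * max(1.0, hk)

assert abs(h_k_of(1.0, 1.0, 2.0, 1.0) - h_of(1.0, 1.0, 2.0)) < 1e-12  # g_k = 1 recovers h
```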
\subsection{Conditioned generating functions}\label{Send} To finish the proof of Theorem 1, consider the generating functions conditioned on the survival of the GWO-process. Given \eqref{mansur} with $j\ge1$, we have \begin{align*}
Q(t)\mathrm{E\space }&(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0)=\mathrm{E\space }(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)};Z(t)>0)\\ &=P_k(t\bar y,\bar z)-\mathrm{E\space }(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)};Z(t)=0)\stackrel{\eqref{Q*}}{=}Q_j^*(t,\bar y,\bar z)-Q_k(t\bar y,\bar z), \end{align*} and therefore,
\[\mathrm{E\space }(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0)=\frac{Q_j^*(t,\bar y,\bar z)}{Q(t)}-\frac{Q_k(t\bar y,\bar z)}{Q(t)}.\] Similarly, if \eqref{mansur} holds with $j=0$, then
\[\mathrm{E\space }(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0)=1-\frac{Q_k(t\bar y,\bar z)}{Q(t)}.\]
Letting $t'=ty_1$, we get \[\frac{Q_k(t\bar y,\bar z)}{Q(t)}=\frac{Q_k(t',t'y_2/y_1,\ldots,t'y_k/y_1;\bar z)}{Q(t')}\frac{Q(ty_1)}{Q(t)},\] and applying relation \eqref{love}, \begin{equation*} \frac{Q_k(t\bar y,\bar z)}{Q(t)}\to \frac{1+\sqrt{1+\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i}}{(1+\sqrt{1+c})y_1}, \end{equation*} where $\Gamma_i=c({y_1}/{y_i} )^2$. On the other hand, since \[Q_j^*(t,\bar y,\bar z)=Q_{j+1}(ty_1,\ldots,ty_j,t;z_1,\ldots,z_j,0), \quad j\ge1,\] we also get \begin{equation*} \frac{Q_j^*(t,\bar y,\bar z)}{Q(t)}\to \frac{1+\sqrt{1+\sum\nolimits_{i=1}^{j}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i+cz_1\cdots z_{j}y_1^2}}{(1+\sqrt{1+c})y_1}. \end{equation*} We conclude that, as stated in Section \ref{main}, \begin{align*}
\mathrm{E\space }(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0)\to \mathrm{E\space }(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)}). \end{align*}
\end{document}
\begin{document}
\begin{center} {\bf EXTREMALITY OF CONVEX SETS WITH SOME APPLICATIONS}\\[3ex] BORIS S. MORDUKHOVICH\footnote{Corresponding author. Department of Mathematics, Wayne State University, Detroit, MI 48202, USA (boris@math.wayne.edu) and Peoples' Friendship University of Russia, Moscow 117198, Russia. Email: boris@math.wayne.edu, phone: (734)369-3675, fax: (313)577-7596. Research of this author was partly supported by the National Science Foundation under grants DMS-1007132 and DMS-1512846 and by the Air Force Office of Scientific Research under grant \#15RT0462.} and NGUYEN MAU NAM\footnote{Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, Portland, OR 97207, USA. Email: mau.nam.nguyen@pdx.edu. Research of this author was partly supported by the National Science Foundation under grant \#1411817.}.\\[3ex] {\bf Dedicated to the memory of Jonathan Michael Borwein} \end{center} \small{\bf Abstract:} In this paper we introduce an enhanced notion of extremal systems for sets in locally convex topological vector spaces and obtain efficient conditions for set extremality in the convex case. Then we apply this machinery to deriving new calculus results on intersection rules for normal cones to convex sets and on infimal convolutions of support functions.\\[1ex] \noindent {\bf Keywords:} Convex and variational analysis, extremal systems of sets, normals to convex sets, normal intersection rules, support functions, infimal convolutions
\newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Example}[Theorem]{Example} \renewcommand{\theequation}{\thesection.\arabic{equation}} \normalsize\vspace*{-0.2in}
\section{Introduction}\vspace*{-0.1in}
{\em Convex analysis} has been well recognized as an important area of mathematics with numerous applications to optimization, control, economics, and many other disciplines. We refer the reader to the fundamental monographs \cite{bc,bl,HU,r,z} and the bibliographies therein for various aspects of convex analysis and its applications. Jon Borwein, who unexpectedly passed away on August 2, 2016, made pivotal contributions to these and related fields of Applied Mathematics, among other areas of his fantastic creative activity.
Methods and constructions of convex analysis also play a decisive role in the study of nonconvex functions and sets by using certain convexification procedures. In particular, the calculus and applications of Clarke's generalized gradients for nonconvex functions \cite{c} are based on appropriate convexifications and employ techniques and results of convex analysis.
Besides this, other ideas have been developed in the study and applications of nonconvex functions, sets, and set-valued mappings in the framework of {\em variational analysis}, which employs variational/optimization principles married to perturbation and approximation techniques; see the books \cite{BZ,m-book1,RockWets-VA} for extended expositions in finite and infinite dimensions. Powerful tools, results, and applications of variational analysis have been obtained by using the {\em dual-space geometric approach} \cite{m-book1} based on the {\em extremal principle} (a geometric variational principle) for systems of sets. This approach first produces a {\em full calculus} of generalized normals to nonconvex sets and then applies it to establish comprehensive calculus rules for related subgradients of extended-real-valued functions and coderivatives of set-valued mappings. Needless to say, a well-developed calculus of generalized differentiation is an unavoidable requirement and a key to various applications.
Addressing generally nonconvex objects, results of variational analysis contain the corresponding convex facts as their particular cases. However, basic variational techniques involving limiting procedures do not take full advantage of the presence of convexity. Indeed, the major calculus results of \cite{m-book1} hold in {\em Asplund} spaces (i.e., Banach spaces where every separable subspace has a separable dual), and the {\em closedness} of sets (epigraphs for extended-real-valued functions, graphs for set-valued mappings) is a standing assumption.
The major goal of this paper is to investigate a counterpart of the variational geometric approach in the study of convex sets in locally convex topological vector (LCTV) spaces without any completeness and closedness assumptions. Our approach is based on an enhanced notion of {\em set extremality}, a global version of the corresponding local concept largely developed and applied in \cite{m-book1}, which turns out to be particularly useful in the convex setting exploited here. It allows us to obtain the basic intersection rule for normals to convex sets under a new qualification condition, as well as new calculus results for support functions of convex set intersections in general LCTV spaces. Note that these results can be used to obtain major calculus rules of generalized differentiation and Fenchel conjugates for extended-real-valued convex functions; cf.\ our previous publications \cite{bmn,bmn1} for some versions in finite dimensions.
The rest of the paper is organized as follows. In Section~2 we introduce the aforementioned version of set extremality, establish its relationships with the separation property for convex sets, and derive various extremality conditions. The obtained results are applied in Section~3 to get the normal cone representation for convex set intersections under a new qualification condition. In Section~4 this approach is employed to represent the support function of set intersections via the infimal convolution of supports to intersection components.
For simplicity of presentation we suppose, unless otherwise stated, that all the spaces under consideration are {\em normed linear} spaces. The reader can check that the results obtained below in this setting hold true in the LCTV space generality.
The notation used throughout the paper is standard in the areas of functional, convex, and variational analysis; cf.\ \cite{m-book1,r,RockWets-VA,z}. Recall that the closed ball centered at $\bar{x}$ with radius $r>0$ is denoted by $\Bbb B(\bar{x};r)$ while the closed unit ball of the space $X$ in question and its topological dual $X^*$ are denoted by $\Bbb B$ and $\Bbb B^*$, respectively, if no confusion arises. Given a convex set $\Omega\subset X$, we write $\Bbb R^+(\Omega):=\{tv\in X|\;t\in\Bbb R_+,\;v\in\Omega\}$, where $\Bbb R_+$ signifies the collection of nonnegative real numbers, and use the symbol $\overline\Omega$ for the topological closure of $\Omega$. Finally, recall the notation for the (algebraic) {\em core} of a set: \begin{equation}\label{core-def}
\mbox{\rm core}\,\Omega:=\big\{x\in\Omega\big|\;\forall\,v\in X\;\exists\,\gamma>0\;\mbox{\rm such that }\;x+tv\in\Omega\;\mbox{\rm whenever }\;|t|<\gamma\big\}. \end{equation}
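To illustrate the core construction \eqref{core-def}, we present the following elementary example.\vspace*{-0.1in}
\begin{Example}{\bf(core of convex sets).}\label{core-ex} {\rm Let $X=\Bbb R^2$ and $\Omega:=[0,1]\times\{0\}$. For any $x\in\Omega$ and $v:=(0,1)$ we have $x+tv\notin\Omega$ whenever $t\ne 0$, and hence $\mbox{\rm core}\,\Omega=\emptyset$ although the relative interior of $\Omega$ is nonempty. On the other hand, for $\Omega:=\Bbb R^2_+$ we clearly get $\mbox{\rm core}\,\Omega=\mbox{\rm int}\,\Omega=(0,\infty)^2$. Recall that the core and the interior of a convex set always agree in finite dimensions, while they may differ in infinite-dimensional normed spaces.} \end{Example}\vspace*{-0.1in}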
In what follows we deal with {\em extended-real-valued} functions $f\colon X\to\Bar{\R}:=(-\infty,\infty]$ and assume that they are {\em proper}, i.e., $\mbox{\rm dom}\, f:=\{x\in X|\;f(x)<\infty\}\ne\emptyset$.\vspace*{-0.2in}
\section{Extremal Systems of Sets} \setcounter{equation}{0}\vspace*{-0.1in}
We start this section with the definition of extremality for set systems, which is inspired by the notion of local set extremality in variational analysis (see \cite[Definition~2.1]{m-book1}) while having some special features that are beneficial for convex sets. In particular, we do not require that the sets have a common point.\vspace*{-0.1in}
\begin{Definition}{\bf(set extremality).}\label{ext-sys} We say that two nonempty sets $\Omega_1,\Omega_2\subset X$ form an {\sc extremal system} if for any $\varepsilon>0$ there exists $a\in X$ such that \begin{equation}\label{setex}
\|a\|\le\varepsilon\;\;\mbox{\rm and }\;(\Omega_1+a)\cap\Omega_2=\emptyset. \end{equation} \end{Definition}\vspace*{-0.05in}
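The next elementary example illustrates Definition~\ref{ext-sys} in the plane.\vspace*{-0.1in}
\begin{Example}{\bf(extremal and nonextremal pairs of half-planes).}\label{ex-half} {\rm Let $X=\Bbb R^2$, and let $\Omega_1:=\{(x,y)\in\Bbb R^2|\;y\le 0\}$ and $\Omega_2:=\{(x,y)\in\Bbb R^2|\;y\ge 0\}$. Given any $\varepsilon>0$, the vector $a:=(0,-\varepsilon/2)$ satisfies \eqref{setex}, since $\Omega_1+a=\{(x,y)|\;y\le-\varepsilon/2\}$ does not meet $\Omega_2$. Thus $\Omega_1,\Omega_2$ form an extremal system although $\Omega_1\cap\Omega_2=\Bbb R\times\{0\}\ne\emptyset$. In contrast, the sets $\Omega_1$ and $\widetilde\Omega_2:=\{(x,y)\in\Bbb R^2|\;y\ge-1\}$ do not form an extremal system, since $\Omega_1-\widetilde\Omega_2=\{(x,y)|\;y\le 1\}$ contains the origin in its interior.} \end{Example}\vspace*{-0.1in}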
Observe similarly to \cite{m-book1} that the notion of set extremality introduced in Definition~\ref{ext-sys} covers (global) optimal solutions to problems of constrained optimization with scalar, vector, and set-valued objectives, various equilibrium concepts arising in operations research, mechanics, and economic modeling, etc. Furthermore, set extremality naturally arises in deriving calculus rules of generalized differentiation in variational analysis. In particular, we demonstrate this below in our derivation of the normal cone intersection rule and the support function representation for convex set intersections presented in the paper.
Given a convex set $\Omega\subset X$ with $\bar{x}\in\Omega$, the {\em normal cone} to $\Omega$ at $\bar{x}$ is \begin{equation}\label{nor}
N(\bar{x};\Omega):=\big\{x^*\in X^*\big|\;\langle x^*,x-\bar{x}\rangle\le 0\;\;\mbox{\rm for all }\;x\in\Omega\big\}. \end{equation}
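In the simplest settings, the normal cone \eqref{nor} can be computed directly from the definition.\vspace*{-0.1in}
\begin{Example}{\bf(normal cone to a half-space).}\label{ex-nor} {\rm Let $\Omega:=\{x\in X|\;\langle a^*,x\rangle\le\alpha\}$ with $0\ne a^*\in X^*$ and $\alpha\in\Bbb R$. If $\bar{x}\in\Omega$ satisfies $\langle a^*,\bar{x}\rangle=\alpha$, then \eqref{nor} gives us $N(\bar{x};\Omega)=\{ta^*|\;t\ge 0\}$, while $N(\bar{x};\Omega)=\{0\}$ whenever $\langle a^*,\bar{x}\rangle<\alpha$. Similarly, for $\Omega:=\Bbb R^2_+$ and $\bar{x}=(0,0)$ we get $N(\bar{x};\Omega)=\Bbb R^2_-$.} \end{Example}\vspace*{-0.1in}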
The following underlying result establishes a useful characterization of set extremality and shows that, in the case of convex sets, extremality is closely related to, while being different from, the conventional convex separation property: \begin{equation}\label{sep} \sup_{x\in\Omega_1}\langle x^*,x\rangle\le\inf_{x\in\Omega_2}\langle x^*,x\rangle\;\;\mbox{\rm for some }\;x^*\ne 0. \end{equation} Note that if $\Omega_1, \Omega_2$ are convex sets such that $\bar{x}\in\Omega_1\cap\Omega_2$, then \eqref{sep} is equivalent to \begin{eqnarray}\label{ep} N(\bar{x};\Omega_1)\cap\big(-N(\bar{x};\Omega_2)\big)\ne\{0\}. \end{eqnarray}\vspace*{-0.35in} \begin{Theorem}{\bf(set extremality and separation).}\label{extremal principle} Let $\Omega_1,\Omega_2\subset X$ be nonempty sets. Then the following assertions are fulfilled:
{\bf(i)} The sets $\Omega_1$ and $\Omega_2$ form an extremal system if and only if $0\notin{\rm int}(\Omega_1-\Omega_2)$. Furthermore, the extremality of $\Omega_1,\Omega_2$ implies that $({\rm int}\,\Omega_1)\cap\Omega_2=\emptyset$ and likewise $({\rm int}\,\Omega_2)\cap\Omega_1=\emptyset$.
{\bf(ii)} If $\Omega_1,\Omega_2$ are convex and form an extremal system and if ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$, then the separation property \eqref{sep} holds.
{\bf (iii)} The separation property \eqref{sep} always implies the set extremality \eqref{setex}, without imposing either the convexity of $\Omega_1,\Omega_2$ or the condition ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$ as in {\rm(ii)}. \end{Theorem}\vspace*{-0.1in}
{\bf Proof.} To verify the extremality characterization in (i), suppose first that the sets $\Omega_1,\Omega_2$ form an extremal system while the condition $0\notin{\rm int}(\Omega_1-\Omega_2)$ fails. Then there is $r>0$ such that $\Bbb B(0;r)\subset\Omega_1-\Omega_2$. Put $\varepsilon:=r$ and observe that $-a\in\Omega_1-\Omega_2$ for any $a\in X$ with $\|a\|\le\varepsilon$, which gives us $(\Omega_1+a)\cap\Omega_2\ne\emptyset$ and thus contradicts \eqref{setex}. To justify the converse implication in (i), suppose that $0\notin{\rm int}(\Omega_1-\Omega_2)$. Then for any $\varepsilon>0$ we get $$ \Bbb B(0;\varepsilon)\cap\big(X\setminus(\Omega_1-\Omega_2)\big)\ne\emptyset, $$
which tells us that there is $a\in X$ such that $\|a\|\le\varepsilon$ and $-a\notin\Omega_1-\Omega_2$, so that $(\Omega_1+a)\cap\Omega_2=\emptyset$, i.e., \eqref{setex} holds. It remains to show in (i) that the extremality of $\Omega_1,\Omega_2$ yields $({\rm int}\,\Omega_1)\cap\Omega_2=\emptyset$. Assuming the contrary, take $x\in{\rm int}\,\Omega_1$ with $x\in\Omega_2$ and find $\varepsilon>0$ such that $x-a\in\Omega_1$ for any $a\in X$ with $\|a\|<\varepsilon$. This clearly contradicts \eqref{setex} and thus completes the proof of (i).
Next we verify (ii). Consider the two convex sets $\Lambda_1:=\Omega_1-\Omega_2$ and $\Lambda_2:=\{0\}$ in $X$. By the extremality of $\Omega_1,\Omega_2$ we have due to (i) that $({\rm int}\,\Lambda_1)\cap\Lambda_2=\emptyset$, where $\mbox{\rm int}\,\Lambda_1\ne\emptyset$ by the assumption in (ii). The classical separation theorem applied to $\Lambda_1,\Lambda_2$ gives us $0\ne x^*\in X^*$ with $\sup_{x\in\Omega_1-\Omega_2}\langle x^*,x\rangle\le 0$, which is clearly equivalent to \eqref{sep}. Thus assertion (ii) is justified.
To prove the final assertion (iii), take $x^*\ne 0$ from \eqref{sep} and find $c\in X$ such that $\langle x^*,c\rangle>0$. For any $\varepsilon>0$ we can select $a:=-c/k$ satisfying $\|a\|<\varepsilon$ when $k\in I\!\!N$ is sufficiently large. Let us show that \eqref{setex} holds with this vector $a$. If it is not the case, then there exists $\widehat x\in\Omega_2$ such that $\widehat x-a\in\Omega_1$. By the separation property \eqref{sep} we have $$ \langle x^*,\widehat x-a\rangle\le\sup_{x\in\Omega_1}\langle x^*,x\rangle\le\inf_{x\in\Omega_2}\langle x^*,x\rangle\le\langle x^*,\widehat x\rangle, $$ which gives us by the above construction of $a\in X$ that $$ \langle x^*,\widehat x\rangle-\langle x^*,a\rangle=\langle x^*,\widehat x\rangle+\frac{1}{k}\langle x^*,c\rangle\le\langle x^*,\widehat x\rangle, $$ and therefore $\langle x^*,c\rangle\le 0$. This contradicts the choice of $c\in X$ and hence justifies assertion (iii), which completes the proof of the theorem. $
\square$\vspace*{-0.1in}
\begin{Corollary}{\bf (sufficient conditions for extremality of convex sets).}\label{int-ext} Let $\Omega_1,\Omega_2$ be nonempty convex subsets of $X$ satisfying the conditions $\mbox{\rm int}\,\Omega_1\ne\emptyset$ and $(\mbox{\rm int}\,\Omega_1)\cap\Omega_2=\emptyset$. Then the sets $\Omega_1$ and $\Omega_2$ form an extremal system. Furthermore, we have $\mbox{\rm int}(\Omega_1-\Omega_2)\ne\emptyset$. \end{Corollary}\vspace*{-0.1in} {\bf Proof.} It is well known that the assumptions imposed in the corollary ensure the separation property \eqref{sep} for the convex sets $\Omega_1,\Omega_2$. Thus their set extremality follows from Theorem~\ref{extremal principle}(iii). To verify the last assertion of the corollary, take any $\bar{x}\in{\rm int}\,\Omega_1$ and find $r>0$ such that ${\rm int}\,\Bbb B(\bar{x};r)\subset\Omega_1$. Then for any fixed point $x\in\Omega_2$ we have $$ V:={\rm int}\,\Bbb B(\bar{x};r)-x\subset\Omega_1-\Omega_2, $$ and thus $\mbox{\rm int}(\Omega_1-\Omega_2)\ne\emptyset$ because $V$ is a nonempty open subset of $X$. $
\square$\vspace*{-0.1in}
\begin{Remark}{\bf (on the extremal principle).}\label{ext-prin} {\rm Condition \eqref{ep} is known to hold, under the name of the (exact) {\em extremal principle}, for locally extremal points of nonconvex sets. In \cite[Theorem~2.22]{m-book1} it is derived for closed subsets of Asplund spaces with the replacement of \eqref{nor} by the basic/limiting normal cone of Mordukhovich, which reduces to \eqref{nor} for convex sets. Besides the Asplund space requirement, the aforementioned result of \cite{m-book1} imposes the {\em sequential normal compactness} (SNC) assumption on one of the sets $\Omega_1,\Omega_2$. This property is satisfied for convex sets under the interiority assumption of Corollary~\ref{int-ext}; see \cite[Proposition~1.25]{m-book1}. Furthermore, in the case of closed convex sets in Banach spaces the SNC property offers significant advantages for the validity of \eqref{ep} in comparison with the interiority condition due to the SNC characterization from \cite[Theorem~1.21]{m-book1}: a closed convex set $\Omega$ with nonempty relative interior (i.e., the interior of it with respect to its span) is SNC at every $\bar{x}\in\Omega$ if and only if the closure of the span of $\Omega$ is of finite codimension. A similar characterization has been obtained in \cite[Theorem~2.5]{blm} for the more restrictive Borwein-Str\'ojwas' {\em compactly epi-Lipschitzian} (CEL) property \cite{BS} of closed convex sets in normed spaces. Note that the CEL and SNC properties may not agree even for closed convex cones in nonseparable Asplund spaces; see \cite{fm} for comprehensive results and examples.} \end{Remark}\vspace*{-0.05in}
As established in Theorem~\ref{extremal principle}(ii), the set extremality in \eqref{setex} implies the separation property \eqref{sep} and its equivalent form \eqref{ep} whenever $\bar{x}\in\Omega_1\cap\Omega_2$ under the {\em nonempty difference interior} ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$ for arbitrary convex sets $\Omega_1,\Omega_2$ in LCTV spaces. Could we relax this assumption? The next theorem shows that it can be done, for {\em closed} convex subsets of {\em Banach} spaces, in both {\em approximate} and {\em exact} forms of the {\em convex extremal principle}. Furthermore, the results obtained therein justify that both of these forms are {\em characterizations} of the convex set extremality under the SNC property of one of the sets involved without imposing any interiority assumption on them or their difference.
To proceed, recall first the definition of the SNC property used below for convex sets; compare it with a nonconvex counterpart from \cite[Definition~1.20]{m-book1}. A subset $\Omega\subset X$ of a Banach space is {\em SNC} at $\bar{x}\in\Omega$ if for any sequence $\{(x_k,x^*_k)\}_{k\in I\!\!N}\subset X\times X^*$ we have \begin{equation}\label{snc}
\big[x^*_k\in N(x_k;\Omega),\;x_k\in\Omega,\;x_k\to\bar{x},\;x^*_k\stackrel{w^*}{\to}0\big]\Longrightarrow\|x^*_k\|\to 0\;\;\mbox{\rm as }\;k\to\infty, \end{equation} where the normal cone is taken from \eqref{nor}, and where the symbol $\stackrel{w^*}{\to}$ signifies the {\em sequential} convergence in the weak$^*$ topology of $X^*$. We have already mentioned in Remark~\ref{ext-prin} the explicit description of the SNC property for closed convex sets with nonempty relative interiors in Banach spaces given in \cite[Theorem~1.21]{m-book1}. Assertion (ii) of the next theorem employs SNC \eqref{snc} for furnishing the limiting procedure in general Banach spaces.\vspace*{-0.1in}
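The following example demonstrates both the triviality of the SNC property \eqref{snc} in finite dimensions and its possible failure in infinite-dimensional spaces.\vspace*{-0.1in}
\begin{Example}{\bf(SNC property of convex sets).}\label{ex-snc} {\rm Every subset of a finite-dimensional space is SNC at each of its points, since the weak$^*$ and norm convergences of bounded sequences agree in finite dimensions. On the other hand, the singleton $\Omega:=\{0\}$ in an infinite-dimensional Hilbert space $X$ is not SNC at $\bar{x}=0$. Indeed, taking an orthonormal sequence $\{x^*_k\}\subset X^*=X$, we have $x^*_k\in N(0;\Omega)=X^*$ and $x^*_k\stackrel{w^*}{\to}0$ while $\|x^*_k\|=1$ for all $k\in I\!\!N$, so \eqref{snc} fails. This agrees with the characterization discussed in Remark~\ref{ext-prin}, since the span of $\Omega$ has infinite codimension in $X$.} \end{Example}\vspace*{-0.1in}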
\begin{Theorem}{\bf(approximate and exact versions of the convex extremal principle in Banach spaces).}\label{convex-ep} Let $\Omega_1$ and $\Omega_2$ be closed convex subsets of a Banach space $X$, and let $\bar{x}$ be any common point of $\Omega_1,\Omega_2$. Consider the following assertions:
{\bf (i)} The sets $\Omega_i$, $i=1,2$, form an extremal system in $X$.
{\bf (ii)} For each $\varepsilon>0$ we have: \begin{eqnarray}\label{ep1}
\exists\,x_{i\varepsilon}\in\Bbb B(\bar{x};\varepsilon)\cap\Omega_i,\;\exists\,x^*_i\in N(x_{i\varepsilon};\Omega_i)+\varepsilon\Bbb B^*,\;i=1,2,\;\;\mbox{\rm with }\;x^*_1+x^*_2=0,\;\|x^*_1\|=\|x^*_2\|=1. \end{eqnarray}
{\bf (iii)} The equivalent properties \eqref{sep} and \eqref{ep} are satisfied.
Then we always have the implication {\rm(i)}$\Longrightarrow${\rm(ii)}. Furthermore, all the properties in {\rm(i)}--{\rm(iii)} are equivalent if in addition either $\Omega_1$ or $\Omega_2$ is SNC at $\bar{x}$. \end{Theorem}\vspace*{-0.1in} {\bf Proof.} Let us begin with verifying (i)$\Longrightarrow$(ii). It follows from the extremality condition that for any $\varepsilon>0$ there exists $a\in X$ such that \begin{equation*}
\|a\|\le\varepsilon^2\;\;\mbox{\rm and }\;(\Omega_1+a)\cap\Omega_2=\emptyset. \end{equation*} Define the convex, lower semicontinuous, and bounded from below function $f\colon X^2\to\Bar{\R}$ by \begin{equation}\label{ext1}
f(x_1,x_2):=\|x_1-x_2+a\|+\delta\big((x_1,x_2);\Omega_1\times\Omega_2\big),\quad(x_1,x_2)\in X^2, \end{equation}
via the indicator function of the closed set $\Omega_1\times\Omega_2$. It follows from \eqref{setex} that $f(x_1,x_2)>0$ on $X\times X$, while $f(\bar{x},\bar{x})=\|a\|\le\varepsilon^2$ due to $(\bar{x},\bar{x})\in\Omega_1\times\Omega_2$. Applying to \eqref{ext1} the Ekeland variational principle (see, e.g., \cite[Theorem~2.26(i)]{m-book1}), we find a pair $(x_{1\varepsilon},x_{2\varepsilon})\in\Omega_1\times\Omega_2$ satisfying $\|x_{1\varepsilon}-\bar{x}\|\le\varepsilon$, $\|x_{2\varepsilon}-\bar{x}\|\le\varepsilon$, and \begin{equation*}
f(x_{1\varepsilon},x_{2\varepsilon})\le f(x_1,x_2)+\varepsilon\big(\|x_1-x_{1\varepsilon}\|+\|x_2-x_{2\varepsilon}\|\big)\;\;\mbox{\rm for all }\;(x_1,x_2)\in X^2. \end{equation*}
The latter means that the function $\varphi(x_1,x_2):=f(x_1,x_2)+\varepsilon\big(\|x_1-x_{1\varepsilon}\|+\|x_2-x_{2\varepsilon}\|\big)$ attains its minimum on $X^2$ at $(x_{1\varepsilon},x_{2\varepsilon})$ with $\|x_{1\varepsilon}-x_{2\varepsilon}+a\|\ne 0$. Thus the generalized Fermat rule tells us that $0\in\partial\varphi(x_{1\varepsilon},x_{2\varepsilon})$. Taking into account the summation structure of $f$ in \eqref{ext1}, we apply to its subdifferential the classical Moreau-Rockafellar theorem, which allows us to find (by standard subdifferentiation of the norm and indicator functions) such dual elements $x^*_{i\varepsilon}\in N(x_{i\varepsilon};\Omega_i)+\varepsilon\Bbb B^*$ for $i=1,2$ that all the conditions in \eqref{ep1} are satisfied. This justifies assertion (ii) of the theorem.
We verify next the validity of (ii)$\Longrightarrow$(iii) by furnishing the passage to the limit in \eqref{ep1} as $\varepsilon\downarrow 0$ with the help of the SNC property of, say, the set $\Omega_1$ at $\bar{x}$. Take a sequence $\varepsilon_k\downarrow 0$ as $k\to\infty$ and find by \eqref{ep1} the corresponding septuples $(x_{1k},x_{2k},x^*_k,x^*_{1k},x^*_{2k},e^*_{1k},e^*_{2k})$ so that $x_{1k}\to\bar{x}$, $x_{2k}\to\bar{x}$ as $k\to\infty$, and \begin{equation}\label{ep2}
x^*_k=x^*_{1k}+\varepsilon_ke^*_{1k},\;x^*_k=-x^*_{2k}+\varepsilon_k e^*_{2k},\;\|x^*_k\|=1,\;x^*_{ik}\in N(x_{ik};\Omega_i),\;e^*_{ik}\in\Bbb B^* \end{equation} for all $k\in I\!\!N$ and $i=1,2$. The classical Banach--Alaoglu theorem of functional analysis tells us that for any Banach space $X$ the sequence of triples $(x^*_k,e^*_{1k},e^*_{2k})$ contains a {\em subnet} converging to some $(x^*,e^*_1,e^*_2)\in\Bbb B^*\times\Bbb B^*\times\Bbb B^*$ in the weak$^*$ topology of $X^*$. It follows from \eqref{ep2} and definition \eqref{nor} of the normal cone to convex sets that the corresponding subnets of $\{(x^*_{1k},x^*_{2k})\}$ converge in the latter topology to some pair $(x^*_1,x^*_2)\in X^*\times X^*$ satisfying $x^*_1=-x^*_2=x^*$ and $x^*_i\in N(\bar{x};\Omega_i)$ for $i=1,2$.
To justify (iii), it remains to show that we can always find $x^*\ne 0$ in this way provided that $\Omega_1$ is SNC at $\bar{x}$. Assuming the contrary, let us first check that $\{x^*_{1k}\}$ converges to zero in the weak$^*$ topology. If it is not the case, there is $z\in X$ such that the numerical sequence $\{\langle x^*_{1k},z\rangle\}$ does not converge to zero. Fix $w\in\Omega_1$ and for each $k\in I\!\!N$ consider the set \begin{equation}\label{Vk}
V_k:=\big\{z^*\in X^*\big|\;|\langle z^*,w-\bar{x}\rangle-\langle x^*_1,w-\bar{x}\rangle|<1/k,\;|\langle z^*,z\rangle-\langle x^*_1,z\rangle|<1/k\big\}, \end{equation} which is a neighborhood of $x^*_1$ in the weak$^*$ topology of $X^*$. By extracting numerical subsequences in \eqref{Vk}, suppose without loss of generality that \begin{equation*} \langle x^*_{1k},w-\bar{x}\rangle\to\langle x^*_1,w-\bar{x}\rangle\;\;\mbox{\rm and }\;\langle x^*_{1k},z\rangle\to\langle x^*_1,z\rangle\;\;\mbox{\rm as }\;k\to\infty. \end{equation*} Remembering that $x^*_{1k}\in N(x_{1k};\Omega_1)$ by \eqref{ep2} gives us the estimate \begin{equation}\label{xk} \langle x^*_{1k},w-\bar{x}\rangle=\langle x^*_{1k},w-x_{1k}\rangle+\langle x^*_{1k},x_{1k}-\bar{x}\rangle\le\langle x^*_{1k},x_{1k}-\bar{x}\rangle,\quad k\in I\!\!N. \end{equation}
Note that $\langle x^*_{1k},w-\bar{x}\rangle\to\langle x^*_1,w-\bar{x}\rangle$ and $|\langle x^*_{1k},x_{1k}-\bar{x}\rangle|\le\|x^*_{1k}\|\cdot\|x_{1k}-\bar{x}\|\to 0$ as $k\to\infty$ by the boundedness of $\{x^*_{1k}\}$ in \eqref{ep2}. Passing now to the limit in \eqref{xk} tells us that $\langle x^*_1,w-\bar{x}\rangle\le 0$ and so $x^*_1\in N(\bar{x};\Omega_1)$. It follows from \eqref{ep2} that $-x_1^*\in N(\bar{x};\Omega_2)$ and thus $x_1^*\in N(\bar{x};\Omega_1)\cap(-N(\bar{x};\Omega_2))=\{0\}$ under the contrary assumption made. Hence $x^*_1=0$, and so $\langle x^*_{1k},z\rangle\to\langle x^*_1,z\rangle=0$, which contradicts the choice of $z$. Therefore the sequence $\{x^*_{1k}\}$ converges to zero in the weak$^*$ topology of $X^*$, which implies its sequential convergence
$x^*_{1k}\xrightarrow{w^*}0$ as well. By the assumed SNC property of $\Omega_1$ at $\bar{x}$ we conclude that $x^*_{1k}\xrightarrow{\|\cdot\|}0$ while yielding $x^*_k \xrightarrow{\|\cdot\|}0$. This surely contradicts \eqref{ep2} and thus ends the proof of implication (ii)$\Longrightarrow$(iii).
To check finally the equivalence assertion in (iii), observe that the separation property \eqref{sep} ensures by Theorem~\ref{extremal principle}(iii) that the sets $\Omega_1,\Omega_2$ form an extremal system in $X$, i.e., that (iii)$\Longrightarrow$(i). Since implication (i)$\Longrightarrow$(ii) has been verified above, this readily justifies the claimed equivalences and thus completes the proof of the theorem. $
\square$
As an immediate consequence of the (convex) approximate extremal principle in Theorem~\ref{convex-ep}(ii), we obtain the celebrated Bishop-Phelps theorem for closed convex sets in general Banach spaces; see \cite[Theorem~3.18]{ph}. Recall that $\bar{x}\in\Omega$ is a {\em support point} of $\Omega\subset X$ if there is $0\ne x^*\in X^*$ such that the function $x\mapsto\langle x^*,x\rangle$ attains its supremum on $\Omega$ at $\bar{x}$.\vspace*{-0.1in}
\begin{Corollary}{\bf (Bishop-Phelps theorem).}\label{bp} Let $\Omega$ be a nonempty, closed, and convex subset of a Banach space $X$. Then the support points of $\Omega$ are dense in the boundary of $\Omega$. \end{Corollary}\vspace*{-0.1in} {\bf Proof.} It is obvious from \eqref{setex} and the definition of boundary points that for any boundary point $\bar{x}$ of $\Omega$, the sets $\Omega_1:=\{\bar{x}\}$ and $\Omega_2:=\Omega$ form an extremal system in $X$. Then the result follows from \eqref{ep1} and the normal cone structure in \eqref{nor}.$
\square$
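To illustrate the conclusion of Corollary~\ref{bp}, consider the following example.\vspace*{-0.1in}
\begin{Example}{\bf(support points of the unit ball).}\label{ex-bp} {\rm Let $\Omega:=\Bbb B$ be the closed unit ball of a Hilbert space $X$. Then every boundary point $\bar{x}$ of $\Omega$ is a support point: for $x^*:=\bar{x}\in X^*=X$ we have $\langle\bar{x},x\rangle\le\|x\|\le 1=\langle\bar{x},\bar{x}\rangle$ whenever $x\in\Omega$, so the supremum of $x\mapsto\langle x^*,x\rangle$ over $\Omega$ is attained at $\bar{x}$. In general Banach spaces, however, a closed convex set may have boundary points that are not support points, which is why Corollary~\ref{bp} asserts the density rather than the equality of the two sets.} \end{Example}\vspace*{-0.1in}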
Note that a geometric approach involving the approximate extremality conditions \eqref{ep1} at nearby points may be useful for applications to the so-called {\em sequential convex subdifferential calculus} initiated by Attouch-Baillon-Th\'era \cite{abt} and Thibault \cite{thib} in different frameworks and then developed in other publications. Likewise it can serve as a geometric device for deriving coderivative and conjugate calculus rules, which we intend to pursue in future research.\vspace*{-0.2in}
\section{Normal Cone Intersection Rule} \setcounter{equation}{0}\vspace*{-0.1in}
In this section we employ the set extremality and the results of Theorem~\ref{extremal principle} to obtain the exact intersection rule for the normal cone \eqref{nor} under a new qualification condition.
The following theorem justifies a precise representation of the normal cone $N(\bar{x};\Omega_1\cap\Omega_2)$ via normals to each of the sets $\Omega_1$ and $\Omega_2$ under the new qualification condition \eqref{qc} depending on $\bar{x}$, which is weaker than the standard interiority condition in LCTV spaces. For convenience we refer to \eqref{qc} as the {\em bounded extremality condition}.\vspace*{-0.1in}
\begin{Theorem}{\bf (intersection rule).}\label{nir} Let $\Omega_1,\Omega_2\subset X$ be convex, and let $\bar{x}\in\Omega_1\cap\Omega_2$. Suppose that there exists a bounded convex neighborhood $V$ of $\bar{x}$ such that \begin{equation}\label{qc} 0\in\mbox{\rm int}\big(\Omega_1-(\Omega_2\cap V)\big). \end{equation} Then we have the normal cone intersection rule \begin{equation}\label{ni} N(\bar{x};\Omega_1\cap\Omega_2)=N(\bar{x};\Omega_1)+N(\bar{x};\Omega_2). \end{equation} \end{Theorem}\vspace*{-0.1in} {\bf Proof.} To verify \eqref{ni} under the qualification condition \eqref{qc}, denote $A:=\Omega_1$ and $B:=\Omega_2\cap V$ and observe that $0\in\mbox{\rm int}(A-B)$ and $B$ is bounded. Fixing an arbitrary normal $x^*\in N(\bar{x};A\cap B)$, we get by \eqref{nor} that $\langle x^*,x-\bar{x}\rangle\le 0$ for all $x\in A\cap B$. Consider the sets \begin{equation}\label{theta}
\Theta_1:= A\times[0,\infty)\;\;\mbox{\rm and }\;\Theta_2:=\big\{(x,\mu)\in X\times\Bbb R\big|\;x\in B,\;\mu\le\langle x^*,x-\bar{x}\rangle\big\}. \end{equation} It follows from the constructions of $\Theta_1$ and $\Theta_2$ that for any $\alpha>0$ we have \begin{equation*} \big(\Theta_1+(0,\alpha)\big)\cap\Theta_2=\emptyset, \end{equation*} and thus these sets form an {\em extremal system} by Definition~\ref{ext-sys}. Employing Theorem~\ref{extremal principle}(i) tells us that $0\notin\mbox{\rm int}(\Theta_1-\Theta_2)$. To check next that $\mbox{\rm int}(\Theta_1-\Theta_2)\ne\emptyset$, take $r>0$ such that $U:=\Bbb B(0;r)\subset A-B$. The boundedness of the set $B$ allows us to choose $\bar{\lambda}\in\Bbb R$ satisfying \begin{equation}\label{lambda} \bar{\lambda}\ge\sup_{x\in B}\langle-x^*,x-\bar{x}\rangle. \end{equation} Then we get ${\rm int}(\Theta_1-\Theta_2)\ne\emptyset$ by showing that $U\times(\bar{\lambda},\infty)\subset\Theta_1-\Theta_2$. To verify the latter, fix any $(x,\lambda)\in U\times(\bar{\lambda},\infty)$ for which we clearly have $x\in U\subset A-B$ and $\lambda>\bar{\lambda}$, and so $x=w_1-w_2$ with some $w_1\in A$ and $w_2\in B$. This implies in turn the representation $$ (x,\lambda)=(w_1,\lambda-\bar\lambda)-(w_2,-\bar\lambda). $$ Further, it follows from $\lambda-\bar\lambda>0$ that $(w_1,\lambda-\bar\lambda)\in\Theta_1$, and we deduce from \eqref{theta} and \eqref{lambda} that $(w_2,-\bar\lambda)\in\Theta_2$, which shows that $\mbox{\rm int}(\Theta_1-\Theta_2)\ne\emptyset$. Applying now Theorem~\ref{extremal principle}(ii) to the sets $\Theta_1,\Theta_2$ in \eqref{theta} gives us $y^*\in X^*$ and $\gamma\in\Bbb R$ such that $(y^*,\gamma)\ne(0,0)$ and \begin{equation}\label{convexseparation} \langle y^*,x\rangle +\lambda_1\gamma\le\langle y^*,y\rangle+\lambda_2\gamma\;\;\mbox{\rm whenever }\;(x,\lambda_1)\in\Theta_1,\;(y,\lambda_2)\in\Theta_2. 
\end{equation} Using \eqref{convexseparation} with $(\bar{x},1)\in\Theta_1$ and $(\bar{x},0)\in\Theta_2$ yields $\gamma\le 0$. Supposing $\gamma=0$, we get $$ \langle y^*,x\rangle\le\langle y^*,y\rangle\;\;\mbox{\rm for all }\;x\in A,\;y\in B. $$ Since $U\subset A-B$, it readily produces $y^*=0$, a contradiction, which shows that $\gamma<0$. Employing next \eqref{convexseparation} with $(x,0)\in\Theta_1$ for $x\in A$ and $(\bar{x},0)\in\Theta_2$ tells us that \begin{equation*} \langle y^*,x\rangle\le\langle y^*,\bar{x}\rangle\;\;\mbox{\rm for all }\;x\in A,\;\;\mbox{\rm and so }\;y^*\in N(\bar{x};A). \end{equation*} Using finally \eqref{convexseparation} with $(\bar{x},0)\in\Theta_1$ and $(y,\langle x^*,y-\bar{x}\rangle)\in\Theta_2$ for $y\in B$ implies that \begin{equation*} \langle y^*,\bar{x}\rangle\le\langle y^*,y\rangle+\gamma\langle x^*,y-\bar{x}\rangle\;\;\mbox{\rm for all }\;y\in B. \end{equation*} Dividing both sides of the obtained inequality by $\gamma<0$, we arrive at \begin{equation*} \langle x^*+y^*/\gamma,y-\bar{x}\rangle\le 0\;\;\mbox{\rm for all }\;y\in B, \end{equation*} which verifies by \eqref{nor} the validity of the inclusions $$ x^*\in-y^*/\gamma+N(\bar{x};B)\subset N(\bar{x};A)+N(\bar{x};B) $$ and thus shows that $N(\bar{x};A\cap B)\subset N(\bar{x};A)+N(\bar{x};B)$. The opposite inclusion therein is trivial, and so we get the equality $N(\bar{x};A\cap B)= N(\bar{x};A)+N(\bar{x};B)$. Since $N(\bar{x};A\cap B)=N(\bar{x};\Omega_1\cap\Omega_2)$ and $N(\bar{x};B)=N(\bar{x};\Omega_2)$, it justifies \eqref{ni} and completes the proof. $
\square$\vspace*{-0.1in}
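The following standard two-dimensional example shows that the intersection rule \eqref{ni} may fail when the qualification condition \eqref{qc} is violated.\vspace*{-0.1in}
\begin{Example}{\bf(violation of the qualification condition).}\label{ex-qc} {\rm Consider the closed disks $\Omega_1:=\Bbb B((-1,0);1)$ and $\Omega_2:=\Bbb B((1,0);1)$ in $X=\Bbb R^2$ with $\bar{x}=(0,0)$. Then $\Omega_1\cap\Omega_2=\{\bar{x}\}$, and so $N(\bar{x};\Omega_1\cap\Omega_2)=\Bbb R^2$. On the other hand, $N(\bar{x};\Omega_1)=\{t(1,0)|\;t\ge 0\}$ and $N(\bar{x};\Omega_2)=\{t(-1,0)|\;t\ge 0\}$, which gives us $N(\bar{x};\Omega_1)+N(\bar{x};\Omega_2)=\Bbb R\times\{0\}\ne N(\bar{x};\Omega_1\cap\Omega_2)$. Observe that $\Omega_1-\Omega_2=\Bbb B((-2,0);2)$ contains the origin only on its boundary, and hence condition \eqref{qc} fails for every bounded convex neighborhood $V$ of $\bar{x}$, since $\Omega_1-(\Omega_2\cap V)\subset\Omega_1-\Omega_2$.} \end{Example}\vspace*{-0.1in}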
\begin{Remark}{\bf (comparing qualification conditions for the normal intersection formula).}\label{qc-comp} {\rm We have the following useful observations:
{\bf (i)} It is easy to see that, if one of the sets $\Omega_1,\Omega_2$ is bounded, the introduced qualification condition \eqref{qc} reduces to the {\em difference interiority condition} \begin{equation}\label{dqc} 0\in{\rm int}(\Omega_1-\Omega_2). \end{equation} Furthermore, \eqref{qc} surely holds under the validity of the {\em classical interiority condition} $\Omega_1\cap({\rm int}\,\Omega_2)\ne\emptyset$, which is the only condition previously known to us that ensures the validity of the intersection formula \eqref{ni} in the general LCTV (or even normed) space setting. Indeed, if the latter condition is satisfied, take $u\in\Omega_1\cap({\rm int}\,\Omega_2)$ and $\gamma>0$ such that $u+\gamma\Bbb B\subset\Omega_2$. Then we choose $r>0$ with $u+\gamma\Bbb B\subset\Omega_2\cap\Bbb B(\bar{x};r)$. Thus $\gamma\Bbb B\subset\Omega_1-(\Omega_2\cap\Bbb B(\bar{x};r))$ and so $0\in\mbox{\rm int}(\Omega_1-(\Omega_2\cap V))$, where $V:=\Bbb B(\bar{x};r)$.
As the following simple example shows, the bounded extremality condition \eqref{qc} may be weaker than the classical interiority condition even in $\Bbb R^2$. Indeed, consider the convex sets $$ \Omega_1:=\Bbb R\times[0,\infty)\;\;\mbox{\rm and }\;\Omega_2:=\{0\}\times\Bbb R $$ for which $\Omega_1\cap({\rm int}\,\Omega_2)=\emptyset$, while the conditions $0\in{\rm int}(\Omega_1-\Omega_2)$ and \eqref{qc} hold.
{\bf (ii)} If $X$ is {\em Banach} and both sets $\Omega_1,\Omega_2$ are {\em closed} with $\mbox{\rm int}(\Omega_1-\Omega_2)\ne\emptyset$, the difference interiority condition \eqref{dqc} reduces to Rockafellar's {\em core qualification condition} $0\in{\rm core}(\Omega_1-\Omega_2)$ introduced in \cite{r1}. This follows from the equivalence \begin{equation}\label{core-int} \big[0\in{\rm core}(\Omega_1-\Omega_2)\big]\Longleftrightarrow\big[0\in{\rm int}(\Omega_1-\Omega_2)\big] \end{equation} valid in this case. Indeed, the implication ``$\Longleftarrow$" in \eqref{core-int} is obvious due to $\mbox{\rm int}\,\Omega\subset\mbox{\rm core}\,\Omega$ for any set. To verify the opposite implication in \eqref{core-int}, recall that $\mbox{\rm int}\,\Omega=\mbox{\rm core}\,\Omega$ for closed convex subsets of Banach spaces by \cite[Theorem~4.1.8]{BZ}. Using now the well-known fact that $\mbox{\rm int}\,\overline\Omega=\mbox{\rm int}\,\Omega$ for convex sets with nonempty interiors yields $$ 0\in\mbox{\rm core}\big(\overline{\Omega_1-\Omega_2}\big)=\mbox{\rm int}\big(\overline{\Omega_1-\Omega_2}\big)=\mbox{\rm int}\big(\Omega_1-\Omega_2\big). $$ Note that the core qualification condition is superseded in the same setting by the requirement that $\Bbb R^+(\Omega_1-\Omega_2)\subset X$ is a closed subspace, which is known as the {\em Attouch-Br\'ezis regularity condition} established in \cite{AB} with the usage of convex duality and the fundamental Banach-Dieudonn\'e-Krein-\v Smulian theorem in general Banach spaces.} \end{Remark}\vspace*{-0.1in}
The next proposition shows that the core condition $0\in{\rm core}(\Omega_1-\Omega_2)$ implies the bounded extremality condition \eqref{qc} for closed subsets of reflexive Banach spaces {\em provided that} ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$. Thus the extremality approach of Theorem~\ref{nir} offers in this setting a simplified proof of the intersection formula in comparison with those known in the literature.\vspace*{-0.1in}
\begin{Proposition}{\bf (bounded extremality condition in reflexive spaces).}\label{intersection rule reflexive} The qualification condition \eqref{qc} holds at any $\bar{x}\in\Omega_1\cap\Omega_2$ if $X$ is a reflexive Banach space and $\Omega_1,\Omega_2\subset X$ are closed convex sets such that ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$ and $0\in\mbox{\rm core}(\Omega_1-\Omega_2)$. \end{Proposition}\vspace*{-0.1in} {\bf Proof.} Fix any number $r>0$ and show that \begin{equation}\label{cl} 0\in\mbox{\rm core}\big(\Omega_1\cap\Bbb B(\bar{x};r)-\Omega_2\cap\Bbb B(\bar{x};r)\big). \end{equation}
Indeed, the assumption ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$ allows us to find $\gamma>0$ such that $\gamma\Bbb B\subset\Omega_1-\Omega_2$. For any $x\in X$ denote $u:= \frac{\gamma}{\|x\|+1}x\in\gamma\Bbb B$ and get $u=w_1-w_2$ with $w_i\in\Omega_i$ for $i=1,2$. Hence there is a constant $\bar{\gamma}>0$ depending on $x$ and $r$ for which $$
t\max\big\{\|w_1-\bar{x}\|,\|w_2-\bar{x}\|\big\}<r\;\mbox{ whenever }\;0<t<\bar{\gamma}. $$ This readily justifies the relationships \begin{equation*} tu=tw_1-tw_2=\big(\bar{x}+t(w_1-\bar{x})\big)-\big(\bar{x}+t(w_2-\bar{x})\big)\in\big(\Omega_1\cap\Bbb B(\bar{x};r)\big)-\big(\Omega_2\cap\Bbb B(\bar{x};r)\big) \end{equation*} for all $0<t<\bar{\gamma}$ and thus establishes the claimed inclusion \eqref{cl} by the core definition \eqref{core-def}.
Since $X$ is reflexive and the sets $\Omega_i\cap\Bbb B(\bar{x};r)$, $i=1,2$, are closed and bounded in $X$, they are weakly sequentially compact in this space. This implies that their difference $\big(\Omega_1\cap\Bbb B(\bar{x};r)\big)-\big(\Omega_2\cap\Bbb B(\bar{x};r)\big)$ is closed in $X$. Then we get by \cite[Theorem~4.1.8]{BZ} that $$ 0\in\mbox{\rm core}\big(\Omega_1\cap\Bbb B(\bar{x};r)-\Omega_2\cap\Bbb B(\bar{x};r)\big)=\mbox{\rm int}\big(\Omega_1\cap\Bbb B(\bar{x};r)-\Omega_2\cap\Bbb B(\bar{x};r)\big)\subset{\rm int}\big(\Omega_1-\Omega_2\cap\Bbb B(\bar{x};r)\big), $$ which verifies \eqref{qc} and thus completes the proof of the proposition. $
\square$\vspace*{-0.2in}
\section{Support Functions for Set Intersections} \setcounter{equation}{0}\vspace*{-0.1in}
In this section we derive a precise representation of support functions for convex set intersections via the infimal convolution of the support functions to the intersection components under the {\em difference interiority condition} \eqref{dqc}. This result under \eqref{dqc} seems to be new in the literature on convex analysis in LCTV (and also in normed) spaces; see Remark~\ref{rem-supp} for more discussions. Furthermore, we present a novel geometric device for results of this type by employing set extremality and the normal intersection rule obtained above.
Recall that the {\em support function} of a nonempty set $\Omega\subset X$ is given by \begin{equation}\label{supp}
\sigma_\Omega(x^*):=\sup\big\{\langle x^*,x\rangle\big|\;x\in\Omega\big\},\quad x^*\in X^*. \end{equation} The {\em infimal convolution} of two functions $f,g\colon X\to\Bar{\R}$ is \begin{equation}\label{ic}
(f\oplus g)(x):=\inf\big\{f(x_1)+g(x_2)\big|\;x_1+x_2=x\big\}=\inf\big\{f(u)+g(x-u)\big|\;u\in X\big\}. \end{equation}\vspace*{-0.35in}
\begin{Theorem}{\bf(support functions for set intersections via infimal convolutions).}\label{sigma intersection rule} Let the sets $\Omega_1,\Omega_2\subset X$ be nonempty and convex, and let one of them be bounded. Then the difference interiority condition \eqref{dqc} ensures the representation \begin{equation}\label{convol} (\sigma_{\Omega_1\cap\Omega_2})(x^*)=(\sigma_{\Omega_1}\oplus\sigma_{\Omega_2})(x^*)\;\;\mbox{\rm for all }\;x^*\in X^*. \end{equation} Moreover, for any $x^*\in\mbox{\rm dom}\,(\sigma_{\Omega_1\cap\Omega_2})$ there are $x^*_1,x^*_2\in X^*$ such that $x^*=x^*_1+x^*_2$ and \begin{equation}\label{convol1} (\sigma_{\Omega_1\cap\Omega_2})(x^*)=\sigma_{\Omega_1}(x^*_1)+\sigma_{\Omega_2}(x^*_2). \end{equation} \end{Theorem}\vspace*{-0.1in} {\bf Proof.} First we check that the inequality ``$\le$" in \eqref{convol} holds in the general setting. Fix any $x^*\in X^*$ and pick $x^*_1,x^*_2\in X^*$ such that $x^*=x^*_1+x^*_2$. Then it follows from \eqref{supp} that \begin{equation*} \langle x^*,x\rangle=\langle x^*_1,x\rangle +\langle x^*_2,x\rangle\le\sigma_{\Omega_1}(x^*_1)+\sigma_{\Omega_2}(x^*_2)\;\;\mbox{\rm whenever }\;x\in\Omega_1\cap\Omega_2. \end{equation*} Taking the infimum on the right-hand side above with respect to all $x^*_1,x^*_2\in X^*$ satisfying $x^*_1+x^*_2=x^*$ gives us by definition \eqref{ic} of the infimal convolution that \begin{equation*} \langle x^*,x\rangle\le(\sigma_{\Omega_1}\oplus\sigma_{\Omega_2})(x^*). \end{equation*} This verifies the inequality ``$\le$" in \eqref{convol} by taking the supremum on the left-hand side therein with respect to $x\in\Omega_1\cap\Omega_2$.
To justify further the opposite inequality in \eqref{convol} under the validity of \eqref{dqc}, suppose that $\Omega_2$ is bounded. It suffices to consider the case where $x^*\in\mbox{\rm dom}\,(\sigma_{\Omega_1\cap\Omega_2})$ and prove the inequality ``$\le$" in \eqref{convol1}; then the one in \eqref{convol} and both statements of the theorem follow.
To proceed, denote $\alpha:=(\sigma_{\Omega_1\cap\Omega_2})(x^*)\in\Bbb R$, for which we clearly have $\langle x^*,x\rangle-\alpha\le 0$ whenever $x\in\Omega_1\cap\Omega_2$, and then construct the two nonempty convex subsets of $X\times\Bbb R$ by \begin{equation}\label{Theta}
\Theta_1:=\Omega_1\times[0,\infty)\;\;\mbox{\rm and }\;\Theta_2:=\big\{(x,\lambda)\in X\times\Bbb R\big|\;x\in\Omega_2,\;\lambda\le\langle x^*,x\rangle-\alpha\big\}. \end{equation} Observe that the sets $\Theta_1,\Theta_2$ form an {\em extremal system}. Indeed, it follows from the choice of $\alpha$ and the construction in \eqref{Theta} that for any $\gamma>0$ we have \begin{equation*} \big(\Theta_1+(0,\gamma)\big)\cap\Theta_2=\emptyset. \end{equation*} Then Theorem~\ref{extremal principle}(i) tells us that $0\notin\mbox{\rm int}(\Theta_1-\Theta_2)$. Arguing similarly to the proof of Theorem~\ref{nir}, we see that the condition $\mbox{\rm int}(\Theta_1-\Theta_2)\ne\emptyset$ holds for the sets in \eqref{Theta}. Thus Theorem~\ref{extremal principle}(ii) allows us to find a pair $(y^*,\beta)\ne(0,0)$ such that \begin{equation}\label{sep-conv} \langle y^*, x\rangle+\lambda_1\beta\le\langle y^*,y\rangle+\lambda_2\beta\;\;\mbox{\rm whenever }\;(x,\lambda_1)\in\Theta_1,\;(y,\lambda_2)\in\Theta_2. \end{equation} Choosing $(\bar{x},1)\in\Theta_1$ and $(\bar{x},0)\in\Theta_2$ in \eqref{sep-conv} shows that $\beta\le 0$. If $\beta=0$, then \begin{equation*} \langle y^*,x\rangle\le\langle y^*,y\rangle\;\;\mbox{\rm for all }\;x\in\Omega_1,\;y\in\Omega_2. \end{equation*} By ${\rm int}(\Omega_1-\Omega_2)\ne\emptyset$ this yields $y^*=0$, a contradiction justifying the negativity of $\beta$ in \eqref{sep-conv}. Take now $(x,0)\in\Theta_1$ and $(y,\langle x^*,y\rangle-\alpha)\in\Theta_2$ in \eqref{sep-conv} and then get \begin{equation*} \langle y^*,x\rangle\le\langle y^*,y\rangle+\beta(\langle x^*,y\rangle-\alpha), \end{equation*} which can be equivalently rewritten (due to $\beta<0$) as \begin{equation*} \alpha\ge\big\langle y^*/\beta+x^*,y\big\rangle+\big\langle-y^*/\beta,x\big\rangle\;\;\mbox{\rm for all }\;x\in\Omega_1,\;y\in\Omega_2. 
\end{equation*} Denoting $x^*_1:=y^*/\beta+x^*$ and $x^*_2:=-y^*/\beta$, we have $x^*_1+x^*_2=x^*$ and $\langle x^*_1,x\rangle +\langle x^*_2,y\rangle\le\alpha$ for all $x\in\Omega_1$ and $y\in\Omega_2$. This shows that \begin{equation*} \sigma_{\Omega_1}(x^*_1)+\sigma_{\Omega_2}(x^*_2)\le\alpha=\sigma_{\Omega_1\cap\Omega_2}(x^*) \end{equation*} and thus completes the proof of the theorem.$
\square$\vspace*{-0.1in}
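As a quick numerical illustration of Theorem~\ref{sigma intersection rule} (not part of the paper's argument), the following Python sketch checks the representation \eqref{convol} for two intervals $\Omega_1=[0,2]$ and $\Omega_2=[1,3]$ in $\Bbb R$, where $\Omega_1\cap\Omega_2=[1,2]$ and $0\in{\rm int}(\Omega_1-\Omega_2)=(-3,1)$; the infimal convolution is approximated by a grid search, so a small tolerance is used.

```python
import numpy as np

def supp_interval(a, b, y):
    """Support function of [a, b] at y: sup_{x in [a,b]} y*x = max(a*y, b*y)."""
    return np.maximum(a * y, b * y)

def inf_conv(f, g, y, grid):
    """Numerical infimal convolution (f (+) g)(y) = inf_u [f(u) + g(y - u)]."""
    return np.min(f(grid) + g(y - grid))

# Omega_1 = [0, 2], Omega_2 = [1, 3]; intersection is [1, 2]
grid = np.linspace(-10.0, 10.0, 400001)
for y in [-2.0, -1.0, 0.5, 1.0, 3.0]:
    lhs = supp_interval(1.0, 2.0, y)          # support function of the intersection
    rhs = inf_conv(lambda u: supp_interval(0.0, 2.0, u),
                   lambda u: supp_interval(1.0, 3.0, u), y, grid)
    assert abs(lhs - rhs) < 1e-3              # the equality of the theorem
```

The optimal splitting $x^*=x^*_1+x^*_2$ in \eqref{convol1} is visible in the computation: for $y=1$ the infimum is attained at $x^*_1=1$, $x^*_2=0$.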
\begin{Remark}{\bf(comparison with Fenchel duality).}\label{rem-supp} {\rm Since the qualification condition \eqref{qc} used in Theorem~\ref{nir} is equivalent to \eqref{dqc} employed in Theorem~\ref{sigma intersection rule} when one of the sets $\Omega_1,\Omega_2$ is bounded, all the comments given in Remark~\ref{qc-comp} are applied here. On the other hand, there is a remarkable feature of the calculus rules for support functions presented in Theorem~\ref{sigma intersection rule}, which does not have analogs in the setting of Theorem~\ref{nir} and should be specially commented. Namely, the support function \eqref{supp} is the {\em Fenchel conjugate} \begin{equation*}
f^*(x^*):=\sup\big\{\langle x^*,x\rangle-f(x)\big|\;x\in X\big\},\quad x^*\in X^*, \end{equation*} of the indicator function $f(x):=\delta(x;\Omega)$ of a given set $\Omega\subset X$, and hence a well-developed {\em conjugate calculus} can be applied to establish representations \eqref{convol} and \eqref{convol1}; see, e.g., the books \cite{Ra,r1,s,z} and the references therein. However, it seems to us that such an approach from Fenchel duality misses the specific results of Theorem~\ref{sigma intersection rule} derived for the support function under the qualification condition \eqref{dqc} in general LCTV spaces. Observe also that, in contrast to analytical schemes usually applied to deriving conjugate calculus and then deducing results of the type of Theorems~\ref{nir} and \ref{sigma intersection rule} from them, we develop here a {\em geometric approach} in the other direction based on {\em set extremality}.} \end{Remark}
\small
\end{document}
\begin{document}
\title{ Eigenvalues, Peres' separability condition and entanglement \thanks{Supported by the National Natural Science Foundation of China under Grant No. 69773052.}} \author{An Min WANG$^{1,2,3}$} \address{CCAST(World Laboratory) P.O.Box 8730, Beijing 100080$^1$\\ and Laboratory of Quantum Communication and Quantum Computing\\ University of Science and Technology of China$^2$\\ Department of Modern Physics, University of Science and Technology of China\\ P.O. Box 4, Hefei 230027, People's Republic of China$^3$}
\maketitle \centerline{(\it Revised Version)}
\begin{abstract} The general expression for the eigenvalues of a $4\times 4$ Hermitian and trace-one matrix, together with its physical significance and the positive definiteness condition, is obtained. This means that the eigenvalue problem of the $4\times 4$ density matrix is solved in general. An explicit expression of Peres' separability condition for an arbitrary state of two qubits is then given, and it is very easy to use. Furthermore, we discuss some applications to the calculation of the entanglement, the upper bound of the entanglement, and a model of the transfer of entanglement in a qubit chain through a noisy channel.
\noindent{PACS: 03.67-a,03.65.Bz, 89.70.+c}
\noindent{Key Words: Density Matrix, Eigenvalues, Separability, Entanglement}
\end{abstract}
The density matrix (DM) was introduced by J. von Neumann to describe statistical concepts in quantum mechanics \cite{Neumann}. The main virtue of the DM is its analytical power in the construction of general formulas and in the proof of general theorems: the evaluation of averages and probabilities of the physical quantities characterizing a given system is extremely cumbersome without density matrix techniques. Recently, applications of the DM have been gaining more and more importance in many fields of physics. For example, in quantum information and quantum computing \cite{QC}, DM techniques have become an important tool for describing and characterizing measures of entanglement, purification of entanglement, and encoding \cite{Bennett1,Deutsch,Horodecki,Wootters}. However, even for a DM as simple as a $4\times 4$ one, writing a general expression for its eigenvalues in a compact form with physical significance is not a trivial problem. Although the theory of the quartic equation is classical, the task remains difficult since the DM has 15 independent parameters; what is needed is a closed form with physical content, not merely a formal mathematical one. This letter is devoted to this fundamental problem in quantum mechanics. We find a general expression for the eigenvalues of a $4\times 4$ density matrix with clear physical significance and in a compact form. From it, an explicit expression of Peres' separability condition is derived, which provides a very easy and direct way to use that condition. Moreover, some important applications to entanglement and separability in quantum information, such as the calculation of the entanglement, the upper bound of the entanglement, and the transfer of entanglement, are discussed constructively.
The elementary unit of quantum information is the so-called ``qubit" \cite{Schumacher}. A single qubit can be envisaged as a two-state quantum system such as a spin-half particle or a two-level atom. A pair of qubits forms the simplest quantum register, which is described by a $4\times 4$ density matrix. As is well known, the eigenvalues of the DM of two qubits are closely related to its entanglement and separability. For example, Wootters gave a measure of entanglement in terms of the eigenvalues \cite{Wootters}, and Peres' separability condition depends on the positivity of the partial transpose of the DM. Therefore, it is very interesting and essentially important to know the general expression of the eigenvalues of the DM of two qubits in an arbitrary state.
The DM of two qubits can be written as \begin{equation} \rho=\frac{1}{4}\sum_{\mu,\nu=0}^3 a_{\mu\nu}\sigma_\mu\otimes\sigma_\nu,\label{Rhoe} \end{equation} where $\sigma_0$ is the two-dimensional identity matrix and the $\sigma_i$ are the usual Pauli matrices. Hermiticity $\rho=\rho^\dagger$ implies that the $a_{\mu\nu}$ are real numbers, the trace-one condition ${\rm Tr}\rho=1$ requires $a_{00}=1$, and from the eigenvalues of the Pauli matrices it follows that $-1\leq a_{\mu\nu}\leq 1$. Moreover, it is easy to get \begin{equation} a_{\mu\nu}={\rm Tr}(\rho\sigma_\mu\otimes\sigma_\nu).\label{CA} \end{equation} Note that Eq.(\ref{Rhoe}) does not involve the positivity condition on $\rho$. In order to find the general expression of the eigenvalues of the DM, we first state the following two lemmas.
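As a numerical sanity check (not part of the letter itself), the Pauli expansion of Eq.(\ref{Rhoe}) and the coefficient formula Eq.(\ref{CA}) can be verified for a Bell state with a short Python sketch, assuming only numpy:

```python
import numpy as np

# Pauli matrices; sigma_0 is the 2x2 identity
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def coefficients(rho):
    """a_{mu nu} = Tr(rho sigma_mu (x) sigma_nu), Eq. (CA)."""
    return np.array([[np.trace(rho @ np.kron(s[m], s[n])).real
                      for n in range(4)] for m in range(4)])

def reconstruct(a):
    """rho = (1/4) sum a_{mu nu} sigma_mu (x) sigma_nu, Eq. (Rhoe)."""
    return sum(a[m, n] * np.kron(s[m], s[n])
               for m in range(4) for n in range(4)) / 4

# example: the Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
a = coefficients(rho)
assert abs(a[0, 0] - 1) < 1e-12          # trace one forces a_00 = 1
assert np.allclose(reconstruct(a), rho)  # the expansion inverts
```

For the Bell state the nonzero coefficients are $a_{00}=a_{11}=a_{33}=1$ and $a_{22}=-1$, consistent with the bound $-1\leq a_{\mu\nu}\leq 1$.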
\noindent {\it Lemma One}\ The form of the characteristic polynomial of a $4\times 4$ Hermitian and trace-one matrix $\Omega$ is \begin{equation} b_0+b_1\lambda + b_2\lambda^2-\lambda^3+\lambda^4, \end{equation} where the coefficients $b_0,b_1$ and $b_2$ are defined by \begin{eqnarray} b_0&=&\frac{1}{64}[1-\bm{\xi}_A^2 \bm{\xi}_B^2-(A\bm{\xi}_A)^2-(A\bm{\xi}_B)^2 \nonumber \\ & &+2\bm{\xi}_A^T A \bm{\xi}_B +(({\rm Tr}A)^2-{\rm Tr}A^2)\bm{\xi}_A \cdot\bm{\xi}_B \nonumber \\ & &+2\bm{\xi}_B^T A^2\bm{\xi}_A-2{\rm Tr}A\;\bm{\xi}_B^T A\bm{\xi}_A-(\bm{a}_1\times\bm{a}_2)^2 \nonumber\\ & &-(\bm{a}_2\times\bm{a}_3)^2-(\bm{a}_3\times\bm{a}_1)^2-2(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3] \nonumber\\ & &-\frac{1}{16}[{\rm Tr}\Omega^2-({\rm Tr}\Omega^2)^2],\\ b_1&=&\frac{1}{8}[2{\rm Tr}\Omega^2-1-\bm{\xi}_A^T A \bm{\xi}_B+(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3],\\ b_2&=&\frac{1}{2}(1-{\rm Tr}\Omega^2). \end{eqnarray} In the above equations, we have introduced the polarized vectors of the reduced density matrices $\bm{\xi}_A=(a_{10},a_{20},a_{30})$, $ \bm{\xi}_B=(a_{01},a_{02},a_{03})$; the space Bloch's vector $\bm{a}_1=(a_{11},a_{12},a_{13})$, $\bm{a}_2=(a_{21},a_{22},a_{23})$, $ \bm{a}_3=(a_{31},a_{32},a_{33})$;
and the polarized rotation matrix $A=\{a_{ij}\}\; (i,j=1,2,3)$. Note that $\bm{\xi}_{\{A,B\}}$ is viewed as a column vector and its transpose $\bm{\xi}^{\rm T}_{\{A,B\}}$ is then a row vector. The physical meaning of $3\times 3$ matrix $A$ can be seen in my paper \cite{My0}. Again, the positive definite condition has not been used here and $a_{\mu\nu}$ is defined just as Eq.(\ref{CA}).
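A quick check of the simplest coefficient relation of Lemma One (not part of the original proof) can be done numerically: for a random Hermitian trace-one matrix, the coefficient of $\lambda^2$ in the characteristic polynomial must equal $b_2=\frac{1}{2}(1-{\rm Tr}\Omega^2)$, and the coefficient of $\lambda^3$ must be $-1$. The sketch below assumes only numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                              # Hermitian
Omega = H + (1 - np.trace(H).real) / 4 * np.eye(4)    # shift to trace one

lam = np.linalg.eigvalsh(Omega)
# characteristic polynomial lambda^4 - lambda^3 + b2 lambda^2 + b1 lambda + b0
c = np.poly(lam)                                      # monic coefficients
b2 = c[2]

tr2 = np.trace(Omega @ Omega).real
assert abs(c[1] + 1) < 1e-8                           # trace one: coefficient of lambda^3 is -1
assert abs(b2 - (1 - tr2) / 2) < 1e-8                 # Lemma One: b2 = (1 - Tr Omega^2)/2
assert abs(c[4] - np.linalg.det(Omega).real) < 1e-8   # b0 is the determinant
```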
\noindent{\it Lemma Two}\ If a $4\times 4$ Hermitian and trace-one matrix $\Omega$ has $m$ non-zero eigenvalues, then \begin{equation} {\rm Tr}\Omega^2\geq \frac{1}{m}. \end{equation} If $\Omega$ is positive definite, then \begin{equation} {\rm Tr}\Omega^2\leq 1. \end{equation}
To prove Lemma One, we use physical ideas to arrange the coefficients coming from the characteristic determinant into a compact form; this leads to the general expression of the eigenvalues of the DM with the physical significance needed for applications. Lemma Two can be obtained by the standard method of finding extrema. Based on the theory of the quartic equation, we have
\noindent{\it Theorem One}\ The eigenvalues of the $4\times 4$ Hermitian and trace-one matrix $\Omega$ are \begin{eqnarray} \lambda^{\pm}(-)&=&\frac{1}{4}-\frac{1}{4\sqrt{3}}(4{\rm Tr}\Omega^2-1+8c_1\cos\phi)^{1/2} \nonumber\\ & &\pm\frac{1}{2\sqrt{6}} \left[4{\rm Tr}\Omega^2-1-4c_1\cos\phi+\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1+8c_1\cos\phi}}\right]^{1/2},\\ \lambda^\pm(+)&=&\frac{1}{4}+\frac{1}{4\sqrt{3}}(4{\rm Tr}\Omega^2-1+8c_1\cos\phi)^{1/2}\nonumber \\ & &\pm\frac{1}{2\sqrt{6}}\left[4{\rm Tr}\Omega^2-1-4c_1\cos\phi-\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1+8c_1\cos\phi}}\right]^{1/2}, \end{eqnarray} where \begin{eqnarray} \cos3\phi&=&\frac{c_2}{2c_1^3}=4\cos^3\phi-3\cos\phi\label{3Phi}, \\ c_1&=&\sqrt{12b_0+3b_1+b_2^2},\\ c_2&=&27b_1^2+b_0(27-72b_2)+9b_1b_2+2b_2^3. \end{eqnarray} In the above equations, we have assumed $c_1\neq 0, c_2\neq 0$. Note that $c_2^2-4c_1^6=-27(\lambda_1-\lambda_2)^2(\lambda_1-\lambda_3)^2(\lambda_1-\lambda_4)^2(\lambda_2-\lambda_3)^2(\lambda_2-\lambda_4)^2(\lambda_3-\lambda_4)^2$ is non-positive. Thus, $4c_1^6\geq c_2^2\geq 0$, and so $c_1$ is real. If there is any repeated root ($c_2^2-4c_1^6=0$), $\phi=0$ or $\pi/3$ since $c_2=\pm 2c_1^3$. In fact, Eq.(\ref{3Phi}) implies that \begin{eqnarray} \cos\phi&=&\frac{c_1}{2^{2/3}(c_2+\sqrt{c_2^2-4c_1^6})^{1/3}}+\frac{(c_2+\sqrt{c_2^2-4c_1^6})^{1/3}}{2\times 2^{1/3}c_1},\\ \phi&=&{\rm Arg}\left[\left(c_2+\sqrt{c_2^2-4c_1^6}\right)^{1/3}\right] \end{eqnarray} Obviously, if $c_2<0$, then $\pi/6<\phi\leq \pi/3$, and if $c_2>0$, then $0\leq\phi<\pi/6$.
Now consider the case in which $c_1$ and/or $c_2$ vanish. We discuss it in two steps. First, suppose $b_2=3/8$, i.e. ${\rm Tr}\Omega^2=1/4$. If only $c_1=0$ or only $c_2=0$, then some of the eigenvalues would be complex numbers, which contradicts the Hermiticity of the DM. Hence $c_1$ and $c_2$ must vanish together, and we obtain that all the eigenvalues are $1/4$. Second, suppose $b_2\neq 3/8$, i.e. ${\rm Tr}\Omega^2\neq 1/4$. We must then analyze the following possibilities.
For only $c_1=0$, i.e. $12b_0+3b_1+b_2^2=0$: again from $c_2^2=c_2^2-4c_1^6=-27(\lambda_1-\lambda_2)^2(\lambda_1-\lambda_3)^2(\lambda_1-\lambda_4)^2(\lambda_2-\lambda_3)^2(\lambda_2-\lambda_4)^2(\lambda_3-\lambda_4)^2\leq 0$, we see that $c_2$ must also vanish since it is real. Hence $c_1=0$ alone is impossible.
For only $c_2=0$, i.e. $27b_1^2+b_0(27-72b_2)+9b_1b_2+2b_2^3=0$, the eigenvalues of the $4\times 4$ Hermitian and trace-one matrix are \begin{eqnarray} \lambda^\pm(-)&=&\frac{1}{4}-\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1}\pm\frac{1}{2\sqrt{6}}\left(4{\rm Tr}\Omega^2-1+\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1}}\right)^{1/2},\\ \lambda^\pm(+)&=&\frac{1}{4}+\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1}\pm\frac{1}{2\sqrt{6}}\left(4{\rm Tr}\Omega^2-1-\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1}}\right)^{1/2}. \end{eqnarray}
For both $c_1=0$ and $c_2=0$, the eigenvalues of the $4\times 4$ Hermitian and trace-one matrix are either \begin{eqnarray} \lambda_{1,2,3}&=&\frac{1}{4}-\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1},\\ \lambda_4&=&\frac{1}{4}+\frac{\sqrt{3}}{4}\sqrt{4{\rm Tr}\Omega^2-1}. \end{eqnarray} if $b_0=[3-6{\rm Tr}\Omega^2-6({\rm Tr}\Omega^2)^2+\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/288,\; b_1=[18{\rm Tr}\Omega^2-9-\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/72$, or \begin{eqnarray} \lambda_{1,2,3}&=&\frac{1}{4}+\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1},\\ \lambda_4&=&\frac{1}{4}-\frac{\sqrt{3}}{4}\sqrt{4{\rm Tr}\Omega^2-1}. \end{eqnarray} if $b_0=[3-6{\rm Tr}\Omega^2-6({\rm Tr}\Omega^2)^2-\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/288,\; b_1=[18{\rm Tr}\Omega^2-9+\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/72$.
As is well known, Peres' separability condition requires all the eigenvalues of the partial transpose of the DM to be non-negative \cite{Peres}. Thus, taking the minimum eigenvalue in Theorem One and requiring it to be non-negative, we have
{\noindent}{\it Theorem Two}\ The separability condition of DM $\rho$ of two qubits in an arbitrary state is \begin{eqnarray} 1&\geq&\frac{1}{\sqrt{3}}(4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P})^{1/2}+\frac{2}{\sqrt{6}}\left[4{\rm Tr}\rho^2-1\right.\nonumber\\ & &\left.-4c_1^{\rm P}\cos\phi^{\rm P}+\frac{3\sqrt{3}(1+8b_1^{\rm P}-2{\rm Tr}\rho^2)}{\sqrt{4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P}}}\right]^{1/2}, \end{eqnarray} where \begin{eqnarray} b_0^{\rm P}&=&b_0-\frac{1}{32}[(({\rm Tr}A)^2-{\rm Tr}A^2)\bm{\xi}_A \cdot\bm{\xi}_B+2\bm{\xi}_B^T A^2\bm{\xi}_A\nonumber\\ & &-2{\rm Tr}A\;\bm{\xi}_B^T A\bm{\xi}_A)]+\frac{1}{16}(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3,\label{PTB0}\\ b_1^{\rm P}&=&b_1-\frac{1}{4}(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3,\label{PTB1}\\ b_2^{\rm P}&=&b_2,\\ c_1^{\rm P}&=&\sqrt{12b_0^{\rm P}+3b_1^{\rm P}+b_2^{\rm P}{}^2},\label{PTB2}\\ c_2^{\rm P}&=&27b_1^{\rm P}{}^2+b_0^{\rm P}(27-72b_2^{\rm P})+9b_1^{\rm P}b_2^{\rm P}+2b_2^{\rm P}{}^3,\\ \cos\phi^{\rm P}&=&\frac{c_1^{\rm P}}{2^{2/3}(c_2^{\rm P}+\sqrt{c_2^{\rm P}{}^2-4c_1^{\rm P}{}^6})^{1/3}}+\frac{(c_2^{\rm P}+\sqrt{c_2^{\rm P}{}^2-4c_1^{\rm P}{}^6})^{1/3}}{2\times 2^{1/3}c_1^{\rm P}}. \end{eqnarray} And $c_1^{\rm P}\neq 0,\;c_2^{\rm P}\neq 0$. If only $c_2^{\rm P}=0$, the separability condition becomes \begin{equation} 1\geq\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\rho^2-1}+\frac{1}{2\sqrt{6}}\left(4{\rm Tr}\rho^2-1+\frac{3\sqrt{3}(1+8b_1^{\rm P}-2{\rm Tr}\rho^2)}{\sqrt{4{\rm Tr}\rho^2-1}}\right)^{1/2}. \end{equation} If both $c_1^{\rm P}=0$ and $c_2^{\rm P}=0$, then in case one the DM is always separable and in case two the separability condition is \begin{equation} {\rm Tr}\rho^2\leq \frac{1}{3} \end{equation} In the above, we have used the fact that the trace of the square of the partial transpose matrix of DM is equal to the trace of the square of DM.
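Peres' positivity test itself (the ingredient behind Theorem Two) is easy to verify numerically. The following Python sketch, not part of the letter, computes the partial transpose of a two-qubit DM directly and applies it to the Werner family $\rho_p=p\,|\Phi^+\rangle\langle\Phi^+|+(1-p)I/4$, which is known to be separable exactly for $p\leq 1/3$:

```python
import numpy as np

def partial_transpose(rho):
    # transpose the second-qubit indices: rho_{(i j),(k l)} -> rho_{(i l),(k j)}
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def peres_separable(rho, tol=1e-12):
    """Peres: a two-qubit rho is separable iff its partial transpose is >= 0."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min() >= -tol

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state
bell = np.outer(phi, phi.conj())

def werner(p):
    return p * bell + (1 - p) * np.eye(4) / 4

assert not peres_separable(werner(0.5))   # p > 1/3: entangled
assert peres_separable(werner(0.2))       # p < 1/3: separable
```

The minimum eigenvalue of the partial transpose of $\rho_p$ is $(1-3p)/4$, so the sign change at $p=1/3$ is exactly the separability threshold.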
Obviously, the pure state is the simplest case. In fact, we can prove the following theorem:
\noindent {\it Theorem Three}\ The eigenvalues of the partial transpose of the DM of two qubits in a pure state $\ket{\phi}=a\ket{00}+b\ket{01}+c\ket{10}+d\ket{11}$ are \begin{equation}
\mp |ad-bc|,\;\frac{1}{2}(1\mp\sqrt{1-4|ad-bc|^2}), \end{equation} and then the separability condition is just \begin{equation} ad-bc=0. \end{equation} This is consistent with \cite{My0}.
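Theorem Three can be checked directly. The sketch below (a numerical aside, assuming only numpy) takes the pure state with $a=0.8$, $d=0.6$, $b=c=0$, so that $|ad-bc|=0.48$, and compares the eigenvalues of the partial transpose with the four values predicted by the theorem:

```python
import numpy as np

def partial_transpose(rho):
    # transpose the second-qubit indices: rho_{(i j),(k l)} -> rho_{(i l),(k j)}
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# |phi> = a|00> + b|01> + c|10> + d|11>, here with a = 0.8, d = 0.6
a, b, c, d = 0.8, 0.0, 0.0, 0.6
phi = np.array([a, b, c, d], dtype=complex)
rho = np.outer(phi, phi.conj())

k = abs(a * d - b * c)                       # here k = 0.48
pred = sorted([-k, k,
               (1 - np.sqrt(1 - 4 * k**2)) / 2,
               (1 + np.sqrt(1 - 4 * k**2)) / 2])
comp = sorted(np.linalg.eigvalsh(partial_transpose(rho)))
assert np.allclose(comp, pred)               # matches Theorem Three
assert k > 0                                 # ad - bc != 0, so the state is entangled
```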
Because Peres' separability condition is necessary and sufficient for two qubits, Theorems Two and Three, as explicit and general expressions of Peres' condition, are necessary and sufficient as well.
If a $4\times 4$ Hermitian and trace-one matrix has some vanishing eigenvalues, the conclusions can be simplified. The following theorems show this; their proofs are obtained by solving the corresponding characteristic equations.
\noindent{\it Theorem Four}\ If a $4\times 4$ Hermitian and trace-one matrix has at least one vanishing eigenvalue, the remaining eigenvalues are \begin{eqnarray} \lambda_1&=&\frac{1}{3}(1+\sqrt{6{\rm Tr}\Omega^2-2}\; \cos\phi),\\ \lambda_2&=&\frac{1}{3}[1-\sqrt{6{\rm Tr}\Omega^2-2}\; \cos(\phi-\pi/3)],\\ \lambda_3&=&\frac{1}{3}[1-\sqrt{6{\rm Tr}\Omega^2-2}\; \cos(\phi+\pi/3)], \end{eqnarray} where \begin{eqnarray} \cos\phi&=& \frac{\sqrt{1-3b_2}}{2^{2/3}(d+\sqrt{d^2-4(1-3b_2)^3})^{1/3}}\\ \nonumber & &+\frac{(d+\sqrt{d^2-4(1-3b_2)^3})^{1/3}}{2\times 2^{1/3}\sqrt{1-3b_2}}, \label{3COS}\\ d&=&2-27b_1-9b_2. \end{eqnarray} Here we have assumed $3{\rm Tr}\Omega^2-1\neq 0$ and $d=(3\lambda_1-1)(3\lambda_2-1)(3\lambda_3-1)\neq 0$. If $d< 0$, then $\pi/6<\phi\leq \pi/3$, and if $d>0$, then $0\leq\phi<\pi/6$. Because $d^2-4(1-3b_2)^3=-27(\lambda_1-\lambda_2)^2(\lambda_2-\lambda_3)^2(\lambda_3-\lambda_1)^2$, we have $4(1-3b_2)^3\geq d^2 \geq 0$ and $1-3b_2=(3{\rm Tr}\Omega^2-1)/2\geq 0$. If ${\rm Tr}\Omega^2=1/3$, then $d$ must be zero; thus $b_1=-1/27$ and $b_2=1/3$, which implies that all the remaining eigenvalues are equal to $1/3$. In particular, if only $d=0$, the eigenvalues become \begin{eqnarray} \lambda_1&=&\frac{1}{3},\\ \lambda_{2,3}&=&\frac{1}{3}\left(1\pm\sqrt{\frac{3}{2}}\sqrt{3{\rm Tr}\Omega^2-1}\right). \end{eqnarray}
\noindent{\it Theorem Five}\ If a $4\times 4$ Hermitian and trace-one matrix has at least one vanishing eigenvalue, then the condition for all the eigenvalues to be non-negative is \begin{equation} \sqrt{6{\rm Tr}\Omega^2-2}\; \cos(\phi-\pi/3)\leq 1. \end{equation} If only $d=0$, the positivity condition becomes \begin{equation} {\rm Tr}\Omega^2\leq\frac{5}{9}. \end{equation}
\noindent{\it Theorem Six}\ If a $4\times 4$ Hermitian and trace-one matrix has at least two vanishing eigenvalues, then the other eigenvalues are \begin{equation} \lambda_\pm=\frac{1}{2}(1\pm\sqrt{2{\rm Tr}\Omega^2-1}). \end{equation} The condition for the eigenvalues to be non-negative is \begin{equation} {\rm Tr}\Omega^2\leq 1. \end{equation}
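Theorem Six admits a direct numerical check (an aside, assuming only numpy): a random rank-two Hermitian trace-one matrix with eigenvalues $t$, $1-t$, $0$, $0$ satisfies ${\rm Tr}\Omega^2=t^2+(1-t)^2$, and the closed form $\lambda_\pm=\frac{1}{2}(1\pm\sqrt{2{\rm Tr}\Omega^2-1})$ recovers exactly $t$ and $1-t$:

```python
import numpy as np

rng = np.random.default_rng(1)
# a random rank-two Hermitian trace-one matrix with eigenvalues t, 1 - t, 0, 0
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
t = 0.7
Omega = (t * np.outer(Q[:, 0], Q[:, 0].conj())
         + (1 - t) * np.outer(Q[:, 1], Q[:, 1].conj()))

tr2 = np.trace(Omega @ Omega).real
lam_plus = (1 + np.sqrt(2 * tr2 - 1)) / 2    # Theorem Six
lam_minus = (1 - np.sqrt(2 * tr2 - 1)) / 2
eig = np.sort(np.linalg.eigvalsh(Omega))
assert np.allclose(eig, [0.0, 0.0, lam_minus, lam_plus], atol=1e-10)
assert tr2 <= 1 + 1e-12                      # consistent with Lemma Two
```

Here $2{\rm Tr}\Omega^2-1=(2t-1)^2$, so the square root collapses to $|2t-1|$ and $\lambda_\pm=\{t,1-t\}$ algebraically as well.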
Thus, from Peres' separability condition, it is easy to prove the following theorem:
\noindent{\it Theorem Seven}\ If the partial transpose of the DM of two qubits has at least two vanishing eigenvalues, the density matrix is separable. If the partial transpose of the DM of two qubits has only one vanishing eigenvalue, the separability condition is obtained by applying Theorem Five to it and requiring the minimum eigenvalue to be non-negative.
Now let us discuss some applications of our theorems. As is well known, many measures of entanglement are related to quantum entropies defined via the density matrix, for example the entanglement of formation \cite{Bennett} and the relative entropy of entanglement \cite{Vedral}. To compute a quantum entropy, we often need the eigenvalues of the density matrix. Indeed, according to Wootters \cite{Wootters}, the measure of entanglement of two qubits is directly determined by the eigenvalues of a $4\times 4$ Hermitian matrix. Therefore, in terms of our theorems about the eigenvalues, we can easily calculate the entanglement of formation of an arbitrary state of two qubits. As to the relative entropy of entanglement or its improvement \cite{My1}, we have to calculate the von Neumann entropy, which is defined directly by the eigenvalues.
Furthermore, we can find a relation between the upper bound of entanglement and the eigenvalues.
\noindent{\it Theorem Eight}\ If none of the eigenvalues of the DM of two qubits vanishes, the maximum possible value of the entanglement of formation is not larger than \begin{eqnarray} & &\frac{1}{\sqrt{3}}(4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P})^{1/2}+\frac{2}{\sqrt{6}}\left[4{\rm Tr}\rho^2-1\right.\nonumber\\ & &\left.-4c_1^{\rm P}\cos\phi^{\rm P}+\frac{3\sqrt{3}(1+8b_1^{\rm P}-2{\rm Tr}\rho^2)}{\sqrt{4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P}}}\right]^{1/2}. \end{eqnarray}
This is because the DM of two qubits can be written as \begin{equation} \rho=\lambda_{\min} I+(1-4\lambda_{\min})\rho^\prime. \end{equation} Note that we cannot put a number larger than $\lambda_{\min}$ in front of the identity matrix $I$, because we have to keep $\rho^\prime$ positive definite.
Furthermore, let us consider a model of the transfer of entanglement \cite{My2}, which can be expressed as the following story. Alice and Bob are friends. One day they sat together in the lounge at a party, with Charlie on the left side of Alice and David on the right side of Bob. Alice and Charlie, and Bob and David, respectively exchanged their seats, which caused Alice and Bob's entanglement to decrease. In the language of quantum information, Alice and Bob initially share an entangled state. Without loss of generality, suppose it is the Bell state $(\ket{00}+\ket{11})/\sqrt{2}$, Charlie is in $\ket{c}$ and David is in $\ket{d}$; that is, the four of them are in the total state $\ket{c}\otimes(\ket{00}+\ket{11})\otimes\ket{d}/\sqrt{2}$. Now introduce the swapping interaction \cite{Loss} between Alice and Charlie, and between Bob and David: \begin{equation} S=\left(\begin{array}{cccc} 1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1 \end{array}\right),\qquad S\ket{ab}=\ket{ba}\quad (a,b=0,1). \end{equation} It is easy to see that \begin{equation} S\otimes S\left[\frac{1}{\sqrt{2}}\ket{c}\otimes(\ket{00}+\ket{11})\otimes\ket{d}\right]=\frac{1}{\sqrt{2}}(\ket{0}\otimes\ket{cd}\otimes\ket{0}+\ket{1}\otimes\ket{cd}\otimes\ket{1}). \end{equation} Thus the entanglement between the second and third qubits is transferred to entanglement between the first and fourth qubits. In general, the swapping process is affected by noise. We suppose that after the transfer of entanglement the DM becomes \begin{equation} \rho^\prime=(1-\epsilon)\rho+\epsilon\frac{1}{4}I, \end{equation} where $\epsilon$ represents the strength of the noise and $\rho$ is, in form, the same as the DM before the transfer of entanglement.
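The two ingredients of the model, the swap $S\ket{ab}=\ket{ba}$ and the depolarizing noise step, are easy to sketch in code (a numerical aside, assuming only numpy):

```python
import numpy as np

# the swapping interaction S|ab> = |ba>, basis ordered (|00>, |01>, |10>, |11>)
S = np.array([[1., 0., 0., 0.],
              [0., 0., 1., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 1.]])

e01 = np.array([0., 1., 0., 0.])      # |01>
e10 = np.array([0., 0., 1., 0.])      # |10>
assert np.allclose(S @ e01, e10)      # S|01> = |10>
assert np.allclose(S @ S, np.eye(4))  # swapping twice is the identity

def noisy_step(rho, eps):
    """The noise model rho -> (1 - eps) rho + eps I/4 (trace preserving)."""
    return (1 - eps) * rho + eps * np.eye(4) / 4

rho = np.eye(4) / 4                   # maximally mixed state is a fixed point
assert np.isclose(np.trace(noisy_step(rho, 0.3)).real, 1.0)
```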
Obviously, this model can be extended to a qubit chain: \begin{equation} \cdots\overbrace{\underbrace{\bullet\leftrightarrow\cdots\leftrightarrow\bullet\leftrightarrow}_n\overbrace{\bullet\leftrightarrow\underbrace{\bullet\qquad\bullet}_{\rho_0}\leftrightarrow\bullet}^{\rho_1}\underbrace{\leftrightarrow\bullet\leftrightarrow\cdots\leftrightarrow\bullet\leftrightarrow\bullet}_n}^{\rho_n}\cdots \end{equation} Through the swapping interaction, the entanglement can be transferred along the chain node by node in the two opposite directions. At the beginning, denote the DM of a given pair of adjacent nodes of the qubit chain by $\rho_0$. After the first swapping, the DM of the pair of qubits on the nodes immediately to the left and right of the given pair is written as $\rho_1$. Owing to the effect of noise, after the $n$-th swapping along the two directions, the DM of the pair of qubits on the $n$-th nodes to the left and right of the original two adjacent qubits becomes \begin{equation} \rho_n=(1-\epsilon)\rho_{n-1}+\epsilon\frac{1}{4}I. \end{equation} This equation does not take into account the fact that $\rho_n$ and $\rho_{n-1}$ refer to different pairs of qubits; we take it to be valid formally. We also assume that at the beginning the pair of qubits on the adjacent nodes is in a pure state. We would then like to know for which $n$ the state $\rho_n$ becomes separable, that is, to calculate the transfer distance $n$ of entanglement for such a chain of qubits through a noisy channel.
According to our theorem, the minimum eigenvalue of the partial transpose of $\rho_n$ is \begin{equation}
\lambda_{\min}=\frac{1}{4}\left[1-(1-\epsilon)^n(1+4|ad-bc|)\right]. \end{equation} Using Peres' criterion for separability, we find that \begin{equation}
n\leq-\frac{\log(1+4|ad-bc|)}{\log (1-\epsilon)}. \end{equation} In particular, when $\rho_0$ is the density matrix of a maximally entangled state, we obtain \begin{equation} n\leq\displaystyle -\frac{\log 3}{\log (1-\epsilon)}. \end{equation} Obviously, if one hopes for $n=10$, the noise strength $\epsilon$ must not be larger than $0.104042$; when the noise strength $\epsilon$ is larger than $0.42265$, any transfer will lead to disentanglement. If the noise strength $\epsilon$ reaches only $0.01$ or $0.1$, the transfer distance $n$ can be $109$ or $10$, respectively. Likewise, we can describe the transfer of entanglement along a single direction. The significance and applications of this model of the transfer of entanglement should be evident.
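The quoted numbers follow directly from the bound on $n$; the following Python sketch (an aside, not from the letter) evaluates it with $|ad-bc|=1/2$ for a maximally entangled initial state:

```python
import math

def max_transfer_distance(eps, k):
    """Largest integer n compatible with the entanglement bound
    n <= -log(1 + 4k) / log(1 - eps); k = |ad - bc| of the initial pure state."""
    return math.floor(-math.log(1 + 4 * k) / math.log(1 - eps))

assert max_transfer_distance(0.1, 0.5) == 10     # noise 0.1  -> distance 10
assert max_transfer_distance(0.01, 0.5) == 109   # noise 0.01 -> distance 109
# the largest noise strength still allowing n = 10 for k = 1/2:
assert abs((1 - 3 ** (-1 / 10)) - 0.104042) < 1e-6
```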
In addition, we hope to apply our theorems to seeking the minimal pure-state decomposition of the DM; this work is in progress. In a word, the theorems proposed here are useful tools for studying entanglement and related problems.
I would like to thank Artur Ekert for his great help and for hosting my visit to the Centre for Quantum Computation at Oxford University.
\begin{references} \bibitem{Neumann} J. von Neumann, {\it G\"ottinger Nachrichten} (1927) 245 \bibitem{QC} D. P. DiVincenzo, {\it Science} {\bf 270}, 255 (1995); A. Steane, {\it Rep. Prog. Phys.} {\bf 61}, 117 (1998) \bibitem{Bennett1} C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin and W. K. Wootters, {\it Phys. Rev. Lett.} {\bf 76}, 722 (1996) \bibitem{Deutsch} D. Deutsch, A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu and A. Sanpera, {\it Phys. Rev. Lett.} {\bf 77}, 2818 (1996) \bibitem{Horodecki} M. Horodecki, P. Horodecki and R. Horodecki, {\it Phys. Rev. Lett.} {\bf 78}, 574 (1997) \bibitem{Wootters} W. K. Wootters, {\it Phys. Rev. Lett.} {\bf 80}, 2245 (1998); S. Hill and W. K. Wootters, {\it Phys. Rev. Lett.} {\bf 78}, 5022 (1997) \bibitem{Schumacher} B. Schumacher, {\it Phys. Rev. A} {\bf 51}, 2738 (1995) \bibitem{My0} An Min Wang, {\it Chinese Phys. Lett.} {\bf 17}, 243 (2000) \bibitem{Peres} A. Peres, {\it Phys. Rev. Lett.} {\bf 77}, 1413 (1996) \bibitem{Bennett} C. H. Bennett, H. J. Bernstein, S. Popescu and B. Schumacher, {\it Phys. Rev. A} {\bf 53}, 2046 (1996); S. Popescu and D. Rohrlich, {\it Phys. Rev. A} {\bf 56}, R3319 (1997) \bibitem{Vedral} V. Vedral, M. B. Plenio, K. Jacobs and P. L. Knight, {\it Phys. Rev. A} {\bf 56}, 4452 (1997); V. Vedral, M. B. Plenio, M. A. Rippin and P. L. Knight, {\it Phys. Rev. Lett.} {\bf 78}, 2275 (1997); V. Vedral and M. B. Plenio, {\it Phys. Rev. A} {\bf 57}, 1619 (1998) \bibitem{My1} An Min Wang, quant-ph/0001023 \bibitem{My2} Private communication with Artur Ekert (Oct. 1999) \bibitem{Loss} D. Loss and D. P. DiVincenzo, {\it Phys. Rev. A} {\bf 57}, 120 (1998)
\end{references}
\end{document}
\begin{document}
\title{Effective Andr\'e-Oort Type Results for Almost-Holomorphic Modular Functions} \author{Haden Spence}
\maketitle
\begin{abstract}
In this short paper we discuss a number of effective and/or explicit results of Andr\'e-Oort type for the nonholomorphic function $\chi^*$, which I have discussed in a number of other papers such as \cite{Spence2017Ext} and \cite{Spence2016}. After working in a rather ad hoc manner to get some good estimates on the tails of the $q$-expansions involved, we prove weak effective Andr\'e-Oort results for $\chi^*$, which mimic but are not full analogues of effective Andr\'e-Oort results due to K\"uhne \cite{Kuehne2012} and Bilu-Masser-Zannier \cite{Bilu2013} for the classical modular function $j$.
Then we go on to discuss what we call an ``explicit'' result: that certain triples of special points cannot often be collinear, looking for an analogue of \cite{Bilu2017}. Again we cannot get a perfect analogue, but we do prove a weaker result and discuss what remains to be proved to complete it.
An important result which arises as a side-effect of the explicit calculation done here is Corollary \ref{cor:FieldEquality}, which affirms a conjecture I made in earlier papers (particularly \cite{Spence2017Ext}): that for a quadratic point $\tau$ we have $\mathbb{Q}(j(\tau))=\mathbb{Q}(\chi^*(\tau))$. Although it appears here somewhat tangentially, it may be the most significant result in the paper. \end{abstract}
\textbf{Acknowledgements.} As ever I would like to thank my supervisor Jonathan Pila, who has guided me with his characteristic good humour and certain judgement throughout my DPhil studies. This short paper in particular also owes a lot to Gareth Jones, who encouraged me to pursue this and with whom I had a number of very fruitful conversations on these topics. Since I have now left Oxford, it is also a good time to thank Oxford University and my friends and colleagues at the Mathematical Institute there for their help and friendship. Much of this work was carried out while I was funded by an EPSRC grant; I thank the EPSRC again for their generosity throughout my DPhil.
\textbf{Disclaimer.} In some ways this paper may be somewhat incomplete as a result of the author's being out of academic circulation. I welcome any comments and suggestions anyone may have on how it might be improved, though such changes may be slow in coming!
\section{Introduction}\label{sect:intro} In work carried out during the course of my DPhil studies at the University of Oxford, I proved various results of Andr\'e-Oort type in the context of ``nonclassical modular functions''. This last phrase, of course, is rather general and could apply to any number of functions. The functions that seem most appropriate for the purpose, however, appear to be the \emph{quasimodular} and \emph{almost holomorphic modular} functions.
These two classes of function follow a pattern that arises often among classes of `not-quite-modular' functions. One begins with some class of holomorphic functions which fail to be quite invariant under the action of the modular group (in this case the quasimodular functions), then by applying some sort of correction one produces a nonholomorphic function which is fully invariant under $\operatorname{SL}_2(\mathbb{Z})$.
Quasimodular functions arise from the derivatives of modular forms. This is a well-known and fairly obvious construction: when differentiating the modular law for a modular form $f$, \[f(\gamma\tau) = (c\tau+d)^kf(\tau),\] one gets \[f'(\gamma\tau)(c\tau+d)^{-2} = (c\tau+d)^kf'(\tau) + ck(c\tau+d)^{k-1}f(\tau)\] and hence \begin{equation}\label{eqn:qmForm}f'(\gamma\tau)(c\tau+d)^{-k-2} = f'(\tau) + k\frac{c}{c\tau+d}f(\tau),\end{equation} so that $f'$ is \emph{nearly} a modular form of weight $k+2$.
This is essentially the definition of a quasimodular form: a holomorphic function which transforms under $\operatorname{SL}_2(\mathbb{Z})$ like a modular form, only with an error term which is a polynomial in $\frac{c}{c\tau+d}$ with holomorphic functions as coefficients.
In this document we tend to be more concerned with the dual almost holomorphic modular functions. Using (\ref{eqn:qmForm}) and the fact that $\operatorname{Im}\gamma\tau = \operatorname{Im}\tau|c\tau+d|^{-2}$, one sees that
\begin{align*}(c\tau+d)^{-k-2}\left[f'(\gamma\tau) - \frac{ik}{2}\frac{f(\gamma\tau)}{\operatorname{Im}\gamma\tau}\right]&=f'(\tau) + k\frac{c}{c\tau+d}f(\tau)-\frac{ik}{2}\frac{f(\tau)}{\operatorname{Im}\tau}\frac{|c\tau+d|^2}{(c\tau+d)^2}\\ &=f'(\tau)-\frac{ik}{2}\frac{f(\tau)}{(c\tau+d)\operatorname{Im}\tau}\left(2ci\operatorname{Im}\tau+(c\overline{\tau}+d)\right)\\ &=f'(\tau)-\frac{ik}{2}\frac{f(\tau)}{\operatorname{Im}\tau}.\end{align*} So the function \[\widehat{f} = f' - \frac{ikf}{2\operatorname{Im}},\] the almost holomorphic dual of $f'$, transforms like a modular form of weight $k+2$. In fact any quasimodular form can be corrected in such a way, and corrected functions of this type are called almost holomorphic modular forms. \begin{definition}
A function $f:\mathbb{H}\to\mathbb{C}$ is called almost holomorphic if there are holomorphic functions $f_0,\dots,f_n:\mathbb{H}\to\mathbb{C}$, bounded as $\operatorname{Im}\tau\to\infty$, such that
\[f(\tau)=\sum_{r=0}^{n}f_r(\tau)(\operatorname{Im}\tau)^{-r}.\]
Such a function is an almost holomorphic modular form if there is an integer $k$ such that
\[f(\gamma\tau)=(c\tau+d)^kf(\tau)\]
for all\footnote{Throughout this paper we ignore level structure, dealing always with the full modular group $\operatorname{SL}_2(\mathbb{Z})$.} $\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\operatorname{SL}_2(\mathbb{Z})$ and all $\tau\in\mathbb{H}$.
An almost holomorphic modular function is a quotient of two almost holomorphic modular forms of equal weight. \end{definition}
Like most modular objects, quasimodular functions have $q$-expansions, that is, expressions of the form \[f(\tau)=\sum_{n=-N}^{\infty} c_nq^n,\] where $q=e^{2\pi i\tau}$ and $c_n\in\mathbb{C}$. This is not quite true of almost holomorphic modular functions, of course, but they can be represented instead by polynomials with $q$-expansions as coefficients; that is, elements of $\mathbb{C}((q))[(\operatorname{Im}\tau)^{-1}]$. Though these are not strictly $q$-expansions in the traditional sense, we will refer to them as such for the remainder of the document.
The prototypical quasimodular form is the weight-2 quasimodular Eisenstein series $E_2$, which we write here in terms of its $q$-expansion \[E_2(\tau) = 1 - 24\sum_{n\geq 1}\sigma_1(n)q^n,\]
where $\sigma_k$ is the sum-of-divisors function $\sigma_k(n)=\sum_{d|n}d^k$. $E_2$ is a weight-2 quasimodular form, and can be corrected to make an almost holomorphic modular form \[E_2^* = E_2-\frac{3}{\pi\operatorname{Im}}.\] We will also require the $q$-expansions for the other standard Eisenstein series \[E_4(\tau) = 1 + 240\sum_{n\geq 1}\sigma_3(n)q^n\] and \[E_6(\tau) = 1 - 504\sum_{n\geq 1}\sigma_5(n)q^n.\]
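The quasi-modularity of $E_2$ and the full modularity of its correction $E_2^*$ are easy to test numerically. The following Python sketch (an illustration only, not used in any proof; the truncation length and sample point are my own choices) sums the truncated $q$-expansion of $E_2$ and checks that $E_2^*$ transforms with weight $2$ under $\tau\mapsto-1/\tau$.

```python
import cmath

def sigma(k, n):
    # sum of k-th powers of the divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def E2(tau, terms=200):
    # weight-2 quasimodular Eisenstein series via its q-expansion
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 - 24 * sum(sigma(1, n) * q**n for n in range(1, terms))

def E2_star(tau, terms=200):
    # almost holomorphic correction: E2* = E2 - 3/(pi * Im(tau))
    return E2(tau, terms) - 3 / (cmath.pi * tau.imag)

tau = 0.3 + 1.2j
# E2* transforms like a weight-2 modular form under tau -> -1/tau
assert abs(E2_star(-1 / tau) - tau**2 * E2_star(tau)) < 1e-9
```

The uncorrected $E_2$ fails this identity by the $\frac{6}{\pi i}\frac{c}{c\tau+d}$ error term, which is what the $-3/(\pi\operatorname{Im}\tau)$ correction removes.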
With these building blocks we can create a prototypical almost holomorphic modular function \[\chi^*=\frac{E_2^*E_4E_6}{\Delta},\] where $\Delta$ is the classical modular discriminant function $\frac{1}{1728}(E_4^3-E_6^2)$. There is nothing particularly special about $\chi^*$ except that, since $\Delta$ is known to be non-vanishing within $\mathbb{H}$, $\chi^*$ has no singularities and is everywhere real analytic. Moreover the field $F^*$ of almost holomorphic modular functions satisfies $F^*=\mathbb{C}(j,\chi^*)$; though this last is hardly unique to $\chi^*$. Here $j$ is the classical modular $j$-invariant: we will write $\pi$ for the Cartesian product $(j,\chi^*)$ of $j$ and $\chi^*$.
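As a sanity check on the normalization $\Delta=\frac{1}{1728}(E_4^3-E_6^2)$, one can multiply out truncated $q$-expansions and recover the familiar integer coefficients $q-24q^2+252q^3-1472q^4+\cdots$ of $\Delta$. A small sketch (illustration only; the helper names are mine):

```python
def sigma(k, n):
    # sum of k-th powers of the divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def mul(a, b, N):
    # multiply two truncated power series (coefficient lists mod q^N)
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

N = 6
E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]

E4cubed = mul(mul(E4, E4, N), E4, N)
E6squared = mul(E6, E6, N)
# the difference is exactly divisible by 1728, term by term
Delta = [(x - y) // 1728 for x, y in zip(E4cubed, E6squared)]

assert Delta[:5] == [0, 1, -24, 252, -1472]  # q-expansion of the discriminant
```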
Note that \[\chi^* = \frac{E_2E_4E_6}{\Delta} - \frac{3}{\pi y}\frac{E_4E_6}{\Delta},\] where $y=\operatorname{Im}\tau$, so it will be helpful to give the components names. We will write $\chi = \frac{E_2E_4E_6}{\Delta}$ and $\xi = \frac{E_4E_6}{\Delta}$, so that \[\chi^* = \chi - \frac{3}{\pi y}\xi.\]
It is well-known of course that for quadratic numbers $\tau \in\mathbb{H}$, the number $j(\tau)$ is algebraic, and referred to as a special point, or singular modulus. It is also true, proven by Masser in \cite{Masser1975}, that $\chi^*(\tau)$ is algebraic for quadratic $\tau$, and in fact $\chi^*(\tau)\in\mathbb{Q}(j(\tau))$. Such a point will be called a $\chi^*$-special point, while a point $\pi(\tau)=(j(\tau),\chi^*(\tau))$ is called $\pi$-special (when $\tau$ is quadratic).
Similar special behaviour exists in positive dimensions as well: if $g\in\operatorname{GL}_2^+(\mathbb{Q})$ and $S=\{(\tau,g\tau):\tau\in\mathbb{H}\}$, then the set \[\pi(S)=\{(j(\tau),\chi^*(\tau),j(g\tau),\chi^*(g\tau)):\tau\in\mathbb{H}\}\] is contained in an irreducible 2-dimensional variety $V$ defined over $\overline{\mathbb{Q}}$. Moreover, $V$ depends only on the determinant of $g$ when it is scaled so as to be a primitive integer matrix. These varieties, together with the $\pi$-special points, are the building blocks of what we shall refer to as $\pi$-special varieties; for more details see \cite{Spence2016}. In that paper we also prove the following theorem, which is the motivation for much of this paper.
\begin{theorem}[Andr\'e-Oort for $\pi$]\label{thrm:AOforPi}
Let $V\subseteq\mathbb{C}^{2n}$ be an algebraic variety. Then $V$ contains only finitely many maximal $\pi$-special varieties. \end{theorem} This was proven using techniques from o-minimality which are ineffective and rather difficult to make effective; it also relies heavily on a Galois bound provided by Siegel which is ineffective. This lack of effectivity, however, is not so unusual; in general, techniques powerful enough to deal with classical Andr\'e-Oort results in full generality tend to be ineffective for similar reasons.
Several effective and explicit results of Andr\'e-Oort type do exist in the classical context, however. For instance, K\"uhne \cite{Kuehne2012} and Bilu-Masser-Zannier \cite{Bilu2013} prove:
\begin{theorem}\label{thrm:effectiveAOforJ}
Let $V\subseteq\mathbb{C}^2$ be an algebraic curve defined over $\overline{\mathbb{Q}}$. Then there are effectively computable constants $c_i=c_i(V)$ such that whenever $(j(\tau_1),j(\tau_2))\in V$ with quadratic $\tau_i$ and $d_i$ is the absolute value of the discriminant of $\tau_i$, either
\[\max(d_1,d_2)\leq c_1\]
or there is a primitive integer matrix $g$ of determinant at most $c_2$ such that $\tau_2 = g\tau_1$. \end{theorem}
Besides this effective Andr\'e-Oort result for $j$, there are also a number of what we'll call ``explicit'' results; theorems which answer specific questions about the special subvarieties of particular varieties $V$. For instance, work of Pila and Tsimerman who proved in \cite{Pila2014a} that with obvious exceptions there are only finitely many multiplicatively dependent $n$-tuples of singular moduli, or of Bilu, Luca and Masser \cite{Bilu2017}, who proved:
\begin{theorem}\label{thrm:collinearJPoints}
Barring obvious exceptions, there are only finitely many collinear triples of points
\[(j(\tau_1),j(\tau_2)),\qquad(j(\tau_3),j(\tau_4)),\qquad(j(\tau_5),j(\tau_6))\]
with $\tau_i\in\mathbb{H}$ all quadratic. \end{theorem}
The goal of this paper is to investigate analogues of theorems \ref{thrm:effectiveAOforJ} and \ref{thrm:collinearJPoints} for $\chi^*$ and for $\pi$. In Section \ref{sect:qExp} we calculate various bounds on the $q$-expansions involved and prove a weak analogue of Theorem \ref{thrm:effectiveAOforJ}, namely: \begin{theorem}\label{thrm:EffectivePiAO}
Let $X\subseteq\mathbb{C}^2$ be an algebraic curve defined over $\overline{\mathbb{Q}}$. Then there is an effectively computable constant $c=c(X)$ such that for quadratic $\tau\in\mathbb{H}$,
\[(j(\tau),\chi^*(\tau))\in X \implies \text{ the absolute value of the discriminant of }\tau \text{ is at most }c.\] \end{theorem} This is of course not a perfect analogue of \ref{thrm:effectiveAOforJ}; the ideal analogue would have two copies of $\chi^*$, rather than a $j$ and a $\chi^*$. We discuss this very briefly in Section \ref{sect:qExp}; the summary is that we are not certain how to approach such a conjecture.
In Section \ref{sect:collinear} we work on a weak analogue of Theorem \ref{thrm:collinearJPoints}: \begin{theorem}\label{thrm:collinearPiPoints}
There are only finitely many collinear triples
\[P_1=(j(\tau_1),\chi^*(\tau_1)),\qquad P_2=(j(\tau_2),\chi^*(\tau_2)),\qquad P_3=(j(\tau_3),\chi^*(\tau_3))\]
with $\tau_i$ quadratic and $P_i$ pairwise distinct. \end{theorem} Just as \ref{thrm:EffectivePiAO} is not a perfect analogue of the classical version \ref{thrm:effectiveAOforJ}, Theorem \ref{thrm:collinearPiPoints} is not a perfect analogue to \ref{thrm:collinearJPoints}. We would very much like to have the following:
\begin{conjecture}\label{conj:collinearChiPoints}
There are only finitely many triples
\[P_1=(\chi^*(\tau_1),\chi^*(\tau_2)),\qquad P_2=(\chi^*(\tau_3),\chi^*(\tau_4)),\qquad P_3=(\chi^*(\tau_5),\chi^*(\tau_6))\]
with $\tau_i$ quadratic, such that the $P_i$ are pairwise distinct and belong to a straight line which is neither horizontal, vertical nor the diagonal $x=y$. \end{conjecture} This should very much be a tractable problem. Indeed, by emulating Bilu-Luca-Masser's proof of \ref{thrm:collinearJPoints}, one can get a long way towards a proof of \ref{conj:collinearChiPoints}. Unfortunately, there remains a gap which, while apparently quite surmountable, the author has not found the time to work through. The gap lies in the fact that the Bilu-Luca-Masser approach relies on a particular result of Allombert, Bilu and Pizarro-Madariaga \cite[Theorem 1.2]{Allombert2015}. Without a suitable analogue of this, two crucial lemmas from the proof of \ref{thrm:collinearJPoints} lack counterparts in the $\chi^*$ setting.
One might be able to use the ``multiplicity of $q$-expansions'' contained in $\chi^*$ to circumvent the need for the missing result. I will briefly discuss the state of my approaches towards Conjecture \ref{conj:collinearChiPoints} at the end of Section \ref{sect:collinear}; with luck, a future version of this paper might contain a complete proof.
\begin{notation}
Throughout, we will use $\mathbb{F}$ to refer to (the closure of) the standard fundamental domain for the action of $\operatorname{SL}_2(\mathbb{Z})$ on $\mathbb{H}$, namely:
\[\mathbb{F}=\left\{z\in\mathbb{C}: |z| \geq 1, -\frac{1}{2}\leq\operatorname{Re} z\leq\frac{1}{2}\right\}.\] \end{notation}
\section{Bounds on $q$-expansions and Effective Andr\'e-Oort for $\pi$}\label{sect:qExp} The goal of most of this section is to carry out the explicit $q$-expansion calculations which form much of the basis for the effective results to come. We will at the end of the section use these to prove our effective Andr\'e-Oort result \ref{thrm:EffectivePiAO} for $\pi$, which is the easiest of the results in this paper.
The bounds we need are on the tails of the relevant $q$-expansions; we wish to show that the first term in each $q$-expansion is the main contributor to the total value of the expansion. The $q$-expansions of $\chi$, $\xi$ and $j$ all begin with $q^{-1}$, so we will write: \[j = q^{-1}+\widehat{j},\qquad \chi = q^{-1}+\widehat{\chi},\qquad \xi = q^{-1}+\widehat{\xi}.\]
We wish to estimate $|\widehat{\chi}|$ and $|\widehat{\xi}|$, aiming for some analogue of known facts about $\widehat{j}$; it was proven by Bilu-Masser-Zannier in \cite{Bilu2013} that $|\widehat{j}|\leq 2079$ for all $\tau \in \mathbb{F}$. We'll be getting analogues of this fact for $\widehat{\chi}$ and $\widehat{\xi}$, working from first principles starting with the known $q$-expansions of Eisenstein series. As with $\widehat{j}$, we'll be able to get much better bounds when $\operatorname{Im}\tau \geq 2$, which will be useful later, so we will distinguish cases based on the size of $\operatorname{Im}\tau$.
For all $n\geq 3$, Robin \cite{Robin1984} proved that \[\sigma(n) < e^\gamma n\log\log n + \frac{0.6483n}{\log\log n}. \] For $n\geq 4$ we can rewrite this as the more manageable \[\sigma(n) < 8 n\log\log n.\] Even better, for $n\geq 6$ we get \[\sigma(n)< 4n\log\log n.\] It trivially follows that for $n\geq 6$: \[\sigma_3(n) < 64n^3(\log\log n)^3,\] \[\sigma_5(n) < 1024n^5(\log\log n)^5,\] and one can check by hand that in fact the above two inequalities hold for $n=4$ and $5$ as well.
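These inequalities are easy to confirm numerically for small $n$; the following sketch (a sanity check only, not part of the proof; the range tested is my own choice) verifies the $\sigma$ bounds over a modest range, together with the by-hand cases $n=4,5$ for the $\sigma_3$ and $\sigma_5$ inequalities.

```python
import math

def sigma(k, n):
    # sum of k-th powers of the divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

ll = lambda n: math.log(math.log(n))  # log log n

# sigma(n) < 8 n loglog n for n >= 4, and < 4 n loglog n for n >= 6
for n in range(4, 2001):
    assert sigma(1, n) < 8 * n * ll(n)
for n in range(6, 2001):
    assert sigma(1, n) < 4 * n * ll(n)

# the cubed and fifth-power bounds also hold at n = 4 and 5
for n in (4, 5):
    assert sigma(3, n) < 64 * n**3 * ll(n)**3
    assert sigma(5, n) < 1024 * n**5 * ll(n)**5
```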
Using these, we can get bounds on the tails of the $q$-expansions of the Eisenstein series $E_2$, $E_4$, $E_6$. First note that for $\tau\in\mathbb{F}$,
\[|e^{2\pi i \tau}| = |e^{-2\pi \operatorname{Im}\tau}| \leq e^{-\pi\sqrt{3}} < 0.005. \] Using this we see that:
\begin{align*}|(E_2(\tau)-1)/q|&\leq 24 \left(1 + 0.015 + 0.0001 + 200\sum_{n\geq 4} 8n\log\log n\cdot 0.005^n\right)\\&< 24\left(1.016+1600\sum_{n\geq 4} n\cdot 0.0055^n\right) \\&< 24 \times 1.017\\&<25,
\\|(E_4(\tau)-1)/q|&\leq 240 \left(1 + 0.045 + 0.001 + 200\sum_{n\geq 4} 64n^3(\log\log n)^3\cdot 0.005^n\right)\\&< 240\left(1.046+12800\sum_{n\geq 4} n^3\cdot 0.0067^n\right) \\&< 240 \times 1.048\\&<252,
\\|(E_6(\tau)-1)/q|&\leq 504 \left(1 + 0.165 + 0.007 + 200\sum_{n\geq 4} 1024n^5(\log\log n)^5\cdot 0.005^n\right)\\&< 504\left(1.172+204800\sum_{n\geq 4} n^5\cdot 0.0081^n\right) \\&< 504 \times 2.172\\&<1095. \end{align*} In each case we evaluate the infinite sums by standard methods and use the fact that $1.1^n > \log\log n$ for all $n$.
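These three estimates can be cross-checked by summing the actual $q$-expansions at the largest value $|q|=e^{-\pi\sqrt{3}}$ attained on $\mathbb{F}$ (a numerical sanity check only; the truncation length is my choice, and the remaining terms are astronomically small):

```python
import math

def sigma(k, n):
    # sum of k-th powers of the divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

qmax = math.exp(-math.pi * math.sqrt(3))  # largest |q| on F, < 0.005

def tail(coeff, j):
    # coeff * sum_{n>=1} sigma_j(n) |q|^(n-1) bounds |(E - 1)/q|
    return coeff * sum(sigma(j, n) * qmax**(n - 1) for n in range(1, 60))

assert tail(24, 1) < 25     # E_2 tail
assert tail(240, 3) < 252   # E_4 tail
assert tail(504, 5) < 1095  # E_6 tail
```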
Now we can begin work on $\widehat{\chi}$ and $\widehat{\xi}$, aiming first to achieve some strong bounds holding only for certain $\tau$.
\begin{proposition}\label{propn:strongerChiBounds}
For $\operatorname{Im}\tau\geq 2$, we have
\[|\widehat{j}|\leq 1193,\qquad|\widehat{\chi}|\leq 4808,\qquad\text{and}\qquad|\widehat{\xi}|\leq 4782.\]
\begin{proof}
The fact for $\widehat{j}$ is due to K\"uhne, who proved it in \cite{Kuehne2012}. The claims for $\chi$ and $\xi$ will require a little work.
We first need to find a suitable lower bound on $|\Delta|$. More specifically, we need to find an effective constant $c$ such that
\[\left|\frac{\Delta - q}{q^2}\right| < c\]
in the region under consideration. My somewhat crude method uses the fact that $|\widehat{j}|\leq 1193$, from which it follows immediately that
\[|1-jq|<1193|q|,\]
which, for $\operatorname{Im}\tau\geq 2$, is bounded above by $0.01$, whence
\begin{equation}\label{eqn:jqBound}|(jq)^{-1}|<(0.99)^{-1}<1.011.\end{equation}
We also know that for $\tau\in\mathbb{F}$,
\begin{multline}\label{eqn:e4cubedBound}\left|\frac{E_4^3-1}{q}\right|=\left|\frac{\left(1+q\frac{E_4-1}{q}\right)^3-1}{q}\right|=\left|3\frac{E_4-1}{q}+3q\left(\frac{E_4-1}{q}\right)^2+q^2\left(\frac{E_4-1}{q}\right)^3\right|\\<756+953+401=2110.\end{multline}
We can write
\[\frac{\Delta - q}{q^2} = \frac{\frac{E_4^3}{j}-q}{q^2} = \frac{1}{jq}\left(\frac{E_4^3-jq}{q}\right),\]
whence
\[\left|\frac{\Delta-q}{q^2}\right| \leq \left|\frac{1}{jq}\right|\left|\frac{E_4^3-1}{q}\right|+\left|\frac{1}{jq}\right||j-q^{-1}|<\left|\frac{1}{jq}\right|\left(\left|\frac{E_4^3-1}{q}\right|+1193\right).\]
Combining this with (\ref{eqn:jqBound}) and (\ref{eqn:e4cubedBound}) yields
\begin{equation}\label{eqn:DiscBound}\left|\frac{\Delta - q}{q^2}\right|< 1.011\times(2110+1193) < 3340.\end{equation}
By direct calculation we can see that
\begin{multline}\label{eqn:firstChiBound}|\widehat{\chi}|=|\chi-q^{-1}|=\left|\frac{\left(1+q\frac{E_2-1}{q}\right)\left(1+q\frac{E_4-1}{q}\right)\left(1+q\frac{E_6-1}{q}\right)-\left(1+q\frac{\Delta-q}{q^2}\right)}{q\left(1+q\frac{\Delta-q}{q^2}\right)}\right|
\\\leq\left|\frac{1}{1+q\frac{\Delta-q}{q^2}}\right|\times\left(\left|\frac{\Delta - q}{q^2}\right|+\left|\frac{E_2-1}{q}\right|+\left|\frac{E_4-1}{q}\right|+\left|\frac{E_6-1}{q}\right|\right.+\\\left.\left|q\frac{E_2-1}{q}\frac{E_4-1}{q}\right|+\left|q\frac{E_2-1}{q}\frac{E_6-1}{q}\right|+\left|q\frac{E_4-1}{q}\frac{E_6-1}{q}\right|+\left|q^2\frac{E_2-1}{q}\frac{E_4-1}{q}\frac{E_6-1}{q}\right|\right),\end{multline}
which, using (\ref{eqn:DiscBound}) is bounded above by
\[1.02\times(3340+25+252+1095+0.1+0.1+1+0.1)<4808.\]
Similarly:
\begin{align*}|\widehat{\xi}|=|\xi-q^{-1}|&\leq\left|\frac{1}{1+q\frac{\Delta-q}{q^2}}\right|\times\left(\left|\frac{\Delta - q}{q^2}\right|+\left|\frac{E_4-1}{q}\right|+\left|\frac{E_6-1}{q}\right|+\left|q\frac{E_4-1}{q}\frac{E_6-1}{q}\right|\right) \\&<1.02\times(3340+252+1095+1)<4782.\end{align*}
\end{proof} \end{proposition}
For the remaining $\tau\in\mathbb{F}$, the bounds we can obtain are not as good.
\begin{proposition}\label{propn:weakerChiBounds}
For all $\tau\in\mathbb{F}$,
\[\left|\widehat{\chi}\right| < 39960\qquad\text{and}\qquad|\widehat{\xi}| < 39032.\]
\begin{proof}
This divides into two steps: first we'll get such bounds for $\operatorname{Im}\tau \geq 1.5$, then for the remainder of the region.
For $\operatorname{Im}\tau\geq 1.5$, we proceed exactly as above. We have $|1-jq|<1193|q|$, which for $\operatorname{Im}\tau\geq 1.5$ is bounded above by 0.1, so as in (\ref{eqn:DiscBound}):
\[\left|\frac{\Delta-q}{q^2}\right|< 1.12\times(2110+1193)<3700.\]
It follows as for (\ref{eqn:firstChiBound}) that when $\operatorname{Im}\tau\geq 1.5$,
\[|\widehat{\chi}| \leq 1.43\times(3700+25+252+1095+1+3+28+1)<7301\]
and similarly
\[|\widehat{\xi}| < 7258.\]
For the remaining $\tau\in\mathbb{F}$ (i.e.\ those with $\frac{\sqrt{3}}{2}\leq\operatorname{Im}\tau<1.5$) we use a different technique. Recalling that $\Delta = q\prod_{n=1}^{\infty}(1-q^n)^{24}$ and noting that in this region $|q^{-1}|\leq 12392$ and $|q|<0.005$, we see:
\begin{align*}
\left|\frac{\Delta - q}{q^2}\right| = \left|q^{-1}\left(\prod_{n=1}^{\infty}(1-q^n)^{24}-1\right)\right|\leq 12392\left(\prod_{n=1}^{\infty}(1+0.005^n)^{24}-1\right)\leq 12392\left(e^{24\sum_{n\geq 1}0.005^n}-1\right) < 12392\times 0.13 < 23546.
\end{align*}
Also we have
\begin{align*}\left|\frac{1}{1+q\frac{\Delta - q}{q^2}}\right| = \left|\frac{q}{\Delta}\right|=\left|\frac{1}{\prod_{n=1}^{\infty}(1-q^n)^{24}}\right|&<\left|\frac{1}{\prod_{n=1}^{100}(1-0.005^n)\prod_{n=101}^{\infty}(1-n^{-2})}\right|^{24}\\&<\left(\frac{1}{0.994\times\frac{100}{101}}\right)^{24}<1.5.\end{align*}
In much the same way as for (\ref{eqn:firstChiBound}), we then get
\[\left|\widehat{\chi}\right| < 1.5\times (23546 + 25+252+1095+32+137+1380+173) = 39960\]
and
\[|\widehat{\xi}| < 39032,\]
as required.
\end{proof} \end{proposition}
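The proof above rests on the telescoping evaluation $\prod_{n=M}^{N}(1-n^{-2})=\frac{(M-1)(N+1)}{MN}$, whence $\prod_{n=101}^{\infty}(1-n^{-2})=\frac{100}{101}$. This can be confirmed with exact rational arithmetic (an illustration only; the helper name is mine):

```python
from fractions import Fraction

def telescoped(M, N):
    # prod_{n=M}^{N} (1 - 1/n^2), computed exactly
    p = Fraction(1)
    for n in range(M, N + 1):
        p *= Fraction(n * n - 1, n * n)
    return p

# (1 - 1/n^2) = (n-1)(n+1)/n^2 telescopes to (M-1)(N+1)/(M N)
for M, N in [(2, 100), (101, 5000)]:
    assert telescoped(M, N) == Fraction((M - 1) * (N + 1), M * N)
```

Letting $N\to\infty$ in the exact formula gives the limit $\frac{M-1}{M}$ used in the proof.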
We'll use all of the above propositions in the calculations to come, but in most cases it will make more sense to use the better bounds known for $\operatorname{Im}\tau\geq 2$, i.e.\ Proposition \ref{propn:strongerChiBounds}. The main purpose of Proposition \ref{propn:weakerChiBounds} was to yield the following lemmas.
As with many of the calculations in this paper, the first of these lemmas is taken essentially verbatim from Lemma 5.1 of \cite{Bilu2017}, in light of the calculations above.
\begin{lemma}\label{lma:QuadraticSizeComparison}
Let $x=\chi^*(\tau)$ be a $\chi^*$-special point and $x'=\chi^*(\tau')$ the principal $\chi^*$-special point of the same discriminant. Assume (without any loss) that $\tau$ and $\tau'$ are each in $\mathbb{F}$. Then either $\tau=\tau'$ or $|x'|>|x|+5595$.
\begin{proof}
Let $D$ be the common discriminant of $x$ and $x'$. We may assume that $|D|\geq 15$, otherwise $h(D)=1$ and there is nothing to prove. We'll assume that $\tau\ne \tau'$.
Since $\tau'$ is principal and $\tau$ is non-principal, it follows that
\[\operatorname{Im}\tau' = \frac{\sqrt{|D|}}{2}\qquad\text{and}\qquad\operatorname{Im}\tau \leq \frac{\sqrt{|D|}}{4}.\]
Therefore
\begin{align*}|x'|=|\chi^*(\tau')|&=\left|q^{-1}\left(1-\frac{3}{\pi \operatorname{Im}\tau'}\right) + \widehat{\chi}-\frac{3}{\pi \operatorname{Im}\tau'}\widehat{\xi}\right|\\&\geq|q^{-1}|\left(1-\frac{6}{\pi\sqrt{15}}\right)-4808 - \frac{6}{\pi\sqrt{15}}\times 4782\qquad\text{using Proposition \ref{propn:strongerChiBounds}.}\\&\geq e^{\pi\sqrt{|D|}}\times 0.5-7167\end{align*}
On the other hand
\begin{align*}
|x|=|\chi^*(\tau)|&\leq |q^{-1}|\left|1-\frac{3}{\pi\operatorname{Im}\tau}\right|+39960 + \frac{6}{\pi\sqrt{3}}\times 39032\qquad\text{using Proposition \ref{propn:weakerChiBounds}.}\\&\leq |q^{-1}|+39960 + 43039\qquad\text{since }\operatorname{Im}\tau\geq \sqrt{3}/2\text{ and }|1-6/\pi\sqrt{3}|<1.\\&\leq e^{\pi\sqrt{|D|}/2} + 82999.
\end{align*}
So
\[|x'|-|x| \geq e^{\pi\sqrt{|D|}}\times 0.5- e^{\pi\sqrt{|D|}/2} - 82999-7167 \geq 0.5e^{\pi\sqrt{15}}-e^{\pi\sqrt{15}/2}-90166 > 5595, \]
as required.
\end{proof} \end{lemma} The above will be useful for the next section, but it also allows us to resolve an old question about the degree of $\chi^*(\tau)$, which was discussed in \cite{Spence2017Ext} and in \cite{Spence2016}. \begin{corollary}\label{cor:FieldEquality}
For quadratic $\tau$, $\mathbb{Q}(j(\tau))=\mathbb{Q}(\chi^*(\tau))$.
\begin{proof}
For a given discriminant $D$, let $S_D$ be the set of $\tau\in\mathbb{F}$ having discriminant $D$. The class polynomial
\[H_j(X) = \prod_{\tau\in S_D}(X-j(\tau))\in\mathbb{Q}[X]\]
is known to be irreducible over $\mathbb{Q}$.
It is a fact due to Masser that for quadratic $\tau$, $\mathbb{Q}(\chi^*(\tau))\subseteq\mathbb{Q}(j(\tau))$. Proposition 5.2 of \cite{Spence2016} uses work of Masser to show further that for quadratic $\tau$, if $\theta$ is a field automorphism acting on $\mathbb{Q}(j(\tau))$ so that $\theta(j(\tau))=j(\tau')$, then also $\theta(\chi^*(\tau))=\chi^*(\tau')$.
From this and the irreducibility of $H_j(X)$, it follows that the class polynomial
\[H_{\chi^*}(X)=\prod_{\tau\in S_D}(X-\chi^*(\tau))\]
takes the form $p(X)^k$ for some polynomial $p$ irreducible over $\mathbb{Q}$.
Now let $\tau'\in S_D$ be principal. If $k>1$, then it follows that there is a non-principal $\tau\in S_D$ such that $\chi^*(\tau)=\chi^*(\tau')$, which contradicts Lemma \ref{lma:QuadraticSizeComparison}.
So $H_{\chi^*}(X)$ is irreducible, whence $[\mathbb{Q}(\chi^*(\tau)):\mathbb{Q}]=[\mathbb{Q}(j(\tau)):\mathbb{Q}]$, so that indeed $\mathbb{Q}(\chi^*(\tau))=\mathbb{Q}(j(\tau))$.
\end{proof} \end{corollary}
We can also now prove our desired effective Andr\'e-Oort result for $\pi$, namely Theorem \ref{thrm:EffectivePiAO}.
\begin{proof}[Proof of Theorem \ref{thrm:EffectivePiAO}]
For a number field $K$, let $p\in K[X,Y]$. Suppose that $p(j(\tau),\chi^{*}(\tau))$ vanishes for some quadratic $\tau$ of discriminant $-D$. For every quadratic $\tau'$ also having discriminant $-D$, there is a Galois automorphism (over $\mathbb{Q}$) sending $j(\tau)$ to $j(\tau')$. Call it $\theta$. By Proposition 5.2 of \cite{Spence2016}, we know that also $\theta(\chi^*(\tau)) = \chi^*(\tau')$, so that $\theta(p)(j(\tau'),\chi^*(\tau'))$ vanishes. Hence, by considering $\theta(p)$ rather than $p$, we may assume that $\tau = \frac{D+\sqrt{-D}}{2}$.
We will write $h(p)$ for the maximum of the absolute logarithmic heights of the coefficients of $p$, as defined in \cite{Bombieri2006}. Since the absolute logarithmic height is Galois invariant, we may carry out the reduction described in the previous paragraph without affecting this height. It follows from an inequality of Liouville \cite{Zannier2009} that, if $\alpha$ is a coefficient occurring in $p$,
\begin{equation}\label{eqn:LiouvilleHeightBound}-[K:\mathbb{Q}]h(p)\leq\log|\alpha|\leq[K:\mathbb{Q}]h(p),\end{equation}
an inequality which we will use on a number of occasions below.
We can write
\[p(j(\tau),\chi^*(\tau)) = p\left(q^{-1}+\widehat{j}, q^{-1}+\widehat{\chi}-\frac{3}{\pi \operatorname{Im}\tau}\left(q^{-1}+\widehat{\xi}\right)\right)\]
and we get a Laurent series in $q$, whose coefficients are polynomials in $3/\operatorname{Im}\tau$. Note that there is no cancellation among the $\frac{3}{\pi\operatorname{Im}\tau}$ terms. So the leading $\frac{3}{\pi\operatorname{Im}\tau}$ term of the leading term in the $q$-series comes directly from the leading $Y$ term of the leading $X$ term of $p(X,Y)$.
The Laurent series therefore takes the form:
\[q^{-\deg p}\left(A\left(\frac{3}{\pi\operatorname{Im}\tau}\right)^k + p_1\left(\frac{3}{\pi\operatorname{Im}\tau}\right)\right)+p_2\left(q^{-1}, \widehat{j}, \widehat{\chi},\widehat{\xi},\frac{3}{\pi\operatorname{Im}\tau}\right),\]
where:
\begin{itemize}
\item $A$ is one of the coefficients of $p$, so in particular \begin{equation}\label{eqn:LowerBoundOnA}|A|\geq e^{-[K:\mathbb{Q}]h(p)}.\end{equation}
\item The degree of $p_1$ is less than $k$.
\item The degree of $p_2$ in its first variable is less than $\deg p$.
\item Using (\ref{eqn:LiouvilleHeightBound}), the absolute values of the coefficients of the $p_i$ are bounded above by a constant which can easily be computed in terms of $h(p)$, $[K:\mathbb{Q}]$ and $\deg{p}$.
\end{itemize}
So we can rewrite $p(j,\chi^*)=0$ as
\[A = -\left(\frac{3}{\pi\operatorname{Im}\tau}\right)^{-k}p_1\left(\frac{3}{\pi\operatorname{Im}\tau}\right)-q^{\deg p}\left(\frac{3}{\pi\operatorname{Im}\tau}\right)^{-k}p_2\left(q^{-1}, \widehat{j}, \widehat{\chi},\widehat{\xi},\frac{3}{\pi\operatorname{Im}\tau}\right).\]
Provided that $\operatorname{Im}\tau \geq 2$, the absolute value of the right hand side is bounded above by
\[\left(\frac{3}{\pi\operatorname{Im}\tau}\right)\cdot c_1(h(p),[K:\mathbb{Q}], \deg p) + q\cdot\left(\frac{\pi\operatorname{Im}\tau}{3}\right)^{k}c_2(h(p),[K:\mathbb{Q}],\deg p, 1193, 4782, 4808),\]
for some easily computed constants $c_1$, $c_2$. We noted in (\ref{eqn:LowerBoundOnA}), though, that $|A|\geq e^{-[K:\mathbb{Q}]h(p)}$. Writing $H(p)=e^{[K:\mathbb{Q}]h(p)}$, these inequalities are inconsistent if we have both:
\[\operatorname{Im}\tau > \frac{6\operatorname{H}(p)c_1}{\pi}\]
and
\[e^{-2\pi\operatorname{Im}\tau}\left(\frac{\pi\operatorname{Im}\tau}{3}\right)^{k}c_2 < \frac{1}{2\operatorname{H}(p)}.\]
Since $x^k<k!e^x$ for all $x$, this final condition holds provided that
\[e^{(1-2\pi)\operatorname{Im}\tau}<\frac{3^k}{2\pi^kk!\operatorname{H}(p)c_2},\]
whence it suffices to have
\[\operatorname{Im}\tau > |1-2\pi|^{-1}\log\left(\frac{2\pi^kk!\operatorname{H}(p)c_2}{3^k}\right).\]
So if $p(j(\tau),\chi^*(\tau))$ vanishes, the two inequalities above cannot both hold. In other words, the discriminant $-D$ of $\tau$ must satisfy
\[D\leq 4\max\left(\frac{6\operatorname{H}(p)c_1}{\pi}, |1-2\pi|^{-1}\log\left(\frac{2\pi^kk!\operatorname{H}(p)c_2}{3^k}\right) \right)^2. \]
\end{proof} The observant reader will note a discrepancy between the statement of Theorem \ref{thrm:EffectivePiAO} and that of the motivating theorem for $j$, \ref{thrm:effectiveAOforJ}. Specifically, note that there are more degrees of freedom present in Theorem \ref{thrm:effectiveAOforJ} than in \ref{thrm:EffectivePiAO}. A more direct analogue might be something like the following:
\begin{conjecture}\label{conj:effectiveChiAO}
Let $V\subseteq\mathbb{C}^2$ be an algebraic curve defined over $\mathbb{Q}$. Then there are effectively computable constants $c_i=c_i(V)$ such that whenever $(\chi^*(\tau_1),\chi^*(\tau_2))\in V$ with quadratic $\tau_i$ and $d_i$ is the absolute value of the discriminant of $\tau_i$, either
\[\max(d_1,d_2)\leq c_1\]
or there is a primitive integer matrix $g$ of determinant at most $c_2$ such that $\tau_2 = g\tau_1$. \end{conjecture}
Unfortunately, the techniques used by K\"uhne/Bilu-Masser-Zannier to prove this for $j$ relied heavily on Baker's effective estimates for linear forms in logarithms of algebraic numbers. The presence of the transcendental number $\pi$ in the expression $\chi^*=\chi-\frac{3}{\pi y}\xi$ (as well as in the $q$-expansions) interferes with this method and prevents us from carrying out the K\"uhne/Bilu-Masser-Zannier approach in full. In proving \ref{thrm:EffectivePiAO} we have essentially carried out the easy half of the K\"uhne/Bilu-Masser-Zannier approach; the part which need not appeal to Baker's theorem. Conjecture \ref{conj:effectiveChiAO}, while not strictly stronger than Theorem \ref{thrm:EffectivePiAO}, certainly seems more difficult to approach in the absence of a suitable Baker-like result.
\section{Collinear Special Points}\label{sect:collinear} A triple of points \[P_1=(x_1, y_1),\qquad P_2=(x_2, y_2),\qquad P_3=(x_3, y_3)\] is collinear if and only if the determinant \[\begin{vmatrix}1&1&1\\x_1&x_2&x_3\\y_1&y_2&y_3\end{vmatrix}\] vanishes.
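For concreteness, the vanishing-determinant criterion can be evaluated directly; the sketch below (an illustration only; the function name is mine) checks it on one collinear and one non-collinear triple.

```python
def collinear(P1, P2, P3):
    # the 3x3 determinant |1 1 1; x1 x2 x3; y1 y2 y3| vanishes iff collinear
    (x1, y1), (x2, y2), (x3, y3) = P1, P2, P3
    det = (x2 * y3 - x3 * y2) - (x1 * y3 - x3 * y1) + (x1 * y2 - x2 * y1)
    return det == 0

assert collinear((0, 0), (1, 1), (2, 2))      # all on the line y = x
assert not collinear((0, 0), (1, 1), (2, 3))  # third point off the line
```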
In order to approach Theorem \ref{thrm:collinearPiPoints}, we use the above to define a variety $V\subseteq\mathbb{C}^6$. Sets of collinear special points will therefore correspond to special points in $V$, and by Theorem \ref{thrm:AOforPi}, $V$ will contain only finitely many such points unless it contains a positive-dimensional subvariety.
This approach is very much the same as that used in \cite{Bilu2017} to prove similar results for $j$, and indeed much of the work will be deferred to that paper rather than replicating the details here. In the presence of the estimates from the previous section, we are able to jump straight to the proof of Theorem \ref{thrm:collinearPiPoints}, with a minimum of additional setup. We take the following notation straight from \cite{Bilu2017}.
\begin{definition}
A function from $\mathbb{H}$ to $\mathbb{C}$ is called a $j$-map if either it takes the form
\[j_g = j\circ g: \tau\mapsto j(g\tau)\]
for some $g\in\operatorname{GL}_2^+(\mathbb{Q})$ or is a constant map $j_{\tau_0}$ which sends everything to some $j(\tau_0)$, with $\tau_0$ quadratic.
Similarly, a function is called a $\chi$-map if it takes the form $\chi_g=\chi^*\circ g$ or is constant and special. Note that in this definition, for a given $j$- or $\chi$-map $j_g$ (resp. $\chi_g$), one can always, without loss of generality, choose $g\in\operatorname{GL}_2^+(\mathbb{Q})$ to take the form
\[\begin{pmatrix}a&b\\0&d\end{pmatrix}\]
with $b<d$. In this case the number $a/d$ is called the \emph{level} of the map and the root of unity $e^{2\pi ib/d}$ is called the \emph{twist}. Note that a nonconstant $j$- or $\chi$-map is determined by its level and twist. The level of a constant $j$- or $\chi$-map is defined to be 0 and the twist is left undefined.
A pair $(F,G)$ consisting of a $j$-map $F$ and $\chi$-map $G$ is \emph{consistent} if $F$ and $G$ have the same level and twist and, if they are constant maps, they come from the same element of $\mathbb{H}$. If $(F,G)$ is a consistent pair we will often refer to its level and/or twist, in the obvious way. \end{definition}
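To fix ideas, here is a small illustrative example of our own (not taken from \cite{Bilu2017}): the matrix
\[g=\begin{pmatrix}2&1\\0&3\end{pmatrix}\]
satisfies $b<d$, and the associated $j$-map $j_g:\tau\mapsto j\!\left(\frac{2\tau+1}{3}\right)$ has level $2/3$ and twist $e^{2\pi i/3}$. The $\chi$-map $\chi_g=\chi^*\circ g$ built from the same $g$ has the same level and twist, so $(j_g,\chi_g)$ is a consistent pair.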
\begin{proof}[Proof of Theorem \ref{thrm:collinearPiPoints}]
Suppose there were infinitely many pairwise distinct triples
\[P_1=(j(\tau_1),\chi^*(\tau_1)),\qquad P_2=(j(\tau_2),\chi^*(\tau_2)),\qquad P_3=(j(\tau_3),\chi^*(\tau_3))\]
with $\tau_i$ quadratic. Then by Theorem \ref{thrm:AOforPi}, the variety $V$ contains a positive-dimensional special subvariety, excluding those defined by equations of the form $x_i = x_j$, $y_i = y_j$, $i\ne j$.
This would imply the existence of three pairwise distinct pairs
\[F_1=(j_1,\chi_1),\qquad F_2=(j_2,\chi_2),\qquad F_3=(j_3,\chi_3)\]
where each $j_i$ is a $j$-map, each $\chi_i$ is a $\chi$-map, each $F_i$ is consistent, at least one of the maps is nonconstant and
\begin{equation}\label{eqn:PiDeterminant}\begin{vmatrix}1&1&1\\j_1&j_2&j_3\\\chi_1&\chi_2&\chi_3\end{vmatrix}=0\end{equation}
identically.
We'll use Lemma 7.2 from \cite{Bilu2017}, which tells us that for any three distinct $j$-maps, not all constant, we can compose with an element of $\operatorname{GL}_2^+(\mathbb{Q})$ to ensure that one of them has strictly higher level than the other two. We can therefore without loss of generality assume that the level of $F_1$ is greater than the levels of the other two.
Note that if $F_2$ and $F_3$ are both constant, then since they are distinct, (\ref{eqn:PiDeterminant}) induces a nontrivial relation between $j_1$ and $\chi_1$, which is impossible: $j$ and $\chi^*$ are algebraically independent. So at most one of $F_2$ and $F_3$ can be constant.
\noindent
\textbf{Case 1: Neither $F_2$ nor $F_3$ is constant.}
Let us write $r_i$ for the level of $F_i$ and $\eta_i$ for its twist.
Since, as proven in Proposition \ref{propn:weakerChiBounds}, the tails of all the relevant $q$-expansions are bounded within $\mathbb{F}$, the only way (\ref{eqn:PiDeterminant}) can hold identically is if
\begin{equation}\label{eqn:PiDeterminantTwo}\begin{vmatrix}1&1&1\\\eta_1q^{-r_1}&\eta_2q^{-r_2}&\eta_3q^{-r_3}\\\eta_1q^{-r_1}-\frac{3}{\pi r_1 y}\eta_1q^{-r_1}&\eta_2q^{-r_2}-\frac{3}{\pi r_2 y}\eta_2q^{-r_2}&\eta_3q^{-r_3}-\frac{3}{\pi r_3 y}\eta_3q^{-r_3}\end{vmatrix}=0,\end{equation}
since the above expression accounts for all of the dominant terms.
We know already that $r_1$ is greater than $r_2$ and $r_3$. So if $r_2>r_3$, then as $\operatorname{Im}\tau$ grows, the dominant term of (\ref{eqn:PiDeterminantTwo}) is
\[\frac{3}{\pi y}\eta_1\eta_2\left(r_1^{-1}-r_2^{-1}\right)q^{-r_1-r_2}.\]
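To see this explicitly (a routine verification which we include for convenience), expand (\ref{eqn:PiDeterminantTwo}) along its first row. Since $r_1>r_2>r_3$ in this case, the only product of order $q^{-r_1-r_2}$ comes from the $(1,3)$ cofactor:
\[\eta_1q^{-r_1}\cdot\eta_2q^{-r_2}\left(1-\frac{3}{\pi r_2 y}\right)-\eta_2q^{-r_2}\cdot\eta_1q^{-r_1}\left(1-\frac{3}{\pi r_1 y}\right)=\frac{3}{\pi y}\eta_1\eta_2\left(r_1^{-1}-r_2^{-1}\right)q^{-r_1-r_2},\]
while the $(1,1)$ and $(1,2)$ cofactors only contribute the strictly smaller orders $q^{-r_2-r_3}$ and $q^{-r_1-r_3}$.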
Since $r_1>r_2$, this term is nonzero and so the determinant above cannot vanish identically. Contradiction. Symmetrically we cannot have $r_3>r_2$.
We therefore have $r_2=r_3$, in which case the dominant term of (\ref{eqn:PiDeterminantTwo}) is
\[\frac{3}{\pi y}\eta_1(\eta_2-\eta_3)\left(r_1^{-1}-r_2^{-1}\right)q^{-r_1-r_2},\]
which can only vanish if $\eta_2=\eta_3$, since $r_1>r_2$. Since the levels and twists of $F_2$ and $F_3$ now match, we have $F_2=F_3$, which we assumed was not the case. Contradiction.
\noindent
\textbf{Case 2: Without loss of generality, $F_3$ is constant.}
Write $F_3=(a,b)$. Exactly as above, since the tails of the relevant $q$-expansions are bounded, (\ref{eqn:PiDeterminant}) can only vanish identically if
\[\begin{vmatrix}1&1&1\\\eta_1q^{-r_1}&\eta_2q^{-r_2}&a\\\eta_1q^{-r_1}-\frac{3}{\pi r_1 y}\eta_1q^{-r_1}&\eta_2q^{-r_2}-\frac{3}{\pi r_2 y}\eta_2q^{-r_2}&b\end{vmatrix}=0,\]
since this contains all the potentially dominant terms. In fact the dominant term here is
\[\frac{3}{\pi y}\eta_1\eta_2\left(r_1^{-1}-r_2^{-1}\right)q^{-r_1-r_2}\]
which can't vanish since $r_1>r_2$. \end{proof}
\subsection{Collinearity for $\chi^*$ Alone} In this final part, I will briefly discuss my initial attempts to prove Conjecture \ref{conj:collinearChiPoints}. I am convinced that the conjecture should be attainable with relatively little work, no new ideas being required, but I have not had the time to work this all the way through. The approach I have in mind is exactly the same as that used in \cite{Bilu2017}; I will describe how that goes in this context and where there are remaining gaps.
Suppose we had infinitely many collinear triples \[(\chi^*(\tau_1),\chi^*(\sigma_1)), (\chi^*(\tau_2),\chi^*(\sigma_2)),\text{ and } (\chi^*(\tau_3),\chi^*(\sigma_3)),\] with $\tau_i$, $\sigma_j$ quadratic and the 3 pairs being distinct. We exclude the obvious cases where the line in question is the diagonal $X=Y$ or is a horizontal or vertical line.
In light of Theorem \ref{thrm:EffectivePiAO} (which easily implies a version for $\chi^*$ alone by projection of coordinates), we get a collection of six $\chi$-maps $f_1$, $f_2$, $f_3$, $g_1$, $g_2$, $g_3$, not all constant, with the property that \begin{equation}\label{eqn:ChiDeterminant}\begin{vmatrix}1&1&1\\f_1&f_2&f_3\\g_1&g_2&g_3\end{vmatrix}=0,\end{equation} and moreover none of the following hold: \begin{enumerate}
\item $f_1=f_2=f_3$, (to exclude vertical lines)
\item $g_1=g_2=g_3$, (to exclude horizontal lines)
\item $f_i=f_j$ and $g_i=g_j$ for any $i\ne j$ (the pairs $(f_i, g_i)$ are distinct), nor
\item $f_i=g_i$ for all $i$ (to exclude the diagonal). \end{enumerate} Following \cite{Bilu2017}, the idea is to show that if 1, 2 and 3 all fail to hold then in fact 4 must hold, making this set-up impossible.
Exactly as in \cite{Bilu2017}, we can write $m_i$, $n_i$ for the level of $f_i$, $g_i$ respectively, and $\epsilon_i$, $\eta_i$ for their twists (where they are nonconstant). Under the assumption that 1, 2 and 4 all fail to hold, the work lies in proving that $m_i=n_i$ and $\epsilon_i=\eta_i$ for all $i$. This involves conditioning on various inequalities between the $m_i$ and $n_i$ and, for each of the various cases, performing some $q$-expansion calculations.
The calculation differs depending on which, if any, of the $\chi$-maps are constant. As an example, we will demonstrate the case where only $f_3$ and $g_3$ are constant. So (\ref{eqn:ChiDeterminant}) becomes \[\begin{vmatrix}1&1&1\\(\epsilon_1q^{-m_1} - \dots) - \frac{3}{\pi m_1y}(\epsilon_1q^{-m_1} - \dots)&(\epsilon_2q^{-m_2} - \dots) - \frac{3}{\pi m_2y}(\epsilon_2q^{-m_2} - \dots)&a\\(\eta_1q^{-n_1} - \dots) - \frac{3}{\pi n_1y}(\eta_1q^{-n_1} - \dots)&(\eta_2q^{-n_2} - \dots) - \frac{3}{\pi n_2y}(\eta_2q^{-n_2} - \dots)&b\end{vmatrix}=0,\] with $a$ and $b$ being $\chi^*$-special points.
Note that by comparison of growth rates, in order for the above to hold, the $q$-expansions corresponding to the holomorphic part $\chi$ and the nonholomorphic part $\frac{3}{\pi y}\xi$ must vanish separately, that is: \begin{equation}\label{eqn:ChiqExpMatrix}\begin{vmatrix}1&1&1\\\epsilon_1q^{-m_1} - 264 - 135602\epsilon_1q^{m_1}&\epsilon_2q^{-m_2} - 264 - 135602\epsilon_2q^{m_2}&a\\\eta_1q^{-n_1} - 264 - 135602\eta_1q^{n_1}&\eta_2q^{-n_2} - 264 - 135602\eta_2q^{n_2}&b\end{vmatrix}=0\end{equation} and \begin{equation}\label{eqn:XiqExpMatrix}\begin{vmatrix}1&1&1\\-m_1^{-1}(\epsilon_1q^{-m_1} - 240 - 8511777\epsilon_1q^{m_1})&-m_2^{-1}(\epsilon_2q^{-m_2} - 240 - 8511777\epsilon_2q^{m_2})&a\\-n_1^{-1}(\eta_1q^{-n_1} - 240 - 8511777\eta_1q^{n_1})&-n_2^{-1}(\eta_2q^{-n_2} - 240 - 8511777\eta_2q^{n_2})&b\end{vmatrix}=0.\end{equation} Here we're calculating the first few terms of the $q$-expansions of $\chi$ and $\xi$ using the known $q$-expansions of the Eisenstein series $E_2$, $E_4$ and $E_6$ from Section \ref{sect:intro}.
Using just (\ref{eqn:ChiqExpMatrix}), and its equivalents for the cases where other $\chi$-maps are constant, we can replicate much of the calculation from \cite{Bilu2017}. The primary relevant sections of \cite{Bilu2017} are Sections 8 onwards, where the case analysis is carried out. In \cite{Bilu2017}, the analogue of (\ref{eqn:ChiqExpMatrix}) simply has the leading terms of the $q$-expansion of $j$ ($q^{-1} + 744 + 196884 q$) rather than those for $\chi$. There is no analogue of equation (\ref{eqn:XiqExpMatrix}), since $j$ has no nonholomorphic part, so for now we ignore (\ref{eqn:XiqExpMatrix}).
In light of equation (\ref{eqn:ChiqExpMatrix}), then, much of the calculation in \cite[Sections 8 onwards]{Bilu2017} goes through without a hitch, replacing occurrences of the numbers $744$ and $196884$ with $-264$ and $-135602$ respectively. On a number of occasions, however, we need to appeal to appropriate analogues of lemmas from earlier in \cite{Bilu2017}.
The lemmas from \cite{Bilu2017} in question are: 4.1, 4.2, 5.1 to 5.9, and 7.3.
Lemmas 4.1 and 4.2 just concern roots of unity, so still hold here. Lemma 5.1 is just the $j$-analogue of this paper's Lemma \ref{lma:QuadraticSizeComparison} and using the estimates from Section \ref{sect:qExp}, a suitable analogue of Lemma 5.3 can be attained. An analogue of Lemma 5.4 is easy in light of this paper's Corollary \ref{cor:FieldEquality}.
Lemmas 5.6 and 5.7 claim that certain numbers like $744\pm 196884$ and $744\pm 196884\theta$ (for $\theta$ a root of unity) are not singular moduli. The equivalent statements for $\chi^*$-special points (with $744$ and $196884$ replaced by $-264$ and $-135602$) are easily proven in light of the following table of integral $\chi^*$-special points, together with Corollary \ref{cor:FieldEquality}, which implies among other things that these are the only integral $\chi^*$-special points. \begin{center}
\begin{tabular}{c|c c c c c c c c c}
Discriminant & -3 & -4 & -7 & -8 & -11 & -12 & -16 & -19 & -27 \\
$\chi^*$ & 0 & 0 & -1215 & 2240 & -14336 & 23760 & 149688 & -497664 & -7772160 \\
\hline
\end{tabular}
\begin{tabular}{c|c c c c}
Discriminant &-28 &-43 &-67&-163\\
$\chi^*$ &10596015&-627056640 &-112852776960 &-223263987730882560
\end{tabular} \end{center} This table is derived easily from Masser's list of special points calculated in \cite{Masser1975}.
Lemmas 5.8 and 5.9 give restrictions on when a singular modulus can be an integral linear combination of certain roots of unity. A suitable analogue for $\chi^*$-special points can be obtained in light of the estimates from Section \ref{sect:qExp}, using the same strategies as employed in \cite{Bilu2017} to prove Lemmas 5.8 and 5.9.
A $\chi^*$-analogue of Lemma 7.3, which gives restrictions on when two $j$-maps $f$ and $g$ can satisfy $af+bg+c=0$, is very easy in this setting. If any non-obvious such linear relation exists among $\chi$-maps, then we can combine it with the known algebraic relations (modular polynomials) between $j$- and $\chi$-maps, counting conditions to show that some consistent $(j,\chi)$-map pair $(f,g)$ is a solution to some bivariate polynomial, which is impossible since $j$ and $\chi^*$ are algebraically independent.
So the gap to be filled in consists of finding analogues of Lemmas 5.2 and 5.5 of \cite{Bilu2017}. In turn these rely on getting an analogue of Theorem 1.2 of \cite{Allombert2015}, which is where we finally fall flat. This is a rather significant gap in the approach. The theorem in question says that pairs of singular moduli can only be linearly dependent over $\mathbb{Q}$ if they have degree at most 2. This is the culmination of a significant amount of work from \cite{Allombert2015}, and the author has not yet had the time to work through whether a suitable analogue holds for $\chi^*$. So here we must stop.
Before leaving this entirely, however, we will make the obvious comment: we earlier decided to ignore the presence of (\ref{eqn:XiqExpMatrix}), which does not appear in \cite{Bilu2017}. In doing this, of course, we potentially throw away a significant amount of valuable information which could be used to our benefit. A viable approach to bridging the gap in proving Conjecture \ref{conj:collinearChiPoints}, therefore, might be to make use of equation (\ref{eqn:XiqExpMatrix}) to circumvent the need to appeal to the missing Lemmas 5.2 and 5.5. Once again, though, the author has not been able to work this through properly, so I will leave it here, in the hope that I have achieved my goal of laying out a viable approach to proving Conjecture \ref{conj:collinearChiPoints} and describing what remains to be done.
\end{document} |
\begin{document}
\title{Multiple solutions for a class of quasilinear problems involving variable exponents \thanks{Partially supported by INCT-MAT and PROCAD}} \author{Claudianor O. Alves\thanks{C.O. Alves was partially supported by CNPq/Brazil 303080/2009-4, e-mail:coalves@dme.ufcg.edu.br} \,\,\, and \,\,\, Jos\'{e} L. P. Barreiro\thanks{e-mail:lindomberg@dme.ufcg.edu.br}\\Universidade Federal de Campina Grande\\Unidade Acad\^emica de Matem\'atica \\CEP:58429-900, Campina Grande - PB, Brazil.} \date{} \maketitle
\begin{abstract} In this paper we prove the multiplicity of solutions for a class of quasilinear problems in $ \mathbb{R}^{N} $ involving variable exponents. The main tools used in the proof are direct methods, Ekeland's variational principle and some properties related to the Nehari manifold. \end{abstract}
{\scriptsize \textbf{2000 Mathematics Subject Classification:} 35A15, 35B38, 35D30, 35J92.}
{\scriptsize \textbf{Keywords:} Existence, Multiplicity, Variable Exponents, Direct methods}
\section{Introduction}
In this paper, we consider the existence and multiplicity of solutions for the following class of quasilinear problems involving variable exponents \begin{align} \left\{ \begin{array} [c]{rcl} -\Delta_{p(x)} u + \vert u \vert^{p(x) - 2} u & = & \lambda g(k^{-1}x) \vert u \vert^{q(x) - 2}u + f(k^{-1}x) \vert u \vert^{r(x) - 2}u , \quad \mbox{in} \,\, \mathbb{R}^{N}\\ \mbox{}\\ u \in W^{1, p(x)}(\mathbb{R}^{N}) & & \end{array} \right. \tag{$ P_{\lambda, k} $}\label{Plkm} \end{align}
where $\lambda $ and $k$ are nonnegative parameters with $k \in \mathbb{N}$, and the operator $\Delta_{p(x)}u = \mathrm{div}\left( |\nabla u|^{p(x) - 2}\nabla u \right) $ is called the $p(x)$-Laplacian, a natural extension of the $p$-Laplace operator, where $p$ is a positive constant. We assume that $p,q, r : \mathbb{R}^{N} \to \mathbb{R} $ are positive Lipschitz continuous functions, which are $ \mathbb{Z}^{N}$-periodic and verify \begin{align} 1 < p_{-} \leq p(x) \leq p_{+} < q_{-} \leq q(x) \leq r(x) \ll p^{*}(x) \mbox{ a.e. in } \mathbb{R}^{N}, \tag{$p_{1}$} \label{p1} \end{align} where $p_{+} = \mbox{ess}\sup_{x \in \mathbb{R}^{N}}{p(x)}$, $p_{-} = \mbox{ess}\inf_{x \in \mathbb{R}^{N}}{p(x)}$ and $$ p^{*}(x) = \left\{ \begin{array}{l} Np(x) / (N - p(x)) ~~ ~\mbox{if} ~~ p(x) < N \\ \mbox{}\\ +\infty ~~ \mbox{if} ~~ p(x) \geq N . \end{array} \right. $$ Moreover, we say that a measurable function $h: \mathbb{R}^{N} \to \mathbb{R}$ is $ \mathbb{Z}^{N}$-periodic if $$ h(x + z) = h(x) \,\,\, \forall x \in \mathbb{R}^{N} \,\,\, \mbox{and} \,\,\, \forall z \in \mathbb{Z}^{N}, $$ and the notation $u \ll v$ means that $\displaystyle \inf_{x \in \mathbb{R}^{N}} (v(x) - u(x)) > 0$.
Related to functions $f$ and $g $, we suppose that they are nonnegative continuous functions verifying the following conditions:
\begin{enumerate}[label={($H\arabic*$)}] \setcounter{enumi}{0} \item\label{H2} \begin{align*} \lim_{\vert x \vert\rightarrow\infty} {g(k^{-1}x)} = 0; \end{align*}
\item\label{H3} There exist $\ell$ points $a_{1}, a_{2}, \cdots,a_{\ell} $ in $\mathbb{Z}^{N} $ with $a_1=0$ such that \[ 1 = f(a_{i}) = \max_{\mathbb{R}^{N}}f(x), \text{ for }1 \leq i \leq\ell. \] Moreover, $0 < f_{\infty} < f(x) $ for any $x \in\mathbb{R}^{N} $ and \[ \lim_{\vert x \vert\rightarrow\infty} f(x) = f_{\infty} . \]
\end{enumerate}
Problems with variable exponents appear in many applications; the reader can find in R\r{u}\v{z}i\v{c}ka \cite{Ru} and Krist\'aly, Radulescu \& Varga \cite{KRV} several models in mathematical physics where this class of problems appears. In recent years, such problems have attracted increasing attention; we would like to mention \cite{alves08, AlvesFerreira1, AlvesSouto,chabrowki, FanHan, MR}, and also the survey papers \cite{AS,DHN,S} for the advances and references in this area.
The problem $(P_{\lambda,k})$ has been considered in the literature for the case where the exponents are constants, see for example, Adachi \& Tanaka \cite{AT}, Cao \& Noussair \cite{CN}, Cao \& Zhou \cite{Cao}, Hirano \cite{H1}, Hirano \& Shioji \cite{HS}, Hu \& Tang \cite{HuTang}, Jeanjean \cite{jeanjean}, Lin \cite{Lin12}, Hsu, Lin \& Hu \cite{Hsu1}, Tarantello \cite{T}, Wu \cite{Wu1, Wu2} and their references.
In Cao \& Noussair \cite{CN}, the authors have studied the existence and multiplicity of positive and nodal solutions for the following problem $$ \left\{ \begin{array}{l} -\Delta u + u = f(\epsilon x) \vert u \vert^{r - 2}u \,\,\, \mbox{in} \,\,\, \mathbb{R}^{N}\\ \mbox{}\\ u \in H^{1,2}(\mathbb{R}^{N}), \end{array} \right. \eqno{(P_{1})} $$ where $\epsilon$ is a positive real parameter, $r \in (2,2^{*})$ and $f$ verifies condition $(H2)$. By using variational methods, the authors showed the existence of at least $\ell$ positive solutions and $\ell$ nodal solutions if $\epsilon$ is small enough. Later, Wu \cite{Wu1} considered the perturbed problem $$ \left\{ \begin{array}{l}
-\Delta u + u = f(\epsilon x) \vert u \vert^{r - 2}u + \lambda g(\epsilon x)|u|^{q-2}u \,\,\, \mbox{in} \,\,\, \mathbb{R}^{N}\\ \mbox{}\\ u \in H^{1,2}(\mathbb{R}^{N}), \end{array} \right. \eqno{(P_{2})} $$ where $\lambda$ is a positive parameter and $q \in (0,1)$. In \cite{Wu1}, the author showed the existence of at least $\ell$ positive solutions for $(P_2)$ when $\epsilon$ and $\lambda$ are small enough.
In Hsu, Lin \& Hu \cite{Hsu1}, the authors have considered the following class of quasilinear problems $$ \left\{ \begin{array}{l}
-\Delta_p u + |u|^{p-2}u = f(\epsilon x) \vert u \vert^{r - 2}u + \lambda g(\epsilon x) \,\,\, \mbox{in} \,\,\, \mathbb{R}^{N}\\ \mbox{}\\ u \in W^{1,p}(\mathbb{R}^{N}) \end{array} \right. \eqno{(P_{3})} $$ with $N \geq 3$ and $2 \leq p < N$. In that paper, the authors have proved the same type of results found in \cite{CN} and \cite{Wu1}.
Motivated by the results proved in \cite{CN}, \cite{Hsu1} and \cite{Wu1}, we intend in the present paper to prove the existence of multiple solutions for problem (\ref{Plkm}), by using the same type of approach explored in those papers. However, since we are working with variable exponents, some estimates that hold in the constant case are not immediate in the variable case, and so a careful analysis is necessary to obtain them. Here, for example, we were able to prove our results by assuming that some exponents are periodic and $k \in \mathbb{N}$.
Our main result is the following
\begin{thm}\label{T1} Assume that (\ref{p1}) and \ref{H2}--\ref{H3} are satisfied. Then there are $ \Lambda^{*} >0 $ and $k^* \in \mathbb{N}$ such that problem (\ref{Plkm}) admits at least $ \ell $ solutions for $ 0 \leq \lambda < \Lambda^{*} $ and $k \geq k^{*}$. \end{thm}
This paper is organized in the following way: In Section~\ref{expoents_variaveis}, we collect some preliminaries on variable exponent spaces that will be used throughout the paper, which can be found in \cite{AlvesFerreira1}, \cite{AlvesFerreira2}, \cite{chabrowki}, \cite{peter} and \cite{Fan2001a}. In Section 3, we show some technical results, and finally in Section 4 we prove Theorem \ref{T1}.
\noindent\textbf{Notation:} The following notation will be used in the present work: \begin{itemize}
\item $ C $ and $ c_{i} $ denote generic positive constants, which may vary from line to line.
\item We denote by $\int f $ the integral $\int_{\mathbb{R}^{N}}fdx$, for any measurable function $f$.
\item $ B_{R}(z) $ denotes the open ball with center at $ z $ and radius $ R $ in $\mathbb{R}^{N}$.
\item If $h$ is a bounded measurable function, we denote by $h_+$ and $h_-$ the following real numbers
$$
h_+=\mbox{ess}\sup_{x \in \mathbb{R}^{N}}{h(x)} ~~~~~~ \mbox{and} ~~~~~~ h_-=\mbox{ess}\inf_{x \in \mathbb{R}^{N}}{h(x)}.
$$
Moreover, we also denote by $h'(x)$ the conjugate exponent of $h(x)$ given by $h'(x)=\frac{h(x)}{h(x)-1}$.
\end{itemize}
\section{Preliminaries on Lebesgue and Sobolev spaces with variable exponent in $\mathbb{R}^{N}$}
\label{expoents_variaveis} In this section, we recall the definitions and some results involving the spaces $L^{h(x)}(\mathbb{R}^{N}) $ and $W^{1,h(x)} (\mathbb{R}^{N}) $. We refer to \cite{peter,Fan2001a, Fan2001b, kovacik91} for the fundamental properties of these spaces.
Hereafter, let us denote by $L_{+}^{\infty}(\mathbb{R}^{N})$ the set \[ L_{+}^{\infty}(\mathbb{R}^{N}) = \left\{ u \in L^{\infty}(\mathbb{R}^{N}) : \mbox{ess}\inf_{x \in\mathbb{R}^{N}}u \geq1\right\} \] and we will assume that $h \in L_{+}^{\infty}(\mathbb{R}^{N})$.
The variable exponent Lebesgue space $L^{h(x)}(\mathbb{R}^{N}) $ is defined by \[ L^{h(x)}(\mathbb{R}^{N}) = \left\{ u :\mathbb{R}^{N} \to \mathbb{R} \text{ is measurable }\, : \, \, \,\, \int \vert u(x)\vert^{h(x)} < + \infty\right\}, \] which is endowed with the norm \[ \Vert u \Vert_{h(x)} = \inf\left\{ t > 0 : \int \left\vert \frac{u(x)}{t}\right\vert ^{h(x)} \leq1\right\} . \] On space $L^{h(x)}(\mathbb{R}^{N})$, we consider the \textit{modular function} $\rho: L^{h(x)}(\mathbb{R}^{N}) \to\mathbb{R}$ given by \[
\rho(u) = \int |u(x)|^{h(x)} . \]
\begin{prop} \label{modular} Let $u \in L^{h(x)}(\mathbb{R}^{N}) $ and $\{u_{n}\}_{n \in\mathbb{N}} \subset L^{h(x)}(\mathbb{R}^{N}) $. Then,
\begin{enumerate} \item If $u \neq0 $, $\Vert u \Vert_{h(x)} = a \Leftrightarrow\rho\left( \frac{u}{a}\right) = 1$.
\item $\Vert u \Vert_{h(x)} < 1 \quad(=1; > 1) \Leftrightarrow \rho(u) < 1 (= 1; > 1) $;
\item $\Vert u \Vert_{h(x)} > 1 \Rightarrow\Vert u \Vert_{h(x)}^{h_{-}} \leq\rho(u) \leq\Vert u \Vert_{h(x)}^{h_{+}} $.
\item $\Vert u \Vert_{h(x)} < 1 \Rightarrow\Vert u \Vert_{h(x)}^{h_{+}} \leq\rho(u) \leq\Vert u \Vert_{h(x)}^{h_{-}} $.
\item $\displaystyle \lim_{n \to+\infty} \Vert u_{n} \Vert_{h(x)} = 0 \Leftrightarrow\lim_{n \to\infty}\rho(u_{n}) = 0 .$
\item $\displaystyle \lim_{n \to+\infty} \Vert u_{n} \Vert_{h(x)} = + \infty\Leftrightarrow\lim_{n \to\infty}\rho(u_{n}) = + \infty$. \end{enumerate} \end{prop}
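As a sanity check (a remark of ours, not part of the proposition): when the exponent is constant, say $h(x) \equiv s$ with $s>1$, the norm $\Vert \cdot \Vert_{h(x)}$ reduces to the usual Lebesgue norm, since
\[\inf\left\{ t > 0 : \int \left\vert \frac{u(x)}{t}\right\vert^{s} \leq 1\right\} = \inf\left\{ t > 0 : \Vert u \Vert_{L^{s}}^{s} \leq t^{s}\right\} = \Vert u \Vert_{L^{s}(\mathbb{R}^{N})},\]
and items 3 and 4 of Proposition \ref{modular} collapse to the identity $\rho(u) = \Vert u \Vert_{h(x)}^{s}$.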
We have the following H\"{o}lder inequality for Lebesgue spaces with variable exponents.
\begin{prop} [H\"{o}lder-type Inequality]Let $u \in L^{h(x)}(\mathbb{R}^{N})$ and $v \in L^{h^{\prime }(x)}(\mathbb{R}^{N}) $. Then, $uv \in L^{1}(\mathbb{R}^{N})$ and \begin{align*} \int \vert u(x)v(x)\vert \leq\left( \frac{1}{h_{-}} + \frac {1}{h^{\prime}_{-}} \right) \Vert u \Vert_{h(x)} \Vert v \Vert_{h^{\prime}(x)}. \end{align*} \end{prop}
The next three results are important tools to study the properties of some energy functionals, and their proofs can be found in \cite{AlvesFerreira2}.
\begin{lem} [Brezis-Lieb's lemma, first version] \label{Brezis-Lieb-1}
Let $ ( \eta_n ) \subset L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) $ with $ m \in \mathbb{N} $ verifying
\begin{enumerate}
\item [\emph{(i)}] $ \eta_n(x) \to \eta(x), \ \text{a.e. in} \ \mathbb R^N $;
\item [\emph{(ii)}] $ \displaystyle \sup_{n \in \mathbb N } | \eta_n |_{ L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) } < \infty $. \\
\end{enumerate}
Then, $ \eta \in L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) $ and
\begin{equation*}
\int \left( \left| \eta_n \right|^{ h(x) } - \left| \eta_n - \eta \right|^{ h(x) } - \left| \eta \right|^{ h(x) } \right) \,= o_n(1).
\end{equation*} \end{lem}
\begin{lem} [Brezis-Lieb's lemma, second version] \label{Brezis-Lieb-2}
Let $( \eta_n ) \subset L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) $ with $ m \in \mathbb{N} $ verifying
\begin{enumerate}
\item [\emph{(i)}] $ \eta_n(x) \to \eta(x), \ \text{a.e. in} \ \mathbb R^N $;
\item [\emph{(ii)}] $ \displaystyle \sup_{n \in \mathbb N } | \eta_n |_{ L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) } < \infty $. \\
\end{enumerate}
Then
\begin{equation*}
\eta_n \rightharpoonup \eta \ \text{in} \ L^{ h(x) } ( \mathbb R^N, \mathbb R^m ).
\end{equation*} \end{lem}
The next lemma is a Brezis-Lieb type result.
\begin{lem} [Brezis-Lieb lemma, third version] \label{Brezis-Lieb-3}
Let $ ( \eta_n ) \subset L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) $ with $ m \in \mathbb{N} $ such that
\begin{enumerate}
\item [\emph{(i)}] $ \eta_n(x) \to \eta(x), \ \text{a.e. in} \ \mathbb R^N $;
\item [\emph{(ii)}] $ \displaystyle \sup_{n \in \mathbb N } | \eta_n |_{ L^{ h(x) } ( \mathbb R^N, \mathbb R^m ) } < \infty $. \\
\end{enumerate}
Then
\begin{equation*}
\int \left| \left| \eta_n \right|^{ h(x)-2 } \eta_n - \left| \eta_n - \eta \right|^{ h(x)-2 } \left( \eta_n - \eta \right) - \left| \eta \right|^{ h(x)-2 } \eta \right|^{ h'(x) } \, = o_n(1).
\end{equation*}
\end{lem}
The variable exponent Sobolev space $W^{1,h(x)}(\mathbb{R}^{N}) $ is defined by \begin{align*}
W^{1,h(x)}(\mathbb{R}^{N}) = \left\{ u \in W^{1,1}_{loc}(\mathbb{R}^{N}) : u \in L^{h(x)}(\mathbb{R}^{N}) \quad\text{ and } \quad| \nabla u | \in L^{h(x)}(\mathbb{R}^{N}) \right\}. \end{align*} The corresponding norm for this space is \begin{align*} \Vert u \Vert_{W^{1,h(x)}(\mathbb{R}^{N})} = \Vert u \Vert_{h(x)} + \Vert\nabla u \Vert_{h(x)}. \end{align*} The spaces $L^{h(x)}(\mathbb{R}^{N})$ and $W^{1,h(x)}(\mathbb{R}^{N})$ are separable and reflexive Banach spaces when $h_{-} >1$.
On space $W^{1,h(x)}(\mathbb{R}^{N})$, we consider the {\it modular function} \linebreak $\rho_{1}: W^{1,h(x)}(\mathbb{R}^{N}) \to \mathbb{R}$ given by \[
\rho_{1}(u) = \int \left( |\nabla u(x)| ^{h(x)} + |u(x)| ^{h(x)}\right). \] If we define \begin{align*}
\Vert u \Vert = \inf\left\{ t > 0 : \int \frac{(|\nabla u |^{h(x)}+| u |^{h(x)})}{t^{h(x)} } \leq 1\right\}, \end{align*} then $ \Vert \cdot \Vert_{W^{1,h(x)}(\mathbb{R}^{N})} $ and $ \Vert \cdot \Vert $ are equivalent norms on $ W^{1,h(x)}(\mathbb{R}^{N}) $. \begin{prop}\label{modular2}
Let $ u \in W^{1,h(x)}(\mathbb{R}^{N}) $ and $ \{u_{n}\} \subset W^{1,h(x)}(\mathbb{R}^{N}) $. Then, the conclusions of Proposition \ref{modular} hold with $\|\cdot\|_{h(x)}$ and $\rho$ replaced by $\|\cdot\|$ and $\rho_{1}$, respectively.
\end{prop}
\section{Technical lemmas} Associated with problem~(\ref{Plkm}), we have the energy functional \linebreak$J_{\lambda, k}:W^{1,p(x)}(\mathbb{R}^{N}) \to \mathbb{R}$ defined by \[
J_{\lambda, k}(u) =\int\frac{1}{p(x)}\left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) -\lambda\int \frac{g(k^{-1}x)}
{q(x)}|u|^{q(x)} - \int \frac{f(k^{-1}x)
}{r(x)}|u|^{r(x)} . \] It is easy to see that $J_{\lambda, k}\in C^{1}\left( W^{1,p(x)} (\mathbb{R}^{N}),\mathbb{R}\right) $ with \begin{align*}
J'_{\lambda, k}(u) v & =\int \left( |\nabla u|^{p(x)-2}\nabla u\nabla v+|u|^{p(x)-2}uv\right) -\lambda\int g(k^{-1}x)|u|^{q(x)-2}uv\\
& \quad -\int f(k^{-1} x) |u|^{r(x)-2}uv , \end{align*} for any $u,v\in W^{1,p(x)}(\mathbb{R}^{N})$. Thus, the critical points of $ J_{\lambda, k} $ are (weak) solutions of (\ref{Plkm}). Since the functional $J_{\lambda, k}$ is not bounded from below on $ W^{1,p(x)}(\mathbb{R}^{N})$, we will work on the \emph{Nehari manifold} $ \mathcal{M}_{\lambda, k}$ associated with the functional $J_{\lambda, k}$, given by $$ \mathcal{M}_{\lambda, k} = \left\{ u \in W^{1,p(x)}(\mathbb{R}^{N})\setminus\{ 0\}: J'_{\lambda, k}(u) u = 0 \right\} $$ and with the level \[ c_{\lambda, k} = \inf_{u \in \mathcal{M}_{\lambda, k}} J_{\lambda, k}(u). \]
Using well-known arguments found in Willem \cite{willem}, it follows that $c_{\lambda, k} $ is the mountain pass level of the functional $J_{\lambda, k}$.
For $ f \equiv 1 $ and $ \lambda = 0 $, we consider the problem \begin{align} \left\{ \begin{array} [c]{rcl} -\Delta_{p(x)} u + \vert u \vert^{p(x) - 2} u & = & \vert u \vert^{r(x) - 2}u , \quad \mbox{in} \,\, \mathbb{R}^{N}\\ \mbox{}\\ u \in W^{1, p(x)}(\mathbb{R}^{N}). & & \end{array} \right. \tag{$ P_{\infty} $}\label{Poo} \end{align} Associated with the problem~(\ref{Poo}), we have the energy functional \linebreak $ J_{\infty}:W^{1, p(x)}(\mathbb{R}^{N}) \to \mathbb{R}$ given by \begin{align*}
J_{\infty}(u) =\int \frac{1}{p(x)}\left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \int \frac{1
}{r(x)}|u|^{r(x)}, \end{align*} the level $$ c_{\infty} = \inf_{u \in \mathcal{M}_{\infty}} J_{\infty}(u), $$ and the Nehari manifold $$ \mathcal{M}_{\infty} = \left\{ u \in W^{1,p(x)}(\mathbb{R}^{N})\setminus\{ 0\}: J'_{\infty}(u)u = 0 \right\}. $$
For $ f \equiv f_{\infty} $ and $ \lambda = 0 $, we fix the problem \begin{align} \left\{ \begin{array} [c]{rcl}
-\Delta_{p(x)} u + |u|^{p(x) - 2} u & = & f_{\infty} |u|^{r(x) - 2}u, \quad \mbox{in} \,\, \mathbb{R}^{N}\\ \mbox{}\\ u \in W^{1, p(x)}(\mathbb{R}^{N}), & & \end{array} \right. \tag{$ P_{f_{\infty}} $}\label{Pf00} \end{align} and as above, we denote by $ J_{f_\infty}, c_{f_{\infty}}$ and $\mathcal{M}_{f_{\infty}}$ the energy functional, the mountain pass level and the Nehari manifold associated with $ (P_{f_{\infty}}) $, respectively.
The following result concerns the behavior of $J_{\lambda, k} $ on $ \mathcal{M}_{\lambda, k} $.
\begin{lem} The functional $ J_{\lambda, k} $ is bounded from below on $ \mathcal{M}_{\lambda, k} $. Moreover, $ J_{\lambda, k} $ is coercive on $ \mathcal{M}_{\lambda, k} $. \end{lem} \noindent \textbf{Proof.} For each $ u \in \mathcal{M}_{\lambda, k} $, we have $ J'_{\lambda, k}(u) u = 0 $. Hence, \begin{align*}
\lambda \int g(k^{-1}x) |u|^{q(x)} = \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \int f(k^{-1} x) |u|^{r(x)}. \end{align*} Note that \begin{align*}
J_{\lambda, k}(u) &\geq \frac{1}{p_{+}} \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \frac{\lambda}{q_{-}} \int g(k^{-1}x) |u|^{q(x)} - \frac{1}{{r_{-}}}\int f(k^{-1} x) |u|^{r(x)} \\
&= \frac{1}{p_{+}} \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \frac{1}{{r_{-}}}\int f(k^{-1} x) |u|^{r(x)}\\
& \quad- \frac{1}{q_{-}} \left( \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \int f(k^{-1} x) |u|^{r(x)} \right). \end{align*} Since $ p_{+} < q_{-} \leq q(x) \leq r(x) \ll p^{*}(x) $, $$
J_{\lambda, k}(u) \geq \left( \frac{1}{p_{+}} - \frac{1}{q_{-}}\right) \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) + \left( \frac{1}{q_{-}} - \frac{1}{r_{-}}\right) \int f(k^{-1} x) |u|^{r(x)} \geq \left( \frac{1}{p_{+}} - \frac{1}{q_{-}}\right) \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right), $$ where the last inequality uses $q_{-} \leq r_{-}$ and $f \geq 0$, showing that $J_{\lambda, k}$ is bounded from below and coercive on $ \mathcal{M}_{\lambda, k} $. \fim
As an immediate consequence of the last lemma, we have
\begin{corollary}\label{ltdalem} Let $ \{u_{n}\} $ be a sequence in $ \mathcal{M}_{\lambda, k} $ and $ J_{\lambda, k}(u_{n}) \to c_{\lambda, k} $. Then $ \{u_{n}\} $ is bounded in $ W^{1,p(x)}(\mathbb{R}^{N}) $. \end{corollary}
The next lemma establishes that the Nehari manifold $\mathcal{M}_{\lambda,k}$ lies at a positive distance from the origin.
\begin{lem}\label{NehariDelta} There exists $\eta >0$ such that \begin{equation} \label{R1} \rho_1(u) \geq \eta, \,\,\, \,\,\,\,\, \forall (u, \lambda, k) \in \mathcal{M}_{\lambda, k} \times [0, \Lambda] \times \mathbb{N}. \end{equation} Moreover, if $ E_{\lambda, k} (u) = J'_{\lambda, k} (u) u $, we have that \begin{equation} \label{E_lambda} E'_{\lambda, k} (u) u \leq - \left( q_{-}-p_{+} \right) \eta \,\,\,\,\, \forall (u, \lambda, k) \in \mathcal{M}_{\lambda, k} \times [0, \Lambda] \times \mathbb{N}. \end{equation}
\end{lem} \noindent \textbf{Proof.} Suppose by contradiction that (\ref{R1}) does not hold. Then, there is $ \{ u_{n} \} \subset \mathcal{M}_{\lambda, k} $ such that $$ \rho_1(u_n) \to 0 ~~ \text{ as } n \to \infty, $$ or equivalently, by Proposition \ref{modular2}, $$ \Vert u_{n} \Vert \to 0 \text{ as } n \to \infty. $$
Since $ \{ u_{n} \} \subset \mathcal{M}_{\lambda, k} $ and $\|f\|_\infty \leq 1$, we derive $$
\int \left( |\nabla u_{n}|^{p(x)}+|u_{n}|^{p(x)}\right) \leq \lambda \Vert g \Vert_{\infty} \int |u_{n}|^{q(x)} + \int |u_{n}|^{r(x)}. $$ On the other hand, using the fact that $\Vert u_{n} \Vert < 1 $ for $n$ large enough, it follows from Propositions~\ref{modular} and \ref{modular2} that \begin{align*}
\Vert u_{n} \Vert^{p_{+}} \leq & \int \left( |\nabla u_{n}|^{p(x)}+|u_{n}|^{p(x)}\right) \\ \leq & \;\lambda \Vert g \Vert_{\infty} \max\left\{ \Vert u_{n} \Vert^{q_{-}}_{q(x)} , \Vert u_{n} \Vert^{q_{+}}_{q(x)} \right\} + \max\left\{ \Vert u_{n} \Vert^{r_{-}}_{r(x)} , \Vert u_{n} \Vert^{r_{+}}_{r(x)} \right\}. \end{align*} By Sobolev embedding, there are positive constants $ c_{1} $ and $ c_{2} $ such that \begin{align*} \Vert u_{n} \Vert^{p_{+}} & \leq \lambda \Vert g \Vert_{\infty} c_{1} \max\left\{ \Vert u_{n} \Vert^{q_{-}} , \Vert u_{n} \Vert^{q_{+}} \right\} + c_{2} \max\left\{ \Vert u_{n} \Vert^{r_{-}} , \Vert u_{n} \Vert^{r_{+}} \right\}, \end{align*} and so, for $n$ large enough, $$
\|u_n\|^{p_{+}} \leq \lambda c_1 \|g\|_{\infty} \|u_n\|^{q_{-}}+c_2 \|u_n\|^{r_{+}}, $$ which is absurd, because $p_+ < q_{-} \leq r_- \leq r_{+}$ and $ \Vert u_{n} \Vert \to 0 $. Therefore, (\ref{R1}) is proved.
Next, we will show that (\ref{E_lambda}) occurs. For each $u \in \mathcal{M}_{\lambda, k}$, a direct computation gives $$ \begin{array}{l}
E'_{\lambda, k} (u) u \leq \, p_{+} \displaystyle \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \lambda q_{-} \int g(k^{-1}x) |u|^{q(x)}
- r_{-} \int f(k^{-1} x) |u|^{r(x)} \\
\mbox{} \\
\mbox{\hspace{1,4 cm} } \leq \left( p_{+} - q_{-} \right) \int \left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) + \left( q_{-} -r_{-} \right) \int f(k^{-1} x) |u|^{r(x)}. \end{array} $$ Since $ p_{+} < q_{-} \leq r_{-} $, it follows that \[ E'_{\lambda, k} (u) u \leq -\left( q_{-}-p_{+} \right) \rho_1(u) \leq - \left( q_{-}- p_{+} \right) \eta, \] finishing the proof. \fim
As a byproduct of the last lemma, we are able to prove that critical points of $J_{\lambda, k}$ restricted to $\mathcal{M}_{\lambda, k}$ are in fact critical points of $J_{\lambda, k}$ on $ W^{1,p(x)}(\mathbb{R}^{N}) $.
\begin{lem} If $ u_{0} \in \mathcal{M}_{\lambda, k} $ is a critical point of $ J_{\lambda, k} $ restricted to $ {\mathcal{M}_{\lambda, k}} $, then $ u_{0} $ is a critical point of $ J_{\lambda, k}$ in $ W^{1,p(x)}(\mathbb{R}^{N}) $. \end{lem}
\noindent \textbf{Proof.} Since $u_0$ is a critical point of $ J_{\lambda, k} $ restricted to $ {\mathcal{M}_{\lambda, k}} $, there is $ \tau \in \mathbb{R} $ such that $$ J'_{\lambda, k}(u_{0}) = \tau E'_{\lambda, k} (u_{0}). $$ By Lemma~\ref{NehariDelta}, we know that $ E'_{\lambda, k} (u_0) u_0 < 0 $, so we must have $\tau=0$. Thereby, $$ J'_{\lambda, k}(u_{0}) = 0, $$ implying that $ u_{0} $ is a critical point of $ J_{\lambda, k}$ in $ W^{1,p(x)}(\mathbb{R}^{N}) $. \fim
The next result is very important in our arguments, because it implies that the weak limit of a $(PS)$ sequence is a critical point of the energy functional.
\begin{thm}\label{conv} Let $ \{ u_{n} \} $ be a sequence in $ W^{1,p(x)}(\mathbb{R}^{N}) $ such that $ u_{n} \rightharpoonup u $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $ and $ J'_{\lambda, k}(u_{n}) \to 0 $ as $ n \to \infty $. Then, for some subsequence, $ \nabla u_{n}(x) \to \nabla u(x) $ a.e. in $ \mathbb{R}^{N} $ as $ n \to \infty $ and $ J'_{\lambda, k}(u) = 0 $. \end{thm}
\noindent \textbf{Proof.} Let $ R > 0 $ and let $ \phi \in C^{\infty}_{0}(\mathbb{R}^{N}) $ be such that
$$ \phi = 0 \,\,\,\, \mbox{if} \,\,\, |x| \geq 2R, \,\,\, \,\,\, \phi = 1 \,\,\, \mbox{if} \,\,\, |x| \leq R \,\,\, \mbox{and} \,\,\, 0 \leq \phi(x) \leq 1 \,\,\,\forall x \in \mathbb{R}^{N}. $$ In what follows, let us denote by $\{P_n\}$ the sequence $$
P_{n}(x) = \langle |\nabla u_{n}(x)|^{p(x)-2} \nabla u_{n}(x) - |\nabla u(x)|^{p(x)-2} \nabla u(x), \nabla u_{n}(x) - \nabla u (x) \rangle . $$ From definition of $\{P_n\}$, $$
\int_{B_{R}(0)} P_{n} \leq \int |\nabla u_{n}|^{p(x)} \phi - \int |\nabla u_{n}|^{p(x) - 2} \nabla u_{n} \nabla u \phi - \int |\nabla u|^{p(x) - 2} \nabla u \nabla(u_{n} - u) \phi. $$ Recalling that $u_n \rightharpoonup u$ in $W^{1,p(x)}(\mathbb{R}^{N})$, we have \begin{align}
\int_{B_{R}(0)} |\nabla u|^{p(x) - 2} \nabla u \nabla(u_{n} - u)\phi \to 0 \quad \mbox{ as } n \to \infty, \end{align} and so, \[
\int_{B_{R}(0)} P_{n} \leq \int |\nabla u_{n}|^{p(x)} \phi - \int |\nabla u_{n}|^{p(x) - 2} \nabla u_{n} \nabla u \phi + o_n(1). \] On the other hand, from $ J_{\lambda, k}'(u_{n})(\phi u_{n}) = o_n(1)$ and $ J_{\lambda, k}'(u_{n})(\phi u) = o_n(1)$, \begin{align*}
\int_{B_{R}(0)} P_{n} & \leq o_n(1) - \int |\nabla u_{n}|^{p(x) - 2}\nabla u_{n}\nabla \phi(u_{n} - u) \\
& \quad - \int |u_{n}|^{p(x) - 2}u_{n}(u - u_{n}) \phi + \lambda \int g(k^{-1} x) |u_{n}|^{q(x) - 2}u_{n}(u_{n} - u)\phi \\
& \qquad + \int f(k^{-1} x) |u_{n}|^{r(x) - 2}u_{n}(u_{n} - u)\phi. \end{align*} Thus, \begin{align*}
\int_{B_{R}(0)} P_{n} & \leq o_n(1) + c_{1}\int_{\operatorname{supp}\phi} |\nabla u_{n}|^{p(x) - 1}|u_{n} - u| \\
& \quad + c_1\int_{\operatorname{supp}\phi} |u_{n}|^{p(x) - 1}|u_{n} - u| +c_1 \lambda \Vert g \Vert_{\infty} \int_{\operatorname{supp}\phi} |u_{n}|^{q(x) - 1}|u_{n} - u| \\
& \qquad +c_1\int_{\operatorname{supp}\phi} |u_{n}|^{r(x) - 1}|u_{n} - u|. \end{align*} Combining H\"{o}lder's inequality and Sobolev embedding, we deduce that \[ \int_{B_{R}(0)} P_{n} \to 0 \quad \mbox{as } n \to \infty. \] In what follows, let us consider the sets \[ B_{R}^{+} = \left\{ x \in B_{R}(0) \, / \, p(x) \geq 2 \right\}\quad \text{ and } \quad B_{R}^{-} = \left\{ x \in B_{R}(0) \, / \, 1 < p(x) < 2 \right\}. \] Since \[ P_{n}(x) \geq \left\{ \begin{array}{ccc}
\frac{2^{3- p_{+}}}{p_{+}} |\nabla u_{n} - \nabla u|^{p(x)} & \text{if} & p(x) \geq 2 \\ \\
(p_{-} - 1)\frac{|\nabla u_{n} - \nabla u|^{2}}{\left(|\nabla u_{n}| + |\nabla u| \right)^{2 - p(x)}} & \text{if} & 1 < p(x) < 2, \end{array} \right. \] we have \begin{equation} \label{convE1}
\int_{B_{R}^{+}} |\nabla u_{n} - \nabla u|^{p(x)} dx \to 0 \text{ as} \,\, n \to \infty. \end{equation} Applying again H\"{o}lder's inequality, $$
\hspace*{-2cm} \int_{B_{R}^{-}} |\nabla u_{n} - \nabla u|^{p(x)} \leq C \Vert g_{n}\Vert_{L^{\frac{2}{p(x)}}(B_{R}^{-})} \Vert h_{n}\Vert_{L^{\frac{2}{2 - p(x)}}(B_{R}^{-})}, $$ where $$
g_{n}(x) = \frac{ |\nabla u_{n}(x) - \nabla u(x)|^{p(x)}}{\left( |\nabla u_{n}(x)| + |\nabla u(x)|\right)^{\frac{p(x)(2 - p(x))}{2}}} $$ and $$
h_{n}(x) = \left( |\nabla u_{n}(x)| + |\nabla u(x)| \right)^{\frac{p(x)(2 - p(x))}{2}}. $$ By a direct computation, $ \{ \Vert h_{n}\Vert_{L^{\frac{2}{2 - p(x)}}(B_{R}^{-})}\}$ is a bounded sequence and $$
\int_{B_{R}^{-}}|g_{n}|^{\frac{2}{p(x)}} \leq C \int_{B_{R}^{-}}P_{n} . $$ Then, \begin{equation} \label{convE2}
\int_{B_{R}^{-}} |\nabla u_{n} - \nabla u|^{p(x)} \to 0 \text{ as } n \to \infty. \end{equation} From \eqref{convE1} and \eqref{convE2}, $ \nabla u_{n} \to \nabla u $ a.e. in $ B_{R}(0) $. Since $R$ is arbitrary, it follows that for some subsequence \[ \nabla u_{n}(x) \to \nabla u(x) \mbox{ a.e. in } \mathbb{R}^{N}. \] This combined with Lemma~\ref{Brezis-Lieb-2} gives \[
|\nabla u_{n}|^{p(x) - 2} \nabla u_{n} \rightharpoonup |\nabla u|^{p(x) - 2} \nabla u \mbox{ in } (L^{p'(x)}(\mathbb{R}^{N}))^{N}. \] Now, using the fact that $J'_{\lambda,k}(u_n)v=o_n(1)$ for all $v \in W^{1,p(x)}(\mathbb{R}^{N}) $ together with the last limit, we derive that $J'_{\lambda,k}(u)v= 0 $ for all $v \in W^{1,p(x)}(\mathbb{R}^{N}) $, finishing the proof. \fim
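For the reader's convenience, the direct computation behind the bounds on $ \{h_{n}\} $ and $ \{g_{n}\} $ above can be sketched as follows; it uses only the definitions of $ g_{n} $ and $ h_{n} $, the pointwise inequality satisfied by $ P_{n} $ on $ B_{R}^{-} $, and the usual convention $ g_{n}(x) = 0 $ when $ \nabla u_{n}(x) = \nabla u(x) = 0 $. Since $ \frac{2}{2 - p(x)} \cdot \frac{p(x)(2 - p(x))}{2} = p(x) $,
\[
|h_{n}|^{\frac{2}{2 - p(x)}} = \left( |\nabla u_{n}| + |\nabla u| \right)^{p(x)}
\quad \mbox{and} \quad
|g_{n}|^{\frac{2}{p(x)}} = \frac{|\nabla u_{n} - \nabla u|^{2}}{\left( |\nabla u_{n}| + |\nabla u| \right)^{2 - p(x)}} \leq \frac{P_{n}}{p_{-} - 1} \quad \mbox{on } B_{R}^{-}.
\]
Hence the boundedness of $ \{u_{n}\} $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $ yields the boundedness of $ \{ \Vert h_{n}\Vert_{L^{\frac{2}{2 - p(x)}}(B_{R}^{-})} \} $, and the second estimate gives the stated inequality with $ C = (p_{-} - 1)^{-1} $.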
\subsection{A result of compactness}
The next theorem is a version, for variable exponents, of a compactness result on Nehari manifolds due to Alves \cite{alves05}. It establishes that problem (\ref{Poo}) has a ground state solution.
\begin{thm}\label{TeoComp} Suppose that (\ref{p1}) holds and let $ \{u_{n}\} \subset \mathcal{M}_{\infty} $ be a sequence with $ J_{\infty}(u_{n}) \to c_{\infty} $. Then, \begin{description} \item[I.] $ u_{n} \to u $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $,
or
\item[II.] There is $ \{y_{n}\} \subset \mathbb{Z}^{N}$ with $|y_n| \to +\infty$ and $w \in W^{1,p(x)}(\mathbb{R}^{N}) $ such that $ w_{n} = u_{n}(\cdot + y_{n}) \to w $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $ and $J_{\infty}(w) = c_{\infty}$. \end{description} \end{thm}
\noindent \textbf{Proof.} Similarly to Corollary~\ref{ltdalem}, we can assume that $\{u_n\}$ is a bounded sequence, and so, there is $ u \in W^{1,p(x)}(\mathbb{R}^{N}) $ and a subsequence of $ \{ u_{n} \}$, still denoted by itself, such that $u_n \rightharpoonup u $ in $ W^{1,p(x)}(\mathbb{R}^{N})$. Applying Ekeland's variational principle, there is a sequence $ \{w_{n}\} $ in $ \mathcal{M}_{\infty} $ satisfying \[ w_{n} = u_{n} + o_{n}(1), \quad J_{\infty}(w_{n}) \to c_{\infty} \] and \begin{align} \label{eq1} J'_{\infty}(w_{n}) - \tau_{n} E'_{\infty}(w_{n}) = o_{n}(1), \end{align} where $ \{\tau_{n}\} \subset \mathbb{R} $ and $ E_{\infty}(w) = J'_{\infty}(w) w $, for any $ w \in W^{1,p(x)}(\mathbb{R}^{N}) $.
Since $ \{u_{n}\} \subset \mathcal{M}_{\infty} $, (\ref{eq1}) leads to \[ \tau_{n} E'_{\infty}(w_{n}) w_{n} = o_{n}(1). \] By the arguments of Lemma~\ref{NehariDelta}, there exists $\delta >0$ such that $$
|E'_{\infty}(w_{n})w_{n}| > \delta \,\,\, \forall n \in \mathbb{N}. $$ From this, $ \tau_{n} \to 0 $ as $ n \to \infty $, and it follows that \[ J_{\infty}(u_{n}) \to c_{\infty} \,\,\, \mbox{and} \,\,\, J'_{\infty}(u_{n}) \to 0. \]
Next, we will study the following possibilities: $ u \neq 0 $ or $ u = 0 $.
\noindent \textbf{Case 1:} $ u \neq 0 $.
Similarly to Theorem~\ref{conv}, the following limits hold for some subsequence: \begin{itemize}
\item $ u_{n}(x) \to u(x)$ \,\, and \,\, $\nabla u_{n}(x) \to \nabla u(x) $ a.e. in $ \mathbb{R}^{N}, $
\item $ \displaystyle \int |\nabla u_{n}(x)|^{p(x)-2}\nabla u_{n}(x)\nabla v \to \int |\nabla u(x)|^{p(x)-2}\nabla u(x)\nabla v$,
\item $ \displaystyle \int |u_{n}|^{p(x)-2} u_{n} v \to \int |u|^{p(x)-2} u v$, \end{itemize} and \begin{itemize}
\item $ \displaystyle \int |u_{n}|^{r(x) -2} u_{n} v \to \int |u|^{r(x) -2} u v $ \end{itemize} for any $ v \in W^{1,p(x)}(\mathbb{R}^{N})$. Consequently, $ u $ is a critical point of $ J_{\infty} $. By Fatou's Lemma, it is easy to check that \begin{align*} c_{\infty} \leq & J_{\infty}(u) = J_{\infty}(u) - \frac{1}{r_{-}} J'_{\infty}(u) u \\
= &\int \left( \frac{1}{p(x)} - \frac{1}{r_{-}} \right) \left( | \nabla u |^{p(x)} + |u|^{p(x)} \right) + \int \left( \frac{1}{r_{-}} - \frac{1}{r(x)} \right) |u|^{r(x)} \\
\leq & \liminf_{n \to \infty}\left\{ \int \left( \frac{1}{p(x)} - \frac{1}{r_{-}} \right) \left( | \nabla u_{n} |^{p(x)} + |u_{n}|^{p(x)} \right) \right.\\
&\qquad \left. + \int \left( \frac{1}{r_{-}} - \frac{1}{r(x)} \right) |u_{n}|^{r(x)}\right\} \\ = & \liminf_{n \to \infty} \left\{ J_{\infty}(u_{n}) - \frac{1}{r_{-}} J'_{\infty}(u_{n}) u_{n} \right\} = \, c_{\infty} .\\ \end{align*} Hence, \begin{align*}
\lim_{n \to \infty} \int \left( | \nabla u_{n} |^{p(x)} + |u_{n}|^{p(x)} \right) = \int \left( | \nabla u |^{p(x)} + |u|^{p(x)} \right), \end{align*} implying that $ u_{n} \to u $ in $ W^{1,p(x)}(\mathbb{R}^{N})$.
\noindent \textbf{Case 2:} $ u = 0 $.
In this case, we claim that there are $R, \xi>0$ and $ \{ y_{n} \} \subset \mathbb{R}^{N} $ satisfying \begin{align}\label{lionsfalse}
\limsup_{n \to \infty} \int_{B_{R}(y_{n})} |u_{n}|^{p(x)} \geq \xi. \end{align} If the claim is false, we must have \begin{align*}
\limsup_{n \to \infty} \sup_{y \in \mathbb{R}^{N}} \int_{B_{R}(y)} |u_{n}|^{p(x)} = 0. \end{align*} Thus, by a Lions-type result for variable exponents proved in \cite[Lemma~3.1]{Fan2001}, $$ u_{n} \to 0 \mbox{ in } L^{s(x)}(\mathbb{R}^{N}), $$ for any $ s \in C(\mathbb{R}^{N}) $ with $ p \ll s \ll p^{*}$.
Recalling $ J'_{\infty}(u_{n}) u_{n} = o_{n}(1) $, the last limits yield \[
\int \left( | \nabla u_{n} |^{p(x)} + |u_{n}|^{p(x)} \right) = o_{n}(1), \] or equivalently $$ u_n \to 0 \,\,\, \mbox{in} \,\,\, W^{1,p(x)}(\mathbb{R}^{N}), $$
leading to $c_{\infty} = 0 $, which is absurd. This way, (\ref{lionsfalse}) is true. By a routine argument, we can assume that $ {y}_{n} \in \mathbb{Z}^{N} $ and $ |y_{n}| \to \infty $ as $ n \to \infty $. Setting $$ w_{n}(x) = u_{n}(x + {y}_{n}), $$ and using the fact that $ p $ and $ r $ are $ \mathbb{Z}^{N}$-periodic, a change of variable gives $$
J_{\infty}(w_{n}) = J_{\infty}(u_{n}) \,\,\, \mbox{and} \,\,\, \|J'_{\infty}(w_{n})\| = \|J'_{\infty}(u_{n})\|, $$ showing that $ \{ w_{n} \} $ is a $(PS)_{c_{\infty}} $ sequence for $ J_{\infty} $. If $ w \in W^{1,p(x)}(\mathbb{R}^{N}) $ denotes the weak limit of $ \{ w_{n} \} $, it follows from (\ref{lionsfalse}) that $$
\int_{B_{{R}}(0)} |w|^{p(x)} \geq \xi, $$ showing that $ w \neq 0 $.
Repeating the same argument of the first case for the sequence $ \{w_{n}\} $, we deduce that $ w_{n} \to w $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $, $ w \in \mathcal{M}_{\infty} $ and $ J_{\infty}(w) = c_{\infty} $. \fim
\subsection{Estimates involving the minimax levels}
The main goal of this section is to prove some estimates involving the minimax levels $c_{\lambda, k},c_{0, k}$ and $c_{\infty}$.
First of all, we recall the inequalities $$ J_{\lambda, k} (u) \leq J_{0, k}(u) \,\,\, \mbox{and} \,\,\, J_{\infty}(u) \leq J_{0, k}(u) \,\,\,\,\, \forall u \in W^{1,p(x)}(\mathbb{R}^{N}), $$ which imply \[ c_{\lambda, k} \leq c_{0, k}\quad \mbox{ and } \quad c_{\infty} \leq c_{0, k}. \]
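These inequalities can be verified directly from the definitions. Since $ \lambda \geq 0 $ and $ g \geq 0 $,
\[
J_{0, k}(u) - J_{\lambda, k}(u) = \lambda \int \frac{g(k^{-1} x)}{q(x)}\, |u|^{q(x)} \geq 0,
\]
while $ J_{\infty}(u) \leq J_{0, k}(u) $ follows in the same way from $ f(k^{-1}x) \leq \Vert f \Vert_{\infty} \leq 1 $, recalling that the nonlinearity in $ J_{\infty} $ has coefficient $1$. Since each level admits the characterization $ c = \inf_{u \neq 0} \max_{t \geq 0} J(tu) $ used throughout this section, the pointwise inequalities between the functionals carry over to the minimax levels.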
\begin{lem}\label{c0<cf00} The minimax levels $c_{0, k}$ and $c_{f_{\infty}}$ satisfy the inequality \linebreak $c_{0, k} < c_{f_{\infty}}$. Hence, $c_{\infty} < c_{f_{\infty}}$. \end{lem} \noindent \textbf{Proof.} In a manner analogous to Theorem~\ref{TeoComp}, there is $ U \in W^{1,p(x)}(\mathbb{R}^{N}) $ verifying \[ J_{f_{\infty}}(U) = c_{f_{\infty}} \quad \mbox{ and } \quad J'_{f_{\infty}}(U) = 0. \] From Lemma~3.6 in \cite{Fan2008}, there exists $ t > 0 $ such that $ t U \in \mathcal{M}_{0, k} $. Thus, $$
c_{0, k} \leq J_{0, k}(tU) = \int \frac{t^{p(x)}}{p(x)} \left( |\nabla U|^{p(x)} + |U|^{p(x)}\right) - \int f(k^{-1} x) \frac{t^{r(x)}}{r(x)} |U|^{r(x)}. $$ Since, by $(H2)$, $f_{\infty} < f(x)$ for all $x \in \mathbb{R}^{N}$, we derive $$ c_{0, k} < J_{f_{\infty}}(tU) \leq \max_{s \geq 0}J_{f_{\infty}}(sU) = J_{f_{\infty}}(U) = c_{f_{\infty}}. $$ \fim
Using the last lemma, we are able to prove that $ J_{\lambda, k} $ verifies the $(PS)_{d}$ condition for some values of $d$.
\begin{lem}\label{Cond-PS} The functional $ J_{\lambda, k} $ satisfies the $(PS)_{d}$ condition for \linebreak $ d \leq c_{\infty} + \varrho $, where $\varrho=\frac{1}{2}(c_{f_\infty}-c_\infty)>0$. \end{lem} \begin{pf} Let $ \{v_{n}\} \subset W^{1,p(x)}(\mathbb{R}^{N}) $ be a $(PS)_{d}$ sequence for the functional $ J_{\lambda, k} $ with $ d \leq c_{\infty} + \varrho$. Similarly to Corollary~\ref{ltdalem}, $ \{v_{n}\} $ is a bounded sequence in $ W^{1,p(x)}(\mathbb{R}^{N}) $, and so, for some subsequence, still denoted by $ \{v_{n}\} $, \[ v_{n} \rightharpoonup v \mbox{ in } W^{1,p(x)}(\mathbb{R}^{N}), \] for some $v \in W^{1,p(x)}(\mathbb{R}^{N}).$ Now, we claim that \begin{align} J_{\lambda, k} (v_{n}) - J_{0, k} (w_{n}) - J_{\lambda, k}(v) = o_{n}(1) \label{J-J0-on1} \end{align} and \begin{align} \Vert J'_{\lambda, k} (v_{n}) - J'_{0, k} (w_{n}) - J'_{\lambda, k}(v) \Vert = o_{n}(1), \label{J'-J0'-on1} \end{align} where $ w_{n} = v_{n} - v $.
Indeed, proceeding as in the proof of Theorem~\ref{conv}, we have the following convergences \[ \nabla v_{n}(x) \to \nabla v(x) \,\,\, \mbox{and} \,\,\, v_{n}(x) \to v(x) \,\,\, \mbox{ a.e. in} \,\,\, \mathbb{R}^{N}. \]
Applying Lemma~\ref{Brezis-Lieb-1}, it follows that \begin{align*} J_{\lambda, k}(v_{n}) = J_{0,k}(w_{n}) + J_{\lambda, k}(v) + o_{n}(1), \label{eq7} \end{align*} showing~(\ref{J-J0-on1}). The equality (\ref{J'-J0'-on1}) follows by combining (\ref{H2}) with Lemmas \ref{Brezis-Lieb-2} and \ref{Brezis-Lieb-3}.
Since $ J'_{\lambda, k}(v) = 0 $ and $ J_{\lambda, k}(v) \geq 0 $, from (\ref{J-J0-on1})-(\ref{J'-J0'-on1}), we have that $w_{n} = v_{n} - v$ is a $(PS)_{d^{*}}$ sequence for $J_{0, k}$ with $d^*=d - J_{\lambda, k}(v) \leq c_{\infty} +\varrho$.
\begin{claim} \label{C2} There is $ R > 0 $ such that \[
\limsup_{n \to \infty} \sup_{y \in \mathbb{R}^{N}} \int_{B_{R}(y)}|w_{n}|^{p(x)} = 0. \] \end{claim} If the claim is true, we have \[
\int |w_{n}|^{r(x)} \to 0. \] On the other hand, by (\ref{J'-J0'-on1}), we know that $ J'_{0, k}(w_{n}) = o_{n}(1) $, then
\begin{align*}
\int \left( | \nabla w_{n}|^{p(x)} + |w_{n}|^{p(x)}\right) = o_{n}(1), \end{align*} showing that $ w_{n} \to 0 $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $, and so, $ v_{n} \to v $ in $ W^{1,p(x)}(\mathbb{R}^{N}). $
\noindent\textbf{Proof of Claim \ref{C2}:} If the claim is not true, for each $ R > 0 $ given, we find $ \xi > 0 $ and $ \{y_{n}\} \subset \mathbb{Z}^{N} $ verifying \[
\limsup_{n \to \infty}\int_{B_{R}(y_{n})} |w_{n}|^{p(x)} \geq \xi > 0. \] Since $w_n \rightharpoonup 0$ in $W^{1,p(x)}(\mathbb{R}^{N}) $, it follows that $\{y_n\}$ is an unbounded sequence. Setting \[ \tilde{w}_{n} = w_{n}(\cdot + y_{n}), \] we have that $ \{\tilde{w}_{n}\} $ is also a $(PS)_{d^*}$ sequence for $J_{0, k}$, and so, it must be bounded. Then, there are $ \tilde{w} \in W^{1,p(x)}(\mathbb{R}^{N}) $ and a subsequence of $ \{\tilde{w}_{n}\} $, still denoted by itself, such that $$ \tilde{w}_{n} \rightharpoonup \tilde{w} \in W^{1,p(x)}(\mathbb{R}^{N})\setminus\{0\}. $$ Moreover, since $ J'_{0, k}(w_{n}) \phi( \cdot - y_{n}) = o_{n}(1) $ for each $ \phi \in W^{1,p(x)}(\mathbb{R}^{N})$ and $ \nabla \tilde{w}_{n}(x) \to \nabla \tilde{w}(x) $ a.e. in $ \mathbb{R}^{N} $, we obtain \begin{align*}
\int \left( |\nabla \tilde{w}|^{p(x) - 2} \nabla \tilde{w} \nabla \phi + | \tilde{w} |^{p(x) - 2} \tilde{w} \phi \right) = \int f_{\infty} | \tilde{w} |^{r(x) - 2} \tilde{w} \phi, \end{align*} from where it follows that $ \tilde{w} $ is a weak solution of the Problem~(\ref{Pf00}). Consequently, after some routine calculations, we get $$ c_{f_{\infty}} \leq J_{f_{\infty}}(\tilde{w}) = J_{f_{\infty}}(\tilde{w}) - \frac{1}{r_{-}} J'_{f_{\infty}}(\tilde{w})\tilde{w} \leq \liminf_{n \to \infty} \left\{J_{0, k}(w_{n}) - \frac{1}{r_{-}} J'_{0, k}(w_{n}) w_{n}\right\} = d^* $$ implying that $c_{f_{\infty}} \leq c_{\infty} + \varrho$, which is an absurd because $\varrho < c_{f_{\infty}} - c_{\infty}$. Therefore, the Claim \ref{C2} is true. \end{pf}
In what follows, let us fix $ \rho_{0}, r_{0} > 0 $ satisfying \begin{itemize}
\item $ \overline{B_{\rho_{0}}(a_{i})} \cap \overline{B_{\rho_{0}}(a_{j})} = \emptyset $ for $ i \neq j$, \,\, $i,j \in \{1,...,\ell\}$;
\item $ \bigcup^{\ell}_{i = 1}B_{\rho_{0}}(a_{i}) \subset B_{r_0}(0) $.
\item $K_{\frac{\rho_{0}}{2}} = \bigcup^{\ell}_{i = 1}\overline{B_{\frac{\rho_{0}}{2}}(a_{i})}$. \end{itemize} Besides this, we define the function $ Q_{k} : W^{1, p(x)}(\mathbb{R}^{N}) \to \mathbb{R}^{N} $ by \begin{align*}
Q_{k}(u) = \frac{\int \chi (k^{-1} x)|u|^{p_{+}}}{\int |u|^{p_{+}}}, \end{align*} where $ \chi : \mathbb{R}^{N} \to \mathbb{R}^{N} $ is given by \[ \chi(x) = \left\{ \begin{array}{ccc}
x & \mbox{if} & |x| \leq r_{0} \\
r_{0} \frac{x}{|x|} & \mbox{if} & |x| > r_{0}. \end{array} \right. \]
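Note that $ \chi $ is the radial retraction of $ \mathbb{R}^{N} $ onto the closed ball $ \overline{B_{r_{0}}(0)} $; in particular, it is continuous and bounded, and, for every $ u \neq 0 $,
\[
|Q_{k}(u)| \leq \frac{\int |\chi(k^{-1} x)|\, |u|^{p_{+}}}{\int |u|^{p_{+}}} \leq r_{0},
\]
so that $ Q_{k} $ takes values in $ \overline{B_{r_{0}}(0)} $, which contains all the points $ a_{1}, \dots, a_{\ell} $ by the choice of $ r_{0} $.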
The next two lemmas will be useful to get important $(PS)$-sequences associated with $ J_{\lambda, k} $.
\begin{lem}\label{lemK} There are $ \delta_{0} > 0 $ and $ k_1 \in \mathbb{N} $ such that if $ u \in \mathcal{M}_{0, k} $ and $ J_{0, k}(u) \leq c_{\infty} + \delta_{0} $, then \[ Q_{k}(u) \in K_{\frac{\rho_{0}}{2}} \,\,\,\, \mbox{for} \,\,\, k \geq k_1. \] \end{lem} \noindent \textbf{Proof.} If the lemma does not hold, there are $ \delta_{n} \to 0 $, $ k_{n} \to +\infty $ and $ u_{n} \in \mathcal{M}_{0, k_{n}} $ satisfying \[ J_{0, k_{n}}(u_{n}) \leq c_{\infty} + \delta_{n} \] and \[ Q_{k_{n}}(u_{n}) \not\in K_{\frac{\rho_{0}}{2}}. \] Fixing $ s_{n} > 0 $ such that $ s_{n} u_{n} \in \mathcal{M}_{\infty} $, we have that \[ c_{\infty} \leq J_{\infty}(s_{n} u_{n}) \leq J_{0, k_{n}} (s_{n}u_{n}) \leq \max_{t \geq 0 } J_{0, k_{n}} (tu_{n}) = J_{0, k_{n}}(u_{n}) \leq c_{\infty} + \delta_{n}; \] hence, \[\{s_{n} u_{n}\} \subset \mathcal{M}_{\infty} \,\,\,\, \mbox{and} \,\,\,\, J_{\infty}(s_{n} u_{n}) \to c_{\infty}. \]
Applying Ekeland's variational principle, we can assume without loss of generality that $\{s_{n} u_{n}\} \subset \mathcal{M}_{\infty} $ is a $ (PS)_{c_{\infty}} $ sequence for $ J_{\infty} $, that is, $$ J_\infty(s_n u_n) \to c_\infty \,\,\,\, \mbox{and} \,\,\,\, J'_{\infty}(s_n u_n) \to 0. $$ According to Theorem \ref{TeoComp}, we must consider the following cases: \begin{description}
\item[i)] $ s_{n}u_{n} \to U \neq 0 $ in $ W^{1,p(x)}(\mathbb{R}^{N}) $; \par \end{description} or \begin{description}
\item[ii)] There exists $ \{y_{n}\} \subset \mathbb{Z}^{N} $ with $|y_n| \to +\infty $ such that $ v_{n} = s_{n}u_{n}(\cdot + y_{n}) $ converges in $ W^{1,p(x)}(\mathbb{R}^{N}) $ to some $ V \in W^{1,p(x)}(\mathbb{R}^{N}) \setminus \{0\}$. \end{description}
By a direct computation, we can suppose that $ s_{n} \to s_{0} $ for some $ s_{0} > 0 $. Therefore, without loss of generality, we can assume that $$ u_{n} \to U \,\,\, \mbox{or} \,\,\,\, v_{n} = u_{n}( \cdot + y_{n}) \to V \,\,\,\, \mbox{in} \,\,\, W^{1, p(x)}(\mathbb{R}^{N}). $$
\noindent\textbf{Analysis of} $\mathbf{i)}$.
By Lebesgue's dominated convergence theorem \[
Q_{k_{n}}(u_{n}) = \frac{\int \chi({k_{n}}^{-1} x)|u_{n}|^{p_{+}}}{\int |u_{n}|^{p_{+}}} \to \frac{\int \chi(0)|U|^{p_{+}}}{\int |U|^{p_{+}}} = 0 \in K_{\frac{\rho_{0}}{2}}, \] implying $ Q_{k_{n}}(u_{n}) \in K_{\frac{\rho_{0}}{2}} $ for $n $ large, which is an absurd.
\noindent\textbf{Analysis of} $\mathbf{ii)}$.
Using Ekeland's variational principle again, we can suppose that $ J'_{0, k_{n}}(u_{n}) = o_{n}(1) $. Hence, $ J'_{0, k_{n}} (u_{n}) \phi(\cdot - y_{n}) = o_{n}(1) $ for any $ \phi \in W^{1,p(x)}(\mathbb{R}^{N})$, and so, \begin{equation}
o_{n}(1) = \int \left( |\nabla v_{n}|^{p(x) - 2} \nabla v_{n} \nabla \phi + |v_{n}|^{p(x) - 2} v_{n} \phi\right) - \int f({k_{n}}^{-1} (x + y_{n})) |v_{n}|^{r(x) - 2}v_{n} \phi. \label{eq5} \end{equation} The last limit implies that, for some subsequence, $$ \nabla v_{n}(x) \to \nabla V(x) \,\,\, \mbox{and} \,\,\, v_{n}(x) \to V(x) \,\,\, \mbox{a.e. in} \,\,\, \mathbb{R}^{N}. $$ Now, we will study two cases: \begin{description}
\item[I)] $ |{k_{n}}^{-1}y_{n}| \to +\infty $ \end{description} and \begin{description}
\item[II)] $ {k_{n}}^{-1}y_{n} \to y $, for some $y \in \mathbb{R}^{N}$. \end{description}
If I) holds, it follows that \[
\int \left( |\nabla V|^{p(x) - 2} \nabla V \nabla \phi + |V|^{p(x) - 2} V \phi\right) = \int f_{\infty} |V|^{r(x) - 2} V \phi, \] showing that $ V $ is a nontrivial weak solution of the problem~(\ref{Pf00}). Now, by Fatou's Lemma, $$ c_{f_{\infty}} \leq J_{f_{\infty}}(V) = J_{f_{\infty}}(V) - \frac{1}{r_{-}}J'_{f_{\infty}}(V)V \leq \liminf_{n \to \infty} \left\{ J_{\infty}(u_{n}) - \frac{1}{r_{-}}J'_{\infty}(u_{n})u_{n} \right\} = c_{\infty}, $$ or equivalently, $ c_{f_{\infty}} \leq c_{\infty} $, contradicting Lemma~\ref{c0<cf00}.
Now, if $ {k_{n}}^{-1}y_{n} \to y $ for some $y \in \mathbb{R}^{N}$, then $ V $ is a weak solution of the following problem \begin{align} \left\{ \begin{array} [c]{rcl}
-\Delta_{p(x)} u + |u|^{p(x) - 2} u & = & f(y)|u|^{r(x) - 2}u , \quad \mathbb{R}^{N}\\ \mbox{}\\ u \in W^{1, p(x)}(\mathbb{R}^{N}). & & \end{array} \right. \tag{$ P_{f(y)} $}\label{Pfy} \end{align} Repeating the previous argument, we deduce that \begin{align} c_{f(y)} \leq c_{\infty}, \label{eq6} \end{align} where $ c_{f(y)} $ is the mountain pass level of the functional $ J_{f(y)} : W^{1,p(x)}(\mathbb{R}^{N}) \to \mathbb{R} $ given by \begin{align*}
J_{f(y)}(u) =\int \frac{1}{p(x)}\left( |\nabla u|^{p(x)}+|u|^{p(x)}\right) - \int \frac{ f(y) }{r(x)} |u|^{r(x)}. \end{align*} Observe that $$ c_{f(y)} = \inf_{u \in \mathcal{M}_{f(y)}} J_{f(y)}(u) $$ where $$ \mathcal{M}_{f(y)} = \left\{ u \in W^{1,p(x)}(\mathbb{R}^{N})\setminus\{ 0\}: J'_{f(y)}(u) u = 0 \right\}. $$ If $ f(y) < 1 $, an argument similar to the one explored in the proof of Lemma~\ref{c0<cf00} shows that $ c_{f(y)} > c_{\infty} $, contradicting the inequality (\ref{eq6}). Thereby, $ f(y) = 1$ and $ y = a_{i} $ for some $ i = 1, \cdots, \ell $. Hence, \begin{align*}
Q_{k_{n}}(u_{n}) &= \frac{\int \chi({k_{n}}^{-1} x)|u_{n}|^{p_{+}}}{\int |u_{n}|^{p_{+}}}\\
& = \frac{\int \chi({k_{n}}^{-1} x + {k_{n}}^{-1} y_{n})|v_{n}|^{p_{+}}}{\int |v_{n}|^{p_{+}}} \to \frac{\int \chi(y)|V|^{p_{+}}}{\int |V|^{p_{+}}}=a_i \in K_{\frac{\rho_{0}}{2}}, \\ \end{align*} implying that $ Q_{k_{n}}(u_{n}) \in K_{\frac{\rho_{0}}{2}} $ for $ n $ large, which is a contradiction, since by assumption $ Q_{k_{n}}(u_{n}) \not\in K_{\frac{\rho_{0}}{2}} $. \fim
\begin{lem}\label{lemK2} Let $ \delta_{0}>0 $ be as given in Lemma \ref{lemK} and let $ k_3=\max\{k_1,k_2\}$. Then, there is $ \Lambda^{*} > 0 $ such that \[ Q_{k}(u) \in K_{\frac{\rho_{0}}{2}}, \,\,\,\,\,\, \forall (u, \lambda, k) \in \mathcal{A}_{\lambda, k} \times [0, \Lambda^{*}) \times ( [k_3,+\infty) \cap \mathbb{N}). \] \end{lem}
\noindent \textbf{Proof.} Observe that $$
J_{\lambda, k}(u) = J_{0, k}(u) - \lambda \int \frac{g({k}^{-1} x)}{q(x)} |u|^{q(x)} \,\,\, \forall u \in W^{1,p(x)}(\mathbb{R}^{N}). $$ In what follows, let $ t_{u} > 0 $ such that $ t_{u}u \in \mathcal{M}_{0, k} $. Then, \begin{align}
J_{0, k}(t_{u} u) & = J_{\lambda, k}(t_{u}u) + \lambda \int \frac{g({k}^{-1} x)}{q(x)} (t_{u})^{q(x)} |u|^{q(x)} \nonumber \\
& \leq \max_{t \geq 0} J_{\lambda, k}(tu)+ \lambda \int \frac{g({k}^{-1} x)}{q(x)} (t_{u})^{q(x)} |u|^{q(x)} \label{Z1}. \end{align}
\begin{claim}\label{tuNerahi} \mbox{}\\
\noindent {\bf a)} There is a constant $ R > 0 $ such that \[ \mathcal{A}_{\lambda, k} = \left\{ u \in \mathcal{M}_{\lambda, k}; J_{\lambda, k}(u) < c_{\infty} + \frac{\delta_{0}}{2} \right\} \subset B_{R}(0), \] for $k \geq k_1$, that is, $ \mathcal{A}_{\lambda, k} $ is a bounded set, where $k_1$ was given in Lemma \ref{lemK}. Moreover, $ R $ is independent of $ \lambda $ and $k$.
\noindent {\bf b)} Let $ u \in \mathcal{A}_{\lambda, k} $ and $ t_{u} > 0 $ such that $ t_{u} u \in \mathcal{M}_{0, k} $. Then, given $\Lambda >0$, there are $ C > 0 $ and $k_2 \in \mathbb{N}$ such that \[ 0 \leq t_{u} \leq C, \quad \mbox{ for all } (u, \lambda, k) \in \mathcal{A}_{\lambda, k} \times [0,\Lambda] \times ([ k_2, +\infty) \cap \mathbb{N}). \] \end{claim}
\noindent {\bf Proof of a):} Let $ u \in \mathcal{M}_{\lambda, k} $ be such that $ J_{\lambda, k}(u) < c_{\infty} + \frac{\delta_{0}}{2}$ for $ k \geq k_1 $. Then, \begin{align*}
\int \left( |\nabla u|^{p(x)} + |u|^{p(x)} \right) - \lambda \int g({k}^{-1} x) |u|^{q(x)} - \int f({k}^{-1} x)|u|^{r(x)} = 0 \end{align*} and \begin{align*}
\int \frac{1}{p(x)} \left( |\nabla u|^{p(x)} + |u|^{p(x)} \right) - & \lambda \int \frac{g({k}^{-1} x)}{q(x)} |u|^{q(x)} - \int \frac{f({k}^{-1} x)}{r(x)} |u|^{r(x)} < c_{\infty} + \frac{\delta_{0}}{2}. \end{align*} Combining the last two expressions, we obtain \begin{align*}
\left( \frac{1}{p_{+}} - \frac{1}{q_{-}} \right) \int \left( |\nabla u|^{p(x)} + |u|^{p(x)} \right) + \left( \frac{1}{q_{-}} - \frac{1}{r_{-}} \right) \int f({k}^{-1} x) |u|^{r(x)}
< c_{\infty} + \frac{\delta_{0}}{2}. \end{align*} Hence, \begin{align*}
\int \left( |\nabla u|^{p(x)} + |u|^{p(x)} \right) < (c_{\infty} + \frac{\delta_{0}}{2})\left( \frac{1}{p_{+}} - \frac{1}{q_{-}} \right)^{-1}, \end{align*} proving a).
\noindent {\bf Proof of b):} Suppose, by contradiction, that b) does not hold. Then there is $ \{u_{n}\} \subset \mathcal{A}_{\lambda_n, k_n} $ with $\lambda_n \to 0$ and $k_n \to +\infty$ such that $ t_{u_{n}} u_{n} \in \mathcal{M}_{0, k_n} $ and $ t_{u_{n}} \to \infty $ as $ n \to \infty $. Without loss of generality, we assume that $ t_{u_{n}} \geq 1$. As $ t_{u_{n}} u_{n} \in \mathcal{M}_{0, k_n} $, we derive $$
(t_{u_{n}})^{p_+} \int \left( |\nabla u_{n}|^{p(x)} + |u_{n}|^{p(x)} \right) \geq f_{\infty} (t_{u_{n}})^{r_{-}} \int |u_{n}|^{r(x)}, $$ or equivalently, \begin{align}
\int \left( |\nabla u_{n}|^{p(x)} + |u_{n}|^{p(x)} \right) \geq f_{\infty} t_{u_{n}}^{r_{-}- p_{+}} \int |u_{n}|^{r(x)} . \label{modular-tu} \end{align} Now, we claim that there is $ \eta_{1} > 0 $ such that \begin{equation} \label{eta1}
\int |u_n|^{r(x)} > \eta_{1} \,\,\, \forall n \in \mathbb{N}. \end{equation}
Indeed, arguing by contradiction, if $ \int |u_{n}|^{r(x)} \to 0 $, by interpolation it follows that
$ \int |u_n|^{q(x)} \to 0$. Since $ u_{n} \in \mathcal{M}_{\lambda_{n}, k_{n}} $, $$
\int \left( |\nabla u_{n}|^{p(x)} + |u_{n}|^{p(x)} \right) \leq \lambda_n \Vert g \Vert_{\infty} \int |u_{n}|^{q(x)} + \int |u_{n}|^{r(x)} = o_{n}(1), $$ or equivalently, $$ u_{n} \to 0 \,\,\, \mbox{in} \,\,\, W^{1, p(x)}(\mathbb{R}^{N}), $$ which contradicts Lemma~\ref{NehariDelta}, proving (\ref{eta1}). Thereby, from inequality (\ref{modular-tu}), $$
\rho_{1}(u_{n})=\int \left( |\nabla u_{n}|^{p(x)} + |u_{n}|^{p(x)} \right) \to +\infty, $$ implying that $ \{u_{n}\}$ is an unbounded sequence. However, this is impossible, because by item a), $ \{u_{n} \} $ is bounded, showing that b) holds.
Now, combining Claim \ref{tuNerahi}-b) with (\ref{Z1}), we get \begin{align*}
J_{0, k}(t_{u} u) \leq J_{\lambda, k}(u) + \frac{\lambda}{q_{-}}\Vert g \Vert_{\infty} C^{q_+} \int |u|^{q(x)}. \end{align*} Since $u \in \mathcal{A}_{\lambda, k}$, we derive \begin{align*}
J_{0, k}(t_{u} u) < c_{\infty} + \frac{\delta_{0}}{2} + \lambda c_{2} \int|u|^{q(x)}. \end{align*} Using the Sobolev embedding combined with Claim \ref{tuNerahi}-a), we obtain \[ J_{0, k} (t_{u} u) < c_{\infty} + \frac{\delta_{0}}{2} + c_{3} \lambda \quad \forall u \in \mathcal{A}_{\lambda, k}, \] where $c_{3} $ is a positive constant. Setting $ \Lambda^{*} : = {\delta_{0}}/{(2c_{3})}$ and taking $\lambda \in [0, \Lambda^*)$, it follows that \[ t_{u}u \in \mathcal{M}_{0, k} \quad \mbox{ and } \quad J_{0, k} (t_{u} u) < c_{\infty} + \delta_{0}. \] Then, by Lemma~\ref{lemK}, \[ Q_{k}(t_{u} u) \in K_{\frac{\rho_{0}}{2}}. \] Now, it remains to note that \[ Q_{k}(u) = Q_{k}(t_{u} u) \] to conclude the proof of the lemma. \fim
From now on, we will use the ensuing notation \begin{itemize}
\item $ \theta^{i}_{\lambda, k} = \left\{u \in \mathcal{M}_{\lambda, k} ; |Q_{k}(u) - a_{i} | < \rho_{0} \right\}$,
\item $ \partial\theta^{i}_{\lambda, k} = \left\{u \in \mathcal{M}_{\lambda, k} ; |Q_{k}(u) - a_{i} | = \rho_{0} \right\}$,
\item $ \beta^{i}_{\lambda, k} = \displaystyle\inf_{u \in \theta^{i}_{\lambda, k}} J_{\lambda, k}(u) $
\end{itemize} and \begin{itemize}
\item $ \tilde{\beta}^{i}_{\lambda, k} = \displaystyle\inf_{u \in \partial\theta^{i}_{\lambda, k}} J_{\lambda, k}(u) .$ \end{itemize}
The above numbers are very important in our approach, because we will prove that there is a $(PS)$ sequence of $J_{\lambda, k}$ associated with each $\theta^{i}_{\lambda, k}$ for $i=1,2,...,\ell$. To this end, we need the following technical result.
\begin{lem} \label{rho} There is $ k^{*} \in \mathbb{N} $ such that $$ \beta^{i}_{\lambda, k} < c_{\infty} + \varrho \,\,\, \mbox{and} \,\,\, \beta^{i}_{\lambda, k} < \tilde{\beta}^{i}_{\lambda, k}, $$ for all $ \lambda \in [0, \Lambda^*)$ and $ k \geq k^{*}$, where $\varrho=\frac{1}{2}(c_{f_\infty}-c_\infty)>0.$ \end{lem} \noindent \textbf{Proof.} From now on, $ U \in W^{1, p(x)}(\mathbb{R}^{N}) $ is a ground state solution associated with (\ref{Poo}), that is, \[ J_{\infty}(U) = c_{\infty} \quad \mbox{ and } \quad J'_{\infty}(U) = 0 \,\,\, \mbox{( See Theorem \ref{TeoComp} )}. \] For $ 1 \leq i \leq \ell$ and $ k \in \mathbb{N} $, we define the function $ \widehat{U}^{i}_{k} : \mathbb{R}^{N} \to \mathbb{R}$ by \[ \widehat{U}^{i}_{k}(x) = U( x - ka_{i}). \]
\begin{claim}\label{ltda-Jl-coo} For all $i \in \{1,\ldots,\ell \}$, we have that \[ \limsup_{k \to +\infty}(\sup_{t \geq 0 } J_{\lambda, k}(t\widehat{U}^{i}_{k})) \leq c_{\infty}. \] \end{claim} Indeed, since $p,q$ and $ r $ are $ \mathbb{Z}^{N}$-periodic, and $ a_{i} \in \mathbb{Z}^{N}$, a change of variables gives \begin{align*}
J_{\lambda, k}(t\widehat{U}^{i}_{k}) = &\int \frac{t^{p(x)}}{p(x)} \left( |\nabla U|^{p(x)} + |U|^{p(x)} \right)
- \lambda \int g(k^{-1} x + a_{i})\frac{t^{q(x)}}{q(x)} \left| U \right|^{q(x)}\\
& \qquad - \int f(k^{-1} x + a_{i}) \frac{t^{r(x)}}{r(x)} |U|^{r(x)}. \end{align*} Moreover, we know that there exists $ s = s(k) > 0 $ such that \begin{align*} \max_{t \geq 0} J_{\lambda, k}(t\widehat{U}^{i}_{k}) = J_{\lambda, k}(s\widehat{U}^{i}_{k}). \end{align*} By a direct computation, it follows that $ s(k) \not\to 0 $ and $ s(k) \not\to \infty $ as $ k \to \infty $. Thus, without loss of generality, we can assume $ s(k) \to s_{0}>0$ as $ k \to \infty $. Thereby, \begin{align*}
\lim_{k \to \infty} \left( \max_{t \geq 0}J_{\lambda, k}(t\widehat{U}^{i}_{k}) \right) &= \int \frac{s_{0}^{p(x)}}{p(x)} \left( |\nabla U|^{p(x)} + |U|^{p(x)} \right)
- \lambda \int g(a_{i})\frac{s_{0}^{q(x)}}{q(x)} \left| U \right|^{q(x)}\\
& \qquad - \int f(a_{i}) \frac{s_{0}^{r(x)}}{r(x)} |U|^{r(x)}\\ & \leq J_{\infty}(s_{0}U) \leq \max_{s \geq 0}J_{\infty} (sU) = J_{\infty}(U) = c_{\infty}. \end{align*} Consequently, \begin{align*} \limsup_{k \to +\infty}(\sup_{t \geq 0 } J_{\lambda, k}(t\widehat{U}^{i}_{k})) \leq c_{\infty} \,\,\, \,\, \mbox{for} \,\,\, i \in \{1,\ldots,\ell\}. \end{align*}
Since $ Q_{k}(\widehat{U}^{i}_{k}) \to a_{i} $ as $ k \to \infty $, we have $ \widehat{U}^{i}_{k} \in \theta^{i}_{\lambda, k} $ for all $ k $ large enough. On the other hand, by Claim \ref{ltda-Jl-coo}, $ J_{\lambda, k} (\widehat{U}^{i}_{k}) < c_{\infty} + \frac{\delta_0}{4} $ also holds for $k$ large enough and $\lambda \in [0, \Lambda^*)$. In this way, there exists $k_4 \in \mathbb{N}$ such that $$ \beta^{i}_{\lambda, k} < c_{\infty} + \frac{\delta_0}{4}, \,\,\,\, \forall \lambda \in [0, \Lambda^*) \,\,\, \mbox{and} \,\,\, k \geq k_4. $$ Thus, decreasing $\delta_0$ if necessary, we can assume that $$ \beta^{i}_{\lambda, k} < c_{\infty} + \varrho, \,\,\,\, \forall \lambda \in [0, \Lambda^*) \,\,\, \mbox{and} \,\,\, k \geq k_4. $$ In order to prove the other inequality, we observe that Lemma \ref{lemK2} yields $ J_{\lambda, k}(u) \geq c_{\infty} + \frac{\delta_{0}}{2} $ for all $ u \in \partial \theta^{i}_{\lambda, k} $, if $\lambda \in [0, \Lambda^*)$ and $k \geq k_3$. Therefore, \begin{align*} \tilde{\beta}^{i}_{\lambda, k} \geq c_{\infty} + \frac{\delta_{0}}{2}, \,\,\, \mbox{for} \,\,\, \lambda \in [0, \Lambda^*) \,\,\,\, \mbox{and} \,\,\, k \geq k_3. \end{align*} Fixing $k^*=\max\{k_3,k_4\}$, we derive that \[ \beta^{i}_{\lambda, k} < \tilde{\beta}^{i}_{\lambda, k}, \] for $ \lambda \in [0, \Lambda^{*}) $ and $ k \geq k^*$. \fim
\begin{lem}\label{PSb} For each $ 1 \leq i \leq\ell$, there exists a $(PS)_{\beta^{i}_{\lambda, k}}$ sequence $ \left\{ u^{i}_{n} \right\} \subset \theta^{i}_{\lambda, k} $ for the functional $ J_{\lambda, k} $. \end{lem}
\noindent \textbf{Proof.} By Lemma~\ref{rho}, we know that $ \beta^{i}_{\lambda, k} < \tilde{\beta}^{i}_{\lambda, k}$. The lemma then follows by adapting the ideas explored in \cite{Lin12}. \fim
\section{Proof of Theorem~\ref{T1} }
Let $ \{ u^{i}_{n} \} \subset \theta^{i}_{\lambda, k} $ be a $(PS)_{\beta^{i}_{\lambda, k}} $ sequence for the functional $ J_{\lambda, k} $ given by Lemma~\ref{PSb}. Since $ \beta^{i}_{\lambda, k} < c_{\infty} + \varrho$, by Lemma~\ref{Cond-PS} there is $ u^{i}$ such that $ u^{i}_{n} \to u^{i} $ in $ W^{1, p(x)}(\mathbb{R}^{N}) $. Thus, $$ u^{i} \in \theta^{i}_{\lambda, k}, \,\,\, J_{\lambda, k}(u^i) = \beta^{i}_{\lambda, k} \,\,\, \mbox{and} \,\,\, J'_{\lambda, k}(u^i) = 0. $$ Now, we claim that $ u^{i} \neq u^{j} $ for $ i \neq j $, $ 1 \leq i,j \leq \ell $. To see this, it suffices to observe that $$ Q_k(u^i) \in \overline{B_{\rho_{0}}(a_{i})} \,\,\, \mbox{and} \,\,\,\, Q_k(u^j) \in \overline{B_{\rho_{0}}(a_{j})}. $$ Since $$ \overline{B_{\rho_{0}}(a_{i})} \cap \overline{B_{\rho_{0}}(a_{j})} = \emptyset \,\,\, \mbox{for} \,\,\, i \not= j, $$ it follows that $u^i \not= u^j$ for $i \not= j$. From this, $ J_{\lambda, k} $ has at least $ \ell $ nontrivial critical points for $\lambda \in [0, \Lambda^*)$ and $ k \geq k^*$, proving the theorem. \fim
\end{document}
\begin{document}
\title{Assouad dimension of self-affine carpets}
\begin{abstract} We calculate the Assouad dimension of the self-affine carpets of Bedford and McMullen, and of Lalley and Gatzouras. We also calculate the conformal Assouad dimension of those carpets that are not self-similar. \end{abstract}
\section{Introduction}\label{sec-intro} Bedford and McMullen generalized the construction of the Sierpi\'nski carpet to build a class of self-affine sets (``carpets'') in the plane ~\cite{Bed-84-Carpets,McM-84-Carpets}. Their construction was later further generalized by Lalley and Gatzouras~\cite{Lal-Gatz-92-self-affine}. In this note we calculate the Assouad dimension of these carpets. We also calculate their conformal Assouad dimension in the non-self-similar case. This calculation exhibits an interesting dichotomy: such a carpet is either minimal for conformal Assouad dimension, or has conformal Assouad dimension zero.
We begin by considering the carpets of Bedford and McMullen. Let us recall the construction of these sets. Given integers $n \geq m$, and a fixed, non-empty set $A \subset \{0,1,\ldots,n-1\}\times\{0,1,\ldots,m-1\}$, we can define the self-affine set \[
S = S(A) = \left\{ \left( \sum_{i=1}^{\infty} \frac{x_i}{n^i},
\sum_{i=1}^{\infty} \frac{y_i}{m^i} \right) : \forall i \in \mathbb{N}, (x_i,y_i) \in A \right\}. \] Following McMullen, we let $t_j$ be the number of elements $(i,j)$ of $A$, for each row $0 \leq j < m$. McMullen shows that the Hausdorff dimension of $S$ satisfies \[
\dim_H (S) = \log_m \Bigg( \sum_{j=0}^{m-1} t_j^{\log_n m} \Bigg). \]
We let $s$ denote the number of rows which have an entry in $A$, that is, $s = |\{j:t_j\neq 0\}|$. McMullen demonstrates that the upper Minkowski dimension is given by \[
\overline{\dim}_M (S) = \log_m(s) + \log_n \left(\frac{|A|}{s}\right). \] (His result is stated for the upper Minkowski dimension, but his proof calculates the Minkowski dimension as well.)
In the self-similar case ($n=m$) the carpet carries an Ahlfors regular measure of dimension $\log_n(|A|)$, and so the Hausdorff, upper Minkowski and Assouad dimensions of the carpet all have this value. It seems, however, that the Assouad dimension of $S$ has not been calculated in the non-self-similar case ($n>m$). (The definition of Assouad dimension is recalled in Section~\ref{sec-prelim}.) \begin{theorem}\label{thm-main} When $n > m$, the Assouad dimension of $S$ is
\[
\dim_A ( S ) = \log_m(s) + \log_n(t),
\]
where $t = \max \{ t_j : 0 \leq j < m\}$. \end{theorem} Note that for a self-affine carpet with $n>m$, we have \[
\dim_H(S) < \dim_M(S) < \dim_A(S), \] unless we are in the ``uniform fibers'' case, that is, every non-zero $t_j$ equals $t$.
We prove Theorem~\ref{thm-main} in Section~\ref{sec-dima}. The upper bound follows from a straightforward counting argument. To show the lower bound, we build a suitable ``weak tangent'' to $S$ and use the scale-invariant properties of Assouad dimension.
To illustrate this theorem, consider the carpet $S_1$ generated by \[
A_1 = \big\{ (0,2),(1,0),(2,2),(3,0),(3,2) \big\}, \ n=4,\ m=3, \] and the carpet $S_2$ generated by \[
A_2 = \big\{ (0,0),(0,2),(2,1),(4,0),(4,2) \big\},\ n=5,\ m=3. \] (See Figures~\ref{fig-43dust} and \ref{fig-53min} respectively.) The theorem gives that $\dim_A(S_1) = \log_3(2)+\log_4(3)$ and $\dim_A(S_2) = \log_3(3)+\log_5(2) = 1+\log_5(2)$.
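As a quick numerical sanity check (our own addition, not part of the paper's argument), McMullen's formulas and the Assouad dimension formula of Theorem~\ref{thm-main} can be evaluated directly for these two carpets; the helper function and variable names below are illustrative.

```python
import math

def carpet_dims(A, n, m):
    """Hausdorff, upper Minkowski and Assouad dimensions of the
    Bedford-McMullen carpet S(A) with n > m, via McMullen's formulas
    and dim_A = log_m(s) + log_n(t) from the main theorem."""
    t_row = [sum(1 for (x, y) in A if y == j) for j in range(m)]
    occupied = [tj for tj in t_row if tj > 0]   # non-empty rows
    s, t = len(occupied), max(occupied)
    dim_H = math.log(sum(tj ** math.log(m, n) for tj in occupied), m)
    dim_M = math.log(s, m) + math.log(len(A) / s, n)
    dim_A = math.log(s, m) + math.log(t, n)
    return dim_H, dim_M, dim_A

# The two example carpets from the introduction.
A1 = {(0, 2), (1, 0), (2, 2), (3, 0), (3, 2)}  # n = 4, m = 3
A2 = {(0, 0), (0, 2), (2, 1), (4, 0), (4, 2)}  # n = 5, m = 3

h1, M1, a1 = carpet_dims(A1, 4, 3)
h2, M2, a2 = carpet_dims(A2, 5, 3)
```

Since neither example has uniform fibers, the three dimensions are pairwise distinct, and the computed Assouad dimensions agree with the values $\log_3(2)+\log_4(3)$ and $1+\log_5(2)$ stated above.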
\begin{figure}
\caption{Carpet $S_1$}
\label{fig-43dust}
\end{figure}
\begin{figure}
\caption{Carpet $S_2$}
\label{fig-53min}
\end{figure}
Now, the Assouad dimension is a bi-Lipschitz invariant of a metric space, but it may vary under quasi-symmetric deformations (for example, under quasi-conformal homeomorphisms of the plane). The infimum of the values it can attain under these deformations is called the conformal Assouad dimension of the metric space $X$, and is denoted by $\mathcal{C}\mathrm{dim}_A(X)$. For more details see \cite{Hei-01-lect-analysis,Mac-Tys-cdimexpo}.
Calculating the conformal dimension (Assouad or Hausdorff) of a self-similar carpet is a challenging open problem (see, for example,~\cite{KL-04-confdim}). In~\cite{Bin-Hak-cdim-carpets}, progress is made towards calculating the conformal (Hausdorff) dimension of self-affine ($n>m$) carpets. However, calculating the conformal Assouad dimension of such carpets is quite simple. \begin{theorem}\label{thm-cdima}
Assume that $n>m$. If both $t<n$ and $s<m$, then $\mathcal{C}\mathrm{dim}_A(S)=0$.
Otherwise, $S$ is minimal for conformal Assouad dimension,
i.e., $\mathcal{C}\mathrm{dim}_A(S) = \dim_A(S)$. \end{theorem}
For our examples, we see that $\mathcal{C}\mathrm{dim}_A(S_1) = 0$, while the carpet $S_2$ is minimal for conformal Assouad dimension.
The key observation in this result is that, when $t=n$ or $s=m$, the weak tangent to $S$ built in the proof of Theorem~\ref{thm-main} is the product of a Cantor set and an interval, which is minimal for conformal Assouad dimension. Since quasi-symmetric maps behave well with respect to taking tangents, this gives the required bound. See Section~\ref{sec-cdima} for details.
The methods and techniques of this paper apply to more general self-affine sets. After the work of Bedford and McMullen, the Hausdorff and upper Minkowski dimension of more general sets were studied by Lalley and Gatzouras~\cite{Lal-Gatz-92-self-affine}, Bara\'nski~\cite{Baranski-07-fractals} and others. For a recent survey on such constructions, see Chen and Pesin~\cite{Chen-Pesin-10-survey}.
In Section~\ref{sec-lal-gatz}, we extend Theorems~\ref{thm-main} and \ref{thm-cdima} to the self-affine carpets of Lalley and Gatzouras. Rather than specifying a collection of rectangles in a grid, as with the carpets of Bedford and McMullen, the basic defining pattern of these carpets is a collection of $m$ disjoint rows of heights $b_1, b_2, \ldots, b_m$ in the unit square, where the $i$th row contains $n_i$ disjoint self-affine copies of the entire set of widths $a_{i1}, \ldots, a_{in_i}$. We require that for every $1 \leq i \leq m$, $1 \leq j \leq n_i$, we have $a_{ij} < b_i$. This pattern defines a self-affine set $S \subset [0,1]^2$. (See Section~\ref{sec-lal-gatz} for more details.)
Let $\beta_y \in (0, 1]$ be the Hausdorff dimension of the projection of $S$ onto the $y$-axis, namely, $\beta_y$ is the solution to $\sum_{i=1}^m b_i^{\beta_y}=1$. Let $\beta_x \in (0, 1]$ be the maximal Hausdorff dimension of a horizontal fiber, that is, $\beta_x = \max\{a : \exists i \text{ with } \sum_{j=1}^{n_i} a_{ij}^a = 1 \}$. Then we show the following. \begin{theorem}\label{thm-main-gl}
The Assouad dimension of $S$ is $\dim_A(S) = \beta_x + \beta_y.$ \end{theorem} \begin{theorem}\label{thm-cdima-gl}
If $\beta_x<1$ and $\beta_y<1$, then $\mathcal{C}\mathrm{dim}_A(S) = 0$.
Otherwise, $S$ is minimal for conformal Assouad dimension. \end{theorem}
We expect that the results of this paper can be extended, at least partially, to even more general cases. For example, the recent preprint of Bandt and K\"aen\-m\"aki~\cite{Ban-Kae-11-self-affine} demonstrates that tangents to generic points in certain self-affine sets contain sets of the form $C \times [0,1]$, where $C$ is a Cantor set. It would be interesting to study how general a phenomenon the zero/minimal dichotomy for conformal Assouad dimension is amongst self-affine sets.
The author thanks Jeremy Tyson for introducing him to this topic and for useful comments. He also thanks Antti K\"aenm\"aki and the referee for their helpful suggestions.
\section{Preliminary results}\label{sec-prelim} A metric space $X$ is \emph{doubling} if there exists an $N$ so that any ball can be covered by $N$ balls of half the radius. Repeatedly applying this property, we see that there exists some $C>0$ and $\alpha>0$ so that for any $r,R$ satisfying $0 < r \leq \frac{1}{2} R \leq \diam(X)$, any ball $B(x,R) \subset X$ may be covered by $C (\frac{R}{r})^\alpha$ balls of radius $r$.
The \emph{Assouad dimension} of a metric space $X$, denoted by $\dim_A(X)$, is the infimal value of $\alpha$ for which there exists a constant $C$ so that the above property holds. We always have \[
\dim_H (X) \leq \overline{\dim}_M (X) \leq \dim_A (X), \] and these inequalities may be strict. Unsurprisingly, if $X \subset \mathbb{R}^2$, then $\dim_A(X) \leq 2$.
Given $U \subset X$, the $\epsilon$-neighborhood of $U$ is the set \[
N(U,\epsilon) = \{ x \in X : \exists u \in U,\ d(x,u) < \epsilon\}. \] Recall that the Hausdorff distance between $U,V \subset X$ is the infimal $\epsilon$ so that $U \subset N(V, \epsilon)$ and $V \subset N(U,\epsilon)$. Denote this distance by $d_H(U,V)$. If $X$ is compact, and $\mathcal{M}(X)$ is the set of all closed subsets of $X$, then $(\mathcal{M}(X),d_H)$ is a compact metric space~\cite{BBI-01-metric-geom}.
We now use this convergence to give a non-trivial lower bound on the Assouad dimension of a set. For simplicity we restrict to the case of subsets of $\mathbb{R}^2$; however, this bound holds for general ``weak tangents''. \begin{proposition}\label{prop-tangent-dimA}
Fix a compact subset $X$ in $\mathbb{R}^2$.
Suppose $U$ is a compact subset of $X$.
Suppose that for each $k \in \mathbb{N}$, we have
some $U_k \subset \mathbb{R}^2$ that is similar to $U$,
i.e.\ $U_k$ is isometric to a possibly rescaled copy of $U$.
Finally, suppose that $U_k \cap X$ converges to $\hat{U} \subset X$
with respect to the Hausdorff distance.
Then \[\dim_A (\hat{U}) \leq \dim_A(U).\]
Moreover, $\mathcal{C}\mathrm{dim}_A(\hat{U}) \leq \mathcal{C}\mathrm{dim}_A(U)$. \end{proposition} \begin{proof}
Suppose not. Then there is some $\alpha$ so that
$\dim_A(U) < \alpha < \dim_A(\hat{U})$.
Then for all $D > 0$, there exists some $0 < r < R$ and
a set $P$ in $\hat{U}$ of cardinality at least $D(\frac{R}{r})^\alpha$
so that every pair of distinct points in $P$
is separated by at least $r$.
Since $U_k$ is similar to $U$, there is a fixed constant $C>0$
so that every radius $R$ ball in $U_k$ can be covered by
$C( \frac{R}{r})^\alpha$ balls of radius $r$.
On the other hand, for some sufficiently large
$k$ we can use $P$ to find a set
$Q \subset U_k \cap X \subset U_k$ that is
$\frac{r}{2}$-separated and lives in a ball of radius $2R$. Therefore
we require at least
\[
|Q| = |P| \geq D\left(\frac{R}{r}\right)^\alpha = 8^{-\alpha} D \left( \frac{2R}{r/4} \right)^\alpha
\]
balls of radius $\frac{r}{4}$ to cover $U_k$ inside this ball. For sufficiently large
$D$, this gives a contradiction.
The lower bound for the conformal Assouad dimension follows from the first part
of the proposition and an Arzel\`a-Ascoli type argument.
We sketch the argument for the reader's convenience; details are given
in~\cite{Mac-Tys-cdimexpo}.
Suppose that
$\mathcal{C}\mathrm{dim}_A(U) < \mathcal{C}\mathrm{dim}_A(\hat{U})$. Then there exists a quasi-symmetric homeomorphism
$f:U \rightarrow V$, where $\dim_A(V)<\mathcal{C}\mathrm{dim}_A(\hat{U})$.
We can take a weak tangent to $f$ and get a quasi-symmetric map
$\hat{f} : \hat{U} \rightarrow \hat{V}$, where $\hat{V}$ is some weak tangent to $V$.
Thus, by the first part of the proposition, we have a contradiction:
\[
\mathcal{C}\mathrm{dim}_A(\hat{U}) \leq \dim_A(\hat{V}) \leq \dim_A(V) < \mathcal{C}\mathrm{dim}_A(\hat{U}).\qedhere
\] \end{proof}
\section{Assouad dimension}\label{sec-dima} \begin{proof}[Proof of Theorem~\ref{thm-main}] First we prove the upper bound on the Assouad dimension. This follows the proof of the bound on upper Minkowski dimension given by McMullen~\cite{McM-84-Carpets}.
Since $n > m$, individual rectangles in the carpet get increasingly thin as we go down into the construction. To approximate squares with these rectangles we group them together as follows. For any $k \in \mathbb{N}$, choose $l < k$ so that $n^l \leq m^k < n^{l+1}$. That is, $l = \lfloor k\log_n(m) \rfloor$. For any $p,q\in\mathbb{N}$, let \begin{equation}\label{eq-Rk}
R_k(p,q) = \left[ \frac{p}{n^l}, \frac{p+1}{n^l} \right] \times
\left[ \frac{q}{m^k}, \frac{q+1}{m^k} \right]. \end{equation}
Let $\alpha = \log_m(s) + \log_n(t)$. Since rectangles of the form $R_k$ are present at every scale and location, and behave like balls of radius $m^{-k}$, the proof that $\dim_A(S) \leq \alpha$ reduces to the following lemma. \begin{lemma}\label{lem-rects}
There exists a constant $C$ so
that for every $1 \leq k' \leq k$, and any $p',q'$,
the set $S \cap \mathrm{Int}(R_{k'}(p',q'))$ can be covered using
at most $C m^{(k-k')\alpha}$ rectangles of the form $R_k(p,q)$. \end{lemma} \begin{proof}
Let $l'$ and $l$ be chosen as before, corresponding
to $k'$ and $k$ respectively. Fix $R_{k'}(p',q')$.
Let $N_k$ be the number of rectangles of the form
$R_k(p,q)$ that meet $S \cap \mathrm{Int}(R_{k'}(p',q'))$.
$N_k$ equals the number of ways to choose $(x_i)_{i=l'}^{l}$
and $(y_i)_{i=k'}^{k}$, subject to certain restrictions.
We have two cases to consider.
\noindent\textbf{Case 1:} $1 \leq l' \leq k' \leq l \leq k$. Then
\begin{enumerate}
\item $(x_i, y_i) \in A$, $y_i$ is fixed by $q'$, for $i = l'+1, \ldots, k'$,
\item $(x_i, y_i) \in A$, for $i = k'+1, \ldots, l$,
\item $(\widetilde{x_i}, y_i) \in A$, for some $\widetilde{x_i}$, $i = l+1, \ldots, k$.
\end{enumerate}
In this case
\[
N_k \leq (t)^{k'-l'} (|A|)^{l-k'} (s)^{k-l}
\leq t^{k'-l'} (st)^{l-k'} s^{k-l} = t^{l-l'} s^{k-k'},
\]
where we used that $|A| \leq st$.
\noindent\textbf{Case 2:} $1 \leq l' \leq l \leq k' \leq k$. Then
\begin{enumerate}
\item $(x_i, y_i) \in A$, $y_i$ is fixed by $q'$, for $i = l'+1, \ldots, l$,
\item $(\widetilde{x_i}, y_i) \in A$, for some $\widetilde{x_i}$, $y_i$ is fixed by $q'$,
for $i = l+1, \ldots, k'$,
\item $(\widetilde{x_i}, y_i) \in A$, for some $\widetilde{x_i}$, $i = k'+1, \ldots, k$.
\end{enumerate}
Again we see that
\[
N_k \leq (t)^{l-l'} (1)^{k'-l} (s)^{k-k'} = t^{l-l'} s^{k-k'}.
\]
Therefore,
\begin{align*}
\log_m (N_k) & \leq \log_m( t^{l-l'} s^{k-k'} ) = (l-l') \log_m(t) + (k-k')\log_m(s) \\
& \leq \Big(k\log_n(m) - k'\log_n(m)+1\Big) \log_m(t) + (k-k')\log_m(s) \\
& = (k-k')\Big(\log_n(t) + \log_m(s)\Big) + \log_m(t).\qedhere
\end{align*} \end{proof}
It remains to bound the Assouad dimension from below. Choose $y_*$ with $0 \leq y_* < m$ so that $t = t_{y_*}$. Fix some $x_*$ so that $(x_*, y_*) \in A$. We will follow this rectangle into the construction in order to build a suitable weak tangent.
For each $k \in \mathbb{N}$, let $p_k = \sum_{i=0}^{l-1} x_* n^i$, and let $q_k = \sum_{i=0}^{k-1} y_* m^i$, where $l$ is related to $k$ as before. Let $f_k : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be the similarity with scaling factor $m^k$ that takes $R_k(p_k,q_k)$ to $[0,m^k n^{-l}] \times [0,1]$. Note that $m^k n^{-l} \in [1,n)$.
Fix $X = [0,n+1] \times [0,1]$. Since $(\mathcal{M}(X),d_H)$ is compact, some subsequence of $f_k(S) \cap X$ converges to a compact set $\hat{S}$ in $X$. Furthermore, we can assume that $m^k n^{-l}$ converges to some $w \in [1,n]$. Let $g:\mathbb{R}^2\rightarrow \mathbb{R}^2$ be given by $g(x,y) = (x/w, y)$, and let \[
W = g\big( \hat{S} \cap ([0,w] \times [0,1])\big) \subset [0,1]^2. \]
$W$ looks like the product of two Cantor sets. To be precise, define $A'$ to be the set of pairs $(x,y)$ which satisfy $(x, y_*) \in A$ and $(\tilde{x},y) \in A$, for some $\tilde{x}$, and for $y_*$ as fixed above.
We can use $A'$ to build another carpet $S'=S(A')$. This carpet has the same $s$ value as before, and each non-empty row has $t$ entries. Therefore McMullen's result shows that \begin{equation}\label{eq-dimM-tangentcarpet}
\overline{\dim}_M(S') = \log_m(s) + \log_n(t). \end{equation} In fact, by construction $S'$ is the product of two (self-similar) Cantor sets $C_x$ and $C_y$, of dimensions $\log_n(t)$ and $\log_m(s)$ respectively. See Figure~\ref{fig-53zoom} for an enlarged part of carpet $S_2$ from the introduction, showing part of this structure emerging.
\begin{figure}
\caption{A magnified part of Carpet $S_2$}
\label{fig-53zoom}
\end{figure}
\begin{lemma}\label{lem-carpet-tangent}
With the above notation, $W = S'$. \end{lemma}
\begin{proof}
Consider rectangles in $[0,1]^2$ of the form
\begin{equation}\label{eq-special-rect}
\left[ \sum_{i=1}^{k-l} \frac{x_i}{n^i} ,
\sum_{i=1}^{k-l} \frac{x_i}{n^i} + n^{-(k-l)} \right] \times [0,1],
\end{equation}
where $(x_i, y_*) \in A$ for each $1 \leq i \leq k-l$.
Let $W_k$ be the subset of $[0,1]^2$ given by placing an affine copy
of $S$ into each such rectangle.
Note that $W_k$ is just an affine copy of the set
$R_k(p_k, q_k) \cap S$ (with uniformly bounded distortion).
Consequently, $W$ is the Hausdorff limit of the sets $W_k$.
Now, each rectangle in \eqref{eq-special-rect} is of width $n^{-(k-l)}$,
so the copy of $S$ inside is within Hausdorff distance $n^{-(k-l)}$ of a
copy of $C_y$ at the appropriate $x$-coordinate.
Moreover, the $x$-coordinates of the rectangles in \eqref{eq-special-rect}
are within Hausdorff distance $n^{-(k-l)}$ of $C_x$.
Therefore,
\[ d_H(W_k, C_x \times C_y) \leq 2 n^{-(k-l)}
\leq 2 \left(\frac{m}{n}\right)^k \rightarrow 0 \:\:\: \text{as } k \rightarrow \infty, \]
thus
$d_H(W, C_x \times C_y) = 0$, so $W = C_x \times C_y = S'$. \end{proof}
Equation~\eqref{eq-dimM-tangentcarpet}, Lemma~\ref{lem-carpet-tangent} and Proposition~\ref{prop-tangent-dimA} combine to give
\begin{equation*}
\log_m(s) + \log_n(t) = \overline{\dim}_M(W)
\leq \dim_A(W)
= \dim_A(\hat{S}) \leq \dim_A(S).\qedhere
\end{equation*} \end{proof}
\section{Conformal Assouad dimension}\label{sec-cdima} \begin{proof}[Proof of Theorem~\ref{thm-cdima}] We first consider the case when either $s=m$, or $t=n$ (or both). In this case we have constructed a weak tangent $W$ to $S$ that is the product of a self-similar Cantor set and a line. Such spaces are minimal for Assouad dimension~\cite[Lemma 6.3]{Pan-89-cdim}. Combining Theorem~\ref{thm-main} and the second part of Proposition~\ref{prop-tangent-dimA}, we have \[
\dim_A(S) = \dim_A(W) = \mathcal{C}\mathrm{dim}_A(W)
\leq \mathcal{C}\mathrm{dim}_A(S) \leq \dim_A(S). \]
Now we may assume that $s < m$ and $t < n$. We wish to show that $S$ has $\mathcal{C}\mathrm{dim}_A(S) = 0$. As seen in~\cite[Theorem 4.1]{Tys-01-CdimA}, it suffices to show that $S$ is \emph{uniformly disconnected}: there exists some $C>0$ so that for every ball $B(z,r) \subset S$, there is no $\frac{r}{C}$-chain of points joining $z$ to $S \setminus B(z,r)$. That is, there is no sequence $z=z_0, z_1, \ldots, z_N$ in $S$ so that $d(z_i, z_{i+1}) \leq \frac{r}{C}$ for $0 \leq i < N$, with $z_N \notin B(z,r)$.
This property is not immediate since, even though $t<n$, $S$ may project onto the unit interval in the $x$-axis. (See Figure~\ref{fig-43dust}.)
Suppose $z \in S \cap R_k(p,q)$ (see \eqref{eq-Rk}). Since $s < m$, any $\frac{1}{2}m^{-(k+1)}$-chain cannot travel vertically more than $m^{-k}$. In fact, its $y$-coordinate will stay entirely inside either $(q/m^k,(q+2)/m^k)$ or $((q-1)/m^k,(q+1)/m^k)$. We will assume the former, and show that suitable chains cannot travel too far to the right.
Consider the rectangles $R_k(p+i,q)$ and $R_k(p+i,q+1)$, for $1 \leq i \leq n$. Since $n^{-l} \geq m^{-k}$, any $\frac{1}{2}m^{-k}$-chain moving through these rectangles to the right must pass through either $R_k(p+i,q)$ or $R_k(p+i,q+1)$ for each $1 \leq i \leq n$. As $t<n$, for some $1 \leq i \leq n$ we must have that the interior of $R_k(p+i,q+1)$ does not meet $S$, and so the chain passes through $R_k(p+i,q)$ from left to right. Since $t<n$, it is impossible for any $\frac{1}{2}n^{-(l+1)}$-chain to travel through $S \cap R_k(p+i,q)$ from left to right.
A similar argument shows that chains cannot travel too far to the left.
In summary, we have shown that $\frac{1}{2}n^{-1}m^{-k}$-chains cannot escape from the ball $B(z,2n^{-(l-1)})$. (Note that $\frac{1}{2}n^{-1}m^{-k} \leq \min\{\frac{1}{2}m^{-(k+1)},\frac{1}{2}n^{-(l+1)}\}$.) Given arbitrary $r$, we can choose $k$ so that \[
2n^{-(l-1)} \leq 2 n^2 m^{-k} \leq r \leq 2 n^2 m^{-(k-1)}. \] Therefore, \[
\frac{1}{2}n^{-1}m^{-k} = \frac{ 2 n^2 m^{-(k-1)} }{4mn^3} \geq \frac{r}{4mn^3}. \] We have shown that for any $z \in S$, $r > 0$, no $\frac{r}{C}$-chain from $z$ can leave $B(z,r)$, where $C = 4mn^3$. \end{proof}
\section{Lalley-Gatzouras carpets}\label{sec-lal-gatz}
As discussed in the introduction, Lalley and Gatzouras calculated the Hausdorff and upper Minkowski dimensions of sets generalizing the construction of Bedford and McMullen. Such a set $S$ arises as the limit set of the semigroup generated by the mappings $A_{ij} : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined by \[
A_{ij}(x,y) = \big(a_{ij}x+c_{ij},\ b_i y + d_i \big), \quad (i,j) \in \mathcal{J}, \] where $\mathcal{J} = \{ (i,j) : 1 \leq i \leq m, 1 \leq j \leq n_i \}$ is the index set. The constants are fixed to satisfy $0 < a_{ij} < b_i < 1$ for each $(i,j)$, $\sum_{i=1}^m b_i \leq 1$, and $\sum_{j=1}^{n_i} a_{ij} \leq 1$ for each $i$. The self-affine copies are forced to be disjoint by requiring that $0 \leq d_1 < d_2 < \cdots < d_m < 1$ with $d_{i+1} \geq d_i + b_i$ and $1 \geq d_m+b_m$, and, for each $i$, $0 \leq c_{i1} < c_{i2} < \cdots < c_{in_i} < 1$ with $c_{i(j+1)} \geq c_{ij} + a_{ij}$, and $1 \geq c_{in_i}+a_{in_i}$.
Let $C_y$ be the self-similar Cantor set which is the projection of $S$ onto the $y$-axis. Recall that its Hausdorff dimension is $\beta_y$, where $\beta_y \in (0,1]$ is the solution to $\sum_{i=1}^m b_i^{\beta_y}=1$. Lalley and Gatzouras calculate the following. \begin{theorem}[{\cite[Theorem 2.4]{Lal-Gatz-92-self-affine}}]\label{thm-lg-mink}
The upper Minkowski dimension of $S$ is the unique $\delta$ satisfying
$\sum_{i=1}^m \sum_{j=1}^{n_i} b_i^{\beta_y} a_{ij}^{\delta-\beta_y} = 1$. \end{theorem}
Choose $i_* \in \{1, 2, \ldots, m\}$ so that the solution $\beta_x$ to $\sum_{j=1}^{n_{i_*}} {a_{i_*j}}^{\beta_x} = 1$ is maximized. The transformations $T_j: \mathbb{R} \rightarrow \mathbb{R}$ defined by $T_j(x) = a_{i_*j}x+c_{i_*j}$, for $1 \leq j \leq n_{i_*}$, generate a semigroup whose limit set $C_x$ is a self-similar Cantor set of Hausdorff dimension $\beta_x$.
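To make the exponents $\beta_y$, $\beta_x$ and $\delta$ concrete, here is a short numerical sketch (our own addition; the pattern below, with its heights $b_i$ and widths $a_{ij}$, is invented purely for illustration and does not come from the paper). Each exponent is the root of a decreasing function and is found by bisection; we then check the ordering $\beta_y \leq \delta \leq \beta_x + \beta_y$ consistent with Theorems~\ref{thm-lg-mink} and \ref{thm-main-gl}.

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Bisection for the root of a function f that is decreasing on [lo, hi]."""
    assert f(lo) >= 0 >= f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A made-up Lalley-Gatzouras pattern: two rows of height b_i = 0.4,
# with column widths a_ij < b_i in each row.
b = [0.4, 0.4]
a = [[0.2, 0.15], [0.3]]

# beta_y solves sum_i b_i^beta = 1 (dimension of the projection C_y).
beta_y = bisect_root(lambda t: sum(bi**t for bi in b) - 1, 0.0, 1.0)

# beta_x is the largest row-wise solution of sum_j a_ij^beta = 1.
beta_x = max(bisect_root(lambda t: sum(x**t for x in row) - 1, 0.0, 1.0)
             for row in a)

# delta solves sum_i sum_j b_i^beta_y * a_ij^(delta - beta_y) = 1
# (the upper Minkowski dimension, by Lalley and Gatzouras).
delta = bisect_root(
    lambda d: sum(b[i]**beta_y * x**(d - beta_y)
                  for i, row in enumerate(a) for x in row) - 1,
    beta_y, 2.0)

dim_A = beta_x + beta_y  # the Assouad dimension of S, by the theorem above
```

Bisection suffices here because each defining function is strictly decreasing in the exponent, so the roots are unique.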
The proof of Lemma~\ref{lem-carpet-tangent} easily adapts to give the following lemma. \begin{lemma}\label{lem-carpet-tangent-lg}
There is a weak tangent $W$ of $S$ containing a bi-Lipschitz copy of $C_x \times C_y$. \end{lemma} As a consequence, we have $\dim_A(S) \geq \beta_x+\beta_y$. Assuming Theorem~\ref{thm-main-gl} to be true, we can calculate the conformal Assouad dimension of $S$. \begin{proof}[Proof of Theorem~\ref{thm-cdima-gl}]
If $\beta_x = 1$ or $\beta_y = 1$, then one of $C_x$ or $C_y$ is the entire interval.
As in the proof of Theorem~\ref{thm-cdima}, this implies that $S$ is minimal for conformal Assouad dimension.
If $\beta_x < 1$ and $\beta_y < 1$, then again we can show that $S$ is uniformly disconnected,
and so we have $\mathcal{C}\mathrm{dim}_A(S)=0$. \end{proof}
All that remains is to complete the proof of Theorem~\ref{thm-main-gl} by showing that $\dim_A(S) \leq \beta_x + \beta_y$. To do this, we adapt the somewhat technical arguments of Lalley and Gatzouras used to prove Theorem~\ref{thm-lg-mink}, and we assume that the reader has access to their paper. In this proof, $C$ is a constant which varies as necessary. \begin{proof}[Proof of Theorem~\ref{thm-main-gl}]
First we must define the analogue of the approximate squares of \eqref{eq-Rk}.
Note that we move between the sequence space $\mathcal{J}^\mathbb{N}$ and the limit set $S$ as necessary.
Given $\omega = ((i_1,j_1),(i_2,j_2),\ldots) \in \mathcal{J}^\mathbb{N}$, and $k \in \mathbb{N}$,
let $l \in \mathbb{N}$ be maximal so that
\[
R_k(\omega) := \prod_{\nu=1}^k b_{i_\nu} \leq \prod_{\nu=1}^l a_{i_\nu j_\nu}.
\]
Note that $l \leq k$.
An \emph{approximate square} is the set $B_k(\omega)$ of all $\omega' = ((i_1', j_1'), \ldots) \in \mathcal{J}^\mathbb{N}$
satisfying $i_\nu' = i_\nu$ for $1 \leq \nu \leq k$, and $j_\nu' = j_\nu$ for $1 \leq \nu \leq l$.
As in the proof of Theorem~\ref{thm-main}, and \cite[Lemma 2.1]{Lal-Gatz-92-self-affine}, it suffices
to show that there exists $C > 0$ so that for any $\epsilon >0$,
any approximate square $B_{k'}(\omega')$ can be covered using
at most $C(R_{k'}(\omega') / \epsilon)^{\beta_x+\beta_y}$
approximate squares of diameter comparable to $\epsilon$.
Following \cite[Lemma 2.2]{Lal-Gatz-92-self-affine}, it suffices to count the number of elements
of the following set, for fixed $\omega'$, $k'$ and $l'$:
let $\mathcal{F}_\epsilon^*$ be the set of all
\[
(i_1, i_2, \ldots, i_{k+1}; j_1, j_2, \ldots, j_{l+1}),
\]
satisfying
\[
\prod_{\nu=1}^k b_{i_\nu} \geq \epsilon > \prod_{\nu=1}^{k+1} b_{i_\nu}
\quad \text{and} \quad
\prod_{\nu=1}^l a_{i_\nu j_\nu} \geq \epsilon > \prod_{\nu=1}^{l+1} a_{i_\nu j_\nu},
\]
with $i_\nu = i_\nu'$ for $\nu=1, \ldots, k'$, $j_\nu = j_\nu'$ for $\nu=1, \ldots, l'$,
and, finally, we require one of the following two conditions to hold.
\noindent\textbf{Condition 1:} $1 \leq l' \leq k' \leq l+1 \leq k+1$. Then
\begin{enumerate}
\item $i_\nu = i_\nu'$, $j_\nu \in \{1, \ldots, n_{i_\nu'}\}$, for $\nu = l'+1, \ldots, k'$,
\item $(i_\nu, j_\nu) \in \mathcal{J}$, for $\nu = k'+1, \ldots, l+1$,
\item $i_\nu \in \{1, \ldots, m\}$, for $\nu = l+2, \ldots, k+1$.
\end{enumerate}
\noindent\textbf{Condition 2:} $1 \leq l' \leq l+1 \leq k' \leq k+1$. Then
\begin{enumerate}
\item $i_\nu = i_\nu'$, $j_\nu \in \{1, \ldots, n_{i_\nu'}\}$, for $\nu = l'+1, \ldots, l+1$,
\item $i_\nu = i_\nu'$, for $\nu = l+2, \ldots, k'$,
\item $i_\nu \in \{1, \ldots, m\}$, for $\nu = k'+1, \ldots, k+1$.
\end{enumerate}
We begin by counting the size of the subset $\mathcal{F}_2$ of $\mathcal{F}_\epsilon^*$ with Condition 2.
Fix $R = R_{k'}(\omega')$.
In (3), we count the set
\[
\Bigg\{ (i_{k'+1}, \ldots, i_{k+1}) : \prod_{\nu=k'+1}^k b_{i_\nu} \geq \frac{\epsilon}{R} >
\prod_{\nu=k'+1}^{k+1} b_{i_\nu} \Bigg\},
\]
which by \cite[Lemma 2.3]{Lal-Gatz-92-self-affine} has cardinality at most $C(R/\epsilon)^{\beta_y}$.
The number of choices in (1) is bounded from above by a constant multiple of the number of
$\epsilon$ balls needed to cover a horizontal cross section of $S$ of length $R$,
which is bounded from above by $C(R/\epsilon)^{\beta_x}$.
These choices combine to give an upper bound of $C(R/\epsilon)^{\beta_x+\beta_y}$ for the size of $\mathcal{F}_2$.
It remains to count the size of the subset $\mathcal{F}_1$ of $\mathcal{F}_\epsilon^*$ satisfying Condition 1.
By choosing $j_{l'+1}, \ldots, j_{k'}$, we have determined a rectangle $T$ of height $R$
and width $R u$, where $u$ is the aspect ratio $u= \prod_{\nu = l'+1}^{k'} a_{i_\nu' j_\nu}$.
The number of rectangles of width $\epsilon$ and height $\epsilon/u$ required to cover $T$
equals the number of approximate squares of size $\epsilon$ needed to cover an approximate square of
side $Ru$, which by Theorem~\ref{thm-lg-mink} is bounded by $C(Ru / \epsilon)^\delta \leq C (Ru/\epsilon)^{\beta_x+\beta_y}$.
The number of approximate squares of side $\epsilon$ needed to cover a rectangle of width $\epsilon$ and
height $\epsilon/u$ is at most $C(\frac{\epsilon/u}{\epsilon})^{\beta_y} = C(1/u)^{\beta_y}$,
by \cite[Lemma 2.3]{Lal-Gatz-92-self-affine}.
Combining these observations, we see that the size of $\mathcal{F}_1$ is at most:
\begin{align*}
C\ \sum_{j_{l'+1}, \ldots, j_{k'}} \bigg( \frac{R u}{\epsilon} \bigg)^{\beta_x + \beta_y}
\left(\frac{1}{u} \right)^{\beta_y}
& = C \left( \frac{R}{\epsilon} \right)^{\beta_x+\beta_y}
\sum_{j_{l'+1}, \ldots, j_{k'}} \bigg( \prod_{\nu = l'+1}^{k'} a_{i_\nu' j_\nu} \bigg)^{\beta_x}
\\ & \leq C \left( \frac{R}{\epsilon} \right)^{\beta_x+\beta_y}.
\end{align*}
The last inequality follows from the definition of $\beta_x$.
Combining both cases, we conclude that, as desired,
\[ | \mathcal{F}_\epsilon^* | \leq C \left(\frac{R}{\epsilon}\right)^{\beta_x+\beta_y}. \qedhere \] \end{proof}
\end{document} |
\begin{document}
\noindent
\title{Efficient routing in Poisson small-world networks}
\author{ M. {\sc Draief} \thanks{Statistical Laboratory, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WB, UK. E-mail: {\tt M.Draief@statslab.cam.ac.uk}} ~and~ A. {\sc Ganesh} \thanks{Microsoft Research, 7 J.J. Thomson Avenue, Cambridge CB3 0FB, UK. E-mail: {\tt ajg@microsoft.com} }}
\date{}
\maketitle
\begin{abstract} \noindent
In recent work, Jon Kleinberg considered a small-world network model consisting of a $d$-dimensional lattice augmented with shortcuts. The probability of a shortcut being present between two points decays as a power, $r^{-\alpha}$, of the distance $r$ between them. Kleinberg showed that greedy routing is efficient if $\alpha = d$ and that there is no efficient decentralised routing algorithm if $\alpha \neq d$. The results were extended to a continuum model by Franceschetti and Meester. In our work, we extend the result to more realistic models constructed from a Poisson point process, wherein each point is connected to all its neighbours within some fixed radius, as well as possessing random shortcuts to more distant nodes as described above. \end{abstract}
\section{Introduction}
A classical random graph model introduced by Erd\H{o}s and R\'enyi consists of $n$ nodes, with the edge between any pair of vertices being present with probability $p(n)$, independent of other pairs. Recently, there has been considerable interest in alternative models where the nodes are given coordinates in a Euclidean space, and the probability of an edge between a pair of nodes $u$ and $v$ is given by a function $g(\cdot)$ of the distance $r(u,v)$ between the nodes; edges between different node pairs are independent. Such `random connection' or `spatial random graph' models and variants thereof arise, for instance, in the study of wireless communication networks.
The ``small-world phenomenon" (the principle that all people are linked by short chains of acquaintances), which has long been a matter of folklore, was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram \cite{Mil67}. Recent works have suggested that the phenomenon is pervasive in networks arising in nature and technology, and motivated interest in mathematical models of such networks. While Erd\H{o}s-R\'enyi random graphs possess the property of having a small diameter (smaller than logarithmic in the number of nodes, above the connectivity threshold for $p(n)$), they are not good models for social networks because of the independence assumption. On the other hand, spatial random graphs are better at capturing clustering because of the implicit dependence between edges induced by the connection function $g(\cdot)$.
Watts and Strogatz \cite{WaSt98} conducted a set of re-wiring experiments on graphs, and observed that by re-wiring a few random links in finite lattices, the average path length was reduced drastically (approaching that of random graphs). This led them to propose a model of ``small-world graphs" which essentially consists of a lattice augmented with random links acting as shortcuts, which play an important role in shrinking the average path length. By the length of a path we mean the number of edges on it, and distance refers to graph distance (length of shortest path) unless otherwise specified.
The diameter of the Watts-Strogatz model in the 1-dimensional case was obtained by Barbour and Reinert \cite{BR}. Benjamini and Berger \cite{BB} considered a variant of this 1-dimensional model wherein the shortcut between any pair of nodes, instead of being present with constant probability, is present with probability given by a connection function $g(\cdot)$; they specifically considered connection functions of the form $g(r) \sim \beta r^{-\alpha}$, where $\beta$ and $\alpha$ are given constants, and $r(u,v)$ is the graph distance between $u$ and $v$ in the underlying lattice (i.e., the $L_1$ distance).
The general $d$-dimensional version of this model, on the finite lattice with $n^d$ points, was studied by Coppersmith et al. \cite{CGS}. They showed that the diameter of the graph is (i) $\Theta(\log n/\log \log n)$ if $\alpha=d$, (ii) at most polylogarithmic in $n$ if $d<\alpha<2d$, and (iii) at least polynomial in $n$ if $\alpha>2d$. Finally, it was shown by Benjamini et al. \cite{BKPS} that the diameter is a constant if $\alpha<d$.
The sociological experiments of Milgram demonstrated not only that there is a short chain of acquaintances between strangers but also that they are able to find such chains. What sort of graph models have this property? Specifically, when can decentralised routing algorithms (which we define later) find a short path between arbitrary source and destination nodes?
This question was addressed by Jon Kleinberg \cite{Klei00} for the class of finite $d$-dimensional lattices augmented with shortcuts, where the probability of a shortcut being present between two nodes decays as a power, $r^{-\alpha}$, of the distance $r$ between them. Kleinberg showed that greedy routing is efficient if $\alpha = d$ and that there is no efficient decentralised routing algorithm if $\alpha \neq d$. The results were extended to a continuum model by Franceschetti and Meester \cite{FrMee04}. Note that these results show that decentralised algorithms cannot find short routes when $\alpha \neq d$, even though such routes are present for $\alpha<2d$ by the results of Benjamini et al. and Coppersmith et al. cited above; when $\alpha > 2d$, no short routes are present.
\section{Our Model}
In this work, we consider a model constructed from a Poisson point process on a finite square, wherein each point is connected to all its neighbours within some fixed radius, as well as possessing random shortcuts to more distant nodes. More precisely: \begin{itemize} \item We consider a sequence of graphs indexed by $n\in \mathbb{N}$. \item Nodes form a Poisson process of rate $1$ on the square $[0,\sqrt{n}]^2$. \item Each node $x$ is linked to all nodes that are at distance less than $r_n=\sqrt{c\log n}$ for a sufficiently large constant $c$. In particular, if $c>1/\pi$, then this graph is connected with high probability (abbreviated whp, and meaning with probability going to 1 as $n$ tends to infinity); see \cite{penrose}. These links are referred to as local edges and the corresponding nodes as the local contacts of $x$. \item For two nodes $u$ and $v$ such that ${r}(u,v) > \sqrt{c\log n}$, the edge $(u,v)$ is present with probability $a_n{r}(u,v)^{-\alpha} \wedge 1$. Such edges are referred to as shortcuts. The parameter $a_n$ is chosen so that the expected number of shortcuts per node is equal to some specified constant, ${\overline d}$. \end{itemize}
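The construction above can be sketched in code. This is our own illustrative simulation, not part of the paper: all function names are ours, and the normalisation $a_n = \overline{d}/\log n$ is an assumption appropriate only for $\alpha = 2$ (compare the estimate (\ref{eq:normalisation1}) below); in general $a_n$ must be tuned so that the expected shortcut degree equals $\overline{d}$.

```python
import math
import random

def poisson_sample(lam, rng):
    """Sample N ~ Poisson(lam) by Knuth's product-of-uniforms method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def torus_dist(p, q, side):
    """Distance on the torus obtained by identifying opposite edges."""
    dx = min(abs(p[0] - q[0]), side - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), side - abs(p[1] - q[1]))
    return math.hypot(dx, dy)

def build_graph(n, c=5.0, alpha=2.0, d_bar=1.0, seed=0):
    """Poisson points of rate 1 on [0, sqrt(n)]^2; local edges within
    radius sqrt(c log n); shortcuts between more distant pairs, each
    present independently with probability min(a_n * r**(-alpha), 1)."""
    rng = random.Random(seed)
    side = math.sqrt(n)
    pts = [(rng.uniform(0, side), rng.uniform(0, side))
           for _ in range(poisson_sample(n, rng))]
    r_local = math.sqrt(c * math.log(n))
    a_n = d_bar / math.log(n)  # assumed normalisation, alpha = 2 case only
    local, shortcuts = [], []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            r = torus_dist(pts[i], pts[j], side)
            if r < r_local:
                local.append((i, j))
            elif rng.random() < min(a_n * r ** (-alpha), 1.0):
                shortcuts.append((i, j))
    return pts, local, shortcuts
```

The quadratic pair loop is fine for a sketch; a serious simulation would use a spatial index for the local edges.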
The objective is to route a message from an arbitrary source node $s$ to an arbitrary destination $t$ using a small number of hops. We are interested in decentralised routing algorithms, which do not require global knowledge of the graph topology. It is assumed throughout that each node knows its location (co-ordinates) on the plane, as well as the location of all its neighbours, both local and shortcut, and of the destination $t$. We show that efficient decentralised routing is possible only if $\alpha = 2$. More precisely, we show the following: \begin{itemize} \item[$\bullet$] $\alpha=2$: there is a greedy decentralised algorithm to route a message from source to destination in $O(\log^2 n)$ hops. \item[$\bullet$] $\alpha<2$: any decentralised routing needs more than $n^{\gamma}$ hops on average, for any $\gamma$ such that $\gamma<(2-\alpha)/6$. \item[$\bullet$] $\alpha>2$: any decentralised routing needs more than $n^{\gamma}$ hops on average, for any $\gamma<\frac{\alpha-2}{2(\alpha-1)}$. \end{itemize}
As noted by Kleinberg for the lattice model, the case $\alpha=2$ corresponds to a ``scale-free" network: the expected number of shortcuts from a node $x$ to nodes which lie between distance $r$ and $2r$ from it is the same for any $r$. It was observed by Franceschetti and Meester in their continuum model that this property is related to the impossibility of efficient decentralised routing when $\alpha \neq 2$ through the fact that shortcuts can't make sufficient progress towards the destination when $\alpha > 2$ (they are too short) while they can't home in on small enough neighbourhoods of the destination when $\alpha < 2$ (they are too long). Similar remarks apply to our model as well.
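The scale-free property can be recorded as a one-line computation (our own remark, using the continuum approximation for the shortcut density, with $a_n$ the normalisation constant introduced above): the expected number of shortcuts from $x$ to nodes at distance between $r$ and $2r$ is approximately

```latex
a_n \int_r^{2r} x^{-2}\, 2\pi x\, dx \;=\; 2\pi a_n \log 2,
```

which is independent of $r$; for $\alpha \neq 2$ the corresponding integral $\int_r^{2r} x^{1-\alpha}\,dx$ grows or decays with $r$.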
A model very similar to ours was considered by Sharma and Mazumdar \cite{SM} who use it to describe an ad-hoc sensor network. The sensors are located at the points of a Poisson process and can communicate with nearby sensors through wireless links (corresponding to local contacts). In addition, it is possible to deploy a small number of wired links (corresponding to shortcuts), and the question they address is that of how to place these wired links in order to enable efficient decentralised routing.
In the analysis presented below, we ignore edge effects for ease of exposition. This is equivalent to considering distances as being defined on the torus obtained by identifying opposite edges of the square.
\section{Efficiency of greedy routing when $\alpha=2$}
When $\alpha=2$, we show that the following \emph{approximately} greedy algorithm succeeds whp in reaching the destination in a number of hops which is polylogarithmic in $n$, the expected number of nodes.
Denote by $C(u,r)$ the circle of radius $r$ centred at node $u$. If there is no direct link from the source $s$ to the destination $t$, then the message is passed via intermediate nodes as follows. At each stage, the message carries the address (co-ordinates) of the destination $t$, as well as a radius $r$ which is initialised to ${r}(s,t)$, the distance between $s$ and $t$. Suppose the message is currently at node $x$ and has radius $r > \sqrt{c\log n}$. (If $r \le \sqrt{c\log n}$, then the node which updated $r$ would have contained $t$ in its local contact list and delivered the message immediately.) If node $x$ has a shortcut to some node $y \in A(t,r)$, where the annulus $A(t,r)$ is defined as $A(t,r)= C(t,\frac{r}{2}) \setminus C(t,\frac{r}{4})$, then $x$ forwards the message to $y$. If there is more than one such node, the choice can be arbitrary. Otherwise, it forwards the message to one of its local contacts which is closer to $t$ than itself. When a node $y$ receives a message, it updates $r$ to $r/2$ if ${r}(y,t)\le r/2$, and leaves $r$ unchanged otherwise.
In other words, if $x$ can find a shortcut which reduces the distance to the destination by at least a half but by no more than three-quarters, it uses such a shortcut. Otherwise, it uses a local contact to reduce the distance to the destination. In that sense, the algorithm is approximately greedy. The reason for considering such an algorithm rather than a greedy algorithm that would minimize the distance to the destination at each step is to preserve independence, which greatly simplifies the analysis. Note that if a greedy step from $x$ takes us to $y$ (i.e., of all nodes to which $x$ possesses a shortcut, $y$ is closest to $t$), then the conditional law of the point process in the circle $C(t,r(t,y))$ is no longer unit rate Poisson. The fact that there are no shortcuts from $x$ to nodes within this circle biases the probability law and greatly complicates the analysis. Our approximate greedy algorithm gets around this problem.
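A single step of the approximately greedy algorithm can be sketched as follows. This is our own rendering, not the paper's: the adjacency maps `contacts` and `shortcuts` and the distance function `dist` are assumed data structures, and we pick the first suitable shortcut, as the paper allows an arbitrary choice.

```python
import math

def route_step(x, t, r, contacts, shortcuts, dist):
    """One step of the approximately greedy algorithm.  Returns the next
    node and the (possibly halved) radius carried by the message."""
    if t in contacts[x] or t in shortcuts[x]:
        return t, r  # direct delivery to the destination
    # Prefer a shortcut landing in the annulus A(t,r) = C(t,r/2) \ C(t,r/4).
    nxt = next((y for y in shortcuts[x] if r / 4 <= dist(y, t) <= r / 2), None)
    if nxt is None:
        # Otherwise use a local contact closer to t than x; taking the
        # closest one is our choice -- any strictly closer contact would do.
        nxt = min(contacts[x], key=lambda y: dist(y, t))
    # The receiving node halves r if it is within r/2 of the destination.
    return nxt, (r / 2 if dist(nxt, t) <= r / 2 else r)
```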
Observe that if the message passes through a node $x$, the value of $r$ immediately after visiting $x$ lies between ${r}(x,t)$ and $2{r}(x,t)$.
We have implicitly assumed that any node can find a local contact closer to $t$ than itself. We first show that this assumption holds whp if $c$ is chosen sufficiently large.
Fix $c>0$ and $n\in \mathbb{N}$. For two points $x$ and $y$ in the square $[0,\sqrt{n}]^2$, and a realisation $\omega$ of the unit rate Poisson process on the square, define the properties $$ {\cal P}_n(x,y,\omega) = \{ \exists \ u\in \omega: {r}(u,y) < {r}(x,y) \quad \mbox{and} \quad {r}(u,x) \le \sqrt{c\log n} \}, $$ and $$ {\cal P}_n(\omega) = \bigwedge_{(x,y):{r}(x,y) \ge \sqrt{c\log n}} {\cal P}_n(x,y,\omega). $$ \begin{lem} \label{lem:good_local_contact} If $c>0$ is sufficiently large, then $P({\cal P}_n(\cdot)) \to 1$ as $n$ tends to infinity. \end{lem}
In words, with high probability, any two points $x$ and $y$ in the square $[0,\sqrt{n}]^2$ with ${r}(x,y) > \sqrt{c\log n}$ have the property that there is a point $u$ of the unit rate Poisson process within distance $\sqrt{c\log n}$ of $x$ which is closer than $x$ to $y$. In particular, if $x$ and $y$ are themselves points of the Poisson process, then $u$ is a local contact of $x$ which is closer to $y$. The key point to note about the lemma is that it gives a probability bound which is uniform over all such node pairs.
\begin{proof} Write $t$ for the second point (the point $y$ in the statement of the lemma) and suppose ${r}(x,t)\ge \sqrt{c\log n}$. Consider the circle $C_1$ of radius $\sqrt{c\log n}$ centred at $x$ and the circle $C_2$ of radius ${r}(x,t)$ centred at $t$. For any point $y\neq x$ in their intersection, ${r}(y,t) < {r}(x,t)$. Moreover, the intersection contains a sector of $C_1$ of angle $2\pi/3$. Denote this sector $D_1$. Now consider a tessellation of the square $[0,\sqrt{n}]^2$ by small squares of side $\beta \sqrt{c\log n}$.
Note that for a sufficiently small geometrical constant $\beta$ that doesn't depend on $c$ or $n$ ($\beta =1/2$ suffices), the sector $D_1$ fully contains at least one of the smaller squares. Hence, if every small square contains at least one point of the Poisson process, then every node at distance greater than $\sqrt{c\log n}$ from $t$ can find at least one local contact which is closer to $t$. Number the small squares in some order and let $X_i$ denote the number of nodes in the $i^{\rm th}$ small square, $i=1,\ldots,n/(\beta^2 c\log n)$. The number of squares is assumed to be an integer for simplicity. Clearly, the $X_i$ are iid Poisson random variables with mean $\beta^2 c\log n$. Hence, by the union bound, $$ P(\exists \ i: X_i=0) \le \sum_{i=1}^{n/(\beta^2 c\log n)} P(X_i=0) = \frac{n}{\beta^2 c\log n} e^{-\beta^2 c\log n}, $$ which goes to zero as $n$ tends to infinity, provided that $\beta^2 c>1$. In particular, $c>4$ suffices since we can take $\beta=1/2$. \end{proof}
We now state the main result of this section.
\begin{thm} \label{thm:beta2} Consider the small world random graph described above with $\alpha =2$, expected node degree $\overline{d}=1$, and $c>0$ sufficiently large, as required by Lemma \ref{lem:good_local_contact}. Then, the number of hops for message delivery between any pair of nodes is of order $\log^2 n$ whp. \end{thm}
\begin{proof} We first evaluate the normalisation constant $a_n$ by noting that the expected degree, $\overline{d}$, of a node located at the centre of the square satisfies $$ \overline{d} \le a_n \int_{\sqrt{c\log n}}^{\sqrt{n/2}} x^{-2} 2\pi x dx = \pi a_n (\log n-\log \log n - \log (2c)), $$ and so \begin{equation} \label{eq:normalisation1} a_n \ge \frac{1}{\log n}, \end{equation} for all $n$ sufficiently large, by the assumption that $\overline{d}=1$.
Next, we compute the probability of finding a suitable shortcut at each step of the greedy routing algorithm. We think of the routing algorithm as proceeding in phases. The value of $r$ is halved at the end of each phase. The value of $r$ immediately after the message reaches a node $x$ satisfies the relation ${r}(x,t) \in (r/2,r]$ at each step of the routing algorithm.
We suppose that $r > k \sqrt{c\log n}$, for some large constant $k$.
Denote by $N_A$ the number of nodes in the annulus $A(t,r)$ and observe that $N_A$ is Poisson with mean $3\pi r^2/16$. The distance from $x$ to any of these nodes is bounded above by $3r/2$, and so the probability that a shortcut from $x$ is incident on a particular one of these nodes is bounded below by $a_n (3r/2)^{-2}$. Thus, conditional on $N_A$, the probability that $x$ has a shortcut to one of the $N_A$ nodes in $A(t,r)$ is bounded below by \begin{equation} \label{eq:good_hop_prob1} p(r,N_A) = 1 - \Bigl( 1- \frac{4a_n}{9r^2} \Bigr)^{N_A}. \end{equation} If $x$ doesn't have such a shortcut, the message is passed via local contacts which are successively closer to $t$, and hence satisfy the same lower bound on the probability of a shortcut to $A(t,r)$. Consequently, the number of local steps $L_x$ until a shortcut is found is bounded above by a geometric random variable with conditional mean $1/p(r,N_A)$. Since $N_A \sim \mbox{Pois}(3\pi r^2/16)$, we have by a standard application of the Chernoff bound that $$ P(N_A \le \gamma r^2/16) \le \exp \Bigl( -\frac{(3\pi-\gamma)r^2}{16} + \frac{\gamma r^2}{16}\log\frac{3\pi}{\gamma} \Bigr), $$ for any $\gamma<3\pi$.
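The Chernoff bound invoked here is the standard lower-tail bound for a Poisson random variable; we record the computation (not spelled out in the original) for completeness. For $N \sim \mathrm{Pois}(\lambda)$ and $0<a<\lambda$, optimising $P(N\le a)\le e^{\theta a}\,E[e^{-\theta N}]$ over $\theta>0$ gives

```latex
P(N \le a) \;\le\; e^{-\lambda}\Bigl(\frac{e\lambda}{a}\Bigr)^{a}
\;=\; \exp\Bigl(-(\lambda-a) + a\log\frac{\lambda}{a}\Bigr),
```

and the displayed inequality follows on substituting $\lambda = 3\pi r^2/16$ and $a = \gamma r^2/16$.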
Taking $\gamma = 3\pi/2$ in the bound above and recalling that $r \ge k\sqrt{c\log n}$, we obtain \begin{equation} \label{eq:node_number_bd} P \Bigl( N_A \le \frac{3\pi r^2}{32} \Bigr) \le \exp \Bigl( -\frac{3\pi k^2 c\log n}{32} (1-\log 2) \Bigr). \end{equation}
Suppose now that $N_A < 3\pi r^2/32$. The number of local hops, $L_x$, to route the message from $x$ to $A$ is bounded above by the number of nodes outside $A$, since the distance to $t$ is strictly decreasing after each hop. Hence, \begin{equation} \label{eq:localhops_condbd1}
E \Bigl[ L_x \Bigm| N_A < \frac{3\pi r^2}{32} \Bigr] \le n- \mbox{area}(A) \le n. \end{equation}
Next, if $N_A \ge 3\pi r^2/32$, then we have by (\ref{eq:good_hop_prob1}) and (\ref{eq:normalisation1}) that
$$ p(r,N_A) \ge 1 - \exp \Bigl( -\frac{\pi a_n}{24} \Bigr) \ge 1 - \exp \Bigl( -\frac{\pi}{24\log n} \Bigr) \ge \frac{\pi}{48\log n}, $$
where the last inequality holds for all $n$ sufficiently large. Since the number of hops to reach $A$ is bounded above by a geometric random variable with mean $1/p(r,N_A)$, we have \begin{equation} \label{eq:localhops_condbd2}
E \Bigl[ L_x \Bigm| N_A \ge \frac{3\pi r^2}{32} \Bigr] \le \frac{48}{\pi}\log n. \end{equation} Finally, we obtain from (\ref{eq:node_number_bd}), (\ref{eq:localhops_condbd1}) and (\ref{eq:localhops_condbd2}) that $$ E[L_x] \le n\exp \Bigl( -\frac{3\pi k^2 c(1-\log 2)}{32} \log n \Bigr) + \frac{48}{\pi}\log n. $$ The first term in the sum above can be made arbitrarily small by choosing $k$ large enough, so $E[L_x]=O(\log n)$. It can also be seen from the arguments above that $L_x=O(\log n)$ whp. In other words, while $r\ge k\sqrt{c\log n}$, the number of hops during each phase is of order $\log n$. Moreover, the number of such phases is of order $\log n$ since the initial value of $r$ is at most $\sqrt{2n}$, and $r$ halves at the end of each phase.
Hence, the total number of hops until $r < k\sqrt{c\log n}$ is of order $\log^2 n$. Once the message reaches a node $x$ with ${r}(x,t) < k\sqrt{c\log n}$, the number of additional hops to reach $t$ is bounded above by the total number of nodes in the circle $C(t,k\sqrt{c\log n})$. By using the Chernoff bound for a Poisson random variable, it can be shown that this number is of order $\log n$ whp. This completes the proof of the theorem. \end{proof}
\section{Impossibility of efficient routing when $\alpha \neq 2$}
We now show that if $\alpha<2$, then no decentralised algorithm can route between arbitrary source-destination pairs in time which is polylogarithmic in $n$. In fact, the number of routing hops is polynomial in $n$ with some fractional power that depends on $\alpha$.
We now make precise what we mean by a decentralised routing algorithm. As specified earlier, each node knows the locations of all its local contacts within distance $\sqrt{c\log n}$ and of all its shortcut neighbours, as well as other nodes (if any) from which shortcuts are incident to it. A routing algorithm specifies a (possibly random) sequence of nodes $s=x_0, x_1, \ldots, x_k = t, x_{k+1}=t, \ldots$, where the only requirement is that each node $x_i$ be chosen from among the local or shortcut contacts of nodes $\{ x_0,\ldots,x_{i-1} \}$. (This is the same definition as used by Kleinberg \cite{Klei00}.)
\begin{thm} \label{thm:betaless2} Consider the small world random graph described above with $\alpha < 2$, and arbitrarily large constants $c$ and $\overline{d}$. Suppose the source $s$ and destination $t$ are chosen uniformly at random from the node set. Then,
the number of hops for message delivery in any decentralised algorithm exceeds $n^{\gamma}$ whp, for any $\gamma < (2-\alpha)/6$. \end{thm}
It is not important that the source and destination be chosen uniformly but only that the distance between them be of order $n^a$ whp for some $a>0$.
\begin{proof} We first evaluate the normalisation constant $a_n$ by noting that the expected degree satisfies $$ \overline{d} \ge a_n \int_{\sqrt{c\log n}}^{\sqrt{n}/2} x^{-\alpha} 2\pi x dx = \frac{2 \pi a_n}{2-\alpha} \Bigl( \frac{n^{(2-\alpha)/2}}{2^{2-\alpha}} - (c\log n)^{(2-\alpha)/2} \Bigr), $$ which, on simplification, yields that \begin{equation} \label{eq:normalisation2} a_n \le \frac{4\overline{d}}{n^{(2-\alpha)/2}}, \end{equation} for all $n$ sufficiently large. Note that $a_n$ is an upper bound on the probability that there is a shortcut between any pair of nodes.
Suppose that the source $s$ and destination $t$ are chosen uniformly from all nodes on $[0,\sqrt{n}]^2$. Fix $\delta \in (\gamma,1/2)$ and define $C_{\delta} = C(t,n^{\delta})$ to be the circle of radius $n^{\delta}$ centred at $t$. It is clear that, for any $\epsilon>0$, the distance ${r}(s,C_{\delta})$ from $s$ to the circle $C_{\delta}$ is bigger than $n^{(1/2)-\epsilon}$ whp. Suppose now that this inequality holds, but that there is a routing algorithm which can route from $s$ to $t$ in fewer than $n^{\gamma}$ hops. Denote by $s=x_0, x_1, \ldots, x_m=t$, the sequence of nodes visited by the routing algorithm, with $m\le n^{\gamma}$. We claim that there must be a shortcut from at least one of the nodes $x_0,x_1,\ldots, x_{m-1}$ to the set $C_{\delta}$. Indeed, if there is no such shortcut, then $t$ must be reached starting from some node outside $C_{\delta}$ and using only local links. Since the length of each local link is at most $\sqrt{c\log n}$ and the number of hops is at most $n^{\gamma}$, the total distance traversed by local hops is strictly smaller than $n^{\delta}$ (for large enough $n$, by the assumption that $\delta>\gamma$), which yields a contradiction. We now estimate the probability that there is a shortcut from one of the nodes $x_0,\ldots,x_{m-1}$ to the set $C_{\delta}$.
The number of nodes in the circle $C_{\delta}$, denoted $N_C$, is Poisson with mean $\pi n^{2\delta}$, so $N_C < 4n^{2\delta}$ whp. Now, by (\ref{eq:normalisation2}) and the union bound, $$
P(\exists \mbox{ shortcut between $u$ and $C_{\delta}$} \mid N_C < 4n^{2\delta}) \le 16 \ \overline{d} \ n^{(4\delta+\alpha-2)/2}, $$ for any node $u$. Applying this bound repeatedly for each of the nodes $x_0, x_1, \ldots, x_{m-1}$ generated by the routing algorithm, we get \begin{equation} \label{eq:shortcut_bd2} P(\exists \mbox{ shortcut to $C_{\delta}$ within $n^{\gamma}$ hops} \mid N_C < 4n^{2\delta}) \le 16 \ \overline{d} \ n^{(2\gamma + 4\delta+\alpha-2)/2}. \end{equation}
Now $\gamma < (2-\alpha)/6$ by assumption, and $\delta>\gamma$ can be chosen arbitrarily. In particular, we can choose $\delta$ so that $2\gamma + 4\delta + \alpha -2$ is strictly negative, in which case the conditional probability of a shortcut to $C_{\delta}$ goes to zero as $n\to \infty$. Since $P(N_C \ge 4n^{2\delta})$ also goes to zero, we conclude that the probability of finding an $s-t$ route with fewer than $n^{\gamma}$ hops also goes to zero. This concludes the proof of the theorem. \end{proof}
{\bf Remarks:} The theorem continues to hold if we assume 1-step lookahead. By this, we mean that when a node decides where to send the message at the next step, it can not only use the locations of all its local and shortcut contacts, but also the locations of their contacts. All this means is that after visiting $n^{\gamma}$ nodes, the algorithm has knowledge about $O(n^{\gamma} \log n)$ nodes. If none of these nodes has a shortcut into the set $C_{\delta}$, which is the case whp, then the arguments above still apply. The same is true for $k$-step lookahead, for any constant $k$.
\begin{thm} \label{thm:betamore2} Consider the small world random graph described above with $\alpha > 2$, and arbitrarily large constants $c$ and $\overline{d}$. Suppose the source $s$ and destination $t$ are chosen uniformly at random from the node set. Then,
the number of hops for message delivery in any decentralised algorithm exceeds $n^{\gamma}$ whp, for any $\gamma < (\alpha-2)/(2(\alpha-1))$. \end{thm}
\begin{proof} For a node $u$, the probability that a randomly generated shortcut has length bigger than $r$ is bounded above by $$ \frac{ \int_r^{\infty} x^{-\alpha} 2\pi xdx }{ \int_{\sqrt{c\log n}}^{\sqrt{n}/2} x^{-\alpha} 2\pi xdx} \le \mbox{const. }r^{2-\alpha} (\log n)^{(\alpha-2)/2}, $$ for all $n$ sufficiently large. Since there are $2\overline{d}$ shortcuts per node on average, the probability that two nodes $u$ and $v$ separated by distance $r$ or more possess a shortcut between them is bounded above by the same function, but with the constant suitably modified.
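For the reader's convenience, the first bound can be made explicit (our own expansion; the constants are not optimised). For $\alpha > 2$,

```latex
\frac{\int_r^{\infty} x^{-\alpha}\, 2\pi x\, dx}
     {\int_{\sqrt{c\log n}}^{\sqrt{n}/2} x^{-\alpha}\, 2\pi x\, dx}
\;=\; \frac{r^{2-\alpha}}{(c\log n)^{(2-\alpha)/2} - (\sqrt{n}/2)^{2-\alpha}}
\;\le\; \mbox{const. }\, r^{2-\alpha}(\log n)^{(\alpha-2)/2},
```

since the term $(\sqrt{n}/2)^{2-\alpha}$ is of lower order than $(c\log n)^{(2-\alpha)/2}$ for large $n$.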
Now, for randomly chosen nodes $s$ and $t$, ${r}(s,t) > n^{(1/2)-\epsilon}$ whp, for any $\epsilon>0$. Hence, there can be a path of length $n^{\gamma}$ hops between $s$ and $t$ only if at least one of the hops is a shortcut of length $n^{(1/2)-\epsilon-\gamma}$ or more. By the above and the union bound, the probability of there being such a shortcut is bounded above by $$ \mbox{const. } n^{\gamma} \Bigl( n^{(1/2)-\epsilon-\gamma} \Bigr)^{2-\alpha} (\log n)^{(\alpha-2)/2}. $$ The exponent of $n$ in the above expression is $$ \frac{2-\alpha}{2}(1-2\epsilon) + \gamma(\alpha-1). $$ The exponent above is negative for sufficiently small $\epsilon>0$ provided $\gamma < (\alpha-2)/(2(\alpha-1))$. In other words, if this inequality is satisfied, then the probability of finding a route with fewer than $n^{\gamma}$ hops goes to zero as $n\to \infty$. This establishes the claim of the theorem. \end{proof}
\end{document}
\begin{document}
\title[Fractional Schr\"odinger-Poisson equations] {Fractional Schr\"odinger-Poisson equations with general nonlinearities}
\author[Ronaldo C. Duarte]{Ronaldo C. Duarte}
\email{ronaldocesarduarte@gmail.com}
\author[Marco A. S. Souto]{Marco A. S. Souto}
\email{marco@dme.ufcg.edu.br}
\keywords{positive solutions, ground state solutions, periodic potential}
\subjclass{Primary 35J60; Secondary 35J10}
\begin{abstract} In this paper we investigate the existence of positive solutions and ground state solutions for a class of fractional Schr\"odinger-Poisson equations in $\mathbb R^3$ with general nonlinearity. \end{abstract}
\maketitle
\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \renewcommand{\theequation}{\thesection.\arabic{equation}}
\section{Introduction}
In this article we consider the Schr\"odinger-Poisson system $$ \left\{\begin{array}{lcl} \left(-\Delta\right)^{s} u +V(x)u+\phi u= f(u), &\mbox{ in }& \mathbb R^3, \\ \left(-\Delta\right)^{t} \phi=u^2, &\mbox{ in }& \mathbb R^3, \end{array}\right. \leqno (P) $$ where $\left(-\Delta\right)^\alpha$ is the fractional Laplacian for $\alpha = s,t$. This paper was motivated by \cite{amsss}, where the authors show the existence of positive solutions for the system $$ \left\{\begin{array}{lcl} - \Delta u +V(x)u+\phi u= f(u), &\mbox{ in }& \mathbb R^3, \\ -\Delta \phi=u^2, &\mbox{ in }& \mathbb R^3, \end{array}\right. $$ where $V:\mathbb{R}^3 \to \mathbb R$ is a continuous, periodic and positive potential. Our purpose is to show that when the Laplacian is replaced by the fractional Laplacian, the system still admits a positive solution and a ground state solution. We emphasize that we prove the existence of a weak solution to the system and, without using regularity results, we show that the weak solution is positive almost everywhere in $\mathbb{R}^{3}$. To prove this, we present another version of the logarithmic lemma and deduce a weak comparison principle for the solution of the system (see Theorem \ref{lm41}).
We will assume that the potential $V$ satisfies: \begin{enumerate} \item[$(V_o)$\ ] $V(x) \geq \alpha$ for all $x \in \mathbb R^3$, for some constant $\alpha >0$;
\item[$(V_1)$\ ] $V(x)=V(x+y)$, for all $x \in \mathbb R^3$, $y\in \mathbb Z^3$.\newline \end{enumerate} Also, we will assume that $f\in C(\mathbb R,\mathbb R)$ is a function satisfying: \begin{enumerate} \item[$(f_1)$\ ] $f(u)u>0$, $u \neq 0$;
\item[$(f_2)$\ ] $\displaystyle\lim_{u\rightarrow 0} \frac{f(u)}{u} = 0$;
\item[$(f_3)$\ ] there exist $p \in (4,2^{*}_{s})$ and $C>0$ such that
$$|f(u)|\leq C(|u|+|u|^{p-1}),$$ for all $u \in \mathbb{R}$, where $2^{*}_{s}=\frac{6}{3-2s}$.
\item[$(f_4)$\ ] $\displaystyle\lim_{u\rightarrow +\infty} \frac{F(u)}{u^4} =+\infty$, where $F(u)=\int_0^u f(z)dz$;
\item[$(f_5)$\ ]The function
$u \longmapsto \frac{f(u)}{|u|^{3}}$ is increasing
for $u\neq 0$. \end{enumerate} We will denote $g(u):=f(u^{+})$ and $G(t):=\int_{0}^{t}g(s)ds$.
System $(P)$ was studied in \cite{bf}, where the author considered the following one-dimensional system $$ \left\{\begin{array}{lcl}
- \Delta u +\phi u= a|u|^{p-1}u, &\mbox{ in }& \mathbb R, \\ \left(-\Delta\right)^{t} \phi=u^2, &\mbox{ in }& \mathbb R, \end{array}\right. $$ for $p\in(1,5)$ and $t \in (0,1)$. In \cite{zjs2}, the authors show the existence of positive solutions for the system $$ \left\{\begin{array}{lcl} - \Delta u + u + \lambda \phi u= f(u), &\mbox{ in }& \mathbb{R}^{3}, \\ -\Delta \phi= \lambda u^2, &\mbox{ in }& \mathbb{R}^{3}, \end{array}\right. $$ for $\lambda>0$ and a general critical nonlinearity $f$. In \cite{zjs}, the authors proved the existence of radial ground state solutions of $(P)$ when $V=0$. In \cite{zhang}, the system was also studied, although the sign of the solutions was not considered. In this paper, we prove the existence of positive solutions for system $(P)$. Moreover, by the method of the Nehari manifold, we ensure the existence of a ground state solution for the problem.
Our result is:
\begin{theorem}{\label{fth}} Suppose that $s \in (\frac{3}{4},1)$, $t \in (0,1)$, $V$ satisfies $(V_o)$ and $(V_1)$, and $f$ satisfies $(f_1)- (f_5)$. Then the system ($P$) has a positive solution and a ground state solution. \end{theorem}
The hypothesis $s \in (\frac{3}{4},1)$ is required to ensure that the interval $(4,2^{\ast}_{s})$ is nonempty.
\begin{remark}{\label{rm1}} The condition $(f_5)$ implies that
$H(u)=uf(u)-4F(u)$ is a non-negative function. \end{remark}
In \cite{gsd} (Lemma 2.3), the authors proved the following version of the Lions lemma, which we will need to prove our result. \begin{lemma}\label{l1.3} If $\left\{u_{n}\right\}_{n \in \mathbb{N}}$ is a bounded sequence in $H^{s}(\mathbb{R}^{3})$ such that for some $R>0$ and $2\leq q< 2^{\ast}_{s}$ we have $$
\sup_{x \in \mathbb{R}^{3}}\int_{B_{R}(x)}|u_{n}|^{q}dx \longrightarrow 0 $$ as $n \rightarrow \infty$, then $u_{n}\rightarrow 0$ in $L^{r}(\mathbb{R}^{3})$ for all $r \in (2,2^{\ast}_{s})$. \end{lemma}
\section{Some preliminary results}
Let $s \in (0,1)$. We denote by $\dot{H}^{s}(\mathbb{R}^{3})$ the homogeneous fractional Sobolev space, defined as the completion of $C_{0}^{\infty}(\mathbb{R}^{3})$ under the norm $$
||u||_{\dot{H}^{s}}=\left(\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy\right)^{\frac{1}{2}} $$ and we define $$
H^{s}(\mathbb{R}^{3}):=\left\{u \in L^{2}(\mathbb{R}^{3});\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy<\infty \right\}. $$ The space $H^{s}(\mathbb{R}^{3})$ is a Hilbert space with the norm $$
||u||_{H^{s}}=\left(\int_{\mathbb{R}^{3}}|u|^{2}dx+\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy\right)^{\frac{1}{2}}. $$ We define the fractional Laplacian operator $\left(-\Delta\right)^{s}:\dot{H}^{s}(\mathbb{R}^{3})\longrightarrow (\dot{H}^{s}(\mathbb{R}^{3}))'$ by $(\left(-\Delta\right)^{s}u,v)=\frac{\zeta}{2}(u,v)_{\dot{H}^{s}}$, where $
\zeta=\zeta(s)=\left(\int_{\mathbb{R}^{3}}\frac{1-\cos(\xi_{1})}{|\xi|^{3+2s}}d \xi \right)^{-1} $ and $(\cdot,\cdot)_{\dot{H}^{s}}$ is the inner product of $\dot{H}^{s}(\mathbb{R}^{3})$. The constant $\zeta$ satisfies $$
\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{3+2s}}dxdy=2\zeta^{-1}\int_{\mathbb{R}^{3}}|\xi|^{2s}\mathcal{F}u(\xi) \overline{\mathcal{F}v(\xi)}d \xi, $$ where $\mathcal{F}u$ is the Fourier transform of $u$ (see Proposition 3.4 of \cite{dpv}). The fractional Laplacian operator is a bounded linear operator.
A pair $(u, \phi_{u})$ is a solution of $(P)$ if $$
\frac{\zeta(t)}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(\phi_{u}(x)-\phi_{u}(y))(w(x)-w(y))}{|x-y|^{3+2t}}dxdy = \int_{\mathbb{R}^{3}}u^{2}wdx, $$ for all $w \in \dot{H}^{t}(\mathbb{R}^{3})$, and $$ (\left(-\Delta\right)^{s}u,v)+\int_{\mathbb{R}^{3}}V(x)uvdx+\int_{\mathbb{R}^{3}}\phi_{u}uvdx=\int_{\mathbb{R}^{3}}f(u)vdx $$ for all $v\in H^{s}(\mathbb{R}^{3})$.
Let us recall some facts about the Schr\"odinger-Poisson equations (see \cite{Ruiz,ap,zz,G} for instance). We can transform $(P)$ into a fractional Schr\"odinger problem with a nonlocal term. For all $u\in H^{s}(\mathbb R^3)$, there exists a unique $\phi=\phi_u \in \dot{H}^{t}(\mathbb R^3)$ such that $$ \left(-\Delta\right)^{t} \phi=u^2. $$ In fact, since $H^{s}(\mathbb R^3)\hookrightarrow L^{\frac{2\cdot2^{\ast}_{t}}{2^{\ast}_{t}-1}}(\mathbb R^3)$ (continuously), a simple application of the Lax-Milgram theorem shows that $\phi_u$ is well defined and $$
||\phi_{u}||_{\dot{H}^{t}}^{2}\leq \frac{4}{\zeta(t)^{2}S} ||u||^4_{\frac{2\cdot2^{\ast}_{t}}{2^{\ast}_{t}-1}}, $$
where $||.||_p$ denotes the $L^p(\mathbb R^3)$ norm and $S$ is the best constant of the Sobolev embedding $\dot{H}^{t}(\mathbb R^3) \hookrightarrow L^{2^{\ast}_{t}}(\mathbb R^3)$, that is $$
S= \inf_{u \in \dot{H}^{t}(\mathbb{R}^{3})\setminus \left\{0\right\}}\frac{||u||_{\dot{H}^{t}}^{2}}{||u||_{2^{\ast}_{t}}^{2}}. $$
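For the reader's convenience, the estimate for $\phi_{u}$ follows by testing the weak formulation of $\left(-\Delta\right)^{t}\phi_{u}=u^{2}$ with $w=\phi_{u}$ and using H\"older's inequality together with the definition of $S$: $$
\frac{\zeta(t)}{2}||\phi_{u}||_{\dot{H}^{t}}^{2}=\int_{\mathbb{R}^{3}}u^{2}\phi_{u}dx \leq ||u||^{2}_{\frac{2\cdot2^{\ast}_{t}}{2^{\ast}_{t}-1}}||\phi_{u}||_{2^{\ast}_{t}} \leq S^{-\frac{1}{2}}||u||^{2}_{\frac{2\cdot2^{\ast}_{t}}{2^{\ast}_{t}-1}}||\phi_{u}||_{\dot{H}^{t}}, $$ so that $||\phi_{u}||_{\dot{H}^{t}}\leq \frac{2}{\zeta(t)}S^{-\frac{1}{2}}||u||^{2}_{\frac{2\cdot2^{\ast}_{t}}{2^{\ast}_{t}-1}}$, which gives the stated bound up to the value of the constant.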
\begin{lemma}{\label{lm1}} We have:
$i)$ there exists $C>0$ such that $||\phi_u||_{\dot{H}^{t}}\leq C||u||_{H^{s}}^2$ and $$
\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(\phi_{u}(x)-\phi_{u}(y))^{2}}{|x-y|^{3+2t}}dxdy \leq C||u||_{{H}^{s}}^{4} $$ for all $u\in H^s(\mathbb R^3)$;
$ii)$ $\phi_u\geq 0$, $\forall u\in H^s(\mathbb R^3)$;
$iii)$ $\phi_{\lambda u}=\lambda^2\phi_u$, for all $\lambda>0$ and $u\in H^s(\mathbb R^3)$ (we write the scaling parameter as $\lambda$ to avoid confusion with the fractional order $t$).
$iv)$ If $\tilde{u}(x):=u(x+z)$ then $\phi_{\tilde{u}}(x) = \phi_{u}(x+z)$ and $$ \int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx = \int_{\mathbb{R}^{3}}\phi_{\tilde{u}}\tilde{u}^{2}dx $$ for all $z \in \mathbb{R}^{3}$ and $u \in H^{s}(\mathbb{R}^{3})$.
$v)$ If $\left\{u_{n}\right\}$ converges weakly to $u$ in $H^{s}(\mathbb{R}^{3})$, then $\left\{\phi_{u_{n}}\right\}$ converges weakly to $\phi_{u}$ in $\dot{H}^{t}(\mathbb{R}^{3})$. \end{lemma} The proof is analogous to that for the Poisson equation in $\mathcal{D}^{1,2}(\mathbb{R}^{3})$ (see \cite{amsss, Ruiz, zz}).
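To illustrate, the scaling property in item $iii)$ (written here with parameter $\lambda$ to avoid clashing with the fractional order $t$) follows directly from linearity and uniqueness: since $$ \left(-\Delta\right)^{t}\left(\lambda^{2}\phi_{u}\right)=\lambda^{2}u^{2}=(\lambda u)^{2}, $$ the uniqueness of the solution in $\dot{H}^{t}(\mathbb{R}^{3})$ provided by the Lax-Milgram theorem yields $\phi_{\lambda u}=\lambda^{2}\phi_{u}$ for every $\lambda>0$.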
At first, we are interested in showing the existence of a positive solution for $(P)$. We will consider the following Euler-Lagrange functional $$ \begin{array}{cccl} I:&H^{s}(\mathbb{R}^{3})&\longrightarrow&\mathbb{R}\\ &u& \longmapsto &
\begin{array}{ll}\frac{\zeta(s)}{4}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy+\frac{1}{2}\int_{\mathbb{R}^{3}}V(x)u^{2}dx\\ &+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx-\int_{\mathbb{R}^{3}}G(u)dx, \end{array} \end{array} $$ whose derivative is $$ \begin{array}{ll}
I^{'}(u)(v)&=\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{3+2s}}dxdy\\ & +\int_{\mathbb{R}^{3}}V(x)uvdx +\int_{\mathbb{R}^{3}}\phi_{u}u vdx-\int_{\mathbb{R}^{3}}g(u)vdx \\ &= (\left(-\Delta\right)^{s}u,v)+\int_{\mathbb{R}^{3}}V(x)uvdx +\int_{\mathbb{R}^{3}}\phi_{u}u vdx-\int_{\mathbb{R}^{3}}g(u)vdx. \end{array} $$ Note that critical points of $I$ are weak solutions of $(P)$ with $f$ replaced by $g$; since $g=f$ on $[0,+\infty)$, any nonnegative critical point solves $(P)$ itself.
\begin{lemma} The function $$
u \longmapsto ||u||:=\left(\frac{\zeta(s)}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy + \int_{\mathbb{R}^{3}}V(x)u^{2}dx\right)^{\frac{1}{2}} $$ defines a norm in $H^{s}(\mathbb{R}^{3})$ which is equivalent to the standard norm. \end{lemma}
The proof of the previous lemma is standard and we omit it. \section{Existence of the Solution}
\begin{theorem}\label{th31} Suppose that $s \in (\frac{3}{4},1)$, $t \in (0,1)$, $V$ satisfies $(V_{0}), (V_{1})$ and $f$ satisfies $(f_{1})-(f_{4})$. Then, the problem $(P)$ has a nontrivial solution. \end{theorem}
\begin{proof} By usual arguments, we prove that the functional $I$ has the mountain pass geometry. By the Mountain Pass theorem, there is a Cerami sequence for $I$ at the mountain pass level $c$. That is, there is $\left\{u_{n}\right\}_{n \in \mathbb{N}} \subset H^{s}(\mathbb{R}^{3})$ such that $$ I(u_{n})\rightarrow c $$ and $$
(1+||u_{n}||)||I^{'}(u_{n})||_{(H^{s}(\mathbb{R}^{3}))'} \rightarrow 0, $$ where $$ c= \inf_{\gamma \in \Gamma}\sup_{t \in [0,1]}I(\gamma(t)) $$ and $$ \Gamma=\left\{\gamma \in C([0,1], H^{s}(\mathbb{R}^{3}));\gamma(0)=0, \gamma(1)=e \right\}, $$ where $e \in H^{s}(\mathbb{R}^{3})$ satisfies $I(e)<0$. By Remark \ref{rm1} $$ \begin{array}{ll}
4I(u_{n})-I'(u_{n})u_{n}
&=||u_{n}||^{2}+\int_{\mathbb{R}^{3}}[g(u_{n})u_{n}-4G(u_{n})]dx \\
& \geq ||u_{n}||^{2}. \end{array} $$ Therefore $\left\{u_{n}\right\}$ is bounded in $H^{s}(\mathbb{R}^{3})$, so there is $u \in H^{s}(\mathbb{R}^{3})$ such that $\left\{u_{n}\right\}$ converges weakly to $u$. Lemma \ref{lm1}, $(f_{2})$ and $(f_{3})$ imply that $u$ is a critical point of $I$. If $u \neq 0$ then $u$ is a nontrivial solution for $(P)$. Suppose that $u = 0$. We claim that $\{u_{n}\}$ does not converge to $0$ in $L^{r}(\mathbb{R}^{3})$ for all $r \in (2,2^{\ast}_{s})$. Indeed, otherwise, by $(f_{2})$, $(f_{3})$ and the boundedness of $\{u_{n}\}$ in $L^{2}(\mathbb{R}^{3})$ we would have $$ \int_{\mathbb{R}^{3}}g(u_{n})u_{n}dx \rightarrow 0. $$ By Lemma \ref{lm1} $$ \begin{array}{ll}
||u_{n}||^{2}& \leq ||u_{n}||^{2}+\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx \\ & = \int_{\mathbb{R}^{3}}g(u_{n})u_{n}dx + I'(u_{n})u_{n}. \end{array} $$ The right side of this last inequality converges to $0$, so $u_{n}\rightarrow 0$ in $H^{s}(\mathbb{R}^{3})$ and consequently $$ c=\lim I(u_{n})=0, $$ which cannot occur since $c>0$. Then, we can assume that there are $R>0$ and $\delta>0$ such that, passing to a subsequence if necessary, $$ \int_{B_{R}(y_{n})}u_{n}^{2}dx\geq \delta, $$ for some sequence $\{y_{n}\} \subset \mathbb{Z}^{3}$ (see Lemma \ref{l1.3}). For each $n \in \mathbb{N}$, we define $$ w_{n}(x):=u_{n}(x+y_{n}). $$ Note that $w_{n} \in H^{s}(\mathbb{R}^{3})$. Moreover, changing variables in the integrals below, we have $$ \begin{array}{ll} I(w_{n})
&= \begin{array}{ll}\frac{\zeta}{4}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u_{n}(x+y_{n})-u_{n}(y+y_{n}))^{2}}{|(x+y_{n})-(y+y_{n})|^{3+2s}}dxdy+\frac{1}{2}\int_{\mathbb{R}^{3}}V(x)u_{n}(x+y_{n})^{2}dx\\ &+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{w_{n}}w_{n}^{2}dx-\int_{\mathbb{R}^{3}}G(u_{n}(x+y_{n}))dx \end{array} \\
& = \begin{array}{ll}\frac{\zeta}{4}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u_{n}(z)-u_{n}(w))^{2}}{|z-w|^{3+2s}}dzdw+\frac{1}{2}\int_{\mathbb{R}^{3}}V(z)u_{n}(z)^{2}dz\\ &+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx-\int_{\mathbb{R}^{3}}G(u_{n}(z))dz. \end{array}\\ & = I(u_{n}). \end{array} $$ Analogously, for every $\phi \in H^{s}(\mathbb{R}^{3})$ $$ \begin{array}{ll}
I'(w_{n})\phi&=\begin{array}{ll}\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(w_{n}(x)-w_{n}(y))(\phi(x)-\phi(y))}{|x-y|^{3+2s}}dxdy+\int_{\mathbb{R}^{3}}V(x)w_{n}\phi dx\\ &+\int_{\mathbb{R}^{3}}\phi_{w_{n}}w_{n}\phi dx-\int_{\mathbb{R}^{3}}g(w_{n})\phi dx \end{array} \\
& = \begin{array}{ll}\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u_{n}(x+y_{n})-u_{n}(y+y_{n}))(\phi(x)-\phi(y))}{|(x+y_{n})-(y+y_{n})|^{3+2s}}dxdy\\&+\int_{\mathbb{R}^{3}}V(x+y_{n})u_{n}(x+y_{n})\phi(x) dx\\ &+\int_{\mathbb{R}^{3}}\phi_{u_{n}}(x+y_{n})u_{n}(x+y_{n})\phi dx\\ &-\int_{\mathbb{R}^{3}}g(u_{n}(x+y_{n}))\phi(x) dx \end{array} \\
& = \begin{array}{ll}\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u_{n}(z)-u_{n}(w))(\phi(z-y_{n})-\phi(w-y_{n}))}{|z-w|^{3+2s}}dzdw\\ &+\int_{\mathbb{R}^{3}}V(z)u_{n}(z)\phi(z-y_{n}) dz\\ &+\int_{\mathbb{R}^{3}}\phi_{u_{n}}(z)u_{n}(z)\phi(z-y_{n}) dz\\ &-\int_{\mathbb{R}^{3}}g(u_{n}(z))\phi(z-y_{n}) dz \end{array} \\
& = I'(u_{n})\overline{\phi} \end{array} $$ where $\overline{\phi}(x)=\phi(x-y_{n})$. This implies that $\{w_{n}\}$ is a Cerami sequence for $I$ at the level $c$. Analogously, we can show that $\{w_{n}\}$ is bounded, that $\{w_{n}\}$ converges weakly to some $w_{0}\in H^{s}(\mathbb{R}^{3})$ and that $I'(w_{0})=0$. Passing to a subsequence, if necessary, we can assume that $\{w_{n}\}$ converges in $L^{2}_{loc}(\mathbb{R}^{3})$ to $w_{0}$. Then $$ \begin{array}{ll} \int_{B_{R}(0)}w_{0}^{2}dx & = \lim\limits_{n \rightarrow \infty}\int_{B_{R}(0)}w_{n}^{2}dx \\ & = \lim\limits_{n \rightarrow \infty}\int_{B_{R}(0)}u_{n}(x+y_{n})^{2}dx \\ & = \lim\limits_{n \rightarrow \infty}\int_{B_{R}(y_{n})}u_{n}(z)^{2}dz \geq \delta. \end{array} $$ Therefore, $w_{0}$ is a nontrivial solution for $(P)$. Thus, even when $u=0$, we obtain a nontrivial critical point of $I$. \end{proof}
\section{Positivity of the solution}
In this section, we will prove that the solution of Theorem \ref{th31} is positive. We first prove a version of the logarithmic lemma of Di Castro, Kuusi and Palatucci (Lemma 1.3 of \cite{dkp}). In that lemma, the authors give an estimate for weak solutions of the equation $$ \left\{\begin{array}{rcccc} \left(-\Delta_{p}\right)^{s}u&=&0&\mbox{in}& \Omega \\ u&=&g&\mbox{in}& \mathbb{R}^{n}\setminus\Omega \end{array} \right. $$ in $B_{r}(x_{0})\subset B_{\frac{R}{2}}(x_{0}) \subset \Omega$, for $x_{0}\in \Omega$ and $u \geq 0$ in $B_{R}(x_{0})$. Following the ideas of Di Castro, Kuusi and Palatucci, we will show a similar estimate for a supersolution of the problem $$ \begin{array}{lll} \left(-\Delta\right)^{s}u+a(x)u=0&\mbox{in} &\mathbb{R}^{n} \end{array} $$ (see Lemma \ref{lm43} below). Supersolutions are defined in the following way: $$
\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))\left(v(x)-v(y)\right)}{|x-y|^{n+2s}}dxdy+ \int_{\mathbb{R}^{n}}a(x)u(x)v(x)dx\geq 0, $$ for all $v \in H^{s}(\mathbb{R}^{n})$ with $v \geq 0$ almost everywhere. Also, in this situation, we do not need to assume that $u\geq0$ in some subset of $\mathbb{R}^{n}$. With this estimate, we conclude that a supersolution satisfies $u > 0$ almost everywhere in $\mathbb{R}^{n}$ or $u=0$ almost everywhere in $\mathbb{R}^{n}$.
\begin{lemma}\label{lm41} Suppose that $a:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is a nonnegative function and $u \in H^{s}(\mathbb{R}^{n})$. If $$
\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))\left(v(x)-v(y)\right)}{|x-y|^{n+2s}}dxdy+ \int_{\mathbb{R}^{n}}a(x)u(x)v(x)dx\geq 0 $$ for all $v \in H^{s}(\mathbb{R}^{n})$ with $v \geq 0$ almost everywhere, then $u \geq 0$ almost everywhere. In other words, if $\left(-\Delta\right)^{s}u+a(x)u\geq0$ then $u \geq 0$ almost everywhere. \end{lemma}
\begin{proof} Define $v=u^{-} = \max\{0,-u\}$. By hypothesis $$
\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))\left(u^{-}(x)-u^{-}(y)\right)}{|x-y|^{n+2s}}dxdy+ \int_{\mathbb{R}^{n}}a(x)u(x)u^{-}(x)dx\geq 0. $$ But, \begin{itemize} \item if $u(x)>0$ and $u(y)>0$ then $(u(x)-u(y))\left(u^{-}(x)-u^{-}(y)\right) =0$.
\item if $u(x)<0$ and $u(y)<0$ then $(u(x)-u(y))\left(u^{-}(x)-u^{-}(y)\right) = - (u(x)-u(y))^{2} \leq 0$.
\item if $u(x)>0$ and $u(y)<0$ then $(u(x)-u(y))\left(u^{-}(x)-u^{-}(y)\right) = (u(x)-u(y))u(y) \leq 0$.
\item if $u(x)<0$ and $u(y)>0$ then $(u(x)-u(y))\left(u^{-}(x)-u^{-}(y)\right) = (u(x)-u(y))(-u(x)) \leq 0$.
\item if $u(x) < 0$, then $a(x)u(x)u^{-}(x) = -a(x)u^{2}(x)\leq0$, and $a(x)u(x)u^{-}(x)$ $= 0$ in the case $u(x)\geq0$. \end{itemize} Hence both integrands are nonpositive while the sum of the integrals is nonnegative, so each integral vanishes. In particular
$$
\frac{(u(x)-u(y))\left(u^{-}(x)-u^{-}(y)\right)}{|x-y|^{n+2s}}= 0 \quad \mbox{for a.e. } x,y \in \mathbb{R}^{n}. $$ Inspecting the cases above, this forces $u^{-}(x)=u^{-}(y)$ for almost every $x,y$, so $u^{-}$ is constant; since $u^{-}\in L^{2}(\mathbb{R}^{n})$, the constant is zero, that is, $u^{-}=0$. \end{proof} \begin{lemma}\label{lm42} Suppose that $\epsilon \in \left(\left.0,1\right]\right.$ and $a,b \in \mathbb{R}^{n}$. Then $$
|a|^{2}\leq |b|^{2}+2\epsilon|b|^{2}+\frac{1+\epsilon}{\epsilon}|a-b|^{2} $$ \end{lemma} \begin{proof} $$ \begin{array}{ll}
|a|^{2} & \leq \left(|b|+|a-b|\right)^{2} \\
& = |b|^{2}+2|b||a-b|+|a-b|^{2} \\ \end{array} $$ By Cauchy inequality with $\epsilon$ $$
|b||a-b|\leq \epsilon|b|^{2}+\frac{|a-b|^{2}}{4\epsilon} \leq
\epsilon|b|^{2}+\frac{|a-b|^{2}}{2\epsilon} $$ Replacing in the inequality above $$ \begin{array}{ll}
|a|^{2} & \leq |b|^{2}+2\epsilon|b|^{2}+\frac{|a-b|^{2}}{\epsilon}+|a-b|^{2} \\
& = |b|^{2}+2\epsilon|b|^{2}+\frac{1+\epsilon}{\epsilon}|a-b|^{2}. \end{array} $$ \end{proof} \begin{lemma}\label{lm43} Under the same assumptions as in Lemma \ref{lm41}, and assuming in addition that $a \in L^{1}_{loc}(\mathbb{R}^{n})$, we have, for all $r,d>0$ and $x_{0} \in \mathbb{R}^{n}$, \begin{equation}
\int_{B_{r}}\int_{B_{r}}\left|\log\left(\frac{d+u(x)}{d+u(y)}\right)\right|^{2}\frac{1}{|x-y|^{n+2s}}dxdy \leq Cr^{n-2s} + \int_{B_{2r}}a(x)dx, \end{equation} where $B_{r}=B_{r}(x_{0})$ and $C=C(n,s)>0$ is a constant. \end{lemma}
\begin{proof}
Consider $\phi \in C_{0}^{\infty}(B_{\frac{3r}{2}})$, $0\leq \phi \leq 1$, $\phi = 1$ in $B_{r}$ and $K>0$ such that $||D\phi||_{\infty} \leq Kr^{-1}$. The function $$ \eta=\frac{\phi^{2}}{u+d} $$ is in $H^{s}(\mathbb{R}^{n})$ and $\eta\geq0$ (see Lemma 5.3 in \cite{dpv}). By hypothesis $$ \begin{array}{ll}
0&\leq \int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy + \int_{\mathbb{R}^{n}}a(x)u(x)\eta(x)dx \\
&= \int_{B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\
& + \int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\ & +
\int_{B_{2r}}\int_{\mathbb{R}^{n}\setminus B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\ & +
\int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{\mathbb{R}^{n}\setminus B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\
& + \int_{\mathbb{R}^{n}}a(x)u(x)\eta(x)dx. \end{array} $$ We now prove some statements about the five integrals in the last expression.
\begin{itemize}
\item $A.1)$ There are constants $C_{2},C_{3}>0$, depending only on $n$ and $s$, such that $$ \begin{array}{ll}
&\int_{B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\
& \leq -C_{2}\int_{B_{2r}}\int_{B_{2r}}\left|\log\left(\frac{d+u(x)}{d+u(y)}\right)\right|^{2}\frac{1}{|x-y|^{n+2s}}\min\left\{\phi(y)^{2}, \phi(x)^{2}\right\}dxdy \\
&+C_{3}\int_{B_{2r}}\int_{B_{2r}}\frac{|\phi(x)-\phi(y)|^{2}}{|x-y|^{n+2s}}dxdy, \end{array} $$ where $\min\left\{a,b\right\}$ denotes the smaller of $a$ and $b$. \end{itemize}
Fix $x,y \in B_{2r}$ and suppose that $u(x)>u(y)$. Define $$ \epsilon = \delta\frac{u(x)-u(y)}{u(x)+d}, $$ where $\delta \in (0,1)$ is chosen small enough that $\epsilon \in (0,1)$. Taking $a= \phi(x)$ and $b=\phi(y)$ in Lemma \ref{lm42}, we get $$
|\phi(x)|^{2}\leq |\phi(y)|^{2}+2 \delta\frac{u(x)-u(y)}{u(x)+d}|\phi(y)|^{2}+\left(\delta^{-1}\frac{u(x)+d}{u(x)-u(y)}+1\right)|\phi(x)-\phi(y)|^{2}. $$ Substituting, $$ \begin{array}{ll}
&\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}\\
&=(u(x)-u(y))\left(\frac{\phi^{2}(x)}{u(x)+d}-\frac{\phi^{2}(y)}{u(y)+d}\right)\frac{1}{|x-y|^{n+2s}}\\ & \begin{array}{ll}
&\leq (u(x)-u(y))\left(\frac{ |\phi(y)|^{2}+2 \delta\frac{u(x)-u(y)}{u(x)+d}|\phi(y)|^{2}+\left(\delta^{-1}\frac{u(x)+d}{u(x)-u(y)}+1\right)|\phi(x)-\phi(y)|^{2}}{u(x)+d}\right.\\&
\left.-\frac{\phi^{2}(y)}{u(y)+d}\right)\frac{1}{|x-y|^{n+2s}} \end{array} \\ & \begin{array}{ll}
&=(u(x)-u(y))\frac{|\phi(y)|^{2}}{u(x)+d}\left[ 1+ 2\delta\frac{u(x)-u(y)}{u(x)+d}+\left(\delta^{-1}\frac{u(x)+d}{u(x)-u(y)}+1\right)\frac{|\phi(x)-\phi(y)|^{2}}{|\phi(y)|^{2}}\right.\\
&\left.-\frac{u(x)+d}{u(y)+d}\right]\frac{1}{|x-y|^{n+2s}} \end{array}
\\ &
\begin{array}{ll} &=(u(x)-u(y))\frac{|\phi(y)|^{2}}{u(x)+d}\frac{1}{|x-y|^{n+2s}}\left( 1+ 2\delta\frac{u(x)-u(y)}{u(x)+d}-\frac{u(x)+d}{u(y)+d}\right)\\
& + \left(\delta^{-1} + \frac{(u(x)-u(y))}{u(x)+d}\right)|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}} \end{array} \\
& \begin{array}{ll}
&\leq (u(x)-u(y))\frac{|\phi(y)|^{2}}{u(x)+d}\frac{1}{|x-y|^{n+2s}}\left( 1+ 2\delta\frac{u(x)-u(y)}{u(x)+d}-\frac{u(x)+d}{u(y)+d}\right) \\
&+ 2\delta^{-1}|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}}.
\end{array}
\\ \end{array} $$ We will rewrite the first part of the sum appearing on the right side of the last inequality $$ \begin{array}{ll}
&(u(x)-u(y))\frac{|\phi(y)|^{2}}{u(x)+d}\frac{1}{|x-y|^{n+2s}}\left( 1+ 2\delta\frac{u(x)-u(y)}{u(x)+d}-\frac{u(x)+d}{u(y)+d}\right)\\
& = \left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[ \frac{u(x)+d}{u(x)-u(y)}+ 2\delta-\frac{u(x)+d}{u(y)+d}\cdot\frac{u(x)+d}{u(x)-u(y)} \right]\\
& = \left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[\frac{1-\frac{u(x)+d}{u(y)+d}}{1-\frac{u(y)+d}{u(x)+d}} + 2\delta \right].\\ \end{array} $$ Define the function $g:(0,1) \rightarrow \mathbb{R}$ by $$ g(t)= \frac{1-t^{-1}}{1-t}. $$ It satisfies $g(t) \leq -\frac{1}{4}\frac{t^{-1}}{1-t}$ if $t \in \left(\left.0,\frac{1}{2}\right]\right.$ and $g(t)\leq -1$ for all $t \in (0,1)$. We have two cases. If $\frac{u(y)+d}{u(x)+d} \leq \frac{1}{2}$ then, we conclude that $$ \begin{array}{ll}
&\left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[\frac{1-\frac{u(x)+d}{u(y)+d}}{1-\frac{u(y)+d}{u(x)+d}} + 2\delta \right]\\
& \leq \left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[-\frac{1}{4}\frac{\frac{u(x)+d}{u(y)+d}}{\frac{u(x)-u(y)}{u(x)+d}} + 2\delta \right]\\
& = \frac{u(x)-u(y)}{u(x)+d}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[-\frac{1}{4}\frac{u(x)+d}{u(y)+d} + 2\delta\frac{u(x)-u(y)}{u(x)+d} \right] \\
& = \frac{u(x)-u(y)}{u(y)+d}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[-\frac{1}{4} + 2\delta\frac{(u(x)-u(y))(u(y)+d)}{(u(x)+d)^{2}} \right] \\
&\leq \frac{u(x)-u(y)}{u(y)+d}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[-\frac{1}{4} + 2\delta \right]. \end{array} $$ In the last inequality, we use that $$ \frac{(u(x)-u(y))(u(y)+d)}{(u(x)+d)^{2}} \leq 1. $$ Choosing $\delta=\frac{1}{16}$ we have $$ \begin{array}{ll}
&\left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[\frac{1-\frac{u(x)+d}{u(y)+d}}{1-\frac{u(y)+d}{u(x)+d}} + 2\delta \right]\\
& \leq -\frac{1}{8}\frac{u(x)-u(y)}{u(y)+d}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\\
& \leq -\frac{1}{8}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}. \end{array} $$ Above, we have used that $ (\log(t))^{2}\leq t-1 $ for all $t\geq2$ (indeed, $t\mapsto (t-1)-(\log t)^{2}$ is positive at $t=2$ and increasing on $[2,\infty)$, since $2\log t\leq t$ there), and that $\frac{u(x)+d}{u(y)+d}\geq 2$. But, if $ \frac{u(y)+d}{u(x)+d} > \frac{1}{2}$, then using that $g(t) \leq -1$ and that $\delta= \frac{1}{16}$ $$ \begin{array}{ll}
&\left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[\frac{1-\frac{u(x)+d}{u(y)+d}}{1-\frac{u(y)+d}{u(x)+d}} + 2\delta \right]\\ &\leq \left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}\left[-1 + 2\delta \right]\\
& \leq-\frac{7}{8}\left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}} \\
&\leq-\frac{7}{32}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}. \end{array} $$ Here, we have used that $$ \begin{array}{ll} \left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2} & = \left[\log\left(1+ \frac{u(x)-u(y)}{u(y)+d}\right)\right]^{2} \\ & \leq 4\left(\frac{u(x)-u(y)}{u(x)+d}\right)^{2}. \end{array} $$ This is a consequence of $$ \log(1+t)\leq t $$ for all $t>0$, and that $$ t=\frac{u(x)-u(y)}{u(y)+d}=\frac{u(x)-u(y)}{u(x)+d}\cdot \frac{u(x)+d}{u(y)+d}\leq 2\frac{u(x)-u(y)}{u(x)+d}. $$ In short $$ \begin{array}{ll}
&(u(x)-u(y))\frac{|\phi(y)|^{2}}{u(x)+d}\frac{1}{|x-y|^{n+2s}}\left( 1+ 2\delta\frac{u(x)-u(y)}{u(x)+d}-\frac{u(x)+d}{u(y)+d}\right)\\
&\leq -\frac{1}{8}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}.
\end{array} $$ We have proved that, if $u(x)>u(y)$ then $$ \begin{array}{ll}
&\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}\\
&\leq -\frac{1}{8}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}} + 32|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}}. \end{array} $$ Integrating in $B_{2r}$ the last inequality, we get $$ \begin{array}{ll}
&\int_{B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy\\
&=\int_{B_{2r}}\int_{\left\{x;u(x)>u(y)\right\}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\
&+\int_{B_{2r}}\int_{\left\{x;u(x)<u(y)\right\}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \\
& \leq -\frac{1}{8}\int_{B_{2r}}\int_{\left\{x;u(x)>u(y)\right\}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
& -\frac{1}{8}\int_{B_{2r}}\int_{\left\{x;u(x)<u(y)\right\}}\left[\log\left(\frac{u(y)+d}{u(x)+d}\right)\right]^{2}\phi(x)^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
& + 32\int_{B_{2r}}\int_{B_{2r}}|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}}dxdy. \\ \end{array} $$
But, using that $\left|\log(x)\right| = \left|\log \left(\frac{1}{x}\right)\right|$ for all $x\neq 0$, we obtain that $$ \left[\log\left(\frac{u(y)+d}{u(x)+d}\right)\right]^{2} = \left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}. $$ Replacing $$ \begin{array}{ll}
&\int_{B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy\\
& \leq -\frac{1}{8}\int_{B_{2r}}\int_{\left\{x; u(x)>u(y)\right\}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(y)^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
& - \frac{1}{8}\int_{B_{2r}}\int_{\left\{x;u(x)<u(y)\right\}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\phi(x)^{2}\frac{1}{|x-y|^{n+2s}}dxdy\\
& + 32\int_{B_{2r}}\int_{B_{2r}}|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
& \leq -\frac{1}{8}\int_{B_{2r}}\int_{\left\{x; u(x)>u(y)\right\}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\min{\left\{\phi(y)^{2},\phi(x)^{2}\right\}}\frac{1}{|x-y|^{n+2s}}dxdy \\
& - \frac{1}{8}\int_{B_{2r}}\int_{\left\{x;u(x)<u(y)\right\}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\min{\left\{\phi(y)^{2},\phi(x)^{2}\right\}}\frac{1}{|x-y|^{n+2s}}dxdy\\
& + 32\int_{B_{2r}}\int_{B_{2r}}|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
& = -\frac{1}{8}\int_{B_{2r}}\int_{B_{2r}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\min{\left\{\phi(y)^{2},\phi(x)^{2}\right\}}\frac{1}{|x-y|^{n+2s}}dxdy\\
&+32\int_{B_{2r}}\int_{B_{2r}}|\phi(x)-\phi(y)|^{2}\frac{1}{|x-y|^{n+2s}}dxdy. \end{array} $$ This proves the claim $A.1)$. \begin{itemize}
\item $A.2)$ There is a constant $C_{4}>0$, depending only on $n$ and $s$, such that $$
\int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy \leq C_{4}r^{n-2s}. $$ \end{itemize} Indeed,
$$ \begin{array}{ll}
&\int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{B_{2r}}\frac{(u(x)-u(y))(\eta(x)-\eta(y))}{|x-y|^{n+2s}}dxdy\\
&=\int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{\mathbb{R}^{n}}(u(x)-u(y))\left(\frac{\phi^{2}(x)}{u(x)+d}-\frac{\phi^{2}(y)}{u(y)+d}\right)\frac{1}{|x-y|^{n+2s}}dxdy \\
& =\int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{\mathbb{R}^{n}}|\phi(x)|^{2}\frac{u(x)-u(y)}{u(x)+d}\frac{1}{|x-y|^{n+2s}}dxdy \\
& \leq \int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{\mathbb{R}^{n}}|\phi(x)|^{2}\frac{1}{|x-y|^{n+2s}}dxdy, \end{array} $$ where in the second equality we used that $\phi(y)=0$ for $y \notin B_{2r}$, and in the last inequality that $u(y)\geq 0$ (by Lemma \ref{lm41}), so that $$ \frac{u(x)-u(y)}{u(x)+d} \leq 1. $$ A simple calculation shows that $$ \begin{array}{ll}
&\int_{\mathbb{R}^{n}\setminus B_{2r}}\int_{\mathbb{R}^{n}}|\phi(x)|^{2}\frac{1}{|x-y|^{n+2s}}dxdy \leq C_{4}r^{n-2s},\\ \end{array} $$ where $C_{4}$ depends only on $n$ and $s$. This proves the claim $A.2)$.
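The ``simple calculation'' above can be made explicit as follows; here all constants depend only on $n$ and $s$, and we use that $\phi$ is supported in $B_{\frac{3r}{2}}=B_{\frac{3r}{2}}(x_{0})$. For $x\in B_{\frac{3r}{2}}$ and $|y-x_{0}|\geq 2r$ we have $|x-y|\geq |y-x_{0}|-\frac{3r}{2}\geq \frac{1}{4}|y-x_{0}|$, so that $$
\int_{\mathbb{R}^{n}\setminus B_{2r}}\frac{dy}{|x-y|^{n+2s}}\leq 4^{n+2s}\int_{|y-x_{0}|\geq 2r}\frac{dy}{|y-x_{0}|^{n+2s}}=\frac{4^{n+2s}\omega_{n-1}}{2s}(2r)^{-2s}, $$ where $\omega_{n-1}$ denotes the measure of the unit sphere. Multiplying by $|\phi(x)|^{2}\leq 1$ and integrating over $x\in B_{\frac{3r}{2}}$ then gives a bound of the form $C(n,s)\,r^{-2s}\,|B_{\frac{3r}{2}}|\leq C(n,s)\,r^{n-2s}$.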
\begin{itemize} \item $A.3)$ We claim that $$\int_{\mathbb{R}^{n}}a(x)u(x)\eta(x)dx \leq \int_{B_{2r}}a(x)dx.$$ \end{itemize} Indeed, $$ \begin{array}{ll} \int_{\mathbb{R}^{n}}a(x)u(x)\eta(x)dx& = \int_{\mathbb{R}^{n}}a(x)u(x)\frac{\phi^{2}(x)}{u(x)+d}dx \\ & =\int_{B_{2r}}a(x)u(x)\frac{\phi^{2}(x)}{u(x)+d}dx \\ & =\int_{B_{2r}}a(x)\frac{u(x)}{u(x)+d} \phi^{2}(x)dx\\ & \leq \int_{B_{2r}}a(x)dx. \end{array} $$ We have used that $\mathrm{supp}(\eta)\subset B_{2r}$, that $\phi(x)\in [0,1]$ and that $\frac{u(x)}{u(x)+d} \leq 1$.
Note that the fourth integral vanishes, since $\eta=0$ outside $B_{2r}$, and the third integral is estimated exactly as in $A.2)$ by the symmetry of the integrand in $x$ and $y$. Hence the statements $A.1)$, $A.2)$ and $A.3)$ imply that $$ \begin{array}{ll}
&\int_{B_{2r}}\int_{B_{2r}}\left[\log\left(\frac{u(x)+d}{u(y)+d}\right)\right]^{2}\min\left\{\phi(y)^{2}, \phi(x)^{2}\right\}\frac{1}{|x-y|^{n+2s}}dxdy \\
&\leq C_{5}\int_{B_{2r}}\int_{B_{2r}}\frac{|\phi(x)-\phi(y)|^{2}}{|x-y|^{n+2s}}dxdy +C_{6}r^{n-2s}+\int_{B_{2r}}a(x)dx, \end{array} $$ for constants $C_{5},C_{6}>0$ depending only on $n$ and $s$. But $\phi=1$ in $B_{r}$ implies that \begin{equation}\label{eq1} \begin{array}{ll}
&\int_{B_{r}}\int_{B_{r}}\left|\log\left(\frac{d+u(x)}{d+u(y)}\right)\right|^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
&\leq C_{5}\int_{B_{2r}}\int_{B_{2r}}\frac{|\phi(x)-\phi(y)|^{2}}{|x-y|^{n+2s}}dxdy +C_{6}r^{n-2s}+\int_{B_{2r}}a(x)dx. \end{array} \end{equation} Finally, we will show that $$
\int_{B_{2r}}\int_{B_{2r}}\frac{|\phi(x)-\phi(y)|^{2}}{|x-y|^{n+2s}}dxdy \leq C_{7}r^{n-2s}. $$ Since $||D\phi||_{\infty}\leq Kr^{-1}$, $$ \begin{array}{ll}
\int_{B_{2r}}\int_{B_{2r}}\frac{|\phi(x)-\phi(y)|^{2}}{|x-y|^{n+2s}}dxdy &\leq K^{2}r^{-2}\int_{B_{2r}}\int_{B_{2r}}\frac{|x-y|^{2}}{|x-y|^{n+2s}}dxdy \\
&=K^{2}r^{-2}\int_{B_{2r}}\int_{B_{2r}}\frac{1}{|x-y|^{n+2(s-1)}}dxdy \\
&\leq C(n,s)K^{2}r^{-2}\frac{r^{2(1-s)}}{2(1-s)}|B_{2r}|=C_{7}r^{n-2s}, \end{array} $$ where $C_{7}$ depends only on $n$, $s$ and $K$. Replacing this last estimate in (\ref{eq1}), we obtain Lemma \ref{lm43}. \end{proof} Following the same ideas as in Theorem A.1 of \cite{bf2}, we will now prove the result announced at the beginning of this section. \begin{theorem}\label{th44} Suppose that $u \in H^{s}(\mathbb{R}^{n})$ and $a\geq 0$ with $a \in L^{1}_{loc}(\mathbb{R}^{n})$. We will assume that $$
\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))\left(v(x)-v(y)\right)}{|x-y|^{n+2s}}dxdy+ \int_{\mathbb{R}^{n}}a(x)u(x)v(x)dx\geq 0, $$ for all $v \in H^{s}(\mathbb{R}^{n})$ with $v \geq 0$ almost everywhere. Then $u > 0$ almost everywhere in $\mathbb{R}^{n}$ or $u=0$ almost everywhere in $\mathbb{R}^{n}$. \end{theorem}
\begin{proof} By Lemma \ref{lm41}, $u \geq 0$. Suppose that $x_{0}\in \mathbb{R}^{n}$ and $r>0$. Define $$ Z:=\{x \in B_{r}(x_{0}); u(x)=0\} $$
If $|Z|>0$, then we define $$ \begin{array}{cccl} F_{\delta}:&B_{r}(x_{0})& \longrightarrow & \mathbb{R} \\ &x& \longmapsto & \log\left(1+\frac{u(x)}{\delta}\right) \end{array} $$ for all $\delta>0$. We have $F_{\delta}(y)=0$ for all $y\in Z$. Therefore, if $x \in B_{r}(x_{0})$ and $y \in Z$ $$
|F_{\delta}(x)|^{2} = \frac{|F_{\delta}(x)-F_{\delta}(y)|^{2}}{|x-y|^{n+2s}}|x-y|^{n+2s} $$ Integrating with respect to $ y \in Z $ we get $$ \begin{array}{ll}
|Z||F_{\delta}(x)|^{2} &= \int_{Z}\frac{|F_{\delta}(x)-F_{\delta}(y)|^{2}}{|x-y|^{n+2s}}|x-y|^{n+2s}dy \\
&\leq (2r)^{n+2s}\int_{Z}\frac{|F_{\delta}(x)-F_{\delta}(y)|^{2}}{|x-y|^{n+2s}}dy, \end{array} $$ since $|x-y|\leq 2r$. Now, integrating with respect to $x \in B_{r}(x_{0})$, we get $$ \begin{array}{ll}
\int_{B_{r}(x_{0})}|F_{\delta}(x)|^{2}dx & \leq \frac{(2r)^{n+2s}}{|Z|} \int_{B_{r}(x_{0})}\int_{Z}\frac{|F_{\delta}(x)-F_{\delta}(y)|^{2}}{|x-y|^{n+2s}}dydx
\\ &\leq \frac{(2r)^{n+2s}}{|Z|} \int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\frac{|F_{\delta}(x)-F_{\delta}(y)|^{2}}{|x-y|^{n+2s}}dydx \\
& = \frac{(2r)^{n+2s}}{|Z|} \int_{B_{r}(x_{0})}\int_{B_{r}(x_{0})}\left|\log \left(\frac{\delta+u(x)}{\delta+u(y)}\right)\right|^{2}\frac{1}{|x-y|^{n+2s}}dxdy \\
& \leq \frac{(2r)^{n+2s}}{|Z|} \left(Cr^{n-2s}+\int_{B_{2r}}a(x)dx\right) \\
& = \frac{2^{n+2s}C}{|Z|} r^{2n}+ \frac{(2r)^{n+2s}}{|Z|} \int_{B_{2r}}a(x)dx:=L. \end{array} $$ The number $L$ does not depend on $\delta$. In short, we have proved that $$
\int_{B_{r}(x_{0})}\left|\log\left(1+\frac{u(x)}{\delta}\right)\right|^{2}dx \leq L, $$
where $L$ does not depend on $\delta$. If $u(x) \neq 0$ then $F_{\delta}(x) \rightarrow \infty$ as $\delta \rightarrow 0$. Hence, if $|B_{r}(x_{0}) \cap Z^{c}|>0$, Fatou's lemma gives $$
+\infty = \int_{B_{r}(x_{0})\cap Z^{c}}\liminf_{\delta \rightarrow 0}|F_{\delta}(x)|^{2}dx \leq \liminf_{\delta \rightarrow 0}\int_{B_{r}(x_{0})\cap Z^{c}}|F_{\delta}(x)|^{2}dx \leq L, $$ a contradiction.
Therefore $|Z|=|B_{r}|$ and $u=0$ almost everywhere in $B_{r}(x_{0})$. Now, we define $$ A=\left\{B_{r}(x);\ r>0,\ x \in \mathbb{R}^{n},\ u>0\ \mbox{a.e. in}\ B_{r}(x)\right\}, $$ $$ B=\left\{B_{r}(x);\ r>0,\ x \in \mathbb{R}^{n},\ u=0\ \mbox{a.e. in}\ B_{r}(x)\right\}, $$ $$ S=\bigcup_{V\in A}V $$ and $$ W=\bigcup_{V\in B}V. $$ $S$ and $W$ are open sets. Consider $x \in \mathbb{R}^{n}$ and $r>0$. By the first part of the proof, we have two possibilities: either $u \neq 0$ almost everywhere in $B_{r}(x)$, or $u=0$ almost everywhere in $B_{r}(x)$. In the first case, $u>0$ almost everywhere in $B_{r}(x)$ and $x \in S$; in the second case, $x \in W$. Consequently $$ \mathbb{R}^{n}= S \cup W. $$ Since $S$ and $W$ are disjoint open sets and $\mathbb{R}^{n}$ is connected, we must have $S=\emptyset$ or $W=\emptyset$. If $\mathbb{R}^{n}=S$ then $u>0$ almost everywhere in $\mathbb{R}^{n}$; if $\mathbb{R}^{n}=W$ then $u=0$ almost everywhere in $\mathbb{R}^{n}$. \end{proof}
\begin{corollary}\label{cl45} The solution $u$ found in Theorem \ref{th31} is positive in the following sense: $ u>0$ almost everywhere in $\mathbb{R}^{3}$. \end{corollary}
\begin{proof} Since $u$ is a critical point of $I$, for every $v\in H^{s}(\mathbb{R}^{3})$ with $v \geq0$ almost everywhere, we have $$ \begin{array}{ll}
\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{3+2s}}dxdy&+ \int_{\mathbb{R}^{3}}V(x)uvdx \\&+\int_{\mathbb{R}^{3}}\phi_{u}uvdx=\int_{\mathbb{R}^{3}}g(u)vdx \geq 0. \end{array} $$ If we define $a(x)=\frac{2}{\zeta}(V(x)+\phi_{u}(x))$, then $a \in L^{1}_{loc}(\mathbb{R}^{3})$, because $L^{2^{\ast}_{t}}(\mathbb{R}^{3})\subset L^{1}_{loc}(\mathbb{R}^{3})$ and $V$ is continuous. By $(V_{0})$ and Lemma \ref{lm1} we have $a(x)>0$ in $\mathbb{R}^{3}$. Thereby, $$
\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{3+2s}}dxdy + \int_{\mathbb{R}^{3}}a(x)uvdx \geq 0 $$ for all $v\in H^{s}(\mathbb{R}^{3})$ with $v\geq0$ almost everywhere. But $u \neq 0$. Then, Theorem \ref{th44} implies that $u>0$ almost everywhere in $\mathbb{R}^{3}$. \end{proof}
\begin{remark}\label{rm46} Define $\mathcal{N}=\left\{u \in H^{s}(\mathbb{R}^{3})\setminus\left\{0\right\};I'(u)u=0\right\}$, where $$
\begin{array}{ll}I(u)=\frac{\zeta(s)}{4}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy+\frac{1}{2}\int_{\mathbb{R}^{3}}V(x)u^{2}dx\\ &+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx-\int_{\mathbb{R}^{3}}F(u)dx. \end{array} $$ If $f$ satisfies $(f_{1})-(f_{5})$, then $$ I_{\infty}=\inf_{u\in \mathcal{N}} I(u) $$ coincides with the mountain pass level associated with $I$. \end{remark}
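For later reference, the constraint $I'(u)u=0$ defining $\mathcal{N}$ can be spelled out from the expression of $I$ above. Here we use that $u\mapsto\phi_{u}$ is quadratic, so the term $\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx$ contributes $\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx$ to the derivative in the direction $u$:

```latex
I'(u)u=\frac{\zeta(s)}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}
\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}\,dxdy
+\int_{\mathbb{R}^{3}}V(x)u^{2}\,dx
+\int_{\mathbb{R}^{3}}\phi_{u}u^{2}\,dx
-\int_{\mathbb{R}^{3}}f(u)u\,dx=0.
```

This is the form of the constraint used repeatedly in the estimates below.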
\begin{theorem} If $f$ satisfies $(f_{1})-(f_{5})$ and $V$ satisfies $(V_{0})$ and $(V_{1})$, then the problem $(P)$ has a ground state solution. \end{theorem}
\begin{proof} Consider the Euler--Lagrange functional $$ \begin{array}{cccl} I:&H^{s}(\mathbb{R}^{3})&\longrightarrow&\mathbb{R}\\ &u& \longmapsto &
\begin{array}{ll}\frac{\zeta(s)}{4}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}&\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy+\frac{1}{2}\int_{\mathbb{R}^{3}}V(x)u^{2}dx\\ &+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx-\int_{\mathbb{R}^{3}}F(u)dx. \end{array} \end{array} $$ Arguing as in the proof of Theorem \ref{th31}, we show that there is a nonzero solution $u$ of the system $(P)$, and that there is a Cerami sequence $\left\{w_{n}\right\}$ at the mountain pass level associated with $I$ converging to $u$. By Remark \ref{rm1} and Fatou's lemma $$ \begin{array}{ll} 4c&= \liminf_{n \rightarrow \infty}\left(4I(w_{n})-I'(w_{n})w_{n}\right) \\
& = \liminf_{n \rightarrow \infty}\left(||w_{n}||^{2}+ \int_{\mathbb{R}^{3}}H(w_{n})dx\right) \\
& \geq \liminf_{n \rightarrow \infty}||w_{n}||^{2}+ \liminf_{n \rightarrow \infty}\int_{\mathbb{R}^{3}}H(w_{n})dx \\
& \geq ||u||^{2}+\int_{\mathbb{R}^{3}}H(u)dx \\ & = 4I(u)-I'(u)u \\ & = 4I(u), \end{array} $$ where $H(u)=uf(u)-4F(u)$. Since $I'(u)u=0$ and $u \neq 0$, by definition $u \in \mathcal{N}$, so $I(u) \geq \inf_{v\in \mathcal{N}}I(v)$. On the other hand, the inequality above gives $I(u) \leq c$, and $c=\inf_{v\in \mathcal{N}}I(v)$ by Remark \ref{rm46}. Therefore $$ I(u)=\inf_{v\in \mathcal{N}}I(v). $$ \end{proof}
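For the reader's convenience, the identity $4I(w)-I'(w)w=\|w\|^{2}+\int_{\mathbb{R}^{3}}H(w)dx$ used in the computation above can be checked directly; writing $\|w\|^{2}$ for the quadratic part of $I$, as before, the quadratic, quartic and nonlinear terms contribute as follows:

```latex
4I(w)-I'(w)w
=2\|w\|^{2}+\int_{\mathbb{R}^{3}}\phi_{w}w^{2}dx-4\int_{\mathbb{R}^{3}}F(w)dx
-\left(\|w\|^{2}+\int_{\mathbb{R}^{3}}\phi_{w}w^{2}dx-\int_{\mathbb{R}^{3}}f(w)w\,dx\right)
=\|w\|^{2}+\int_{\mathbb{R}^{3}}H(w)dx.
```

Note in particular that the nonlocal quartic term cancels, which is why the coefficient $\frac{1}{4}$ in $I$ pairs with the factor $4$ in this combination.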
\section{Asymptotically Periodic Potential} In this section we study problem $(P)$ when $V$ satisfies condition $(V_{0})$ together with \begin{itemize} \item[$(V_3)$\ ] There is a function $V_{p}$ satisfying $(V_{1})$ such that $$
\lim_{|x|\rightarrow \infty}|V(x)-V_{p}(x)|=0; $$
\item[$(V_4)$\ ] $V(x)\leq V_{p}(x)$ for all $x$, and there is an open set $\Omega \subset \mathbb{R}^{3}$ with $|\Omega|>0$ such that $V(x)< V_{p}(x)$ in $\Omega$. \end{itemize} Here $V_{p}$ is a periodic continuous potential. This case follows the same ideas as those used for the Schr\"odinger-Poisson system with asymptotically periodic potential in \cite{amsss}; we include it here to make the work more complete. \begin{theorem} Suppose that $V$ satisfies $(V_{0})$, $(V_{3})$, $(V_{4})$ and $f$ satisfies $(f_{1})-(f_{5})$. Then problem $(P)$ has a ground state solution. \end{theorem}
\begin{proof} We can define in $H^{s}(\mathbb{R}^{3})$ the norm, $$
||u||_{p}=\left(\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy+\int_{\mathbb{R}^{3}}V_{p}(x)u^{2}dx\right)^{\frac{1}{2}}. $$ Consider the functional $I_{p}$ $$
I_{p}(u)=\frac{1}{2}||u||_{p}^{2}+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx - \int_{\mathbb{R}^{3}}F(u)dx. $$ We claim that there is $w_{p} \in H^{s}(\mathbb{R}^{3})$ such that $I_{p}'(w_{p})=0$ and $I_{p}(w_{p})=c_{p}$, where $c_{p}$ is the mountain pass level associated with $I_{p}$. We will consider another norm in $H^{s}(\mathbb{R}^{3})$. $$
||u||=\left(\frac{\zeta}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{(u(x)-u(y))^{2}}{|x-y|^{3+2s}}dxdy+\int_{\mathbb{R}^{3}}V(x)u^{2}dx\right)^{\frac{1}{2}}. $$ Then, we define $$
I(u)=\frac{1}{2}||u||^{2}+\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}u^{2}dx - \int_{\mathbb{R}^{3}}F(u)dx. $$ The functional $I$ has a mountain pass geometry. If $c$ is the mountain pass level associated with $I$, then $c<c_{p}$. Indeed, there is a $t_{\ast}>0$ such that $t_{\ast}w_{p} \in \mathcal{N}$ (see Remark \ref{rm46}) and it is unique with this property. Then, using $(V_{4})$, $$ \begin{array}{ll} c &\leq I(t_{\ast}w_{p})\\ &<I_{p}(t_{\ast}w_{p}) \\ & \leq \max_{t\geq 0}I_{p}(tw_{p}) \\ & = I_{p}(w_{p})=c_{p}. \end{array} $$ Consider a Cerami sequence $\{u_{n}\}_{n \in \mathbb{N}}$ at the mountain pass level $c$ associated with $I$. As in the periodic case, we prove that the sequence $\{u_{n}\}$ is bounded and therefore, up to a subsequence, converges weakly to some $u \in H^{s}(\mathbb{R}^{3})$. Additionally $I'(u)=0$. Now we will prove that $u \neq 0$. Suppose, by contradiction, that $u=0$.
Regarding the sequence $\left\{u_{n}\right\}$, the following limits hold: \begin{enumerate}
\item $\lim\limits_{n \rightarrow \infty}\int_{\mathbb{R}^{3}}|V(x)-V_{p}(x)|u_{n}^{2}dx= 0$;
\item $\lim\limits_{n \rightarrow \infty}\big|\,||u_{n}||-||u_{n}||_{p}\big|=0$;
\item $\lim\limits_{n \rightarrow \infty}|I_{p}(u_{n})-I(u_{n})|=0$;
\item $\lim\limits_{n \rightarrow \infty}|I_{p}'(u_{n})u_{n}-I'(u_{n})u_{n}|=0$. \end{enumerate}
We will prove (1). The limits (2), (3) and (4) are immediate consequences of (1). Consider $\epsilon>0$ and $A>0$ such that $||u_{n}||_{2}^{2}< A$ for all $n \in \mathbb{N}$. By $(V_{3})$, there is $R>0$ such that, for all $|x|>R$ we have $$
|V(x)-V_{p}(x)|< \frac{\epsilon}{2A}. $$ But $\{u_{n}\}$ converges weakly to $u=0$. Then $u_{n}\rightarrow 0$ in $L^{2}(B_{R}(0))$. This convergence implies that there is $n_{0} \in \mathbb{N}$ such that $$
\int_{B_{R}(0)}|V(x)-V_{p}(x)|u_{n}^{2}dx< \frac{\epsilon}{2} $$ for all $n \geq n_{0}$. Then, if $n\geq n_{0}$ $$ \begin{array}{ll}
&\int_{\mathbb{R}^{3}}|V(x)-V_{p}(x)|u_{n}^{2}dx\\
&=\int_{B_{R}(0)}|V(x)-V_{p}(x)|u_{n}^{2}dx+\int_{(B_{R}(0))^{c}}|V(x)-V_{p}(x)|u_{n}^{2}dx \\ &<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon. \end{array} $$ Consider $s_{n}>0$ such that $$ s_{n}u_{n} \in \mathcal{N}_{p} $$ for every $n \in \mathbb{N}$, where $\mathcal{N}_{p}=\left\{u \in H^{s}(\mathbb{R}^{3})\setminus\left\{0\right\};I_{p}'(u)u=0\right\}$. We claim that $\limsup_{n \rightarrow \infty}s_{n}\leq 1$. Indeed, otherwise there is $\delta>0$ such that, passing to a subsequence if necessary, we can assume that $s_{n}\geq 1+\delta$ for all $n \in \mathbb{N}$. By $(4)$ we have $I_{p}'(u_{n})u_{n}\rightarrow 0$, that is, $$ \begin{array}{ll}
||u_{n}||_{p}^{2}+\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx = \int_{\mathbb{R}^{3}}f(u_{n})u_{n}dx+o_{n}(1) \end{array} $$ From $s_{n}u_{n}\in \mathcal{N}_{p}$ we have $I_{p}'(s_{n}u_{n})u_{n}=0.$ Equivalently $$
s_{n}||u_{n}||_{p}^{2}+s_{n}^{3}\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx = \int_{\mathbb{R}^{3}}f(s_{n}u_{n})u_{n}dx $$ Therefore \begin{equation}\label{eq53}
\int_{\mathbb{R}^{3}}\left[\frac{f(s_{n}u_{n})}{(s_{n}u_{n})^{3}}-\frac{f(u_{n})}{(u_{n})^{3}}\right]u_{n}^{4}dx = \left(\frac{1}{s_{n}^{2}}-1\right)||u_{n}||_{p}^{2}+o_{n}(1) \leq o_{n}(1). \end{equation} If $\{u_{n}\}_{n \in \mathbb{N}}$ converges to $0$ in $L^{q}(\mathbb{R}^{3})$ for all $q \in (2,2^{\ast}_{s})$, then by Lemma \ref{lm1} $$
||u_{n}||^{2}\leq||u_{n}||^{2}+\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx = \int_{\mathbb{R}^{3}}f(u_{n})u_{n}dx+I'(u_{n})u_{n}, $$ and consequently $\{u_{n}\}$ would converge to $0$ in $H^{s}(\mathbb{R}^{3})$, contradicting the fact that $c>0$. Therefore, there are a sequence $\{y_{n}\} \subset \mathbb{Z}^{3}$, $R>0$ and $\beta>0$ such that $$ \int_{B_{R}(y_{n})}u_{n}^{2}dx \geq \beta>0. $$
Taking $v_{n}(x):=u_{n}(x+y_{n})$ we have $||v_{n}||=||u_{n}||$ and therefore we can assume that $\{v_{n}\}_{n \in \mathbb{N}}$ converges weakly to some $v \in H^{s}(\mathbb{R}^{3})$. Note that $$ \int_{B_{R}(0)}v^{2}dx \geq \beta>0 $$ The inequality $(\ref{eq53})$, Remark \ref{rm1} and Fatou's lemma imply that $$ \begin{array}{ll} 0<&\int_{\mathbb{R}^{3}}\left[\frac{f((1+\delta)v)}{((1+\delta)v)^{3}}-\frac{f(v)}{(v)^{3}}\right]v^{4}dx \\ & \leq \liminf_{n \rightarrow \infty}\int_{\mathbb{R}^{3}}\left[\frac{f((1+\delta)v_{n})}{((1+\delta)v_{n})^{3}}-\frac{f(v_{n})}{(v_{n})^{3}}\right]v_{n}^{4}dx \\ & \leq \liminf_{n \rightarrow \infty}\int_{\mathbb{R}^{3}}\left[\frac{f(s_{n}v_{n})}{(s_{n}v_{n})^{3}}-\frac{f(v_{n})}{(v_{n})^{3}}\right]v_{n}^{4}dx \\ & = \liminf_{n \rightarrow \infty}\int_{\mathbb{R}^{3}}\left[\frac{f(s_{n}u_{n})}{(s_{n}u_{n})^{3}}-\frac{f(u_{n})}{(u_{n})^{3}}\right]u_{n}^{4}dx \\ & \leq \liminf_{n \rightarrow \infty}o_{n}(1)=0. \end{array} $$
The last inequality is a contradiction. Therefore $\limsup_{n \rightarrow \infty}s_{n}\leq 1$. Now we will prove that, for $n$ large enough, $s_{n}> 1$. Suppose that the statement is false. In this case, passing to a subsequence if necessary, we can assume that $s_{n}\leq 1$ for all $n \in \mathbb{N}$. Note that, by $(f_{5})$, the function $H(u):=uf(u)-4F(u)$ is increasing in $|u|$. Then $$ \begin{array}{ll} 4c_{p}&=4\inf_{u \in \mathcal{N}_{p}}I_{p}(u) \\ & \leq 4I_{p}(s_{n}u_{n}) \\ & = 4I_{p}(s_{n}u_{n}) - I_{p}'(s_{n}u_{n})(s_{n}u_{n})\\
& = s_{n}^{2}||u_{n}||_{p}^{2}+\int_{\mathbb{R}^{3}}\left[f(s_{n}u_{n})(s_{n}u_{n})-4F(s_{n}u_{n})\right]dx \\
& \leq ||u_{n}||_{p}^{2}+\int_{\mathbb{R}^{3}}\left[f(u_{n})u_{n}-4F(u_{n})\right]dx \\
& \leq 4I(u_{n})-I'(u_{n})u_{n}+\int_{\mathbb{R}^{3}}|V(x)-V_{p}(x)|u_{n}^{2}dx. \end{array} $$ Letting $n \rightarrow \infty$ and using (1), this implies that $$ 4c_{p} \leq 4c, $$ which is false, because we have proved that $c<c_{p}$. Therefore $s_{n}> 1$ for $n$ large enough. Summing up, the sequence $\left\{s_{n}\right\}$ satisfies $$ 1 \leq \liminf_{n \rightarrow \infty}s_{n} \leq \limsup_{n \rightarrow \infty}s_{n} \leq 1, $$ and therefore \begin{equation}\label{eq54} \lim\limits_{n \rightarrow \infty}s_{n}=1. \end{equation} The Fundamental Theorem of Calculus implies that \begin{equation}\label{eq55} \begin{array}{ll} \int_{\mathbb{R}^{3}}F(s_{n}u_{n})dx-\int_{\mathbb{R}^{3}}F(u_{n})dx = \int_{1}^{s_{n}}\left[\int_{\mathbb{R}^{3}}f(\tau u_{n})u_{n}dx \right] d \tau. \end{array} \end{equation} Also, by $(f_{3})$ we obtain $C>0$ such that \begin{equation}\label{eq56}
\int_{\mathbb{R}^{3}}f(\tau u_{n})u_{n}dx \leq C(s_{n}||u_{n}||^{2}+s_{n}^{p-1}||u_{n}||^{p}) \end{equation} for all $\tau \in (1,s_{n})$. Since the sequence $\{u_{n}\}$ is bounded, by (\ref{eq54}), (\ref{eq55}) and (\ref{eq56}) $$ \int_{\mathbb{R}^{3}}F(s_{n}u_{n})dx-\int_{\mathbb{R}^{3}}F(u_{n})dx = o_{n}(1). $$ Then $$ \begin{array}{ll} &I_{p}(s_{n}u_{n})-I_{p}(u_{n})\\
& = \frac{(s_{n}^{2}-1)}{2}||u_{n}||_{p}^{2}+\frac{(s_{n}^{4}-1)}{4}\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx - \int_{\mathbb{R}^{3}}F(s_{n}u_{n})dx+\int_{\mathbb{R}^{3}}F(u_{n})dx \\ & = o_{n}(1), \end{array} $$ because $\{u_{n}\}$ is bounded and
$\int_{\mathbb{R}^{3}}\phi_{u_{n}}u_{n}^{2}dx = ||\phi_{u_{n}}||_{\dot{H}^{t}(\mathbb{R}^{3})}^{2} \leq C||u_{n}||^{4}$. By $(3)$ $$ \begin{array}{ll} c_{p}&\leq I_{p}(s_{n}u_{n}) \\ & = I_{p}(u_{n})+o_{n}(1)\\ &= I(u_{n})+o_{n}(1). \end{array} $$ Letting $n \rightarrow \infty$ we obtain $$ c_{p} \leq c. $$ But this last inequality is false, because we have proved that $c<c_{p}$. This contradiction comes from the assumption $u = 0$; it follows that $u$ is nontrivial. In particular $$ I(u) \geq \inf_{v \in \mathcal{N}}I(v). $$ As in the periodic case, $$ I(u)\leq c=\inf_{v \in \mathcal{N}}I(v). $$ Therefore $u$ is a ground state solution of the system $(P)$. \end{proof}
\begin{bibdiv} \begin{biblist}
\bib{as}{article}{
author={Alves, C.},
author={Souto, M.},
title={On existence of solution for a class of semilinear elliptic equations with nonlinearities that lies between two different powers},
journal={Abst. and Appl. Analysis},
volume={ID 578417},
date={2008},
pages={1--7},
review={ }
}
\bib{amsss}{article}{
author={Alves, C.},
author={Souto, M.},
author={Soares, S.},
title={Schr\"odinger-Poisson equations without Ambrosetti-Rabinowitz condition},
journal={J. Math. Anal. Appl.},
volume={377},
date={2011},
pages={584--592},
review={ }
}
\bib{ap}{article}{
author={Azzollini, A.},
author={Pomponio, A.},
title={Ground state solutions for the nonlinear Schr\"odinger-Maxwell equations },
journal={J. Math. Anal. Appl.},
volume={14},
date={2008},
pages={--},
review={doi:10.1016/jmaa.2008.03.057 }
} \bib{bf2}{article}{
author={Brasco, L.},
author={Franzina, G.},
title={Convexity Properties of Dirichlet Integrals and Picone-Type Inequalities},
journal={Kodai Math. J.},
volume={37},
date={2014},
pages={769-799},
review={}
}
\bib{dkp}{article}{
author={Di Castro, A.},
author={Kuusi, T.},
author={Palatucci, G.},
title={Local behavior of fractional p-minimizers},
journal={Annales de l'Institut Henri Poincare (C) Non Linear Analysis},
volume={},
date={2015},
pages={},
review={}
} \bib{dpv}{article}{
author={Di Nezza, E.},
author={Palatucci, G.},
author={Valdinoci, E.},
title={Hitchhiker's guide to the fractional Sobolev spaces},
journal={Bull. Sci. Math.},
volume={136},
date={2012},
pages={512-573},
review={ }
}
\bib{Ek}{article}{
author={Ekeland, I.},
title={Convexity Methods in Hamiltonian Mechanics},
journal={Springer Verlag},
volume={},
date={1990},
pages={},
review={}
} \bib{Ev}{article}{
author={Evans, L. C.},
title={Partial Differential Equations},
journal={American Mathematical Society},
date={2010}
}
\bib{G}{article}{
author={Gaetano, S.},
title={Multiple positive solutions for a Schr\"odinger-Poisson-Slater system },
journal={J. Math. Analysis and Appl., Issue 1},
volume={365},
date={2010},
pages={288--299},
review={doi:10.1016/j.jmaa.2009.10.061 }
}
\bib{gsd}{article}{
author={Gaetano, S.},
author={Squassina, M.},
author={D'avenia, P.},
title={On Fractional Choquard Equations },
journal={Math. Models Methods Appl. Sci.},
volume={25},
date={2015},
pages={1447-1476},
review={}
}
\bib{bf}{article}{
author={A. R. Giammetta},
title={Fractional Schr\"odinger-Poisson-Slater system in one dimension},
journal={arXiv:1405.2796v1.},
volume={}
date={}
pages={}
review={}
}
\bib{jj}{article}{
author={Jeanjean, L.},
title={On the existence of bounded Palais-Smale sequences and application to a Landesman-Lazer type problem set on $\mathbb R^3$},
journal={Proc. Roy. Soc. Edinburgh Sect. A},
volume={129},
date={1999},
pages={787--809},
review={}
}
\bib{L1}{article}{
author={Lions, P. L.},
title={The concentration-compactness principle in the calculus of variations. The locally compact case, part 2},
journal={Ann. Inst. H. Poincaré Anal. Non Linéaire},
volume={I},
date={1984},
pages={223--283},
review={}
}
\bib{Ruiz}{article}{
author={Ruiz, D.},
title={The Schr\"odinger-Poisson equation under the effect of a nonlinear local term },
journal={J. Funct. Analysis},
volume={237},
date={2006},
pages={655--674},
review={doi:10.1016/j.jfa.2006.04.005 }
}
\bib{W}{article}{
author={Willem, M.},
title={Minimax Theorems},
journal={Birkhauser},
date={1996}}
\bib{zhang}{article}{
author={Zhang, J.},
title={Existence and Multiplicity results for the Fractional Schr\"odinger-Poisson Systems},
journal={arXiv:1507.01205v1.},
volume={},
date={},
pages={},
review={ }
}
\bib{zjs}{article}{
author={Zhang, J.},
author={Do Ó, J. M.},
author={Squassina, M.},
title={Fractional Schr\"odinger-Poisson Systems with a general subcritical or critical nonlinearity},
journal={Adv. Nonlinear Stud.},
volume={16},
date={2016},
pages={15-30},
review={}
}
\bib{zjs2}{article}{
author={Zhang, J.},
author={Do Ó, J. M.},
author={Squassina, M.},
title={Schr\"odinger-Poisson systems with a general critical nonlinearity},
journal={Communications in Contemporary Mathematics},
volume={},
date={2015},
pages={},
review={}
}
\bib{zz}{article}{
author={Zhao, F.},
author={Zhao, L.},
title={Positive solutions for the nonlinear Schr\"odinger-Poisson equations with the critical exponent },
journal={Nonlinear Analysis, Theory, Meth. and Appl.},
volume={},
date={2008},
pages={--},
review={doi:10.1016/na.2008.02.116 }
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{Limit holomorphic sections and Donaldson's construction of symplectic submanifolds}
Donaldson proved (in \cite{Do96}) that if $L$ is a suitable positive line bundle over a closed symplectic manifold $X$, then, for $k$ sufficiently large, the tensor power $L^k$ admits sections whose zero sets are symplectic submanifolds of $X$ (the sections are approximately holomorphic and they satisfy some uniform transversality condition). The construction relies on the following observation: the local geometry of the bundles $L^k$ near any point $p\in X$, after a normalization, converges to a model holomorphic Hermitian line bundle $K$ over (some ball in) the tangent space $T_p X$. In this note, we will describe this phenomenon in detail and exploit it to reformulate Donaldson's theorem as a compactness result: near each point $p$, the sections he obtains accumulate to holomorphic sections of $K$ (that we call ``limit sections'') and their uniform transversality properties correspond to transversality properties of their limits. Of course, similar considerations apply to all constructions based on Donaldson's techniques (e.g. \cite{Au01}, \cite{IbMaPr00}).
{\bf Acknowledgements.} I want to thank Emmanuel Giroux for many important suggestions.
\section{Limit sections}
Let $X=(X,\, \omega,\, J,\, g)$ be a closed almost-K\"{a}hler manifold. Hence $\omega$ is a symplectic form, $J$ is an almost-complex structure and $g$ is a Riemannian metric, satisfying the following compatibility condition: $g(V,W) = \omega(V,JW)$. Endow $X$ with a prequantization $L$ (a prequantization is a Hermitian line bundle over $X$ equipped with a unitary connection of curvature $-i2\pi \omega$).
The charts we will use are normal coordinates with respect to the renormalized metric $g_k=kg$. Let $B\subset \mathbb{C}^n$ denote the unit ball, with $n= \frac{1}{2}\dim_{\mathbb{R}} X$. Fix, for every large integer $k$, a chart $\varphi_k : B \rightarrow X$ satisfying two conditions:
(1) The chart $\varphi_k$ is an exponential map for the metric $g_k$ (i.e. given any unit vector $v\in \mathbb{C}^n$, the curve $t \mapsto \varphi_k (tv)$ is a geodesic whose velocity vector has $g_k$-length $1$).
(2) The differential $D\varphi_k (0)$ is a $\mathbb{C} - $linear map. \\ \\ Since $\varphi_k$ is a local diffeomorphism, one can transfer to $B$ the renormalized almost-K\"{a}hler structure $(\omega_k = k\omega,\, J,\, g_k =kg)$ and it is well known that this almost-K\"{a}hler structure tends to the standard flat K\"{a}hler structure on $B$, as $k \rightarrow \infty$, in the ${\cal C}^{\infty} -$topology.
The following observation is well-known to experts: the local geometry of the bundle $L^k$ converges to the geometry of a model line bundle. Fix some unitary radially flat isomorphism between the pullback line bundle $\varphi_k^* L^k$ and the trivial Hermitian line bundle $B \times \mathbb{C} \rightarrow B$. Hence, the connection of $\varphi_k^* L^k$ induces a unitary connection $\nabla^k$ on $B \times \mathbb{C} \rightarrow B$. As $k \rightarrow \infty$, the connection $\nabla^k$ tends to some model connection $\nabla^{\infty}$ on $B \times \mathbb{C} \rightarrow B$, defined by: $$ \nabla^{\infty} = d- i\pi \sum_{\alpha = 1}^{n} (x_{\alpha} dy_{\alpha} - y_{\alpha} dx_{\alpha}). $$
There is a more conceptual description of $\nabla^{\infty}$: the model connection $\nabla^{\infty}$ is the only radially trivial connection with curvature $-i2 \pi \sum_{\alpha = 1}^{n} dx_{\alpha} \wedge dy_{\alpha}$. \\ \\
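As a quick check that $\nabla^{\infty}$ has the stated curvature, one computes directly:

```latex
F_{\nabla^{\infty}}
= d\Big(-i\pi \sum_{\alpha=1}^{n}(x_{\alpha}\,dy_{\alpha}-y_{\alpha}\,dx_{\alpha})\Big)
= -i\pi \sum_{\alpha=1}^{n}\big(dx_{\alpha}\wedge dy_{\alpha}-dy_{\alpha}\wedge dx_{\alpha}\big)
= -i2\pi \sum_{\alpha=1}^{n} dx_{\alpha}\wedge dy_{\alpha}.
```

This is the curvature form corresponding, after the normalization, to the prequantization condition $-i2\pi\omega$.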
Warning. Let $s$ be a section of the trivial bundle $B \times \mathbb{C} \rightarrow B$. We say that $s$ is holomorphic if it is holomorphic for the connection $\nabla^{\infty}$. Although the section $s$ is a function, this is not the usual notion of holomorphic function. For example, the function $\exp\left( - \frac{\pi}{2} \sum_{\alpha = 1}^{n} | z_{\alpha} |^2 \right)$ is a holomorphic section and, more generally, a section $s$ of $B \times \mathbb{C} \rightarrow B$ is holomorphic if and only if the function
$s \exp\left( \frac{\pi}{2} \sum_{\alpha = 1}^{n} | z_{\alpha} |^2 \right)$ is holomorphic in the usual sense. \\ \\ This set of tools is well-known to experts. We will use it to study sequences of sections. The following two definitions play an important role in our reformulation of Donaldson's theory.
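The equivalence stated in the warning can be verified directly. Since $\sum_{\alpha}(x_{\alpha}dy_{\alpha}-y_{\alpha}dx_{\alpha})=\frac{1}{2i}\sum_{\alpha}(\bar z_{\alpha}dz_{\alpha}-z_{\alpha}d\bar z_{\alpha})$, the $(0,1)$-part of $\nabla^{\infty}$ is

```latex
\overline{\partial}^{\infty}
=\overline{\partial}+\frac{\pi}{2}\sum_{\alpha=1}^{n} z_{\alpha}\,d\bar z_{\alpha},
\qquad\text{so that}\qquad
\overline{\partial}^{\infty}\Big(f\,e^{-\frac{\pi}{2}\sum_{\alpha}|z_{\alpha}|^{2}}\Big)
=(\overline{\partial} f)\,e^{-\frac{\pi}{2}\sum_{\alpha}|z_{\alpha}|^{2}}.
```

Hence a section $s$ is holomorphic for $\nabla^{\infty}$ exactly when $f=s\,e^{\frac{\pi}{2}\sum_{\alpha}|z_{\alpha}|^{2}}$ is holomorphic in the usual sense, as claimed.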
\begin{defn}\label{Def1} For every sufficiently large integer $k$, let $s_k$ be a ${\cal C}^{\infty} -$smooth section of $L^k$. We say that the sequence $(s_k)$ is {\it renormalizable} if it satisfies the following compactness condition.
Let $(k_l)$ be a subsequence of the positive integers. For every sufficiently large integer $l$, let $\varphi_l$ be a chart satisfying conditions (1) for $k_l$ (that is, $\varphi_l$ is an exponential map for $g_{k_l}$) and (2) and let $j_l$ be a unitary radially flat isomorphism between the trivial line bundle $B \times \mathbb{C} \rightarrow B$ and the pullback bundle $\varphi_l^* L^{k_l}$. If $\sigma_l$ denotes the section of the trivial bundle $B \times \mathbb{C} \rightarrow B$ corresponding to the pullback section $\varphi_l^*s_{k_l}$ via the isomorphism $j_l$, then the sequence $(\sigma_l)$ has a subsequence $(\sigma_{l_m})$ which converges over $B$ for the smooth compact-open topology. \end{defn}
\begin{defn}\label{Def2} The limit of $(\sigma_{l_m})$ is called a {\it limit section} of the renormalizable sequence $(s_k)$. Hence, a limit section is a section of $B \times \mathbb{C} \rightarrow B$. \end{defn}
We emphasize that we {\it don't} assume that all charts $\varphi_l$ have the same center. \\ \\ Let $(s_k)$ be a renormalizable sequence. If the sections $s_k$ are holomorphic then the limit sections are holomorphic. More generally, let's state an informal principle: if the sections $s_k$ satisfy some closed condition then one may infer that the limit sections satisfy some corresponding condition. We won't be more specific about this principle (we won't even explain the meaning of the word {\it closed}).
Of course, for open conditions the principle goes in the opposite direction. For example, if all limit sections are transverse to $0$ then $s_k$ is transverse to $0$ for every sufficiently large integer $k$. If, in addition, the zero sets of the limit sections are symplectic submanifolds of $B$ (for the symplectic form $\sum_{\alpha = 1}^{n} dx_{\alpha} \wedge dy_{\alpha}$), then for every sufficiently large integer $k$, the zero set of $s_k$ is a symplectic submanifold of $X$. Note that every complex submanifold is symplectic. Hence one gets the following proposition.
\begin{prop}{\label{o}} For every sufficiently large integer $k$, let $s_k$ be a ${\cal C}^{\infty} -$smooth section of $L^k$. Suppose that $(s_k)$ is a renormalizable sequence and suppose that every limit section of the sequence $(s_k)$ is holomorphic and transverse to $0$. Then for every sufficiently large integer $k$, the zero set of $s_k$ is a codimension $2$ symplectic submanifold in $X$. \end{prop}
In the integrable case ($X$ is K\"{a}hler), the sections $s_k$ we will consider are often holomorphic whereas in the non-integrable case ($X$ almost-K\"{a}hler), typically, the limit sections are holomorphic but the sections $s_k$ aren't.
\section{Transversality theorems}
For comparison, let us state a consequence of the Kodaira embedding theorem.
\begin{thm}{\label{BK}} Suppose $J$ is integrable. Then, for every sufficiently large integer $k$, there exists a holomorphic section $s_k$ of $L^k$ which is transverse to $0$. \end{thm} Proof. Kodaira's theorem implies that, for every sufficiently large $k$, the bundle $L^k$ has no base points. Hence almost every section is transverse to $0$, by the Bertini theorem. \\ \\ \indent Let us first state Donaldson's theorem in the integrable case.
\begin{thm}{\label{DI}} Suppose $J$ is integrable. Then, for every integer $k\geq 1$, there exists a holomorphic section $s_k$ of $L^k$ such that:
(1) The sequence $(s_k)$ is renormalizable.
(2) The limit sections of the sequence $(s_k)$ are transverse to $0$. \end{thm} In the integrable case, one may describe Donaldson's theorem as an elaborate variant of Theorem {\ref{BK}}. The variant has the advantage of being easily transferable to symplectic geometry. Of course this was Donaldson's main goal and most applications of his techniques are symplectic and contact results. It is known that if $X$ is almost-K\"{a}hler, then, in general, one cannot get holomorphic sections. Nevertheless, we get {\it asymptotically holomorphic} sections. In our reformulation, the definition is quite simple: a renormalizable sequence of smooth sections is {\it asymptotically holomorphic} if every limit section is holomorphic for the connection $\nabla^{\infty}$. Note that this definition is weaker than the usual quantitative definition. However, in practice, the following version of Donaldson's theorem is sufficient for many corollaries.
\begin{thm}{\label{D}} For every integer $k\geq 1$, there exists a ${\cal C}^{\infty} -$smooth section $s_k$ of $L^k$ such that:
(1) The sequence $(s_k)$ is renormalizable.
(2) The limit sections of the sequence $(s_k)$ are holomorphic and transverse to $0$. \end{thm}
(Hence, for every sufficiently large integer $k$, the section $s_k$ is transverse to $0$ and, by Proposition \ref{o}, the zero set of $s_k$ is a codimension $2$ symplectic submanifold.) \\ \\ \indent Proof of Theorem {\ref{DI}} and Theorem {\ref{D}}. Donaldson's techniques (in \cite{Do96}, see also \cite{Au97} and \cite{Do99}) produce sections $s_k$ satisfying two families of estimates: \begin{eqnarray*}
\| s_k \|_{{\cal C}^r, g_k} & = & O (1) \\
\| \overline{\partial} s_k \|_{{\cal C}^r , g_k} & = & O (k^{-\frac{1}{2}}) \end{eqnarray*} (for every natural integer $r$), and a uniform transversality condition: \begin{eqnarray*}
\min_{p\in X} \left( \| s_k(p) \| + \| \nabla s_k(p) \|_{g_k} \right) & \geq & \eta \end{eqnarray*} where one calculates the ${\cal C}^r-$norm and the norm of $\nabla s_k(p)$ with the renormalized metric $g_k = kg$. Here $\eta$ denotes a positive number, independent of $k$.
Recall the notations of Definition {\ref{Def1}}. The section $\sigma_l$ of $B \times \mathbb{C} \rightarrow B$ corresponds to the pull-back section $\varphi_l^*s_{k_l}$ where $\varphi_l$ is the exponential map for the renormalized metric $g_{k_l}$. The first estimate implies the following estimate: $$
\| \sigma_{l} \|_{{\cal C}^r} = O (1) $$ on the unit ball $B$. Hence, some subsequence of $(\sigma_l)$ converges in the smooth topology and the sequence $(s_k)$ is renormalizable.
The connection of $\varphi_l^* L^{k_l}$ induces a unitary connection $\nabla^{k_l}$ on $B \times \mathbb{C} \rightarrow B$. Let $\overline{\partial}^{k_l}$ denote the $(0,1)-$part of $\nabla^{k_l}$ and let $\overline{\partial}^{\infty}$ denote the $(0,1)-$part of the limit connection $\nabla^{\infty}$. Donaldson's second estimate implies the following estimate: $$
\| \overline{\partial}^{k_l} \sigma_{l} \|_{{\cal C}^r} = O (k_l^{-\frac{1}{2}}). $$ Since $\nabla^{k_l}$ tends to $\nabla^{\infty}$, the $(0,1)-$part $\overline{\partial}^{\infty} \sigma_l$ tends to $0$ and the sequence $(s_k)$ is asymptotically holomorphic.
The third estimate implies the following estimate: $$
\min_{p\in B} \left( \| \sigma_{l}(p) \| + \| \nabla \sigma_{l}(p) \| \right) \geq \frac{\eta}{2}. $$ Every limit $\sigma_{\infty}$ of a subsequence $(\sigma_{l_m})$ (see Definition \ref{Def2}) satisfies the same estimate: $$
\min_{p\in B} \left( \| \sigma_{\infty}(p) \| + \| \nabla \sigma_{\infty}(p) \| \right) \geq \frac{\eta}{2}. $$ Hence $\sigma_{\infty}$ is transverse to $0$. This completes the proof of Theorem \ref{D}. In the integrable case, Donaldson's sections are holomorphic and the proof of Theorem \ref{DI} is similar. \\ \\ \indent As noted by Donaldson, the asymptotic transversality property provides bounds for the Riemannian geometry of the zero set, see \cite[Corollary 33]{Do96}. For example, one gets the following result.
\begin{prop} Let $(s_k)$ be a renormalizable sequence. Suppose every limit section is transverse to $0$. Let $Y_k$ be the zero set of $s_k$. For every sufficiently large integer $k$, if $p$ lies in $Y_k$ and $A\subset T_p Y_k$ is a 2-plane, then the sectional curvature $K_{Y_k,g_k} (p,A)$ of $Y_k$ at $(p,A)$ for the metric $g_k$ satisfies the following estimate: $$
| K_{Y_k,g_k} (p,A) | \leq C $$ where the bound $C$ is independent of $k$, $p$ and $A$. \end{prop}
(Hence, if one prefers to calculate with the metric $g$, the sectional curvature is bounded by some linear function of $k$ because $K_{Y_k,g} (p,A) = k\, K_{Y_k,g_k} (p,A)$.) \\ \\
\indent Proof. Define $u_k= \max_{(p,A)} | K_{Y_k,g_k} (p,A) |$. Since $Y_k$ is compact, there exists a point $p_k \in Y_k$ and a 2-plane $A_k \subset T_{p_k} Y_k$ satisfying the following equation:
$$ | K_{Y_k,g_k} (p_k,A_k ) | = u_k.$$
Let $\varphi_k$ be a chart centered at $p_k$ satisfying conditions (1) and (2) of Definition {\ref{Def1}}. Consider the 2-plane $A_k' =(d\varphi_k(0))^{-1} (A_k) \subset \mathbb{C}^n$.
Since the set of 2-planes of $\mathbb{C}^n$ is compact, every subsequence of $(A_k')$
admits a convergent subsubsequence $(A_{k_l}')$.
The corresponding sequence $\sigma_l$
(using the notations of Definition {\ref{Def1}})
admits a limit $\sigma_{\infty}$ which is transverse to $0$. Therefore the zero set $Y_{\infty} \subset B$ of $\sigma_{\infty}$ is a submanifold and the local geometry of the corresponding submanifolds $(Y_{k_l})$ converges to the geometry of $Y_{\infty}$. In particular, the sequence
$(u_{k_l})$ tends to $| K_{Y_{\infty},\mu} (0,A'_{\infty}) |$ where $\mu$ is the standard Euclidean metric on $\mathbb{C}^n$ and $A'_{\infty}$ is the limit 2-plane.
Hence every subsequence of the sequence $(u_k)$ admits a convergent subsubsequence and therefore $(u_k)$ is a bounded sequence.
\end{document}
\begin{document}
\parindent = 0pt \parskip = 8pt
\begin{abstract}
Two new Banach space moduli, involving weakly convergent sequences, are introduced. It is shown that if either of these moduli is strictly less than 1 then the Banach space has Property($K$).
\end{abstract}
\title{$D(X) < 1$ or $\hat{D}(X) < 1$ implies Property($K$)}
\section{Introduction}
A Banach space, $X$, has the weak fixed point property, w-FPP, if every nonexpansive mapping, $T$, on every weakly compact convex nonempty subset, $C$, has a fixed point. The past forty or so years have seen a number of Banach space properties shown to imply the w-FPP. Some such properties are weak normal structure, Opial's condition, Property($K$) and Property($M$). Here two new moduli are introduced and linked to one of these properties, Property($K$). More information on the w-FPP and associated Banach space properties and moduli can be found in [3].
The key definitions and terminology are below.
\begin{definition}
Sims, [6]
A Banach space $X$ has property($K$) if there exists $K \in [0, 1)$ such that whenever $x_n \rightharpoonup 0, \lim_{ n \rightarrow \infty} \| x_n \| = 1 \mbox{ and } \liminf_{n \rightarrow \infty} \| x_n - x \| \leqslant 1 \mbox{ then } \| x \| \leqslant K.$
\end{definition}
\begin {definition}
Opial [5]
A Banach space has Opial's condition if
\[ x_n \rightharpoonup 0 \ \mbox {and } x \not = 0 \mbox { implies } \limsup_n \| x_n \| < \limsup_n \| x_n - x \|. \]
The condition remains the same if both the $\limsup$s are replaced by $\liminf$s.
\end {definition}
Later a modulus was introduced to gauge the strength of Opial's condition and a stronger version of the condition was defined.
\begin{definition}
Lin, Tan and Xu, [4]
For $c \geqslant 0$, Opial's modulus is
\[ r_X(c) = \inf \{ \liminf_{n\rightarrow \infty} \| x_n - x \| - 1: \| x \| \geqslant c, x_n \rightharpoonup 0 \mbox{ and }\liminf_{n\rightarrow \infty} \| x_n \| \geqslant 1 \}. \]
$X$ is said to have uniform Opial's condition if $r_X(c) > 0$ for all $c > 0.$ See [4] for more details.
\end{definition}
There is a direct link between Opial's modulus and Property($K$). Dalby proved in [1] that $r_{X}(1) > 0$ is equivalent to $X$ having Property($K$). This will be used in the next section.
The two new moduli are defined next.
\begin{definition}
Let $X$ be a Banach space. Let
\[ D(X) = \sup\{ \liminf_{n \rightarrow \infty}\| x_n - x \|: x_n \rightharpoonup x, \| x_n \| = 1 \mbox{ for all } n\} \] and let
\[ \hat{D}(X) = \sup\{ \| x \|: x_n \rightharpoonup x, \| x_n \| = 1 \mbox{ for all } n\}. \]
\end{definition}
So $0 \leqslant D(X) \leqslant 2$ and $ \hat{D}(X) \leqslant 1.$
Some values for $D(X)$ are $D(\ell_1) = 0, D(c_0) = 1 \mbox{ and } D(\ell_p) = 2^{1/p}.$
The reason these two moduli are introduced is that in [2] Dalby showed that if, in the dual $X^*,$ a certain weak* convergent sequence, $(w_n^*),$ satisfies either one of two properties then $X$ satisfies the w-FPP. Let $w_n^* \stackrel{*}{\rightharpoonup} w^* \mbox { where } \| w^* \| \leqslant 1.$ If $w^*$ lies `deep' within the dual unit ball, or $w_n^* - w^*$ is eventually `deep' within the dual unit ball, then $X$ has the w-FPP. So $D(X^*) < 1$ or $\hat{D}(X^*) < 1$ ensures this.
The w-FPP is known to be separably determined so all Banach spaces are assumed to be separable.
\section{Results}
\begin{proposition}
Let $X$ be a separable Banach space. If $D(X) < 1$ then $r_X(1) > 0.$ That is, $X$ has Property($K$).
\end{proposition}
\begin{proof}
Let $x_n \rightharpoonup 0, \liminf_{n \rightarrow \infty}\| x_n \| \geqslant 1 \mbox{ and } \| x \| \geqslant 1.$
Since $x_n + x \rightharpoonup x,$ the weak lower semi-continuity of the norm gives $\liminf_{n \rightarrow \infty}\| x_n + x \| \geqslant \| x \| \geqslant 1.$ By taking subsequences if necessary we may assume that $\| x_n + x \| \not = 0$ for all $n$ and that $\| x_n + x \|$ converges to $\liminf_{n \rightarrow \infty}\| x_n + x \|.$
Now $ \left \| \frac{\displaystyle {x_n + x}}{ \displaystyle {\| x_n + x \|}} \right \| = 1 \mbox{ for all } n \mbox{ and } {\frac{\displaystyle{x_n + x}}{ \displaystyle{\| x_n + x \| }}} \rightharpoonup \frac{\displaystyle{x}}{ \displaystyle \liminf_{n \rightarrow \infty}\| x_n + x \| }.$
For ease of reading let $\alpha = \liminf_{n \rightarrow \infty}\| x_n + x \|.$
Then, assuming as we may (by passing to a subsequence) that $\| x_n + x \| \rightarrow \alpha,$
\begin{align*}
\liminf_{n \rightarrow \infty}\left \| \frac{x_n + x}{\| x_n + x \|} - \frac{x}{\alpha} \right \|
& = \frac{1}{\alpha} \liminf_{n \rightarrow \infty}\left \| x_n + x - \| x_n + x \| \frac{x}{\alpha} \right \| \\
& = \frac{1}{\alpha} \liminf_{n \rightarrow \infty}\left \| x_n - \left (\frac{\| x_n + x \|}{ \alpha} - 1 \right ) x \right \| \\
& \geqslant \frac{1}{\alpha} \left ( \liminf_{n \rightarrow \infty} \| x_n \| - \lim_{n \rightarrow \infty} \left | \frac{\| x_n + x \|}{ \alpha} - 1 \right | \| x \| \right ) \\
& = \frac{1}{\alpha} \liminf_{n \rightarrow \infty} \| x_n \| \\
& \geqslant \frac{1}{\alpha} \\
& = \frac{1}{\liminf_{n \rightarrow \infty}\| x_n + x \|}.
\end{align*}
We have
\[ D(X) \geqslant \liminf_{n \rightarrow \infty}\left \| \frac{\displaystyle x_n + x}{\displaystyle \| x_n + x \|} - \frac{\displaystyle x}{\displaystyle \liminf_{n \rightarrow \infty}\| x_n + x \|} \right \| \geqslant \frac{1}{\displaystyle \liminf_{n \rightarrow \infty} \| x_n + x \|}. \qquad \dag \]
So $\liminf_{n \rightarrow \infty} \| x_n + x \| \geqslant \frac{\displaystyle 1}{\displaystyle D(X)}.$
This means that $r_X(1) + 1 \geqslant \frac{\displaystyle 1}{\displaystyle D(X)} \mbox{ or } r_X(1) \geqslant \frac{\displaystyle 1}{\displaystyle D(X)} - 1 > 0.$
\end{proof}
A second way to prove this proposition is via a contradiction as shown below.
\begin{proof}
Assume that $D(X) < 1 \mbox{ and } r_X(1) \not > 0.$ Since weak lower semi-continuity of the norm gives $r_X(1) \geqslant 0,$ it follows that $r_X(1) = 0.$
Given $\epsilon > 0,$ there exist a sequence $(x_n) \mbox{ in } X \mbox{ with } x_n \rightharpoonup 0 \mbox{ and } \liminf_{n \rightarrow \infty}\| x_n \| \geqslant 1,$ and a point $x \in X \mbox{ with } \| x \| \geqslant 1, \mbox{ such that } \liminf_{n \rightarrow \infty}\| x_n + x \| < 1 + \epsilon.$
Therefore $1 \leqslant \| x \| \leqslant \liminf_{n \rightarrow \infty}\| x_n + x \| < 1 + \epsilon.$ Apart from the last inequality, the set up is the same as in the previous proof, and this proof follows the same pathway. Jumping to the line labeled with \dag\ in the previous proof,
\[ D(X) \geqslant \frac{1}{\displaystyle \liminf_{n \rightarrow \infty} \| x_n + x \|} > \frac{1}{ \displaystyle 1 + \epsilon }. \]
Letting $\epsilon \rightarrow 0 \mbox{ gives } D(X) \geqslant 1 \mbox{ but } D(X) < 1.$
So the desired contradiction is arrived at.
\end{proof}
Now we turn to the second modulus.
\begin{proposition} Let $X$ be a separable Banach space. If $\hat{D}(X) < 1$ then $r_X(1) > 0.$ That is, $X$ has Property($K$). \end{proposition}
\begin{proof}
Let $x_n \rightharpoonup 0, \liminf_{n \rightarrow \infty}\| x_n \| \geqslant 1 \mbox{ and } \| x \| \geqslant 1.$
Now $x_n + x \rightharpoonup x, \mbox{ so } \liminf_{n \rightarrow \infty}\| x_n + x \| \geqslant \| x \| \geqslant 1.$ Without loss of generality we may assume that $\| x_n + x \| \not = 0$ for all $n$ and, passing to a subsequence, that $\| x_n + x \|$ converges to $\liminf_{n \rightarrow \infty}\| x_n + x \|.$
Then $\left \| \frac{\displaystyle x_n + x}{\displaystyle \|x_n + x \|} \right \| = 1 \mbox{ for all } n \mbox{ and } \frac{\displaystyle x_n + x}{\displaystyle \|x_n + x \|} \rightharpoonup \frac{\displaystyle x}{ \displaystyle \liminf_{n \rightarrow \infty}\| x_n + x \| }.$
Hence $1 > \hat{D}(X) \geqslant \frac{\displaystyle \| x \|}{\displaystyle \liminf_{n \rightarrow \infty}\| x_n + x \| }$ leading to
\begin{align*}
\| x \| & \leqslant \liminf_{n \rightarrow \infty}\| x_n + x \| \, \hat{D}(X) \\
\liminf_{n \rightarrow \infty}\| x_n + x \| & \geqslant \frac{\| x \|}{\hat{D}(X)} \geqslant \frac{1}{\hat{D}(X)}.
\end{align*}
Thus $r_X(1) + 1 \geqslant \frac{\displaystyle 1}{\displaystyle \hat{D}(X)},$ and so $r_X(1) \geqslant \frac{\displaystyle 1}{\displaystyle \hat{D}(X)} - 1 > 0.$
\end{proof}
A second way to prove this proposition is by finding a value of $K$ for Property($K$).
\begin{proof}
Let $x_n \rightharpoonup 0, \lim_{n \rightarrow \infty}\| x_n \| = 1 \mbox{ and } \liminf_{n \rightarrow \infty}\| x_n - x \| \leqslant 1.$ Discarding finitely many terms so that $x_n \neq 0,$ and replacing $x_n$ by $x_n/\| x_n \|,$ alters none of these hypotheses, so we may assume $\| x_n \| = 1$ for all $n.$
If $\liminf_{n \rightarrow \infty}\| x_n - x \| = 0$ then because $\| x \| \leqslant \liminf_{n \rightarrow \infty}\| x_n - x \|$ we have $x = 0$ and $K$ can be taken as zero.
So assume $\liminf_{n \rightarrow \infty}\| x_n - x \| > 0$ and by taking subsequences if necessary, assume $\| x_n - x \| \not = 0$ for all $n.$
Using the same argument as in the previous proof
\[\| x \| \leqslant \liminf_{n \rightarrow \infty}\| x_n - x \| \hat{D}(X) \leqslant \hat{D}(X) < 1.\]
So $K$ can be taken as $\hat{D}(X).$
\end{proof}
\end{document} |
\begin{document}
\title[Zero-Divisor Graphs of $\mathbb{Z}_n$, their products and $D_n$]{Zero-Divisor Graphs of $\mathbb{Z}_n$, their products and $D_n$}
\author{Amrita Acharyya} \address{Department of Mathematics and Statistics\\ University of Toledo, Main Campus\\ Toledo, OH 43606-3390} \email{Amrita.Acharyya@utoledo.edu}
\author{Robinson Czajkowski} \address{Department of Mathematics and Statistics\\ University of Toledo, Main Campus\\ Toledo, OH 43606-3390} \email{ Robinson.Czajkowski@rockets.utoledo.edu}
\subjclass[2010]{68R10, 68R01, 03G10, 13A99}
\keywords{zero-divisor graph, commutative ring, finite products, poset, type graph}
\begin{abstract} This paper is an endeavor to discuss some properties of zero-divisor graphs of the ring $\mathbb{Z}_n$, the ring of integers modulo $n$. The zero-divisor graph of a commutative ring $R$ is an undirected graph whose vertices are the nonzero zero-divisors of $R$, where two distinct vertices are adjacent if their product is zero. The zero-divisor graph of $R$ is denoted by $\Gamma(R)$. We discuss the graphs $\Gamma(\mathbb{Z}_n)$ in terms of completeness, k-partite structure, complete k-partite structure, regularity, chordality, $\gamma - \beta$ perfectness and simplicial vertices. The clique number of an arbitrary $\Gamma(\mathbb{Z}_n)$ is also found. This work also explores related attributes of finite products $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$, seeking to extend certain results to the product rings. We find all $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$ that are perfect. Likewise, a lower bound on the clique number of $\Gamma(\mathbb{Z}_m\times\mathbb{Z}_n)$ is found. Later in the paper we discuss some properties of the zero-divisor graph of the poset $D_n$, the set of positive divisors of a positive integer $n$ partially ordered by divisibility. \end{abstract}
\maketitle
\section{Introduction} \label{s:Intro}
Zero-divisor graphs were first discussed by Beck~\cite{nB66} as a way to color commutative rings. They were further discussed by Livingston and Anderson in~\cite{jS83} and~\cite{jW98}. A zero-divisor graph of a ring $R$, denoted by $\Gamma(R)$, is a graph whose vertices are the zero-divisors of $R$; two distinct vertices $u$ and $v$ are adjacent if $uv = 0$. Beck~\cite{nB66} considered every element of $R$ a vertex, with 0 sharing an edge with all other vertices. Since then, others have chosen to omit 0 from zero-divisor graphs [2, 3, 4, 5]. For our purposes, we omit 0, so that the vertex set of $\Gamma(\mathbb{Z}_n)$, denoted by $ZD(\mathbb{Z}_n)$, consists only of the non-zero zero-divisors.\\ In the first section, we explore the concept of type graphs introduced by Smith~\cite{jK55}. In~\cite{jK55}, type graphs were used to find all perfect $\Gamma(\mathbb{Z}_n)$. We extend the notion of type graphs to $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$ to find all perfect zero-divisor graphs of such products, where $n_1, n_2, \cdots , n_k$ are positive integers and $\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k}$ is the direct product of the $\mathbb{Z}_{n_i}$'s, $1\le i\le k$. We then move on to various properties of $\Gamma(\mathbb{Z}_n)$ and $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$. AbdAlJawad and Al-Ezeh~\cite{bH77} discussed the domination number of $\Gamma(\mathbb{Z}_n)$. We extend this result to find upper and lower bounds for the domination number of the finite product $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$, and we discuss the coefficient of the smallest-degree term of the domination polynomial of $\Gamma(\mathbb{Z}_n)$. In the last section, we explore zero-divisor graphs of the poset $D_n$, the set of positive divisors of a positive integer $n$ partially ordered by divisibility, and we catalog them in a similar way. Zero-divisor graphs of posets are studied in \cite{jW99}, \cite{jW100}, \cite{jW101}.
\section{Type Graphs}
When we consider zero-divisor graphs $\Gamma(\mathbb{Z}_n)$, it is useful to consider the type graphs of these rings. A type graph has vertices $T_a$, where $a$ is a factor of $n$ that is neither $1$ nor $n$. The sets $T_a = \{ x \in ZD(\mathbb{Z}_n) \mid gcf(x, n) = a \}$ form a partition of the vertex set of the zero-divisor graph. This concept was shown by Smith~\cite{jK55}, where the type graph was used to find all perfect $\Gamma(\mathbb{Z}_n)$. Smith used the notation $\Gamma^T(\mathbb{Z}_n)$ to denote the type graph. In that paper, four key observations were shown to hold for the type graphs of $\mathbb{Z}_n$. In this section, we modify the definition of type graph to fit the graph of $\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k}$. Additionally, we show these observations to be true over this type graph as well. We then use analogues of some theorems from~\cite{jK55} to characterize the perfectness of $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$.\\
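The type classes are easy to compute directly. The following Python sketch (ours, not part of the paper; the helper names are invented) lists the non-zero zero-divisors of $\mathbb{Z}_n$ and groups them by $gcf(x, n)$:

```python
from math import gcd
from collections import defaultdict

def zero_divisors(n):
    # Non-zero zero-divisors of Z_n: exactly the x in 1..n-1 with gcd(x, n) > 1.
    return [x for x in range(1, n) if gcd(x, n) > 1]

def type_classes(n):
    # T_a = { x in ZD(Z_n) : gcd(x, n) = a }, a ranging over proper divisors of n.
    classes = defaultdict(list)
    for x in zero_divisors(n):
        classes[gcd(x, n)].append(x)
    return dict(classes)

print(type_classes(12))  # {2: [2, 10], 3: [3, 9], 4: [4, 8], 6: [6]}
```

For $n = 12$ the classes are $T_2, T_3, T_4, T_6$, with $T_6 = \{6\}$ a singleton, as Lemma 2.7 below predicts.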
The following are two important theorems from ~\cite{jK55}.
\begin{theorem}[Smith's Main Theorem] ~\cite{jK55}
A graph $\Gamma(\mathbb{Z}_n)$ is perfect iff $n$ is of one of the following forms:
\begin{enumerate}
\item[1.] $n = p^a$ for prime $p$ and positive integer $a$.
\item[2.] $n = p^aq^b$ for distinct primes $p, q$ and positive integers $a, b$.
\item[3.] $n = p^aqr$ for distinct primes $p, q, r$ and positive integer $a$.
\item[4.] $n = pqrs$ for distinct primes $p, q, r, s$.
\end{enumerate} \end{theorem}
\begin{theorem}[Smith's Theorem 4.1] ~\cite{jK55} $\Gamma(\mathbb{Z}_n)$ is perfect iff its type graph $\Gamma^T(\mathbb{Z}_n)$ is perfect.\\ \end{theorem}
\begin{definition}[Type graph of $\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k}$]
The type graph of $\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k}$, denoted by $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$, has as its vertex set the type classes $T(x_1, x_2, \cdots, x_k)$, where $(x_1, x_2, \cdots, x_k)$ is neither $(0, 0, \cdots, 0)$ nor $(1, 1, \cdots, 1)$, and each $x_i$ is a divisor of $n_i$, $1$, or $0$.\\
$T(x_1, x_2, \cdots, x_k) = \{ (a_1, a_2, \cdots, a_k) \mid \mbox{for each } i, \mbox{ either } a_i \neq 0 \mbox{ and } gcf(a_i, n_i) = x_i, \mbox{ or } a_i = 0 \mbox{ and } x_i = 0 \}$. Arbitrary $T(x_1, x_2, \cdots, x_k)$ shares an edge with arbitrary $T(y_1, y_2, \cdots, y_k)$ iff $x_iy_i = 0$ for all $i$.
\end{definition}
Smith ~\cite{jK55} gave the following four observations for the type graph of $\Gamma(\mathbb{Z}_n)$.
\begin{theorem}\label{1.1} Each vertex of $\Gamma(\mathbb{Z}_{n})$ is in exactly one type class. \end{theorem}
\begin{theorem}\label{1.2} Arbitrary distinct vertices $T_x$ and $T_y$ share an edge in $\Gamma^T(\mathbb{Z}_{n})$ iff each $a\in T_x$ shares an edge with each $b\in T_y$ in $\Gamma(\mathbb{Z}_{n})$. \end{theorem}
\begin{theorem}\label{1.3} Arbitrary distinct vertices $T_x $ and $T_y $ don't share an edge in $\Gamma^T(\mathbb{Z}_{n})$ iff each $a\in T_x$ doesn't share an edge with each $b\in T_y$ in $\Gamma(\mathbb{Z}_{n})$. \end{theorem}
\begin{theorem}\label{1.4} In $\Gamma(\mathbb{Z}_{n})$ consider arbitrary $a$ and $b$ in the same type class. An arbitrary vertex $c$ in $\Gamma(\mathbb{Z}_{n})$ shares an edge with $b$ iff it shares an edge with $a$ also. \end{theorem}
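For small $n$ the observations above can be confirmed by brute force. The following Python sketch (ours, not part of the paper; the helper name is invented) verifies that adjacency between distinct type classes of $\mathbb{Z}_n$ is an all-or-nothing affair decided by the class representatives:

```python
from math import gcd

def check_type_adjacency(n):
    # For each pair of distinct type classes T_a, T_b of Z_n, verify that
    # either every cross pair annihilates (x * y = 0 mod n) or none does,
    # and that which case occurs is decided by the representatives a and b.
    zd = [x for x in range(1, n) if gcd(x, n) > 1]
    divisors = sorted({gcd(x, n) for x in zd})
    for a in divisors:
        for b in divisors:
            if a == b:
                continue
            Ta = [x for x in zd if gcd(x, n) == a]
            Tb = [x for x in zd if gcd(x, n) == b]
            cross = [(x * y) % n == 0 for x in Ta for y in Tb]
            if (a * b) % n == 0:
                assert all(cross)
            else:
                assert not any(cross)
    return True

assert all(check_type_adjacency(n) for n in range(2, 100))
```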
Following are the four analogues to the above results for $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$.\\
\begin{theorem}\label{1.1 a} Each vertex of $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ is in exactly one type class. \end{theorem} \begin{proof} Assume otherwise. Then there is a vertex $v$ that is not in any type class, or $v$ is in multiple type classes.\\ \begin{enumerate} \item[Case 1:] $v$ is not in any type class.\\
Then $v$ would need a component $a_i \neq 0$ whose $gcf$ with $n_i$ is not an admissible coordinate $x_i$; but $gcf(a_i, n_i)$ is always a divisor of $n_i$, so this cannot happen.\\ \item[Case 2:] $v$ is in multiple type classes.\\
Let $v = (a_1, a_2, \cdots, a_k) \in T(x_1, x_2, \cdots, x_k) \cap T(y_1, y_2, \cdots, y_k)$. Then for all $i \in \{1, 2, \cdots, k\}$, if $a_i = 0$ then $x_i = y_i = 0$, and if $a_i \neq 0$ then $gcf(a_i, n_i) = x_i = y_i$, giving $(x_1, x_2, \cdots, x_k) = (y_1, y_2, \cdots, y_k)$, which is a contradiction. \end{enumerate} \end{proof}
\begin{theorem}\label{1.2a} Arbitrary distinct vertices $T_x = T(x_1, x_2, \cdots, x_k)$ and $T_y = T(y_1, y_2, \cdots, y_k)$ share an edge in $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ iff each $a\in T_x$ shares an edge with each $b\in T_y$ in $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$. \end{theorem}
\begin{proof} Suppose $T_x$ shares an edge with $T_y$. By the definition, $x_iy_i = 0$ for every $i$. Consider arbitrary $(a_1, \cdots, a_k)\in T_x$ and $(b_1, \cdots, b_k)\in T_y$. Since each $a_i$ is a multiple of $x_i$ and each $b_i$ is a multiple of $y_i$, $a_ib_i$ is a multiple of $x_iy_i$ and therefore equal to 0. Then $(a_1, \cdots, a_k)$ and $(b_1, \cdots, b_k)$ share an edge.\\ Conversely, let every $a\in T_x$ and $b\in T_y$ share an edge. Since $x = (x_1, x_2, \cdots, x_k)$ is an element of $T_x$ and $y = (y_1, y_2, \cdots, y_k)$ is an element of $T_y$, $x$ and $y$ share an edge, that is, $x_iy_i = 0$ for all $i$. Then $T_x$ shares an edge with $T_y$. \end{proof}
\begin{theorem}\label{1.3a} Arbitrary distinct vertices $T_x = T(x_1, x_2, \cdots, x_k)$ and $T_y = T(y_1, y_2, \cdots, y_k)$ don't share an edge in $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ iff each $a\in T_x$ doesn't share an edge with each $b\in T_y$ in $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$. \end{theorem}
\begin{proof} Suppose $T_x$ does not share an edge with $T_y$. By the definition, $x_iy_i \neq 0$ for some $i$, so there is a prime $p$ whose power in $x_iy_i$ is smaller than its power $e$ in $n_i$. Consider arbitrary $(a_1, \cdots, a_k)\in T_x$ and $(b_1, \cdots, b_k)\in T_y$. Since $gcf(a_i, n_i) = x_i$ and the power of $p$ in $x_i$ is less than $e$, the power of $p$ in $a_i$ equals the power of $p$ in $x_i$; likewise for $b_i$ and $y_i$. Hence the power of $p$ in $a_ib_i$ equals the power of $p$ in $x_iy_i$, which is smaller than $e$, so $n_i$ does not divide $a_ib_i$ and $a_ib_i$ is non-zero. So $(a_1, \cdots, a_k)$ and $(b_1, \cdots, b_k)$ do not share an edge.\\ Conversely, let each $a\in T_x$ and $b\in T_y$ not share an edge. Since $x = (x_1, x_2, \cdots, x_k)$ is an element of $T_x$ and $y = (y_1, y_2, \cdots, y_k)$ is an element of $T_y$, $x$ and $y$ do not share an edge. Then $T_x$ must not share an edge with $T_y$. \end{proof}
\begin{theorem}\label{1.4a} In $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ consider arbitrary $a = (a_1, a_2, \cdots, a_k)$ and $b = (b_1, b_2, \cdots, b_k)$ in the same type class $T(t_1, t_2, \cdots, t_k)$. An arbitrary vertex $c = (c_1, c_2, \cdots, c_k)$ shares an edge with $b$ iff it shares an edge with $a$ also. \end{theorem} \begin{proof} Follows from Theorem \ref{1.2} and \ref{1.3}. \end{proof}
Next, we prove the following theorem:
\begin{theorem}\label{1.8} $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ is perfect iff its type graph $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ is perfect. \end{theorem}
To show this, we will use the following three theorems, whose proofs are analogous to the corresponding proofs in ~\cite{jK55}.\\
\begin{theorem}\label{1.5} Given arbitrary hole or antihole $H$ of length greater than $4$ in $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$, every vertex in $H$ belongs to a different type class. \end{theorem}
\begin{theorem}\label{1.6} Let there be a hole or antihole $H$ of length $l>4$ in $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$. Then the type graph $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ must also contain a hole or antihole of length $l$. \end{theorem}
\begin{theorem}\label{1.7}
Let there be a hole or antihole $H$ of length $l>4$ in the type graph $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$. Then the graph $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$ must also contain a hole or antihole of length $l$. \end{theorem}
Using these theorems, we can now establish Theorem \ref{1.8}.\\ \begin{proof} The proof is analogous to the corresponding proof in~\cite{jK55}. \end{proof}
Now that we know perfectness of the type graph corresponds to perfectness of the zero-divisor graph, it is possible to find all perfect $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$. As it turns out, for both $\Gamma^T(\mathbb{Z}_n)$ and $\Gamma^T(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$, we can exchange the primes of each $n_i$, and as long as the form of the factorization (the number of primes and the power of each prime) stays the same, the type graph will be isomorphic. To illustrate this, consider $\Gamma^T(\mathbb{Z}_{p^2q}\times\mathbb{Z}_p)$ where $p, q$ are prime. This type graph is isomorphic to $\Gamma^T(\mathbb{Z}_{r^2s}\times\mathbb{Z}_t)$ where $r, s, t$ are prime, even though the values of the primes differ. We will use this to find all perfect $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots\times\mathbb{Z}_{n_k})$.
\begin{theorem}\label{1.9} Consider some $\Gamma^T(\mathbb{Z}_n)$ and $\Gamma^T(\mathbb{Z}_m)$ such that $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ and $m=q_1^{\alpha_1}q_2^{\alpha_2}\cdots q_k^{\alpha_k}$. Then $\Gamma^T(\mathbb{Z}_n) \cong \Gamma^T(\mathbb{Z}_m)$. \end{theorem} \begin{proof} Consider an arbitrary vertex $u$ in $\Gamma^T(\mathbb{Z}_n)$. Since $u$ is a factor of $n$, we can write $u=p_1^{x_1}p_2^{x_2}\cdots p_k^{x_k}$, where $0\leq x_i \leq \alpha_i$ for all $i$. Define a function $f:\Gamma^T(\mathbb{Z}_n)\to \Gamma^T(\mathbb{Z}_m)$ by $f(u)=f(p_1^{x_1}p_2^{x_2}\cdots p_k^{x_k}) = q_1^{x_1}q_2^{x_2}\cdots q_k^{x_k}$. Since $n$ and $m$ have the same number of prime factors, and each corresponding prime has the same power $\alpha_i$, $f$ is an adjacency-preserving bijection, and the result follows.
\end{proof}
\begin{theorem}\label{1.10} Consider $\Gamma^T(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$ and $\Gamma^T(\mathbb{Z}_{m_1}\times\cdots\times\mathbb{Z}_{m_k})$ where the prime factorization of $n_i$ has the same form as that of $m_i$ for each $i$. That is, $n_i$ and $m_i$ have the same number of prime factors and the same power for each prime. Then $\Gamma^T(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k}) \cong \Gamma^T(\mathbb{Z}_{m_1}\times\cdots\times\mathbb{Z}_{m_k})$. \end{theorem} \begin{proof} Take an arbitrary $n_i$.\\ Denote the prime factorization of $n_i = p_{i, 1}^{\alpha_{i, 1}}\cdots p_{i, j_i}^{\alpha_{i, j_i}}$ where $j_i$ is the number of prime factors of $n_i$. Likewise, $m_i = q_{i, 1}^{\alpha_{i, 1}}\cdots q_{i, j_i}^{\alpha_{i, j_i}}$. Note that the only difference between these factorizations is the value of the primes being used; the powers and the number of primes are the same. Consider an arbitrary $(u_1, \cdots, u_k) \in \Gamma^T(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$. Each $u_i$ is a factor of $n_i$ or 0. We can write $u_i = p_{i, 1}^{x_{i, 1}}\cdots p_{i, j_i}^{x_{i, j_i}}$ where $0 \leq x_{i, l} \leq \alpha_{i, l}$. Note that if $u_i$ is 1, each $x_{i, l}$ is 0, and if $u_i$ is 0, $x_{i, l} = \alpha_{i, l}$ for every $l$.\\ Define a function $f: \Gamma^T(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k}) \to \Gamma^T(\mathbb{Z}_{m_1}\times\cdots\times\mathbb{Z}_{m_k})$ by $f(u_1, \cdots, u_k) = f(p_{1, 1}^{x_{1, 1}}\cdots p_{1, j_1}^{x_{1, j_1}}, \cdots, p_{k, 1}^{x_{k, 1}}\cdots p_{k, j_k}^{x_{k, j_k}})$\\ $= (q_{1, 1}^{x_{1, 1}}\cdots q_{1, j_1}^{x_{1, j_1}}, \cdots, q_{k, 1}^{x_{k, 1}}\cdots q_{k, j_k}^{x_{k, j_k}}) = (v_1, \cdots, v_k)$. Note that we have only replaced the primes, so the result follows as in the previous theorem. \end{proof}
\begin{theorem}\label{1.11} $\Gamma^T(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$ is isomorphic to $\Gamma^T(\mathbb{Z}_{n_1\cdots n_k})$ if all the $n_i$'s are mutually co-prime. \end{theorem} \begin{proof} The proof follows by the Chinese Remainder Theorem. \end{proof}
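Theorem \ref{1.11} can be illustrated computationally. The sketch below (ours, not from the paper; all helper names are invented) builds both type graphs directly from the definitions above and checks the isomorphism by brute force for small coprime cases:

```python
from math import gcd
from itertools import permutations, product

def type_graph_Zn(n):
    # Vertices: realized types a = gcd(x, n) of non-zero zero-divisors of Z_n.
    verts = sorted({gcd(x, n) for x in range(1, n) if gcd(x, n) > 1})
    edges = {frozenset((a, b)) for a in verts for b in verts
             if a != b and (a * b) % n == 0}
    return verts, edges

def type_graph_product(ns):
    # Coordinate type: 0 for the component 0, else gcd(a_i, n_i).
    def typ(a):
        return tuple(0 if ai == 0 else gcd(ai, ni) for ai, ni in zip(a, ns))
    elems = [a for a in product(*[range(ni) for ni in ns]) if any(a)]
    zds = [a for a in elems
           if any(ai == 0 or gcd(ai, ni) > 1 for ai, ni in zip(a, ns))]
    verts = sorted({typ(a) for a in zds})
    edges = {frozenset((x, y)) for x in verts for y in verts
             if x != y and all((xi * yi) % ni == 0
                               for xi, yi, ni in zip(x, y, ns))}
    return verts, edges

def isomorphic(g1, g2):
    # Brute-force graph isomorphism; fine for the tiny type graphs here.
    (v1, e1), (v2, e2) = g1, g2
    if len(v1) != len(v2) or len(e1) != len(e2):
        return False
    return any({frozenset((f[a], f[b])) for a, b in map(tuple, e1)} == e2
               for f in (dict(zip(v1, p)) for p in permutations(v2)))

assert isomorphic(type_graph_product((4, 3)), type_graph_Zn(12))
assert isomorphic(type_graph_product((2, 9)), type_graph_Zn(18))
```

Both type graphs above come out as paths on four vertices, matched by the Chinese Remainder bijection.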
The next theorem shows how we can characterize perfectness of $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$. By the above three theorems, without loss of generality we can choose primes that make the $n_i$ pairwise co-prime. Then the type graph is isomorphic to that of $\Gamma(\mathbb{Z}_n)$, where $n$ is the product of all such co-prime $n_i$. So $n$ has a prime factorization with the total number of primes of all the $n_i$, with corresponding powers. We therefore have the following theorem.
\begin{theorem}\label{1.12} $\Gamma(\mathbb{Z}_{n_{1}}\times\mathbb{Z}_{n_{2}}\times\cdots\times\mathbb{Z}_{n_{k}})$ is perfect iff it is possible to find mutually co-prime positive integers $m_1, m_2, \cdots, m_k$, so that each $m_{i}$ has the same number of prime factors, with the same exponents, in its prime factorization as $n_i$, and $\Gamma(\mathbb{Z}_{m_{1}m_{2}\cdots m_{k}})$ is perfect. \end{theorem}
\begin{example} For example, $\Gamma(\mathbb{Z}_{p^2q}\times\mathbb{Z}_{p})$ is perfect because $\Gamma(\mathbb{Z}_{a^2bc})$ is perfect as shown by ~\cite{jK55}. Also note, no product with a dimension greater than four can be perfect. $\Gamma(\mathbb{Z}_{p_1}\times\cdots\times\mathbb{Z}_{p_5})$ is not perfect since no $\Gamma(\mathbb{Z}_{p_1\cdots p_5})$ is perfect as shown by ~\cite{jK55}. \end{example}
\section{Some properties of $\Gamma(\mathbb{Z}_n)$}
In this section we characterize $\Gamma(\mathbb{Z}_n)$ by various qualities such as completeness, chordality and clique number. A helpful construction used is the strong type graph, which we define as the type graph with self-loops allowed. We normally do not consider self-loops in zero-divisor graphs and type graphs, but in the strong type graph a vertex has a loop at it if it annihilates itself. We denote the strong type graph of $\Gamma(\mathbb{Z}_n)$ by $\Gamma^S(\mathbb{Z}_n)$.\\ Another construction used commonly in this section is $n^*$. Consider some $\Gamma(\mathbb{Z}_n)$. Let $n = p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_m^{\alpha_m}$; then $n^*=p_1^{\beta_1}p_2^{\beta_2}\cdots p_m^{\beta_m}$, where $\beta_i = \lceil \alpha_i/2 \rceil$ is half of $\alpha_i$ rounded up. This construction is very useful, as some properties of vertices can be associated with whether or not the vertex is a multiple of $n^*$.
\begin{lemma}\label{2.0} Two arbitrary vertices $u$ and $v$ in $\Gamma(\mathbb{Z}_n)$ that are both in the same type class $T_i$ share an edge iff $T_i$ has a self-loop in the strong type graph. \end{lemma} \begin{proof} Let $T_i$ have a self-loop. Then $i^2 = 0$. Since every $u, v\in T_i$ are multiples of $i$, $u$ and $v$ share an edge.\\ Conversely, suppose $T_i$ does not have a self-loop. Take arbitrary $u$ and $v$ in $T_i$. According to the definition of type class, $u$ and $v$ are multiples of $i$ with $gcf(u, n)=i$, and likewise for $v$. We can write $u=ai$ and $v=bi$ where $gcf(a, n/i)=1$ and $gcf(b, n/i)=1$. Assume $u$ and $v$ share an edge. Then $uv=cn$, i.e. $abi^2=cn$, where $c$ is a natural number, so $\frac{abi^2}{n}=c$. Since $T_i$ does not have a self-loop, $i^2\neq 0$, which means $n$ contains a factor not contained in $i^2$; call it $d$. Let $\frac{g}{d}$ be the simplified form of the fraction $\frac{i^2}{n}$, where $d$ is guaranteed not to be 1. By substitution, $\frac{abg}{d} = c$. But this is a contradiction, since $a$, $b$ and $g$ do not share a factor with $n/i$ and so cannot cancel the $d$ in the denominator; the expression therefore cannot equal the natural number $c$. So $u$ and $v$ do not share an edge. \end{proof}
\begin{theorem}\label{2.1} $\Gamma(\mathbb{Z}_{p^2})$ is complete where $p$ is prime. \end{theorem} \begin{proof} Take arbitrary zero-divisors $u$ and $v$ of $\mathbb{Z}_{p^2}$. Both $u$ and $v$ must share a common factor with $p^2$, and the only possibility is $p$. So both $u$ and $v$ have a factor of $p$, hence $uv$ is a multiple of $p^2$ and $u$ and $v$ share an edge. $\Gamma(\mathbb{Z}_{p^2})$ is complete. \end{proof}
\begin{theorem}\label{2.2} $\Gamma(\mathbb{Z}_{p^x})$ where $p$ is prime and $x \geq 3$ is not complete. \begin{proof} Let $x\geq 3$. \begin{enumerate} \item[Case 1:] $p=2$: $p$ and $3p$ are distinct non-zero zero-divisors that are not connected. \item[Case 2:] $p\ne 2$: $p$ and $2p$ are distinct non-zero zero-divisors that are not connected. \end{enumerate} \end{proof} \end{theorem}
\begin{theorem}\label{2.3} $\Gamma(\mathbb{Z}_n)$, where $n \geq 2$, is complete iff $n=p^2$. \end{theorem} \begin{proof} Let $\Gamma(\mathbb{Z}_n)$ be complete. Assume $n$ has two or more distinct prime factors. Label the smallest such factor $p$ and choose another distinct prime factor $q$. Then $p$ is a zero-divisor and shares an edge with $n/p$. Since $p$ and $q$ are both prime factors of $n$, $pq\leq n$, and since $p<q$, $p^2<pq$. So $p^2<pq\leq n$, which means $p^2$ is non-zero and distinct from $p$. Now $p^2$ shares an edge with $n/p$, so $p^2$ is a zero-divisor; but $p \cdot p^2 = p^3$ is not a multiple of $n$ (as $q \nmid p^3$), so $p^2$ does not share an edge with $p$, making $\Gamma(\mathbb{Z}_n)$ not complete. So $n$ must have only one prime factor, $n = p^x$. By Theorem \ref{2.2}, $\Gamma(\mathbb{Z}_{p^x})$ is not complete if $x\geq 3$, and $\Gamma(\mathbb{Z}_p)$ has no vertices at all. So $x=2$, and when $\Gamma(\mathbb{Z}_n)$ is complete, $n=p^2$. The converse follows by Theorem \ref{2.1}. \end{proof}
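This characterization is easy to confirm numerically; a brute-force sketch (ours, not from the paper):

```python
from math import gcd

def is_complete_gamma(n):
    # Gamma(Z_n) is complete when the vertex set is nonempty and every pair
    # of distinct non-zero zero-divisors multiplies to 0 mod n.
    v = [x for x in range(1, n) if gcd(x, n) > 1]
    return bool(v) and all((a * b) % n == 0 for a in v for b in v if a < b)

print([n for n in range(4, 200) if is_complete_gamma(n)])
# [4, 9, 25, 49, 121, 169] -- exactly the squares of primes below 200
```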
\begin{theorem}\label{2.4} $\Gamma(\mathbb{Z}_n)$ is k-partite if $\Gamma^S(\mathbb{Z}_n)$ is k-partite. \end{theorem} \begin{proof} Let $\Gamma^S(\mathbb{Z}_n)$ be k-partite. Then $\Gamma^S(\mathbb{Z}_n)$ can be partitioned into $k$ disjoint subsets $S_1, S_2, \cdots, S_k$ such that no two vertices in the same set share an edge. Partition $\Gamma(\mathbb{Z}_n)$ into a similar grouping $Q_1, Q_2, \cdots, Q_k$, where $u\in Q_i$ iff the type class of $u$ lies in $S_i$. Consider arbitrary $u$ and $v$, vertices of $\Gamma(\mathbb{Z}_n)$, that are in the same partitioned set $Q_i$.\\ \begin{enumerate} \item[Case 1:] $u$ and $v$ are in different type classes.\\ Call such classes $T_u$ and $T_v$. Then since $u$ and $v$ are both in $Q_i$, $T_u$ and $T_v$ are both in $S_i$, which means $T_u$ does not share an edge with $T_v$. So, by Theorem \ref{1.3}, $u$ and $v$ do not share an edge.\\ \item[Case 2:] $u$ and $v$ are in the same type class.\\ Call this class $T_u$. Then since $\Gamma^S(\mathbb{Z}_n)$ is k-partite, $T_u$ does not form a loop with itself. Hence, by Lemma \ref{2.0}, $u$ and $v$ do not share an edge. \end{enumerate} \end{proof}
\begin{theorem}\label{2.5} $\Gamma(\mathbb{Z}_n)$ is complete k-partite if $\Gamma^S(\mathbb{Z}_n)$ is complete k-partite. \begin{proof} Let $\Gamma^S(\mathbb{Z}_n)$ be complete k-partite. Then by Theorem \ref{2.4}, $\Gamma(\mathbb{Z}_n)$ is k-partite. Using the partition from Theorem \ref{2.4}, if we let $\Gamma^S(\mathbb{Z}_n)$ be partitioned into $k$ disjoint subsets $S_1, S_2, \cdots, S_k$, then $\Gamma(\mathbb{Z}_n)$ can be partitioned into $k$ disjoint subsets $Q_1, Q_2, \cdots, Q_k$, where an arbitrary vertex of $\Gamma(\mathbb{Z}_n)$ is in $Q_i$ if its type class is in $S_i$. Consider arbitrary vertices $u$ and $v$ in $\Gamma(\mathbb{Z}_n)$ that are not in the same $Q_i$. Then $u$ and $v$ must be in different type classes lying in two different $S_i$'s. Call these classes $T_u$ and $T_v$. Since $\Gamma^S(\mathbb{Z}_n)$ is complete k-partite, $T_u$ and $T_v$ share an edge. Then $u$ and $v$ share an edge by Theorem \ref{1.2}. \end{proof} \end{theorem}
\begin{remark}The converses of Theorems \ref{2.4} and \ref{2.5} are not always true. If the zero-divisor graph is k-partite but has a self-annihilating vertex, the strong type graph will have a self-loop, which prevents it from being k-partite. \end{remark}
\begin{theorem}\label{2.6} If $n$ is square free, $\Gamma(\mathbb{Z}_{n})$ is k-partite, where $k$ is the number of distinct prime factors of $n$. \end{theorem} \begin{proof} Consider the strong type graph $\Gamma^S(\mathbb{Z}_{n})$ and let $n = p_1p_2\cdots p_k$. Partition the graph into $k$ sets $S_1, S_2, \cdots, S_k$: a vertex $T_a$ of the strong type graph is in $S_i$ if $gcf(a, p_i) = 1$ and $gcf(a, p_h) > 1$ for all $h<i$.\\ We first claim that $S_1, S_2, \cdots, S_k$ cover all the vertices of $\Gamma^S(\mathbb{Z}_{n})$. Indeed, if $T_a$ is a vertex, then $a$ is a factor of $n$ that is less than $n$, so $a$ omits at least one prime $p_i$, i.e. $gcf(a, p_i) = 1$ for some $i$. Choosing the smallest such index $h$, we have $gcf(a, p_j) > 1$ for all $j < h$, so $T_a \in S_h$.\\ Our next claim is that no two vertices in the same part share an edge. Consider arbitrary $T_u$ and $T_v$ in $S_i$. Neither $u$ nor $v$ contains the factor $p_i$, so $uv$ is not a multiple of $n$ and $T_u$ and $T_v$ do not share an edge; for the same reason no vertex of $S_i$ has a self-loop. So the strong type graph is k-partite.\\ By Theorem \ref{2.4}, $\Gamma(\mathbb{Z}_{p_1p_2\cdots p_k})$ is k-partite. \end{proof}
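The partition in this proof can be exhibited concretely. The sketch below (ours, not from the paper; the helper name is invented) assigns each zero-divisor of a squarefree $n$ to the first prime it omits and verifies that no part contains an annihilating pair:

```python
from math import gcd

def squarefree_partition(n, primes):
    # Assign each non-zero zero-divisor x of Z_n (n squarefree, n = prod(primes))
    # to the first listed prime that does not divide x; such a prime exists
    # since x < n. Then check that no two vertices in a part annihilate.
    zd = [x for x in range(1, n) if gcd(x, n) > 1]
    parts = {p: [] for p in primes}
    for x in zd:
        parts[next(q for q in primes if x % q)].append(x)
    ok = all((a * b) % n != 0 for part in parts.values()
             for a in part for b in part)
    return parts, ok

parts, ok = squarefree_partition(30, [2, 3, 5])
assert ok
print({p: len(v) for p, v in parts.items()})  # {2: 7, 3: 10, 5: 4}
```

A product inside the part assigned to $p$ never acquires the factor $p$, so it cannot be a multiple of $n$; that is exactly the independence argument of the proof.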
\begin{lemma}\label{2.7} An arbitrary type class $T_a$ in $\Gamma^T(\mathbb{Z}_n)$ contains only one element iff $a=\frac{n}{2}$. \end{lemma} \begin{proof} Let $T_a$ be a type class with only one element. Assume $a\neq \frac{n}{2}$. Since $a$ is a factor of $n$, $f = \frac{n}{a}$ is also a factor of $n$; note that $f \geq 3$, since $f = 1$ would give $a = n$ and $f = 2$ would give $a = \frac{n}{2}$.\\ Consider the vertex $a(f-1)$. The quantity $f-1$ does not share any factor with $f$, so, since $af = n$, $gcf(a(f-1), n) = a$ and $a(f-1)\in T_a$. Also note that $a < a(f-1) < n$. So $a(f-1)$ is a vertex in $T_a$ distinct from $a$, which is a contradiction. So $a= \frac{n}{2}$.\\ Conversely, let $a= \frac{n}{2}$. Then $a$ is the only element in $T_a$, since $2a = n$. \end{proof}
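This lemma can be checked exhaustively for small $n$; in the illustrative sketch below (ours, not from the paper) `singleton_classes` is an invented helper:

```python
from math import gcd
from collections import Counter

def singleton_classes(n):
    # Sizes of the type classes T_a, a = gcd(x, n), over non-zero zero-divisors.
    sizes = Counter(gcd(x, n) for x in range(1, n) if gcd(x, n) > 1)
    return [a for a, c in sizes.items() if c == 1]

# The only possible singleton class is T_{n/2}; for even n >= 4 it occurs.
for n in range(4, 300):
    if n % 2 == 0:
        assert singleton_classes(n) == [n // 2]
    else:
        assert singleton_classes(n) == []
print(singleton_classes(12), singleton_classes(15))  # [6] []
```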
\begin{corollary} Analogously, $T_{n/p}$ in $\Gamma^T(\mathbb{Z}_n)$ contains exactly $p - 1$ elements if $p$ is the smallest prime factor of $n$. \end{corollary} \begin{lemma}\label{2.8} There is at most one type class with only one element. \end{lemma} \begin{proof} Assume there are two or more distinct type classes that have only one element, and call two of them $T_u$ and $T_v$. By Lemma \ref{2.7}, $u=v=\frac{n}{2}$, which is a contradiction. \end{proof}
\begin{theorem}\label{2.9} $\Gamma(\mathbb{Z}_n)$ is k-partite if $\Gamma^S(\mathbb{Z}_n)$ is k-partite, or if $\Gamma^T(\mathbb{Z}_n)$ is k-partite and its only self-connected vertex is $T_{\frac{n}{2}}$. \end{theorem} \begin{proof} Let $\Gamma^S(\mathbb{Z}_n)$ be k-partite. By Theorem \ref{2.4}, $\Gamma(\mathbb{Z}_n)$ is k-partite. Now let $\Gamma^T(\mathbb{Z}_n)$ be k-partite with $T_{\frac{n}{2}}$ as its only self-connected vertex. Consider arbitrary distinct vertices $u$ and $v$ of $\Gamma(\mathbb{Z}_n)$ lying in the same partition.\\ \begin{enumerate} \item[Case 1:] $u$ and $v$ are in the same type class.\\ By Lemma \ref{2.7}, $T_{\frac{n}{2}}$ has only one element, so distinct $u$ and $v$ cannot both lie in $T_{\frac{n}{2}}$. The type class containing them is therefore not self-connected, so $u$ and $v$ do not share an edge.\\ \item[Case 2:] $u$ and $v$ are in different type classes.\\ Since $u$ and $v$ are in the same partition, their type classes are in the same partition and do not share an edge. Thus, $u$ and $v$ do not share an edge.\\ \end{enumerate} In both cases no edge joins two vertices of the same partition, so $\Gamma(\mathbb{Z}_n)$ is k-partite. \end{proof}
\begin{lemma}\label{2.10} A vertex in $\Gamma(\mathbb{Z}_n)$ annihilates itself iff it is a multiple of $n^*$. \end{lemma}
\begin{lemma}\label{2.11} Consider two arbitrary vertices $u$ and $v$ in $\Gamma(\mathbb{Z}_n)$ such that $u$ is a factor of $v$. Then $M_v$, the largest clique containing $v$, has magnitude greater than or equal to that of $M_u$, the largest clique containing $u$. \end{lemma} \begin{proof} Take arbitrary vertices $u$ and $v$ in $\Gamma(\mathbb{Z}_n)$ with $u$ a factor of $v$. If $v\in M_u$ the claim is immediate, so assume $v\notin M_u$, and assume for contradiction that $M_u$ has a larger magnitude than $M_v$. Every element $e$ in $M_u\setminus \{u\}$ satisfies $eu=0$, and since $u$ divides $v$, also $ev=0$. So $C = \{v\}\cup (M_u\setminus \{u\})$ is a clique containing $v$ whose magnitude equals that of $M_u$, which is a contradiction since $M_v$ is the largest clique containing $v$. \end{proof}
\begin{theorem}\label{2.12} $cl(\Gamma(\mathbb{Z}_n)) \geq \frac{n}{n^*} + k - 1$, where $k$ is the number of odd-power primes in the prime factorization of $n$. \end{theorem} \begin{proof} We claim that any two multiples of $n^*$ share an edge.\\ Take two arbitrary multiples of $n^*$, say $an^*$ and $bn^*$. Since $n$ divides $(n^*)^2$, these two vertices share an edge. So the multiples of $n^*$ form a clique; call it $C$. An arbitrary vertex of $C$ is of the form $an^*$ for $1\leq a<\frac{n}{n^*}$, so $C$ has $\frac{n}{n^*} - 1$ elements and the clique number of the graph is at least $\frac{n}{n^*} - 1$. Now consider all vertices of the form $n^*/q$, where $q$ is an arbitrary odd-power prime in the prime factorization of $n$. Because the power of $q$ in $n^*$ is half its power in $n$ rounded up, while in $n^*/q$ it is half rounded down, each $n^*/q$ shares an edge with each $an^*$ in $C$. Also, each $n^*/q_1$ shares an edge with each other $n^*/q_2$: the power of $q_1$ in $n^*/q_1$ is half rounded down and in $n^*/q_2$ it is half rounded up, and likewise for $q_2$, so their product is a multiple of $n$. Since $k$ is the number of distinct odd-power primes in the prime factorization of $n$, this yields a clique of size $\frac{n}{n^*}+k-1$, so $cl(\Gamma(\mathbb{Z}_n)) \geq \frac{n}{n^*} + k - 1$. \end{proof}
\begin{theorem}\label{2.13} $cl(\Gamma(\mathbb{Z}_n)) \leq \frac{n}{n^*} + k - 1$, where $k$ is the number of odd-power primes in the prime factorization of $n$. \end{theorem} \begin{proof} Consider an arbitrary clique $C$. Partition $C$ into sets $L$ and $N$, where $L$ is the set of vertices of $C$ that are not multiples of $n^*$ and $N$ is the set of vertices of $C$ that are multiples of $n^*$. Consider an arbitrary vertex $l_1$ in $L$. Since $l_1$ is not a multiple of $n^*$, there must be some prime factor $p_1$ of $n$ whose power in $l_1$ is less than half of its power in $n$ (if every prime factor appeared with at least half its power, rounded up, $l_1$ would be a multiple of $n^*$). Every other $l_i$ in $L$ must have its $p_1$ factor with a power at least half the power of $p_1$ in $n$ in order to share an edge with $l_1$. Consider another vertex $l_2$ in $L$. $l_2$ must also have a prime factor whose power is less than half its power in $n$, but it cannot be $p_1$; call it $p_2$. So each $l_i$ in $L$ must have a distinct prime factor $p_i$ whose power in $l_i$ is less than half its power in $n$. Let $m$ be the number of distinct prime factors of $n$. Then there can be at most $m$ vertices $l_i$ in $L$. $N$ has at most $\frac{n}{n^*}-1$ elements, so the clique number is at most $\frac{n}{n^*}+m-1$.\\ Now consider some $e_1$, a vertex in $L$ whose corresponding $p_1$ has an even power in $n$. Then $e_1$ does not share an edge with $n^*$, so the bound drops by one when $n$ has an even-power prime. Consider another $e_2$ whose corresponding $p_2$ has an even power in $n$. Then $e_2$ does not share an edge with $p_1 n^*$. In general, a vertex $e_i$ whose corresponding $p_i$ has an even power does not share an edge with the distinct vertex $p_1 p_2 \cdots p_{i-1} n^*$. So the size of $C$ is reduced by the number of even-power primes of $n$, which is $m-k$, where $k$ is the number of odd-power primes of $n$.
Hence, since $C$ is arbitrary, $cl(\Gamma(\mathbb{Z}_n)) \leq \frac{n}{n^*} + m - (m - k) - 1 = \frac{n}{n^*} + k - 1$. \end{proof}
\begin{theorem}\label{2.14} $cl(\Gamma(\mathbb{Z}_n)) = \frac{n}{n^*} + k - 1$, where $k$ is the number of odd-power primes in the prime factorization of $n$. \end{theorem}
\begin{proof} The proof follows by Theorem \ref{2.12} and Theorem \ref{2.13}. \end{proof}
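For small $n$, this formula can be confirmed by exhaustive search. The Python sketch below (an informal check; all helper names are ours) computes $n^*$, counts the odd-power primes, and finds the clique number of $\Gamma(\mathbb{Z}_n)$ by branch and bound:

```python
from math import gcd

def nstar(n):
    # n^* = product of p^ceil(a/2) over the factorization n = prod p^a.
    m, result, p = n, 1, 2
    while p * p <= m:
        if m % p == 0:
            a = 0
            while m % p == 0:
                m //= p
                a += 1
            result *= p ** ((a + 1) // 2)
        p += 1
    if m > 1:
        result *= m  # leftover prime has exponent 1, and ceil(1/2) = 1
    return result

def odd_power_primes(n):
    # number of primes with odd exponent in the factorization of n
    count, m, p = 0, n, 2
    while p * p <= m:
        if m % p == 0:
            a = 0
            while m % p == 0:
                m //= p
                a += 1
            count += a % 2
        p += 1
    if m > 1:
        count += 1
    return count

def clique_number(n):
    # exact maximum clique of the zero-divisor graph of Z_n
    verts = [a for a in range(1, n) if gcd(a, n) > 1]
    adj = {v: {u for u in verts if u != v and (u * v) % n == 0}
           for v in verts}
    best = 0
    def extend(clique, candidates):
        nonlocal best
        best = max(best, len(clique))
        for i, v in enumerate(candidates):
            if len(clique) + len(candidates) - i <= best:
                break
            extend(clique + [v],
                   [u for u in candidates[i + 1:] if u in adj[v]])
    extend([], verts)
    return best
```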
\begin{theorem}\label{2.15} There are no non-empty, non-complete, regular graphs $\Gamma(\mathbb{Z}_n)$. \end{theorem} \begin{proof} Consider all $\Gamma(\mathbb{Z}_n)$ that are non-empty and not complete, and assume some regular graph exists among them.\\ \begin{enumerate} \item[Case 1:] $n=p^x$ where $p$ is prime.\\ If $x=1$, the graph is empty, and if $x=2$, the graph is complete, so $x\geq 3$. The neighbors of $p$ are the non-zero multiples of $p^{x-1}$, so $p$ shares an edge with exactly $p-1$ other vertices. The neighbors of $p^2$ include all non-zero multiples of $p^{x-2}$ other than $p^2$ itself, so $p^2$ shares an edge with at least $p^2-2$ other vertices. Since $p^2-2 > p-1$ for every prime $p$, the graph is not regular, a contradiction.\\ \item[Case 2:] $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_m^{\alpha_m}$, $m \geq 2$, with the $p_i$ distinct primes.\\ The vertex $p_1$ shares an edge with exactly $p_1-1$ other vertices, and the vertex $p_2$ shares an edge with exactly $p_2-1$ other vertices. Since the graph is regular, $p_1-1=p_2-1$, thus $p_1=p_2$, which is a contradiction since $p_1$ and $p_2$ are distinct.\\ \end{enumerate} So the only non-empty regular graphs $\Gamma(\mathbb{Z}_n)$ are complete. \end{proof}
\begin{theorem}\label{2.16} $\Gamma(\mathbb{Z}_n)$ is chordal iff $n=p^x, 2p$ or $2p^2$, where $p$ is prime and $x$ is a positive integer. \end{theorem} \begin{proof} Let $n=p^x$ and assume that $\Gamma(\mathbb{Z}_{p^x})$ is not chordal. Then $\exists$ a cycle $C$ of length $>$ 3 that has no chord. Let $y$ be a vertex of $C$ that is not a multiple of $n^*$. Since the power of $p$ in $y$ is strictly less than $\lceil\frac{x}{2}\rceil$, each neighbor of $y$ must be a multiple of $n^*$. Then the two neighbors of $y$ in $C$ share an edge, which is a chord. So all vertices in $C$ must be multiples of $n^*$; but any two multiples of $n^*$ share an edge, which again produces a chord. So $\Gamma(\mathbb{Z}_{p^x})$ is chordal.\\
Let $n=2p$. Then $\Gamma(\mathbb{Z}_{2p})$ is a star: the vertex $p$ is adjacent to every even vertex, and no two even vertices share an edge. A star contains no cycle, so $\Gamma(\mathbb{Z}_{2p})$ is chordal.\\
Let $n=2p^2$ and assume $\Gamma(\mathbb{Z}_{2p^2})$ is non-chordal. Then $\exists$ a cycle $C$ of length $>$ 3 that has no chord.\\ Let $a$ be a vertex of $C$ in the type class $T_p$. Each neighbor of $a$ must be a multiple of $2p$ and therefore lies in the type class $T_{2p}$. Any two multiples of $2p$ share an edge, so there would be a chord in $C$. Hence $C$ contains no vertex of the type class $T_p$.\\ Let $b$ be a vertex of $C$ in the type class $T_2$. Every neighbor of $b$ must be in the type class $T_{p^2}$. But $T_{p^2}$ has only one element, so $b$ cannot have two distinct neighbors, and $b$ is not a vertex of $C$.\\ So each vertex of $C$ must be in either $T_{p^2}$ or $T_{2p}$. Since $T_{p^2}$ has only one element and the magnitude of $C$ is at least 4, there are at least 3 elements of $T_{2p}$ in $C$. Those 3 elements form a triangle, since each multiple of $2p$ annihilates each other multiple of $2p$. But $C$ cannot contain a triangle since it is chord-less, a contradiction.\\ \\ Now let $n$ be none of $p^x$, $2p$ and $2p^2$.\\ \begin{enumerate} \item[Case 1:] $n=2^xp^y$ where $y\geq 3$, $x \geq 1$ and $p$ is an odd prime.\\ Then $2^xp-p^y-2^{x+1}p-p^{y-1}$ is a chord-less cycle.\\ \item[Case 2:] $n=2^xp^y$ where $x\geq 2$, $y \geq 1$ and $p$ is an odd prime.\\ Then $2p^y-2^x-p^y-2^{x+1}$ is a chord-less cycle.\\ \item[Case 3:] $n=p^xq^y$ where $p \neq q$ are odd primes and $x, y\geq 1$.\\ Then $p^x-q^y-2p^x-2q^y$ is a chord-less cycle.\\ \item[Case 4:] $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ where $k\geq 3$ and each $\alpha_i\geq 1$.\\ Since $k\geq 3$, $n$ has an odd prime factor $p_1$.
Then $p_1^{\alpha_1}-n/p_1^{\alpha_1}-2p_1^{\alpha_1}-2n/p_1^{\alpha_1}$ is a chord-less cycle.\\ So $\Gamma(\mathbb{Z}_n)$ is non-chordal if $n$ is not $p^x$, $2p$ or $2p^2$. \end{enumerate} \end{proof}
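Chordality is decidable by repeatedly deleting simplicial vertices (a graph is chordal iff this process empties it), so the characterization above admits a direct computational spot-check. The following Python sketch is illustrative only:

```python
from math import gcd

def zdg_adj(n):
    # adjacency sets of the zero-divisor graph of Z_n
    verts = [a for a in range(1, n) if gcd(a, n) > 1]
    return {v: {u for u in verts if u != v and (u * v) % n == 0}
            for v in verts}

def is_chordal(adj):
    # Greedy perfect-elimination test: repeatedly delete a vertex whose
    # neighbourhood is a clique; the graph is chordal iff it empties.
    adj = {v: set(nb) for v, nb in adj.items()}
    while adj:
        simplicial = None
        for v, nb in adj.items():
            if all(b in adj[a] for a in nb for b in nb if a != b):
                simplicial = v
                break
        if simplicial is None:
            return False
        for u in adj[simplicial]:
            adj[u].discard(simplicial)
        del adj[simplicial]
    return True
```

For example, $\Gamma(\mathbb{Z}_{27})$, $\Gamma(\mathbb{Z}_{14})$ and $\Gamma(\mathbb{Z}_{18})$ pass the test, while $\Gamma(\mathbb{Z}_{12})$ and $\Gamma(\mathbb{Z}_{15})$ fail it, as the theorem predicts.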
\begin{lemma}\label{2.17} If $n^* \neq n$, $\Gamma(\mathbb{Z}_n)$ has a simplicial vertex.
\end{lemma} \begin{proof} Let $n^* \neq n$. Then $n/n^*$ is a vertex: $(n/n^*)\cdot n^* = n \equiv 0$, and $n^*$ is non-zero in $\mathbb{Z}_n$ since $n^* < n$. Every neighbor of $n/n^*$ is a multiple of $n^*$, and any two multiples of $n^*$ share an edge, so $n/n^*$ is a simplicial vertex. Hence $\Gamma(\mathbb{Z}_n)$ has a simplicial vertex. \end{proof}
Another construction, $n_*$, will be useful. It is similar to $n^*$, but for the odd-power primes we round down instead of up: for $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$, define $n_* = p_1^{\beta_1}p_2^{\beta_2}\cdots p_k^{\beta_k}$, where $\beta_i = \alpha_i/2$ if $\alpha_i$ is even and $\beta_i = (\alpha_i-1)/2$ if $\alpha_i$ is odd.\\ \\ Note that $n_*n^* = n$, so $n_*n^* \equiv 0$ in $\mathbb{Z}_n$, and that if $n$ is square free, $n_* = 1$.
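Both quantities come straight from the prime factorization. The short Python sketch below (illustrative; the names are ours) returns the pair $(n^*, n_*)$, and one can check that their product is $n$, hence $0$ in $\mathbb{Z}_n$:

```python
def factorize(n):
    # prime factorization of n as a dict {p: exponent}
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def star_pair(n):
    # n^*: odd exponents rounded up; n_*: odd exponents rounded down.
    up, down = 1, 1
    for p, a in factorize(n).items():
        up *= p ** ((a + 1) // 2)
        down *= p ** (a // 2)
    return up, down
```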
\begin{lemma}\label{2.18} An arbitrary vertex $v$ in $\Gamma(\mathbb{Z}_n)$ is a simplicial vertex iff $v\in T_2$ or $v\in T_g$ where $g$ is a factor of $n_*$. \end{lemma} \begin{proof} Take an arbitrary vertex $v$ in $\Gamma(\mathbb{Z}_n)$. Let $v\in T_2$. Then $v$ only shares an edge with vertices in $T_{n/2}$. By Lemma \ref{2.7}, $T_{n/2}$ has only one element, which trivially forms a clique. So $v$ is simplicial.\\ Let $v\in T_g$ where $g$ is a factor of $n_*$, say $n_* = ag$ for a positive integer $a$. Consider some vertex $h$ in a type class $T_j$ that shares an edge with $v$. Then $j\cdot g = bn$ for some positive integer $b$, so $\frac{jn_*}{a} = bn = bn_*n^*$, hence $\frac{j}{a} = bn^*$ and $j=abn^*$. So $j$, and therefore $h$, is a multiple of $n^*$. Since every multiple of $n^*$ shares an edge with every other such multiple, $v$ is a simplicial vertex.\\ Conversely, let $v$ be neither in $T_2$ nor in any $T_g$ with $g$ a factor of $n_*$. Let the type class of $v$ be $T_w$. Since $w$ does not divide $n_*$, some prime $p_x$ appears in $w$ with a power greater than half its power in $n$. Consider the type class $T_{n/w}$. Each vertex in $T_{n/w}$ shares an edge with $v$. Since $v\notin T_2$, $T_{n/w} \neq T_{n/2}$, so by Lemma \ref{2.7}, $T_{n/w}$ has more than one element. Since the power of $p_x$ in $n/w$ is less than half its power in $n$, no two vertices in $T_{n/w}$ share an edge with each other. So the neighbors of $v$ do not form a clique, and $v$ is not simplicial. \end{proof}
\begin{theorem}\label{2.19} $\Gamma(\mathbb{Z}_n)$ has a simplicial vertex iff $n$ is not square free or $n$ is even. \end{theorem} \begin{proof} Let $n$ not be square free. Then $n^* \neq n$, so by Lemma \ref{2.17}, $\Gamma(\mathbb{Z}_n)$ has a simplicial vertex.\\ \\ Let $n$ be even. Then 2 is a zero divisor, and every neighbor of 2 must be a multiple of $n/2$, of which there is only one. So 2 is a simplicial vertex.\\ \\ Let $n$ be square free and odd, so 2 is not a factor of $n$. Consider an arbitrary vertex $x$ and set $d = gcf(x, n)$. Then $x$ shares an edge with both $n/d$ and $2n/d$. The vertex $2n/d$ is non-zero since $d \geq 3$, and $n/d$ and $2n/d$ do not share an edge: if $\frac{n}{d}\cdot\frac{2n}{d} = ny$ for some integer $y$, then $2n = yd^2$, so $d^2$ divides $2n$; since $d$ is odd and $d>1$, this forces $d^2$ to divide $n$, contradicting that $n$ is square free. So $\Gamma(\mathbb{Z}_n)$ has no simplicial vertices. \end{proof}
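As a computational illustration of this theorem (informal, with helper names of our own), one can scan $\Gamma(\mathbb{Z}_n)$ for a simplicial vertex and compare against the parity and square-freeness of $n$:

```python
from math import gcd

def has_simplicial_vertex(n):
    # A vertex is simplicial when its neighbourhood induces a clique.
    verts = [a for a in range(1, n) if gcd(a, n) > 1]
    adj = {v: {u for u in verts if u != v and (u * v) % n == 0}
           for v in verts}
    return any(all(b in adj[a] for a in nb for b in nb if a != b)
               for nb in adj.values())

def squarefree(n):
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True
```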
Note: It follows from \cite{jK55} (Observation 3.2) that if a vertex $u$ is simplicial in $\Gamma(\mathbb{Z}_n)$, then $T_u$ is simplicial in $\Gamma^{T}(\mathbb{Z}_n)$, but not conversely. For example, in $\Gamma^{T}(\mathbb{Z}_{12})$, $T_3$ is simplicial, whereas $3$ is not simplicial in $\Gamma(\mathbb{Z}_{12})$.
\begin{lemma}\label{2.20} If $n$ has three or more distinct prime factors, $\Gamma(\mathbb{Z}_n)$ is not $\gamma-\beta$ perfect. \end{lemma} \begin{proof} Let $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ where $k\geq 3$. By \cite{bH77}, the domination number is $k$. Suppose there is a vertex map $V$ of size $k$.\\ We claim that $V$ must contain the vertex $n/p_x$ for every prime factor $p_x$ of $n$.\\
Consider the vertex $n/p_x$ for some prime factor $p_x$ of $n$, and suppose $n/p_x \notin V$. Construct the set $C = \{ p_xp_i \mid 1\leq i\leq k \}$. The vertex $n/p_x$ shares an edge with every vertex in $C$, so since $n/p_x \notin V$, every element of $C$ is in $V$. $C$ has $k$ elements and $|V| = k$, so $V = C$. Now the vertex $p_x$ shares an edge with $n/p_x$, and neither endpoint lies in $C$, so this edge is not covered by $V$, a contradiction. So each $n/p_x$ is in $V$, and hence $V = \{n/p_1, \cdots, n/p_k\}$.\\ \\ Consider the type classes $T_{n/p_1}$, $T_{n/p_2}$ and $T_{n/p_3}$. By Lemma \ref{2.8}, at most one type class has only one element, so at least two of these type classes have more than one element; without loss of generality, let them be $T_{n/p_1}$ and $T_{n/p_2}$. Since $n/p_1$ and $n/p_2$ are both in $V$, choose other vertices $u\in T_{n/p_1}$ and $v\in T_{n/p_2}$. Then $u$ and $v$ share an edge, since they are multiples of $n/p_1$ and $n/p_2$ respectively, but neither is in $V$. Then $V$ must contain at least one further element, making its size at least $k+1$, a contradiction. So no vertex map of size $k$ exists, and $\Gamma(\mathbb{Z}_n)$ is not $\gamma-\beta$ perfect. \end{proof}
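For the smallest admissible case $n = 2\cdot 3\cdot 5 = 30$, the conclusion of the lemma can be verified exhaustively: no vertex set of size at most 3 covers every edge of $\Gamma(\mathbb{Z}_{30})$. The Python sketch below is illustrative only:

```python
from math import gcd
from itertools import combinations

def zdg(n):
    # vertices and edges of the zero-divisor graph of Z_n
    verts = [a for a in range(1, n) if gcd(a, n) > 1]
    edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
             if (u * v) % n == 0]
    return verts, edges

def min_cover_exceeds(n, k):
    # True when no vertex set of size <= k covers every edge of the
    # zero-divisor graph of Z_n.
    verts, edges = zdg(n)
    for size in range(k + 1):
        for cand in combinations(verts, size):
            s = set(cand)
            if all(u in s or v in s for u, v in edges):
                return False
    return True
```

By contrast, $\Gamma(\mathbb{Z}_{12})$ does admit a cover of size 3 but none of size 2.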
\begin{theorem}\label{2.21} The only $\gamma-\beta$ perfect graphs $\Gamma(\mathbb{Z}_n)$ are those with $n=2^3, 3^2, p, 2p$ and $3p$, where $p$ is prime. \end{theorem} \begin{proof} Let $n=2^3$. The domination number clearly equals the size of the smallest vertex map.\\
\begin{tikzpicture}
\node (2) at (1, 1) {2}; \node (4) at (2, 2) {4}; \node (6) at (3, 1) {6};
\foreach \a/\b in {2/4, 4/6}
\draw (\a) -- (\b);
\end{tikzpicture}\\
Let $n=3^2$. The domination number clearly equals the size of the smallest vertex map.\\
\begin{tikzpicture}
\node (3) at (1, 1) {3}; \node (6) at (2, 1) {6};
\foreach \a/\b in {3/6}
\draw (\a) -- (\b);
\end{tikzpicture}\\
Let $n=2p$. Then the graph is a star, so the domination number and the size of the smallest vertex map are both 1.\\ Let $n=3p$. Then $V = \{p, 2p\}$ is both a minimal dominating set and a minimal vertex map.\\ Let $n=p$. Then both the domination number and the size of the smallest vertex map are 0, since the graph is empty.\\ \\ Now we show that all other $\Gamma(\mathbb{Z}_n)$ are not $\gamma-\beta$ perfect.\\\\
Let $n=2^2$. The empty set is a vertex map since the graph has no edges, so the smallest vertex map and the domination number are not the same.\\ Let $n=2^x$, $x\geq 4$. Then $2^{x-1}-2^{x-2}-3\cdot2^{x-2}$ is a triangle. A triangle prevents vertex maps of size 1, and by \cite{bH77} the domination number is 1, so the values do not match.\\ Let $n=3^x$, $x\geq 3$. Then $3^{x-1}-2\cdot3^{x-1}-3^{x-2}$ is a triangle that prevents vertex maps of size 1.\\ Let $n=p^x$, $p\geq 5$, $x\geq 2$. Then $p^{x-1}-2\cdot p^{x-1}-3\cdot p^{x-1}$ is a triangle.\\ Let $n=pq$, $q>p\geq 5$. The domination number is 2 by \cite{bH77}, but $p-q-2p-2q-3p-3q$ is a cycle of length 6, and a vertex map must contain at least 3 vertices of any cycle of length 6, so the smallest vertex map is not 2.\\ Let $n=p^xq$, $x\geq 2$. The domination number is 2.\\ \begin{enumerate} \item[Case 1:] $p=2$.\\ Then $p^{x-1}q-p^x-q-p^{x+1}-pq$ is a cycle of length 5 when $x\geq 3$ (its endpoints share an edge, since $p^{x-1}q\cdot pq = p^xq^2\equiv 0$), and no vertex map of size 2 covers a cycle of length 5. For $x=2$ the endpoints coincide; in that case the 4-cycle $q-p^2-3q-p^3$ together with the disjoint edge $2-2q$ forces a vertex map of size at least 3.\\ \item[Case 2:] $p\neq 2$.\\ Note that $q=2$ forces $x\geq 3$ here, since $2p$ and $2p^2$ are excluded. The vertices $p^x, p, 2p$ and $p^{x-1}q, 2p^{x-1}q$ form a complete bipartite sub-graph, so a vertex map of size 2 must be exactly $\{p^{x-1}q, 2p^{x-1}q\}$; but then the edge $p^{x-1}-pq$ is left uncovered. The smallest vertex map is therefore larger than 2, making the graph not $\gamma-\beta$ perfect.\\ \end{enumerate} Let $n=p^xq^y$, $x, y \geq 2$. The domination number is 2 by \cite{bH77}. Assume there is a vertex map $V$ of size 2. Consider the edges $p-p^{x-1}q^y$ and $q-p^xq^{y-1}$; $V$ must contain at least one vertex of each. By Lemma \ref{2.8}, at most one type class has a single element, so at least one of the type classes $T_{p^xq^{y-1}}$ and $T_{p^{x-1}q^y}$ contains more than one vertex; without loss of generality, let it be $T_{p^{x-1}q^y}$. If $p\notin V$, then $p^{x-1}q^y\in V$, and for any other $u\in T_{p^{x-1}q^y}$ the edge $p-u$ is uncovered; so $p\in V$. The second vertex of $V$ must cover the edge $q-p^xq^{y-1}$, so it is $q$ or $p^xq^{y-1}$. But then an edge between two distinct vertices of $T_{p^{x-1}q^y}$ (any two such vertices share an edge, since $x\geq 2$) is not covered by $V$, which is a contradiction.\\ Let $n=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$, $k\geq 3$.
Then by Lemma \ref{2.20}, the graph is not $\gamma-\beta$ perfect.\\ So the only $\gamma-\beta$ perfect graphs $\Gamma(\mathbb{Z}_n)$ are those with $n=2^3, 3^2, p, 2p$ and $3p$. \end{proof}
\section{Some properties of $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$}
In this section, we discuss some facts about $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$. It is often possible to relate some properties of the individual $\Gamma(\mathbb{Z}_{n_i})$ to the graph of the product. One example is that the domination number of $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$ has an upper and lower bound corresponding to the domination number of each $\Gamma(\mathbb{Z}_{n_i})$.
\begin{theorem}\label{3.0} Consider two arbitrary commutative rings with unity, $R$ and $S$. $\Gamma(R\times S)$ is complete iff $|R|=|S|=2$. \end{theorem} \begin{proof}
Consider some $R$ and $S$ with $|R|=|S|=2$. Since both $R$ and $S$ contain $1$, the only elements of $R$ and $S$ are $0$ and $1$, where $1$ denotes the unity of the respective ring. Then the zero-divisor graph is $(0, 1) - (1, 0)$, which is complete.\\ Conversely, let $R$ or $S$ have more than 2 elements; without loss of generality, let it be $R$. Then $R$ has some element $a$ that is neither $1$ nor $0$. The graph $\Gamma(R\times S)$ has vertices $(1, 0)$ and $(a, 0)$. These vertices do not share an edge because $1\cdot a = a \neq 0$. So $\Gamma(R \times S)$ is not complete.\\ \end{proof}
\begin{theorem}\label{3.1} Let $k \geq 2$ and let each $R_i$ be a commutative ring with $1$. Then $\Gamma(R_1\times \cdots\times R_k)$ is complete iff $k=2$ and $|R_i| = 2$ for all $i$. \end{theorem} \begin{proof}
Consider some $\Gamma(R_1\times\cdots\times R_k)$ where $k=2$ and all $|R_i| = 2$. Then by Theorem \ref{3.0}, $\Gamma(R_1\times\cdots\times R_k)$ is complete.\\
Consider some $\Gamma(R_1\times \cdots\times R_k)$ that does not meet these criteria. If $k \geq 3$, then $(1, 0, 1, \cdots, 1)$ and $(1, 1, 0, \cdots, 0)$ are two vertices that do not share an edge. If some $|R_i| > 2$, then $R_i$ has an element $a$ that is not 0 or 1, and the vertex $(0, \cdots, 0, a, 0, \cdots, 0)$ does not share an edge with the vertex $(0, \cdots, 0, 1, 0, \cdots, 0)$, where $a$ and $1$ are placed in the $i$-th entry. So $\Gamma(R_1\times \cdots\times R_k)$ is not complete. \end{proof}
\begin{theorem}\label{3.2} $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ with $n, m \geq 2$ is complete bipartite iff $n$ and $m$ are prime. \end{theorem} \begin{proof} Let $m$ and $n$ be prime. Partition the vertices of $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ into the sets
$S_n=\{(x, 0)\mid 0<x<n\}$ and $S_m=\{(0, y)\mid 0<y<m\}$.\\ We claim that $S_n\cup S_m$ contains all vertices of $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$.\\ Assume $\exists$ a zero divisor $a=(a_1, a_2)$ that is not in $S_n\cup S_m$; then both $a_1$ and $a_2$ are non-zero. Since $a$ is a zero divisor, there must be some non-zero $b = (b_1, b_2)$ that shares an edge with $a$, so $a_1 b_1 = 0$. Since $\mathbb{Z}_n$ has no non-zero zero-divisors and $a_1$ is non-zero, $b_1 = 0$; in the same way $b_2 = 0$. So $b = (0,0)$, contradicting that $a$ is a zero divisor. Hence $S_n\cup S_m$ contains all vertices.\\ Take arbitrary $u, v\in S_n$, say $u=(u_1, 0)$ and $v=(v_1, 0)$. Since $u_1 v_1 \neq 0$, $uv\neq (0, 0)$, so $u$ and $v$ do not share an edge; the same holds if both lie in $S_m$. So no two vertices in the same partition share an edge, which makes the graph bipartite. Moreover, every $(x, 0)\in S_n$ and $(0, y)\in S_m$ share an edge, so $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ is complete bipartite.\\ Conversely, let $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ be complete bipartite, and assume one of $n$ and $m$ is not prime; without loss of generality, let it be $n$. Then there is a non-zero zero divisor of $\mathbb{Z}_n$; call it $k$. Since $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ is complete bipartite, its vertices can be partitioned into 2 disjoint subsets such that no edge joins two vertices in the same partition and every pair of vertices in different partitions shares an edge. Both $(1, 0)$ and $(k, 0)$ are vertices, since each shares an edge with $(0, 1)$. Since $(k, 0)$ does not share an edge with $(1, 0)$, they must be in the same partition; call it $S_1$, and let the other partition be $S_2$.
Since $k$ is a zero divisor of $\mathbb{Z}_n$, $\exists k'$, not necessarily distinct from $k$, such that $k\cdot k' = 0$. Then $(k', 1)$ shares an edge with $(k, 0)$, which means $(k', 1)\in S_2$. Since the graph is complete bipartite, $(1, 0)$ must share an edge with $(k', 1)$, as they are in opposite partitions; but their product is $(k', 0) \neq (0, 0)$, which is a contradiction. So both $n$ and $m$ must be prime. \end{proof}
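The theorem is easy to confirm for small pairs of moduli. In the Python sketch below (an informal check; the construction is ours), the ring $\mathbb{Z}_n\times\mathbb{Z}_m$ is built explicitly, a bipartition is recovered by 2-colouring, and completeness is tested across the two sides:

```python
def product_zdg(n, m):
    # adjacency of the zero-divisor graph of Z_n x Z_m
    elems = [(a, b) for a in range(n) for b in range(m) if (a, b) != (0, 0)]
    def mul(u, v):
        return ((u[0] * v[0]) % n, (u[1] * v[1]) % m)
    verts = [u for u in elems if any(mul(u, v) == (0, 0) for v in elems)]
    return {u: {v for v in verts if v != u and mul(u, v) == (0, 0)}
            for u in verts}

def is_complete_bipartite(adj):
    colour = {}
    for s in adj:                      # 2-colour each component
        if s in colour:
            continue
        colour[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    stack.append(v)
                elif colour[v] == colour[u]:
                    return False       # odd cycle: not even bipartite
    side0 = [v for v in adj if colour[v] == 0]
    side1 = [v for v in adj if colour[v] == 1]
    return all(v in adj[u] for u in side0 for v in side1)
```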
\begin{corollary} It follows from this theorem that $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ always has a complete bipartite sub-graph. \end{corollary} This sub-graph is formed by $S_n \cup S_m$. If $n$ or $m$ is not prime, we can delete every vertex whose non-zero entry is a zero divisor of $\mathbb{Z}_n$ or $\mathbb{Z}_m$ respectively to obtain a complete bipartite sub-graph.
\begin{theorem}\label{3.3} $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$, where each $n_i \geq 2$ and $k\geq 2$, is bipartite iff $k=2$ and both $n_i$ are prime, or one $n_i$ is prime and the other is $4$. \end{theorem} \begin{proof} Let $k=2$ and both $n_1$ and $n_2$ be prime. By Theorem \ref{3.2}, $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2})$ is bipartite.\\ Let $k=2$, let one of the $n_i$ be 4 and the other be prime; without loss of generality, let $n_1=4$, so $n_2$ is prime. Partition the vertices into sets $A$ and $B$, where $A$ is the set of all vertices of the form $(a, 0)$ with $a\in\mathbb{Z}_{n_1}\setminus\{0\}$ and $B$ is everything else. Consider arbitrary distinct elements $(a_1, 0)$ and $(a_2, 0)$ of $A$. They do not share an edge, since no two distinct non-zero elements of $\mathbb{Z}_4$ annihilate each other. Now assume $\exists u, v \in B$ such that $u=(u_1, u_2)$ shares an edge with $v=(v_1, v_2)$. Then $u_2 v_2 = 0$ in $\mathbb{Z}_{n_2}$, and since $\mathbb{Z}_{n_2}$ has no non-zero zero-divisors, $u_2=0$ or $v_2=0$; but then $u$ or $v$ has the form $(a, 0)$ and belongs to $A$, a contradiction. So $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2})$ is bipartite.\\ Conversely, let $\Gamma(\mathbb{Z}_{n_1}\times\cdots\times\mathbb{Z}_{n_k})$ be bipartite.\\ We first claim that $k=2$.\\ Assume $k\geq 3$. Then $(1, 0, 0, \cdots, 0) - (0, 1, 0, \cdots, 0) - (0, 0, 1, \cdots, 0)$ is a triangle, which cannot exist in a bipartite graph. So $k < 3$, and by assumption $k\geq 2$, so $k=2$.\\ We next claim that no $\Gamma(\mathbb{Z}_{n_i})$ can have two or more distinct zero divisors.\\ Assume otherwise. Since a zero-divisor graph is connected, there are two distinct zero divisors $u$ and $v$ that share an edge in $\Gamma(\mathbb{Z}_{n_i})$; without loss of generality, let $i=1$. Then $(u, 0) - (v, 0) - (0, 1)$ is a triangle, which cannot exist in a bipartite graph.
The only $\Gamma(\mathbb{Z}_{n_i})$ with exactly one vertex is $\Gamma(\mathbb{Z}_4)$, so each $n_i$ must be either 4 or prime.\\ Our final claim is that both $n_i$ cannot be 4.\\ Assume otherwise. Then $(2, 0) - (2, 2) - (0, 2)$ is a triangle, which cannot exist in a bipartite graph. So, because $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\cdots \times\mathbb{Z}_{n_k})$ is bipartite, $k=2$ and either both $n_i$ are prime, or one is 4 and the other is prime. \end{proof}
\begin{theorem}\label{3.4} $\Gamma(R_1\times\cdots\times R_k)$, where each $R_i$ is a commutative ring with 1, is not perfect if some $\Gamma(R_i)$ is not perfect. \end{theorem} \begin{proof} Let some $\Gamma(R_i)$ be non-perfect. Then by the Strong Perfect Graph Theorem, it contains an odd hole or odd anti-hole $H$ of length 5 or greater. Let $H$ have length $l$ and write it as $v_1-v_2-\cdots-v_{l-1}-v_l-v_1$. Then a hole exists in $\Gamma(R_1\times\cdots\times R_k)$: fill the $i$th position with the vertices of $H$ and fill the remaining positions with zeros, giving $(0, \cdots, 0, v_1, 0, \cdots, 0)-(0, \cdots, 0, v_2, 0, \cdots, 0)-\cdots-(0, \cdots, 0, v_l, 0, \cdots, 0)-(0, \cdots, 0, v_1, 0, \cdots, 0)$.\\ This embedding preserves both edges and non-edges, so the same argument applies to an anti-hole. Hence if any $\Gamma(R_i)$ is non-perfect, $\Gamma(R_1\times\cdots\times R_k)$ is also non-perfect. \end{proof}
\begin{note} The converse of Theorem 3.4 is not true. In the graph $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2)$, every $\Gamma(\mathbb{Z}_2)$ is perfect, but we find the hole $(1, 1, 0, 0, 0)-(0, 0, 1, 1, 0)-(1, 0, 0, 0, 1)-(0, 1, 1, 0, 0)-(0, 0, 0, 1, 1)$. \end{note}
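That this 5-cycle really is a hole (i.e., induced) can be checked in a few lines of Python (illustrative only):

```python
def is_induced_cycle(cycle, adjacent):
    # Consecutive vertices (cyclically) must be adjacent;
    # every other pair must be non-adjacent.
    k = len(cycle)
    for i in range(k):
        for j in range(i + 1, k):
            expected = (j - i == 1) or (i == 0 and j == k - 1)
            if adjacent(cycle[i], cycle[j]) != expected:
                return False
    return True

def z2_adjacent(u, v):
    # edge in the zero-divisor graph of Z_2 x ... x Z_2:
    # the componentwise product is the zero vector
    return all((a * b) % 2 == 0 for a, b in zip(u, v))

hole = [(1, 1, 0, 0, 0), (0, 0, 1, 1, 0), (1, 0, 0, 0, 1),
        (0, 1, 1, 0, 0), (0, 0, 0, 1, 1)]
```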
\begin{theorem}\label{3.5} $\Gamma(R_1\times\cdots\times R_x)$, where each $R_i$ is a commutative ring with 1, is not regular if any $\Gamma(R_i)$ is non-empty. \end{theorem} \begin{proof} Take $\Gamma(R_1\times\cdots\times R_x)$ and let some $\Gamma(R_i)$ be non-empty. Consider the vertex $g=(0, \cdots, 0, 1, 0, \cdots, 0)$ with 1 at the $i^{th}$ index and 0 at all other indices. All neighbors of $g$ must be of the form $(a_1, a_2, \cdots, a_{i-1}, 0, a_{i+1}, \cdots, a_{x-1}, a_x)$, with a zero at the $i^{th}$ index and entries, not all zero, elsewhere. Let there be $f$ many such neighbors. Since $\Gamma(R_i)$ is non-empty, $\exists k\in \Gamma(R_i)$, and since $k$ is a zero divisor, there is some $k'\in \Gamma(R_i)$, not necessarily distinct, such that $k\cdot k'=0$. Consider the vertex $h=(0, \cdots, 0, k, 0, \cdots, 0)$ with $k$ at the $i^{th}$ index and 0 elsewhere. This vertex shares an edge with every vertex that shares an edge with $g$, so $h$ has at least $f$ neighbors. But $h$ also shares an edge with $(1, \cdots, 1, k', 1, \cdots, 1)$, which is not a neighbor of $g$, so $h$ has at least $f+1$ neighbors. Hence $g$ and $h$ have different numbers of neighbors, and $\Gamma(R_1\times\cdots\times R_x)$ is not regular. \end{proof}
\begin{theorem}\label{3.6} For arbitrary commutative rings $R$ and $S$ with 1, $cl(\Gamma(R\times S)) \geq cl(\Gamma(R)) + cl(\Gamma(S)) + |R'||S'|$, where $R'$ and $S'$ are the sets of self-annihilating vertices in a maximal clique of $\Gamma(R)$ and of $\Gamma(S)$ respectively. \end{theorem} \begin{proof}
Let $C$ be a maximal clique in $\Gamma(R)$ and $D$ be a maximal clique in $\Gamma(S)$. Construct the induced sub-graph $X = \{(c, 0) \mid c\in C\}\cup\{(0, d) \mid d\in D\}$. Take two arbitrary distinct vertices $u$ and $v$ in $X$.\\ \begin{enumerate} \item[Case 1:] $u=(c_1, 0), v=(c_2, 0)$.\\ Since $c_1$ shares an edge with $c_2$, $u$ and $v$ share an edge.\\ \item[Case 2:] $u=(0, d_1), v=(0, d_2)$.\\ Since $d_1$ shares an edge with $d_2$, $u$ and $v$ share an edge.\\ \item[Case 3:] $u$ and $v$ are not of the same form.\\ Without loss of generality, let $u = (c, 0)$ and $v = (0, d)$. Then $uv=(0, 0)$, so $u$ shares an edge with $v$.\\ \end{enumerate} So $X$ is a clique in $\Gamma(R\times S)$ of size $cl(\Gamma(R)) + cl(\Gamma(S))$.\\
Now consider $R'$, the set of all self-annihilating vertices in $C$. Each vertex in $R'$ annihilates every other vertex of $C$ as well as itself. Likewise, every vertex in $S'$, the set of all self-annihilating vertices in $D$, annihilates every other vertex of $D$ and itself. Define the induced sub-graph $Y = \{ (r, s) \mid r\in R', s\in S' \}$. Every vertex $(r, s) \in Y$ shares an edge with every other vertex of $Y$ and with every vertex of $X$, so $X\cup Y$ forms a clique of size $cl(\Gamma(R)) + cl(\Gamma(S)) + |R'||S'|$. \end{proof}
\begin{corollary}
Consider $n$ arbitrary commutative rings with 1, $R_1, R_2, \cdots, R_n$. Then
$cl(\Gamma(R_1\times R_2\times\cdots\times R_n)) \geq \sum_{i=1}^{n}cl(\Gamma(R_i))+\sum_{1\leq i<j\leq n}|R_{i}'||R_{j}'|+\sum_{1\leq i<j<k\leq n}|R_{i}'||R_{j}'||R_{k}'|+\cdots + |R_{1}'||R_{2}'|\cdots|R_{n}'|$, where each $R_{i}'$ is the set of self-annihilating vertices in a maximal clique of $\Gamma(R_{i})$. \end{corollary}
\begin{proof}
Extending the construction in the proof of the theorem above, consider $C_1, C_2, \cdots, C_n$, a collection of maximal cliques in $\Gamma(R_1), \Gamma(R_2), \cdots, \Gamma(R_n)$ respectively. Construct the induced sub-graphs $X_{i} = \{(0, 0, \cdots, c_{i}, \cdots, 0)\mid c_{i}\in C_{i}\}$, where $c_{i}$ is placed in the $i$-th coordinate, and set $X =\bigcup_{i=1}^{n} X_{i}$. Then $X$ forms a clique of cardinality $\sum_{i=1}^{n}cl(\Gamma(R_i))$. Next consider $X_{ij} = \{(0, 0, \cdots, c_{i}, \cdots, c_j, \cdots, 0)\mid c_{i}\in R'_{i}, c_{j} \in R'_{j}\}$, where $R'_{i}$ and $R'_{j}$ are the sets of self-annihilating vertices in the maximal cliques of $\Gamma(R_{i})$ and $\Gamma(R_{j})$, with $c_{i}$ and $c_{j}$ placed in the $i$-th and $j$-th entries respectively. Set $Y = \bigcup_{1\leq i<j\leq n} X_{ij}$. Then $Y$ forms a clique of cardinality $\sum_{1\leq i<j\leq n}|R_{i}'||R_{j}'|$ that is disjoint from $X$. In a similar fashion we can construct $X_{ijk}$ for each triple $1\leq i<j<k\leq n$; their union $Z$ gives a clique of cardinality $\sum_{1\leq i<j<k\leq n}|R_{i}'||R_{j}'||R_{k}'|$. Proceeding in this way, the result follows.
\end{proof}
\begin{lemma}\label{3.7} Consider $\Gamma(\mathbb{Z}_n)$ for arbitrary $n$. There is a maximal clique $M$ that contains all self-annihilating vertices. \end{lemma} \begin{proof} Follows from Theorem \ref{2.12} and Lemma \ref{2.10}. \end{proof}
\begin{theorem}\label{3.8} The clique number of $\Gamma(\mathbb{Z}_n\times\mathbb{Z}_m)$ has a lower bound of $cl(\Gamma(\mathbb{Z}_n))+cl(\Gamma(\mathbb{Z}_m)) + (\frac{n}{n^*}-1)(\frac{m}{m^*}-1)$. \end{theorem} \begin{proof} Follows from Theorem \ref{3.6} and the proofs of Theorem \ref{2.12} and Lemma \ref{2.10}. \end{proof}
\begin{theorem}\label{3.11} $\Gamma(R_1\times\cdots\times R_k)$, where $k \geq 2$ and each $R_i$ is a commutative ring with 1, has a simplicial vertex iff some $\Gamma(R_i)$ has a simplicial vertex or some $|R_i| = 2$. \end{theorem} \begin{proof} Take arbitrary $\Gamma(R_1\times\cdots\times R_k)$. Let some $\Gamma(R_i)$ have a simplicial vertex $c$. Then consider the vertex $(1, \cdots, 1, c, 1, \cdots, 1)$ where $c$ is in the $i$th slot. Each neighbor of $(1, \cdots, 1, c, 1, \cdots, 1)$ must have 0 in every slot except the $i$th slot, and the value of the $i$th slot must be a neighbor of $c$ in $\Gamma(R_i)$. Since $c$ is simplicial, its neighbors pairwise share edges, and every other slot is 0, so all such neighbors of $(1, \cdots, 1, c, 1, \cdots, 1)$ form a clique. So $\Gamma(R_1\times\cdots\times R_k)$ has a simplicial vertex.\\ \\
Let some $|R_i| = 2$. Then $(1, \cdots, 1, 0, 1, \cdots, 1)$ only shares an edge with $(0, \cdots, 0, 1, 0, \cdots, 0)$, making $(1, \cdots, 1, 0, 1, \cdots, 1)$ simplicial.\\ \\ Conversely, let $\Gamma(R_1\times\cdots\times R_k)$ have a simplicial vertex, and
assume for contradiction that all $|R_i| > 2$ and no $\Gamma(R_i)$ has a simplicial vertex. Consider an arbitrary vertex $v$ in $\Gamma(R_1\times\cdots\times R_k)$. Suppose $v$ has 0 at some index $i$, $v = (\cdots, 0, \cdots)$. Since $|R_i| > 2$, there exists some element $a\in R_i$ that is neither 0 nor 1. Then $v$ shares an edge with $(0, \cdots, 0, 1, 0, \cdots, 0)$ and $(0, \cdots, 0, a, 0, \cdots, 0)$, which do not share an edge with each other. So for $v$ to be simplicial, it cannot contain any 0. Next suppose $v$ has $a$ at some index, where $a$ is a zero divisor in its respective $R_i$: $v = (\cdots, a, \cdots)$. Then $v$ shares an edge with every $(0, \cdots, 0, a', 0, \cdots, 0)$ where $a\cdot a' = 0$ in $R_i$. Since $a$ is not simplicial in $\Gamma(R_i)$ (no $\Gamma(R_i)$ has a simplicial vertex), some neighbor $(0, \cdots, 0, a', 0, \cdots, 0)$ will not share an edge with another neighbor of the same form. So $v$ is not simplicial if it has any zero divisor in its slots. For $v$ to be simplicial, every slot would have to be a nonzero non-zero-divisor, but elements of that form are not vertices. So $\Gamma(R_1\times\cdots\times R_k)$ has no simplicial vertex, which is a contradiction. The assumption that all $|R_i| > 2$ and no $\Gamma(R_i)$ has a simplicial vertex is therefore false, so some $|R_i| = 2$ or some $\Gamma(R_i)$ has a simplicial vertex. \end{proof}
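The two directions of the theorem can be checked computationally on small products (our sketch, not part of the paper): $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_4)$ must have a simplicial vertex because $|\mathbb{Z}_2| = 2$, while $\Gamma(\mathbb{Z}_3\times\mathbb{Z}_3)$ must have none, since neither factor has size 2 and $\Gamma(\mathbb{Z}_3)$ has no vertices at all.

```python
# Sketch (not from the paper): test Theorem 3.11 on two small products.
from itertools import combinations, product

def zd_graph(moduli):
    """Adjacency map of the zero-divisor graph of Z_{m_1} x ... x Z_{m_k}."""
    elts = [e for e in product(*[range(m) for m in moduli]) if any(e)]
    mul = lambda u, v: tuple((a * b) % m for a, b, m in zip(u, v, moduli))
    zero = tuple(0 for _ in moduli)
    verts = [u for u in elts if any(mul(u, v) == zero for v in elts)]
    return {u: {v for v in verts if v != u and mul(u, v) == zero} for u in verts}

def has_simplicial(adj):
    # v is simplicial when its neighborhood induces a clique
    return any(all(b in adj[a] for a, b in combinations(nbrs, 2))
               for v, nbrs in adj.items() if nbrs)

assert has_simplicial(zd_graph((2, 4)))      # |Z_2| = 2 forces one
assert not has_simplicial(zd_graph((3, 3)))  # a chordless 4-cycle, none simplicial
```

In $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_4)$ the vertex $(0,1)$ is simplicial (its only neighbor is $(1,0)$), while $\Gamma(\mathbb{Z}_3\times\mathbb{Z}_3)$ is a 4-cycle, in which every vertex has two non-adjacent neighbors.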
\begin{theorem}\label{3.12} $\Gamma(R_1\times\cdots\times R_k)$, where each $R_i$ is a commutative ring with 1, is non-chordal if any $\Gamma(R_i)$ is non-chordal. \end{theorem} \begin{proof} Consider arbitrary $\Gamma(R_1\times\cdots\times R_k)$ and let some $\Gamma(R_i)$ be non-chordal. Then there exists a cycle $a_1-a_2-\cdots-a_m-a_1$ of length greater than 3 with no chord. In $\Gamma(R_1\times\cdots\times R_k)$ there is then a corresponding chord-less cycle $(0, \cdots, a_1, \cdots, 0)-(0, \cdots, a_2, \cdots, 0)-\cdots-(0, \cdots, a_m, \cdots, 0)-(0, \cdots, a_1, \cdots, 0)$, which makes it non-chordal. \end{proof}
\begin{lemma}\label{3.13} $\Gamma(R_1\times\cdots\times R_k)$ where $R_i$ is a commutative ring with 1 and $k\geq 2$ is non-chordal if more than one $|R_i| \geq 3$. \end{lemma} \begin{proof}
In $\Gamma(R_1\times\cdots\times R_k)$, let two or more $|R_i| \geq 3$. Without loss of generality, let the first two slots correspond to factors of cardinality at least 3. Then $(1, 0, \cdots, 0)-(0, 1, \cdots, 0)-(a, 0, \cdots, 0)-(0, b, \cdots, 0)-(1, 0, \cdots, 0)$, where $a$ is a non-trivial element of $R_1$ and $b$ is a non-trivial element of $R_2$, is a cycle of length 4 with no chord. So $\Gamma(R_1\times\cdots\times R_k)$ is non-chordal. \end{proof}
\begin{lemma}\label{3.14} $\Gamma(R_1\times\cdots\times R_k)$, where each $R_i$ is a commutative ring with 1, is non-chordal if $k \geq 4$. \end{lemma} \begin{proof} Let $k\geq 4$. Then $(1, 1, 0, 0, \cdots, 0)-(0, 0, 1, 1, \cdots, 0)-(1, 0, 0, 0, \cdots, 0)-(0, 0, 0, 1, \cdots, 0)-(1, 1, 0, 0, \cdots, 0)$ is a chord-less cycle of length 4. So $\Gamma(R_1\times\cdots\times R_k)$ is non-chordal. \end{proof}
\begin{lemma}\label{3.15} $\Gamma(\mathbb{Z}_{n_1}\times\mathbb{Z}_{n_2}\times\mathbb{Z}_{n_3})$ where at least one $n_i>2$ is non-chordal.
\end{lemma} \begin{proof} Without loss of generality, let $n_3 > 2$. Then, \\ $(1, 0, 0)-(0, 0, 2)-(1, 1, 0)-(0, 0, 1)-(1, 0, 0)$ is a chord-less cycle. \end{proof}
\begin{theorem}\label{3.16} The only chordal $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$ where $n_i \geq 2$ and $k\geq 2$ are $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_p)$, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{p^2})$ and $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2)$. \end{theorem} \begin{proof}
Consider $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_p)$. Since $\Gamma(\mathbb{Z}_p)$ has no vertices, the only vertices of $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_p)$ are $(1, 0)$ or of the form $(0, x)$ where $0<x<p$. So the graph is a star making it chordal. \\
Consider $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{p^2})$. Assume that $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{p^2})$ is non-chordal. Then there exists a cycle $C$ of length greater than $3$ that has no chord. Let $v$ be an arbitrary vertex in $C$.\\ Suppose $v$ has a multiple of $p$ as its second element, $v=(a, bp)$. Then every vertex of $C$ that is not a neighbor of $v$ must have a non-zero non-multiple of $p$ as its second element. Therefore, both neighbors of $v$ must have 0 as their second element, so that they share an edge with their other neighbor. So both neighbors of $v$ are $(1, 0)$; since vertices in a cycle cannot repeat, $v$ cannot have a multiple of $p$ as its second element. That means the only possible vertices in $C$ are $(1, 0)$ and $(0, b)$, where $b$ is a non-zero non-multiple of $p$. A cycle of length $4$ or greater cannot be constructed out of these vertices, since $(1, 0)$ cannot appear twice and no two vertices of the form $(0, b)$ share an edge. $C$ cannot be constructed, so $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{p^2})$ is chordal. \\
Consider $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2)$. The graph of $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2)$ is shown below and is chordal. \\
\begin{tikzpicture}
\node (a) at (1, 1) {(1, 0, 1)}; \node (b) at (2, 2) {(0, 1, 0)}; \node (c) at (4, 2) {(0, 0, 1)}; \node (d) at (3, 3) {(1, 0, 0)}; \node (e) at (3, 4) {(0, 1, 1)}; \node (f) at (5, 1) {(1, 1, 0)};
\foreach \u/\v in {b/c, c/d, b/d, a/b, c/f, d/e}
\draw (\u) -- (\v);
\end{tikzpicture}\\
To prove the converse, assume the opposite: let there be a chordal $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$ not listed. By Lemma \ref{3.13}, only one $n_i$ can be greater than 2. By Lemma \ref{3.14}, $k \leq 3$. By Theorem \ref{3.12}, if any $\Gamma(\mathbb{Z}_{n_i})$ is non-chordal, then $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$ is non-chordal. So every $n_i$ must be $p^x$, $2p$, or $2p^2$, as shown by Theorem \ref{3.15}.
So the only possible $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$ are $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{p^x})$, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{2p})$, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{2p^2})$, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_{p^x})$, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_{2p})$ and $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_{2p^2})$.
In $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{p^x})$ where $x \geq 3$ and $p$ is prime, $(1, p^{x-1})-(0, (p-1)p)-(1, 0)-(0, p)$ is a chord-less cycle.\\
In $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{2p})$ where $p \geq 3$ is a prime, $(1, 0)-(0, 4)-(1, p)-(0, 2)$ is a chord-less cycle.\\
In $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_{2p^2})$ where $p \geq 3$ is a prime, $(1, 2p)-(0, p)-(1, 4p)-(0, p^2)$ is a chord-less cycle.\\
By Lemma \ref{3.15}, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_{p^x})$, $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_{2p})$ and $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_{2p^2})$ are all non-chordal where $p \geq 3$.\\
So there are no other chordal $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$. \end{proof}
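The classification above can be spot-checked computationally (our sketch, not part of the paper) using the standard characterization that a graph is chordal iff repeatedly deleting simplicial vertices empties it: in a chordal graph a simplicial vertex always exists, while the vertices of a chordless cycle of length at least 4 are never simplicial, so the elimination gets stuck.

```python
# Sketch (not from the paper): chordality via simplicial elimination.
from itertools import combinations, product

def zd_graph(moduli):
    """Adjacency map of the zero-divisor graph of Z_{m_1} x ... x Z_{m_k}."""
    elts = [e for e in product(*[range(m) for m in moduli]) if any(e)]
    mul = lambda u, v: tuple((a * b) % m for a, b, m in zip(u, v, moduli))
    zero = tuple(0 for _ in moduli)
    verts = [u for u in elts if any(mul(u, v) == zero for v in elts)]
    return {u: {v for v in verts if v != u and mul(u, v) == zero} for u in verts}

def is_chordal(adj):
    adj = {v: set(n) for v, n in adj.items()}
    while adj:
        # find any simplicial vertex (neighborhood is a clique)
        simp = next((v for v, nb in adj.items()
                     if all(b in adj[a] for a, b in combinations(nb, 2))), None)
        if simp is None:
            return False          # stuck: a chordless cycle remains
        for nb in adj.values():
            nb.discard(simp)
        del adj[simp]
    return True

assert is_chordal(zd_graph((2, 5)))         # Gamma(Z_2 x Z_p): a star
assert is_chordal(zd_graph((2, 9)))         # Gamma(Z_2 x Z_{p^2})
assert is_chordal(zd_graph((2, 2, 2)))      # Gamma(Z_2 x Z_2 x Z_2)
assert not is_chordal(zd_graph((3, 3)))     # two factors of size >= 3
assert not is_chordal(zd_graph((2, 2, 3)))  # Lemma 3.15 with n_3 = 3
```

The three positive cases are exactly the chordal graphs of Theorem 3.16; the two negative cases illustrate Lemma 3.13 and Lemma 3.15 respectively.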
\begin{lemma}\label{3.17} $D(\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k}))$ has an upper bound of $2[D(\Gamma(\mathbb{Z}_{n_1})) + D(\Gamma(\mathbb{Z}_{n_2})) + \cdots + D(\Gamma(\mathbb{Z}_{n_k}))]$. \end{lemma} \begin{proof} Let $d_i$ be the domination number of $\Gamma(\mathbb{Z}_{n_i})$, so that each $\Gamma(\mathbb{Z}_{n_i})$ has a dominating set $D_i$ of size $d_i$. Then consider the sets\\
$A_i = \{ (0, \cdots, 0, v, 0, \cdots, 0) \mid v\in D_i\}$, where the $i$-th slot runs over the vertices of $D_i$ and the remaining slots are 0. Also consider $B_i$, which contains one neighbor of the form $(0, \cdots, 0, e', 0, \cdots, 0)$ for each vertex $(0, \cdots, 0, e, 0, \cdots, 0)$ in $A_i$. Now consider $\cup_{i=1}^k (A_i\cup B_i)$ and an arbitrary vertex $v \in \Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$.\\ \begin{enumerate} \item[Case 1:] $v$ has an element $e$ in some $i$th slot that is a vertex of $\Gamma(\mathbb{Z}_{n_i})$.\\ If $e\in D_i$, then $v$ shares an edge with the corresponding vertex in $B_i$, and if $e\notin D_i$, then $v$ shares an edge with some vertex in $A_i$.\\ \item[Case 2:] $v$ has 0 in some $i$th slot.\\
Then $v$ shares an edge with some vertex in $A_i$.\\ If neither case holds, then no slot of $v$ is zero or a vertex of its corresponding $\Gamma(\mathbb{Z}_{n_i})$, so the only annihilator of $v$ is $(0, \cdots, 0)$, which means $v$ is not a vertex. Hence $\cup_{i=1}^k (A_i\cup B_i)$ is a dominating set. Each of $A_i$ and $B_i$ has size at most $D(\Gamma(\mathbb{Z}_{n_i}))$, since each contains one vertex for each vertex of $D_i$. So the size of $\cup_{i=1}^k (A_i\cup B_i)$ is at most $2(D(\Gamma(\mathbb{Z}_{n_1})) + D(\Gamma(\mathbb{Z}_{n_2})) + \cdots + D(\Gamma(\mathbb{Z}_{n_k})))$, which is therefore an upper bound of the domination number. \end{enumerate} \end{proof}
\begin{lemma}\label{3.18} $D(\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k}))$ has a lower bound of $D(\Gamma(\mathbb{Z}_{n_1}))+D(\Gamma(\mathbb{Z}_{n_2}))\\ +\cdots+D(\Gamma(\mathbb{Z}_{n_k}))$. \end{lemma} \begin{proof} Let $D$ be an arbitrary dominating set of $\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k})$. Consider a vertex $v=(1, \cdots, 1, a, 1, \cdots, 1)$ where $a$, in the $i$th slot, is a vertex of $\Gamma(\mathbb{Z}_{n_i})$. The only possible neighbors of $v$ are of the form $(0, \cdots, 0, b, 0, \cdots, 0)$ where $b$ is a neighbor of $a$ in $\Gamma(\mathbb{Z}_{n_i})$. Construct the subset $A_i$ of all vertices in $D$ of the form $(0, \cdots, 0, b, 0, \cdots, 0)$ or $(1, \cdots, 1, b, 1, \cdots, 1)$, where $b$ is a vertex in $\Gamma(\mathbb{Z}_{n_i})$.\\ We claim that each $A_i$ has size at least $d_i$, where $d_i$ is the domination number of $\Gamma(\mathbb{Z}_{n_i})$.\\ Assume otherwise, so that there are fewer than $d_i$ vertices of the form $(0, \cdots, 0, b, 0, \cdots, 0)$ and $(1, \cdots, 1, b, 1, \cdots, 1)$ in $D$. Take any vertex $a$ of $\Gamma(\mathbb{Z}_{n_i})$. Since every vertex not in $D$ shares an edge with some vertex in $D$, the vertex $v=(1, \cdots, 1, a, 1, \cdots, 1)$ either shares an edge with some $(0, \cdots, 0, b, 0, \cdots, 0)$ or is itself in $D$ and therefore in $A_i$. If $v$ is not in $D$, then $v$ shares an edge with some $(0, \cdots, 0, b, 0, \cdots, 0)$, which means $a$ shares an edge with some $b$ in $\Gamma(\mathbb{Z}_{n_i})$. If $v$ is in $D$, then $(1, \cdots, 1, a, 1, \cdots, 1)$ is in $A_i$. Construct the set $H$ of all elements appearing in the $i$th slot of the vertices of $A_i$. Then $H$ is a dominating set of $\Gamma(\mathbb{Z}_{n_i})$ of size less than $d_i$, a contradiction since $D(\Gamma(\mathbb{Z}_{n_i})) = d_i$.
So $A_i$ has size at least $d_i$.\\ Next, the sets $A_i$ are pairwise disjoint, since $b$ in the $i$th slot can never be 0 or 1, so no vertex lies in two of them. Hence the sum of the sizes of the $A_i\subseteq D$ is at least the sum of the domination numbers of the $\Gamma(\mathbb{Z}_{n_i})$, which is therefore a lower bound of the domination number. \end{proof}
Combining Lemma \ref{3.17} and \ref{3.18} we get the following.
\begin{theorem}\label{3.19} $D(\Gamma(\mathbb{Z}_{n_1}))+D(\Gamma(\mathbb{Z}_{n_2}))+\cdots+D(\Gamma(\mathbb{Z}_{n_k}))$ $\leq$ $D(\Gamma(\mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times \cdots\times\mathbb{Z}_{n_k}))$ $\leq$ $2[D(\Gamma(\mathbb{Z}_{n_1})) + D(\Gamma(\mathbb{Z}_{n_2})) + \cdots + D(\Gamma(\mathbb{Z}_{n_k}))]$.\\ \end{theorem}
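The sandwich of Theorem 3.19 can be verified by brute force on a small product (our sketch, not part of the paper). Here $D(\Gamma(\mathbb{Z}_4)) = D(\Gamma(\mathbb{Z}_6)) = 1$, so the domination number of $\Gamma(\mathbb{Z}_4\times\mathbb{Z}_6)$ must lie between $2$ and $4$.

```python
# Sketch (not from the paper): brute-force check of
# D(G(Z_4)) + D(G(Z_6)) <= D(G(Z_4 x Z_6)) <= 2[D(G(Z_4)) + D(G(Z_6))].
from itertools import combinations, product

def zd_graph(moduli):
    elts = [e for e in product(*[range(m) for m in moduli]) if any(e)]
    mul = lambda u, v: tuple((a * b) % m for a, b, m in zip(u, v, moduli))
    zero = tuple(0 for _ in moduli)
    verts = [u for u in elts if any(mul(u, v) == zero for v in elts)]
    return {u: {v for v in verts if v != u and mul(u, v) == zero} for u in verts}

def domination_number(adj):
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for sub in combinations(verts, k):
            s = set(sub)
            # every vertex is in the set or has a neighbor in it
            if all(v in s or adj[v] & s for v in verts):
                return k
    return 0

d4 = domination_number(zd_graph((4,)))   # single vertex 2
d6 = domination_number(zd_graph((6,)))   # {3} dominates the path 2-3-4
d46 = domination_number(zd_graph((4, 6)))
assert d4 == 1 and d6 == 1
assert d4 + d6 <= d46 <= 2 * (d4 + d6)
```

In this example the search finds $D(\Gamma(\mathbb{Z}_4\times\mathbb{Z}_6)) = 3$, e.g. via the dominating set $\{(2,0), (0,2), (0,3)\}$, strictly between the two bounds.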
The next theorem concerns the coefficients of the domination polynomial.\\
\begin{theorem}\label{3.20} For arbitrary $\Gamma(\mathbb{Z}_{p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}})$, where $k \geq 3$ and the $p_i$ are distinct primes, the coefficient of the term of smallest degree in the domination polynomial is $(p_1-1)(p_2-1)\cdots(p_k-1)$. \end{theorem}
\begin{proof}
Consider $\Gamma(\mathbb{Z}_{p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}})$, $k \geq 3$, and let $n = p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$. Construct a set $A$ with exactly one element from each type class $T_{n/p_i}$. Since every vertex of $\Gamma(\mathbb{Z}_{p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}})$ is a multiple of some $p_i$, every vertex shares an edge with some vertex in some $T_{n/p_i}$ and therefore with $A$. So $A$ is a dominating set.\\
We claim that an arbitrary minimal dominating set $D$ contains exactly one vertex from each type class $T_{n/p_i}$ and nothing else.\\
Assume the opposite. Then there exists a minimal dominating set $D$ that either has no vertex from some type class $T_{n/p_x}$ or has an extra vertex outside the classes $T_{n/p_i}$. First let $D$ have no vertex from $T_{n/p_x}$. Since the only neighbors of vertices in $T_{p_x}$ are in $T_{n/p_x}$, every vertex in $T_{p_x}$ must be in $D$. Moreover $p_x \neq n/2$, because otherwise $2p_x=n$ and $2 = \frac{p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}}{p_x}$, which is not possible. So by Lemma 1.10, $T_{p_x}$ has more than one element. If $D$ contains at least one vertex from every $T_{n/p_i}$ except $T_{n/p_x}$, then $D$ is larger than the dominating set $A$ above, so $D$ is not minimal. If $D$ lacks vertices from other classes $T_{n/p_i}$ as well, then for each missing vertex two or more must be added from the corresponding class $T_{p_i}$, in which case $D$ is again not minimal. So $D$ must contain at least one vertex from each $T_{n/p_i}$. There also cannot be any additional vertices, whether from type classes not of the form $T_{n/p_x}$ or duplicates from the same class; otherwise $D$ would not be minimal.\\
Since there are $p_i - 1$ vertices in $T_{n/p_i}$, the total number of minimal dominating sets is $(p_1-1)(p_2-1)\cdots(p_k-1)$, which is the coefficient of the term of smallest degree in the domination polynomial.
\end{proof}
\begin{theorem}\label{3.21} $\Gamma(\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}\times\cdots\times\mathbb{Z}_{p_k})$ is $k$-partite whenever every $p_i$ is prime. \end{theorem} \begin{proof} Consider the graph $\Gamma(\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}\times\cdots\times\mathbb{Z}_{p_k})$ and construct a collection of subsets $S_i$, where $S_i$ is the set of all vertices with a non-zero term in the $i$th slot and zero in every earlier slot.\\
$S_1 = \{ (a, \cdots) \mid a\in \mathbb{Z}_{p_1}, a\neq 0 \}$\\
$S_2 = \{ (0, a, \cdots) | a\in \mathbb{Z}_{p_2}, a\neq 0 \}$\\ $\cdots$\\
$S_k = \{ (0, 0, \cdots, 0, a) \mid a\in \mathbb{Z}_{p_k}, a\neq 0 \}$\\ We claim that no two vertices $u, v$ from the same subset $S_x$ share an edge.\\ Consider arbitrary vertices $u$ and $v$ in some $S_x$. By definition, the $x$th slots of $u$ and $v$ hold non-zero elements of $\mathbb{Z}_{p_x}$. Since $\mathbb{Z}_{p_x}$ has no nonzero zero divisors, the terms in the $x$th slot do not multiply to 0, so $u$ and $v$ do not share an edge.\\ Now we claim that the $S_i$ form a partition of the vertex set of $\Gamma(\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}\times\cdots\times\mathbb{Z}_{p_k})$.\\ Assume there is a vertex $v$ not in any $S_i$, and let $v$ have a non-zero element $a$ in the $x$th slot. Then by definition $v$ is in $S_x$, or in some $S_i$ with $i<x$ if some earlier slot also has a non-zero element. So $v$ cannot have any non-zero element, and thus $v=0$, which is not a vertex. Hence $\cup S_i$ is the vertex set of $\Gamma(\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}\times\cdots\times\mathbb{Z}_{p_k})$.\\ Assume there is a $v$ in multiple $S_i$, say $S_x$ and $S_y$, and without loss of generality let $x<y$. Then the $x$th slot of $v$ holds a non-zero term $a$ because $v$ is in $S_x$, but the $x$th slot must be zero because $v$ is in $S_y$: a contradiction, so the classes do not overlap.\\ The $S_i$ thus form a partition into $k$ independent sets, so $\Gamma(\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}\times\cdots\times\mathbb{Z}_{p_k})$ is $k$-partite. \end{proof}
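The partition of the proof can be verified directly for a small case (our sketch, not part of the paper): in $\Gamma(\mathbb{Z}_2\times\mathbb{Z}_3\times\mathbb{Z}_5)$, grouping vertices by the position of their first non-zero coordinate gives three classes that cover every vertex and contain no edges.

```python
# Sketch (not from the paper): verify the k-partite classes S_i
# for Gamma(Z_2 x Z_3 x Z_5).
from itertools import combinations, product

moduli = (2, 3, 5)
elts = [e for e in product(*[range(m) for m in moduli]) if any(e)]
mul = lambda u, v: tuple((a * b) % m for a, b, m in zip(u, v, moduli))
zero = (0, 0, 0)
verts = [u for u in elts if any(mul(u, v) == zero for v in elts)]

# S_i: vertices whose first nonzero coordinate is in slot i
classes = [[v for v in verts if next(j for j, x in enumerate(v) if x) == i]
           for i in range(3)]

assert sorted(v for c in classes for v in c) == sorted(verts)   # partition
for c in classes:                                               # independence
    assert all(mul(u, v) != zero for u, v in combinations(c, 2))
```

Independence inside each class is exactly the field argument of the proof: two vertices in $S_x$ both have non-zero $x$th coordinates, whose product in $\mathbb{Z}_{p_x}$ cannot vanish.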
\section{Zero divisor graph of the poset $D_n$}
The zero divisor graph of a poset has been studied in~\cite{jW99}, \cite{jW100}, \cite{jW101}. The clique number of the zero divisor graph of a ring never exceeds its chromatic number, and Beck conjectured that for an arbitrary ring $R$ the two are equal. Anderson and Naseer~\cite{jW102} showed that this is not the case in general: they presented a commutative local ring $R$ with $32$ elements whose chromatic number is strictly bigger than its clique number. In~\cite{jW102}, Nimbhorkar, Wasadikar and DeMeyer have shown that Beck's conjecture holds for meet-semilattices with $0$, i.e., commutative semigroups with $0$ in which each element is idempotent. In fact, it is valid for a much wider class of relational structures, namely for partially ordered sets (posets, briefly) with $0$. To any poset $(P, \leq)$ with a least element $0$ we can assign a graph $G$ as follows: its vertices are the nonzero zero divisors of $P$, where a nonzero $x \in P$ is called a zero divisor if there exists a nonzero $y\in P$ such that $L(x, y)=0$, with $L(x, y)=\{z\in P \mid z\leq x, z\leq y\}$; two vertices $x, y$ are connected by an edge if $L(x, y)=0$. We discuss here some properties of the zero divisor graph of a specific poset, $D_n$, and very often we use the prime factorization of the positive integer $n$. By abuse of notation, we also write $D_n$ for the zero divisor graph of the poset $D_n$. Note that the vertex set of $D_n$ is the set of all factors of $n$ that are not divisible by some prime factor of $n$, and that two vertices in $D_n$ are connected by an edge if and only if they are mutually co-prime.\\
\begin{remark}[Properties of $D_n$] $\phantom e$\\ \begin{enumerate}
\item[i.] If $n =p^{m}$ for some prime $p$ and positive integer $m$, then $D_{n}$ is trivial.\\
So from now on we consider $D_n$ where $n \neq p^{m}$, with $p$ and $m$ as above.\\
\item [ii.] The diameter of $D_n$ is 3 iff $n$ has at least three distinct prime factors $p$, $q$, $r$; for exactly three primes this is shown by the path $pq - r - p - qr$. Otherwise, the diameter is 1 or 2: $D_{p^aq^b}$ is complete bipartite, which has diameter 2, or, in the case $a = b = 1$, diameter 1. \cite{jW103} shows the zero divisor graph of a poset has diameter 1, 2, or 3.
\item[iii.] $D_n$ is complete only when $n=pq$, where $p$ and $q$ are two distinct primes. $D_n$ is complete bipartite iff $n = p^{m}q^{s}$ where $m$ and $s$ are two positive integers.\\
\item[iv.] We have the clique number of $D_n$ and a few coefficients of the clique polynomial. The clique number of $D_n$ is the number of distinct prime factors of $n$. If $n= p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}p_{3}^{\alpha_{3}}\cdots p_{r}^{\alpha_{r}}$, where the $p_{i}$ are distinct primes, any set of vertices $\{ p_{1}^{\beta_{1}}, p_{2}^{\beta_{2}}, p_{3}^{\beta_{3}}, \cdots, p_{r}^{\beta_{r}}\}$ with $1\leq \beta_{i}\leq \alpha_{i}$ for all $i$ forms a maximal clique. This is a clique because no two of its vertices have a common prime factor; moreover, if any vertex of a clique has more than one prime factor, the clique cannot reach size $r$. Hence the clique number is $r$, the number of distinct primes of $n$, and the leading coefficient of the clique polynomial is $\alpha_{1}\alpha_{2}\cdots\alpha_{r}$. The coefficient of $x^{r-1}$ is $\sum_{i=1}^{r}(\alpha_1\alpha_2\cdots \alpha_{i-1}\alpha_{i+1}\cdots \alpha_{r})+\binom {r}{2}\alpha_1\alpha_2\cdots\alpha_r$. Indeed, consider a clique of size $r-1$. If all its vertices have a single prime factor, there are $\sum_{i=1}^{r}(\alpha_1\alpha_2\cdots \alpha_{i-1}\alpha_{i+1}\cdots \alpha_{r})$ cliques of this type, a typical one being a set of the form $\{p_{1}^{\beta_{1}}, p_{2}^{\beta_{2}}, \cdots, p_{i-1}^{\beta_{i-1}}, p_{i+1}^{\beta_{i+1}}, \cdots, p_{r}^{\beta_{r}} \}$ with $1\leq \beta_{j}\leq \alpha_{j}$ for all $j \in \{1, 2, \cdots, r\}$. Otherwise, exactly one vertex contains two primes, in which case we obtain $\binom {r}{2}\alpha_1\alpha_2\cdots\alpha_r$ such cliques of cardinality $r-1$; no element of such a clique can have three distinct primes in its prime factorization. Hence the result follows.\\
\item[v.] The domination number of $D_n$ is the number of distinct prime factors of $n$, the same as the clique number, since a dominating set cannot avoid a prime factor of $n$: if no vertex of a set $V$ is a power of some $p_{i}$, then the vertex $p_{1}p_{2}\cdots p_{i-1}p_{i+1}\cdots p_{r}$, whose neighbors are exactly the powers of $p_i$, is not dominated by $V$. Conversely, if we let $V$ be the set of the distinct primes of $n$, each vertex of $D_n$ shares an edge with at least one vertex in $V$, because each vertex of $D_n$ omits at least one prime of $n$ from its prime factorization.\\
\item[vi.] $D_{n}$ is regular iff $n= (pq)^{m}$ for some positive integer $m$. If $n = p^{m}q^{r}$ with $m\neq r$, then $D_{n}= K_{m, r}$, which is not regular. If instead $n$ has more than two distinct primes in its prime factorization, then $p$ and $pq$ are vertices with different degrees: every vertex that shares an edge with $pq$ shares an edge with $p$, but $p$ shares an edge with $q$ while $pq$ does not, making the graph non-regular.\\
\item[vii.] In \cite{jW100}, it is shown that the girth of the zero divisor graph of any poset is 3, 4, or $\infty$. The girth of $D_{n}$ is $\infty$ iff $n= p^{m}q$, where $p$ and $q$ are two distinct primes and $m$ is a positive integer, for then $D_n$ is the star $K_{m, 1}$. The girth of $D_{n}$ is $4$ if and only if $n= p^{m}q^{r}$, where $p$ and $q$ are two distinct primes and $m$ and $r$ are both positive integers bigger than $1$. Otherwise, the girth of $D_{n}$ is $3$: if $n$ has at least $3$ distinct prime factors $p$, $q$ and $r$, then $p-q-r-p$ is a triangle in $D_{n}$.\\
\item[viii.] $D_n$ is not perfect if $n$ has at least five different primes $p, q, r, s, t$ in its prime factorization: then $ps-qt-pr-qs-tr-ps$ is an induced cycle of length five in $D_n$, and hence by the Strong Perfect Graph Theorem $D_n$ is not perfect.\\
Suppose $n$ has 4 distinct prime factors $p$, $q$, $r$ and $s$. Assume there is an induced odd cycle of length 5 or greater that contains a vertex $v$ that is the product of two such primes, say $v = p^xq^y$. Then the two neighbors of $v$ cannot be multiples of $p$ or $q$. Suppose both neighbors are powers of $r$, so that part of the cycle reads $r^a - p^xq^y - r^b$ for positive integers $a$ and $b$. Then $r^a$ necessarily shares an edge with the other neighbor of $r^b$, closing a cycle of length 4. So the neighbors of $v$ must involve both $r$ and $s$, and this part of the cycle must be of the form $r^a - p^xq^y - r^ws^z$, since otherwise we again obtain a cycle of length $4$. But any vertex that shares an edge with $r^ws^z$ must also share an edge with $r^a$, making such a cycle impossible. Hence an induced odd cycle of length greater than 4 cannot contain a vertex with exactly two prime factors; the situations where $v$ consists of one prime, or of three primes, give contradictions in the same way. Thus $D_n$ is perfect iff $n$ has 4 or fewer distinct prime factors (the case of fewer than 4 primes follows similarly).\\
\item[ix.] $D_n$ is chordal iff $n = p^mq$ or $n = pqr$, where $p$, $q$ and $r$ are distinct primes and $m \geq 1$. For if $n$ is not of that form, then $p-q-p^{2}-q^{2}-p$, $p - q - p^2 - qr - p$, or $p - r - pq - rs - p$ gives a hole of length greater than $3$ in the respective $D_n$.\\
\item[x.] Let $n$ be a square free positive integer. Then the simplicial vertices of $D_n$ are precisely those factors of $n$ which miss exactly one prime in their prime factorization. Now suppose $n$ is not square free. If every prime in its prime factorization occurs with exponent at least 2, then $D_n$ has no simplicial vertex. Otherwise, the simplicial vertices are precisely those which miss exactly one prime occurring with exponent 1. For example, if $n = p^2q^2r$, then $pq$, $p^2q$, $pq^2$ and $p^2q^2$ are the only simplicial vertices, because $r$ is the only prime occurring with exponent 1.\\
\item[xi.] The only planar $D_n$ have $n$ of the form $n = p^mq$, $p^mq^2$, $pqr$ or $p^2qr$. First, let $n$ have only 2 prime factors. If $n = p^mq^l$ where $l \geq 3$ and $m \geq 3$, then $K_{3, 3}$ is a subgraph of $D_n$ and therefore a minor of $D_n$, so by Wagner's theorem $D_n$ is non-planar. In the case of $p^mq$, $D_n$ is a star, so it is planar, and in the case of $p^mq^2$ the graph can be drawn without any crossing edges. Next, consider $n$ with 3 prime factors. If $n = pqr$ or $n = p^2qr$, the graph is readily drawn planar. If $n = p^mqr$ where $m \geq 3$, the subgraph on $p$, $p^2$, $p^3$, $q$, $r$ and $qr$ forms $K_{3, 3}$ once the edge between $q$ and $r$ is deleted, so by Wagner's theorem the graph is non-planar since $K_{3, 3}$ is a minor. If $n = p^mq^lr$, where $m \geq 2$ and $l \geq 2$, the set of vertices $q$, $q^2$, $p$, $p^2$, $r$, $pr$ and $qr$ contains a subdivision of $K_5$, so by Kuratowski's theorem the graph is non-planar. Hence the only planar $D_n$ with 3 primes in $n$ are given by $pqr$ and $p^2qr$. Lastly, consider the case where $n$ has 4 or more primes in its prime factorization, say with distinct primes $p, q, r, s$. Then the vertex set $p$, $q$, $r$, $s$, $pq$ and $rs$ yields $K_5$ after contracting the edge between $pq$ and $rs$ into a single vertex. Therefore $K_5$ is a minor of $D_n$ in this case, and by Wagner's theorem the graph is non-planar.\\
\item[xii.] $D_n$ is Eulerian iff the power of each prime in the prime factorization of $n$ is even.
For, if a prime power $p^{\alpha}$ with $\alpha$ odd appears in the prime factorization of $n$, then the vertex $\frac{n}{p^\alpha}$ has odd degree (its neighbors are the $\alpha$ powers of $p$); otherwise every vertex has even degree.\\
\item[xiii.] If $n$ is square free, then the edge cardinality of $D_n$ is $\sum_{i=1}^{r-1}2^{r-i-1}\binom{r}{i}-2^{r-1}+1$, where $r$ is the number of distinct primes of $n$.
For, if we consider $n= p_{1}p_{2}\cdots p_{r}$, where the $p_{i}$ are distinct primes, then the degree of each vertex $p_{i}$ is $\sum_{j=1}^{r-1}\binom{r-1}{j}= 2^{r-1}-1$, contributing $r(2^{r-1}-1)$ to the degree sum of the vertices. Similarly, each vertex $p_{i}p_{j}$ is adjacent to $\sum_{l=1}^{r-2}\binom{r-2}{l} =2^{r-2}-1$
many vertices, contributing $\binom{r}{2}(2^{r-2}-1)$ to the degree sum. Proceeding in this way, we obtain that the sum of the vertex degrees is $\sum_{i=1}^{r-1}\binom{r}{i}(2^{r-i}-1) = \sum_{i=1}^{r-1}\binom{r}{i}2^{r-i}-2^{r}+2$. Then, as the sum of the vertex degrees is twice the edge cardinality, the result follows.\\
\item[xiv.] We have a lower bound for the independence number of $D_n$. Let
$n= p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}} \cdots p_{r}^{\alpha_{r}}$, where the $p_{i}$ are distinct primes. Then, if $I$ is the independence number of $D_n$,\\ $ I\geq \max_{1\leq i\leq r}\ \alpha_{i}\Big[1+\sum_{\emptyset\neq S\subsetneq \{1, \ldots, r\}\setminus\{i\}}\ \prod_{j\in S} \alpha_{j}\Big],$\\ where $S$ varies over all non-empty proper subsets of $\{1, \ldots, r\}\setminus\{i\}$.\\
\begin{proof} Consider an independent set containing $p_{i}$, one of the primes in the prime factorization of $n$. The largest possible independent set containing $p_{i}$ has $p_{i}$ as a factor of all its vertices. So it contains $p_i, p_{i}^{2}, \cdots, p_{i}^{\alpha_{i}}$, giving $\alpha_{i}$ vertices in the independent set. In order to maximize the cardinality of the set, we consider all factors of $n$ that have $p_{i}$ as a factor and miss at least one prime in the prime factorization of $n$. Thus\\ $p_{i}p_{1}, p_{i}p_{1}^{2}, \cdots, p_{i}p_{1}^{\alpha_{1}}, p_{i}p_{2}, p_{i}p_{2}^{2}, \cdots, p_{i}p_{2}^{\alpha_{2}}, \cdots, p_{i}p_{r}, p_{i}p_{r}^{2}, \cdots, p_{i}p_{r}^{\alpha_{r}}$ lie inside the independent set, giving $\alpha_{i}(\alpha_{1}+\alpha_{2}+\cdots+ \alpha_{i-1}+\alpha_{i+1}+\cdots+ \alpha_{r})=\alpha_{i}\sum_{j=1, j\neq i}^{r}\alpha_{j}$ further vertices. Similarly, we get another $\alpha_{i}\sum_{j<k;\; j, k\neq i}\alpha_{j}\alpha_{k}$ vertices from the factors of $n$ that contain $p_{i}$ and are products of three primes. Proceeding in this way, we get the necessary result. \end{proof}
\item [xv.] Let $n$ be square free. Then a lower bound for the independence number of $D_n$ is $ 2^{r-1}-1$, where $r$ is the number of prime factors of $n$: if
$n= p_{1}p_{2} \cdots p_{r}$, where the $p_{i}$ are distinct primes, and $I$ is the independence number of $D_n$, then $I \geq 2^{r-1}-1$.
\begin{proof} Fix a prime $p_{i}$ dividing $n$. All proper divisors of $n$ that have $p_{i}$ as a factor are pairwise non-coprime, so they form an independent set of $D_n$, and the cardinality of that set is $2^{r-1}-1$. Hence the result follows. \end{proof}
\end{enumerate}
\end{remark}
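Several of the claims in the remark can be verified computationally on concrete values of $n$ (our sketch, not part of the paper): the edge-count formula of item xiii for the square-free values $n = 30$ ($r = 3$) and $n = 210$ ($r = 4$), and the clique number and leading clique-polynomial coefficient of item iv for $n = 360 = 2^3\cdot 3^2\cdot 5$.

```python
# Sketch (not from the paper): check properties of the poset graph D_n,
# whose vertices are the divisors of n missing at least one prime factor
# and whose edges join coprime divisors.
from itertools import combinations
from math import gcd

def D_graph(n):
    primes = {p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))}
    divs = [d for d in range(2, n) if n % d == 0]
    verts = [d for d in divs if any(d % p for p in primes)]
    edges = {frozenset((a, b)) for a, b in combinations(verts, 2)
             if gcd(a, b) == 1}
    return verts, edges

def clique_number(verts, edges):
    # grow k while a clique of size k+1 exists (cliques are monotone)
    k = 0
    while any(all(frozenset(p) in edges for p in combinations(sub, 2))
              for sub in combinations(verts, k + 1)):
        k += 1
    return k

# item xiii: for square-free n with r primes, |E| = (3^r - 2^(r+1) + 1)/2
verts, edges = D_graph(30)
assert len(edges) == (3**3 - 2**4 + 1) // 2 == 6
verts, edges = D_graph(210)
assert len(edges) == (3**4 - 2**5 + 1) // 2 == 25

# item iv: for n = 360 = 2^3 * 3^2 * 5, the clique number is r = 3 and
# the number of maximum cliques is alpha_1 * alpha_2 * alpha_3 = 6
verts, edges = D_graph(360)
assert clique_number(verts, edges) == 3
n_max_cliques = sum(all(frozenset(p) in edges for p in combinations(sub, 2))
                    for sub in combinations(verts, 3))
assert n_max_cliques == 3 * 2 * 1
```

The edge-count identity used in the assertions is the closed form of the formula in item xiii: summing $\sum_{i=1}^{r-1}\binom{r}{i}(2^{r-i}-1)$ and halving gives $(3^r - 2^{r+1} + 1)/2$.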
\section{Acknowledgment} The authors thank Dr. Lisa DeMeyer for introducing us to this topic through an excellent presentation, which motivated us to work in this area.
\end{document} |
\begin{document}
\title{A dissipative time reversal technique for photo-acoustic tomography in a cavity} \author{Linh V. Nguyen and Leonid Kunyansky}
\begin{abstract} We consider the inverse source problem arising in thermo- and photoacoustic tomography. It consists in reconstructing the initial pressure from the boundary measurements of the acoustic wave. Our goal is to extend versatile time reversal techniques to the case of a perfectly reflecting boundary of the domain. Standard time reversal works only if the solution of the direct problem decays in time, which does not happen in the setup we consider. We thus propose a novel time reversal technique with a non-standard boundary condition. The error induced by this technique satisfies the wave equation with a dissipative boundary condition and, therefore, decays in time. For larger measurement times, this method yields a close approximation; for smaller times, the first approximation can be iteratively refined, resulting in a convergent Neumann series for the approximation. \end{abstract}
\maketitle
\section{Introduction\label{S:intro}}
We consider the inverse source problem arising in thermo- and photoacoustic tomography (TAT and PAT) \cite{KrugerPAT,KrugerTAT,Oraev94,XW06}. It consists in reconstructing the initial pressure in the acoustic wave from the values of the time-dependent pressure measured on a surface completely or partially surrounding the object of interest \cite{KKun}. During the last decade, significant results were obtained in solving this problem under the assumption that the wave propagates in free space (see, for example, \cite{FPR,Kun-expl,Kun-ser,Ng, US,US-Num,AK,FHR,Nat12,XW05,Pala,PS02,Ha09} and the reviews \cite{KKun,KKun1,Sch} for additional references). Applicability of the free space approximation depends on the type of the device(s) used to conduct the measurements: it is valid if reflection of waves from the detectors can be neglected. There are, however, a number of situations where this simple model is not applicable. For example, when the object is surrounded by glass plates optically scanned to measure the pressure, the waves experience multiple reflections, as they would in a resonant cavity \cite{Cox-Cavity,cox2008photo,Kunyansky-Cavity}. As a result, the assumption of fast decay of the acoustic energy within the object, indispensable in the analysis of classical TAT and PAT, can no longer be made.
Thus, a novel approach is needed to solve the inverse source problem posed within a resonant cavity. The latter problem has attracted the attention of analysts only recently. A particular case of a rectangular resonant cavity was considered in \cite{Cox-Cavity,cox2008photo,Kunyansky-Cavity}, and several solutions that exploit the symmetries of such a geometry were proposed. In \cite{Holman-Kunyansky,Stefanov-Yang,Acosta}, more general acquisition geometries were considered; one of the main questions investigated in these papers is the applicability of various modifications of the time reversal algorithm to the problem under consideration.
Time reversal was successfully used by multiple authors, both theoretically and numerically, to solve the inverse source problem in the free space setting \cite{FPR,HKN,US,HTRE,US-Num,US-Brain,Homan}. In the latter case, it consists in solving the wave equation backwards in time, within the domain $\Omega$ surrounded by the detectors, using the measured data to pose the Dirichlet boundary condition. In the simplest case of constant speed of sound $c(x)\equiv c_{0},$ the time reversal is initialized with zero conditions inside the domain (at time $t=T$ for sufficiently large $T$). Such an approach works because the solution of the direct problem in $\mathbb{R}^{3}$ with $c(x)\equiv c_{0}$ vanishes within the domain in a finite time $T_{0}(\Omega)$, due to the Huygens principle (and so time $T$ is chosen to be greater than or equal to $T_{0}(\Omega)$). In 2D and/or when $c(x)$ is not constant, more sophisticated methods are used to initialize the time reversal \cite{HKN,US,US-Num}; however, all these techniques require a time decay of the solution of the direct problem.
The inverse source problem in a cavity with \emph{partially} reflecting walls was solved in \cite{Acosta} using a time reversal approach. In this case, due to the outflow of energy through the boundary of the domain, the solution of the direct problem still exhibits time decay, and, with some modifications, time reversal can still be applied. A more difficult case is that of a \emph{perfectly} reflecting boundary (modeled by the zero Neumann condition on the boundary). In this case, the direct problem is energy-preserving, and acoustic oscillations in the cavity continue forever (here we neglect the attenuation of waves in the tissues). This immediately makes classical time reversal inapplicable: the correct pressure and its time derivative inside the domain are unknown at any time, and the error made when replacing these functions by a crude guess is not small. Under the standard Dirichlet boundary condition, this error does not decrease as $t$ approaches $0$.
Several attempts were made to modify the Dirichlet boundary condition so as to control this error. In \cite{Holman-Kunyansky}, the boundary condition is multiplied by a smooth cutoff function $\eta(t/T),$ where $\eta(\tau)$ vanishes at $\tau=1$ together with at least several of its derivatives. It was shown that, under certain conditions on the eigenvalues of the Dirichlet and Neumann Laplacians on $\Omega,$ the approximate solutions corresponding to measurement time $T$ converge to the correct one in the limit $T\rightarrow\infty.$ However, for an arbitrary domain the aforementioned conditions on the eigenvalues are difficult to verify. In \cite{Stefanov-Yang}, the Dirichlet boundary condition was modified in such a way that the computed function is the result of averaging a family of time-reversed solutions with different times $T$ (\textquotedblleft averaged time reversal\textquotedblright). The authors showed that, in the case of full boundary measurements, the operator that describes such a procedure is a contraction. Therefore, the result can be used either as a crude approximation, or a better solution can be computed by a converging Neumann series (involving multiple solutions of the direct problem and averaged time reversals). For the case when the data are available only on a part of the boundary (i.e., for the \textquotedblleft partial data problem\textquotedblright), the theoretical analysis was not completed; however, numerical experiments yielded successful reconstructions in this case, too. In \cite{Acosta}, the problem with perfectly reflecting walls was considered mostly from a theoretical standpoint, and its unique solvability was proven under sufficiently general conditions.
In the present paper we propose a new time reversal technique in which the Dirichlet boundary condition is replaced by a non-standard boundary condition involving a linear combination of the normal and time derivatives of the solution (see equation (\ref{E:Rev-p})). While this new condition is satisfied by the solution of the direct problem, the error resulting from the time reversal satisfies the wave equation with a dissipative boundary condition and, thus, decreases as time $t$ approaches $0$. We call this technique \textquotedblleft dissipative time reversal\textquotedblright. This method can be used either directly, to obtain good approximations to the initial pressure (converging exponentially as $T\to\infty$), or as a part of a Neumann series-based iterative algorithm.\footnote{Our idea of using the Neumann series-based iterative method is inspired by the influential paper \cite{US}.} In addition to being efficient numerically, our technique is relatively easy to analyze, allowing us to deliver an intuitive proof of convergence in both the full and partial data cases.
\section{Formulation of the problem and preliminaries}
\subsection{The direct problem}
Let $\Omega$ be a domain in $\mathbb{R}^{d}$ with smooth boundary $\partial\Omega$. In the realistic setup of TAT and PAT, the dimension $d$ equals $3$. However, for the sake of generality, we will consider any $d\geq2$. We will denote by $\nu=\nu(x)$ the outward normal vector to $\partial\Omega$ at $x$. The speed of sound $c(x)$ is a positive, infinitely smooth function on $\mathbb{R}^{d}$.
For some positive time $T,$ let us consider the following mixed boundary problem for the wave equation: \begin{equation} \left\{ \begin{array}{l} u_{tt}(x,t)-c^{2}(x)\,\Delta u(x,t)=0,\quad(x,t)\in\Omega\times(0,T], \\ [4pt] \frac{\partial}{\partial\nu}u(x,t)=0,\quad(x,t)\in\partial\Omega \times(0,T], \\[4pt] u(x,0)=u_{0}(x),~u_{t}(x,0)=u_{1}(x),\quad x\in\Omega, \end{array} \right. \label{E:TATg} \end{equation} \newline where $u_{0}$ and $u_{1}$ are arbitrary functions with $u_{0}\in H_{0}^{1}(\Omega),$ $u_{1}\in L^{2}(\Omega).$ When $u_{1}\equiv0$ and $ u_{0}(x)$ coincides with the initial pressure $f(x)$ in the tissues, the solution $u(x,t)$ of the above problem represents a model of TAT/PAT within a resonant cavity formed by the reflecting walls (\cite {Cox-Cavity,cox2008photo,Kunyansky-Cavity}). In this case, one attempts to reconstruct initial pressure $f(x)$ from the measurements of $u(x,t)$ made on some subset of boundary $\partial\Omega.$
For convenience, we consider here a slightly more general problem, where both $u_{0}$ and $u_{1}$ are allowed to be non-zero. Let us assume that the values of pressure $u(x,t)$ are measured on a part $\Gamma$ of the boundary $ \partial\Omega;$ we will consider both the case of full measurements when $ \Gamma$ coincides with $\partial\Omega,$ and the case of partial data when $ \Gamma$ is an open proper subset of $\partial\Omega$. We will denote the measured data by $g(z,t):$ \begin{equation}
g=u|_{\Gamma\times\lbrack0,T]}. \label{E:def-g} \end{equation}
Our goal is to solve the following problem:
\begin{problem}
\label{P:TATg} Find the pair $(u_{0},u_{1})$ from $g=u|_{\Gamma\times \lbrack0,T]}$. In other words, invert the map \begin{equation} \Lambda_{T}:(u_{0},u_{1})\rightarrow g. \label{E:map} \end{equation} \end{problem}
\subsection{Our approach to the inverse problem}
Let us fix a function $\lambda\in C^{\infty}(\partial\Omega)$ such that $\lambda>0$ on $\Gamma$ and $\lambda=0$ on $\partial\Omega\setminus\overline{\Gamma}.$ In addition, we extend the data $g(z,t)$ by zero to $\left(\partial\Omega\setminus\overline{\Gamma}\right)\times\lbrack0,T]$. The main idea of this paper is to find an approximation to $u_{0}$ and $u_{1}$ by solving the following time reversal problem: \begin{equation} \left\{ \begin{array}{l} v_{tt}(x,t)-c^{2}(x)\,\Delta v(x,t)=0,\quad(x,t)\in\Omega\times\lbrack 0,T], \\[4pt] \frac{\partial}{\partial\nu}v(x,t)-\lambda(x)\,v_{t}(x,t)=-\lambda (x)\,g_{t}(x,t),\quad(x,t)\in\partial\Omega\times\lbrack0,T], \\[4pt] v(x,T)=0,~v_{t}(x,T)=0,\quad x\in\Omega. \end{array} \right. \label{E:Rev-p} \end{equation}
As we show below, for sufficiently large values of $T,$ the functions $v(x,0) $ and $v_{t}(x,0)$ are good approximations to $u_{0}$ and $u_{1},$ correspondingly.
In particular, in the context of the inverse problem of TAT/PAT with $u_{1}=0$ and $u_{0}=f,$ the function $v(x,0)$ is a good approximation of $f(x).$ One can either use this approximation directly, or realize an iterative refinement scheme converging to $f$ and corresponding to the computation of a certain Neumann series.
The well-posedness of problem (\ref{E:Rev-p}), validity of a so-obtained approximation and convergence of the Neumann series are the subject of the following sections. The starting point, however, is the analysis of the direct problem (\ref{E:TATg}).
\subsection{Properties of the direct problem}
Let us define the space of pairs of functions $\mathbb{H}$ as follows \begin{equation*}
\mathbb{H}:=\{(u_{0},u_{1})|u_{0}\in H^{1}(\Omega),u_{1}\in L^{2}(\Omega)\}, \end{equation*} where $H^{1}(\Omega)$ is the standard Sobolev space, with the norm defined, for an arbitrary function $h(x)$ by \begin{equation*}
\|h\|_{H^{1}(\Omega)}^{2}\equiv\int\limits_{\Omega}\left[ h^{2}(x)+\left|
\nabla h(x)\right| ^{2}\right] dx=\Vert h\Vert_{L^{2}(\Omega)}^{2}+\Vert\nabla h\Vert_{L^{2}(\Omega)}^{2}. \end{equation*} Then, $\mathbb{H}$ is a Banach space under the norm $\Vert.\Vert$ defined by \begin{equation*} \Vert(u_{0},u_{1})\Vert^{2}=\Vert u_{0}\Vert_{H^{1}(\Omega)}^{2}+\Vert c^{-1}u_{1}\Vert_{L^{2}(\Omega)}^{2}. \end{equation*} For $(u_{0},u_{1})\in\mathbb{H}$, we define \begin{equation*} \mathbb{E}(u_{0},u_{1})=\Vert\nabla u_{0}\Vert_{L^{2}(\Omega)}^{2}+\Vert c^{-1}u_{1}\Vert_{L^{2}(\Omega)}^{2}, \end{equation*}
and the semi-norm $|.|$: \begin{equation*}
|(u_{0},u_{1})|=[\mathbb{E}(u_{0},u_{1})]^{1/2}. \end{equation*} It is well known that for any $(u_{0},u_{1})\in\mathbb{H}$ equation~(\ref {E:TATg}) has a unique solution $u$ lying within the following class $ \mathcal{K(}\Omega,T\mathcal{)}$: \begin{equation*} u\in\mathcal{K}(\Omega,T)\equiv\mathcal{C}([0,T];H^{1}(\Omega))~\cap ~ \mathcal{C}^{1}([0,T];L^{2}(\Omega)). \end{equation*} Therefore, $\Lambda_{T}$ (defined by (\ref{E:map})) is a well-defined map from $\mathbb{H}$ to $\mathcal{C}([0,T];H^{1/2}(\partial \Omega))$.
\section{Well-posedness of a mixed boundary problem}
\label{S:Well} Let $\lambda\in C^{\infty}(\partial\Omega)$. In this section, we study the following mixed boundary value problem: \begin{equation} \left\{ \begin{array}{l} w_{tt}(x,t)-c^{2}(x)\,\Delta w(x,t)=0,\quad(x,t)\in\Omega\times(0,T), \\ [4pt] \frac{\partial}{\partial\nu}w(x,t)+\lambda(x)\,w_{t}(x,t)=h_{t}(x,t),\quad (x,t)\in\partial\Omega\times(0,T), \\[4pt] w(x,0)=w_{0}(x),~w_{t}(x,0)=w_{1}(x),\quad x\in\Omega. \end{array} \right. \label{E:Basic} \end{equation}
Since this is not a standard problem, let us first define its weak solution. We will follow the spirit of \cite{Bardos}, where the weak (distributional) solution is defined for the case $h=0$. For our purposes, it is sufficient to assume $(w_{0},w_{1})\in\mathbb{H}$ and $h\in\mathcal{C} ([0,T];H^{1/2}(\partial\Omega))$.
\begin{defi} \label{D:Sol} Given $(w_{0},w_{1})\in\mathbb{H}$ and $h\in\mathcal{C} ([0,T];H^{1/2}(\partial\Omega))$, we say that $w\in\mathcal{D} ^{\prime}(\Omega\times(0,T))$ is a solution of (\ref{E:Basic}) if the following equation holds for any test function $\psi\in\mathcal{D} (\Omega\times(0,T))$: \begin{align} \left\langle w,\psi\right\rangle & =- \int_{\Omega}c^{-2}(x)\,\big[w_{0}(x)\, \Psi_{t}(x,0)-w_{1}(x)\,\Psi(x,0)\big]\,dx+\int_{\partial\Omega}\lambda(x) \,w_{0}(x)\,\Psi(x,0)\,dx \notag \\ & -\iint_{\partial\Omega\times\lbrack0,T]}h(x,t)\Psi_{t}(x,t)\,dx\,dt-\int_{ \partial\Omega}h(x,0)\,\Psi(x,0)\,dx. \label{E:weaksol} \end{align} Here, $\Psi$ is the solution of the dual problem \begin{equation*} \left\{ \begin{array}{l} c^{-2}(x)\,\Psi_{tt}(x,t)-\Delta\Psi(x,t)=\psi(x,t),\quad(x,t)\in\Omega \times(0,T), \\[4pt] \partial_{\nu}\Psi(x,t)-\lambda(x)\,\Psi_{t}(x,t)=0,\quad(x,t)\in \partial\Omega\times(0,T), \\[4pt] \Psi(x,T)=0,~\Psi_{t}(x,T)=0,\quad x\in\Omega. \end{array} \right. \end{equation*} \end{defi}
The uniqueness of the solution $w$ is obvious, since $\left\langle w,\psi\right\rangle$ is defined by the right hand side of (\ref{E:weaksol}) for all test functions $\psi.$ The existence and regularity of this solution are more complicated matters. We present here only a partial result that is needed in this paper:
\begin{prop} \label{P:Well} Assume that $h=0$ and $(w_{0},w_{1})\in\mathbb{H}$. Then, problem (\ref{E:Basic}) has a unique solution \begin{equation*} w\in\mathcal{K}(\Omega,T). \end{equation*}
\end{prop}
Let us recall a result by \cite{ikawa1970mixed}. Assume that $h =0$ and $ (w_{0},w_{1})\in H^{2}(\Omega)\times H^{1}(\Omega)$ satisfies the compatibility condition \begin{equation} \frac{\partial}{\partial\nu}w_{0}+\lambda(x)\,w_{1}=0,\quad\mbox{ on } \partial \Omega. \label{E:comp1} \end{equation} Then, problem (\ref{E:Basic}) has a unique solution \begin{equation*} w\in\mathcal{C}([0,T]; H^{2}(\Omega))\cap\mathcal{C}^{1}([0,T];H^{1}( \Omega))\cap\mathcal{C}^{2}([0,T];L^{2}(\Omega)). \end{equation*} Proposition~\ref{P:Well} can be proved by a simple approximation argument which we present here for the sake of completeness.
\begin{proof}[\textbf{Proof of Proposition~\protect\ref{P:Well}}] It suffices to prove the existence part, since the uniqueness is trivial as mentioned above. Consider $(\varphi_{0}^{n},\varphi_{1}^{n}) \in H^{2}(\Omega) \times H^{1}(\Omega)$ such that:
\begin{itemize} \item[1)] $(\varphi_{0}^{n},\varphi_{1}^{n})\rightarrow(w_{0},w_{1})$ in $ H^{1}(\Omega)\times L^{2}(\Omega)$,
\item[2)] $\partial_{\nu}\varphi^{n}_{0}|_{\partial \Omega}=0$ and $
\varphi_{1}^{n}|_{\partial\Omega}=0$. \end{itemize}
Such a sequence $\{(\varphi_{0}^{n},\varphi_{1}^{n})\}$ always exists. For instance, we can choose $\varphi_{0}^{n}$ to be a linear combination of eigenvectors of the Neumann Laplacian. Meanwhile, $\varphi_{1}^{n}$ can be chosen as a linear combination of eigenvectors of the Dirichlet Laplacian.
Consider problem (\ref{E:Basic}) with the initial condition $ (\varphi_{0}^{n},\varphi_{1}^{n})$ (instead of $(w_{0},w_{1})$). Since $ (\varphi_{0}^{n},\varphi_{1}^{n})$ satisfies the compatibility condition ( \ref{E:comp1}), problem (\ref{E:Basic}) with such initial condition has a unique solution \begin{equation*} w^{n}\in\mathcal{C}([0,T]; H^{2}(\Omega))\cap\mathcal{C}^{1}([0,T]; H^{1}(\Omega))\cap\mathcal{C}^{2}([0,T];L^{2}(\Omega)). \end{equation*} Moreover, by simple integration by parts, we obtain for any $t_{0}\in \lbrack0,T]$: \begin{equation*} \mathbb{E}(w^{n}-w^{m},t_{0})+\int_{\partial\Omega\times\lbrack0,t_{0}]}
\lambda(x)\,|w_{t}^{n}-w^{m}_{t}|^{2}\,dx\,dt=\mathbb{E}(w^{n}-w^{m},0). \end{equation*} Therefore, \begin{equation*} \mathbb{E}(w^{n}-w^{m},t_{0})\leq\mathbb{E}(w^{n}-w^{m},0). \end{equation*} We notice that \begin{equation*} \mathbb{E}(w^{n}-w^{m},0)=\Vert\nabla(\varphi_{0}^{n}-\varphi_{0}^{m}) \Vert_{L^{2}(\Omega)}^{2}+\Vert c^{-1}(\varphi_{1}^{n}-\varphi_{1}^{m})\Vert_{L^{2}(\Omega)}^{2} \rightarrow0. \end{equation*} Since \begin{equation*} \mathbb{E}(w^{n}-w^{m},t)=\Vert\nabla(w^{n}(t,.)-w^{m}(t,.))\Vert^{2}+\Vert c^{-1}(w_{t}^{n}(t,.)-w_{t}^{m}(t,.))\Vert^{2}, \end{equation*} we conclude that $\{w^{n}\}$ is a Cauchy sequence in $\mathcal{C}([0,T];H^{1}(\Omega))$ and $\{w_{t}^{n}\}$ is a Cauchy sequence in $\mathcal{C}([0,T];L^{2}(\Omega))$. Therefore, there exists a function $w\in\mathcal{K}(\Omega,T)$ such that \begin{equation*} \{w^{n}\}\rightarrow w\mbox{ in }\mathcal{C}([0,T];H^{1}(\Omega)),\quad \mbox{ and }\{w_{t}^{n}\}\rightarrow w_{t}\mbox{ in }\mathcal{C}([0,T];L^{2}(\Omega)). \end{equation*} It is easy to verify from Definition~\ref{D:Sol} that $w$ is a solution of problem (\ref{E:Basic}). This finishes the proof of existence.
\end{proof}
Let us apply the above result to the time reversal problem (\ref{E:Rev-p}).
\begin{prop} Problem (\ref{E:Rev-p}) has a unique solution $v\in\mathcal{K}(\Omega,T).$ \end{prop}
\begin{proof} We first notice that problem (\ref{E:Rev-p}) is the time reversed version of problem (\ref{E:Basic}), considered in Section~\ref{S:Well}. The uniqueness of the solution follows trivially from Definition~\ref{D:Sol}. We now prove the existence. Let us consider the following problem: \begin{equation} \left\{ \begin{array}{l} U_{tt}(x,t)-c^{2}(x)\,\Delta U(x,t)=0,\quad(x,t)\in\Omega\times\lbrack 0,T], \\[4pt] \frac{\partial}{\partial\nu}U(x,t)-\lambda(x)\,U_{t}(x,t)=0,\quad (x,t)\in\partial\Omega\times\lbrack0,T], \\[4pt] U(x,T)=u(x,T),~U_{t}(x,T)=u_{t}(x,T),\quad x\in\Omega, \end{array} \right. \label{E:err} \end{equation} where $u(x,t)$ is the solution of the direct problem (\ref{E:TATg}). Since $(u(.,T),u_{t}(.,T))\in\mathbb{H}$, we obtain from Proposition~\ref{P:Well} that the above problem has a (unique) solution $U\in\mathcal{K}(\Omega,T).$
We notice that $u$ also satisfies the boundary condition in problem (\ref {E:Rev-p}).
Let $v=u-U$. It is easy to verify that $v$ is a solution of (\ref{E:Rev-p}). Moreover, from the regularity of $u$ and $U$, we conclude that $v\in \mathcal{K}(\Omega,T).$ \end{proof}
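For the reader's convenience, the boundary condition can be checked explicitly. Since $\frac{\partial}{\partial\nu}u=0$ on $\partial\Omega$, $u=g$ on $\Gamma$, and $\lambda=0$ on $\partial\Omega\setminus\overline{\Gamma}$ (with $g$ extended by zero there), the difference $v=u-U$ satisfies \begin{equation*} \frac{\partial}{\partial\nu}v-\lambda(x)\,v_{t}=\left(\frac{\partial}{\partial\nu}u-\lambda(x)\,u_{t}\right)-\left(\frac{\partial}{\partial\nu}U-\lambda(x)\,U_{t}\right)=-\lambda(x)\,g_{t}\quad\mbox{on }\partial\Omega\times\lbrack0,T], \end{equation*} while $v(x,T)=u(x,T)-U(x,T)=0$ and $v_{t}(x,T)=0$, which are precisely the conditions in (\ref{E:Rev-p}).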
\section{Solution of the inverse problem}
\subsection{Contraction properties of the time reversal operator}
Similarly to \cite{Stefanov-Yang,Acosta}, our analysis of the inversion of $\Lambda_{T}$ relies on known results on the stabilization of waves \cite{Bardos}. For the sake of simplicity, we assume that all the geodesics of $(\mathbb{R}^{d},c^{-2}\,dx^{2})$ have finite contact order with the boundary $\partial\Omega$. Under this condition, the generalized bi-characteristics of the wave operator $\Box =\partial _{tt}-c^{2}(x)\,\Delta $ on $\overline{\Omega }$ are uniquely defined (see, e.g., \cite{Bardos}). Their projections onto the physical space (i.e., $\overline{\Omega }$) are called generalized rays.
Throughout the paper, we will assume that the following condition is satisfied:
\begin{condition} \label{A:Gcc} There is a finite value $T(\Omega,\Gamma)>0$ such that every generalized ray of length \footnote{ The length here is understood in the metric $c^{-2} \, dx^2$. Condition~\ref{A:Gcc} means that all the singularities of the solution of the wave equation (\ref{E:TATg}) that start propagating at time $0$, traveling along generalized bi-characteristics, reach the set $\Gamma$ within the time interval $[0,T(\Omega,\Gamma)]$.} $T(\Omega,\Gamma)$ intersects $\Gamma$ at one (or more) non-diffractive point(s). \end{condition}
This is the \textbf{geometric control condition} (GCC), well known in control theory. We refer the reader to \cite{Bardos} for a detailed discussion of GCC. This condition was shown in \cite{Stefanov-Yang,Acosta} to be sufficient for the stability of the inversion of $\Lambda$. Our results are also based on the assumption that GCC is satisfied.
Our work is based on the following fundamental result due to \cite{Bardos}:
\begin{prop} \label{P:contract} Consider problem (\ref{E:Basic}) with $h=0$. Assume that Condition~\ref{A:Gcc} holds. Then, there is $\delta(T)<1$ such that \begin{equation*}
|(w(.,T),w_{t}(.,T))|\leq\delta(T)\,|(w_{0},w_{1})|. \end{equation*} \end{prop}
\begin{remark} \label{R:exp} By applying Proposition \ref{P:contract} $k$ times, one obtains \begin{equation*}
|(w(.,kT),w_{t}(.,kT))|\leq\delta^{k}(T)\,|(w_{0},w_{1})|. \end{equation*}
It follows that $|(w(.,T),w_{t}(.,T))|\rightarrow0$ exponentially as $T\rightarrow\infty$. That is, there are constants $C_{1}(\Omega)>0$ and $a>0$ such that \begin{equation*}
|(w(.,T),w_{t}(.,T))|\leq C_{1}(\Omega)e^{-a T}\,|(w_{0},w_{1})|. \end{equation*} \end{remark}
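The constants in the last inequality can be made explicit. Since the energy of the solution of (\ref{E:Basic}) with $h=0$ is non-increasing in time, for $T^{\prime}\in\lbrack kT,(k+1)T)$ one has \begin{equation*} |(w(.,T^{\prime}),w_{t}(.,T^{\prime}))|\leq\delta^{k}(T)\,|(w_{0},w_{1})|\leq\frac{1}{\delta(T)}\,e^{-aT^{\prime}}\,|(w_{0},w_{1})|,\qquad a=-\frac{\ln\delta(T)}{T}, \end{equation*} so that one may take $C_{1}(\Omega)=1/\delta(T)$.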
Given boundary data $g(x,t)$, we define the time reversal operator $A$ through the solution $v(x,t)$ of the problem (\ref{E:Rev-p}): \begin{equation*} Ag=(v(.,0),v_{t}(.,0)). \end{equation*}
\begin{lemma} \label{T:Main-p} Assume that Condition~\ref{A:Gcc} holds and $T\geq T(\Omega,\Gamma)$. Let $I$ denote the identity map. Then there is $\delta(T)<1$ such that \begin{equation*}
|(I-A\Lambda_{T})(u_{0},u_{1})|\leq\delta(T)|(u_{0},u_{1})|, \quad \mbox{ for all } (u_0,u_1) \in \mathbb{H}. \end{equation*}
In other words, $I-A\Lambda_{T}$ is a contraction on $\mathbb{H}$ under the semi-norm $|\cdot|$. \end{lemma}
\begin{proof} Let $(u_{0},u_{1})\in \mathbb{H}$ and $u$ be the solution of (\ref{E:TATg}). We observe that \begin{equation*} (I-A\Lambda _{T})(u_{0},u_{1})=(U(.,0),U_{t}(.,0)), \end{equation*} where $U$ is the solution of the problem (\ref{E:err}).
Due to the well-known conservation of energy in the solution of the wave equation with Neumann boundary condition, \begin{equation*} \mathbb{E}(u_{0},u_{1})=\mathbb{E}(u(.,T),u_{t}(.,T)), \end{equation*} or \begin{equation*}
|(u_{0},u_{1})|=|(u(.,T),u_{t}(.,T))|. \end{equation*} Due to Proposition~\ref{P:contract}, \begin{equation*}
|(U(.,0),U_{t}(.,0))|\leq\delta(T)\,|(u(.,T),u_{t}(.,T))|, \end{equation*} and the proof follows. \end{proof}
\begin{remark} \label{R:Exp-p} Using Remark~\ref{R:exp} instead of Proposition~\ref{P:contract} in the above proof, we obtain that there exist constants $C_{1}(\Omega)>0$ and $a>0$ such that \begin{equation*}
|(I-A\Lambda_{T})(u_{0},u_{1})|\leq C_{1}(\Omega)e^{-aT}|(u_{0},u_{1})|. \end{equation*} That is, the induced seminorm of $I-A\Lambda_{T}$ decreases exponentially as $T\rightarrow\infty$. \end{remark}
Lemma \ref{T:Main-p} describes the contraction property of our time reversal operator in the semi-norm $|.|$. By itself, this result is not sufficient to prove the convergence under the norm $\|.\|.$ However, by restricting our attention to appropriate subspaces of $\mathbb{H}$, such a convergence can be attained. Indeed, let us consider the subspace $\mathbb{H}_{0}$ of $\mathbb{H}$ defined by \begin{equation*} \mathbb{H}_{0}\equiv\left\{ \mathbf{h}=(h_{0},h_{1})\in\mathbb{H}\ \left\vert \ {\int\limits_{\partial\Omega}h_{0}\,dx=0}\right. \right\} , \end{equation*} and the subspace $\mathbb{H}_{1}$ of $\mathbb{H}_{0}$ consisting of pairs with the second component equal to zero: \begin{equation*} \mathbb{H}_{1}\equiv\left\{ \mathbf{h}=(h_{0},0)\in\mathbb{H}_{0}\right\} . \end{equation*}
Let us introduce two projectors $\Pi_{0}$ and $\Pi_{1}$ mapping the elements of $\mathbb{H}$ into $\mathbb{H}_{0}$ and $\mathbb{H}_{1},$ respectively: \begin{align*} \Pi_{0}\mathbf{h} & \equiv\Pi_{0}(h_{0},h_{1})\equiv\left( h_{0}-\frac{1}{|\partial\Omega|}\int\limits_{\partial\Omega}h_{0}\,dx,\;h_{1}\right), \\ \Pi_{1}\mathbf{h} & \equiv\Pi_{1}(h_{0},h_{1})\equiv\left( h_{0}-\frac{1}{|\partial\Omega|}\int\limits_{\partial\Omega}h_{0}\,dx,\;0\right), \end{align*}
where $|\partial\Omega|$ is the surface area of $\partial\Omega$. These projectors do not increase the semi-norm $|.|$, that is, \begin{equation*}
|\Pi_{0}\mathbf{h}|\leq|\mathbf{h}|,\quad|\Pi_{1}\mathbf{h}|\leq|\mathbf{h}|. \end{equation*} Moreover, the subspaces $\mathbb{H}_{0}$ and $\mathbb{H}_{1}$ are invariant under the compositions $\Pi_{0}(I-A\Lambda_{T})$ and $\Pi_{1}(I-A\Lambda_{T}).$ In addition, \begin{align*} \Pi_{0}(I-A\Lambda_{T})\mathbf{h} & =(I-\Pi_{0}A\Lambda_{T})\mathbf{h},\qquad\forall\mathbf{h}\in\mathbb{H}_{0}, \\ \Pi_{1}(I-A\Lambda_{T})\mathbf{h} & =(I-\Pi_{1}A\Lambda_{T})\mathbf{h},\qquad\forall\mathbf{h}\in\mathbb{H}_{1}. \end{align*} Therefore, in accordance with Lemma \ref{T:Main-p}, the operators $(I-\Pi_{0}A\Lambda_{T})$ and $(I-\Pi_{1}A\Lambda_{T})$ are contractions in $\mathbb{H}_{0}$ and $\mathbb{H}_{1}$, respectively, under the seminorm $|.|$: \begin{align}
|(I-\Pi_{0}A\Lambda_{T})\mathbf{h}| & \leq|(I-A\Lambda_{T})\mathbf{h}|\leq\delta(T)|\mathbf{h}|,\qquad\forall\mathbf{h}\in\mathbb{H}_{0}, \label{E:seminorm0} \\
|(I-\Pi_{1}A\Lambda_{T})\mathbf{h}| & \leq|(I-A\Lambda_{T})\mathbf{h}|\leq\delta(T)|\mathbf{h}|,\qquad\forall\mathbf{h}\in\mathbb{H}_{1}. \label{E:seminorm1} \end{align} Now we can invert the operator $\Lambda_{T}$, restricted to $\mathbb{H}_{0}$, by constructing the converging Neumann series \begin{equation*} \sum_{k=0}^{\infty}(I-\Pi_{0}A\Lambda_{T})^{k}\Pi_{0}A \Lambda_{T}(u_{0},u_{1})=\sum_{k=0}^{\infty}(I-\Pi_{0}A\Lambda_{T})^{k} \Pi_{0}Ag. \end{equation*}
Below we show that this series converges not only under the semi-norm $|.|$ but also in the norm $\|.\|.$ In other words, we claim that the partial sums $\mathbf{u}^{(k)}$ of this series \begin{equation} \mathbf{u}^{(k)}=\sum_{j=0}^{k}(I-\Pi_{0}A\Lambda_{T})^{j}\Pi_{0}Ag \label{E:Neumann-partial} \end{equation}
converge to $\mathbf{u}=(u_{0},u_{1})$ in the norm $\|.\|.$ These partial sums can be easily computed by the following iterative algorithm: \begin{align} \mathbf{u}^{(0)} & =0, \notag \\ \mathbf{u}^{(k+1)} & =(I-\Pi_{0}A\Lambda_{T})\mathbf{u}^{(k)}+\Pi_{0}Ag. \label{E:iterations-general} \end{align}
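To illustrate the mechanics of iteration (\ref{E:iterations-general}), the following minimal Python sketch runs the same fixed-point recursion with a generic linear contraction $B$ standing in for the composition $\Pi_{0}A\Lambda_{T}$; the matrix $B$ and the vector \texttt{u\_true} are hypothetical stand-ins for illustration only, not the wave-equation operators of this paper.

```python
import numpy as np

# Toy sketch of the Neumann-series iteration u^(k+1) = (I - B) u^(k) + B u_true,
# with a generic linear map B standing in for Pi_0 * A * Lambda_T.
# B and u_true are hypothetical stand-ins chosen so that ||I - B||_2 < 1,
# mimicking the contraction property established in the lemma above.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
S = M @ M.T
S /= np.linalg.norm(S, 2)          # symmetric, spectrum in [0, 1]
B = np.eye(n) - 0.5 * S            # then ||I - B||_2 = 0.5 < 1

u_true = rng.standard_normal(n)    # plays the role of (u_0, u_1)
b = B @ u_true                     # plays the role of Pi_0 * A * g

u = np.zeros(n)                    # u^(0) = 0
for k in range(60):
    u = u - B @ u + b              # u^(k+1) = (I - B) u^(k) + B u_true

# By induction, u - u^(k) = (I - B)^k u_true, so the error decays like 0.5^k.
```

The same two-line recursion is all that (\ref{E:iterations-general}) requires in practice; in the actual algorithm each application of $B$ amounts to one forward wave solve ($\Lambda_{T}$), one dissipative time reversal ($A$), and a projection ($\Pi_{0}$).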
The following theorem gives us a solution to Problem~\ref{P:TATg} by a Neumann series:
\begin{theorem}
\label{T:Neumann-ser} Suppose that Condition~\ref{A:Gcc} holds and the observation time $T$ satisfies $T\geq T(\Omega,\Gamma).$ Then, the iterations $\mathbf{u}^{(k)}$ defined by (\ref{E:Neumann-partial}) (or, equivalently, by (\ref{E:iterations-general})) converge to $\mathbf{u}$ in the norm $\|.\|$, as follows: \begin{equation*} \Vert\mathbf{u}-\mathbf{u}^{(k)}\Vert\leq C_{P}(\Omega)\,\delta(T)^{k}\Vert \mathbf{u}\Vert, \quad \mbox{ for all } \mathbf{u} \in \mathbb{H}_0, \end{equation*} with some constant $C_{P}(\Omega)>1$ specified below. \end{theorem}
To prove the above theorem, we will need the following generalization of the Poincar\'{e} inequality.
\begin{lemma} \label{L:Poincare} There is a constant $C_{P}(\Omega)>1$ such that for all $ h\in H^{1}(\Omega)$ satisfying $\int_{\partial\Omega}h\,dx=0$ the following inequality holds: \begin{equation*} \Vert h\Vert_{H^{1}(\Omega)}\leq C_{P}(\Omega)\Vert\nabla h\Vert_{L^{2}(\Omega)}. \label{E:Poincare} \end{equation*} \end{lemma}
\begin{proof} Assume that the above statement is not true. Then, there exists a sequence $\{h_{n}\}_{n=1}^{\infty}\subset H^{1}(\Omega)$ with $\int_{\partial\Omega}h_{n}\,dx=0$ such that \begin{equation*} \Vert h_{n}\Vert_{L^{2}(\Omega)}=1\quad\text{and}\quad\lim_{n\rightarrow \infty}\Vert\nabla h_{n}\Vert_{L^{2}(\Omega)}=0. \end{equation*} It follows that $\{h_{n}\}$ is bounded in $H^{1}(\Omega).$ Therefore, there is a subsequence $\{h_{n_{k}}\}_{k=1}^{\infty}$ converging weakly in $H^{1}(\Omega)$ to a function $u\in H^{1}(\Omega)$, such that \begin{align*} h_{n_{k}} & \rightarrow u\mbox{ strongly in }L^{2}(\Omega), \\ h_{n_{k}} & \rightarrow u\mbox{ weakly in }H^{1}(\Omega), \\ h_{n_{k}}|_{\partial\Omega} & \rightarrow u|_{\partial\Omega}\mbox{ weakly in }H^{1/2}(\partial\Omega). \end{align*} The limit $u$ has the following three properties: \begin{equation*} \Vert u\Vert_{L^{2}(\Omega)}=1,\quad\Vert\nabla u\Vert_{L^{2}(\Omega)}=0,\quad\int\limits_{\partial\Omega}u\,dx=0. \end{equation*} The last two equations yield $u\equiv0$, which contradicts the first property $\Vert u\Vert_{L^{2}(\Omega)}=1$, thus completing the proof. \end{proof}
\begin{corollary} \label{C:cor} There is a constant $C_{P}(\Omega)\,>1$ (given by the above Lemma) such that the following inequality holds: \begin{equation}
|\mathbf{h}|\leq\Vert\mathbf{h}\Vert\leq C_{P}(\Omega)|\mathbf{h}|, \quad \mbox{ for all } \mathbf{h} \in \mathbb{H}_0. \label{E:Corollary} \end{equation} \end{corollary}
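Indeed, the first inequality in (\ref{E:Corollary}) is immediate from the definitions, while for $\mathbf{h}=(h_{0},h_{1})\in\mathbb{H}_{0}$ Lemma~\ref{L:Poincare} gives \begin{equation*} \Vert\mathbf{h}\Vert^{2}=\Vert h_{0}\Vert_{H^{1}(\Omega)}^{2}+\Vert c^{-1}h_{1}\Vert_{L^{2}(\Omega)}^{2}\leq C_{P}^{2}(\Omega)\Vert\nabla h_{0}\Vert_{L^{2}(\Omega)}^{2}+\Vert c^{-1}h_{1}\Vert_{L^{2}(\Omega)}^{2}\leq C_{P}^{2}(\Omega)\,|\mathbf{h}|^{2}, \end{equation*} where the last step uses $C_{P}(\Omega)>1$.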
\begin{proof}[\textbf{Proof of Theorem~\protect\ref{T:Neumann-ser}}] By a simple induction argument applied to the recurrence relation (\ref{E:iterations-general}) one obtains the following identity: \begin{equation*} \mathbf{u}-\mathbf{u}^{(k)}=(I-\Pi_{0}A\Lambda_{T})^{k}\mathbf{u},\quad k=0,1,2,3,\dots \end{equation*} Since $\mathbf{u}\in\mathbb{H}_{0}$, applying inequality (\ref{E:seminorm0}) yields \begin{equation*}
|\mathbf{u}-\mathbf{u}^{(k)}|\leq\delta(T)^{k}|\mathbf{u}|. \end{equation*} Further, using inequality (\ref{E:Corollary}), one passes to the norm estimate \begin{equation*}
\|\mathbf{u}-\mathbf{u}^{(k)}\| \leq C_{P}(\Omega)\delta(T)^{k} \|\mathbf{u}\|, \end{equation*}
thus proving convergence of the Neumann series in the norm $\|.\|.$ \end{proof}
\begin{remark} \label{myremark}We note that, due to Remark~\ref{R:Exp-p}, \begin{equation} \Vert\mathbf{u}-\Pi_{0}Ag\Vert\leq C_{1}(\Omega)C_{P}(\Omega)e^{-aT}\Vert \mathbf{u}\Vert. \end{equation} Therefore, when $T$ is sufficiently large, $\mathbf{u}^{(1)}=\Pi_{0}Ag$ is a good approximation to $\mathbf{u}$. \end{remark}
\subsection{Inverse problem of TAT/PAT}
The inverse problem of TAT/PAT is a particular case of Problem~\ref{P:TATg} with $u_{0}=f$ and $u_{1}=0.$ Therefore, $f$ can be recovered from $g$ by the algorithm described in the previous section. However, the convergence can be accelerated and the computations simplified by projecting the computed approximations onto the space $\mathbb{H}_{1}$ rather than $\mathbb{H}_{0}.$
Let us introduce the notation $\mathbf{f}=(f,0).$ The measured data $g$ are still defined by equation (\ref{E:def-g}), with $u(x,t)$ being the solution of the direct problem (\ref{E:TATg}) with the initial conditions \begin{equation*} (u(x,0),u_{t}(x,0))=\mathbf{f}. \end{equation*} We show below that the partial sums \begin{equation*} \mathbf{u}^{(k)}=\sum_{j=0}^{k}(I-\Pi_{1}A\Lambda_{T})^{j}\Pi_{1}Ag \end{equation*}
of the corresponding Neumann series converge to $\mathbf{f}$ in the norm $\|.\|$. These sums are easy to compute using the following iterative relation: \begin{align} \mathbf{u}^{(0)} & =0, \notag \\ \mathbf{u}^{(k+1)} & =(I-\Pi_{1}A\Lambda_{T})\mathbf{u}^{(k)}+\Pi_{1}Ag. \label{E:final-alg} \end{align}
\begin{theorem} \label{T:TAT} Assume that the initial conditions of Problem~\ref{P:TATg} are given by $(u_{0},u_{1})=\mathbf{f}.$ Suppose also that Condition~\ref{A:Gcc} is satisfied and $T\geq T(\Omega,\Gamma).$ Then, the iterations $\mathbf{u}^{(k)}$ defined by
(\ref{E:final-alg}) converge to $\mathbf{f}$ in the norm $\|.\|$, as follows: \begin{equation*} \Vert\mathbf{f}-\mathbf{u}^{(k)}\Vert\leq C_{P}(\Omega)\,\delta(T)^{k}\Vert \mathbf{f}\Vert. \end{equation*} \end{theorem}
\begin{proof} The proof is almost identical to that of Theorem \ref{T:Neumann-ser}, with inequality (\ref{E:seminorm1}) used instead of (\ref{E:seminorm0}). \end{proof}
\begin{remark} Similarly to Remark~\ref{myremark}, \begin{equation*} \Vert\mathbf{f}-\Pi_{1}Ag\Vert\leq C_{1}(\Omega)C_{P}(\Omega)e^{-aT}\Vert \mathbf{f}\Vert, \end{equation*} and, when $T$ is sufficiently large, $\mathbf{u}^{(1)}=\Pi_{1}Ag$ is a good approximation to $\mathbf{f}$.
\end{remark}
\section{Numerical implementation and simulations}
\begin{figure}
\caption{Reconstruction with $T=5$. In (d): gray line represents the phantom, dashed line shows image reconstructed from the partial data, black line shows full data reconstruction}
\label{F:t5}
\end{figure}
\subsection{Implementation}
\begin{figure}
\caption{Reconstruction with $T=5$ and with added 50\% noise (in $L^{2}$ norm). In (e): gray line represents the phantom, dashed line shows image reconstructed from the partial data, black line shows full data reconstruction}
\label{F:t5partial}
\end{figure}
One of the advantages of the present method is the ease of implementation using standard finite differences. Unlike the algorithms of \cite{Stefanov-Yang}, our approach does not require solving the Dirichlet problem for the Laplace equation to initialize the time reversal. (The latter problem is well studied and various methods for its solution are known. However, efficient numerical schemes for arbitrary domains are quite sophisticated and require a noticeable effort to implement.)
Our numerical realization of the algorithm is based on equation (\ref {E:final-alg}) that requires computing operators $\Lambda_{T}$ and $A$. These operators represent solutions of the wave equation forward and backwards in time, respectively; they were calculated using finite difference stencils as described below.
Our simulations were performed on a 2D square domain $[-1,1]\times\lbrack-1,1]$. Throughout this section we will represent our 2D spatial variable $x$ in coordinate form, and will change the notation for all functions correspondingly: \begin{equation*} x=(\mathtt{x,y}),\quad u(x,t)=u(\mathtt{x,y},t),\quad v(x,t)=v(\mathtt{x,y},t),\quad\text{etc.} \end{equation*} Our square domain was discretized using a Cartesian grid of size $257\times257$, with the step $\Delta\mathtt{x}=\Delta\mathtt{y}=2/257$; time was discretized uniformly with the step $\Delta t=0.5\Delta\mathtt{x}$. The speed of sound $c(\mathtt{x,y})$ was set to 1, for simplicity. Time stepping inside the domain in the forward direction was implemented by applying standard second-order centered stencils in both time and space to the discretized solution $u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})$: \begin{align*} \frac{\partial^{2}}{\partial\mathtt{x}^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j}) & \thickapprox\widetilde{\frac{\partial^{2}}{\partial\mathtt{x}^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}\equiv\frac{u(\mathtt{x}_{k+1},\mathtt{y}_{l},t_{j})+u(\mathtt{x}_{k-1},\mathtt{y}_{l},t_{j})-2u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}{\Delta\mathtt{x}^{2}}, \\ \frac{\partial^{2}}{\partial\mathtt{y}^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j}) & \thickapprox\widetilde{\frac{\partial^{2}}{\partial\mathtt{y}^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}\equiv\frac{u(\mathtt{x}_{k},\mathtt{y}_{l+1},t_{j})+u(\mathtt{x}_{k},\mathtt{y}_{l-1},t_{j})-2u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}{\Delta\mathtt{y}^{2}}, \\ \frac{\partial^{2}}{\partial t^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j}) & \thickapprox\widetilde{\frac{\partial^{2}}{\partial t^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}\equiv\frac{u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j+1})+u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j-1})-2u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}{\Delta t^{2}}, \end{align*} resulting in the formula \begin{equation*}
u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j+1})=2u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})-u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j-1})+\Delta t^{2}\left( \widetilde{\frac{\partial^{2}}{\partial\mathtt{x}^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}+\widetilde{\frac{\partial^{2}}{\partial\mathtt{y}^{2}}u(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}\right) , \end{equation*} where the tilde denotes approximate quantities. Time stepping backwards in time (when computing $A$) was done similarly: \begin{equation} v(\mathtt{x}_{k},\mathtt{y}_{l},t_{j-1})=2v(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})-v(\mathtt{x}_{k},\mathtt{y}_{l},t_{j+1})+\Delta t^{2}\left( \widetilde{\frac{\partial^{2}}{\partial\mathtt{x}^{2}}v(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}+\widetilde{\frac{\partial^{2}}{\partial\mathtt{y}^{2}}v(\mathtt{x}_{k},\mathtt{y}_{l},t_{j})}\right) . \label{E:inside} \end{equation}
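The forward and backward time-stepping formulas above translate directly into a few lines of code. The following minimal Python sketch (function and variable names are ours, for illustration only, not taken from the actual code used in the simulations) applies the standard second-order stencils on a square grid; by the time symmetry of the scheme, one backward step exactly undoes one forward step.

```python
import numpy as np

def laplacian_5pt(u, h):
    """Standard 5-point Laplacian on the interior of a square grid."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    ) / h**2
    return lap

def step_forward(u_prev, u_curr, dt, h):
    """One step u(t_{j+1}) = 2u(t_j) - u(t_{j-1}) + dt^2 * Lap(u(t_j))."""
    return 2.0 * u_curr - u_prev + dt**2 * laplacian_5pt(u_curr, h)

def step_backward(v_next, v_curr, dt, h):
    """One backward step, identical in form by time symmetry."""
    return 2.0 * v_curr - v_next + dt**2 * laplacian_5pt(v_curr, h)
```

The design mirrors formula (\ref{E:inside}): the backward operator reuses the same stencil, so stepping forward and then backward reproduces the previous time layer exactly (in exact arithmetic).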
When computing the action of the operator $\Lambda_{T}$ (forward problem), the Neumann boundary condition was represented by the simplest first-order two-point stencil; this results in the values at the boundary points being set to the values at the nearest interior grid points.
The discretization of the non-standard boundary condition \begin{equation} \frac{\partial}{\partial\nu}v(x,t)-\lambda(x)\,\frac{\partial}{\partial t} v(x,t)=-\lambda(x)\,\frac{\partial}{\partial t}g(x,t), \label{E:smartboundary} \end{equation} arising in problem (\ref{E:Rev-p}) was performed as follows. The simplest first-order two-point forward stencils were used to approximate all the derivatives; for example, $\frac{\partial}{\partial t}v$ was approximated at $t=t_{j-1}$, $\mathtt{x}=\mathtt{x}_{0}$ by \begin{equation*} \frac{\partial}{\partial t}v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1}) \thickapprox\widetilde{\frac{\partial}{\partial t}v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})}\equiv\frac{v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j})-v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})}{\Delta t}, \end{equation*} and $\frac{\partial}{\partial t}g$ was computed similarly. The normal derivative was also approximated by the simplest two-point stencil applied to values at time $t_{j-1}$; for example, on the side with $\mathtt{x}=\mathtt{x}_{0}$ the following formula was used: \begin{equation*} \frac{\partial}{\partial\nu}v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})=-\frac{\partial}{\partial\mathtt{x}}v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1}) \thickapprox-\widetilde{\frac{\partial}{\partial\mathtt{x}}v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})}\equiv-\frac{v(\mathtt{x}_{1},\mathtt{y}_{l},t_{j-1})-v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})}{\Delta\mathtt{x}}.
\end{equation*} Substituting the last two equations into (\ref{E:smartboundary}) resulted in \begin{equation*} \frac{v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})-v(\mathtt{x}_{1},\mathtt{y} _{l},t_{j-1})}{\Delta\mathtt{x}}=\frac{\lambda(\mathtt{x}_{0},\mathtt{y}_{l}) }{\Delta t}\left[ v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j})-v(\mathtt{x}_{0}, \mathtt{y}_{l},t_{j-1})-g(\mathtt{x}_{0},\mathtt{y}_{l},t_{j})+g(\mathtt{x} _{0},\mathtt{y}_{l},t_{j-1})\right] \, \end{equation*} or \begin{equation} v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1}) = v(\mathtt{x}_{1},\mathtt{y} _{l},t_{j-1})+\gamma(\mathtt{x}_{0},\mathtt{y}_{l})\left[ v(\mathtt{x}_{0}, \mathtt{y}_{l},t_{j})-v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})-g(\mathtt{x} _{0},\mathtt{y}_{l},t_{j})+g(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})\right] , \label{E:smart1} \end{equation} where \[ \gamma(\mathtt{x}_{0},\mathtt{y}_{l}) \equiv\frac{\lambda( \mathtt{x}_{0},\mathtt{y}_{l})\Delta\mathtt{x}}{\Delta t}. \notag \] Solving (\ref{E:smart1}) for $v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})$ yielded \begin{equation} v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1})=\frac{v(\mathtt{x}_{1},\mathtt{y} _{l},t_{j-1})}{1+\gamma(\mathtt{x}_{0},\mathtt{y}_{l})}+\frac{\gamma(\mathtt{ x}_{0},\mathtt{y}_{l}) \left[v(\mathtt{x}_{0},\mathtt{y}_{l},t_{j})-g( \mathtt{x}_{0},\mathtt{y}_{l},t_{j})+g(\mathtt{x}_{0},\mathtt{y}_{l},t_{j-1}) \right]}{1+\gamma(\mathtt{x}_{0},\mathtt{y}_{l})}. \label{E:smart2} \end{equation} Approximation of boundary condition (\ref{E:smartboundary}) on other parts of the boundary was done similarly. In order to apply (\ref{E:smart2}) (and similar expression on the other parts of the boundary), one first applies (\ref{E:inside}) at all discretization points inside of the computational domain. Then (\ref{E:smart2}) is fully defined.
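For illustration, the explicit update (\ref{E:smart2}) on the side $\mathtt{x}=\mathtt{x}_{0}$ amounts to one line of arithmetic per boundary point. The sketch below uses our own variable names (an assumption, not the paper's actual code) and is meant to be applied after the interior values at $t_{j-1}$ have been computed by (\ref{E:inside}).

```python
def left_boundary_update(v1_prev, v0_curr, g0_curr, g0_prev, lam, dx, dt):
    """Formula (E:smart2) on the side x = x_0.

    v1_prev : v(x_1, y_l, t_{j-1}), already known from the interior step
    v0_curr : v(x_0, y_l, t_j)
    g0_curr, g0_prev : boundary data g at times t_j and t_{j-1}
    lam : lambda(x_0, y_l)
    """
    gamma = lam * dx / dt  # gamma = lambda * dx / dt, as in the text
    return (v1_prev + gamma * (v0_curr - g0_curr + g0_prev)) / (1.0 + gamma)
```

One can check directly that the returned value satisfies the implicit relation (\ref{E:smart1}), which is how (\ref{E:smart2}) was derived.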
In the absence of experimentally measured data, in order to validate the reconstruction algorithm one needs to simulate values of $g(x,t)$ on $\Gamma.$ One could use the finite difference algorithm described above to approximately compute $g(x,t)$ for a chosen phantom $\mathbf{f}=(f,0)$. However, doing so would constitute the so-called \textquotedblleft inverse crime\textquotedblright: sometimes simulations will produce inordinately good reconstructions due to the spurious cancellation of errors if the forward and inverse problems are solved using the same discretization techniques. Thus, in order to compute $g$ we utilized the following method based on separation of variables. Function $f$ was expanded in the orthogonal series of eigenfunctions $\varphi_{k,l}$ of the Neumann Laplacian on our square domain \begin{align*} f(\mathtt{x,y}) & =\sum_{k,l}c_{k,l}\varphi_{k,l}(\mathtt{x,y}), \\ \varphi_{k,l}(\mathtt{x,y}) & =\cos(k\mathtt{\bar{x}})\cos(l\mathtt{\bar{y}}),\quad k=0,1,2,\ldots,\quad l=0,1,2,\ldots, \\ \mathtt{\bar{x}} & =\pi(\mathtt{x}+1)/2,\quad\mathtt{\bar{y}}=\pi(\mathtt{y}+1)/2. \end{align*} This was done efficiently using the 2D Fast Cosine Fourier transform algorithm (FCT). Then, the solution of the forward problem was computed as the series \begin{equation*} u(\mathtt{x,y},t)=\sum_{k,l}c_{k,l}\varphi_{k,l}(\mathtt{x,y})\cos(\lambda_{k,l}t),\quad\lambda_{k,l}=\frac{\pi}{2}\sqrt{k^{2}+l^{2}},\quad k,l=0,1,2,\ldots. \end{equation*} For each value of $t,$ the above 2D cosine series were also summed using the FCT. The resulting algorithm is quite fast, and, more importantly, it is spectrally accurate with respect to $f.$ If $f$ has a high degree of smoothness, this series solution yields much higher accuracy than the finite difference techniques we utilized as parts of the reconstruction algorithm.
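The separation-of-variables solution above can be illustrated with a phantom chosen as an explicit finite cosine sum, so that the coefficients $c_{k,l}$ are known exactly and no FCT is needed. This is a deliberately simplified sketch under our own naming conventions; the actual code expands a general $f$ with the FCT as described in the text.

```python
import numpy as np

# Grid on [-1,1]^2; rescaled variables xb = pi*(x+1)/2, yb = pi*(y+1)/2
n = 65
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
Xb, Yb = np.pi * (X + 1) / 2, np.pi * (Y + 1) / 2

def phi(k, l):
    """Neumann-Laplacian eigenfunction cos(k*xb)cos(l*yb) on the square."""
    return np.cos(k * Xb) * np.cos(l * Yb)

# A toy phantom given as an explicit finite cosine sum (coefficients exact)
coeffs = {(1, 2): 1.0, (3, 0): 0.5}

def u_series(t):
    """Series solution u(x,y,t) = sum_{k,l} c_{k,l} cos(lambda_{k,l} t) phi_{k,l}."""
    out = np.zeros_like(Xb)
    for (k, l), c in coeffs.items():
        lam_kl = 0.5 * np.pi * np.hypot(k, l)  # lambda_{k,l} = (pi/2) sqrt(k^2+l^2)
        out += c * np.cos(lam_kl * t) * phi(k, l)
    return out
```

At $t=0$ the series reproduces the phantom $f$, and for all $t$ the solution is pointwise bounded by $\sum_{k,l}|c_{k,l}|$, reflecting the energy-preserving nature of the exact evolution.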
\subsection{Simulations}
\begin{figure}
\caption{Reconstruction from the full boundary data, $T=1.6$. In (e): gray line is the phantom, dashed line shows the initial approximation, black line presents iteration \#5}
\label{F:fulldata}
\end{figure}
\begin{figure}
\caption{Reconstruction from the data given at the left and bottom sides of the square domain, $T=3$. In (e): gray line is the phantom, dashed line represents the initial approximation, black line shows iteration \#5}
\label{F:partialdata}
\end{figure}
We conducted several numerical experiments to verify the theoretical conclusions of the previous sections. Two acquisition schemes were considered: a full data scheme with $g(x,t)$ given on all sides of the square domain, and a partial data scheme with $g$ known only on the left and bottom sides of the square. Since we assumed $c(x)\equiv1,$ $T(\Omega,\Gamma)$ equals $2\sqrt{2}$ in the former case (i.e., the length of the diagonal of the square) and $4\sqrt{2}$ in the latter case. Experimentally, we found that these times are too pessimistic, and that half of that time is quite enough for the reconstruction. This implies that our theoretical results may not be sharp. Although we were not able to improve these theoretical estimates, we present below simulations with the measurement times significantly reduced compared to $T(\Omega,\Gamma)$.
As a phantom, we utilized a sum of six shifted finitely supported $ C^{1}(\Omega)$ functions of the radial variable, shown as a color-scale image in Figure~1(a). Such a smooth phantom reduces errors related to finite difference computations and allows us to concentrate on convergence of the algorithm per se.
The goal of our first two simulations was to see how well the initial approximation $\Pi_{1}Ag$ can approximate $f.$ To this end the measurement time $T$ was chosen equal to 5; this corresponds to the time of two and a half bounces of waves between the opposite sides of the domain. Reconstruction from full data is shown in Figure~1(b) and partial data reconstruction is presented in Figure~1(c). The image in Figure~1(d) demonstrates profiles of the central horizontal cross sections of the three previous images, with the phantom represented by a gray line, the full data reconstruction shown as a black line, and the partial data image drawn with a dashed line. The full data image is practically perfect: the corresponding black line in Figure~1(d) lies on top of the gray line, rendering the latter almost invisible. The relative $L^{2}(\Omega)$ reconstruction error equals 1.1\% in the image of Figure~1(b). The partial data reconstruction is just slightly less accurate, with the relative $L^{2}(\Omega)$ error equal to 6.8\%. In any reconstruction from real data such an error would be negligible compared to the errors introduced by imperfections of real data.
In order to illustrate the low sensitivity of the algorithm to noise in the data, we repeated the previous simulation with the data contaminated by 50\% white noise (in the relative $L^{2}$ norm). As in the first simulation, only the initial guess $\Pi_{1}Ag$ was computed, without the successive iterative refinement. Figure~2(a) shows the time series representing $g(x,t)$ for one of the points $x,$ with and without added noise. Figures~2(b)-(d) demonstrate the same phantom, and the full and partial data reconstructions, respectively. The errors in the two reconstructions were of about the same order, 19\% and 22\% in the relative $L^{2}(\Omega)$ norm, with the partial data giving, for some reason, a slightly better result in this norm. The central cross sections of these images are shown in Figure~2(e).
Our remaining two simulations were intended to verify the convergence of the algorithm in the case when the measurement time $T$ is close to a half of $T(\Omega,\Gamma)$. Figures~3(a)-(e) demonstrate the results of a full data reconstruction with $T$ equal to 1.6 (compare to $T(\Omega,\Gamma)=2\sqrt{2}\thickapprox 2.828$). Figures~3(a)-(d) show the phantom, the first approximation $\Pi_{1}Ag$, and the second and the fifth iterations ($\mathbf{u}^{(2)}$ and $\mathbf{u}^{(5)}$), respectively. Figure~3(e) presents central horizontal cross-sections of the phantom, the first approximation $\Pi_{1}Ag$, and of the fifth iteration. One can notice that, while the initial approximation is noticeably distorted, the fifth iteration yields a close approximation to $f(x).$ The relative $L^{2}(\Omega)$ norm of the error in $\mathbf{u}^{(5)}$ was $3.3\%.$
The final series of images in Figure~4 demonstrates the results of the reconstruction from the partial data, with $T$ equal to $3$ (compare to $ T(\Omega,\Gamma)=4\sqrt{2}\thickapprox5.6569).$ As before, the phantom, the initial approximation, and the second and the fifth iterations are shown in Figure~4(a)-(d), respectively. Figure~4(e) presents the central horizontal cross sections of images in Figure~4(a), (b), and (d). The relative $ L^{2}(\Omega)$ error in the fifth iteration $\mathbf{u}^{(5)}$ was $5.4\%;$ for most practical purposes this would be more than acceptable.
\section{Conclusions}
We presented a novel dissipative time reversal approach for solving the inverse source problem of TAT/PAT posed within a cavity with perfectly reflecting walls. Unlike the previous work of \cite{Holman-Kunyansky,Stefanov-Yang}, where the Dirichlet boundary condition was used, we utilize the non-standard boundary condition (\ref{E:Rev-p}). The latter leads to the dissipative boundary condition (\ref{E:err}) imposed on the error $U(x,t)$, and, hence, to a natural decay of $U(x,0)$ as $T$ grows. Our approach results in two reconstruction methods: i) a non-iterative approximation, converging exponentially to $f$ as $T \to \infty$, and ii) a Neumann series formula. These two algorithms are applicable to both full and partial data problems.
Compared to \cite{Holman-Kunyansky}, where rather stringent conditions on the eigenvalues of the Neumann and Dirichlet Laplacians on $\Omega$ are required for convergence, our approach is based on the much less restrictive GCC (Condition~\textbf{\ref{A:Gcc}}). Moreover, unlike the method of~\cite{Stefanov-Yang}, our technique does not require computing the harmonic extension of the boundary values, which significantly simplifies its numerical realization. It should be noted that the requirement $T \geq T(\Omega,\Gamma)$ is sharp for the convergence in Theorem~\ref{T:Neumann-ser} (dealing with the general Problem~\ref{P:TATg}). However, it is not sharp for the convergence in Theorem~\ref{T:TAT} (dealing with the inverse problem of TAT/PAT). Specifically, for the case of the full data, it is twice the sharp time for the convergence of the TAT/PAT inverse problem obtained in~\cite{Stefanov-Yang}. Nevertheless, our numerical simulations show that the present algorithm performs very well with such sharp measurement times.
While we only considered the simplest wave equation in Euclidean space, our analysis can be extended to problems formulated on Riemannian manifolds (as in \cite{Stefanov-Yang}) and/or to problems with a potential (as in \cite{Acosta}).
\section*{Acknowledgment}
The first author is grateful to C. Bardos, G. Nakamura, and P. Stefanov for helpful discussions and comments. The second author thanks L. Friedlander for a helpful discussion. The first and second authors were partially supported by the NSF/DMS awards \# 1212125 and 1211521, respectively.
\end{document} |
\begin{document}
\title{Stability of generalized Tur\'an number for linear forests}
\begin{abstract} Given a graph $T$ and a family of graphs $\mathcal{F}$, the generalized Tur\'an number of $\mathcal{F}$ is the maximum number of copies of $T$ in an $\mathcal{F}$-free graph on $n$ vertices, denoted by $ex(n,T,\mathcal{F})$. When $T = K_r$, $ex(n, K_r, \mathcal{F})$ is a function specifying the maximum possible number of $r$-cliques in an $\mathcal{F}$-free graph on $n$ vertices. A linear forest is a forest whose connected components are all paths and isolated vertices. Let $\mathcal{L}_{k}$ be the family of all linear forests of size $k$ without isolated vertices. In this paper, we determine the maximum possible number of $r$-cliques in $G$, where $G$ is $\mathcal{L}_{k}$-free with minimum degree at least $d$. Furthermore, we give a stability version of this result. As an application of the stability version, we obtain a clique version of the stability of the Erd\H{o}s-Gallai Theorem on matchings.
\noindent{\bf Keywords:} spanning linear forest, generalized Tur\'an number, stability
\noindent{\bf AMS (2000) subject classification:} 05C35 \end{abstract}
\section{Introduction}
Let $\mathcal{F}$ be a family of graphs. The \textit{Tur\'an number} of $\mathcal{F}$, denoted by $ex(n, \mathcal{F})$, is the maximum number of edges in a graph with $n$ vertices which does not contain any subgraph isomorphic to a graph in $\mathcal{F}$.
When $\mathcal{F}=\{F\}$, we write $ex(n, F)$ instead of $ex(n, \{F\})$.
The problem of determining Tur\'an numbers for various graphs traces its history back to 1907, when Mantel showed that $ex(n,K_3)=\lfloor\frac{n^2}{4}\rfloor$.
In 1941, Tur\'an \cite{1941Turan} proved that if a graph does not contain a complete subgraph $K_r$, then the maximum number of edges it can contain is attained by the Tur\'an graph, a complete balanced $(r-1)$-partite graph.
For a graph $G$ and $S,T\subseteq V(G)$, denote by $E_G(S,T)$ the set of edges between $S$ and $T$ in $G$, i.e., $E_G(S,T)=\{uv\in E(G)\colon\, u\in S, v\in T\}$.
Let $e_G(S,T)=|E_G(S,T)|$.
If $S=T$, we use $e_G(S)$ instead of $e_G(S,S)$.
For a vertex $v\in V(G)$, the {\it degree} of $v$, written as $d_G(v)$ or simply $d(v)$, is the number of edges incident with $v$. We use $d_T(v)$ instead of $e_G(S,T)$ when $S=\{v\}$.
For any $U \subseteq V(G)$, let $G[U]$ be the subgraph induced by $U$ whose edges are precisely the edges of $G$ with both ends in $U$.
Let $G$ be a graph of order $n$, $P$ a property defined on $G$, and $k$ a positive integer.
A property $P$ is said to be \textit{$k$-stable}, if whenever $G+uv$ has the property $P$ and $d_G(u) + d_G(v) \geq k$, then $G$ itself has the property $P$. The $k$-\textit{closure} of a graph $G$ is the (unique) smallest graph $G'$ of order $n$ such that $E(G) \subseteq E(G')$ and $d_{G'}(u)+d_{G'}(v)<k$ for all $u v \notin E(G')$.
The $k$-closure can be obtained from $G$ by a recursive procedure of joining nonadjacent vertices with degree-sum at least $k$. In particular, if $G'=G$, we say that $G$ is \textit{stable under taking} $k$-closure.
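The recursive procedure defining the $k$-closure translates directly into code. The following Python sketch (our own illustration, practical only for small graphs) joins nonadjacent vertices with degree sum at least $k$ until no such pair remains; since degrees only increase during the procedure, the loop terminates with the (unique) $k$-closure.

```python
from itertools import combinations

def k_closure(n, edges, k):
    """Return the edge set of the k-closure of a graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            # join nonadjacent u, v with d(u) + d(v) >= k
            if v not in adj[u] and len(adj[u]) + len(adj[v]) >= k:
                adj[u].add(v)
                adj[v].add(u)
                changed = True
    return {(u, v) for u in adj for v in adj[u] if u < v}
```

For example, the $4$-closure of the $5$-cycle is the complete graph $K_5$ (every nonadjacent pair has degree sum $4$), while its $5$-closure is the $5$-cycle itself, which is therefore stable under taking $5$-closure.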
Thus, if $P$ is $k$-stable and the $k$-closure of $G$ has property $P$, then $G$ itself has property $P$.
For a natural number $\alpha$ and a graph $G$, the $\alpha$-\textit{disintegration} of a graph $G$ is the process of iteratively removing from $G$ the vertices with degree at most $\alpha$ until the resulting graph has minimum degree at least $\alpha+1$ or is empty. The resulting subgraph $H=H(G, \alpha)$ will be called the $(\alpha+1)$-\textit{core} of $G$. It is well known that $H(G, \alpha)$ is unique and does not depend on the order of vertex deletion (for instance, see \cite{1996P}).
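The $\alpha$-disintegration is likewise a short iterative procedure; this sketch (our own, not from the cited sources) deletes vertices of degree at most $\alpha$ until the $(\alpha+1)$-core remains, and by the uniqueness noted above the result does not depend on the deletion order.

```python
def core_vertices(n, edges, alpha):
    """Vertex set of the (alpha+1)-core: iteratively delete degree <= alpha."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    while True:
        low = {v for v in alive if len(adj[v] & alive) <= alpha}
        if not low:
            return alive
        alive -= low
```

For instance, for a triangle with one pendant vertex, the $2$-core is the triangle, while the $3$-core is empty.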
The \textit{matching number} $\nu(G)$ is the number of edges in a maximum matching of $G$.
The $n$-vertex graph $H(n, k, a)$ is defined as follows.
The vertex set of $H(n, k, a)$ is partitioned into three sets $A, B, C$ such that $|A|=a,|B|=k-2a,|C|=n-k+a$, and the edge set of $H(n, k, a)$ consists of all edges between $A$ and $C$ together with all edges in $A \cup B$.
Let $H^+(n, k, a)$ and $H^{++}(n, k, a)$ be the graph obtained by adding one edge and two independent edges in $C$ of $H(n, k, a)$, respectively.
The number of $r$-cliques in $H(n, k, a)$ is denoted by $h_{r}(n, k, a):=\binom{k-a}{r}+(n-k+a)\binom{a}{r-1}$, where $h_{r}(n, k, 0)=\binom{k}{r}$.
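For small parameters the formula $h_r(n,k,a)$ can be checked against a brute-force clique count in $H(n,k,a)$. The sketch below builds $H(n,k,a)$ following the definition above (vertex labels and helper names are our own choices):

```python
from itertools import combinations
from math import comb

def H_edges(n, k, a):
    """Edge set of H(n,k,a) with A = [0,a), B = [a,k-a), C = [k-a,n)."""
    A, B, C = range(a), range(a, k - a), range(k - a, n)
    edges = set()
    for u, v in combinations(list(A) + list(B), 2):  # clique on A cup B
        edges.add((u, v))
    for u in A:                                      # all edges between A and C
        for v in C:
            edges.add((u, v))
    return edges

def count_r_cliques(n, edges, r):
    """Brute-force count of r-cliques (feasible only for small n)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(
        1 for S in combinations(range(n), r)
        if all(v in adj[u] for u, v in combinations(S, 2))
    )

def h_r(n, k, a, r):
    """h_r(n,k,a) = C(k-a, r) + (n-k+a) * C(a, r-1)."""
    return comb(k - a, r) + (n - k + a) * comb(a, r - 1)
```

For example, with $n=9$, $k=5$, $a=2$, $r=3$, both the formula and the brute-force count give $h_3(9,5,2)=\binom{3}{3}+6\binom{2}{2}=7$.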
A \textit{linear forest} is a forest whose connected components are all paths and isolated vertices.
Let $\mathcal{L}_{k}$ be the family of all linear forests of size $k$ without isolated vertices.
In \cite{2019Wang}, Wang and Yang proved that $ex\left(n, \mathcal{L}_{n-k}\right)=\binom{n-k}{2}+O\left(k^{2}\right)$ when $n\geq 3k$.
Later, Ning and Wang \cite{2020Ning} completely determined the Tur\'an number $ex\left(n ; \mathcal{L}_{k}\right)$ for all $n>k$.
\begin{figure}
\caption{$H(n,k,a)$}
\end{figure}
\begin{theorem}[Ning and Wang \cite{2020Ning}]\label{span}
For any integers $n$ and $k$ with $1 \leq k \leq n-1$, we have $$ ex\left(n,\mathcal{L}_{k}\right)=\max \left\{h_2\left(n, k, 0\right),h_2\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}. $$ \end{theorem}
Given a graph $T$ and a family of graphs $\mathcal{F}$, the \textit{generalized Tur\'an number} of $\mathcal{F}$ is the maximum number of copies of $T$ in an $\mathcal{F}$-free graph on $n$ vertices, denoted by $ex(n,T,\mathcal{F})$.
Note that $ex(n, K_2, \mathcal{F})=ex(n, \mathcal{F})$.
The problem of estimating generalized Tur\'an numbers has received a lot of attention.
In 1962, Erd\H{o}s \cite{1962E} generalized the classical result of Tur\'an by determining the exact value of $ex(n,K_r,K_t)$.
Luo \cite{2018Luo} determined the upper bounds on $ex(n,K_r,P_{k})$ and $ex(n,K_r,\mathcal{C}_{\geq k})$, where $\mathcal{C}_{\geq k}$ is the family of all cycles with length at least $k$.
In \cite{2020Gerbner}, Gerbner, Methuku and Vizer investigated the function $ex(n,T,kF)$, where $kF$ denotes $k$ vertex disjoint copies of a fixed graph $F$.
The systematic study of $ex(n,T,\mathcal{F})$ was initiated by Alon and Shikhelman \cite{2016Alon}.
Recently, Zhang, Wang and Zhou \cite{2021Zhang} determined the exact values of $ex(n,K_r,\mathcal{L}_{k})$ by using the shifting method.
\begin{theorem}[Zhang, Wang and Zhou \cite{2021Zhang}]\label{2021Zhang}
For any $r\geq 2$ and $n\geq k+1$, $$ex(n,K_r,\mathcal{L}_{k})=\max \left\{h_r\left(n, k, 0\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}.$$ \end{theorem}
Let $N_r(G)$ denote the number of $r$-cliques in $G$.
When $T = K_r$, $ex(n, K_r, \mathcal{F})$ is a function specifying the maximum possible number of $r$-cliques in an $\mathcal{F}$-free graph on $n$ vertices.
We extend Theorem \ref{2021Zhang} as follows.
\begin{theorem}\label{clique}
Let $G$ be an $\mathcal{L}_{k}$-free graph on $n$ vertices with minimum degree $d$ and $d\leq \lfloor\frac{k-1}{2}\rfloor$.
Then $$N_r(G)\leq \max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}.$$
The graphs $H(n,k,d)$ and $H\big(n,k,\lfloor\frac{k-1}{2}\rfloor\big)$ show that this bound is sharp. \end{theorem}
Many extremal problems have the property that there is a unique extremal example, and moreover any construction of close to maximum size is structurally close to this extremal example.
In \cite{2019F}, F\"uredi, Kostochka, and Luo studied the maximum number of cliques in non-$\ell$-hamiltonian graphs, where the property non-$\ell$-hamiltonian is $(n+\ell)$-stable.
Actually, they not only asked to determine the maximum number of cliques in graphs having a stable property $P$, but also asked to prove a stability version of it.
Motivated by the question proposed by F\"uredi, Kostochka, and Luo \cite{2019F}, we give the following result which is the stability version of Theorem \ref{clique}.
\begin{theorem}\label{stab2}
Let $G$ be an $\mathcal{L}_{k}$-free graph on $n$ vertices with minimum degree at least $d$.
If $n > k^5$, $r\leq \lfloor\frac{k-3}{2}\rfloor$ and $$N_r(G)>\max \left\{h_r(n, k, d),h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right)\right\},$$ then \\
(i) $G$ is a subgraph of the graph $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ if $k$ is odd;\\
(ii) $G$ is a subgraph of the graph $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$, $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^{++}\left(n, k-2, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ if $k$ is even. \end{theorem}
In 1959, Erd\H{o}s and Gallai \cite{1959E} determined the maximum numbers of edges in an $n$-vertex graph with $\nu(G)\leq k$.
\begin{theorem}[Erd\H{o}s-Gallai Theorem \cite{1959E}]\label{1959E}
Let $G$ be a graph on $n$ vertices.
If $\nu(G) \leq k$, then $$e(G)\leq \max \left\{h_{2}(n, 2k+1, 0),h_{2}(n, 2k+1, k)\right\}.$$ \end{theorem}
In \cite{2020Duan}, Duan et al. extended Erd\H{o}s-Gallai Theorem as follows.
\begin{theorem}[Duan et al. \cite{2020Duan}]\label{clique11}
If $G$ is a graph with $n\geq 2k+2$ vertices, minimum degree $d$, and $\nu(G) \leq k$,
then $$N_r(G)\leq\max \left\{h_r\left(n, 2k+1, d\right), h_r\left(n, 2k+1, k\right)\right\}.$$ \end{theorem}
As an application of our result, we give the stability version of Theorem \ref{clique11} for $2\leq r\leq k-1$.
\begin{theorem}\label{thm11}
Let $G$ be a graph on $n$ vertices with $\delta(G) \geq d$ and $\nu(G) \leq k$.
If $r\leq k-1$, $n > (2k+1)^5$ and
$$N_r(G)>\max \left\{h_{r}(n, 2k+1, d),h_{r}(n, 2k+1, k-2)\right\},$$
then $G$ is a subgraph of $H(n, 2k+1, k)$ or $H(n, 2k+1, k-1)$. \end{theorem}
\section{The maximum number of cliques in $\mathcal{L}_{k}$-free graphs with given minimum degree}
The closure technique, which was initiated by Bondy and Chv\'atal \cite{1976Bondy} in 1976, plays a crucial role in the proof of Theorem \ref{clique}.
In \cite{2020Ning}, Ning and Wang proved that the property of being $\mathcal{L}_{k}$-free is $k$-stable.
\begin{lemma}[\cite{2020Ning}]\label{closure}
Let $G$ be a graph on $n$ vertices. Suppose that $u, v \in V(G)$ with $d(u)+d(v) \geq k$. Then $G$ is $\mathcal{L}_{k}$-free if and only if $G+u v$ is $\mathcal{L}_{k}$-free. \end{lemma}
\noindent\textbf{Proof of Theorem \ref{clique}.}
Suppose, by way of contradiction, that $G$ is an $\mathcal{L}_{k}$-free graph with $N_{r}(G)>\max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}$.
Let $G^{\prime}$ be the $k$-closure of $G$.
Then Lemma \ref{closure} implies that $G'$ is $\mathcal{L}_{k}$-free.
Obviously, $\delta\left(G^{\prime}\right) \geq \delta(G)=d$.
Let $H_{1}$ denote the $\lfloor\frac{k+1}{2}\rfloor$-core of $G'$, i.e., the graph resulting from applying the $\lfloor\frac{k-1}{2}\rfloor$-disintegration to $G'$.
\noindent\textbf{Claim 1.} $H_{1}$ is nonempty.
\noindent\textbf{Proof.} Suppose $H_{1}$ is empty.
Since exactly one vertex is deleted at each step of the $\lfloor\frac{k-1}{2}\rfloor$-disintegration, each step destroys at most $\binom{\lfloor\frac{k-1}{2}\rfloor}{r-1}$ cliques of size $r$.
The number of $K_{r}$'s contained in the last $\lceil\frac{k+1}{2}\rceil$ vertices is at most $\binom{\lceil\frac{k+1}{2}\rceil}{r}$.
Therefore, $$ \begin{aligned} N_{r}\left(G^{\prime}\right) & \leq \binom{\left\lceil\frac{k+1}{2}\right\rceil}{r} + \left(n-\left\lceil\frac{k+1}{2}\right\rceil\right)\binom{\lfloor\frac{k-1}{2}\rfloor}{r-1}\\
& = h_{r}\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right) \\
& \leq \max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}, \end{aligned} $$ contradicting the assumption on $G^{\prime}$. The claim follows.\qed
\noindent\textbf{Claim 2.} $H_{1}$ is a clique.
\noindent\textbf{Proof.} Note that $d_{G^{\prime}}(u)\geq \lfloor\frac{k+1}{2}\rfloor$ for any vertex $u$ in $H_1$, so $d_{G^{\prime}}(u)+d_{G^{\prime}}(v)\geq 2\lfloor\frac{k+1}{2}\rfloor\geq k$ for any two vertices $u$, $v$ in $H_1$.
Since $G^{\prime}$ is stable under taking $k$-closure, $H_1$ is a clique.\qed
Let $t=\left|V\left(H_{1}\right)\right|$.
Now we estimate the range of $t$.
\noindent\textbf{Claim 3.} $\lfloor\frac{k+3}{2}\rfloor\leq t \leq k-d$.
\noindent\textbf{Proof.}
As $H_{1}$ is a clique and $d_{H_{1}}(u) \geq \lfloor\frac{k+1}{2}\rfloor$ for any vertex $u$ in $H_{1}$, we get $t \geq \lfloor\frac{k+3}{2}\rfloor$.
If $t \geq k-d+1$, then $d_{G^{\prime}}(u) \geq d_{H_{1}}(u) = t-1\geq k-d$ for any vertex $u$ in $H_1$.
Let $v$ be any vertex in $V\left(G^{\prime}\right) \backslash V\left(H_{1}\right)$.
Notice that $d_{G^{\prime}}(v) \geq d_{G}(v) \geq d$ and $d_{G^{\prime}}(u)+d_{G^{\prime}}(v) \geq k-d+d=k$.
Since $G^{\prime}$ is the $k$-closure of $G$, $v$ is adjacent to $u$.
Then $G^{\prime}$ contains a $P_{k+1}$, which is a contradiction.
Thus $\lfloor\frac{k+3}{2}\rfloor \leq t \leq k-d$.\qed
Let $H_{2}$ be the $(k+1-t)$-core of $G^{\prime}$.
Since $t \geq \lfloor\frac{k+3}{2}\rfloor$, we obtain $k+1-t \leq \lfloor\frac{k+1}{2}\rfloor$.
Therefore, $H_{1} \subseteq H_{2}$.
\noindent\textbf{Claim 4.} $H_{1} \neq H_{2}$.
\noindent\textbf{Proof.}
Suppose $H_{1}=H_{2}$.
Then $\left|V\left(H_{2}\right)\right|=t$.
Since each step during the process of $(k-t)$-disintegration destroys at most $\binom{k-t}{r-1}$ cliques of size $r$,
we have $N_{r}\left(G^{\prime}\right) \leq\binom{t}{r}+(n-t)\binom{k-t}{r-1}=h_{r}(n, k, k-t)$.
Note that $d \leq k-t\leq \lceil\frac{k-3}{2}\rceil$ from Claim 3.
By the convexity of $h_{r}(n, k, k-t)$, we have $N_{r}\left(G^{\prime}\right) \leq \max \left\{h_{r}(n, k, d), h_{r}(n, k, \lceil\frac{k-3}{2}\rceil)\right\}\leq \max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}$, a contradiction.
Thus the claim follows.\qed
By Claim 4, $H_{1}$ is a proper subgraph of $H_{2}$.
This implies that there are non-adjacent vertices $u$ and $v$ such that $u \in V(H_{1})$ and $v \in V(H_{2}) \backslash V(H_{1})$.
We have $d_{G^{\prime}}(u)+d_{G^{\prime}}(v) \geq t-1+(k+1-t)=k$.
As $G^{\prime}$ is stable under taking $k$-closure, $u$ must be adjacent to $v$, a contradiction.
It is easy to see that the graphs $H(n,k,d)$ and $H(n,k,\lfloor\frac{k-1}{2}\rfloor)$ are $\mathcal{L}_{k}$-free. Hence either $H(n,k,d)$ or $H(n,k,\lfloor\frac{k-1}{2}\rfloor)$ attains the bound.
The theorem is proved.\qed
\section{Stability on $\mathcal{L}_{k}$-free graphs}
\subsection{Proof of Theorem \ref{stab2}}
Let $G$ be a graph on $n$ vertices.
If there are at least $s$ vertices in $V(G)$ with degree at most $q$, then we say that $G$ has the $(s, q)$-\textit{P\'osa property}.
If $G$ has $(s, q)$-P\'osa property and $n \geq s+q$, then we can check that
$$N_r(G) \leq\binom{n-s}{r}+s \binom{q}{r-1}.$$
The following two lemmas show the relationship between the $k$-stable property and the P\'osa property.
With the help of these two lemmas, we can approximate the structure of $k$-closure of a graph.
\begin{lemma}\label{posa}
Let $n \geq k+1$.
Assume property $P$ is $k$-stable and the complete graph $K_n$ has the property $P$.
Suppose $G$ is a graph on $n$ vertices with minimum degree at least $d$.
If $G$ does not have property $P$, then there exists an integer $q$ with $d \leq q \leq \frac{k-1}{2}$ such that G has $(n-k+q, q)$-P\'osa property. \end{lemma}
\noindent\textbf{Proof.}
Let $G^{\prime}$ be the $k$-closure of $G$ and $d_{G^{\prime}}\left(v_{1}\right), d_{G^{\prime}}\left(v_{2}\right), \cdots, d_{G^{\prime}}\left(v_{n}\right)$ be the degree sequence of $G^{\prime}$ such that $d_{G^{\prime}}\left(v_{1}\right)$ $\geq d_{G^{\prime}}\left(v_{2}\right) \geq \cdots \geq d_{G^{\prime}}\left(v_{n}\right)$.
Clearly, $G'$ is not a complete graph.
Otherwise $G'$ has property $P$, and since $P$ is $k$-stable, $G$ has property $P$ as well, a contradiction.
Let $v_{i}$ and $v_{j}$ be two non-adjacent vertices in $G^{\prime}$ with $1 \leq i<j \leq n$ and $d_{G^{\prime}}\left(v_{i}\right)+d_{G^{\prime}}\left(v_{j}\right)$ as large as possible.
Obviously, $d_{G^{\prime}}\left(v_{i}\right)+d_{G^{\prime}}\left(v_{j}\right) \leq k-1$.
Let $S$ be the set of vertices in $V \backslash\{v_i\}$ which are not adjacent to $v_{i}$ in $G'$.
By the choice of $v_j$, we have $d_{G^{\prime}}(v) \leq d_{G^{\prime}}\left(v_{j}\right)$ for any $v \in S$.
Then $$
|S|=n-1-d_{G^{\prime}}\left(v_{i}\right) \geq n-k+d_{G^{\prime}}\left(v_{j}\right) . $$
Hence there are at least $n-k+d_{G^{\prime}}\left(v_{j}\right)$ vertices in $V\left(G^{\prime}\right)$ with degree at most $d_{G^{\prime}}\left(v_{j}\right)$.
Let $q=d_{G^{\prime}}\left(v_{j}\right)$.
Then $G'$ has $(n-k+q, q)$-P\'osa property.
Moreover, since $d_{G^{\prime}}\left(v_{i}\right) \geq d_{G^{\prime}}\left(v_{j}\right)$ and $d_{G^{\prime}}\left(v_{i}\right)+d_{G^{\prime}}\left(v_{j}\right) \leq k-1$, it follows that $q=d_{G^{\prime}}\left(v_{j}\right) \leq \frac{k-1}{2}$.
Since $G$ is a subgraph of $G^{\prime}$ and $$ d_{G^{\prime}}\left(v_{j}\right) \geq \delta\left(G^{\prime}\right) \geq \delta(G) \geq d, $$
we complete the proof.\qed
The following lemma gives a structural characterization of graphs with P\'osa property.
\begin{lemma}\label{bipartite}
Suppose $G$ has $n$ vertices and is stable under taking $k$-closure.
Let $q$ be the minimum integer such that $G$ has $(n-k+q, q)$-P\'osa property and $q \leq \frac{k-1}{2}$.
If $T$ is the set of vertices in $V(G)$ with degree at least $k-q$ and $T^{\prime}=V(G) \backslash T$, then $G\left[T, T^{\prime}\right]$ is a complete bipartite graph. \end{lemma}
\noindent\textbf{Proof.}
Assume that $G\left[T, T^{\prime}\right]$ is not a complete bipartite graph.
Choose two non-adjacent vertices $u \in T$ and $v \in T^{\prime}$ such that $d(u)+d(v)$ is as large as possible.
Clearly, $d(u)+d(v) \leq k-1$ and $T$ forms a clique in $G$ as $G$ is stable under taking $k$-closure.
Now denote by $S$ the set of vertices in $V \backslash\{u\}$ which are not adjacent to $u$ in G. Clearly, for any $v^{\prime} \in S, d\left(v^{\prime}\right) \leq d(v)$ and $$
|S|=n-1-d(u) \geq n-k+d(v). $$
Since $d(u) \geq k-q$ and $d(u)+d(v) \leq k-1$, we have $d(v) \leq q-1$.
Let $q^{\prime}=d(v) \leq q-1$.
We have at least $n-k+q^{\prime}$ vertices in $V(G)$ with degree at most $q^{\prime}$.
Then $G$ has $(n-k+q', q')$-P\'osa property with $q'<q$, which contradicts the minimality of $q$.
The lemma follows.\qed
Let $g(k,\Delta)$ be the maximum number of edges in a graph in which every linear forest has at most $k$ edges and the maximum degree is at most $\Delta$.
The following lemma gives upper bounds on $g(k,\Delta)$.
\begin{lemma}\label{k2} For $k\geq 1$ and $\Delta\geq 3$, \\ (i) ~$g(k,2)\leq \frac{3}{2}k.$\\ (ii) $g(k,\Delta)\leq k(\Delta-1).$ \end{lemma}
\noindent\textbf{Proof of (i).}
Let $G$ be an $\mathcal{L}_{k+1}$-free graph with $e(G)=g(k,2)$ and $\Delta(G)\leq 2$.
Clearly, $g(1,2)=1$ and $g(2,2)=3$.
Now suppose that $k\geq 3$.
Since the maximum degree is at most 2, each nontrivial component is either a path or a cycle.
We claim that each component with at least 3 vertices is a cycle.
If not, adding an edge between the two ends of such a path yields a graph that is still $\mathcal{L}_{k+1}$-free, contradicting the maximality of $G$.
If there is a component consisting of exactly one edge, we replace this edge and a component $C_{\ell}$ in $G$ with $C_{\ell+1}$.
Then the resulting graph is still $\mathcal{L}_{k+1}$-free and the number of edges is equal to $g(k,2)$.
Therefore, we can further assume that each nontrivial component of $G$ is a cycle.
Let $C_{k_1}$, \ldots, $C_{k_t}$ be the nontrivial components of $G$.
Then $k=(k_1-1) + \cdots + (k_t-1)$ and $e(G)=k_1+\cdots+k_t=k+t$.
Since $k_i-1\geq 2$ for each $i$, we have $t\leq \frac{k}{2}$.
Thus $g(k,2)=e(G)\leq \frac{3}{2}k$.\qed
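The small values quoted in the proof, $g(1,2)=1$ and $g(2,2)=3$, can be confirmed by exhaustive search over graphs on five vertices. The sketch below is our own illustration (not from the paper); it tests whether a graph is $\mathcal{L}_{k+1}$-free by enumerating edge subsets that form a disjoint union of paths.

```python
from itertools import combinations

def num_components(n, edges):
    """Number of connected components via a simple union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in range(n)})

def max_linear_forest(n, edges):
    """Largest number of edges in a sub-linear-forest (disjoint paths)."""
    for r in range(len(edges), 0, -1):
        for sub in combinations(edges, r):
            deg = [0] * n
            for u, v in sub:
                deg[u] += 1; deg[v] += 1
            # a linear forest has max degree <= 2 and is acyclic
            if max(deg) <= 2 and r == n - num_components(n, sub):
                return r
    return 0

def g_bruteforce(k, delta, n=5):
    """Max edges over n-vertex graphs with max degree <= delta in which
    every linear forest has at most k edges (i.e. L_{k+1}-free)."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = [e for i, e in enumerate(pairs) if mask >> i & 1]
        deg = [0] * n
        for u, v in edges:
            deg[u] += 1; deg[v] += 1
        if max(deg) > delta or len(edges) <= best:
            continue
        if max_linear_forest(n, edges) <= k:
            best = len(edges)
    return best

assert g_bruteforce(1, 2) == 1 and g_bruteforce(2, 2) == 3
```

For $k=2$ the maximum is realized by a triangle, whose largest linear forest has only two edges.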
\noindent\textbf{Proof of (ii).}
We use induction on $k$.
It is easy to check that $g(1,\Delta)=1$ and $g(2,\Delta)=\Delta$.
Thus the lemma holds for $k=1,2$.
Suppose that the lemma holds for all $k'<k$.
Let $G$ be an $\mathcal{L}_{k+1}$-free graph with $\Delta(G)\leq \Delta$.
Let $P=v_0v_1\cdots v_{t}$ be the longest path in $G$ and $B=V(G)\backslash V(P)$.
Then $G[B]$ is $\mathcal{L}_{k+1-t}$-free and $e(G[B])\leq (k-t)(\Delta-1)$ by the induction hypothesis.
Since $P$ is the longest path in $G$, $d_B(v_0)=d_B(v_t)=0$ and $d_B(v_i)\leq \Delta-2$ for $1\leq i\leq t-1$.
Thus, $$ \begin{aligned}
e(G[V(P)])+e_G(V(P), B)
& = \frac{1}{2}\left(\sum\limits_{i=0}^t d_G(v_i)+\sum\limits_{i=0}^t d_B(v_i)\right)\\
& \leq \frac{1}{2}\left((t+1)\Delta+(t-1)(\Delta-2)\right)\\
& = t(\Delta-1)+1. \end{aligned} $$
The equality holds only if $d_G(v_0)=\cdots=d_G(v_t)=\Delta$, $d_B(v_1)=\cdots=d_B(v_{t-1})=\Delta-2$ and $d_B(v_0)=d_B(v_t)=0$ hold simultaneously, which is impossible.
Therefore, $e(G[V(P)])+e_G(V(P), B)\leq t(\Delta-1)$.
Moreover, we have
$$ \begin{aligned}
e(G)&=e(G[B])+e(G[V(P)])+e_G(V(P), B)\\
&\leq (k-t)(\Delta-1)+t(\Delta-1)\\
&\leq k(\Delta-1). \end{aligned} $$ \qed
\noindent\textbf{Remark.}
The graph consisting of $k/3$ pairwise disjoint copies of $K_4$ shows that the bound in Lemma \ref{k2} (ii) is sharp when $3$ divides $k$ and $\Delta=3$.
For nonnegative integers $m, l, r$, the following combinatorial identity (Vandermonde's identity) is well known. \begin{eqnarray}\label{3.1} \binom{m+l}{r}=\sum_{j=0}^{r}\binom{m}{j}\binom{l}{r-j} \end{eqnarray}
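Identity (\ref{3.1}) is easy to spot-check numerically; the snippet below, added purely as an aside, verifies it over a small range (Python's `math.comb` conveniently returns $0$ when the lower index exceeds the upper one).

```python
from math import comb

# Vandermonde: C(m+l, r) = sum_j C(m, j) * C(l, r-j)
for m in range(7):
    for l in range(7):
        for r in range(m + l + 1):
            assert comb(m + l, r) == sum(comb(m, j) * comb(l, r - j)
                                         for j in range(r + 1))
```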
The following lemma bounds the number of $r$-cliques by the number of edges.
\begin{lemma}[\cite{2021Chakraborti}]\label{cliques} Let $r \geq 3$ be an integer, and let $x \geq r$ be a real number. Then, every graph with exactly $\binom{x}{2}$ edges contains at most $\binom{x}{r}$ cliques of order $r$. \end{lemma}
For two disjoint vertex sets $T$ and $T'$ of $G$, we use $N_r^i\left(T, T'\right)$ and $N_r^{\geq i}\left(T, T'\right)$ to denote
the number of $r$-cliques in $G[T,T']$ that contain exactly $i$ vertices and at least $i$ vertices in $T'$, respectively.
\noindent\textbf{Proof of Theorem \ref{stab2}.}
Let $G^{\prime}$ be the $k$-closure of $G$.
Then $G'$ is $\mathcal{L}_{k}$-free from Lemma \ref{closure}.
By Lemma \ref{posa}, there exists an integer $q$ with $d \leq q \leq \lfloor\frac{k-1}{2}\rfloor$ such that $G^{\prime}$ has $(n-k+q, q)$-P\'osa property.
Furthermore, we assume $q$ is as small as possible.
Then either $q=\lfloor\frac{k-1}{2}\rfloor$ or $q=\lfloor\frac{k-3}{2}\rfloor$.
Otherwise, $d \leq q \leq \lfloor\frac{k-5}{2}\rfloor$ implies that $N_r(G) \leq \binom{k-q}{r}+(n-k+q)\binom{q}{r-1}= h_r(n, k, q) \leq \max \left\{h_r(n, k, d),h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right)\right\}$, a contradiction.
\noindent\textbf{ (i)} $k$ is odd.
\noindent\textbf{Case 1.} $q=\frac{k-1}{2}$.
Let $T_{1}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+1}{2}$, i.e., $$ T_{1}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+1}{2}\right\} . $$
Then $T_1$ is a clique in $G^{\prime}$. Let $T_1'=V\left(G^{\prime}\right) \backslash T_{1}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{1}, T_1'\right]$ is a complete bipartite graph. We will show that $\left|T_{1}\right|=\frac{k-1}{2}$ or $\left|T_{1}\right|=\frac{k-3}{2}$.
\noindent\textbf{Claim 1.} $\left|T_{1}\right|\leq\frac{k-1}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{1}\right|\geq\frac{k+1}{2}$. Since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, all vertices in $T_1'$ have degree at least $\frac{k+1}{2}$.
This implies that $T'_1$ is empty.
Thus $G'$ is a complete graph.
Since $n\geq k+1$, $G'$ contains a linear forest of size $k$, a contradiction. \qed
\noindent\textbf{Claim 2.} $\left|T_{1}\right|\geq\frac{k-3}{2}$.
\noindent\textbf{Proof.} Otherwise, $\left|T_{1}\right|\leq\frac{k-5}{2}$.
Suppose $|T_{1}|=\frac{k-1}{2}-t$; then $2\leq t\leq \frac{k-1}{2}$.
Since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{1}^{\prime}\right]$ is at most $t$.
Moreover, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_{2t+1}$-free.
Otherwise we will find a linear forest of size at least $k$ in $G^{\prime}$.
By Lemma \ref{k2}, $e(G'[T_{1}^{\prime}])\leq g(2t,t)\leq 2t(t-1)$ when $t\geq 3$ and $e(G'[T_{1}^{\prime}])\leq g(2t,t)\leq 6$ when $t=2$.
Suppose $uv\in E(G'[T_1'])$.
Since the degrees of $u$ and $v$ are at most $\frac{k-1}{2}$, $u$ and $v$ have at most $\frac{k-3}{2}$ common neighbors. Thus the edge $uv$ is contained in at most $\binom{\frac{k-3}{2}}{r-2}$ $r$-cliques.
If $t=2$, then
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k-5}{2}}{r} + \left(n-\frac{k-5}{2}\right)\binom{\frac{k-5}{2}}{r-1}+ 6\binom{\frac{k-3}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}+5\binom{\frac{k-5}{2}}{r-1}+6 \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the last inequality follows from (\ref{3.1}), a contradiction.
If $3\leq t\leq \frac{k-1}{2}$, then
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k-1}{2}-t}{r} + \left(n-\frac{k-1}{2}+t\right)\binom{\frac{k-1}{2}-t}{r-1}+ 2t(t-1) \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k-7}{2}\right)\binom{\frac{k-7}{2}}{r-1}+ \frac{(k-1)(k-3)}{2} \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k+5}{2}\right)\left(\binom{\frac{k-5}{2}}{r-1}-\binom{\frac{k-7}{2}}{r-2}\right) +6\binom{\frac{k-7}{2}}{r-1} + \frac{(k-1)(k-3)}{2} \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 1 and Claim 2, we have $\left|T_{1}\right|=\frac{k-1}{2}$ or $\left|T_{1}\right|=\frac{k-3}{2}$.
When $\left|T_{1}\right|=\frac{k-3}{2}$, since $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph and all the vertices in $T_{1}^{\prime}$ have degree at most $\frac{k-1}{2}$, it follows that all vertices in $T_{1}^{\prime}$ have degree at most one in $G^{\prime}\left[T_{1}^{\prime}\right]$.
Therefore, $G'[T_{1}^{\prime}]$ consists of independent edges and isolated vertices.
We claim there are at most two edges in $G^{\prime}\left[T_{1}^{\prime}\right]$.
Otherwise, one can find $P_{k-2}\cup 3 P_2$ in $G'$, a contradiction.
Thus, $G'\subseteq H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$. When $\left|T_{1}\right|=\frac{k-1}{2}$, since $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph and vertices in $T_{1}^{\prime}$ have degree at most $\frac{k-1}{2}$, it follows that $T_{1}^{\prime}$ forms an independent set of $G^{\prime}$.
Then $G^{\prime}$ is isomorphic to $H(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor)$.
\noindent\textbf{Case 2.} $q=\frac{k-3}{2}$.
Let $T_{2}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+3}{2}$, i.e., $$ T_{2}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+3}{2}\right\} . $$ Then $T_2$ is a clique in $G^{\prime}$. Let $T_{2}^{\prime}=V\left(G^{\prime}\right) \backslash T_{2}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph. We will show that $\left|T_{2}\right|=\frac{k-3}{2}$.
\noindent\textbf{Claim 3.} $\left|T_{2}\right|\leq\frac{k-3}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\geq\frac{k-1}{2}$.
The fact $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph implies that all vertices in $T_{2}^{\prime}$ have degree at least $\frac{k-1}{2}$.
Therefore $G'$ has no vertex with degree less than or equal to $\frac{k-3}{2}$, which contradicts the fact that $G'$ has the $(n-k+\frac{k-3}{2}, \frac{k-3}{2})$-P\'osa property. \qed
\noindent\textbf{Claim 4.} $\left|T_{2}\right|\geq\frac{k-3}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\leq\frac{k-5}{2}$. Suppose $|T_{2}|= \frac{k-1}{2}-t$, where $2\leq t\leq \frac{k-1}{2}$.
Since $G^{\prime}\left[T_2, T_2^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{2}^{\prime}\right]$ is at most $t+1$.
Moreover, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_{2t+1}$-free.
Otherwise we will find a linear forest of size at least $k$ in $G^{\prime}$.
When $t=2$,
since $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_5$-free with maximum degree at most 3.
By Lemma \ref{k2}, $e(G'[T_{2}^{\prime}])\leq g(4,3)< 10=\binom{5}{2}$.
Then we have $N_r(G'[T'_{2}])\leq \binom{5}{r}$ from Lemma \ref{cliques}.
Thus the following inequality holds:
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1(T_2, T'_2) + \sum\limits_{i=2}^5 N_r^i(T_2, T_2')\\[1mm]
& \leq \binom{\frac{k-5}{2}}{r} + \left(n-\frac{k-5}{2}\right)\binom{\frac{k-5}{2}}{r-1}+ \sum\limits_{i=2}^5\binom{5}{i} \binom{\frac{k-5}{2}}{r-i}\\[1mm]
& = \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the second equality follows from (\ref{3.1}), a contradiction.
When $3\leq t\leq \frac{k-1}{2}$,
by Lemma \ref{k2}, $e(G'[T_{2}^{\prime}])\leq g(2t,t+1)\leq 2t^2$.
Note each edge in $G'[T_2']$ is contained in at most $\binom{\frac{k-1}{2}}{r-2}$ $r$-cliques.
Thus we have
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1\left(T_2, T'_2\right) + N_r^{\geq 2}\left(T_2,T'_2\right)\\[1mm]
& \leq \binom{\frac{k-1}{2}-t}{r} + \left(n-\frac{k-1}{2}+t\right)\binom{\frac{k-1}{2}-t}{r-1}+ 2t^2 \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k-7}{2}\right)\binom{\frac{k-7}{2}}{r-1}+ \frac{(k-1)^2}{2} \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k+5}{2}\right)\left[\binom{\frac{k-5}{2}}{r-1}-\binom{\frac{k-7}{2}}{r-2}\right] +6\binom{\frac{k-7}{2}}{r-1} + \frac{(k-1)^2}{2} \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 3 and Claim 4, we have
$\left|T_{2}\right|=\frac{k-3}{2}$. Then $G^{\prime}\left[T_{2}^{\prime}\right]$ must be $\mathcal{L}_3$-free. Otherwise we can find a linear forest of size $k$.
Moreover, each vertex in $G^{\prime}\left[T_{2}^{\prime}\right]$ has degree at most two.
Thus $G'[T_{2}^{\prime}]$ is a subgraph of $C_3\cup (n-3)K_1$ or $2P_2\cup (n-4)K_1$.
It follows that $G^{\prime}$ is a subgraph of $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.
Combining the two cases above, we get that $G$ is a subgraph of $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.\qed
\noindent\textbf{ (ii)} $k$ is even.
\noindent\textbf{Case 1.} $q=\frac{k-2}{2}$.
Let $T_{1}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+2}{2}$, i.e., $$ T_{1}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+2}{2}\right\} . $$
Then $T_1$ is a clique in $G^{\prime}$. Let $T_1'=V\left(G^{\prime}\right) \backslash T_{1}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{1}, T_1'\right]$ is a complete bipartite graph. We will show that $\left|T_{1}\right|=\frac{k-2}{2}$ or $\left|T_{1}\right|=\frac{k-4}{2}$.
\noindent\textbf{Claim 5.} $\left|T_{1}\right|\leq\frac{k-2}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{1}\right|\geq\frac{k}{2}$. The fact that $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph implies that all vertices in $T_{1}^{\prime}$ have degree at least $\frac{k}{2}$. Then $G^{\prime}$ has no vertex with degree less than or equal to $\frac{k-2}{2}$, which contradicts the fact that $G^{\prime}$ has the $(n-k+\frac{k-2}{2}, \frac{k-2}{2})$-P\'osa property. \qed
\noindent\textbf{Claim 6.} $\left|T_{1}\right|\geq\frac{k-4}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{1}\right|\leq\frac{k-6}{2}$. Suppose $|T_{1}|= \frac{k}{2}-t$, then $3\leq t\leq \frac{k}{2}$.
Since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{1}^{\prime}\right]$ is at most $t$.
Moreover, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_{2t}$-free.
Otherwise we will find a linear forest of size at least $k$ in $G^{\prime}$.
By Lemma \ref{k2}, $e(G'[T_{1}^{\prime}])\leq g(2t-1,t)\leq (2t-1)(t-1)$.
If $t=3$, then
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k-6}{2}}{r} + \left(n-\frac{k-6}{2}\right)\binom{\frac{k-6}{2}}{r-1}+ 10 \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}+6\binom{\frac{k-6}{2}}{r-1}+ 10 \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the last inequality follows from (\ref{3.1}), a contradiction.
If $4\leq t\leq \frac{k}{2}$, then
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k}{2}-t}{r} + \left(n-\frac{k}{2}+t\right)\binom{\frac{k}{2}-t}{r-1}+ (2t-1)(t-1) \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k-8}{2}\right)\binom{\frac{k-8}{2}}{r-1}+ \frac{(k-1)(k-2)}{2} \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k+6}{2}\right)\left(\binom{\frac{k-6}{2}}{r-1}-\binom{\frac{k-8}{2}}{r-2}\right)+7\binom{\frac{k-8}{2}}{r-1}+ \frac{(k-1)(k-2)}{2} \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the third inequality holds since (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 5 and Claim 6, we have $\left|T_{1}\right|=\frac{k-4}{2}$ or $\left|T_{1}\right|=\frac{k-2}{2}$. When $\left|T_{1}\right|=\frac{k-4}{2}$, since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{1}^{\prime}\right]$ is at most two.
Moreover, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_4$-free.
Therefore, $G^{\prime}\left[T_{1}^{\prime}\right]$ (after deleting isolated vertices) is a subgraph of one of $C_4$, $C_3\cup P_2$, or $3P_2$.
Thus $G$ is a subgraph of $H(n,k,\lfloor\frac{k-3}{2}\rfloor)$, $H^{+}(n,k-1,\lfloor\frac{k-3}{2}\rfloor)$ or $H^{++}(n,k-2,\lfloor\frac{k-3}{2}\rfloor)$. When $\left|T_{1}\right|=\frac{k-2}{2}$, since $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_{2}$-free, i.e., there is at most one edge in $G^{\prime}\left[T_{1}^{\prime}\right]$.
Thus, $G'\subseteq H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$.
\noindent\textbf{Case 2.} $q=\frac{k-4}{2}$.
Let $T_{2}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+4}{2}$, i.e., $$ T_{2}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+4}{2}\right\} . $$
Then $T_2$ is a clique in $G^{\prime}$. Let $T_{2}^{\prime}=V\left(G^{\prime}\right) \backslash T_{2}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph. We will show that $\left|T_{2}\right|=\frac{k-4}{2}$.
\noindent\textbf{Claim 7.} $\left|T_{2}\right|\leq\frac{k-4}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\geq\frac{k-2}{2}$. The fact $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph implies that all vertices in $T_{2}^{\prime}$ have degree at least $\frac{k-2}{2}$.
Therefore $G'$ has no vertex with degree less than or equal to $\frac{k-4}{2}$, which contradicts the fact that $G'$ has the $(n-k+\frac{k-4}{2}, \frac{k-4}{2})$-P\'osa property. \qed
\noindent\textbf{Claim 8.} $\left|T_{2}\right|\geq\frac{k-4}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\leq\frac{k-6}{2}$. Suppose $|T_{2}|=\frac{k}{2}-t$, then $3\leq t\leq \frac{k}{2}$.
Since $G^{\prime}\left[T_2, T_2^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{2}^{\prime}\right]$ is at most $t+1$.
Moreover, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_{2t}$-free.
Otherwise, we will find a linear forest of size at least $k$ in $G^{\prime}$.
When $t=3$, since $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_6$-free with maximum degree at most 4.
By Lemma \ref{k2} (ii), $e(G'[T'_{2}])\leq g(5,4)\leq 15=\binom{6}{2}$.
So $N_r(G'[T'_{2}])\leq \binom{6}{r}$ from Lemma \ref{cliques}.
Then we have
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1(T_2, T'_2) + \sum\limits_{i=2}^6 N_r^i(T_2, T'_2)\\[1mm]
& \leq \binom{\frac{k-6}{2}}{r} + \left(n-\frac{k-6}{2}\right)\binom{\frac{k-6}{2}}{r-1}+ \sum\limits_{i=2}^6\binom{6}{i} \binom{\frac{k}{2}-3}{r-i}\\[1mm]
& = \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the second equality follows from (\ref{3.1}), a contradiction.
When $4\leq t\leq \frac{k}{2}$,
by Lemma \ref{k2}, $e(G'[T_{2}^{\prime}])\leq g(2t-1,t+1)\leq (2t-1)t$.
Thus we have
$$ \begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1\left(T_2, T'_2\right) + N_r^{\geq 2}\left(T_2,T'_2\right)\\[1mm]
& \leq \binom{\frac{k}{2}-t}{r} + \left(n-\frac{k}{2}+t\right)\binom{\frac{k}{2}-t}{r-1}+ (2t-1)t \binom{\frac{k}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k-8}{2}\right)\binom{\frac{k-8}{2}}{r-1}+ \frac{k(k-1)}{2} \binom{\frac{k}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k+6}{2}\right)\left[\binom{\frac{k-6}{2}}{r-1}-\binom{\frac{k-8}{2}}{r-2}\right] + 7\binom{\frac{k-8}{2}}{r-1}+\frac{k(k-1)}{2} \binom{\frac{k}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right), \end{aligned} $$ where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 7 and Claim 8, we have $\left|T_{2}\right|=\frac{k-4}{2}$.
Since $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph, all vertices in $T_{2}^{\prime}$ have degree at least $\frac{k-4}{2}$.
The $(n-k+\frac{k-4}{2},\frac{k-4}{2})$-P\'osa property implies that there are at most $4$ vertices in $T_{2}^{\prime}$ with degree greater than $0$.
Thus $G^{\prime}\left[T_{2}^{\prime}\right]$ is a subgraph of $K_4\cup (n-\frac{k+4}{2})K_1$. Then $G\subseteq H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.
Combining the two cases above, we get that $G$ is a subgraph of $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$, $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^{++}\big(n,k-2,\lfloor\frac{k-3}{2}\rfloor\big)$. The proof is finished. \qed
\section{The clique version of the stability of Erd\H{o}s-Gallai Theorem}
Notice that a linear forest with at least $2k + 1$ edges contains a matching of size at least $k + 1$. Hence a graph $G$ with $\nu(G) \leq k$ must be $\mathcal{L}_{2k+1}$-free.
Combining Theorem \ref{stab2} (i) and further discussions, we obtain Theorem \ref{thm11}.
\noindent\textbf{Proof of Theorem \ref{thm11}.}
Let $G$ be a graph satisfying the conditions of Theorem \ref{thm11}.
Then $G$ is $\mathcal{L}_{2k+1}$-free.
By Theorem \ref{stab2} (i), if $G\nsubseteq H^+\left(n, 2k, k-1\right)$, then $G$ is a subgraph of $H(n, 2k+1, k)$ or $H\left(n, 2k+1, k-1\right)$.
Next we will show that if $G\subseteq H^+\left(n, 2k, k-1\right)$, then $G\subseteq H\left(n, 2k+1, k-1\right)$.
If $G\subseteq H^+\left(n, 2k, k-1\right)$ and $G\subseteq H\left(n, 2k+1, k-1\right)$, then we are done.
Now we suppose that $G\subseteq H^+(n, 2k, k-1)$ and $G\nsubseteq H\left(n, 2k+1, k-1\right)$.
Note that $H^+(n, 2k, k-1)$ can be viewed as a graph obtained from $H(n, 2k-1, k-1)$
by adding two independent edges, say $x_1y_1$ and $x_2y_2$.
If $G\subseteq H^+(n, 2k, k-1)$ but $G\nsubseteq H\left(n, 2k+1, k-1\right)$, then $x_1y_1$ and $x_2y_2$ must be in $E(G)$.
Let $G_1=G-\{x_1,y_1,x_2,y_2\}$.
Then $G_1\subseteq H(n-4,2k-1,k-1)$ and \begin{align}\label{G'}
N_r(G_1)
& > h_r(n,2k+1,k-2)-4\binom{k-1}{r-1}-2\binom{k-1}{r-2}\notag\\
& > \binom{k-1}{r}+(n-k-3)\binom{k-2}{r-1}. \end{align}
Since $G_1\subseteq H(n-4,2k-1,k-1)$, there exists an independent set $I$ with $|I|=n-k-3$ such that $d_{G_1}(v)\leq k-1$ for all $v\in I$.
Suppose that there are $t$ vertices in $I$ with degree $k-1$.
Then $t\leq k-2$.
Otherwise, we can find a $(k-1)$-matching $M$ in $G_1$.
The $(k-1)$-matching $M$ together with the edges $x_1y_1$ and $x_2y_2$ form a $(k+1)$-matching in $G$, a contradiction.
\noindent\textbf{Case 1.} $t=0$.
In this case, all vertices in $I$ have degree at most $k-2$.
Thus
$$N_r(G_1)\leq \binom{k-1}{r}+(n-k-3)\binom{k-2}{r-1},$$
contradicting (\ref{G'}).
\noindent\textbf{Case 2.} $1\leq t\leq k-2$.
There are at most $k-2-t$ vertices in $I$ with degree $k-2$.
Otherwise, for any $S\subseteq V(G_1)\setminus I$, $|N(S)|\geq |S|$. By Hall's Theorem,
there exists a $(k-1)$-matching $M$ in $G_1$. The $(k-1)$-matching $M$ together with the edges $x_1y_1$ and $x_2y_2$ form a $(k+1)$-matching in $G$, a contradiction.
Thus \begin{align*}
N_r(G_1)
& \leq \binom{k-1}{r}+t\binom{k-1}{r-1}+(k-2-t)\binom{k-2}{r-1}+(n-k-2)\binom{k-3}{r-1}\\[1mm]
& < \binom{k-1}{r}+(k-1)\binom{k-1}{r-1}+(n-k-3)\binom{k-3}{r-1}\\[1mm]
& < \binom{k-1}{r}+(n-k-3)\binom{k-2}{r-1}, \end{align*}
where the last inequality follows from $n>(2k+1)^5$, contradicting (\ref{G'}).
Thus $G\subseteq H^+(n, 2k, k-1)$ implies $G\subseteq H\left(n, 2k+1, k-1\right)$.
That is, $G$ is a subgraph of $H(n, 2k+1, k)$ or $H(n, 2k+1, k-1)$, completing the proof. \qed
\end{document}
\begin{document}
\def\mathcal{O}{\mathcal{O}} \def\mathcal{S}{\mathcal{S}} \def\mathcal{X}{\mathcal{X}} \def\mathcal{Y}{\mathcal{Y}} \def\mathcal{S}'{\mathcal{S}'} \defJ{J} \defL{L} \defR{R}
\newcommand{\removableFootnote}[1]{}
\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{remark}
\newtheorem{remark}{Remark}[section]
\title{ Sequences of Periodic Solutions and Infinitely Many Coexisting Attractors in the Border-Collision Normal Form. } \author{ D.J.W.~Simpson\\ Institute of Fundamental Sciences\\ Massey University\\ Palmerston North\\ New Zealand} \maketitle
\begin{abstract} The border-collision normal form is a piecewise-linear continuous map on $\mathbb{R}^N$ that describes dynamics near border-collision bifurcations of nonsmooth maps. This paper studies a codimension-three scenario at which the border-collision normal form with $N=2$ exhibits infinitely many attracting periodic solutions. In this scenario there is a saddle-type periodic solution with branches of stable and unstable manifolds that are coincident, and an infinite sequence of attracting periodic solutions that converges to an orbit homoclinic to the saddle-type solution. Several important features of the scenario are shown to be universal, and three examples are given. For one of these examples infinite coexistence is proved directly by explicitly computing periodic solutions in the infinite sequence. \end{abstract}
\section{Introduction} \label{sec:intro}
For a map that is smooth except on codimension-one switching manifolds where it is only continuous, a border-collision bifurcation occurs when a fixed point of the map collides with a switching manifold under parameter change, and local to the bifurcation the map is asymptotically piecewise-linear \cite{DiBu08,ZhMo03,DiFe99}. Except in special cases, dynamical behavior near a border-collision bifurcation is completely determined by the linear components. Upon omitting higher order terms and introducing convenient local coordinates, in two dimensions the map may be written as \begin{equation} \begin{gathered} \left[ \begin{array}{c} x_{i+1} \\ y_{i+1} \end{array} \right] = \left\{ \begin{array}{lc} f^{L}(x_i,y_i) \;, & x_i \le 0 \\ f^{R}(x_i,y_i) \;, & x_i \ge 0 \end{array} \right. \;, \\ f^{J}(x,y) = A_{J} \left[ \begin{array}{c} x \\ y \end{array} \right] + \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \mu \;, \qquad A_{J} = \left[ \begin{array}{cc} \tau_{J} & 1 \\ -\delta_{J} & 0 \end{array} \right] \;, \qquad
J \in \{ L, R \} \;. \end{gathered} \label{eq:f} \end{equation} The two-dimensional border-collision normal form (\ref{eq:f}) is piecewise-linear and continuous on the single switching manifold, $x=0$. The four parameters, $\tau_{L}$, $\delta_{L}$, $\tau_{R}$ and $\delta_{R}$, may take any value in $\mathbb{R}$. The remaining parameter $\mu$ controls the border-collision bifurcation. The bifurcation occurs at $\mu = 0$, and in the context of border-collision $\mu$ is presumed to be small. However, for $\mu \ne 0$ the structure of the dynamics of (\ref{eq:f}) is independent of the magnitude of $\mu$ because the half-maps $f^{L}$ and $f^{R}$ are affine. All bounded invariant sets of (\ref{eq:f}) collapse to the origin as $\mu \to 0$. Hence for the purposes of determining the behavior of (\ref{eq:f}) it suffices to assume $\mu \in \{ -1, 0, 1 \}$. In $N$ dimensions, the matrices in the border-collision normal form are $N \times N$ companion matrices and there are a total of $2 N$ parameters in addition to $\mu$ \cite{Di03}.
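For readers who wish to experiment, the map (\ref{eq:f}) is straightforward to iterate numerically. The sketch below is our own illustration; the parameter values in the example are arbitrary placeholders, not values used in the paper.

```python
def bcnf_step(x, y, tauL, deltaL, tauR, deltaR, mu):
    """One iterate of the two-dimensional border-collision normal form:
    the half-map f^L is applied when x <= 0, f^R when x > 0."""
    tau, delta = (tauL, deltaL) if x <= 0 else (tauR, deltaR)
    return tau * x + y + mu, -delta * x

def orbit(x0, y0, n, params):
    """Forward orbit of length n+1 starting from (x0, y0)."""
    pts = [(x0, y0)]
    for _ in range(n):
        pts.append(bcnf_step(*pts[-1], *params))
    return pts

# with tau_J = delta_J = 0 on both sides, every orbit reaches the
# fixed point (mu, 0) within two iterates
pts = orbit(0.3, -0.2, 5, (0.0, 0.0, 0.0, 0.0, 1.0))
```

Because the half-maps are affine, rescaling $\mu$ merely rescales any bounded invariant set, consistent with the remark above.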
The construction and basic properties of (\ref{eq:f}) were first described by Nusse and Yorke \cite{NuYo92}\removableFootnote{ though some study of piecewise-linear continuous maps had been performed earlier, refer to references within \cite{DiFe99,MiGa96} }. The border-collision normal form also arises as a Poincar\'{e} map for corner-collisions in Filippov systems \cite{DiBu01c}, as well as grazing-sliding bifurcations in these systems, albeit with the restriction that one of the two matrices in the map has a zero eigenvalue \cite{DiKo02,KuRi03,Co08b}. Recently it has been shown that dynamics near a grazing-sliding bifurcation in an $(N+2)$-dimensional system may be partially captured by the $N$-dimensional border-collision normal form because sliding motion relates to a loss of dimension \cite{GlJe12}. Various piecewise-linear continuous maps (that may be transformed to the normal form) have been used as mathematical models, particularly in social sciences \cite{PuSu06}.
Since (\ref{eq:f}) is piecewise-linear, it is extremely nonlinear yet relatively amenable to an exact analysis. Glendinning and Wong showed that (\ref{eq:f}) may exhibit an attractor that fills a two-dimensional region of phase space by constructing Markov partitions to calculate this region exactly \cite{GlWo11}. Arnold tongues of (\ref{eq:f}) typically display a chain structure with points of zero width at which there exists an invariant polygon \cite{ZhMo06b,SuGa08,SiMe09}. The map (\ref{eq:f}) may have a unique fixed point for all $\mu$ that is asymptotically stable for all $\mu \ne 0$, yet unstable when $\mu = 0$ \cite{Do07,HaAb04}. Additional dynamics is possible when (\ref{eq:f}) is non-invertible \cite{MiGa96}, such as snap-back repellers which imply chaotic dynamics and the coexistence of infinitely many unstable periodic solutions \cite{Gl10}.
Several authors have described coexisting attractors for (\ref{eq:f}). Since bounded attractors converge to the origin as $\mu \to 0$, coexistence produces an unavoidable uncertainty near the border-collision bifurcation in the presence of small noise \cite{DuNu99}. The coexistence of six attracting periodic solutions for (\ref{eq:f}) was noted briefly in \cite{Si10}. The purpose of this paper is to show that (\ref{eq:f}) may exhibit infinitely many attracting periodic solutions, and to describe conditions on the parameter values that indicate when this phenomenon may occur.
To study orbits of (\ref{eq:f}) it is helpful to consider symbol sequences, $\mathcal{S} : \mathbb{Z} \to \{ L, R \}$. We can associate such a symbol sequence to any orbit of (\ref{eq:f}) by setting $\mathcal{S}_i = L$ if $x_i < 0$ and $\mathcal{S}_i = R$ if $x_i > 0$, for all $i \in \mathbb{Z}$. (If $x_i = 0$, it is convenient to place no restriction on $\mathcal{S}_i$ because (\ref{eq:f}) is a continuous map.) Conversely, given a symbol sequence $\mathcal{S}$ and an initial point $(x_0,y_0)$, we can define a forward orbit that follows $\mathcal{S}$ by setting $(x_{i+1},y_{i+1}) = f^{\mathcal{S}_i}(x_i,y_i)$\removableFootnote{ If (\ref{eq:f}) is invertible we may define a full orbit. }. In general, iterating the two half-maps of (\ref{eq:f}) in this fashion produces a different orbit than iterations of (\ref{eq:f}). However, if the resulting orbit is {\it admissible}, that is $x_i \le 0$ whenever $\mathcal{S}_i = L$, and $x_i \ge 0$ whenever $\mathcal{S}_i = R$, then the orbit is identical to that produced by iterating (\ref{eq:f}). An orbit that is not admissible is said to be {\it virtual}. If $\mathcal{S}$ is periodic, a periodic solution that follows $\mathcal{S}$ is referred to as an {\it $\mathcal{S}$-cycle}.
\begin{figure}
\caption{ A phase portrait of the two-dimensional border-collision normal form (\ref{eq:f}) with $\mu = 1$ and (\ref{eq:paramF}), at which there are infinitely many attractors. There is an $R L R$-cycle (a period-$3$ solution with symbol sequence $R L R$) of saddle-type. Branches of the stable and unstable manifolds of the $R L R$-cycle (with stability indicated by arrows) are coincident and together with the $R L R$-cycle form an invariant quadrilateral.
For each $k \in \mathbb{Z}^+$, there is an attracting $\mathcal{S}[k]$-cycle, where $\mathcal{S}[k] = (R L R)^k L R$, (\ref{eq:SRLRkLR}), and a saddle-type $\mathcal{S}'[k]$-cycle, where $\mathcal{S}'[k] = (R L R)^k R R$, (\ref{eq:SRLRkRR}). The $\mathcal{S}[k]$ and $\mathcal{S}'[k]$-cycles are indicated by small circles and triangles, respectively, up to $k = 8$. So that these periodic solutions may be distinguished clearly, for each $k$, points of the $\mathcal{S}[k]$ and $\mathcal{S}'[k]$-cycles are connected with dotted line segments. Note that these line segments are not invariant sets and do not relate to the dynamics of the map. Also shown is the fixed point of $f^{R}$, $\left( \frac{1}{5},\frac{-3}{10} \right)$, which lies in the right half-plane and thus is a fixed point of (\ref{eq:f}). }
\label{fig:infFa}
\end{figure}
As an example, (\ref{eq:f}) has infinitely many attracting periodic solutions when $\mu = 1$ and the remaining parameter values are given by \begin{equation} \tau_{L} = -\frac{55}{117} \;, \qquad \delta_{L} = \frac{4}{9} \;, \qquad \tau_{R} = -\frac{5}{2} \;, \qquad \delta_{R} = \frac{3}{2} \;. \label{eq:paramF} \end{equation} Fig.~\ref{fig:infFa} shows a phase portrait. Attracting periodic solutions are $\mathcal{S}[k]$-cycles, where $k \in \mathbb{Z}^+$ (the set of positive integers) and \begin{equation} \mathcal{S}[k] = (R L R)^k L R \;, \label{eq:SRLRkLR} \end{equation} is a periodic symbol sequence of period $3k+2$. This example exhibits several features that in later sections are shown to be universal. There is an $R L R$-cycle of saddle-type; specifically its stability multipliers are $\frac{6}{13}$ and $\frac{13}{6}$. As $k \to \infty$, the $\mathcal{S}[k]$-cycles approach an orbit that is homoclinic to the $R L R$-cycle. In comparison, near homoclinic and heteroclinic orbits of ODEs there may be infinitely many attractors \cite{PaTa93,Ch97}, and for area-preserving maps there may be infinitely many elliptic periodic solutions \cite{GoSh00}. However, in Fig.~\ref{fig:infFa} the intersection of the stable and unstable manifolds of the $R L R$-cycle is non-transversal. The branches of the stable and unstable manifolds that intersect are coincident and there is no topological horseshoe. Furthermore, for every $k \in \mathbb{Z}^+$, there exist saddle-type $\mathcal{S}'[k]$-cycles, where \begin{equation} \mathcal{S}'[k] = (R L R)^k R R \;. \label{eq:SRLRkRR} \end{equation} Each $\mathcal{S}'[k]$ differs from $\mathcal{S}[k]$ by a single symbol. The stable manifolds of the $\mathcal{S}'[k]$-cycles appear to form the boundaries of the basins of attraction of the $\mathcal{S}[k]$-cycles that are shown in Fig.~\ref{fig:infFb}.
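The quoted stability multipliers $\frac{6}{13}$ and $\frac{13}{6}$ of the $R L R$-cycle can be checked numerically. A pure-Python sketch (no external libraries), assuming the standard companion-matrix form $A_J = \left[\begin{smallmatrix} \tau_J & 1 \\ -\delta_J & 0 \end{smallmatrix}\right]$ for the two half-maps:

```python
# Check that at (eq:paramF) the R L R-cycle has stability multipliers
# 6/13 and 13/6, i.e. the eigenvalues of M_{RLR} = A_R A_L A_R.

def companion(tau, delta):
    return [[tau, 1.0], [-delta, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = {'L': companion(-55/117, 4/9), 'R': companion(-5/2, 3/2)}

# M_S = A_{S_{n-1}} ... A_{S_0} for the word S = R L R.
M = matmul(A['R'], matmul(A['L'], A['R']))

tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Eigenvalues of a 2x2 matrix from its trace and determinant.
disc = (tr * tr - 4 * det) ** 0.5
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
```

Note that $\det(M_{RLR}) = \delta_R^2 \delta_L = 1$, consistent with the product of the multipliers, $\frac{6}{13} \cdot \frac{13}{6} = 1$.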
The remainder of this paper is organized as follows. Conditions for the existence, admissibility and stability of periodic solutions to (\ref{eq:f}) are given in \S\ref{sec:periodic}. Periodic solutions may be found by solving linear matrix equations because any composition of $f^{L}$ and $f^{R}$ is affine. Sequences of symbol sequences of the form $\mathcal{S}[k] = \mathcal{X}^k \mathcal{Y}$ are considered in \S\ref{sec:necessary}. The main result of this section is Theorem \ref{th:codim3} that gives consequences of the existence of infinitely many stable $\mathcal{S}[k]$-cycles when the $\mathcal{X}$-cycle is of saddle-type (as in Fig.~\ref{fig:infFa} for which $\mathcal{X} = R L R$). The theorem reveals three necessary conditions on the parameter values, from which we find that this scenario is codimension-three, and that the $\mathcal{S}[k]$-cycles limit to a homoclinic orbit as $k \to \infty$. In \S\ref{sec:further}, additional assumptions are placed on $\mathcal{X}$ and $\mathcal{Y}$ leading to further consequences, such as the coincidence of branches of the stable and unstable manifolds of the $\mathcal{X}$-cycle. In \S\ref{sec:finding}, the results are used to obtain parameter values of infinite coexistence for three different choices of $\mathcal{X}$ and $\mathcal{Y}$. For one of these choices, corresponding to Figs.~\ref{fig:infFa} and \ref{fig:infFb}, the existence of attracting $\mathcal{S}[k]$-cycles and saddle-type $\mathcal{S}'[k]$-cycles for all $k \in \mathbb{Z}^+$ is demonstrated formally in \S\ref{sec:sufficient}. Conclusions and future directions are discussed in \S\ref{sec:conc}.
\begin{figure}
\caption{ Basins of attraction of the $\mathcal{S}[k]$-cycles of Fig.~\ref{fig:infFa} up to $k = 8$, computed numerically by iterating (\ref{eq:f}) from a $2048 \times 1536$ grid of initial points. The color of each of the eight basins matches the color of the $\mathcal{S}[k]$-cycle shown in Fig.~\ref{fig:infFa}, with red for $k=1$ and dark green for $k=8$. Initial points are shaded white if the forward orbit appeared to diverge, and shaded black if the forward orbit appeared neither to converge to an $\mathcal{S}[k]$-cycle with $k \le 8$ nor to diverge. For clarity, the $\mathcal{S}'[k]$-cycles are not shown. }
\label{fig:infFb}
\end{figure}
\section{Periodic solutions} \label{sec:periodic}
Calculations of periodic solutions of $N$-dimensional piecewise-linear continuous maps are given in \cite{SiMe09,Si10,SiMe10}. In this section these calculations are summarized for the two-dimensional normal form (\ref{eq:f}).
Let $\mathcal{S} : \mathbb{Z} \to \{ L, R \}$ be a periodic symbol sequence with minimal period $n \ge 1$ (that is, $\mathcal{S}_{i+n} = \mathcal{S}_i$, for all $i \in \mathbb{Z}$, and $\mathcal{S}$ does not exhibit this property for a smaller value of $n$). Then the word $\mathcal{S}_0 \cdots \mathcal{S}_{n-1}$ is {\it primitive}, that is, it cannot be written as a power of a shorter word (e.g.~$R L R$ is primitive, but $R L R L = (R L)^2$ is not). Conversely, given a primitive word of length $n$, the sequence defined by the infinite repetition of this word has minimal period $n$. For this reason, throughout this paper whenever we write a periodic symbol sequence, as in (\ref{eq:SRLRkLR}) and (\ref{eq:SRLRkRR}), we list the symbols $\mathcal{S}_0 \cdots \mathcal{S}_{n-1}$.
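Primitivity of a word can be tested mechanically via a classical string criterion: a word $w$ is a nontrivial power if and only if $w$ occurs inside $w w$ with the first and last characters deleted. A small sketch:

```python
def is_primitive(word):
    """True iff `word` is not a power of a shorter word.
    Classical criterion: w is a nontrivial power of a shorter word iff
    w occurs in (w + w) with its first and last characters removed."""
    return word not in (word + word)[1:-1]
```

For example, `is_primitive("RLR")` is `True`, while `is_primitive("RLRL")` is `False` since $R L R L = (R L)^2$.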
Let \begin{equation} f^{\mathcal{S}} = f^{\mathcal{S}_{n-1}} \circ \cdots \circ f^{\mathcal{S}_0} \;, \label{eq:fS} \end{equation} denote the $n^{\rm th}$ iterate of (\ref{eq:f}) following $\mathcal{S}$. A straight-forward expansion leads to \begin{equation} f^{\mathcal{S}}(x,y) = M_{\mathcal{S}} \left[ \begin{array}{c} x \\ y \end{array} \right] + P_{\mathcal{S}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \mu \;, \label{eq:fS2} \end{equation} where \begin{equation} M_{\mathcal{S}} = A_{\mathcal{S}_{n-1}} \cdots A_{\mathcal{S}_0} \;, \qquad P_{\mathcal{S}} = I + A_{\mathcal{S}_{n-1}} + A_{\mathcal{S}_{n-1}} A_{\mathcal{S}_{n-2}} + \cdots + A_{\mathcal{S}_{n-1}} \cdots A_{\mathcal{S}_1} \;. \label{eq:MSPS} \end{equation} Let $\mathcal{S}^{(i)}$ denote the $i^{\rm th}$ left shift permutation of $\mathcal{S}$
(e.g.~if $\mathcal{S} = L L L R R$, then $\mathcal{S}^{(2)} = L R R L L$). The $i^{\rm th}$ point of an $\mathcal{S}$-cycle, denoted $\left( x^{\mathcal{S}}_i,y^{\mathcal{S}}_i \right)$, is a fixed point of $f^{\mathcal{S}^{(i)}}$. When $I - M_{\mathcal{S}^{(i)}}$ is non-singular, this point is unique. Since the spectrum of $I - M_{\mathcal{S}^{(i)}}$ is independent of $i$\removableFootnote{ Let $\lambda$ be an eigenvalue of $I - M_{\mathcal{S}}$.\\ Then $(I - M_{\mathcal{S}}) v = \lambda v$, for some nonzero vector $v \in \mathbb{R}^2$.\\ By multiplying both sides of this equation by $A_{\mathcal{S}_0}$ on the left, we obtain $\left( I - M_{\mathcal{S}^{(1)}} \right) A_{\mathcal{S}_0} v = \lambda A_{\mathcal{S}_0} v$.\\ Therefore $\lambda$ is an eigenvalue of $I - M_{\mathcal{S}^{(1)}}$.\\ By either a repetition or generalization of this argument we conclude that $\lambda$ is an eigenvalue of $I - M_{\mathcal{S}^{(i)}}$, for any $i$. }, we have the following result.
\begin{lemma}[Existence]
The $\mathcal{S}$-cycle is unique if and only if $\det \left( I - M_{\mathcal{S}} \right) \ne 0$. Moreover, if $\det \left( I - M_{\mathcal{S}} \right) \ne 0$, then for each $i$, \begin{equation} \left[ \begin{array}{c} x^{\mathcal{S}}_i \\ y^{\mathcal{S}}_i \end{array} \right] = \left( I - M_{\mathcal{S}^{(i)}} \right)^{-1} P_{\mathcal{S}^{(i)}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \mu \;. \label{eq:existence} \end{equation} \label{le:existence} \end{lemma}
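Lemma \ref{le:existence} amounts to solving a $2 \times 2$ linear system. A pure-Python sketch (using the companion-matrix form of the half-maps and the parameter values (\ref{eq:paramF}) with $\mu = 1$) that builds $M_{\mathcal{S}}$ and $P_{\mathcal{S}}$ as in (\ref{eq:MSPS}), solves (\ref{eq:existence}) for $i = 0$, and confirms the fixed-point property by iterating the half-maps:

```python
# Sketch of Lemma 1 (Existence): for a word S, build M_S and P_S as in
# (eq:MSPS), solve (I - M_S) z = P_S (mu, 0)^T, and confirm that z is a
# fixed point of the composition f^S.  Parameters are (eq:paramF), mu = 1.

def companion(tau, delta):
    return [[tau, 1.0], [-delta, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def M_and_P(word, A):
    I = [[1.0, 0.0], [0.0, 1.0]]
    M = I
    for s in word:                  # M_S = A_{S_{n-1}} ... A_{S_0}
        M = matmul(A[s], M)
    P, prod = I, I
    for s in word[:0:-1]:           # terms A_{S_{n-1}}, A_{S_{n-1}} A_{S_{n-2}}, ...
        prod = matmul(prod, A[s])
        P = matadd(P, prod)
    return M, P

A = {'L': companion(-55/117, 4/9), 'R': companion(-5/2, 3/2)}
mu = 1.0
M, P = M_and_P('RLR', A)

# Solve the 2x2 system (I - M) z = P (mu, 0)^T by Cramer's rule.
IM = [[1 - M[0][0], -M[0][1]], [-M[1][0], 1 - M[1][1]]]
rhs = [P[0][0] * mu, P[1][0] * mu]
det = IM[0][0] * IM[1][1] - IM[0][1] * IM[1][0]
z = [(IM[1][1] * rhs[0] - IM[0][1] * rhs[1]) / det,
     (IM[0][0] * rhs[1] - IM[1][0] * rhs[0]) / det]

# Verify the fixed-point property by iterating the half-maps along S = RLR.
w = list(z)
for s in 'RLR':
    tau, delta = (-55/117, 4/9) if s == 'L' else (-5/2, 3/2)
    w = [tau * w[0] + w[1] + mu, -delta * w[0]]
```

For this example the computed point is $\left( x^{RLR}_0, y^{RLR}_0 \right) = \left( \frac{9}{7}, -\frac{4}{7} \right)$, which returns to itself after three iterations.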
An $\mathcal{S}$-cycle is admissible if every point lies on the ``correct'' side of $x=0$, or on $x=0$. The following formula results from manipulating (\ref{eq:existence}) (see \cite{Si10,SiMe10})\removableFootnote{ Here $\varrho = (1,1)^{\sf T}$ and $b = (1,0)^{\sf T}$, thus $\varrho^{\sf T} b = 1$. }: \begin{equation} x^{\mathcal{S}}_i = \frac{\det \left( P_{\mathcal{S}^{(i)}} \right) \mu}{\det(I - M_{\mathcal{S}})} \;.
\nonumber \end{equation} Admissibility is therefore determined by the signs of $\det \left( P_{\mathcal{S}^{(i)}} \right)$, as described in the following lemma\removableFootnote{ Furthermore, if the $\mathcal{S}$-cycle is admissible, then the $n$ points are distinct because $n$ is the minimal period. }.
\begin{lemma}[Admissibility] Suppose $\mu \ne 0$ and $\det(I - M_{\mathcal{S}}) \ne 0$. Then the $\mathcal{S}$-cycle is an admissible periodic solution of (\ref{eq:f}) if and only if, whenever $\det \left( P_{\mathcal{S}^{(i)}} \right) \ne 0$, \begin{equation} \begin{gathered} {\rm if~} \mathcal{S}_i = {L}, {\rm ~then~} {\rm sgn} \left( \det \left( P_{\mathcal{S}^{(i)}} \right) \right) = -{\rm sgn} \left( \mu \det \left( I - M_{\mathcal{S}} \right) \right) \;, \\ {\rm and~if~} \mathcal{S}_i = {R}, {\rm ~then~} {\rm sgn} \left( \det \left( P_{\mathcal{S}^{(i)}} \right) \right) = {\rm sgn} \left( \mu \det \left( I - M_{\mathcal{S}} \right) \right) \;. \nonumber \end{gathered} \end{equation} \label{le:admissibility} \end{lemma}
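The determinant formula for $x^{\mathcal{S}}_i$ is easily checked in code. A sketch (companion-matrix half-maps, parameters (\ref{eq:paramF}), $\mu = 1$) that evaluates $x^{\mathcal{S}}_i = \det \left( P_{\mathcal{S}^{(i)}} \right) \mu / \det(I - M_{\mathcal{S}})$ for every shift; for the word $R$ it recovers the $f^{R}$ fixed point $x = \frac{1}{5}$, and for $R L R$ the signs $+, -, +$ of the three values match the symbols $R, L, R$, as Lemma \ref{le:admissibility} requires of the admissible $R L R$-cycle:

```python
# Sketch of the admissibility formula x_i = det(P_{S^(i)}) mu / det(I - M_S).
# det(I - M_{S^(i)}) is independent of i, so the unshifted M_S is used.

def companion(tau, delta):
    return [[tau, 1.0], [-delta, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def M_and_P(word, A):
    I = [[1.0, 0.0], [0.0, 1.0]]
    M = I
    for s in word:                      # M_S = A_{S_{n-1}} ... A_{S_0}
        M = matmul(A[s], M)
    P, prod = I, I
    for s in word[:0:-1]:               # terms of P_S in (eq:MSPS)
        prod = matmul(prod, A[s])
        P = matadd(P, prod)
    return M, P

def x_values(word, A, mu):
    """x-coordinates of the S-cycle points, one per left shift S^{(i)}."""
    M, _ = M_and_P(word, A)
    dIM = det2([[1 - M[0][0], -M[0][1]], [-M[1][0], 1 - M[1][1]]])
    xs = []
    for i in range(len(word)):
        shifted = word[i:] + word[:i]   # the i-th left shift S^{(i)}
        _, P = M_and_P(shifted, A)
        xs.append(det2(P) * mu / dIM)
    return xs

A = {'L': companion(-55/117, 4/9), 'R': companion(-5/2, 3/2)}
xs_R = x_values('R', A, 1.0)        # fixed point of f^R: x = 1/5
xs_RLR = x_values('RLR', A, 1.0)    # R L R-cycle: signs +, -, +
```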
If no points of an admissible $\mathcal{S}$-cycle lie on the switching manifold then there exists a neighborhood of each point $\left( x^{\mathcal{S}}_i,y^{\mathcal{S}}_i \right)$ for which the $n^{\rm th}$ iterate of (\ref{eq:f}) is given by $f^{\mathcal{S}^{(i)}}$. In this case the $\mathcal{S}$-cycle is asymptotically stable if and only if both eigenvalues of $M_{\mathcal{S}}$ (these are the stability multipliers of the $\mathcal{S}$-cycle) lie inside the unit circle. For two-dimensional maps it is well-known that stability relates to a particular triangle in the space of coordinates ${\rm trace} \left( M_{\mathcal{S}} \right)$ and $\det \left( M_{\mathcal{S}} \right)$ \cite{El08,Ga07,MeLi01}, and we have the following result.
\begin{lemma}[Stability] Suppose $\mu \ne 0$, $\det \left( I - M_{\mathcal{S}} \right) \ne 0$, $\det \left( P_{\mathcal{S}^{(i)}} \right) \ne 0$ for all $i$, and the $\mathcal{S}$-cycle is admissible. Then the $\mathcal{S}$-cycle is an asymptotically stable periodic solution of (\ref{eq:f}) if and only if the following three conditions are satisfied \begin{align} \det(M_{\mathcal{S}}) - {\rm trace}(M_{\mathcal{S}}) + 1 &> 0 \;, \label{eq:stabConditionSN} \\ \det(M_{\mathcal{S}}) + {\rm trace}(M_{\mathcal{S}}) + 1 &> 0 \;, \label{eq:stabConditionPD} \\ \det(M_{\mathcal{S}}) - 1 &< 0 \;. \label{eq:stabConditionNS} \end{align} \label{le:stability} \end{lemma} If there is equality in at least one of (\ref{eq:stabConditionSN})-(\ref{eq:stabConditionNS}) but the conditions are satisfied otherwise, then the $\mathcal{S}$-cycle is stable but not asymptotically stable.
Equality in (\ref{eq:stabConditionSN})-(\ref{eq:stabConditionNS}) corresponds to, in order, an eigenvalue $1$, an eigenvalue $-1$, and a complex pair of eigenvalues on the unit circle when $|{\rm trace}(M_{\mathcal{S}})| < 2$. Note that $\det \left( I - M_{\mathcal{S}} \right) \equiv \det(M_{\mathcal{S}}) - {\rm trace}(M_{\mathcal{S}}) + 1$, hence the assumption $\det \left( I - M_{\mathcal{S}} \right) \ne 0$ eliminates the possibility of equality in (\ref{eq:stabConditionSN}).
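The three trace-determinant conditions of Lemma \ref{le:stability} are straightforward to evaluate. A sketch (companion-matrix half-maps, parameters (\ref{eq:paramF})): the $R L R$-cycle fails the conditions (it is a saddle with multipliers $\frac{6}{13}$ and $\frac{13}{6}$), while the matrix for $\mathcal{S}[1] = (R L R) L R$, i.e.~the word $R L R L R$, satisfies them:

```python
# Sketch of Lemma 3 (Stability): conditions (eq:stabConditionSN)-(eq:stabConditionNS)
# on the trace and determinant of M_S, applied at the parameters (eq:paramF).

def companion(tau, delta):
    return [[tau, 1.0], [-delta, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def M_of(word, A):
    M = [[1.0, 0.0], [0.0, 1.0]]
    for s in word:                  # M_S = A_{S_{n-1}} ... A_{S_0}
        M = matmul(A[s], M)
    return M

def asymptotically_stable(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    # (eq:stabConditionSN), (eq:stabConditionPD), (eq:stabConditionNS)
    return det - tr + 1 > 0 and det + tr + 1 > 0 and det - 1 < 0

A = {'L': companion(-55/117, 4/9), 'R': companion(-5/2, 3/2)}
saddle = asymptotically_stable(M_of('RLR', A))       # expect False
stable = asymptotically_stable(M_of('RLRLR', A))     # expect True
```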
\section{Necessary conditions for infinite coexistence} \label{sec:necessary}
In this section we consider sequences of symbol sequences of the form $\mathcal{S}[k] = \mathcal{X}^k \mathcal{Y}$ and obtain conditions on the parameter values of (\ref{eq:f}) that are necessary in order for infinitely many $\mathcal{S}[k]$-cycles to be admissible and stable. The main result is Theorem \ref{th:codim3}. First some additional notation is introduced.
Suppose that for a given periodic symbol sequence $\mathcal{X}$, the matrix $M_{\mathcal{X}}$ has distinct real eigenvalues $\lambda_1$ and $\lambda_2$, neither of which are equal to $1$. In this case $\det \left( I - M_{\mathcal{X}} \right) \ne 0$, so by Lemma \ref{le:existence} the $\mathcal{X}$-cycle is unique. We write the $\mathcal{X}$-cycle as $\left( x^{\mathcal{X}}_i,y^{\mathcal{X}}_i \right)$, for $i=0,\ldots,n_{\mathcal{X}}-1$, where $n_{\mathcal{X}}$ denotes the minimal period of $\mathcal{X}$. From (\ref{eq:existence}), with $i=0$ \begin{equation} \left[ \begin{array}{c} x^{\mathcal{X}}_0 \\ y^{\mathcal{X}}_0 \end{array} \right] = \left( I - M_{\mathcal{X}} \right)^{-1} P_{\mathcal{X}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \mu \;. \label{eq:zStarX} \end{equation} Let $\zeta_1$ and $\zeta_2$ be eigenvectors of $M_{\mathcal{X}}$, corresponding to $\lambda_1$ and $\lambda_2$ respectively, and let $Q = \big[ \zeta_1 \; \zeta_2 \big]$. We then consider the change of coordinates \begin{equation} \left[ \begin{array}{c} u \\ v \end{array} \right] = Q^{-1} \left( \left[ \begin{array}{c} x \\ y \end{array} \right] - \left[ \begin{array}{c} x^{\mathcal{X}}_0 \\ y^{\mathcal{X}}_0 \end{array} \right] \right) \;, \label{eq:uv} \end{equation} and, for any $\mathcal{S}$, let $g^{\mathcal{S}}$ denote $f^{\mathcal{S}}$ in $(u,v)$-coordinates. The coordinates (\ref{eq:uv}) are defined such that $g^{\mathcal{X}}$ is linear and completely decoupled, specifically \begin{equation} g^{\mathcal{X}}(w) = \left[ \begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \end{array} \right] w \;, \label{eq:gX} \end{equation} where we let $w = (u,v)$.
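The change of coordinates (\ref{eq:uv}) can be verified numerically: with the columns of $Q$ taken as eigenvectors of $M_{\mathcal{X}}$, the conjugated matrix $Q^{-1} M_{\mathcal{X}} Q$ is the diagonal matrix of (\ref{eq:gX}). A sketch for $\mathcal{X} = R L R$ at the parameters (\ref{eq:paramF}), where $\lambda_1 = \frac{6}{13}$ and $\lambda_2 = \frac{13}{6}$:

```python
# Sketch of the diagonalization underlying (eq:uv)-(eq:gX) for M_X = M_{RLR}.

def companion(tau, delta):
    return [[tau, 1.0], [-delta, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = {'L': companion(-55/117, 4/9), 'R': companion(-5/2, 3/2)}
M = matmul(A['R'], matmul(A['L'], A['R']))   # M_{RLR}

tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = (tr * tr - 4 * det) ** 0.5
l1, l2 = (tr - disc) / 2, (tr + disc) / 2    # 6/13 and 13/6

# For a 2x2 matrix [[a, b], [c, d]] with b != 0, (b, lam - a) is an
# eigenvector for the eigenvalue lam; here b = M[0][1] = 19/26 != 0.
Q = [[M[0][1], M[0][1]], [l1 - M[0][0], l2 - M[0][0]]]
qdet = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
Qinv = [[Q[1][1] / qdet, -Q[0][1] / qdet], [-Q[1][0] / qdet, Q[0][0] / qdet]]

D = matmul(Qinv, matmul(M, Q))               # should equal diag(l1, l2)
```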
\begin{theorem} Let $\mathcal{S}[k] = \mathcal{X}^k \mathcal{Y}$, where $\mathcal{X}$ is primitive and $\mathcal{X}_0 \ne \mathcal{Y}_0$. Let $\tau_{L}, \delta_{L}, \tau_{R}, \delta_{R} \in \mathbb{R}$ and $\mu \ne 0$. Suppose there exist infinitely many values of $k \ge 1$ for which the map (\ref{eq:f}) exhibits a unique, admissible, stable $\mathcal{S}[k]$-cycle that has no points on the switching manifold. Suppose $\lambda_1$ and $\lambda_2$ are eigenvalues of $M_{\mathcal{X}}$, where $0 \le \lambda_1 < 1 < \lambda_2$, and that the $v$-axis (as defined by (\ref{eq:uv})) is not parallel to the switching manifold ($x=0$). Then, \begin{enumerate}[label=\roman{*}),ref=\roman{*}] \item \label{it:gY} $g^{\mathcal{Y}}$ maps the $v$-axis to the $u$-axis; \item \label{it:lambda2} $\lambda_1 \ne 0$ and $\lambda_2 = \frac{1}{\lambda_1}$; \item \label{it:HCorbit} the $\mathcal{X}$-cycle is admissible
and $\mathcal{S}[k]$-cycles limit to an orbit that is homoclinic to the $\mathcal{X}$-cycle as $k \to \infty$. \end{enumerate} \label{th:codim3} \end{theorem}
A proof of Theorem \ref{th:codim3} is given after some remarks. First notice that the scenario depicted in Fig.~\ref{fig:infFa} conforms to the assumptions of Theorem \ref{th:codim3}. Here $\mathcal{X} = R L R$, which is primitive, and $\mathcal{Y} = L R$, which begins with a different symbol than $\mathcal{X}$. The $(u,v)$-coordinates are centered at $\left( x^{R L R},y^{R L R} \right)$ -- the right-most point of the $R L R$-cycle. Locally, the $u$ and $v$-axes are, respectively, the stable and unstable manifolds of this point, and it is straight-forward to verify directly that parts (\ref{it:gY})-(\ref{it:HCorbit}) of Theorem \ref{th:codim3} are satisfied for this example.
The form $\mathcal{S}[k] = \mathcal{X}^k \mathcal{Y}$, with the assumptions of Theorem \ref{th:codim3}, is highly general. Given $\mathcal{S}[k] = \mathcal{X}^k \mathcal{Y}$ with $\mathcal{X}$ not primitive, we may redefine $\mathcal{X}$ so that it is primitive by introducing a higher power of $k$. Also if $\mathcal{X}_0 = \mathcal{Y}_0$, or $\mathcal{S}[k]$ involves symbols preceding $\mathcal{X}^k$, as long as $\mathcal{Y}$ is not a power of $\mathcal{X}$ we may apply a shift permutation and redefine $\mathcal{X}$ and $\mathcal{Y}$ so that $\mathcal{S}[k]$ takes the form $\mathcal{X}^k \mathcal{Y}$ with $\mathcal{X}_0 \ne \mathcal{Y}_0$. Also, the assumption that $\mathcal{S}[k]$-cycles have no points on the switching manifold is made in part for simplicity -- so that their stability is determined purely by the eigenvalues of $M_{\mathcal{S}[k]}$ as opposed to more sophisticated methods \cite{DoKi08} -- and in part because the presence of points on the switching manifold represents an additional degeneracy.
The restrictions on the eigenvalues of $M_{\mathcal{X}}$ are motivated by observations of infinitely many stable or asymptotically stable periodic solutions in smooth maps occurring when there is a homoclinic or heteroclinic connection (which requires the existence of an invariant set of saddle type). It is not clear if infinite coexistence is possible when $M_{\mathcal{X}}$ has a negative eigenvalue, because in this instance both branches of the corresponding invariant manifold are involved.
In $(u,v)$-coordinates, $g^{\mathcal{Y}}$ is an affine map; let us write it as \begin{equation} g^{\mathcal{Y}}(w) = \left[ \begin{array}{cc} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{array} \right] w + \left[ \begin{array}{c} \sigma_1 \\ \sigma_2 \end{array} \right] \;, \label{eq:gY} \end{equation} for some constants $\gamma_{ij}$, $\sigma_1$ and $\sigma_2$. Then part (\ref{it:gY}) of Theorem \ref{th:codim3} is equivalent to the statement, $\gamma_{22} = \sigma_2 = 0$. Assuming that the three requirements $\lambda_2 = \frac{1}{\lambda_1}$ (part (\ref{it:lambda2}) of the theorem), $\gamma_{22} = 0$ and $\sigma_2 = 0$ involve no degeneracy,
it follows that the assumptions of Theorem \ref{th:codim3} describe a scenario that is at least codimension-three.
The demonstration of $\sigma_2 = 0$ in the proof below uses the assumption that the $v$-axis is not parallel to the switching manifold (equivalently, $\left[ 0,1 \right]^{\sf T}$ is not an eigenvector of $M_{\mathcal{X}}$ for the eigenvalue $\lambda_2$). As is evident in the following proof, if $\sigma_2 \ne 0$ and the $v$-axis is not parallel to the switching manifold, then the $\mathcal{S}[k]$-cycles grow without bound as $k$ increases and are virtual for large $k$. It remains to determine whether infinite coexistence is achievable for (\ref{eq:f}) in the case that $\sigma_2 \ne 0$ and the $v$-axis is parallel to the switching manifold.
\begin{proof}[Proof of Theorem \ref{th:codim3}] Here parts (\ref{it:gY})-(\ref{it:HCorbit}) of Theorem \ref{th:codim3} are demonstrated sequentially. \begin{enumerate}[label=\roman{*}),ref=\roman{*}] \item By composing $k$ instances of (\ref{eq:gX}) with (\ref{eq:gY}), in $(u,v)$-coordinates the image of a point $w$ under $\mathcal{S}[k]$ is \begin{equation} g^{\mathcal{S}[k]}(w) = \left[ \begin{array}{cc} \gamma_{11} \lambda_1^k & \gamma_{12} \lambda_2^k \\ \gamma_{21} \lambda_1^k & \gamma_{22} \lambda_2^k \end{array} \right] w + \left[ \begin{array}{c} \sigma_1 \\ \sigma_2 \end{array} \right] \;. \label{eq:gSk} \end{equation}
The matrix part of (\ref{eq:gSk}) has the same spectrum as $M_{\mathcal{S}[k]}$ (the matrix part of $f^{\mathcal{S}[k]}$), therefore \begin{equation} {\rm trace} \left( M_{\mathcal{S}[k]} \right) = \gamma_{11} \lambda_1^k + \gamma_{22} \lambda_2^k \;.
\nonumber \end{equation} By Lemma \ref{le:stability}, the assumption that $\mathcal{S}[k]$-cycles are stable for large $k$ implies ${\rm trace} \left( M_{\mathcal{S}[k]} \right) \not\to \infty$ as $k \to \infty$. Therefore we must have $\gamma_{22} = 0$, since $\lambda_2 > 1$.
With $\gamma_{22} = 0$, ${\rm trace} \left( M_{\mathcal{S}[k]} \right) \to 0$, as $k \to \infty$, and\removableFootnote{ For stability we require $\det \left( M_{\mathcal{S}[k]} \right) \not\to \infty$, but this does not (quite) imply $\lambda_1 \lambda_2 \le 1$ because may it be possible to have $\gamma_{12} \gamma_{21} = 0$. Note that the only way by which we may have $\lambda_1 \lambda_2 > 1$ and $\gamma_{12} \gamma_{21} = 0$ is if $\mathcal{X}$ consists of a single symbol. } \begin{equation} \det \left( M_{\mathcal{S}[k]} \right) = -\gamma_{12} \gamma_{21} \lambda_1^k \lambda_2^k \;.
\nonumber \end{equation} We now show that $\det \left( I - M_{\mathcal{S}[k]} \right)$ is bounded away from zero for large $k$. Since $\det \left( I - M_{\mathcal{S}[k]} \right) = \det \left( M_{\mathcal{S}[k]} \right) - {\rm trace} \left( M_{\mathcal{S}[k]} \right) + 1$, this is certainly true if $\det \left( M_{\mathcal{S}[k]} \right) \to 0$. If $\det \left( M_{\mathcal{S}[k]} \right) \not\to 0$, then, due to the assumption that $\mathcal{S}[k]$-cycles are stable, we must have $\lambda_1 \ne 0$, $\lambda_2 = \frac{1}{\lambda_1}$, and consequently $\det \left( M_{\mathcal{S}[k]} \right) = -\gamma_{12} \gamma_{21}$ for all $k$. Furthermore, in this case we cannot have $\det \left( M_{\mathcal{S}[k]} \right) = -1$ because $\mathcal{S}[k]$-cycles are assumed to be unique and stable for large $k$. Therefore, in either case, $\det \left( I - M_{\mathcal{S}[k]} \right)$ is bounded away from zero for large $k$.
In $(u,v)$-coordinates, we denote points of the $\mathcal{S}[k]$-cycle by $w^{\mathcal{S}[k]}_i = \left( u^{\mathcal{S}[k]}_i,v^{\mathcal{S}[k]}_i \right)$, for $i = 0,\ldots,k n_{\mathcal{X}} + n_{\mathcal{Y}} - 1$, where $n_{\mathcal{X}}$ and $n_{\mathcal{Y}}$ are the lengths of the words $\mathcal{X}$ and $\mathcal{Y}$, respectively. The point, $w^{\mathcal{S}[k]}_0$, is the unique fixed point of (\ref{eq:gSk}), and for each $j = 1,\ldots,k$, $w^{\mathcal{S}[k]}_{j n_{\mathcal{X}}} = g^{\mathcal{X}} \left( w^{\mathcal{S}[k]}_{(j-1) n_{\mathcal{X}}} \right)$.
From these equations, and substituting $\gamma_{22} = 0$, we obtain \begin{equation} w^{\mathcal{S}[k]}_{j n_{\mathcal{X}}} = \frac{1}{1 - \gamma_{11} \lambda_1^k - \gamma_{12} \gamma_{21} \lambda_1^k \lambda_2^k} \left[ \begin{array}{c} \sigma_1 \lambda_1^j + \gamma_{12} \sigma_2 \lambda_1^j \lambda_2^k \\ \left( \gamma_{21} \sigma_1 - \gamma_{11} \sigma_2 \right) \lambda_1^k \lambda_2^j + \sigma_2 \lambda_2^j \end{array} \right] \;, \label{eq:wSkj} \end{equation} valid for $j = 0,\ldots,k$.
The symbols $\mathcal{S}[k]_{(k-1) n_{\mathcal{X}}} = \mathcal{X}_0$ and $\mathcal{S}[k]_{k n_{\mathcal{X}}} = \mathcal{Y}_0$ are different, by assumption. Hence, for each $k$ for which the $\mathcal{S}[k]$-cycle is admissible, each point $w^{\mathcal{S}[k]}_{(k-1) n_{\mathcal{X}}}$ lies on one side of the switching manifold (or on the switching manifold) and each point $w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ lies on the other side of the switching manifold (or on the switching manifold). We now show that this observation implies $\sigma_2 = 0$, $\sigma_1 \ne 0$, $\gamma_{21} \ne 0$, $\lambda_1 \ne 0$ and $\lambda_2 = \frac{1}{\lambda_1}$.
By (\ref{eq:wSkj}), if $\sigma_2 \ne 0$, then, as $k \to \infty$, the $u$ component of $w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ converges whereas the $v$ component diverges. Consequently, any line that divides the points $w^{\mathcal{S}[k]}_{(k-1) n_{\mathcal{X}}}$ and $w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ for infinitely many values of $k$ must be parallel to the $v$-axis. The $v$-axis is assumed to be not parallel to the switching manifold, thus this scenario is not permitted. Therefore $\sigma_2 = 0$ and (\ref{eq:gY}) is given by \begin{equation} g^{\mathcal{Y}}(w) = \left[ \begin{array}{cc} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & 0 \end{array} \right] w + \left[ \begin{array}{c} \sigma_1 \\ 0 \end{array} \right] \;, \label{eq:gY2} \end{equation} which verifies part (\ref{it:gY}).
\item With $\sigma_2 = 0$, (\ref{eq:wSkj}) reduces to \begin{equation} w^{\mathcal{S}[k]}_{j n_{\mathcal{X}}} = \frac{\sigma_1}{1 - \gamma_{11} \lambda_1^k - \gamma_{12} \gamma_{21} \lambda_1^k \lambda_2^k} \left[ \begin{array}{c} \lambda_1^j \\ \gamma_{21} \lambda_1^k \lambda_2^j \end{array} \right] \;. \label{eq:wSkj2} \end{equation}
Hence $\sigma_1 \ne 0$, since the points $w^{\mathcal{S}[k]}_{j n_{\mathcal{X}}}$ must be distinct. Therefore, as $k \to \infty$, $u^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ (the $u$-component of (\ref{eq:wSkj2}) with $j = k$) tends to zero. We cannot have $\gamma_{21} \ne 0$ and $\lambda_1 \lambda_2 > 1$, because then $\left| v^{\mathcal{S}[k]}_{k n_{\mathcal{X}}} \right| \to \infty$ as $k \to \infty$, which is immediately seen to be impossible in view of the assumption that the $v$-axis is not parallel to the switching manifold.
If $\gamma_{21} = 0$ or $\lambda_1 \lambda_2 < 1$, then $v^{\mathcal{S}[k]}_{k n_{\mathcal{X}}} \to 0$, which we now show is also not possible. When the $\mathcal{S}[k]$-cycle is admissible, under $n_{\mathcal{X}} + n_{\mathcal{Y}}$ iterations of (\ref{eq:f}) $w^{\mathcal{S}[k]}_{(k-1) n_{\mathcal{X}}}$ maps to $w^{\mathcal{S}[k]}_0$ (following $\mathcal{X} \mathcal{Y}$) and $w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ maps to $w^{\mathcal{S}[k]}_{n_{\mathcal{X}}}$ (following $\mathcal{Y} \mathcal{X}$). In this scenario, the distance between the points $w^{\mathcal{S}[k]}_{(k-1) n_{\mathcal{X}}}$ and $w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ tends to zero as $k \to \infty$, therefore, since (\ref{eq:f}) is a continuous map, the distance between the $(n_{\mathcal{X}} + n_{\mathcal{Y}})^{\rm th}$ iterates of these two points must also tend to zero. However, from (\ref{eq:wSkj2}) we see that the distance between $w^{\mathcal{S}[k]}_0$ and $w^{\mathcal{S}[k]}_{n_{\mathcal{X}}}$ does not tend to zero, which is a contradiction. Therefore $\gamma_{21} \ne 0$, $\lambda_1 \ne 0$ and $\lambda_2 = \frac{1}{\lambda_1}$.
\item We now show that the $\mathcal{X}$-cycle is admissible. When the $\mathcal{S}[k]$-cycle is admissible, each $w^{\mathcal{S}[k]}_{j n_{\mathcal{X}}}$ for $j = 0,\ldots,k-1$ follows $\mathcal{X}$ for the next $n_{\mathcal{X}}$ iterations under (\ref{eq:f}). By (\ref{eq:wSkj2}), with $j \approx \frac{k}{2}$, $w^{\mathcal{S}[k]}_{j n_{\mathcal{X}}} \to (0,0) = w^{\mathcal{X}}_0$ as $k \to \infty$. Since (\ref{eq:f}) is continuous and $\mathcal{S}[k]$-cycles are admissible for infinitely many values of $k$, the forward orbit of $w^{\mathcal{X}}_0$ also follows $\mathcal{X}$ under (\ref{eq:f}), thus the $\mathcal{X}$-cycle is admissible\removableFootnote{ Conceivably one or more points of the $\mathcal{X}$-cycle (but not $w^{\mathcal{X}}_0$) could lie on the switching manifold, although I have not seen an example of this. }.
Finally, by (\ref{eq:wSkj2}), as $k \to \infty$, $w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}}$ and $w^{\mathcal{S}[k]}_0$ limit to points on the $v$-axis and $u$-axis respectively. By the continuity of (\ref{eq:f}), the first limit point belongs to the unstable manifold of $w^{\mathcal{X}}_0$. This limit point maps to the second limit point under $n_{\mathcal{Y}}$ iterations of (\ref{eq:f}) (following $\mathcal{Y}$) and the second limit point belongs to the stable manifold of $w^{\mathcal{X}}_0$. Hence these points belong to a homoclinic orbit of the $\mathcal{X}$-cycle and $\mathcal{S}[k]$-cycles limit to this orbit as $k \to \infty$. \end{enumerate} \end{proof}
\section{Further consequences of infinite coexistence} \label{sec:further}
Part (\ref{it:HCorbit}) of Theorem \ref{th:codim3} tells us that $\mathcal{S}[k]$-cycles (comprised of points that in $(u,v)$-coordinates (\ref{eq:uv}) are denoted $w^{\mathcal{S}[k]}_i$, for $i = 0,\ldots,k n_{\mathcal{X}} + n_{\mathcal{Y}} - 1$) limit to an orbit that is homoclinic to the $\mathcal{X}$-cycle as $k \to \infty$\removableFootnote{ Generically we expect the homoclinic orbit of part (\ref{it:HCorbit}) of Theorem \ref{th:codim3} to have no points on the switching manifold. In this case, since $\mathcal{S}[k]$-cycles converge to this orbit it is immediately evident that not only are there infinitely many values of $k$ for which the $\mathcal{S}[k]$-cycle is unique, admissible, and asymptotically stable, but that this is true for all $k \ge \kappa$, for some $\kappa \in \mathbb{Z}$. I have omitted this from the statement of Theorem \ref{th:codim3} because it is not terribly important, and as to not crowd the theorem. If the homoclinic orbit has one or more points on the switching manifold I suspect that this statement still holds but that a demonstration of the statement is no longer trivial. Note, the assumption that the homoclinic orbit has no points on the switching manifold implies $\varphi_0$ and $\psi_0$ lie in the interior of $\Phi_0$. }. Therefore the stable and unstable manifolds of the $\mathcal{X}$-cycle intersect. In this section it is shown that with additional assumptions that do not add to the codimension\removableFootnote{ Intuitively this is clear. Really I can only state this based on the examples in \S\ref{sec:finding} and the proof in \S\ref{sec:sufficient}. } of the scenario described by Theorem \ref{th:codim3}, the branches of the stable and unstable manifolds that intersect are coincident.
Before we begin, it is helpful to introduce abbreviated labels to four points of the homoclinic orbit: \begin{equation} \begin{split} a &= \lim_{k \to \infty} w^{\mathcal{S}[k]}_{(k-1) n_{\mathcal{X}}} = \left( 0 \;, \frac{\gamma_{21} \sigma_1 \lambda_1}{1 - \gamma_{12} \gamma_{21}} \right) \;, \qquad b = \lim_{k \to \infty} w^{\mathcal{S}[k]}_{k n_{\mathcal{X}}} = \left( 0 \;, \frac{\gamma_{21} \sigma_1}{1 - \gamma_{12} \gamma_{21}} \right) \;, \\ c &= \lim_{k \to \infty} w^{\mathcal{S}[k]}_0 = \left( \frac{\sigma_1}{1 - \gamma_{12} \gamma_{21}} \;, 0 \right) \;, \qquad d = \lim_{k \to \infty} w^{\mathcal{S}[k]}_{n_{\mathcal{X}}} = \left( \frac{\sigma_1 \lambda_1}{1 - \gamma_{12} \gamma_{21}} \;, 0 \right) \;, \end{split} \label{eq:abcd} \end{equation} where the formulas in (\ref{eq:abcd}) are obtained from (\ref{eq:wSkj2}). Under (\ref{eq:f}), $a$ maps to $b$ following $\mathcal{X}$, $b$ maps to $c$ following $\mathcal{Y}$, and $c$ maps to $d$ following $\mathcal{X}$. These points are shown in a schematic of $(u,v)$-coordinates, Fig.~\ref{fig:uvCoords}. Let $\Phi_0$ denote the closed line segment connecting $a$ and $b$, and let $\Phi_i$ denote the image of $\Phi_{i-1}$ under (\ref{eq:f}), for each $i = 1,\ldots,n_{\mathcal{X}}+n_{\mathcal{Y}}$. Also let, \begin{equation} \Xi = \bigcup_{i=0}^{n_{\mathcal{X}}+n_{\mathcal{Y}}-1} \Phi_i \;.
\nonumber \end{equation}
\begin{figure}\label{fig:uvCoords}
\end{figure}
Since $a$ maps to $c$ under $g^{\mathcal{X} \mathcal{Y}}$, $b$ maps to $d$ under $g^{\mathcal{Y} \mathcal{X}}$, and the homoclinic orbit is admissible, it follows that $\Phi_i$ intersects the switching manifold whenever $(\mathcal{X} \mathcal{Y})_i \ne (\mathcal{Y} \mathcal{X})_i$. Since $\mathcal{X}_0 \ne \mathcal{Y}_0$, as assumed in Theorem \ref{th:codim3}, then $(\mathcal{X} \mathcal{Y})_i \ne (\mathcal{Y} \mathcal{X})_i$ for $i=0$ and for at least one other value of $i$. Therefore $\Xi$ must have at least two intersections with the switching manifold. The following theorem concerns the simplest scenario: that $\Xi$ has exactly two such intersections.
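The number of indices at which $\mathcal{X} \mathcal{Y}$ and $\mathcal{Y} \mathcal{X}$ differ is easily found by direct enumeration. The following sketch (Python; the three word pairs are those used in the examples of \S\ref{sec:finding}) performs this check; in each case the words differ at $i = 0$ and exactly one other index, so $\Xi$ is forced to intersect the switching manifold at least twice.

```python
# Indices i at which the concatenations XY and YX differ; by the argument
# above, each such index forces an intersection of Xi with the switching
# manifold.
def differing_indices(X, Y):
    XY, YX = X + Y, Y + X
    return [i for i in range(len(XY)) if XY[i] != YX[i]]

# Word pairs (X, Y) from the three examples of the following section.
for X, Y in [("RLR", "LR"), ("RLLR", "LLR"), ("RLRLR", "LR")]:
    print(X, Y, differing_indices(X, Y))
# RLR LR [0, 1]; RLLR LLR [0, 2]; RLRLR LR [0, 1]
```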
\begin{theorem} Suppose (\ref{eq:f}) is invertible and satisfies the conditions of Theorem \ref{th:codim3}. Suppose $\Xi$ intersects the switching manifold at only two points. Then $\Phi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ is the line segment connecting $c$ and $d$, and the branches of the stable and unstable manifolds of the $\mathcal{X}$-cycle that involve the homoclinic orbit of part (\ref{it:HCorbit}) of Theorem \ref{th:codim3} are coincident. \label{th:coincident} \end{theorem}
As evident in the following proof, the assumption that $\Xi$ has only two intersections with the switching manifold is sufficient to ensure that all points on the branch of the stable manifold involving the homoclinic orbit map to the unstable manifold. The additional assumption of invertibility allows us to conclude that the stable and unstable branches are coincident. Note that (\ref{eq:f}) is invertible if and only if $\delta_{L} \delta_{R} > 0$.
\begin{proof}[Proof of Theorem \ref{th:coincident}] Since $\mathcal{X}_0 \ne \mathcal{Y}_0$ and $\Xi$ is assumed to have only two intersections with the switching manifold, we have $(\mathcal{X} \mathcal{Y})_i \ne (\mathcal{Y} \mathcal{X})_i$ for $i=0$ and exactly one other index in the range $i=0,\ldots,n_{\mathcal{X}}+n_{\mathcal{Y}}-1$, call it $\alpha$. Then there exist $\varphi_0, \psi_0 \in \Phi_0$, such that $\varphi_0$ and $\psi_{\alpha}$ are the two points of intersection of $\Xi$ with the switching manifold (where $\{ \varphi_i \}$ and $\{ \psi_i \}$ denote the forward orbits of $\varphi_0$ and $\psi_0$ under (\ref{eq:f})).
For ease of explanation suppose that either $\varphi_0$ lies closer to $a$ than $\psi_0$, or $\varphi_0 = \psi_0$. (Analogous arguments produce the same result in the case that $\varphi_0$ is further from $a$ than $\psi_0$.) Then under (\ref{eq:f}), the line segment connecting $a$ and $\varphi_0$ follows $\mathcal{X}$ to a line segment on the $v$-axis, then follows $\mathcal{Y}$ to a line segment on the $u$-axis. Thus $\varphi_{n_{\mathcal{X}}+n_{\mathcal{Y}}}$ lies on the $u$-axis. Similarly, under (\ref{eq:f}), the line segment connecting $b$ and $\psi_0$ follows $\mathcal{Y}$ to a line segment on the $u$-axis, then follows $\mathcal{X}$ to a line segment elsewhere on the $u$-axis. Thus $\psi_{n_{\mathcal{X}}+n_{\mathcal{Y}}}$ also lies on the $u$-axis. Since $\Phi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ is a piecewise-linear connection from $c$ to $d$ with possible kinks only at $\varphi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ and $\psi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$, $\Phi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ must lie entirely on the $u$-axis. Since (\ref{eq:f}) is assumed to be invertible, $\Phi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ must be the line segment connecting $c$ and $d$.
Since the homoclinic orbit is admissible, under $n_{\mathcal{X}}$ iterations of (\ref{eq:f}) both $w^{\mathcal{X}}_0$ and $c$ (which lie on the $u$-axis, see Fig.~\ref{fig:uvCoords}) follow $\mathcal{X}$. Therefore the switching manifold ($x=0$) does not intersect the $u$-axis at a point between $w^{\mathcal{X}}_0$ and $c$. Therefore $\Phi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ is contained in the stable manifold of the $\mathcal{X}$-cycle. Furthermore, since (\ref{eq:f}) is invertible and every orbit in the branch of the unstable manifold involving the homoclinic orbit intersects $\Phi_0$, the stable and unstable branches must be coincident. \end{proof}
In the above proof $\varphi_0$ and $\psi_{\alpha}$ denote the intersections of $\Xi$ with the switching manifold (where $\{ \varphi_i \}$ and $\{ \psi_i \}$ are orbits of (\ref{eq:f}) with $\varphi_0, \psi_0 \in \Phi_0$) and it was shown that the kinks of $\Phi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ at $\varphi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ and $\psi_{n_{\mathcal{X}} + n_{\mathcal{Y}}}$ are spurious. This implies that $\Xi$ exhibits one of the following properties. Either $\Phi_0$ and $\Phi_{\alpha}$ both intersect the switching manifold at the unique angle for which their images do not accumulate a kink, or $\varphi_0 = \psi_0$. Since the former property corresponds to an additional codimension, for the remainder of this paper we do not consider it further.
If $\varphi_0 = \psi_0$, then the next $n_{\mathcal{X}}+n_{\mathcal{Y}}$ iterates under (\ref{eq:f}) of any point sufficiently close to $\varphi_0$ (and not necessarily on $\Phi_0$) follow one of four symbol sequences depending on which side of the switching manifold the point and its image under $\alpha$ iterations of (\ref{eq:f}) are located. These are $\mathcal{X} \mathcal{Y}$, $\mathcal{Y} \mathcal{X}$, $\mathcal{X}^{\overline{0}} \mathcal{Y}$ and $\mathcal{Y}^{\overline{0}} \mathcal{X}$, where $\mathcal{S}^{\overline{0}}$ is used to denote the word that differs from $\mathcal{S}$ in only the $0^{\rm th}$ index. The following theorem concerns $\mathcal{X}^k \mathcal{Y}^{\overline{0}}$-cycles, as these are saddle-type periodic solutions for each of the examples in the following section\removableFootnote{ An analogous result holds for $\mathcal{X}^{\overline{0}} \mathcal{X}^{k-1} \mathcal{Y}$-cycles, but for the examples below such periodic solutions are virtual. }.
\begin{theorem} Suppose (\ref{eq:f}) is invertible and satisfies the conditions of Theorem \ref{th:codim3}. Suppose $\Xi$ intersects the switching manifold at only two points: $\varphi_0 \in \Phi_0$, and its $\alpha^{\rm th}$ iterate under (\ref{eq:f}), $\varphi_{\alpha}$. Assume $g^{\mathcal{Y}^{\overline{0}}}$ does not map the $v$-axis to the $u$-axis. Let $\mathcal{S}'[k] = \mathcal{X}^k \mathcal{Y}^{\overline{0}}$. Then, as $k \to \infty$, $\mathcal{S}'[k]$-cycles limit to a homoclinic orbit of the $\mathcal{X}$-cycle that has two points on the switching manifold: $\varphi_0$ and $\varphi_{\alpha}$. \label{th:saddles} \end{theorem}
Since $g^{\mathcal{Y}}$ maps the $v$-axis to the $u$ axis (part (\ref{it:gY}) of Theorem \ref{th:codim3}), it is reasonable to assume that this is not the case for $g^{\mathcal{Y}^{\overline{0}}}$. Note that Theorem \ref{th:saddles} does not ensure admissibility of $\mathcal{S}'[k]$-cycles for large $k$.
\begin{proof}[Proof of Theorem \ref{th:saddles}] In $(u,v)$-coordinates let us write $g^{\mathcal{Y}^{\overline{0}}}$ as\removableFootnote{ By the continuity of (\ref{eq:f}), the slopes of the partition of phase space about $\varphi_0$ and the maps under $\mathcal{X} \mathcal{Y}$ and $\mathcal{Y} \mathcal{X}$ completely determine the maps under $\mathcal{X}^{\overline{0}} \mathcal{Y}$ and $\mathcal{Y}^{\overline{0}} \mathcal{X}$. } \begin{equation} g^{\mathcal{Y}^{\overline{0}}}(w) = \left[ \begin{array}{cc} \xi_{11} & \xi_{12} \\ \xi_{21} & \xi_{22} \end{array} \right] w + \left[ \begin{array}{c} \chi_1 \\ \chi_2 \end{array} \right] \;, \label{eq:gY0} \end{equation} for some constants $\xi_{ij}$, $\chi_1$ and $\chi_2$. We can express $\chi_1$ and $\chi_2$ in terms of other coefficients by using the requirement that $g^{\mathcal{Y}}$ and $g^{\mathcal{Y}^{\overline{0}}}$ map $\varphi_0$ to the same point because (\ref{eq:f}) is a continuous map. Write $\varphi_0 = \left( 0,\hat{v} \right)$, in $(u,v)$-coordinates. Then by (\ref{eq:gY2}) and (\ref{eq:gY0}) respectively, \begin{equation} g^{\mathcal{Y}}(\varphi_0) = \left[ \begin{array}{c} \gamma_{12} \hat{v} + \sigma_1 \\ 0 \end{array} \right] \;, \qquad g^{\mathcal{Y}^{\overline{0}}}(\varphi_0) = \left[ \begin{array}{c} \xi_{12} \hat{v} + \chi_1 \\ \xi_{22} \hat{v} + \chi_2 \end{array} \right] \;. \nonumber \end{equation} Therefore $\chi_1 = \sigma_1 + (\gamma_{12} - \xi_{12}) \hat{v}$ and $\chi_2 = -\xi_{22} \hat{v}$. The assumption that $g^{\mathcal{Y}^{\overline{0}}}$ does not map the $v$-axis to the $u$-axis implies $\xi_{22} \ne 0$.
By composing (\ref{eq:gY0}) with $k$ instances of $g^{\mathcal{X}}$ (\ref{eq:gX}), and substituting $\lambda_2 = \frac{1}{\lambda_1}$ and the above formulas for $\chi_1$ and $\chi_2$, we obtain \begin{equation} g^{\mathcal{Y}^{\overline{0}} \mathcal{X}^k}(w) = \left[ \begin{array}{cc} \xi_{11} \lambda_1^k & \xi_{12} \lambda_1^k \\ \frac{\xi_{21}}{\lambda_1^k} & \frac{\xi_{22}}{\lambda_1^k} \end{array} \right] w + \left[ \begin{array}{c} \left( \sigma_1 + \left( \gamma_{12} - \xi_{12} \right) \hat{v} \right) \lambda_1^k \\ -\frac{\xi_{22} \hat{v}}{\lambda_1^k} \end{array} \right] \;. \label{eq:gY0Xk} \end{equation} Since $\xi_{22} \ne 0$, for large $k$ (\ref{eq:gY0Xk}) has the unique fixed point \begin{equation}
w^{\mathcal{S}'[k]}_{k n_{\mathcal{X}}} = \left[ \begin{array}{c} \left( \sigma_1 + \gamma_{12} \hat{v} \right) \lambda_1^k + \mathcal{O} \left( \lambda_1^{2k} \right) \\ \hat{v} + \mathcal{O} \left( \lambda_1^k \right) \end{array} \right] \;. \nonumber \end{equation} Therefore $w^{\mathcal{S}'[k]}_{k n_{\mathcal{X}}} \to \varphi_0$, as $k \to \infty$, and also $w^{\mathcal{S}'[k]}_{k n_{\mathcal{X}} + \alpha} \to \varphi_{\alpha}$. \end{proof}
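The fixed-point asymptotics in the above proof can be checked numerically. In the sketch below (Python with NumPy), the coefficients $\xi_{ij}$, $\gamma_{12}$, $\sigma_1$, $\hat{v}$ and $\lambda_1$ are arbitrary illustrative values, not values taken from the text; any generic choice with $0 < \lambda_1 < 1$ and $\xi_{22} \ne 0$ exhibits the same behaviour.

```python
import numpy as np

# Assumed illustrative coefficients for (eq:gY0Xk); not values from the text.
xi11, xi12, xi21, xi22 = 0.7, -0.3, 0.2, 0.9
gam12, sig1, vhat = 0.5, 1.2, -0.8
lam1 = 0.4

def fixed_point(k):
    # Unique fixed point of the affine map (eq:gY0Xk) in (u,v)-coordinates.
    lk = lam1 ** k
    B = np.array([[xi11 * lk, xi12 * lk],
                  [xi21 / lk, xi22 / lk]])
    c = np.array([(sig1 + (gam12 - xi12) * vhat) * lk,
                  -xi22 * vhat / lk])
    return np.linalg.solve(np.eye(2) - B, c)

w = fixed_point(20)
# Leading-order prediction: u ~ (sigma_1 + gamma_12 vhat) lambda_1^k, v ~ vhat.
err_u = abs(w[0] / lam1**20 - (sig1 + gam12 * vhat))
err_v = abs(w[1] - vhat)
print(err_u, err_v)  # both small, consistent with the stated asymptotics
```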
\section{Finding infinitely many attracting periodic solutions} \label{sec:finding}
This section introduces a practical method for finding parameter values $\tau_{L}$, $\delta_{L}$, $\tau_{R}$ and $\delta_{R}$ for which (\ref{eq:f}) has infinitely many attracting periodic solutions. The method is then applied to produce three examples.

By Theorem \ref{th:codim3}, three requirements necessary for infinite coexistence are $\lambda_2 = \frac{1}{\lambda_1}$, $\gamma_{22} = 0$ and $\sigma_2 = 0$, where $\lambda_1$ and $\lambda_2$ are the eigenvalues of $M_{\mathcal{X}}$, and $\gamma_{22}$ and $\sigma_2$ are coefficients of the map $g^{\mathcal{Y}}$ (\ref{eq:gY}). In order to identify suitable values of $\tau_{L}$, $\delta_{L}$, $\tau_{R}$ and $\delta_{R}$, we translate these requirements into three restrictions on the parameter values.
We begin with the requirement $\lambda_2 = \frac{1}{\lambda_1}$, which is equivalent to $\det \left( M_{\mathcal{X}} \right) = 1$. Let $l_{\mathcal{X}}$ denote the number of $L$'s that are present in the word $\mathcal{X}$.
Then $M_{\mathcal{X}}$ is a product of $l_{\mathcal{X}}$ instances of $A_{L}$, and $n_{\mathcal{X}} - l_{\mathcal{X}}$ instances of $A_{R}$, hence $\det \left( M_{\mathcal{X}} \right) = \delta_{L}^{l_{\mathcal{X}}} \delta_{R}^{n_{\mathcal{X}} - l_{\mathcal{X}}}$. Therefore $\lambda_2 = \frac{1}{\lambda_1}$ is equivalent to \begin{equation} \delta_{L} = \delta_{R}^{-\frac{n_{\mathcal{X}} - l_{\mathcal{X}}}{l_{\mathcal{X}}}} \;. \label{eq:deltaL} \end{equation}
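This restriction is easily confirmed numerically. In the following sketch (Python with NumPy), the values of $\tau_{L}$, $\tau_{R}$ and $\delta_{R}$ are arbitrary assumptions (the traces do not affect the determinant), and $\delta_{L}$ is set by (\ref{eq:deltaL}) for each of the three words $\mathcal{X}$ used in the examples below.

```python
import numpy as np

def A(tau, delta):
    # One piece of the piecewise-linear map in matrix form.
    return np.array([[tau, 1.0], [-delta, 0.0]])

def M(word, tauL, deltaL, tauR, deltaR):
    # M_X = A_{X_{n-1}} ... A_{X_0}, multiplying through the word X.
    out = np.eye(2)
    for s in word:
        out = (A(tauL, deltaL) if s == "L" else A(tauR, deltaR)) @ out
    return out

tauL, tauR, deltaR = -0.3, -2.0, 1.5   # arbitrary illustrative values
dets = []
for X in ["RLR", "RLLR", "RLRLR"]:
    lX = X.count("L")
    deltaL = deltaR ** (-(len(X) - lX) / lX)   # (eq:deltaL)
    dets.append(np.linalg.det(M(X, tauL, deltaL, tauR, deltaR)))
print(dets)  # each ~ 1.0, i.e. lambda_2 = 1/lambda_1
```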
It is impractical to directly impose the remaining two requirements, $\gamma_{22} = 0$ and $\sigma_2 = 0$, because expressions for $\gamma_{22}$ and $\sigma_2$ in terms of the four parameters are extremely complicated even for simple choices of $\mathcal{X}$ and $\mathcal{Y}$. This is because $\gamma_{22}$ and $\sigma_2$ are coefficients in $(u,v)$-coordinates (\ref{eq:uv}), and consequently expressions for these coefficients involve formulas for the eigenvalues of $M_{\mathcal{X}}$, which involve a square root. Algebraic manipulations are made substantially more manageable by instead using the results of \S\ref{sec:further} to express $\zeta_1$ and $\zeta_2$ (eigenvectors of $M_{\mathcal{X}}$) in terms of the parameters. This is explained below. We then let \begin{equation} Q = \big[ \zeta_1 \; \zeta_2 \big] \;, \qquad \Omega = Q^{-1} M_{\mathcal{X}} Q \;. \label{eq:QOmega} \end{equation} If $\zeta_1$ and $\zeta_2$ are linearly independent eigenvectors of $M_{\mathcal{X}}$, then the matrix $\Omega$ must be diagonal. That is, \begin{equation} \omega_{12} = 0 \;, \qquad \omega_{21} = 0 \;, \label{eq:omega1221} \end{equation} where $\omega_{ij}$ denotes the $(i,j)$-element of $\Omega$. The equations of (\ref{eq:omega1221}) represent an alternative to $\gamma_{22} = 0$ and $\sigma_2 = 0$ that is significantly simpler when expressed in terms of $\tau_{L}$, $\delta_{L}$, $\tau_{R}$ and $\delta_{R}$.
To obtain expressions for $\zeta_1$ and $\zeta_2$, let us assume that $(\mathcal{X} \mathcal{Y})_i \ne (\mathcal{Y} \mathcal{X})_i$ only for $i=0$ and $i=\alpha$, and $\varphi_0 = \psi_0$ (see \S\ref{sec:further}). The point $\varphi_0$ lies on the switching manifold, and its image under $\alpha$ iterations of (\ref{eq:f}) following $\mathcal{X} \mathcal{Y}$ also lies on the switching manifold. In $(x,y)$-coordinates let us write $\varphi_0 = (0,\hat{y})$. The value of $\hat{y}$ may be determined from the requirement that the $x$-component of $\varphi_{\alpha}$ is zero. Also, $\varphi_0$ lies on the unstable manifold of $\left( x^{\mathcal{X}}_0, y^{\mathcal{X}}_0 \right)$. The point $\left( x^{\mathcal{X}}_0, y^{\mathcal{X}}_0 \right)$ is given by (\ref{eq:zStarX}), and its unstable manifold has direction $\zeta_2$. Therefore, when $\mu = 1$, there exists $\eta \in \mathbb{R}$ such that \begin{equation} \left( I - M_{\mathcal{X}} \right)^{-1} P_{\mathcal{X}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] + \eta \zeta_2 = \left[ \begin{array}{c} 0 \\ \hat{y} \end{array} \right] \;. \label{eq:unstabIntCritPoint} \end{equation} By using $M_{\mathcal{X}} \zeta_2 = \lambda_2 \zeta_2$, we may rearrange (\ref{eq:unstabIntCritPoint}) to obtain \begin{equation} \eta \left( 1 - \lambda_2 \right) \zeta_2 = \left( I - M_{\mathcal{X}} \right) \left[ \begin{array}{c} 0 \\ \hat{y} \end{array} \right] - P_{\mathcal{X}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \;. \label{eq:unstabIntCritPoint2} \end{equation} Since we are free to choose the magnitude of $\zeta_2$, by (\ref{eq:unstabIntCritPoint2}) we may set \begin{equation} \zeta_2 = \left( I - M_{\mathcal{X}} \right) \left[ \begin{array}{c} 0 \\ \hat{y} \end{array} \right] - P_{\mathcal{X}} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \;. \label{eq:zeta2} \end{equation} In $(x,y)$-coordinates, the $u$ and $v$-axes have the same directions as $\zeta_1$ and $\zeta_2$, respectively. 
Part (\ref{it:gY}) of Theorem \ref{th:codim3} tells us that $g^{\mathcal{Y}}$ maps the $v$-axis to the $u$-axis. Therefore, $\zeta_1$ is a scalar multiple of $M_{\mathcal{Y}} \zeta_2$. We could set $\zeta_1 = M_{\mathcal{Y}} \zeta_2$, but for the examples below it is more convenient to set \begin{equation} \zeta_1 = M_{\mathcal{X}}^{-1} M_{\mathcal{Y}} \zeta_2 \;. \label{eq:zeta1} \end{equation}
In summary, (\ref{eq:deltaL}) and (\ref{eq:omega1221}) represent three restrictions on the parameter values of (\ref{eq:f}) for which the map has infinitely many admissible, stable $\mathcal{S}[k]$-cycles, where $\omega_{12}$ and $\omega_{21}$ are the off-diagonal elements of $\Omega$ (\ref{eq:QOmega}), and $\zeta_1$ and $\zeta_2$ are given by (\ref{eq:zeta2}) and (\ref{eq:zeta1}). We now find solutions to (\ref{eq:deltaL}) and (\ref{eq:omega1221}) for three different combinations of $\mathcal{X}$ and $\mathcal{Y}$.
\subsubsection*{An example with $n_{\mathcal{X}} = 3$}
Suppose\removableFootnote{ {\sc findExampleF4.m} } \begin{equation} \mathcal{X} = R L R \;, \qquad \mathcal{Y} = L R \;, \label{eq:XYRLRkLR} \end{equation}
as in Fig.~\ref{fig:infFa}. Here $\mathcal{X}$ has one $L$ and three symbols total, i.e.~$l_{\mathcal{X}} = 1$ and $n_{\mathcal{X}} = 3$. Thus by (\ref{eq:deltaL}), $\delta_{L} = \frac{1}{\delta_{R}^2}$.
Also $\alpha = 1$ (because $\mathcal{X} \mathcal{Y} = R L R L R$ and $\mathcal{Y} \mathcal{X} = L R R L R$, thus $(\mathcal{X} \mathcal{Y})_i \ne (\mathcal{Y} \mathcal{X})_i$ only for $i=0$ and $i=1$). Hence we require that $\varphi_0 = (0,\hat{y})$ maps to the switching manifold under a single iteration of (\ref{eq:f}). This implies $\hat{y} = -1$, when $\mu = 1$, with which (\ref{eq:zeta2}) gives $\zeta_2 = \left[ -\tau_{R}-1 ,\; \delta_{R}-1 \right]^{\sf T}$, and (\ref{eq:zeta1}) gives $\zeta_1 = A_{R}^{-1} \zeta_2 = \left[ \frac{1}{\delta_{R}}-1 ,\; -\frac{\tau_{R}}{\delta_{R}}-1 \right]^{\sf T}$. By substituting these into (\ref{eq:QOmega}) we obtain \begin{align} \omega_{12} &= \frac{1} {\delta_{R} \left( \delta_{R}^2 + \tau_{R} \delta_{R} - \delta_{R} + \tau_{R}^2 + \tau_{R} + 1 \right)} \Big( -\tau_{L} \delta_{R}^3 + \tau_{R} \delta_{R}^3 - \tau_{L} \tau_{R}^2 \delta_{R}^3 + \tau_{R}^2 \delta_{R}^3 + \tau_{R} - \tau_{R}^2 \delta_{R} \nonumber \\ &\quad- \tau_{R} \delta_{R} - 2 \delta_{R} + \tau_{R}^2 + 1 - \tau_{L} \tau_{R} \delta_{R}^2 + \tau_{L} \delta_{R}^4 + \tau_{R} \delta_{R}^4 + \delta_{R}^4 - \tau_{L} \tau_{R}^2 \delta_{R}^2 - \tau_{L} \tau_{R}^3 \delta_{R}^2 + \delta_{R}^2 \Big) \;, \label{eq:omega12F} \\ \omega_{21} &= \frac{1} {\delta_{R}^2 \left( \delta_{R}^2 + \tau_{R} \delta_{R} - \delta_{R} + \tau_{R}^2 + \tau_{R} + 1 \right)} \Big( -\tau_{L} \delta_{R}^3 + 2 \delta_{R}^4 + \tau_{R} \delta_{R}^3 + \tau_{L} \delta_{R}^4 - \tau_{R} \delta_{R}^4 + \tau_{L} \tau_{R}^2 \delta_{R}^3 - \delta_{R}^5 \nonumber \\ &\quad- \tau_{R}^2 \delta_{R}^3 - \tau_{R} \delta_{R} - \delta_{R} + \tau_{L} \tau_{R}^2 \delta_{R}^2 + \tau_{L} \tau_{R}^3 \delta_{R}^2 + \tau_{R}^2 \delta_{R}^2 - \tau_{R}^2 - \tau_{R} + \tau_{L} \tau_{R} \delta_{R}^4 - \delta_{R}^3 \Big) \;. \label{eq:omega21F} \end{align} We wish to solve $\omega_{12} = \omega_{21} = 0$. 
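Before solving these equations, the closed forms for $\zeta_1$ and $\zeta_2$ can be confirmed numerically. The sketch below (Python with NumPy) evaluates (\ref{eq:zeta2}) and (\ref{eq:zeta1}) for $\mathcal{X} = R L R$, $\mathcal{Y} = L R$ at arbitrary illustrative parameter values (the particular numbers are assumptions; the identities hold for any $\tau_{L}$, $\delta_{L}$, $\tau_{R}$, $\delta_{R}$), taking $P_{\mathcal{X}} = I + A_{R} + A_{R} A_{L}$ as in the proof of Lemma \ref{le:detPSi}.

```python
import numpy as np

def A(tau, delta):
    return np.array([[tau, 1.0], [-delta, 0.0]])

# Arbitrary illustrative parameter values (assumptions).
tauL, deltaL, tauR, deltaR = -0.4, 0.7, -2.1, 1.3
AL, AR = A(tauL, deltaL), A(tauR, deltaR)

MX = AR @ AL @ AR                # X = RLR
PX = np.eye(2) + AR + AR @ AL    # P_X for the word RLR
yhat = -1.0                      # alpha = 1, mu = 1

zeta2 = (np.eye(2) - MX) @ np.array([0.0, yhat]) \
        - PX @ np.array([1.0, 0.0])         # (eq:zeta2)
zeta1 = np.linalg.solve(AR, zeta2)          # (eq:zeta1): A_R^{-1} zeta_2

err2 = np.abs(zeta2 - np.array([-tauR - 1, deltaR - 1])).max()
err1 = np.abs(zeta1 - np.array([1/deltaR - 1, -tauR/deltaR - 1])).max()
print(err2, err1)  # both ~ 0: the stated closed forms are recovered
```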
To this end we notice that the sum of (\ref{eq:omega12F}) and (\ref{eq:omega21F}) factors conveniently: \begin{equation} \omega_{12} + \omega_{21} = \frac{\left( \delta_{R} - 1 \right) \left( \tau_{R} \delta_{R}^3 + \tau_{L} \delta_{R}^3 - \tau_{L} \tau_{R}^2 \delta_{R}^2 + \tau_{R} + 2 \delta_{R}^2 \right) \left( \delta_{R} + \tau_{R} + 1 \right)} {\delta_{R}^2 \left( \delta_{R}^2 + \tau_{R} \delta_{R} - \delta_{R} + \tau_{R}^2 + \tau_{R} + 1 \right)} \;. \label{eq:sumomega1221} \end{equation}
The first factor in the numerator of (\ref{eq:sumomega1221}) is zero when $\delta_{R} = 1$\removableFootnote{ with which we find that $\omega_{12} = 0$ when $\tau_{L} = \frac{1}{\tau_{R}}$. }. For any $\tau_{R} < -1$, this combination of parameter values gives infinitely many admissible, stable $\mathcal{S}[k]$-cycles, but the $\mathcal{S}[k]$-cycles are not asymptotically stable because the eigenvalues of $M_{\mathcal{S}[k]}$ lie on the unit circle. In this case (\ref{eq:f}) is area-preserving and the $\mathcal{S}[k]$-cycles are elliptic.
The second factor in the numerator of (\ref{eq:sumomega1221}) is zero when $\tau_{L} = \frac{\tau_{R} \delta_{R}^3 + \tau_{R} + 2 \delta_{R}^2} {\delta_{R}^2 \left( \tau_{R}^2 - \delta_{R} \right)}$. However, we then have $\omega_{12} = \frac{\det(Q) \delta_{R}}{\tau_{R}^2 - \delta_{R}}$, which cannot be zero because $Q$ must be non-singular.
Finally, the third factor in the numerator of (\ref{eq:sumomega1221}) is zero when $\tau_{R} = -1-\delta_{R}$. Then $\omega_{12} = 0$ when $\tau_{L} = -1 + \frac{1}{\delta_{R}} - \frac{1}{\delta_{R}^2 \left( \delta_{R}^2 + 1 \right)}$. Therefore, $\delta_{R}$ is undetermined and \begin{equation} \tau_{L} = -1 + \frac{1}{\delta_{R}} - \frac{1}{\delta_{R}^2 (\delta_{R}^2+1)} \;, \qquad \delta_{L} = \frac{1}{\delta_{R}^2} \;, \qquad \tau_{R} = -1 - \delta_{R} \;. \label{eq:paramRLRkLR} \end{equation} With (\ref{eq:paramRLRkLR}), $\delta_{R} > 1$ and $\mu = 1$, (\ref{eq:f}) indeed has infinitely many admissible, attracting $\mathcal{S}[k]$-cycles. This is proved in \S\ref{sec:sufficient}. Fig.~\ref{fig:infFa} illustrates this scenario with $\delta_{R} = \frac{3}{2}$. For different values of $\delta_{R} > 1$ the primary features of the phase portrait are unchanged.
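The claim that (\ref{eq:paramRLRkLR}) solves (\ref{eq:omega1221}) can be verified numerically. The sketch below (Python with NumPy) uses $\delta_{R} = \frac{3}{2}$, the value of Fig.~\ref{fig:infFa}; the matrix $\Omega$ comes out diagonal with entries $\lambda_1 = \frac{6}{13}$ and $\lambda_2 = \frac{13}{6}$, so $\zeta_1$ and $\zeta_2$ are exact eigenvectors of $M_{\mathcal{X}}$ at these parameter values.

```python
import numpy as np

def A(tau, delta):
    return np.array([[tau, 1.0], [-delta, 0.0]])

dR = 1.5                                   # delta_R = 3/2, as in Fig. infFa
tL = -1 + 1/dR - 1/(dR**2 * (dR**2 + 1))   # (eq:paramRLRkLR)
dL = 1/dR**2
tR = -1 - dR
AL, AR = A(tL, dL), A(tR, dR)

MX = AR @ AL @ AR
zeta2 = np.array([-tR - 1, dR - 1])        # closed form stated above
zeta1 = np.linalg.solve(AR, zeta2)         # zeta_1 = A_R^{-1} zeta_2
Q = np.column_stack([zeta1, zeta2])
Omega = np.linalg.solve(Q, MX @ Q)         # Q^{-1} M_X Q, (eq:QOmega)

off = max(abs(Omega[0, 1]), abs(Omega[1, 0]))
print(Omega, off)  # Omega ~ diag(6/13, 13/6), off-diagonal ~ 0
```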
\subsubsection*{An example with $n_{\mathcal{X}} = 4$}
Suppose\removableFootnote{ {\sc findExampleI2.m} } \begin{equation} \mathcal{X} = R L L R \;, \qquad \mathcal{Y} = L L R \;. \label{eq:XYRLLRkLLR} \end{equation} Then (\ref{eq:deltaL}) gives $\delta_{L} = \frac{1}{\delta_{R}}$. Also $\alpha = 2$, which implies $\hat{y} = -1-\frac{1}{\tau_{L}}$, when $\mu = 1$. By continuing with the above method we find that expressions for $\omega_{12}$ and $\omega_{21}$ are too complicated to include here -- and the author has been unable to solve (\ref{eq:deltaL}) and (\ref{eq:omega1221}) analytically for this example -- but with $\tau_{L} = 0.5$, (\ref{eq:deltaL}) and (\ref{eq:omega1221}) admit the following approximate numerical solution \begin{equation} \tau_{L} = 0.5 \;, \qquad \delta_{L} = \frac{1}{\delta_{R}} \;, \qquad \tau_{R} = -1.139755486 \;, \qquad \delta_{R} = 1.378851759 \;. \label{eq:paramI} \end{equation} A phase portrait of (\ref{eq:f}) with $\mu = 1$ and (\ref{eq:paramI}) is shown in Fig.~\ref{fig:infI}. Here at least eight $\mathcal{S}[k]$-cycles are admissible and attracting, which suggests that with the exact solution to (\ref{eq:deltaL}) and (\ref{eq:omega1221}) infinitely many $\mathcal{S}[k]$-cycles are admissible and attracting.
\begin{figure}\label{fig:infI}
\end{figure}
\subsubsection*{An example with $n_{\mathcal{X}} = 5$}
Suppose\removableFootnote{ {\sc findExampleC3.m} } \begin{equation} \mathcal{X} = R L R L R \;, \qquad \mathcal{Y} = L R \;. \label{eq:XYRLRLRkLR} \end{equation} Here (\ref{eq:deltaL}) gives $\delta_{L} = \delta_{R}^{-\frac{3}{2}}$. Also $\alpha = 1$, thus $\hat{y} = -1$ when $\mu = 1$. As with the previous example, it does not appear to be possible to solve (\ref{eq:deltaL}) and (\ref{eq:omega1221}) analytically.
An approximate numerical solution to (\ref{eq:deltaL}) and (\ref{eq:omega1221}) is \begin{equation} \tau_{L} = -0.7 \;, \qquad \delta_{L} = \delta_{R}^{-\frac{3}{2}} \;, \qquad \tau_{R} = -3.308423793 \;, \qquad \delta_{R} = 1.659870677 \;, \label{eq:paramC} \end{equation} which is illustrated in Fig.~\ref{fig:infC}. As with the previous example, from Fig.~\ref{fig:infC} we infer that with the exact solution to (\ref{eq:deltaL}) and (\ref{eq:omega1221}), (\ref{eq:f}) has infinitely many admissible, attracting $\mathcal{S}[k]$-cycles.
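Although exact solutions are unavailable for these last two examples, both approximate solutions can be checked numerically by retracing the recipe of this section: build $M_{\mathcal{X}}$, $P_{\mathcal{X}}$ and $M_{\mathcal{Y}}$ from the words (with $P_{\mathcal{X}} = I + A_{\mathcal{X}_{n-1}} + A_{\mathcal{X}_{n-1}} A_{\mathcal{X}_{n-2}} + \cdots$, the convention used in the proof of Lemma \ref{le:detPSi}), form $\zeta_2$ (\ref{eq:zeta2}), $\zeta_1$ (\ref{eq:zeta1}) and $\Omega$ (\ref{eq:QOmega}), and confirm that the off-diagonal elements of $\Omega$ nearly vanish. A sketch (Python with NumPy; the loose tolerance allows for the rounded parameter values):

```python
import numpy as np

def A(tau, delta):
    return np.array([[tau, 1.0], [-delta, 0.0]])

def off_diagonal(X, Y, tL, dL, tR, dR, yhat):
    # Largest off-diagonal element of Omega = Q^{-1} M_X Q, following
    # (eq:zeta2), (eq:zeta1) and (eq:QOmega).
    mats = {"L": A(tL, dL), "R": A(tR, dR)}
    MX, MY = np.eye(2), np.eye(2)
    for s in X: MX = mats[s] @ MX
    for s in Y: MY = mats[s] @ MY
    PX, cum = np.zeros((2, 2)), np.eye(2)
    for s in reversed(X):      # P_X = I + A_{X_{n-1}} + A_{X_{n-1}} A_{X_{n-2}} + ...
        PX += cum
        cum = cum @ mats[s]
    zeta2 = (np.eye(2) - MX) @ np.array([0.0, yhat]) - PX @ np.array([1.0, 0.0])
    zeta1 = np.linalg.solve(MX, MY @ zeta2)
    Q = np.column_stack([zeta1, zeta2])
    Om = np.linalg.solve(Q, MX @ Q)
    return max(abs(Om[0, 1]), abs(Om[1, 0]))

# (eq:paramI): X = RLLR, Y = LLR, alpha = 2, so yhat = -1 - 1/tau_L.
off_I = off_diagonal("RLLR", "LLR", 0.5, 1/1.378851759, -1.139755486,
                     1.378851759, -1 - 1/0.5)
# (eq:paramC): X = RLRLR, Y = LR, alpha = 1, so yhat = -1.
off_C = off_diagonal("RLRLR", "LR", -0.7, 1.659870677**-1.5, -3.308423793,
                     1.659870677, -1.0)
print(off_I, off_C)  # both small, consistent with (eq:omega1221)
```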
\begin{figure}
\caption{ A phase portrait of (\ref{eq:f}) with $\mu = 1$ and (\ref{eq:paramC}), using the same conventions as Figs.~\ref{fig:infFa} and \ref{fig:infI}. The parameter values approximate those admitting infinite coexistence, with $\mathcal{X}$ and $\mathcal{Y}$ given by (\ref{eq:XYRLRLRkLR}).
$\mathcal{S}[k]$ and $\mathcal{S}'[k]$-cycles are plotted for $k=1$ to $k=8$. For the given parameters, the map (\ref{eq:f}) also has an unstable fixed point (the isolated triangle), an attracting $R L$-cycle (the two isolated circles near the middle of the figure), and an attracting $R L R L L$-cycle (two points of this periodic solution are visible in the figure). }
\label{fig:infC}
\end{figure}
\section{Verification of infinite coexistence} \label{sec:sufficient}
We have shown that if the map (\ref{eq:f}) with $\mu = 1$ has infinitely many admissible, attracting $\mathcal{S}[k]$-cycles with $\mathcal{X} = R L R$ and $\mathcal{Y} = L R$, then, with reasonable assumptions given in \S\ref{sec:necessary} and \S\ref{sec:further}, the parameter values must satisfy (\ref{eq:paramRLRkLR}). In this section it is shown that the additional restriction, $\delta_{R} > 1$, is sufficient for (\ref{eq:f}) to exhibit such infinite coexistence, as indicated in the following theorem.
\begin{theorem} Let $\mathcal{S}[k] = \left( R L R \right)^k L R$, and $\mathcal{S}'[k] = \left( R L R \right)^k R R$. Let $\mu = 1$, $\delta_{R} > 1$, and suppose that the remaining parameter values of (\ref{eq:f}) are given by (\ref{eq:paramRLRkLR}). Then for all $k \ge 1$, (\ref{eq:f}) has a unique, admissible, asymptotically stable $\mathcal{S}[k]$-cycle, and a unique, admissible, saddle-type $\mathcal{S}'[k]$-cycle. \label{th:RLRkLR} \end{theorem}
The theorem is proved below by directly verifying asymptotic stability of the $\mathcal{S}[k]$-cycles, and admissibility of the $\mathcal{S}[k]$ and $\mathcal{S}'[k]$-cycles. By the results of \S\ref{sec:periodic}, this may be done by calculating the determinant and trace of $M_{\mathcal{S}[k]}$ and $M_{\mathcal{S}'[k]}$, and the determinant of each $P_{\mathcal{S}[k]^{(i)}}$ and $P_{\mathcal{S}'[k]^{(i)}}$. Since the details of these calculations are relatively lengthy and involve significant repetition, for brevity the majority of the calculations for the $\mathcal{S}'[k]$-cycles are omitted. We begin by computing the determinant and trace of $M_{\mathcal{S}[k]}$.
\begin{lemma} Let $\mathcal{S}[k] = \left( R L R \right)^k L R$, suppose $\delta_{R} \ne 0$ and that the parameter values of (\ref{eq:f}) satisfy (\ref{eq:paramRLRkLR}). Then for all $k \ge 1$, \begin{align} \det \left( M_{\mathcal{S}[k]} \right) &= \frac{1}{\delta_{R}} \;, \label{eq:detMS} \\ {\rm trace} \left( M_{\mathcal{S}[k]} \right) &= \frac{-(\delta_{R} + 1) \lambda_1^k}{\delta_{R}^2 + 1} \;, \label{eq:traceMS} \end{align} where \begin{equation} \lambda_1 = \frac{\delta_{R}}{\delta_{R}^2+1} \;. \label{eq:lambda1} \end{equation} \label{le:MS} \end{lemma}
\begin{proof} Here $M_{\mathcal{S}[k]}$ is the product of $k+1$ instances of $A_{L}$, and $2k+1$ instances of $A_{R}$, therefore $\det \left( M_{\mathcal{S}[k]} \right) = \delta_{L}^{k+1} \delta_{R}^{2k+1}$. By substituting $\delta_{L} = \frac{1}{\delta_{R}^2}$ into this expression we obtain (\ref{eq:detMS}).
More effort is required to compute ${\rm trace} \left( M_{\mathcal{S}[k]} \right)$. By the definitions of $\mathcal{S}[k]$ and $M_{\mathcal{S}}$ (\ref{eq:MSPS}), we can write $M_{\mathcal{S}[k]} = A_{R} A_{L} M_{R L R}^k$. An evaluation of $M_{R L R} = A_{R} A_{L} A_{R}$ using (\ref{eq:f}) and (\ref{eq:paramRLRkLR}) produces \begin{equation} M_{R L R} = \frac{1}{\delta_{R}^2 + 1} \left[ \begin{array}{cc} (\delta_{R} + 1)^2 & \delta_{R}^3 - 1 \\ \delta_{R}^2 - \frac{1}{\delta_{R}} & (\delta_{R}^2 + 1)(\delta_{R} - 1) + \frac{1}{\delta_{R}} \end{array} \right] \;. \end{equation} The matrix $M_{R L R}$ has eigenvalues $\lambda_1$ (\ref{eq:lambda1}) and $\lambda_2 = \frac{1}{\lambda_1}$. To take powers of $M_{R L R}$ we let $Q = \left[ \begin{array}{cc} 1 & \frac{\delta_{R}}{\delta_{R} - 1} \\ \frac{-1}{\delta_{R} - 1} & 1 \end{array} \right]$, as the columns of this matrix are eigenvectors of $M_{R L R}$. Then $M_{R L R}^k = Q \left[ \begin{array}{cc} \lambda_1^k & 0 \\ 0 & \frac{1}{\lambda_1^k} \end{array} \right] Q^{-1}$, and consequently \begin{equation} M_{R L R}^k = \frac{1}{1 + \frac{\delta_{R}}{(\delta_{R} - 1)^2}} \left[ \begin{array}{cc} \lambda_1^k + \frac{\delta_{R}}{(\delta_{R}-1)^2} \lambda_1^{-k} & \frac{-\delta_{R}}{\delta_{R}-1} \left( \lambda_1^k - \lambda_1^{-k} \right) \\ \frac{-1}{\delta_{R}-1} \left( \lambda_1^k - \lambda_1^{-k} \right) & \frac{\delta_{R}}{(\delta_{R}-1)^2} \lambda_1^k + \lambda_1^{-k} \end{array} \right] \;. \label{eq:MRLRk} \end{equation} An evaluation of ${\rm trace} \left( A_{R} A_{L} M_{R L R}^k \right)$ using (\ref{eq:MRLRk}) produces (\ref{eq:traceMS}). \end{proof}
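Lemma \ref{le:MS} can also be confirmed numerically. The following sketch (Python with NumPy, with the assumed test value $\delta_{R} = \frac{3}{2}$) compares (\ref{eq:detMS}) and (\ref{eq:traceMS}) against direct matrix products for $k = 1,\ldots,5$.

```python
import numpy as np

def A(tau, delta):
    return np.array([[tau, 1.0], [-delta, 0.0]])

dR = 1.5                                             # assumed test value, delta_R > 1
AL = A(-1 + 1/dR - 1/(dR**2*(dR**2 + 1)), 1/dR**2)   # (eq:paramRLRkLR)
AR = A(-1 - dR, dR)

lam1 = dR / (dR**2 + 1)                              # (eq:lambda1)
MRLR = AR @ AL @ AR
err = 0.0
for k in range(1, 6):
    MSk = AR @ AL @ np.linalg.matrix_power(MRLR, k)  # M_{S[k]} = A_R A_L M_{RLR}^k
    err = max(err,
              abs(np.linalg.det(MSk) - 1/dR),                        # (eq:detMS)
              abs(np.trace(MSk) + (dR + 1) * lam1**k / (dR**2 + 1))) # (eq:traceMS)
print(err)  # ~ 0: both formulas match the direct products
```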
Next we derive expressions for the determinant of each $P_{\mathcal{S}[k]^{(i)}}$. Since $\mathcal{S}[k]$ has period $3k+2$, we require $\det \left( P_{\mathcal{S}[k]^{(i)}} \right)$ for each $i = 0, \ldots, 3k+1$.
\begin{lemma} Let $\mathcal{S}[k] = \left( R L R \right)^k L R$, suppose $\delta_{R} \ne 0$ and that the parameter values of (\ref{eq:f}) satisfy (\ref{eq:paramRLRkLR}). Then for all $j = 0,\ldots,k-1$, \begin{align} \det \left( P_{\mathcal{S}[k]^{(3j)}} \right) &= \frac{1}{\delta_{R}^2-\delta_{R}+1} \left( \delta_{R}(\delta_{R} + 1) + \frac{\delta_{R}^2(\delta_{R}+1)}{\delta_{R}^2 + 1} \lambda_1^k - (\delta_{R}^2+\delta_{R}+1) \lambda_1^{k-j} - \frac{\delta_{R}^3-1}{\delta_{R}^2 + 1} \lambda_1^j \right) \;, \label{eq:Ps3j} \\ \det \left( P_{\mathcal{S}[k]^{(3j+1)}} \right) &= -\frac{1}{\delta_{R}^2-\delta_{R}+1} \bigg( (\delta_{R}+1)(\delta_{R}^2+1) + \delta_{R} (\delta_{R}+1) \lambda_1^k \nonumber \\ &\quad-\frac{(\delta_{R}^2+1)(\delta_{R}^2+\delta_{R}+1)}{\delta_{R}} \lambda_1^{k-j} - \frac{\delta_{R}^2(\delta_{R}^2+\delta_{R}+1)}{\delta_{R}^2+1} \lambda_1^j \bigg) \;, \label{eq:Ps3jp1} \\ \det \left( P_{\mathcal{S}[k]^{(3j+2)}} \right) &= \frac{1}{\delta_{R}^2-\delta_{R}+1} \bigg( \frac{\delta_{R}+1}{\delta_{R}^2} + \frac{\delta_{R}+1}{\delta_{R} (\delta_{R}^2+1)} \lambda_1^k \nonumber \\ &\quad+\frac{(\delta_{R}-1)(\delta_{R}^2+\delta_{R}+1)(\delta_{R}^2+1)}{\delta_{R}^3} \lambda_1^{k-j} - \frac{\delta_{R}^2+\delta_{R}+1}{(\delta_{R}^2+1)^2} \lambda_1^j \bigg) \;, \label{eq:Ps3jp2} \\ \det \left( P_{\mathcal{S}[k]^{(3k)}} \right) &= -\frac{1 - \lambda_1^k}{\delta_{R}^2 - \delta_{R} + 1} \;, \label{eq:Ps3k} \\ \det \left( P_{\mathcal{S}[k]^{(3k+1)}} \right) &= \frac{(\delta_{R}^5 + \delta_{R}^3 + \delta_{R} - 1) \lambda_1^k + 1} {\delta_{R}^2 (\delta_{R}^2 + 1) (\delta_{R}^2 - \delta_{R} + 1)} \;, \label{eq:Ps3kp1} \end{align} where $\lambda_1$ is given by (\ref{eq:lambda1}). \label{le:detPSi} \end{lemma}
\begin{proof} Here we derive only (\ref{eq:Ps3j}). Derivations of (\ref{eq:Ps3jp1}) and (\ref{eq:Ps3jp2}) are similar; derivations of (\ref{eq:Ps3k}) and (\ref{eq:Ps3kp1}) are simpler.
Taking the left shift permutation of $\mathcal{S}[k]$ a total of $3j$ places yields \begin{equation} \mathcal{S}[k]^{(3j)} = \left( R L R \right)^{k-j} L R \left( R L R \right)^j \;. \nonumber \end{equation} Careful use of (\ref{eq:MSPS}) produces\removableFootnote{ Also \begin{align} \mathcal{S}[k]^{(3j+1)} &= L R \left( R L R \right)^{k-j-1} L R \left( R L R \right)^j R \;, \\ \mathcal{S}[k]^{(3j+2)} &= R \left( R L R \right)^{k-j-1} L R \left( R L R \right)^j R L \;, \\ \mathcal{S}[k]^{(3k)} &= L R \left( R L R \right)^k \;, \\ \mathcal{S}[k]^{(3k+1)} &= R \left( R L R \right)^k L \;, \end{align} and thus \begin{align} P_{\mathcal{S}[k]^{(3j+1)}} &= I + A_{R} \left( \sum_{p=0}^{j-1} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) \nonumber \\ &\quad+A_{R} M_{R L R}^j \left( I + A_{R} + A_{R} A_{L} \left( \sum_{p=0}^{k-j-2} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) + M_{R L R}^{k-j-1} \left( I + A_{R} \right) \right) \;, \\ P_{\mathcal{S}[k]^{(3j+2)}} &= I + A_{L} + A_{L} A_{R} \left( \sum_{p=0}^{j-1} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) \nonumber \\ &\quad+A_{L} A_{R} M_{R L R}^j \left( I + A_{R} + A_{R} A_{L} \left( \sum_{p=0}^{k-j-2} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) + M_{R L R}^{k-j-1} \right) \;, \\ P_{\mathcal{S}[k]^{(3k)}} &= \left( \sum_{p=0}^{k-1} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) + M_{R L R}^k \left( I + A_{R} \right) \;, \\ P_{\mathcal{S}[k]^{(3k+1)}} &= I + A_{L} \left( \sum_{p=0}^{k-1} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) + A_{L} M_{R L R}^k \;. \end{align} } \begin{align} P_{\mathcal{S}[k]^{(3j)}} &= \left( \sum_{p=0}^{j-1} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) \nonumber \\ &\quad+M_{R L R}^j \left( I + A_{R} + A_{R} A_{L} \left( \sum_{p=0}^{k-j-1} M_{R L R}^p \right) \left( I + A_{R} + A_{R} A_{L} \right) \right) \;. \label{eq:PS3j} \end{align} Powers of $M_{R L R}$ are given by (\ref{eq:MRLRk}). 
To obtain explicit expressions for the two finite series that appear in (\ref{eq:PS3j}), we use the following formulas for the partial sum of a geometric series: \begin{equation} \sum_{p=0}^{j-1} \lambda_1^p = \frac{1 - \lambda_1^j}{1 - \lambda_1} \;, \qquad \sum_{p=0}^{j-1} \lambda_1^{-p} = \frac{\lambda_1(\lambda_1^{-j}-1)}{1 - \lambda_1} \;. \nonumber \end{equation} This gives \begin{equation} \sum_{p=0}^{j-1} M_{R L R}^p = \frac{1} {\left( 1 + \frac{\delta_{R}}{(\delta_{R} - 1)^2} \right) \left( 1 - \lambda_1 \right)} \left[ \begin{array}{cc} 1-\lambda_1^j + \frac{\delta_{R}}{(\delta_{R}-1)^2} \lambda_1(\lambda_1^{-j}-1)& \frac{-\delta_{R}}{\delta_{R}-1} \left( 1-\lambda_1^j - \lambda_1(\lambda_1^{-j}-1) \right) \\ \frac{-1}{\delta_{R}-1} \left( 1-\lambda_1^j - \lambda_1(\lambda_1^{-j}-1) \right) & \frac{\delta_{R}}{(\delta_{R}-1)^2} (1-\lambda_1^j) + \lambda_1(\lambda_1^{-j}-1) \end{array} \right] \;. \label{eq:sumMRLRj} \end{equation} Then (\ref{eq:Ps3j}) results by directly evaluating the determinant of (\ref{eq:PS3j}) via the use of (\ref{eq:f}), (\ref{eq:paramRLRkLR}), (\ref{eq:MRLRk}) and (\ref{eq:sumMRLRj}). (For simplicity the author achieved this using symbolic computations in {\sc matlab}\removableFootnote{ {\sc goSymF.m}. }.) \end{proof}
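The two partial-sum formulas above are elementary, but their signs and index ranges are easy to get wrong; the following short numerical check (purely illustrative, not part of the proof) confirms both closed forms for generic values of $\lambda_1$:

```python
# Sanity check of the two geometric partial-sum formulas used in the proof:
#   sum_{p=0}^{j-1} lam^p    = (1 - lam^j) / (1 - lam)
#   sum_{p=0}^{j-1} lam^(-p) = lam * (lam^(-j) - 1) / (1 - lam)

def partial_sums(lam, j):
    """Return (direct sum, closed form) for both identities."""
    direct_pos = sum(lam**p for p in range(j))
    closed_pos = (1 - lam**j) / (1 - lam)
    direct_neg = sum(lam**(-p) for p in range(j))
    closed_neg = lam * (lam**(-j) - 1) / (1 - lam)
    return direct_pos, closed_pos, direct_neg, closed_neg

# Compare direct and closed-form values over a grid of lam and j.
for lam in (0.3, 0.85, 1.7):
    for j in (1, 2, 5, 10):
        dp, cp, dn, cn = partial_sums(lam, j)
        assert abs(dp - cp) < 1e-9 * max(1.0, abs(dp))
        assert abs(dn - cn) < 1e-9 * max(1.0, abs(dn))
```

The second identity is just the first applied to $\lambda_1^{-1}$, rewritten over the common denominator $1-\lambda_1$.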
\begin{proof}[Proof of Theorem \ref{th:RLRkLR}] By Lemma \ref{le:MS}, we have $0 < \det \left( M_{\mathcal{S}[k]} \right) < 1$ and ${\rm trace} \left( M_{\mathcal{S}[k]} \right) < 0$. Thus for all $k \ge 1$, $\det \left( I - M_{\mathcal{S}[k]} \right) \ne 0$, therefore by Lemma \ref{le:existence}, $\mathcal{S}[k]$-cycles are unique. Furthermore, we immediately see that the inequalities (\ref{eq:stabConditionSN}) and (\ref{eq:stabConditionNS}) hold for all $k \ge 1$. The inequality (\ref{eq:stabConditionPD}) also holds for all $k \ge 1$ because by Lemma \ref{le:MS} we have \begin{equation} \det \left( M_{\mathcal{S}[k]} \right) + {\rm trace} \left( M_{\mathcal{S}[k]} \right) + 1 = \frac{ \left( \delta_{R} + 1 \right) \left( \left( \delta_{R}^2 + 1 \right)^{k+1} - \delta_{R}^{k+1} \right)} {\delta_{R} \left( \delta_{R}^2 + 1 \right)^{k+1}} \;, \nonumber \end{equation} which is positive. Thus by Lemma \ref{le:stability} the $\mathcal{S}[k]$-cycles are asymptotically stable (assuming $\det \left( P_{\mathcal{S}[k]^{(i)}} \right) \ne 0$ for all $i$, which is demonstrated below).
By Lemma \ref{le:admissibility}, for admissibility we require \begin{equation} \begin{split} \det \left( P_{\mathcal{S}[k]^{(3j)}} \right) &> 0 \;, {\rm ~for~} j = 0,\ldots,k-1 \;, \\ \det \left( P_{\mathcal{S}[k]^{(3j+1)}} \right) &< 0 \;, {\rm ~for~} j = 0,\ldots,k-1 \;, \\ \det \left( P_{\mathcal{S}[k]^{(3j+2)}} \right) &> 0 \;, {\rm ~for~} j = 0,\ldots,k-1 \;, \\ \det \left( P_{\mathcal{S}[k]^{(3k)}} \right) &< 0 \;, \\ \det \left( P_{\mathcal{S}[k]^{(3k+1)}} \right) &> 0 \;. \end{split} \label{eq:detPSi2} \end{equation} From (\ref{eq:Ps3j}) we can see that, as a function of $j$, $\det \left( P_{\mathcal{S}[k]^{(3j)}} \right)$ has a single turning point (at $j \approx \frac{k}{2}$) that corresponds to a maximum. Therefore, given $\delta_{R}$ and $k$, over the range $j = 0,\ldots,k-1$, $\det \left( P_{\mathcal{S}[k]^{(3j)}} \right)$ achieves its minimum at either $j = 0$ or $j = k-1$. By (\ref{eq:Ps3j}), for $j = 0$: \begin{align} \det \left( P_{\mathcal{S}[k]} \right) &= \frac{\delta_{R}^4 + \delta_{R}^3 + \delta_{R} + 1 - \left( \delta_{R}^4 + \delta_{R}^2 + \delta_{R} + 1 \right) \lambda_1^k} {\left( \delta_{R}^2 - \delta_{R} + 1 \right) \left( \delta_{R}^2 + 1 \right)} \nonumber \\ &\ge \frac{\delta_{R}^2 \left( \delta_{R} - 1 \right)} {\left( \delta_{R}^2 - \delta_{R} + 1 \right) \left( \delta_{R}^2 + 1 \right)} > 0 \;, \nonumber \end{align} where we have substituted $k=0$ to produce the inequality. Similarly for $j = k-1$: \begin{align} \det \left( P_{\mathcal{S}[k]^{(3(k-1))}} \right) &= \frac{\delta_{R}^4 + \delta_{R}^2 \left( \delta_{R} + 1 \right) \lambda_1^k - \left( \delta_{R}^3 - 1 \right) \lambda_1^{k-1}} {\left( \delta_{R}^2 - \delta_{R} + 1 \right) \left( \delta_{R}^2 + 1 \right)} \nonumber \\ &\ge \frac{\delta_{R}^4 - \delta_{R}^3 + 1} {\left( \delta_{R}^2 - \delta_{R} + 1 \right) \left( \delta_{R}^2 + 1 \right)} > 0 \;. 
\nonumber \end{align} Therefore $\det \left( P_{\mathcal{S}[k]^{(3j)}} \right)$ is positive for all $\delta_{R} > 1$, $k \ge 1$ and $j = 0,\ldots,k-1$. The remaining inequalities in (\ref{eq:detPSi2}) may be verified in the same fashion; these calculations are omitted for brevity. We then conclude that, for each $k \ge 1$, the unique $\mathcal{S}[k]$-cycle is admissible and asymptotically stable.
Computations for $\mathcal{S}'[k]$-cycles are analogous. The key formulas are \begin{align} \det \left( M_{\mathcal{S}'[k]} \right) &= \delta_{R}^2 \;, \nonumber \\ {\rm trace} \left( M_{\mathcal{S}'[k]} \right) &= -\delta_{R} \lambda_1^k + \left( \delta_{R}^2 + \delta_{R} + 1 \right) \lambda_1^{-k} \;, \nonumber \end{align} and \begin{align} \det \left( P_{\mathcal{S}'[k]^{(3j)}} \right) &= -\frac{1}{\delta_{R}^2-\delta_{R}+1} \bigg( \delta_{R}^2 \left( \delta_{R}^2+\delta_{R}+1 \right) \left( \lambda_1^{-k}-\lambda_1^{-j}+\lambda_1^{k-j} \right) - \delta_{R}^2 \left( \delta_{R}^2+1 \right) \nonumber \\ &\quad-\left( \delta_{R}^3-1 \right) \lambda_1^j \left( \lambda_1^{-k}-1 \right) - \delta_{R}^3 \lambda_1^k \bigg) \;, \nonumber \\ \det \left( P_{\mathcal{S}'[k]^{(3j+1)}} \right) &= \frac{\delta_{R}}{\delta_{R}^2-\delta_{R}+1} \bigg( \left( \delta_{R}^2+\delta_{R}+1 \right) \left( \delta_{R}^2+1 \right) \left( \lambda_1^{-k}-\lambda_1^{-j}+\lambda_1^{k-j} \right) - \left( \delta_{R}^2+1 \right)^2 \nonumber \\ &\quad-\delta_{R} \left( \delta_{R}^2+\delta_{R}+1 \right) \lambda_1^j \left( \lambda_1^{-k}-1 \right) - \delta_{R} \left( \delta_{R}^2+1 \right) \lambda_1^k \bigg) \;, \nonumber \\ \det \left( P_{\mathcal{S}'[k]^{(3j+2)}} \right) &= -\frac{1}{\delta_{R} \left( \delta_{R}^2-\delta_{R}+1 \right)} \bigg( \left( \delta_{R}^2+\delta_{R}+1 \right) \lambda_1^{-k} - \frac{\delta_{R} \left( \delta_{R}^2+\delta_{R}+1 \right)}{\delta_{R}^2+1} \lambda_1^j \left( \lambda_1^{-k}-1 \right) \nonumber \\ &\quad-\left( \delta_{R}^2+1 \right) \left( \delta_{R}^3-1 \right) \lambda_1^{-j} \left( \lambda_1^k-1 \right) - \left( \delta_{R}^2+1 \right) - \delta_{R} \lambda_1^k \bigg) \;, \nonumber \\ \det \left( P_{\mathcal{S}'[k]^{(3k)}} \right) &= -\frac{1-\lambda_1^k}{\delta_{R}^2-\delta_{R}+1} \;, \nonumber \\ \det \left( P_{\mathcal{S}'[k]^{(3k+1)}} \right) &= -\frac{\delta_{R}^3 \left( 1-\lambda_1^k \right)}{\delta_{R}^2-\delta_{R}+1} \;, \nonumber \end{align} where 
$\lambda_1$ is given by (\ref{eq:lambda1}). From these formulas, uniqueness and admissibility of $\mathcal{S}'[k]$-cycles for $k \ge 1$ follows in the same fashion as for $\mathcal{S}[k]$-cycles. \end{proof}
\section{Conclusions} \label{sec:conc}
In this paper it is shown for the first time that the two-dimensional border-collision normal form (\ref{eq:f}) may have infinitely many coexisting attractors. Theorem \ref{th:RLRkLR} states that with $\mu = 1$, $\delta_{R} > 1$ and (\ref{eq:paramRLRkLR}), (\ref{eq:f}) has an attracting periodic solution with symbol sequence $\mathcal{S}[k] = \left( R L R \right)^k L R$, for all $k \ge 1$. Theorem \ref{th:RLRkLR} was proved by explicitly verifying all admissibility and stability conditions of the $\mathcal{S}[k]$-cycles.
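For readers who wish to experiment, the iteration itself is only a few lines of code. The sketch below uses one common convention for the two-dimensional border-collision normal form, $(x,y) \mapsto (\tau_J x + y + \mu,\, -\delta_J x)$ with $J = L$ for $x \le 0$ and $J = R$ otherwise; the parameter values in the example are placeholders, not the values (\ref{eq:paramRLRkLR}) of Theorem \ref{th:RLRkLR}:

```python
# Iterate the 2D border-collision normal form (one common convention; the
# parameters below are ILLUSTRATIVE placeholders, not those of the theorem):
#   (x, y) |-> (tau_J * x + y + mu, -delta_J * x),  J = L if x <= 0 else R.

def bcnf_orbit(x, y, tau_L, delta_L, tau_R, delta_R, mu, n):
    """Return the point after n iterations and the symbol sequence visited."""
    symbols = []
    for _ in range(n):
        if x <= 0:
            tau, delta, sym = tau_L, delta_L, 'L'
        else:
            tau, delta, sym = tau_R, delta_R, 'R'
        symbols.append(sym)
        x, y = tau * x + y + mu, -delta * x
    return (x, y), ''.join(symbols)

# Degenerate sanity check: with tau = delta = 0 on both sides and mu = 1,
# every point maps to (y + 1, 0), so orbits reach the fixed point (1, 0)
# within two steps.
pt, syms = bcnf_orbit(-0.5, -0.5, 0.0, 0.0, 0.0, 0.0, 1.0, 6)
assert pt == (1.0, 0.0) and syms == 'LRRRRR'
```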
Fig.~\ref{fig:infFa} shows a plot of the $\mathcal{S}[k]$-cycles with $\delta_{R} = \frac{3}{2}$. As $k$ increases the $\mathcal{S}[k]$-cycles approach an orbit that is homoclinic to an $R L R$-cycle. Furthermore, the branches of the stable and unstable manifolds of the $R L R$-cycle that intersect are coincident. In sections \ref{sec:necessary} and \ref{sec:further} it was shown that in general such coincidence is to be expected. Given $\mathcal{X}$ and $\mathcal{Y}$, if the $\mathcal{X}$-cycle is of saddle-type and (\ref{eq:f}) has infinitely many admissible, stable $\mathcal{X}^k \mathcal{Y}$-cycles, then, with some additional assumptions, (\ref{eq:f}) must display several important features. Parts (\ref{it:gY}) and (\ref{it:lambda2}) of Theorem \ref{th:codim3} give three consequences that imply that such coexistence is at least a codimension-three phenomenon. In view of Theorem \ref{th:RLRkLR}, we conclude that the scenario is generically codimension-three. By part (\ref{it:HCorbit}) of Theorem \ref{th:codim3}, the stable and unstable manifolds of the $\mathcal{X}$-cycle intersect. By Theorem \ref{th:coincident}, this intersection is non-transversal.
The results of \S\ref{sec:further} included the assumption that $(\mathcal{X} \mathcal{Y})_i \ne (\mathcal{Y} \mathcal{X})_i$ for $i=0$ and only one other index, call it $\alpha$. It was shown that we expect the unstable manifold of the $\mathcal{X}$-cycle to intersect the switching manifold at two points, and for one of these points to map to the other under $\alpha$ iterations of (\ref{eq:f}). By Theorem \ref{th:saddles}, $\mathcal{X}^k \mathcal{Y}^{\overline{0}}$-cycles limit to the homoclinic orbit of the $\mathcal{X}$-cycle that includes these two points of intersection, as $k \to \infty$.
In \S\ref{sec:finding}, parameter values for which (\ref{eq:f}) exhibits infinite coexistence were identified for three different combinations of $\mathcal{X}$ and $\mathcal{Y}$. These three examples each satisfy all the assumptions given in sections \ref{sec:necessary} and \ref{sec:further}, and the consequences listed above may be verified directly for these examples.
There are many avenues for future work. It remains to remove some of the assumptions made in sections \ref{sec:necessary} and \ref{sec:further}, if possible, and identify other mechanisms, if any exist, by which (\ref{eq:f}) may have infinitely many attractors. As parameters are varied from a point at which there exist infinitely many attractors, we would like to determine the rate at which the number of coexisting attractors decreases. We would also like to understand exactly for which combinations of $\mathcal{X}$ and $\mathcal{Y}$ (\ref{eq:f}) can exhibit infinitely many admissible, attracting $\mathcal{X}^k \mathcal{Y}$-cycles.
Perhaps the most important problem that stems from this work is a generalization to the $N$-dimensional border-collision normal form. In more than two dimensions calculations of periodic solutions can be performed in the same manner, but stable and unstable manifolds of saddle-type periodic solutions may have a dimension greater than one, which presents more possibilities and difficulties. Also, as noted in \cite{GlJe12,GlKo12}, it is not known how many attractors may be born simultaneously in grazing-sliding bifurcations. The return map for grazing-sliding may be put in the border-collision normal form, but it remains to demonstrate that parameter values that give rise to infinitely many coexisting attractors are viable for grazing-sliding, and study the influence of higher-order terms.
\begin{comment} \appendix
\section{List of symbols} \label{sec:GLOS}
{\footnotesize \begin{tabular}{r@{~~--~~}l} $a$ & point of HC connection\\ $b$ & point of HC connection\\ $c$ & point of HC connection\\ $d$ & point of HC connection\\ $f^{J}$ & half-map\\ $g^{J}$ & half-map in alternate coordinates\\ $i$ & iterate index and coefficient index\\ $j$ & multi-purpose index\\ $k$ & parameter in sequences of symbol sequences\\ $n$ & period\\ $p$ & second multi-purpose index\\ $u$ & first alternate coordinate\\ $v$ & second alternate coordinate\\ $\hat{v}$ & $v$-coordinate of $\varphi_0$\\ $w$ & the vector $(u,v)$\\ $x_i$ & first component of iterate\\ $x^{\mathcal{S}}_i$ & $x$-value of $i^{\rm th}$ point of $\mathcal{S}$-cycle\\ $y_i$ & second component of iterate\\ $y^{\mathcal{S}}_i$ & $y$-value of $i^{\rm th}$ point of $\mathcal{S}$-cycle\\ $\hat{y}$ & $y$-coordinate of $\varphi_0$\\ $A_{J}$ & matrix in half-map\\ $J$ & left/right index\\ $L$ & left\\ $M_{\mathcal{S}}$ & stability matrix\\ $N$ & number of dimensions\\ $\mathcal{O}$ & order\\ $P_{\mathcal{S}}$ & border-collision matrix\\ $Q$ & matrix of eigenvectors\\ $R$ & right\\ $\mathcal{S}$ & symbol sequence\\ $\mathcal{X}$ & repeated part of symbol sequence\\ $\mathcal{Y}$ & non-repeated part of symbol sequence\\ $\alpha$ & second index for which $\mathcal{X} \mathcal{Y}$ and $\mathcal{Y} \mathcal{X}$ differ\\ $\gamma_{ij}$ & element of matrix part of $g^{\mathcal{Y}}$\\ $\delta_{J}$ & determinant of $A_{J}$\\ $\zeta_j$ & eigenvector for $\lambda_j$\\ $\eta$ & scalar mulitple of eigenvector\\ $\lambda_j$ & eigenvalues of $M_{\mathcal{X}}$\\ $\mu$ & border-collision bifurcation parameter\\ $\xi_{ij}$ & element of matrix part of $g^{\mathcal{Y}^{\overline{0}}}$\\ $\sigma_j$ & element of constant part of $g^{\mathcal{Y}}$\\ $\tau_{J}$ & trace of $A_{J}$\\ $\chi_i$ & element of constant part of $g^{\mathcal{Y}^{\overline{0}}}$\\ $\varphi_i$ & critical point on $\Phi_i$\\ $\psi_i$ & critical point on $\Phi_i$\\ $\omega_{ij}$ & element of $\Omega$\\ $\Xi$ & union of $\Phi_i$\\ 
$\Phi_i$ & line segment connecting points of HC connection\\ $\Omega$ & $Q^{-1} M_{\mathcal{X}} Q$ \end{tabular} \end{comment}
\end{document}
\begin{document}
\begin{abstract} Working in the framework of $(\mon{T},\V)$-categories, for a symmetric monoidal closed category $\V$ and a (not necessarily cartesian) monad $\mon{T}$, we present a unified account of ordered compact Hausdorff spaces and stably compact spaces on the one hand, and of monoidal categories and representable multicategories on the other. In this setting we introduce a notion of dual for $(\mon{T},\V)$-categories.
\hspace*{-\parindent}{\em Mathematics Subject Classification}: 18C20, 18D15, 18A05, 18B30, 18B35.
\noindent {\em Key words}: monad, Kock-Z\"oberlein monad, multicategory, topological space, $(\mon{T},\V)$-category. \end{abstract}
\title{Representable $(\mT, \V)$-categories}
\section{Introduction}
The principal objective of this paper is to present a unified account of ordered compact Hausdorff spaces and stably compact spaces on the one hand, and of monoidal categories and representable multicategories on the other. The two theories have similar features but were developed independently.
On the topological side, the starting point is the work of Stone on the representation of Boolean algebras \cite{Sto36} and distributive lattices \cite{Sto38}. In the latter paper, Stone proves that (in modern language) the category of distributive lattices and homomorphisms is dually equivalent to the category of spectral topological spaces and spectral maps. Here a topological space is spectral whenever it is sober and the compact open subsets form a basis for the topology which is closed under finite intersections; and a continuous map is called spectral whenever the inverse image of a compact open subset is compact. Later Hochster \cite{Hoc69} showed that spectral spaces are, up to homeomorphism, the prime spectra of commutative rings with unit, and in the same paper he also introduced a notion of dual spectral space. A different perspective on duality theory for distributive lattices was given by Priestley in \cite{Pri70}: the category of distributive lattices and homomorphisms is also dually equivalent to the category of certain ordered compact Hausdorff spaces (introduced by Nachbin in \cite{Nac50}) and continuous monotone maps. In particular, this full subcategory of the category of ordered compact Hausdorff spaces is equivalent to the category of spectral spaces. In fact, this equivalence generalises to all ordered compact Hausdorff spaces: the category $\catfont{OrdCompHaus}$ of ordered compact Hausdorff spaces and continuous monotone maps is equivalent to the category $\catfont{StablyComp}$ of stably compact spaces and spectral maps (see \cite{GHK+80}). Furthermore, as shown in \cite{Sim82} (see also \cite{EF99}), stably compact spaces can be recognised among all topological spaces by a universal property; namely, as the algebras for a Kock-Z\"oberlein monad (or lax idempotent monad, or simply KZ; see \cite{Ko}) on $\catfont{Top}$. Finally, Flagg \cite{Fla97a} proved that $\catfont{OrdCompHaus}$ is also monadic over ordered sets.
Independently, a very similar scenario was developed by Hermida in \cite{Her00,Her01} in the context of higher-dimensional category theory, now with monoidal categories and multicategories in lieu of ordered compact Hausdorff spaces and topological spaces. More specifically, he introduced in \cite{Her00} the notion of representable multicategory and constructed a 2-equivalence between the 2-category of representable multicategories and the 2-category of monoidal categories; that is, representable multicategories can be seen as a higher-dimensional counterpart of stably compact topological spaces. More in detail, we have the following analogies: \begin{longtable}{rl}
topological space & multicategory,\\
ordered compact Hausdorff space & monoidal category,\\
stably compact space & representable multicategory; \end{longtable} \noindent and there are KZ-monadic 2-adjunctions \begin{align*}
\catfont{OrdCompHaus}\adjunct{}{}\Top && \catfont{MonCat}\adjunct{}{}\catfont{MultiCat}, \intertext{which restrict to 2-equivalences}
\catfont{OrdCompHaus}\simeq\catfont{StablyComp} && \catfont{MonCat}\simeq\catfont{RepMultiCat}. \end{align*}
To bring both theories under one roof, we consider here the setting used in \cite{CT03} to introduce $(\mon{T},\V)$-categories; that is, a symmetric monoidal closed category $\V$ together with a (not necessarily cartesian) monad $\mon{T}$ on $\Set$ laxly extended to the bicategory $\Rels{\V}$ of $\V$-relations. After recalling the notions of $(\mon{T},\V)$-categories and $(\mon{T},\V)$-functors, we proceed to show that the above-mentioned results hold in this setting: the $\Set$-monad $\mon{T}$ extends naturally to $\Cats{\V}$, and its Eilenberg--Moore category admits an adjunction \[
(\Cats{\V})^\mon{T}\adjunct{}{}\Cats{(\mon{T},\V)}, \] so that the induced monad is of Kock-Z\"oberlein type. Following the terminology of \cite{Her00}, we call the pseudo-algebras for the induced monad on $\Cats{(\mon{T},\V)}$ representable $(\mon{T},\V)$-categories. We explain in more detail how this notion captures both theories mentioned above. Finally, we introduce a notion of dual $(\mon{T},\V)$-category. We recall that this concept turned out to be crucial in the development of a completeness theory for $(\mon{T},\V)$-categories when $\V$ is a quantale, i.e.\ a small symmetric monoidal closed complete category (see \cite{CH09}).
From a more formal point of view, $(\mon{T}, \V)$-categories are monads within a certain bicategory-like structure. Some of the theory presented in this paper is ``formal monad theoretical'' in character. This perspective will be developed in an upcoming paper \cite{Ch14}.
\section{Basic assumptions}
Throughout the paper \emph{$\V$ is a complete, cocomplete, symmetric monoidal-closed category, with tensor product $\otimes$ and unit $I$}. Normally we avoid explicit reference to the natural unit, associativity and symmetry isomorphisms.
The bicat\-egory $\V\mbox{-}\Rel$ of $\V$-relations (also called $\Mat(\V)$: see \cite{BCSW, RW}) has as \vspace*{1mm}\\ -- objects sets, denoted by $X$, $Y$, $\dots$, also considered as (small) discrete categories,\vspace*{1mm}\\ -- arrows (=1-cells) $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ are families of $\V$-objects $r(x,y)$ ($x\in X,\,y\in Y$),\vspace*{1mm}\\ -- 2-cells $\varphi:r\to r'$ are families of morphisms $\varphi_{x,y}:r(x,y)\to r'(x,y)$ ($x\in X,\,y\in Y$) in $\V$, i.e., natural transformations $\varphi :r\to r'$; hence, their (vertical) composition is computed componentwise in $\V$: \[(\varphi '\cdot \varphi )_{x,y}=\varphi '_{x,y}\varphi _{x,y}.\] The (horizontal) composition of arrows $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ and $s:Y\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Z$ is given by {\em relational multiplication}: \[(sr)(x,z)=\sum_{y\in Y}\;r(x,y)\otimes s(y,z),\] which is extended naturally to 2-cells; that is, for $\varphi :r\to r'$, $\psi:s\to s'$, \[(\psi\varphi )_{x,z}=\sum_{y\in Y}\;\varphi _{x,y}\otimes\psi_{y,z}:(sr)(x,z)\to(s'r')(x,z).\]
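It may help to instantiate the composition formula in the simplest case $\V = \mbox{\sf 2}$, where a $\V$-relation is an ordinary relation, $\otimes$ is conjunction and the sum is disjunction, so that relational multiplication becomes the usual composition of relations. A minimal sketch (relations encoded as sets of pairs; illustrative only):

```python
# V-relations for V = 2, encoded as sets of pairs. The relational
# multiplication (s r)(x, z) = OR_y ( r(x, y) AND s(y, z) ) from the text
# becomes ordinary composition of relations.

def compose(r, s):
    """Composite s r : X -|-> Z of r : X -|-> Y and s : Y -|-> Z (r acts first)."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

r = {(0, 'a'), (1, 'a'), (1, 'b')}   # r : X -|-> Y
s = {('a', 'u'), ('b', 'v')}         # s : Y -|-> Z
t = {('u', 9), ('v', 9)}             # t : Z -|-> W

sr = compose(r, s)
assert sr == {(0, 'u'), (1, 'u'), (1, 'v')}

# For V = 2 horizontal composition is strictly associative:
assert compose(compose(r, s), t) == compose(r, compose(s, t))
```

For a general $\V$ the associativity above holds only up to the coherent isomorphisms that the text agrees to suppress.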
There is a pseudofunctor $\Set\longrightarrow \V\mbox{-}\Rel$ which maps objects identically and treats a $\Set$-map $f:X\to Y$ as a $\V$-relation $f:X\relto Y$ in $\V\mbox{-}\Rel$, with $f(x,y)=I$ if $f(x)=y$ and $f(x,y)=\bot$ otherwise, where $\bot$ is a fixed initial object of $\V$. If an arrow $r:X\relto Y$ is given by a $\Set$-map, we shall indicate this by writing $r:X\to Y$, and by normally using $f,\,g,\,\dots$, rather than $r,\,s,\,\dots$.
Like for $\V$, in order to simplify formulae and diagrams, we disregard the unity and associativity isomorphisms in the bicategory $\V\mbox{-}\Rel$ when convenient.
$\V\mbox{-}\Rel$ has a pseudo-involution, given by {\em transposition}: the transpose $r^\circ :Y\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$ of $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ is defined by $r^\circ (y,x)=r(x,y)$; likewise for 2-cells. In particular, there are natural and coherent isomorphisms \[(sr)^\circ \cong r^\circ s^\circ \] involving the symmetry isomorphisms of $\V$. The transpose $f^\circ $ of a $\Set$-map $f:X\to Y$ is a right adjoint to $f$ in the bicat\-egory $\V\mbox{-}\Rel$, so that $f$ is really a ``map" in Lawvere's sense; hence, there are 2-cells \[\xymatrix{1_X\ar[r]^{\lambda_f}& f^\circ f&\mbox{ and }&ff^\circ \ar[r]^{\rho_f}& 1_Y}\]satisfying the triangular identities.
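Again for $\V = \mbox{\sf 2}$, a 2-cell between relations is simply a containment, so the unit and counit of the adjunction $f \dashv f^\circ$ amount to $1_X \subseteq f^\circ f$ and $f f^\circ \subseteq 1_Y$. The following sketch (same illustrative encoding as before) checks both containments for a sample map:

```python
# For V = 2: a Set-map f : X -> Y becomes the relation graph(f) = {(x, f(x))},
# transposition swaps pairs, and the adjunction f -| f^o amounts to the
# containments  1_X <= f^o f  (unit lambda_f)  and  f f^o <= 1_Y  (counit rho_f).

def graph(f, X):
    return {(x, f(x)) for x in X}

def transpose(r):
    return {(y, x) for (x, y) in r}

def compose(r, s):
    # composite s r in the text's notation: r acts first, then s
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

X = {0, 1, 2}
Y = {'a', 'b'}
f = {0: 'a', 1: 'a', 2: 'b'}.get

gf = graph(f, X)
identity_X = {(x, x) for x in X}
identity_Y = {(y, y) for y in Y}

# f^o f : X -|-> X relates x, x' iff f(x) = f(x'); it contains 1_X.
assert identity_X <= compose(gf, transpose(gf))
# f f^o : Y -|-> Y is contained in 1_Y (equal here since f is surjective).
assert compose(transpose(gf), gf) <= identity_Y
```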
We fix a \emph{monad $\mon{T}=(T,e,m)$ on $\Set$ with a lax extension to $\V\mbox{-}\Rel$}, again denoted by $\mon{T}$, so that: \begin{enumerate}[--] \item There is a lax functor $T:\V\mbox{-}\Rel\to\V\mbox{-}\Rel$ which extends the given $\Set$-functor; hence, for an arrow $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ we are given $Tr:TX\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} TY$, with $Tr$ a $\Set$-map if $r$ is one, and $T$ extends to 2-cells functorially: \[T(\varphi '\cdot\varphi )=T\varphi '\cdot T\varphi ,\;\;T1_r=1_{Tr};\] furthermore, for all $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ and $s:Y\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Z$ there are natural and coherent 2-cells \[\kappa=\kappa_{s,r}:TsTr\longrightarrow T(sr),\] so that the following diagrams commute: \begin{equation}\tag{lax}\label{eq:lax} \xymatrix{TsTr\ar[r]^{\kappa_{s,r}}\ar[d]_{(T\psi)(T\varphi )}&T(sr)\ar[d]^{T(\psi\varphi )}& TtT(sr)\ar[r]^{\kappa_{t,sr}}&T(tsr)\\ Ts'Tr'\ar[r]^-{\kappa_{s',r'}}&T(s'r') &TtTsTr\ar[r]^-{\kappa_{t,s}-}\ar[u]^{-\kappa_{s,r}}&T(ts)Tr \ar[u]_{\kappa_{ts,r}}} \end{equation} (also: $\kappa_{r,1_X}=1_{Tr}=\kappa_{1_Y,r}$; all unity and associativity isomorphisms are suppressed).\vspace*{1mm}\\ Furthermore, \emph{we assume that $T(f^\circ)=(Tf)^\circ$ for every map $f$.}
\noindent It follows that, whenever $f$ is a $\Set$-map, $\kappa_{s, f}$ is invertible. Its inverse is the composite \[T(sf) \xrightarrow{-\lambda_{Tf}} T(sf)Tf^\circ Tf \xrightarrow{\kappa_{sf, f^\circ}-} T(sff^\circ) Tf \xrightarrow{T(s\rho_f)-} Ts Tf.\] Also, $\kappa_{f^\circ, t}$ is invertible, for $t:Z\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$. Its inverse is the composite \[T(f^\circ t) \xrightarrow{\lambda_{Tf}-} Tf^\circ Tf T(f^\circ t) \xrightarrow{-\kappa_{f, f^\circ t}} Tf^\circ T(f f^\circ t) \xrightarrow{-T(\rho_ft)} Tf^\circ Tt.\]
\item The natural transformations $e:1\to T$, $m:T^2\to T$ of $\Set$ are $\op$-lax in $\V\mbox{-}\Rel$, so that for every $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ one has natural and coherent $2$-cells \[\alpha=\alpha_r:e_Yr\to Tre_X,\;\;\beta=\beta_r:m_YT^2r\to Trm_X,\mbox{ as in}\] \begin{equation}\tag{oplax}\label{eq:oplax}
\xymatrix{ X \ar[r]|-{\object@{|}}^r \ar[d]_{e_X} & Y \ar[d]^{e_Y}
\ar@{}[dl]|{\mbox{\large $\stackrel{\alpha}{\Leftarrow}$}} &
T^2X\ar[r]|-{\object@{|}}^{T^2r}\ar[d]_{m_X}&T^2Y
\ar@{}[dl]|{\mbox{\large $\stackrel{\beta}{\Leftarrow}$}}\ar[d]^{m_Y} \\
TX \ar[r]|-{\object@{|}}_{Tr} & TY&
TX\ar[r]|-{\object@{|}}_{Tr}&TY} \end{equation} such that $\alpha_{f}=1_{e_Yf}$, $\beta_{f}=1_{m_YT^2f}$ whenever $r=f$ is a $\Set$-map. \item The following diagrams commute (where again we disregard associativity isomorphisms): \begin{equation}\tag{mon}\label{eq:mon} \xymatrix@!R=3mm{&&m_YTe_YTr\ar[r]^{-\kappa_{e_Y,r}}\ar[dd]_1& m_YT(e_Yr)\ar[r]^{-T\alpha_r}&m_YT(Tre_X)\ar[d]^{-\kappa^{-1}_{Tr,e_X}}\\ m_Ye_{TY}Tr\ar[r]^{-\alpha_{Tr}}\ar[d]_1&m_YT^2re_{TX}\ar[d]^{\beta_r-}& &&m_YT^2rTe_X\ar[d]^{\beta_r-}\\ Tr\ar[r]^-{1}&Trm_Xe_{TX}&Tr\ar[rr]^-{1}&&Trm_XTe_X\\ m_YTm_YT^3r\ar[d]_1\ar[r]^{-\kappa_{m_Y,T^2r}}& m_YT(m_YT^2r)\ar[r]^{-T\beta_r}& m_YT(Trm_X)\ar[d]^{-\kappa^{-1}_{Tr,m_X}}\\ m_Ym_{TY}T^3r\ar[d]_{-\beta_{Tr}}&&m_YT^2rTm_X\ar[d]^{\beta_r-}\\ m_YT^2rm_{TX}\ar[r]_{\beta_r-}&Trm_Xm_{TX}\ar[r]_1& Trm_XTm_X.}\end{equation} \item One also needs the coherence conditions \begin{equation}\tag{coh}\label{eq:coh} \xymatrix@!R=3mm{e_Zsr\ar[r]^{\alpha_s-}\ar[d]_{1}&Tse_Yr\ar[r]^{-\alpha_r}& TsTre_X\ar[d]^{\kappa_{s,r}-}\\ e_Zsr\ar[rr]^{\alpha_{sr}}&&T(sr)e_X\\ m_ZT^2sT^2r\ar[r]^{\beta_s-}\ar[d]_{-\kappa_{Ts,Tr}} &Tsm_YT^2r\ar[r]^{-\beta_r}&TsTrm_X\ar[dd]^{\kappa_{s,r}-}\\ m_ZT(TsTr)\ar[d]_{-T\kappa_{s,r}}&&\\ m_ZT^2(sr)\ar[rr]^{\beta_{sr}}&&T(sr)m_X.} \end{equation} \item And the following naturality conditions, for all $\varphi :r\to r'$, \begin{equation}\tag{nat}\label{eq:nat} T\varphi e_X\cdot\alpha_r=\alpha_{r'}\cdot e_Y\varphi \;\mbox{ and } T\varphi m_X\cdot\beta_r=\beta_{r'}\cdot m_YT^2\varphi . \end{equation} \end{enumerate} The op-lax natural transformations $e$ and $m$ induce two lax natural transformations \[(e^\circ,\hat{\alpha}):T\to\Id_{\V\mbox{-}\Rel}\mbox{ and }(m^\circ,\hat{\beta}):T\to T^2\] on $\V\mbox{-}\Rel$: for each $r:X\relto Y$ we have
\[\xymatrix{TX \ar[r]|-{\object@{|}}^{Tr}
\ar[d]|-{\object@{|}}_{e_X^\circ}\ar@{}[dr]|{\mbox{\large $\stackrel{\hat{\alpha}}{\Rightarrow}$}} & TY \ar[d]|-{\object@{|}}^{e_Y^\circ}
&
TX\ar[r]|-{\object@{|}}^{Tr}\ar[d]|-{\object@{|}}_{m_X^\circ}\ar@{}[dr]|{\mbox{\large $\stackrel{\hat{\beta}}{\Rightarrow}$}}&TY
\ar[d]|-{\object@{|}}^{m_Y^\circ} \\
X \ar[r]|-{\object@{|}}_{r} & Y&
T^2X\ar[r]|-{\object@{|}}_{T^2r}&T^2Y}\] where $\hat{\alpha}_r:re_X^\circ\to e_Y^\circ Tr$ and $\hat{\beta}_r:T^2rm_X^\circ\to m_Y^\circ Tr$, are mates of $\alpha_r$ and $\beta_r$ respectively, i.e. they are defined by the composites: \[\xymatrix{re_X^\circ\ar[r]^-{\lambda_{e_Y}-}&e_Y^\circ e_Y re_X^\circ\ar[r]^-{-\alpha_r-}&e_Y^\circ Tr e_Xe_X^\circ\ar[r]^-{-\rho_{e_X}}&e_Y^\circ Tr\\ T^2rm_X^\circ\ar[r]^-{\lambda_{m_Y}-}&m_Y^\circ m_Y T^2r m_X^\circ\ar[r]^-{-\beta_r-}&m_Y^\circ Tr m_X m_X^\circ\ar[r]^-{-\rho_{m_X}}&m_Y^\circ Tr.}\]
\section{$(\mon{T},\V)$-categories}\label{sect:three} Now we define the 2-cat\-egory $(\mT,\V)\mbox{-}\Cat$ of $(\mon{T},\V)$-categories, $(\mon{T},\V)$-functors and transformations between these: \begin{enumerate}[--] \item \emph{$(\mon{T},\V)$-categories} are defined as $(X,a,\eta_a,\mu_a)$, with $X$ a set, $a:TX\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$ a $\V$-relation, and $\eta_a$ and $\mu_a$ 2-cells as in the following diagrams: \[
\xymatrix{X\ar[dr]_{1_X}\ar[r]^{e_X}&TX\ar[d]|-{\object@{|}}^a&TX\ar[d]|-{\object@{|}}_a&T^2X\ar[l]|-{\object@{|}}_{Ta}\ar[d]^{m_X}\\
\ar@{}[ur]|(.75){\mbox{\large $\stackrel{\eta_a}{\Rightarrow}$}}&X&X\ar@{}[ur]|{\mbox{\large $\stackrel{\mu_a}{\Rightarrow}$}}&
TX;\ar[l]|-{\object@{|}}^a} \] furthermore, $\eta_a,\,\mu_a$ provide a generalized monad structure on $a$, i.e., the following diagrams must commute (modulo associativity isomorphisms): \begin{equation}\tag{cat}\label{eq:cat} \xymatrix{ae_Xa\ar[r]^-{-\alpha_a} & aTae_{TX}\ar[d]^-{\mu_a-} &aT(ae_X)\ar[r]^-{-\kappa^{-1}_{a,e_X}}&aTaTe_X\ar[d]^{\mu_a-}\\
a\ar[u]^{\eta_a -}\ar[r]^-{1} & am_Xe_{TX} & a\ar[u]^{-T\eta_a}\ar[r]^-{1}&a m_XTe_X\\ aTaT^2a\ar[r]^{-\kappa_{a,Ta}}\ar[d]_{\mu_a-} & aT(aTa)\ar[r]^{-T\mu_a} &aT(am_X)\ar[d]^{-\kappa^{-1}_{a,m_X}}\\ am_XT^2a\ar[d]_{-\beta_a}& &aTaTm_X\ar[d]^{\mu_a-}\\ aTam_{TX}\ar[r]^{\mu_a -}&am_Xm_{TX}\ar[r]^{1}&am_XTm_X.} \end{equation} We will sometimes denote a $(\mon{T},\V)$-category $(X,a,\eta_a,\mu_a)$ simply by $(X,a)$.
\item A \emph{$(\mon{T},\V)$-functor} $(f,\varphi _f):(X,a,\eta_a,\mu_a)\to(Y,b,\eta_b,\mu_b)$ between $(\mon{T},\V)$-categories is given by a $\Set$-map $f:X\to Y$ equipped with a 2-cell $\varphi_f:fa\to bTf$ \[\label{eq:old6}
\xymatrix{TX\ar[r]^{Tf}\ar[d]|-{\object@{|}}_a&
TY\ar[d]|-{\object@{|}}^b\\
X\ar[r]_f\ar@{}[ur]|{\mbox{\large $\stackrel{\varphi_f}{\Rightarrow}$}}&Y} \] making the following diagrams commute: \begin{equation}\tag{fun}\label{eq:fun} \xymatrix@!R=.2mm{f\ar[r]^-{-\eta_a}\ar[dd]_{\eta_b -} & fae_X\ar[dd]^{\varphi _f -}\\ \\
be_Yf\ar[r]^-{1} & bTfe_X\\
faTa\ar[rr]^{-\mu_a}\ar[dd]_{\varphi_f-}&&fam_X\ar[ddd]^{\varphi _f -}\\ &&\\ bTfTa\ar[dd]_{-\kappa_{f,a}}&&\\ &&bTfm_X\ar[ddd]^{1}\\ bT(fa)\ar[dd]_{-T\varphi _f}&&\\ &&\\ bT(bTf)\ar[r]^-{-\kappa^{-1}_{b,Tf}}&bTbT^2f\ar[r]^-{\mu_b-}&bm_YT^2f.} \end{equation}
\item A \emph{$(\mon{T},\V)$-natural transformation} (or simply a \emph{natural transformation}) between $(\mon{T},\V)$-functors $(f, \varphi _f) \to (g, \varphi _g)$ is defined as a 2-cell $\zeta: ga \to bTf$ \[
\xymatrix{TX\ar[r]^{Tf}\ar[d]|-{\object@{|}}_a&
TY\ar[d]|-{\object@{|}}^b\\
X\ar[r]_g\ar@{}[ur]|{\mbox{\large $\stackrel{\zeta}{\Rightarrow}$}}&Y} \] such that the two sides of the following diagram commute \[\xymatrix@C=.6em@R=1.2em{& \ar[ld]_{\zeta -}gae_Xa&& \ar[ll]_-{-\eta_a-} ga\ar[dddd]_{\zeta} \ar[rr]^-{1}&&ga \ar[rd]^{\varphi _g}&\\
\ar[d]^{1}bTfe_Xa&&&&&&bTg \ar[d]_{-T(g\eta_a)}\\ \ar[d]^{-\varphi _f}be_Yfa&&&&&&bT(gae_X)\ar[d]_{-T(\zeta e_X)}\\ \ar[rd]_-{-\alpha_b-}be_YbTf&&&&&&bT(bTfe_X)\ar[ld]^-{-\kappa^{-1}_{b,e_Yf}}\\ &\ar[rr]_-{\mu_b-}bTbe_{TY}Tf&&bTf&&bTbTe_YTf\ar[ll]^-{\mu_b-}&} \] Such a 2-cell $\zeta$ is determined by the 2-cell \begin{equation}\tag{$\zeta_0$}\label{eq:zeta0} \xymatrix{(g\ar[r]^-{\zeta_0}& be_Yf)=(g\ar[r]^-{-\eta_a}&gae_X\ar[r]^-{\zeta-}&bTfe_X= be_Yf),} \end{equation} from which it can be reconstructed by either side of the above diagram. \end{enumerate}
The composite of $(\mon{T}, \V)$-functors $(f, \varphi _f)$ and $(g, \varphi _g)$ is defined by the picture
\[\xymatrix{TX\ar[r]^{Tf}\ar[d]|-{\object@{|}}_a&
TY\ar[d]|-{\object@{|}}^b\ar[r]^{Tg}&TZ\ar[d]|-{\object@{|}}^c\\
X\ar[r]_f\ar@{}[ur]|{\mbox{\large
$\stackrel{\varphi_f}{\Rightarrow}$}}&Y\ar[r]_g\ar@{}[ur]|{\mbox{\large $\stackrel{\varphi_g}{\Rightarrow}$}}&Z,}\] that is as $(gf,\varphi_{gf})$, with $\varphi_{gf}=(\varphi_g Tf)(g\varphi_f)$. The identity $(\mon{T},\V)$-functor on $(X,a)$ is $(1_X,1_a)$. The horizontal composition of $(\mon{T},\V)$-natural transformations $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ and $\zeta' : (f', \varphi _{f'}) \to (g', \varphi _{g'})$ is defined by a picture obtained from the above one by replacing $\varphi _f$ and $\varphi _g$ with $\zeta$ and $\zeta'$. The vertical composition of $(\mon{T},\V)$-natural transformations $\zeta:(f,\varphi _f)\to (g,\varphi _g)$ and $\zeta':(g,\varphi _g)\to(h,\varphi _h)$ is defined by the diagram \[\xymatrix{TX\ar@/_3pc/[dd]_{1_{TX}}^{\mbox{\large ${\stackrel{T\eta_a}{\Rightarrow}}$}}
\ar[d]_{Te_X}\ar[rr]^{Tf}&&TY\ar[d]^{Te_Y}\ar@/^3pc/[ddd]|-{\object@{|}}^{bm_YTe_Y=b}_{\mbox{\large ${\stackrel{\mu_b-}{\Rightarrow}}$}}
\\
T^2X\ar@{}[rrd]|{\mbox{\large ${\stackrel{T\zeta}{\Rightarrow}}$}}\ar[d]|-{\object@{|}}_{Ta}\ar[rr]^{T^2f}&&T^2Y\ar[d]|-{\object@{|}}^{Tb}\\
TX\ar@{}[rrd]|{\mbox{\large ${\stackrel{\zeta'}{\Rightarrow}}$}}\ar[rr]^{Tg}\ar[d]|-{\object@{|}}_a&&TY\ar[d]|-{\object@{|}}^b\\ X\ar[rr]^h&&Y.}\] The identity natural transformation on a $(\mon{T}, \V)$-functor $(f, \varphi _f)$ is the 2-cell $\varphi _f$ itself.
The definitions of horizontal and vertical compositions can be naturally stated in terms of the alternative definition (\ref{eq:zeta0}) of $(\mon{T},\V)$-natural transformation too.
When $\mon{T}$ is the identity monad, identically extended to $\V\mbox{-}\Rel$, the 2-category $(\mT,\V)\mbox{-}\Cat$ is exactly the 2-category $\V\mbox{-}\Cat$ of $\V$-categories, $\V$-functors and $\V$-natural transformations.
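To make the simplest instance of this recovery concrete (as a reader's aid only): for $\V=\mbox{\sf 2}$, a $\V$-category is a set equipped with a reflexive and transitive relation, i.e.\ a preorder, the unit $\eta_a$ expressing reflexivity ($1_X\le a$) and the multiplication $\mu_a$ transitivity ($a\cdot a\le a$). A minimal check of these two laws on finite data, as a Python sketch with names of our own choosing:

```python
# A V-category for V = 2 on a finite carrier is a preorder:
# the unit eta_a says 1_X <= a (reflexivity), and the
# multiplication mu_a says a . a <= a (transitivity).
def is_two_enriched_category(xs, a):
    reflexive = all(a(x, x) for x in xs)
    transitive = all(a(x, z)
                     for x in xs for y in xs for z in xs
                     if a(x, y) and a(y, z))
    return reflexive and transitive

# Divisibility is a preorder on the positive integers.
def divides(x, y):
    return y % x == 0
```

Here the enriched composition law degenerates to a Boolean implication, which is why a single `all` over triples suffices.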
Next we briefly summarize our two main examples. In the first, $\V=\mbox{\sf 2}$ and $\mon{T}$ is the ultrafilter monad together with a suitable extension to $\mbox{\sf 2}\mbox{-}\Rel = \Rel$; in this case $(\mT,\V)\mbox{-}\Cat$ is the category of topological spaces and continuous maps. In the second, $\V=\Set$ and $\mon{T}$ is the free-monoid monad with a suitable extension to $\Set\mbox{-}\Rel = \rm\bf{Span}$; in this case $(\mT,\V)\mbox{-}\Cat$ is the category of multicategories and multifunctors. For details on these and further examples, see \cite{CT03, HST14}.
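The $\Set$-monad underlying the second example can be exhibited concretely: $TX$ is the set of finite lists of elements of $X$, $e_X$ forms singleton lists, and $m_X$ flattens a list of lists. The following Python sketch (the function names are ours) lets the monad laws be checked on sample data:

```python
# The free-monoid monad on Set: TX = finite lists over X.
def e(x):            # unit: x |-> [x]
    return [x]

def m(xss):          # multiplication: flatten one level of nesting
    return [x for xs in xss for x in xs]

def T(f):            # functor action on a map f
    return lambda xs: [f(x) for x in xs]
```

The unit laws $m_X\,e_{TX}=1_{TX}=m_X\,Te_X$ and the associativity law $m_X\,m_{TX}=m_X\,Tm_X$ then hold by inspection on any sample input.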
For any $\mon{T}$ there is an adjunction of 2-functors: \begin{equation}\tag{adj}\label{eq:adj} (\mT,\V)\mbox{-}\Cat\adjunct{A^\circ}{A_e}\V\mbox{-}\Cat. \end{equation} $A_e$ is the algebraic functor associated with $e$, that is, for any $(\mon{T},\V)$-category $(X,a,\eta_a,\mu_a)$, $(\mon{T},\V)$-functor $(f,\varphi_f)$ and $(\mon{T},\V)$-natural transformation $\zeta:(f,\varphi_f)\to(g,\varphi_g)$, $A_e(X,a,\eta_a,\mu_a)=(X,ae_X,\eta_a,\overline{\mu}_a)$, where \[\xymatrix{(ae_Xae_X\ar[r]^-{\overline{\mu}_a} &ae_X)=(ae_Xae_X\ar[r]^-{-\alpha_a-}&aTae_{TX}e_X\ar[r]^-{\mu_a-}&am_Xe_{TX}e_X=ae_X),}\] $A_e(f,\varphi_f)=(f,\varphi_fe_X)$ and $A_e(\zeta)=\zeta e_X$ (see \cite{CT03} for details).
$A^\circ$ is defined as follows. For a $\V$-category $(Z,c,\eta_c,\mu_c)$, $A^\circ(Z,c,\eta_c,\mu_c)$ is the $(\mon{T},\V)$-category $(Z,c^\sharp,\eta_{c^\sharp},\mu_{c^\sharp})$ where $c^\sharp=e_Z^\circ Tc$, while $\eta_{c^\sharp}:1\to e_Z^\circ Tce_Z$ and $\mu_{c^\sharp}:e_Z^\circ Tc T(e_Z^\circ Tc)\to e_Z^\circ Tc m_Z$ are defined by the composites \[\xymatrix{1\ar[r]^-{\lambda_{e_Z}}&e_Z^\circ e_Z\ar[r]^-{-T\eta_c-}&e_Z^\circ Tc e_Z}\] \[\xymatrix@R=1.5em{
T^2Z \ar[dd]_{m_Z} \ar[r]|-{\object@{|}}^{T^2c} \ar@{}[rdd]|{\mbox{\large $\stackrel{\beta_c}{\Leftarrow}$}} \ar@/^2.5pc/[rrr]|-{\object@{|}}^{T(e_Z^\circ Tc)}_{\mbox{\large $\stackrel{\kappa^{-1}_{e_Z^\circ, Tc}}{\Leftarrow}$}} &T^2Z \ar[rr]|-{\object@{|}}^{Te^\circ_Z} \ar[d]_{1_{T^2Z}} \ar@{}[rd]|{\mbox{\large $\stackrel{\rho_{Te_Z}}{\Leftarrow}$}} &&TZ \ar[d]|-{\object@{|}}_{Tc} \ar@/^1pc/[lld]^{Te_Z}\\
&T^2Z \ar[d]_{m_Z} && TZ \ar[d]|-{\object@{|}}_{e^\circ_Z}\\
TZ \ar[r]|-{\object@{|}}_{Tc} \ar@/_2pc/[rr]|-{\object@{|}}_{Tc}^{\mbox{\large $\stackrel{T\mu_c \kappa_{c,c}}{\Leftarrow}$}}& TZ \ar[r]|-{\object@{|}}_{Tc} &TZ \ar[r]|-{\object@{|}}_{e^\circ_Z}&Z. } \]
For a $\V$-functor $(f, \varphi _f) : (Z, c) \to (Z', c')$, $A^\circ(f, \varphi _f)$ is defined by the diagram
\[\xymatrix{TZ\ar[r]|-{\object@{|}}^{Tc}\ar[d]_{Tf}&
TZ\ar[r]|-{\object@{|}}^{e^\circ_Z}\ar[d]^{Tf}&Z\ar[d]^f\\
TZ'\ar[r]|-{\object@{|}}_{Tc'}\ar@{}[ur]|{\mbox{\large
$\stackrel{T\varphi_f}{\Leftarrow}$}}&TZ'\ar[r]|-{\object@{|}}_{e^\circ_{Z'}}\ar@{}[ur]|{\mbox{\large $\stackrel{}{\Leftarrow}$}}&Z',}\] wherein the right 2-cell is the mate of the identity 2-cell $1_{Tfe_Z=e_{Z'}f}$. On $\V$-natural transformations $A^\circ$ is defined by a similar diagram. Direct verification shows that $A^\circ$ is indeed a 2-functor, and, as already stated, we have:
\begin{prop} $A^\circ$ is a left 2-adjoint to $A_e$. \end{prop}
\begin{proof} The unit of the adjunction has the component at a $\V$-category $(Z,c)$ given by a $\V$-functor consisting of $1_Z$ and the 2-cell \[\xymatrix{c \ar[r]^-{\lambda_{e_Z}-}& e_Z^\circ e_Zc \ar[r]^-{- \alpha_c} &e_Z^\circ Tc e_Z.}\] The counit of the adjunction has the component at a $(\mon{T}, \V)$-category $(X, a)$ given by a $(\mon{T}, \V)$-functor consisting of $1_X$ and the 2-cell \[\xymatrix{e_X^\circ T(ae_X) \ar[r]^-{-\kappa^{-1}_{a, e_X}}& e_X^\circ Ta Te_X \ar[r]^-{\eta_a -}& a e_X e_X^\circ TaTe_X \ar[r]^-{-\rho_{e_X}-} &aTaTe_X\ar[r]^-{\mu_a-} &am_XTe_X = a.}\] The triangle identities are then directly verified. \end{proof}
The next proposition is a $(\mon{T}, \V)$-categorical analogue of the fact, familiar from ordinary and enriched category theory, that an adjunction induces isomorphisms between the corresponding hom-sets or hom-objects.
\begin{prop}\label{th:adj} Given an adjunction $(f, \varphi _f) \dashv (g, \varphi _g) : (X, a) \rightarrow (Y, b)$ in the 2-category $(\mT,\V)\mbox{-}\Cat$, there is an isomorphism: \[g^\circ a \cong bTf.\] \end{prop}
\begin{proof} The unit and the counit of the given adjunction are $(\mon{T}, \V)$-natural transformations $(1_X, 1_a) \to (g, \varphi _g)(f, \varphi _f)$ and $(f, \varphi _f)(g, \varphi _g) \to (1_Y, 1_b)$. These are given by 2-cells $\upsilon_0 : gf \to ae_X$ and $\epsilon_0: 1_Y \to be_Yfg$ respectively. Define a 2-cell $bTf \to g^\circ a$ by \[ \xymatrix@=2em{
TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l^d[dd]`^r[dd]_{1_{TX}} [dd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar[rr]^{1_Y} \ar@{}[rd]+UUU|{\mbox{\large $\stackrel{\lambda_g}{\Leftarrow}$}}&& Y,\\
T^2X \ar[d]_{m_X} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_a}{\Leftarrow}$}}&& TX \ar[rr]|-{\object@{|}}^{a}&& X \ar@<-.3em>[rru]_{g^\circ}&& \\
TX \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{a} && && && \\ } \] wherein the blank symbols stand for the obvious instances of $\kappa$ or $\kappa^{-1}$.
In the opposite direction define a 2-cell $g^\circ a \to bTf$ by \[ \xymatrix@=2em{
&& && && Y \ar[d]^{g} \ar@{}[ld]|{\mbox{\large $\stackrel{\rho_g}{\Leftarrow}$}} \ar@/^2pc/[dddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}}\\
TX \ar[rrrr]|-{\object@{|}}_{a} \ar[d]_{Tf} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}} && && X \ar[d]^{f} \ar[rr]^{1_X} \ar@<.3em>[rru]^{g^\circ} && X \ar[d]^{f} \\
TY\ar[rrrr]|-{\object@{|}}^{b} \ar[d]_{e_{TY}} \ar `l^d[dd]`^r[dd]_{1_{TY}} [dd] \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\alpha_b}{\Leftarrow}$}}&& && Y \ar[d]_{e_Y} && Y \ar[d]^{e_Y} \\
T^2Y \ar[rrrr]|-{\object@{|}}_{Tb}\ar[d]_{m_Y} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && TY \ar[d]|-{\object@{|}}_{b} && TY \ar[d]|-{\object@{|}}^{b}\\
TY \ar[rrrr]|-{\object@{|}}_{b} && && Y \ar[rr]_{1_Y} && Y. } \] These two 2-cells are mutually inverse. The following calculation shows that the equality $(bTf \to g^\circ a \to bTf) = 1_{bTf}$ holds; the remaining equation is proved by analogous arguments. Pasting the first diagram on top of the second, and using the equation $(\rho_gg) (g\lambda_g)= 1_g$, we obtain \[ \xymatrix@=2em{
TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l^d[dd]`^r[dd]_{1_{TX}} [dd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[ddddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
T^2X \ar[d]_{m_X} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_a}{\Leftarrow}$}}&& TX \ar[rr]|-{\object@{|}}^{a}&& X \ar[dd]_{f} \\
TX \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{a} \ar[d]_{Tf} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}} && && \\
TY\ar[rrrr]|-{\object@{|}}^{b} \ar[d]_{e_{TY}} \ar `l^d[dd]`^r[dd]_{1_{TY}} [dd] \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\alpha_b}{\Leftarrow}$}}&& && Y \ar[d]_{e_Y} \\
T^2Y \ar[rrrr]|-{\object@{|}}_{Tb}\ar[d]_{m_Y} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && TY \ar[d]|-{\object@{|}}_{b} \\
TY \ar[rrrr]|-{\object@{|}}_{b} && && Y; } \] using (\ref{eq:fun}) for $(f, \varphi _f)$ we get \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar@/_1.5pc/[ldd]_{1_{TX}} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[ddddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} \ar[ld]_{m_X}&& TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
TX \ar[rd]_{Tf} &T^2Y \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{m_Y} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} &&TY \ar[rr]|-{\object@{|}}^{b} && Y \ar[dd]_{e_Y}\\
&TY \ar[d]_{e_{TY}} \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b} \ar `l^d[dd]`^r[dd]_{1_{TY}} [dd] \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\alpha_b}{\Leftarrow}$}}&& && \\
&T^2Y \ar[rrrr]|-{\object@{|}}_{Tb}\ar[d]_{m_Y} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && TY \ar[d]|-{\object@{|}}_{b} \\
&TY \ar[rrrr]|-{\object@{|}}_{b} && && Y. } \] Then, using naturality of $\alpha$ we obtain \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar@/_1.5pc/[ldd]_{1_{TX}} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[ddddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} \ar[ld]_{m_X}&& TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
TX \ar[d]_{Tf} &T^2Y \ar[ld]_{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{e_{T^2Y}} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{Tb}}{\Leftarrow}$}} &&TY \ar[d]_{e_{TY}} \ar[rr]|-{\object@{|}}^{b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{b}}{\Leftarrow}$}}&& Y \ar[d]_{e_Y}\\
TY \ar@/_1.5pc/[rdd]_{1_{TY}} \ar[rd]_{e_{TY}}&T^3Y \ar[rr]|-{\object@{|}}^{T^2b} \ar[d]_{Tm_Y} \ar@{}[rrrd]|{\mbox{\large $\stackrel{-T\mu_b-}{\Leftarrow}$}}&& T^2Y \ar[rr]|-{\object@{|}}^{Tb}&& TY \ar[dd]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]_{m_Y} \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{Tb}\ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && \\
&TY \ar[rrrr]|-{\object@{|}}_{b} && && Y, } \] and using the associativity axiom in (\ref{eq:cat}) for $\mu_b$ we get \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[dddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
&T^2Y \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{e_{T^2Y}} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{Tb}}{\Leftarrow}$}} &&TY \ar[d]_{e_{TY}} \ar[rr]|-{\object@{|}}^{b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{b}}{\Leftarrow}$}}&& Y \ar[d]_{e_Y}\\
&T^3Y \ar[rr]|-{\object@{|}}^{T^2b} \ar[d]+<-1em,.8em>_{Tm_Y} \ar@<.5em>[d]^{m_{TY}} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\mu_b-}{\Leftarrow}$}}&& T^2Y \ar[d]_{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \enspace T^2Y \ar@<.5em>[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y. \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b} \ar[u]+<-1em,-.8em>;[]_{m_Y}&& && } \] From (\ref{eq:mon}) we obtain \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[dddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
&T^2Y \ar@/^1.5pc/[dd]^(0.4){1_{T^2Y}} \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{e_{T^2Y}} &&TY \ar@/_1.5pc/[dd]_(0.6){1_{TY}} \ar[d]^{e_{TY}} \ar[rr]|-{\object@{|}}^{b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{b}}{\Leftarrow}$}}&& Y \ar[d]_{e_Y}\\
&T^3Y \ar[d]_{m_{TY}} && T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]_{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y, \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& && } \] and the axiom of a $(\mon{T}, \V)$-natural transformation for $\epsilon_0$ gives \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[d]^{Tg} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[d]^{Tf} && \\
&T^2Y \ar[dd]_{1_{T^2Y}} \ar[rr]|-{\object@{|}}^{Tb} &&TY \ar@/_1.5pc/[dd]_(0.6){1_{TY}} \ar[d]^{Te_Y} \ar@{}[rr]|{\mbox{\large $\stackrel{-T\epsilon_0-}{\Leftarrow}$}}&& \\
& && T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y. \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& && } \] Using (\ref{eq:mon}) again we obtain \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[d]^{Tg} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[d]^{Tf} && \\
&T^2Y \ar@/_1.3pc/[dd]_{1_{T^2Y}} \ar[d]^{Te_{TY}} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\alpha_b-}{\Leftarrow}$}} &&TY \ar[d]^{Te_Y} \ar@{}[rr]|{\mbox{\large $\stackrel{-T\epsilon_0-}{\Leftarrow}$}}&& \\
&T^3Y \ar[d]^{m_{TY}} \ar[rr]|-{\object@{|}}^{T^2b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\beta_b}{\Leftarrow}$}}&& T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y, \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& && } \] and using associativity of $\mu_b$ again we get \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[d]^{Tg} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[d]^{Tf} && \\
&T^2Y \ar[d]_{Te_{TY}} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\alpha_b-}{\Leftarrow}$}} &&TY \ar[d]^{Te_Y} \ar@{}[rr]|{\mbox{\large $\stackrel{-T\epsilon_0-}{\Leftarrow}$}}&& \\
&T^3Y \ar[d]+<-1em,.8em>_{m_{TY}} \ar@<.5em>[d]^{Tm_Y} \ar[rr]|-{\object@{|}}^{T^2b} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\mu_b-}{\Leftarrow}$}}&& T^2Y \ar[rr]|-{\object@{|}}^{Tb} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \enspace T^2Y \ar@<.5em>[d]^{m_Y} \ar@/^-1.2pc/[rrrru]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && Y. \\
& TY \ar[u]+<-1em,-.8em>;[]_{m_Y} \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& && } \] Now, one of the triangle equations satisfied by the unit $\upsilon_0$ and the counit $\epsilon_0$ of our adjunction gives us \[ \xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrrrddd]|{\mbox{\large $\stackrel{-T\eta_b-}{\Leftarrow}$}} && TY \ar[lldd]^{Te_Y} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\ &T^2X \ar[d]_{T^2f} && && \\
&T^2Y \ar[d]_{Te_{TY}} \ar[rrrrd]|-{\object@{|}}^{Tb} \ar@/^1.5pc/[dd]^{1_{TY}} && && \\
&T^3Y \ar[d]_{Tm_Y} && && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]_{m_Y} \ar@/^-1.2pc/[rrrru]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && Y, \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& && } \] and finally, by the unity axiom in (\ref{eq:cat}), this equals to \[ \xymatrix@=1.5em{
&TX \ar[rr]^{Tf} \ar[dd]_{Tf} && TY \ar[ld]_{Te_Y} \ar[dd]|-{\object@{|}}^{b} \\ &&T^2Y \ar[ld]_{m_Y} & \\
& TY \ar[rr]|-{\object@{|}}^{b}&& Y, } \] which is the identity map $1_{bTf}$.
We leave it to the reader to verify the equality $(g^\circ a \to bTf \to g^\circ a) = 1_{g^\circ a}$. \end{proof}
\section{$\mon{T}$ as a $\V\mbox{-}\Cat$ monad}\label{sect:VCatMonad}
In this section we show that the properties of the lax extension of the $\Set$-monad $\mon{T}$ to $\V\mbox{-}\Rel$ allow us to extend $\mon{T}$ to $\V\mbox{-}\Cat$. Straightforward calculations show that: \begin{lemma} \begin{enumerate}[\em (1)] \item If $(X,a,\eta_a,\mu_a)$ is a $\V$-category, then $(TX,Ta,T\eta_a,T\mu_a\kappa_{a,a})$ is a $\V$-category. \item If $(f,\varphi _f):(X,a,\eta_a,\mu_a)\to(Y,b,\eta_b,\mu_b)$ is a $\V$-functor, then $(Tf,\varphi _{Tf}):(TX,Ta)\to(TY,Tb)$, where $\varphi _{Tf}:=\kappa^{-1}_{b,f} \, T\varphi _f \, \kappa_{f,a}$, is a $\V$-functor as well. \item If $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ is a $\V$-natural transformation, then so is $\kappa^{-1}_{b,f} T\zeta \, \kappa_{g,a}:(Tf, \varphi _{Tf}) \to (Tg, \varphi _{Tg})$. \end{enumerate} \end{lemma} \noindent These assignments define a 2-endofunctor on $\V\mbox{-}\Cat$ that we denote again by $T:\V\mbox{-}\Cat\to\V\mbox{-}\Cat$. The 2-cells $\alpha, \beta$ of the oplax natural transformations $e, m$ on $\V\mbox{-}\Rel$ make $e$ and $m$ into natural transformations on $\V\mbox{-}\Cat$, as we show next.
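For the free-monoid monad, one standard lax extension to ordinary relations admits a concrete description: $Ta$ relates two lists iff they have equal length and are componentwise $a$-related. Under this description, and identifying a $\V$-category for $\V=\mbox{\sf 2}$ with a preorder as before, item (1) of the lemma can be sanity-checked on finite data. The following Python sketch uses names of our own choosing and this pointwise description, which is the usual one for this example rather than something established here:

```python
# One standard extension of the free-monoid (list) monad to relations:
# Ta relates xs to ys iff the lists have equal length and are
# componentwise a-related.
def extend_to_lists(a):
    return lambda xs, ys: (len(xs) == len(ys)
                           and all(a(x, y) for x, y in zip(xs, ys)))

def is_preorder(xs, a):
    # reflexivity and transitivity: the two V-category laws for V = 2
    return (all(a(x, x) for x in xs)
            and all(a(x, z)
                    for x in xs for y in xs for z in xs
                    if a(x, y) and a(y, z)))

def divides(x, y):
    return y % x == 0
```

On all lists of length at most 2 over $\{1,\dots,4\}$, for instance, `extend_to_lists(divides)` is again a preorder, illustrating that $T$ carries $\V$-category structures to $\V$-category structures.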
\begin{lemma} For each $\V$-category $(X,a)$: \begin{enumerate}[\em (1)] \item $(e_X,\alpha_a):(X,a)\to(TX,Ta)$ is a $\V$-functor; \item $(m_X,\beta_a):(T^2X,T^2a)\to(TX,Ta)$ is a $\V$-functor. \end{enumerate} \end{lemma} \begin{proof} To check that the diagrams \[\xymatrix@!C=15ex{e_X\ar[r]^{-\eta_a}\ar[d]_{T\eta_a-}&e_Xa\ar[ld]^{\alpha_a}& m_X\ar[r]^-{-\eta_{T^2a}}\ar[d]_{\eta_{Ta}-}&m_XT^2a\ar[ld]^{\beta_a}\\ Tae_X&&Tam_X}\] commute, one uses the naturality conditions (\ref{eq:nat}) with $\varphi=\eta$ and $\varphi=\beta$, respectively. For the diagrams \[\xymatrix@!C=100pt{e_Xaa\ar[rr]^{-\mu_a}\ar[d]_{\alpha_a-}\ar[rdd]^{\alpha_{aa}}&&e_Xa\ar[dd]^{\alpha_a}\\ Tae_Xa\ar[d]_{-\alpha_a}\\
TaTae_X\ar@{}[rruu]^(0.2){\framebox{1}}\ar@{}[rruu]|{\framebox{2}}\ar[r]^{\kappa_{a,a}-}&T(aa)e_X\ar[r]^{T\mu_a-}&Tae_X\\ m_XT^2aT^2a\ar[r]^{-\kappa_{Ta,Ta}}\ar[d]_{\beta_a-}&m_XT(TaTa)\ar[r]^{-T\kappa_{a,a}}&m_XT^2(aa) \ar[r]^{-T^2\mu_a}\ar[dd]^{\beta_{aa}}&m_XT^2a\ar[dd]^{\beta_a}\\ Tam_XT^2a\ar[d]_{-\beta_a}\\
TaTam_X\ar@{}[rruu]|{\framebox{3}}\ar[rr]^{\kappa_{a,a}-}&&T(aa)m_X\ar@{}[ruu]|{\framebox{4}}\ar[r]^{T\mu_a-}&Tam_X,}\] commutativity of $\framebox{1}$ and $\framebox{3}$ follows from the coherence conditions (\ref{eq:coh}), while commutativity of $\framebox{2}$ and $\framebox{4}$ follows from the naturality conditions (\ref{eq:nat}). \end{proof}
\begin{lemma} For each $\V$-category $(X,a)$, let $e_{(X,a)}=(e_X,\alpha_a)$ and $m_{(X,a)}=(m_X,\beta_a)$. \begin{enumerate}[\em (1)] \item $e=(e_{(X,a)})_{(X,a)\in\V\mbox{-}\Cat}:\Id_{\V\mbox{-}\Cat}\to T$ is a 2-natural transformation. \item $m=(m_{(X,a)})_{(X,a)\in\V\mbox{-}\Cat}:T^2\to T$ is a 2-natural transformation. \end{enumerate} \end{lemma} \begin{proof} To check that, in the diagrams
\[\xymatrix{&X\ar[rr]^{e_X}\ar[ld]|-{\object@{|}}_a\ar[dd]^>>>>>>{f}&&TX\ar[ld]|-{\object@{|}}^{Ta}\ar[dd]^{Tf}&&T^2X\ar[rr]^{m_X}\ar[ld]|-{\object@{|}}_{T^2a}
\ar[dd]^>>>>>>{T^2f}&&TX\ar[ld]|-{\object@{|}}^{Ta}\ar[dd]^{Tf}\\
X\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{f}}\ar@{}[urrr]|{\mbox{\large $\stackrel{\alpha_a}{\Rightarrow}$}}\ar[rr]^(0.65){e_X}\ar[dd]_f&&TX\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{Tf}}\ar[dd]^(0.65){Tf}&&
T^2X\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{T^2f}}\ar@{}[urrr]|{\mbox{\large $\stackrel{\beta_a}{\Rightarrow}$}}\ar[dd]_{T^2f}\ar[rr]^(0.65){m_X}&&TX\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{Tf}}\ar[dd]^(0.65){Tf}\\
&Y\ar[rr]^(0.35){e_Y}\ar[ld]|-{\object@{|}}_b&&TY\ar[ld]|-{\object@{|}}^{Tb}&
&T^2Y\ar[rr]^(0.35){m_Y}\ar[ld]|-{\object@{|}}_{T^2b}&&TY\ar[ld]|-{\object@{|}}^{Tb}\\
Y\ar@{}[urrr]|{\mbox{\large $\stackrel{\alpha_b}{\Rightarrow}$}}\ar[rr]^{e_Y}&&TY&&T^2Y\ar@{}[urrr]|{\mbox{\large $\stackrel{\beta_b}{\Rightarrow}$}}\ar[rr]^{m_Y}&&TY}\] the composites of the 2-cells agree, one again uses diagrams (\ref{eq:nat}) and (\ref{eq:coh}). To prove 2-naturality, take in these diagrams a $\V$-natural transformation $\zeta$ in place of $\varphi _f$. \end{proof}
\begin{theorem} $(T,e,m)$ is a 2-monad on $\V\mbox{-}\Cat$. \end{theorem} \begin{proof} It remains to check the commutativity of the diagrams, for each $\V$-category $(X,a)$, \[\xymatrix@!C=7ex{(TX,Ta)\ar[rr]^-{(e_{TX},\alpha_{Ta})}\ar[ddrr]_{(1,1)}&&(T^2X,T^2a)\ar[dd]^{(m_X,\beta_a)}&&(TX,Ta)\ar[ll]_-{(Te_X,\kappa^{-1} T\alpha_a\kappa)}\ar[ddll]^{(1,1)}&(T^3X,T^3a)\ar[rr]^-{(m_{TX},\beta_{Ta})}\ar[dd]_{(Tm_X,\kappa^{-1} T\beta_a\kappa)}&&(T^2X,T^2a)\ar[dd]^{(m_X,\beta_a)}\\ \\ &&(TX,Ta)&&&(T^2X,T^2a)\ar[rr]^-{(m_X,\beta_a)}&&(TX,Ta),}\] which follows again from diagrams (\ref{eq:nat}) and (\ref{eq:coh}). \end{proof}
Denoting the 2-category of algebras of this 2-monad by $(\V\mbox{-}\Cat)^\mon{T}$, we get a commutative diagram \begin{equation}\tag{$\mon{T}$-alg}\label{eq:Talg}
\xymatrix@!R=5ex{\Set^\mon{T}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&(\V\mbox{-}\Cat)^\mon{T}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[ll]\\
\Set\ar@<1ex>[u]^{F^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\V\mbox{-}\Cat.\ar@<-1ex>[ll]\ar@<1ex>[u]^{F^\mon{T}}} \end{equation}
\section{The fundamental adjunction}\label{sect:FundAdj} \emph{From now on we assume that $\hat{\beta}_r:Trm_X^\circ\to m_Y^\circ Tr$ is an isomorphism for each $\V$-relation $r:X\relto Y$, so that $m^\circ:T\to T^2$ becomes a pseudo-natural transformation on $\V\mbox{-}\Rel$.}
In this section we will build an adjunction \begin{equation}\tag{ADJ}\label{eq:ADJ} (\V\mbox{-}\Cat)^\mon{T}\adjunct{M}{K}(\mT,\V)\mbox{-}\Cat. \end{equation}
Let $((Z,c,\eta_c,\mu_c),(h,\varphi _h))$ be an object of $(\V\mbox{-}\Cat)^\mon{T}$. The $\V$-category unit $\eta_c$ is a 2-cell
$1_Z \to c = che_Z$, the equality $c=che_Z$ coming from the algebra unit law $he_Z=1_Z$. Let $\widetilde{\mu}_c$ be the 2-cell defined by: \begin{equation}\tag{$\widetilde{\mu}_c$}\label{eq:muc} \xymatrix{chT(ch)\ar[r]^-{-\kappa^{-1}_{c,h}}& chTcTh \ar[r]^-{-\varphi _h -}& cch Th = cch m_Z\ar[r]^-{\mu_c -}& ch m_Z.} \end{equation}
\begin{lemma} The data $(Z, ch,\eta_c,\widetilde{\mu}_c)$ gives a $(\mon{T}, \V)$-category. \end{lemma}
\begin{proof} Each of the three $(\mon{T}, \V)$-category axioms follows from the corresponding $\V$-category axiom for $(Z,c,\eta_c,\mu_c)$, using (\ref{eq:mon}) and the fact that $(h,\varphi _h)$ is an algebra structure. \end{proof} \noindent We set \[K((Z,c,\eta_c,\mu_c),(h,\varphi _h))=(Z,ch,\eta_c,\widetilde{\mu}_c).\]
$K$ extends to a 2-functor in the following way. For a morphism of $\mon{T}$-algebras $(f,\varphi _f):((Z,c), h)\to((W,d), k)$, we set $K(f,\varphi _f) = (f,\varphi _fh)$, where $\varphi _f h$ is regarded as a 2-cell $f ch \longrightarrow dfh=dk Tf$. For a natural transformation of $\mon{T}$-algebras $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ we define $K(\zeta)=\zeta h$. Straightforward calculations show that these assignments indeed define a 2-functor.
Let now $(X,a,\eta_a,\mu_a)$ be a $(\mon{T},\V)$-category. Let $\hat{a}=Ta m_X^\circ$. Define a 2-cell $\eta_{\hat{a}} : 1_{TX} \to \hat{a}$ by the composite \begin{equation}\tag{$\eta_{\hat{a}}$}\label{eq:etaa} \xymatrix{1_{TX} = T1_X \ar[r]^-{T\eta_a}& T(ae_X)\ar[r]^{\kappa^{-1}_{a,e_X}}&Ta Te_X \ar[r]^-{-\lambda_{m_X}-}& Ta m_X^\circ m_XTe_X=Tam_X^\circ,} \end{equation} and define $\mu_{\hat{a}} : \hat{a}\hat{a} \to \hat{a}$ by \[ \xymatrix{
TX \ar[rr]|-{\object@{|}}^{m^\circ_X} \ar[d]|-{\object@{|}}_{m^\circ_X} && T^2X \ar[rr]|-{\object@{|}}^{Ta} \ar[d]|-{\object@{|}}_{m^\circ_{TX}} \ar@{}[rrd]|{\mbox{\large$\stackrel{\hat{\beta}^{-1}}{\Leftarrow}$ }}&& TX \ar[d]|-{\object@{|}}_{m^\circ_X} \\
T^2X \ar[rr]|-{\object@{|}}^{Tm^\circ_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{\rho_{Tm^\circ_X}}{\Leftarrow}$}} \ar@/_1.5pc/[rrd]_{1_{T^2X}} && T^3X \ar[rr]|-{\object@{|}}^{T^2a} \ar[d]_{Tm_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\mu_a-}{\Leftarrow}$}}&& T^2X \ar[d]|-{\object@{|}}_{Ta} \\
&&T^2X \ar[rr]|-{\object@{|}}^{Ta} && TX.\\ } \]
\begin{lemma} The data $(TX,\hat{a}, \eta_{\hat{a}},\mu_{\hat{a}})$ determines a $\V$-category. \end{lemma}
\begin{proof}
The three $\V$-category axioms follow from the corresponding $(\mon{T}, \V)$-category axioms for $(X,a,\eta_a,\mu_a)$. \end{proof} Let $\varphi _{\hat{a}} : m_XT\hat{a}\to \hat{a} m_X$ be the composite 2-cell \[\xymatrix@R=1.5em{
T^2X \ar[d]_{m_X} \ar[rr]|-{\object@{|}}^{Tm^\circ_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{}{\Leftarrow}$}} \ar@/^3.pc/[rrrr]|-{\object@{|}}^{T(Tam^\circ_X)}_{\mbox{\large $\stackrel{\kappa^{-1}_{Ta,m^\circ_X}}{\Leftarrow}$}} &&T^3X \ar[rr]|-{\object@{|}}^{T^2a} \ar[d]_{m_{TX}} \ar@{}[rrd]|{\mbox{\large $\stackrel{\beta_a}{\Leftarrow}$}} &&T^2X \ar[d]^{m_X}\\
TX \ar[rr]|-{\object@{|}}_{m^\circ_X}&&T^2X \ar[rr]|-{\object@{|}}_{Ta} && TX, } \] wherein the left 2-cell is the mate of the identity map $1_{m_X m_{TX}=m_XTm_X}$. Direct calculations yield: \begin{lemma} The pair $(m_X, \varphi _{\hat{a}})$ is a $\V$-functor $T(TX, \hat{a}) \to (TX, \hat{a})$; moreover, it defines a $\mon{T}$-algebra structure on the $\V$-category $(TX, \hat{a})$. \end{lemma} We set \[M(X,a)=((TX,\hat{a}), (m_X, \varphi _{\hat{a}})).\] We extend this construction to a 2-functor as follows. For a $(\mon{T},\V)$-functor $(f,\varphi _f):(X,a)\to(Y,b)$, $M(f,\varphi _f)=(Tf,\widetilde{\varphi }_{Tf})$, where $\widetilde{\varphi }_{Tf}$ is given by \[\xymatrix@R=1.5em{
TX \ar[d]_{Tf} \ar[rr]|-{\object@{|}}^{m^\circ_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{\hat{\beta}_f}{\Leftarrow}$}} &&T^2X \ar[rr]|-{\object@{|}}^{Ta} \ar[d]_{T^2f} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} &&TX \ar[d]^{Tf}\\
TY \ar[rr]|-{\object@{|}}_{m^\circ_Y}&&T^2Y \ar[rr]|-{\object@{|}}_{Tb} && TY. }\] For a natural transformation of $(\mon{T},\V)$-functors $\zeta : (f, \varphi _f) \to (g, \varphi _g)$, $M(\zeta)$ is defined by a similar diagram. By direct verification $M$ is a 2-functor.
\begin{theorem} $M$ is a left 2-adjoint to $K$. \end{theorem}
\begin{proof} Given a $(\mon{T},\V)$-category $(X,a,\eta_a,\mu_a)$, \[ \xymatrix{(e_X,\widetilde{\alpha}_a):(X,a,\eta_a,\mu_a)\ar[r]& KM(X,a,\eta_a,\mu_a)=(TX,Tam_X^\circ m_X,\eta_{\hat{a}},\widetilde{\mu}_{\hat{a}}),}\] is a $(\mon{T}, \V)$-functor, where $\widetilde{\alpha}_a$ is the composite \begin{equation}\tag{unit}\label{eq:unit} \xymatrix{(e_Xa\ar[r]^-{\alpha_a}&Tae_{TX}\ar[rr]^-{-\lambda_{m_X}-}&&Tam_X^\circ m_Xe_{TX}=Tam_X^\circ m_X Te_X).}\end{equation} These functors define a natural transformation $1 \rightarrow KM$. Given a $\mon{T}$-algebra $((Z,c,\eta_c,\mu_c),(h,\varphi _h))$, \[\xymatrix{(h, \widetilde{\varphi }_h): MK((Z,c,\eta_c,\mu_c),(h,\varphi _h))=(TZ,T(ch)m_Z^\circ,\eta_{\widehat{ch}},\mu_{\widehat{ch}}) \ar[r]& ((Z,c,\eta_c,\mu_c),(h,\varphi _h))},\] is a morphism of $\mon{T}$-algebras, where $\widetilde{\varphi }_h$ is defined as \[hT(ch)m_Z^\circ\xrightarrow{-\kappa_{c,h}^{-1}}h TcTh m_Z^\circ \xrightarrow{\varphi _h -} ch Th m_Z^\circ = ch m_Zm_Z^\circ \xrightarrow{- \rho_{m_Z}} ch.\]
These morphisms define a natural transformation $MK \rightarrow 1$. The two natural transformations just described serve as the unit and the counit of our adjunction, and the triangle identities are verified by straightforward calculation. \end{proof}
\section{$\mon{T}$ as a $(\mT,\V)\mbox{-}\Cat$ monad}
Let us identify the 2-monad on $(\mT,\V)\mbox{-}\Cat$ induced by the adjunction $M \dashv K$, which we denote again by $\mon{T}= (KM = T,e,m)$.
Thus, $T = KM$ is a 2-endofunctor on $(\mT,\V)\mbox{-}\Cat$. To a $(\mon{T}, \V)$-category $(X, a,\eta_a,\mu_a)$ it assigns the $(\mon{T},\V)$-category $(TX, \hat{a} m_X=Tam_X^\circ m_X,\eta_{\hat{a}},\widetilde{\mu}_{\hat{a}})$ with components defined in the diagrams (\ref{eq:etaa}) and (\ref{eq:muc}) of the last section; to a $(\mon{T}, \V)$-functor $(f, \varphi _f)$ it assigns the $(\mon{T},\V)$-functor $(Tf,\widetilde{\varphi }_f)$, which can be diagrammatically specified by \[\xymatrix{T^2X\ar[d]_{m_X}\ar[rr]^{T^2f}&&T^2Y\ar[d]^{m_Y}\\
TX\ar[d]|-{\object@{|}}_{m_X^\circ}\ar[rr]^{Tf}&&TY\ar[d]|-{\object@{|}}^{m_Y^\circ}\\
T^2X\ar@{}[urr]|{\mbox{\large $\stackrel{\hat{\beta}_f}{\Rightarrow}$}}\ar[d]|-{\object@{|}}_{Ta}\ar[rr]^{T^2f}&&T^2Y\ar[d]|-{\object@{|}}^{Tb}\\
TX\ar@{}[urr]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Rightarrow}$}}\ar[rr]^{Tf}&&TY,}\]
and the $\mon{T}$-image of a $(\mon{T}, \V)$-natural transformation $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ is computed by a similar diagram.
The unit of the 2-monad is the unit $(e,\widetilde{\alpha})$ of the adjunction $M \dashv K$ defined in (\ref{eq:unit}). The multiplication of the 2-monad is given by $(m,\widetilde{\beta})$, whose component at a $(\mon{T}, \V)$-category $(X, a)$
(a $(\mon{T}, \V)$-functor $KMKM(X, a) \to KM(X, a)$) is pictorially described by:
\[\xymatrix{T^3X\ar `l^d[ddddd]`^r[ddddd]|-{\object@{|}}_{KMKM(a)} [ddddd]
\ar[d]_{m_{TX}}\ar[rr]^{Tm_X}&&T^2X\ar `r_d[ddddd]`_l[ddddd]|-{\object@{|}}^{KM(a)} [ddddd]\ar[d]^{m_X}\\
T^2X\ar[rr]^{m_X}\ar[d]|-{\object@{|}}_{m_{TX}^\circ}&&TX\ar[d]^1\\
T^3X\ar[rr]_{m_Xm_{TX}}\ar@{}[urr]|{\mbox{\large $\stackrel{-\rho_{m_{TX}}}{\Rightarrow}$}}\ar[d]_{Tm_X}&&TX\ar[d]^1\\
T^2X\ar[d]|-{\object@{|}}_{Tm_X^\circ}\ar[rr]^{m_X}&&TX\ar[d]|-{\object@{|}}^{m_X^\circ}\\
T^3X\ar[rr]^{m_{TX}}\ar@{}[rru]|{\mbox{\large $\stackrel{(-\rho_{Tm_X})(\lambda_{m_X}-)}{\Rightarrow}$}}\ar[d]|-{\object@{|}}_{T^2a}&&T^2X\ar[d]|-{\object@{|}}^{Ta}\\
T^2X\ar[rr]^-{m_X}\ar@{}[rru]|{\mbox{\large $\stackrel{\beta_{a}}{\Rightarrow}$}}&&TX.}\]
\begin{theorem} The 2-monad $(T, e,m)$ on $(\mT,\V)\mbox{-}\Cat$ is a KZ monad. \end{theorem}
\begin{proof} One of the equivalent conditions expressing the KZ property is the existence of a modification $\delta : Te \to eT : T \to TT$ such that \begin{equation}\tag{mod}\label{eq:mod} \delta e = 1_{ee}\mbox{ and }m\delta = 1_{1_T}. \end{equation}
For a $(\mon{T}, \V)$-category $(X, a, \eta_a, \mu_a)$, let $\delta_{(X, a)}$ be the composite 2-cell \[\xymatrix{e_{TX} \ar[r]^-{T^2\eta_a-}& T^2(ae_X)e_{TX} \ar[r]^{T\kappa_{a,e_X}-} &T(TaTe_X)e_{TX} \ar[r]^{\kappa_{Ta,Te_X}-} & T^2aT^2e_Xe_{TX}}\] \[\xymatrix{= T(Ta)e_{T^2X}Te_X\ar[rrr]^-{T(Ta\lambda_{m_X})\lambda_{m_{TX}}-}&&& T(Tam_X^\circ m_X)m_{TX}^\circ m_{TX}e_{T^2X}Te_X.}\] This defines a $(\mon{T}, \V)$-natural transformation \[\delta_{(X,a)}:(Te_X, T\widetilde{\alpha}_a) \to (e_{TX}, \widetilde{\alpha}_{\hat{a} m_X}).\] The family of these natural transformations gives the required modification $Te \to eT$. The first of the two required equalities (\ref{eq:mod}) is straightforward. The second one follows from (\ref{eq:mon}). \end{proof}
\section{Representable $(\mon{T},\V)$-categories: from Nachbin spaces\\to Hermida's representable multicategories}
Since $\mon{T}$ is a KZ monad on $(\mT,\V)\mbox{-}\Cat$, a $\mon{T}$-algebra structure on a $(\mon{T},\V)$-category $(X,a)$ is, up to isomorphism, a reflective left adjoint to the unit $e_{(X,a)}$; hence, having a $\mon{T}$-algebra structure is a property of a $(\mon{T},\V)$-category rather than additional structure. Following Hermida \cite{Her00}, we say: \begin{definition} A $(\mon{T},\V)$-category is \emph{representable} if it has a pseudo-algebra structure for $\mon{T}$. \end{definition}
In the diagram below $((\mT,\V)\mbox{-}\Cat)^\mon{T}$ is the 2-category of $\mon{T}$-algebras, $F^\mon{T}\dashv G^\mon{T}$ is the corresponding adjunction, and $\widetilde{K}$ is the comparison 2-functor:
\[\xymatrix{(\V\mbox{-}\Cat)^\mon{T}\ar@{}[rd]|{\top}\ar[rr]^{\widetilde{K}}\ar@<1ex>[rd]^K&&((\mT,\V)\mbox{-}\Cat)^\mon{T}.\ar@<1ex>[ld]^{G^\mon{T}}\ar@{}[ld]|{\bot}\\ &(\mT,\V)\mbox{-}\Cat\ar@<1ex>[lu]^M\ar@<1ex>[ru]^{F^\mon{T}}}\] The composition of the adjunctions $F^\mon{T}\dashv G^\mon{T}$ and $A^\circ\dashv A_e$ (see (\ref{eq:adj}) in Section \ref{sect:three}) gives an adjunction $F_e^\mon{T}\dashv G_e^\mon{T}$ that induces again the monad $\mon{T}$ on $\V\mbox{-}\Cat$. Let $\widetilde{A}_e$ be the corresponding comparison 2-functor as depicted in the following diagram:
\[\xymatrix{(\V\mbox{-}\Cat)^\mon{T}\ar@<-1ex>@{}[rddd]|{\top}\ar@{.>}[rddd]\ar@{}[rd]|{\top}\ar@<-1ex>[rr]_{\widetilde{K}}\ar@<1ex>[rd]^K&&((\mT,\V)\mbox{-}\Cat)^\mon{T}.
\ar@<1ex>[ld]^{G^\mon{T}}\ar@{}[ld]|{\bot}\ar@{.>}@<-1ex>[ll]_{\widetilde{A}_e}\ar@{.>}@<2ex>[lddd]^{G_e^\mon{T}}\ar@{}@<1ex>[lddd]|{\bot}\\
&(\mT,\V)\mbox{-}\Cat\ar@<1ex>[lu]^M\ar@<1ex>[ru]^{F^\mon{T}}\ar@{.>}@<1ex>[dd]^{A_e}\ar@{}[dd]|{\dashv}\\ \\ &\V\mbox{-}\Cat\ar@{.>}@<2ex>[luuu]\ar@{.>}[ruuu]^{F_e^\mon{T}}\ar@<1ex>@{.>}[uu]^{A^\circ}}\]
\begin{theorem}\label{th:rep} $\widetilde{K}$ and $ \widetilde{A}_e$ define an adjoint 2-equivalence. \end{theorem}
\begin{proof} The isomorphism $\widetilde{A}_e\widetilde{K} \cong 1$ can be directly verified. We will establish that $\widetilde{K}\widetilde{A}_e \cong 1$.
Suppose that a $(\mon{T}, \V)$-functor $(f, \varphi _f) : T(X, a) \rightarrow (X, a)$ is a $\mon{T}$-algebra structure on a $(\mon{T}, \V)$-category $(X, a)$. Observe that the underlying $\V$-relation of the representable $(\mon{T}, \V)$-category $\widetilde{K}\widetilde{A}_e((X, a), (f, \varphi _f))$ is $ae_Xf : TX \longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} TX$.
Since $\mon{T}$ is a KZ monad, following \cite{KellyLack}, $(f, \varphi _f)$ is a left adjoint to the unit $(e_X, \widetilde{\alpha}_a)$ of $\mon{T}$. By Proposition \ref{th:adj} we get an isomorphism \[\omega : e^\circ_XTam^\circ_Xm_X \rightarrow aTf.\] Let $\iota$ denote the composite isomorphism \[ae_Xf = aTfe_{TX} \xrightarrow{\omega^{-1}-} e^\circ_XTam^\circ_Xm_Xe_{TX} =e^\circ_XTam^\circ_Xm_XTe_{X} \xrightarrow{\omega-} aTfTe_{X} = a.\] It can be verified that the pair $(1_X, \iota)$ is an isomorphism $\widetilde{K}\widetilde{A}_e((X, a), (f, \varphi _f)) \to ((X, a), (f, \varphi _f))$ in $((\mT,\V)\mbox{-}\Cat)^{\mon{T}}$. The family of these morphisms determines the required 2-natural isomorphism $\widetilde{K}\widetilde{A}_e \cong 1$. \end{proof}
We explain now how representable $(\mon{T},\V)$-categories capture two important cases which were developed independently.
\subsection*{Nachbin's ordered compact Hausdorff spaces.} Let $\V = 2$ and let $\mon{T}=\mon{U}=(U,e,m)$ be the ultrafilter monad, extended to $\mbox{\sf 2}$-$\Rel=\Rel$ as in \cite{Ba}, so that, for any relation $r:X\relto Y$, $Ur=Uq(Up)^\circ$, where $p:R\to X$, $q:R\to Y$ are the projections of $R=\{(x,y)\mid x\,r\,y\}$. Then $\Cats{2}\simeq\Ord$ and the functor $U:\Ord\to\Ord$ sends an ordered set $(X,\le)$ to $(UX,U\!\le)$, where \[
\mathfrak{x}\,(U\!\le)\,\mathfrak{y}\hspace{1em}\text{whenever}\hspace{1em}\forall A\in\mathfrak{x},B\in\mathfrak{y}\exists x\in A,y\in B\,.\,x\le y, \] for all $\mathfrak{x},\mathfrak{y}\in UX$. The algebras for the monad $\mon{U}$ on $\Ord$ are precisely the ordered compact Hausdorff spaces as introduced in \cite{Nac50}:
\begin{definition} An \emph{ordered compact Hausdorff space} is an ordered set $X$ equipped with a compact Hausdorff topology so that the graph of the order relation is a closed subset of the product space $X\times X$. \end{definition}
We denote the category of ordered compact Hausdorff spaces and monotone and continuous maps by $\catfont{OrdCompHaus}$. It is shown in \cite{Tho09} that, for a compact Hausdorff space $X$ with ultrafilter convergence $\alpha:UX\to X$ and an order relation $\le$ on $X$, the set $\{(x,y)\mid x\le y\}$ is closed in $X\times X$ if and only if $\alpha:UX\to X$ is monotone; and this shows \[
\catfont{OrdCompHaus}\simeq\Ord^\mon{U}, \] and the diagram (\ref{eq:Talg}) at the end of Section \ref{sect:VCatMonad} becomes \[
\xymatrix@!R=5ex{\CompHaus\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\catfont{OrdCompHaus}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[ll]\\
\Set\ar@<1ex>[u]^{F^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\Ord.\ar@<-1ex>[ll]\ar@<1ex>[u]^{F^\mon{T}}} \] The functor $K:\catfont{OrdCompHaus}\to\catfont{Top}=(\mon{U},\mbox{\sf 2})$-$\Cat$ of Section \ref{sect:FundAdj} can now be described as sending $((X,\leq),\alpha:UX\to X)$ to the space $KX=(X,a)$ with ultrafilter convergence $a:UX\relto X$ given by the composite
\[\xymatrix{UX\ar[r]^-{\alpha}&X\ar[r]|-{\object@{|}}^-{\leq}&X;}\] of the order relation $\le:X\relto X$ of $X$ with the ultrafilter convergence $\alpha:UX\to X$ of the compact Hausdorff topology of $X$. In terms of open subsets, the topology of $KX$ is given precisely by those open subsets of the compact Hausdorff topology of $X$ which are down-closed with respect to the order relation of $X$. On the other hand, for a topological space $(X,a)$, the ordered compact Hausdorff space $MX$ is the set $UX$ of all ultrafilters of $X$ with the order relation
\[\xymatrix{UX\ar[r]|-{\object@{|}}^-{m_X^\circ}&UUX\ar[r]|-{\object@{|}}^-{Ua}&UX,}\] and with the compact Hausdorff topology given by the convergence $m_X:UUX\to UX$; put differently, the order relation on $UX$ is defined by \[
\mathfrak{x}\le\mathfrak{y}\iff\forall A\in\mathfrak{x}\,.\,\overline{A}\in\mathfrak{y}, \] and the compact Hausdorff topology on $UX$ is generated by the sets \[
\{\mathfrak{x}\in UX\mid A\in\mathfrak{x}\}\hspace{2em}(A\subseteq X). \]
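As a concrete sanity check (our illustration, not part of the paper): on a finite set every ultrafilter is principal, and for principal ultrafilters the order $\mathfrak{x}\le\mathfrak{y}\iff\forall A\in\mathfrak{x}\,.\,\overline{A}\in\mathfrak{y}$ reduces to $q\in\overline{\{p\}}$. The following Python sketch verifies this on a small three-point space (the space and its topology are made up for the example).

```python
from itertools import chain, combinations

def powerset(X):
    return [frozenset(s)
            for s in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

# A small finite topological space on {0, 1, 2}
X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]

def closure(A):
    # smallest closed set containing A
    closed = [X - U for U in opens]
    return frozenset.intersection(*[C for C in closed if A <= C])

# On a finite set every ultrafilter is principal: x_p = {A : p in A}
def principal(p):
    return frozenset(A for A in powerset(X) if p in A)

# The order on UX from the text: x <= y  iff  for all A in x, closure(A) in y
def uf_le(x, y):
    return all(closure(A) in y for A in x)

for p in X:
    for q in X:
        # for principal ultrafilters this collapses to q in closure({p})
        assert uf_le(principal(p), principal(q)) == (q in closure(frozenset({p})))
```

The collapse holds because closure is monotone, so the condition for all $A\ni p$ is witnessed by the smallest such set, $A=\{p\}$.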
The monad $\mon{U}=(U,e,m)$ on $\Top$ induced by the adjunction $M\dashv K$ assigns to each topological space $X$ the space $UX$ with basic open sets \[
\{\mathfrak{x}\in UX\mid A\in\mathfrak{x}\}\hspace{2em}(A\subseteq X\text{ open}). \] By definition, a topological space $X$ is called \emph{representable} if $X$ is a pseudo-algebra for $\mon{U}$, that is, whenever $e_X:X\to UX$ has a (reflective) left adjoint. Note that a left adjoint of $e_X:X\to UX$ picks, for every ultrafilter $\mathfrak{x}$ on $X$, a smallest convergence point of $\mathfrak{x}$. The following result provides a characterisation of representable topological spaces.
\begin{theorem} Let $X$ be a topological space. The following assertions are equivalent. \begin{enumerate}[\em (i)] \item $X$ is representable. \item $X$ is locally compact and every ultrafilter has a smallest convergence point. \item $X$ is locally compact, weakly sober and the way-below relation on the lattice of open subsets is stable under finite intersection. \item $X$ is locally compact, weakly sober and finite intersections of compact down-sets are compact. \end{enumerate} \end{theorem}
Representable T$_0$-spaces are known under the designation \emph{stably compact spaces}, and are extensively studied in \cite{GHK+03,Jun04,Law11} and \cite{Sim82} (called \emph{well-compact spaces} there). One can also find there the following characterisation of morphisms between representable spaces.
\begin{theorem} Let $f:X\to Y$ be a continuous map between representable spaces. Then the following are equivalent. \begin{enumerate}[\em (i)] \item $f$ is a pseudo-homomorphism. \item For every compact down-set $K\subseteq Y$, $f^{-1}(K)$ is compact. \item The frame homomorphism $f^{-1}:\mathcal{O} Y\to\mathcal{O} X$ preserves the way-below relation. \end{enumerate} \end{theorem}
\subsection*{Hermida's representable multicategories}
We sketch now some of the main achievements of \cite{Her00,Her01} which fit in our setting and can be seen as counterparts to the classical topological results mentioned above. In \cite{Her00,Her01} Hermida is working in a finitely complete category $\catfont{B}$ admitting free monoids so that the free-monoid monad $\mon{M}=(M,e,m)$ is Cartesian; however, for the sake of simplicity we consider only the case $\catfont{B}=\Set$ here. We write $\catfont{Span}$ to denote the bicategory of spans in $\Set$, and recall that a \emph{category} can be viewed as a span \[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ C_0 && C_0} \] which carries the structure of a monoid in the category $\catfont{Span}(C_0,C_0)$. The 2-category of monoids in $\Cat$ (aka strict monoidal categories) and strict monoidal functors is denoted by $\catfont{MonCat}$, and the diagram (\ref{eq:Talg})
becomes
\[\xymatrix@!R=5ex{\Mon\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\catfont{MonCat}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[ll]\\
\Set\ar@<1ex>[u]^{F^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\Cat.\ar@<-1ex>[ll]\ar@<1ex>[u]^{F^\mon{T}}}\]
A \emph{multicategory} can be viewed as a span \[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0} \] in $\Set$ together with a monoid structure in an appropriate category. This amounts to the following data: \begin{enumerate}[--] \item a set $C_0$ of objects; \item a set $C_1$ of arrows where the domain of an arrow $f$ is a sequence $(X_1,X_2,\ldots,X_n)$ of objects and the codomain is an object $X$, depicted as \[
f:(X_1,X_2,\ldots,X_n)\to X; \] \item an identity $1_X:(X)\to X$; \item a composition operation. \end{enumerate} The 2-category of multicategories, morphisms of multicategories and appropriate 2-cells is denoted by $\MultiCat$. Keeping in mind that $\catfont{Span}$ is equivalent to $\Rels{\Set}$, for $\V=\Set$ and $\mon{T}=\mon{M}$, the fundamental adjunction (\ref{eq:ADJ}) of Section \ref{sect:FundAdj} specialises to:
\begin{theorem} There is a 2-monadic 2-adjunction $\MultiCat\adjunct{M}{K}\catfont{MonCat}$. \end{theorem}
Here, for a strict monoidal category \[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ C_0 && C_0} \] with monoid structure $\alpha:MC_0\to C_0$ on $C_0$, the corresponding multicategory is given by the composite of \[
\xymatrix{& MC_0\ar[dl]_1\ar[dr]^\alpha && C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0 && C_0} \] in $\catfont{Span}$; and to a multicategory \[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0} \] one assigns the strict monoidal category \[
\xymatrix{&& MC_1\ar[dl]_d\ar[dr]^c\\ & MMC_0\ar[dl]_{m_{C_0}} && MC_0\\ MC_0} \] where the objects in the span are free monoids.
The induced 2-monad on $\MultiCat$ is of Kock-Z\"oberlein type, and a \emph{representable multicategory} is a pseudo-algebra for this monad. In elementary terms, a multicategory \[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0} \] is representable precisely if for every $(x_1,\ldots,x_n)\in MC_0$ there exists a morphism (called universal arrow) \[
(x_1,\ldots,x_n)\to \otimes(x_1,\ldots,x_n) \] which induces a bijection \[
\hom((x_1,\ldots,x_n),y)\simeq\hom(\otimes(x_1,\ldots,x_n),y), \] natural in $y$, and universal arrows are closed under composition.
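As a toy illustration (ours, not from \cite{Her00,Her01}): the multicategory of finite sets, whose arrows $(X_1,\ldots,X_n)\to Y$ are the functions $X_1\times\cdots\times X_n\to Y$, is representable with $\otimes(X_1,\ldots,X_n)$ the cartesian product and the universal arrow the identity on tuples. The following Python sketch enumerates both hom-sets on a small instance and checks that precomposition with the universal arrow is a bijection.

```python
from itertools import product

def hom(sources, target):
    # all multimaps (X1,...,Xn) -> Y, encoded as dicts on tuples
    dom = list(product(*sources))
    return [dict(zip(dom, vals)) for vals in product(target, repeat=len(dom))]

def tensor(sources):
    # the representing object: the cartesian product, as a set of tuples
    return [tuple(t) for t in product(*sources)]

X1, X2, Y = [0, 1], ['a'], [0, 1]
multi = hom([X1, X2], Y)                # arrows (X1, X2) -> Y
ordinary = hom([tensor([X1, X2])], Y)   # arrows X1 (x) X2 -> Y

def via_universal(g):
    # g : X1 (x) X2 -> Y  yields  (X1, X2) -> Y  by precomposing the universal arrow
    return {xs: g[(xs,)] for xs in product(X1, X2)}

# the induced map hom(tensor, Y) -> hom((X1, X2), Y) is a bijection
assert {tuple(sorted(via_universal(g).items())) for g in ordinary} == \
       {tuple(sorted(f.items())) for f in multi}
```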
\section{Duals for $(\mon{T},\V)$-categories}
For a $\V$-category $(Z, c) = (Z, c, \eta_c, \mu_c)$, the \emph{dual} $D(Z, c)$ of $(Z, c)$ is defined to be the $\V$-category $Z^\op=(Z, c^\op, \eta_{c^\op}, \mu_{c^\op})$, with $c^\op=c^\circ$, $\eta_{c^\op}=\eta_c^\circ$ and $\mu_{c^\op}=\mu_c^\circ$. This construction extends to a 2-functor \[D : \V\mbox{-}\Cat \to \V\mbox{-}\Cat^\co\] as follows. For a $\V$-functor $(f, \varphi _f) : (Z, c) \to (W, d)$ set $D(f, \varphi _f) = f^\op=(f, \varphi _f^\op):(Z,c^\circ)\to(W,d^\circ)$, where $\varphi _f^\op$ is defined by \[\xymatrix{f c^\circ \ar[r]^-{-\lambda_f}& f c^\circ f^\circ f = f (fc)^\circ f \ar[r]^-{-(\varphi _f)^\circ -}& f (df)^\circ f = ff^\circ d^\circ f \ar[r]^-{\rho_f -}& d^\circ f.}\] On 2-cells $\zeta: (f, \varphi _f) \to (g, \varphi _g)$ of $\V\mbox{-}\Cat$, set $D(\zeta) = \zeta^\op$, which is defined analogously by \[\xymatrix{f c^\circ \ar[r]^-{-\lambda_g}& f c^\circ g^\circ g = f (gc)^\circ g \ar[r]^-{-\zeta^\circ -}& f (df)^\circ g = ff^\circ d^\circ g \ar[r]^-{\rho_f -}& d^\circ g.}\]
The monad $\mon{T}$ on $\V\mbox{-}\Cat$ of Section \ref{sect:VCatMonad} gives rise to a monad $\mon{T}$ on $\V\mbox{-}\Cat^\co$. From now on \emph{we assume that $T(c^\circ)=(Tc)^\circ$ for every $\V$-relation $c$}. Let $((Z, c), (h, \varphi _h))$ be a $\mon{T}$-algebra. Then \[\xymatrix{(TZ, Tc^\circ) \ar[rr]^{D(h, \varphi _h)} && (Z, c^\circ)}\] gives a $\mon{T}$-algebra structure on $(Z, c^\circ)$, which we write as $((Z, c^\circ), h^\op)$.
\begin{definition} The \emph{dual} of a $\mon{T}$-algebra $((Z, c), h)$ is the $\mon{T}$-algebra $(Z^\op,h^\op)=((Z, c^\circ), h^\op)$. \end{definition} This construction extends to a 2-functor \begin{equation}\tag{Dual}\label{eq:Dual} D : (\V\mbox{-}\Cat)^\mon{T} \longrightarrow ((\V\mbox{-}\Cat)^\mon{T})^\co \end{equation} as follows. If $(f, \varphi _f): ((Z, c), h) \to ((W,d), k)$ is a morphism of $\mon{T}$-algebras, then $D(f, \varphi _f)=f^\op:((Z, c^\circ), h^\op) \to ((W, d^\circ), k^\op)$ is a morphism of $\mon{T}$-algebras, and if $\zeta:(f,\varphi _f)\to(g,\varphi _g)$ is a 2-cell in $(\V\mbox{-}\Cat)^\mon{T}$, then $D(\zeta)=\zeta^\op:D(g, \varphi _g) \to D(f, \varphi _f)$ is a 2-cell in $(\V\mbox{-}\Cat)^\mon{T}$.
Using the adjunction $M\dashv K$ we can define the dual of a $(\mon{T},\V)$-category using the construction of duals in $(\V\mbox{-}\Cat)^\mon{T}$ via the composition:
\[\xymatrix@=8ex{(\V\mbox{-}\Cat)^\mon{T}\ar@`{(-20,-20),(-20,20)}^D\ar@{}[r]|{\top}\ar@<1mm>@/^2mm/[r]^{{K}} & (\mT,\V)\mbox{-}\Cat.\ar@<1mm>@/^2mm/[l]^{{M}}}\]
\begin{definition} The \emph{dual} of a $(\mon{T}, \V)$-category $(X,a)$ is the $(\mon{T},\V)$-category $KDM(X,a)$; that is, \[X^\op=(TX,m_XTa^\circ m_X).\] \end{definition}
For representable $(\mon{T},\V)$-categories $(X,a)$ we can use directly extensions of $\widetilde{K}$ and $\widetilde{A}_e$ to pseudo-algebras, so that we can obtain a dual structure $X^{\widetilde{\op}}$ on the same underlying set $X$ via the composition $\widetilde{K}D\widetilde{A}_e$:
\[\xymatrix@=8ex{(\V\mbox{-}\Cat)^\mon{T}\ar@`{(-20,-20),(-20,20)}^D\ar@{}[r]|-{\top}\ar@<1mm>@/^2mm/[r]^-{\widetilde{K}} & ((\mT,\V)\mbox{-}\Cat)^\mon{T}.\ar@<1mm>@/^2mm/[l]^-{\widetilde{A}_e}}\] Then it is easily checked that, for any $(\mon{T},\V)$-category $X$, \[X^\op=(TX)^{\widetilde{\op}},\] since $TX$, as a free $\mon{T}$-algebra on $(\mT,\V)\mbox{-}\Cat$, is representable.
For $\V$ a quantale, duals of $(\mon{T},\V)$-categories proved to be useful in the study of (co)completeness (see \cite{CH09,CH09a,Hof11}). Next we outline briefly the setting used and the role duals play there.
Let $\V$ be a quantale. When the lax extension of $T:\Set\to\Set$ to $\V\mbox{-}\Rel$ is determined by a map $\xi:TV\to V$ which is a $\mon{T}$-algebra structure on $\V$ (for the $\Set$-monad $\mon{T}$) as outlined in \cite[Section 4.1]{CH09}, then, under suitable conditions, $\V$ itself has a natural $(\mon{T},\V)$-category structure $\hom_\xi$ given by the composite \begin{equation}\tag{$(\mon{T},\V)$-$\hom$}\label{eq:TVhom}
\xymatrix{TV\ar[r]^\xi&V\ar[r]|-{\object@{|}}^\hom&V,} \end{equation} where $\hom$ is the internal hom on $V$.\footnote{This is the case when a \emph{topological theory} in the sense of \cite{Hof07} is given; see \cite{Hof07} for details.} Then the well-known equivalence:\\ \begin{quotation}
Given $\V$-categories $(X,a)$, $(Y,b)$, for a $\V$-relation $r:X\relto Y$,\vspace*{2mm} \begin{itemize} \item[] $r:(X,a)\relto (Y,b)$ is a $\V$-module (or profunctor, or distributor) \item[] $\iff$ the map $r:X^\op\otimes (Y,b)\to(\V,\hom)$ is a $\V$-functor. \end{itemize} \end{quotation} can be generalized to the $(\mon{T},\V)$-setting. Here a \emph{$(\mon{T},\V)$-relation} $r:X\krelto Y$ is a $\V$-relation $TX\relto Y$, and $(\mon{T},\V)$-relations $X\stackrel{r}{\krelto} Y\stackrel{s}{\krelto} Z$ compose as $\V$-relations as follows:
\[\xymatrix{TX\ar[r]|-{\object@{|}}^{m_X^\circ}&T^2X\ar[r]|-{\object@{|}}^{Tr}&TY\ar[r]|-{\object@{|}}^s&Z;}\] we denote this composition by $s\circ r$. A \emph{$(\mon{T},\V)$-module} $\varphi :(X,a)\krelto(Y,b)$ between $(\mon{T},\V)$-categories $(X,a)$, $(Y,b)$ is a $(\mon{T},\V)$-relation such that \[\varphi \circ a=\varphi =b\circ\varphi .\] The next result can be found in \cite{CH09} (see also \cite[Remark 5.1 and Lemma 5.2]{Hof13}).
\begin{theorem}\label{prop:Mod_vs_Fun} Let $(X,a)$ and $(Y,b)$ be $(\mon{T},\V)$-categories and $\varphi:X\krelto Y$ be a $(\mon{T},\V)$-relation. The following assertions are equivalent. \begin{enumerate}[\em (i)] \item $\varphi:(X,a)\krelto (Y,b)$ is a $(\mon{T},\V)$-module. \item The map $\varphi:TX\times Y\to\V$ is a $(\mon{T},\V)$-functor $\varphi:X^\op\otimes (Y,b)\to(\V,\hom_\xi)$. \end{enumerate} \end{theorem}
In particular, the $(\mon{T},\V)$-relation $a:X\krelto X$ is a $(\mon{T},\V)$-module from $(X,a)$ to $(X,a)$. Although $(\mT,\V)\mbox{-}\Cat$ is in general not monoidal closed for $\otimes$, the functor $X^\op\otimes-:(\mT,\V)\mbox{-}\Cat\to(\mT,\V)\mbox{-}\Cat$ has a right adjoint $(-)^{X^\op}:(\mT,\V)\mbox{-}\Cat\to(\mT,\V)\mbox{-}\Cat$ for every $(\mon{T},\V)$-category $X$, and from the $(\mon{T},\V)$-module $a$ we obtain the \emph{Yoneda $(\mon{T},\V)$-functor} \[
y_X:X\to\V^{X^\op}. \] By Theorem \ref{prop:Mod_vs_Fun}, we can think of the elements of $\V^{X^\op}$ as $(\mon{T},\V)$-modules from $(X,a)$ to $(1,e_1^\circ)$. The following result was proven in \cite{CH09} and provides a Yoneda-type Lemma for $(\mon{T},\V)$-categories.
\begin{theorem}\label{lem:Yoneda} Let $(X,a)$ be a $(\mon{T},\V)$-category. Then, for all $\psi$ in $\V^{X^\op}$ and all $\mathfrak{x}\in TX$, \[
\llbracket Ty_X(\mathfrak{x}),\psi\rrbracket=\psi(\mathfrak{x}), \] with $\llbracket-,-\rrbracket$ the $(\mon{T},\V)$-categorical structure on $\V^{X^\op}$. \end{theorem}
To generalise these results to the general setting studied in this paper, that is when $\V$ is not necessarily a thin category, one faces a first obstacle: When can we equip the category $\V$ with a canonical (although non-legitimate) $(\mon{T},\V)$-category structure as in (\ref{eq:TVhom})? The obstacle seems removable when $\mon{T}=\mon{M}$ is the free-monoid monad. In fact, as above, the monoidal structure $(X_1,\ldots,X_n)\mapsto X_1\otimes\dots\otimes X_n$ defines a lax extension of $\mon{M}$ to $\Rels{\V}$, a monoidal structure on $\Cats{(\mon{M},\V)}\simeq\V$-$\MultiCat$, and it turns $\V$ into a generalised multicategory. We therefore conjecture that Theorems \ref{prop:Mod_vs_Fun} and \ref{lem:Yoneda} hold also in this more general situation; however, so far we were not able to prove this.
\section*{Acknowledgments}
The first author acknowledges partial financial assistance by the Centro de Matem\'{a}tica da Universidade de Coimbra (CMUC), funded by the European Regional Development Fund through the program COMPETE
and by the Portuguese Government through FCT, under the project PEst-C/MAT/UI0324/2013 and grant number SFRH/BPD/79360/2011, and by the Presidential Grant for Young Scientists, PG/45/5-113/12, of the National Science Foundation of Georgia.
The second author acknowledges partial financial assistance by CMUC, funded by the European Regional Development Fund through the program COMPETE and by the Portuguese Government through FCT, under the project PEst-C/MAT/UI0324/2013.
The third author acknowledges partial financial assistance by Portuguese funds through CIDMA (Center for Research and Development in Mathematics and Applications), and the Portuguese Foundation for Science and Technology (``FCT -- Funda\c{c}\~ao para a Ci\^encia e a Tecnologia''), within the project PEst-OE/MAT/UI4106/2014, and by the project NASONI under the contract PTDC/EEI-CTP/2341/2012.
\end{document} |
\begin{document}
\title{The Lasserre Hierarchy in Almost Diagonal Form\thanks{Supported by the Swiss National Science Foundation project 200020-144491/1 ``Approximation Algorithms for Machine Scheduling Through Theory and Experiments''.}}
\maketitle
\begin{abstract} The Lasserre hierarchy is a systematic procedure for constructing a sequence of increasingly tight relaxations that capture the convex formulations used in the best available approximation algorithms for a wide variety of optimization problems. Despite the increasing interest, there are very few techniques for analyzing Lasserre integrality gaps. Satisfying the positive semi-definite requirement is one of the major hurdles to constructing Lasserre gap examples.
We present a novel characterization of the Lasserre hierarchy based on moment matrices that differ from diagonal ones by matrices of rank one (\emph{almost diagonal form}). We provide a modular \emph{recipe} to obtain positive semi-definite feasibility conditions by iteratively diagonalizing rank one matrices.
Using this, we prove strong lower bounds on integrality gaps of Lasserre hierarchy for two basic capacitated covering problems. For the min-knapsack problem, we show that the integrality gap remains arbitrarily large even at level $n-1$ of Lasserre hierarchy. For the min-sum of tardy jobs scheduling problem, we show that the integrality gap is unbounded at level $\Omega(\sqrt{n})$ (even when the objective function is integrated as a constraint). These bounds are interesting on their own, since both problems admit FPTAS. \end{abstract}
\section{Introduction}
The use of mathematical programming relaxations, such as linear programming (LP) and semidefinite programming (SDP), has been one of the most powerful tools in approximation algorithms. These algorithms are analyzed by comparing the value of the returned integer solution to that of the fractional solution. The \emph{integrality gap} is the quotient of the true optimum by the relaxation's optimal solution (which is always at least as good), and measures the quality of the approximation.
The integrality gap is sensitive to the original integer programming formulation, and an important question is when modifications to the integer program improve the algorithms of this framework.
This has led to systematic procedures, also known as \emph{lift-and-project} methods, for constructing a sequence of increasingly tight mathematical programming relaxations. In particular, the Lov\'{a}sz-Schrijver+ (LS+)~\cite{Lov91} and the stronger Lasserre~\cite{Lasserre01} hierarchies for semidefinite programs, and the Lov\'{a}sz-Schrijver (LS)~\cite{Lov91} and the stronger Sherali-Adams (SA)~\cite{SheraliA90} hierarchies for linear programs, were created to systematically improve semidefinite and linear programs at the cost of additional runtime (see \cite{Laurent03} for a comparison).
Introduced in 1999 by Grigoriev and Vorobjov~\cite{GrigorievV01}, the ``Sum of Squares'' (SOS) proof system is a powerful algebraic proof system. Shor~\cite{schor87}, Lasserre~\cite{Lasserre01} and Parrilo~\cite{parrilo00} show that this proof system is automatizable using semidefinite programming (SDP), meaning that any $n$-variable degree-$d$ proof can be found in time $n^{O(d)}$. Furthermore, the SDP is dual to the Lasserre SDP hierarchy, meaning that the ``$d/2$-round Lasserre value'' of an optimization problem is equal to the best bound provable using a degree-$d$ SOS proof (see also the monograph by Laurent~\cite{laurent09}). For brevity, we will interchange Lasserre hierarchy with SOS hierarchy since they are essentially the same in our context. For a brief history of the different formulations from \cite{GrigorievV01}, \cite{Lasserre01}, \cite{parrilo00} and the relations between them and results in real algebraic geometry we refer the reader to \cite{ODonnellZ13}.
These hierarchies provably imply some of the most celebrated approximation algorithms for NP-complete problems even after a few rounds. For example, the first round of LS+ (and hence also Lasserre) for the Independent Set problem implies the Lov\'{a}sz $\theta$-function~\cite{Lovasz79} and for the MaxCut problem gives the Goemans-Williamson relaxation~\cite{GoemansW95}. The ARV relaxation of the SparsestCut \cite{AroraRV09} problem is no stronger than the relaxation given in the third round of LS+ (and hence also Lasserre), and most recently the subexponential time algorithm for Unique Games~\cite{AroraBS10} is implied by a sublinear number of rounds of Lasserre~\cite{BarakRS11,GuruswamiS11}. Other improved approximation guarantees that arise from the first $O(1)$ levels of Lasserre (or weaker) hierarchy can be found in~\cite{BarakRS11,BateniCG09,Chlamtac07,ChlamtacS08,VegaK07,GuruswamiS11,MagenM09,RaghavendraT12}.
For a more detailed overview on the use of hierarchies in approximation algorithms, see the recent survey of Chlamt\'{a}\v{c} and Tulsiani~\cite{Chla12}, the reading group web-page in~\cite{bansalweb} and the references therein.
Integrality gap results for Lasserre are thus very strong unconditional negative results, as they apply to a ``model of computation" that includes the best known approximation algorithms for several problems (see~\cite{Chla12} for a discussion). If a large integrality gap persists after a large number of rounds then a wide class of efficient approximation algorithms are ruled out (this implicitly contains some of the most sophisticated approximation algorithms for many problems).
The SOS hierarchy appears to be more powerful than the LS+ and SA+ hierarchies: recent work of Barak et al.\ \cite{BarakBHKSZ12} proved that $O(1)$ rounds of the SOS hierarchy can solve unique games instances that require a non-constant number of LS+ and SA+ rounds. These results emphasize that a better understanding of the power and limitations of the SOS hierarchy is necessary.
Most of the known lower bounds for the SOS hierarchy originated in the works of Grigoriev~\cite{Grigoriev01b,Grigoriev01} (also independently rediscovered by Schoenebeck~\cite{Schoenebeck08}). These works show that random 3XOR or 3SAT instances cannot be solved by even $\Omega(n)$ rounds of SOS hierarchy. Subsequent lower bounds, such as those of~\cite{Tulsiani09}, \cite{BhaskaraCVGZ12} rely on \cite{Grigoriev01b}, \cite{Schoenebeck08} plus gadget reductions.
An interesting line of research is given by answering the following questions~\cite{KarlinMN11}: ``How strong are these restricted models of computation with respect to approximation? In other words, how much do lower bounds in these models tell us about the intrinsic hardness of the problems studied?''.
One set of weaknesses revolves around the fact that SOS has a hard time reasoning about $X_1 + \cdots + X_n$ using the fact that all $X_i$'s are integers. In~\cite{Grigoriev01}, it is assumed that $X_1, \ldots, X_n$ are $\pm 1$ and that $n$ is odd. Then degree-$(n-1)$ SOS cannot disprove $X_1 + \cdots + X_n = 0$, even though of course $|X_1 + \cdots + X_n| \geq 1$. (A simplified proof can be found in~\cite{GrigorievHP02}.)
In \cite{Cheung07}, Cheung considered the case of $X_1, \ldots, X_n$ constrained to be $0/1$ (i.e., $X_i^2 = X_i$) along with the constraint $X_1 + \cdots + X_n \geq \delta$. Cheung showed that there exists $\delta = \delta(n) > 0$ such that with this constraint, degree-$(n-1)$ Lasserre/SOS cannot prove $X_1 + \cdots + X_n \geq 1$, even though that is of course true. Cheung was motivated by the ``Knapsack'' polytope. He shows that degree-$(n-1)$ SOS cannot refute $X_1 + \cdots + X_n = 1 - 1/(n+1)$. (In terms of SOS, the result presented in this paper for min-knapsack implies that degree-$(n-1)$ SOS cannot refute a much weaker formula, i.e.\ $X_1 + \cdots + X_n = 1/k$, where $1/k$ can be arbitrarily small.) Additional results and references can be found in the monograph by Laurent~\cite{laurent09}.
Karlin et al.\ \cite{KarlinMN11} focused on one of the most basic \emph{packing problems} that is well-known to be ``easy'' from the viewpoint of approximability, namely the \emph{maximum knapsack problem}\footnote{Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.} which is well-known \cite{IbarraK75,Lawler79} to admit a fully polynomial time approximation scheme (FPTAS). While from the perspective of approximation algorithms there is nothing to be gained by applying convex optimization techniques to this problem, it is a useful tool for gaining a better understanding of the strengths and properties of various hierarchies of relaxations. They show two results.
First they prove that an integrality gap close to 2 persists up to a linear number of rounds of Sherali-Adams. This confirms that the Sherali-Adams restricted model of computation has serious weaknesses, and a lower bound in this model does not necessarily imply that it is difficult to get a good approximation ratio (this has also been observed in other contexts, see e.g.\ \cite{CharikarMM09}).
On the other hand, they show that after $r^2$ rounds of Lasserre the integrality gap decreases quickly from $2$ to $r/(r-1)$, implying, as a by-product of their analysis, a polynomial time approximation scheme. To some extent, their second result gives further evidence that Lasserre's hierarchy might be seen as an effective model of computation for certain \emph{packing} problems. The results presented in this paper show that this does not seem to be the case for basic capacitated \emph{covering} problems.
\paragraph{Capacitated Covering Problems Relaxations.}
The \emph{min-knapsack problem} is defined by a set of items, each with a cost and a value, and a specified demand. The goal is to select a minimum cost set of items with total value at least the demand. The minimum knapsack problem is well-known to admit an FPTAS (the FPTAS for maximum knapsack \cite{IbarraK75,Lawler79} can be easily modified to work for the min-knapsack problem). This fundamental covering problem is a special case of many covering problems, including the very general capacitated covering integer program \cite{CarrFLP00}.
The \emph{capacitated covering integer program} (see e.g.~\cite{CarrFLP00}) is an integer program of the form $\min\{cx:Ux\geq d, 0\leq x\leq b, x\in Z^+\}$, where all the entries of $c,U$ and $d$ are nonnegative.
The min-knapsack problem, the capacitated network design problem and the min-sum of tardy jobs scheduling problem are only a few examples of capacitated covering problems. One difficulty in approximating capacitated covering problems lies in the fact that the ratio of the optimal IP solution to the optimal LP solution can be as bad as $||d||_{\infty}$, even when $U$ consists of a single row (i.e.\ the min-knapsack problem).
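The $||d||_{\infty}$ gap shows up already on a textbook-style two-item instance (the numbers below are our own illustration, not taken from the cited papers): with demand $d$, one item of value $d-1$ and cost $0$, and one item of value $d$ and cost $1$, the LP pays $1/d$ while every integral solution pays $1$. A short Python sketch, solving the single-constraint LP by the fractional greedy rule:

```python
from fractions import Fraction
from itertools import product

def lp_min_knapsack(values, costs, demand):
    # single covering constraint, 0 <= x <= 1: greedy by cost per unit value is optimal
    items = sorted(range(len(values)), key=lambda i: Fraction(costs[i], values[i]))
    need, cost = Fraction(demand), Fraction(0)
    for i in items:
        take = min(Fraction(1), need / values[i])
        cost += take * costs[i]
        need -= take * values[i]
        if need <= 0:
            break
    return cost

def ip_min_knapsack(values, costs, demand):
    # brute force over 0/1 solutions
    best = None
    for x in product([0, 1], repeat=len(values)):
        if sum(v * xi for v, xi in zip(values, x)) >= demand:
            c = sum(ci * xi for ci, xi in zip(costs, x))
            best = c if best is None else min(best, c)
    return best

d = 100
values, costs = [d - 1, d], [0, 1]
assert ip_min_knapsack(values, costs, d) == 1                   # must buy the costly item
assert lp_min_knapsack(values, costs, d) == Fraction(1, d)      # IP/LP ratio = d = ||d||_inf
```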
A powerful way to cope with this problem is to strengthen the LP by adding (exponentially many) knapsack cover (KC) inequalities introduced by Carr et al.~\cite{CarrFLP00}, that have proved to be a useful tool to address capacitated covering problems~\cite{BansalBN08,CarnesS08,LeviLS08,BansalGK10,ChakrabartyGK10}. For the min-knapsack problem, the improved IP/LP ratio with these inequalities is $2$.
The min-sum single-machine scheduling problem (often denoted $1||\sum f_j$) is defined by a set of $n$ jobs to be scheduled on a single machine. Each job has an integral processing time, and there is a monotone function $f_j(C_j)$ specifying the cost incurred when job $j$ is completed at a particular time $C_j$; the goal is to minimize $\sum_j f_j(C_j)$. A natural special case of this problem is given by the min-sum of tardy jobs (denoted $1||\sum_j w_jT_j$), where $f_j(C_j)= w_j \max\{C_j-d_j,0\}$, $w_j\geq 0$, and $d_j>0$ is a specified due date of job $j$. This problem is known to be NP-complete \cite{yuan92} even for unit weights. FPTASs are known with the additional restriction that there are only a constant number of deadlines \cite{KarakostasKW12}, or if jobs have unit weights \cite{lawler82}.
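For concreteness, the objective $1||\sum_j w_jT_j$ is evaluated for a fixed processing order as in the following minimal sketch (the job data $p$, $w$, $d$ are made-up illustrative values):

```python
def weighted_tardiness(order, p, w, d):
    # completion times accumulate along the single machine;
    # job j pays w[j] * max(C_j - d[j], 0)
    t, total = 0, 0
    for j in order:
        t += p[j]
        total += w[j] * max(t - d[j], 0)
    return total

p = [2, 3]   # processing times
w = [1, 10]  # weights
d = [2, 3]   # due dates
# job 0 first: C = (2, 5), cost = 1*0 + 10*2 = 20
# job 1 first: C = (3, 5), cost = 10*0 + 1*3 = 3
assert weighted_tardiness([0, 1], p, w, d) == 20
assert weighted_tardiness([1, 0], p, w, d) == 3
```

Already on this two-job instance the order of jobs changes the objective by a large factor, which hints at why the problem is hard to approximate via weak relaxations.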
The first constant approximation algorithm for $1||\sum f_j$ (and for $1||\sum_j w_jT_j$) has been obtained by Bansal and Pruhs \cite{BansalP10} (they consider an even more general scheduling problem). Their $16$-approximation has been recently improved to a $(2+\varepsilon)$ primal-dual approximation by Cheung and Shmoys \cite{CheungS11}. Both approaches are based on using capacitated covering linear program relaxations (the relaxations in \cite{BansalP10,CheungS11} are different), with unbounded integrality gap (because the min-knapsack LP is a special case in both cases). Thus, in \cite{BansalP10,CheungS11} the authors strengthen these LPs by adding the knapsack cover (KC) inequalities introduced in~\cite{CarrFLP00}. Based on this approach, the combinatorial primal-dual approach in \cite{CheungS11} is currently the best known result.
However, no hardness of approximation result is known for $1||\sum f_j$ and, as remarked in \cite{CheungS11}, ``it is still conceivable (and perhaps likely) that there exists a polynomial time approximation scheme''. With this aim, stronger relaxations are sought, since (KC) inequalities are not sufficient: indeed, even for the very special case of the min-knapsack problem, the integrality gap of the LP augmented with (KC) inequalities is $2$~\cite{CarrFLP00}.
On the other hand, for the minimum knapsack problem, using the trick of ``lifting the objective function'' (i.e., incorporating the objective function as a constraint), the results of Karlin et al. imply that the Lasserre SDP hierarchy reduces the integrality gap to $(1+\varepsilon)$ at level $O(1/\varepsilon)$, for any $\varepsilon>0$.
In light of the latter result, it is therefore natural to ask what happens if we strengthen the capacitated covering LPs with some levels of the Lasserre hierarchy, instead of using (KC) inequalities. (Note that in order to claim that one can optimize over the Lasserre hierarchy in polynomial time, one needs to assume that the number of constraints of the starting LP is polynomial in the number of variables (see the discussion in~\cite{Laurent03}). Therefore the starting LP cannot be the linear program strengthened with the exponentially many (KC) inequalities.)
\subsection{Our Results}
The contribution of this paper is twofold: \begin{enumerate} \item (Almost Diagonal Matrices) We provide a novel characterization of the Lasserre hierarchy based on almost diagonal matrices. Using this, we present an iterative recipe for computing positive semi-definite feasibility conditions. \item (Integrality Gaps) We provide strong Lasserre lower bounds for basic capacitated covering problems that admit FPTASs. \end{enumerate}
\paragraph{Almost Diagonal Matrices.}
It is known~\cite{Lasserre01} that the integral $0/1$ polytope is obtained at level $n$ of the Lasserre hierarchy. The argument relies on an elementary property of the zeta matrix of the lattice given by the collection of all subsets of $N=\{1,\ldots,n\}$ (see~\cite{Laurent03}). It is enlightening to revisit this analysis by emphasizing some important aspects (Section~\ref{Sect:leveln}). We will use similar arguments and translate them to a generic level $t$ ($\leq n$) in a very natural way to obtain almost diagonal moment matrices (Section~\ref{Sect:almostdiag}), i.e. matrices that differ from diagonal ones by rank-one matrices.
One of the main challenges in analyzing gap examples for the Lasserre hierarchy is posed by the positive semi-definite constraints, and by the difficulty of checking whether a given solution satisfies them.
Indeed, it is well known that there is no explicit general formula for computing the eigenvalues of a matrix $A$: the eigenvalues of $A$ are the roots of its characteristic polynomial, and for polynomials of degree $5$ and higher there is no formula expressing the roots in terms of the coefficients in a finite number of steps.
Nonetheless, there are very effective \emph{iterative algorithms} for computing the eigenvalues of a symmetric matrix. The original iterative algorithm for this purpose was devised by Jacobi (see e.g. \cite{Strang}).
Jacobi's idea is to use the similarity transform that diagonalizes a $2\times 2$ matrix (for which a closed formula exists) to \emph{partially diagonalize} any $n\times n$ matrix with the aim to reduce the ``norm'' of the off-diagonal entries.
By using almost diagonal moment matrices, we suggest an iterative \emph{recipe} for computing positive semi-definite feasibility conditions (Section~\ref{Sect:gershgorin}). These conditions are used for showing that certain solutions are feasible for the Lasserre hierarchy. Our iterative approach is reminiscent of Jacobi's algorithm. One of the main differences is that we diagonalize the rank-one matrices\footnote{Diagonalizing a rank-one matrix boils down to reducing it to a matrix that is zero except for one diagonal entry.} that appear in the almost diagonal form (instead of diagonalizing $2\times 2$ matrices). Again the goal is to reduce the ``importance'' of the off-diagonal entries or, in other words, the radii of the Gershgorin disks~\cite{varga}. The Lasserre integrality gap constructions of Sections~\ref{sect:minknapsack} and~\ref{sect:knaplifted} are based on this technique.
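To make the role of Gershgorin disks concrete, here is a small numerical sketch of ours (not the paper's actual recipe): the standard sufficient PSD condition derived from Gershgorin's circle theorem, namely that a symmetric matrix is PSD whenever every diagonal entry dominates the sum of the absolute off-diagonal entries in its row. The helper name `gershgorin_psd_certificate` is illustrative.

```python
import numpy as np

def gershgorin_psd_certificate(A):
    """Sufficient (not necessary) PSD test via Gershgorin's circle
    theorem: if every diagonal entry dominates the sum of the absolute
    off-diagonal entries in its row, every Gershgorin disk lies in the
    nonnegative half-line, so all eigenvalues of symmetric A are >= 0."""
    A = np.asarray(A, dtype=float)
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) >= radii))

# A diagonally dominant symmetric matrix: certified PSD without
# computing a single eigenvalue.
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 2.0, 0.5],
              [1.0, 0.5, 2.0]])
print(gershgorin_psd_certificate(A))             # True
print(bool(np.all(np.linalg.eigvalsh(A) >= 0)))  # True (agrees)
```

The analysis in the paper works in the same spirit: after the rank-one reduction, the remaining off-diagonal mass is small enough for a Gershgorin-type argument to certify positive semidefiniteness.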
Finally, starting from the almost diagonal form, we suggest an alternative formulation of the Lasserre hierarchy as a semi-infinite linear program (Section~\ref{sect:sip}). This gives a non-matricial definition of the hierarchy and a different point of view that might be convenient for certain problems. The Lasserre integrality gap construction of Section~\ref{sect:tardyjobs} is based on this formulation.
We think that the Lasserre hierarchy in almost diagonal form has several interesting aspects that will be useful in other applications and will stimulate further research. The proposed approach is one of the very few techniques known so far for proving Lasserre integrality gaps.
\paragraph{Integrality Gaps.}
By using Lasserre in almost diagonal form, we prove strong Lasserre lower bounds for basic capacitated covering problems that admit FPTASs.
When we do not ``lift the objective function'', we show (Section~\ref{sect:minknapsack}) that the integrality gap for the min-knapsack remains arbitrarily large even at level $(n-1)$ of Lasserre's hierarchy (note that this is a tight characterization, since at level $n$ the solution is integral).
If we ``lift the objective function'', we show (Section~\ref{sect:tardyjobs}) that the integrality gap of the Lasserre hierarchy for the min-sum of tardy jobs scheduling problem is unbounded at level $t=\Omega(\sqrt{n})$. The standard covering LP that we use here is a common special case of the covering LPs used in \cite{BansalP10,CheungS11}, and therefore this shows that the approach of \cite{BansalP10,CheungS11} cannot be improved by simply replacing the (KC) inequalities with the Lasserre hierarchy at level $O(\sqrt{n})$. The same gap analysis holds for the min version of the multiple knapsack problem (see Section \ref{sect:knaplifted}), and for the capacitated network design problem defined in~\cite{CarrFLP00}, by a straightforward modification of the gap construction and the same analysis. Trivially, the gap bounds immediately apply to the capacitated covering integer program and to all the problems for which min-knapsack is just a special case (see e.g.~\cite{CarrFLP00}).
We note that most prior results exhibiting gap instances for Lasserre hierarchy relaxations do so for problems that are already known to be hard to approximate, under some suitable assumption. In light of such a hardness result, one would expect the Lasserre hierarchy relaxations to have an integrality gap that matches the inapproximability factor. Some exceptions are known where the integrality gaps are substantially stronger than the (very weak) hardness bounds known for the problem (see \cite{BhaskaraCVGZ12} and the references therein), but there it is still conceivable that the apparent ``weakness'' of the Lasserre hierarchy is due to the inherent complexity of the problem, which has still to be fully understood, and that the gaps are in fact indicative of the hardness of approximation.
In this paper, our gap constructions are a rare exception to this trend: indeed, we show unbounded integrality gaps for two ``easy'' problems that admit FPTASs. These results give some evidence that the restricted model of computation defined by the Lasserre hierarchy has serious weaknesses for covering problems of this type, and they might stimulate the study of better hierarchies (see e.g.~\cite{BienstockZ04}).
Quoting \cite{barakweb}: ``While until recently we had very little tools to take advantage of the SOS algorithm (at least in the sense of having rigorous analysis), we now have some indications that, when applied to the \emph{right problems} it can be a powerful tool\ldots''. We believe that the integrality gaps provided here help to shape the above sentence.
\paragraph{How to read this paper.} Most of the concepts and technical aspects in this paper are anticipated by concrete examples (see Examples~\ref{ex:def}, \ref{examplediag}, \ref{example:almostdiag}, \ref{ex:strategy}, \ref{exampleknap}, \ref{ex:mkp} and Section~\ref{sect:exmkp}) and high-level expositions, with the aim of providing the reader with the essential intuition. Generalizations of the examples and formal proofs are then presented. The non-expert reader can get the main sense of the content by reading only the definitions and the provided examples (and skipping the remaining parts). The expert reader may skip some of the examples.
\section{The Lasserre Hierarchy: Definition}
In this section, we provide a definition of the Lasserre hierarchy~\cite{Lasserre01}. At a slight cost in notation, the hierarchy is introduced in full generality (and not tailored to the studied problems). The reason for this choice is that the almost diagonal form derived in this paper (see Lemma~\ref{th:lassalmostdiag}) holds in the general case, and we believe it will be useful for other problems as well.
In our notation, we mainly follow the survey of Laurent~\cite{Laurent03}. We also provide some well-known properties with the aim to be self-contained and use well-known facts from linear algebra (see e.g. \cite{Strang} and Section~\ref{linear algebra}).
\paragraph{Variables and Moment Matrix.} Throughout this paper, vectors are written as columns. Let $N$ denote the set $\{1,\ldots,n\}$. The collection of all subsets of $N$ is denoted by $\mathcal{P}(N)$. For any integer $t\geq 0$, let $\mathcal{P}_t(N)$ denote the collection of subsets of $N$ having cardinality at most~$t$.
Let $y\in \mathbb{R}^{\mathcal{P}(N)}$. For convenience, $y_{\{i\}}$ is abbreviated as $y_i$ for all $i\in N$. Let $Diag(y)$ denote the diagonal matrix in $\mathbb{R}^{\mathcal{P}(N)\times \mathcal{P}(N)}$ with $(I,I)$-entry equal to $y_I$ for all $I\in \mathcal{P}(N)$. For any nonnegative integer $t\leq n$, let $M_t(y)$ denote the matrix with $(I,J)$-entry $y_{I\cup J}$ for all $I,J\in \mathcal{P}_t(N)$.
Matrix $M_n(y)$ is known as the \emph{moment matrix} of $y$.
\paragraph{Lasserre Hierarchy Definition.} Let $\mathcal{K}$ be defined by the following \begin{equation}\label{eq:polytopeK} \mathcal{K}:=\{x\in[0,1]^n:g_{\ell}(x)\geq 0 \text{ for } \ell=1,\ldots,m\} \end{equation} where $g_{\ell}$ is a non-constant polynomial\footnote{In our applications $g_{\ell}$ is a linear function in $x$.} in $x$ for $\ell=1,\ldots,m$.
We are interested in obtaining the convex hull of the integral points in $\mathcal{K}$, therefore we can assume that each variable occurs in every polynomial $g_{\ell}$ with degree at most one, since $x_i^2=x_i$ for every $i\in N$ when $x$ is integral.
Given a polynomial $g(x)$, we use the same symbol $g$ to denote the vector in $\mathbb{R}^{\mathcal{P}(N)}$ where the entry indexed by $I$ is equal to the coefficient of the term $\prod_{i\in I} x_i$ in $g(x)$, for all $I\in \mathcal{P}(N)$ and, therefore, $g(x) = \sum_{I\subseteq N}\left(g_I \prod_{i\in I} x_i\right )$.
For $g,y\in \mathbb{R}^{\mathcal{P}(N)}$, we define $g*y := M_N(y) g$, often called \emph{shift operator}\footnote{For any polynomial $g(x)$ the shift operation $(g*y)_I$ is obtained by ``linearizing'' the polynomial $g(x)\prod_{i\in I}x_i$.}; that is, the $I$-th entry of vector $g*y$, namely $(g*y)_I$, is equal to $\sum_{J\subseteq N} g_J y_{I\cup J}$.\footnote{ When $g(x)$ is a linear function of $x$, i.e. $g(x) = \sum_{i=1}^n g_{i} \cdot x_i + g_{\emptyset}$ we have $(g*y)_I=\sum_{i=1}^n g_i y_{I\cup \{i\}}+g_I$.}
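For concreteness, the shift operator can be evaluated directly from the definition. The following Python sketch (ours; the helper names `powerset` and `shift` and the numeric values are illustrative assumptions) computes $(g*y)_I=\sum_{J\subseteq N} g_J y_{I\cup J}$ for a linear $g$:

```python
from itertools import chain, combinations

def powerset(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1))]

def shift(g, y, n):
    """Shift operator z = g*y: z_I = sum over J of g_J * y_{I union J}.
    g and y are dicts mapping frozensets to coefficients/entries."""
    P = powerset(n)
    return {I: sum(g.get(J, 0.0) * y[I | J] for J in P) for I in P}

# Illustrative linear constraint g(x) = 2*x1 + 3*x2 - 1 (assumed values).
g = {frozenset(): -1.0, frozenset({0}): 2.0, frozenset({1}): 3.0}
y = {frozenset(): 1.0, frozenset({0}): 0.5,
     frozenset({1}): 0.5, frozenset({0, 1}): 0.25}
z = shift(g, y, 2)
print(z[frozenset()])     # z_empty = 2*0.5 + 3*0.5 - 1 = 1.5
print(z[frozenset({0})])  # z_1 = 2*0.5 + 3*0.25 - 1*0.5 = 1.25
```

For linear $g$ this reproduces the closed form $(g*y)_I=\sum_{i=1}^n g_i y_{I\cup \{i\}}+g_{\emptyset}y_I$ given in the footnote.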
We will use $A\succeq 0$ to denote that matrix $A$ is positive semi-definite (PSD) (see e.g. \cite{Strang} and Appendix~\ref{linear algebra}).
\begin{definition}\label{lassDef} The Lasserre hierarchy at the $t$-th level, denoted as $\text{\sc{Las}}_t(\mathcal{K})$, is the set of vectors $y\in \mathbb{R}^{\mathcal{P}(N)}$ that satisfy the following \begin{eqnarray} y_{\emptyset}&=&1 \\ M_{t+1}(y)&\succeq& 0 \\ M_{t}( g_{\ell}*y )&\succeq& 0 \qquad \ell=1\ldots m \end{eqnarray} \end{definition} In the following we will informally call $M_{t+1}(y)$ the \emph{Variable Moment Matrix}, and $M_{t}( g_{\ell}*y )$ the \emph{Constraint Moment Matrix} (the context will make clear which level $t$ we are referring to).
\begin{example}\label{ex:def} Here we introduce a small example that will be used to gain the essential intuition of several properties before their formal exposition. The expert reader can skip these parts.
The example consists of a linear program with one constraint and two variables: \begin{equation}\label{eq:expolytope} \mathcal{K}:=\{x_1,x_2\in[0,1]:g(x)=a_1x_1+a_2x_2-b\geq 0\} \end{equation} At the $2$-nd level of Lasserre we obtain the \emph{full} variable and constraint moment matrices. The variable moment matrix is as follows. {\small \begin{equation}\label{ex:My} M_2(y) =\left( \begin{array}{cccc} y_{\emptyset} & y_{1} & y_{2} & y_{12} \\ y_{1} & y_{1} & y_{12} & y_{12} \\ y_{2} & y_{12} & y_{2} & y_{12} \\ y_{12} & y_{12} & y_{12} & y_{12} \end{array} \right) \succeq 0 \end{equation} } Similarly, let $z=g*y$, the constraint moment matrix is as follows: {\small \begin{equation}\label{ex:Mz} M_2(z) =\left( \begin{array}{cccc} z_{\emptyset} & z_{1} & z_{2} & z_{12} \\ z_{1} & z_{1} & z_{12} & z_{12} \\ z_{2} & z_{12} & z_{2} & z_{12} \\ z_{12} & z_{12} & z_{12} & z_{12} \end{array} \right) \succeq 0 \end{equation} } where $z_{\emptyset}=a_1y_1+a_2y_2-b$, $z_{1}=a_1y_1+a_2y_{12}-by_1$, $z_{2}=a_1y_{12}+a_2y_{2}-by_2$ and $z_{12}=(a_1+a_2-b)y_{12}$. \end{example}
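As a numerical companion to the example above (our illustration, with assumed values $a_1=2$, $a_2=3$, $b=1$, and $y$ taken as the moments of two independent fair coin flips, $y_I=(1/2)^{|I|}$), one can build $M_2(y)$ and $M_2(z)$ explicitly and test positive semidefiniteness via eigenvalues:

```python
import numpy as np

# Full moment matrices from the two-variable example, with rows/columns
# indexed by (empty, {1}, {2}, {1,2}).
def M2(w):
    we, w1, w2, w12 = w
    return np.array([[we,  w1,  w2,  w12],
                     [w1,  w1,  w12, w12],
                     [w2,  w12, w2,  w12],
                     [w12, w12, w12, w12]], dtype=float)

# y = moments of two independent fair coin flips: y_I = (1/2)^|I|.
y = (1.0, 0.5, 0.5, 0.25)
a1, a2, b = 2.0, 3.0, 1.0        # illustrative constraint 2x1 + 3x2 >= 1
ye, y1, y2, y12 = y
z = (a1*y1 + a2*y2 - b,          # z_empty
     a1*y1 + a2*y12 - b*y1,      # z_1
     a1*y12 + a2*y2 - b*y2,      # z_2
     (a1 + a2 - b)*y12)          # z_12

# M_2(y) is PSD: its diagonalized entries are all 1/4 >= 0.
print(bool(np.all(np.linalg.eigvalsh(M2(y)) >= -1e-9)))   # True
# M_2(z) is NOT PSD: the integral point (0,0) violates the constraint
# yet carries probability 1/4, so the entry -b/4 < 0 shows up.
print(bool(np.all(np.linalg.eigvalsh(M2(z)) >= -1e-9)))   # False
```

The second print anticipates the discussion of the next sections: the constraint moment matrix detects integral points that violate the constraint but carry positive probability.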
\section{The Lasserre Hierarchy at Level $n$}\label{Sect:leveln}
With $n$ variables, the $n$-th level of the Lasserre hierarchy is sufficient to obtain a tight relaxation where the only feasible solutions are convex combinations of integral solutions~\cite{Lasserre01}. This can be proved by using the \emph{canonical lifting lemma} (see Laurent~\cite{Laurent03}), which characterizes when a full moment matrix $M_n(y)$ is positive semidefinite by diagonalizing it. In the following we revisit this result (see~\cite{Laurent03} for additional details), since it will be useful for introducing the almost diagonal form.
Informally, the properties of the moment matrices at level $n$ are ``revealed'' by diagonalization; we will use ``partial'' diagonalization to ``reveal'' the properties of moment matrices at level $t$.
\begin{example}\label{examplediag}
By using Example~\ref{ex:def}, we present the essential core properties that are used in showing convergence to the integral hull of the Lasserre hierarchy.
It is well known that elementary symmetric (i.e. simultaneous row and column) matrix operations preserve the positive semidefiniteness of a matrix. These transformations are known as \emph{congruent} transformations (see Appendix \ref{linear algebra} and the notation therein). For example, consider matrix \eqref{ex:Mz}. Subtracting the second and the third row from the first row and adding the last row (and, symmetrically, doing the same for the first column), we obtain the following congruent matrix, which is PSD if and only if $M_2(z)$ is PSD (see Lemma~\ref{th:simop}). $$ {\small M_2(z) \cong \left( \begin{array}{cccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 & 0 \\ 0 & z_{1} & z_{12} & z_{12} \\ 0 & z_{12} & z_{2} & z_{12} \\ 0 & z_{12} & z_{12} & z_{12} \end{array} \right) \succeq 0 }$$ Now, subtract the last row from the second row (symmetrically for columns); then subtract the last row from the third row (symmetrically for columns). We obtain the following diagonal matrix. {\small $$ M_2(z) \cong \left( \begin{array}{cccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 & 0 \\ 0 & 0 & z_{2}-z_{12} & 0 \\ 0 & 0 & 0 & z_{12} \end{array} \right) \succeq 0 $$} With a little experimentation, it is not difficult to work out the general rule to diagonalize any full moment matrix. This transformation can be conveniently described by multiplying the moment matrix by a special matrix known as the M\"{o}bius matrix of $\mathcal{P}(N)$ (see~\cite{Laurent03}, below and the following sections). 
{\tiny $$\underbrace{\left( \begin{array}{cccc} 1 & -1 & -1 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{array} \right)}_{Z^{-1}=\mbox{M\"{o}bius matrix}} \left( \begin{array}{cccc} z_{\emptyset} & z_{1} & z_{2} & z_{12} \\ z_{1} & z_{1} & z_{12} & z_{12} \\ z_{2} & z_{12} & z_{2} & z_{12} \\ z_{12} & z_{12} & z_{12} & z_{12} \end{array} \right) \underbrace{ \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 1 & -1 & -1 & 1 \end{array} \right)}_{(Z^{-1})^{\top}}
= \left( \begin{array}{cccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 & 0 \\ 0 & 0 & z_{2}-z_{12} & 0 \\ 0 & 0 & 0 & z_{12} \end{array} \right) $$ }
{\tiny $$ \left( \begin{array}{cccc} z_{\emptyset} & z_{1} & z_{2} & z_{12} \\ z_{1} & z_{1} & z_{12} & z_{12} \\ z_{2} & z_{12} & z_{2} & z_{12} \\ z_{12} & z_{12} & z_{12} & z_{12} \end{array} \right)
= \underbrace{\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right)}_{Z\mbox{ matrix}} \left( \begin{array}{cccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 & 0 \\ 0 & 0 & z_{2}-z_{12} & 0 \\ 0 & 0 & 0 & z_{12} \end{array} \right)\underbrace{ \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{array} \right)}_{Z^{\top}} $$ }
Similarly we can diagonalize the full moment matrix \eqref{ex:My} of the variables: {\small $$ M_2(y) \cong \left( \begin{array}{cccc} y_{\emptyset,\{1,2\}} & 0 & 0 & 0 \\ 0 & y_{\{1\},\{2\}} & 0 & 0 \\ 0 & 0 & y_{\{2\},\{1\}} & 0 \\ 0 & 0 & 0 & y_{\{1,2\},\{\emptyset\}} \end{array} \right) \succeq 0 $$}
where $y_{\emptyset,\{1,2\}}=y_{\emptyset}-y_1-y_2+y_{12}$, $y_{\{1\},\{2\}}=y_{1}-y_{12}$, $y_{\{2\},\{1\}}=y_{2}-y_{12}$ and $y_{\{1,2\},\{\emptyset\}}=y_{12}$.
Note that $M_2(y)\succeq 0$ if and only if the above diagonal matrix is PSD, i.e. if and only if all the diagonal entries are nonnegative. Moreover, the sum of the diagonal entries is equal to $y_{\emptyset}=1$, so the diagonal entries form a probability distribution. For example, $y_{\emptyset,\{1,2\}}$ denotes the probability that neither $x_1$ nor $x_2$ is set to one; $y_{\{1\},\{2\}}$ denotes the probability that only $x_1$ is set to one (and $x_2$ to zero). So we have a probability distribution over all the integral solutions.
It is also very instructive to give a closer look at the diagonalization of the constraint moment matrix. Consider the first diagonal entry: $z_{\emptyset}-z_1-z_2+z_{12}= (a_1y_1+a_2y_2-b)-(a_1y_1+a_2y_{12}-by_1)-(a_1y_{12}+a_2y_{2}-by_2)+(a_1+a_2-b)y_{12}=(-b)(y_{\emptyset}-y_1-y_2+y_{12})=-b \cdot y_{\emptyset,\{1,2\}}$. So the first entry is equal to the value of the constraint when none of the variables is set to one multiplied by the corresponding probability. It is not difficult to verify that similar things happen to the other diagonal entries (see Lemma~\ref{th:constentry}) and we obtain the following: {\small $$ M_2(z) \cong \left( \begin{array}{cccc} (-b)y_{\emptyset,\{1,2\}} & 0 & 0 & 0 \\ 0 & (a_1-b)y_{\{1\},\{2\}} & 0 & 0 \\ 0 & 0 & (a_2-b)y_{\{2\},\{1\}} & 0 \\ 0 & 0 & 0 & (a_1+a_2-b)y_{\{1,2\},\{\emptyset\}} \end{array} \right) \succeq 0 $$ } It follows that $M_2(z)$ is PSD if and only if the diagonal matrix is PSD, which implies that we have positive probability for an integral solution if and only if the constraint is satisfied by that integral solution. Note that $y_1= y_{\{1\},\{2\}}+y_{\{1,2\},\emptyset}$ (similarly for $y_2$) can be seen as convex combination of \emph{feasible} integral solutions and therefore we have no integrality gap.
In the following, our example with two variables is generalized to any number of variables and constraints. \end{example}
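The diagonalization in the example can be reproduced numerically. The sketch below (ours; the concrete values of $w$ are illustrative) conjugates the full moment matrix by the M\"{o}bius matrix of $\mathcal{P}(\{1,2\})$ and recovers the diagonal entries $w_{\emptyset}-w_1-w_2+w_{12}$, $w_1-w_{12}$, $w_2-w_{12}$, $w_{12}$:

```python
import numpy as np

# Zeta matrix of P({1,2}), rows/columns ordered (empty, {1}, {2}, {1,2}).
Z = np.array([[1., 1., 1., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.],
              [0., 0., 0., 1.]])
Zinv = np.linalg.inv(Z)          # the Moebius matrix of P({1,2})

def M2(w):
    we, w1, w2, w12 = w
    return np.array([[we,  w1,  w2,  w12],
                     [w1,  w1,  w12, w12],
                     [w2,  w12, w2,  w12],
                     [w12, w12, w12, w12]])

w = (1.0, 0.5, 0.5, 0.25)        # a generic vector indexed by P({1,2})
D = Zinv @ M2(w) @ Zinv.T        # congruent transformation

# D is diagonal, with entries (w0-w1-w2+w12, w1-w12, w2-w12, w12).
print(bool(np.allclose(D, np.diag(np.diag(D)))))   # True
print(np.round(np.diag(D), 3))                     # [0.25 0.25 0.25 0.25]
```

With these values the diagonal entries are the probabilities of the four integral points, matching the probabilistic reading given above.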
\subsection{The Lasserre Hierarchy at Level $n$ in Diagonal Form}
Let us start by introducing some basic notations and preliminary properties. We use the generic vector $w\in \mathbb{R}^{\mathcal{P}(N)}$ to denote either the vector $y\in\mathbb{R}^{\mathcal{P}(N)}$ of variables, or the shifted vector $g*y$, for any $g\in\mathbb{R}^{\mathcal{P}(N)}$. \begin{definition} Let $w\in \mathbb{R}^{\mathcal{P}(N)}$. For any $I,J\in \mathcal{P}(N)$, we define \begin{equation} \label{eq:probevent}
w_{I,J}: =\sum_{H\subseteq J} (-1)^{|H|} w_{H\cup I} \end{equation} Let $w^N\in \mathbb{R}^{\mathcal{P}(N)}$ be such that the $I$-th entry, with $I\subseteq N$, is equal to \begin{equation}\label{eq:totprob} w^N_{I} := w_{I,N\setminus I} \end{equation} \end{definition} Note that at level $n$, $y_I^N$ can be interpreted as the probability of the integral solution $\{x_i=1: i\in I,\ x_j=0: j\in N\setminus I\}$. The following two properties are easy to check.
\begin{lemma}\label{th:nonempty} For any $w\in \mathbb{R}^{\mathcal{P}(N)}$ and $I\cap J\not= \emptyset$ we have $w_{I, J}=0$. \end{lemma} \longer{ \begin{proof} For $x\in I\cap J$,
$w_{I,J} =\sum_{H\subseteq J} (-1)^{|H|} w_{H\cup I} = \sum_{H\subseteq J\setminus \{x\}} \left((-1)^{|H|} + (-1)^{|H|+1} \right)w_{H\cup I} = 0$.
\end{proof} }
\begin{lemma}\label{th:sum1} For any $J\subseteq N$ and $w\in \mathbb{R}^{\mathcal{P}(N)}$ we have \begin{equation} w_J = \sum_{I\subseteq N} w_{I\cup J, N\setminus I} \label{sum1} \end{equation}
\end{lemma}
\longer{ \begin{proof} By definition,
$\sum_{I\subseteq N} w_{I\cup J, N\setminus I} = \sum_{I\subseteq N} \sum_{H\subseteq N\setminus I} (-1)^{|H|} w_{I\cup J\cup H}$.
Consider the generic term of the above sum, say $w_{T\cup J}$, with $T\subseteq N$. The set $T$ can be seen as the union of two disjoint sets, say $I$ and $H$, with $T=I\cup H$; the variable $w_{T\cup J}$ appears once for every such pair of disjoint sets $I$ and $H$, each time multiplied by the coefficient $(-1)^{|H|}$. Therefore the sum of the coefficients of the generic term $w_{T\cup J}$ is equal to $\sum_{H \subseteq T}(-1)^{|H|}$; if $|T|\geq 1$ then half of the subsets $H$ have even cardinality and therefore $\sum_{H \subseteq T}(-1)^{|H|}=0$; otherwise $T=\emptyset$ and the coefficient is equal to $1$. The claim follows:
$\sum_{I\subseteq N} \sum_{H\subseteq N\setminus I} (-1)^{|H|} w_{I\cup J\cup H} = \sum_{T\subseteq N} w_{T\cup J} \sum_{H\subseteq T} (-1)^{|H|} = w_J$.
\end{proof} }
\paragraph{Diagonalization.}\label{sect:inc-esc}
Let $Z$ denote the \emph{zeta matrix} of the lattice $\mathcal{P}(N)$, that is, the square $0$-$1$ matrix indexed by $\mathcal{P}(N)$ such that $Z_{I,J}=1$ if and only if $I \subseteq J$:
\begin{equation}\label{zetamatrix} Z_{I,J}= \left\{ \begin{array}{ll} 1 & \text{if } I \subseteq J,\\ 0 & \text{otherwise}. \end{array} \right. \end{equation}
This matrix is known to be invertible and the inverse is known as the M\"{o}bius matrix of $\mathcal{P}(N)$ whose entries are defined as follows: \begin{equation}\label{mobiusmatrix} Z^{-1}_{I,J}= \left\{ \begin{array}{ll}
(-1)^{|J\setminus I |} & \text{if } I \subseteq J,\\ 0 & \text{otherwise}. \end{array} \right. \end{equation}
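Both matrices can be generated programmatically from the definitions \eqref{zetamatrix} and \eqref{mobiusmatrix}. The sketch below (ours; helper names are illustrative) builds them for the lattice $\mathcal{P}(N)$ and checks the inverse relation:

```python
import numpy as np
from itertools import chain, combinations

def powerset(n):
    """All subsets of {0, ..., n-1} as frozensets, ordered by size."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1))]

def zeta_and_mobius(n):
    """Zeta matrix of P(N) (entry 1 iff I is a subset of J) and the
    Moebius matrix (entry (-1)^|J \\ I| iff I is a subset of J)."""
    P = powerset(n)
    Z = np.array([[1.0 if I <= J else 0.0 for J in P] for I in P])
    M = np.array([[(-1.0) ** len(J - I) if I <= J else 0.0
                   for J in P] for I in P])
    return Z, M

Z, M = zeta_and_mobius(3)
print(bool(np.allclose(Z @ M, np.eye(len(Z)))))   # True: M = Z^{-1}
```

This is exactly M\"{o}bius inversion on the boolean lattice: for $I\subseteq K$, $\sum_{I\subseteq J\subseteq K}(-1)^{|K\setminus J|}$ vanishes unless $I=K$.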
The diagonalization of the moment matrices is obtained by the following \emph{congruent transformation} (see Section \ref{linear algebra} and Definition~\ref{Def:congruence}): for $A\in \{M_n(y), M_n(g*y)\}$, $A\rightarrow Z^{-1}A (Z^{-1})^{\top}$, where $Z^{-1}$ is the M\"{o}bius matrix of $\mathcal{P}(N)$ (see~\cite{Laurent03}).
\begin{lemma}\label{th:diagonalization} For any $w\in \mathbb{R}^{\mathcal{P}(N)}$, $Z Diag(w^N) Z^{\top} = M_n(w)$. \end{lemma} \begin{proof}
$\left(Z Diag(w^N) Z^{\top}\right)_{I,J} = \sum_{U\subseteq N} Z_{I,U} Z_{J,U}w_{U,N\setminus U}$, which is equal to $\sum_{\substack{U\subseteq N\\ I\cup J\subseteq U}} w_{U,N\setminus U}= \sum_{U\subseteq N\setminus (I\cup J)} w_{U\cup I\cup J,N\setminus U}=w_{I\cup J}$, where the latter equality follows from Lemmas~\ref{th:sum1} and~\ref{th:nonempty}. \end{proof}
By the previous lemma it follows that for $y\in \mathbb{R}^{\mathcal{P}(N)}$, and for any constraint $g(x)\geq 0$, we have the following congruence transformations (recall $z = g*y$ and $z^N,y^N$ are defined by \eqref{eq:totprob}). \begin{eqnarray} M_n(y) &\cong& Diag(y^N)\\ M_n(z) &\cong& Diag(z^N) \end{eqnarray}
\begin{lemma}\label{th:constentry} For any $g,y\in \mathbb{R}^{\mathcal{P}(N)}$ and $z = g*y$ we have \begin{equation} z^N_I = \left(\sum_{K\subseteq I}g_K\right)\cdot y^N_I \end{equation} \end{lemma}
\begin{proof}
By definition, $z^N_I=\sum_{H\subseteq N\setminus I} (-1)^{|H|} z_{H\cup I}=\sum_{H\subseteq N\setminus I} (-1)^{|H|} \sum_{K\subseteq N} g_K \cdot y_{K\cup H\cup I}= \sum_{K\subseteq N} \left( g_K \sum_{H\subseteq N\setminus I} (-1)^{|H|} y_{K\cup H\cup I}\right)= \left( \sum_{K\subseteq I} g_K\right) \sum_{H\subseteq N\setminus I} (-1)^{|H|} y_{ H\cup I}$,
where we used the fact that for any $K\not \subseteq I$ we have $\sum_{H\subseteq N\setminus I} (-1)^{|H|} y_{K\cup H\cup I}=0$. \end{proof}
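Lemma~\ref{th:constentry} holds for an arbitrary vector $y$ and an arbitrary polynomial $g$. The following sketch (ours; random data, illustrative helper names) verifies the identity $z^N_I=\left(\sum_{K\subseteq I}g_K\right)y^N_I$ for $n=3$:

```python
import numpy as np
from itertools import chain, combinations

def powerset(s):
    """All subsets of the iterable s as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, k) for k in range(len(s) + 1))]

n = 3
N = frozenset(range(n))
P = powerset(N)
rng = np.random.default_rng(0)
y = {I: rng.random() for I in P}             # arbitrary vector y
g = {I: rng.standard_normal() for I in P}    # arbitrary polynomial g

z = {I: sum(g[J] * y[I | J] for J in P) for I in P}   # z = g*y

def wN(w, I):
    """w^N_I = sum over H subset of N\\I of (-1)^|H| * w_{H union I}."""
    return sum((-1) ** len(H) * w[H | I] for H in powerset(N - I))

# Check z^N_I = (sum_{K subset of I} g_K) * y^N_I for every I.
ok = all(np.isclose(wN(z, I),
                    sum(g[K] for K in powerset(I)) * wN(y, I))
         for I in P)
print(ok)   # True
```

Note that no feasibility is assumed here: the identity is purely algebraic, which is what makes it usable inside the gap constructions.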
\begin{lemma} For any $g,y\in \mathbb{R}^{\mathcal{P}(N)}$ and $z = g*y$ we have
\begin{eqnarray} M_n(y)\succeq 0 &\Longleftrightarrow& \left(y^N_I \geq 0 \quad \forall I\subseteq N \right)\label{vars}\\ M_n(z)\succeq 0 &\Longleftrightarrow& \left(z^N_I = \left(\sum_{K\subseteq I}g_K\right)\cdot y^N_I \geq 0 \quad \forall I\subseteq N \right) \label{constraints} \end{eqnarray} \end{lemma} It follows that if $y^N_I>0$ then $\sum_{K\subseteq I}g_K\geq 0$, i.e. the integral solution obtained by setting $x_i=1$ for every $i\in I$ and $x_j=0$ for every $j\notin I$ satisfies the constraint $g(x)\geq 0$ (vice versa, if $\sum_{K\subseteq I}g_K< 0$, then we must have $y^N_I =0$).
\longer{ \begin{example} Let $g(x)= 3x_1+x_2-x_3 - 3$, where $g(x)\geq0$ is a constraint. By conditions~\eqref{vars} and \eqref{constraints} it follows that $y^N_I = 0$ for any $I\subseteq N$ such that $\sum_{K\subseteq I}g_K<0$. For example, for $I=\{1,3\}$ we have $\sum_{K\subseteq I}g_K = 3-1-3<0$ and therefore $y^N_{\{1,3\}}=0$. \end{example} }
\begin{lemma}\label{th:decompositionLeveln}
The projection on $\mathcal{K}$ of any feasible solution $y\in\text{\sc{Las}}_n(\mathcal{K})$, i.e. $\{y_j:j\in N\}$, can be seen as a convex combination of integral solutions that are feasible for $\mathcal{K}$. \end{lemma} \begin{proof} For any $j\in N$,
$y_j = \sum_{I\subseteq N} y_{I\cup \{j\}, N\setminus I} = \sum_{\substack{I\subseteq N\\ y_{I, N\setminus I}>0 }} \frac{y_{I\cup \{j\}, N\setminus I} }{y_{I, N\setminus I}} \cdot y_{I, N\setminus I}$
by Lemma~\ref{th:sum1}. Note that for any $y_{I, N\setminus I}>0$ we have $\frac{y_{I\cup \{j\}, N\setminus I} }{y_{I, N\setminus I}}=1$ if $j\in I$ and zero otherwise. \longer{ \begin{equation} \frac{y_{I\cup \{j\}, N\setminus I} }{y_{I, N\setminus I}} =\left\{ \begin{array}{ll} 1 & \text{if } j\in I\\ 0 & \text{else}. \end{array}\right. \end{equation} } By Lemma~\ref{th:sum1} we have $\sum_{I\subseteq N} y_{I, N\setminus I}=1$ and by \eqref{vars} we have $y_{I, N\setminus I}\geq 0$. It follows that the solution $\{y_j:j\in N\}$ can be seen as a convex combination of integral solutions: for every $y_{I, N\setminus I}>0 $, the integral solutions are those obtained by setting to one all the variables with indices in $I$ and to zero the others. Note that these integral solutions satisfy the constraints of $\mathcal{K}$, since $y_{I, N\setminus I}>0 $ implies that the constraints are satisfied, by \eqref{constraints}. \end{proof}
\section{The Lasserre Hierarchy at Level $t$}\label{Sect:levelt}
In this section we translate the arguments of level $n$ to level $t$, for any $1\leq t \leq n$, by providing a ``partial'' diagonalization of the moment matrices. We will use these congruent transformations in the gap analyses. The following example introduces the main concepts.
\begin{example}\label{example:almostdiag} Recall that $M_2(z)$ from Example~\ref{examplediag} is equal to the following.
{\tiny $$ \left( \begin{array}{cccc} z_{\emptyset} & z_{1} & z_{2} & z_{12} \\ z_{1} & z_{1} & z_{12} & z_{12} \\ z_{2} & z_{12} & z_{2} & z_{12} \\ z_{12} & z_{12} & z_{12} & z_{12} \end{array} \right)
= \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{cccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 & 0 \\ 0 & 0 & z_{2}-z_{12} & 0 \\ 0 & 0 & 0 & z_{12} \end{array} \right) \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{array} \right) $$ } $M_1(z)$ is obtained from $M_2(z)$ by removing the last row and column and therefore it is equal to
{\tiny $$ \left( \begin{array}{ccc} z_{\emptyset} & z_{1} & z_{2} \\ z_{1} & z_{1} & z_{12} \\ z_{2} & z_{12} & z_{2} \end{array} \right)
=\underbrace{ \left( \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)}_{A} \left( \begin{array}{ccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 \\ 0 & 0 & z_{2}-z_{12} \end{array} \right)\underbrace{ \left( \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{array} \right)}_{A^{\top}} + z_{12} \left( \begin{array}{c} 1 \\ 1 \\
1
\end{array} \right) \left( \begin{array}{ccc} 1 & 1 & 1 \end{array} \right) $$ } The inverse of $A$ is the M\"{o}bius matrix of $\mathcal{P}_1(N)$ and is equal to
$A^{-1} {\tiny = \left( \begin{array}{ccc} 1 & -1 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right)} $ and by multiplying $M_1(z)$ on the left by $A^{-1}$ and on the right by $(A^{-1})^{\top}$, we obtain the following matrix that is congruent to $M_1(z)$: {\tiny $$ \left( \begin{array}{ccc} 1 & -1 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) \left( \begin{array}{ccc} z_{\emptyset} & z_{1} & z_{2} \\ z_{1} & z_{1} & z_{12} \\ z_{2} & z_{12} & z_{2} \end{array} \right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{array} \right)
=\left( \begin{array}{ccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 \\ 0 & 0 & z_{2}-z_{12} \end{array} \right)+ z_{12}\underbrace{ \left( \begin{array}{ccc} 1 & -1 & -1 \\ -1 & 1 & 1 \\ -1 & 1 & 1 \end{array} \right)}_{\mbox{matrix of rank one}} $$ }
Note that $M_1(z)$ is congruent to a matrix that differs from a diagonal matrix by a matrix of rank one. In the next section we will see that this fact can be generalized to any level and for any number of variables.
The fact that the ``distance'' from the diagonal matrix can be expressed by matrices of rank one is an intriguing property and it will play a fundamental role in our analysis.
\end{example}
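The rank-one correction in the example is easy to check numerically. The sketch below (ours; the values chosen for $z_{\emptyset},z_1,z_2,z_{12}$ are arbitrary, since the identity holds for any $z$) verifies that $A^{-1}M_1(z)(A^{-1})^{\top}$ equals the diagonal part plus $z_{12}GG^{\top}$ with $G=A^{-1}\mathbf{1}$:

```python
import numpy as np

# M_1(z): the level-1 truncation, rows/columns indexed by (empty,{1},{2}).
ze, z1, z2, z12 = 1.5, 1.25, 1.75, 1.0    # arbitrary illustrative values
M1 = np.array([[ze, z1,  z2],
               [z1, z1,  z12],
               [z2, z12, z2]])

# A = zeta matrix of P_1(N); A^{-1} is the Moebius matrix of P_1(N).
A = np.array([[1., 1., 1.],
              [0., 1., 0.],
              [0., 0., 1.]])
Ainv = np.linalg.inv(A)

D = np.diag([ze - z1 - z2 + z12, z1 - z12, z2 - z12])
G = Ainv @ np.ones((3, 1))       # G = A^{-1} (1,1,1)^T = (-1, 1, 1)^T
almost = D + z12 * (G @ G.T)     # diagonal matrix + one rank-one matrix

print(bool(np.allclose(Ainv @ M1 @ Ainv.T, almost)))   # True
```

This is the $n=2$, $t=1$ instance of the almost diagonal decomposition established in the next subsection.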
\subsection{The Lasserre Hierarchy at Level $t$ in Almost Diagonal Form}\label{Sect:almostdiag}
In 1960, Wilf \cite{wilf60} introduced the concept of almost diagonal matrices: a matrix $A$ is \emph{almost diagonal} if there exists a diagonal matrix $D$ and vectors $x$ and $y$ such that $A= D + xy^{\top}$, i.e. $A$ differs from a diagonal matrix by a matrix of rank one. Generalizing this, we say that $A$ is \emph{$k$-almost diagonal} if it differs from a diagonal matrix by $k$ matrices of rank one (we will omit $k$ for brevity).
For any $w\in\mathbb{R}^{\mathcal{P}(N)}$, let $Diag(w^N,t)$ be the submatrix of $Diag(w^N)$ indexed by $\mathcal{P}_t(N)$.\footnote{Vector $w$ is intended to be either the vector $y\in\mathbb{R}^{\mathcal{P}(N)}$ of variables or the shifted vector $g*y$ for any $g\in\mathbb{R}^{\mathcal{P}(N)}$.} In the following we show that any moment matrix $M_t(w)$ is congruent to a matrix $M_t^*(w^N)$ that differs from $Diag(w^N,t)$ by $k$ matrices of rank one, where $k= |\mathcal{P}(N)\setminus \mathcal{P}_{t}(N)|$. This gives a different view of the Lasserre hierarchy at level~$t$.
We will refer to this re-formulation as the \emph{Lasserre hierarchy in almost diagonal form} (and $M_t^*(w^N)$ as the \emph{almost diagonal decomposition} of $M_t(w)$).
\begin{lemma}[Almost Diagonal Form]\label{th:lassalmostdiag} Let $G(J)\in \mathbb{R}^{\mathcal{P}_t(N)}$ be a vector with the $I$-th entry equal to \begin{equation*} G(J)_{I} =\left\{ \begin{array}{ll}
(-1)^{t-|I|}{|J|-|I|-1 \choose t-|I|} & I\subseteq J\\
0 & \text{otherwise} \end{array} \right. \end{equation*}
For any $w\in\mathbb{R}^{\mathcal{P}(N)}$ and $t= 0,1,\ldots, n$ \begin{equation} \boxed{ M_t(w)\cong M_t^*(w^N) = Diag(w^N,t) +\sum_{J\in \mathcal{P}(N)\setminus \mathcal{P}_t(N)} w_J^N R(J)} \label{eq:conalmostdiag} \end{equation} where $R(J) =G(J)G(J)^{\top}$ is a matrix (of rank one). \end{lemma}
\begin{proof} For any $t=0,1,\ldots,n$, consider the following block decomposition of the zeta matrix~\eqref{zetamatrix}: \begin{eqnarray*} Z&=& \left( \begin{array}{cc} A(t) & B(t) \\
C(t) & D(t) \end{array} \right) \end{eqnarray*} where $A(t)\in \mathbb{R}^{\mathcal{P}_t(N)\times \mathcal{P}_t(N)}$ is the square submatrix of $Z$ indexed by $\mathcal{P}_t(N)$, and the submatrices $B(t),C(t),D(t)$ are defined accordingly.
Note that at level $t$, matrix $M_t(w)$ is equal to the square submatrix of $M_n(w)$ indexed by $\mathcal{P}_t(N)$.
Recall that $Diag(w^N,h)$ is defined as the submatrix of $Diag(w^N)$ indexed by $\mathcal{P}_h(N)$. Let $Diag(w^N,\overline{h})$ be the submatrix of $Diag(w^N)$ indexed by $\mathcal{P}(N)\setminus \mathcal{P}_h(N)$. It follows that
\begin{eqnarray*} M_{t}(w) &=& A(t) Diag(w^N,{t}) A(t)^{\top}+B(t) Diag(w^N,{\overline{t}}) B(t)^{\top} \end{eqnarray*} Since matrix $A(t)$ is also invertible, $M_t(w) \cong M_t^*(w)$, where:
\begin{eqnarray}\label{eq:almostdiag} M_t^*(w)&=&Diag(w^N,t) +G Diag(w^N,{\overline{t}}) G^{\top} \\ G&=&A(t)^{-1}B(t) \end{eqnarray}
The claim follows by showing that for any $I\in \mathcal{P}_t(N)$ and $J\in \mathcal{P}(N)\setminus \mathcal{P}_t(N)$ the $(I,J)$-th entry of matrix $G$ is equal to (in the claim $G(J)$ denotes the $J$-th column of $G$) \begin{equation} G_{I,J} =\left\{ \begin{array}{ll}
(-1)^{t-|I|}{|J|-|I|-1 \choose t-|I|} & I\subseteq J\\
0 & \text{otherwise} \end{array} \right. \end{equation}
By definition $G_{I,J} =\sum_{K\in \mathcal{P}_t(N)} A(t)^{-1}_{I,K} B(t)_{K,J}$. Note that $A(t)^{-1}_{I,K}$ is different from zero (and equal to $(-1)^{|K\setminus I|}$) when $I\subseteq K$, and $B(t)_{K,J}$ is different from zero (and equal to one) when $K\subseteq J$. Then note that $\sum_{\ell=0}^w (-1)^{\ell}{n \choose \ell}=(-1)^w {n-1\choose w}$ (assuming $ {n-1 \choose w}=0$ for any $w\geq n$) and therefore \begin{eqnarray*}
\sum_{\substack{I\subseteq K \subseteq J \\ K\in \mathcal{P}_t(N)}} (-1)^{|K\setminus I|}&=&\sum_{\ell=0}^{t- |I|} (-1)^{\ell}{|J|- |I|\choose \ell}=(-1)^{t-|I|}{|J|-|I|-1 \choose t-|I|} \end{eqnarray*}
\end{proof}
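The decomposition \eqref{eq:conalmostdiag} can be checked numerically for a small instance. The following sketch (numpy; it assumes the zeta-matrix identity $M_n(w)=Z\,Diag(w^N)\,Z^{\top}$ used in the proof, and random nonnegative stand-ins for the pseudoprobabilities $w^N$) builds $M_t(w)$ for $n=3$, $t=1$ and verifies the congruence to the almost diagonal form:

```python
import numpy as np
from itertools import combinations
from math import comb

n, t = 3, 1
N = list(range(n))
# All subsets ordered by size, so P_t(N) occupies the first positions.
subsets = [frozenset(c) for r in range(n + 1) for c in combinations(N, r)]
Pt = [S for S in subsets if len(S) <= t]

# Zeta matrix: Z[I,J] = 1 iff I is a subset of J; M_n(w) = Z Diag(w^N) Z^T.
Z = np.array([[1.0 if I <= J else 0.0 for J in subsets] for I in subsets])
rng = np.random.default_rng(0)
wN = rng.random(len(subsets))                      # arbitrary pseudoprobabilities
Mt = (Z @ np.diag(wN) @ Z.T)[:len(Pt), :len(Pt)]   # M_t(w), indexed by P_t(N)

def G_entry(I, J):
    # G(J)_I from the lemma.
    if not I <= J:
        return 0.0
    return (-1.0) ** (t - len(I)) * comb(len(J) - len(I) - 1, t - len(I))

# Almost diagonal form: Diag(w^N, t) plus one rank-one term per large subset J.
Mstar = np.diag(wN[:len(Pt)])
for j, J in enumerate(subsets):
    if len(J) > t:
        g = np.array([G_entry(I, J) for I in Pt])
        Mstar += wN[j] * np.outer(g, g)

# Congruence via A(t), the square submatrix of Z indexed by P_t(N).
Ainv = np.linalg.inv(Z[:len(Pt), :len(Pt)])
assert np.allclose(Ainv @ Mt @ Ainv.T, Mstar)
```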
\begin{remark} In order to avoid misinterpretation, we remark the following. One difference between matrix $M_t(w)$ and $M_t^*(w^N)$ is that matrix $M_t(w)$ is a function of the variables in $\{w_I:I\in \mathcal{P}_t(N)\}$, whereas matrix $M_t^*(w^N)$ is a function of the variables in $\{w_I^N:I\subseteq N\}$.
Every $w_I^N$ that appears in $M_t^*(w^N)$ is either $y_I^N$ (if $w=y$) or $z^N_I$ (if $z=g*y=w$), where $z^N_I= g(I)y_I^N$ and $g(I)$ denotes the value of $g(x)$ when we set all the variables in $\{x_{i}:i\in I\}$ to $1$ and the remaining variables to zero. The relationships between $w_I$ and $w_I^N$ are given by equations \eqref{eq:probevent}, \eqref{eq:totprob} and \eqref{sum1}.
So, in transforming matrix $M_t(w)$ into $M_t^*(w^N)$, we have performed a change of basis.
Note that any $y_I^N$ (also) depends on moments $y_J$ with $|J|>t$. But matrix $M_t(w)$ does not depend on moments $y_J$ with $|J|>t$, so also the congruent matrix $M_t^*(w^N)$ does not (by using \eqref{eq:probevent} one can check that higher order moments $y_J$ with $|J|>t$ cancel out).
One reason for using variables $\{y_I^N: I\subseteq N\}$ is because they have a very nice interpretation as (pseudo)probability (see \cite{BarakBHKSZ12,BarakKS13}): $y_I^N$ can be seen as the (pseudo)probability of the integral solution $\{x_{i}=1:i\in I\}$ (and zero the remaining variables); $z^N_I=g(I)y_I^N$ can be seen as the value $g(I)$ of constraint $g(x)\geq 0$ according to solution $\{x_{i}=1:i\in I\}$ multiplied by the corresponding (pseudo)probability. At level $n$, variables $\{y_I^N: I\subseteq N\}$ are actual probabilities, as already observed (see e.g. Example~\ref{examplediag}). At any level, any solution is a linear combination of these (pseudo)probabilities (see Equation \eqref{sum1}). \end{remark}
\begin{remark} Note that any matrix $R(J)=G(J)G(J)^{\top}$ in Lemma~\ref{th:lassalmostdiag} is PSD (see, e.g. Appendix \ref{linear algebra}). In the following we distinguish 3 parts of $M_t^*(w^N)$: the diagonal matrix $Diag(w^N,t)$ (that sometimes we call $D$ for brevity), the positive semi-definite ($\mathcal{PD}$) matrices (i.e. the rank one matrices $R(J)$ multiplied by \emph{positive} coefficient $w_J^N$), and the negative semi-definite ($\mathcal{ND}$) matrices (i.e. the rank one matrices $R(J)$ multiplied by \emph{negative} coefficient $w_J^N$). \end{remark}
\subsubsection{Almost Diagonal Form: User Guide}\label{Sect:gershgorin}
Assume that we want to prove that a given solution $y\in\mathbb{R}^{\mathcal{P}(N)}$ is a feasible solution for the Lasserre hierarchy at a certain level. For some $t\geq 0$, this boils down to checking whether $M_t(w)\succeq 0$ (where, recall, vector $w$ is intended to be either the vector $y$ of variables or the shifted vector $g*y$, for any $g\in\mathbb{R}^{\mathcal{P}(N)}$). By Lemma~\ref{th:lassalmostdiag}, this is equivalent to checking whether $M_t^*(w^N)\succeq 0$, where $M_t^*(w^N)$ is the almost diagonal decomposition of $M_t(w)$.\footnote{Every $w_I^N$ that appears in $M_t^*(w^N)$ is either $y_I^N$ (if $w=y$) or $z^N_I$ (if $z=g*y=w$), where $z^N_I= g(I)y_I^N$ and $g(I)$ denotes the value of $g(x)$ when we set all the variables in $\{x_{i}:i\in I\}$ to $1$ and the remaining variables to zero.}
If $w_I^N\geq 0$ for every $I\subseteq N$, then it is straightforward to claim that $M_t^*(w^N)\succeq 0$, simply because $M_t^*(w^N)$ is then a sum of PSD matrices.
If $w_I^N\geq -\varepsilon$ for every $I\subseteq N$, for some $\varepsilon>0$, then it is not clear whether $M_t^*(w^N)\succeq 0$; actually the answer depends on several factors like the value of $\varepsilon$, the positive terms $w_I^N> 0$, the level $t$ and so on.
The strategy that we suggest in the following uses the Gershgorin disk theorem (see e.g. \cite{varga}), which identifies a region in the plane that contains all the eigenvalues of a square matrix. Let $A$ be an $n\times n$ matrix. For each $i$ with $1\leq i\leq n$, define the \emph{radius} \begin{equation}\label{eq:r}
r_i= \sum_{\substack{j=1\\ j\not = i}}^n |A_{i,j}| \end{equation}
Let ${\delta}_i$ be the closed disk centered at $A_{i,i}$ with radius $r_i$. Such a disk is called a \emph{Gershgorin disk}. Then each eigenvalue of $A$ lies in at least one of the disks. So if all the disks are located in the nonnegative plane, we are guaranteed to have a PSD matrix.
\begin{theorem}[Gershgorin Disk Theorem \cite{varga}]\label{th:gershgorin} Let $A$ be an $n\times n$ matrix, and let $\mu$ be any eigenvalue of $A$. Then for some $i$ with $1\leq i\leq n$, \begin{equation*}
|\mu -A_{i,i}|\leq r_i \end{equation*} where $r_i$ is given by \eqref{eq:r}. \end{theorem}
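As a quick illustration of the disk bound (a numpy sketch with an arbitrary symmetric matrix of our choosing):

```python
import numpy as np

# Gershgorin bound on an arbitrary symmetric matrix.
A = np.array([[4.0, 1.0, -0.5],
              [1.0, 3.0,  0.5],
              [-0.5, 0.5, 5.0]])
radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))

# Every eigenvalue lies in at least one disk |mu - A_ii| <= r_i.
for mu in np.linalg.eigvalsh(A):
    assert any(abs(mu - A[i, i]) <= radii[i] + 1e-12 for i in range(3))

# All disks located in the nonnegative plane => the symmetric matrix is PSD.
assert np.all(np.diag(A) - radii >= 0)
assert np.all(np.linalg.eigvalsh(A) >= 0)
```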
\paragraph{Congruent Transformation of Gershgorin Disks.}\label{gershgorintech}
If we apply Theorem \ref{th:gershgorin} directly to matrix $M_t^*(w^N)$, the resulting condition might be loose: this happens, for example, when some disk has a very large radius and a center close to zero.
The strategy, described in this section, is to apply a congruent transformation $T$ with the aim of obtaining a matrix with tighter disks. The matrices of rank one play a fundamental role.
We introduce the idea by using the following example.
\begin{example}\label{ex:strategy} From Example~\ref{example:almostdiag}, the almost diagonal decomposition of $M_1(z)$ is: {\small $$ M_1^*(z^N)=\underbrace{\left( \begin{array}{ccc} z_{\emptyset}^N & 0 & 0 \\ 0 & z_{1}^N & 0 \\ 0 & 0 & z_{2}^N \end{array} \right)}_{D}+ z_{12}^N\underbrace{ \left( \begin{array}{ccc} 1 & -1 & -1 \\ -1 & 1 & 1 \\ -1 & 1 & 1 \end{array} \right)}_{R(\{1,2\})} $$ }
Let us assume that, according to a given solution $y$, we have $z_{1}^N<0$, whereas the other entries are positive. The question is to understand under which conditions we have $M_1(z)\succeq 0$.
According to the assumptions, the diagonal matrix $D$ above is negative semidefinite, whereas $z_{12}^N R(\{1,2\})$ is positive semidefinite.
A straightforward application of Gershgorin Theorem can be useless.
For example if $z_{\emptyset}^N=z_{2}^N=1, z_{12}^N=2, z_{1}^N=-1/3$, then the Gershgorin disks of matrix $M_1^*(z^N)$ are located as follows: disks ${\delta}_{\emptyset}$ and ${\delta}_{\{2\}}$ are centered in $3$ and have radius $4$, whereas disk ${\delta}_{\{1\}}$ is centered in $5/3$ and has radius $4$, i.e. the disks are not entirely located in the nonnegative plane (see the left-hand picture of Figure \ref{fig:transform}). In this situation Gershgorin Theorem gives a loose condition because the disks have too large radii. We provide a congruent transformation of $M_1^*(z^N)$ that transforms ``useless'' Gershgorin disks to ``meaningful'' ones.
Consider the following simple congruent transformation $T$, obtained by pivoting on entry $(\{1\},\{1\})$: add the second row to the first (and symmetrically for columns) and subtract the second row from the last (and symmetrically for columns). This transforms matrix $R(\{1,2\})$ into a matrix that has zero everywhere but the pivot entry $(\{1\},\{1\})$ (this is possible because $R(\{1,2\})$ has rank one). Then we obtain the following congruent matrix. {\small $$ T\cdot M_1^*(z^N)\cdot T^{\top}=\left( \begin{array}{ccc} z_{\emptyset}^N + z_{1}^N & z_{1}^N & -z_{1}^N \\ z_{1}^N & z_{1}^N & -z_{1}^N \\ -z_{1}^N & -z_{1}^N & z_{2}^N+ z_{1}^N \end{array} \right)+ z_{12}^N \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) = \left( \begin{array}{ccc} z_{\emptyset}^N + z_{1}^N & z_{1}^N & -z_{1}^N \\ z_{1}^N & z_{12}^N+z_{1}^N & -z_{1}^N \\ -z_{1}^N & -z_{1}^N & z_{2}^N+z_{1}^N \end{array} \right) $$ }
Note that the effect of the described transformation $T$ on matrix $D$ is to \emph{perturb} the radius/center of the disks by a factor of $z_{1}^N$ (that is the value of the pivot entry $(\{1\},\{1\})$). Moreover, if we add the transformed $R(\{1,2\})$ to the transformed $D$, this has the effect of \emph{shifting} by $z_{12}^N$ the center of disk ${\delta}_{\{1\}}$ (i.e. the disk with negative center).
From the final matrix, we see that if $z_{\emptyset}^N\geq 3|z_{1}^N|$, $z_{2}^N\geq 3|z_{1}^N|$ and $z_{12}^N\geq 3|z_{1}^N|$, then the Gershgorin disks of the transformed matrix are in the nonnegative plane, which implies that $M_1(z)\succeq 0$. Figure \ref{fig:transform} (right-hand picture) shows the effect of the congruent transformation on the Gershgorin disks for our numerical example above.
\begin{figure}
\caption{Gershgorin disks before and after the congruent transformation.}
\label{fig:transform}
\end{figure}
\end{example}
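The example's numbers can be replayed in code. The following numpy sketch (the helper function and tolerances are ours) checks that the untransformed disks are useless while the transformed ones certify positive semidefiniteness:

```python
import numpy as np

def radii(A):
    # Gershgorin radii: sum of off-diagonal absolute values per row.
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

# Numerical values from the example: z_e = z_2 = 1, z_12 = 2, z_1 = -1/3.
D = np.diag([1.0, -1/3, 1.0])
R = np.array([[1, -1, -1], [-1, 1, 1], [-1, 1, 1]], dtype=float)
M = D + 2.0 * R                       # M_1^*(z^N)

# Before the transformation some disk dips below zero: the bound is useless.
assert np.any(np.diag(M) - radii(M) < 0)

# T: add row 2 to row 1, subtract row 2 from row 3 (same for columns);
# this zeroes R({1,2}) everywhere except the pivot entry.
T = np.array([[1, 1, 0], [0, 1, 0], [0, -1, 1]], dtype=float)
assert np.allclose(T @ R @ T.T, np.diag([0.0, 1.0, 0.0]))

# After the transformation all disks lie in the nonnegative plane, so M is PSD.
Mp = T @ M @ T.T
assert np.all(np.diag(Mp) - radii(Mp) >= -1e-12)
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)
```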
Generalizing the above example, a proof of $M_t^*(w^N)\succeq 0$ can be obtained by selecting a congruent transformation $T$ and a subset $\mathcal{X}$ of the positive semidefinite matrices $\mathcal{PD}$ so that the Gershgorin disks of
$\mathcal{M}=T\cdot \left(D+\sum_{A\in \mathcal{X}} A+\sum_{B\in \mathcal{ND}} B\right) \cdot T^{\top}$ are located in the nonnegative plane. Note that the latter implies that $M_t^*(w^N)\succeq 0$, since $M_t^*(w^N)$ is then congruent to a sum of PSD matrices.
The congruent transformation $T$ is the concatenation of the basic congruent transformations introduced in the example (obtained by pivoting on some entries with the aim of shifting/perturbing disks with negative center).
In more formal terms, a \emph{basic congruent transformation} $T_S(H)$ consists of selecting a matrix from $\mathcal{PD}$ (or from $\mathcal{ND}$), say $w_H^N R(H)$, for $H\in \mathcal{P}(N)\setminus \mathcal{P}_t(N)$.
Select a pivot entry $(S,S)$ that is different from zero.\footnote{Note that at the beginning, since $R(H)=G(H)G(H)^{\top}$, every entry $(S,S)$ with $S\subseteq H$ (and $|S|\leq t$) is different from zero; later, if $\{T_P(L):L\in \mathcal{T}\}$ is the set of basic transformations applied so far, entry $(S,S)$ with $S\subseteq H$ is different from zero if it is in none of the basic transformations applied so far, i.e. $S\not \subseteq L$ for every $L\in \mathcal{T}$.} Pivot on that element to obtain a congruent matrix that has zero everywhere but entry $(S,S)$ ($R(H)$ has rank one).
We call this congruent matrix the \emph{$S$-reduced form of $R(H)$} and denote it by $R_S(H)= T_S(H)\cdot R(H) \cdot T_S(H)^{\top}$. The effect of adding $R_S(H)$ to any matrix $A$ is to shift the center of disk ${\delta}_S$ of matrix $A$ by a certain factor of $w_H^N$ (the shift is positive if $w_H^N>0$ and negative if $w_H^N<0$). The effect of transformation $T_S(H)$ on $D$ is to change the radius/center of disks $\{{\delta}_I: I\subseteq H\}$ by a certain factor of $w_S^N$.
We recall (and remark) that basic congruent transformations preserve positive (negative) semi-definiteness, so the matrices in $\mathcal{PD}$ (or $\mathcal{ND}$) remain in the same set after the transformations.
We give an application of the Gershgorin disks transformation technique to obtain Lasserre integrality gaps in Sections~\ref{sect:minknapsack} and~\ref{sect:knaplifted} (in the latter we use the general strategy sketched above; see also Appendix~\ref{sect:exmkp}).
\subsubsection{The Lasserre Hierarchy as a Semi-Infinite Linear Program}\label{sect:sip}
In the following we show an equivalent formulation of the Lasserre hierarchy as a semi-infinite linear program by using the almost diagonal form given by Lemma~\ref{th:lassalmostdiag}. This provides a different, non-matricial point of view that can be convenient for certain problems. We give an application of this characterization in Section~\ref{sect:tardyjobs}.
In optimization theory, \emph{semi-infinite programming} (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints (see e.g. \cite{GobernaL02}). It is well-known (and easy to see) that any SDP program can be written as a semi-infinite linear program. By Lemma~\ref{th:lassalmostdiag} we immediately obtain the following.
\begin{corollary}\label{th:constraintcond}
For any $w\in\mathbb{R}^{\mathcal{P}(N)}$ and $t\geq 0$, we have $M_{t}(w)\succeq 0$ if and only if for every unit vector $v\in \mathbb{R}^{\mathcal{P}_t(N)}$ the following holds. \begin{equation}\label{mkpcond}
\sum_{I:|I|\leq t} w^N_I v^2_I + \sum_{J\subseteq N: |J|\geq t+1} w^N_J \left( \sum_{i=0}^t (-1)^{t-i}{|J|-i-1 \choose t-i} \left(\sum_{I\subset J, |I|=i} v_I\right) \right)^2 \geq 0 \end{equation}
\end{corollary} \begin{proof} By Lemma~\ref{th:lassalmostdiag}, we can replace any condition $M_{t}(w)\succeq 0$ with $M_{t}^*(w^N)\succeq 0$. Then the claim follows by the definition of PSD matrices. Indeed, let $v\in \mathbb{R}^{\mathcal{P}_t(N)}$ be any eigenvector of matrix $M_{t}^*(w^N)$, i.e. $M_{t}^*(w^N) v=\lambda v$; w.l.o.g. we can assume that $v$ is a unit vector. If solution $y$ ensures $v^{\top} M_{t}^*(w^N) v\geq 0$ for every unit vector $v$, then $\lambda = \lambda v^{\top} v\geq 0$, i.e. every eigenvalue $\lambda$ is nonnegative and therefore $M_{t}^*(w^N)\succeq 0$. It is easy to check that $v^{\top} M_{t}^*(w^N) v\geq 0$ is precisely \eqref{mkpcond}.
\end{proof}
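For $n=2$, $t=1$, condition \eqref{mkpcond} reads $w^N_{\emptyset}v_{\emptyset}^2 + w^N_{1}v_{1}^2 + w^N_{2}v_{2}^2 + w^N_{12}(v_{\emptyset}-v_{1}-v_{2})^2\geq 0$. The following numpy sketch (random values of our choosing) checks that this scalar form agrees with $v^{\top}M_1^*(w^N)v$:

```python
import numpy as np

rng = np.random.default_rng(1)
w_e, w1, w2, w12 = rng.standard_normal(4)
R = np.array([[1, -1, -1], [-1, 1, 1], [-1, 1, 1]], dtype=float)
Mstar = np.diag([w_e, w1, w2]) + w12 * R      # M_1^*(w^N)

for _ in range(100):
    v = rng.standard_normal(3)
    v /= np.linalg.norm(v)                    # unit vector, as in the corollary
    lhs = (w_e * v[0]**2 + w1 * v[1]**2 + w2 * v[2]**2
           + w12 * (v[0] - v[1] - v[2])**2)
    assert np.isclose(lhs, v @ Mstar @ v)
```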
\section{Lasserre Integrality Gap for the \textsc{Min-Knapsack}}\label{sect:minknapsack}
In this section we analyze the Lasserre hierarchy integrality gap for the \textsc{Min-Knapsack} problem. The analysis uses the moment matrix in almost diagonal form and the Gershgorin disk congruent transformation described in Section \ref{Sect:almostdiag}.
In the \textsc{Min-Knapsack} problem we are given a set $V$ of items with nonnegative costs $c_1,c_2,\ldots$, profits $p_1,p_2,\ldots$ and a demand $b$. The goal is to select a minimum-cost set of items with total profit at least the demand.
The ``standard'' linear program (LP) relaxation for the {\textsc{Min-Knapsack}} problem has the form
$\min\{\sum_{j\in V} c_j x_j:\sum_{j \in V} p_j x_j \geq b, x_j \in [0,1]\text{ } j\in V\}$.
The integrality gap of (LP) is unbounded, as the following simple instance (also used later in our results for Lasserre gap) with $n+1$ items shows:
\begin{eqnarray}\label{LP:minKnapGap} \begin{array}{rll} (GapLP)
\min \{ \sum_{i=1}^n x_i: \sum_{i=1}^{n+1} x_i \geq 1+1/P, x_i \in [0,1] \mbox{ for } i\in[n+1]\} \end{array} \end{eqnarray}
The optimal \emph{integral} value of ($GapLP$) is one, whereas the optimal \emph{fractional} value is $1/P$, with integrality gap $P$.
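A small arithmetic sketch of the fractional solution behind this gap (the values of $n$ and $P$ are arbitrary): take $x_{n+1}=1$ and $x_i=1/(nP)$ for $i\leq n$.

```python
# Feasibility and cost of the fractional solution for (GapLP).
n, P = 5, 100.0
x = [1 / (n * P)] * n + [1.0]       # items 1..n, then item n+1
assert sum(x) >= 1 + 1 / P - 1e-12  # covering constraint
assert all(0 <= xi <= 1 for xi in x)
cost = sum(x[:n])                   # item n+1 does not appear in the objective
assert abs(cost - 1 / P) < 1e-12    # fractional value 1/P vs integral value 1
```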
\paragraph{Our Results.} We prove the following dichotomy-type result. If we allow a ``large'' $P$ (exponential in the number of variables), then the Lasserre hierarchy is of no help in improving the unbounded integrality gap of ($GapLP$), even at level $(n-1)$. This analysis is tight since $\text{\sc{Las}}_{n}(GapLP)$ admits an optimal integral solution with $n+1$ variables\footnote{The projection of $y\in \text{\sc{Las}}_{n}(GapLP)$ on the original variables can be expressed as a convex combination of integral solutions on the first $n$ variables (see e.g.~\cite{KarlinMN11}). By selecting the solution with the lowest value and setting $x_{n+1}=1$ we obtain a feasible integral solution of value not larger than $\text{\sc{Las}}_{n}(GapLP)$.}. We also show that the requirement that $P$ is exponential in $n$ is necessary for having a ``large'' gap at level $(n-1)$. These results follow easily from the definition of the Lasserre hierarchy in almost diagonal form (see Lemma~\ref{th:lassalmostdiag}) and the Gershgorin disks transformation technique (Section \ref{gershgorintech}).
\begin{theorem}\label{th:knapgapbounds} (Integrality Gap Bounds for \textsc{Min-Knapsack}) \begin{enumerate}[(a)] \item If $P=k\cdot 2^{2n+1}$, for any $k\geq 1$, then the integrality gap of $\text{\sc{Las}}_{n-1}(GapLP)$ is at least $k$. \label{th:knapgapbounds(b)} \item For any $\varepsilon\in (0,1)$, if $P\leq (2^{n}-2)\varepsilon$ then the integrality gap of $\text{\sc{Las}}_{n-1}(GapLP)$ is smaller than $\frac{1}{1-\varepsilon}$. \label{th:knapgapbounds(a)} \end{enumerate} \end{theorem} (Note that the above results trivially imply that if $P=k\cdot 2^{2t+1}$, for any $t\geq 1$, then the integrality gap of $\text{\sc{Las}}_{t-1}(GapLP)$ is at least $k$. In a different working paper, with a superset of authors, we can prove that for \emph{any} $P\geq 1$, the integrality gap of $\text{\sc{Las}}_{t}(GapLP)$ is still $P(1-\varepsilon)$ even when $t=O(\log^{1-\varepsilon}n)$ for any $\varepsilon>0$.)
\subsection{Proof of Theorem~\ref{th:knapgapbounds}}
Consider the following reduced instance obtained by removing variable $x_{n+1}$ and subtracting $1$ from the right-hand-side of the covering constraint in~\eqref{LP:minKnapGap}:
\begin{eqnarray}\label{LP:minKnapGap'} \begin{array}{rll} (GapLP')
\min \{ \sum_{i=1}^n x_i: \sum_{i=1}^{n} x_i - 1/P \geq 0, x_i \in [0,1] \mbox{ for } i\in[n]\} \end{array} \end{eqnarray}
We will use $g(x)$ to denote $\sum_{i=1}^n x_i - 1/P$ and $N=\{1, \ldots ,n\}$. Moreover, let $z=g*y$, i.e. $z_I=(g*y)_I = \sum_{i=1}^n y_{\{i\}\cup I} - y_I/P$. By the following lemma any integrality gap for $\text{\sc{Las}}_{t}(GapLP')$ implies the same gap for $\text{\sc{Las}}_{t}(GapLP)$. Therefore, we will focus on the reduced instance ($GapLP'$) in the following.
\begin{lemma}\label{th:gaprelation} For any $t\in \mathbb{N}_0$, if $y'\in \text{\sc{Las}}_{t}(GapLP')$ then $y\in \text{\sc{Las}}_{t}(GapLP)$, where $y_I=y_{I\setminus\{n+1\}}'$ for any $I\in \mathcal{P}_{2t+2}([n+1])$. \end{lemma} \begin{proof}[Proof Sketch.] Let $h(x) = \sum_{i=1}^{n+1} x_i -1-1/P$. The proof follows by observing that any principal submatrix of $M_{t+1}(y)$ (or $M_{t}(h*y)$) has either determinant equal to zero or it is a principal submatrix in $M_{t+1}(y')$ (or $M_{t}(g*y')$). \end{proof}
\begin{remark} In the following we prove an unbounded integrality gap for $\text{\sc{Las}}_{n-1}(GapLP')$. With this aim we need to find a solution $y$ that satisfies $M_n(y)\succeq 0$ and $M_{n-1}(z)\succeq 0$. By \eqref{vars}, $M_n(y)\succeq 0$ holds if and only if $\{y^N_{I}:I\subseteq N\}$ is a probability distribution.
So, the \emph{only} interesting part is to find a probability distribution that satisfies $M_{n-1}(z)\succeq 0$ (the constraint moment matrix), since \emph{any} probability distribution satisfies $M_n(y)\succeq 0$. \end{remark}
\paragraph{Proof Structure.} By Lemma~\ref{th:lassalmostdiag}, $M_{n-1}(z)$ is congruent to a matrix $M_{n-1}^*(z^N)$ that differs from the diagonal matrix $Diag(z^N,n-1)$ ($D$ for brevity) by a matrix of rank one ($G$). As in Example~\ref{ex:strategy}, if an entry of $D$ is negative then, by pivoting on that element, we can transform matrix $G$ to have all zeros but the pivot; this comes at the cost of spreading the negative entry of the diagonal matrix everywhere, and therefore of increasing the disk radii of the transformed diagonal matrix by some factor of the negative entry. Now, for our instance $GapLP'$, the first entry of $D$ is negative but it can be made arbitrarily small for ``large'' $P$. Therefore, in the resulting transformed matrix, the disk radii can be made arbitrarily small for large $P$, and the transformed $G$ shifts the negative entry by a positive value. A feasible solution, with an unbounded integrality gap, is obtained by locating the Gershgorin disks in the nonnegative plane. Vice versa, if $P$ is ``small'', then the nonnegativity of the trace implies a ``small'' integrality gap.
Before giving the general proof, we show this for the smallest meaningful instance of $GapLP'$ with two items, by customizing the idea of Example~\ref{ex:strategy} for $GapLP'$.
\begin{example}\label{exampleknap} In Example~\ref{example:almostdiag} we showed that $M_1(z)$ is congruent to the following almost diagonal matrix $M_1^*(z^N)$.
{\small $$ M_1(z)\cong M_1^*(z^N)= \left( \begin{array}{ccc} z_{\emptyset}-z_1-z_2+z_{12} & 0 & 0 \\ 0 & z_{1}-z_{12} & 0 \\ 0 & 0 & z_{2}-z_{12} \end{array} \right)+ z_{12}\underbrace{ \left( \begin{array}{ccc} 1 & -1 & -1 \\ -1 & 1 & 1 \\ -1 & 1 & 1 \end{array} \right)}_{\mbox{matrix of rank one}} $$ }
Consider the instance of $(GapLP')$ with two items $N=\{1,2\}$. According to this instance $M_1^*(z^N)$ is equal to:
{\small $$ M_1^*(z^N)= \left( \begin{array}{ccc} -\frac{y_{\emptyset}^{N}}{P} & 0 & 0 \\ 0 & \left(1-\frac{1}{P}\right)y_{\{1\}}^{N} & 0 \\ 0 & 0 & \left(1-\frac{1}{P}\right)y_{\{2\}}^{N} \end{array} \right)+ \left(2-\frac{1}{P}\right)y_{\{1,2\}}^{N} \left( \begin{array}{ccc} 1 & -1 & -1 \\ -1 & 1 & 1 \\ -1 & 1 & 1 \end{array} \right) $$ }
Since we are considering the Lasserre hierarchy at level $1$ with $2$ variables, the variable moment matrix $M_2(y)$ is a full moment matrix. Therefore, $M_2(y)\succeq 0$ is \emph{equivalent to} requiring that $y_{\emptyset}^{N},y_{\{1\}}^{N},y_{\{2\}}^{N},y_{\{1,2\}}^{N}$ are the probabilities of the different integral solutions (see Example~\ref{examplediag}), i.e., $M_2(y)\succeq 0$ if and only if: $y_{\emptyset}^{N},y_{\{1\}}^{N},y_{\{2\}}^{N},y_{\{1,2\}}^{N}\geq 0$ and $y_{\emptyset}^{N}+y_{\{1\}}^{N}+y_{\{2\}}^{N}+y_{\{1,2\}}^{N}=1$.
Note that the Gershgorin disks of matrix $M_1^*(z^N)$ are not entirely located in the nonnegative plane. Indeed, disk ${\delta}_{\emptyset}$ is centered in $\left(\left(2-\frac{1}{P}\right)y_{\{1,2\}}^{N}-\frac{y_{\emptyset}^{N}}{P}\right)$ and has radius $2\left(2-\frac{1}{P}\right)y_{\{1,2\}}^{N}$.
Let $\varepsilon = \frac{y_{\emptyset}^{N}}{P}$. By pivoting on the negative entry (i.e. add the first row (column) to the second and the third rows (columns)) we obtain the following congruent matrix: {\small \begin{eqnarray*} M_1^*(z^N)\cong \mathcal{M}&=& \left( \begin{array}{ccc} -\varepsilon & -\varepsilon & -\varepsilon \\ -\varepsilon & \left(1-\frac{1}{P}\right)y_{\{1\}}^{N}-\varepsilon & -\varepsilon \\ -\varepsilon & -\varepsilon & \left(1-\frac{1}{P}\right)y_{\{2\}}^{N}-\varepsilon \end{array} \right)+ \left(2-\frac{1}{P}\right)y_{\{1,2\}}^{N} \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \\ &=& \left( \begin{array}{ccc} \left(2-\frac{1}{P}\right)y_{\{1,2\}}^{N}-\varepsilon & -\varepsilon & -\varepsilon \\ -\varepsilon & \left(1-\frac{1}{P}\right)y_{\{1\}}^{N}-\varepsilon & -\varepsilon \\ -\varepsilon & -\varepsilon & \left(1-\frac{1}{P}\right)y_{\{2\}}^{N}-\varepsilon \end{array} \right) \end{eqnarray*} }
Since $0\leq y_{\emptyset}^{N}\leq 1$, by choosing $P$ ``sufficiently large'', we can make $\varepsilon = \frac{y_{\emptyset}^{N}}{P}$ arbitrarily close to zero.
Therefore, if $P$ is ``large'' then every disk radius of the transformed matrix $\mathcal{M}$ can be made arbitrarily small. By Gershgorin's theorem, if each diagonal entry is at least the corresponding radius, we have a feasible solution. So a feasible solution is obtained by choosing the probabilities as follows (which locates the disks of $\mathcal{M}$ in the nonnegative plane).
\begin{eqnarray*} y_{\{1,2\}}^{N} &=& 3\varepsilon/(2-1/P) \\ y_{\{1\}}^{N} &=& 3\varepsilon/(1-1/P) \\ y_{\{2\}}^{N} &=& 3\varepsilon/(1-1/P)\\ y_{\emptyset}^{N} &=& 1- y_{\{1,2\}}^{N}-y_{\{1\}}^{N}-y_{\{2\}}^{N} \end{eqnarray*}
Moreover, note that the cost of this solution is the expected value, i.e. the sum of the probability of each integral solution multiplied by the corresponding solution cost:
$$y_1+y_2=\frac{3\varepsilon}{(2-1/P)}\cdot 2+ \frac{3\varepsilon}{(1-1/P)}\cdot 1+\frac{3\varepsilon}{(1-1/P)}\cdot 1=O\left(\frac{1}{P}\right) $$
The optimal integral solution has value $1$. It follows that the integrality gap is unbounded by increasing $P$.
For the lower bound on $P$, consider $\mathcal{M}$ written as follows:
{$$ \mathcal{M}= \left( \begin{array}{ccc} z_{\{1,2\}}+ z_{\emptyset}^N & z_{\emptyset}^N & z_{\emptyset}^N \\ z_{\emptyset}^N & z_{\{1\}}^N+ z_{\emptyset}^N & z_{\emptyset}^N\\ z_{\emptyset}^N & z_{\emptyset}^N & z_{\{2\}}^N+ z_{\emptyset}^N \end{array} \right) $$ } $\mathcal{M}\succeq 0$ implies that $\Tr(\mathcal{M})= 3z_{\emptyset}^N + z_{\{1\}}^N+ z_{\{2\}}^N+ z_{\{1,2\}} \geq 0$. This simplifies to $\Tr(\mathcal{M})= 2z_{\emptyset}^N + z_{\emptyset}= - \frac{2}{P} y_{\emptyset}^N +z_{\emptyset} \geq 0$ by Lemma~\ref{th:sum1}. The latter implies $ y_{\emptyset}^N\leq Pz_{\emptyset}/2$ (that generalizes to $y_{\emptyset}^N\leq \frac{Pz_{\emptyset}}{2^{n}-2}$ for $n$ items). It follows that if $P$ is ``small'' then $y_{\emptyset}^N$ is ``small''. But $y_{\emptyset}^N$ is the probability of the zero solution (i.e. the solution with all variables set to zero). Moreover, in our case, the projection of $y$ on the original variables can be expressed as a convex combination of the (infeasible) zero solution with the (feasible) positive integral solutions (i.e. the solutions with one or more variables set to $1$). A ``small'' $y_{\emptyset}^N$ implies a ``small'' integrality gap. More general and formal arguments will be provided in the following. \end{example}
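The solution of the example can be checked end-to-end for a concrete $P$. In the following numpy sketch (the closed-form solve for $\varepsilon$ from the normalization $y_{\emptyset}^N+y_{\{1\}}^N+y_{\{2\}}^N+y_{\{1,2\}}^N=1$ is our own algebra):

```python
import numpy as np

P = 1000.0
# eps = y_e/P and y_e = 1 - (other masses) determine y_e in closed form.
c = 3 / (2 - 1 / P) + 6 / (1 - 1 / P)
y_e = 1 / (1 + c / P)
eps = y_e / P
y12 = 3 * eps / (2 - 1 / P)
y1 = y2 = 3 * eps / (1 - 1 / P)
assert abs(y_e + y1 + y2 + y12 - 1) < 1e-12   # a probability distribution

# Build M_1^*(z^N) and check positive semidefiniteness.
R = np.array([[1, -1, -1], [-1, 1, 1], [-1, 1, 1]], dtype=float)
M = np.diag([-y_e / P, (1 - 1 / P) * y1, (1 - 1 / P) * y2]) \
    + (2 - 1 / P) * y12 * R
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)

# The solution's cost y_1 + y_2 is O(1/P); the integral optimum is 1.
cost = 2 * y12 + y1 + y2
assert cost < 10 / P
```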
\paragraph{(Almost) Diagonalization.}
By Lemma~\ref{th:lassalmostdiag}, $M_{n-1}(z)$ is congruent to the following matrix:
\begin{eqnarray} M_{n-1}(z)&\cong&D + z_N \cdot G \end{eqnarray} where $D=Diag(z^N,n-1)$, $G=G(N)G(N)^{\top}$ and the generic entry $(I,J)$ of matrix $G$ is equal to
$G_{I,J}=(-1)^{|I|+|J|}$.
\paragraph{Pivoting.}
Let $C\in \mathbb{R}^{\mathcal{P}_{n-1}(N)\times \mathcal{P}_{n-1}(N)}$ be a square matrix defined as follows. \begin{equation*} C_{I,J}=\left\{ \begin{array}{ll} 1 & \text{if } I=J \\
(-1)^{|I|-1} & \text{if } J=\emptyset, I\not =\emptyset \\ 0 & \text{otherwise} \end{array} \right. \end{equation*}
Matrix $C$ is invertible (see Section~\ref{linear algebra} and Lemma~\ref{th:matrix_transf}) and it is a congruent transformation that maps $G$ to its $\emptyset$-reduced form\footnote{This congruent transformation is equivalent to pivoting on the first entry by adding to the row indexed by set $H\in \mathcal{P}_{n-1}(N)$ the first row (the one indexed by set $\emptyset$) multiplied by $(-1)^{|H|-1}$. Then perform the symmetric operations on the columns. This transforms $G$ into a matrix with all zeros but the first entry.}
$D+G z_N \cong \mathcal{M}=C \left(D+G z_N\right) C^{\top}$.
\begin{lemma}\label{matrix} $M_{n-1}(z) \cong \mathcal{M}$ where $\mathcal{M}\in \mathbb{R}^{\mathcal{P}_{n-1}(N)\times \mathcal{P}_{n-1}(N)}$ is as follows: \begin{eqnarray*} \mathcal{M}_{I,J}&=& \left\{ \begin{array}{ll} z_N+z^N_{\emptyset} & \text{if } I=J=\emptyset\\ z^N_{I}+z^N_{\emptyset} & \text{if } I=J\not=\emptyset\\
z^N_{\emptyset} (-1)^{|I|+1} & \text{if } I\not=J=\emptyset\\
z^N_{\emptyset} (-1)^{|J|+1} & \text{if } J\not=I=\emptyset\\
z^N_{\emptyset} (-1)^{|I|+|J|} & \text{otherwise } \end{array} \right. \end{eqnarray*} \end{lemma}
\begin{proof}
$\mathcal{M}=C(D+G z_N)C^{\top} =C D C^{\top}+C G C^{\top} z_N$.
\begin{itemize} \item $(CGC^{\top})_{I,J} = \underset{U,W\in \mathcal{P}_{n-1}(N)}{\sum} C_{I,U}C_{J,W}G_{U,W} = \underset{U\in \{\emptyset, I\}}{\sum} \underset{W\in \{\emptyset, J\}}{\sum} C_{I,U}C_{J,W}G_{U,W}$
If $I=J=\emptyset$ then $(CGC^{\top})_{\emptyset,\emptyset} = C_{\emptyset,\emptyset}C_{\emptyset,\emptyset}G_{\emptyset,\emptyset}=1$. Otherwise {\begin{eqnarray*}
(CGC^{\top})_{I,J} &=& C_{I,\emptyset}C_{J,\emptyset}+ C_{I,\emptyset}C_{J,J} (-1)^{|J|} +
C_{I,I}C_{J,\emptyset} (-1)^{|I|}+ C_{I,I}C_{J,J} (-1)^{|I|+|J|} \\
&=& (-1)^{|I|+|J|}+ (-1)^{|I|+|J|-1}+(-1)^{|I|+|J|-1}+(-1)^{|I|+|J|}=0 \end{eqnarray*}}
\item
$\left(C D C^{\top}\right)_{I,J}= \sum_{U\in \{\emptyset, I\}} C_{I,U} \sum_{W\in \{\emptyset, J\}} C_{J,W}(D)_{U,W}$.
If $I=J=\emptyset$ we have $\left(C D C^{\top}\right)_{\emptyset,\emptyset}=z_{\emptyset}^N$. If $I=J\not =\emptyset$ we have $\left(C D C^{\top}\right)_{I,I}=\sum_{U\in \{\emptyset, I\}} C_{I,U} \sum_{W\in \{\emptyset, I\}} C_{I,W}(D)_{U,W} = z_{I}^N+z_{\emptyset}^N$. Otherwise, $I\not= J$, we have
$\left(C D C^{\top}\right)_{I,J}= \sum_{U\in \{\emptyset, I\}} C_{I,U} \sum_{W\in \{\emptyset, J\}} C_{J,W}(D)_{U,W} = C_{I,\emptyset} C_{J,\emptyset} z_{\emptyset}^N $
and the claim follows by the definition of matrix $C$. \end{itemize} \end{proof}
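Lemma~\ref{matrix} can also be verified numerically for a small $n$. The following numpy sketch (with random stand-ins for the moments $z^N_I$, and $z_N$ called `z_top`) builds $C(D+z_N G)C^{\top}$ explicitly and compares it entrywise with the lemma's formula:

```python
import numpy as np
from itertools import combinations

n = 3
N = list(range(n))
Pn1 = [frozenset(c) for r in range(n) for c in combinations(N, r)]  # P_{n-1}(N)
m = len(Pn1)
rng = np.random.default_rng(2)
zN = {I: rng.standard_normal() for I in Pn1}
z_top = rng.standard_normal()
ze = zN[frozenset()]

D = np.diag([zN[I] for I in Pn1])
G = np.array([[(-1.0) ** (len(I) + len(J)) for J in Pn1] for I in Pn1])
C = np.eye(m)
for i, I in enumerate(Pn1):
    if I:
        C[i, 0] = (-1.0) ** (len(I) - 1)

Mcal = C @ (D + z_top * G) @ C.T

def entry(I, J):
    # The entry formula from Lemma "matrix".
    if I == J == frozenset():
        return z_top + ze
    if I == J:
        return zN[I] + ze
    if J == frozenset():
        return ze * (-1.0) ** (len(I) + 1)
    if I == frozenset():
        return ze * (-1.0) ** (len(J) + 1)
    return ze * (-1.0) ** (len(I) + len(J))

expected = np.array([[entry(I, J) for J in Pn1] for I in Pn1])
assert np.allclose(Mcal, expected)
```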
\subsection{Proof of Theorem~\ref{th:knapgapbounds}\eqref{th:knapgapbounds(b)}}
In the following we prove an unbounded integrality gap for $\text{\sc{Las}}_{n-1}(GapLP')$. With this aim we need to find a solution $y$ that satisfies $M_n(y)\succeq 0$ and $M_{n-1}(g*y)\succeq 0$.
\begin{lemma}\label{th:feasiblesdpsolnew}
By choosing $P=k\cdot 2^{2n+1}$, for any $k\geq 1$, the following solution \begin{eqnarray}
y^N_{I}&=& \frac{2^{n}}{P|I|-1} \qquad \forall I \subseteq N \text{ and } I\not = \emptyset \label{knapcond2}\\ y_{\emptyset}^N &=& 1-\sum_{I \subseteq N, I\not = \emptyset} y^N_{I} \label{knapcond1} \end{eqnarray}
guarantees $M_n(y)\succeq 0$ and $M_{n-1}(z)\succeq 0$.
\end{lemma} \begin{proof} By \eqref{vars}, $M_n(y)\succeq 0$ holds if and only if $y^N_{I}\geq 0$ for all $I \subseteq N$, which is guaranteed by the choice of $P$.
For $M_{n-1}(z)\succeq 0$, we use the Gershgorin's Theorem (see e.g. \cite{varga}, and Theorem~\ref{th:gershgorin}), with the congruent matrix $\mathcal{M}$, to obtain a set of sufficient conditions that guarantee the nonnegativity of the eigenvalues of matrix $M_{n-1}(z)$.
For each row $I$, the radius $r_I(\mathcal{M})$ can be bounded as follows:
$r_I(\mathcal{M}) = \frac{(2^n-2)}{P} y^N_{\emptyset} \leq \frac{(2^n-2)}{P}$, since $y^N_{\emptyset}\leq 1$ and $z^N_{\emptyset}=-\frac{1}{P} y^N_{\emptyset}$.
It follows that if $z_I^N+z_{\emptyset}^N-r_I(\mathcal{M})\geq 0$ for every $\emptyset \not= I\subseteq N$, then by Theorem~\ref{th:gershgorin} the eigenvalues of $\mathcal{M}$ (and therefore also of $M_{n-1}(z)$) are nonnegative, and the solution is feasible. It is therefore sufficient to have $z_I^N \geq \frac{2^n}{P}$ for every $\emptyset \not= I\subseteq N$, where $z_I^N=y_I^N (|I|-1/P)$. \end{proof}
\paragraph{The integrality gap.} The value of the solution given by Lemma~\ref{th:feasiblesdpsolnew} is equal to: \begin{eqnarray*}
\sum_{i=1}^n y_i &=& \sum_{I\subseteq N} y_I^N |I| = \sum_{I\subseteq N}\frac{2^{n}}{P|I|-1} |I| \leq \frac{2^{2n+1}}{P} \end{eqnarray*}
By choosing $P=k\cdot 2^{2n+1}$, for any $k\geq 1$, the integrality gap is at least $k$.
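As a sanity check, the lemma's conditions and the gap bound can be verified numerically for small $n$; the following sketch (the function name \texttt{check} and the parameter values are ours) confirms that $z_I^N=2^n/P$ exactly for every nonempty $I$, that $y_\emptyset^N\geq 0$, and that the objective value is at most $1/k$.

```python
from math import comb

# Sanity check (names are ours) for the solution of the lemma above:
# y_I^N depends only on s = |I|, and z_I^N = y_I^N (|I| - 1/P).
def check(n, k):
    P = k * 2 ** (2 * n + 1)
    y = {s: 2 ** n / (P * s - 1) for s in range(1, n + 1)}
    for s in range(1, n + 1):
        # z_I^N = (2^n/(Ps-1)) * ((Ps-1)/P) = 2^n / P exactly
        assert abs(y[s] * (s - 1 / P) - 2 ** n / P) < 1e-12
    # y_emptyset^N = 1 - sum_{I != emptyset} y_I^N must be nonnegative
    assert 1 - sum(comb(n, s) * y[s] for s in range(1, n + 1)) >= 0
    # objective value sum_I y_I^N |I| is at most 2^(2n+1)/P = 1/k
    value = sum(comb(n, s) * y[s] * s for s in range(1, n + 1))
    assert value <= 1 / k + 1e-12
    return value

check(4, 2)
```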
\begin{comment} \paragraph{A Feasible Solution.} In the following we compute a solution $y$ that satisfies the conditions of Lemma~\ref{th:feasiblesdpsolnew}. Note that this implies that $y\in \text{\sc{Las}}_{n-1}(GapLP')$.
We consider a \emph{symmetric} solution $y$, namely for each $I,J\subseteq N$ such that $|I|=|J|$ then $y_I=y_J$. We denote with $y_{[\ell]}$ the value of $y_I$ whenever $|I|=\ell$. The symmetry of $y$ implies the symmetry of $z=g*y$, and therefore $z_I=z_J=z_{[k]}$ and $z_I^N=z_J^N=z_{[k]}^N$, whenever $|I|=|J|=k$.
\begin{lemma}\label{th:recurrence} For $P\geq 2^{2n}$, the following variable assignment satisfies Lemma~\ref{th:feasiblesdpsolnew} conditions. \begin{equation} \left \{ \begin{array}{ll} y_{[n-i]}= 2^{i} \varepsilon & \forall i\in\{0,1,\ldots, n-1\}\label{sol}\\ \varepsilon = \frac{2^{n}}{P-1} \end{array} \right. \end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{th:recurrence}]
By the solution symmetry, for any set $S$ of size $n-i$, with $i\in\{0,1,\ldots, n-1\}$, we have
$y_S ^N=y_{[n-i]} ^N = \sum_{H\subseteq N\setminus S} (-1)^{|H|}\cdot y_{H\cup S}
= \sum_{\ell =0}^{i} (-1)^{\ell}\cdot {i \choose \ell} y_{[n-i+\ell]} $.
A sufficient condition to satisfy~\eqref{knapcond2} is obtained by imposing $y^N_{[n-i]}=\varepsilon$, for any $i\in\{0,\ldots, n-1\}$. The latter is satisfied by choosing the variables $y_{[i]}$ as in the following recurrence relation.
\begin{eqnarray}\label{recurrence} y_{[n-i]}&=& \left\{ \begin{array}{ll} \varepsilon & \text{for } i=0 \\
\varepsilon + \sum_{\ell =1}^{i} (-1)^{\ell-1}\cdot {i \choose \ell} y_{[n-i+\ell]} & \text{for } i=1,\ldots, n-1\\ \end{array} \right. \end{eqnarray}
We show that solution \eqref{sol} satisfies \eqref{recurrence} by induction. For $i=0$ note that $y_{[n]}^N=y_{[n]}=\varepsilon$ (base case). For any $i\geq 1$, let us assume that recurrence~\eqref{recurrence} is satisfied by~\eqref{sol} for variables $y_{[n]}, \ldots, y_{[n-i-1]}$ (induction hypothesis), then we need to show it is also satisfied for $y_{[n-i]}$, namely that the following holds: $ 2^i \varepsilon = \varepsilon + \sum_{\ell =1}^{i} (-1)^{\ell-1}\cdot {i \choose \ell} (2^{i-\ell}\varepsilon) $. The latter can be easily checked by observing that $1+\sum_{\ell =1}^{i} (-1)^{\ell-1}\cdot {i \choose \ell} 2^{i-\ell} = 1-\sum_{\ell =1}^{i} {i \choose \ell} 2^{i-\ell} (-1)^{\ell}
= 1-\left(1- 2^i\right)
= 2^i $.
The claim follows by checking that~\eqref{knapcond1} holds (namely $y_{\emptyset}^N\geq 0$): $y_{\emptyset}^N = \sum_{\ell =0}^{n} (-1)^{\ell}\cdot {n \choose \ell} y_{[\ell]}= 1+ 2^{n} \varepsilon\sum_{\ell =1}^{n} {n \choose \ell} (-1/2)^{\ell} = 1 - \varepsilon (2^n-1 )= 1- \frac{2^{n}}{P-1}(2^n-1 )>0$.
\end{proof}
The value of solution~\eqref{sol} is $ny_{[1]}=\frac{n2^{2n-1}}{P-1}$. By choosing $P=k(n2^{2n})$, for any $k\geq 1$, we have an integrality gap larger than $k$. The claimed result follows by Lemma~\ref{th:gaprelation}. \end{comment}
\subsection{Proof of Theorem~\ref{th:knapgapbounds}\eqref{th:knapgapbounds(a)}}
Assume, by contradiction, that for some $\varepsilon\in (0,1)$ and some $P\leq (2^n-2)\varepsilon$ there is a solution $y\in\text{\sc{Las}}_{n-1}(GapLP')$ whose value is $\sum_{i=1}^n y_i = 1-\varepsilon$.
Consider the trace of the congruent matrix $\mathcal{M}$:
$\Tr(\mathcal{M}) = \sum_{I\in \mathcal{P}_{n-1}(N)} \mathcal{M}_{I,I}= \sum_{I\subseteq N} z_I^N + (2^{n}-2)z_{\emptyset}^N = z_{\emptyset}+(2^{n}-2)(-1/P)y_{\emptyset}^N$,
where we used the equalities $\sum_{I\subseteq N} z_I^N =z_{\emptyset}$ and $z_{\emptyset} = \sum_{i=1}^n y_i -1/P$ (which is smaller than 1 by assumption).
Since $\mathcal{M}\succeq 0$, it follows that $\Tr(\mathcal{M})\geq 0$ and therefore
$y_{\emptyset}^N\leq \frac{Pz_{\emptyset}}{2^{n}-2}$.
Now, note that the objective function value can be bounded by
$\sum_{i\in N} y_i = \sum_{I\subseteq N} y_I^N|I| \geq 1-y_{\emptyset}^N$,
where we used the equality $\sum_{I\subseteq N} y_I^N =1$. It follows that
$\sum_{i\in N} y_i \geq 1-y_{\emptyset}^N\geq 1-\frac{Pz_{\emptyset}}{2^{n}-2}> 1-\frac{P}{2^{n}-2}$.
By the assumption we have $\sum_{i\in N}y_i= 1-\varepsilon$, and therefore $1-\varepsilon> 1-\frac{P}{2^{n}-2}$, which implies $P> (2^{n}-2)\varepsilon$, a contradiction.
\section{\textsc{Min-Knapsack} with Lifted Objective Function}\label{sect:knaplifted}
For \textsc{Min-Knapsack}, if we add the objective function as a constraint and impose that the value is at most one, then after one round of Lasserre the integrality gap vanishes. Indeed by adding the following constraint:
$\sum_{j=1}^n x_j \leq T$,
and setting $T=1$ we obtain that $y_I = 0$ for any $I\subseteq N$ with $|I|>1$. The latter implies that $M_1(z)\cong Diag(z^N,1)$, so any feasible fractional solution can be obtained as a convex combination of feasible integral solutions, and the integrality gap is one. We can also easily show (see~\cite{KarlinMN11}) that in general the integrality gap with the lifted objective function decreases rapidly, i.e.\ after $1/\varepsilon$ rounds the integrality gap is $1+O(\varepsilon)$.
A natural question is to understand if the ``trick'' of adding the objective function can avoid the weakness of the Lasserre method when ``easy'' problems are considered. In Section~\ref{sect:tardyjobs} we show that the weakness remains even after adding the objective function. Again the almost diagonal form of the Lasserre hierarchy will play a fundamental role in the analysis. More precisely, we prove an unbounded integrality gap for a special case of the min-sum scheduling problem (see \cite{BansalP10,CheungS11}) that admits an FPTAS. The same ideas\footnote{The simple gap instances that we consider make the two problems essentially the same.} can be used for proving integrality gaps for the \textsc{Min-Multiple-Knapsack} problem (the \textsc{Min-Knapsack} variant with multiple knapsacks).
In this section, we use the latter problem to introduce the Gershgorin disk transformation technique, explained in Section~\ref{gershgorintech} in its general form. We show this for a small instance, but the reader should have no problem generalizing it to any size and obtaining an unbounded gap for the \textsc{Min-Multiple-Knapsack} problem, as well as for the min-sum scheduling problem (see Section~\ref{sect:tardyjobs}). We omit the full details of this proof and instead give an alternative proof technique that uses the Lasserre hierarchy characterization given in Section~\ref{sect:sip}.
\begin{example}\label{ex:mkp} Consider the instance of the \textsc{Min-Multiple-Knapsack} problem with 3 knapsacks, demand $\varepsilon=1/16$, and two distinct items per knapsack, each with unit profit and unit cost. If we impose that the objective function value is not larger than two, then the linear programming relaxation of the considered instance is as follows.
\begin{subequations} \label{LP:minmultknap} \begin{align}
(MKP) \hspace{1cm}& \sum_{i=1}^6 x_i\leq 2 ,\label{eq:kpcardconstr}\\
& x_1+x_2 \geq \varepsilon \label{kpdemand1}\\
& x_3+x_4 \geq \varepsilon \label{kpdemand2}\\
& x_5+x_6 \geq \varepsilon \label{kpdemand3}\\
& 0\leq x_{i}\leq 1, & \text{for }\ i \in[6]
\end{align} \end{subequations} Note that there is no integral solution that satisfies the above constraints. In the following we show that one level of Lasserre is not sufficient to rule out this case, therefore giving an integrality gap of $3/2$ (the integrality gap here is defined as the ratio between the optimal integral value and the objective function upper bound). By increasing the number of items and knapsacks, it is not hard to generalize this to any level $t=O(\sqrt{n})$, where $n$ is the input size, and get an unbounded integrality gap (see Section \ref{sect:tardyjobs} for a different proof technique of this claim).
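The claims about this instance are easy to verify mechanically; the following brute-force sketch (all names are ours) checks that no $0/1$ point satisfies the constraints of $(MKP)$, while a fractional point does.

```python
from itertools import product

eps = 1 / 16  # the demand of each knapsack

def feasible(x):
    # constraints of (MKP): objective bound and the three demand constraints
    # (small tolerance on the objective bound to absorb float rounding)
    return (sum(x) <= 2 + 1e-9 and x[0] + x[1] >= eps
            and x[2] + x[3] >= eps and x[4] + x[5] >= eps)

# integrally, each knapsack needs at least one item, i.e. 3 items > 2
assert not any(feasible(x) for x in product([0, 1], repeat=6))
# the uniform fractional point x_i = 1/3 satisfies all constraints
assert feasible([1 / 3] * 6)
```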
With this aim, consider $\text{\sc{Las}}_1(MKP)$ in almost diagonal form (Lemma~\ref{th:lassalmostdiag}). Let $N$ denote the set of items. Consider the
solution $S=\{y_I^N=\alpha : I\subseteq N, |I|\leq 2\}$ that forms a uniform probability distribution (with $y_I^N=0$ for every $|I|>2$), where $\alpha =1/(7+{6 \choose 2})=1/22$.
Note that solution $S$ immediately satisfies $M_n(y)\succeq 0$ by \eqref{vars}. Moreover, let $g(x) = 2- \sum_{i=1}^6 x_i\geq 0$ denote the objective function constraint~\eqref{eq:kpcardconstr} and let $g(I)$ denote the value of $g(x)$ when we set to 1 all the variables in $\{x_{i}:i\in I\}$ and to zero the remaining. Solution $S$ satisfies $M_1^*(z^N)\succeq 0$, where $z=g*y$. Indeed, it is not difficult to see that $M_1^*(z^N)$ is the sum of PSD matrices (the only positive probabilities $y_I^N$ are given to solutions with at most 2 picked items, i.e. $y_I^N g(I)\geq 0$; therefore the entries of the diagonal matrix in $M_1^*(z^N)$ are all nonnegative, and every other matrix of rank one (that is PSD) is multiplied by a nonnegative number).
By the previous arguments, the only interesting case is to check the claim for the moment matrices of the knapsack constraints. Consider $M_1^*(z^N)\succeq 0$, where $z= g_1 *y$ and $g_1(x)= x_1 + x_2-\varepsilon\geq 0$ (by symmetry the same holds for the other constraints). Let $R(J)=G(J)G(J)^{\top}$ denote the rank one matrix defined in Lemma~\ref{th:lassalmostdiag}.
$M_1^*(z^N)$ divided by $\alpha$ is equal to:
{\small \begin{eqnarray*}\underbrace{ \left( \begin{array}{ccccccc} -\varepsilon & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1-\varepsilon & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1-\varepsilon & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\varepsilon & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\varepsilon & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\varepsilon & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon \end{array} \right)}_{D} &+& \underbrace{(2-\varepsilon) R(\{1,2\}) + (1-\varepsilon)\left(\sum_{i=3}^6 R(\{1,i\})+R(\{2,i\})\right)}_{\mathcal{PD}}\\ && \underbrace{-\varepsilon \left(\sum_{i=3}^5 \sum_{j=i+1}^{6} R(\{i,j\})\right)}_{\mathcal{ND}} \end{eqnarray*} }
Recall that the matrices $R(\{i,j\})$ that are multiplied by a positive number (i.e., those with $\{i,j\}\cap\{1,2\}\not=\emptyset$) are PSD; these terms belong to the set $\mathcal{PD}$ (see notation in Section~\ref{Sect:almostdiag}). All the other components of $M_1^*(z^N)$ are not PSD. It is easy to check that the Gershgorin disks of $M_1^*(z^N)$ are not located entirely in the nonnegative half-plane.
The idea is to pivot on each negative entry $(I,I)$ of the $D$-matrix to transform a certain PSD component $R(J)$ with $I\subseteq J$ into its $I$-reduced form $R^I(J)$. The transformed $D+R(J)$ can roughly be seen as the result of shifting the negative $(I,I)$-th entry of $D$ by a positive number, at the cost of spreading the negative entry value $-\varepsilon$, and therefore increasing the disk radii in $D$ by a factor of $\varepsilon$ (but the effect of the latter is ``small'' when $\varepsilon$ is ``small''). On the other hand, this transformation does not destroy the PSD-ness of the $\mathcal{PD}$ components (and does not change their rank). Moreover, note that the contribution of the matrices in $\mathcal{ND}$ is to increase the radii by some multiple of $\varepsilon$, which is again ``small'' for sufficiently small $\varepsilon$. After pivoting on each negative entry of $D$, we obtain a congruent matrix with positive diagonal entries and ``small'' off-diagonal entries (plus some additional PSD matrices). For $\varepsilon=1/16$ this congruent matrix is PSD. We provide the complete example in Appendix \ref{sect:exmkp}.
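Since congruence preserves PSD-ness, the conclusion can also be verified directly. The following sketch (our reconstruction of the displayed decomposition, with $G(J)_{\emptyset}=-1$ and $G(J)_i=1$ for $i\in J$) checks numerically that $M_1^*(z^N)/\alpha$ is PSD for $\varepsilon=1/16$.

```python
import numpy as np
from itertools import combinations

eps = 1 / 16
n = 6

def g1(S):  # first knapsack constraint x1 + x2 - eps on the 0/1 point S
    return len(set(S) & {1, 2}) - eps

# M_1^*(z^N)/alpha = diag(g1(I) : |I| <= 1) + sum_{|J|=2} g1(J) R(J),
# rows/columns indexed by the empty set and the six singletons
M = np.diag([g1(())] + [g1((i,)) for i in range(1, n + 1)])
for J in combinations(range(1, n + 1), 2):
    G = np.array([-1.0] + [1.0 if i in J else 0.0 for i in range(1, n + 1)])
    M += g1(J) * np.outer(G, G)

# all eigenvalues are nonnegative, i.e. the matrix is PSD for eps = 1/16
assert np.linalg.eigvalsh(M).min() > -1e-9
```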
It is not very difficult to generalize this approach to prove an unbounded integrality gap for the \textsc{Min-Multiple-Knapsack} and the Min-Sum of Tardy Jobs problem (see Section~\ref{sect:tardyjobs}) at level $t=\Omega(\sqrt{n})$: solution \eqref{eq:solsched} can be shown to be feasible by iteratively pivoting on the negative entries of $D$, obtaining a final matrix with off-diagonal entries that depend only on $\varepsilon$ (plus some additional PSD matrices). The proof of the feasibility of solution \eqref{eq:solsched} follows by choosing $\varepsilon$ ($1/P$ in Section~\ref{sect:tardyjobs}) ``small'' enough.
\end{example}
\section{Lasserre Integrality Gap for the Min-Sum of Tardy Jobs}\label{sect:tardyjobs}
We consider the single machine scheduling problem of minimizing the weighted number of tardy jobs: we are given a set of $N$ jobs, each with a weight $w_j>0$, a processing time $p_j>0$, and a due date $d_j>0$. We have to sequence the jobs on a single machine such that no two jobs overlap. A job $j$ that completes at time $C_j$ is \emph{tardy} if $C_j>d_j$. The scheduling objective is to minimize the total weight of the tardy jobs.
\paragraph{The starting LP.} Our result is based on the following ``natural'' linear programming relaxation, which is a special case of the starting LPs used in \cite{BansalP10,CheungS11} (therefore the obtained unbounded integrality gap result also holds if we apply Lasserre to the LPs used in \cite{BansalP10,CheungS11}). For each job we introduce a variable $x_j\in[0,1]$ with the intended (integral) meaning that $x_j=1$ iff job $j$ completes after its deadline, i.e.\ it is a tardy job. Then, for any time $t\in\{d_1,\ldots,d_N\}$, the total processing time of jobs with deadlines not larger than $t$ that complete not later than $t$ must satisfy $\sum_{j:d_j\leq t} (1-x_j)p_j \leq t$. The latter constraint can be rewritten as a capacitated covering constraint, $\sum_{j:d_j\leq t} x_jp_j \geq D_t$, where $D_t:=\sum_{j:d_j\leq t} p_j -t$ represents the \emph{demand} at time $t$. The goal is to minimize $\sum_j w_j x_j$.
\paragraph{The Gap Instance.} We consider the following instance with $N=n^2$ jobs of unit weight. (By abusing notation we will use $N$ to denote both the set and the total number of jobs.) Jobs are partitioned into $n$ blocks $N_1, N_2,\ldots, N_n$, each with $n$ jobs. For $i\in[n]$, jobs belonging to block $N_i$ have the same processing time $P^i$ and the same deadline $d_i=n\sum_{j=1}^i P^{j}-\sum_{j=1}^i P^{j-1}$. So the demand at time $d_i$ is $D_i=\sum_{j=1}^i P^{j-1}$, for $P>0$. For any $t\geq 0$, let $T$ be the smallest value that makes $\text{\sc{Las}}_t\left(LP(T)\right)$ feasible, where $LP(T)$ is defined as follows:
\begin{subequations} \label{LP:tardy} \begin{align}
LP(T) \hspace{1cm}& \sum_{i =1}^n \sum_{j =1}^n x_{ij}\leq T,\label{eq:cardconstr}\\
&\sum_{i =1}^k \sum_{j =1}^n x_{ij}\cdot P^i \geq D_k, & \text{for }\ k\in[n]\label{demand}\\
& 0\leq x_{ij}\leq 1, & \text{for }\ i,j\in[n]
\end{align} \end{subequations} Note that, for any feasible \emph{integral} solution for $LP(T)$, the smallest $T$ (i.e. the optimal integral value) can be obtained by selecting one job for each block, so the smallest $T$ for integral solutions is $n$. The \emph{integrality gap} of $\text{\sc{Las}}_t\left(LP(T)\right)$ (or $LP(T)$) is defined as the ratio between $n$ (i.e. the optimal integral value) and the smallest $T$ that makes $\text{\sc{Las}}_t\left(LP(T)\right)$ (or $LP(T)$) feasible.
It is easy to check that $LP(T)$ has an integrality gap $P$ for any $P\geq 1$: For $T=n/P$, a feasible fractional solution for $LP(T)$ exists by setting $x_{ij}=\frac{1}{nP}$.
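This fractional solution can be checked mechanically; in the sketch below (the parameter values are ours) every demand constraint of $LP(T)$ holds with equality at $x_{ij}=\frac{1}{nP}$, and the total equals $T=n/P$.

```python
# Check (for illustrative n, P of our choosing) that x_ij = 1/(nP) is
# feasible for LP(T) with T = n/P.
n, P = 4, 7
x = 1 / (n * P)
for k in range(1, n + 1):
    D_k = sum(P ** (j - 1) for j in range(1, k + 1))      # demand at d_k
    lhs = sum(n * x * P ** i for i in range(1, k + 1))    # n jobs per block
    assert abs(lhs - D_k) < 1e-9                          # equality holds
assert abs(n * n * x - n / P) < 1e-12                     # objective = T
```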
\subsection{Unbounded Integrality Gap for the Lasserre Hierarchy}
Consider any $k\geq 1$ and $n$ such that $t+1=n/k$ is a positive integer. We show that $\text{\sc{Las}}_t(LP(t+1))$ has a feasible solution $y$ (for a suitably large $P$). So the integrality gap is at least $k$. (Note that at the next level, namely $t+1$, $\text{\sc{Las}}_{t+1}(LP(t+1))$ has no feasible solution for $k>1$, which gives a tight characterization of the integrality gap threshold phenomenon.)
\paragraph{Solution Structure.}
Set $y_I = 0$ for $I\subseteq N$ with $|I|>t+1$, which implies that $M_{t+1}(y)\cong Diag(y^N,t+1)$ by Lemma~\ref{th:lassalmostdiag}; the requirement $Diag(y^N,t+1)\succeq 0$ is therefore equivalent to
$y^N_I\geq 0$ for $I\in \mathcal{P}_{t+1}(N)$.
By setting \begin{eqnarray}\label{eq:solsched} y_I^N= \left\{ \begin{array}{ll}
\alpha = 1/ |\mathcal{P}_{t+1}(N)| & \forall I\in \mathcal{P}_{t+1}(N)\\ 0 & otherwise \end{array} \right. \end{eqnarray}
we have $M_{t+1}(y)\succeq 0$.
\paragraph{Constraint Moment Matrix.}
For any constraint~\eqref{eq:cardconstr} or \eqref{demand}, say $g(x)\geq 0$, we need to ensure that Condition~\eqref{mkpcond} (with $w=g*y$) is satisfied. With Solution~\eqref{eq:solsched}, Condition~\eqref{mkpcond} simplifies as follows.
\begin{lemma}\label{th:schedconstraintcond} For any constraint $g(x)\geq 0$ (from ~\eqref{eq:cardconstr} or \eqref{demand}), Solution \eqref{eq:solsched} satisfies $M_{t}(g*y)\succeq 0$ if and only if the following conditions hold. \begin{equation}\label{mkpcond2}
\sum_{I\in \mathcal{P}_t(N)} g(I) v^2_I + \sum_{J\subseteq N:|J|=t+1} \left(\sum_{\substack{I\in \mathcal{P}_t(N)\\ I\subset J}} v_I (-1)^{|I|} \right)^2 g(J) \geq 0 \quad \forall \mbox{ unit vector } v\in \mathbb{R}^{\mathcal{P}_t(N)} \end{equation} where $g(H)$ denotes the value of $g(x)$ obtained by setting to 1 all the variables in $\{x_{i}:i\in H\}$ and to zero the remaining ones. \end{lemma}
\paragraph{The Moment Matrix for Constraint~\eqref{eq:cardconstr}.}
Constraint~\eqref{eq:cardconstr} reads $g(x)= n/k - \sum_i \sum_j x_{i,j}\geq 0$. Since $t+1=n/k$, for any $J\in \mathcal{P}_{t+1}(N)$ we have $g(J)\geq 0$, and Condition \eqref{mkpcond2} is trivially satisfied for any vector $v$. Therefore, by Lemma \ref{th:schedconstraintcond}, we have $M_{t}(g*y)\succeq 0$.
\paragraph{The Moment Matrix for Covering Constraints.}
\begin{lemma} For any covering constraint~\eqref{demand} $g_{\ell}(x)=\sum_{i =1}^{\ell} \sum_{j =1}^n x_{ij}\cdot P^i - D_{\ell} \geq 0$, with $\ell=1,\ldots,n$, Solution \eqref{eq:solsched} satisfies Condition \eqref{mkpcond2} for $P=n^{O(t^2)}$. \end{lemma}
\begin{proof} Consider the $\ell$-th covering constraint $g_{\ell}(x)\geq 0$ (see~\eqref{demand}) and the corresponding semi-infinite set of linear requirements \eqref{mkpcond2}. Then consider the following partition of $\mathcal{P}_{t+1}(N)$.
\begin{eqnarray*} A&=&\{ I\in\mathcal{P}_{t+1}(N): I\cap N_{\ell} \not= \emptyset\}\\ B&=&\{I\in\mathcal{P}_{t+1}(N): I\cap N_{\ell}=\emptyset\} \end{eqnarray*}
Note that for $S\in A$ we have $g_{\ell}(S)\geq \left(P^{\ell}-\sum_{j=1}^{\ell}P^{j-1}\right)= P^{\ell}\left(1-\frac{P^{\ell}-1}{P^{\ell}(P-1)}\right)\geq P^{\ell}\left(1-\frac{1}{P-1}\right)$.
For $S\in B$ we have $g_{\ell}(S)\geq -\sum_{j=1}^{\ell}P^{j-1}\geq P^{\ell}\left(-\frac{1}{P-1}\right)$. Since $P>0$, by scaling $g_{\ell}(x)\geq 0$ (see~\eqref{demand}) by $P^{\ell}$, we will assume, w.l.o.g., that \begin{eqnarray*} g_{\ell}(S)\geq \left\{ \begin{array}{ll} 1-\frac{1}{P-1} & S\in A\\ -\frac{1}{P-1} & S\in B \end{array} \right. \end{eqnarray*}
Note that, since $v$ is a unit vector, we have $v_I^2\leq 1$, and for any $J\subseteq N:|J|=t+1$ the coefficient of $g_{\ell}(J)$ is bounded by $\left(\sum_{\substack{I\in \mathcal{P}_t(N)\\ I\subset J}} v_I (-1)^{|I|} \right)^2\leq 2^{O(t)}$. Over all unit vectors $v$, let $\beta$ denote the smallest possible total sum of the negative terms in \eqref{mkpcond2} (namely, those involving $g_{\ell}(I)$ for $I\in B$). Note that $\beta\geq -\frac{|B|2^{O(t)}}{P}= -\frac{n^{O(t)}}{P}$.
In the following, we show that, for sufficiently large $P$, Solution \eqref{eq:solsched} satisfies \eqref{mkpcond2}. We prove this by contradiction.
Assume that there exists a unit vector $v$ such that~\eqref{mkpcond2} is not satisfied by Solution \eqref{eq:solsched}. We start by observing that, under the previous assumption, the following holds
\begin{equation}\label{eq:probcase} \forall I\in A\cap \mathcal{P}_t(N):v_I^2 = \frac{n^{O(t)}}{P} \end{equation}
(otherwise we would have an $I\in A\cap \mathcal{P}_t(N)$ such that $v_I^2 g_{\ell}(I)\geq -\beta$ contradicting the assumption that \eqref{mkpcond2} is not satisfied).
In the following we show that the previous bound on $v_I^2$ can be generalized to $v_I^2=\frac{n^{O(t^2)}}{P}$ for \emph{any} $I\in \mathcal{P}_t(N)$ (under the contradiction assumption). Then, by choosing $P$ large enough that $v_I^2<1/n^{2t}$ for every $I\in \mathcal{P}_t(N)$, we get $\sum_{I\in \mathcal{P}_t(N)} v_I^2<1$, contradicting the fact that $v$ is a unit vector.
The claim follows by showing that $\forall I\in B\cap \mathcal{P}_t(N):v_I^2 \leq n^{O(t^2)}/P$. The proof is by induction on the size of $I$ for any $I \in B\cap \mathcal{P}_t(N)$.
Consider first the empty set (note that $\emptyset\in B\cap \mathcal{P}_t(N)$). We show that $v_{\emptyset}^2= n^{O(t)}/P$. With this aim, consider any $J\subseteq N_{\ell}$ with $|J|=t+1$. Note that $J\in A$, $g_{\ell}(J)\geq t+1-1/(P-1)$, and its coefficient $u_J^2=\left(\sum_{\substack{I\in \mathcal{P}_t(N)\\ I\subset J}} v_I (-1)^{|I|} \right)^2$ is the square of an algebraic sum of $v_{\emptyset}$ and other terms $v_I$, all with $I\in A\cap \mathcal{P}_t(N)$ and therefore with $v_I^2= \frac{n^{O(t)}}{P}$. Moreover, note that $u_J^2$ is smaller than $-\beta$ (otherwise \eqref{mkpcond2} would be satisfied). Therefore, we have the following bound $b_0$ on $|v_{\emptyset}|$ (here, and later, we use the loose bound $g_{\ell}(J)\geq 1/2$ for $J\subseteq N_{\ell}$ and $P\geq 3$)
\begin{equation}\label{eq:indfirst}
|v_{\emptyset}|\leq \sqrt{-2\beta} + \sum_{\emptyset\not =I\subset J} |v_I|\leq b_0= O\left(\sqrt{-\beta}+2^{O(t)} \frac{n^{O(t)}}{\sqrt{P}}\right)= \frac{n^{O(t)}}{\sqrt{P}} \end{equation}
which implies that $v_{\emptyset}^2=n^{O(t)}/P$.
Similarly as before, consider any singleton set $\{i\}$ with $\{i\}\in B\cap \mathcal{P}_t(N)$ and any $J\subseteq N_{\ell}$ with $|J|=t$. Note that $J\in A$, $g_{\ell}(J)\geq t-1/(P-1)$ and its coefficient $u_J^2=\left(\sum_{\substack{I\in \mathcal{P}_t(N)\\ I\subset J\cup\{i\}}} v_I (-1)^{|I|} \right)^2$ is the square of an algebraic sum of $v_{\{i\}}$, $v_{\emptyset}$ and other terms $v_I$, with $I\subseteq J$ and therefore $v_I^2= \frac{n^{O(t)}}{P}$. Moreover, again note that $u_J^2$ is smaller than $-\beta$ (otherwise \eqref{mkpcond2} is satisfied). Therefore, for any singleton set $\{i\}\in B\cap \mathcal{P}_t(N)$, we have that \begin{equation*}
|v_{\{i\}}|\leq |v_{\emptyset}| + \sqrt{-2\beta} + \sum_{\emptyset\not =I\subset J} |v_I| \leq 2b_0 \end{equation*}
Generalizing by induction, consider any set $S\in B\cap \mathcal{P}_t(N)$ and any $J\subseteq N_{\ell}$ with $|J|=t+1-|S|$. We claim that $|v_{S}|\leq b_{|S|}$ where \begin{equation}\label{eq:recbound}
b_{|S|}= \sum_{i=0}^{|S|-1}\left( N^{i} b_{i}\right)+ b_0 \end{equation} The latter \eqref{eq:recbound} follows by induction hypothesis and by observing that again $g_{\ell}(J\cup S)u_{J\cup S}\leq -\beta$ and therefore, \begin{equation*}
|v_{S}|\leq \sum_{i=0}^{|S|-1} \left(\sum_{\substack{I\in B\\ |I|=i}} |v_{I}|\right) + \sqrt{-2\beta} + \sum_{I\subset J} |v_I| \end{equation*}
From \eqref{eq:recbound}, for any $S\in B\cap \mathcal{P}_t(N)$, we have that $|v_{S}|$ is bounded by $b_{t}=(N^{t-1}+1)b_{t-1}=N^{O(t^2)}b_0=\frac{n^{O(t^2)}}{\sqrt{P}}$.
\end{proof}
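The two scaled bounds used at the beginning of the proof (for $S\in A$ and $S\in B$) follow from $\sum_{j=1}^{\ell}P^{j-1}=\frac{P^{\ell}-1}{P-1}\leq\frac{P^{\ell}}{P-1}$; the following quick numerical check (the ranges of $P$ and $\ell$ are ours) illustrates them.

```python
# Verify the bounds before scaling by P**l:
#   P**l - sum_{j<=l} P**(j-1) >= P**l * (1 - 1/(P-1))   (case S in A)
#   -sum_{j<=l} P**(j-1)       >= -P**l / (P-1)          (case S in B)
for P in (3, 5, 17):
    for l in range(1, 6):
        s = sum(P ** (j - 1) for j in range(1, l + 1))  # = (P**l - 1)/(P-1)
        assert P ** l - s >= P ** l * (1 - 1 / (P - 1)) - 1e-9
        assert -s >= -(P ** l) / (P - 1) - 1e-9
```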
\paragraph{Acknowledgments.} I am grateful to Joseph Cheriyan and Zhihan Gao for pointing out a mistake in an early version of the paper. I thank Adam Kurpisz and Sam Lepp\"anen for carefully reading the paper and for their suggestions. I am indebted to Ola Svensson for several stimulating discussions.
\pagebreak \appendix \appendixpage
\begin{comment}
\section{Probability Distribution interpretation over set $N$}\label{sect:probdistr}
By using \eqref{vars} and \eqref{sum1} we show that the Lasserre constraints imply that $y$ defines a probability distribution (see also \cite{roth12}). We will see that $y_I^N$ is actually the probability that the variables $y_i$ with $i\in I$ are set to one, and the remaining ones, i.e.\ $y_j$ with $j\in N\setminus I$, are set to zero.
For any $I\subseteq N$, let $\{I, N\setminus I\}$ be the event that the variables in $I$ (more precisely the variables $y_i$ with $i\in I$) get value $1$ and the variables in $N\setminus I$ get value $0$. Note that for any two distinct $I\subseteq N$ and $I'\subseteq N$ the corresponding events $\{I, N\setminus I\}$ and $\{I', N\setminus I'\}$ are disjoint. For any $i=1,\ldots,n$, let $U_i\in\{0,1\}$ be a random variable. We have a valid probability distribution if the following holds: \begin{enumerate}[(a)] \item $\Pr\left[\bigwedge_{i\in I} U_i=1 \wedge \bigwedge_{j\in N\setminus I} U_j=0 \right]\geq 0$ for any $I\subseteq N$; \item $\sum_{I\subseteq N}Pr\left[\bigwedge_{i\in I} U_i=1 \wedge \bigwedge_{j\in N\setminus I} U_j=0 \right]=1$ \end{enumerate} Let $y$ be a feasible solution of $\text{\sc{Las}}_n(\mathcal{K})$. In the following we show that the variables in $\{y_I:I\subseteq N\}$ define a probability distribution over the integral solutions for variables in $N$. More precisely, we prove that the values $y_I$ can be seen as a valid probability distribution over the random variables $U_i$, such that \begin{equation}\label{prob} y_I=\Pr\left[\bigwedge_{i\in I} U_i=1\right] \end{equation}
The claim follows by showing that the two properties (a) and (b) above are satisfied when \eqref{prob} holds for every $I\subseteq N$. With this aim, we need first to compute $\Pr\left[\bigwedge_{i\in I} U_i=1 \wedge \bigwedge_{i\in N\setminus I} U_i=0 \right]$ as a function of $\Pr\left[\bigwedge_{i\in I} U_i=1\right]$ and then show that when \eqref{prob} holds we obtain a valid probability distribution.
\begin{eqnarray*} \Pr\left[\bigwedge_{i\in I} U_i=1 \wedge \bigwedge_{i\in N\setminus I} U_i=0 \right]&=&\Pr\left[\prod_{i\in I} U_i \prod_{j\in N\setminus I} (1-U_j)\right]
= \Pr\left[\sum_{H\subseteq N\setminus I}(-1)^{|H|}\prod_{i\in I} U_i \prod_{j\in H} U_j\right]\\
&=& \sum_{H\subseteq N\setminus I}(-1)^{|H|} \Pr\left[\prod_{i\in I\cup H} U_i \right]= y_I^N \end{eqnarray*} by using ~\eqref{prob} and linearity of expectation of the $0/1$ random variable $\prod_{i\in I} U_i \prod_{j\in H} U_j$ (it is well known that the expected value of $0/1$ random variable is equal to the probability of being one). By the above derivation, (a) follows by using \eqref{vars} and \eqref{sum1}. Moreover, property (b) follows from Lemma \ref{sum1} (with $S=N$ and $J=\emptyset$ we have $y_{J}=y_{\emptyset}=1$). It follows that $y$ is a valid probability distribution over the integral solutions of set $N$.
\end{comment}
\section{Linear Algebra: useful facts}\label{linear algebra}
\begin{definition}[PSD] A symmetric $n\times n$ matrix $A$ is positive semidefinite (PSD or $A\succeq 0$) if and only if for every $v\in \mathbb{R}^n$ we have $v^{\top} A v\geq 0$. \end{definition} \longer{ \begin{example}\label{ex:vvT} For any vector $y$ we have $y y^{\top}\succeq 0$ since $v^{\top} (y y^{\top}) v =\sum y_i y_j v_i v_j = (\sum v_i y_i)^2 \geq~0$. \end{example} } \begin{comment} Checking if a matrix A is PSD can be done in polynomial time by 2-sided Gaussian elimination, so by only using elementary operations. In case it is not PSD the Gaussian elimination gives a vector $v$ such that $v^{\top} A v<0$ as a result of elementary operations (no root extractions). This vector $v$ can be used to obtain a separating hyperplane in polynomial time (without computing the eigenvectors).
More in general, the finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: For every symmetric real matrix $A$ there exists a real orthogonal matrix $Q$ such that $D = Q^{\top} A Q$ is a diagonal matrix. Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix. \end{comment}
\begin{definition}\label{matrixOp} \emph{Symmetric matrix operations} on a symmetric $n\times n$ matrix $A$ are: \begin{enumerate} \item Multiplying both the i-th row and i-th column by $\lambda \not= 0$. \item Swapping the i-th and j-th column; and swapping the i-th and j-th row. \item Adding $\lambda \times$ i-th column to j-th column and adding $\lambda \times$ i-th row to j-th row. \end{enumerate} We say that $A\cong B$ (read $A$ is \emph{congruent} to $B$) if and only if $B$ is obtained from $A$ by zero or more symmetric matrix operations. \end{definition}
The following two facts are well known (see e.g. \cite{Strang}).
\begin{lemma}\label{th:simop} Let $A$, $B$ be symmetric. If $A \cong B$, then $A$ is PSD if and only if $B$ is PSD. \end{lemma}
\begin{definition}\label{Def:congruence} A \emph{congruent transformation} (or \emph{congruence transformation}) is a transformation of the form $A \rightarrow P^{\top} A P$, where $A$ and $P$ are square matrices, $P$ is invertible, and $P^{\top}$ denotes the transpose of P.
\end{definition}
\begin{lemma}\label{th:matrix_transf} $A\cong B$ if and only if $B= Z^{\top} A Z$ for some invertible $Z$. \end{lemma} \begin{proof}[Proof Sketch.] We sketch the part needed for Lemma~\ref{th:simop}: if $B=Z^{\top}AZ$ with $Z$ invertible, then $A\succeq 0$ if and only if $B\succeq 0$. \begin{itemize} \item If $A\succeq 0$ then for any vector $w$ we have $w^{\top}Bw= (Zw)^{\top} A (Zw) \geq 0$, so $B\succeq 0$. \item If $B\succeq 0$ then for any vector $v$, setting $w=Z^{-1}v$ (which is possible since $Z$ is invertible), we have $v^{\top} A v = w^{\top} Z^{\top} A Z w = w^{\top}Bw\geq 0$, so $A\succeq 0$. \end{itemize} \end{proof}
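The two lemmas can be illustrated numerically; the following sketch (a random example of our choosing) applies an invertible change of basis $Z$ to a PSD matrix $A=B^{\top}B$ and checks that the congruent matrix $Z^{\top}AZ$ remains PSD.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B.T @ B                   # PSD by construction: v^T A v = ||Bv||^2 >= 0
Z = np.eye(5)
Z[0, 2] = 3.0                 # one elementary symmetric operation (invertible)
C = Z.T @ A @ Z               # congruent to A
assert np.linalg.eigvalsh(A).min() >= -1e-9
assert np.linalg.eigvalsh(C).min() >= -1e-9
```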
\longer{
\paragraph{Simple facts.}
\begin{enumerate} \item Assume that $C\in \mathbb{R}^{a\times b}$ and $D\in \mathbb{R}^{b\times b}$ then $$(CDC^{\top})_{I,J}=\sum_{U,W}C_{I,U}C_{J,W}D_{U,W}$$ \item \begin{equation}
\sum_{H\subseteq L} (-1)^{|H|} = \left\{ \begin{array}{ll} 1 & \text{ if } L=\emptyset \\ 0 & \text{ else } \end{array} \right. \end{equation} \end{enumerate} } \begin{definition}[Principal Submatrix] An $m\times m$ matrix, P, is an $m\times m$ \emph{principal submatrix} of an $n\times n$ matrix, A, if P is obtained from A by removing any $n - m$ rows and the same $n - m$ columns. \end{definition}
\begin{lemma} A symmetric matrix $A$ is positive semidefinite if and only if all of its principal submatrices have nonnegative determinants. \end{lemma} \longer{ \begin{lemma} Let $g(x)$ be a polynomial decomposed as a sum of other polynomials, i.e. $g(x)=\sum_{i\in [m]} g^{(i)}(x)$. If $M_t(g^{(i)}*y)\succeq 0$ for all $i\in [m]$ then $M_t(g*y)\succeq 0$.\end{lemma} \begin{proof} Note that $g*y=(\sum_{i\in [m]} g^{(i)})*y=\sum_{i\in [m]} g^{(i)}*y$ and $M_t(\sum_{i\in [m]}g^{(i)}*y)=\sum_{i\in [m]}M_t(g^{(i)}*y)$. The claim follows by observing that $M_{t}(g*y)$ is equal to a sum of PSD matrices. \end{proof} }
\section{Example \ref{ex:mkp} (cont.)}\label{sect:exmkp}
Matrix $M^*_1(z)$ (normalized by $\alpha$) is as follows.
{\tiny \begin{eqnarray*} && \underbrace{\left( \begin{array}{ccccccc} -\varepsilon & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1-\varepsilon & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1-\varepsilon & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\varepsilon & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\varepsilon & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\varepsilon & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon \end{array} \right)}_{D} + (2-\varepsilon)\underbrace{ \left( \begin{array}{c} -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top} }_{R(\{1,2\})}\\ &+& (1-\varepsilon)\left(\underbrace{ \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,3\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{2,6\})} \right)\\
&-& \varepsilon\left( \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,4\})} +\underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,5\})} +\underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{3,6\})}
+\underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{4,5\})} +\underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{4,6\})} +\underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)^{\top} }_{R(\{5,6\})} \right)
\end{eqnarray*} } Let us pivot on entry $(\emptyset,\emptyset)$ (the negative entry of $D$) to reduce $R(\{1,2\})$ and add the result to the $D$-matrix (for ease of notation we again call this sum $D$). We obtain the following congruent matrix. Note that the new $D$ is roughly the old $D$ with entry $(\emptyset,\emptyset)$ shifted by $(2-\varepsilon)$ and the radii of the disks ${\delta}_I$, with $I\subseteq \{1,2\}$, increased by a multiple of the negative entry $-\varepsilon$.
{\tiny \begin{eqnarray*} &&\underbrace{ \left( \begin{array}{ccccccc} 2 & -\varepsilon & -\varepsilon & 0 & 0 & 0 & 0\\ -\varepsilon & 1-2\varepsilon & -\varepsilon & 0 & 0 & 0 & 0\\ -\varepsilon & -\varepsilon & 1-2\varepsilon & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\varepsilon & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\varepsilon & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\varepsilon & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon \end{array} \right)}_{D} \\ &+& (1-\varepsilon)\left(\underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,3\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{2,6\})} \right)\\
&-& \varepsilon\left( \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,4\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,5\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{3,6\})}
+\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{4,5\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{4,6\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)^{\top} }_{R(\{5,6\})} \right)
\end{eqnarray*} }
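Each pivot step replaces the matrix by a congruent one, $S^{\top} M S$ with $S$ invertible; by Sylvester's law of inertia this preserves positive semidefiniteness, so it suffices to certify any matrix in the chain. A quick numerical illustration of this principle (with random matrices, not the ones above):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
M = A @ A.T                         # a PSD matrix by construction
S = rng.standard_normal((5, 5))     # almost surely invertible
M_cong = S.T @ M @ S                # a matrix congruent to M
# Sylvester's law of inertia: congruence with an invertible S preserves
# the signs of the eigenvalues, hence PSD-ness transfers both ways.
tol = 1e-9 * np.linalg.norm(M_cong)
assert (np.linalg.eigvalsh(M_cong) >= -tol).all()
```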
Now, pivot on entry $(3,3)$ to reduce $R(\{1,3\})$, and add it to the $D$-matrix. We obtain the following congruent matrix.
{\tiny \begin{eqnarray*} &&\underbrace{ \left( \begin{array}{ccccccc} 2-\varepsilon & -\varepsilon & -2\varepsilon & -\varepsilon & 0 & 0 & 0\\ -\varepsilon & 1-2\varepsilon & -\varepsilon & 0 & 0 & 0 & 0\\ -2\varepsilon & -\varepsilon & 1-3\varepsilon & -\varepsilon & 0 & 0 & 0 \\ -\varepsilon & 0 & -\varepsilon & 1-2\varepsilon & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\varepsilon & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\varepsilon & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon \end{array} \right)}_{D} \\ &+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{2,6\})} \right)\\
&-& \varepsilon\left( \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,4\})} +\underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,5\})} +\underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{3,6\})}
+\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{4,5\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{4,6\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)^{\top} }_{R(\{5,6\})} \right)
\end{eqnarray*} }
Now pivot on entry $(4,4)$ to reduce $R(\{2,4\})$, and add it to the $D$-matrix. We obtain the following congruent matrix.
{\tiny \begin{eqnarray*} &&\underbrace{ \left( \begin{array}{ccccccc} 2-2\varepsilon & -2\varepsilon & -2\varepsilon & -\varepsilon & -\varepsilon & 0 & 0\\ -2\varepsilon & 1-3\varepsilon & -\varepsilon & 0 & -\varepsilon & 0 & 0\\ -2\varepsilon & -\varepsilon & 1-3\varepsilon & -\varepsilon & 0 & 0 & 0 \\ -\varepsilon & 0 & -\varepsilon & 1-2\varepsilon & 0 & 0 & 0 \\ -\varepsilon & -\varepsilon & 0 & 0 & 1-2\varepsilon & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\varepsilon & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon \end{array} \right)}_{D} \\ &+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{2,6\})} \right)\\
&-& \varepsilon\left( \underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,4\})} +\underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,5\})} +\underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{3,6\})}
+\underbrace{ \left( \begin{array}{c} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{4,5\})} +\underbrace{ \left( \begin{array}{c} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{4,6\})} +\underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)^{\top} }_{R(\{5,6\})} \right)
\end{eqnarray*} }
Now, pivot on entry $(5,5)$ to reduce $R(\{1,5\})$, and add it to the $D$-matrix. We obtain the following congruent matrix.
{\tiny \begin{eqnarray*} &&\underbrace{ \left( \begin{array}{ccccccc} 2-3\varepsilon & -2\varepsilon & -3\varepsilon & -\varepsilon & -\varepsilon & -\varepsilon & 0\\ -2\varepsilon & 1-3\varepsilon & -\varepsilon & 0 & -\varepsilon & 0 & 0\\ -3\varepsilon & -\varepsilon & 1-4\varepsilon & -\varepsilon & 0 & -\varepsilon & 0 \\ -\varepsilon & 0 & -\varepsilon & 1-2\varepsilon & 0 & 0 & 0 \\ -\varepsilon & -\varepsilon & 0 & 0 & 1-2\varepsilon & 0 & 0 \\ -\varepsilon & 0 & -\varepsilon & 0 & 0 & 1-2\varepsilon & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon \end{array} \right)}_{D} \\ &+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} + \underbrace{ \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{2,6\})} \right)\\
&-& \varepsilon\left( \underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,4\})} +\underbrace{ \left( \begin{array}{c} 1 \\ -1 \\ 1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ -1 \\ 1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,5\})} +\underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{3,6\})}
+\underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{4,5\})} +\underbrace{ \left( \begin{array}{c} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{4,6\})} +\underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)^{\top} }_{R(\{5,6\})} \right)
\end{eqnarray*} }
Pivot on the last negative entry $(6,6)$ of the $D$-matrix to reduce $R(\{2,6\})$, and add it to the $D$-matrix. We obtain the following congruent matrix.
{\tiny \begin{eqnarray*} &&\underbrace{ \left( \begin{array}{ccccccc} 2-4\varepsilon & -3\varepsilon & -3\varepsilon & -\varepsilon & -\varepsilon & -\varepsilon & -\varepsilon\\ -3\varepsilon & 1-4\varepsilon & -\varepsilon & 0 & -\varepsilon & 0 & -\varepsilon\\ -3\varepsilon & -\varepsilon & 1-4\varepsilon & -\varepsilon & 0 & -\varepsilon & 0\\ -\varepsilon & 0 & -\varepsilon & 1-2\varepsilon & 0 & 0 & 0 \\ -\varepsilon & -\varepsilon & 0 & 0 & 1-2\varepsilon & 0 & 0 \\ -\varepsilon & 0 & -\varepsilon & 0 & 0 & 1-2\varepsilon & 0 \\ -\varepsilon & -\varepsilon & 0 & 0 & 0 & 0 & 1-2\varepsilon \end{array} \right)}_{D} \\ &+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} 0 \\ 1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ 1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} \right)\\
&-& \varepsilon\left( \underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,4\})} +\underbrace{ \left( \begin{array}{c} 1 \\ -1 \\ 1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ -1 \\ 1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{3,5\})} +\underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{3,6\})}
+\underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{4,5\})} +\underbrace{ \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{4,6\})} +\underbrace{ \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)\left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{array} \right)^{\top} }_{R(\{5,6\})} \right)
\end{eqnarray*} } Now, the sum of the $\mathcal{ND}$ matrices is
\begin{eqnarray*} -\varepsilon \left( \begin{array}{ccccccc} 6 & 0 & 0 & 3 & 3 & 3 & 3\\ 0 & 2 & -2 & -1 & 1 & -1 & 1\\ 0 & -2 & 2 & 1 & -1 & 1 & -1\\ 3 & -1 & 1 & 3 & 1 & 1 & 1\\ 3 & 1 & -1 & 1 & 3 & 1 & 1\\ 3 & -1 & 1 & 1 & 1 & 3 & 1\\ 3 & 1 & -1 & 1 & 1 & 1 & 3 \end{array} \right) \end{eqnarray*} and we add it to the $D$-matrix and obtain:
{\tiny \begin{eqnarray*} &&\underbrace{ \left( \begin{array}{ccccccc} 2-10\varepsilon & -3\varepsilon & -3\varepsilon & -4\varepsilon & -4\varepsilon & -4\varepsilon & -4\varepsilon\\ -3\varepsilon & 1-6\varepsilon & \varepsilon & \varepsilon & -2\varepsilon & \varepsilon & -2\varepsilon\\ -3\varepsilon & \varepsilon & 1-6\varepsilon & -2\varepsilon & \varepsilon & -2\varepsilon & \varepsilon\\ -4\varepsilon & \varepsilon & -2\varepsilon & 1-5\varepsilon & -\varepsilon & -\varepsilon & -\varepsilon \\ -4\varepsilon & -2\varepsilon & \varepsilon & -\varepsilon & 1-5\varepsilon & -\varepsilon & -\varepsilon \\ -4\varepsilon & \varepsilon & -2\varepsilon & -\varepsilon & -\varepsilon & 1-5\varepsilon & -\varepsilon \\ -4\varepsilon & -2\varepsilon & \varepsilon & -\varepsilon & -\varepsilon & -\varepsilon & 1-5\varepsilon \end{array} \right)}_{D} \\ &+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{1,4\})} + \underbrace{ \left( \begin{array}{c} 0 \\ 1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)\left( \begin{array}{c} 0 \\ 1 \\ -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{array} \right)^{\top} }_{R(\{1,6\})} \right)\\
&+& (1-\varepsilon)\left( \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,3\})} + \underbrace{ \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \left( \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{array} \right)^{\top}
}_{R(\{2,5\})} \right)\\
\end{eqnarray*} }
We see that for $\varepsilon=1/16$ all Gershgorin disks of the matrix $D$ lie in the nonnegative half-line, so $D$ is positive semidefinite, whereas the remaining rank-one matrices are PSD by construction. This shows that the suggested solution is feasible.
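The Gershgorin argument can be verified numerically on the final $D$-matrix displayed above (a sanity check with the entries transcribed from the text):

```python
import numpy as np

e = 1.0 / 16.0   # the value of epsilon claimed to make D PSD
# The final D-matrix (normalized by alpha), transcribed from the display.
D = np.array([
    [2 - 10*e, -3*e,     -3*e,     -4*e,     -4*e,     -4*e,     -4*e    ],
    [-3*e,      1 - 6*e,  e,        e,       -2*e,      e,       -2*e    ],
    [-3*e,      e,        1 - 6*e, -2*e,      e,       -2*e,      e      ],
    [-4*e,      e,       -2*e,      1 - 5*e, -e,       -e,       -e      ],
    [-4*e,     -2*e,      e,       -e,        1 - 5*e, -e,       -e      ],
    [-4*e,      e,       -2*e,     -e,       -e,        1 - 5*e, -e      ],
    [-4*e,     -2*e,      e,       -e,       -e,       -e,        1 - 5*e],
])
# Gershgorin: every eigenvalue lies in a disk centered at D[i,i] with
# radius sum_{j != i} |D[i,j]|; nonnegative lower disk edges certify PSD.
lower = np.diag(D) - (np.abs(D).sum(axis=1) - np.abs(np.diag(D)))
assert np.allclose(D, D.T)          # D is symmetric
assert (lower >= -1e-12).all()      # all disks in the nonnegative half-line
```

For $\varepsilon=1/16$ the first three disks touch zero exactly, so this choice of $\varepsilon$ is tight for the Gershgorin bound.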
\end{document}
\begin{document}
\title{Feasibility of 300 km Quantum Key Distribution with Entangled States}
\author{Thomas Scheidl$^1$, Rupert Ursin$^{1}$, Alessandro Fedrizzi$^1$, Sven Ramelow$^1$, Xiao-Song Ma$^1$, Thomas Herbst$^2$, Robert Prevedel$^2$, Lothar Ratschbacher$^1$, Johannes Kofler$^1$, Thomas Jennewein$^1$ and Anton Zeilinger$^{1,2}$}
\address{$^1$ Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, Vienna\\ $^2$ Faculty of Physics, University of Vienna}
\ead{zeilinger-office@univie.ac.at} \begin{abstract} A significant limitation of practical quantum key distribution (QKD) setups is currently their limited operational range. It has recently been emphasized \cite{Ma07} that entanglement-based QKD systems can tolerate higher channel losses than systems based on weak coherent laser pulses (WCP), in particular when the source is located symmetrically between the two communicating parties, Alice and Bob. In the work presented here, we experimentally study this important advantage by implementing different entanglement-based QKD setups on a 144~km free-space link between the two Canary Islands of La Palma and Tenerife. We established three different configurations where the entangled photon source was placed at Alice's location, asymmetrically between Alice and Bob and symmetrically in the middle between Alice and Bob, respectively. The resulting quantum channel attenuations of 35~dB, 58~dB and 71~dB, respectively, significantly exceed the limit for WCP systems \cite{Ma07}. This confirms that QKD over distances of 300~km and even more is feasible with entangled state sources placed in the middle between Alice and Bob. \end{abstract}
\maketitle
\section{Introduction} Quantum cryptography, which promises to solve the problem of secure key distribution for the encryption of messages, is the most mature technical application in the field of quantum information and quantum communication. Current QKD architectures can be broadly categorized into systems based either on weak coherent laser pulses (WCP) \cite{Hwang03,peng07,hasegawa07,Schmitt07}, on continuous variables \cite{Ralph99,Hillery00,ralph00,lodewyck07,qi07} or on entanglement \cite{Ekert91,Bennett92,honjo08}. On the commercial market, WCP systems have been available for a while \cite{magicQ,quantique}. Most recently, a QKD network was successfully demonstrated in Vienna (Peev \emph{et al.} in the same issue), including a fully automatic entanglement-based QKD system operating reliably over installed telecom fibers (Treiber \emph{et al.} in the same issue \cite{treiber09}). \par An important benchmark for a QKD system is the secure key rate that can be achieved for a given quantum channel attenuation. Due to absorptive losses in the communication channels and detector imperfections, the distance/attenuation over which a secure key can still be generated is limited for all QKD systems. The experimental approach that presently offers the best performance in high-loss regimes is the symmetric entanglement-based system. This was recently shown by X.-F. Ma \emph{et al.} \cite{Ma07}, who conclude that state-of-the-art pulsed entanglement-based QKD systems with the source placed symmetrically in the middle between the receivers can tolerate up to twice as much attenuation as WCP systems. In their particular example, the maximal attenuation was evaluated to be an astonishing 70~dB (in the case of an optimized mean photon number in the limit of an infinite number of key bits) when using experimental parameters as in~\cite{ursin07}. Comparatively, a WCP system based on the same parameters can cover only up to 35~dB attenuation.
\par The quantum channels in future global quantum communication networks will mostly consist of optical fibers, which are already widely installed. As an alternative, free-space links allow connections between parties with a direct line of sight to be established quickly \cite{Bennett92d,jacobs96,Buttler98a,Buttler00a,kurtsiefer02,kurtsiefer02b,ursin07,Erven08,peloso08}. Additionally, orbital free-space links, e.g. satellite-to-ground links or inter-satellite links, will allow the efficient global interconnection of regional quantum networks \cite{Buttler98c,ursin08b}. The attenuation expected for a single-link ground connection from a satellite is at least 30~dB, and its feasibility has been demonstrated in first ground-based tests \cite{Schmitt07,ursin07}. In the more demanding two-link satellite scenario, QKD systems will have to cope with 60~dB attenuation. \par In this work, we experimentally test the performance of entanglement-based QKD in an attenuation range from 35~dB to 70~dB over a distance of 144~km. We demonstrate the feasibility of entanglement-based QKD in loss regimes where secure communication is no longer possible using WCP systems. With respect to the channel symmetry, we show that the obtained secure key rates confirm the predicted advantage of the symmetric scenario over commonly used asymmetric systems, and we compare our results with the theoretical model by X.-F. Ma \emph{et al.} \cite{Ma07}.
\subsection{Entanglement-based QKD} The most commonly used entanglement-based QKD scheme is the BBM92 protocol \cite{Bennett92}, where it was shown to be equivalent to the original BB84 scheme \cite{Bennett84} under ideal conditions: Ideally, a source generates polarization entangled photons in the state \begin{equation}\label{psiminus}
|\psi^-\rangle=\frac{1}{\sqrt{2}}\left(|H\rangle_a|V\rangle_b-|V\rangle_a|H\rangle_b\right), \end{equation}
where $|H\rangle$ ($|V\rangle$) denotes horizontal (vertical) polarization. The photons in modes \emph{a} and \emph{b} are sent through quantum channels to the two communicating parties, Alice and Bob. Both perform measurements on the incoming photons in one out of two randomly chosen, complementary bases. Assume that they choose between the $|H,V\rangle$-basis and the $|P,M\rangle$-basis, with $|P\rangle=\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$ and $|M\rangle=\frac{1}{\sqrt{2}}(|H\rangle-|V\rangle)$. Alice and Bob individually record their measurement outcomes, including the information about the measurement basis. Assigning the binary value ``0'' to the results $|H\rangle$ and $|P\rangle$ and the value ``1'' to the results $|V\rangle$ and $|M\rangle$, each of the observers obtains a completely random bit string, the \emph{raw key}. The \emph{sifted key} is obtained after basis reconciliation, where the raw key is reduced by the basis reconciliation factor of $2$: Alice and Bob discard those events where they have accidentally chosen different bases. Since the quantum state (\ref{psiminus}) received by Alice and Bob is entangled, the retained measurement outcomes are perfectly anti-correlated, i.e. Alice's and Bob's sifted keys are perfectly inverse to each other. This key can then be used to encrypt a message that is transmitted over a public channel. \par The security against an eavesdropper, Eve, is guaranteed by the laws of quantum mechanics. Any attempt by Eve to gain information on the transmitted qubits will inevitably reduce the entanglement and introduce errors in the sifted key. The quantum bit error ratio (QBER) $q$ is constantly monitored by Alice and Bob by comparing small parts of their keys. As long as the QBER stays below a certain value, classical error correction and privacy amplification protocols can be used to distill an unconditionally secure key.
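The sifting step of BBM92 under ideal conditions can be captured in a toy simulation (an illustrative sketch with hypothetical names, not the experimental implementation); the $|\psi^-\rangle$ state yields perfectly anti-correlated outcomes whenever the bases match:

```python
import random

def bbm92_sift(n_pairs, seed=0):
    """Toy BBM92 sifting: Alice and Bob each pick a random basis
    (H/V or P/M); matching-basis events are kept, and Bob's bit is
    the inverse of Alice's because |psi-> is perfectly anti-correlated."""
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        basis_a = rng.choice(["HV", "PM"])
        basis_b = rng.choice(["HV", "PM"])
        outcome_a = rng.randint(0, 1)       # Alice's outcome is random
        if basis_a == basis_b:              # basis reconciliation
            alice_key.append(outcome_a)
            bob_key.append(1 - outcome_a)   # anti-correlated partner photon
    return alice_key, bob_key

a, b = bbm92_sift(10000)
assert all(x != y for x, y in zip(a, b))    # sifted keys perfectly inverse
# About half the raw key survives: the basis reconciliation factor of 2.
print(len(a) / 10000)
```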
\par In real-world QKD experiments, errors will most probably be caused by imperfections in the setup rather than by Eve. However, for unconditional security, all errors need to be treated as if they came from an attempted eavesdropping attack. The next section is devoted to a theoretical analysis of how the various imperfections affect the QBER and consequently limit the achievable secure key rate.
\subsection{Theoretical error model} In practical QKD setups, most errors will actually originate from experimental imperfections, e.g. non-perfect entanglement and higher-order photon emissions at the source, noisy quantum channels, imperfect polarization analyzers and photon detectors. A direct estimate of the expected QBER in a quantum optics QKD experiment can be obtained by measuring the total quantum correlation visibility $V_{tot}$, which has a simple relation to $q$: \begin{equation}\label{QBER} q =\frac{1-V_{tot}}{2} \end{equation} and can be obtained from the maxima, $N_{max}$, and minima, $N_{min}$, of the observed coincidences: \begin{equation} V_{tot}=\frac{N_{max}-N_{min}}{N_{max}+N_{min}}. \end{equation} Given that all these parameters are experimentally accessible, one can model the performance of a QKD system. First, a finite coincidence time window $\tau_c$, limited by the timing resolution of the detection apparatus, results in a certain probability of accidentally detecting in coincidence two uncorrelated photons which do not belong to the same pair. Second, the statistical nature of the down-conversion process inherently generates multi-photon emissions within the coherence time of the photons. This also results in uncorrelated detection events at Alice and Bob. Furthermore, the finite coincidence time window leads to uncorrelated accidental coincidences from background light and intrinsic detector dark counts. In addition, imperfections and misalignment in the setup (source, polarization analysis, etc.) introduce systematic errors. In the following, the errors from uncorrelated detection events are characterized by the accidentals visibility $V_{acc}$ and the systematic errors by the system visibility $V_{sys}$. The total correlation visibility is given by $V_{tot}=V_{sys}\cdot V_{acc}$. \par The effect of these error sources on the secure key rate is analyzed analytically within a model devised by X.-F.
Ma \emph{et al.} \cite{Ma07}, which assumes pulsed operation of the SPDC source. The input parameters for the model are the photon-pair generation rate at the source, the ratio between the coincidence and single rates at Alice and Bob including detector efficiencies and the system visibility $V_{sys}$. The model yields an error probability and a secure key gain per pump pulse as a function of total two-photon attenuation in a QKD experiment. A lower bound for the final secure bit rate per pulse $R$ is then calculated using Koashi and Preskill's security analysis \cite{Koashi03} via \begin{equation}\label{koashipreskill}
R\geq \frac{1}{2}\{P_c[1-f(q)H_2(q)-H_2(q)]\}. \end{equation} Here, $P_c$ is the coincidence detection probability between Alice and Bob per pump pulse, $\frac{1}{2}$ is the basis reconciliation factor and $H_2$ is the binary entropy function \begin{equation}\label{entropy}
H_2(x)=-x\log_2(x)-(1-x)\log_2(1-x). \end{equation} The correction factor $f(q)$ accounts for the fact that practical error-reconciliation protocols in general do not perform ideally at the Shannon limit. Instead of assuming $f(q)\approx1$, we used realistic values for the applied bidirectional error correction protocol CASCADE \cite{Brassard94}. Furthermore, for our case of a cw-type SPDC source, the model was adapted to yield a probability per coincidence time window instead of a probability per pump pulse. We remark that other error-reconciliation protocols (e.g. WINNOW \cite{Buttler03}) promise to perform more efficiently, and it will be the subject of future investigation whether they might also carry advantages for our specific experimental environment. \par Note that Equation (\ref{koashipreskill}) gives the final secure key rate in the limit of infinite key lengths. However, in a practical implementation the secure key is obtained via error correction and privacy amplification on a finite key, which further reduces the secure key rate to some extent (see \cite{Ma07,scarani08} for more details). For simplicity, we restrict the analysis of our experiments to the infinite-key bounds.
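Equations (\ref{QBER})--(\ref{entropy}) combine into a simple numerical pipeline from measured visibility to a key-rate bound. The sketch below assumes an illustrative CASCADE inefficiency $f(q)=1.15$ (a typical literature value, not one stated in this paper); the function names are ours:

```python
import math

def h2(x):
    """Binary entropy H2(x) in bits, Equation (5)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def qber_from_visibility(v_tot):
    """Equation (2): q = (1 - V_tot) / 2."""
    return (1.0 - v_tot) / 2.0

def secure_key_fraction(q, f=1.15):
    """Koashi-Preskill lower bound per coincidence, Equation (4):
    R / P_c >= (1/2) [1 - f(q) H2(q) - H2(q)].
    f = 1.15 is an assumed CASCADE inefficiency, for illustration only."""
    return 0.5 * (1.0 - f * h2(q) - h2(q))

# Visibility from the 'source at Alice' run: V_tot = 86.2% gives
# q = 0.069, matching the 6.9% QBER reported in Table 1.
q = qber_from_visibility(0.862)
print(round(q, 3))
# A positive fraction means a secure key can still be distilled.
print(secure_key_fraction(q) > 0)
```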
\section{Description of the experiments and results}
We implemented three different experimental QKD scenarios (see Figure \ref{Setupfigure}). In all three experiments, one photon of an entangled pair was sent to Bob via a 144 km free-space link, established between the islands of La Palma and Tenerife. The first and the second experiment were both asymmetrical with respect to the channel losses for Alice and Bob. In the first experiment the SPDC source was placed at Alice (\emph{source at Alice}) and one photon of an entangled pair was measured directly at the source. In the second experiment (\emph{source asymmetric in between Alice and Bob}), Alice's photon was sent through a 6 km single-mode fiber before it was analyzed. In the third scenario both photons were sent via the 144 km free-space link to a common receiver where they were split up and analyzed separately. This can be seen as an effective realization of the \emph{source in the middle} scheme, since the channel losses for Alice's and Bob's photons are equal. \par For an overview of the different scenarios and the corresponding results, please refer to Table \ref{summary}. A detailed discussion will be given in the next sections.
\begin{table}[!htb] \begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\textbf{Scenario} & \textbf{Attn.} & \textbf{local pair rate} & $\textbf{V}_{\textbf{tot}}$ & \textbf{QBER} & \textbf{secure key rate}\\
source... & [dB] & [MHz] & [\%] & [\%] & [bits/s]\\ \hline\hline
... at Alice (Figure \ref{Setupfigure}a) & 35 & 0.55 & 86.2 & 6.9 & 24 \\
... asymmetric in between (Figure \ref{Setupfigure}b) & 58 & 2.5 & 86.2 & 6.8 & 0.6 \\
... in the middle (Figure \ref{Setupfigure}c) & 71 & 1 & 92 & 4 & 0.02 \\ \hline
\end{tabular} \end{center} \caption{A summary of the parameters for the three experimental scenarios and the corresponding results, i.e., the total two-photon attenuation, the locally detected coincidence rate, the total visibility $V_{tot}$, the quantum bit error ratio (QBER) and the finally obtained secure key rate.} \label{summary} \end{table}
\begin{figure}\label{Setupfigure}
\end{figure}
\subsection{Source at Alice}\label{35dB} The experimental situation is depicted in Figure \ref{Setupfigure}a. The SPDC source \cite{Fedrizzi07b} was located in La Palma and generated photon pairs in the entangled state (\ref{psiminus}). The photons in modes \emph{a} and \emph{b} were coupled into single-mode fibers, Alice's photon in mode \emph{a} was analyzed and detected locally after only 1 meter of single-mode fiber, while the photon in mode \emph{b} was sent through a 144 km free-space channel to Tenerife. There it was collected by the 1 meter diameter telescope of the optical ground station (OGS) operated by the European Space Agency ESA and analyzed by Bob.
\par Each polarization analyzer consisted of an electro-optical modulator (EOM), a polarizing beam splitter (PBS) and two single-photon avalanche diodes. Triggering the EOMs by independent quantum random number generators, the analyzer modules randomly switched between the complementary analyzing bases $|H,V\rangle$ and $|P,M\rangle$ as required for the BBM92 protocol \cite{Bennett92}. At Alice and Bob, every detection event (including arrival time, detector channel and EOM setting information) was recorded onto local computer hard disks, using time-tagging units disciplined by the global positioning system (GPS) time standard. Note that using active analyzers triggered by a quantum random number generator represents a security advantage over passive QKD systems, because it prevents an eavesdropper from applying certain side-channel attacks \cite{Ma07,Makarov05} (e.g. the faked-states attack). \par In the first scenario implemented, the free-space channel attenuation for photons in mode \emph{b} was measured to be approximately 32~dB on average (including all optical elements), while only half of the locally measured photons (3~dB) in mode \emph{a} were lost in Alice's analyzer module. The total two-photon attenuation was therefore 35~dB.
The SPDC source generated entangled photon pairs at an estimated rate of 7~MHz, limited by the peak count rate of Alice's detector system. After single-mode fiber coupling, 550\,000 coincidences per second were observed locally, corresponding to a combined coupling and detection efficiency of 28\%. The darkcount rate at Alice was 500~Hz, while Bob's detectors showed an average of 1200~Hz. For these parameters and a coincidence window of $\tau_c=1.5~\mbox{ns}$, theory predicts an upper bound of $V_{th}=94.1\%$ for the total visibility, which includes the initially measured system visibility $V_{sys}=96\%$ as well as the background and multi-pair emission limited visibility of $V_{acc}=98\%$. However, the actual visibility of the transmitted entangled state was measured to be $V_{tot}=86.2\%$ on average. The discrepancy with $V_{th}$ was most probably caused by a polarization drift in the fiber connecting the source with the transmitter telescope during the measurements. \par The result of a typical measurement is shown in Figure \ref{keyrates35dB}. In total, three measurements were performed, yielding sifted keys of 11024~bits (130 s integration time), 13642~bits (190 s integration time) and 16851~bits (190 s integration time), respectively. During a measurement run, the free-space link usually undergoes strong atmospheric turbulence, resulting in a time-dependent two-photon attenuation. This is reflected by the time-resolved sifted key rates (see Figure \ref{keyrates35dB}). The QBERs for these measurement runs, 6.6\%, 7.3\% and 6.9\%, respectively, were obtained by comparing the sifted keys and are in good agreement with the measured overall visibility. Finally, applying Equation (\ref{koashipreskill}) with $f(q)\approx1.18$ yields an averaged secure key rate of approximately 24~bits/s.
A comparison of all experimental data points to the theoretically calculated secure key rate as a function of overall link attenuation is shown in Figure \ref{theoryplusdatapoints}.
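As a quick consistency check (our own back-of-the-envelope estimate, not part of the original analysis), the quoted visibilities and error rates are linked by the standard relation $\mathrm{QBER}\approx(1-V)/2$ for entanglement-based QKD, and the theoretical bound factorizes as $V_{th}\approx V_{sys}\cdot V_{acc}$:

```python
def qber_from_visibility(v):
    # standard relation between two-photon visibility and QBER
    return (1.0 - v) / 2.0

# scenario 1: V_tot = 86.2 % gives QBER ~ 6.9 %, matching the measured 6.6-7.3 %
print(qber_from_visibility(0.862))
# theoretical bound of this scenario: V_sys * V_acc = 0.96 * 0.98 ~ 0.941 = V_th
print(0.96 * 0.98)
```

The same relation reproduces the $\approx 4\%$ QBER inferred from $V_{tot}=92\%$ in the third scenario.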
\begin{figure}
\caption{The result of a typical QKD measurement for the scenario shown in Figure \ref{Setupfigure}a, where Alice is located at the source and the two-photon attenuation is 35~dB. The sifted key rates in 0.2 s bins are plotted versus measurement time. The strong intensity fluctuations of the free-space link are reflected in the fluctuations of the key rates. Accumulating three such measurements and applying Koashi and Preskill's security analysis yielded an average secure key rate of approximately 24~bits/s.}
\label{keyrates35dB}
\end{figure}
\subsection{Source asymmetric in between Alice and Bob} The second scenario extends the \emph{source at Alice} scheme (see section \ref{35dB}), such that the photons in mode \emph{a} were delayed by 29.6~$\mu$s in a 6 km long single-mode fiber (see Figure \ref{Setupfigure}b). The attenuation of the fiber was measured to be 17~dB and during this particular measurement series, the free-space link attenuation was 38~dB. Combined with the 3~dB loss in Alice's analyzer, the overall two-photon attenuation was 58~dB. For this experiment, we increased the output of the SPDC source to a pair generation rate of $32~\mbox{MHz}$ by operating at the maximum available pump laser power of 50~mW. Due to detector saturation, the fiber-coupled and locally detectable pair rate could only be extrapolated to be 2.5~MHz. In this situation, the initial system visibility was $V_{sys}=94\%$, a slight reduction compared to the 35~dB scenario, caused by the delay fiber. With the same coincidence window ($\tau_c=1.5~\mbox{ns}$) and darkcount rates as in the first experiment (500~Hz at Alice and 1200~Hz at Bob), the theoretical upper bound for this scheme turns out to be $V_{th}=88\%$, and the measured total visibility of the entangled state at the receiver was, at $V_{tot}=86.2\%$, coincidentally the same as in the first scenario. \par A typical measurement result is depicted in Figure \ref{keyrates58dB}. In total, two sifted keys were obtained, containing 1107~bits (580 s integration time) and 1684~bits (880 s integration time). The corresponding QBERs were 6.9\% and 6.8\%, respectively, and a secure key rate of 0.6~bits/s could be obtained from Equation (\ref{koashipreskill}) with $f(q)\approx1.18$. For this scenario both the expected visibility and the key rate agree very well with the model (see Figure \ref{theoryplusdatapoints}).
\begin{figure}
\caption{A typical measurement run for the scenario shown in Figure \ref{Setupfigure}b, where the source was arranged asymmetrically between Alice and Bob, resulting in a two-photon attenuation of 58~dB. Accumulating data for approximately 1400 s, an average secure key rate of 0.6~bits/s was obtained.}
\label{keyrates58dB}
\end{figure}
\subsection{Source in the middle} The experimental situation for the third scenario, \emph{source in the middle}, is depicted in Figure \ref{Setupfigure}c. The entangled photons in mode \emph{a} and mode \emph{b} were coupled into single-mode fibers, guided to two separate transmitter telescopes and sent through a 144 km free-space channel to one common receiver in Tenerife. The total two-photon attenuation was measured to be about 71~dB (including all optical components). For a detailed description of the setup please refer to \cite{fedrizzi09a}.
The source produced photon pairs at a rate of 10~MHz, of which 3.3~MHz single photons and 1~MHz photon pairs were detected locally. Both photons were sent via two telescopes over the 144 km free-space links to Tenerife. On average, 0.071 transmitted photon pairs/s could be detected, using a coincidence window of 1.25 ns. Each detector registered a background count rate of 400~Hz. Accumulating data for a total of 10800 seconds, we measured an average visibility of the transmitted entangled state of $V_{tot}=92\%$ (with $V_{sys}=99\%$ and $V_{acc}=94\%$), which is very close to the theoretical upper bound of $V_{th}=91.7\%$. Based on these measurements we inferred that a QKD experiment employing a similar setup would have yielded a QBER of approximately 4\%. From the coincidence rate and the QBER-dependent performance of the classical key distillation protocols ($f(0.04)\approx1.16$), we estimate that our experiment would have yielded a final secure bit rate of approximately 0.02~bits/s (see Figure \ref{theoryplusdatapoints}). However, the implementation of a full QKD experiment was not possible, because only one receiver station and analyzer module was available in Tenerife.
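The quoted estimate can be reproduced from the numbers above. The following sketch (our own calculation, applying the Koashi--Preskill bound of Equation (\ref{koashipreskill}) per detected coincidence rather than per pump pulse) uses the measured pair rate, the inferred QBER and the CASCADE efficiency:

```python
import math

def h2(x):
    """Binary entropy function H_2(x)."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# measured/inferred values for the source-in-the-middle scenario
coincidences_per_s = 0.071   # detected pairs/s over both 144 km links
q = 0.04                     # inferred QBER
f = 1.16                     # CASCADE inefficiency at q = 0.04

rate = 0.5 * coincidences_per_s * (1 - f * h2(q) - h2(q))
print(rate)  # ~0.017 bits/s, consistent with the quoted ~0.02 bits/s
```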
\begin{figure}
\caption{A comparison of the obtained results with the theoretical model described in the main text. The solid curve (\emph{source at Alice}) starts at a two-photon attenuation of 3~dB, which corresponds to the fixed loss in Alice's analyzer module. The dashed curve represents the scheme with the \emph{source asymmetrically between Alice and Bob}, where the attenuation for Alice's fiber channel together with the analyzer module was 20~dB. The dotted-dashed curve is predicted by our model for the \emph{source in the middle} scheme. The three experimentally obtained secure bit rates are depicted as the square, the circle and the triangle, respectively. It is easy to see that the data point for the \emph{source in the middle} scenario (triangle) cannot be explained by the models for the asymmetric cases. Similarly, the data point of the experiment with the \emph{source asymmetrically between Alice and Bob} cannot be explained by the model for the \emph{source at Alice} scheme. Thus the advantage of the symmetric scenario is clearly verified by our experimental results. Furthermore, these results also show that our system should be able to generate secure keys at high rates, from 10~kbit/s at 15~dB up to 100~kbit/s at 3~dB, a range typical for free-space links in metropolitan areas.
}
\label{theoryplusdatapoints}
\end{figure}
\subsection{Clock synchronization} As an additional feature, our coincidence search algorithm used during the first two experiments could be utilized to synchronize the individual time bases at Alice and Bob within 0.5 ns. For the third experiment such a synchronization was not necessary, because one and the same time-tagging system was used for the measurements. \par Coincidence events between Alice and Bob were identified by calculating the cross-correlation function of the individual time-tagging data sets. A peak in the cross-correlation function indicated the current time offset $\Delta t$ between the time scales of the receiver units. Initially, Alice's and Bob's time bases were both disciplined by the GPS time standard. However, the two individual GPS receivers exhibited a relative drift during a measurement run. By analyzing the data in blocks of adjustable length, our software measured and compensated for this relative drift with 0.5 ns resolution by temporal alignment of the data blocks. The data of the first experiment described were analyzed in blocks of 5 s length, while the data obtained in the second experiment were analyzed in blocks of 30 s length. The corresponding results concerning the relative drift of Alice's and Bob's time bases are depicted in Figure \ref{gpsdrift}.
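A minimal sketch of such a cross-correlation-based offset search (hypothetical code for illustration only; the actual coincidence software may differ in its details) could look as follows, assuming time tags in nanoseconds:

```python
import numpy as np

def find_offset(tags_a, tags_b, bin_ns=0.5, window_ns=100.0):
    """Estimate the relative time offset (in ns) between two time-tag
    streams by histogramming both with bin_ns resolution and locating
    the peak of their cross-correlation within +/- window_ns."""
    t0 = min(tags_a.min(), tags_b.min())
    t1 = max(tags_a.max(), tags_b.max())
    edges = np.arange(t0, t1 + bin_ns, bin_ns)
    ha, _ = np.histogram(tags_a, bins=edges)
    hb, _ = np.histogram(tags_b, bins=edges)
    max_lag = int(window_ns / bin_ns)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(ha, np.roll(hb, -lag)) for lag in lags]
    return float(lags[int(np.argmax(corr))] * bin_ns)

# synthetic example: Bob's time base lags Alice's by 12.5 ns
rng = np.random.default_rng(1)
alice = np.sort(rng.uniform(0.0, 1e5, 2000))
bob = alice + 12.5
offset = find_offset(alice, bob)
print(offset)  # recovers the 12.5 ns shift with 0.5 ns resolution
```

Running this search repeatedly on consecutive data blocks, as described above, tracks the relative drift of the two GPS-disciplined clocks.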
\begin{figure}
\caption{ This plot shows the relative drift between Alice's and Bob's local time-tagging systems that were directly disciplined by the global positioning system. Due to the relative drift (vertical axis of the plot), the offset for the coincidence analysis was adapted by recalculating the cross-correlation. For the three measurements within the \emph{source at Alice} scheme the recalculation was performed in steps of 5 s (black, red and green curve), while for the two measurements within the \emph{source asymmetric in between Alice and Bob} scenario it was performed every 30 s (blue and light blue curve).
}
\label{gpsdrift}
\end{figure}
\section{Conclusion} We experimentally studied entanglement-based QKD in three different high-attenuation scenarios on a 144~km free-space link between the two Canary Islands La Palma and Tenerife in order to verify the symmetry advantage of such an implementation. This involved placing the source directly at Alice, asymmetrically between Alice and Bob, and finally symmetrically between the two parties. Our results demonstrate that in the symmetric case (\emph{source in the middle}) secure keys can be generated up to a channel attenuation of 70~dB, a regime in which asymmetric schemes fail. We conclude that entanglement-based QKD systems are the systems of choice for long-distance quantum communication. Compared to the expected link attenuation of about 30~dB in a low earth orbit (LEO) satellite-to-ground scenario, our results show that entanglement-based systems are suitable for use in either a single-link (\emph{source at Alice}) or a two-link (\emph{source in the middle}) configuration. They also imply that, using technology similar to that of our experiment, it should readily be possible to implement QKD over distances of 300 km. This also holds true for fiber-based QKD, as there the attenuation is typically of the order of 0.2~dB/km, resulting in similar total attenuations.
\end{document}
\begin{document}
\title[The $\infty$-{F}u{\v{c}}{\'{\i}}k spectrum]{The $\infty$-{F}u{\v{c}}{\'{\i}}k spectrum}
\author[J.V. da Silva, J.D. Rossi and A.M. Salort]{Jo\~{a}o V. da Silva, Julio D. Rossi and Ariel M. Salort}
\address{Departamento de Matem\'atica, FCEyN - Universidad de Buenos Aires and
\break \indent IMAS - CONICET
\break \indent Ciudad Universitaria, Pabell\'on I (1428) Av. Cantilo s/n.
\break \indent Buenos Aires, Argentina.}
\email[J.D. Rossi]{jrossi@dm.uba.ar} \urladdr{http://mate.dm.uba.ar/~jrossi}
\email[A.M. Salort]{asalort@dm.uba.ar} \urladdr{http://mate.dm.uba.ar/~asalort}
\email[J. V. da Silva]{jdasilva@dm.uba.ar}
\subjclass[2010]{35B27, 35J60, 35J70}
\keywords{ {F}u{\v{c}}ik spectrum, Degenerate fully nonlinear elliptic equations, Infinity-Laplacian operator}
\begin{abstract} In this article we study the behavior as $p \nearrow+\infty$ of the {F}u{\v{c}}ik spectrum for the $p$-Laplace operator with zero Dirichlet boundary conditions in a bounded domain $\Omega\subset \mathbb R^n$. We characterize the limit equation and provide a description of the limit spectrum. Furthermore, we show some explicit computations of the spectrum for certain configurations of the domain.
\end{abstract} \maketitle
\section{Introduction}
Given a bounded smooth domain $\Omega\subset \mathbb R^n$, $n\geq 1$, we are interested in studying the asymptotic behavior as $p\to \infty$ of the following non-linear eigenvalue problem \begin{equation}\label{ecu} \left\{ \begin{array}{ll}
-\Delta_p u(x) = \alpha_p(u^+)^{p-1}(x)- \beta_p(u^-)^{p-1}(x) & \text{in } \Omega \\
u(x) = 0 & \text{on } \partial\Omega, \end{array} \right. \end{equation}
where $\Delta_p u\mathrel{\mathop:}= \div(|\nabla u|^{p-2}\nabla u)$ denotes the $p$-Laplace operator and $\alpha_p$ and $\beta_p$ are two real parameters. As usual, $u^\pm = \max\{0,\pm u\}$ mean the positive and negative parts of $u$. Recall that the set $$
\Sigma_p \mathrel{\mathop:}= \{(\alpha_p,\beta_p)\in \mathbb R^2 \colon \,\, \mbox{there exists a nontrivial solution } u \mbox{ of }\eqref{ecu} \} $$ is known as the \textit{Fu{\v{c}}{\'{\i}}k spectrum}, in honor of the Czech mathematician Svatopluk Fu{\v{c}}{\'{\i}}k, who, in the late 1970s, studied this kind of equation in one space dimension with periodic boundary conditions and its relationship with jumping nonlinearities. More precisely, in \cite{Fucik-libro} it was proved that $\Sigma_2$ for $\Omega = (a,b) \subset \mathbb R$ consists of two trivial lines and a family of hyperbolic-like curves passing through the pairs $(\lambda,\lambda)$, where $\lambda$ is an eigenvalue of the (zero) Dirichlet Laplacian in the interval $(a,b)$. Explicit formulas for such curves were also found. In the one-dimensional case with $p\neq 2$, the structure of the spectrum is similar; see for instance \cite{Dra-92}. Throughout the last decades several works have been devoted to studying $\Sigma_p$ in $\mathbb R^n$ (for $n\geq 1$), and the bibliography on this subject is vast. For the linear case $p=2$ we refer the reader to the papers \cite{Cu-Go-92, DAN, Da-93, dF-Go-94, Fu-80, Mi-94,S}. For $p\neq 2$ we refer, for instance, to \cite{Cu-dF-Go-99,Cu-dF-Go-98, Pe-04,PS,PS2}.
Observe that problem \eqref{ecu} is closely related to the eigenvalue problem of the (zero) Dirichlet $p$-Laplacian: when the parameters $\alpha_p$ and $\beta_p$ are taken to be equal, \eqref{ecu} becomes \begin{equation} \label{plap} \left\{ \begin{array}{ll}
-\Delta_p u(x) = \lambda |u(x)|^{p-2}u(x) & \text{in } \Omega \\
u(x) = 0 & \text{on } \partial \Omega \end{array} \right. \end{equation} and it follows that the pair $(\lambda_{k,p},\lambda_{k,p})$ belongs to $\Sigma_p$ for each $k\in\mathbb N$, where $\lambda_{k,p}$ denotes the $k$-th (variational) eigenvalue of \eqref{plap}. It is also straightforward to see that the trivial lines $\{\lambda_{1,p}\}\times \mathbb R$ and $\mathbb R\times \{\lambda_{1,p}\}$ belong to $\Sigma_p$. The following facts are well known in the literature, see \cite{Cu-Go-92, DAN, Da-93, dF-Go-94, Fu-80, Mi-94} and \cite{Cu-dF-Go-99,Cu-dF-Go-98, Pe-04}: the trivial lines are isolated in the spectrum, and curves in $\Sigma_p$ emanating from each pair $(\lambda_{k,p},\lambda_{k,p})$ exist locally. Moreover, it is proved that the spectrum contains a continuous non-trivial first curve passing through $(\lambda_{2,p},\lambda_{2,p})$, which is, in fact, asymptotic to the trivial lines, and it admits a variational characterization.
Let us recall some important properties of the spectrum of the $p$-Laplacian. For problem \eqref{plap} there exists a sequence of eigenvalues $(\lambda_{k,p})_{k \geq 1}$ tending to infinity for which there are nontrivial solutions to \eqref{plap} (note that, in general, it is not known whether this sequence constitutes the whole spectrum), see \cite{GP1}. It is also known (cf. \cite{Anane}) that the first eigenvalue of \eqref{plap} is isolated, simple and can be variationally characterized as \begin{equation}\tag{{\bf \text{Eigenv.}}}\label{eqEigen}
\lambda_{1,p} (\Omega) = \inf_{u \in W^{1,p}_0 (\Omega) \setminus \{0\}} \frac{\| \nabla u\|^p_{L^p (\Omega)}}{\| u\|^p_{L^p (\Omega)}}. \end{equation}
In the last three decades there has been an increasing number of works concerning limits of $p$-Laplacian type problems as $p \to +\infty$. In this direction, one of the pioneering works is \cite{BdBM}, where the limit of torsional creep type problems for the $p$-Laplacian was studied, namely $$
-\Delta_p u_p(x) = 1 \quad \text{in} \quad \Omega, $$
obtaining as ``limit equation'' $|\nabla u| = 1 \quad \text{in} \quad \Omega$ (the well-known \textit{eikonal equation}) in the viscosity sense. Moreover, $u(x) = \dist(x, \partial \Omega)$ is the corresponding limit solution (more general problems are also studied there). On the other hand, regarding the so-called $\infty$-eigenvalue problem, the main reference is \cite{JLM}, where the authors proved that such a quantity is obtained as a limit of the first eigenvalue \eqref{eqEigen} in the following way $$
\lambda_{1,\infty}(\Omega)=\lim_{p\to\infty} \lambda_{1,p}^{1/p}(\Omega). $$ An interesting piece of information is that such an $\infty$-eigenvalue admits a geometric characterization in terms of the radius of the biggest ball inscribed in $\Omega$: \begin{equation} \label{lam1}
\lambda_{1,\infty}(\Omega)= \frac{1}{\mathfrak{r}} \end{equation} where $\mathfrak{r}(\Omega)=\max\limits_{x\in\Omega} \dist(x,\partial \Omega)$. Moreover, \cite{JLM} also establishes that, up to subsequences, as $p\to\infty$ in \eqref{plap}, uniform limits, $\displaystyle u(x) = \lim_{p \to \infty} u_p(x)$, satisfy the following limit equation \begin{equation}\label{plap.infty} \left\{ \begin{array}{ll}
\min \{-\Delta_\infty u(x), |\nabla u(x)|-\lambda_{1,\infty}(\Omega) u(x)\} = 0 & \text{in } \Omega \\
u(x) = 0 & \text{on } \partial \Omega \end{array} \right. \end{equation} in the viscosity sense, where $$
\displaystyle \Delta_\infty u(x) \mathrel{\mathop:}= \sum_{i, j=1}^{N} \frac{\partial u}{\partial x_j}(x)\frac{\partial^2 u}{\partial x_j \partial x_i}(x) \frac{\partial u}{\partial x_i}(x) $$ is the nowadays well-known \textit{Infinity-Laplacian operator}. Recall that solutions to \eqref{plap.infty} minimize \begin{equation}\label{eqInfQuot}
\frac{\|\nabla u\|_{L^{\infty}(\Omega)}}{\|u\|_{L^{\infty}(\Omega)}} \end{equation} over all functions in $W^{1, \infty}_0(\Omega)\setminus \{0\}$. Although the function $u(x) = \dist(x, \partial \Omega)$ minimizes \eqref{eqInfQuot}, it is not always a viscosity solution to \eqref{plap.infty} (cf. \cite{JLM} for more details). Thereafter, in \cite{Ju-Li-05} it was proved that the limit of the second eigenvalue of \eqref{plap} (note that such an eigenvalue is also variational) exists and is obtained as $$
\lambda_{2,\infty}(\Omega)=\lim_{p\to\infty}\lambda_{2,p}^{1/p}(\Omega). $$ Furthermore, as before, this value also admits a geometric characterization given by \begin{equation} \label{lam2}
\lambda_{2,\infty}(\Omega)= \frac{1}{\mathfrak{R}} \end{equation} where $$
\mathfrak{R}(\Omega) = \sup \left\{r>0\colon \,\, \exists \,\, B_r^1, B_r^2 \subset \Omega \,\,\,\text{such that} \,\,\, B_r^1 \cap B_r^2 = \emptyset\right\}. $$ In this case, a uniform limit to \eqref{plap} satisfies the following limit equation in the viscosity sense $$ \left\{ \begin{array}{ll}
\min\{-\Delta_{\infty}\, u(x), |\nabla u(x)|- \lambda_{2,\infty}(\Omega)u(x)\} = 0 & \text{in } \{u>0\} \cap \Omega \\
\max\{-\Delta_{\infty}\,u(x), -|\nabla u(x)|-\lambda_{2,\infty}(\Omega)u(x)\} = 0 & \text{in } \{u<0\} \cap \Omega \\
-\Delta_{\infty}\,u(x) = 0 & \text{in } \{u=0\} \cap \Omega \\
u(x) = 0 & \text{on } \partial \Omega. \end{array} \right. $$
Concerning limits of higher eigenvalues of \eqref{plap} we also refer the reader to the article \cite{Ju-Li-05}. Although it has been proved in \cite{Ju-Li-05} that the set of such $\infty$-eigenvalues is unbounded, a geometric characterization beyond $\lambda_{2,\infty}(\Omega)$ has not been achieved. However, in the one-dimensional case, with $\Omega$ the unit interval $(0,1)$, the spectrum can be computed explicitly: it is the sequence $\{\lambda_{k,\infty}\}_{k\in\mathbb N}$ given by \begin{equation} \label{lamk.1d} \lambda_{1,\infty}=2, \qquad \lambda_{k,\infty}=k\,\lambda_{1,\infty}=2k, \quad k\in \mathbb N. \end{equation} For more results concerning the $\infty$-eigenvalue problem we refer to \cite{Champion, Crasta, Hynd, Navarro, Yu} and references therein.
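The geometric characterizations \eqref{lam1} and \eqref{lam2} are easy to test numerically. The following sketch (our own illustration, not taken from the references) approximates the inradius $\mathfrak{r}$ of a rectangle on a grid and recovers $\lambda_{1,\infty}$:

```python
import numpy as np

def inradius_rectangle(a, b, n=401):
    """Grid approximation of r(Omega) = max_x dist(x, boundary)
    for the rectangle (0,a) x (0,b)."""
    x = np.linspace(0.0, a, n)
    y = np.linspace(0.0, b, n)
    X, Y = np.meshgrid(x, y)
    dist = np.minimum(np.minimum(X, a - X), np.minimum(Y, b - Y))
    return float(dist.max())

r = inradius_rectangle(1.0, 1.0)
print(1.0 / r)  # ~2.0 = lambda_{1,infty} of the unit square
```

For the unit square one can also check that two disjoint balls of maximal radius sit along the diagonal, giving $\mathfrak{R}=1/(2+\sqrt{2})$ and hence $\lambda_{2,\infty}=2+\sqrt{2}\approx 3.41$.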
To the best of our knowledge, there is up to date no investigation of the asymptotic behavior of the {F}u{\v{c}}{\'{\i}}k spectrum as $p$ diverges. Therefore, in this manuscript we turn our attention to studying both the structure and the characterization of the \textit{$\infty$-{F}u{\v{c}}{\'{\i}}k spectrum}. Furthermore, for some particular configurations of the domain $\Omega$, we are able to perform explicit computations of the spectrum.
In our first theorem we derive the equation associated with the $\infty$-{F}u{\v{c}}{\'{\i}}k spectrum, obtained by letting $p\to \infty$ in equation \eqref{ecu}.
\begin{thm}\label{teo.eq}
Let $(\alpha_p, \beta_p)_{p> 1}$, with $(\alpha_p,\beta_p) \in \Sigma_p$, be such that $\alpha_p^{1/p}$ and $\beta_p^{1/p}$ are bounded, and let $u_p \in W^{1, p}_0(\Omega)$ be a corresponding eigenfunction normalized by $\|u_p\|_{L^p(\Omega)} = 1$. Then, up to a subsequence, the limits $$ (\alpha_{\infty}, \beta_{\infty}) = \lim_{p \to \infty} \left(\alpha_p^{1/p}, \beta_p^{1/p}\right) \qquad \text{ and } \qquad u_\infty(x) = \lim_{p\to\infty} u_p(x) \quad (\text{uniformly in }\Omega) $$ exist. Moreover, the limit $u_{\infty}$ belongs to $W^{1,\infty}_0(\Omega)$ and is a viscosity solution to \begin{equation} \label{ecu.infty} \left\{ \begin{array}{ll}
\min\{-\Delta_{\infty}\,u_{\infty}(x), |\nabla u_{\infty}(x)|-\alpha_{\infty} u_{\infty}^{+}(x)\} = 0 & \text{in } \{u_{\infty}>0\} \cap \Omega \\
\max\{-\Delta_{\infty}\,u_{\infty}(x), -|\nabla u_{\infty}(x)|+\beta_{\infty} u_{\infty}^{-}(x)\} = 0 & \text{in } \{u_{\infty}<0\} \cap \Omega \\
-\Delta_{\infty}\,u_{\infty}(x) = 0 & \text{in } \{u_{\infty}=0\} \cap \Omega \\
u_{\infty}(x) = 0 & \text{on } \partial \Omega. \end{array} \right. \end{equation} \end{thm}
Regarding the limit equation, we define the $\infty-${F}u{\v{c}}{\'{\i}}k spectrum as $$
\Sigma_\infty \mathrel{\mathop:}= \Big\{(\alpha,\beta)\in \mathbb R^2 \colon \,\, \mbox{there exists a nontrivial viscosity solution } u \mbox{ of }\eqref{ecu.infty} \Big\}. $$ Such a $u$ is called an eigenfunction of the pair $(\alpha,\beta)$. Observe that, by construction, eigenfunctions of \eqref{ecu.infty} belong to $W^{1,\infty}_0(\Omega)$.
When $\Omega$ is the unit interval in $\mathbb R$, a full characterization of the limit of the $p$-{F}u{\v{c}}{\'{\i}}k spectrum is obtained. \begin{thm} \label{teo.1d} The limit of the spectrum $\Sigma_p$ as $p\to \infty$ when $\Omega$ is the unit interval $(0,1)\subset \mathbb R$ is given by $$ \displaystyle \Sigma_\infty = \bigcup\limits_{k=1}^{\infty} \mathcal{C}_{k,\infty}^\pm, $$ where \begin{align*} &\mathcal{C}_{k,\infty}=\left\{ (k(1+s^{-1}),k(1+s)), \, s \in \mathbb R^+ \right\} &\quad \text{ if }k \text{ is even}\\ &\mathcal{C}_{k,\infty}^{+}=\left\{ (k-1+s^{-1}(k+1), k+1+s(k-1)), \, s \in \mathbb R^+ \right\} &\quad \text{ if }k \text{ is odd}\\ &\mathcal{C}_{k,\infty}^{-}=\left\{ (k+1+ s^{-1}(k-1), k-1+s(k+1)), \, s \in \mathbb R^+ \right\} &\quad \text{ if }k \text{ is odd}. \end{align*} \end{thm}
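As a quick consistency check of the parametrizations in Theorem \ref{teo.1d} (our own verification script, not part of the proof), at $s=1$ every branch must meet the diagonal at the point $(2k,2k)$ corresponding to the pure $\infty$-eigenvalues of the unit interval:

```python
def curve_point(k, s, branch=+1):
    """A point of C_{k,infty} (k even) or C_{k,infty}^{+/-} (k odd),
    following the parametrizations of Theorem 1.2; `branch` selects
    the +/- branch for odd k and is ignored for even k."""
    if k % 2 == 0:
        return (k * (1 + 1.0 / s), k * (1 + s))
    if branch > 0:
        return (k - 1 + (k + 1) / s, k + 1 + s * (k - 1))
    return (k + 1 + (k - 1) / s, k - 1 + s * (k + 1))

# at s = 1 every branch passes through the diagonal point (2k, 2k)
for k in range(1, 7):
    for branch in (+1, -1):
        a, b = curve_point(k, 1.0, branch)
        assert a == b == 2 * k
print("all curves pass through (2k, 2k) at s = 1")
```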
In the higher dimensional case the first nontrivial curve in the $\infty$-{F}u{\v{c}}{\'{\i}}k spectrum can be characterized as follows. \begin{thm} \label{teo.rn} The trivial lines in the spectrum of \eqref{ecu.infty} are given by $$
\mathcal{C}_{1,\infty}^{+} = \mathbb R\times \left\{\frac{1}{\mathfrak{r}}\right\} \quad \text{ and } \quad \mathcal{C}_{1,\infty}^{-}= \left\{\frac{1}{\mathfrak{r}}\right\} \times \mathbb R, $$ where $\mathfrak{r}=\mathfrak{r}(\Omega)$ is the radius of the biggest ball inscribed in $\Omega$. Moreover, the first non-trivial curve in $\Sigma_\infty$ is parametrized as \begin{equation} \label{curva.2}
\mathcal{C}_{2,\infty} = \{ (\alpha_\infty(t),\beta_\infty(t)), \,\,\,t\in \mathbb R^+\} \end{equation} where $\alpha_\infty(t)=t^{-1} c_\infty(t)$ and $\beta_\infty(t)=c_\infty(t)$, and $$
c_\infty(t)=\inf_{\mathcal{P}_2(\Omega)} \max\left\{\frac{t}{\mathfrak{r}(\omega_1)}, \frac{1}{\mathfrak{r}(\omega_2)}\right\}, \,\,\, t\in \mathbb R^+. $$ Here $\mathcal{P}_2(\Omega)$ denotes the class of all partitions of $\Omega$ into two disjoint and connected subsets and, given $(\omega_1, \omega_2)\in \mathcal{P}_2(\Omega)$, $\mathfrak{r}(\omega_i)$ stands for the radius of the biggest ball inscribed in $\omega_i$ ($i=1,2$).
Moreover, the trivial curves $\mathcal{C}_{1,\infty}^{+}$ and $\mathcal{C}_{1,\infty}^{-}$ intersect the second curve $\mathcal{C}_{2,\infty} $ for almost any domain. In fact, the only exception where $\mathcal{C}_{2,\infty} $ is asymptotic to $\mathcal{C}_{1,\infty}^{+}$ and $\mathcal{C}_{1,\infty}^{-}$ is the ball, $\Omega = B_R$. \end{thm}
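For $\Omega=(0,1)$ the infimum defining $c_\infty(t)$ can be evaluated directly: a partition into $(0,c)$ and $(c,1)$ has inradii $c/2$ and $(1-c)/2$, and a short numerical minimization (our own sketch) recovers $c_\infty(t)=2(1+t)$, which reproduces the curve $\mathcal{C}_{2,\infty}$ of Theorem \ref{teo.1d} with $s=t$:

```python
import numpy as np

def c_infty_interval(t, m=200001):
    """Evaluate c_infty(t) for Omega = (0,1): minimize over the cut point c
    the maximum of t / r((0,c)) = 2t/c and 1 / r((c,1)) = 2/(1-c)."""
    c = np.linspace(1e-6, 1.0 - 1e-6, m)
    return float(np.min(np.maximum(2.0 * t / c, 2.0 / (1.0 - c))))

for t in (0.5, 1.0, 2.0):
    print(t, c_infty_interval(t), 2.0 * (1.0 + t))  # the last two columns agree
```

The minimum is attained when the two quantities balance, i.e. at $c=t/(1+t)$, which gives $c_\infty(t)=2(1+t)$ exactly.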
It is remarkable that, contrary to expectations, Theorem \ref{teo.rn} shows that $\mathcal{C}_{2,\infty}$ does not behave as a hyperbolic curve asymptotic to the trivial lines; in fact, in most cases (the only exception being the ball) it intersects them (compare with \cite{Cu-dF-Go-99, Cu-dF-Go-98, Dra-92, Pe-04} for the $p$-Laplacian counterpart). In Section \ref{sec4} we present a complete description of the behaviour of $\mathcal{C}_{2,\infty}$ (see in particular Corollary \ref{cor4.1}). Finally, in Section \ref{ClasFucikSpect} we present some interesting examples in order to illustrate this unusual phenomenon for $\mathcal{C}_{2,\infty}$.
In conclusion, we would like to highlight that our approach is flexible enough to be applied to other classes of degenerate operators with $p$-Laplacian structure. Some enlightening examples are
\begin{itemize}
\item[\checkmark] Anisotropic operators like the \textit{pseudo $p$-Laplacian} $$
\displaystyle \tilde{\Delta}_p u \mathrel{\mathop:}= \sum_{i=1}^{N} \frac{\partial }{\partial x_i}\left(\left|\frac{\partial u}{\partial x_i}\right|^{p-2}\frac{\partial u}{\partial x_i}\right). $$ The eigenvalue problem and its corresponding limit as $p \to \infty$ for such a class of operators were studied in \cite{BK}.
\item[\checkmark] Nonlocal operators like the \textit{fractional $p$-Laplacian} $$
\left(-\Delta\right)_p^s u(x) \mathrel{\mathop:}= \displaystyle C_{N, s, p}\,\text{P.V.}\int_{\mathbb R^N} \frac{|u(y)-u(x)|^{p-2}(u(y)-u(x))}{|x-y|^{N+sp}}\,dy, $$ where $p>1$, $s \in (0, 1)$, P.V. stands for the Cauchy principal value and $C_{N, s, p}$ is a normalizing constant. The mathematical tools needed to study the $\infty$-Fu{\v{c}}{\'{\i}}k spectrum for this class of operators can be found in \cite{FP}, \cite{BKGS}, \cite{LL} and \cite{PSY}. \end{itemize}
The manuscript is organized as follows: In Section \ref{Prel} we introduce the mathematical machinery (notation and definitions) and several auxiliary results which play an important role in the proof of Theorem \ref{teo.eq}. In Section \ref{LimProb} we prove Theorem \ref{teo.eq}. In Subsection \ref{sect-1-d} we study the one-dimensional case in detail; the general case is analyzed in Subsection \ref{GenCase}. Finally, Section \ref{ClasFucikSpect}
is devoted to presenting several examples where explicit computations of the spectrum are made, as well as the profiles of the corresponding solutions.
\section{Preliminary results}\label{Prel}
In this section we introduce some definitions and auxiliary lemmas we will use throughout this article.
Let us start by defining the notion of weak solution to \begin{equation} \label{plap.g}
- \Delta_p u = g(u) \quad \text{in} \quad \Omega, \end{equation}
where $g: \mathbb R \to \mathbb R$ is a continuous function. Since we will study the asymptotic behavior as $p \to \infty $, without loss of generality we can assume that $p>\max\{2,n\}$.
\begin{definition}\label{DefWS}
A function $u \in W^{1,p}(\Omega) \cap C(\Omega)$ is said to be a weak solution to \eqref{plap.g} if it fulfills
$$
\displaystyle \int_{\Omega} |\nabla u|^{p-2}\nabla u \cdot \nabla \phi\, dx = \int_{\Omega} g(u)\phi \,dx, \qquad \forall \phi \in C^{\infty}_0(\Omega).
$$ \end{definition} Since $p > 2$, \eqref{plap.g} is not singular at points where the gradient vanishes, and consequently, the mapping $$
x \mapsto \Delta_p \phi(x) = |\nabla \phi(x)|^{p-2}\Delta \phi(x) + (p-2)|\nabla \phi(x)|^{p-4}\Delta_{\infty} \phi(x) $$ is well-defined and continuous for all $\phi \in C^2(\Omega)$.
Next, we give the definition of viscosity solutions to \eqref{plap.g}. For the reader's convenience we refer to the survey \cite{CIL} on the theory of viscosity solutions.
\begin{definition}
An upper (resp. lower) semi-continuous function $u: \Omega \to \mathbb R$ is said to be a viscosity sub-solution (resp. super-solution) to \eqref{plap.g} if, whenever $x_0 \in \Omega$ and $\phi \in C^2(\Omega)$ are such that $u-\phi$ has a strict local maximum (resp. minimum) at $x_0$, then $$
-\Delta_p \phi(x_0) \leq g(\phi(x_0)) \quad (\text{resp.} \,\,\,\geq g(\phi(x_0))). $$ Finally, a function $u \in C(\Omega)$ is said to be a viscosity solution to \eqref{plap.g} if it is simultaneously a viscosity sub-solution and a viscosity super-solution. \end{definition}
Throughout this article, we will consider $g$ defined as $$
g(u(x)) = \alpha_p(u^{+})^{p-1}(x)- \beta_p(u^{-})^{p-1}(x). $$
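Purely as an illustration (not part of the analysis), the nonlinearity $g$ above, which is $(p-1)$-homogeneous on each half-line, can be sketched numerically as follows; the function name and the scalar signature are our own choices:

```python
# Sketch of g(u) = alpha_p*(u+)^(p-1) - beta_p*(u-)^(p-1),
# where u+ = max(u, 0) and u- = max(-u, 0) are the positive and negative parts.
def g(u, p, alpha_p, beta_p):
    u_plus = max(u, 0.0)
    u_minus = max(-u, 0.0)
    return alpha_p * u_plus ** (p - 1) - beta_p * u_minus ** (p - 1)

# g acts with weight alpha_p on the positive part and -beta_p on the negative part:
print(g(2.0, 4, 1.0, 1.0))   # alpha_p * 2^3 = 8.0
print(g(-2.0, 4, 1.0, 1.0))  # -beta_p * 2^3 = -8.0
```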
The following lemmas will be useful for our arguments.
\begin{lemma}\label{Lemma2.4} Assume $n<p < \infty$ and let $u \in W^{1, p}_0(\Omega)$ be a weak solution to \eqref{ecu} normalized by $\|u\|_{L^p(\Omega)} = 1$. Then, $u \in C^{0, \gamma}(\Omega)$, where $\gamma = 1- \frac{n}{p}$. Moreover, the following estimates hold: \begin{itemize}
\item[\checkmark] $L^{\infty}$-bounds
$$
\|u\|_{L^{\infty}(\Omega)} \leq \mathfrak{C}_1,
$$
\item[\checkmark] H\"{o}lder estimate
$$
\frac{|u(x)-u(y)|}{|x-y|^{\gamma}} \leq \mathfrak{C}_2,
$$
\end{itemize}
where $\mathfrak{C}_1$ and $\mathfrak{C}_2$ are constants depending on $n$, $\alpha_p$ and $\beta_p$. \end{lemma}
\begin{proof} Multiplying \eqref{ecu} by $u$ and integrating by parts we obtain $$
\int_{\Omega} |\nabla u|^p \, dx = \alpha_p\int_{\Omega} u_{+}^{p} \, dx + \beta_p\int_{\Omega} u_{-}^{p}\, dx \leq \max\{\alpha_p, \beta_p\}. $$ Now, by Morrey's estimates and the previous inequality, there is a positive constant $\mathfrak{C}=\mathfrak{C}(n,\Omega)$ independent of $p$ such that $$
\|u\|_{L^{\infty}(\Omega)} \leq \mathfrak{C} \|\nabla u\|_{L^p(\Omega)} \leq \mathfrak{C}\max\left\{\alpha_p^{1/p}, \beta_p^{1/p} \right\}, $$ which proves the first statement.
For the second part, since $p > n$, combining H\"{o}lder's inequality and Morrey's estimates we have $$
\frac{|u(x)-u(y)|}{|x-y|^{\gamma}} \leq \mathfrak{C} \|\nabla u\|_{L^n(\Omega)} \leq \mathfrak{C}|\Omega|^{\frac{p-n}{pn}}\|\nabla u\|_{L^p(\Omega)}\leq \hat{\mathfrak{C}}|\Omega|^{\frac{p-n}{pn}}, $$ where $\hat{\mathfrak{C}}$ depends only on $n$ and $\Omega$. \end{proof}
The last result implies that any family of weak solutions to \eqref{ecu} with $\alpha_p^{1/p}$, $\beta_p^{1/p}$ bounded is pre-compact in the uniform topology, which provides the uniform limit appearing in Theorem \ref{teo.eq}.
\begin{lemma} Let $\{u_p\}_{p>1}$ be a sequence of weak solutions to \eqref{ecu}. Suppose that $\left(\alpha_p^{1/p}, \beta_p^{1/p}\right) \to (\alpha_{\infty}, \beta_{\infty})$ as $p \to \infty$. Then, there exists a subsequence $p_i \to \infty$ and a limit function $u_{\infty}$ such that $$
\displaystyle \lim_{p_i \to \infty} u_{p_i}(x) = u_{\infty}(x) $$ uniformly in $\Omega$. Moreover, $u_{\infty}$ is Lipschitz continuous with $$
\frac{|u_{\infty}(x)-u_{\infty}(y)|}{|x-y|} \leq \mathfrak{C}\max\left\{\alpha_{\infty}, \beta_{\infty}\right\}. $$ \end{lemma}
\begin{proof} Existence of $u_{\infty}$ as a uniform limit is a direct consequence of Lemma \ref{Lemma2.4} combined with the Arzel\`{a}-Ascoli compactness criterion. Finally, the last statement follows by passing to the limit in the H\"{o}lder estimate from Lemma \ref{Lemma2.4}. \end{proof}
The following lemma gives a relation between weak and viscosity sub- and super-solutions to \eqref{plap.g}.
\begin{lemma}\label{EquivSols} A continuous weak sub-solution (resp. super-solution) $u \in W_{\text{loc}}^{1,p}(\Omega)$ to \eqref{plap.g} is a viscosity sub-solution (resp. super-solution) to $$
-\left[|\nabla u|^{p-2} \Delta u + (p-2)|\nabla u(x)|^{p-4}\Delta_{\infty} u\right] = g(u(x)) \quad \text{in} \quad \Omega. $$ \end{lemma}
\begin{proof} Let us proceed with the case of super-solutions. Fix $x_0 \in \Omega$ and $\phi \in C^2(\Omega)$ such that $\phi$ touches $u$ from below at $x=x_0$, i.e., $u(x_0) = \phi(x_0)$ and $u(x)> \phi(x)$ for $x \neq x_0$. Our goal is to establish that $$
-\left[|\nabla \phi(x_0)|^{p-2}\Delta \phi(x_0) + (p-2)|\nabla \phi(x_0)|^{p-4}\Delta_{\infty} \phi(x_0)\right] -g(\phi(x_0)) \geq 0. $$ Let us suppose, for the sake of contradiction, that the inequality does not hold. Then, by continuity there exists $r>0$ small enough such that $$
-\left[|\nabla \phi(x)|^{p-2}\Delta \phi(x) + (p-2)|\nabla \phi(x)|^{p-4}\Delta_{\infty} \phi(x)\right] -g(\phi(x)) < 0, $$ provided that $x \in B_r(x_0)$. Now, we define the function $$
\Psi \mathrel{\mathop:}= \phi+ \frac{1}{10}\mathfrak{m}, \quad \text{ where } \quad \mathfrak{m} \mathrel{\mathop:}= \inf_{\partial B_r(x_0)} (u(x)-\phi(x)). $$ Notice that $\Psi$ verifies $\Psi < u$ on $\partial B_r(x_0)$, $\Psi(x_0)> u(x_0)$ and \begin{equation}\label{EqPsi}
-\Delta_p \Psi(x) < g(\phi(x)). \end{equation} By extending by zero outside $B_r(x_0)$, we may use $(\Psi-u)_{+}$ as a test function in \eqref{plap.g}. Moreover, since $u$ is a weak super-solution, we obtain \begin{equation}\label{Eq3.4}
\displaystyle \int_{\{\Psi>u\}} |\nabla u|^{p-2}\nabla u \cdot \nabla (\Psi-u) dx \geq \int_{\{\Psi>u\}} g(u)(\Psi-u) dx. \end{equation} On the other hand, multiplying \eqref{EqPsi} by $\Psi- u$ and integrating by parts we get \begin{equation}\label{Eq3.5}
\displaystyle \int_{\{\Psi>u\}} |\nabla \Psi|^{p-2}\nabla \Psi \cdot \nabla (\Psi-u)\, dx < \int_{\{\Psi>u\}} g(\phi)(\Psi-u)\, dx. \end{equation} Next, subtracting \eqref{Eq3.5} from \eqref{Eq3.4} we obtain \begin{align} \label{exxx}
\int_{\{\Psi>u\}} (|\nabla \Psi|^{p-2}\nabla \Psi - |\nabla u|^{p-2}\nabla u) \cdot \nabla (\Psi-u)\, dx < \int_{\{\Psi>u\}} \mathcal{G}_{\phi}(u)(\Psi-u)\,dx, \end{align} where we have denoted $\mathcal{G}_{\phi}(u)=g(\phi)-g(u)$. Finally, since the left hand side in \eqref{exxx} is bounded from below by $$
\mathfrak{C}(p)\int_{\{\Psi>u\}} |\nabla \Psi- \nabla u|^p\,dx, $$ and the right hand side in \eqref{exxx} is negative, we conclude that $\Psi \leq u$ in $B_r(x_0)$. However, this contradicts the fact that $\Psi(x_0)>u(x_0)$. This contradiction proves that $u$ is a viscosity super-solution.
An analogous argument can be applied to treat the sub-solution case. \end{proof}
\section{The limiting problem: Proof of Theorem \ref{teo.eq}}\label{LimProb}
In this section we deal with the limit equation obtained as $p\to\infty$ in \eqref{ecu}. We prove that, as $p\to\infty$, weak solutions of \eqref{ecu} converge uniformly to a limit function which, in fact, satisfies \eqref{ecu.infty} in the viscosity sense.
\begin{proof}[Proof of Theorem \ref{teo.eq}] First of all, we prove that the limiting function $u_{\infty}$ is $\infty$-harmonic in its null set, i.e., $$
- \Delta_{\infty} u_{\infty}(x) = 0 \quad \text{in} \quad \{u_{\infty} = 0\} \cap \Omega. $$ To this end, let $x_0 \in \{u_{\infty} = 0\} \cap \Omega$ and $\phi \in C^2(\Omega)$ be such that $u_{\infty}-\phi$ has a strict local maximum (resp. strict local minimum) at $x_0$. Since, up to a subsequence, $u_p \to u_{\infty}$ locally uniformly, there exists a sequence $x_p \to x_0$ such that $u_p-\phi$ has a local maximum (resp. local minimum) at $x_p$. Moreover, since $u_p$ is a weak solution (and consequently a viscosity solution according to Lemma \ref{EquivSols}) to \eqref{ecu} we obtain $$
-\left[|\nabla \phi(x_p)|^{p-2}\Delta \phi(x_p) + (p-2)|\nabla \phi(x_p)|^{p-4}\Delta_{\infty} \phi(x_p)\right] \leq g(u(x_p)) \quad (\text{resp.}\,\, \geq ). $$
Now, if $|\nabla \phi(x_0)| \neq 0$ we may divide both sides of the above inequality by $(p-2)|\nabla \phi(x_p)|^{p-4}$ (which is different from zero for $p$ large enough). Thus, we obtain that $$
- \Delta_{\infty} \phi(x_p) \leq \frac{|\nabla \phi(x_p)|^2 \Delta \phi(x_p)}{p-2} + \frac{g( u(x_p))}{(p-2)|\nabla \phi(x_p)|^{p-4}} \quad (\text{resp.}\,\, \geq ), $$ where the RHS tends to zero as $p \to \infty$, because $g(u(x_p)) \to g(u(x_0)) = 0$. Therefore, $$
- \Delta_{\infty} \phi(x_0) \leq 0 \quad (\text{resp.}\,\, \geq 0), $$
and since such an inequality is immediately satisfied if $|\nabla \phi(x_0)| = 0$ we conclude that $u_{\infty}$ is a viscosity sub-solution (resp. super-solution) to the desired equation.
Next, we will prove that $u_{\infty}$ is a viscosity solution to $$
\max\{- \Delta_{\infty} u_{\infty}(x), -|\nabla u_{\infty}(x)|+\beta_{\infty} u^{-}_{\infty}(x)\} = 0 \quad \text{in} \quad \{u_{\infty}<0\} \cap \Omega. $$ First let us prove that $u_{\infty}$ is a viscosity super-solution. Fix $x_0 \in \{u_{\infty}<0\} \cap \Omega$ and let $\phi \in C^2(\Omega)$ be a test function such that $u_{\infty}(x_0) = \phi(x_0)$ and the inequality $u_{\infty}(x) > \phi(x)$ holds for all $x \neq x_0$. We want to show that $$
- \Delta_{\infty} \phi(x_0) \geq 0 \quad \text{or} \quad -|\nabla \phi(x_0)|+\beta_{\infty} \phi^{-}(x_0) \geq 0. $$
Notice that if $|\nabla \phi(x_0)| = 0$ there is nothing to prove. Hence, we may assume that \begin{equation}\label{eq5.1}
-|\nabla \phi(x_0)|+\beta_{\infty} \phi^{-}(x_0)<0. \end{equation} As in the previous case, there exists a sequence $x_p \to x_0$ such that $u_p-\phi$ has a local minimum at $x_p$. Since $u_p$ is a weak super-solution (consequently a viscosity super-solution according to Lemma \ref{EquivSols}) to \eqref{ecu} we get $$
-\left[|\nabla \phi(x_p)|^{p-2}\Delta \phi(x_p) + (p-2)|\nabla \phi(x_p)|^{p-4}\Delta_{\infty} \phi(x_p)\right] \geq -\beta_p(u_p^{-}(x_p))^{p-1}. $$
Now, dividing both sides by $(p-2)|\nabla \phi(x_p)|^{p-4}$ (which is different from zero for $p$ large enough due to \eqref{eq5.1}) we get $$
- \Delta_{\infty} \phi(x_p) \geq - \frac{|\nabla \phi(x_p)|^2 \Delta \phi(x_p)}{p-2} -\left( \frac{\beta_p^{\frac{1}{p-4}}u_p^{-}(x_p)}{|\nabla \phi(x_p)|}\right)^{p-4}\frac{(u_p^{-})^3(x_p)}{p-2}. $$ Passing to the limit as $p \to \infty$ in the above inequality we conclude that $$ - \Delta_{\infty} \phi(x_0) \geq 0. $$ This proves that $u_{\infty}$ is a viscosity super-solution.
Now, we will analyze the other case. To this end, fix $x_0 \in \{u_{\infty}<0\} \cap \Omega$ and a test function $\phi \in C^2(\Omega)$ such that $u_{\infty}(x_0) = \phi(x_0)$ and the inequality $u_{\infty}(x) < \phi(x)$ holds for $x \neq x_0$. We want to prove that \begin{equation}\label{eq5.2}
- \Delta_{\infty} \phi(x_0) \leq 0 \quad \text{and} \quad -|\nabla \phi(x_0)|+\beta_{\infty} \phi^{-}(x_0) \leq 0. \end{equation} Again, as before, there exists a sequence $x_p \to x_0$ such that $u_p-\phi$ has a local maximum at $x_p$ and, since $u_p$ is a weak sub-solution (and consequently a viscosity sub-solution) to \eqref{ecu}, we have that $$
-\frac{|\nabla \phi(x_p)|^2 \Delta \phi(x_p)}{p-2} - \Delta_{\infty} \phi(x_p) \leq -\left( \frac{\beta_p^{\frac{1}{p-4}}u_p^{-}(x_p)}{|\nabla \phi(x_p)|}\right)^{p-4}\frac{(u_p^{-})^3(x_p)}{p-2} \leq 0. $$
Thus, letting $p \to \infty$ we obtain $- \Delta_{\infty} \phi(x_0) \leq 0$. Moreover, if $-|\nabla \phi(x_0)|+\beta_{\infty} \phi^{-}(x_0) > 0$, then the right hand side goes to $-\infty$ as $p \to \infty$, while the left hand side remains bounded because $\phi \in C^2(\Omega)$, a contradiction. Therefore \eqref{eq5.2} holds.
The last part of the proof consists in proving that $u_{\infty}$ is a viscosity solution to $$
\min\{- \Delta_{\infty} u_{\infty}(x), |\nabla u_{\infty}(x)|-\alpha_{\infty} u^{+}_{\infty}(x)\} = 0 \quad \text{in} \quad \{u_{\infty}>0\} \cap \Omega. $$ The argument is similar to the previous case and for this reason we will omit it. \end{proof}
\section{Characterization of $\Sigma_\infty$: Proof of Theorems \ref{teo.1d} and \ref{teo.rn}}\label{sec4}
\subsection{The one-dimensional case} \label{sect-1-d}
As we pointed out in the introduction, the spectrum of \eqref{ecu} as $p\to\infty$ is completely understood when $n=1$ since, in this case, the structure of $\Sigma_p$ is explicitly determined. When $\Omega=(0,1)$, $\Sigma_p$ is composed of the two trivial lines $$ \mathcal{C}_{1,p}^+= \mathbb R\times \{\lambda_{1,p}\}, \qquad \mathcal{C}_{1,p}^-= \{\lambda_{1,p}\} \times \mathbb R, $$ and the family of hyperbolic-like curves $$ \mathcal{C}_{k,p}\, : \, \alpha_p^{-1/p}+\beta_p^{-1/p} = \frac{2}{k\pi_p } $$ when $k$ is even, and \begin{align*} &\mathcal{C}_{k,p}^+\, : \, \frac{k-1}{2}\alpha_p^{-1/p}+\frac{k+1}{2}\beta_p^{-1/p} = \frac{1}{\pi_p }, \\ &\mathcal{C}_{k,p}^-\, : \, \frac{k+1}{2}\alpha_p^{-1/p}+\frac{k-1}{2}\beta_p^{-1/p} = \frac{1}{\pi_p } \end{align*} when $k$ is odd. Here $\pi_p$ is given by $$
\pi_p = 2(p-1)^{1/p} \int_0^1 \frac{ds}{(1-s^p)^{1/p}}. $$
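As an aside (illustrative only, not part of the proofs), $\pi_p$ can be evaluated in closed form using the Euler Beta and reflection identities, which give $\int_0^1 (1-s^p)^{-1/p}\,ds = \pi/(p\sin(\pi/p))$; the following sketch confirms numerically that $\pi_2=\pi$ and that $\pi_p$ decreases towards $2$ as $p\to\infty$, the fact used repeatedly below:

```python
import math

def pi_p(p):
    # ∫₀¹ (1 - s^p)^(-1/p) ds = B(1/p, 1 - 1/p)/p = π/(p·sin(π/p)),
    # by the Beta-function and reflection identities.
    integral = math.pi / (p * math.sin(math.pi / p))
    return 2.0 * (p - 1) ** (1.0 / p) * integral

print(pi_p(2))  # recovers the classical π ≈ 3.14159
print([round(pi_p(p), 4) for p in (10, 100, 1000)])  # decreases towards 2
```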
Observe that, since the eigenvalues of \eqref{plap} are explicitly given by $\lambda_{k,p}=(k\pi_p)^p$, $k\in\mathbb N,$ the curves $\mathcal{C}_{k,p}$ and $\mathcal{C}_{k,p}^\pm$ can be rewritten in terms of them as \begin{align*}
&\mathcal{C}_{k,p}\, : \, \left( \frac{\lambda_{k,p}}{\alpha_p} \right)^\frac1p + \left( \frac{\lambda_{k,p}}{\beta_p} \right)^\frac1p=2 \qquad &\mbox{ if } k \mbox{ is even} \\
&\mathcal{C}_{k,p}^+\, : \, \left( \frac{\lambda_{(k-1)/2,p}}{\alpha_p} \right)^\frac1p + \left( \frac{\lambda_{(k+1)/2,p}}{\beta_p} \right)^\frac1p=1\qquad &\mbox{ if } k \mbox{ is odd}\\
&\mathcal{C}_{k,p}^-\, : \, \left( \frac{\lambda_{(k+1)/2,p}}{\alpha_p} \right)^\frac1p + \left( \frac{\lambda_{(k-1)/2,p}}{\beta_p} \right)^\frac1p=1. \end{align*}
\begin{proof}[Proof of Theorem \ref{teo.1d}]
In view of \eqref{lam2}, since $\pi_p\to 2$ as $p\to \infty$, the trivial lines $\mathcal{C}_{1,p}^\pm$ converge to $$
\mathcal{C}_{1,\infty}^+=\mathbb R\times \{ 2 \}, \qquad \mathcal{C}_{1,\infty}^-= \{2\}\times \mathbb R. $$
Let us analyze the nontrivial curves $\mathcal{C}_{k,p}^+$ as $p\to\infty$ for $k$ odd. According to the previous expressions, this hyperbolic curve can be parametrized as \begin{align*} \mathcal{C}_{k,p}^+=\{ (\alpha_p(s),\beta_p(s)), \, s\in \mathbb R^+\} \end{align*} where $$ \alpha_p(s)=\left(\lambda_{\frac{k-1}{2},p}^\frac1p + s^{-1} \lambda_{\frac{k+1}{2},p}^\frac1p\right)^p, \qquad \beta_p(s)=s^p \alpha_p(s). $$ Here $(\alpha_p(s),\beta_p(s))$ denotes the intersection between $\mathcal{C}_{k,p}^+$ and the line of slope $s^p$ passing through the origin in $\mathbb R^2$. Observe that when $s=1$, it follows that $\alpha_p=\beta_p=(k\pi_p)^p=\lambda_{k,p}$.
If we define the curve $$
\mathcal{C}_{k,\infty}^+ \mathrel{\mathop:}= \{(\alpha_\infty(s),\beta_\infty(s)), \, s\in \mathbb R^+\} $$ where $$ \alpha_\infty(s) \mathrel{\mathop:}= \lim_{p\to\infty} \alpha_p(s)^{1/p}, \qquad \beta_\infty(s)\mathrel{\mathop:}= \lim_{p\to\infty} \beta_p(s)^{1/p}, $$ from \eqref{lam1}, we get \begin{align*} \alpha_\infty(s)=k-1+s^{-1}(k+1), \qquad \beta_\infty(s)= k+1+s(k-1). \end{align*}
Observe that, from \eqref{lamk.1d}, we have that $\lambda_{k,\infty}=2k$. In particular, when $s=1$, the curve $\mathcal{C}_{k,\infty}^+$ passes through the point $(2k,2k)$, as expected.
The previous expressions lead to the formula for $\mathcal{C}_{k,\infty}^+$ in the case in which $k$ is odd. In a similar way one can obtain the formulas for the remaining curves, and the proof is complete. \end{proof}
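As a quick sanity check (illustrative only), one can evaluate the limit parametrization $\alpha_\infty(s)=k-1+s^{-1}(k+1)$, $\beta_\infty(s)=k+1+s(k-1)$ and verify that at $s=1$ the curve $\mathcal{C}_{k,\infty}^+$ passes through $(2k,2k)=(\lambda_{k,\infty},\lambda_{k,\infty})$:

```python
def curve_k_plus(k, s):
    # Limit parametrization of the curve C⁺_{k,∞} for k odd (s > 0).
    alpha = (k - 1) + (k + 1) / s
    beta = (k + 1) + s * (k - 1)
    return alpha, beta

for k in (1, 3, 5):
    # At s = 1 the curve passes through (2k, 2k) = (λ_{k,∞}, λ_{k,∞}).
    print(k, curve_k_plus(k, 1.0))
```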
\subsection{The general case}\label{GenCase}
As described in the introduction, one immediately observes that $\Sigma_p$ contains the trivial lines $\{\lambda_{1,p}\}\times \mathbb R$ and $\mathbb R \times \{\lambda_{1,p}\}$, where $\lambda_{1,p}$ is the first eigenvalue given by \eqref{eqEigen}. However, in contrast with the one-dimensional case, where a full description of the spectrum is available, when $n>1$ only the existence of a curve $\mathcal{C}_{2,p}$ beyond the trivial lines is known. Such a curve has a hyperbolic shape and is proved to be variational (see for instance \cite{dF-Go-94}). Here we will consider the characterization given in \cite{terra}, in which the intersection of $\mathcal{C}_{2,p}$ with the line of slope $t\in \mathbb R^+$ passing through the origin in $\mathbb R^2$ can be written as \begin{equation} \label{par.ab}
(\alpha_p(t),\beta_p(t))=(t^{-1} c_p(t),c_p(t)),\quad t\in\mathbb R^+ \end{equation} where $$
c_p(t)=\inf_{\mathcal{P}_2} \max\{t \lambda_{1,p}(\omega_1), \lambda_{1,p}(\omega_2)\}, \qquad t\in \mathbb R^+ $$ where $\mathcal{P}_2$ denotes the class of partitions $(\omega_1, \omega_2)$ of $\Omega$ into two disjoint, connected, open subsets, and $\lambda_{1, p}(\omega)$ is the first eigenvalue of \eqref{plap} in $\Omega=\omega$.
\begin{proof} [Proof of Theorem \ref{teo.rn}] The proof follows by taking the limit as $p\to\infty$ in the characterization of the curves $\mathcal{C}_{1,p}^\pm$ and $\mathcal{C}_{2,p}$.
The expression for $\mathcal{C}_{1,\infty}^\pm$ follows from \eqref{lam1}. Now, if we define the function $$ c_\infty(t)\mathrel{\mathop:}= \lim_{p\to\infty} \inf_{\mathcal{P}_2} \max\Big\{t \lambda_{1,p}(\omega_1)^\frac1p, \lambda_{1,p}(\omega_2)^\frac1p\Big\}, \qquad t\in \mathbb R^+ $$ then, again from \eqref{lam1}, the following characterization holds \begin{equation} \label{c.infty} c_\infty(t)=\inf_{\mathcal{P}_2} \max\left\{\frac{t}{r_1}, \frac{1}{r_2}\right\}, \qquad t\in \mathbb R^+ \end{equation} where, given $(\omega_1, \omega_2)\in\mathcal{P}_2(\Omega)$, $r_i$ denotes the radius of the biggest ball contained in $\omega_i$, for $i=1,2$.
For $t\in \mathbb R^+$, defining the functions \begin{equation} \label{curva2} \alpha_\infty(t)=t^{-1} c_\infty(t), \quad \beta_\infty(t)=c_\infty(t), \end{equation} the desired parametrization of $\mathcal{C}_{2,\infty}$ follows from \eqref{par.ab}.
Now, let us see that, in fact, there is no point of $\Sigma_\infty$ between the trivial lines and $\mathcal{C}_{2,\infty}$, i.e., the first nontrivial curve is lower isolated when it detaches from the trivial lines. Suppose, on the contrary, that there is some $(\alpha_0,\beta_0)$ strictly between these curves, and denote by $u$ the corresponding eigenfunction. Observe that $u$ cannot have constant sign in $\Omega$, since this would imply that $(\alpha_0,\beta_0)\in \mathcal{C}_{1,\infty}^\pm$. Then, there exists a nontrivial partition $(p_1,p_2)\in \mathcal{P}_2(\Omega)$ such that $u>0$ in $p_1$ and $u<0$ in $p_2$. Now, if we consider $t_0=\frac{\rho_1}{\rho_2}$, it is clear that the inequalities $$
\beta(t_0)> \beta_0(t_0), \qquad \alpha(t_0)> \alpha_0(t_0) $$ are strict, where $(\alpha(t_0),\beta(t_0))\in \mathcal{C}_{2,\infty}$ and $\rho_i$ is the radius of the biggest ball inside $p_i$, $i=1,2$. However, by the definition of the first nontrivial curve \eqref{curva2}, it must hold that $$ \beta(t_0) = c_\infty(t_0)=\inf_{\mathcal{P}_2} \max\left\{ \frac{t_0}{r_1}, \frac{1}{r_2}\right\} = \max\left\{ \frac{t_0}{\rho_1} , \frac{1}{\rho_2}\right\} = \beta_0(t_0), $$ a contradiction. Consequently, $\mathcal{C}_{2,\infty}$ is lower isolated when it is different from the trivial lines.
Finally, we observe the following: if we take a ball of radius $$\mathfrak{r}(\Omega)=\max\limits_{x\in\Omega} \dist(x,\partial \Omega)$$ inside $\Omega$ and there is some room left, that is, $\Omega \setminus \overline{B_{\mathfrak{r}}} \not=\emptyset$, then we can consider the partition of $\Omega$ given by the sets $\omega_1 = B_{\mathfrak{r}}$ and $\omega_2 = \Omega \setminus \overline{B_{\mathfrak{r}}}$. From our previous arguments we get that the points $(1/\mathfrak{r}(\Omega) , 1/ \mathfrak{r} (\Omega \setminus \overline{B_{\mathfrak{r}}}))$ and $( 1/ \mathfrak{r} (\Omega \setminus \overline{B_{\mathfrak{r}}}), 1/\mathfrak{r}(\Omega) )$ belong to $\mathcal{C}_{2,\infty} \cap \mathcal{C}_{1,\infty}^\pm$. Therefore, the curve $\mathcal{C}_{2,\infty}$ intersects the trivial lines whenever $\Omega \setminus \overline{B_{\mathfrak{r}}} \not=\emptyset$, and this holds for every domain different from a ball. \end{proof} As a corollary, the following properties are fulfilled by $\mathcal{C}_{2,\infty}$.
\begin{corollary}\label{cor4.1} The following statements hold true: \begin{enumerate}
\item[(a)] $\mathcal{C}_{2,\infty}$ is a continuous and non-increasing curve, symmetric with respect to the diagonal.
\item[(b)] $\mathcal{C}_{2,\infty} \subset \left\{(x, y) \in \mathbb R^2 \suchthat x, y \geq \lambda_{1, \infty}(\Omega) \right\} \setminus \left\{(x, y) \in \mathbb R^2 \suchthat x, y > \lambda_{2, \infty}(\Omega)\right\}$.
\item[(c)] (Courant nodal domain theorem) Any eigenfunction associated to $(x, y)\in \mathcal{C}_{2,\infty} \setminus
\mathcal{C}_{1,\infty}^\pm $ admits exactly two nodal domains. \end{enumerate} \end{corollary}
\begin{proof} Symmetry of $\mathcal{C}_{2,\infty}$ arises from interchanging the roles of $\omega_1$ and $\omega_2$ in the expression of $c_\infty(t)$. Continuity and monotonicity of $\mathcal{C}_{2,\infty}$ follow from the definition of $c_\infty(t)$.
The curve $\mathcal{C}_{2,\infty}$ always lies above or coincides with the trivial lines since $\lambda_{1,\infty}(\Omega)\leq c_\infty(t)$ for every $t$. Furthermore, no point of $\mathcal{C}_{2,\infty}$ belongs to $\{x,y>\lambda_{2,\infty}\}$. In fact, since $\mathcal{C}_{2,\infty}$ is continuous, any path linking $(\lambda_{2,\infty},\lambda_{2,\infty})$ to a point in $\{x,y>\lambda_{2,\infty}\}$ would have to increase at some point, which would contradict the non-increasing nature of the curve.
Finally, by construction, any eigenfunction corresponding to a point of the curve admits exactly two nodal domains. \end{proof}
\begin{remark} {\rm Let $(\alpha, \beta)$ be a point in $ \mathcal{C}_{2,\infty} \cap
\mathcal{C}_{1,\infty}^\pm $, that is, for example a point of the form $(1/\mathfrak{r}(\Omega), \beta)$
with $\beta$ large. For those points there are at least two different eigenfunctions (this
point of the spectrum is not simple). In fact, there is a positive eigenfunction (which comes
from the limit as $p\to \infty$ of the first eigenfunctions for the $p$-Laplacian) and another one
that changes sign (which can be obtained from our construction since we assumed that
$(\alpha, \beta)\in \mathcal{C}_{2,\infty}$). Therefore, $\Sigma_\infty$ has eigenvalues with
multiplicity on the trivial lines (a fact that does not happen for $\Sigma_p$).} \end{remark}
\section{Classifying the $\infty$-{F}u{\v{c}}ik spectrum}\label{ClasFucikSpect}
In this section we will study different families of domains based on the shape of the curve $\mathcal{C}_{2,\infty}$. As we will see, given a domain $\Omega\subset \mathbb R^n$, this classification will depend only on $\mathfrak{r}(\Omega)$, the radius of the biggest ball contained in $\Omega$, and $ \mathfrak{R}(\Omega)$, the maximum radius of a couple of balls of the same size fitted inside $\Omega$.
Regardless of the configuration of $\Omega$, it always holds that the lines $y= \frac{1}{\mathfrak{r}(\Omega)}$ and $x=\frac{1}{\mathfrak{r}(\Omega)}$ define the trivial lines in the spectrum, and that the point $\left(\frac{1}{ \mathfrak{R}(\Omega)}, \frac{1}{\mathfrak{R}(\Omega)}\right)$ belongs to $\Sigma_\infty$. As we will see, the shape of $\mathcal{C}_{2,\infty}$ depends on the relation between $\mathfrak{r}(\Omega)$ and $\mathfrak{R}(\Omega)$.
Hereafter, given a partition $(\omega_1, \omega_2)\in \mathcal{P}_2(\Omega)$ we will denote by $r_1$ and $r_2$ the radii of the biggest balls contained in each component.
We will distinguish two classes of domains depending on whether or not the corresponding curve $\mathcal{C}_{2,\infty}$ intersects the trivial lines.
\begin{figure}
\caption{The first three curves in $\Sigma_\infty$ for a domain of type I (left) and of type II (middle and right).}
\label{dib3}
\end{figure}
\subsection{Type I}
This class consists of the domains $\Omega$ whose curve $\mathcal{C}_{2,\infty}$ is hyperbolic and asymptotic to the trivial lines.
As we have seen, the only possibility is a ball.
\begin{example}[{\bf A ball}] Let us consider the domain $\Omega$ given by an open ball of radius $R$ in $\mathbb R^n$. It is immediate that $\lambda_{1,\infty}(\Omega)=\frac{1}{R}$.
Since the radii of two tangential balls fitted in $\Omega$ must satisfy $r_1+r_2=R$, the expression in \eqref{c.infty} will be minimized when $\frac{t}{r_1}=\frac{1}{r_2}$, i.e., $$ t=\frac{r_1}{R-r_1}, \qquad \text{ which implies } \quad r_1=\frac{Rt}{1+t}. $$ In such a case it follows that $$c_\infty(t)=\frac{1+t}{R}.$$
Finally, observe that values of $t$ approaching zero correspond to a partition $(\omega_1, \omega_2)\in \mathcal{P}(\Omega)$ in which the biggest ball in $\omega_2$ is almost the whole $\Omega$ and the biggest one in $\omega_1$ is very small; values of $t$ approaching $+\infty$ correspond to a partition in which the balls interchange their roles: the biggest ball in $\omega_1$ is almost the whole $\Omega$. See Figure \ref{dib1}.
Consequently, according to equation \eqref{curva2}, the curve $\mathcal{C}_{2,\infty}$ is given by $$
\mathcal{C}_{2,\infty}=\left\{ \left( \frac{1+t}{Rt}, \frac{1+t}{R}\right),\, t\in \mathbb R^+\right\}. $$ Observe that when $t=1$ the curve contains the point $\left(\frac{2}{R}, \frac{2}{R}\right)$, which corresponds precisely to $(\lambda_{2,\infty}(\Omega), \lambda_{2,\infty}(\Omega))$.
\begin{figure}
\caption{Partitions corresponding to $t=0.2$ ($r_1\sim0.16$ and $r_2\sim 0.83$), $t=1$ ($r_1=r_2=0.5$) and $t=10$ ($r_1\sim0.909$ and $r_2\sim 0.09$).}
\label{dib1}
\end{figure} \end{example}
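The parametrization of $\mathcal{C}_{2,\infty}$ for the ball can be checked numerically (a hedged sketch; the choice $R=1$ is ours): at $t=1$ the curve passes through $(2/R,2/R)$, and as $t\to\infty$ the first coordinate approaches the trivial line $\alpha=1/R$ from above without ever reaching it:

```python
def ball_curve(t, R=1.0):
    # C_{2,∞} for the ball of radius R: ((1+t)/(R t), (1+t)/R), t > 0.
    return (1 + t) / (R * t), (1 + t) / R

print(ball_curve(1.0))  # (2.0, 2.0) = (λ_{2,∞}, λ_{2,∞}) when R = 1

# As t grows, the first coordinate decreases towards 1/R but stays above it:
for t in (10.0, 100.0, 1000.0):
    a, b = ball_curve(t)
    print(t, a)
```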
\begin{example}[{\bf The unit interval}] When $\Omega$ is an open interval of the real line, the picture of $\Sigma_\infty$ is analogous to that of $\Sigma_p$ for a fixed value of $p$, i.e., it consists of a sequence of hyperbolic-like curves, as shown in Figure \ref{dib0}.
Moreover, since explicit formulas for the eigenfunctions are known for a fixed value of $p$, it is possible to describe the profile of the limit problem. For instance, if $(\alpha,\beta) \in \mathcal{C}_{2,\infty}$, the corresponding eigenfunction $u_{\infty}$ will be given by \begin{align*} \lim_{p\to\infty} u_p(x) = u_{\infty}(x)= \begin{cases} x &\quad \text{in } \left(0,\frac{\ell}{2}\right]\\ -x+\ell &\quad \text{in } \left[\frac{\ell}{2},\frac{\ell+1}{2}\right]\\ x-1 &\quad \text{in } \left[\frac{\ell+1}{2}, 1\right) \end{cases} \end{align*} where $\ell\in(0,1)$ and \begin{align*} u_p(x)= \begin{cases} \displaystyle \sin_p\left(\frac{\pi_p x}{\ell}\right) &\quad \text{in } (0,\ell]\\ \displaystyle -\sin_p\left(\frac{\pi_p (x-\ell)}{1-\ell}\right) &\quad \text{in } [\ell,1). \end{cases} \end{align*} Finally, notice that $u_{\infty}$ is a viscosity solution to \eqref{ecu.infty} with $(\alpha(\ell), \beta(\ell)) = \left( \frac{2}{\ell}, \frac{2}{1-\ell}\right)$.
\begin{figure}
\caption{ $\Sigma_\infty$ for the unit interval in $\mathbb R$.}
\label{dib0}
\end{figure} \end{example}
\subsection{Type II}
This class consists of all domains $\Omega$ whose curve $\mathcal{C}_{2,\infty}$ intersects the trivial lines. We can further subdivide this category into the case where $\mathcal{C}_{2,\infty}$ connects the trivial lines (type II.A) and the case where $\mathcal{C}_{2,\infty}$ is totally contained in the trivial lines (type II.B).
\begin{example}[{\bf ``Linked'' balls}] Given $R_1\leq R_2$, let us consider a domain $\Omega$ obtained as the union of two balls of radii $R_1$ and $R_2$ joined by a tube of length $\varepsilon<R_1$.
Since the radius of the biggest ball contained in $\Omega$ is $R_2$, we get $\lambda_{1,\infty}= \frac{1}{R_2}$. Moreover, the biggest couple of equal balls contained in $\Omega$ has radius $R_1$. If we fix $r_2=R_1$ and denote by $r$ the radius of the biggest ball in $\omega_1$, the expression of $c_\infty$ will be minimized when $\frac{t}{r}=\frac{1}{R_1}$, that is, when both terms inside the maximum in the expression of $c_\infty$ coincide. In this case, $c_\infty=\frac{1}{R_1}$.
Observe that the case $r=R_1$ corresponds to $t=1$. As $r$ increases, the value of $t$ increases as well. This process finishes when $r=R_2$, which corresponds to $t=\frac{R_2}{R_1}$.
Analogously, this process can be carried out by interchanging the roles of $R_1$ and $R_2$, leading to the following expression for the second non-trivial curve $$
\mathcal{C}_{2,\infty}=\left\{ \left( \frac{1}{tR_1}, \frac{1}{R_1}\right),\, 1 \leq t \leq \frac{R_2}{R_1} \right\} \cup \left\{ \left( \frac{1}{R_1}, \frac{1}{tR_1}\right),\, 1 \leq t \leq \frac{R_2}{R_1} \right\}. $$
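A small numerical sketch (illustrative only; the radii $R_1=1$, $R_2=2$ are hypothetical choices) shows that the first branch above starts at $(1/R_1,1/R_1)$ for $t=1$ and reaches the trivial line $\alpha=1/R_2$ at $t=R_2/R_1$:

```python
def linked_balls_branch(t, R1=1.0, R2=2.0):
    # First branch of C_{2,∞} for the linked balls: (1/(t R1), 1/R1), 1 ≤ t ≤ R2/R1.
    assert 1.0 <= t <= R2 / R1
    return 1.0 / (t * R1), 1.0 / R1

print(linked_balls_branch(1.0))  # (1.0, 1.0): the endpoint (1/R1, 1/R1)
print(linked_balls_branch(2.0))  # (0.5, 1.0): hits the trivial line α = 1/R2
```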
It is remarkable that in the extremal case $R_1=R_2$ the curve $\mathcal{C}_{2,\infty}$ is contained in the trivial lines. This situation occurs when the radius of the biggest ball contained in $\Omega$ coincides with the radius of the biggest couple of balls of the same size fitted in $\Omega$ (for example in an annular domain or, more generally, in a stadium domain).
It is also straightforward to see that the analysis made above depends only on the radius of the biggest ball contained in $\Omega$ and the radius of the biggest couple of identical balls contained in $\Omega$. Consequently, the three domains exhibited in Figure \ref{dib2} have the same first curves $\mathcal{C}_{1,\infty}^\pm$ and $\mathcal{C}_{2,\infty}$.
\begin{figure}
\caption{Three domains for which the first three curves in $\Sigma_\infty$ coincide.}
\label{dib2}
\end{figure} \end{example}
\begin{example}[{\bf The unit square}] Let us consider $\Omega$ to be the unit square in $\mathbb R^2$, $\Omega = (0,1)\times (0,1)$. In this case, since the biggest ball fitted in $\Omega$ has radius $R=\frac{1}{2}$, we have that $\lambda_{1,\infty}(\Omega)=2$.
Let us analyze the second nontrivial curve. When we compute $c_\infty(t)$ we must consider two balls of radii $r_1$ and $r_2$ contained in $\Omega$ such that $\frac{t}{r_1}$ and $\frac{1}{r_2}$ coincide. Notice that for $t=1$ we obtain $$
r_1=r_2=\frac{\sqrt{2}}{2(1+\sqrt{2})}. $$ But we can also consider a partition such that $r_2$ increases and $r_1$ decreases; since both balls are fitted in $\Omega$, they must verify $$r_1+r_2=\frac{\sqrt{2}}{1+\sqrt{2}}.$$ In this case, if $$
t=\frac{r_1}{r_2}=\frac{r_1}{\frac{\sqrt{2}}{1+\sqrt{2}}-r_1} $$ we can guarantee that $\frac{t}{r_1}$ and $\frac{1}{r_2}$ coincide. Observe that the computations made to enlarge $r_2$ (and then to obtain a smaller $r_1$) with the previous expression of $t$ can be performed provided that $r_2\leq \frac{1}{2}$. This procedure gives that \begin{equation} \label{tt1}
c_\infty(t)=\frac{t}{r_1}=\frac{1}{r_2}=(t+1)\Big( 1+\frac{\sqrt{2}}{2} \Big), \qquad \frac{2\sqrt{2}}{1+\sqrt{2}}-1 \leq t \leq 1. \end{equation}
Now we can fix the value $r_2=\frac{1}{2}$, the maximum radius of a ball fitted in $\Omega$, and continue decreasing the value of $r_1$. In this case, when considering $$
t=2r_1 $$ we can assure that $\frac{t}{r_1}$ and $\frac{1}{r_2}$ coincide and equal $2$. This process can be continued as $r_1\to 0$, obtaining \begin{equation} \label{tt2}
c_\infty(t)= 2, \qquad 0 \leq t \leq \frac{2\sqrt{2}}{1+\sqrt{2}}-1. \end{equation} From \eqref{tt1} and \eqref{tt2} we get that $$
\mathcal{C}_{2,\infty}^1= \left\{
\begin{array}{ll}
\displaystyle
\left( \frac{2}{t}, 2\right) & \displaystyle\qquad 0 \leq t \leq \frac{2\sqrt{2}}{1+\sqrt{2}}-1\\
\displaystyle \left( \frac{\tau(t+1)}{t}, \tau(t+1) \right) &
\displaystyle \qquad \frac{2\sqrt{2}}{1+\sqrt{2}}-1 \leq t \leq 1
\end{array} \right.
$$ where $\tau=1+\frac{\sqrt{2}}{2}$.
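As a consistency check, the two pieces of $\mathcal{C}_{2,\infty}^1$ match at the breakpoint. Indeed, rationalizing,
$$
\frac{2\sqrt{2}}{1+\sqrt{2}}-1=\frac{2\sqrt{2}(\sqrt{2}-1)}{(\sqrt{2}+1)(\sqrt{2}-1)}-1=4-2\sqrt{2}-1=3-2\sqrt{2},
$$
and evaluating \eqref{tt1} at $t=3-2\sqrt{2}$ gives
$$
\tau(t+1)=(4-2\sqrt{2})\Big(1+\frac{\sqrt{2}}{2}\Big)=4+2\sqrt{2}-2\sqrt{2}-2=2,
$$
in agreement with \eqref{tt2}, so $c_\infty$ is continuous at the breakpoint.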
Observe that this construction can be made, analogously, interchanging the roles of $r_1$ and $r_2$, leading to $$
\mathcal{C}_{2,\infty}^2=
\left\{
\begin{array}{ll}
\displaystyle
\left( 2, \frac{2}{t} \right) &\qquad \displaystyle 0 \leq t \leq \frac{2\sqrt{2}}{1+\sqrt{2}}-1\\
\displaystyle \left( \tau(t+1), \frac{\tau(t+1)}{t} \right) &
\displaystyle \qquad \frac{2\sqrt{2}}{1+\sqrt{2}}-1 \leq t \leq 1,
\end{array} \right. $$ and consequently, $\mathcal{C}_{2,\infty}= \mathcal{C}_{2,\infty}^1 \cup \mathcal{C}_{2,\infty}^2$. See Figure \ref{dib5}.
\begin{figure}\label{dib5}
\end{figure}
It is remarkable that for values of $t$ in the range $[0,\frac{2\sqrt{2}}{1+\sqrt{2}}-1]$, the curve $\mathcal{C}_{2,\infty}$ is contained in the trivial lines, i.e., the first intersection between the second nontrivial curve and the trivial lines occurs at $(\tau_0,2)$ and $(2,\tau_0)$, where $\tau_0=\frac{2(\sqrt{2}+1)}{\sqrt{2}-1}$. See Figure \ref{dib4}. \end{example}
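For completeness, here is the direct computation behind the value of $\tau_0$: at the breakpoint $t_0=\frac{2\sqrt{2}}{1+\sqrt{2}}-1=3-2\sqrt{2}$, the trivial branch gives the abscissa
$$
\frac{2}{t_0}=\frac{2}{3-2\sqrt{2}}=\frac{2(3+2\sqrt{2})}{(3-2\sqrt{2})(3+2\sqrt{2})}=\frac{2(3+2\sqrt{2})}{9-8}=6+4\sqrt{2}=\frac{2(\sqrt{2}+1)}{\sqrt{2}-1}=\tau_0.
$$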
\begin{figure}
\caption{ The curve $\mathcal{C}_{2,\infty}$ for the unit cube in $\mathbb R^2$. }
\label{dib4}
\end{figure}
{\bf Acknowledgments} This work has been partially supported by Consejo Nacional de Investigaciones Cient\'{\i}ficas y T\'{e}cnicas (CONICET-Argentina). JVS would like to thank the Dept. of Math. FCEyN, Universidad de Buenos Aires for providing an excellent working environment and scientific atmosphere during his Postdoctoral program.
\end{document}
\begin{document}
\maketitle \begin{abstract} Given a real algebraic variety $X$ of dimension $n$, a very ample divisor $D$ on $X$ and a smooth closed hypersurface $\Sigma$ of $\mathbf{R}^n$, we construct real algebraic hypersurfaces in the linear system $\abs{mD}$ whose real locus contains many connected components diffeomorphic to $\Sigma$. As a consequence, we show the existence of real algebraic hypersurfaces in the linear system $\abs{mD}$ whose Betti numbers grow at the maximal order, as $m$ goes to infinity. As another application, we recover a result by D. Gayet on the existence of many disjoint lagrangians with prescribed topology in any smooth complex hypersurface of $\mathbf{C}\mathbf{P}^n$. The results in the paper are proved more generally for complete intersections.
The proof of our main result uses probabilistic tools. \end{abstract} \section{Introduction} This paper deals with the topology of real algebraic varieties. A real algebraic variety $X$ is an algebraic variety defined over $\mathbf{R}$. We denote its real and complex loci respectively by $X(\mathbf{R})$ and $X(\mathbf{C})$ and its dimension by $n$. We assume that $X$ is smooth and projective and that its real locus is nonempty.
\subsection{Topology of real algebraic hypersurfaces} A major restriction for the topology of real algebraic varieties is given by Smith-Thom inequality \cite{thomreelle}, which says that the sum of the $\mathbf{Z}/2$-Betti numbers of the real locus of a real algebraic variety is less than or equal to the sum of the $\mathbf{Z}/2$-Betti numbers of its complex locus, that is \[ b\left(X(\mathbf{R})\right):=\displaystyle\sum_{i=0}^{n}b_i\left(X(\mathbf{R}),\mathbf{Z}/2\right)\leq \sum_{i=0}^{2n}b_i\left(X(\mathbf{C}),\mathbf{Z}/2\right)=:b\left(X(\mathbf{C})\right). \] In the case of equality, the variety is called a \emph{maximal variety}. For real algebraic curves, Smith-Thom inequality is known as Harnack-Klein inequality \cite{harnack,klein} and can be written $b_0\big(C(\mathbf{R})\big)\leq g+1$, where $g$ is the genus of $C(\mathbf{C})$.
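To see how Harnack-Klein inequality is recovered, note that for a smooth real algebraic curve $C$, the complex locus $C(\mathbf{C})$ is a compact orientable surface of genus $g$ and $C(\mathbf{R})$ is a disjoint union of circles, so that
$$
b\left(C(\mathbf{R})\right)=2b_0\left(C(\mathbf{R})\right)\qquad\text{and}\qquad b\left(C(\mathbf{C})\right)=1+2g+1=2g+2;
$$
Smith-Thom inequality then reads $2b_0\left(C(\mathbf{R})\right)\leq 2g+2$, that is, $b_0\left(C(\mathbf{R})\right)\leq g+1$.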
A fundamental question in real algebraic geometry asks whether a given real algebraic variety contains maximal hypersurfaces in some given linear system. For example, for any $m\in\mathbf{N}$, there exists a degree $m$ maximal real algebraic curve in $\mathbf{P}^2$, see \cite{harnack}. However, in general, the answer to this question is negative. For example, given a non-maximal real algebraic curve $C$, the surface $C\times C$ does not contain any maximal curves. Nevertheless, such a surface contains \emph{asymptotically maximal hypersurfaces}. To give the definition of asymptotically maximal hypersurfaces, let us fix a very ample divisor $D$ of $X$ and consider the linear system $\abs{mD}, m>0$, consisting of real divisors linearly equivalent to $mD$. By Bertini's theorem, a generic element $Z$ of $\abs{mD}$ is smooth and, by Ehresmann's theorem, the topology of the complex locus $Z(\mathbf{C})$ of $Z$ only depends on $D$ and on $m$. One can then compute the asymptotics $b\left(Z(\mathbf{C})\right)=D^{n}m^n+O(m^{n-1})$, as $m\rightarrow\infty$, where $D^n$ denotes the top self-intersection number of $D$ (which is a positive integer) and $n$ is the dimension of $X$. By Smith-Thom inequality, we then have $$b\left(Z(\mathbf{R})\right)\leq D^{n}m^n+O(m^{n-1})$$ and a sequence $Z_m\in\abs{mD}$ is called asymptotically maximal if $$\displaystyle\lim_{m\rightarrow\infty}\frac{b\left(Z_m(\mathbf{R})\right)}{D^{n}m^n}= 1.$$
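As a concrete check of these asymptotics, take $X=\mathbf{P}^2$ and $D$ a line, so that $D^2=1$: a smooth curve $Z\in\abs{mD}$ has genus $g=\frac{(m-1)(m-2)}{2}$ by the genus-degree formula, hence
$$
b\left(Z(\mathbf{C})\right)=2g+2=m^2-3m+4=D^2m^2+O(m),
$$
as predicted.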
Asymptotically maximal hypersurfaces exist in projective spaces \cite{itenbergviro}, in toric varieties \cite{bertrand} and in surfaces \cite[Theorem 5]{gwexp}. However, it is not known whether every real algebraic variety contains asymptotically maximal hypersurfaces, and it is not clear (at least to the author) whether one should expect this property to hold for every $X$.
One of the consequences of the main result of this paper is that \emph{any} real algebraic variety always contains real algebraic hypersurfaces whose Betti numbers grow at the maximal possible order. More precisely, we obtain the following result. \begin{thm}\label{thm existence} Let $X$ be a real algebraic variety of dimension $n$ and $D$ be a real very ample divisor of $X$.
Then there exist $c>0$ and $m_0\in\mathbf{N}$ such that, for any $m\geq m_0$, there exists $Z\in\abs{mD}$ with $b_i\left(Z(\mathbf{R})\right)\geq cm^{n}$ for any $i\in\{0,\dots,n-1\}$. \end{thm} \subsection{Existence of hypersurfaces with prescribed components} We actually prove a more precise result than Theorem \ref{thm existence}. Indeed, for any smooth closed (not necessarily connected) hypersurface $\Sigma$ of $\mathbf{R}^n$, we can produce real algebraic hypersurfaces $Z_m\in\abs{mD}$ such that $Z_m(\mathbf{R})$ contains at least $cm^n$ connected components diffeomorphic to $\Sigma$. This is the content of Theorem \ref{thm existence of components}, which is the main result of our paper. In order to state it, we need the following definition. \begin{defn}\label{defN} Let $\Sigma$ be a closed smooth manifold, not necessarily connected. For any smooth manifold $S$, we denote by $\mathcal{N}_{\Sigma}(S)$ the number of connected components of $S$ that are diffeomorphic to $\Sigma$. \end{defn} \begin{thm}\label{thm existence of components} Let $X$ be a real algebraic variety of dimension $n$ and $D$ be a real very ample divisor of $X$. For any smooth closed hypersurface $\Sigma$ of $\mathbf{R}^n$,
there exist $c>0$ and $m_0\in\mathbf{N}$ such that, for any $m\geq m_0$, there exists $Z\in\abs{mD}$ with $\mathcal{N}_{\Sigma}\left(Z(\mathbf{R})\right)\geq cm^{n}$. \end{thm} Theorem \ref{thm existence} then follows from Theorem \ref{thm existence of components} by taking as $\Sigma$ any hypersurface with $b_i(\Sigma)\geq 1$ for any $i\in\{0,\dots,n-1\}$.
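For instance, one possible choice of such a $\Sigma$ is the disjoint union $\bigsqcup_{i=0}^{n-1}S^i\times S^{n-1-i}$: each product $S^i\times S^{n-1-i}$ embeds in $\mathbf{R}^n$ as the boundary of a tubular neighborhood of a standardly embedded sphere $S^i\subset\mathbf{R}^{i+1}\subset\mathbf{R}^n$ (whose normal bundle is trivial), and by the K\"unneth formula
$$
b_i(\Sigma)\geq b_i\left(S^i\times S^{n-1-i}\right)\geq 1\qquad\text{for any } i\in\{0,\dots,n-1\}.
$$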
The proof of Theorem \ref{thm existence of components} is probabilistic and it is given in Section \ref{section probability}. The idea is to construct a probability measure on $\abs{mD}$ for which the expected value $\mathbf{E}\left(\mathcal{N}_{\Sigma}(Z(\mathbf{R})) \right)$ of $\mathcal{N}_{\Sigma}(Z(\mathbf{R}))$, for $Z\in\abs{mD}$, is at least $cm^n$, for $m$ large enough, where $c$ is a positive constant independent of $m$. This is the content of Theorem \ref{thm expected}. Remark that in Section \ref{Section complete int} we will state a generalization of Theorem \ref{thm existence of components} for complete intersections.
\subsection{Existence of many lagrangians in complex hypersurfaces} We now give an application of Theorem \ref{thm existence of components} in symplectic geometry. Recall that the complex projective space $\mathbf{C}\mathbf{P}^n$ is equipped with a natural symplectic form $\omega_{FS}$, called the Fubini-Study symplectic form. Any smooth complex hypersurface of $\mathbf{C}\mathbf{P}^n$ inherits, by restriction, a symplectic form. Recall also that a lagrangian of a symplectic manifold $M$ is a smooth closed submanifold of dimension half the dimension of $M$ and for which the restriction of the symplectic form is zero everywhere. For example, $\mathbf{R}\mathbf{P}^n$ is a lagrangian of $\mathbf{C}\mathbf{P}^n$ and, more generally, the real locus of a real algebraic hypersurface of $\mathbf{C}\mathbf{P}^n$ is a lagrangian of its complex locus. As an application of Theorem \ref{thm existence of components}, we obtain an easy proof of the following result by D. Gayet. \begin{thm}\cite[Theorem 1.1]{GayetLagrangian}\label{lagr} For any smooth closed hypersurface $\Sigma$ of $\mathbf{R}^n$, there exists $c>0$ and $m_0\in\mathbf{N}$ such that, for any $m\geq m_0$, any degree $m$ smooth complex hypersurface of $\mathbf{C}\mathbf{P}^n$ contains at least $cm^n$ pairwise disjoint lagrangians diffeomorphic to $\Sigma$. \end{thm} \begin{proof}
Theorem \ref{thm existence of components} provides smooth complex hypersurfaces of degree $m$ with at least $cm^n$ pairwise disjoint lagrangians diffeomorphic to $\Sigma$ (as the real locus of a real algebraic hypersurface is lagrangian). As, from a symplectic point of view, all degree $m$ smooth complex hypersurfaces of $\mathbf{C}\mathbf{P}^n$ are isomorphic, this implies that \textit{any} degree $m$ smooth complex hypersurface contains at least $cm^n$ pairwise disjoint lagrangians diffeomorphic to $\Sigma$.
\end{proof}
Actually, the same result and argument work for any K\"ahler manifold $(X,\omega)$ equipped with an ample divisor $D$ with $\omega\in c_1(D)$ that \emph{can be defined over $\mathbf{R}$}. Although our proof is simpler than the one in \cite{GayetLagrangian} (because we do not use any quantitative Moser-type argument), it should be noted that in \cite{GayetLagrangian} this result is proved for any complex projective variety $X$, and not just for those that can be defined over $\mathbf{R}$.
It is worth noticing that the order $m^n$ appearing in Theorem \ref{lagr} is optimal when the Euler characteristic of $\Sigma$ is nonzero. Indeed, as remarked in \cite[Corollary 1.2]{GayetLagrangian}, if $\chi(\Sigma)\neq 0$ and $Z_m$ denotes a smooth degree $m$ complex hypersurface of $\mathbf{C}\mathbf{P}^n$, then (the homology classes of) the disjoint lagrangians diffeomorphic to $\Sigma$ are linearly independent in $H_{n-1}(Z_m,\mathbf{Z})$ and the dimension of $H_{n-1}(Z_m,\mathbf{Z})$ grows exactly as $m^n$ when $m\rightarrow\infty$. \subsection{Existence of complete intersections with prescribed topology}\label{Section complete int} We can actually prove all the previous results not only for hypersurfaces, but more generally for complete intersections. For this, recall that, as for the case of hypersurfaces, the topology of the complex locus of a complete intersection $Z_1\cap\dots\cap Z_r$, with $Z_i\in \abs{mD}$, does not depend on the choice of the hypersurfaces $Z_i$, if they are chosen generically. One can then compute the asymptotics for the total Betti number of $Z_1(\mathbf{C})\cap\dots\cap Z_r(\mathbf{C})$ for generic $Z_i\in\abs{mD}$ and get \[b(Z_1(\mathbf{C})\cap\dots\cap Z_r(\mathbf{C}))=\binom{n-1}{r-1}D^nm^n+O(m^{n-1}) \] as $m\rightarrow\infty$, see \cite[Proposition 2.2]{anc6}. In this setting, we have the following result.
\begin{thm}\label{thm existence of components 2} Let $X$ be a real algebraic variety of dimension $n$ and $D$ be a real very ample divisor of $X$. For any $r$-codimensional closed submanifold $\Sigma$ of $\mathbf{R}^n$ with trivial normal bundle,
there exist $c>0$ and $m_0\in\mathbf{N}$ such that for any $m\geq m_0$ there exist $Z_1,\dots,Z_r\in\abs{mD}$ with $$\mathcal{N}_{\Sigma}\left(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\right)\geq cm^{n}.$$ \end{thm} Theorem \ref{thm existence of components} is exactly Theorem \ref{thm existence of components 2} for $r=1$. Remark indeed that any closed hypersurface of $\mathbf{R}^n$ has trivial normal bundle. On the other hand, it is worth noting that Theorem \ref{thm existence of components 2} is not a consequence of Theorem \ref{thm existence of components} by some induction, for example by considering $Z_1\cap Z_2$ as a hypersurface of $Z_1$ and then applying Theorem \ref{thm existence of components} to $Z_1$. Indeed the hypersurfaces $Z_1,Z_2\in\abs{mD}$ have degree that varies with $m$, and thus no induction can be applied since Theorem \ref{thm existence of components} requires a fixed ambient variety. Even if one tried to get around this problem by fixing once and for all one of the hypersurfaces, say $Z_1$, and considering $Z_1\cap Z_2$ as a hypersurface of $Z_1$, then applying Theorem \ref{thm existence of components} to $Z_1$ one would get only $cm^{n-1}$ connected components diffeomorphic to $\Sigma$, and not $cm^{n}$.
\begin{oss} Analogues of Theorems \ref{thm existence} and \ref{lagr} can then be stated for complete intersections and are direct consequences of Theorem \ref{thm existence of components 2}, in the same way that Theorems \ref{thm existence} and \ref{lagr} are consequences of Theorem \ref{thm existence of components}. \end{oss} \subsection{Organization of the paper} The paper is organized as follows.\\
In Section \ref{section probability}, we define a probability measure on the linear system $\abs{mD}$ and prove Theorem \ref{thm existence of components 2} assuming a key result, namely Proposition \ref{barrier}. In Section \ref{section proof prop}, we prove Proposition \ref{barrier}.
In Section \ref{section questions} we make some comments on this paper and on related works and we address some questions arising from this work.
\subsection*{Acknowledgments} This paper was written while I was postdoc at the Institut de Recherche Math\'ematique Avanc\'ee, Universit\'e de Strasbourg. I thank the IRMA for the excellent conditions given to me during my postdoc.
\section{Probability measures on the linear systems and proof of the main result}\label{section probability} In this section we equip the linear system $\abs{mD}$ with a probability measure. Here $D$ is a very ample divisor of a real algebraic variety $X$. With respect to this probability measure, we estimate the expected value of $\mathcal{N}_{\Sigma}(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R}))$, for $Z_i\in\abs{mD}$. This is the content of Theorem \ref{thm expected}, which directly implies Theorem \ref{thm existence of components 2}.
\subsection{Linear systems and hyperplane sections}\label{subsec hyperplane} The very ample divisor $D$ embeds $X$ into $\mathbf{P}^N$, for some $N\in\mathbf{N}$. We fix once and for all such an embedding and identify $X$ with its image in $\mathbf{P}^N$. A real algebraic hypersurface $Z\in\abs{D}$ is then obtained as an intersection $H\cap X$ of $X$ with a hyperplane $H\in \abs{\mathcal{O}(1)}$ defined over $\mathbf{R}$. More generally, given a degree $m$ real algebraic hypersurface $H$ in $\mathbf{P}^N$, its intersection with $X$ defines a real algebraic hypersurface $Z:=H\cap X$ of $X$ in the linear system $\abs{mD}$. Note however that not all hypersurfaces lying in $\abs{mD}$ are of the form $H\cap X$, for $H\in\abs{\mathcal{O}(m)}$, that is, the map $H\in\abs{\mathcal{O}(m)}\mapsto H\cap X \in \abs{mD}$ is not surjective in general, when $m>1$.
\subsection{Probability measure on $\abs{mD}$} Let $\mathbf{R}^{hom}_m[X_0,\dots,X_N]$ be the space of real homogeneous polynomials of degree $m$ in $N+1$ variables. This space coincides with the space of global algebraic sections of the line bundle $\mathcal{O}(m)$ on $\mathbf{P}^N$, which are defined over $\mathbf{R}$. We endow the space $\mathbf{R}^{hom}_m[X_0,\dots,X_N]$ with a Gaussian probability measure $\mu_m$ as follows. Let $h_m$ be the Fubini-Study metric on the line bundle $\mathcal{O}(m)$ and let $g_{FS}$ be the Riemannian Fubini-Study metric on $\mathbf{P}^N(\mathbf{R})$. Then, for any pair of real polynomials $P_1,P_2\in \mathbf{R}^{hom}_m[X_0,\dots,X_N]$ we define \begin{equation}\label{L2 scalar product} \langle P_1,P_2\rangle_{L^2}=\int_{\mathbf{P}^N(\mathbf{R})}h_m(P_1,P_2)d\mathrm{vol}_{FS}. \end{equation} This scalar product induces a Gaussian measure $\mu_m$ on $\mathbf{R}^{hom}_m[X_0,\dots,X_N]$ defined by \begin{equation} \mu_m(U)=\frac{1}{\sqrt{\pi}^{d_m}}\int_{U}e^{-\norm{P}_{L^2}^2}dP \label{gaussian measure}\end{equation} for any open set $U\subset\mathbf{R}^{hom}_m[X_0,\dots,X_N]$, where $d_m=\dim \mathbf{R}^{hom}_m[X_0,\dots,X_N]$.
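Note that \eqref{gaussian measure} is indeed a probability measure: identifying $\mathbf{R}^{hom}_m[X_0,\dots,X_N]$ with $\mathbf{R}^{d_m}$ through an orthonormal basis for \eqref{L2 scalar product}, one gets
$$
\mu_m\left(\mathbf{R}^{hom}_m[X_0,\dots,X_N]\right)=\frac{1}{\sqrt{\pi}^{d_m}}\prod_{i=1}^{d_m}\int_{\mathbf{R}}e^{-a_i^2}da_i=1,
$$
by the classical identity $\int_{\mathbf{R}}e^{-a^2}da=\sqrt{\pi}$.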
For any polynomial $P\in \mathbf{R}^{hom}_m[X_0,\dots,X_N]$, we denote by $V_P=\{P=0\}\subset \mathbf{P}^N$ the real algebraic hypersurface defined by $P$ and by $Z_P:=V_P\cap X$ the real algebraic hypersurface of $X$ defined by $P_{\mid X}$. As explained in Section \ref{subsec hyperplane}, the hypersurface $Z_P$ is an element of the linear system $\abs{mD}$. \begin{defn}\label{defprob} The map $\varphi:P\in\mathbf{R}^{hom}_m[X_0,\dots,X_N]\mapsto Z_P\in \abs{mD}$ induces a probability measure $\mathrm{Prob}_m:=\varphi_*\mu_m$ on $\abs{mD}$. For any $r\in\{1,\dots,n\}$ we equip the $r$-fold Cartesian products $\mathbf{R}^{hom}_m[X_0,\dots,X_N]^r$ and $\abs{mD}^r$ with the product measures, which we still denote respectively by $\mu_m$ and $\mathrm{Prob}_m$. \end{defn} \begin{oss} By what we said in Section \ref{subsec hyperplane}, the probability measure $\mathrm{Prob}_m$ is degenerate, that is, it is supported in a proper subspace of $\abs{mD}^r$. \end{oss} \subsection{Expected value of $\mathcal{N}_{\Sigma}(Z(\mathbf{R}))$} Let $\Sigma$ be a smooth closed $r$-codimensional submanifold of $\mathbf{R}^n$ with trivial normal bundle. We are interested in the random variable $$(Z_1,\dots,Z_r)\in\abs{mD}^r\mapsto \mathcal{N}_{\Sigma}\left(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\right),$$ where $\mathcal{N}_{\Sigma}$ denotes the number of connected components that are diffeomorphic to $\Sigma$, see Definition \ref{defN}. The next theorem estimates from below the expectation \begin{multline*} \mathbf{E}\left(\mathcal{N}_{\Sigma}(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})) \right) \\ =\displaystyle\int_{(Z_1,\dots,Z_r)\in\abs{mD}^r}\mathcal{N}_{\Sigma}\left(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\right)d\mathrm{Prob}_m(Z_1,\dots,Z_r) \end{multline*} where $\mathrm{Prob}_m$ is the probability measure on $\abs{mD}^r$ defined in Definition \ref{defprob}.
\begin{thm}\label{thm expected} There exist $c>0$ and $m_0$ such that for any $m\geq m_0$ we have $$\mathbf{E}\left(\mathcal{N}_{\Sigma}(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})) \right)\geq cm^n.$$ \end{thm} Theorem \ref{thm expected} directly implies Theorem \ref{thm existence of components 2}, since we have the trivial bound $$\sup_{(Z_1,\dots,Z_r)\in\abs{mD}^r}\mathcal{N}_{\Sigma}\big(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\big)\geq \mathbf{E}\left(\mathcal{N}_{\Sigma}(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})) \right).$$
In the rest of the section we prove Theorem \ref{thm expected}. For this, remark that the real locus $X(\mathbf{R})$ of $X$ inherits from $\mathbf{P}^N(\mathbf{R})$ a Riemannian metric $g:=g_{FS \mid X(\mathbf{R})}$. For any point $x\in X(\mathbf{R})$, we denote by $B(x,R)$ the geodesic ball of radius $R$ around $x$.
Theorem \ref{thm expected} will be a direct consequence of the following proposition. \begin{prop}\label{barrier} There exist $R,c_\Sigma>0$ and $m_0$ such that for any point $x\in X(\mathbf{R})$ and any $m\geq m_0$ we have \[\mathrm{Prob}_m\bigg\{(Z_1,\dots,Z_r)\in\abs{mD}^r,\hspace{1mm} \mathcal{N}_{\Sigma}\left(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\cap B\big(x,\frac{R}{m}\big)\right)\geq 1\bigg\}\geq c_\Sigma.\] \end{prop} We postpone the proof of this proposition to the next section. We now show how it implies Theorem \ref{thm expected}. \begin{proof}[Proof of Theorem \ref{thm expected}] Let $R$ be the positive constant given by Proposition \ref{barrier}. For any $m\in\mathbf{N}$, we fix a set of points $\{x_i\}_{i\in I_m}$ of $X(\mathbf{R})$ with the property that the balls $B(x_i,\frac{R}{m})$ are all disjoint and the balls $B\left(x_i,\frac{2R}{m}\right)$ cover the whole $X(\mathbf{R})$. In particular, we get $$\textrm{Vol}\left(X(\mathbf{R})\right)\leq \sum_{i\in I_m}\textrm{Vol}\left(B\left(x_i,\frac{2R}{m}\right)\right)\leq \abs{I_m}\textrm{Vol}\left(B_{\mathbf{R}^n}\left(0,\frac{3R}{m}\right)\right)$$ for $m$ large enough. This implies that for $m$ large enough we have \begin{equation}\label{estimates on volumes} \abs{I_m}\geq \frac{\textrm{Vol}\left(X(\mathbf{R})\right)}{\textrm{Vol}\big(B_{\mathbf{R}^n}\left(0,\frac{3R}{m}\right)\big)}=c'\textrm{Vol}\left(X(\mathbf{R})\right)m^n. \end{equation} We can now estimate the expected value of $\mathcal{N}_{\Sigma}(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R}))$, for $(Z_1,\dots,Z_r)\in\abs{mD}^r$. We have the lower bound \begin{multline} \mathbf{E}\left(\mathcal{N}_{\Sigma}(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})) \right)\geq \\
\sum_{i\in I_m}\mathrm{Prob}_m\bigg\{(Z_1,\dots,Z_r)\in\abs{mD}^r,\hspace{1mm} \mathcal{N}_{\Sigma}\left(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\cap B\big(x_i,\frac{R}{m}\big)\right)\geq 1\bigg\}. \end{multline} By Proposition \ref{barrier}, the latter is greater than or equal to $c_\Sigma\abs{I_m}$ which, by \eqref{estimates on volumes}, is greater than or equal to $c_\Sigma c'\textrm{Vol}\left(X(\mathbf{R})\right)m^n=:cm^n$. Hence the result. \end{proof} \section{Proof of Proposition \ref{barrier}}\label{section proof prop} In this section we prove Proposition \ref{barrier}. Following \cite{gw2}, the idea is to find one complete intersection $Z_1\cap\cdots\cap Z_r$ of degree $m\gg 1$ such that $$\mathcal{N}_{\Sigma}\left(Z_1(\mathbf{R})\cap\dots\cap Z_r(\mathbf{R})\cap B\big(x,\frac{R}{m}\big)\right)\geq 1$$ and then prove that this complete intersection can be perturbed in a precise quantitative way so that this property happens with positive probability. Actually, still keeping in mind this idea, we will use a slightly different, more flexible, point of view, developed recently in \cite{LerarioStecconi}.
The formalism developed in \cite{LerarioStecconi} works better when one considers functions rather than hypersurfaces or sections of line bundles. We then consider the double covering $S^N$ of $\mathbf{P}^N(\mathbf{R})$ and the double covering $M\subset S^N$ of $X(\mathbf{R})$. We fix on $S^N$ the standard round metric $g_{S^N}$ (so that the Fubini-Study metric $g_{FS}$ is the quotient of $g_{S^N}$ by the antipodal map). For any $r$-tuple of real polynomials $\underline{P}=(P_1,\dots,P_r)\in \mathbf{R}^{hom}_m[X_0,\dots,X_N]^r$ we denote by $V_{\underline{P}}$ the complete intersection in $\mathbf{P}^N$ defined by $\{P_1=0\}\cap\dots\cap \{P_r=0\}$ and by $Z_{\underline{P}}$
the complete intersection in $X$ defined by $V_{\underline{P}}\cap X$. Fix any point $x\in M$. To prove Proposition \ref{barrier}, it is sufficient to show that \begin{equation}\label{new estimate} \mu_m\left\{\underline{P}\in \mathbf{R}^{hom}_m[X_0,\dots,X_N]^r, \mathcal{N}_{\Sigma}\left(Z_{\underline{P}}(\mathbf{R})\cap B\big(x,\frac{R}{m}\big)\right)\geq 1\right\}\geq c_\Sigma, \end{equation} where the probability measure $\mu_m$ is defined in \eqref{gaussian measure}.
Remark that the scalar product \eqref{L2 scalar product} can also be defined by $$\langle P_1,P_2\rangle_{L^2}=\int_{S^N}P_1P_2d\mathrm{vol}_{S^N}.$$ A random polynomial $P\in \mathbf{R}^{hom}_m[X_0,\dots,X_N]$ with respect to the Gaussian measure $\mu_m$ can then be written as $$P=\sum_{i=1}^{d_m}a_iP_i$$ where $d_m=\dim \mathbf{R}^{hom}_m[X_0,\dots,X_N]$, the family $\{P_i\}_{i}$ is an orthonormal basis of $\mathbf{R}^{hom}_m[X_0,\dots,X_N]$ and $(a_i)_i$ are independent identically distributed centered Gaussian variables. A random $\underline{P}\in \mathbf{R}^{hom}_m[X_0,\dots,X_N]^r$ is then an $r$-tuple $\underline{P}=(P_1,\dots,P_r)$ of independent random polynomials $P_i$.
The covariance function $\mathcal{K}_m(x,y):=\mathbf{E}(P(x)P(y))$ of the random polynomial $P$ is known to have a universal limit at scale $m^{-1}$ (see \cite[Theorem 4.4]{hormander}). This means that for any fixed $R>0$, any point $x$ and any $u,v\in B_{\mathbf{R}^N}(0,R)\subset \mathbf{R}^N\simeq T_xS^N$ (where $T_xS^N\simeq \mathbf{R}^N$ is given by any isometry) one has that $$K_{m,x}(u,v):=m^{-n}\mathcal{K}_m\left(\exp_x\left(\frac{u}{m}\right),\exp_x\left(\frac{v}{m}\right)\right)$$ converges in the $\mathscr{C}^{\infty}$-topology to a function $K:B_{\mathbf{R}^N}(0,R)\times B_{\mathbf{R}^N}(0,R)\rightarrow\mathbf{R}$ which is independent of $x$. The function $K$ is explicit and given by $$K(u,v)=\int_{B_{\mathbf{R}^N}(0,1)}e^{i\langle u-v,\xi\rangle}d\xi.$$
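Let us also recall, for the reader's convenience, that $K$ is the Fourier transform of the indicator function of the unit ball and thus admits a classical closed form in terms of Bessel functions of the first kind: setting $w=u-v$, one has
$$
K(u,v)=\int_{B_{\mathbf{R}^N}(0,1)}e^{i\langle w,\xi\rangle}d\xi=(2\pi)^{N/2}\frac{J_{N/2}(\abs{w})}{\abs{w}^{N/2}}.
$$
In particular, for $N=1$ this reduces to the familiar kernel $\frac{2\sin\abs{w}}{\abs{w}}$.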
By restriction, we then have that the covariance function of the random polynomial $P$ \textit{restricted to $M$} also has a universal limit at scale $m^{-1}$. This universal limit is nothing but the restriction of $K$ to $B_{\mathbf{R}^n}(0,R)\times B_{\mathbf{R}^n}(0,R)$, where we see $T_xM\subset T_xS^N$ as the subspace $\mathbf{R}^n\subset \mathbf{R}^N$ defined by setting the last $N-n$ coordinates equal to $0$.
The restriction of the function $K$ to $B_{\mathbf{R}^n}(0,R)\times B_{\mathbf{R}^n}(0,R)$ is the covariance function of a Gaussian field $F:B_{\mathbf{R}^n}(0,R)\rightarrow\mathbf{R}$ (such a Gaussian field is precisely characterized by the property $K(u,v)=\mathbf{E}(F(u)F(v))$).
Consider $r$ independent copies $F_1,\dots, F_r$ of the Gaussian field $F$, and set $\underline{F}=(F_1,\dots,F_r)$, which is a Gaussian field taking values in $\mathscr{C}^{\infty}\left(B_{\mathbf{R}^n}(0,R),\mathbf{R}^r\right)$. We now need three lemmas about the geometry of this Gaussian field $\underline{F}$.
\begin{lemma}\label{lemma converg} The restriction to $B_{\mathbf{R}^n}(0,R)$ of the rescaled $r$-tuple of random polynomials $$\underline{P_m}(\cdot):=\left(m^{-n/2}P_1\left(\exp_x\left(\frac{\cdot}{m}\right)\right),\dots,m^{-n/2}P_r\left(\exp_x\left(\frac{\cdot}{m}\right)\right)\right)$$ converges in probability to $\underline{F}$ as $m\rightarrow\infty$. \end{lemma} \begin{proof} By construction, we have that the covariance function of the rescaled random polynomial $m^{-n/2}P\left(\exp_x\left(\frac{\cdot}{m}\right)\right)$ converges (in the $\mathscr{C}^{\infty}$-topology) to the covariance function of $F$. This implies that the covariance function of $\underline{P_m}$ converges to the covariance function of $\underline{F}$. By \cite[Theorem 5]{LerarioStecconi}, this implies the result. \end{proof} \begin{lemma}\label{support} For any smooth function $f\in\mathscr{C}^{\infty}\left(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R}^r\right)$ and any neighborhood $U$ of $f$ in $\mathscr{C}^{\infty}\left(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R}^r\right)$, the probability that $\underline{F}$ takes values in $U$ is strictly positive. That is, the support of $\underline{F}$ contains the space $\mathscr{C}^{\infty}\left(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R}^r\right)$. Here, $\bar{B}_{\mathbf{R}^n}(0,R)$ denotes the closed ball of radius $R$ around $0$. \end{lemma} \begin{proof} It is enough to prove that the support of each component of $\underline{F}$ includes $\mathscr{C}^{\infty}\left(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R}\right)$. By the density of polynomials inside $\mathscr{C}^{\infty}\left(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R}\right)$, it is actually enough to prove that any monomial $x_1^{k_1}\cdots x_n^{k_n}$ is in the support of each component of $\underline{F}$.
By definition, a component of $\underline{F}$ is a copy of the Gaussian field $F$ whose covariance function is $$K(u,v)=\int_{B_{\mathbf{R}^N}(0,1)}e^{i\langle u-v,\xi\rangle}d\xi$$ and whose spectral measure $\sigma$ is the Lebesgue measure on $B_{\mathbf{R}^N}(0,1)$.
The support of $F$ is known to be equal to the closure (in the $\mathscr{C}^{\infty}(\bar{B}(0,R),\mathbf{R})$ topology) of the following space of functions $$\mathcal{H}_F=\mathrm{Span}\{K_v, v\in\mathbf{R}^n\}$$ where $K_v(x):=\int_{B_{\mathbf{R}^N}(0,1)}e^{i\langle x-v,\xi\rangle}d\xi$, see \cite[Theorem 6]{LerarioStecconi}. It is then enough to prove that any monomial $x_1^{k_1}\cdots x_n^{k_n}$ can be approximated (uniformly together with its derivatives) by elements of the space $\mathcal{H}_F$.
In order to prove this, let $\varphi:\mathbf{R}^N\rightarrow [0,1]\subset\mathbf{R}$ be an even smooth function, which equals $0$ outside $B_{\mathbf{R}^N}(0,1)$ and such that $\int_{\mathbf{R}^N}\varphi(\xi)d\xi=1$. Let us consider the function $\varphi_t(\xi)=\frac{1}{t^N}\varphi(\frac{\xi}{t})$, for any $0<t\leq 1$.
Given $k_1,\dots,k_n\in\mathbf{N}$, with $\sum_{i=1}^nk_i=k$, consider the function $$\frac{\partial^k}{\partial \xi_1^{k_1}\cdots\partial \xi_n^{k_n}}\varphi_t.$$
By construction, the value at $x\in B_{\mathbf{R}^n}(0,R)$ of its inverse Fourier transform equals
$$\int_{\mathbf{R}^N}\frac{\partial^k}{\partial \xi_1^{k_1}\cdots\partial \xi_n^{k_n}}\varphi_t(\xi)e^{i\langle \xi,x\rangle}d\xi=(-i)^kx_1^{k_1}\cdots x_n^{k_n}\int_{\mathbf{R}^N}\varphi_t(\xi)e^{i\langle \xi,x\rangle}d\xi$$ where $\int_{\mathbf{R}^N}\varphi_t(\xi)e^{i\langle \xi,x\rangle}d\xi\xrightarrow{t\rightarrow 0} 1$ in $\mathscr{C}^\infty(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R})$. Remark also that the support of $\varphi_t$ is included in $\bar{B}_{\mathbf{R}^N}(0,1)$ for any $t\leq 1$. Denote by $f_t$ the inverse Fourier transform of $i^k\frac{\partial^k}{\partial \xi_1^{k_1}\cdots\partial \xi_n^{k_n}}\varphi_t.$ We then have that the function $$g_t(x):=\int_{v\in\mathbf{R}^N}\int_{\xi\in\bar{B}_{\mathbf{R}^N}(0,1)}f_t(v)e^{i\langle x-v,\xi\rangle}dvd\xi$$ converges, up to a positive normalization constant depending only on the Fourier convention, to $x_1^{k_1}\cdots x_n^{k_n}$ in the $\mathscr{C}^\infty(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R})$ topology as $t\rightarrow 0$. On the other hand, the functions $g_t$ can be approximated in the $\mathscr{C}^\infty(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R})$ topology by elements of $\mathcal{H}_F$, by taking Riemann sums.
This implies that the monomial $x_1^{k_1}\cdots x_n^{k_n}$ can be approximated (uniformly together with its derivatives) by elements of $\mathcal{H}_F$. Hence the result.
\end{proof}
\begin{lemma}\label{stability} The probability that the zero locus of $\underline{F}$ is diffeomorphic to $\Sigma$ is bounded from below by some strictly positive number $c_\Sigma$. \end{lemma} \begin{proof} Let $f_{\Sigma}\in \mathscr{C}^{\infty}\left(\bar{B}_{\mathbf{R}^n}(0,R),\mathbf{R}^r\right)$ be a function vanishing transversally and such that $\{f_\Sigma=0\}$ is diffeomorphic to $\Sigma$. Remark that such a function exists exactly by the hypothesis that the normal bundle of $\Sigma$ is trivial. By Thom's isotopy lemma, the zero set of any other function $f$ lying in a small neighborhood $U_\Sigma$ of $f_\Sigma$ is still diffeomorphic to $\Sigma$. By Lemma \ref{support}, $f_{\Sigma}$ is in the support of $\underline{F}$, and then the probability that $\underline{F}$ takes values in $U_\Sigma$ is strictly positive, which proves the lemma. \end{proof} We are now able to finish the proof of Proposition \ref{barrier}. By Lemmas \ref{lemma converg} and \ref{stability} and by \cite[Theorem 5]{LerarioStecconi}, we obtain that, for $m$ large enough, the probability that $Z_{\underline{P}}(\mathbf{R})\cap B(x,\frac{R}{m})$, for $\underline{P}\in\mathbf{R}_m^{hom}[X_0,\dots,X_N]^r$, has a connected component diffeomorphic to $\Sigma$ is greater than $c_\Sigma$. This is exactly \eqref{new estimate}, which finishes the proof of Proposition \ref{barrier}. \qed \section{Further comments and related results}\label{section questions}
\subsection{Some questions} Let $D$ be a divisor on a real algebraic variety of dimension $n$. For any $i\in\{0,\dots,n-1\}$ we define $$v_i(D)=\displaystyle\limsup_{m\rightarrow\infty}\frac{1}{m^n}\sup_{Z\in\abs{mD}}b_i(Z(\mathbf{R}))\hspace{4mm}\mathrm{and}\hspace{4mm}v(D)=\displaystyle\limsup_{m\rightarrow\infty}\frac{1}{m^n}\sup_{Z\in\abs{mD}}b(Z(\mathbf{R})).$$ One always has the trivial inequality $v_i(D)\leq v(D)$. The present paper implies that $v_i(D)>0$ if $D$ is ample. In this case, the existence of asymptotically maximal hypersurfaces in $\abs{mD}, m\gg 0$, is equivalent to the equality $v(D)=D^n$, where $D^n$ denotes the top self-intersection number of $D$. What about the other numbers $v_i(D)$? Are there examples of $(X,D)$ for which $v_i(D)<v(D)$, for some $i$? Do there exist relations between the numbers $v_i(D)$, apart from the trivial equality $v_i(D)=v_{n-i-1}(D)$? In real algebraic geometry, these numbers may refine the concept of the volume of divisors. Recall that, if $D$ is a divisor on a complex manifold $X$, then the volume of $D$ is defined by the formula $$\textrm{Vol}(D)=\displaystyle\limsup_{m\rightarrow\infty}\frac{n!}{m^n}\dim H^0(X,\mathcal{O}(mD)).$$ The volume of a divisor encodes information about it. For example, if $D$ is ample, then $\textrm{Vol}(D)=D^n$. A divisor $D$ is big precisely when $\textrm{Vol}(D)>0$. If now we consider $X$ and $D$ to be defined over $\mathbf{R}$, can one characterize the bigness of $D$ in terms of the numbers $v_i(D)$? If $D$ is big, are the numbers $v_i(D)$ positive for every $i\in\{0,\dots,n-1\}$?
\subsection{Some comments on the probability measures on $\abs{mD}$} Let $D$ be an ample divisor on a real algebraic variety $X$ and denote by $H^0(X,\mathcal{O}(mD))$ the space of global algebraic sections of $\mathcal{O}(mD)$ defined over $\mathbf{R}$. From the algebro-geometric point of view, the probability measures on $H^0(X,\mathcal{O}(mD))$ that seem most natural are the so-called complex Fubini-Study measures. These are the Gaussian measures on $H^0(X,\mathcal{O}(mD))$ induced by the scalar product defined by \begin{equation}\label{complex scalar product} \langle s_1,s_2\rangle=\int_{X(\mathbf{C})}h^{\otimes m}(s_1,s_2)\frac{\omega^{\wedge n}}{n!} \end{equation} where $h$ is a hermitian metric on $\mathcal{O}(D)$ with positive curvature $\omega$. It should be noted that the Kodaira embeddings $\Phi_m:X(\mathbf{C})\rightarrow \mathbf{P}^{d_m}(\mathbf{C})$ induced by the Hermitian products \eqref{complex scalar product} are asymptotic isometries \cite{tian,bouche}, in the sense that $\frac{1}{m}\Phi^*_m\omega_{FS}\rightarrow\omega$, as $m\rightarrow\infty$. With respect to the probability measure induced by the Hermitian product \eqref{complex scalar product}, asymptotically maximal hypersurfaces are exponentially rare \cite{anc5,diatta,gwexp}. This reflects the fact that it is very difficult in practice to construct such hypersurfaces: if one randomly writes down an equation, it is unlikely to define an (almost) maximal hypersurface. For these probability measures, the expected value $\mathbf{E}\left(b_i\left(Z(\mathbf{R})\right)\right)$ of each Betti number of a random real hypersurface $Z\in\abs{mD}$ is of order $m^{n/2}$ (see \cite{gw3,gw2}) and any closed hypersurface $\Sigma\subset\mathbf{R}^n$ appears with positive probability in any ball of radius $\sim m^{-1/2}$ (see \cite{gw2}).
In contrast, the probability measure used in this paper seems less natural from an algebraic geometry point of view and, because of this, produces more extreme effects: the average of the Betti numbers of a random hypersurface is of maximal order (for $X=\mathbf{P}^n$ and for the number of connected components, this was proved in \cite{ll,ns}), and this implies precisely the existence of hypersurfaces with rich topology. Note that for the sphere $S^n$ the existence of real algebraic hypersurfaces with many components diffeomorphic to a given closed hypersurface $\Sigma\subset\mathbf{R}^n$ can also be obtained from \cite{gwuniversalcomponent}, by remarking that linear combinations of the Laplacian eigenfunctions with eigenvalues smaller than $L$ coincide with homogeneous polynomials of degree $\sim \sqrt{L}$.
We recall that for the projective space, and more generally for toric varieties, the existence of asymptotically maximal hypersurfaces is known. In these cases, where more combinatorial tools are accessible, it would be interesting to construct probability measures in which such hypersurfaces appear with high probability.
\end{document}
\begin{document}
\title{1-Overlap Cycles for Steiner Triple Systems}
\begin{abstract}
A number of applications of Steiner triple systems (e.g. disk erasure codes) exist that require a special ordering of their blocks. Universal cycles, introduced by Chung, Diaconis, and Graham in 1992, and Gray codes are examples of listing the elements of a combinatorial family in a specific manner, and Godbole introduced the following generalization of these in 2010. 1-overlap cycles require a set of strings to be ordered so that the last letter of one string is the first letter of the next. In this paper, we prove the existence of 1-overlap cycles for automorphism free Steiner triple systems of each possible order. Since Steiner triple systems have the property that each block can be represented uniquely by a pair of its points, these 1-overlap cycles can be compressed by omitting non-overlap points to produce rank two universal cycles on such designs, expanding on the results of Dewar. \end{abstract}
\textbf{Keywords:} Overlap cycles, universal cycles, Gray codes.
\textbf{MSC Classifications:} 68R15, 05B05.
\section{Introduction}
Steiner triple systems, or $(v,3,1)$-designs, appear in many interesting applications. They can be viewed as set systems, combinatorial designs, hypergraphs, or many other types of structures. A Steiner triple system of order $v$, or STS($v$), is a pair $(X, \mathcal{B})$ with $|X|=v$, where $\mathcal{B}$ is a set of triples, or blocks, from $X$. The set $\mathcal{B}$ has the property that every pair of points from $X$ appears in exactly one triple in $\mathcal{B}$. It has been completely determined for which values of $v$ such a set system can exist.
\begin{thm}\label{STS}
\emph{\cite{Kirkman}} There exists an STS($v$) if and only if $v \equiv 1, 3 \pmod 6$. \end{thm}
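The defining property above is easy to check mechanically. Below is a minimal sketch (the helper name \texttt{is\_sts} and the choice of the cyclic STS(7) generated by the base block $\{0,1,3\}$ are ours, not from the paper):

```python
from itertools import combinations

def is_sts(v, blocks):
    """True iff every pair of points from {0, ..., v-1} lies in exactly one block."""
    pairs = [p for b in blocks for p in combinations(sorted(b), 2)]
    return len(pairs) == len(set(pairs)) == v * (v - 1) // 2

# The cyclic STS(7): translates of the base block {0, 1, 3} modulo 7.
fano = [frozenset((x + t) % 7 for x in (0, 1, 3)) for t in range(7)]
print(is_sts(7, fano))  # True
```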
Several authors have considered ordering blocks in Steiner triple systems in specific ways. For example, in \cite{Disks}, the authors consider such an ordering to construct erasure codes. In \cite{Dewar}, Dewar uses a modified universal cycle structure to list the blocks and points within each block of these designs in an organized manner. \textbf{Universal cycles}, or \textbf{ucycles}, are a type of cyclic Gray code in which a string $a_1a_2 \ldots a_n$ may be followed by the string $b_1 b_2 \ldots b_n$ if and only if $a_{i+1} = b_i$ for all $i \in \{1,2, \ldots , n-1\}$. That is, the two substrings $a_2 a_3 \ldots a_n$ and $b_1 b_2 \ldots b_{n-1}$ are identical \cite{CDG}. We can think of this as an $n-1$ overlap between the strings.
Because any two blocks in a Steiner triple system share at most one point, finding a ucycle over the blocks of an STS is clearly impossible, as consecutive blocks would need to overlap in two points. To remedy this problem, Dewar introduces a modified ucycle structure. A \textbf{rank two universal cycle} is a ucycle on a block design in which each block is represented by just two of its elements. Since any pair of points appears in exactly one block of an STS, two points completely identify a unique block in the triple system. Dewar constructs rank two ucycles for the special class of cyclic Steiner triple systems. A \textbf{cyclic} design has an automorphism group containing the cyclic group of order $v$, isomorphic to $\mathbb{Z}_v$, as a subgroup. Since this subgroup contains the automorphism $\pi: i \mapsto i+1 \pmod v$, this implies that we can partition the blocks into classes so that within one class each block can be obtained from any other by repeated applications of $\pi$ on the block elements.
\begin{thm}\label{Dewar}
\emph{(\cite{Dewar}, p. 200)} Every cyclic STS$(v)$ with $v \neq 3$ admits a ucycle of rank two. \end{thm}
While this result allows us to write the list of blocks as a modified ucycle, it is not easy to recover the design from a given ucycle. Given just two points of a block, the only way to recover the missing point is to have a lookup table at hand, which (depending on applications) may defeat the purpose of creating a compact listing. For this reason, we consider overlap cycles.
Overlap cycles were first introduced in \cite{Godbole} for binary and $m$-ary strings. To extend this concept to Steiner triple systems, let $(X, \mathcal{B})$ be an STS($v$). An \textbf{$s$-overlap cycle} (or $s$-\textbf{ocycle}) on $(X, \mathcal{B})$ is an ordered listing of the blocks in $\mathcal{B}$ so that the last $s$ points in one block are the first $s$ points of its successor in the listing. In the case of triple systems, a 2-ocycle is a ucycle, which as previously discussed cannot be formed on any STS. Hence we consider 1-ocycles.
When writing out ocycles, we can list the sequence fully or we can choose to omit points that do not appear as an overlap (\textbf{hidden} points). When we omit the hidden points, we say that the cycle is written in \textbf{compressed form}. Using this concept, we can view Dewar's rank two ucycles as compressed 1-ocycles and easily obtain the following corollary to Theorem \ref{Dewar}.
\begin{cor}\label{CDewar}
Every cyclic STS$(v)$ with $v \neq 3$ admits a 1-ocycle. \end{cor}
It is a well-known result that there exists a cyclic STS($v$) for every $v \equiv 1, 3 \pmod 6$ except $v = 9$ (See \cite{TripleSystems}, Theorem 7.3). In order to further differentiate our results from Dewar's, we will consider a different class of Steiner triple systems, namely automorphism free (AF) Steiner triple systems, and prove the following result using recursive constructions. However, the constructions used herein may be utilized with various base cases for different (and perhaps not automorphism free) Steiner triple systems.
\begin{result}\label{main}
For every $v \equiv 1, 3 \pmod 6$ with $v \geq 15$, there exists an AF STS$(v)$ with a 1-ocycle. \end{result}
We also include two other direct constructions of 1-ocycles for a Steiner triple system of each order (Results \ref{OC3m6} and \ref{OC1m6}), as well as another recursive construction (Result \ref{OCPC}). While Dewar constructs rank two ucycles for all cyclic designs of each order, these direct constructions may be a simpler method of finding an ocycle when any STS($v$) will do.
In this paper, we will begin with a review of some recursive constructions of automorphism free Steiner triple systems in Section 2 and give the corresponding 1-ocycle constructions. Some of the larger but necessary base cases for these constructions may be found in the appendix. Section 3 discusses similar results for other STS constructions and their corresponding 1-ocycle constructions. As a future direction, it would be interesting to consider these structures over other types of designs, such as Steiner quadruple systems (see \cite{OCSQS}).
\section{Constructions of AF Steiner Triple Systems and 1-Ocycles}
\subsection{Recursive Constructions of AF Steiner Triple Systems}
The first construction produces an AF STS($2v+1$) from an AF STS($v$).
\begin{const}\label{C2v+1}
Given $(X, \mathcal{A})$, an STS($v$) with $v \geq 15$ and with $X$ identified with $\mathbb{Z}_v$, construct a new design $(Y, \mathcal{B})$ with points: $$Y = (\mathbb{Z}_2 \times \mathbb{Z}_v )\cup\{\infty\} $$ and blocks:
\begin{enumerate}
\item $\{(1,a), (1,b), (1,c)\}$ with $\{a,b,c\} \in \mathcal{A}$,
\item $\left\{ (0,x), (0,y), \left(1, \frac{x+y}{2} \right) \right\}$ with $\{x,y\} \subset X$, and
\item $\{(0,x), (1,x), \infty\}$ with $x \in X$.
\end{enumerate} \end{const}
The following theorem from \cite{AFSTS} proves that Construction \ref{C2v+1} is correct.
\begin{thm}\label{2v+1}
\emph{\cite{AFSTS}}. If $(X, \mathcal{A})$ is an STS($v$), then $(Y, \mathcal{B})$ is an STS($2v+1$). In particular, if $(X, \mathcal{A})$ is AF, then $(Y, \mathcal{B})$ is AF. \end{thm}
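Construction \ref{C2v+1} is straightforward to implement and test. Below is a sketch in our own code (the restriction $v \geq 15$ is only needed for the automorphism-free conclusion, so an STS(7) suffices to check pair coverage); $(x+y)/2$ is computed modulo $v$, which is well defined since $v$ is odd:

```python
from itertools import combinations

def c2v1(v, A):
    """Construction 1 sketch. Points: (0,x), (1,x) for x in Z_v, plus 'inf'."""
    inv2 = pow(2, -1, v)  # v is odd, so 2 is invertible mod v
    B  = [frozenset((1, a) for a in blk) for blk in A]                       # type (1)
    B += [frozenset({(0, x), (0, y), (1, (x + y) * inv2 % v)})
          for x, y in combinations(range(v), 2)]                             # type (2)
    B += [frozenset({(0, x), (1, x), 'inf'}) for x in range(v)]              # type (3)
    return B

fano = [(0,1,3),(1,2,4),(2,3,5),(3,4,6),(4,5,0),(5,6,1),(6,0,2)]  # an STS(7)
B = c2v1(7, fano)
pairs = [p for b in B for p in combinations(sorted(b, key=str), 2)]
assert len(B) == 35 and len(pairs) == len(set(pairs)) == 15 * 14 // 2
print("STS(15) built from an STS(7)")
```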
The second construction produces an AF STS($2v+7$) from an AF STS($v$).
\begin{const}\label{C2v+7}
Given $(X, \mathcal{A})$, an STS($v$) with $v \geq 15$, and with $X$ identified with $\mathbb{Z}_v$, construct a new design $(Y, \mathcal{B})$ with points: $$Y = (\mathbb{Z}_2 \times \mathbb{Z}_v) \cup \{ \infty_i \mid |i| \leq 3\}.$$ Fix $(Z, \mathcal{C})$ as some STS(7) on the points $\{-3, -2, -1, 0, 1, 2, 3\}$. The blocks in our new design are as follows:
\begin{enumerate}
\item $\{(1,i), (1,j), (1,k)\}$ with $\{i,j,k\} \in \mathcal{A}$,
\item $\{ \infty_i , \infty_j , \infty_k\}$ with $\{i,j,k\} \in \mathcal{C}$,
\item $\{(0,x), (0, x+2), (0,x+6)\}$ with $x \in \mathbb{Z}_v$,
\item $\{(0,x), (1,x+y), (0, x+2y)\}$ with $\{x, y\} \subset \mathbb{Z}_v$ and $|y|>3$, and
\item $\{\infty_i, (1,j), (0, i+j)\}$ with $|i| \leq 3$ and $j \in \mathbb{Z}_v$.
\end{enumerate} \end{const}
The following theorem from \cite{AFSTS} proves that Construction \ref{C2v+7} is correct.
\begin{thm}\label{2v+7}
\emph{\cite{AFSTS}}. If $(X, \mathcal{A})$ is an STS($v$), then $(Y, \mathcal{B})$ is an STS($2v+7$). In particular, if $(X, \mathcal{A})$ is AF, then $(Y, \mathcal{B})$ is AF. \end{thm}
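Construction \ref{C2v+7} can be checked the same way. In the sketch below (our own code), we read $|y|>3$ as $y \in \{4, \dots, (v-1)/2\}$ up to sign; since the blocks for $y$ and $-y$ coincide, positive representatives suffice. Feeding in the STS(15) base case listed in the next subsection produces an STS(37):

```python
from itertools import combinations

def c2v7(v, A):
    """Construction 2 sketch. Points: (0,x), (1,x) for x in Z_v, plus ('inf', i), |i| <= 3."""
    fano = [(0,1,3),(1,2,4),(2,3,5),(3,4,6),(4,5,0),(5,6,1),(6,0,2)]  # an STS(7), relabeled below
    B  = [frozenset((1, x) for x in blk) for blk in A]                           # type (1)
    B += [frozenset(('inf', i - 3) for i in blk) for blk in fano]                # type (2)
    B += [frozenset({(0, x), (0, (x + 2) % v), (0, (x + 6) % v)})
          for x in range(v)]                                                     # type (3)
    B += [frozenset({(0, x), (1, (x + y) % v), (0, (x + 2 * y) % v)})
          for x in range(v) for y in range(4, (v + 1) // 2)]                     # type (4), |y| > 3
    B += [frozenset({('inf', i), (1, j), (0, (i + j) % v)})
          for i in range(-3, 4) for j in range(v)]                               # type (5)
    return B

# The STS(15) base case from Section 2.2, blocks written in base 15.
sts15 = [frozenset(int(c, 15) for c in s) for s in
         ("210 0a9 971 153 304 461 1a8 807 73c ce1 1db b82 236 605 5b7 742 2dc "
          "cb0 0de e83 39d da4 4e5 52a a7e eb6 6ca a3b b94 48c c95 58d d76 689 9e2").split()]

B = c2v7(15, sts15)
n = 2 * 15 + 7
pairs = [p for b in B for p in combinations(sorted(b, key=str), 2)]
assert len(B) == n * (n - 1) // 6 and len(pairs) == len(set(pairs)) == n * (n - 1) // 2
print("STS(37) built from the STS(15) base case")
```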
\subsection{Base Cases}\label{BaseC}
The recursive constructions given in the previous subsection require six base cases in order to construct recursively an AF STS($v$) for every $v \equiv 1, 3 \pmod 6$ with $v \geq 15$. These base cases are STS($v$)'s for $v = 15, 19, 21,25,27,33$. We also provide 1-ocycles for a non-cyclic STS($v$) for $v = 9,13$. We include a 1-ocycle for the cyclic STS(7) as it is used in the second recursive construction. See the appendix for all cases with $v \geq 19$. \\ \\ $\mathbf{v=7:}$ We use the cyclic $(7,3,1)$-design and produce the 1-ocycle: $$\underline{2},1, \underline{0}, 3, \underline{4}, 2, \underline{5}, 0, \underline{6},4,\underline{1},5,\underline{3},6,\underline{2}$$ or, since each pair appears in exactly one block, we may omit the non-overlap points to write it in compressed form as: $$(2,0,4,5,6,1,3,2).$$
$\mathbf{v=9:}$ We use the non-cyclic design (from \cite{SmallSTS}) and produce the 1-ocycle:
$$\underline{0},1, \underline{2}, 8, \underline{5}, 3, \underline{4}, 1, \underline{7},8,\underline{6},3,\underline{0},4,\underline{8},1,\underline{3},7, \underline{2},4,\underline{6},1,\underline{5},7,\underline{0}$$ or in compressed form: $$(0,2,5,4,7,6,0,8,3,2,6,5).$$ \\ \\
$\mathbf{v=13:}$ We use the non-cyclic design (from \cite{AFSTS}) and produce the 1-ocycle:
$$\underline{1}, 2, \underline{0}, 9, \underline{10}, 12, \underline{1}, 3, \underline{5}, 7, \underline{11}, 9, \underline{6}, 7, \underline{12}, 8, \underline{4}, 9, \underline{5}, 10, \underline{8}, 6, \underline{3}, 11, \underline{10}, 7, \underline{4},$$
$$\underline{4}, 0, \underline{3}, 7, \underline{2}, 5, \underline{12}, 3, \underline{9}, 8, \underline{2}, 10, \underline{6}, 5, \underline{0}, 12, \underline{11}, 2, \underline{4}, 6, \underline{1}, 9, \underline{7}, 0, \underline{8}, 11, \underline{1}.$$ \\ \\ $\mathbf{v=15:}$ We use the AF design (from \cite{SmallSTS}) and produce the 1-ocycle:
$$\begin{array}{c|c|c|c|c}
210 & 807 & 5b7 & da4 & b94 \\
0a9 & 73c & 742 & 4e5 & 48c \\
971 & ce1 & 2dc & 52a & c95 \\
153 & 1db & cb0 & a7e & 58d \\
304 & b82 & 0de & eb6 & d76 \\
461 & 236 & e83 & 6ca & 689 \\
1a8 & 605 & 39d & a3b & 9e2
\end{array}$$
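Reading the table column by column (top to bottom, left to right) and interpreting each symbol as a base-15 digit, both the STS property and the overlap property can be checked mechanically (our own encoding):

```python
from itertools import combinations

cols = ["210 0a9 971 153 304 461 1a8",
        "807 73c ce1 1db b82 236 605",
        "5b7 742 2dc cb0 0de e83 39d",
        "da4 4e5 52a a7e eb6 6ca a3b",
        "b94 48c c95 58d d76 689 9e2"]
ocycle = [tuple(int(ch, 15) for ch in blk)
          for col in cols for blk in col.split()]

# Overlap property: last point of each block is the first point of the next.
assert all(ocycle[i][-1] == ocycle[(i + 1) % 35][0] for i in range(35))

# STS(15) property: every pair of points lies in exactly one block.
pairs = [p for b in ocycle for p in combinations(sorted(b), 2)]
assert len(pairs) == len(set(pairs)) == 15 * 14 // 2
print("v=15 base case verified")
```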
\subsection{Recursive Constructions of 1-Overlap Cycles}
\begin{result}\label{OC2v+1}
If there exists an AF STS$(v)$ with a 1-ocycle, then there exists an AF STS$(2v+1)$ with a 1-ocycle when $v \geq 15$. \end{result} \begin{proof}
Using Construction \ref{C2v+1}, we construct an overlap cycle for $(Y, \mathcal{B})$ as follows. We will construct a 1-overlap cycle for triples of type (1), and then for the triples of types (2) and (3), and finally show that this sequence may be joined with the sequence for triples of type (1). \\ \\
\textbf{Step 1: Triples of type (1):} Let $O$ be a 1-ocycle on $\mathcal{A}$. Define $\{1\} \oplus O$ to be the cycle obtained by prefixing each point in $O$ with a 1, i.e. each point becomes an ordered pair with first coordinate 1. Then $\{1\} \oplus O$ is a 1-ocycle for the set of triples of type (1).
\\
\\
\textbf{Step 2: Triples of type (2):} We first define the \textbf{difference} of the triple to be the smaller of $x-y$ and $y-x$ (modulo $v$). Then we partition the set of triples of type (2) depending on their difference $d$. This creates an equivalence relation on the set of triples of type (2). We will construct 1-ocycles for each equivalence class separately.
\begin{description}
\item[$\mathbf{d=1}$:] We have the overlap cycle (in compressed form, with hidden elements removed): $$(0,0),(0,1),(0,2), \ldots , (0,v-1), (0,0).$$
\item[$\mathbf{d \geq 3}$:] We follow the same procedure as for $d=1$ by beginning with point $(0,0)$ and moving to point $(0, d)$, then $(0, 2d)$, and so on. We note however that if $\gcd(d,v) > 1$ the procedure will not produce a single cycle that covers all triples. However, when this happens we can repeat the process beginning with the first triple that remains unused. In this manner, we will obtain several disjoint ocycles, and every triple of type (2) with the given difference $d$ will be covered by one of these cycles.
\end{description}
Note that for difference $d=1$, every point of type $(0,x)$ for $x \in X$ appears as an overlap point. Thus for all overlap cycles associated with $d \geq 3$, we can join them to the cycle for $d=1$. We reserve the triples corresponding to $d=2$ to include with the triples of type (3).
\\
\\
\textbf{Step 3: Triples of type (3):} We construct an ocycle to include triples of type (2) with $d=2$ and triples of type (3) as follows. First, connect pairs of triples of the form: $$\begin{array}{ccc} (1,x+1) & (0,x) & (0,x+2) \\ \\ & \hbox{and} &\\ \\ (0, x+2) & \infty & (1,x+2) \end{array}$$ Then we may use all of these pairs to form the ocycle: $$\underline{(1,1)}, (0,0), \underline{(0,2)}, \infty, \underline{(1,2)}, (0,1), \underline{(0,3)}, \infty, \underline{(1,3)}, \ldots , \underline{(1,0)}, (0,v-1), \underline{(0,1)}, \infty, \underline{(1,1)}.$$ This cycle accounts for $v$ of these pairs of triples, and since no triple is covered twice it must cover all triples of type (2) with $d=2$ and all triples of type (3).
\\
\\
To connect all of the constructed cycles, we note that the cycle created in Step 3 contains the points $(0,x)$ and $(1,x)$ for every $x \in X$ as an overlap. Thus we can connect the cycle created in Step 1 to this cycle, as well as the cycle created in Step 2. This produces one long 1-ocycle that covers all triples. \end{proof}
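The joining move used repeatedly in this proof (and in those that follow) is always the same: two 1-ocycles that share an overlap point can be rotated and concatenated at that point. A sketch, with blocks stored as ordered tuples (names and data ours):

```python
def valid_ocycle(C):
    """Each block's last point must equal the next block's first point (cyclically)."""
    return all(C[i][-1] == C[(i + 1) % len(C)][0] for i in range(len(C)))

def splice(C1, C2):
    """Join two 1-ocycles at any overlap point they share.
    An overlap point is a block's first entry (= the previous block's last entry)."""
    starts2 = {blk[0] for blk in C2}
    i = next(k for k, blk in enumerate(C1) if blk[0] in starts2)
    j = next(k for k, blk in enumerate(C2) if blk[0] == C1[i][0])
    return C1[i:] + C1[:i] + C2[j:] + C2[:j]

C1 = [(1, 2, 3), (3, 4, 1)]   # toy 1-ocycles sharing the
C2 = [(3, 5, 6), (6, 7, 3)]   # overlap point 3
joined = splice(C1, C2)
assert valid_ocycle(C1) and valid_ocycle(C2) and valid_ocycle(joined)
print(joined)
```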
\begin{result}\label{OC2v+7}
If there exists an AF STS$(v)$ with a 1-ocycle, then there exists an AF STS$(2v+7)$ with a 1-ocycle, when $v \geq 15$. \end{result}
\begin{proof}
Using Construction \ref{C2v+7}, we will find ocycles for subsets of blocks, and show that they can be combined to form one long ocycle for the entire design.
\\
\\
\textbf{Step 1: Triples of type (1):} Let $O$ be a 1-ocycle on $\mathcal{A}$. Then $\{1\} \oplus \mathcal{A}$ also has a 1-ocycle, given by $\{1\} \oplus O$.
\\
\\
\textbf{Step 2: Triples of type (3):} We construct one long cycle: $$\underline{(0,0)}, (0,6), \underline{(0,2)}, (0,8), \underline{(0,4)}, \ldots , \underline{(0, v-1)}, (0,5), \underline{(0,1)}, \ldots , \underline{(0,v-2)}, (0,4), \underline{(0,0)}.$$ Note that since $v$ must always be odd, we see the point $(0,x)$ for every $x \in \mathbb{Z}_v$ as an overlap point in this cycle.
\\
\\
\textbf{Step 3: Triples of type (4) with $|y|> 4$:} We start by creating the ocycle (in compressed form, with hidden elements removed): $$(0,0), (0,2y), (0,4y), (0,6y), \ldots , (0,0).$$ When $\gcd(2y,v)=1$, this cycle contains all triples of type (4) associated with a particular $y$; otherwise, we repeat the process beginning with an unused triple, obtaining several disjoint cycles that together cover all triples for this $y$. Note also that every overlap point in these cycles has the form $(0,x)$, and every such point appears as an overlap in the cycle of Step 2. Thus all of these cycles, for each choice of $y$ with $|y| > 4$, can be connected to the cycle of Step 2.
\\
\\
\textbf{Step 4: Triples of type (4) with $|y| = 4$ and type (5) with $i = -3$:} We begin by pairing up these blocks as follows, so as to partition this set of triples into pairs: $$\begin{array}{ccc} \{ (0,x), & (0,x+8), & (1,x+4)\} \\ \\ & \hbox{and} & \\ \\ \{(1,x+4), & \infty_{-3}, & (0,x+1)\} \end{array}$$ We can connect up these pairs in order, starting with the pair that begins $(0,0)$ and then moving to the pair that begins $(0,1)$, and so on. We will eventually end with the pair starting $(0,v-1)$, which ends with the point $(0,0)$. Thus we have an overlap cycle. Note that in this cycle the points $(0,x)$ and $(1,x)$ appear as overlap points for every $x \in \mathbb{Z}_v$.
\\
\\
\textbf{Step 5: Triples of type (2):} The triples of type (2) correspond to an STS(7). We have shown in Section \ref{BaseC} that a 1-ocycle exists for the unique STS(7). We will use the cycle from Step 4 to join the triples of type (2). If we break the cycle from Step 4 between the blocks $$\begin{array}{cccc} \{(0,v-8), & (0,0), & (1,v-4)\} & \hbox{and} \\ \{(1,v-4), & \infty_{-3}, & (0,v-7)\} \end{array}$$ and also between the blocks $$\begin{array}{cccc} \{(1,3), & \infty_{-3}, & (0,0)\} & \hbox{and} \\ \{(0,0), & (0,8), & (1,4)\} \end{array}$$ then we now have two 1-overlap paths: $$\underline{(0,0)}, (0,8), \underline{(1,4)}, \ldots , \underline{(0,v-8)}, (0,0), \underline{(1,v-4)}$$ $$\hbox{and}$$ $$\underline{(1,v-4)}, \infty_{-3}, \underline{(0,v-7)}, \ldots , \underline{(1,3)}, \infty_{-3}, \underline{(0,0)}.$$ We can swap the order of the last two elements in the first path, and swap the order of the first two and the order of the last two elements in the second path to obtain the following two 1-ocycles: $$\underline{(0,0)}, (0,8), \underline{(1,4)}, \ldots , \underline{(0,v-8)}, (1,v-4), \underline{(0,0)}$$ $$\hbox{and}$$ $$\underline{\infty_{-3}}, (1,v-4), \underline{(0,v-7)}, \ldots , \underline{(1,3)}, (0,0), \underline{\infty_{-3}}.$$ Now we have $\infty_{-3}$ as an overlap point in the second cycle and so we can join this ocycle to the STS(7) ocycle (which contains every point $\infty_i$ as an overlap point).
\\
\\
\textbf{Step 6: Triples of type (5) with $i \neq -3$:} We construct three separate ocycles as follows. For $k \in \{-2,0,2\}$, construct the cycle: $$(0,0), (1,k), (0,1), (1,k+1), \ldots$$ When $k=-2$, this covers all triples of type $$\{(0,x), (1,x-2), \infty_2\} \hbox{ and } \{(0,x), (1,x-3), \infty_3\}.$$ When $k=0$, this covers all triples of type $$\{(0,x), (1,x), \infty_0\} \hbox{ and } \{(0,x), (1,x-1), \infty_1\}.$$ When $k=2$, this covers all triples of type $$\{(0,x), (1,x+2), \infty_{-2}\} \hbox{ and } \{(0,x), (1,x+1), \infty_{-1}\}.$$ These three cycles cover all triples of type (5) with $i \neq -3$.
\\
\\
The cycles from Steps 2, 5, and 6 all contain the point $(0,x)$ for every $x \in \mathbb{Z}_v$ as an overlap point, and each cycle from Step 3 contains at least one such point, so all of these cycles can be connected. The cycles from Steps 1, 5, and 6 all contain the point $(1,x)$ for every $x \in \mathbb{Z}_v \setminus \{v-4\}$, and so can be connected. Since the cycle from Step 5 appears in both cases, these two long cycles can also be connected. Thus, all triples are contained in one of the connected cycles, and so we have a 1-ocycle that covers all blocks. \end{proof}
We are now ready to prove Result \ref{main}. \begin{proof}[Proof of Result \ref{main}]
We proceed by induction on $n$. For $n=15,19,21,25,27,33$, we have shown ocycles in Section \ref{BaseC} and the appendix.
For $n \geq 37$ and $n \equiv 1 \pmod {12}$, there exists $v \equiv 3 \pmod 6$ with $n = 2v+7$. Note that $n \geq 37$ implies that $v \geq 15$. Thus we use Result \ref{OC2v+7} to find the STS($n$).
For $n \geq 39$ and $n \equiv 3 \pmod {12}$, there exists $v \equiv 1 \pmod 6$ with $n = 2v+1$. Note that $n \geq 39$ implies that $v \geq 19$. Thus we use Result \ref{OC2v+1} to find the STS($n$).
For $n \geq 31$ and $n \equiv 7 \pmod {12}$, there exists $v \equiv 3 \pmod 6$ with $n = 2v+1$. Note that $n \geq 31$ implies that $v \geq 15$, so we use Result \ref{OC2v+1} to find the STS($n$).
For $n \geq 45$ and $n \equiv 9 \pmod {12}$, there exists $v \equiv 1 \pmod 6$ with $n = 2v+7$. Note that $n \geq 45$ implies that $v \geq 19$, so we use Result \ref{OC2v+7} to find the STS($n$). \end{proof}
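The case analysis above can be double-checked by brute force: starting from the six base orders and closing under $v \mapsto 2v+1$ and $v \mapsto 2v+7$ reaches every admissible order (a quick sketch; the bound is arbitrary):

```python
# Closure of the base orders under n -> 2n+1 and n -> 2n+7, as in the proof.
base = {15, 19, 21, 25, 27, 33}
limit = 1000  # arbitrary check bound
got = set(base)
changed = True
while changed:
    changed = False
    for v in list(got):
        for n in (2 * v + 1, 2 * v + 7):
            if n <= limit and n not in got:
                got.add(n)
                changed = True

admissible = {n for n in range(15, limit + 1) if n % 6 in (1, 3)}
assert admissible <= got
print("every admissible order up to", limit, "is reached from the base cases")
```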
\begin{cor}
For every $n \geq 15$ with $n \equiv 1, 3\pmod 6$, there exists an AF STS$(n)$ with a rank two ucycle. \end{cor} \begin{proof}
Using Result \ref{main}, we construct an AF STS($n$) with a 1-ocycle. The 1-ocycle in compressed form is a rank two ucycle. \end{proof}
\section{Other STS Constructions with Overlap Cycles}
In this section, we look at several other known constructions for Steiner triple systems, and show their corresponding 1-ocycle constructions.
\begin{const}\label{PC}
\emph{(See \cite{TripleSystems}, p. 39 - Direct Product)} Given $(X, \mathcal{A})$, an STS($u$) and $(Y, \mathcal{B})$, an STS($v$), identify $X$ with $\mathbb{Z}_u$ and $Y$ with $\mathbb{Z}_v$. We construct a new STS($uv$) $=(Z, \mathcal{C})$ with points: $$Z = \mathbb{Z}_u \times \mathbb{Z}_v$$ and blocks:
\begin{enumerate}
\item $\{(i,a), (i,b), (i,c)\}$ with $i \in \mathbb{Z}_u$ and $\{a,b,c\} \in \mathcal{B}$,
\item $\{(i,a), (j,a), (k,a)\}$ with $\{i,j,k\} \in \mathcal{A}$ and $a \in \mathbb{Z}_v$, and
\item $\{(i,a), (j,b), (k,c)\}$ with $\{i,j,k\} \in \mathcal{A}$ and $\{a,b,c\} \in \mathcal{B}$.
\end{enumerate} \end{const}
The following theorem (see \cite{TripleSystems}) proves that Construction \ref{PC} is correct.
\begin{thm}
If $(X, \mathcal{A})$ is an STS$(u)$ and $(Y, \mathcal{B})$ is an STS$(v)$, then $(Z, \mathcal{C})$ is an STS$(uv)$. \end{thm}
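A sketch of the direct product in our own code. Type (3) contributes one block for each of the six bijections between $\{i,j,k\}$ and $\{a,b,c\}$, which matches the block count $uv(uv-1)/6$; we check it on STS$(3) \times {}$STS$(7)$:

```python
from itertools import combinations, permutations

def direct_product(u, A, v, B):
    """Direct product sketch: type (3) has one block per bijection {i,j,k} -> {a,b,c}."""
    C  = [frozenset((i, a) for a in blk) for i in range(u) for blk in B]    # type (1)
    C += [frozenset((i, a) for i in blk) for a in range(v) for blk in A]    # type (2)
    C += [frozenset(zip(sorted(blk_a), perm))
          for blk_a in A for blk_b in B for perm in permutations(blk_b)]    # type (3)
    return C

sts3 = [(0, 1, 2)]                                                 # the unique STS(3)
fano = [(0,1,3),(1,2,4),(2,3,5),(3,4,6),(4,5,0),(5,6,1),(6,0,2)]   # an STS(7)
C = direct_product(3, sts3, 7, fano)
pairs = [p for b in C for p in combinations(sorted(b), 2)]
assert len(C) == 21 * 20 // 6 and len(pairs) == len(set(pairs)) == 21 * 20 // 2
print("STS(21) = STS(3) x STS(7)")
```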
An interesting consequence of the direct product is the following theorem. \begin{thm}
\emph{(See \cite{TripleSystems}, Lemma 7.12)} The automorphism group of the direct product of two triple systems is the direct product of their automorphism groups. \end{thm}
This theorem implies another method for constructing AF Steiner triple systems that admit ocycles. Beginning with two AF Steiner triple systems with corresponding 1-ocycles, we can use the following result to construct a 1-ocycle on their direct product.
\begin{result}\label{OCPC}
If there exists an STS$(u)$ with a 1-overlap cycle and an STS$(v)$ with a 1-overlap cycle, then there exists an STS$(uv)$ with a 1-ocycle. \end{result} \begin{proof}
Let $(X, \mathcal{A})$ be an STS($u$) and $(Y, \mathcal{B})$ be an STS($v$) that admit 1-ocycles $O(\mathcal{A})$ and $O(\mathcal{B})$, respectively. We construct an STS($uv$) using Construction \ref{PC} (the direct product). For each $i \in \mathbb{Z}_u$, we have a 1-ocycle covering the triples of type (1), namely $i \oplus O(\mathcal{B})$. Similarly, for each $a \in \mathbb{Z}_v$, we have a 1-ocycle covering the triples of type (2): $O(\mathcal{A}) \oplus a$. Lastly, for each $A=\{i,j,k\} \in \mathcal{A}$ and each $B =\{a,b,c\} \in \mathcal{B}$, we can construct the following 1-ocycle:
$$\begin{array}{ccc}
\{(i,a), & (j,b), & (k,c)\} \\
\{(k,c), & (j,a), & (i,b)\} \\
\{(i,b) ,& (j,c), & (k,a)\} \\
\{(k,a) ,& (j,b), & (i,c)\} \\
\{(i,c) ,& (j,a), & (k,b)\} \\
\{(k,b) ,& (j,c), & (i,a)\}
\end{array}$$
To connect cycles covering triples of types (1) and (2), we connect wherever possible. Starting with $0 \oplus O(\mathcal{B})$, we connect all cycles over triples of type (2). Then, starting with an arbitrary, already connected, cycle over triples of type (2), we repeat the process by adding cycles over triples of type (1) wherever possible. We continue this process of extending our cycle until we are no longer able to add any more cycles.
We will always be able to continue to connect cycles, except when all cycles are connected, or:
\begin{enumerate}
\item there exists $i \in \mathbb{Z}_u$ that never appears as an overlap point in $O(\mathcal{A})$, and/or,
\item there exists $a \in \mathbb{Z}_v$ that never appears as an overlap point in $O(\mathcal{B})$.
\end{enumerate}
If we have both cases, then we choose a block $A \in \mathcal{A}$ containing $i$, say $A = \{i,j,k\}$, and a block $B \in \mathcal{B}$ containing $a$, say $B = \{a,b,c\}$. Then we arrange the cycle covering the triples from $A \times B$ to begin with $(i,a)$. Since two points of $A$ must appear as overlap points in $O(\mathcal{A})$ and $i$ is not one of them, we must have that $j$ and $k$ are overlap points in $O(\mathcal{A})$. Similarly, $b$ and $c$ must be overlap points in $O(\mathcal{B})$. Thus we can connect the cycle for $A \times B$ to the cycles $k \oplus O(\mathcal{B})$ (at point $(k,c)$) and $O(\mathcal{A}) \oplus c$ (at point $(k,c)$ as well). Note that since each block can only contain one hidden element, this process will never use a block from $\mathcal{A}$ or $\mathcal{B}$ more than once. If only case (1) holds, this process is repeated with an arbitrary choice of block $B \in \mathcal{B}$; symmetrically, if only case (2) holds, we use an arbitrary block $A \in \mathcal{A}$. \end{proof}
\begin{const}\label{3m6}
\emph{(\cite{Bose}, Bose Construction)} Suppose that $n \equiv 3 \pmod 6$; then $n=3m$ for some $m$ odd. The point set is made up of three copies of the integers modulo $m$. Formally: $$X = \mathbb{Z}_3 \times \mathbb{Z}_m.$$ Blocks are of two types:
\begin{enumerate}
\item $\{(a,i), (a,j), (a+1,k)\}$ with $i \neq j$ and $i+j = 2k$, for each $a \in \mathbb{Z}_3$
\item $\{(0,i), (1,i), (2,i)\}$ for each $i \in \mathbb{Z}_m$
\end{enumerate} \end{const}
The following theorem from \cite{Bose} proves that Construction \ref{3m6} is correct.
\begin{thm}
\emph{\cite{Bose}} If $n \equiv 3 \pmod 6$, there exists an STS$(n)$. \end{thm}
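A sketch of the Bose construction in our own code; $k = (i+j)/2$ is computed modulo $m$, which is well defined since $m$ is odd:

```python
from itertools import combinations

def bose(m):
    """Bose construction sketch for n = 3m, m odd; points are (a, i) in Z_3 x Z_m."""
    inv2 = pow(2, -1, m)  # 2 is invertible mod odd m
    B  = [frozenset({(a, i), (a, j), ((a + 1) % 3, (i + j) * inv2 % m)})
          for a in range(3) for i, j in combinations(range(m), 2)]   # type (1), i != j
    B += [frozenset({(0, i), (1, i), (2, i)}) for i in range(m)]     # type (2)
    return B

for m in (3, 5, 7):
    n = 3 * m
    blocks = bose(m)
    pairs = [p for b in blocks for p in combinations(sorted(b), 2)]
    assert len(blocks) == n * (n - 1) // 6
    assert len(pairs) == len(set(pairs)) == n * (n - 1) // 2
print("Bose construction checked for n = 9, 15, 21")
```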
\begin{result}\label{OC3m6}
For $n \equiv 3 \pmod 6$ with $n>3$, there exists an STS$(n)$ that admits a 1-ocycle. \end{result} \begin{proof}
We will use Construction \ref{3m6} to create an STS($n$), then construct 1-ocycles to cover each type of triple, and finally show how to connect them into one large cycle. First, note that we have $m\geq 3$ since $n>3$, and so there exist at least three triples of each kind. \\ \\
\textbf{Step 1: Triples of type (1) with $a=1$:} Define the value $\min \{i-j ,j-i \}$, where subtraction is done in the group $\mathbb{Z}_m$, to be the \textbf{distance} for the triple $\{(1,i), (1,j), (2,k)\}$. Partition the blocks of type (1) into classes so that the blocks $\{(1,i), (1,j), (2,k)\}$ and $\{(1,r), (1,s), (2,t)\}$ are in the same class if and only if they have the same distance. This defines an equivalence relation on the set of blocks of type (1) with $\frac{m-1}{2}$ different equivalence classes. Create a cycle using the set of blocks having the form $\{(1,i), (1,i+1), (2,i+\frac{m-1}{2}+1)\}$ as shown below in compressed form: $$(1,0) (1,1) (1,2) \cdots (1,m-1) (1,0)$$ Create similar (possibly shorter) cycles using the blocks within each of the other equivalence classes. This creates one or more disjoint cycles for each equivalence class. Since the first cycle created (using blocks with distance 1) has every point $(1,i)$ as an overlap point, we can combine all of these cycles to make one long cycle.
\\
\\
\textbf{Step 2: Triples of type (1) with $a=2$:}
Repeat as in Step 1. We pay careful attention to attach the cycle for distance 2 blocks at the point $(2,0)$. Note that this is possible since distance 2 also creates one long cycle covering the entire equivalence class, as $m$ must be odd. Now we may be assured that the cycle corresponding to distance 2 does not have any cycles attached at the overlap point $(2,1)$ between the blocks $\{(2,m-1), (0,0), (2,1)\}$ and $\{(2,1), (0,2), (2,3)\}$. Then, when we have combined all blocks of type (1) with $a=2$ to make a cycle, we convert the cycle to a string by cutting it between these two blocks and then reversing the order of the last two points. In other words, we now have a string that begins with $(2,1) (0,2) (2,3)$ and ends with $(2,m-1) (2,1) (0,0)$.
\\
\\
\textbf{Step 3: Triples of type (2) and type (1) with $a=0$:} Repeat as in Step 1, excluding the equivalence class with distance 2. For the excluded blocks, we partition the blocks of types (1) and (2) into pairs by grouping together: $$\{(0,i), (2,i), (1,i)\} \hbox{ and } \{(1,i), (0,i-1), (0,i+1)\}.$$ Clearly the two blocks in each pair form a 1-overlap string, and we can then combine these strings to obtain a 1-ocycle of the form: $$\underline{(0,1)} (2,1) \underline{(1,1)} (0,0) \underline{(0,2)} \cdots \underline{(0,i)} (2,i) \underline{(1,i)} (0,i-1) \underline{(0,i+1)} \cdots \underline{(0,0)} (2,0) \underline{(1,0)} (0,m-1) \underline{(0,1)}$$ $$\hbox{or}$$ $$(0,1) (1,1) (0,2) (1,2) \cdots (0,i) (1,i) (0,i+1) (1,i+1) \cdots (0,0) (1,0) (0,1)$$ in compressed form.
\\
\\
\textbf{Step 4: Combining the triples from Steps 1 and 3:} Since the cycles created in Step 1 and Step 3 both contain every point $(1,i)$ as an overlap point, we can combine these two cycles. More importantly, we have a choice of where to combine them, since there are at least two choices for an overlap point $(1,i)$. We choose to combine the two cycles at an overlap point other than $(1,1)$. Then, we create a string from this cycle by cutting it between the blocks $\{(0,1), (2,1), (1,1)\}$ and $\{(1,1), (0,0), (0,2)\}$ (from the cycle in Step 3) and reversing the order of the first two and the last two elements. In other words, we now have a string that begins with $\{(2,1), (0,1), (1,1)\}$ and ends with $\{(1,1), (0,2), (0,0)\}$.
\\
\\
To create our final 1-ocycle, we recall that our string from Step 2 also begins with the point $(2,1)$ and ends with the point $(0,0)$, and so we can combine these two strings into one large cycle by reversing the order of the string from Step 4. \end{proof}
\begin{const}\label{1m6}
\emph{(\cite{Skolem}, Skolem Construction)} If $n \equiv 1 \pmod 6$, then $n=6t+1$ for some $t \in \mathbb{Z}$. We define the point set as $$Y = (\mathbb{Z}_{2t} \times \mathbb{Z}_3) \cup \{ \infty\}.$$ Then we define three types of blocks:
\begin{enumerate}
\item $ A_x = \{(x,0), (x,1), (x,2)\}$ for $0 \leq x \leq t-1$.
\item $B_{x,y,i} = \{(x,i), (y,i), (x \circ y, i+1)\}$ for each $x , y \in \mathbb{Z}_{2t}$ with $x<y$ and each $i \in \mathbb{Z}_3$, and where $x \circ y = \pi(x+y \pmod {2t})$ and $$\pi(z) = \left\{ \begin{array}{ll} z/2, & \hbox{ if } z \hbox{ is even,} \\ (z+2t-1)/2, & \hbox{ if } z \hbox{ is odd.} \end{array}\right.$$
\item $C_{x,i} = \{\infty, (x+t,i), (x,i+1)\}$ for each $0 \leq x \leq t-1$ and $i \in \mathbb{Z}_3$.
\end{enumerate} \end{const}
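Since the three block types are given by explicit formulas, Construction \ref{1m6} is straightforward to carry out and check by computer. The following Python sketch is our own illustration (the function and variable names are not from \cite{Skolem}); it generates the blocks and lets one confirm that every pair of points lies in exactly one triple.

```python
from itertools import combinations

def skolem_sts_blocks(t):
    """Blocks of an STS(6t+1) on (Z_2t x Z_3) u {oo}, per Construction 1m6."""
    m = 2 * t
    INF = "oo"  # stands for the point at infinity

    def pi(z):
        # The permutation of Z_2t used to define x o y = pi(x + y mod 2t).
        return z // 2 if z % 2 == 0 else (z + m - 1) // 2

    blocks = []
    # Type (1): the triples A_x, vertical on the first t columns.
    for x in range(t):
        blocks.append(frozenset({(x, 0), (x, 1), (x, 2)}))
    # Type (2): the triples B_{x,y,i}, a level-i pair plus pi(x+y) one level up.
    for x, y in combinations(range(m), 2):
        w = pi((x + y) % m)
        for i in range(3):
            blocks.append(frozenset({(x, i), (y, i), (w, (i + 1) % 3)}))
    # Type (3): the triples C_{x,i} through the point at infinity.
    for x in range(t):
        for i in range(3):
            blocks.append(frozenset({INF, (x + t, i), (x, (i + 1) % 3)}))
    return blocks
```

For $t=1$ (so $n=7$) this returns the $7$ blocks of a system isomorphic to the Fano plane; in general it returns $6t^2+t = n(n-1)/6$ blocks.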
The following theorem from \cite{Skolem} proves that Construction \ref{1m6} is correct.
\begin{thm}
If $n \equiv 1 \pmod 6$, then there is an STS$(n)$. \end{thm}
\begin{result}\label{OC1m6}
For $n \equiv 1 \pmod 6$ with $n>1$, there exists an STS$(n)$ that admits a 1-overlap cycle. \end{result} \begin{proof}
We will use Construction \ref{1m6} to construct an STS($n$), then show how to construct disjoint cycles for most triples of type (2), then disjoint cycles for triples of types (1) and (3), and finally show how to combine them to make one large 1-ocycle containing all triples. \\ \\
\textbf{Step 1: Triples of type (2):} The triples of type (2) can be partitioned based on the pair $\{(x,i), (y,i)\}$. As in Result \ref{OC3m6}, we define the \textbf{distance} of the triple to be the smaller of $x-y$ and $y-x$ (modulo $2t$). Then we partition the set of triples of type (2) into classes, one for each distance $k<t$, so that triples in the same class share the same distance. Following the method from Result \ref{OC3m6} (Step 1), we can create disjoint cycles that contain all of these triples. Note that the triples corresponding to distance 1 make one long cycle for each second coordinate. This cycle is (in compressed form): $$(0,i) (1,i) (2,i) \ldots (2t-1,i) (0,i) \hbox{ for } i \in \mathbb{Z}_3.$$ These cycles contain every point of $\mathbb{Z}_{2t} \times \{i\}$ as an overlap point, and so we can attach all of the cycles to make three long cycles, one for each $i \in \mathbb{Z}_3$. These cycles cover all triples of type (2) except those with distance $t$.
\\
\\
\textbf{Step 2: Triples of types (1), (2), (3):} We begin by partitioning the triples of type (3) into classes that contain the following blocks: $$\{\infty, (x+t,0), (x,1)\}, \{\infty, (x+t,1), (x,2)\}, \{\infty, (x+t,2), (x,0)\}.$$ Note that no other triples of type (3) contain any points with $x$ or $x+t$ as a first coordinate. This set of blocks has a corresponding triple of type (1): $$\{(x,0), (x,1), (x,2)\}.$$ It also has a corresponding triple of type (2) with distance $t$: $$\{(x,i),(x+t,i), (x \circ (x+t), i+1)\} \hbox{ for } i \in \mathbb{Z}_3.$$ Using these blocks and defining $y = x+t$, we create the following cycle: $$\begin{array}{ccccccccccccccc} \underline{x2} & (x\circ y)0 & \underline{y2} & x0 & \underline{\infty} & x1 & \underline{y0} & (x \circ y)1 & \underline{x0} & x2 & \underline{x1} & (x \circ y)2 & \underline{y1} & \infty & \underline{x2} \end{array}$$ $$\hbox{or}$$ $$ \begin{array}{cccccccccc} x2 & y2 & \infty & y0 & x0 & x1 & y1 & x2 \end{array}$$ in compressed form. This creates a set of disjoint cycles that cover all of the remaining triples.
\\
\\
To combine all of our cycles and create our final 1-ocycle, we note that the cycles from Step 2 each have at least one overlap point of the form $(x,1)$ with $x \in \mathbb{Z}_t$, and so we can attach all of these cycles to the cycle from Step 1 that corresponds to $i=1$. Also, each cycle from Step 2 has overlap points $(x,i)$ corresponding to $i=0,2$ and $x \in \mathbb{Z}_t$, and so we can connect the remaining two cycles from Step 1. \end{proof}
We can now use the direct constructions to prove the existence of an STS($v$) that admits a 1-ocycle for every $v \equiv 1, 3 \pmod 6$.
\begin{thm}\label{easymain}
For every $v \equiv 1, 3 \pmod 6$, there exists an STS($v$) that admits a 1-ocycle. \end{thm}
\begin{proof}
For $v \equiv 3 \pmod 6$ with $v \geq 9$, we apply Result \ref{OC3m6} to obtain the desired system. For $v \equiv 1 \pmod 6$ with $v \geq 7$, we apply Result \ref{OC1m6} to obtain the desired system. \end{proof}
\begin{cor}
For every $n \geq 7$ with $n \equiv 1, 3\pmod 6$, there exists an STS$(n)$ with a rank two ucycle. \end{cor} \begin{proof}
Using Theorem \ref{easymain}, we construct an STS($n$) with a 1-ocycle. The 1-ocycle in compressed form is a rank two ucycle. \end{proof}
\begin{appendix}
\section{Appendix} Included in this appendix are the necessary base cases for Constructions \ref{C2v+1} and \ref{C2v+7}. \\ \\ $\mathbf{v=19:}$ We use the AF design (from \cite{STS19}) to produce the 1-ocycle:
$$\begin{array}{l|l|l|l|l|l}
1,2,3 & 6,15,8 & 15,17,10 & 16,11,14 & 19,1,18 & 17,2,19 \\
3,5,6 & 8,1,9 & 10,5,14 & 14,17,7 & 18,2,16 & 19,14,8 \\
6,2,4 & 9,2,11 & 14,6,9 & 7,15,9 & 16,3,19 & 8,16,7 \\
4,10,13 & 11,5,13 & 9,19,13 & 9,12,18 & 19,15,4 & 7,4,3 \\
13,1,12 & 13,8,17 & 13,18,7 & 18,15,11 & 4,8,12 & 3,17,18 \\
12,2,14 & 17,9,5 & 7,12,11 & 11,1,10 & 12,15,3 & 18,8,5 \\
14,1,15 & 5,12,19 & 11,4,17 & 10,2,8 & 3,13,14 & 5,4,1 \\
15,13,2 & 19,11,6 & 17,12,6 & 8,11,3 & 14,18,4 \\
2,5,7 & 6,13,16 & 6,18,10 & 3,9,10 & 4,9,16\\
7,1,6 & 16,5,15 & 10,12,16 & 10,7,19 & 16,1,17
\end{array}$$ \\ \\
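The table above is read down the columns, with consecutive triples meeting in their shared junction point. As a sanity check, the following Python sketch (our own transcription of the table, not part of \cite{STS19}) verifies both the Steiner property and the 1-overlap chaining.

```python
from itertools import combinations

# The v = 19 1-ocycle above, read down the columns of the table; each triple
# (cyclically) ends at the point where the next one begins.
CYCLE_19 = [
    (1, 2, 3), (3, 5, 6), (6, 2, 4), (4, 10, 13), (13, 1, 12), (12, 2, 14),
    (14, 1, 15), (15, 13, 2), (2, 5, 7), (7, 1, 6), (6, 15, 8), (8, 1, 9),
    (9, 2, 11), (11, 5, 13), (13, 8, 17), (17, 9, 5), (5, 12, 19), (19, 11, 6),
    (6, 13, 16), (16, 5, 15), (15, 17, 10), (10, 5, 14), (14, 6, 9), (9, 19, 13),
    (13, 18, 7), (7, 12, 11), (11, 4, 17), (17, 12, 6), (6, 18, 10), (10, 12, 16),
    (16, 11, 14), (14, 17, 7), (7, 15, 9), (9, 12, 18), (18, 15, 11), (11, 1, 10),
    (10, 2, 8), (8, 11, 3), (3, 9, 10), (10, 7, 19), (19, 1, 18), (18, 2, 16),
    (16, 3, 19), (19, 15, 4), (4, 8, 12), (12, 15, 3), (3, 13, 14), (14, 18, 4),
    (4, 9, 16), (16, 1, 17), (17, 2, 19), (19, 14, 8), (8, 16, 7), (7, 4, 3),
    (3, 17, 18), (18, 8, 5), (5, 4, 1),
]

def is_sts_1_ocycle(cycle, v):
    """Check that `cycle` lists the blocks of an STS(v) so that each block
    ends at the point where the next one (cyclically) begins."""
    if len(cycle) != v * (v - 1) // 6:
        return False
    # 1-overlap chaining, including the wrap-around junction.
    if any(b[-1] != cycle[(k + 1) % len(cycle)][0] for k, b in enumerate(cycle)):
        return False
    # Every pair of points must occur in exactly one block.
    pairs = [frozenset(p) for b in cycle for p in combinations(b, 2)]
    return len(set(pairs)) == len(pairs) == v * (v - 1) // 2
```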
$\mathbf{v=21:}$ We use the AF design (from \cite{AFSTS}) to produce the 1-ocycle:
$$\begin{array}{c|c|c|c|c|c|c}
00, \infty_1 , 11 & 05 , \infty_1 , 16 & 12 , 14 , 08 & 16 , 18 , 03 & 01 , 10 , 15 & 05 , 10 , 14 & 05,03,04 \\
11 , \infty_0 , 01 & 16 , \infty_0 , 06 & 08 , \infty_2 , 11 & 03 , \infty_2 , 15 & 15 , 12 , 18 & 14 , 13 , 06 & 04, 01, 07 \\
01 , \infty_1 , 12 & 06 , \infty_1 , 17 & 11 , 13 , 17 & 15 , 17 , 02 & 18 , 10 , 02 & 06 , 11 , 15 & 07, 08, 06 \\
12 , \infty_0 , 02 & 17 , \infty_0 , 07 & 07 , \infty_2 , 10 & 02 , \infty_2 , 14 & 02 ,11 , 16 & 15 , 14 , 07 & 06, 03, 00 \\
02 , \infty_1 , 13 & 07 , \infty_1 , 18 & 10 , 12 , 06 & 14 , 16 , 01 & 16 , 13 , 10 & 07 , 12 , 16 & 00, 04, 08 \\
13 , \infty_0 , 03 & 18 , \infty_0 , 08 & 06 , \infty_2 , 18 & 01 , \infty_2 , 13 & 10 , 11 , 03 & 16 , 15 , 08 & 08, 01, 03 \\
03 , \infty_1 , 14 & 08 , \infty_1 , 10 & 18 , 11 , 05 & 13 , 15 , 00 & 03 , 17 , 12 & 08 , 13 , 17 & 03, 07, 02 \\
14 , \infty_0 , 04 & 10 , 00 , \infty_0 & 05 , \infty_2 , 17 & 00 , 18 , 14 & 12 , 11 , 04 & 17 , 16 , 00 & 02, 04, 06 \\
04 , \infty_1 , 15 & \infty_0 , \infty_1 , \infty_2 & 17 , 10 , 04 & 14 , 11 , 17 & 04 , 18 , 13 & 00,01, 02 & 06, 01, 05 \\
15 , \infty_0 , 05 & \infty_2 , 00 , 12 & 04 , \infty_2 , 16 & 17 , 18 , 01 & 13 , 12 , 05 & 02, 08, 05 & 05, 07, 00
\end{array}$$ $\mathbf{v=25:}$ We use the AF design (from \cite{AFSTS}) to produce the 1-ocycle:
$$\begin{array}{c|c|c|c|c|c|c}
\infty_1, 00, 11 & 18, \infty_0, 08 & 06, \infty_3, 10 & 04, \infty_5, 11 & 07, 08, 06 & 13, 16, 14 & 14, \infty_6, 06 \\
11, \infty_0, 01 & 08, \infty_1, 10 & 10, \infty_2, 07 & 11, \infty_4, 05 & 06, 03, 00 & 14, 17, 15 & 06, 01, 15 \\
01, \infty_1, 12 & 10, \infty_0, 00 & 07, \infty_3, 11 & 05, \infty_5, 12 & 00, 04, 08 & 15, 18, 16 & 15, \infty_6, 07 \\
12, \infty_0, 02 & 00, \infty_3, 13 & 11, \infty_2, 08 & 12, \infty_4, 06 & 08, 01, 03 & 16, 10, 17 & 07, 02, 16 \\
02, \infty_1, 13 & 13, \infty_2, 01 & 08, \infty_3, 12 & 06, \infty_5, 13 & 03, 07, 02 & 17, 11, 18 & 16, \infty_6, 08 \\
13, \infty_0, 03 & 01, \infty_3, 14 & 12, \infty_2, 00 & 13, \infty_4, 07 & 02, 04, 06 & 18, 12, 10 & 08, 03, 17 \\
03, \infty_1, 14 & 14, \infty_2, 02 & 00, \infty_5, 16 & 07, \infty_5, 14 & 06, 01, 05 & 10, \infty_6, 02 & 17, 00, \infty_6 \\
14, \infty_0, 04 & 02, \infty_3, 15 & 16, \infty_4, 01 & 14, \infty_4, 08 & 05, 07, 00 & 02, 06, 11 & \infty_6, \infty_2, \infty_0 \\
04, \infty_1, 15 & 15, \infty_2, 03 & 01, \infty_5, 17 & 08, \infty_5, 15 & 00, 04, 18 & 11, \infty_6, 03 & \infty_0, \infty_3, \infty_1 \\
15, \infty_0, 05 & 03, \infty_3, 16 & 17, \infty_4, 02 & 15, \infty_4, 00 & 18, \infty_6, 01 & 03, 07, 12 & \infty_1, \infty_4, \infty_2 \\
05, \infty_1, 16 & 16, \infty_2, 04 & 02, \infty_5, 18 & 00, 01, 02 & 01, 05, 10 & 12, \infty_6, 04 & \infty_2, \infty_5, \infty_3 \\
16, \infty_0, 06 & 04, \infty_3, 17 & 18, \infty_4, 03 & 02, 08, 05 & 10, 13, 11 & 04, 08, 13 & \infty_3, \infty_6, \infty_4 \\
06, \infty_1, 17 & 17, \infty_2, 05 & 03, \infty_5, 10 & 05, 03, 04 & 11, 14, 12 & 13, \infty_6, 05 & \infty_4, \infty_0, \infty_5 \\
17, \infty_0, 07 & 05, \infty_3, 18 & 10, \infty_4, 04 & 04, 01, 07 & 12, 15, 13 & 05, 00, 14 & \infty_5, \infty_6, \infty_1 \\
07, \infty_1, 18 & 18, \infty_2, 06 & \end{array}$$ \\ \\ $\mathbf{v=27:}$ We use the AF design (from \cite{AFSTS}) to produce the 1-ocycle:
$$\hspace{-7mm}\begin{array}{c|c|c|c|c|c}
00, \infty, 10 & 19, 18, 12 & 07, \infty, 17 & 04, 00, 12 & 01, 06, 1(10) & 0(11), 03, 17 \\
10, 0(12), 01 & 12, 1(10), 16 & 17, 06, 08 & 12, 0(12), 05 & 1(10), 05, 02 & 17, 02, 0(12) \\
01, \infty, 11 & 16, 15, 10 & 08, \infty, 18 & 05, 01, 13 & 02, 07, 1(11) & 0(12), 04, 18 \\
11, 12, 10 & 10, 1(12), 1(11) & 18, 07, 09 & 13, 00, 06 & 1(11), 06, 03 & 18, 03, 00 \\
10, 19, 1(10) & 1(11), 12, 14 & 09, \infty, 19 & 06, 02, 14 & 03, 08, 1(12) & 00, 07, 01 \\
1(10), 1(12), 11 & 14, 16, 11 & 19, 08, 0(10) & 14, 01, 07 & 1(12), 07, 04 & 01, 08, 02 \\
11, 13, 15 & 11, 19, 17 & 0(10), \infty, 1(10) & 07, 03, 15 & 04, 09, 10 & 02, 09, 03 \\
15, 17, 1(11) & 17, 10, 18 & 1(10), 09, 0(11) & 15, 02, 08 & 10, 08, 05 & 03, 0(10), 04 \\
1(11), 19, 16 & 18, 1(11), 11 & 0(11), \infty, 1(11) & 08, 04, 16 & 05, 0(10), 11 & 04, 0(11), 05 \\
16, 17, 1(12) & 11, 00, 02 & 1(11), 0(10), 0(12) & 16, 03, 09 & 11, 09, 06 & 05, 0(12), 06 \\
1(12), 18, 14 & 02, \infty, 12 & 0(12), \infty, 1(12) & 09, 05, 17 & 06, 0(11), 12 & 06, 00, 07 \\
14, 19, 15 & 12, 01, 03 & 1(12), 0(11), 00 & 17, 04, 0(10) & 12, 0(10), 07 & 07, 01, 08 \\
15, 1(10), 18 & 03, \infty, 13 & 00, 09, 1(11) & 0(10), 06, 18 & 07, 0(12), 13 & 08, 02, 09 \\
18, 16, 13 & 13, 02, 04 & 1(11), 08, 01 & 18, 05, 0(11) & 13, 0(11), 08 & 09, 03, 0(10) \\
13, 1(11), 1(10) & 04, \infty, 14 & 01, 0(10), 1(12) & 0(11), 07, 19 & 08, 00, 14 & 0(10), 04, 0(11) \\
1(10), 17, 14 & 14, 03, 05 & 1(12), 09, 02 & 19, 06, 0(12) & 14, 0(12), 09 & 0(11), 05, 0(12) \\
14, 10, 13 & 05, \infty, 15 & 02, 0(11), 10 & 0(12), 08, 1(10) & 09, 01, 15 & 0(12), 06, 00 \\
13, 17, 12 & 15, 04, 06 & 10, 0(10), 03 & 1(10), 07, 00 & 15, 00, 0(10) & \\
12, 15, 1(12) & 06, \infty, 16 & 03, 0(12), 11 & 00, 05, 19 & 0(10), 02, 16 & \\
1(12), 13, 19 & 16, 05, 07 & 11, 0(11), 04 & 19, 04, 01 & 16, 01, 0(11) & \end{array}$$
$\mathbf{v=33:}$ We use the AF design (from \cite{AFSTS}) to produce the 1-ocycle:
$$\begin{array}{c|c|c|c}
13, \infty_1, 02 & 1(13), 16, 0(14) & 1(13), 11, 08 & 03, 09, 0(13) \\
02, \infty_2, 15 & 0(14), \infty_0, 1(14) & 08, 15, 1(14) & 0(13), 0(10), 04\\
15, \infty_1, 04 & 1(14), 14, 19 & 1(14), 12, 09 & 04, 0(14), 05\\
04, \infty_2, 17 & 19, 1(10), 0(14) & 09, 16, 10 & 05, 02, 0(10) \\
17, \infty_1, 06 & 0(14), 18, 1(12) & 10, 0(10), 13 & 0(10), 07, 0(14) \\
06, \infty_2, 19 & 1(12), 15, 0(13) & 13, 1(10), 0(11) & 0(14), 0(11), 06\\
19, \infty_1, 08 & 0(13), 17, 1(11) & 0(11), 15, 19 & 06, 0(12), 0(10) \\
08, \infty_2, 1(11) & 1(11), 14, 0(12) & 19, 12, 0(10) & 0(10), 03, 0(11) \\
1(11), \infty_1, 0(10) & 0(12), 16, 1(10) & 0(10), 14, 18 & 0(11), 09, 04 \\
0(10), \infty_2, 1(13) & 1(10), \infty_0, 0(10) & 18, 11, 09 & 04, 08, 0(12) \\
1(13), \infty_1, 0(12) & 0(10), 17, 11 & 09, 13, 17 & 0(12), 09, 05 \\
0(12), \infty_2, 10 & 11, 14, 0(11) & 17, 10, 08 & 05, 08, 0(13) \\
10, \infty_1, 0(14) & 0(11), \infty_0, 1(11) & 08, 12, 16 & 0(13), 07, 06 \\
0(14), \infty_2, 12 & 1(11), 11, 16 & 16, 1(14), 07 & 06, 08, 09 \\
12, \infty_1, 01 & 16, 17, 0(11) & 07, 11, 15 & 09, 0(14), 02 \\
01, \infty_2, 14 & 0(11), 18, 12 & 15, 1(13), 06 & 02, 1(11), 10 \\
14, \infty_1, 03 & 12, 15, 0(12) & 06, 10, 14 & 10, 0(13), 12\\
03, \infty_2, 16 & 0(12), 19, 13 & 14, 1(12), 05 & 12, 00, 14 \\
16, \infty_1, 05 & 13, 16, 0(13) & 05, 1(14), 13 & 14, 02, 16 \\
05, \infty_2, 18 & 0(13), 1(10), 14 & 13, 1(11), 04 & 16, 04, 18 \\
18, \infty_1, 07 & 14, 17, 0(14) & 04, 1(13), 12 & 18, 06, 1(10) \\
07, \infty_2, 1(10) & 0(14), 1(11), 15 & 12, 1(10), 03 & 1(10), 08, 1(12) \\
1(10), \infty_1, 09 & 15, 18, 00 & 03, 1(12), 11 & 1(12), 1(14), 0(10) \\
09, \infty_2, 1(12) & 00, 1(12), 16 & 11, 19, 02 & 0(10), 15, 16 \\
1(12), \infty_1, 0(11) & 16, 19, 01 & 02, 01, 00 & 16, \infty_0, 06 \\
0(11), \infty_2, 1(14) & 01, 1(13), 17 & 00, 0(10), 09 & 06, 11, 12 \\
1(14), \infty_1, 0(13) & 17, 1(10), 02 & 09, 07, 01 & 12, \infty_0, 02 \\
0(13), \infty_2, 11 & 02, 1(14), 18 & 01, 05, 03 & 02, 1(12), 1(13) \\
11,00, \infty_1 & 18, 1(11), 03 & 03, 00, 04 & 1(13), \infty_0, 0(13) \\
\infty_1, \infty_0, \infty_2 & 03, \infty_0, 13 & 04, 06, 01 & 0(13), 18, 19 \\
\infty_2, 00,13 & 13, 18, 1(13) & 01, 0(10), 08 & 19, \infty_0, 09 \\
13, 01, 15 & 1(13), 1(14), 03 & 08, 00, 07 & 09, 14, 15 \\
15, 03, 17 & 03, 10, 19 & 07, 03, 0(12) & 15, \infty_0, 05 \\
17, 05, 19 & 19, 1(12), 04 & 0(12), 0(14), 01 & 05, 10, 11 \\
19, 07, 1(11) & 04, 11, 1(10) & 01, 0(13), 0(11) & 11, \infty_0, 01 \\
1(11), 09, 1(13) & 1(10), 1(13), 05 & 0(11), 08, 02 & 01, 1(11), 1(12) \\
1(13), 0(11), 10 & 05, 12, 1(11) & 02, 03, 06 & 1(12), \infty_0, 0(12) \\
10, 19, 01 & 1(11), 1(14), 06 & 06, 00, 05 & 0(12), 17, 18 \\
01, 1(10), 1(14) & 06, 13, 1(12) & 05, 0(11), 07 & 18, \infty_0, 08 \\
1(14), 17, 00 & 1(12), 10, 07 & 07, 04, 02 & 08, 13, 14 \\
00, \infty_0, 10 & 07, \infty_0, 17 & 02, 0(13), 0(12) & 14, \infty_0, 04 \\
10, 15, 1(10) & 17, 1(12), 12 & 0(12), 0(11), 00 & 04, 10, 1(14) \\
1(10), 1(11), 00 & 12, 13, 07 & 00, 0(14), 0(13) & 1(14), 0(12), 11\\
00, 19, 1(13) & 07, 14, 1(13) & 0(13), 08, 03 & 11, 0(14), 13 \end{array}$$
\end{appendix}
\end{document}
\begin{document}
\title[Khavinson-Shapiro Conjecture for the Bergman Projection]{The Khavinson-Shapiro Conjecture for the Bergman Projection in One and Several Complex Variables}
\author{Alan R. Legg}
\address{Mathematics Department, Purdue University, West Lafayette, IN 47907}
\email{arlegg@purdue.edu} \thanks{Research supported by the NSF Analysis and Cyber-enabled Discovery and Innovation programs, grant DMS~1001701} \subjclass{} \keywords{}
\begin{abstract} We reveal a complex analogue of a result about polynomial solutions to the Dirichlet problem on ellipsoids in $\mathbb{R}^n$ by showing that on any ellipsoid in $\mathbb{C}^n$, the Bergman projection of any polynomial function of degree at most $N$ is a holomorphic polynomial function of degree at most $N$. The discussion is motivated by a connection between the Bergman projection and the Khavinson-Shapiro conjecture in $\mathbb{C}$. We also relate the Khavinson-Shapiro conjecture to polyharmonic Bergman projections in $\mathbb{R}^n$ by showing that these projections take polynomials to polynomials on ellipsoids. \end{abstract}
\maketitle
\theoremstyle{plain}
\newtheorem {thm}{Theorem}[section] \newtheorem {lem}[thm]{Lemma} \newtheorem {prop} [thm] {Proposition} \section{Introduction and Notation}
An intriguing connection between the Dirichlet problem and the Bergman projection can be found via the Khavinson-Shapiro conjecture, which in one formulation posits that ellipsoids are the only domains on which the Dirichlet problem solution operator for the Laplacian takes polynomial boundary data to polynomials (cf. Sections 2 and 5 of \cite{KS}). If we modify the Khavinson-Shapiro conjecture by replacing the Dirichlet problem solution operator with the Bergman projection, then in the special case of smooth bounded planar domains, we will see below that we actually obtain a statement equivalent to the original Khavinson-Shapiro conjecture. This observation is the starting point here for a consideration of the Bergman projections of polynomial functions on ellipsoids in more than one complex variable.
For the case of the Laplacian on real space, it is a fact that the Dirichlet problem solution operator on an ellipsoid takes polynomial boundary data into harmonic polynomials. That is to say, whenever the boundary values of a polynomial are given on an ellipsoid, it follows that the harmonic function on the ellipsoid which attains the same boundary values is also a polynomial. Furthermore, the degree of this harmonic polynomial does not exceed the degree of the polynomial whose boundary data were given. The result can be obtained very elegantly by the use of a linear map from the set of polynomials to itself (the so-called ``Fischer Map''). For a good treatment of the details, see Proposition 1 of \cite{KL}; another exposition along the same lines can be found in Sections 1 and 2 of \cite{Baker}.
Employing an argument in the same spirit, we establish in Section 3 an analytic analogue for ellipsoids in $\mathbb{R}^{2n} \sim \mathbb{C}^n$, showing that the Bergman projection on ellipsoids maps polynomials to polynomials. Even more specifically, we show that the Bergman projection of any polynomial function on an ellipsoid is a holomorphic polynomial of equal or lesser degree.
Further background for questions related to the Khavinson-Shapiro conjecture can be found in the article \cite{KL1}. For more works pertaining to the use of Fischer maps and related machinery in partial differential equations, we direct the reader to the papers \cite{Sh1}, \cite{LR}, \cite{Re}, \cite{R}.
Recall for the sake of precision that given a domain $\Omega \subset \mathbb{C}^n$, the Bergman projection $B: \thinspace L^2(\Omega) \rightarrow H^2(\Omega)$ on $\Omega$ is the orthogonal projection from $L^2(\Omega)$ onto its subspace $H^2(\Omega)$ consisting of holomorphic functions which are square-integrable with respect to Lebesgue measure. Here we are employing the usual inner product on $L^2(\Omega)$; namely, given $f,g \in L^2(\Omega)$, their inner product is $\langle f, g \rangle = \int_{\Omega}^{}f\bar{g}dV$, where $dV$ is Lebesgue measure.
As a matter of notation, let $z=(z_1, z_2, \dots, z_n)$ denote the coordinates of $\mathbb{C}^n$, and let $x_j=Re( z_j)$ and $\thinspace y_j=Im( z_j),\thinspace j=1,\thinspace 2,\dots,n$ denote the real coordinates on $\Omega$. We further say $x=(x_1, x_2, \dots, x_n)$ and $ \thinspace y=(y_1, y_2, \dots, y_n)$, and we let $\alpha, \beta, \gamma$ stand for $n$-dimensional multi-indices. Then, using the usual multi-index notation, we define for each nonnegative integer $N$ the following sets of functions on $\Omega$: \begin{equation} \label{Pdef}
P_N= \{\sum_{|\alpha|+|\beta| \leq N}c_{\alpha,\beta}x^\alpha y^{\beta}\thinspace :\thinspace c_{\alpha, \beta} \in \mathbb{C} \},
\end{equation} the set of (not-necessarily-holomorphic) complex-valued polynomial functions of degree at most $N$, and
\begin{equation}
\label{Hdef}
HP_N= \{\sum_{|\gamma|\leq N}^{} d_{\gamma}z^{\gamma} \thinspace : \thinspace d_{\gamma} \in \mathbb{C} \},
\end{equation} the set of holomorphic polynomial functions of degree at most $N$. The content of our main result, then, is that on any ellipsoid, $B(P_N)=HP_N$ for each nonnegative integer $N$.
The close similarity to the case of the Dirichlet problem solution operator on an ellipsoid naturally brings us back to the consideration of a `Khavinson-Shapiro'-type conjecture for the Bergman projection in several dimensions; i.e., we may ask whether multi-dimensional ellipsoids are at all characterized by the property that the Bergman projection maps polynomials to polynomials. In Section 4, we work toward a generally negative answer to the question, exhibiting in this case non-ellipsoidal domains on which the Bergman projection maps polynomials to polynomials.
Finally, we return in Section 5 to the linear-algebra-style proof used in Section 3 to show that the polyharmonic Bergman projections take polynomials to polynomials on ellipsoids in real space. This serves to open the possibility of a hierarchy of Khavinson-Shapiro conjectures.
\section{The Bergman Projection and the Khavinson-Shapiro Conjecture in $\mathbb{C}$}
As motivation for considering the Bergman projection as it acts on polynomials in ellipsoidal domains, we first consider the case of the plane. In this case, it is true that the Bergman projection takes polynomials to polynomials on ellipses, but by simple calculations we show something a bit stronger, which is related to the Khavinson-Shapiro conjecture.
It turns out, as presented in the next proposition, that for smooth bounded domains in the plane, the Dirichlet problem solution operator takes polynomials to polynomials if and only if the Bergman projection takes polynomials to polynomials. Thus the Khavinson-Shapiro conjecture is equivalent in this case to the analogous formulation involving the Bergman projection instead of the Dirichlet problem solution operator. To see this requires the fact that holomorphy and harmonicity in the plane are related by differentiation (if $f$ is harmonic, then $\frac{\partial f}{\partial z}$ is holomorphic).
\begin{prop} Let $\Omega \subset \mathbb{C}$ be a $\mathcal{C}^{\infty}$-smooth bounded domain. Then the Bergman projection of $\Omega$ maps polynomials to polynomials if and only if the Dirichlet problem solution operator takes polynomial boundary data to polynomials. \end{prop} \begin{proof} First assume that the Bergman projection maps polynomials to polynomials, and let $Q(z,\thinspace \bar{z})$ be a real-valued polynomial function on $\Omega$. By Havin's Lemma (see, e.g., pages 26 and 82 of \cite{Sh}), we then have the orthogonal decomposition $\frac{\partial Q}{\partial z}=p(z) + \frac{\partial \varphi}{\partial z}$, where $p$ is the Bergman projection of $\frac{\partial Q}{\partial z}$, and $\varphi$ is $\mathcal{C}^{\infty}$-smooth up to the boundary of $\Omega$ and vanishes on the boundary of $\Omega$. By hypothesis, $p$ is a holomorphic polynomial; by formal antidifferentiation, let $P$ be a holomorphic polynomial such that $P'=p$. Then we have that $\frac{\partial}{\partial z}(Q-P-\varphi)=0$. Hence the function being differentiated on the left is anti-holomorphic, say $Q-P-\varphi = \bar{H}$, where $H \in H^2(\Omega)$. But now notice that $Q-\varphi$ is harmonic and equal to $Q$ on the boundary, and so is the solution to the Dirichlet problem with boundary data $Q$. Since $Q$ is real-valued, so is the harmonic extension of its boundary values, and so $P+\bar{H}=\bar{P}+H$. But this means that $P-H=\bar{P}-\bar{H}$, and so $P-H$ must be constant (it is both holomorphic and antiholomorphic). Hence $H$ is a polynomial. But this means that the solution to the Dirichlet problem with boundary data $Q$ is a polynomial. Now, by linearity and by breaking into real and imaginary parts, we see that the Dirichlet solution is polynomial for any complex-valued polynomial boundary data.
Conversely, assume that the Dirichlet solution is polynomial whenever the boundary data of a polynomial is given on $bd \Omega$. Then, let $q(z,\thinspace \bar{z})$ be any polynomial. By formal antidifferentiation in $z$, let $Q$ be a polynomial function such that $\frac{\partial Q}{\partial z}=q$. Let $p$ be the Bergman projection of $q$. Just as above, there exists $\varphi$ smooth up to the boundary and vanishing on $bd \Omega$ such that we have the orthogonal decomposition $\frac{\partial Q}{\partial z}=p + \frac{\partial \varphi}{\partial z}$. Differentiate this equation with respect to $\bar{z}$ to conclude that $\Delta Q = \Delta \varphi$. Hence $Q-\varphi$ is harmonic, and has the same boundary values as $Q$. Hence it is the Dirichlet solution for boundary data $Q$, and so by hypothesis $Q-\varphi$ is a polynomial, and so $\frac{\partial \varphi}{\partial z}$ is also a polynomial. But now, returning to the relation $q=p+\frac{\partial \varphi}{\partial z}$, we have that $p$ is in fact a polynomial. \end{proof}
Hence for bounded smooth planar domains, the Khavinson-Shapiro conjecture can be rephrased to the effect that ellipses should be the only smooth bounded planar domains on which the Bergman projection maps polynomials to polynomials. From here on, we will investigate the situation in more than one variable.
We emphasize that, while we were able to employ the fact that holomorphy and harmonicity are related simply by a differentiation in $\mathbb{C}$, this is not so in more than one complex variable. For this reason, we of course expect that any relationship between the behaviors of the Dirichlet solution operator and the Bergman projection in several variables will be more complicated than in the planar case. Nevertheless, a strong similarity will be found. We will find that the Bergman projection continues to take polynomials to polynomials on ellipsoidal domains, but that other classes of domains have the same behavior.
\section{The Bergman Projection of Polynomials on Ellipsoids }
To proceed with a consideration of matters in more than one dimension, we will need to have a few pertinent facts at our disposal, which are collected here for reference. Note particularly that our setting will be applicable to all ellipsoids in $\mathbb{C}^n$, not just complex ellipsoids.
Any ellipsoid is by definition a quadric; that is, given an ellipsoid $\Omega \subset \mathbb{C}^n \sim \mathbb{R}^{2n}$, there exists a polynomial function $r(x_1,x_2, \dots, x_n, y_1, y_2, \dots, y_n) \thinspace$ on $\mathbb{C}^n$ such that the degree of $r$ is equal to $2$, $r$ vanishes on the boundary of $\Omega$, and \begin{equation} \label{quadric} \Omega= \{z \in \mathbb{C}^n \thinspace : \thinspace r(x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_n)< 0 \}. \end{equation}
In addition, letting $H^2(\Omega)^\perp$ be the orthogonal subspace to $H^2(\Omega)$ in $L^2(\Omega)$, we have the following: if $\omega$ is any smooth $(0,1)$-form on $\Omega$ which extends smoothly to the boundary of $\Omega$ and vanishes on the boundary of $\Omega$, then \begin{equation} \label{perp} \vartheta \omega \in H^2(\Omega)^\perp, \end{equation} where $\vartheta$ is the formal adjoint to the $\overline{\partial}$ operator (see Section 3 of \cite{Bell1}).
Finally, we point out that although $\vartheta$ and $\overline{\partial}$ are merely formal adjoints, they act as true Hilbert space adjoints for certain pairs of forms or functions. Among these cases is that of the inner product of two functions, each of which is smooth up to the boundary of $\Omega$, and one of which is of the form $\vartheta \omega$, where $\omega$ is a smooth $(0,1)$-form which extends smoothly to the boundary of $\Omega$ and vanishes on the boundary of $\Omega$. To be precise, if the other function in the inner product is $f$, then in this case we may write
\begin{equation}
\label{adjoint}
\langle \vartheta \omega, f \rangle = \langle \omega, \overline{\partial}f \rangle,
\end{equation}
where we use $\langle \cdot \thinspace, \cdot \rangle$ to denote both the usual $L^2$ inner product on functions, and the $L^2$ inner product on $(0,1)$-forms, defined as the sum of the inner products of the respective component functions of the forms involved. A detailed account of the adjointness properties of $\vartheta$ and $\overline{\partial}$ on smooth bounded domains can be found in \cite{FK}.
We are now ready to state our main result:
\begin{thm} \label{main}
Suppose that $\Omega \subset \mathbb{C}^n$ is an ellipsoid and, as in (\ref{Pdef}) and (\ref{Hdef}), let $P_N$ and $HP_N$ be, respectively, the space of complex-valued polynomial functions on $\Omega$ of degree at most $N$, and the space of holomorphic polynomial functions of degree at most $N$. Denote by $B$ the Bergman projection on $\Omega$. Then for each nonnegative integer $N$, $B(P_N)=HP_N$.
\end{thm}
\begin{proof} The inclusion $HP_N \subset B(P_N)$ is clear, since $HP_N$ is a subset of $P_N$ which is fixed pointwise by $B$.
Considering $P_N$ and $HP_N$ as finite-dimensional complex vector spaces, and noting that $HP_N$ is a subspace of $P_N$, form the quotient vector space $P_N/HP_N$. The idea of the proof of the inclusion $B(P_N) \subset HP_N$ will be to exploit a certain vector space isomorphism of $P_N/HP_N$ with itself to obtain an orthogonal decomposition for elements of $P_N$.
To this end, let $r$ be a degree-2 defining polynomial for $\Omega$ as in (\ref{quadric}), so that $r < 0$ on $\Omega$ and $r|_{\partial \Omega} = 0$, and define the map $\varphi : P_N \rightarrow P_N$ according to the formula
\[\varphi (p) = \vartheta r \overline{ \partial}p \quad \text{ for each $p \in P_N$}. \]
That $\varphi$ does in fact map $P_N$ into itself is a consequence of the fact that \[ \vartheta r \overline{ \partial}p = -\sum _{j=1}^n \frac{\partial}{\partial z_j}\Big(r \frac{\partial p}{\partial \bar{z}_j}\Big), \] and each term of this sum has degree at most $N$; for $p$ itself has degree at most $N$, and each differentiation reduces the degree by at least $1$, while multiplying by $r$ increases the degree by at most $2$. It is clear, moreover, that $\varphi$ is complex-linear; and if it happens that $p \in HP_N$, then $\overline{\partial} p = 0$, so that $\varphi (p) =0$.
Hence $\varphi$ descends to a linear mapping $\tilde{\varphi} : P_N/HP_N \rightarrow P_N/HP_N $ according to $\tilde{\varphi}([p])=[\varphi(p)]\quad \text{for each}\quad [p] \in P_N/HP_N,$ where $[\thinspace \cdot \thinspace ]$ denotes equivalence class. In fact, $\tilde{\varphi}$ is injective, as we now show.
Assume for the moment that $\tilde{\varphi} ([p])=[0]$. In this case $\varphi (p)$ is equivalent to $0$ modulo $HP_N$, and so there exists $h \in HP_N$ such that $\vartheta r \overline{\partial}p=h.$
However, $r \overline{\partial} p$ is a $(0,1)$-form on $\Omega$, smooth up to the boundary of $\Omega$, which vanishes on the boundary of $\Omega$; consequently, (\ref{perp}) gives that $\vartheta r \overline{\partial}p \in \mathnormal{H}^2(\Omega)^{\perp}$. And now, since $h \in \mathnormal{H}^2(\Omega),$ we see that $\vartheta r \overline{\partial}p = 0$, and we may calculate: \[ 0=\langle - \vartheta r \overline{\partial}p,\, p \rangle =\langle -r \overline{\partial}p,\, \overline{\partial}p \rangle, \] where in the second equality the use of the adjointness of $\vartheta$ and $\overline{\partial}$ is justified since $r \overline{\partial}p$ vanishes on the boundary (cf.\ (\ref{adjoint}) above). Owing to the fact that $-r >0$ on $\Omega$, we have demonstrated that the weighted $L^2$ norm of the $(0,1)$-form $\overline{\partial}p$ against a positive measure on $\Omega$ arising from a smooth function is $0$, which in turn implies that $\overline{\partial}p \equiv 0$, so $p$ is holomorphic and $[p]=[0]$. So indeed $\tilde{\varphi}$ is injective.
Now, $\tilde{\varphi}$ must also be surjective, being an injective linear map from a finite-dimensional vector space into a vector space of equal dimension. The surjectivity of $\tilde{\varphi}$ will provide us with the orthogonal decomposition we require to identify the Bergman projections of the elements of $P_N$.
Given any polynomial function $P \in P_N$, there exists a polynomial function $Q \in P_N$ such that $[P]=\tilde{\varphi}([Q])$; or, what is the same, there must exist a holomorphic polynomial function $H \in HP_N$ such that
\begin{equation}
\label{decomp}
P=\vartheta r \overline{\partial}Q + H.
\end{equation}
Notice now, though, that $\vartheta r \overline{\partial}Q \in \mathnormal{H}^2(\Omega)^\perp$ by (\ref{perp}), and since $H \in \mathnormal{H}^2(\Omega),$ (\ref{decomp}) is in fact an orthogonal decomposition of $P$, and so we must have that $BP = H \in HP_N$. Thus $B(P_N) \subset HP_N$, and we are finished.
\end{proof}
\section{Other Domains on which the Bergman Projection Maps Polynomials to Polynomials}
In response to the Khavinson-Shapiro-type question of how well ellipsoids may be characterized by the property that polynomials are mapped to polynomials under the Bergman projection, we provide here examples of other domains exhibiting the same property.
\subsection{Bounded Circular Domains}
Let $R \subset \mathbb{C}^n$ be a bounded circular domain containing the origin, and let $K(z, w)$ be the Bergman kernel function of $R$. For convenience, for each multi-index $\alpha$ define $K_{0}^{\alpha}(z)=\frac{\partial ^\alpha K(z,w)}{\partial \bar{w}^\alpha}|_{w=0}$. As discussed in \cite{Bell2}, the function $K_{0}^{\alpha}$ is such that for each $f \in H^2(R)$,
\begin{equation}
\label{diffrepro}
\langle f, K_{0}^{\alpha} \rangle = \frac{\partial ^\alpha f}{\partial z^\alpha}(0).
\end{equation}
Note that $K_0^\alpha$ is the unique such function in $H^2(R)$, being the integral kernel guaranteed by the Riesz representation theorem for evaluation of the $\alpha$-derivative at zero of functions in $H^2(R)$.
Since $R$ is a bounded circular domain containing $0$, it follows from \cite{Bell2} that the linear span of the $K_0^\alpha$ as $\alpha$ ranges over all multi-indices is identical to the set of holomorphic polynomial functions on $R$. Even more precisely, we have that, given a particular multi-index $\alpha$, the set of homogeneous holomorphic polynomials of degree $|\alpha|$ is identical to the linear span of $ \{K_0^\gamma \thinspace : \thinspace |\gamma|=|\alpha| \}$.
With these preliminaries in place, we can show the following:
\begin{thm}
\label{circ}
If $R \subset \mathbb{C}^n$ is a bounded circular domain containing the origin and $P_N$, $HP_N$ are as in (\ref{Pdef}) and (\ref{Hdef}), and if $B$ is the Bergman Projection on $R$, then $B(P_N)=HP_N$ for each non-negative integer $N$.
\end{thm}
\begin{proof}
As in the proof of Theorem \ref{main}, it is easy to see that $HP_N \subset B(P_N)$, since $B$ fixes each element of $HP_N$.
For the reverse inclusion, by linearity it suffices to prove that for each pair of multi-indices $\alpha$, $\beta$ such that $|\alpha|+|\beta|=N$, $B(z^{\alpha}\bar{z}^\beta) \in HP_N$. By the comments preceding the statement of the current theorem, we may calculate as follows:
\[\begin{split}
\langle f, B(z^\alpha \bar{z}^\beta) \rangle = \langle f, z^\alpha \bar{z}^\beta \rangle = \langle fz^\beta, z^\alpha \rangle \\ = \langle f z^\beta, \sum_{|\gamma|=|\alpha|}c_\gamma K_0^\gamma \rangle = \sum_{|\gamma|=|\alpha|}c_\gamma \frac{\partial ^\gamma (fz^\beta)}{\partial z^\gamma}\bigg|_{z=0},
\end{split}\]
where the $c_\gamma$ are constants depending only on $\alpha$. The sum on the far right can be simplified by the product rule, so that we get for some constants $d_\gamma$ which depend only on $\alpha$ and $\beta$,
\begin{equation}
\label{monprod}
\langle f, B(z^\alpha \bar{z}^\beta) \rangle = \sum_{|\gamma| \leq |\alpha|}^{} d_\gamma \frac{\partial ^\gamma f}{\partial z^\gamma}|_{z=0}.
\end{equation}
Again by comments above, the sum on the right is equal to the inner product \[ \langle f, \sum_{|\gamma| \leq |\alpha|} d_\gamma K_0^\gamma \rangle, \]
and the right member of this inner product is a holomorphic polynomial $H$ of degree at most $|\alpha| \leq N$.
Thus, we have found a polynomial $H \in HP_N$ such that $ \langle f, B(z^\alpha \bar{z}^\beta) \rangle = \langle f, H \rangle $ for every $f \in H^2(R)$; and since $H$ and $B(z^\alpha \bar{z}^\beta)$ are themselves in $H^2(R)$, it follows that $B(z^\alpha \bar{z}^\beta)=H$.
\end{proof}
Note that the only domains satisfying the hypotheses of Theorem \ref{circ} when $n=1$ are discs centered at the origin. When $n>1$, Theorem \ref{circ} includes the case of `complex ellipsoids,' which have a defining polynomial as in (\ref{quadric}) of the form $r=-1 + \sum_{j=1}^{n}a_j |z_j|^2$, the $a_j, \thinspace j=1,2, \dots, n$ being positive real numbers. For other ellipsoids, we must appeal to Theorem \ref{main}.
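When $n=1$ and $R$ is the unit disc, the projection of Theorem \ref{circ} can be computed by hand from the orthonormal basis $\sqrt{(k+1)/\pi}\,z^k$ of $H^2$ of the disc: for instance, $B(z^2\bar z)=\frac{2}{3}z$, since $\langle z^2\bar z, z\rangle = \int_{|z|<1}|z|^4\,dA = \pi/3$. The following numerical quadrature (an illustration only, not part of the proof) confirms this instance:

```python
import numpy as np

# midpoint-rule quadrature on the unit disc in polar coordinates
nr, nt = 400, 400
r = (np.arange(nr) + 0.5) / nr                  # radial midpoints in (0, 1)
t = 2 * np.pi * np.arange(nt) / nt              # uniform angular nodes
R, T = np.meshgrid(r, t, indexing="ij")
Z = R * np.exp(1j * T)
dA = (1.0 / nr) * (2 * np.pi / nt) * R          # area element r dr dtheta

def inner(f, g):
    """L^2 inner product over the unit disc."""
    return np.sum(f * np.conj(g) * dA)

f = Z**2 * np.conj(Z)                           # the monomial z^2 zbar
# coefficient of z^k in B(f), using the orthonormal basis sqrt((k+1)/pi) z^k
coeffs = [inner(f, Z**k) * (k + 1) / np.pi for k in range(6)]
# only the z^1 coefficient survives, and it equals 2/3
```
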
\subsection{Images under certain biholomorphisms}
Using the transformation formula for the Bergman projection under biholomorphic mappings, we can show that under suitable biholomorphisms, the property that the Bergman projection maps polynomials to polynomials is preserved. However, in this case we must relax the degree-preserving requirement that $B(P_N)=HP_N$. Although we will not investigate the possible effects of this relaxation here, it is interesting to note that it does have meaningful consequences in the plane \cite{CS}.
Let $\Omega$ and $V$ be domains in $\mathbb{C}^n$ and $f:\thinspace \Omega \rightarrow V$ a biholomorphism between them, and let $u=\det(f')$ be the complex Jacobian determinant of $f$. Then, if $g \in L^2(V)$, it follows that $u \cdot g \circ f \in L^2(\Omega)$ and
\begin{equation}
\label{bergtrans}
B_\Omega (u \cdot g \circ f)=u \cdot (B_Vg)\circ f,
\end{equation}
where $B_\Omega$ and $B_V$ are the Bergman projections on $\Omega$ and $V,$ respectively (cf.\ Ch.~3, Sec.~2 of \cite{Bergman}).
Using the notation of (\ref{Pdef}), let $P=\cup_{N \geq 0}P_N$ be the set of all polynomial functions on $\mathbb{C}^n$, and let $HP$ be its subset consisting of all holomorphic polynomial functions. In what follows, we shall view the elements of $P$ and $HP$ as functions either on $\Omega$ or on $V$, but no confusion on this point will arise in this context. In addition, by a `polynomial biholomorphic mapping' we will mean a biholomorphic mapping whose component functions are polynomials. A polynomial biholomorphic mapping `with polynomial inverse' will be a polynomial biholomorphic mapping whose inverse mapping is also a polynomial biholomorphic mapping.
\begin{thm}
\label{bihol}
Let $\Omega,\thinspace V \subset \mathbb{C}^n$ be domains, and let $B_\Omega$ and $B_V$ denote the Bergman projections of $\Omega$ and $V$, respectively. Assume that $B_\Omega$ is such that $B_\Omega(P)=HP.$ Then, if there exists a polynomial biholomorphic mapping $f: \Omega \rightarrow V$ with polynomial inverse, it follows that $B_V (P) = HP$.
\end{thm}
\begin{proof}
Let $p\in P$ be any polynomial function on $V$. We will show that $B_Vp \in HP$. As usual, the inclusion $HP \subset B_V(P)$ is clear.
Using the transformation formula (\ref{bergtrans}) for $f$ as in the statement of the theorem, we have
\begin{equation}
\label{trans}
B_\Omega (u \cdot p \circ f)=u \cdot (B_Vp)\circ f,
\end{equation}
where $u$ is the complex Jacobian determinant of $f$. Let $F=f^{-1}:\thinspace V \rightarrow \Omega$ be the inverse mapping to $f$, and let $U$ be the complex Jacobian determinant of $F$. By hypothesis, each of $f,\thinspace F$ are polynomial mappings, and so $u$ is a polynomial function on $\Omega$ and $U$ is a polynomial function on $V$. By the chain rule, since $f\circ F$ is the identity, we have
\begin{equation}
\label{chain}
(u\circ F)\cdot U \equiv 1
\end{equation}
as functions on $V$.
Now rearrange (\ref{trans}) by dividing by $u$ and composing on the right with $F$ on each side. Using (\ref{chain}), this yields that
\begin{equation}
\label{BV}
B_Vp=U \cdot B_\Omega(u \cdot p \circ f)\circ F.
\end{equation}
Now, since $u, \thinspace p, \thinspace f$ are all polynomial, the function $u \cdot p \circ f$ is a polynomial function on $\Omega$, so by hypothesis $B_\Omega(u \cdot p \circ f)$ is a polynomial function on $\Omega$. Since $F$ and $U$ are polynomial functions on $V$, it follows immediately that $U \cdot B_\Omega(u\cdot p \circ f)\circ F$ is a polynomial function on $V$. Hence $B_Vp \in HP$.
\end{proof}
We remark that in one dimension, the only biholomorphic polynomial mappings with polynomial inverse are of degree 1, but many other such mappings exist in dimensions greater than 1. We can use Theorem \ref{bihol} to find domains which are neither ellipsoids nor bounded circular domains on which $B(P)=HP$.
As an explicit example, let $\Omega$ be the unit polydisc in $\mathbb{C}^2$, \[ \Omega = \{(z_1,z_2)\in \mathbb{C}^2 \thinspace : \thinspace |z_1|<1 \thinspace \text{and} \thinspace |z_2|<1 \},\]
and let $f: \thinspace \mathbb{C}^2 \rightarrow \mathbb{C}^2$ be the polynomial mapping defined by
\[f(z_1,z_2)=(z_1+z_2^2,z_2).\] It is easy to verify that $f$ is univalent on all of $\mathbb{C}^2$, with inverse $F$ given by \[F(\zeta_1,\zeta_2)=(\zeta_1-\zeta_2^2,\zeta_2).\] Define $V=f(\Omega)$, and apply Theorem \ref{bihol} with $f$ restricted to $\Omega$ and $F$ restricted to $V$ to see that the projection $B_V$ satisfies $B_V(P)=HP$. The domain $V$ is neither a circular domain about any point, nor an ellipsoid. To see this is a matter of a few simple calculations.
First, note that the points $(\frac{91}{100}, \frac{1}{10})$ and $(\frac{171}{100},\frac{9}{10})$ are both members of $V$ (being the images under $f$ of the points $(\frac{9}{10}, \frac{1}{10})$ and $(\frac{9}{10},\frac{9}{10})$, respectively). Their midpoint is $M=(\frac{131}{100},\frac{1}{2}),$ which is not a member of $V$, since $F(M)=(\frac{53}{50},\frac{1}{2})$, which lies outside $\Omega$. Hence $V$ is not convex, and therefore cannot be an ellipsoid.
Second, consider again the point $(\frac{171}{100},\frac{9}{10})$ of $V$. If $V$ were a circular domain about the origin, then the point $(-\frac{171}{100},-\frac{9}{10})$ would also be a member of $V$, but applying $F$ to this point we find $F((-\frac{171}{100},-\frac{9}{10}))=(-\frac{63}{25}, -\frac{9}{10})$, which is not an element of $\Omega,$ so $(-\frac{171}{100},-\frac{9}{10})$ is not a member of $V$, and $V$ fails to be circular about the origin.
If $V$ were circular about a point $a\in V$, then by \cite{Bell2} the Bergman kernel at $a$, $K_V(\zeta,\thinspace a)$, would be constant in $\zeta \in V$. Since the Jacobian determinant of $F$ is the constant function $1$, the transformation formula for the Bergman kernel under biholomorphisms yields that $K_\Omega(z,\thinspace F(a))$ is constant in $z \in \Omega$. However, since $\Omega$ is itself circular about the origin, \cite{Bell2} gives that $K_\Omega(z,\thinspace 0)$, the Bergman kernel at the origin, is also constant. Hence, by the reproducing property of the Bergman kernel, there exists a complex constant $\lambda$ such that $h(0)=\lambda h(F(a))$ for all $h \in H^2(\Omega)$. This is only possible if $\lambda=1$ and $F(a)=0$. Applying $f$, we have $a=f(0)=0$; thus $V$ would have to be circular about the origin, a possibility excluded in the previous paragraph.
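The arithmetic in the preceding paragraphs is elementary, and can be checked mechanically with exact rational arithmetic; the following script (an illustration, not part of the argument) does so for the real points used above:

```python
from fractions import Fraction as Fr

def f(z1, z2):   # the biholomorphism f(z1, z2) = (z1 + z2^2, z2)
    return (z1 + z2 * z2, z2)

def F(w1, w2):   # its polynomial inverse F(w1, w2) = (w1 - w2^2, w2)
    return (w1 - w2 * w2, w2)

def in_polydisc(z1, z2):  # membership in the unit polydisc (real points only)
    return abs(z1) < 1 and abs(z2) < 1

def in_V(w1, w2):         # w lies in V = f(polydisc) iff F(w) lies in the polydisc
    return in_polydisc(*F(w1, w2))

p = f(Fr(9, 10), Fr(1, 10))   # = (91/100, 1/10)
q = f(Fr(9, 10), Fr(9, 10))   # = (171/100, 9/10)
assert p == (Fr(91, 100), Fr(1, 10)) and q == (Fr(171, 100), Fr(9, 10))

# the midpoint of two points of V lies outside V, so V is not convex
M = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
assert M == (Fr(131, 100), Fr(1, 2))
assert F(*M) == (Fr(53, 50), Fr(1, 2)) and not in_V(*M)

# the antipode of a point of V is not in V, so V is not circular about 0
assert F(-q[0], -q[1]) == (Fr(-63, 25), Fr(-9, 10)) and not in_V(-q[0], -q[1])
```
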
\section{A hierarchy of Khavinson-Shapiro Conjectures}
As a final consideration, we can employ another variation of the linear algebra proof from Section $3$ to show that on ellipsoids in $\mathbb{R}^n$, the orthogonal projection from $L^2$ real-valued functions onto its subspace of polyharmonic functions of order $m$, which we call the `Bergman Projection onto polyharmonic functions of order $m$', takes real polynomials to real polynomials without increasing degree. (The projection is defined since the space of polyharmonic functions of order $m$ is closed in $L^2$, for example by hypoellipticity of the operator $\Delta ^m$). Recall that, for positive integers $m$, the polyharmonic functions of order $m$ on a domain are those functions $f$ such that $\Delta ^m f=0$. We mention that the Khavinson-Shapiro conjecture for the polyharmonic Dirichlet problem has recently been established for a particular class of domains in \cite{R} (cf Sec. 10, Thm 31).
Given an ellipsoid $\mathcal{E}\subset \mathbb{R}^n$, let the Bergman projection onto polyharmonic functions of order $m$ be denoted by $B^{(m)}$, and let $\mathcal{H}^{(m)}_N$ denote the space of polynomials which are polyharmonic of order $m$ and of total degree at most $N$. Let $P_N$ be the space of real polynomials of degree at most $N$, and let $r$ be a degree-2 defining polynomial for $\mathcal{E}$.
Mimicking the proof of Theorem \ref{main}, let $\varphi$ be the linear map from $P_N$ to itself defined by
\[ \varphi (p)=\Delta ^m \bigl( r^{2m} \Delta^m p \bigr). \]
Now, since $\varphi$ clearly vanishes on $\mathcal{H}^{(m)}_N$, it descends to a map $\tilde{\varphi}$ from $P_N/ \mathcal{H}^{(m)}_N$ to itself. Considering $P_N / \mathcal{H}^{(m)}_N$ as a real vector space, $\tilde{\varphi}$ is linear, and can be shown to be injective. Indeed, suppose that $\tilde{\varphi}([p])=[0]$. Then ${\varphi}(p) \in \mathcal{H}^{(m)}_N$. But $\varphi (p)$ is also orthogonal to $\mathcal{H}^{(m)}_N$, by integration by parts. To see this, let $q \in \mathcal{H}^{(m)}_N$, and, using the usual inner product on real-valued functions, notice that
\[ \int_{\mathcal{E}}{\Delta^m ( r^{2m} \Delta^m p ) \cdot q } = \int_{\mathcal{E}}{r^{2m} \Delta ^m p \cdot \Delta^m q} = \int_{\mathcal{E}}{r^{2m} \Delta ^m p \cdot 0}=0. \]
The use of the self-adjointness of $\Delta$ has been employed $m$ times, and this is justified since $r^{2m}$ along with all of its partial derivatives of order up to $2m-1$ vanish on the boundary of $\mathcal{E}$.
Hence ${\varphi}(p)=0.$ Write this as
\[ \Delta \bigl( \Delta^{m-1} ( r^{2m} \Delta ^m p ) \bigr)=0, \] and notice that by counting derivatives,
$ \Delta^{m-1} ( r^{2m} \Delta ^m p )$
vanishes on the boundary of $\mathcal{E}$. Hence, by the maximum principle for harmonic functions,
\[ \Delta^{m-1} ( r^{2m} \Delta ^m p ) =0 .\]
Repeating this argument $m-1$ more times, we see that $r^{2m} \Delta^m p =0$, and since $r$ is nonvanishing on $\mathcal{E}$, we have $\Delta ^m p = 0$, so $[p]=[0]$.
So $\tilde{\varphi}$ is injective, and it is also surjective since it is a linear map between vector spaces of equal finite dimension. Given a polynomial $P \in P_N$, let $Q \in P_N$ be such that $[P]=\tilde{\varphi}([Q]).$ There exists $h \in \mathcal{H}^{(m)}_N$ such that $P=\varphi(Q) + h.$ But by the integration by parts argument above applied to $Q$, $\varphi(Q)$ is orthogonal to $\mathcal{H}^{(m)}_N$. So we in fact have found an orthogonal decomposition, and $B^{(m)}P =h$ is a polynomial of degree at most $N$. We have proved the following theorem.
\begin{thm} \label{Harmonics} Let $\mathcal{E} \subset \mathbb{R}^n$ be an ellipsoid. Given any positive integer $m$, let $B^{(m)}$ be the Bergman projection from $L^2(\mathcal{E})$ onto polyharmonic functions of order $m$. Then, for each positive integer $N$, $B^{(m)}(P_N)= \mathcal{H}^{(m)}_N$. \end{thm}
In essence, this means we have conceived a whole hierarchy of Khavinson-Shapiro conjectures, one for each of the possible values of $m$.
We remark that we can explicitly relate Theorem \ref{Harmonics} to potential theory by noticing that for smooth bounded domains, $B^{(m)}=I- \Delta ^m G^{(2m)} \Delta ^m$ for each positive integer $m$. Here $I$ is the identity operator and $G^{(2m)}$ is the solution operator for the polyharmonic Dirichlet problem $\Delta ^{2m} \varphi = v$, $\varphi=D_j \varphi =0$ on $bd\mathcal{E}$, where $D_j$ stands for every partial derivative of order at most $2m-1$. For an ellipsoid $\mathcal{E}$, whenever $p$ is a polynomial of degree $N$, it follows that $\Delta ^m G^{(2m)} \Delta ^m p$ is a polynomial of degree at most $N$, and we have a formulation in terms of solutions to a PDE, in comparison with the original formulation of the Khavinson-Shapiro conjecture.
{\bf Acknowledgements.} In closing, the author would like to thank Steve Bell and Erik Lundberg for insightful discussions regarding the content of this work.
\end{document} |
\begin{document}
\title{Unconditional Stability for a Numerical Scheme Combining Implicit Timestepping for Local Effects and Explicit Timestepping for Nonlocal Effects}
{\tiny Preprint ANL/MCS-P1093-0903}
\maketitle
\begin{abstract} A combination of implicit and explicit timestepping is analyzed for a system of ODEs motivated by ones arising from spatial discretizations of evolutionary partial differential equations. Loosely speaking, the method we consider is implicit in local and stabilizing terms in the underlying PDE and explicit in nonlocal and unstabilizing terms. Unconditional stability and convergence of the numerical scheme are proven by the energy method and by algebraic techniques. This stability result is surprising because usually when different methods are combined, the stability properties of the least stable method play a determining role in the combination. \end{abstract}
\section{Introduction} This report considers timestepping methods for systems of ordinary differential equations of the form \begin{equation} \label{eq:mainEquation}
u'(t) + Au(t)+B(u)u(t)-Cu(t)=f(t), \end{equation} in which $A$, $B(u)$, and $C$ are ${n}\times{n}$ matrices, $u(t)$ and $f(t)$ are $n$-vectors, and \begin{equation}\label{eq:condEquation}
A=A^{T} \succ 0,\, B(u)=-B(u)^{*},\, C=C^{T}\succeq 0 \mbox{~and~} A-C \succeq 0. \end{equation} Here $\succ$ and $\succeq$ denote, respectively, the positive definite and the positive semidefinite ordering. The key properties motivating our work are that $A$ is sparse and that although $C$ is not sparse, the action of $C$ on a vector is inexpensive to calculate. This structure is motivated by multiscale discretizations of turbulence but can also arise from closed-loop control problems and ensemble calculations. Given this structure of (\ref{eq:mainEquation}), the simplest scheme that is computationally feasible is \underline{explicit} in the global, unstable part of (\ref{eq:mainEquation}), that is, $Cu$. Accordingly, we consider \begin{equation} \label{eq:schemeEquation} \frac{u_{n+1}-u_n}{k}+A{u_{n+1}}+B({u_n}){ u_{n+1}}-C{u_n}=f_{n+1}, k=\Delta t, \end{equation} where $u_n$ is the approximation to $u(t=nk)$. Usually when methods are combined, the stability properties of the explicit method play a determining role in the overall method. In Theorems \ref{t:unconditional} and \ref{t:bouHom}, we prove the surprising result that (\ref{eq:schemeEquation}) is \underline {unconditionally stable}. This result is outside the realm of root condition stability analysis for uncoupled scalar problems.
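In matrix terms, one step of (\ref{eq:schemeEquation}) amounts to solving the linear system $(I+kA+kB(u_n))u_{n+1}=(I+kC)u_n+kf_{n+1}$. A minimal sketch of such a step, with arbitrary illustrative coefficients chosen to satisfy (\ref{eq:condEquation}), is:

```python
import numpy as np

def imex_step(u_n, k, A, B, C, f_np1):
    """One step of the IMEX scheme: implicit in A and B(u_n), explicit in C.

    Solves (I + k A + k B(u_n)) u_{n+1} = (I + k C) u_n + k f_{n+1}.
    """
    n = len(u_n)
    I = np.eye(n)
    return np.linalg.solve(I + k * (A + B(u_n)), (I + k * C) @ u_n + k * f_np1)

# illustrative 3x3 data: A = A^T > 0, B(u) skew-symmetric, C = C^T >= 0, A - C >= 0
A = np.diag([2.0, 3.0, 4.0])
C = 0.5 * np.ones((3, 3))                      # dense, symmetric positive semidefinite
B = lambda u: np.array([[0.0,  u[2], -u[1]],
                        [-u[2], 0.0,  u[0]],
                        [u[1], -u[0], 0.0]])   # skew-symmetric for every u

u = np.array([1.0, -1.0, 0.5])
u_next = imex_step(u, k=10.0, A=A, B=B, C=C, f_np1=np.zeros(3))  # large timestep k
```

Only the sparse, local part $I+kA+kB(u_n)$ is inverted; the dense matrix $C$ enters through a single matrix-vector product.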
In Section 2, unconditional stability and convergence of (\ref{eq:schemeEquation}) are proven. We give two stability proofs. The first is algebraic. Since the constants depend on the dimension of the system, we also give an energy proof of stability (with uniform constants) that is potentially extensible to discretized PDEs. Section 3 presents numerical tests illustrating the theory. First, we briefly summarize some motivating problems leading to (\ref{eq:mainEquation}).
The basic model of turbulent dispersion is that it is dissipative in the mean (see ~\cite{KP80},~\cite{MP93},~\cite{IL98}). A more accurate formulation is that its dissipative effects are focused on the smallest resolved scales (see ~\cite{HMJ00}). This physical idea has led to algorithms for numerical stabilization of transport-dominated phenomena based on eddy diffusivity acting only on the smallest resolved scales (e.g., ~\cite{KA02},~\cite{HU00},~\cite{TA89},~\cite{KL02},~\cite{G99a}, ~\cite{G99b},~\cite{HMJ00},~\cite{L00},~\cite{L02}). The natural realization of this idea for spatial discretizations of convection diffusion equations is diffusive stabilization on all scales followed by antidiffusion on the large scales. This leads to the system of ODEs \begin{equation} \label{eq:tensor} \dot u_{ij}(t)+b \cdot \nabla^h u_{ij} - (\epsilon_0(h)+\epsilon) \Delta^h u_{ij} + \epsilon_0(h) P_H(\Delta^h P_H(u_{ij}))= f_{ij}, \end{equation} where standard notation is used: $\Delta^h$ is the discrete Laplacian, $\epsilon_0(h)$ is the artificial viscosity parameter and $P_H$ denotes a projection onto a coarser mesh; see Section 3 for details. The system (\ref{eq:tensor}) fits exactly the form (\ref{eq:mainEquation}), (\ref{eq:condEquation}), where $C$ is the matrix arising from the $\epsilon_0(h)$ term. We shall also test one algorithm as a perturbation of the method (\ref{eq:tensor}) in which the projection is replaced by a nearest-neighbor averaging $\overline{\Delta^h \overline{{u}_{ij}}}$. In both cases, the projection or averaging operator accounts for the nonlocal character (i.e., the large bandwidth) of $C$.
On the other hand, averaging and projection are both embarrassingly parallel operators whose action on a given vector is cheap to perform.\\ \begin{remark} (1) A second main application is discretization of turbulent flow problems which, although nonlinear and constrained, have a similar structure to the above (simple) linear convection diffusion problem.\\ (2) A known method of stabilizing the timestepping and the associated linear system (but not the spatial discretization) corresponds to (\ref{eq:mainEquation}) without the averaging: \begin{equation} \frac{u_{n+1}-u_n}{k}+b \cdot \nabla^h u_{n+1} - (\epsilon_0(h)+\epsilon) \Delta^h u_{n+1} + \epsilon_0(h) \Delta^h u_{n}= f_{n+1}. \end{equation} Each time step requires the inversion of the matrix corresponding to the operator $- (\epsilon_0(h)+\epsilon) \Delta^h+b \cdot \nabla^h+k^{-1}I$, which, for $\epsilon_0$ suitably chosen, is an $M$-matrix. Our analysis applies to this method as well. \end{remark}
\section{The Stability Analysis} For our analysis, we assume that $B(u)$ is in $C^1(\Re^n)$ and $f(t)$ is in $C^1([0,\infty))$. For any $T>0$, we set \[ F_T=\max_{t \in [0,T]} \norm{f(t)}_2. \]
\begin{lemma} \label{l:boundedSolution} The system of ODEs (\ref{eq:mainEquation}) under the condition (\ref{eq:condEquation}) with initial condition $u(0)=u_0$ has a unique solution on $[0,T]$, for any $T>0$. \end{lemma}
{\bf Proof} Since (\ref{eq:mainEquation}) can be written as $\dot{u}=\psi(t,u)$ with $\psi$ being of class $C^0$ in $t$ and $C^1$ in $u$, local existence and uniqueness follow from the classical theory of ODEs \cite[Theorem V.8]{BR62}.
We now show that the solution does not experience blow-up and can be extended globally. Multiplying (\ref{eq:mainEquation}) by $u(t)^T$ and using (\ref{eq:condEquation}), we obtain \[ u(t)^T u'(t) \leq -u(t)^T (A-C) u(t) + u(t)^T f(t) \leq u(t)^T f(t).\] Using the Cauchy-Schwarz and Young inequalities, we obtain \[ \frac{d}{dt} \norm{u (t)}_2^2 \leq \norm{u(t)}_2^2 + F_T^2. \] By Gronwall's inequality, this implies that \[ \norm{u(t)}_2^2 \leq \norm{u(0)}_2^2 e^t + F_T^2 \left(e^t-1\right) \] for any $t$ in an interval containing $0$ where $u(t)$ is defined. Since $u(t)$ does not experience blow-up in finite time, it can be extended uniquely over all of $[0,T]$. $\Box$
Note that from (\ref{eq:mainEquation}) and our assumption that $f(t)$ is of class $C^1([0,\infty))$, we get that $u(t)$ is of class $C^2([0,\infty))$. The fact that $u''(t)$ is continuous will be used in determining a bound for the truncation error.
Consider the system of ODEs (\ref{eq:mainEquation}) under the condition (\ref{eq:condEquation}) and discretized by (\ref{eq:schemeEquation}).
First, we note that each step of (\ref{eq:schemeEquation}) requires the inversion of $I+kA+kB_n$. \begin{lemma} Under (\ref{eq:condEquation}) the ${n}\times{n}$ matrix $I+kA+kB_n$ has a positive definite symmetric part and is invertible. \end{lemma} \noindent {\bf Proof:} Let $x$ be any nonzero vector in $\Re^n$. Since $B_n=B(u_n)$ is skew-symmetric, $x^TB_nx=0$, and so \[ \begin{array}{rcl} x^T(I+kA+kB_n)x & = & x^Tx+kx^TAx+kx^TB_nx \\
& = & {\norm{x}_2}^2+kx^TAx>0.~~~ \Box{}{} \end{array} \] Since $A$, $B_n=B(u_n)$ and $C$ do not commute, the stability of the numerical scheme cannot be analyzed by reduction to eigenvalues. Therefore, we formulate an energy norm that is not increased at each time step, that is, $\norm{u_{n+1}}_E \le \norm{u_n}_E$. \begin{definition} The energy norm of (\ref{eq:schemeEquation}), $\norm{.}_E$, is given by \begin{equation} \label{def:ENorm} {\norm{u}_E}^2={u}^Tu +k{u}^TCu, \qquad u \in \Re^n, \end{equation} and its associated inner product is $<u,v>_E=(N_kv)^T(N_ku)$ for $u,v \in \Re^n$, where $N_k=(I+kC)^{\frac{1}{2}}$. \end{definition}
It can be seen immediately that the energy norm and the 2-norm satisfy the following inequality:
\[ \sqrt{1 + k \lambda_{min}(C)}\, \norm{u}_2 \leq \norm{u}_E \leq \sqrt{1 + k \lambda_{max}(C)}\, \norm{u}_2, \] where $\lambda_{min}(C)$ and $\lambda_{max}(C)$ are, respectively, the smallest and the largest eigenvalue of $C$. From this inequality and the positive semidefiniteness of $C$, we get that the induced matrix norms satisfy \[ \norm{A}_E \leq \norm{A}_2 \sqrt{1 + k \lambda_{max}(C)}. \]
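This equivalence of $\norm{\cdot}_E$ and $\norm{\cdot}_2$ is easy to confirm numerically (an illustration with a randomly generated semidefinite $C$, not part of the analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 0.7
M = rng.standard_normal((n, n))
C = M.T @ M                      # symmetric positive semidefinite
lam = np.linalg.eigvalsh(C)      # eigenvalues of C, ascending

u = rng.standard_normal(n)
norm2 = np.linalg.norm(u)
normE = np.sqrt(u @ u + k * (u @ C @ u))   # the energy norm

# sqrt(1 + k*lam_min)*||u||_2 <= ||u||_E <= sqrt(1 + k*lam_max)*||u||_2
assert np.sqrt(1 + k * lam[0]) * norm2 <= normE + 1e-12
assert normE <= np.sqrt(1 + k * lam[-1]) * norm2 + 1e-12
```
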
\begin{theorem} \label{t:unconditional} Let $u_n$ satisfy (\ref{eq:schemeEquation}) with ${f(.)}\equiv{0}$, under the condition (\ref{eq:condEquation}) on the coefficients. Then, \[\norm{u_{n+1}}_E \le \norm{u_n}_E.\] \end{theorem} \noindent {\bf Proof:} Multiplying the equation in (\ref{eq:schemeEquation}) by $u_{n+1}^T$, we obtain \[u_{n+1}^T\frac{u_{n+1}-u_n}{k}+u_{n+1}^TA{u_{n+1}}+u_{n+1}^TB_n u_{n+1}=u_{n+1}^TCu_n.\] Since $B_n$ is skew-symmetric, $u_{n+1}^TB_nu_{n+1}=0$. Therefore \begin{equation} u_{n+1}^T\frac{u_{n+1}-u_n}{k}+u_{n+1}^TA{u_{n+1}}=u_{n+1}^TCu_n, \end{equation} which is equivalent to \begin{equation} u_{n+1}^Tu_{n+1}+ku_{n+1}^TAu_{n+1}=ku_{n+1}^TCu_n+u_{n+1}^Tu_n . \end{equation} Since $A \succeq C$, we have that \begin{equation} \label{eq:Cineq} u_{n+1}^Tu_{n+1}+ku_{n+1}^TCu_{n+1} \le u_{n+1}^Tu_n +ku_{n+1}^TCu_n. \end{equation} Define $w=(u_{n+1},k^{1/2}C^{1/2}u_{n+1})^T$ and $v=(u_n,k^{1/2}C^{1/2}u_n)^T$. Then (\ref{eq:Cineq}) can be written as $w^Tw \le w^Tv$. Applying the Cauchy-Schwarz inequality, we get $\norm{w}_2 \le \norm{v}_2$. Hence, \begin{equation} u_{n+1}^Tu_{n+1}+ku_{n+1}^TCu_{n+1} \le u_n^Tu_n +ku_n^TCu_n, \end{equation} that is, \[\norm{u_{n+1}}_E \le \norm{u_n}_E. \Box{}{}\]
The conclusion of the preceding theorem is that when (\ref{eq:mainEquation}) is homogeneous, $f \equiv 0$, we obtain that $\norm{u_n}_E \le \norm{u_0}_E$ for all $n$, independently of $T$. This means that our method is, indeed, \underline{unconditionally stable}.
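The monotonicity $\norm{u_{n+1}}_E \le \norm{u_n}_E$ of Theorem \ref{t:unconditional} can be observed directly. The following self-contained experiment, with illustrative coefficients satisfying (\ref{eq:condEquation}) and a deliberately large timestep, iterates the homogeneous scheme and records the energy norm:

```python
import numpy as np

n, k = 4, 50.0                                   # deliberately large timestep k
A = np.diag([1.0, 2.0, 3.0, 4.0])                # A = A^T > 0
C = 0.25 * np.ones((n, n))                       # C = C^T >= 0, dense; A - C >= 0
B = lambda u: np.triu(np.outer(u, u), 1) - np.triu(np.outer(u, u), 1).T  # skew

def energy(u):
    """Energy norm: sqrt(u^T u + k u^T C u)."""
    return np.sqrt(u @ u + k * (u @ C @ u))

u = np.array([1.0, -2.0, 0.5, 3.0])
energies = [energy(u)]
for _ in range(100):
    # homogeneous step: (I + kA + kB(u_n)) u_{n+1} = (I + kC) u_n
    u = np.linalg.solve(np.eye(n) + k * (A + B(u)), (np.eye(n) + k * C) @ u)
    energies.append(energy(u))

# the energy norm never increases, despite the explicit treatment of C
assert all(e2 <= e1 + 1e-10 for e1, e2 in zip(energies, energies[1:]))
```
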
Consider (\ref{eq:schemeEquation}) with ${f}\equiv{0}$, rewritten as \begin{equation} \label{eq:homEquation} (I+kA+kB_n)u_{n+1}=(I+kC)u_n, \qquad B_n=B(u_n). \end{equation} Equation (\ref{eq:homEquation}) yields \[ u_{n+1}=(I+kA+kB_n)^{-1}(I+kC)u_n, \] which, in turn, implies that \[ (I+kC)^\frac{1}{2}u_{n+1}=(I+kC)^\frac{1}{2} (I+kA+kB_n)^{-1}(I+kC)^\frac{1}{2} (I+kC)^\frac{1}{2} u_n. \] Therefore, from the definition of $\norm{\cdot}_E$, a sufficient condition for the unconditional stability result is that \[ \norm{(I+kC)^\frac{1}{2} (I+kA+kB_n)^{-1}(I+kC)^\frac{1}{2}}_2 \leq 1. \] Under (\ref{eq:condEquation}), this follows from the next lemma.
\begin{lemma} \label{l:mainLemma} Let $D_1={D_1}^T \succ 0$ and $D_2={D_2}^T \succ 0$ be ${n}\times{n}$ matrices such that $D_1-D_2 \succeq 0$, and let $D_4=D_2^{\frac{1}{2}}$ be the symmetric positive definite square root of $D_2$. If $D_3$ is an ${n}\times{n}$ skew-symmetric matrix, then \begin{equation} \parallel D_4(D_1+D_3)^{-1}D_4 \parallel _2 \le 1. \end{equation} \end{lemma} \noindent {\bf Proof:} Let $F=D_4(D_1+D_3)^{-1}D_4$. It is straightforward that\\ $F^{-1}=D_4^{-1}(D_1+D_3)D_4^{-1}$. For any nonzero vector $x$ in $\Re^n$, \begin{eqnarray*} x^TF^{-1}x &=& x^TD_4^{-1}(D_1+D_3)D_4^{-1}x\\ &=& x^TD_4^{-1}D_1D_4^{-1}x+x^TD_4^{-1}D_3D_4^{-1}x. \end{eqnarray*} We claim that $D_4^{-1}D_3D_4^{-1}$ is skew-symmetric, so that $x^TD_4^{-1}D_3D_4^{-1}x=0$. Indeed, since $D_2=D_4^2$ and $D_2$ is symmetric, both $D_4$ and $D_4^{-1}$ are symmetric.\\ Hence, \[(D_4^{-1}D_3D_4^{-1})^T=D_4^{-1}D_3^TD_4^{-1}=-D_4^{-1}D_3D_4^{-1}.\] Thus \[x^TF^{-1}x= x^TD_4^{-1}D_1D_4^{-1}x, \hspace{.25in} \mbox{~for any~} \ 0\ne x\in\Re^n. \] Using the fact that $D_1-D_2$ is positive semidefinite, we obtain \[x^TF^{-1}x \ge x^TD_4^{-1}D_2D_4^{-1}x=x^Tx, \hspace{.25in} \mbox{~for any~} \ 0\ne x\in\Re^n. \]
This implies that \[{\norm{x}_2}^2 \le x^TF^{-1}x \le \norm{x}_2 \cdot \norm{F^{-1}x}_2, \hspace{.25in} \mbox{~for any~} \ 0\ne x\in\Re^n, \] that is, \begin{equation} \label{Fbound} \norm{x}_2 \le \norm{F^{-1}x}_2,\hspace{.25in} \mbox{~for any~} \ 0\ne x\in\Re^n. \end{equation} Taking $x=Fy$, (\ref{Fbound}) is equivalent to \begin{equation} \norm{Fy}_2 \le \norm{y}_2,\hspace{.25in} \mbox{~for any~} \ 0\ne y\in\Re^n. \end{equation} Since the last inequality holds for any nonzero vector $y$, we conclude that $\norm{F}_2 \le 1$. \\ \hspace{2in}$\Box{}{}$\\
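Lemma \ref{l:mainLemma} can likewise be spot-checked numerically (an illustration with randomly generated matrices satisfying the hypotheses):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
D2 = M.T @ M + 0.1 * np.eye(n)          # D2 = D2^T > 0
E = rng.standard_normal((n, n))
D1 = D2 + E.T @ E + 0.1 * np.eye(n)     # D1 = D1^T > 0 with D1 - D2 > 0
S = rng.standard_normal((n, n))
D3 = S - S.T                            # skew-symmetric

w, V = np.linalg.eigh(D2)
D4 = V @ np.diag(np.sqrt(w)) @ V.T      # symmetric square root of D2

F = D4 @ np.linalg.inv(D1 + D3) @ D4
assert np.linalg.norm(F, 2) <= 1 + 1e-12   # the conclusion of the lemma
```
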
For the next step, we analyze the stability of the nonhomogeneous problem over an arbitrary but finite time interval $[0,T]$. We later show that the stability of the homogeneous problem does not depend on $T$. Consider (\ref{eq:schemeEquation}) with ${f}\not\equiv{0}$.
After some simple calculations, we get that $u_n$ satisfies \begin{equation} \label{eq:mainRecursion} u_{n+1}=(I+kA+kB_n)^{-1}(I+kC)u_n+k(I+kA+kB_n)^{-1}f_{n+1}. \end{equation}
We denote the range of the step index $n$, by $[0,N]$, where $kN=T$. To simplify the notation, we do not explicitly indicate that $N$ depends on $k$ and $T$.
\begin{theorem} \label{t:bouHom} Let (\ref{eq:condEquation}) hold. Then the solution of (\ref{eq:mainRecursion}) satisfies the following bound: \begin{eqnarray*} \norm{u_{n+1}}_E &\le& \norm{u_0}_E+\frac{k}{1+k \lambda _{min}(C)}\sum_{p=0}^{n}\norm{f_{p+1}}_E\\ &\le& \norm{u_0}_E+\frac{T}{1+k\lambda_{min}(C)} \max_{t \in [0,T]} \norm{f(t)}_E,\, \forall 0 \leq n \leq N-1. \end{eqnarray*} Here $T$ is the size of the integration interval.
\end{theorem}
\noindent {\bf Proof:} To simplify notation, we take $N_k=(I+kC)^{\frac{1}{2}}$ and $M_k=(I+kA+kB_n)^{-1}(I+kC)$. Then equation (\ref{eq:mainRecursion}) can be written as \[u_{n+1}=M_ku_n+k(I+kA+kB_n)^{-1}f_{n+1}.\] Using the definition of the energy inner product, we have \[(N_ku_{n+1})^T(N_ku_{n+1})=(N_ku_{n+1})^TN_kM_ku_n+k(N_ku_{n+1}) ^TN_k(I+kA+kB_n)^{-1} f_{n+1}.\] \noindent Algebraic manipulation and the Cauchy-Schwarz inequality yield \begin{eqnarray*} {\norm{N_ku_{n+1}}_2}^2 &\le& \norm{N_ku_{n+1}}_2 \cdot \norm{N_kM_kN_k^{-1}}_2 \cdot \norm{N_ku_n}_2\\ & & +k\norm{N_ku_{n+1}}_2\cdot\norm{N_kM_kN_k^{-1}}_2\cdot\norm{N_k^{-1}f_{n+1}}_2. \end{eqnarray*} Using Lemma \ref{l:mainLemma} with $D_2=N_k^2$ and $D_1+D_3=I+kA+kB_n$, and noting that $N_kM_kN_k^{-1}=N_k(I+kA+kB_n)^{-1}N_k$, we obtain that $\norm{N_kM_kN_k^{-1}}_2 \le 1$. Then the previous inequality reduces to \[\norm{N_ku_{n+1}}_2 \le \norm{N_ku_n}_2+k\norm{N_k^{-1}f_{n+1}}_2.\] Since $(I+kC)^{-1}$ is a symmetric positive definite matrix, \[\norm{(I+kC)^{-1}}_2=\max{\lambda\bigl((I+kC)^{-1}\bigr)} = \frac{1}{\min{\lambda(I+kC)}},\] and by the spectral mapping theorem $\lambda(I+kC)=1+k\lambda(C)$. Therefore \[\norm{(I+kC)^{-1}}_2= \frac{1}{1+k \lambda_{min}(C)},\] \noindent where $\lambda_{min}(C)$ is the minimum eigenvalue of the matrix $C$. Hence \begin{eqnarray*} \norm{N_ku_{n+1}}_2 &\le& \norm{N_ku_n}_2 + k\norm{N_k^{-2}N_kf_{n+1}}_2\\ &\le& \norm{N_ku_n}_2 + k\norm{(I+kC)^{-1}}_2 \norm{N_kf_{n+1}}_2\\ &\le& \norm{N_ku_n}_2 + \frac{k}{1+ k \lambda_{min}(C)}\norm{N_kf_{n+1}}_2. \end{eqnarray*} \noindent This implies \[\norm{u_{n+1}}_E - \norm{u_n}_E \le \frac{k}{1+k \lambda _{min}(C)}\norm{f_{n+1}}_E,\, 0 \leq n \leq N-1. 
\] \noindent Summing from $0$ to $n$ gives \[\norm{u_{n+1}}_E - \norm{u_0}_E \le \frac{k}{1+k \lambda _{min}(C)} \sum_{p=0}^{n}\norm{f_{p+1}}_E,\, \forall 0 \leq n \leq N-1,\] \noindent that is, \[\norm{u_{n+1}}_E \le \norm{u_0}_E + \frac{k}{1+k \lambda _{min}(C)} \sum_{p=0}^{n}\norm{f_{p+1}}_E,\, 0 \leq n \leq N-1,\] which is the claimed first result. The second result follows immediately. $\Box{}{}$
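The bound of Theorem \ref{t:bouHom} can be monitored along a computed trajectory; in the sketch below (illustrative coefficients and forcing), the running right-hand side of the first inequality is accumulated step by step and compared with $\norm{u_{n+1}}_E$:

```python
import numpy as np

n, k, N = 3, 2.0, 200
A = np.diag([2.0, 3.0, 5.0])                     # A = A^T > 0
C = 0.5 * np.ones((n, n))                        # C = C^T >= 0 and A - C >= 0
B = lambda u: np.array([[0.0,  u[0], 0.0],
                        [-u[0], 0.0, u[1]],
                        [0.0, -u[1], 0.0]])      # skew-symmetric
Nk2 = np.eye(n) + k * C                          # N_k^2 = I + kC
lam_min = np.linalg.eigvalsh(C)[0]

def Enorm(v):
    """Energy norm sqrt(v^T (I + kC) v)."""
    return np.sqrt(v @ Nk2 @ v)

u = np.array([1.0, 2.0, -1.0])
bound = Enorm(u)                                 # starts at ||u_0||_E
ok = True
for m in range(N):
    f = np.array([np.sin(0.1 * m), 1.0, np.cos(0.1 * m)])  # arbitrary forcing
    u = np.linalg.solve(np.eye(n) + k * (A + B(u)), Nk2 @ u + k * f)
    bound += k / (1 + k * lam_min) * Enorm(f)    # accumulate the theorem's bound
    ok = ok and Enorm(u) <= bound + 1e-9
assert ok
```
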
The local truncation error of the method (\ref{eq:schemeEquation}) is $O(\Delta t)$. For the error estimate that follows we need a precise statement of this fact, which we now derive.\\ To simplify our notation, we use $u_n$ to denote $u(t_n)$, where $u(\cdot)$ is the exact solution of (\ref{eq:mainEquation}). We also use $u_n$ to denote an iterate of our numerical scheme; the intended meaning will be evident from the context.
According to the definition of the local truncation error~\cite{A89}, \begin{eqnarray} \tau_{n+1} &=& \frac{u(t_{n+1})-u(t_n)}{k}+Au(t_{n+1})+B(u(t_n))u(t_{n+1})-Cu(t_n)\nonumber\\ & & -[u'(t_{n+1})+Au(t_{n+1})+B(u(t_{n+1}))u(t_{n+1})-Cu(t_{n+1})]\\ &=& \frac{u_{n+1}-u_n}{k}-u'_{n+1}-(B(u(t_{n+1}))-B(u(t_n))) u(t_{n+1})+ C(u_{n+1}-u_n). \nonumber \end{eqnarray} \noindent
Using the second-order integral form of the Taylor expansion around $t_{n+1}$, we obtain \[ u_{n+1}-u_n-k u'_{n+1}=-\int_{t_{n+1}}^{t_n} u''(t) (t_n-t) dt, \] which we rewrite as \[ \frac{u_{n+1}-u_n}{k}- u'_{n+1}=-\frac{1}{k}\int_{t_{n+1}}^{t_n} u''(t) (t_n-t) dt= -\frac{1}{k}\int_{t_{n}}^{t_{n+1}} u''(t) (t-t_n) dt. \] Using the first-order integral form of the Taylor expansion around $t_n$, we obtain \[\begin{array}{rcl} & & (B(u(t_{n+1}))-B(u(t_n)))u(t_{n+1})-C(u_{n+1}-u_n) = \\ & & \int_{t_n}^{t_{n+1}} \left( \frac{d}{dt} B(u(t)) u(t_{n+1})-C u'(t) \right) dt. \end{array} \]
Using the expression we have derived for the local truncation error $\tau_{n+1}$, and the preceding equations derived from Taylor's theorem, we obtain \begin{eqnarray*} \tau_{n+1} & = & -\frac{1}{k}\int_{t_{n}}^{t_{n+1}} u''(t) (t-t_n) dt - \int_{t_n}^{t_{n+1}} \left( \frac{d}{dt} B(u(t)) u(t_{n+1})-C u'(t) \right) dt \\ & = & \int_{t_n}^{t_{n+1}} \left(-\frac{t-t_n}{k} u''(t)- \frac{d}{dt} B(u(t)) u(t_{n+1})+C u'(t) \right) dt. \end{eqnarray*}
By the mean value theorem, there exists $\xi_n \in (t_n,t_{n+1})$ such that \begin{equation} \label{eq:mainTruncation} \tau_{n+1} = - u''(\xi_n) (\xi_n-t_n) - k \left.\frac{d}{dt} B(u(t))
\right|_{t=\xi_n} u(t_{n+1}) + k C u'(\xi_n). \end{equation} Hence, using the fact that $ 0 \leq \left(\xi_n-t_n \right) \leq k$, we obtain that \[ \norm{\tau_{n+1}}_2 \le k \max_{t_n \le s \le t_{n+1}} \left( \norm{ u''(s)}_2 +
\norm{\left.\frac{d}{dt} B(u(t)) \right|_{t=s}}_2 \max_{t_n \le \theta \le t_{n+1}}\norm{u(\theta)}_2+ \norm{Cu'(s)}_2 \right). \] \noindent This proves the following lemma. \begin{lemma} Let $k=\Delta t$ and $n\ge 0$. The method \begin{equation} \label{inhomogeneous} \frac{u_{n+1}-u_n}{k}+Au_{n+1}+B_nu_{n+1}-Cu_n=f_{n+1}, \end{equation} \noindent where $A=A^T \succ 0$ and $C=C^T\succeq 0$ are symmetric matrices, $B_n=B(u_n)$ is a skew-symmetric matrix, and $f_{n+1}=f((n+1)k)$, is consistent; that is, the local truncation error is $O(\Delta t)$. \end{lemma}
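The $O(\Delta t)$ consistency claim can be observed directly. The sketch below uses a manufactured smooth solution of our own choosing (the matrices and $u(t)$ are illustrative assumptions, with $f$ defined so that $u$ solves the equation exactly); evaluating $\tau_{n+1}$ from its definition, halving $k$ should roughly halve the largest truncation error.

```python
import numpy as np

# Manufactured smooth test problem (illustrative data, not from the paper)
u  = lambda t: np.array([np.sin(t), np.cos(2 * t)])
du = lambda t: np.array([np.cos(t), -2 * np.sin(2 * t)])
A  = np.array([[2.0, 0.3], [0.3, 1.5]])              # symmetric positive definite
C  = np.array([[0.5, 0.1], [0.1, 0.4]])              # symmetric positive semidefinite
B  = lambda v: np.array([[0.0, v[0]], [-v[0], 0.0]]) # skew-symmetric
f  = lambda t: du(t) + A @ u(t) + B(u(t)) @ u(t) - C @ u(t)

def max_tau(k):
    """Largest local truncation error of the scheme over [0, 1]."""
    taus = []
    for n in range(int(1.0 / k)):
        tn, tn1 = n * k, (n + 1) * k
        tau = ((u(tn1) - u(tn)) / k + A @ u(tn1) + B(u(tn)) @ u(tn1)
               - C @ u(tn) - f(tn1))
        taus.append(np.linalg.norm(tau))
    return max(taus)

r1, r2 = max_tau(1e-2), max_tau(5e-3)
print(r1, r2, r2 / r1)   # halving k roughly halves the truncation error
```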
We now bound the total error. We first bound the energy norm of the truncation error. \begin{lemma} \label{l:btruncation} Let $\tau_{n+1}$ be the local truncation error of method (\ref{inhomogeneous}). Then \begin{equation}\label{eq:btruncation} \norm{\tau_{n+1}}_E \le k \max_{0\le t \le T} \left(\norm{u''(t)}_E+ \norm{Cu'(t)}_E + \norm{\frac{d}{dt} B(u(t)) }_E \max_{0 \le s \le T} \norm{u(s)}_E \right). \end{equation} \end{lemma} \noindent {\bf Proof:} Taking the energy norm in the identity (\ref{eq:mainTruncation}) and applying the triangle inequality, the bound of the Taylor remainder factor by $k$, and the properties of the $\max$ function yields the conclusion. Note that $\norm{\frac{d}{dt} B(u(t)) }_E$ denotes the matrix norm induced by $\norm{\cdot}_E$. $\Box{}{}$ \\
We now give a convergence result for the solution of (\ref{eq:schemeEquation}). First, we need to compute a certain estimate. We have that \begin{eqnarray*} & & \left[B(u(t_n))-B(u_n)\right] u(t_{n+1}) = \int_{0}^{1} \frac{d}{d \theta}[B(u(t_n) \theta + u_n(1-\theta))] u(t_{n+1}) d\theta = \\ && \left(\nabla_u (B(u(t_n) \theta_n + u_n(1-\theta_n)))e_n\right)u(t_{n+1}),\, \mbox{~for some~} \theta_n \in [0,1], \end{eqnarray*} where $e_n=u(t_n)-u_n$. Here $u(t_n)$ is the solution of (\ref{eq:mainEquation}), whereas $u_n$ is the solution of our numerical scheme.
We define the matrix $W_n$ by its action on a vector $x \in \Re^n$: \[ W_n x = \left[ \left(\nabla_u B(u(t_n) \theta_n + u_n(1-\theta_n)) \right) x \right] u(t_{n+1}), \] which results in the following identity \begin{equation} \label{eq:wn}
\left[B(u(t_n))-B(u_n)\right] u(t_{n+1}) = W_n e_n. \end{equation}
\begin{lemma}\label{l:boundW} Let $u(\cdot)$ be the solution of (\ref{eq:mainEquation}) and $u_n$ be the approximation to $u(n\Delta t)$, obtained from the numerical scheme
(\ref{eq:schemeEquation}). Then there exists $\Gamma$ such that \[ \norm{W_n}_2 \leq \Gamma\mbox{~and~} \norm{W_n}_E \leq \Gamma_E=\Gamma \sqrt{1+k\lambda_{max}(C)},\; \forall\, 0 \leq n \leq N.\] \end{lemma}
{\bf Proof:} From Theorem \ref{t:bouHom} we have that \[ \begin{array}{rcl} \norm{u_n}_2 & \leq & \norm{u_n}_E \leq \norm{u_0}_E + T \max_{t \in [0,T]} \norm{f(t)}_E \\ & \leq & \sqrt{1 + k \lambda_{max}(C)} \left( \norm{u_0}_2 + T \max_{t \in [0,T]} \norm{f(t)}_2 \right),\; \forall\, 0\leq n \leq N. \end{array} \] Since $k \leq T$, we may define the $k$-independent bound \[ \Lambda_E=\sqrt{1+ T \lambda_{max}(C)} \left( \norm{u_0}_2 + T \max_{t \in [0,T]} \norm{f(t)}_2 \right), \] so that $\norm{u_n}_2 \leq \Lambda_E$. From Lemma \ref{l:boundedSolution} we have that $u(t)$ is bounded on $[0,T]$, and we define $\Lambda_u=\max_{t \in [0,T]} \norm{u(t)}_2$. Since $B(\cdot)$ is of class $C^1$, we can define \[ \Gamma=\max_{\theta \in [0,1],\, \norm{u_1}_2 \leq \Lambda_E,\, \norm{u_2}_2 =1,\, \norm{v_1}_2 \leq \Lambda_u,\, \norm{v_2}_2 \leq \Lambda_u} \norm{\left[\left(\nabla_u B(\theta v_2 + (1-\theta) u_1) \right) u_2\right] v_1}_2. \] From the definition of $W_n$, we immediately obtain that \[ \norm{W_n}_2 \leq \Gamma, \; \forall\, 0 \leq n \leq N. \] The second part of the conclusion follows from the inequality between $\norm{\cdot}_E$ and $\norm{\cdot}_2$. $\Box$
\begin{theorem}\label{t:main} Consider solving the nonhomogeneous problem on the interval $[0,T]$\\ \[u' +Au+B(u)u-Cu=f\] \noindent using the following method\\ \[\frac{u_{n+1}-u_n}{k}+Au_{n+1}+B_nu_{n+1}-Cu_n=f_{n+1},\]\\ \noindent where $k=\Delta t$, $B_n=B(u_n)$ and $f_{n+1}=f((n+1)k)$. Let $e_n=u(t_n)-u_n$ denote the error. Assume that $e_0=0$. Then the method is convergent and \begin{eqnarray*} \norm{e_{n+1}}_E & \le & \frac{\left(1+\frac{k \Gamma_E}{1+k \lambda _{min}(C)}\right)^{n+1}-1}{\Gamma_E}\, kU \\ & \le & \frac{e^{\frac{ T \Gamma_E}{1+k \lambda _{min}(C)}}-1}{\Gamma_E}\, kU,\; \forall\, 0 \leq n \leq N-1, \end{eqnarray*} when $\Gamma_E \neq 0$, and \begin{eqnarray*} \norm{e_{n+1}}_E \le (n+1) \frac{k^2U}{1+k \lambda_{min}(C)} \le T \frac{kU}{1+k \lambda_{min}(C)},\; \forall\, 0 \leq n \leq N-1, \end{eqnarray*} when $\Gamma_E=0$, where \[ U = \max_{0\le t \le T}\left(\norm{u''(t)}_E +\norm{Cu'(t)}_E + \norm{\frac{d}{dt} B(u(t)) }_E \max_{0 \le s \le T} \norm{u(s)}_E \right). \] \end{theorem} \noindent {\bf Proof:} Following the definition of the truncation error $\tau_{n+1}$ and using the equation (\ref{eq:wn}), we obtain that the error $e_n=u(t_n)-u_n$ satisfies \[\frac{e_{n+1}-e_n}{k}+Ae_{n+1}+B_ne_{n+1}-Ce_n=\tau_{n+1} - W_n e_n.\] \noindent After algebraic manipulation, we find that \[e_{n+1}=(I+kA+kB_n)^{-1}(I+kC)e_n+k(I+kA+kB_n)^{-1}(\tau_{n+1}-W_n e_n).\] \noindent We use the energy inner product to obtain \[\begin{array}{l} \left\langle e_{n+1},e_{n+1}\right\rangle_E= \\ \left\langle (I+kA+kB_n)^{-1}(I+kC)e_n+k(I+kA+kB_n)^{-1} (\tau_{n+1}-W_n e_n),e_{n+1}\right\rangle_E. 
\end{array} \] \noindent Applying the definition of energy norm (\ref{def:ENorm}) and the substitutions $M_k=(I+kA+kB_n)^{-1}(I+kC)$ and $N_k=(I+kC)^\frac{1}{2}$, we find that \[\begin{array}{l} (N_ke_{n+1})^T(N_ke_{n+1})= \\ (N_ke_{n+1})^TN_kM_ke_n+k(N_ke_{n+1})^TN_k (I+kA+kB_n)^{-1}(\tau_{n+1}-W_n e_n). \end{array} \] \noindent Using the Cauchy-Schwarz inequality, we obtain that \begin{eqnarray*} \norm{N_ke_{n+1}}_2^2 &\le& \norm{N_ke_{n+1}}_2 \,\norm{N_kM_kN_k^{-1}}_2 \, \norm{N_ke_n}_2\\ & & +k\norm{N_ke_{n+1}}_2\,\norm{N_kM_kN_k^{-1}}_2 \,\norm{N_k^{-1} \left(\tau_{n+1}-W_n e_n \right)}_2. \end{eqnarray*} \noindent Thus \[\norm{N_ke_{n+1}}_2 \le \norm{N_kM_kN_k^{-1}}_2 \, \norm{N_ke_n}_2+ k\norm{N_kM_kN_k^{-1}}_2 \,\norm{N_k^{-1} \left(\tau_{n+1}-W_n e_n \right)}_2.\] \noindent Using Lemma \ref{l:mainLemma} with $D_2=N_k^2$ and $D_1+D_3=M_kN_k^{-2}$, we obtain that $\norm{N_kM_kN_k^{-1}}_2 \le 1$. \noindent Hence \[\norm{e_{n+1}}_E \le \norm{e_n}_E + k\norm{N_k^{-2}}_2 \norm{ \left(\tau_{n+1}-W_n e_n \right)}_E.\] \noindent Equivalently, we obtain that \[\norm{e_{n+1}}_E \le \norm{e_n}_E + k\norm{(I+kC)^{-1}}_2 \left(\norm{\tau_{n+1}}_E + \norm{W_n}_E \norm{e_n}_E\right).\] \noindent Notice that $(I+kC)^{-1}$ is a symmetric positive definite matrix and \[\norm{(I+kC)^{-1}}_2=\frac{1}{1+k \lambda _{min}(C)}.\] On the other hand, by Lemma \ref{l:boundW}, there is a constant $\Gamma_E$ such that $\norm{W_n}_E \le \Gamma_E$. Therefore, \begin{equation}\label{eq:recursion} \norm{e_{n+1}}_E \le \left(1+\frac{k\Gamma_E}{1+k \lambda _{min}(C)} \right)\norm{e_n}_E + \frac{k}{1+k \lambda _{min}(C)}\norm{\tau_{n+1}}_E. \end{equation}
This is a recursion formula of the form \[ r_{n+1} \leq a r_n + b \tau_{n+1}, \] which, when $a > 1$, unrolls to the bound \[ r_{n+1} \leq a^{n+1} r_0 + \frac{a^{n+1}-1}{a-1}\, b \max_n \norm{\tau_{n}}_E. \]
Using this fact, we obtain that, when $\Gamma_E \neq 0$, the following bound for the error holds whenever $0 \leq n \leq N-1$:
\begin{eqnarray*} \norm{e_{n+1}}_E &\le& \left(1+\frac{k\Gamma_E}{1+k \lambda _{min}(C)}\right)^{n+1}\norm{e_0}_E\\ & & +\frac{\left(1+\frac{k\Gamma_E}{1+k \lambda _{min}(C)}\right)^{n+1}-1} {\frac{k\Gamma_E}{1+k \lambda_{min}(C)}}\,\frac{k}{1+k \lambda _{min}(C)} \max_n \norm{\tau_{n+1}}_E. \end{eqnarray*}
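Unrolling the recursion $r_{n+1} \le a r_n + b\tau_{n+1}$ gives $r_{n+1} \le a^{n+1}r_0 + b\max_n\norm{\tau_n}\sum_{p=0}^{n}a^p$, which is the geometric-sum bound used above. A quick numerical sanity check (with arbitrary illustrative values of $a$, $b$, and $\tau_n$ of our choosing):

```python
import numpy as np

a, b, r0, N = 1.03, 0.2, 0.0, 50          # illustrative values with a > 1
rng = np.random.default_rng(1)
tau = rng.uniform(0.1, 1.0, N)

r = r0
for n in range(N):
    r = a * r + b * tau[n]                # worst case of r_{n+1} <= a r_n + b tau_{n+1}
    closed = a ** (n + 1) * r0 + (a ** (n + 1) - 1) / (a - 1) * b * tau.max()
    assert r <= closed + 1e-12            # geometric-sum bound dominates
print("geometric-sum bound verified")
```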
Replacing $\norm{\tau_{n+1}}_E$ by its bound (\ref{eq:btruncation}) obtained in Lemma \ref{l:btruncation}, and considering that $e_0=0$, we have, when $\Gamma_E \neq 0$ and $0 \leq n \leq N-1$, that
\begin{equation*} \norm{e_{n+1}}_E \le \frac{\left(1+\frac{k\Gamma_E}{1+k \lambda _{min}(C)}\right)^{n+1}-1}{\Gamma_E}\, kU \end{equation*} with $U=\max_{0\le t \le T}\left(\norm{u''(t)}_E +\norm{Cu'(t)}_E + \norm{\frac{d}{dt} B(u(t)) }_E \max_{0 \le s \le T} \norm{u(s)}_E \right)$. The second inequality for $\Gamma_E \neq 0$ follows from $(1+x)^{n+1} \leq e^{x(n+1)} \leq e^{xN}$ for $x>0$, together with $kN=T$.
When $\Gamma_E = 0$, we immediately get from (\ref{eq:recursion}) and from Lemma \ref{l:btruncation} that \[ \norm{e_{n+1}}_E \le (n+1) \frac{k^2U}{1+k \lambda_{min}(C)},\; \forall\, 0 \leq n \leq N-1, \] which, together with $k N = T$, proves the inequalities for $\Gamma_E = 0$.
Since the bounds above are $O(k)$ and $\norm{\cdot}_E$ converges to $\norm{\cdot}_2$ as $k \rightarrow 0$, it follows that $\norm{e_n}_2 \rightarrow 0$ as $k \rightarrow 0$, which proves convergence. $\Box{}{}$
The case $\Gamma_E=0$ occurs, for example, when $B(u)$ is constant (which we simulate numerically in the next section). In that case, the error grows only linearly with the size of the interval, assuming that the derivatives up to order $2$ of the solution $u(t)$ are uniformly bounded.
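A first-order convergence check for the constant-$B$ case ($\Gamma_E=0$) can be sketched as follows. The test problem is manufactured by us (an illustrative assumption: we pick a smooth $u(t)$ and define $f$ so that $u$ is the exact solution); halving the timestep should then roughly halve the final error.

```python
import numpy as np

u  = lambda t: np.array([np.sin(t), np.cos(2 * t)])
du = lambda t: np.array([np.cos(t), -2 * np.sin(2 * t)])
A  = np.array([[2.0, 0.3], [0.3, 1.5]])        # symmetric positive definite
C  = np.array([[0.5, 0.1], [0.1, 0.4]])        # symmetric positive semidefinite
Bc = np.array([[0.0, 1.0], [-1.0, 0.0]])       # constant skew-symmetric B
f  = lambda t: du(t) + A @ u(t) + Bc @ u(t) - C @ u(t)
I  = np.eye(2)

def final_error(k, T=1.0):
    """Run the semi-implicit scheme to time T and return ||u_N - u(T)||."""
    v = u(0.0)
    for n in range(int(round(T / k))):
        v = np.linalg.solve(I + k * A + k * Bc,
                            (I + k * C) @ v + k * f((n + 1) * k))
    return np.linalg.norm(v - u(T))

e1, e2 = final_error(2e-2), final_error(1e-2)
print(e1, e2, e2 / e1)   # ratio near 1/2 indicates first-order convergence
```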
\section{Numerical Results}
Let $\Omega=[0,1] \times [0,1]$. For the equation \begin{eqnarray} \label{eq:modelPDE} u_t + b\cdot \nabla u - \epsilon \Delta u &=& f, \mbox{~over~} \Omega,\nonumber \\ u &=& \phi(x) \mbox{~on~} \partial \Omega, \\ u(x,0) &=& u_0(x) \mbox{~in~} \Omega, \nonumber \end{eqnarray} we use the method described in this work, with a uniform mesh and central differences. A choice must be made for the antidiffusion operator: averaging or projection. We have selected averaging; since this choice lies just outside the theory, it also tests the robustness of the algorithm. Antidiffusion is performed by averaging, where $\bar{u}(p)$ is a weighted average of the nearest neighbors of node $p$. This corresponds to filtering with $\delta=2h$. In our case the method becomes \[\dot u_{ij}(t) + b \cdot \nabla^h u_{ij} - (\epsilon + \epsilon_0) \Delta^h u_{ij} + \epsilon_0 \overline{\Delta \overline{u_{ij}}}^q=f_{ij},\] where $q$ denotes how many times the average operation is taken. In our experiments we chose $q=2$ and $\epsilon=10^{-4}$. We take $b=(\cos(\theta),\sin(\theta))$, where $\theta=17^\circ$.
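The averaging filter can be sketched as follows; the particular neighbor weights below are an assumption for illustration only (the text specifies merely that $\bar u(p)$ is a weighted average of nearest neighbors):

```python
import numpy as np

def neighbor_average(u):
    """One pass of u -> bar(u): replace each interior node by a weighted
    average of itself and its four nearest neighbors (illustrative weights
    4/8 center, 1/8 per neighbor; boundary values are left unchanged)."""
    ub = u.copy()
    ub[1:-1, 1:-1] = (4 * u[1:-1, 1:-1]
                      + u[2:, 1:-1] + u[:-2, 1:-1]
                      + u[1:-1, 2:] + u[1:-1, :-2]) / 8.0
    return ub

u = np.random.default_rng(2).standard_normal((33, 33))
ubar = neighbor_average(neighbor_average(u))   # q = 2 passes, as in the experiments
print(abs(ubar).max() <= abs(u).max())         # convex weights do not amplify u
```

Because the weights are nonnegative and sum to one, each pass is a convex combination and cannot increase the maximum norm, which is the smoothing behavior the antidiffusion step relies on.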
For the boundary and initial conditions we take the line at angle $\theta$ through the center of the domain. On the north side of the line we take $\phi=1$ on the boundary; on the south side we take $\phi=0$ on the boundary. We take $f=0$ and a zero initial condition.
\begin{figure}
\caption{ Spatial stability of the steady-state solution for various choices of the artificial viscosity parameter $\epsilon_0$. }
\label{fig:surfaces}
\end{figure}
\begin{figure}
\caption{ Stability of the numerical method demonstrated by the behavior of the energy norm}
\label{fig:timestepEnergyComparison}
\end{figure}
\begin{figure}
\caption{ Numerical validation of Theorem \ref{t:unconditional}}
\label{fig:shiftedBest}
\end{figure}
\begin{figure}
\caption{ Exponential growth of the solution of the scheme that includes the advection term explicitly}
\label{fig:explosion}
\end{figure}
We performed the following experiments, all on a 32 $\times$ 32 mesh. \begin{enumerate} \item We ran the simulation for $1,000$ steps with a timestep of $10$, with the artificial viscosity parameter $\epsilon_0$ having successively the values $10^{-1}$, $5 \times 10^{-3}$, $10^{-3}$, $10^{-4}$. We have presented no analysis for the spatial dependence of the solution with respect to $\epsilon_0$, but we have included this experiment for validation, since our choice of parameters should result roughly in the steady-state approximation for this mesh, which has been studied before in the literature.
The results are depicted in Figure \ref{fig:surfaces}. We see that when the artificial viscosity parameter $\epsilon_0$ is very small, a complete loss of coherence of the spatial structure results, whereas too large a parameter ($\epsilon_0=0.1$) alters the steady-state solution significantly. This effect is consistent with the typical behavior of centered methods for the skew step problem ~\cite{RST96}. \item For $\epsilon_0=10^{-4}$, we ran the simulation for $100$ steps with a timestep of $1$ and for $1,000$ steps with a timestep of $0.1$. The energy norm comparison of these computations is presented in Figure \ref{fig:timestepEnergyComparison}. We see that even for the very large step, the energy norm stays bounded, consistent with our absolute stability claim.
We also present in Figure \ref{fig:shiftedBest} a comparison between the energy norms of the distance between the successive iterates of the two cases and their outcome at time $100$. From Figure \ref{fig:timestepEnergyComparison}, we infer that $u(100)$ is a reasonable approximation to the steady-state solution. Since the equation (\ref{eq:modelPDE}) is linear, we have that $u_n-u(100)$ is the result of the numerical scheme applied to the homogeneous equation associated to (\ref{eq:modelPDE}). From Theorem \ref{t:unconditional} we have that $\norm{u_n-u(100)}_E$ must be a decreasing sequence, which is exactly what we observe from Figure \ref{fig:shiftedBest}. Note that $\norm{u_n}_E$ is not a decreasing sequence, as can be seen in Figure \ref{fig:shiftedBest}. Moreover, the sequence $\norm{u_n}_E$ may not even be monotonic, as seen in Figure \ref{fig:timestepEnergyComparison}, for $k=0.1$.
\item We compare the results of our scheme with the similar scheme that takes into account explicitly the term that contains the skew-symmetric matrix $B(u_n)$. For the latter scheme we obtain the recursion \[ \frac{u_{n+1}-u_n}{k}+Au_{n+1}+B(u_n)u_{n}-Cu_n=f_{n+1}. \] We apply this scheme to our example on a 32 $\times$ 32 mesh for $1000$ timesteps of length $k=1$. In Figure \ref{fig:explosion} we see the rapid exponential growth that is typical for computations with the timestep outside the region of stability.
This demonstrates that our scheme has significantly better stability properties than the alternative, which would result in linear systems of comparable sparsity. A numerical scheme based on a backward Euler approach that treats all terms implicitly, though also absolutely stable, would result in less sparse linear systems, since the matrix $C$ contains an averaging operator that substantially reduces sparsity; we therefore do not include it in the comparison.
\end{enumerate}
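The stability gap between the implicit and explicit treatments of the skew term can be reproduced on a toy $2\times2$ analogue. The construction below is our own illustration (not the PDE discretization): weak diffusion $A=\epsilon I$, a skew rotation $B$, $C=0$, $f=0$, and a deliberately large timestep.

```python
import numpy as np

eps, omega, k, steps = 0.01, 1.0, 1.0, 50
I = np.eye(2)
A = eps * I                                  # weak diffusion
B = np.array([[0.0, omega], [-omega, 0.0]])  # skew-symmetric advection analogue

u_imp = np.array([1.0, 0.0])                 # B treated implicitly (our scheme)
u_exp = np.array([1.0, 0.0])                 # B treated explicitly
for _ in range(steps):
    u_imp = np.linalg.solve(I + k * (A + B), u_imp)
    u_exp = np.linalg.solve(I + k * A, (I - k * B) @ u_exp)

print(np.linalg.norm(u_imp), np.linalg.norm(u_exp))
# the implicit iterate decays while the explicit one grows exponentially
```

For this rotation matrix the per-step amplification factors can be computed exactly: $1/\sqrt{(1+k\epsilon)^2+k^2\omega^2}<1$ for the implicit treatment versus $\sqrt{1+k^2\omega^2}/(1+k\epsilon)>1$ for the explicit one, which mirrors the blow-up observed on the mesh.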
\begin{flushright} \scriptsize \framebox{\parbox{2.4in}{The submitted manuscript has been created by the University of Chicago as Operator of Argonne National Laboratory ("Argonne") under Contract No.\ W-31-109-ENG-38 with the U.S. Department of Energy. The U.S. Government retains for itself, and others acting on its behalf, a paid-up, nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.}} \end{flushright}
\end{document}
\begin{document}
\title{Unitary Invariants and Classification of Four-Qubit States via Negativity Fonts} \author{S. Shelly Sharma} \email{shelly@uel.br} \affiliation{Departamento de F\'{\i}sica, Universidade Estadual de Londrina, Londrina 86051-990, PR Brazil } \author{N. K. Sharma} \email{nsharma@uel.br} \affiliation{Departamento de Matem\'{a}tica, Universidade Estadual de Londrina, Londrina 86051-990 PR, Brazil }
\begin{abstract} Local unitary invariance and the notion of negativity fonts are used as the principal tools to construct four qubit invariants of degree 8, 12, and 24. A degree 8 polynomial invariant that is non-zero on pure four qubit states with four-body quantum correlations and zero on all other states is identified. Classification of four qubit states into seven major classes, using a criterion based on the nature of correlations, is discussed.
\end{abstract} \maketitle
To detect and quantify entanglement of composite quantum systems is a challenge taken up with great zeal by theorists and experimentalists alike. On the way, from the elegant bipartite separability criterion of Peres \cite{pere96} up to classification schemes for four qubit states \cite{vers02,vers03,miya03,miya04,ging02,lama07,li09,bors10,vieh11,shar12}, several useful entanglement measures and invariants have been found \cite{woot98,coff00,dur00,wong01,meye02,vida02,luqu03,oste05,luqu06,shar101,shar102,oste06,heyd04,leva051,leva052,leva06,chte07,doko09,elts12}. Two qubit entanglement is quantified by concurrence \cite{hill97}, which for a pure state is equal to the global negativity \cite{zycz98,vida02}. Entanglement of a three qubit state due to three-body quantum correlations is quantified by the three tangle \cite{coff00}. For the most general three qubit state, the difference of the squared global negativity and the three tangle is a measure of two qubit correlations and satisfies the CKW inequality \cite{coff00}. A natural question is, which polynomial function of the coefficients quantifies entanglement due to four-body correlations? Can we write an invariant, analogous to the global negativity for two qubits and the three tangle for three qubits, to quantify four-body correlations?
Invariant theory describes invariant properties of homogeneous polynomials under general linear transformations. If we write a qubit state in multilinear form, we can find the set of invariants of the form in terms of the state coefficients $a_{i_{1}i_{2}...i_{N}}$ by using standard methods, as has been done in \cite{miya03,miya04,luqu03}. One may then investigate the properties of all invariants in the set. Our general aim, however, is to construct those polynomial invariants that quantify entanglement due to $K-$body correlations in an $N-$qubit $\left( N\geq K\right) $ pure state. This is done by constructing $N-$qubit invariants from multivariate forms with $\left( K-1\right) -$qubit invariants as coefficients instead of $a_{i_{1} i_{2}...i_{N}}$. In particular, the invariant that quantifies entanglement due to $N-$body correlations is obtained from a biform having as coefficients the $N-1$ qubit invariants. The term $N-$body correlations refers, strictly, to correlations of the type present in an $N-$qubit GHZ state. The advantage of our approach \cite{shar101,shar102} is twofold. Firstly, we can choose to construct invariants that contain information about entanglement of a part of the system. Secondly, since the form of $N-$qubit invariants is directly linked to the underlying structure of the composite system state, it can throw light on the suitability of a given state for a specific information processing task. Local unitary invariance and the notion of negativity fonts are used as the principal tools to identify $K-$qubit invariants in an $N-$qubit state. Negativity fonts are the elementary units of entanglement in a quantum superposition state. Determinants of negativity fonts are linked to matrices obtained from the state operator through selective partial transposition \cite{shar07,shar08}. In this article, we obtain analytical expressions for polynomial invariants of degree $8$, $12$, and $24$ for $N=4$ states. 
One of the four qubit invariants is found to be nonzero on states with four-body quantum correlations and zero on separable states as well as on states with entanglement due to two and three body correlations. It is analogous to the three qubit invariant used to define the three tangle \cite{woot98}, and can likewise be used to construct an entanglement monotone to quantify four-body correlations.
To obtain four qubit invariants that quantify four qubit quantum correlations, we follow a sequence of steps as given below:
1. Identify two qubit invariants for a given pair in a three qubit state.
2. Obtain a quadratic equation with two qubit invariants for a given pair of qubits as coefficients. The discriminant of this form is the three qubit invariant written in terms of two qubit invariants.
3. Identify two qubit invariants in a four qubit state. Select three qubits and write three qubit invariants for these in a four qubit state. We identify five invariants, including two invariants analogous to ones known for a three qubit state.
4. A local unitary on fourth qubit yields transformation equations for three qubit invariants. Proper unitaries can reduce the number of three qubit invariants in the set to four. The process of finding such local unitaries yields a quartic equation from which four qubit invariants are obtained. Since the invariants in a larger Hilbert space are written in terms of relevant invariants in subspaces, it is possible to differentiate the invariants that quantify three-body quantum correlations from those that quantify four-body quantum correlations.
In principle the process can be carried on to a higher number of qubits. The polynomial invariants introduced by Luque and Thibon \cite{luqu03} were given a geometrical meaning in the work of Levay \cite{leva06}. We point out the relation of our four qubit invariants with the invariants in \cite{luqu03} and \cite{leva06}.
Polynomial invariants that identify the nature of correlations in a state are useful to apply the classification criteria proposed in \cite{shar12} to four qubit states. Two multi qubit pure states are equivalent under stochastic local operations and classical communication (SLOCC) \cite{dur00} if one can be obtained from the other with some probability using only local operations and classical communication amongst different parties. SLOCC equivalence is the central point in the classification of four qubit states into nine families in \cite{vers02}. Borsten et al. \cite{bors10} have invoked the black-hole--qubit correspondence to derive the classification of four-qubit entanglement. However, it has been found that the number of four qubit SLOCC entanglement classes is much larger \cite{li09}. The main result of Lamata et al. \cite{lama07} is that each of the eight genuine, inequivalent entanglement classes contains a continuous range of strictly non equivalent states, although with similar structure. O. Viehmann et al. \cite{vieh11} select a set of generators for the SL$(2,C)^{\otimes 4}$-invariant polynomials or tangles and classify the eight families of ref. \cite{lama07} using tangle patterns. In our classification scheme using the correlation based criterion \cite{shar12}, multipartite states within the same class have the same type of correlations but may have a different number and type of negativity fonts in the canonical state (all the states may not be SLOCC equivalent). In Section IV, we calculate the relevant invariants for the SLOCC families \cite{vers02} and re-classify the states on the basis of the number and nature of negativity fonts with non-zero determinants. The polynomial invariants used to classify the states in our scheme quantify correlations generated by distinct interaction types. Intuitively, this information should be extremely useful to quantum state engineering. 
Negativity font analysis can be a helpful tool to optimize the subsystem interactions to tailor the invariant dynamics for a specific quantum information processing task. A minor point that will be discussed relates to the controversy regarding the family L$_{ab_{4}}$, which is pointed out in ref. \cite{chte07} to be a subclass of L$_{abc_{3}}$ with $a=c$, while in \cite{li09} it has been shown that L$_{ab_{4}}$ and L$_{abc_{3}}$ belong to distinct SLOCC classes.
\section{Negativity Fonts and two qubit invariants}
In this section, we briefly review the concepts of global partial transpose \cite{pere96}, global negativity \cite{zycz98,vida02}, $K-$way partial transpose \cite{shar09} and $K-$way negativity fonts \cite{shar101,shar102}. We also identify those two qubit invariants which determine the entanglement of a pair of qubits in a three qubit state.
A general N-qubit pure state may be written as \begin{equation} \left\vert \Psi^{A_{1},A_{2},...A_{N}}\right\rangle =\sum_{i_{1}i_{2}...i_{N} }a_{i_{1}i_{2}...i_{N}}\left\vert i_{1}i_{2}...i_{N}\right\rangle , \label{nstate} \end{equation} where $\left\vert i_{1}i_{2}...i_{N}\right\rangle $ are the basis vectors spanning $2^{N}$ dimensional Hilbert space, and $A_{p}$ is the location of qubit $p$. The coefficients $a_{i_{1}i_{2}...i_{N}}$ are complex numbers. The local basis states of a single qubit are labelled by $i_{m}=0$ and $1,$ where $m=1,...,N$. The global partial transpose of an $N$ qubit state $\widehat {\rho}=\left\vert \Psi^{A_{1},A_{2},...A_{N}}\right\rangle \left\langle \Psi^{A_{1},A_{2},...A_{N}}\right\vert $ with respect to qubit at location $p$ is constructed from the matrix elements of $\widehat{\rho}$ through \begin{equation} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{G}^{T_{A_{p}} }\left\vert j_{1}j_{2}...j_{N}\right\rangle =\left\langle i_{1}i_{2} ...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1} j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle . \label{ptg} \end{equation} To construct a $K-$way partial transpose \cite{shar09}, every matrix element $\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{N}\right\rangle $ is labelled by a number $K=\sum \limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}),$ where $\delta_{i_{m},j_{m}}=1$ for $i_{m}=j_{m}$, and $\delta_{i_{m},j_{m}}=0$ for $i_{m}\neq j_{m}$. Matrix elements of state operator with a given $K$ represent $K-$way coherences present in the state. Local operations on a quantum superposition transform $K-$way coherences to $K\pm1$ way coherences. 
The $K-$way partial transpose of\ $\widehat{\rho}$ with respect to subsystem $p$ for $K>2$ is obtained by selective transposition such that \begin{align} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{K}^{T_{A_{p}} }\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1} i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle ,\nonumber\\ \text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & =K,\quad \text{and }\quad\delta_{i_{p},j_{p}}=0 \label{ptk1} \end{align} and \begin{align} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{K}^{T_{A_{p}} }\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1} i_{2}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{N} \right\rangle ,\nonumber\\ \text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & \neq K, \label{ptk2} \end{align} while \begin{align} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{2}^{T_{p} }\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1} i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle ,\nonumber\\ \text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & =1\text{ or }2,\quad\text{and }\quad\delta_{i_{p},j_{p}}=0 \label{pt21} \end{align} and \begin{align} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho}_{2}^{T_{p} }\left\vert j_{1}j_{2}...j_{N}\right\rangle & =\left\langle i_{1} i_{2}...i_{N}\right\vert \widehat{\rho}\left\vert j_{1}j_{2}...j_{N} \right\rangle ,\nonumber\\ \text{if}\quad\sum\limits_{m=1}^{N}(1-\delta_{i_{m},j_{m}}) & \neq1\text{ or }2. \label{pt22} \end{align} One can verify that global partial transpose may be expanded as \begin{equation} \widehat{\rho}_{G}^{T_{A_{p}}}=
{\textstyle\sum\limits_{K=2}^{N}}
\widehat{\rho}_{K}^{T_{A_{p}}}-\left( N-2\right) \widehat{\rho}. \label{decomp} \end{equation} The negativity of $\widehat{\rho}^{T_{A_{p}}}$, defined as $N^{A_{p}}=\left( \left\Vert \rho^{T_{A_{p}}}\right\Vert _{1}-1\right)$, where $\left\Vert \widehat{\rho}\right\Vert _{1}$ is the trace norm of $\widehat{\rho}$, arises due to all possible negativity fonts present in $\widehat{\rho}^{T_{A_{p}}}$. Since $\widehat{\rho}$ is a positive operator, the global negativity depends on the negativity of the $K-$way partially transposed operators with $K\geq2$.
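As a small numerical illustration of these definitions (using the two-qubit fact, recalled in the next section, that the global negativity of a pure state equals $2\left\vert a_{00}a_{11}-a_{01}a_{10}\right\vert$), one can build $\widehat{\rho}_{G}^{T_{A_{1}}}$ for a random pure state of our choosing and evaluate $N^{A_{1}}$ from the trace norm:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
a /= np.linalg.norm(a)                        # normalized two-qubit pure state a_{i1 i2}

psi = a.reshape(4)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # axes (i1, i2, j1, j2)
rho_pt = rho.transpose(2, 1, 0, 3).reshape(4, 4)     # global partial transpose in A_1
negativity = np.abs(np.linalg.eigvalsh(rho_pt)).sum() - 1.0   # ||rho^T||_1 - 1
print(negativity, 2 * abs(np.linalg.det(a)))  # the two quantities coincide
```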
To understand the concept of a negativity font in the context of an $N-$qubit system, consider the state \begin{align*} \left\vert \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\rangle & =a_{i_{1} i_{2}...i_{N}}\left\vert i_{1}i_{2}...i_{N}\right\rangle +a_{i_{1} +1,i_{2}...i_{N}}\left\vert i_{1}+1,i_{2}...i_{N}\right\rangle \\ & +a_{j_{1}j_{2}...j_{N}}\left\vert j_{1}j_{2}...j_{N}\right\rangle +a_{j_{1}+1,j_{2}...j_{N}}\left\vert j_{1}+1,j_{2}...j_{N}\right\rangle , \end{align*} with $K=
{\textstyle\sum\limits_{m=1}^{N}}
\left( 1-\delta_{i_{m}j_{m}}\right) $ and $\delta_{i_{1}j_{1}}=0$. The state $\left\vert \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\rangle $ is the product of a $K-$qubit GHZ-like state with an $N-K$ qubit product state. Let $\widehat{\sigma}_{K}^{T_{A_{1}}}$ be the $K-$way partial transpose of $\widehat{\sigma}_{K}=\left\vert \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\rangle \left\langle \Phi_{K}^{A_{1}A_{2}...A_{N}}\right\vert $ with respect to qubit $A_{1}$. If $\widehat{\rho}$ is a pure state given by $\widehat{\rho }=\left\vert \Psi^{A_{1}A_{2}...A_{N}}\right\rangle \left\langle \Psi ^{A_{1}A_{2}...A_{N}}\right\vert $, then $\widehat{\sigma}_{K}^{T_{A_{1}}}$ is a $4\times4$ sub-matrix of $\widehat{\rho}_{G}^{T_{A_{1}}}$ and $\widehat {\rho}_{K}^{T_{A_{1}}}$ with negative eigenvalue given by \[ \lambda^{-}=-\left\vert \det\left[ \begin{array} [c]{cc} a_{i_{1}i_{2}...i_{N}} & a_{j_{1}+1,j_{2}...j_{N}}\\ a_{i_{1}+1,i_{2}...i_{N}} & a_{j_{1}j_{2}...j_{N}} \end{array} \right] \right\vert . \] The matrix $\left[ \begin{array} [c]{cc} a_{i_{1}i_{2}...i_{N}} & a_{j_{1}+1,j_{2}...j_{N}}\\ a_{i_{1}+1,i_{2}...i_{N}} & a_{j_{1}j_{2}...j_{N}} \end{array} \right] $ is referred to as a $K-$way negativity font \cite{shar101,shar102}. A symbol used to represent a negativity font must identify the qubits that appear in the $K$ qubit GHZ-like state. Therefore we split the set of $N$ qubits, with their locations and local basis indices given by $T=\left\{ \left( A_{1}\right) _{i_{1}}\left( A_{2}\right) _{i_{2}}...\left( A_{N}\right) _{i_{N}}\right\} $, into two subsets, with $S_{1,T}$ containing qubits with local basis indices satisfying $\delta_{i_{m}j_{m}}=0$ ($i_{m}\neq j_{m}$), and $S_{2,T}$ having qubits for which $\delta_{i_{m}j_{m}}=1$ ($i_{m}=j_{m}$). To simplify the notation, we represent by $s_{1,T}$ the sequence of local basis indices for qubits in $S_{1,T}$. 
A specific negativity font is therefore represented by \[ \nu_{S_{2,T}}^{i_{1}i_{2}...i_{N}}=\left[ \begin{array} [c]{cc} a_{i_{1}i_{2}...i_{N}} & a_{j_{1}+1,j_{2}...j_{N}}\\ a_{i_{1}+1,i_{2}...i_{N}} & a_{j_{1}j_{2}...j_{N}} \end{array} \right] . \] A nonzero determinant $D_{S_{2,T}}^{s_{1,T}}=\det\left( \nu_{S_{2,T}}^{i_{1}i_{2}...i_{N}}\right) $ ensures that $\widehat{\sigma}_{K}^{T_{A_{1}}}$ has a negative eigenvalue. A measurement on the state of a qubit with index in $S_{1,T}$ reduces $\widehat{\sigma}_{K}$ to a separable state, whereas measuring the state of a qubit in $S_{2,T}$ does not change the negativity of $\widehat{\sigma}_{K}^{T_{A_{1}}}$. Elementary negativity fonts that quantify the negativity of $\widehat{\rho}^{T_{A_{p}}}$ for $p\neq1$ are defined in an analogous fashion. The determinant of a $K-$way negativity font detects $K-$body quantum correlations in an $N$ qubit state. For even $K$, proper combinations of determinants of $K-$way negativity fonts are found to be invariant under the action of local unitary operations on $K$ qubits \cite{shar102}.
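For readers who wish to verify the defining relation numerically, the simplest case is a two qubit pure state, where the amplitude matrix itself is the only negativity font and $\lambda^{-}=-\left\vert \det\nu\right\vert $. The following minimal sketch (an illustrative addition, using a hypothetical Bell-like state, not code from the original derivation) checks this by computing the partial transpose explicitly:

```python
import numpy as np

# Amplitude matrix a[i1, i2] of a two qubit pure state; for this state the
# single (2-way) negativity font is the matrix a itself.
a = np.array([[np.sqrt(0.3), 0.0],
              [0.0, np.sqrt(0.7)]])  # a Bell-like state sqrt(.3)|00> + sqrt(.7)|11>

psi = a.reshape(4)                   # state vector in the basis |i1 i2>
rho = np.outer(psi, psi.conj())      # pure-state density matrix

# Partial transpose with respect to qubit A1: swap the row index i1 with
# the column index i1' in rho[(i1,i2),(i1',i2')].
rho_pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

lam_min = np.linalg.eigvalsh(rho_pt).min()
print(lam_min, -abs(np.linalg.det(a)))  # the two numbers agree
```

The negative eigenvalue of the partially transposed matrix reproduces $-\left\vert \det\nu\right\vert $ up to floating-point rounding.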
For a two qubit state, the negative eigenvalue of the partial transpose is the invariant that distinguishes separable from entangled states. The global negativity of $\left\vert \Psi^{A_{1}A_{2}}\right\rangle =
{\textstyle\sum}
a_{i_{1}i_{2}}\left\vert i_{1}i_{2}\right\rangle $ is determined by $I_{2}^{A_{1}A_{2}}=\left\vert a_{00}a_{11}-a_{01}a_{10}\right\vert $, which is invariant under $U^{A_{1}}\otimes U^{A_{2}}$. Here $U^{A_{i}}$ is a local unitary operator that acts on qubit $A_{i}$. The subscript on $I_{2} ^{A_{1}A_{2}}$ refers to two-body correlations. A two qubit state therefore has a single negativity font $\nu^{00}=\left[ \begin{array} [c]{cc} a_{00} & a_{01}\\ a_{10} & a_{11} \end{array} \right] $. In a general three qubit state, \[ \left\vert \Psi^{A_{1}A_{2}A_{3}}\right\rangle =
{\textstyle\sum}
a_{i_{1}i_{2}i_{3}}\left\vert i_{1}i_{2}i_{3}\right\rangle , \] the number of two qubit invariants, for a selected pair of qubits, is three. For the pair $A_{1}A_{2}$, for example, these are the determinants of $2-$way negativity fonts defined as \begin{equation} D_{\left( A_{3}\right) _{i_{3}}}^{00}=\det\left[ \begin{array} [c]{cc} a_{00i_{3}} & a_{01i_{3}}\\ a_{10i_{3}} & a_{11i_{3}} \end{array} \right] ,\;i_{3}=0,1,\label{2wayd2} \end{equation} and the difference $\left( D^{000}-D^{010}\right) =\left( D^{000}+D^{001}\right) $, where \begin{equation} D^{0i_{2}0}=\det\left[ \begin{array} [c]{cc} a_{0i_{2}0} & a_{0,i_{2}+1,1}\\ a_{1i_{2}0} & a_{1,i_{2}+1,1} \end{array} \right] ,\;i_{2}=0,1,\label{3wayd2} \end{equation} is the determinant of a three-way negativity font.
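The invariance of $I_{2}^{A_{1}A_{2}}$ under $U^{A_{1}}\otimes U^{A_{2}}$ follows from the transformation $a\rightarrow U^{A_{1}}a\left( U^{A_{2}}\right) ^{T}$ of the amplitude matrix, so that $\det a$ only picks up the unimodular factors $\det U^{A_{1}}\det U^{A_{2}}$. A minimal numerical check (an added illustration, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(rng):
    # Haar-random 2x2 unitary from the QR decomposition of a complex Gaussian
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

# random normalized two qubit amplitude matrix a[i1, i2]
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
a /= np.linalg.norm(a)

I2 = abs(np.linalg.det(a))          # |a00 a11 - a01 a10|

U1, U2 = haar_unitary(rng), haar_unitary(rng)
a_prime = U1 @ a @ U2.T             # action of U^{A1} (x) U^{A2}
I2_prime = abs(np.linalg.det(a_prime))

print(I2, I2_prime)                 # equal up to rounding
```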
\section{Three-body correlations and three qubit invariants}
Our method was applied in ref. \cite{shar101} to construct the three-tangle \cite{coff00} and a degree two four qubit invariant which is a function of determinants of $4-$way negativity fonts. To clarify the process, we review the three qubit case and show that by using three qubit invariants one may classify three qubit entangled states into (i) states with three- and two-body correlations, (ii) states with only three-body correlations, and (iii) states with only two-body correlations. Class (i) states are the most general states. Class (ii) states with GHZ-type entanglement have the form \[ \left\vert \Psi^{A_{1}A_{2}A_{3}}\right\rangle =a_{i_{1}i_{2}i_{3}}\left\vert i_{1}i_{2}i_{3}\right\rangle +a_{i_{1}+1,i_{2}+1,i_{3}+1}\left\vert i_{1}+1,i_{2}+1,i_{3}+1\right\rangle , \] and class (iii) contains W-like entangled states and bi-separable states of three qubits. First of all, we write down the transformation equation for the two qubit invariant $D_{\left( A_{3}\right) _{1}}^{00}$ to obtain the invariant which quantifies three-body correlations. The form of this invariant is later used to identify three qubit invariants in four qubit states. In the absence of three-body correlations, modified transformation equations yield three qubit invariants that quantify two-body correlations in a three qubit state.
Under a local unitary $U^{A_{3}}=\frac{1}{\sqrt{1+\left\vert x\right\vert ^{2}}}\left[ \begin{array} [c]{cc} 1 & -x^{\ast}\\ x & 1 \end{array} \right] $, $D_{\left( A_{3}\right) _{1}}^{00}$ transforms as \begin{equation} \left( D_{\left( A_{3}\right) _{1}}^{00}\right) ^{\prime}=\frac{1}{1+\left\vert x\right\vert ^{2}}\left( D_{\left( A_{3}\right) _{1}}^{00}+x^{2}D_{\left( A_{3}\right) _{0}}^{00}+x\left( D^{000}+D^{001}\right) \right) , \end{equation} such that \begin{equation} \left( N_{A_{3}}^{A_{1}A_{2}}\right) ^{2}=\left\vert D_{\left( A_{3}\right) _{1}}^{00}\right\vert ^{2}+\left\vert D_{\left( A_{3}\right) _{0}}^{00}\right\vert ^{2}+2\left\vert \left( \frac{D^{000}+D^{001}}{2}\right) \right\vert ^{2}\label{na1a2} \end{equation} is a three qubit invariant. If the pair of qubits $A_{1}A_{2}$ is entangled then $N_{A_{3}}^{A_{1}A_{2}}\neq0$. We can verify that the global negativity of $\widehat{\rho}_{G}^{T_{A_{1}}}$ is given by \begin{equation} \left( N_{G}^{A_{1}}\right) ^{2}=4\left( N_{A_{3}}^{A_{1}A_{2}}\right) ^{2}+4\left( N_{A_{2}}^{A_{1}A_{3}}\right) ^{2}, \end{equation} where \begin{equation} \left( N_{A_{2}}^{A_{1}A_{3}}\right) ^{2}=\left\vert D_{\left( A_{2}\right) _{1}}^{00}\right\vert ^{2}+\left\vert D_{\left( A_{2}\right) _{0}}^{00}\right\vert ^{2}+2\left\vert \left( \frac{D^{000}-D^{001}}{2}\right) \right\vert ^{2}. \end{equation} The discriminant of the equation $\left( D_{\left( A_{3}\right) _{1}}^{00}\right) ^{\prime}=0$, viewed as a quadratic in $x$, yields the three qubit invariant \begin{equation} I_{3}^{A_{1}A_{2}A_{3}}=\left( D^{000}+D^{001}\right) ^{2}-4D_{\left( A_{3}\right) _{0}}^{00}D_{\left( A_{3}\right) _{1}}^{00}\text{,} \label{3way} \end{equation} which is a polynomial invariant of degree four in the coefficients $a_{i_{1}i_{2}i_{3}}$. The subscript in $I_{3}^{A_{1}A_{2}A_{3}}$ refers to three-body correlations of the type present in a three qubit GHZ state. 
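The invariant $I_{3}^{A_{1}A_{2}A_{3}}$ can be evaluated directly from the font determinants of Eqs. (\ref{2wayd2}) and (\ref{3wayd2}). The sketch below is an added illustration; it assumes that the index $i_{2}+1$ is taken mod $2$, the convention implicit in the identity $D^{000}-D^{010}=D^{000}+D^{001}$ quoted above. It reproduces the known three-tangle values $\tau_{3}=1$ for the GHZ state and $\tau_{3}=0$ for the W state:

```python
import numpy as np

def I3(a):
    """I3 = (D^{000}+D^{001})^2 - 4 D_{(A3)_0}^{00} D_{(A3)_1}^{00},
    with D^{000}+D^{001} computed as D^{000}-D^{010} (i2+1 taken mod 2)."""
    # 2-way fonts for the pair A1A2, qubit A3 fixed in |i3>
    D_A3 = [np.linalg.det(a[:, :, i3]) for i3 in (0, 1)]
    # 3-way fonts D^{0 i2 0}
    D3 = [np.linalg.det(np.array([[a[0, i2, 0], a[0, (i2 + 1) % 2, 1]],
                                  [a[1, i2, 0], a[1, (i2 + 1) % 2, 1]]]))
          for i2 in (0, 1)]
    return (D3[0] - D3[1]) ** 2 - 4 * D_A3[0] * D_A3[1]

ghz = np.zeros((2, 2, 2)); ghz[0, 0, 0] = ghz[1, 1, 1] = 1 / np.sqrt(2)
w = np.zeros((2, 2, 2)); w[0, 0, 1] = w[0, 1, 0] = w[1, 0, 0] = 1 / np.sqrt(3)

print(4 * abs(I3(ghz)))  # three-tangle of the GHZ state: 1
print(4 * abs(I3(w)))    # three-tangle of the W state: 0
```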
The terms $D^{000}-D^{010}$, $D_{\left( A_{3}\right) _{0}}^{00}$, and $D_{\left( A_{3}\right) _{1}}^{00}$ vanish when qubits $A_{1}$ and $A_{2}$ are in a product state. On the state \begin{equation} \left\vert \Psi^{A_{1}A_{2}}\right\rangle \left\vert \Psi^{A_{3}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}}}
a_{i_{1}i_{2}}\left\vert i_{1}i_{2}\right\rangle \left( b_{0}\left\vert 0\right\rangle +b_{1}\left\vert 1\right\rangle \right) ;\quad\left( i_{m}=0,1\right) , \end{equation} with $D^{00}\neq0$, we have $D_{\left( A_{3}\right) _{0}}^{00}=\left( b_{0}\right) ^{2}D^{00}$, $D_{\left( A_{3}\right) _{1}}^{00}=\left( b_{1}\right) ^{2}D^{00}$, and $D^{000}=D^{001}=b_{0}b_{1}D^{00}$, so that $I_{3}^{A_{1}A_{2}A_{3}}=0.$ The modulus of $I_{3}^{A_{1}A_{2}A_{3}}$ quantifies the entanglement of qubits $A_{1}A_{2}A_{3}$ due to three-body correlations. The three-tangle \cite{coff00}, $\tau_{3}=4\left\vert I_{3}^{A_{1}A_{2}A_{3}}\right\vert $, is a well known entanglement monotone.
For a general three qubit state with $I_{3}^{A_{1}A_{2}A_{3}}=0$, determinants of two-way fonts transform as \[ \left( D_{\left( A_{3}\right) _{0}}^{00}\right) ^{\prime}=\frac{1}{1+\left\vert x\right\vert ^{2}}\left( x^{\ast}\sqrt{D_{\left( A_{3}\right) _{1}}^{00}}-\sqrt{D_{\left( A_{3}\right) _{0}}^{00}}\right) ^{2}, \] \[ \left( D_{\left( A_{3}\right) _{1}}^{00}\right) ^{\prime}=\frac{1}{1+\left\vert x\right\vert ^{2}}\left( x\sqrt{D_{\left( A_{3}\right) _{0}}^{00}}+\sqrt{D_{\left( A_{3}\right) _{1}}^{00}}\right) ^{2}, \] therefore \begin{equation} N_{A_{3}}^{A_{1}A_{2}}=\left\vert \left( D_{\left( A_{3}\right) _{0}}^{00}\right) ^{\prime}\right\vert +\left\vert \left( D_{\left( A_{3}\right) _{1}}^{00}\right) ^{\prime}\right\vert =\left\vert D_{\left( A_{3}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{3}\right) _{1}}^{00}\right\vert , \end{equation} is a three qubit invariant. In other words, if $I_{3}^{A_{1}A_{2}A_{3}}=0$ then $N_{A_{3}}^{A_{1}A_{2}}$ quantifies two-body correlations of the pair $A_{1}A_{2}$. One can verify that $\left\vert D_{\left( A_{m}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{m}\right) _{1}}^{00}\right\vert $ ($m=1,2,3$) are three qubit invariants in this case. The sum of product invariants \begin{align} I_{2}^{A_{1}A_{2}A_{3}} & =3
{\textstyle\sum\limits_{\substack{i,j=1\\(i<j)}}^{3}}
\left( \left\vert D_{\left( A_{i}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{i}\right) _{1}}^{00}\right\vert \right) \left( \left\vert D_{\left( A_{j}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{j}\right) _{1}}^{00}\right\vert \right) ,\nonumber\\ & =3\left( N_{A_{1}}^{A_{2}A_{3}}N_{A_{2}}^{A_{1}A_{3}}+N_{A_{1}}^{A_{2}A_{3}}N_{A_{3}}^{A_{1}A_{2}}+N_{A_{2}}^{A_{1}A_{3}}N_{A_{3}}^{A_{1}A_{2}}\right) \label{i23qubit} \end{align} detects W-like tripartite entanglement. It is zero on bi-separable states, for which only one of the three $N_{A_{m}}^{A_{i}A_{j}}=\left\vert D_{\left( A_{m}\right) _{0}}^{00}\right\vert +\left\vert D_{\left( A_{m}\right) _{1}}^{00}\right\vert $ ($i\neq j\neq m$) is nonzero, and equals one on a three qubit W-state. Major classes of three qubit states are uniquely defined by the values of the polynomial invariants $4\left\vert I_{3}^{A_{1}A_{2}A_{3}}\right\vert $, $\left( N_{G}^{A_{1}}\right) ^{2}-4\left\vert I_{3}^{A_{1}A_{2}A_{3}}\right\vert $, and $I_{2}^{A_{1}A_{2}A_{3}}$.
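The quoted behaviour of $I_{2}^{A_{1}A_{2}A_{3}}$ (one on the W state, zero on bi-separable states) can be checked directly from the $2-$way font determinants; a short illustrative sketch (an addition for clarity, not from the original text):

```python
import numpy as np

def N_pair(a, m):
    # N_{A_m}^{A_i A_j} = |D_{(A_m)_0}^{00}| + |D_{(A_m)_1}^{00}|:
    # sum of |det| of the two 2-way fonts of the remaining pair, qubit A_m fixed
    return sum(abs(np.linalg.det(np.take(a, k, axis=m))) for k in (0, 1))

def I2_3q(a):
    N = [N_pair(a, m) for m in range(3)]
    return 3 * (N[0] * N[1] + N[0] * N[2] + N[1] * N[2])

w = np.zeros((2, 2, 2)); w[0, 0, 1] = w[0, 1, 0] = w[1, 0, 0] = 1 / np.sqrt(3)
# bi-separable state |0>_{A1} (|00> + |11>)_{A2A3} / sqrt(2)
bisep = np.zeros((2, 2, 2)); bisep[0, 0, 0] = bisep[0, 1, 1] = 1 / np.sqrt(2)

print(I2_3q(w))      # 1 on the W state
print(I2_3q(bisep))  # 0 on a bi-separable state
```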
\section{Four-body correlations and four qubit invariants}
Four qubit states live in the Hilbert space $C^{2}\otimes C^{2}\otimes C^{2}\otimes C^{2}$ with a distinct subspace for each set of three qubits. If there were no four-body correlations, the three qubit invariants $\left( I_{3}^{A_{i}A_{j}A_{k}}\right) _{\left( A_{l}\right) _{i_{l}}}$ ($i_{l}=0,1$) would determine the entanglement of a four qubit state. In general, additional three qubit invariants that depend also on four-way negativity fonts exist. For a selected set of three qubits, the three qubit invariants constitute a five dimensional space and are easily found by the action of a local unitary on the fourth qubit. To write down the transformation equations for three qubit invariants, first of all, we identify two qubit invariants.
In the most general four qubit state \begin{equation} \left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}i_{3}i_{4}}}
a_{i_{1}i_{2}i_{3}i_{4}}\left\vert i_{1}i_{2}i_{3}i_{4}\right\rangle ;\quad\left( i_{m}=0,1\right) , \label{4qubitstate} \end{equation} when the state of qubit $A_{1}$ is transposed, we are looking at the entanglement of qubit $A_{1}$ with the rest of the system. Qubit $A_{1}$ may have pairwise entanglement with qubits $A_{2}$, $A_{3}$, or $A_{4}$. For a given pair, there are four two-way two qubit invariants (the remaining pair of qubits being in state $\left\vert 00\right\rangle $, $\left\vert 10\right\rangle $, $\left\vert 01\right\rangle $ or $\left\vert 11\right\rangle $). For example, the determinants of two-way negativity fonts for the pair $A_{1}A_{2}$, written as \begin{equation} D_{\left( A_{3}\right) _{i_{3}}\left( A_{4}\right) _{i_{4}}}^{00}=\det\left[ \begin{array} [c]{cc} a_{00i_{3}i_{4}} & a_{01i_{3}i_{4}}\\ a_{10i_{3}i_{4}} & a_{11i_{3}i_{4}} \end{array} \right] \text{, \ }\left( i_{3},i_{4}=0,1\right) , \end{equation} are invariant with respect to unitaries on qubits $A_{1}$ and $A_{2}$. Three-way coherences generate the two qubit invariants $D_{\left( A_{4}\right) _{i_{4}}}^{000}-D_{\left( A_{4}\right) _{i_{4}}}^{010}$ ($i_{4}=0,1$), and $D_{\left( A_{3}\right) _{i_{3}}}^{000}-D_{\left( A_{3}\right) _{i_{3}}}^{010}$ ($i_{3}=0,1$), for the pair $A_{1}A_{2}$. Here the determinants of three-way fonts for \{$A_{1}A_{2}A_{3}$\} and \{$A_{1}A_{2}A_{4}$\}, respectively, are defined as \begin{equation} D_{\left( A_{4}\right) _{i_{4}}}^{0i_{2}0}=\det\left[ \begin{array} [c]{cc} a_{0i_{2}0i_{4}} & a_{0,i_{2}+1,1,i_{4}}\\ a_{1i_{2}0i_{4}} & a_{1,i_{2}+1,1,i_{4}} \end{array} \right] ,\text{ \ }\left( i_{2},i_{4}=0,1\right) , \end{equation} and \begin{equation} D_{\left( A_{3}\right) _{i_{3}}}^{0i_{2}0}=\det\left[ \begin{array} [c]{cc} a_{0i_{2}i_{3}0} & a_{0,i_{2}+1,i_{3},1}\\ a_{1i_{2}i_{3}0} & a_{1,i_{2}+1,i_{3},1} \end{array} \right] ,\text{ \ }\left( i_{2},i_{3}=0,1\right) . 
\end{equation} If four-way negativity fonts are present, then additional $A_{1}A_{2}$ invariants, $D^{0000}-D^{0100}$ and $D^{0001}-D^{0101}$, are to be considered. The determinants of four-way negativity fonts are given by \begin{equation} D^{0i_{2}0i_{4}}=\det\left[ \begin{array} [c]{cc} a_{0i_{2}0i_{4}} & a_{0,i_{2}+1,1,i_{4}+1}\\ a_{1i_{2}0i_{4}} & a_{1,i_{2}+1,1,i_{4}+1} \end{array} \right] ,\text{ \ }\left( i_{2},i_{4}=0,1\right) . \end{equation} The degree two four qubit invariant \begin{equation} I_{4}=\left( D^{0000}+D^{0011}-D^{0010}-D^{0001}\right) , \label{i4} \end{equation} obtained in \cite{shar101} is the same as the invariant $H$ of degree two in ref. \cite{luqu03}. The entanglement monotone $\tau_{4}=4\left\vert I_{4}\right\vert $ was called the four-tangle in analogy with the three-tangle \cite{coff00}. In \cite{shar102} our method was successfully applied to derive degree two $N-$qubit invariants for even $N$ and degree four invariants for odd $N$ in terms of determinants of negativity fonts. It was also shown that one may use the method to construct $N-$qubit invariants to detect $M-$qubit correlations ($M\leq N$) in an $N-$qubit state. As an example, we reported degree four invariants $J_{4}^{\left( A_{1}A_{2}\right) }$, $J_{4}^{\left( A_{1}A_{3}\right) }$, and $J_{4}^{\left( A_{1}A_{4}\right) }$ in ref. \cite{shar102} and found that $\left( I_{4}\right) ^{2}=\frac{1}{3}\left( J_{4}^{\left( A_{1}A_{2}\right) }+J_{4}^{\left( A_{1}A_{3}\right) }+J_{4}^{\left( A_{1}A_{4}\right) }\right) $.
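Since $I_{4}$ coincides with the degree two invariant $H$ of ref. \cite{luqu03}, it can be written as an alternating sum over antipodal pairs of amplitudes. The sketch below is an added illustration (the explicit pairing is our reading of the invariant $H$, not code from the paper); it also checks numerically that $\left\vert I_{4}\right\vert $ is unchanged by local unitaries, which only contribute $\det U$ phase factors:

```python
import numpy as np
from itertools import product

def I4(a):
    # Alternating sum over the eight antipodal pairs of basis labels
    return sum((-1) ** (i2 + i3 + i4)
               * a[0, i2, i3, i4] * a[1, 1 - i2, 1 - i3, 1 - i4]
               for i2, i3, i4 in product((0, 1), repeat=3))

def haar_unitary(rng):
    # Haar-random 2x2 unitary from the QR decomposition of a complex Gaussian
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

ghz4 = np.zeros((2, 2, 2, 2))
ghz4[0, 0, 0, 0] = ghz4[1, 1, 1, 1] = 1 / np.sqrt(2)
print(abs(I4(ghz4)))  # 0.5 for the four qubit GHZ state

# |I4| is unchanged by local unitaries on the four qubits
rng = np.random.default_rng(1)
a = rng.normal(size=(2, 2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2, 2))
a /= np.linalg.norm(a)
U = [haar_unitary(rng) for _ in range(4)]
a_lu = np.einsum('ai,bj,ck,dl,ijkl->abcd', U[0], U[1], U[2], U[3], a)
print(abs(I4(a)), abs(I4(a_lu)))  # equal up to rounding
```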
Presently, we focus on the set $A_{1}A_{2}A_{3}$ of three qubits in the state $\left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle $ (Eq. (\ref{4qubitstate})) viewed as \begin{equation} \left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle =\left\vert \Psi_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\right\rangle \left\vert 0\right\rangle +\left\vert \Psi_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}\right\rangle \left\vert 1\right\rangle , \end{equation} where \[ \left\vert \Psi_{\left( A_{4}\right) _{i_{4}}}^{A_{1}A_{2}A_{3}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}i_{3}}}
a_{i_{1}i_{2}i_{3}i_{4}}\left\vert i_{1}i_{2}i_{3}\right\rangle ;\quad\left( i_{4}=0,1\right) . \] Three qubit invariants \begin{equation} \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{i_{4}}}=\left( D_{\left( A_{4}\right) _{i_{4}}}^{000}+D_{\left( A_{4}\right) _{i_{4}}}^{001}\right) ^{2}-4D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{i_{4}}}^{00}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{i_{4}}}^{00};\quad i_{4}=0,1, \end{equation} quantify GHZ-state-like three-way correlations in the three qubit subspace $C^{2}\otimes C^{2}\otimes C^{2}$. Continuing the search for a four qubit invariant that detects four qubit correlations, we examine the transformation of the three qubit invariant $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}$ under $U^{A_{4}}=\frac{1}{\sqrt{1+\left\vert y\right\vert ^{2}}}\left[ \begin{array} [c]{cc} 1 & -y^{\ast}\\ y & 1 \end{array} \right] $. The resulting transformation equation is \begin{align} & \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}^{\prime}=\frac{1}{\left( 1+\left\vert y\right\vert ^{2}\right) ^{2}}\left[ y^{4}\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}+4y^{3}P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\right. \nonumber\\ & \left. 
+6y^{2}T_{A_{4}}^{A_{1}A_{2}A_{3}}+4yP_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}+\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}\right] ,\label{i3} \end{align} where \begin{align} T_{A_{4}}^{A_{1}A_{2}A_{3}} & =\frac{1}{6}\left( D^{0000}+D^{0011}+D^{0010}+D^{0001}\right) ^{2}\nonumber\\ & -\frac{2}{3}\left( D_{\left( A_{3}\right) _{0}}^{000}+D_{\left( A_{3}\right) _{0}}^{001}\right) \left( D_{\left( A_{3}\right) _{1}}^{000}+D_{\left( A_{3}\right) _{1}}^{001}\right) \nonumber\\ & +\frac{1}{3}\left( D_{\left( A_{4}\right) _{0}}^{000}+D_{\left( A_{4}\right) _{0}}^{001}\right) \left( D_{\left( A_{4}\right) _{1}}^{000}+D_{\left( A_{4}\right) _{1}}^{001}\right) \nonumber\\ & -\frac{2}{3}\left( D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{1}}^{00}+D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}\right) , \end{align} and \begin{align} P_{\left( A_{4}\right) _{i_{4}}}^{A_{1}A_{2}A_{3}} & =\frac{1}{2}\left( D_{\left( A_{4}\right) _{i_{4}}}^{000}+D_{\left( A_{4}\right) _{i_{4}}}^{001}\right) \left( D^{0000}+D^{0011}+D^{0010}+D^{0001}\right) \nonumber\\ & -\left( D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{i_{4}}}^{00}\left( D_{\left( A_{3}\right) _{0}}^{000}+D_{\left( A_{3}\right) _{0}}^{001}\right) +D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{i_{4}}}^{00}\left( D_{\left( A_{3}\right) _{1}}^{000}+D_{\left( A_{3}\right) _{1}}^{001}\right) \right) . \end{align} The discriminant of a quartic equation $ay^{4}-4by^{3}+6cy^{2}-4dy+f=0$ in the variable $y$ is $\Delta=S^{3}-27T^{2}$, where $S=3c^{2}-4bd+af$ and $T=acf-ad^{2}-b^{2}f+2bcd-c^{3}$ (the cubic invariant) are polynomial invariants. When a selected $U^{A_{4}}$ results in $\left( \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}\right) ^{\prime}=0$ (Eq. 
(\ref{i3})), the associated polynomial invariant is \begin{align} I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}} & =3\left( T_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}-4P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}\nonumber\\ & +\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}},\label{i48} \end{align} which is a four qubit invariant of degree eight expressed in terms of three qubit invariants for $A_{1}A_{2}A_{3}$. In order to distinguish between the degree $2$ invariant $I_{4}$ and the new invariant, the degree of the invariant has been added to the subscript. By construction, the four qubit invariant $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ is a combination of three qubit ($A_{1}A_{2}A_{3}$) invariants. It is easily verified that on a state which is a product of $\left\vert \Psi^{A_{1}A_{2}A_{3}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}i_{3}}}
a_{i_{1}i_{2}i_{3}}\left\vert i_{1}i_{2}i_{3}\right\rangle $ with $I_{3}^{A_{1}A_{2}A_{3}}\neq0$ and $\left\vert \Psi^{A_{4}}\right\rangle =d_{0}\left\vert 0\right\rangle +d_{1}\left\vert 1\right\rangle $, we obtain
\begin{equation} \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=T_{A_{4}} ^{A_{1}A_{2}A_{3}}=P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}=P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}, \end{equation} leading to $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$. Likewise, $I_{(4,8)} ^{A_{1}A_{2}A_{3}A_{4}}$ vanishes on product state $\left\vert \Psi ^{A_{1}A_{2}}\right\rangle \left\vert \Psi^{A_{3}A_{4}}\right\rangle $, where $\left\vert \Psi^{A_{1}A_{2}}\right\rangle =
{\textstyle\sum_{i_{1}i_{2}}}
a_{i_{1}i_{2}}\left\vert i_{1}i_{2}\right\rangle $ and $\left\vert \Psi ^{A_{3}A_{4}}\right\rangle =
{\textstyle\sum_{i_{3}i_{4}}}
b_{i_{3}i_{4}}\left\vert i_{3}i_{4}\right\rangle $. Besides that, $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$ on a four qubit W-like state and on all entangled states with only three- and two-body correlations, as seen in Section IV.
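The quartic invariants $S=3c^{2}-4bd+af$, $T=acf-ad^{2}-b^{2}f+2bcd-c^{3}$, and $\Delta=S^{3}-27T^{2}$ used in the construction of $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ can be sanity-checked on ordinary quartic polynomials, where $\Delta$ must vanish exactly when the quartic has a repeated root. A small numerical sketch with generic coefficients (an added illustration, unrelated to any particular state):

```python
import numpy as np

def quartic_invariants(a, b, c, d, f):
    # S, T and discriminant for the quartic a*y**4 - 4b*y**3 + 6c*y**2 - 4d*y + f
    S = 3 * c**2 - 4 * b * d + a * f
    T = a * c * f - a * d**2 - b**2 * f + 2 * b * c * d - c**3
    return S, T, S**3 - 27 * T**2

# (y-1)^2 (y-2)(y-3) = y^4 - 7y^3 + 17y^2 - 17y + 6 has a double root at y = 1
S, T, delta = quartic_invariants(1.0, 7 / 4, 17 / 6, 17 / 4, 6.0)
print(delta)   # 0 up to rounding: repeated root

# (y-1)(y-2)(y-3)(y-4) = y^4 - 10y^3 + 35y^2 - 50y + 24 has distinct roots
S2, T2, delta2 = quartic_invariants(1.0, 10 / 4, 35 / 6, 50 / 4, 24.0)
print(delta2)  # nonzero: no repeated root
```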
The cubic invariant associated with Eq. (\ref{i3}) is \begin{equation} J^{A_{1}A_{2}A_{3}A_{4}}=\det\left[ \begin{array} [c]{ccc} \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}} & P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}} & T_{A_{4}}^{A_{1}A_{2}A_{3}}\\ P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}} & T_{A_{4}}^{A_{1}A_{2}A_{3}} & P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\\ T_{A_{4}}^{A_{1}A_{2}A_{3}} & P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}} & \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}} \end{array} \right] , \end{equation} while the discriminant reads as \begin{equation} \Delta=\left( I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}\right) ^{3}-27\left( J^{A_{1}A_{2}A_{3}A_{4}}\right) ^{2}\text{.} \end{equation} Since there are four ways in which a given set of three qubits may be selected, $\Delta$ can be expressed in terms of different sets of three qubit invariants. In addition, Eq. (\ref{i3}) also leads to \begin{align} \left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2} & =\left\vert \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}\right\vert ^{2}+\left\vert \left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}\right\vert ^{2}\nonumber\\ & +6\left\vert T_{A_{4}}^{A_{1}A_{2}A_{3}}\right\vert ^{2}+4\left\vert P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}\right\vert ^{2}+4\left\vert P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}\right\vert ^{2}, \end{align} which is a four qubit invariant analogous to $\left( N_{A_{3}}^{A_{1}A_{2}}\right) ^{2}$ (Eq. (\ref{na1a2})) for three qubit states. In general, one can construct an invariant $N_{A_{l}}^{A_{i}A_{j}A_{k}}$ ($i\neq j\neq k\neq l$) for a selected three qubit subsystem $A_{i}A_{j}A_{k}$ of a four qubit state. 
In analogy with the global negativity, one may define a four qubit invariant of degree four, \begin{equation} \left( N_{(4,4)}^{A_{1}}\right) ^{2}=16\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}+16\left( N_{A_{3}}^{A_{1}A_{2}A_{4}}\right) ^{2}+16\left( N_{A_{2}}^{A_{1}A_{3}A_{4}}\right) ^{2}, \label{na1_48} \end{equation} which detects bipartite entanglement of qubit $A_{1}$ with the subsystem $A_{2}A_{3}A_{4}$ due to three and four body quantum correlations. If $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$ but at least two of the $N_{A_{l}}^{A_{i}A_{j}A_{k}}$ are finite, then four-partite entanglement can be due to three and two body correlations. In this case the invariant that detects entanglement may be defined as \begin{align} N_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}} & =16N_{A_{1}}^{A_{2}A_{3}A_{4}}N_{A_{2}}^{A_{1}A_{3}A_{4}}+16\left( N_{A_{1}}^{A_{2}A_{3}A_{4}}+N_{A_{2}}^{A_{1}A_{3}A_{4}}\right) N_{A_{3}}^{A_{1}A_{2}A_{4}}\nonumber\\ & +16\left( N_{A_{1}}^{A_{2}A_{3}A_{4}}+N_{A_{2}}^{A_{1}A_{3}A_{4}}+N_{A_{3}}^{A_{1}A_{2}A_{4}}\right) N_{A_{4}}^{A_{1}A_{2}A_{3}}. \end{align} On the other hand, if we have a state on which all $N_{A_{l}}^{A_{i}A_{j}A_{k}}$ are zero, then the quantities $I_{A_{r}A_{s}}^{A_{p}A_{q}}=\sum_{i_{r}i_{s}}\left\vert D_{\left( A_{r}\right) _{i_{r}}\left( A_{s}\right) _{i_{s}}}^{00}\right\vert $ ($p\neq q\neq r\neq s=1$ to $4$) turn out to be four qubit invariants. A different class of entangled states is obtained if only one of the $N_{A_{l}}^{A_{i}A_{j}A_{k}}$ is nonzero along with a finite $I_{A_{r}A_{s}}^{A_{p}A_{l}}$. In Section II we noted that $I_{2}^{A_{1}A_{2}A_{3}}$ (Eq. (\ref{i23qubit})) detects W-like entanglement of the three qubits $A_{1}A_{2}A_{3}$. 
Likewise, when $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=N_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$, the invariant \begin{align} I_{(2,6)}^{A_{1}A_{2}A_{3}A_{4}} & =\frac{3}{2}\left( I_{2}^{A_{1}A_{2}A_{3}}\right) \left( I_{A_{2}A_{3}}^{A_{1}A_{4}}+I_{A_{1}A_{3}}^{A_{2}A_{4}}+I_{A_{1}A_{2}}^{A_{3}A_{4}}\right) \nonumber\\ & +\frac{3}{2}\left( I_{2}^{A_{1}A_{2}A_{4}}\right) \left( I_{A_{1}A_{4}}^{A_{2}A_{3}}+I_{A_{1}A_{3}}^{A_{3}A_{4}}\right) +\frac{3}{2}\left( I_{2}^{A_{1}A_{3}A_{4}}\right) I_{A_{1}A_{3}}^{A_{2}A_{4}} \label{i26} \end{align} detects W-like four qubit entanglement. Here $\left( I_{2}^{A_{p}A_{q}A_{r}}\right) _{A_{s}}=3I_{A_{r}A_{s}}^{A_{p}A_{q}}I_{A_{q}A_{s}}^{A_{p}A_{r}}$ ($p\neq q\neq r\neq s=1$ to $4$) is the invariant that detects W-like entanglement of qubits $A_{p}A_{q}A_{r}$ in a four qubit state.
In ref. \cite{leva06} four qubit invariants have been obtained in terms of coefficients having geometrical significance. A comparison of Eq. (56) of ref. \cite{leva06} with our Eq. (\ref{i3}) indicates that their set of invariants ($I_{1}$, $I_{2}$, $I_{3}$, $I_{4}$) may be expressed in terms of our three qubit invariants, though they are not exactly the same. A method equivalent to the method of Schl\"{a}fli \cite{gelf94} was used by Luque and Thibon \cite{luqu03} to arrive at their Eq. (22), \[ R(\text{t})=c_{0}t_{0}^{4}+4c_{1}t_{0}^{3}t_{1}+6c_{2}t_{0}^{2}t_{1}^{2}+4c_{3}t_{0}t_{1}^{3}+c_{4}t_{1}^{4}. \] Higher degree invariants are then expressed in terms of the $c_{i}$ coefficients, and computer algebra relates these to basic four qubit invariants. Since for $t_{0}=1$ the expression for $R(\text{t})$ has the same form as Eq. (\ref{i3}), a direct correspondence can be established between the $c_{i}$ coefficients and our three qubit invariants. Such a comparison establishes a neat connection of our invariants with the projective geometry approach and with concepts of classical invariant theory.
\section{Invariants and classification of four qubit states}
Decomposition of the global partial transpose $\widehat{\rho}_{G}^{T_{A_{p}}}$ of a four qubit state $\left\vert \Psi^{A_{1}A_{2}A_{3}A_{4}}\right\rangle $ with respect to qubit $A_{p}$ in terms of $K-$way partially transposed operators (Eq. (\ref{decomp})) reads as \begin{equation} \widehat{\rho}_{G}^{T_{A_{p}}}=\sum\limits_{K=2}^{4}\widehat{\rho}_{K}^{T_{A_{p}}}-2\widehat{\rho}.\label{3n} \end{equation} When a state has only $K-$way coherences, we have $\widehat{\rho}_{G}^{T_{A_{p}}}=\widehat{\rho}_{K}^{T_{A_{p}}}$ for a selected set of $K$ qubits. For a given qubit, the number of $K-$way negativity fonts in a $K-$way partially transposed matrix varies from $0$ to $4$. Local unitary operations can be used to annihilate negativity fonts, that is, to obtain a state for which the determinants of selected negativity fonts are zero. The process leads to a canonical state, which is a state written in terms of the minimum number of local basis product states \cite{acin00}. In ref. \cite{shar12}, we proposed a classification scheme in which an entanglement class is characterized by the minimal set of $K-$way $\left( 2\leq K\leq4\right) $ partially transposed matrices present in the expansion of the global partial transpose of the canonical state. The seven possible ways in which the global partial transpose (GPT) of a four qubit canonical state may be decomposed correspond to seven major entanglement classes, namely I. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\sum\limits_{K=2}^{4}\left( \widehat{\rho}_{c}\right) _{K}^{T_{A_{p}}}-2\widehat{\rho}_{c}$, II. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{4}^{T_{A_{p}}}+\left( \widehat{\rho}_{c}\right) _{3}^{T_{A_{p}}}-\widehat{\rho}_{c}$, III. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{4}^{T_{A_{p}}}+\left( \widehat{\rho}_{c}\right) _{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$, IV. 
$\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{4}^{T_{A_{p}}}$, V. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{3}^{T_{A_{p}}}+\left( \widehat{\rho}_{c}\right) _{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$, VI. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{3}^{T_{A_{p}}}$, and VII. $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{2}^{T_{A_{p}}}$. Of these, six classes contain states with four-partite entanglement, while class VI, with $\left( \widehat{\rho}_{c}\right) _{G}^{T_{A_{p}}}=\left( \widehat{\rho}_{c}\right) _{3}^{T_{A_{p}}}$, has only three qubit entanglement. Each major class contains sub-classes depending on the number and type of negativity fonts in the global partial transpose of the canonical state. Table \ref{t1} lists the decomposition of $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}$, the invariants $I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}$, $D_{A_{4}}^{A_{1}A_{2}A_{3}}$, $\Delta$, and $N_{K-\text{way}}$ ($K=2,3,4$) in the canonical state, for different classes of four qubit entangled states. Here $D_{A_{4}}^{A_{1}A_{2}A_{3}}=\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}-2\left\vert I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right\vert $ is a measure of residual three-way correlations between qubits $A_{1}A_{2}A_{3}$, and $N_{K-\text{way}}$ ($K=2,3,4$) is the number of $K-$way negativity fonts in a state. \begin{table}[ptb] \caption{Decomposition of $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}$, invariants $I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}$, $D_{A_{4}}^{A_{1}A_{2}A_{3}}$, $\Delta$, and $N_{K-\text{way}}$ ($K=2,3,4$) in the canonical state, for seven classes of four qubit entangled states} \begin{tabular}
[c]{||c||c||c||c||c||c||c||c||}\hline\hline Class & Decomposition of $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}$ & $I_{\left( 4,8\right) }^{A_{1}A_{2}A_{3}A_{4}}$ & $D_{A_{4}}^{A_{1}A_{2}A_{3}}$ & $\Delta$ & $N_{2-\text{way}}$ & $N_{3-\text{way}}$ & $N_{4-\text{way}}$\\\hline\hline I & $\sum\limits_{K=2}^{4}\left( \widehat{\rho}_{c}\right) _{K}^{T_{A_{p}}}-2\widehat{\rho}_{c}$ & $\neq0$ & $\neq0$ & $\neq0$ & $\geq1$ & $\geq1$ & $\geq1$\\\hline\hline II & $\left( \rho_{c}\right) _{4}^{T_{A_{p}}}+\left( \rho_{c}\right) _{3}^{T_{A_{p}}}-\widehat{\rho}_{c}$ & $\neq0$ & $\neq0$ & $0$ & $0$ & $\geq1$ & $\geq1$\\\hline\hline III & $\left( \rho_{c}\right) _{4}^{T_{A_{p}}}+\left( \rho_{c}\right) _{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$ & $\neq0$ & $0$ & $\neq0$ & $\geq1$ & $0$ & $\geq1$\\\hline\hline IV & $\left( \rho_{c}\right) _{4}^{T_{A_{p}}}$ & $\neq0$ & $0$ & $0$ & $0$ & $0$ & $1$\\\hline\hline V & $\left( \rho_{c}\right) _{3}^{T_{A_{p}}}+\left( \rho_{c}\right) _{2}^{T_{A_{p}}}-\widehat{\rho}_{c}$ & $0$ & $\neq0$ & $0$ & $\geq1$ & $\geq1$ & $0$\\\hline\hline VI & $\left( \rho_{c}\right) _{3}^{T_{A_{p}}}$ & $0$ & $\neq0$ & $0$ & $0$ & $1$ & $0$\\\hline\hline VII & $\left( \rho_{c}\right) _{2}^{T_{A_{p}}}$ & $0$ & $0$ & $0$ & $\geq1$ & $0$ & $0$\\\hline\hline \end{tabular} \label{t1} \end{table}
A four qubit state with a single four-way negativity font, \[ \left\vert \Psi_{ab}\right\rangle =a\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) +b\left( \left\vert 1101\right\rangle +\left\vert 1110\right\rangle +\left\vert 0011\right\rangle \right) \text{,} \] is an example of class I states. Three qubit invariants for the state are $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=a^{2}b^{2}$, $P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}=\frac{1}{2}a^{3}b$, $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=b^{4}$, $P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3}}=-\frac{1}{2}a^{2}b^{2}$, and $T_{A_{4}}^{A_{1}A_{2}A_{3}}=\frac{1}{6}\left( a^{4}-2ab^{3}\right) $. Four qubit invariants are found to be $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=\frac{1}{12}\left( a^{4}+4ab^{3}\right) ^{2}$, $D_{A_{4}}^{A_{1}A_{2}A_{3}}\neq0$, and $\Delta\neq0$. A representative of class II states, with $\widehat{\rho}_{G}^{T_{A_{p}}}=\widehat{\rho}_{4}^{T_{A_{p}}}+\widehat{\rho}_{3}^{T_{A_{p}}}-\widehat{\rho}$, is $\left\vert \Psi_{a}\right\rangle =a\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) +\left\vert 1110\right\rangle $. The state is SLOCC equivalent to the GHZ state; however, it deserves a distinct status since, on removal of qubit $A_{4}$, it retains residual three-way coherences.
Invariants for class III states \begin{align} G_{abcd} & =\frac{a+d}{2}\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) +\frac{a-d}{2}\left( \left\vert 1100\right\rangle +\left\vert 0011\right\rangle \right) \nonumber\\ & +\frac{b+c}{2}\left( \left\vert 1010\right\rangle +\left\vert 0101\right\rangle \right) +\frac{b-c}{2}\left( \left\vert 0110\right\rangle +\left\vert 1001\right\rangle \right) , \end{align} \begin{align} L_{abc_{2}} & =\frac{a+b}{2}\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) +\frac{a-b}{2}\left( \left\vert 1100\right\rangle +\left\vert 0011\right\rangle \right) \nonumber\\ & +c\left( \left\vert 1010\right\rangle +\left\vert 0101\right\rangle \right) +\left\vert 0110\right\rangle , \end{align}
\begin{equation} L_{a_{2}b_{2}}=a\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) +b\left( \left\vert 0101\right\rangle +\left\vert 1010\right\rangle \right) +\left( \left\vert 0110\right\rangle +\left\vert 0011\right\rangle \right) , \end{equation} and \begin{equation} L_{a_{2}0_{3\oplus\widetilde{1}}}=a\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) +\left( \left\vert 0101\right\rangle +\left\vert 0110\right\rangle +\left\vert 0011\right\rangle \right) , \end{equation} of ref. \cite{vers02} with $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right) _{4}^{T_{A_{1}} }+\left( \rho_{c}\right) _{2}^{T_{A_{1}}}-\widehat{\rho}_{c}$ are listed in Table \ref{t2}. All three-way coherences are convertible to two-way coherences, since the three-way negativity fonts have zero determinants. Four qubit entanglement occurs due to four-way and two-way coherences. For all these states, the invariants $P_{\left( A_{4}\right) _{0}}^{A_{1}A_{2}A_{3}}$ and $P_{\left( A_{4}\right) _{1}}^{A_{1}A_{2}A_{3} }$ are identically zero. In Table \ref{t2}, for states in family $G_{abcd}$, three qubit invariants used for the set $A_{1}A_{2}A_{3}$ are \[ T_{A_{4}}^{A_{1}A_{2}A_{3}}=\frac{1}{6}\left( A-2B\right) ,\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=B, \] where \begin{equation} A=\left( a^{2}-b^{2}\right) \left( d^{2}-c^{2}\right) ,B=\frac{1} {4}\left( a^{2}-d^{2}\right) \left( b^{2}-c^{2}\right) . \label{AB} \end{equation} For states $G_{ab00}$ and $G_{00cd}$, $\Delta=0$. For states $L_{abc_{2}}$, with $\left( T^{A_{1}A_{2}A_{3}}\right) _{A_{4}}=\frac{1}{6}\left( a^{2}-c^{2}\right) \left( b^{2}-c^{2}\right) $, $\left( I_{3}^{A_{1} A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=c\left( a^{2}-b^{2}\right) $, the value $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0$ results in $\Delta=0$.
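As a quick consistency check (ours, not part of the original derivation), one may evaluate Eq. (\ref{AB}) for the member $G_{a00a}$, which for $b=c=0$, $d=a$ reduces to $a\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) $:
\[
A=\left( a^{2}-b^{2}\right) \left( d^{2}-c^{2}\right) =a^{4},\qquad B=\frac{1}{4}\left( a^{2}-d^{2}\right) \left( b^{2}-c^{2}\right) =0,
\]
so that $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=\frac{1}{12}\left( A-2B\right) ^{2}+B^{2}=\frac{1}{12}a^{8}\neq0$, in agreement with the entries of Table \ref{t2} for the family $G_{abcd}$.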
A comparison of states $L_{abc_{2}}$ with $a=c$ and $L_{ab_{3}}$ shows that the states are not SLOCC equivalent \cite{li09} because the numbers of negativity fonts are not equal. However, since four qubit correlations are null $\left( I_{(4,8)}^{A_{1}A_{2} A_{3}A_{4}}=0\right) $ for $L_{abc_{2}}$ with $a=c$ as well as for $L_{ab_{3}}$, these are subclasses of the same major class in a correlation type based classification, partially supporting the result of \cite{chte07}.
The families of states $L_{ab_{3}}$ and $L_{a_{4}}$ of ref. \cite{vers02} have a similar global partial transpose composition. The value of the degree two invariant is $I_{4}=\frac{3a^{2}+b^{2}}{2}$ for $L_{ab_{3}}$ and $I_{4}=2a^{2}$ for $L_{a_{4}}$, indicating that four-way coherences are present. However, for the set of qubits $A_{1}A_{2}A_{3}$, the only non-zero three qubit invariant is $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=\frac{a^{2}-b^{2}}{2}$ for $L_{ab_{3}}$ and $\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=-4a^{2}$ for $L_{a_{4}}$. A finite $I_{4}$ but zero $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ indicates that the superposition contains a product of two qubit entangled states. Four partite entanglement may, in this case, be detected by products $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) \left( N_{A_{3}}^{A_{1} A_{2}A_{4}}\right) $.
The states in families $L_{a_{2}b_{2}}$ and $L_{a_{2}0_{3\oplus\widetilde{1}} }$ \cite{vers02} have $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}=2\left\vert I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right\vert $. The states in $L_{a_{2}b_{2}}$ and $L_{a_{2}0_{3\oplus\widetilde{1}}}$ differ from each other in the number of two-way negativity fonts with non-zero determinants. The only non-zero three tangle for the states $G_{abba}$ is $T_{A_{4}}^{A_{1} A_{2}A_{3}}$.\begin{table}[ptb] \caption{Invariants for class III states $G_{abcd}$, $L_{abc_{2}}$, $L_{a_{2}b_{2}}$ and $L_{a_{2}0_{3\oplus\widetilde{1}}}$ with $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right) _{4}^{T_{A_{1}} }+\left( \rho_{c}\right) _{2}^{T_{A_{1}}}-\widehat{\rho}_{c}$. $A$ and $B$ in column II are as defined in Eq. (\ref{AB}).} \label{t2} \begin{tabular}
[c]{||c||c||c||c||c||}\hline\hline Invariant$\backslash$Class & $G_{abcd}$ & $L_{abc_{2}}$ & $L_{a_{2}b_{2}}$ & $L_{a_{2}0_{3\oplus\widetilde{1}}}$\\\hline\hline $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}$ & $\frac{1}{6}\left\vert A-2B\right\vert ^{2}+2\left\vert B\right\vert ^{2}$ & $ \begin{array} [c]{c} \frac{1}{6}\left\vert \left( a^{2}-c^{2}\right) \left( b^{2}-c^{2}\right) \right\vert ^{2}\\ +\left\vert c\left( a^{2}-b^{2}\right) \right\vert ^{2} \end{array} $ & $\frac{1}{6}\left\vert \left( a^{2}-b^{2}\right) ^{4}\right\vert $ & $\frac{1}{6}\left\vert a^{8}\right\vert $\\\hline\hline $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ & $\left( \frac{1}{12}\left( A-2B\right) ^{2}+B^{2}\right) $ & $\frac{1}{12}\left( a^{2}-c^{2}\right) ^{2}\left( b^{2}-c^{2}\right) ^{2}$ & $\frac{1}{12}\left( a^{2}-b^{2}\right) ^{4}$ & $\frac{1}{12}a^{8}$\\\hline\hline $D_{A_{4}}^{A_{1}A_{2}A_{3}}$ & $\neq0$ & $\left\vert c\left( a^{2} -b^{2}\right) \right\vert ^{2}$ & $0$ & $0$\\\hline\hline $\Delta$ & $\neq0$ & $0$ & $0$ & $0$\\\hline\hline \end{tabular} \end{table} The states $G_{a00a}$ and $G_{0bb0}$, with $\left( \rho _{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right) _{4}^{T_{A_{p}}}$, belong to class IV in the classification scheme based on correlation type. For these states the only non-zero three tangle is $T_{A_{4}}^{A_{1}A_{2}A_{3}}$; therefore $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\neq0$, $\Delta=0$ and $D_{A_{4} }^{A_{1}A_{2}A_{3}}=0$.
The global partial transpose has composition $\left( \rho_{c}\right) _{G}^{T_{A_{1}}}=\left( \rho_{c}\right) _{3}^{T_{A_{1}}}+\left( \rho _{c}\right) _{2}^{T_{A_{1}}}-\widehat{\rho}_{c}$ for class V states $L_{0_{7\oplus\overline{1}}}$ and $L_{0_{5\oplus\overline{3}}}$. In both cases $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=0$, while the product $\left( N_{A_{4} }^{A_{1}A_{2}A_{3}}\right) \left( N_{A_{3}}^{A_{1}A_{2}A_{4}}\right) \neq 0$. The two states differ in the number of two-way negativity fonts with non-zero determinants. The only non-zero invariant for the class VI state $L_{0_{3\oplus\overline{1}}0_{3\oplus\overline{1}}}$ \cite{vers02} with $\left( \rho_{c}\right) _{G}^{T_{A_{1}}}=\left( \rho_{c}\right) _{3}^{T_{A_{1}}}$ is $\left( I_{3}^{A_{2}A_{3}A_{4}}\right) _{\left( A_{1}\right) _{0}}$. The state has only three qubit entanglement. Class VII with $\left( \rho_{c}\right) _{G}^{T_{A_{p}}}=\left( \rho_{c}\right) _{2}^{T_{A_{p}}}$ contains four qubit states with W-type entanglement, represented by $L_{a=0b_{3}=0}$, and separable states with entangled qubit pairs, for example $G_{aaaa}$.
The polynomial invariant $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ is non-zero on states $\left\vert \Psi_{ab}\right\rangle $, $G_{abcd}$, $L_{abc_{2}}$, $L_{a_{2}b_{2}}$, $L_{a_{2}0_{3\oplus\widetilde{1}}}$, $G_{a00a}$ and $G_{0bb0}$ and vanishes on states $L_{ab_{3}}$, $L_{a_{4}}$, $L_{0_{7\oplus \overline{1}}}$, $L_{0_{5\oplus\overline{3}}}$, $L_{0_{3\oplus\overline{1} }0_{3\oplus\overline{1}}}$ and $G_{aaaa}$. We define an entanglement monotone to quantify four qubit correlations as \begin{equation} \tau_{(4,8)}=4\left\vert \left( 12I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right) ^{\frac{1}{2}}\right\vert , \label{tau48} \end{equation} which is one on states with maximal entanglement due to four-body correlations, finite on all states with entanglement due to four-body correlations, and zero otherwise. The subscript $(4,8)$ is carried over from $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$. One can verify that on the four qubit GHZ state \[ \left\vert GHZ\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) \] as well as the cluster states \cite{brie01,raus01} \[ \left\vert C_{1}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1100\right\rangle +\left\vert 0011\right\rangle -\left\vert 1111\right\rangle \right) , \] \[ \left\vert C_{2}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 0110\right\rangle +\left\vert 1001\right\rangle -\left\vert 1111\right\rangle \right) , \] \[ \left\vert C_{3}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1010\right\rangle +\left\vert 0101\right\rangle -\left\vert 1111\right\rangle \right) , \] $\tau_{(4,8)}=1$ and $\left( N_{A_{4}}^{A_{1}A_{2}A_{3}}\right) ^{2}=2\left\vert I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}\right\vert $. So what is different in cluster states? We recall the invariants $J^{A_{i}A_{j}}$ from \cite{shar102}, which detect entanglement of a selected pair $A_{i}A_{j}$ of qubits in a four qubit state.
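The negativity of the GHZ state's partial transpose can be checked numerically. The sketch below is only an illustration: it uses the standard matrix definition of negativity (the sum of the absolute values of the negative eigenvalues of the partial transpose), not the negativity-font expansion used above, and conventions for $N_{A_{4}}^{A_{1}A_{2}A_{3}}$ may differ by an overall normalization.

```python
import numpy as np

def ket(bits):
    """Computational-basis vector |b1 b2 b3 b4> as a length-16 array."""
    v = np.zeros(16, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

# Four qubit GHZ state (|0000> + |1111>)/sqrt(2)
ghz = (ket("0000") + ket("1111")) / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())

def partial_transpose_last(rho):
    """Partial transpose of a 4-qubit density matrix w.r.t. the last qubit A4."""
    r = rho.reshape(8, 2, 8, 2)                      # indices: (rest, A4, rest', A4')
    return r.transpose(0, 3, 2, 1).reshape(16, 16)   # swap A4 <-> A4'

ev = np.linalg.eigvalsh(partial_transpose_last(rho))
negativity = -ev[ev < -1e-9].sum()                   # sum of |negative eigenvalues|
print(negativity)                                    # -> 0.5 for the GHZ state
```

The partial transpose of the GHZ density matrix has eigenvalues $\{1/2,1/2,1/2,-1/2\}$, so the raw sum of absolute negative eigenvalues is $1/2$.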
For a GHZ state, $J^{A_{i}A_{j}}=\frac{1}{4}$ for $\left( i\neq j\right) =1$ to $4$, while for a cluster state the $J^{A_{i}A_{j}}$ \cite{shar102} do not all have the same value. In canonical form, the GHZ state has a single four-way negativity font, while a cluster state has two four-way negativity fonts besides also having two-way negativity fonts (state reduction does not destroy all the coherences).
Another state proposed through a numerical search in ref. \cite{brow06} to be a maximally entangled state is \[ \left\vert \Phi\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1101\right\rangle \right) +\frac{1}{\sqrt{8}}\left( \left\vert 1011\right\rangle +\left\vert 0011\right\rangle +\left\vert 0110\right\rangle -\left\vert 1110\right\rangle \right) . \] However, on this state \[ T_{A_{4}}^{A_{1}A_{2}A_{3}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=\frac{1}{32}, \] \[ \left( P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left( P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0, \] therefore $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}=\frac{1}{256}$, and $\tau _{(4,8)}=\sqrt{\frac{3}{4}}$. On the two-excitation four qubit Dicke state \[ \left\vert \Psi_{D}\right\rangle =\frac{1}{\sqrt{6}}\left( \left\vert 0011\right\rangle +\left\vert 1100\right\rangle +\left\vert 0101\right\rangle +\left\vert 1010\right\rangle +\left\vert 1001\right\rangle +\left\vert 0110\right\rangle \right) , \] we have $\tau_{(4,8)}=\frac{5}{9}$, while it is zero on the four qubit W-state \[ \left\vert W\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1100\right\rangle +\left\vert 1010\right\rangle +\left\vert 1001\right\rangle \right) . \] The four tangle $\tau_{4}$ also vanishes on $W$-like states of four qubits; however, it fails to vanish on products of two qubit entangled states. Contrary to $\tau_{(4,8)}$, a non-zero $\tau_{4}$ does not ensure four-partite entanglement.
On the four qubit state \begin{align} \left\vert HS\right\rangle & =\frac{1}{\sqrt{6}}\left( \left\vert 0011\right\rangle +\left\vert 1100\right\rangle +\exp\left( \frac{i2\pi} {3}\right) \left( \left\vert 1010\right\rangle +\left\vert 0101\right\rangle \right) \right) \nonumber\\ & +\frac{1}{\sqrt{6}}\exp\left( \frac{i4\pi}{3}\right) \left( \left\vert 1001\right\rangle +\left\vert 0110\right\rangle \right) , \end{align} conjectured to have maximal entanglement in ref. \cite{higu00}, we have $D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00}=D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}=\frac{1}{6}$, and for $4$-way negativity fonts $D^{0011}=\frac{1}{6}$, $D^{0001}=\frac{1}{12}\left( 1-i\sqrt{3}\right) $, and $D^{0010}=\frac{1}{12}\left( 1+i\sqrt{3}\right) $. Therefore \[ T_{A_{4}}^{A_{1}A_{2}A_{3}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left( I_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0, \] \[ \left( P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{0}}=\left( P_{3}^{A_{1}A_{2}A_{3}}\right) _{\left( A_{4}\right) _{1}}=0, \] leading to $\tau_{(4,8)}=0$. However, the invariant $I_{(2,6)}^{A_{1}A_{2}A_{3}A_{4}}$ (Eq. (\ref{i26})) equals $1$ on $\left\vert HS\right\rangle $ and takes the value $\frac{27}{64}$ on the four qubit $\left\vert W\right\rangle $ state. It reflects the fact that a measurement on the state of a qubit in $\left\vert HS\right\rangle $ always leaves the three remaining qubits in a three qubit W-state, whereas a similar measurement on a $\left\vert W\right\rangle $ state yields a mixture of a three qubit W-state and a separable three qubit state.
The choice of $I_{(4,8)}^{A_{1}A_{2}A_{3}A_{4}}$ to quantify four qubit correlations is also supported by the conclusions of \cite{endr06}, where, for a selected set of four qubit states, the generator $S$ of ref. \cite{luqu03} has been shown to have the same parameter dependence as optimized Bell-type inequalities and a combination of global negativity and 2-qubit concurrences.
To summarize, degree 8, 12 and 24 four qubit invariants, expressed in terms of three qubit invariants, have been obtained. One can continue the process to a higher number of qubits. Commonly, multivariate forms in terms of state coefficients $a_{i_{1}i_{2}...i_{N}}$ are used to obtain polynomial invariants for qubit systems. Our strategy is to write multivariate forms with relevant $K$-qubit invariants as coefficients. The advantage of our technique is that relevant invariants in a larger Hilbert space are easily related to invariants in subspaces, and thus to the structure of the quantum state at hand. Construction of polynomial invariants for states other than the most general state is a great help in the classification of states. Our method can be easily applied to determine the invariants for any given state. The entanglement monotone that quantifies four qubit correlations can be used to quantify correlations in pure and mixed (via convex roof extension) four qubit states.
This work is financially supported by CNPq Brazil and FAEP UEL Brazil.
\end{document}
\begin{document}

\title{The Measurement Calculus}

\begin{abstract}
We propose a calculus of local equations over one-way computing patterns~\cite{mqqcs}, which preserves interpretations, and allows the rewriting of any pattern to a standard form where entanglement is done first, then measurements, then local corrections. We infer from this that patterns with no dependencies, or using only Pauli measurements, can only realise unitaries belonging to the Clifford group.
\end{abstract}

\section{Introduction}
The \emph{one-way} model centres on 1-qubit measurements as the main ingredient of quantum computation~\cite{mqqcs}, and is believed by physicists to lend itself to easier implementations~\cite{Nielsen04,ND04,BR04,CMJ04}. During computations, measurements and local corrections are allowed to depend on the outcomes of previous measurements.

We first develop a notation for such classically correlated sequences of entanglements, measurements, and local corrections. Computations are organised in patterns, and we give a careful treatment of pattern composition and tensor products (parallel composition) of patterns. We show next that such pattern combinations reflect the corresponding combinations of unitary operators. An easy proof of universality, based on a family of 2-qubit patterns, follows.

So far, this constitutes mostly a work of clarification of what was already known from the series of papers introducing and investigating the properties of the one-way model~\cite{mqqcs}. However, we work here with an extended notion of pattern, where inputs and outputs may overlap in any way one wants them to, and this obtains more efficient - in the sense of fewer qubits - implementations of unitaries. Specifically, our generating set consists of two simple patterns, each one using only 2 qubits. From it we obtain a 3-qubit realisation of the $R_z$ rotations and a 14-qubit implementation for the controlled-$U$ family: a very significant reduction over the known implementations.

However, the main point of this paper is to introduce, alongside our notation, a calculus of local equations over patterns that exploits the fact that 1-qubit $xy$-measurements are closed under conjugation by Pauli operators. We show that this calculus is sound in that it preserves the patterns' interpretations. Most importantly, we derive from it a simple algorithm by which any general pattern can be put into a standard form where entanglement is done first, then measurements, then corrections.

The consequences of the existence of such a procedure are far-reaching. First, since entangling comes first, one can prepare the entire entangled state needed during the computation right at the start: one never has to do ``on the fly'' entanglements. Second, since local corrections come last, only the output qubits will ever need corrections. Third, the rewriting of a pattern to standard form reveals parallelism in the pattern computation. In a general pattern, one is forced to compute sequentially and obey strictly the command sequence, whereas after standardisation, the dependency structure is relaxed, resulting in low depth complexity. Last, the existence of a standard form for any pattern also has interesting corollaries beyond implementation and complexity matters, as it follows from it that patterns using no dependencies, or using only the restricted class of Pauli measurements, can only realise a unitary belonging to the Clifford group.

\mypar{Acknowledgements:} Elham Kashefi wishes to express her gratitude to Quentin for letting her collaborate with his father, Vincent Danos, during their stay in Canada. Prakash Panangaden wishes to express his gratitude to EPSRC for supporting his stay in Oxford where this collaboration began.
\section{Computation Patterns}
We first develop a notation for 1-qubit measurement based computations. The basic commands one can use are:
\begin{itemize}
\item 1-qubit measurements $\M{\alpha}i$
\item 2-qubit entanglement operators $\et ij$
\item and 1-qubit Pauli corrections $\Cx i$, $\Cz i$
\end{itemize}
The indices $i$, $j$ represent the qubits on which each of these operations apply, and $\alpha$ is a parameter in $[0,2\pi]$. Sequences of such commands, together with two distinguished ---possibly overlapping--- sets of qubits corresponding to inputs and outputs, will be called \emph{measurement patterns}, or simply patterns. These patterns can be combined by composition and tensor product.

Importantly, corrections and measurements are allowed to depend on previous measurement outcomes. We shall prove later that patterns without those classical dependencies can only realise unitaries that are in the Clifford group. Thus dependencies are crucial if one wants to define a universal computing model; that is to say, a model where all finite-dimensional unitaries can be realised, and it is also crucial to develop a notation that will handle these dependencies gracefully.
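To see concretely why dependent corrections matter, here is a minimal linear-algebra simulation (our own illustration; the command notation is defined in the next subsection) of a 2-qubit pattern of the kind mentioned in the introduction: entangle the input qubit with an auxiliary qubit prepared in $\left\vert +\right\rangle$ via controlled-$Z$, measure the first qubit at angle $-\alpha$, and apply an $X$ correction on the second qubit conditioned on the outcome. Both measurement branches then realise the same unitary, which we label $J(\alpha)$ here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def run_pattern(psi, alpha):
    """Entangle psi with |+> via CZ, measure qubit 1 at angle -alpha,
    then apply the dependent correction X^{s1} on qubit 2.
    Returns the normalised output of qubit 2 for both outcomes s1 = 0, 1."""
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    state = CZ @ np.kron(psi, plus)
    outs = []
    for s1 in (0, 1):
        sign = 1 if s1 == 0 else -1
        bra = np.array([1, sign * np.exp(1j * alpha)]) / np.sqrt(2)
        out = np.kron(bra, np.eye(2)) @ state        # project qubit 1
        out = out / np.linalg.norm(out)
        outs.append(np.linalg.matrix_power(X, s1) @ out)
    return outs

alpha = 1.1
J = np.array([[1, np.exp(1j * alpha)], [1, -np.exp(1j * alpha)]]) / np.sqrt(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = psi / np.linalg.norm(psi)
for out in run_pattern(psi, alpha):
    # both branches agree with J(alpha)|psi> up to a global phase
    assert np.isclose(abs(np.vdot(J @ psi, out)), 1.0)
```

Without the dependent $X^{s_1}$ correction the two branches would realise different maps, so the pattern would not implement a unitary deterministically.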
$\Box$\subsection$\Box${Commands$\Box$}$\Box$ The$\Box$ entanglement$\Box$ commands$\Box$ are$\Box$ defined$\Box$ as$\Box$ $\Box$$$\Box$\et$\Box$ ij$\Box$:$\Box$=$\Box$\mathop{\wedge}\hskip-.4ex$\Box$ Z$\Box$_$\Box${ij$\Box$}$\Box$$$\Box$,$\Box$ while$\Box$ the$\Box$ correction$\Box$ commands$\Box$ are$\Box$ the$\Box$ Pauli$\Box$ operators$\Box$ $\Box$$$\Box$\Cx$\Box$ i$\Box$$$\Box$ and$\Box$ $\Box$$$\Box$\Cz$\Box$ i$\Box$$$\Box$.$\Box$
$\Box$
A$\Box$ $\Box$\emph$\Box${1$\Box$-qubit$\Box$ measurement$\Box$}$\Box$ command$\Box$,$\Box$ written$\Box$ $\Box$$$\Box$\M$\Box${$\Box$\alpha$\Box$}i$\Box$$$\Box$,$\Box$ is$\Box$ given$\Box$ by$\Box$ a$\Box$ pair$\Box$ of$\Box$ complement$\Box$ orthogonal$\Box$ projections$\Box$,$\Box$ on$\Box$:$\Box$ $\Box$ $\Box$\EQ$\Box${$\Box$ $\Box$\oqb$\Box$\alpha$\Box$&$\Box$:$\Box$=$\Box$&$\Box$\frac1{\sqrt2}$\Box$($\Box$\ket0$\Box$+$\Box$ e$\Box$^$\Box${i$\Box$\alpha$\Box$}$\Box$\ket1$\Box$)$\Box$$\Box$$\Box$ $\Box$\oqbn$\Box$\alpha$\Box$&$\Box$:$\Box$=$\Box$&$\Box$\frac1{\sqrt2}$\Box$($\Box$\ket0$\Box$-$\Box$\ei$\Box$\alpha$\Box$\ket1$\Box$)$\Box$ $\Box$}$\Box$ It$\Box$ is$\Box$ easily$\Box$ seen$\Box$ that$\Box$ $\Box$$$\Box$\oqb$\Box$\alpha$\Box$$$\Box$,$\Box$ $\Box$$$\Box$\oqbn$\Box$\alpha$\Box$$$\Box$ form$\Box$ an$\Box$ orthonormal$\Box$ basis$\Box$ in$\Box$ $\Box$$$\Box${\mbb C}^2$\Box$$$\Box$,$\Box$ so$\Box$ they$\Box$ indeed$\Box$ define$\Box$ a$\Box$ 1$\Box$-qubit$\Box$ measurement$\Box$ $\Box$(of$\Box$ rank$\Box$ $\Box$$2$\Box$^$\Box${n$\Box$-1$\Box$}$\Box$$$\Box$,$\Box$ if$\Box$ $\Box$$n$\Box$$$\Box$ is$\Box$ the$\Box$ number$\Box$ of$\Box$ qubits$\Box$ in$\Box$ the$\Box$ ambient$\Box$ computing$\Box$ space$\Box$)$\Box$.$\Box$ $\Box$ $\Box$ Measurements$\Box$ here$\Box$ will$\Box$ always$\Box$ be$\Box$ understood$\Box$ as$\Box$ destructive$\Box$ measurements$\Box$,$\Box$ that$\Box$ is$\Box$ to$\Box$ say$\Box$ the$\Box$ concerned$\Box$ qubit$\Box$ is$\Box$ consumed$\Box$ in$\Box$ the$\Box$ measurement$\Box$ operation$\Box$.$\Box$ $\Box$
$\Box$
The$\Box$ outcome$\Box$ of$\Box$ a$\Box$ measurement$\Box$ done$\Box$ at$\Box$ qubit$\Box$ $\Box$$i$\Box$$$\Box$ will$\Box$ be$\Box$ denoted$\Box$ by$\Box$ $\Box$$s$\Box$_i$\Box$\in$\Box${\mbb Z}_2$\Box$$$\Box$.$\Box$ $\Box$ Since$\Box$ one$\Box$ only$\Box$ deals$\Box$ with$\Box$ patterns$\Box$ where$\Box$ qubits$\Box$ are$\Box$ measured$\Box$ at$\Box$ most$\Box$ once$\Box$ $\Box$(see$\Box$ condition$\Box$ $\Box$(D1$\Box$)$\Box$ below$\Box$)$\Box$,$\Box$ this$\Box$ is$\Box$ unambiguous$\Box$.$\Box$ We$\Box$ take$\Box$ the$\Box$ convention$\Box$ that$\Box$ $\Box$$s$\Box$_i$\Box$=0$\Box$$$\Box$ if$\Box$ under$\Box$ the$\Box$ corresponding$\Box$ measurement$\Box$ the$\Box$ state$\Box$ collapses$\Box$ to$\Box$ $\Box$$$\Box$\oqb$\Box$\alpha$\Box$$$\Box$,$\Box$ and$\Box$ $\Box$$s$\Box$_i$\Box$=1$\Box$$$\Box$ if$\Box$ to$\Box$ $\Box$$$\Box$\oqbn$\Box$\alpha$\Box$$$\Box$.$\Box$
$\Box$
Outcomes$\Box$ can$\Box$ be$\Box$ summed$\Box$ together$\Box$ resulting$\Box$ in$\Box$ expressions$\Box$ of$\Box$ the$\Box$ form$\Box$ $\Box$$s$\Box$=$\Box$\sum$\Box$_$\Box${i$\Box$\in$\Box$ I$\Box$}$\Box$ s$\Box$_i$\Box$$$\Box$ which$\Box$ we$\Box$ call$\Box$ $\Box$\emph$\Box${signals$\Box$}$\Box$,$\Box$ and$\Box$ where$\Box$ the$\Box$ summation$\Box$ is$\Box$ understood$\Box$ as$\Box$ being$\Box$ done$\Box$ is$\Box$ $\Box$$$\Box${\mbb Z}_2$\Box$$$\Box$.$\Box$ We$\Box$ define$\Box$ the$\Box$ $\Box$\emph$\Box${domain$\Box$}$\Box$ of$\Box$ a$\Box$ signal$\Box$ as$\Box$ the$\Box$ set$\Box$ of$\Box$ qubits$\Box$ it$\Box$ depends$\Box$ on$\Box$.$\Box$ $\Box$
$\Box$
Dependent$\Box$ corrections$\Box$ will$\Box$ be$\Box$ written$\Box$ $\Box$$$\Box$\cx$\Box$ is$\Box$$$\Box$ and$\Box$ $\Box$$$\Box$\cz$\Box$ is$\Box$$$\Box$ with$\Box$ $\Box$ $\Box$$s$\Box$$$\Box$ a$\Box$ signal$\Box$.$\Box$ Their$\Box$ meaning$\Box$ is$\Box$ that$\Box$ $\Box$$$\Box$\cx$\Box$ i0$\Box$=$\Box$\cz$\Box$ i0$\Box$=I$\Box$$$\Box$ $\Box$(no$\Box$ correction$\Box$ is$\Box$ applied$\Box$)$\Box$,$\Box$ while$\Box$ $\Box$$$\Box$\cx$\Box$ i1$\Box$=$\Box$\Cx$\Box$ i$\Box$$$\Box$ and$\Box$ $\Box$$$\Box$\cz$\Box$ i1$\Box$=$\Box$\Cz$\Box$ i$\Box$$$\Box$.$\Box$
$\Box$
Dependent$\Box$ measurements$\Box$ will$\Box$ be$\Box$ written$\Box$ $\Box$$$\Box$\MS$\Box${$\Box$\alpha$\Box$}ist$\Box$$$\Box$,$\Box$ where$\Box$ $\Box$$s$\Box$$$\Box$ and$\Box$ $\Box$$t$\Box$$$\Box$ are$\Box$ signals$\Box$.$\Box$ Their$\Box$ meaning$\Box$ is$\Box$ as$\Box$ follows$\Box$:$\Box$ $\Box$\EQ$\Box${$\Box$ $\Box$\label$\Box${msem$\Box$}$\Box$ $\Box$\MS$\Box${$\Box$\alpha$\Box$}$\Box$ ist$\Box$&$\Box$:$\Box$=$\Box$&$\Box$\M$\Box${$\Box$($\Box$-1$\Box$)$\Box$^s$\Box$\alpha$\Box$+t$\Box$\pi$\Box$}$\Box$ i$\Box$ $\Box$}$\Box$ As$\Box$ a$\Box$ result$\Box$,$\Box$ before$\Box$ applying$\Box$ a$\Box$ measurement$\Box$,$\Box$ one$\Box$ has$\Box$ to$\Box$ know$\Box$ first$\Box$ all$\Box$ the$\Box$ measurements$\Box$ outcomes$\Box$ occurring$\Box$ in$\Box$ the$\Box$ signals$\Box$ $\Box$$s$\Box$$$\Box$,$\Box$ $\Box$$t$\Box$$$\Box$,$\Box$ then$\Box$ one$\Box$ has$\Box$ to$\Box$ compute$\Box$ the$\Box$ parity$\Box$ of$\Box$ $\Box$$s$\Box$$$\Box$ and$\Box$ $\Box$$t$\Box$$$\Box$,$\Box$ and$\Box$ maybe$\Box$ modify$\Box$ $\Box$$$\Box$\alpha$\Box$$$\Box$ to$\Box$ one$\Box$ of$\Box$ $\Box$ $\Box$$$\Box$-$\Box$\alpha$\Box$$$\Box$,$\Box$ $\Box$$$\Box$\alpha$\Box$+$\Box$\pi$\Box$$$\Box$ and$\Box$ $\Box$$$\Box$-$\Box$\alpha$\Box$+$\Box$\pi$\Box$$$\Box$.$\Box$ One$\Box$ can$\Box$ easily$\Box$ compute$\Box$ that$\Box$:$\Box$ $\Box$\EQ$\Box$ $\Box${$\Box$ X$\Box$_i$\Box$\M$\Box${$\Box${$\Box$\alpha$\Box$}$\Box$}i$\Box$ X$\Box$_i$\Box$&$\Box$=$\Box$&$\Box$\M$\Box${$\Box${$\Box$-$\Box$\alpha$\Box$}$\Box$}i$\Box$\label$\Box${xmx$\Box$}$\Box$$\Box$$\Box$ Z$\Box$_i$\Box$\M$\Box${$\Box${$\Box$\alpha$\Box$}$\Box$}i$\Box$ Z$\Box$_i$\Box$&$\Box$=$\Box$&$\Box$\M$\Box${$\Box${$\Box$\alpha$\Box$+$\Box$\pi$\Box$}$\Box$}i$\Box$\label$\Box${zmz$\Box$}$\Box$ $\Box$}$\Box$ so$\Box$ that$\Box$ the$\Box$ actions$\Box$ correspond$\Box$ to$\Box$ conjugations$\Box$ of$\Box$ measurements$\Box$ under$\Box$ $\Box$$X$\Box$$$\Box$ and$\Box$ $\Box$$Z$\Box$$$\Box$.$\Box$ We$\Box$ will$\Box$ refer$\Box$ 
to$\Box$ them$\Box$ as$\Box$ the$\Box$ $\Box$$X$\Box$$$\Box$ and$\Box$ $\Box$$Z$\Box$$$\Box$-actions$\Box$.$\Box$ Note$\Box$ that$\Box$ $\Box$ these$\Box$ two$\Box$ actions$\Box$ are$\Box$ commuting$\Box$,$\Box$ since$\Box$ $\Box$$$\Box$-$\Box$\alpha$\Box$+$\Box$\pi$\Box$=$\Box$-$\Box$\alpha$\Box$-$\Box$\pi$\Box$$$\Box$ up$\Box$ to$\Box$ $\Box$$2$\Box$\pi$\Box$$$\Box$,$\Box$ and$\Box$ hence$\Box$ the$\Box$ order$\Box$ in$\Box$ which$\Box$ one$\Box$ applies$\Box$ them$\Box$ doesn$\Box$'t$\Box$ matter$\Box$.$\Box$ Should$\Box$ one$\Box$ use$\Box$ other$\Box$ local$\Box$ corrections$\Box$,$\Box$ then$\Box$ one$\Box$ would$\Box$ have$\Box$ here$\Box$ $\Box$ instead$\Box$ the$\Box$ corresponding$\Box$ actions$\Box$ on$\Box$ measurement$\Box$ angles$\Box$.$\Box$ As$\Box$ we$\Box$ will$\Box$ see$\Box$ later$\Box$,$\Box$ relations$\Box$ $\Box$($\Box$\ref$\Box${xmx$\Box$}$\Box$)$\Box$ and$\Box$ $\Box$($\Box$\ref$\Box${zmz$\Box$}$\Box$)$\Box$ are$\Box$ key$\Box$ to$\Box$ the$\Box$ propagation$\Box$ of$\Box$ dependent$\Box$ corrections$\Box$,$\Box$ and$\Box$ to$\Box$ obtaining$\Box$ patterns$\Box$ in$\Box$ the$\Box$ standard$\Box$ entanglement$\Box$,$\Box$ measurement$\Box$,$\Box$ correction$\Box$ form$\Box$.$\Box$ Since$\Box$ measurements$\Box$ considered$\Box$ here$\Box$ are$\Box$ destructive$\Box$ ones$\Box$,$\Box$ the$\Box$ equations$\Box$ simplify$\Box$ to$\Box$ $\Box$$$\Box$\M$\Box${$\Box${$\Box$\alpha$\Box$}$\Box$}i$\Box$ X$\Box$_i$\Box$=$\Box$\M$\Box${$\Box${$\Box$-$\Box$\alpha$\Box$}$\Box$}i$\Box$$$\Box$,$\Box$ and$\Box$ $\Box$$$\Box$\M$\Box${$\Box${$\Box$\alpha$\Box$}$\Box$}i$\Box$ Z$\Box$_i$\Box$=$\Box$\M$\Box${$\Box${$\Box$\alpha$\Box$+$\Box$\pi$\Box$}$\Box$}i$\Box$$$\Box$.$\Box$
 
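To make the angle arithmetic concrete, here is a minimal Python sketch (our own illustration, not part of the original development; the name \texttt{dependent\_angle} is ours) of the dependent-angle rule $\alpha\mapsto(-1)^s\alpha+t\pi$, checking that the $X$ and $Z$-actions commute modulo $2\pi$:

```python
import math

def dependent_angle(alpha, s, t):
    """Angle actually measured by a dependent measurement with signals s, t:
    (-1)**s * alpha + t * pi, reduced modulo 2*pi."""
    return ((-1) ** s * alpha + t * math.pi) % (2 * math.pi)

alpha = math.pi / 4
# The X-action flips the sign of alpha, the Z-action adds pi; applying both
# in either order yields the same angle, since -alpha+pi = -(alpha+pi) mod 2pi.
both = dependent_angle(alpha, s=1, t=1)
assert abs(both - (-(alpha + math.pi)) % (2 * math.pi)) < 1e-12
```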
Another point worth noticing is that the domain of the signals of a dependent command, be it a measurement or a correction, represents the set of measurements which one has to perform before one can determine the actual value of the command.

Finally, we note that we could work with general $1$-qubit measurements, instead of the class defined above, sometimes called $xy$-measurements. All the developments would carry through nicely, but we have not found so far any compelling reason for this additional generality.

\subsection{Patterns}
\begin{defi}
A pattern consists of three finite sets $V$, $I$, $O$, together with two injective maps $\iota:I\rightarrow V$ and $o:O\rightarrow V$, and a finite sequence of commands $A_n\ldots A_1$ applying to qubits in $V$.
\end{defi}
The set $V$ is called the pattern \emph{computation space}, and we write $\hil V$ for the associated quantum state space $\otimes_{i\in V}{\mbb C}^2$. To ease notation, we will forget altogether about the maps $\iota$ and $o$, and write simply $I$, $O$ instead of $\iota(I)$ and $o(O)$. Note, however, that these maps are useful to define classical manipulations of the quantum states, such as permutations of the qubits. The sets $I$, $O$ will be called respectively the pattern \emph{inputs} and \emph{outputs}, and we will write $\hil I$ and $\hil O$ for the associated quantum state spaces. The sequence $A_n\ldots A_1$ will be called the pattern \emph{command sequence}.

To run a pattern, one prepares the input qubits in some input state $\psi\in\hil I$, while the non-input qubits are all set to the $\ket{+}$ state; then the commands are executed in sequence, and finally the result of the pattern computation is some $\phi\in\hil O$. There might be qubits in the pattern which are neither input nor output qubits, and are used as auxiliary qubits during the computation. Usually one tries to use as few of them as possible, since these contribute to the \emph{space complexity} of the computation.

Note that one does not require inputs and outputs to be disjoint subsets of $V$. This seemingly innocuous additional flexibility is actually quite useful to give parsimonious implementations of unitaries~\cite{generator04}. While the restriction to disjoint inputs and outputs is unnecessary, it has been discussed whether more constrained patterns might be easier to realise physically. Recent work~\cite{graphstates,BR04,CMJ04}, however, seems to indicate they are not.

Here is an example of a pattern implementing the Hadamard operator $H$:
\AR{
\mathfrak H&:=&(\ens{1,2},\ens{1},\ens{2},\cx 2{s_1}\M01\et12)
}

What is this pattern doing? The first qubit is prepared in some input state $\psi$, and the second in state $\ket+$; then these are entangled to obtain $\mathop{\wedge}\hskip-.4ex Z_{12}(\psi_1\otimes\ket+_2)$. Once this is done, the first qubit is measured in the $\ket+$, $\ket-$ basis. Finally an $X$ correction is applied on the output qubit, depending on the outcome of the measurement. We will do this calculation in detail later.
 
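As a sanity check on this example, the following Python sketch (our own illustration; the representation and the name \texttt{run\_H\_pattern} are not from the text) runs both branches of $\mathfrak H$ on an arbitrary input, without renormalising, and verifies that each branch output equals $H\psi$ up to norm:

```python
import math

def run_H_pattern(psi):
    """Run X_2^{s_1} M_1^0 E_12 on input psi = (c0, c1); return the two
    unnormalised branch outputs, indexed by the outcome s_1."""
    s2 = 1 / math.sqrt(2)
    # Preparation: psi on qubit 1, |+> on qubit 2.
    a = {(i, j): psi[i] * s2 for i in (0, 1) for j in (0, 1)}
    a[1, 1] = -a[1, 1]                    # E_12: controlled-Z
    branches = []
    for s1 in (0, 1):
        sign = 1 - 2 * s1
        # M_1^0: project qubit 1 on <+| (s1 = 0) or <-| (s1 = 1).
        b = [(a[0, j] + sign * a[1, j]) * s2 for j in (0, 1)]
        if s1 == 1:                       # dependent correction X_2^{s_1}
            b[0], b[1] = b[1], b[0]
        branches.append(b)
    return branches

psi = (0.6, 0.8j)
h_psi = [(psi[0] + psi[1]) / math.sqrt(2), (psi[0] - psi[1]) / math.sqrt(2)]
for b in run_H_pattern(psi):
    # each branch carries probability 1/2 and realises H up to that norm
    assert all(abs(math.sqrt(2) * b[j] - h_psi[j]) < 1e-12 for j in (0, 1))
```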
\subsection{Pattern combination}
We are now interested in how one can combine patterns into bigger ones.

The first way to combine patterns is by composing them. Two patterns $\mathfrak P_1$ and $\mathfrak P_2$ may be composed if $V_1\cap V_2=O_1=I_2$. Note that, provided $\mathfrak P_1$ has as many outputs as $\mathfrak P_2$ has inputs, one can always make them composable by renaming the pattern qubits.
\begin{defi}
The composite pattern $\mathfrak P_2\mathfrak P_1$ is defined as:\\
--- $V:=V_1\cup V_2$, $I=I_1$, $O=O_2$,\\
--- commands are concatenated.
\end{defi}
The other way of combining patterns is to tensor them. Two patterns $\mathfrak P_1$ and $\mathfrak P_2$ may be tensored if $V_1\cap V_2=\varnothing$. Again, one can always meet this condition by renaming qubits so that these sets are made disjoint.
\begin{defi}
The tensor pattern $\mathfrak P_2\otimes\mathfrak P_1$ is defined as:\\
--- $V=V_1\cup V_2$, $I=I_1\cup I_2$, and $O=O_1\cup O_2$,\\
--- commands are concatenated.
\end{defi}
Note that all unions above are disjoint. Note also that, in contrast to the composition case, commands from distinct patterns freely commute, since they apply to disjoint qubits and are independent of each other; so when we say that commands have to be concatenated, this is only for definiteness.
 
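The two combinators can be phrased directly on pattern tuples. A minimal sketch, under our own (hypothetical) representation of a pattern as $(V,I,O,\text{commands})$ with sets of qubit names and commands listed in execution order:

```python
def compose(p2, p1):
    """P2 P1: defined when V1 ∩ V2 = O1 = I2; p1's commands run first."""
    V1, I1, O1, cmds1 = p1
    V2, I2, O2, cmds2 = p2
    assert V1 & V2 == O1 == I2, "patterns are not composable"
    return (V1 | V2, I1, O2, cmds1 + cmds2)

def tensor(p2, p1):
    """P2 ⊗ P1: defined when the computation spaces are disjoint."""
    V1, I1, O1, cmds1 = p1
    V2, I2, O2, cmds2 = p2
    assert not (V1 & V2), "computation spaces overlap"
    return (V1 | V2, I1 | I2, O1 | O2, cmds1 + cmds2)

# Composing the Hadamard pattern with a copy renamed to qubits {2, 3}:
H1 = ({1, 2}, {1}, {2}, ["E12", "M1", "X2"])
H2 = ({2, 3}, {2}, {3}, ["E23", "M2", "X3"])
HH = compose(H2, H1)
assert HH == ({1, 2, 3}, {1}, {3}, ["E12", "M1", "X2", "E23", "M2", "X3"])
```

Renaming to meet the side conditions is exactly the relabelling of qubit names discussed above.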
\subsection{Pattern conditions}
One might want to subject patterns to various conditions:
\begin{description}
\item[(D0)] no command depends on an outcome not yet measured;
\item[(D1)] no command acts on a qubit already measured;
\item[(D2)] a qubit $i$ is measured if and only if $i$ is not an output;
\item[(EMC)] commands occur $E$s first, then $M$s, then $C$s.
\end{description}
The reader might want to check that our example $\mathfrak H$ satisfies all of the above. It is routine to verify that these conditions are preserved under composition and tensor. Conditions (D0) and (D1) ensure that a pattern can always be run meaningfully. Indeed, if (D0) fails, then at some point of the computation one will want to execute a command which depends on outcomes that are not yet known. Likewise, if (D1) fails, one will try to apply a command to a qubit that has been consumed by a measurement (recall that we use destructive measurements). Condition (D2) is there to make sure that at the end of running the pattern, the state belongs to the output space $\hil O$, \textit{i.e.}, that all non-output qubits, and only they, will have been consumed by a measurement when the computation ends.
 
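For illustration, the conditions (D0)--(D2) can be checked mechanically. A sketch under our own (hypothetical) encoding of each command as a triple of its kind, the qubits it acts on, and the set of qubits whose outcomes it depends on, listed in execution order:

```python
def definite(V, O, cmds):
    """Return True iff (D0), (D1) and (D2) hold for the command list."""
    measured = set()
    for kind, qubits, deps in cmds:
        if deps - measured:            # (D0): depends on an unknown outcome
            return False
        if set(qubits) & measured:     # (D1): acts on a consumed qubit
            return False
        if kind == "M":
            measured |= set(qubits)
    return measured == V - O           # (D2): measured iff not an output

# The Hadamard pattern of the previous section satisfies (D):
H_cmds = [("E", (1, 2), set()), ("M", (1,), set()), ("X", (2,), {1})]
assert definite({1, 2}, {2}, H_cmds)
# Swapping the measurement and its dependent correction violates (D0):
assert not definite({1, 2}, {2}, [H_cmds[0], H_cmds[2], H_cmds[1]])
```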
From now on we will assume that all patterns satisfy the \emph{definiteness} conditions (D0), (D1) and (D2), and will designate by (D) the conjunction of these three conditions.

Condition (EMC) is of a completely different nature. Patterns not respecting it will be called \emph{wild}.

Later on, we will introduce the measurement calculus and show a simple rewriting procedure turning any given wild pattern into an equivalent one which is in (EMC) form. We call this procedure \emph{standardisation}, and also say that a pattern meeting the (EMC) condition is \emph{standard}.

Before turning to this matter, we need a clean definition of what it means for a pattern to implement or to realise a unitary operator, together with a proof that the way one can combine patterns is reflected in their interpretations. This is key to our proof of universality.

\section{Computing a pattern}
Besides quantum states, which are vectors in some $\hil V$, one needs a classical state recording the outcomes of the successive measurements one does in a pattern. So it is natural to define the computation state space as:
\AR{
\mathcal S&:=&\bigcup_{V,W}\hil V\times{\mbb Z}_2^W
}
where $V$, $W$ range over finite sets. In other words, a computation state is a pair $q$, $\Gamma$, where $q$ is a quantum state and $\Gamma$ is a map from some $W$ to the outcome space ${\mbb Z}_2$. We call this classical component $\Gamma$ an \emph{outcome map} and denote by $\varnothing$ the unique map in ${\mbb Z}_2^\varnothing$.

\subsection{Commands as actions}
We need a few notations. For any signal $s$ and classical state $\Gamma\in{\mbb Z}_2^W$, such that the domain of $s$ is included in $W$, we take $s_\Gamma$ to be the value of $s$ given by the outcome map $\Gamma$. That is to say, if $s=\sum_I s_i$, then $s_\Gamma:=\sum_I\Gamma(i)$, where the sum is taken in ${\mbb Z}_2$. Also, if $\Gamma\in{\mbb Z}_2^W$ and $x\in{\mbb Z}_2$, we define:
\AR{
\Gamma[x/i](i)=x,\quad \Gamma[x/i](j)=\Gamma(j)\hbox{ for }j\neq i
}
which is a map in ${\mbb Z}_2^{W\cup\ens i}$.
 
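The two classical operations $s_\Gamma$ and $\Gamma[x/i]$ are straightforward to mirror in code; a small sketch with an outcome map as a Python dict (function names are ours, for illustration only):

```python
def signal_value(s, gamma):
    """s_Gamma: the parity (sum in Z_2) of the recorded outcomes over the
    domain of the signal s."""
    return sum(gamma[i] for i in s) % 2

def update(gamma, i, x):
    """Gamma[x/i]: extend/overwrite the outcome map with outcome x at i."""
    new = dict(gamma)
    new[i] = x
    return new

gamma = {1: 1, 2: 0, 3: 1}
assert signal_value({1, 3}, gamma) == 0      # 1 + 1 = 0 in Z_2
assert update(gamma, 4, 1) == {1: 1, 2: 0, 3: 1, 4: 1}
```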
We may now see each of our commands as acting on $\mathcal S$.

\AR{
q,\Gamma&\slar{\et ij}&\mathop{\wedge}\hskip-.4ex Z_{ij} q,\Gamma\\
q,\Gamma&\slar{\cx is}&\cx i{s_\Gamma} q,\Gamma\\
q,\Gamma&\slar{\cz is}&\cz i{s_\Gamma} q,\Gamma\\
q,\Gamma&\slar{\MS{\alpha}ist}&{\oqbb{\alpha_\Gamma}}_iq,\Gamma[0/i]\\
q,\Gamma&\slar{\MS{\alpha}ist}&{\oqbnb{\alpha_\Gamma}}_iq,\Gamma[1/i]
}
where $\alpha_\Gamma=(-1)^{s_\Gamma}\alpha+t_\Gamma\pi$, following equation (\ref{msem}), and $\bra\psi_i$ is the linear form associated to $\psi$ applied at qubit $i$. Suppose $q\in\hil V$; for the above relations to be defined, one needs the indices $i$, $j$ on which the various commands apply to be in $V$. One also needs $\Gamma$ to contain the domains of $s$ and $t$, so that $s_\Gamma$ and $t_\Gamma$ are well-defined. This will always be the case during the run of a pattern because of condition (D).

All commands except measurements are deterministic and only modify the quantum part of the state. The measurement actions on $\mathcal S$ are not deterministic, so that these are actually binary relations on $\mathcal S$, and they modify both the quantum and classical parts of the state. The usual convention has it that when one does a measurement the resulting state is \emph{renormalised}; we don't adhere to it here, the reason being that this way the probability of reaching a given state can be read off its norm, and the overall treatment is simpler.

We introduce an additional command called \emph{shifting}:
\AR{
q,\Gamma&\slar{\ss is}&q,\Gamma[\Gamma(i)+s_\Gamma/i]
}
It consists in shifting the measurement outcome at $i$ by the amount $s_\Gamma$. Note that the $Z$-action leaves measurements globally invariant, in the sense that $\oqb{\alpha+\pi},\oqbn{\alpha+\pi}=\oqbn{\alpha},\oqb{\alpha}$. Thus changing $\alpha$ to $\alpha+\pi$ amounts to swapping the outcomes of the measurement, and one has:
\EQ{
\MS\alpha ist&=&\ss it\ms\alpha is\label{split}
}
Shifting thus makes it possible to split off the $t$ action of a measurement, sometimes resulting in convenient optimisations of standard forms.
\subsection{Computation branches}
Let $\mathfrak P$ be a pattern with computation space $V$, inputs $I$, outputs $O$ and command sequence $A_n\ldots A_1$. A complete pattern computation starts with some input state $q$ in $\hil I$, together with the empty outcome map $\varnothing$. The input state $q$ is then tensored with as many $\ket+$s as there are non-inputs in $V$, so as to obtain a state in the full space $\hil V$. Then the commands in $\mathfrak P$ are applied in sequence. We can summarise the situation as follows:
\AR{
\xymatrix@=10pt@M=5pt@R=20pt@C=40pt{
{}\hil I\ar[d]\ar@{.>}[rr] && {}\hil O \\
{}\hil I\times{\mbb Z}_2^{\varnothing}\ar[r]^{prep}& {}\hil V\times{\mbb Z}_2^{\varnothing}\ar[r]^{A_1\ldots A_n\quad}& {}\hil O\times{\mbb Z}_2^{V\setminus O}\ar[u]
}
}
To make this precise, we say there is a \emph{$\mathfrak P$-branch} from $q\in\hil I$ to $q'\in\hil O$, written $q\brA{\mathfrak P}q'$, if there is a sequence $(q_i,\Gamma_i)$ with $1\leq i\leq n+1$, such that:
\AR{
q\otimes\ket{\hskip-.4ex+\ldots+},\varnothing=q_1,\Gamma_1\\
q'=q_{n+1}\neq0\\
\hbox{and for all }i\leq n: q_i,\Gamma_i\slar{A_i}q_{i+1},\Gamma_{i+1}
}
Thus $\brA{\mathfrak P}$ is a binary relation on $\hil I\times\hil O$. That it is a relation and not a map reflects the fact that measurements a priori introduce non-determinism in the evolution of the quantum states.

Specifically, if $k$ is the number of measurements in $\mathfrak P$ (or equivalently the number of non-output qubits), there are at most $2^k$ branches in any given computation, and therefore a given $q\in\hil I$ is in relation with at most $2^k$ distinct $q'\in\hil O$. The \emph{probability} of a branch is defined to be $\|q'\|^2/\|q\|^2$ ($q$ being always assumed to be non-zero).

Indeed one has:
\EQ{\label{likeli}
\sum_{\ens{q'\mid q\brA{\mathfrak P}q'}}\|q'\|^2&=&\|q\|^2
}
since any action is either a unitary, thus norm-preserving, or a measurement, which introduces a branching; and then if $q$ projects to $q_0$ and $q_1$ under some $\M\alpha i$, one has $\|q\|^2=\|q_0\|^2+\|q_1\|^2$, so that the relation above is always preserved.
 
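This norm-preservation property can be spot-checked numerically: a one-qubit measurement with angle $0$ projects a state on $\bra+$ and $\bra-$, and, without renormalisation, the two branch norms add up to the input norm. A minimal sketch (our own illustration):

```python
import math

def measure_branches(q):
    """Project a 1-qubit state (a0, a1) on <+| and <-| without
    renormalising; each branch collapses to a single amplitude."""
    s2 = 1 / math.sqrt(2)
    return ((q[0] + q[1]) * s2,), ((q[0] - q[1]) * s2,)

def norm2(v):
    return sum(abs(a) ** 2 for a in v)

q = (0.6, 0.8j)
q0, q1 = measure_branches(q)
# Branching preserves the total squared norm.
assert abs(norm2(q0) + norm2(q1) - norm2(q)) < 1e-12
```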
\begin{defi}
One says the pattern $\mathfrak P$ is \emph{deterministic} if for all $q\in\hil I$ and $q'$, $q''\in\hil O$, whenever $q\brA{\mathfrak P}q'$ and $q\brA{\mathfrak P}q''$, then $q'$ and $q''$ differ only up to a scalar.
\end{defi}
Note that even when $\mathfrak P$ is deterministic, all branches might not be equally likely. When $\mathfrak P$ is deterministic, one defines a norm-preserving map $U_{\mathfrak P}$ from $\hil I$ to $\hil O$ by:
\EQ{
U_{\mathfrak P}(q)&:=&\frac{\|q\|}{\|q'\|}q'
}
Note that when $q\brA{\mathfrak P}q'$, one has $q'\neq0$, so that the definition above always makes sense. Note also that because $\mathfrak P$ is deterministic, this map depends on the choice of $q'$ only up to a global phase. One can further comment that, since we took the convention not to renormalise measurement results, we have to do here a global renormalisation to define the pattern interpretation.
$\Box$
One says that a deterministic pattern $\mathfrak P$ \emph{realises} or \emph{implements} $U_{\mathfrak P}$, or equivalently that $U_{\mathfrak P}$ is the \emph{interpretation} of $\mathfrak P$.
$\Box$
This map $U_{\mathfrak P}$ must actually be a unitary embedding, since all quantum definable deterministic transformations are unitaries. If a precise argument is needed here, one can rephrase all the definitions given so far in the language of density operators and completely-positive maps (cp-maps). Then a deterministic pattern will implement a cp-map preserving pure density operators. From the Kraus representation theorem for cp-maps, it is easy to see that such cp-maps are liftings of unitary embeddings.
$\Box$
\subsection{Short examples} First we give a quick example of a deterministic pattern that has branches with different probabilities. The state space is $\ens{1,2}$, with $I=O=\ens1$, while the command sequence is $\M\al2$. Therefore, starting with input $q$, one gets two branches: \AR{ q\otimes\ket+,\varnothing &\slar{\M\al2}& \left\{ \begin{array}{l} \frac12(1+\emi{\alpha})q,\varnothing[0/2]\\ \frac12(1-\emi{\alpha})q,\varnothing[1/2] \end{array} \right. } Thus this pattern is indeed deterministic, and implements the identity up to a global phase, and yet the two branches have respective probabilities $(1+\cos\alpha)/2$ and $(1-\cos\alpha)/2$, which are not equal in general.
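The branch probabilities above can be recovered numerically. The following sketch (our own check, with an arbitrary angle) takes the two branch amplitudes $\frac12(1\pm e^{-i\alpha})$ displayed above and confirms they give probabilities $(1\pm\cos\alpha)/2$, summing to one.

```python
import math, cmath

alpha = 1.1
b0 = (1 + cmath.exp(-1j * alpha)) / 2   # amplitude factor for outcome 0
b1 = (1 - cmath.exp(-1j * alpha)) / 2   # amplitude factor for outcome 1
p0, p1 = abs(b0) ** 2, abs(b1) ** 2
# |1 +- e^{-ia}|^2 / 4 = (1 +- cos a) / 2
assert abs(p0 - (1 + math.cos(alpha)) / 2) < 1e-12
assert abs(p1 - (1 - math.cos(alpha)) / 2) < 1e-12
assert abs(p0 + p1 - 1) < 1e-12
```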
$\Box$
Next, we return to the pattern $\mathfrak H$ which we already took as an example. Let us consider for a start the pattern with the same space $\ens{1,2}$, the same inputs and outputs $I=\ens1$, $O=\ens2$, and the shorter command sequence $\Ms 01\et12$. Starting with input $q=(a\ket{0}+b\ket{1})\ket+$, one has two computation branches, branching at $\Ms 01$: \AR{ (a\ket{0}+b\ket{1})\ket+,\varnothing &\slar{\et12}& \frac1{\sqrt2}(a\ket{00}+a\ket{01}+b\ket{10}-b\ket{11}),\varnothing\\ &\slar{\Ms 01}& \left\{ \begin{array}{l} \frac12((a+b)\ket{0}+(a-b)\ket{1}),\varnothing[0/0]\\ \frac12((a-b)\ket{0}+(a+b)\ket{1}),\varnothing[1/0] \end{array} \right.
} and since $\norm{a+b}^2+\norm{a-b}^2=2(\norm a^2+\norm b^2)$, both transitions happen with equal probabilities $\frac12$. The two branches end up with different outputs, so the pattern is \emph{not} deterministic. However, if one applies the local correction $\Cx2$ at the end of either branch, both outputs will be made to coincide. Let us choose to let the correction bear on the second branch, obtaining the pattern $\mathfrak H$ which we defined already. We have just proved $H=U_{\mathfrak H}$, that is to say $\mathfrak H$ realises the Hadamard operator.
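The two displayed branches can be checked against the Hadamard directly. This is our own numeric sketch: after applying the $X$ correction to the second branch, both branch outputs equal $H(a\ket0+b\ket1)$ up to the overall $\frac1{\sqrt2}$ factor removed by renormalisation.

```python
import math

a, b = 0.6, 0.8j
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
target = [H[0][0] * a + H[0][1] * b, H[1][0] * a + H[1][1] * b]

branch0 = [(a + b) / 2, (a - b) / 2]   # outcome 0: already H q / sqrt(2)
branch1 = [(a - b) / 2, (a + b) / 2]   # outcome 1: needs the X correction
branch1 = [branch1[1], branch1[0]]     # local correction X on qubit 2

for br in (branch0, branch1):
    # br == target / sqrt(2): renormalising gives the Hadamard output
    assert all(abs(math.sqrt(2) * x - t) < 1e-12 for x, t in zip(br, target))
```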
$\Box$
With our definitions in place, we first infer that pattern combinations correspond to combinations of their interpretations. From this, an easy structured argument for universality, one using surprisingly simple patterns, will follow.
$\Box$
\subsection{Composing, Tensoring and Interpretation} Recall that two patterns $\mathfrak P_1$, $\mathfrak P_2$ may be combined by composition provided $\mathfrak P_1$ has as many outputs as $\mathfrak P_2$ has inputs. Suppose this is the case, and suppose further that $\mathfrak P_1$ and $\mathfrak P_2$ respectively realise some unitaries $U_1$ and $U_2$; then the composite pattern $\mathfrak P_2\mathfrak P_1$ realises $U_2U_1$.
$\Box$
Indeed, the two diagrams representing branches in $\mathfrak P_1$ and $\mathfrak P_2$:
$\Box$
{\footnotesize \AR{ \xymatrix@=10pt@M=3pt@R=20pt@C=7pt{ {}\hil{I_1}\ar[d]\ar@{.>}[rr] && {}\hil{O_1}\ar@{=} & {}\hil{I_2}\ar[d]\ar@{.>}[rr] && {}\hil{O_2} \\ {}\hil{I_1}\times{\mbb Z}_2^{\varnothing}\ar[r]^{p_1}& {}\hil{V_1}\times{\mbb Z}_2^{\varnothing}\ar[r]^{}& {}\hil{O_1}\times{\mbb Z}_2^{V_1\setminus O_1}\ar[u] & {}\hil{I_2}\times{\mbb Z}_2^{\varnothing}\ar[r]^{p_2}& {}\hil{V_2}\times{\mbb Z}_2^{\varnothing}\ar[r]^{}& {}\hil{O_2}\times{\mbb Z}_2^{V_2\setminus O_2}\ar[u] } } }
can be pasted together, since $O_1=I_2$, and $\hil{O_1}=\hil{I_2}$. But then, it is enough to notice 1) that preparation steps $p_2$ in $\mathfrak P_2$ commute with all actions in $\mathfrak P_1$ since they apply to disjoint sets of qubits, and 2) that no action taken in $\mathfrak P_2$ depends on the measurement outcomes in $\mathfrak P_1$. It follows that the pasted diagram describes the same branches as the one associated to the composite $\mathfrak P_2\mathfrak P_1$.
$\Box$
A similar argument applies to the case of a tensor combination, and one has that $\mathfrak P_2\otimes\mathfrak P_1$ realises $U_2\otimes U_1$.
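The two combination laws on interpretations can be made concrete on small matrices. A generic sketch (our own, not tied to specific patterns): composition of patterns goes to the matrix product of their interpretations, and tensoring goes to the Kronecker product. The matrices $H$ and $S$ here are just illustrative choices.

```python
H = [[1, 1], [1, -1]]          # Hadamard up to 1/sqrt(2), irrelevant here
S = [[1, 0], [0, 1j]]          # phase gate

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def kron(A, B):
    # Kronecker product of two 2x2 matrices
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

U21 = mat_mul(S, H)    # interpretation of the composite  P_2 P_1
U2x1 = kron(S, H)      # interpretation of the tensor     P_2 (x) P_1
assert U21 == [[1, 1], [1j, -1j]]
assert U2x1[3][3] == -1j
```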
$\Box$
The same holds even for non-deterministic patterns considered as implementing cp-maps, but we will not be concerned with this generalised setting in this paper.
$\Box$
\section{Universality} Consider the two following patterns: \EQ{ \mathfrak J(\alpha)&:=&\cx 2{s_1}\M{{-\alpha}}1\et 12\\ \mathop{\wedge}\hskip-.4ex{\mathfrak Z}&:=&\et 12 } In the first pattern $1$ is the only input and $2$ is the only output, while in the second both $1$ and $2$ are inputs and outputs. Note that here we are taking advantage of allowing patterns with overlapping inputs and outputs.
$\Box$
\begin{prop} The patterns $\mathfrak J(\alpha)$ and $\mathop{\wedge}\hskip-.4ex{\mathfrak Z}$ are universal. \end{prop} First, we claim that $\mathfrak J(\alpha)$ and $\mathop{\wedge}\hskip-.4ex{\mathfrak Z}$ respectively realise $J(\alpha)$ and $\mathop{\wedge}\hskip-.4ex{Z}$, with: \AR{ J(\alpha)&:=&\frac1{\sqrt2}\MA{1&\ei\alpha\\1&-\ei\alpha} } We have already seen in our example that $\mathfrak J(0)=\mathfrak H$ implements $H=J(0)$, thus we already know this in the particular case where $\alpha=0$. The general case follows by the same kind of computation. The case of $\mathop{\wedge}\hskip-.4ex Z$ is obvious.\\ Second, we know that these unitaries form a universal set~\cite{generator04}. Therefore, from the preceding section, we infer that combining the corresponding patterns will generate patterns realising all finite-dimensional unitaries. $\Box$
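The matrix $J(\alpha)$ displayed in the proof can be checked numerically. Our own transcription below verifies that $J(0)=H$ and that $J(\alpha)$ is unitary for an arbitrary angle.

```python
import math, cmath

def J(alpha):
    e = cmath.exp(1j * alpha)
    s = 1 / math.sqrt(2)
    return [[s, s * e], [s, -s * e]]

def dagger_mul(A):
    # returns A^dagger A, which must be the identity for a unitary A
    return [[sum(A[k][i].conjugate() * A[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

H = J(0)
assert abs(H[0][0] - 1 / math.sqrt(2)) < 1e-12
assert abs(H[1][1] + 1 / math.sqrt(2)) < 1e-12

G = dagger_mul(J(0.9))
assert all(abs(G[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```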
$\Box$
These patterns are indeed among the simplest possible. As a consequence, in the section devoted to examples, we will find that our implementations often have low space complexity.
$\Box$
Remarkably, in our set of generators, one finds a single dependency, which occurs in the correction phase of $\mathfrak J(\alpha)$. No set of patterns without any measurement could be a generating set, since such patterns can only implement unitaries in the Clifford group. Dependencies are also needed for universality, but we have to wait for the development of the measurement calculus in the next section to give a proof of this fact.
$\Box$
\section{The measurement calculus} We turn to the next important matter of the paper, namely standardisation. The idea is quite simple. It is enough to provide local pattern rewriting rules pushing $E$s to the beginning of the pattern, and $C$s to the end.
$\Box$
\subsection{The equations} A first set of equations gives means to propagate local Pauli corrections through the entangling operator $\et ij$. Because $\et ij=\et ji$, there are only two cases to consider: \EQ{ \et ij\cx is&=&\cx is\cz js\et ij\label{ecx}\\ \et ij\cz is&=&\cz is\et ij\label{ecz} } These equations are easy to verify and are natural since $\et ij$ belongs to the Clifford group, and therefore maps the Pauli group to itself under conjugation.
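Equation (\ref{ecx}) can be verified in the computational basis. Our own check below, with qubit $i$ as the first tensor factor and $s=1$: the matrix of $\et ij\cx i1$ equals that of $\cx i1\cz j1\et ij$.

```python
def kron(A, B):
    n = len(B)
    return [[A[i // n][j // n] * B[i % n][j % n]
             for j in range(len(A) * n)] for i in range(len(A) * n)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
# the entangling operator E_ij is controlled-Z
E = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]

lhs = mul(E, kron(X, I2))                       # E_ij X_i
rhs = mul(kron(X, I2), mul(kron(I2, Z), E))     # X_i Z_j E_ij
assert lhs == rhs
```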
$\Box$
A second set of equations gives means to push corrections through measurements acting on the same qubit. Again there are two cases: \EQ{ \MS\alpha ist\cx ir&=&\MS\alpha i{s+r}{t}\label{mcx}\\ \MS\alpha ist\cz ir&=&\MS\alpha i{s}{t+r}\label{mcz} } These equations follow easily from equations (\ref{xmx}) and (\ref{zmz}). They express the fact that the measurements $\Ms\alpha i$ are closed under conjugation by the Pauli group, very much like equations~(\ref{ecx}) and~(\ref{ecz}) express the fact that the Pauli group is closed under conjugation by the entanglements $\et ij$.
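The signal action behind equation (\ref{mcx}) boils down to how $X$ acts on the measurement basis. A minimal sketch (our own check): $X\ket{+_\alpha}=e^{i\alpha}\ket{+_{-\alpha}}$, so measuring at angle $\alpha$ after an $X$ correction is, up to a phase, measuring at angle $-\alpha$, i.e. flipping the $s$ signal.

```python
import math, cmath

def plus(alpha):
    # |+_a> = (|0> + e^{ia}|1>)/sqrt(2)
    return [1 / math.sqrt(2), cmath.exp(1j * alpha) / math.sqrt(2)]

alpha = 0.8
p = plus(alpha)
x_plus = [p[1], p[0]]                       # X swaps |0> and |1>
phase = cmath.exp(1j * alpha)
expect = [phase * c for c in plus(-alpha)]  # e^{ia} |+_{-a}>
assert all(abs(u - v) < 1e-12 for u, v in zip(x_plus, expect))
```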
$\Box$
Define the following convenient abbreviations: \AR{ \ms\alpha is:=\MS\alpha is0,\quad \MS\alpha i{}t:=\MS\alpha i0t,\quad \Ms\alpha i:=\MS\alpha i00,\\ \Ms xi:=\Ms0i,\quad \Ms yi:=\Ms\frac\pi2 i } Particular cases of the equations above are: \AR{ \M xi\cx is&=&\M xi\\ \M yi\cx is&=&\ms yis &=&\MS yi{}s &=&\M yi\cz is } The first equation follows from ${-0}=0$, so the $X$ action on $\M xi$ is trivial; the middle equation, second row, holds because ${-\frac\pi2}$ is equal to $\frac\pi2+\pi$ modulo $2\pi$, and therefore the $X$ and $Z$ actions coincide on $\M yi$. So we obtain the following: \EQ{ \MS xist&=&\MS xi{}t\label{mx}\\ \MS yist&=&\MS yi{}{s+t}\label{my} } which we will use later to prove that patterns with measurements of the form $M^x$ and $M^y$ may only realise unitaries in the Clifford group.
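The arithmetic behind equation (\ref{my}) is a one-line check (our own): on $\M yi$ the $X$ action ($\alpha\mapsto-\alpha$) and the $Z$ action ($\alpha\mapsto\alpha+\pi$) give the same angle modulo $2\pi$, which is why only the sum $s+t$ of the signals matters.

```python
import math, cmath

alpha = math.pi / 2          # the M^y measurement angle
x_action = -alpha            # effect of an X correction on the angle
z_action = alpha + math.pi   # effect of a Z correction on the angle
# equal modulo 2*pi, so equal as phases
assert abs(cmath.exp(1j * x_action) - cmath.exp(1j * z_action)) < 1e-12
```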
$\Box$
\subsection{The rewriting rules} We now define a set of rewrite rules, obtained by directing the equations above: \AR{ \et ij\cx is&\Rightarrow&\cx is\cz js\et ij&\quad\hbox{EX}\\ \et ij\cz is&\Rightarrow&\cz is\et ij&\quad\hbox{EZ}\\ \MS\alpha is{t}\cx i{r}&\Rightarrow&\MS\alpha i{s+r}{t}&\quad\hbox{MX}\\ \MS\alpha is{t}\cz i{r}&\Rightarrow&\MS\alpha i{s}{r+t}&\quad\hbox{MZ} } to which we need to add the \emph{free commutation rules}, obtained when commands operate on disjoint sets of qubits: \AR{ \et ij\CO{\vec k}&\Rightarrow&\CO{\vec k}\et ij&\quad\hbox{with }A\neq E\\ \CO{\vec k}\cx is&\Rightarrow&\cx is\CO{\vec k}&\quad\hbox{with }A\neq C\\ \CO{\vec k}\cz is&\Rightarrow&\cz is\CO{\vec k}&\quad\hbox{with }A\neq C } where $\vec k$ represents the qubits acted upon by command $A$, which are supposed to be distinct from $i$ and $j$.
$\Box$
Condition (D) is easily seen to be preserved under rewriting.
$\Box$
Under rewriting, the computation space, inputs and outputs remain the same, and so do the entanglement commands. Measurements might be modified, but there is still the same number of them, and they still act on the same qubits. The only induced modifications concern local corrections and dependencies. We also take due note that none of these equations may create dependencies.
$\Box$
\subsection{Standardisation} Write $\mathfrak P\Rightarrow\mathfrak P'$, respectively $\mathfrak P\Rightarrow^\star\mathfrak P'$, if both patterns have the same type, and one obtains $\mathfrak P'$'s command sequence from $\mathfrak P$'s by applying one, respectively any number, of the rules above. Say $\mathfrak P$ is \emph{standard} if for no $\mathfrak P'$, $\mathfrak P\Rightarrow\mathfrak P'$.
$\Box$
Because all our equations are sound, one has that whenever $\mathfrak P\Rightarrow^\star\mathfrak P'$, and both patterns are deterministic, then $U_{\mathfrak P}=U_{\mathfrak P'}$.
$\Box$
One can show by a standard rewriting theory argument that for all $\mathfrak P$ there exists a unique standard $\mathfrak P'$ such that $\mathfrak P\Rightarrow^\star\mathfrak P'$, and moreover $\mathfrak P'$ satisfies the (EMC) condition. Reaching the standard form takes at most quadratic time in the number of instructions in $\mathfrak P$. Details are given in the appendix.
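The flavour of the procedure can be conveyed by a toy standardiser (our own sketch, far simpler than the full calculus): commands are `(kind, qubits)` pairs listed in order of application, and every entangling command $E$ is bubbled to the front using only the free commutation rules, i.e. swaps over commands on disjoint qubits. Like the real procedure, this takes at most quadratic time in the sequence length.

```python
def standardise(cmds):
    cmds = list(cmds)
    moved = True
    while moved:
        moved = False
        for k in range(len(cmds) - 1):
            a, b = cmds[k], cmds[k + 1]
            # swap a later E past an earlier non-E on disjoint qubits
            if b[0] == 'E' and a[0] != 'E' and not (set(a[1]) & set(b[1])):
                cmds[k], cmds[k + 1] = b, a
                moved = True
    return cmds

seq = [('E', (1, 2)), ('M', (1,)), ('E', (2, 3)), ('M', (2,)), ('C', (3,))]
out = standardise(seq)
assert [c[0] for c in out] == ['E', 'E', 'M', 'M', 'C']
```

Note that when the $E$ overlaps the preceding command, the toy version leaves the pair alone; the real calculus handles that case with the EX and EZ rules instead.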
$\Box$
\subsection{Signal shifting} One can extend the calculus to include the shifting command $\ss it$. This allows one to dispose of dependencies induced by the $Z$-action, and sometimes to obtain standard patterns with smaller depth complexity, as we will see in the next section devoted to examples. \EQ{ \MS\alpha ist&\Rightarrow&\ss it\ms\alpha is\\ \cx js\ss it&\Rightarrow& \ss it \cx j{s[t+s_i/s_i]}\\ \cz js\ss it&\Rightarrow& \ss it \cz j{s[t+s_i/s_i]}\\ \MS\alpha jst\ss ir&\Rightarrow&\ss ir \MS\alpha j{s[r+s_i/s_i]}{t[r+s_i/s_i]} } where $s[t/s_i]$ denotes the substitution of $s_i$ with $t$ in $s$, with $s$ and $t$ being signals. The first additional rewrite rule was already introduced as equation (\ref{split}), while the other ones merely propagate the signal shift. Clearly also, one can dispose of $\ss it$ when it hits the end of the pattern command sequence. We will refer to this new set of rules as $\Rightarrow_S$.
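Since signals are sums modulo 2 of outcomes $s_i$, the substitution $s[t/s_i]$ has a compact encoding. A minimal sketch (our own representation): a signal is the set of qubit indices it sums over, addition is symmetric difference, and substituting $t$ for $s_i$ splices $t$ in wherever $s_i$ occurred.

```python
def subst(s, i, t):
    # s[t/s_i]: replace the summand s_i by the signal t (all mod 2)
    return (s - {i}) ^ t if i in s else s

s = {1, 2}   # the signal s_1 + s_2
t = {1, 3}   # the signal s_1 + s_3
assert subst(s, 1, t) == {1, 2, 3}   # s_2 + (s_1 + s_3)
assert subst(s, 4, t) == {1, 2}      # s_4 does not occur in s
# the rule's s[t + s_i / s_i] amounts to adding t wherever s_i occurs:
assert subst(s, 1, t ^ {1}) == s ^ t
```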
\section{Examples} In this section we develop some examples illustrating pattern composition, pattern standardisation, and signal shifting. We compare our implementations with those given in the reference paper~\cite{mqqcs}. To combine patterns one needs to rename their qubits, as we already noticed. We use the following concrete notation: if $\mathfrak P$ is a pattern over $\ens{1,\ldots,n}$, and $f$ is an injection, we write $\mathfrak P(f(1),\ldots,f(n))$ for the same pattern with qubits renamed according to $f$. We also write $\mathfrak P_2\circ\mathfrak P_1$ for pattern composition to ease reading.
\subsubsection*{Teleportation.} Consider the composite pattern $\mathfrak J(\beta)(2,3)\circ\mathfrak J(\alpha)(1,2)$ with computation space $\ens{1,2,3}$, inputs $\ens{1}$, and outputs $\ens{3}$. We run our standardisation procedure so as to obtain an equivalent standard pattern:
\AR{
\mathfrak J(\beta)(2,3)\circ\mathfrak J(\alpha)(1,2)&=&\cx3{s_{2}}\Ms{-\beta}2\tr{\et23\cx2{s_{1}}}\Ms{-\alpha}1\et12\\
&\Rightarrow_{EX}&\cx3{s_{2}}\tr{\Ms{-\beta}2\cx2{s_1}}\cz3{s_1}\Ms{-\alpha}1\et23\et12\\
&\Rightarrow_{MX}&\cx3{s_{2}}\cz3{s_{1}}\ms{-\beta}2{s_{1}}\Ms{-\alpha}1\et23\et12
}
Let us call the pattern just obtained $\mathfrak J(\alpha,\beta)$. If we take as a special case $\alpha=\beta=0$, we get:
\AR{
\cx3{s_2}\cz3{s_1}\Ms x2\Ms x1\et23\et12
}
and since we know that $\mathfrak J(0)$ implements $H$ and $H^2=I$, we conclude that this pattern implements the identity, or in other words it teleports qubit $1$ to qubit $3$. As it happens, this pattern, obtained by self-composition, is the same as the one given in the reference paper~\cite[p.14]{mqqcs}.
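As a side check, independent of the rewriting itself, the claim $H^2=I$ can be confirmed numerically; the matrix below is an assumed form of the generator $J(\alpha)$, matching its usual presentation in~\cite{generator04}.

```python
import numpy as np

def J(a):
    # assumed matrix form of the generator J(alpha); J(0) is the Hadamard gate H
    return np.array([[1, np.exp(1j * a)],
                     [1, -np.exp(1j * a)]]) / np.sqrt(2)

H = J(0)
# H^2 = I, so the alpha = beta = 0 pattern teleports qubit 1 to qubit 3
assert np.allclose(H @ H, np.eye(2))
```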
\subsubsection*{$x$-rotation.} Here is the reference implementation of an $x$-rotation~\cite[p.17]{mqqcs}, $R_x(\alpha)$:
\EQ{
\cx3{s_2}\cz3{s_1}\ms{-\alpha}2{s_1}\Ms x1\et23\et12
}
with computation space $\ens{1,2,3}$, inputs $\ens{1}$, and outputs $\ens{3}$. There is a natural question, which we might call the recognition problem, namely: how do we know this is implementing $R_x(\alpha)$? Of course there is the brute force answer, which we applied to compute our simpler patterns, and which consists in computing all four possible branches generated by the measurements at $1$ and $2$. Another possibility is to use the stabiliser formalism as explained in the reference paper~\cite{mqqcs}. Yet another possibility is to use \emph{pattern composition}, as we did before, and this is what we are going to do.

We know that $R_x(\alpha)=J({\alpha})H$ up to a global phase, hence the composite pattern $\mathfrak J({\alpha})(2,3)\circ\mathfrak H(1,2)$ implements $R_{x}(\alpha)$. Now we may standardise it:
\AR{
\mathfrak J({\alpha})(2,3)\circ\mathfrak H(1,2)&=&\cx3{s_2}\Ms{-\alpha}2\tr{\et23\cx2{s_1}}\Ms x1\et12\\
&\Rightarrow_{EX}&\cx3{s_2}\cz3{s_1}\tr{\Ms{-\alpha}2\cx2{s_1}}\Ms x1\et23\et12\\
&\Rightarrow_{MX}&\cx3{s_2}\cz3{s_1}\ms{-\alpha}2{s_1}\Ms x1\et23\et12
}
obtaining exactly the implementation we started with. Since our calculus preserves interpretations, we deduce that the implementation is correct.
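The identity $R_x(\alpha)=J(\alpha)H$ up to a global phase can also be confirmed numerically; this is only an illustrative sketch, with $J(\alpha)$ in its assumed matrix form.

```python
import numpy as np

def J(a):
    # assumed matrix form of the generator J(alpha); J(0) = H
    return np.array([[1, np.exp(1j * a)],
                     [1, -np.exp(1j * a)]]) / np.sqrt(2)

def Rx(a):
    return np.array([[np.cos(a / 2), -1j * np.sin(a / 2)],
                     [-1j * np.sin(a / 2), np.cos(a / 2)]])

def equal_up_to_phase(A, B):
    # rescale A by the phase read off at B's largest-magnitude entry
    i = np.unravel_index(np.argmax(np.abs(B)), B.shape)
    return np.allclose(A * (B[i] / A[i]), B)

a = 0.7
assert equal_up_to_phase(J(a) @ J(0), Rx(a))   # R_x(alpha) = J(alpha) H up to phase
```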
\subsubsection*{$z$-rotation.} We now have a method for synthesising further implementations, which we can use for instance with another rotation $R_z(\alpha)$. Again we know that $R_z(\alpha)=HR_x(\alpha)H$, and we already know how to implement both components $H$ and $R_x(\alpha)$.

Starting with the pattern $\mathfrak H(4,5)\circ\mathfrak R_x(\alpha)(2,3,4)\circ\mathfrak H(1,2)$ we get:
\AR{
\mathfrak H(4,5)\circ\mathfrak R_x(\alpha)(2,3,4)\circ\mathfrak H(1,2)=\\
\mathfrak H(4,5) \cx4{s_3}\cz4{s_2} \MS\al3{1+s_2}{} \Ms x2 \et34 \tr{\et23 \cx2{s_1}} \Ms x1 \et12 &\Rightarrow_{EX}&\\
\mathfrak H(4,5) \cx4{s_3}\cz4{s_2}\MS\al3{1+s_2}{}\Ms x2 \cx2{s_1} \tr{\et34\cz3{s_1}} \Ms x1 \et1{23} &\Rightarrow_{EZ}&\\
\mathfrak H(4,5) \cx4{s_3}\cz4{s_2}\MS\al3{1+s_2}{}\cz3{s_1} \tr{\Ms x2\cx2{s_1}} \Ms x1 \et1{234} &\Rightarrow_{MX}&\\
\mathfrak H(4,5) \cx4{s_3}\cz4{s_2} \tr{\MS\al3{1+s_2}{}\cz3{s_1}} \Ms x2 \Ms x1 \et1{234} &\Rightarrow_{MZ}&\\
\cx5{s_4} \Ms x4 \tr{\et45\cx4{s_3}}\cz4{s_2} \MS\al3{1+s_2}{s_1} \Ms x2 \Ms x1 \et1{234} &\Rightarrow_{EX}&\\
\cx5{s_4} \cz5{s_3} \tr{\Ms x4\cx4{s_3}} \cz4{s_2} \MS\al3{1+s_2}{s_1} \Ms x2 \Ms x1 \et1{2345} &\Rightarrow_{MX}&\\
\cx5{s_4} \cz5{s_3} \tr{\MS x4{s_3}{}\cz4{s_2}} \MS\al3{1+s_2}{s_1} \Ms x2 \Ms x1 \et1{2345} &\Rightarrow_{MZ}&\\
\cx5{s_4} \cz5{s_3} \MS x4{s_3}{s_2} \MS\al3{1+s_2}{s_1} \Ms x2 \Ms x1 \et1{2345}
}
To ease reading, $\et23\et12$ is shortened to $\et1{23}$, $\et12\et23\et34$ to $\et1{234}$, and $\MS{\alpha}i{1+s}{t}$ is used as shorthand for $\MS{-\alpha}i{s}{t}$.

Here, for the first time, we see $MZ$ rewritings, inducing the $Z$-action on measurements. The obtained standardised pattern can therefore be rewritten further using the extended calculus:
\AR{
\cx5{s_4} \cz5{s_3} \MS x4{s_3}{s_2} \MS\al3{1+s_2}{s_1} \Ms x2 \Ms x1 \et1{2345} &\Rightarrow_{S}&\\
\cx5{s_2+s_4}\cz5{s_1+s_3} \Ms x4 \MS\al3{1+s_2}{} \Ms x2\Ms x1 \et1{2345}
}
obtaining again the pattern given in the reference paper~\cite[p.5]{mqqcs}.

However, just as in the case of the $R_x$ rotation, we also have $R_z(\alpha)=HJ({\alpha})$ up to a global phase, hence the pattern $\mathfrak H(2,3)\circ\mathfrak J({\alpha})(1,2)$ also implements $R_{z}(\alpha)$, and we may standardise it:
\AR{
\mathfrak H(2,3)\circ\mathfrak J({\alpha})(1,2)&=&\cx3{s_2}\Ms x2 \tr{\et23\cx2{s_1}} \Ms{-\alpha}1\et12\\
&\Rightarrow_{EX}&\cx3{s_2} \cz3{s_1} \tr{\Ms x2 \cx2{s_1}}\Ms{-\alpha}1\et1{23}\\
&\Rightarrow_{MX}&\cx3{s_2} \cz3{s_1} \Ms x2{}{} \Ms{-\alpha}1\et1{23}
}
obtaining a 3-qubit standard pattern for the $z$-rotation, which is simpler than the preceding one, because it is based on the $\mathfrak J(\alpha)$ generators. Since the $z$-rotation $R_z(\alpha)$ is the same as the phase operator
\AR{
P(\alpha)=\MA{1&0\\0&\ei\alpha}
}
up to a global phase, we also obtain with the same pattern an implementation of the phase operator. In particular, if $\alpha=\frac\pi2$, using the extended calculus, we get the following pattern for $P(\frac\pi2)$: $\cx3{s_2}\cz3{s_1+1}\Ms x2\Ms y1\et1{23}$.
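Again as an illustration only, the two facts used here, $R_z(\alpha)=HJ(\alpha)$ up to a global phase and $P(\alpha)=R_z(\alpha)$ up to a global phase, can be checked numerically with the same assumed matrix for $J(\alpha)$:

```python
import numpy as np

def J(a):
    # assumed matrix form of the generator J(alpha); J(0) = H
    return np.array([[1, np.exp(1j * a)],
                     [1, -np.exp(1j * a)]]) / np.sqrt(2)

a = 1.1
P = np.array([[1, 0], [0, np.exp(1j * a)]])                # phase operator P(alpha)
Rz = np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])    # R_z(alpha)

assert np.allclose(J(0) @ J(a), P)                 # H J(alpha) is exactly P(alpha)
assert np.allclose(np.exp(1j * a / 2) * Rz, P)     # P(alpha) = e^{i a/2} R_z(alpha)
```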
\subsubsection*{General rotation.} The realisation of a general rotation based on the Euler decomposition of rotations as $R_x(\gamma)R_z(\beta)R_x(\alpha)$ would result in a 7-qubit pattern. We get a 5-qubit implementation based on the $J(\alpha)$ decomposition~\cite{generator04}:
\AR{
R(\alpha,\beta,\gamma)&=&J(0) J(\alpha) J(\beta) J(\gamma)
}
The extended standardisation procedure yields:
\AR{
\mathfrak J(0)(4,5) \mathfrak J(\alpha)(3,4) \mathfrak J(\beta)(2,3) \mathfrak J(\gamma)(1,2) &=&\\
\cx5{s_4}\Ms{0}4\et45 \cx4{s_3}\Ms{\alpha}3\et34 \cx3{s_2}\Ms{\beta}2 \tr{\et23 \cx2{s_1}} \Ms{\gamma}1\et12 &\Rightarrow_{EX}&\\
\cx5{s_4}\Ms{0}4\et45 \cx4{s_3}\Ms{\alpha}3\et34 \cx3{s_2} \tr{\Ms{\beta}2\cx2{s_1}} \cz3{s_1} \Ms{\gamma}1\et1{23} &\Rightarrow_{MX}&\\
\cx5{s_4}\Ms{0}4\et45 \cx4{s_3}\Ms{\alpha}3 \tr{\et34\cx3{s_2}\cz3{s_1}} \ms{\beta}2{s_1} \Ms{\gamma}1\et1{23} &\Rightarrow_{EXZ}&\\
\cx5{s_4}\Ms{0}4\et45 \cx4{s_3} \tr{\Ms{\alpha}3\cx3{s_2}\cz3{s_1}} \cz4{s_2} \ms{\beta}2{s_1} \Ms{\gamma}1\et1{234} &\Rightarrow_{MXZ}&\\
\cx5{s_4}\Ms{0}4 \tr{\et45\cx4{s_3}\cz4{s_2}} \MS{\alpha}3{s_2}{s_1} \ms{\beta}2{s_1} \Ms{\gamma}1\et1{234} &\Rightarrow_{EXZ}&\\
\cx5{s_4} \tr{\Ms{0}4\cx4{s_3}\cz4{s_2}} \cz5{s_3} \MS{\alpha}3{s_2}{s_1} \ms{\beta}2{s_1} \Ms{\gamma}1\et1{2345} &\Rightarrow_{MXZ}&\\
\cx5{s_4}\cz5{s_3} \MS{0}4{}{s_2} \MS{\alpha}3{s_2}{s_1} \ms{\beta}2{s_1} \Ms{\gamma}1\et1{2345} &\Rightarrow_{S}&\\
\cx5{s_2+s_4}\cz5{s_1+s_3} \Ms{0}4 \MS{\alpha}3{s_2}{} \ms{\beta}2{s_1} \Ms{\gamma}1\et1{2345}
}
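Numerically, since $J(\alpha)=HP(\alpha)$, the product $J(0)J(\alpha)J(\beta)J(\gamma)$ reduces, up to a global phase, to a $z$-$x$-$z$ Euler product, which is one way to see that four generators suffice for an arbitrary rotation; the precise angle conventions of $R(\alpha,\beta,\gamma)$ itself are those of~\cite{generator04}. A sketch of this check:

```python
import numpy as np

def J(a):
    # assumed matrix form of the generator J(alpha); J(0) = H
    return np.array([[1, np.exp(1j * a)],
                     [1, -np.exp(1j * a)]]) / np.sqrt(2)

def Rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def Rx(a):
    return np.array([[np.cos(a / 2), -1j * np.sin(a / 2)],
                     [-1j * np.sin(a / 2), np.cos(a / 2)]])

def equal_up_to_phase(A, B):
    i = np.unravel_index(np.argmax(np.abs(B)), B.shape)
    return np.allclose(A * (B[i] / A[i]), B)

al, be, ga = 0.3, 1.2, -0.5
lhs = J(0) @ J(al) @ J(be) @ J(ga)
rhs = Rz(al) @ Rx(be) @ Rz(ga)     # a z-x-z Euler product
assert equal_up_to_phase(lhs, rhs)
```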
\subsubsection*{CNOT ($\mathop{\wedge}\hskip-.4ex X$).} This is our first example with two inputs and two outputs. We use here the trivial pattern $\mathfrak I$ with computation space $\ens1$, inputs $\ens1$, outputs $\ens1$, and empty command sequence, which implements the identity over $\hil 1$.

One has $\mathop{\wedge}\hskip-.4ex X=(I\otimes H)\mathop{\wedge}\hskip-.4ex Z(I\otimes H)$, so we get a pattern using 4 qubits over $\ens{1,2,3,4}$, with inputs $\ens{1,2}$, and outputs $\ens{1,4}$, where one notices that inputs and outputs intersect on the control qubit $\ens1$:
\AR{
(\mathfrak I(1)\otimes\mathfrak h(3,4)) \mathop{\wedge}\hskip-.4ex\mathfrak Z(1,3) (\mathfrak I(1)\otimes\mathfrak h(2,3)) &=& \cx4{s_3} \Ms x3 \et34 \et13 \cx3{s_2} \Ms x2 \et23
}
By standardising:
\AR{
\cx4{s_3} \Ms x3 \et34 \tr{\et13\cx3{s_2}} \Ms x2 \et23 &\Rightarrow_{EX}&\\
\cx4{s_3} \cz1{s_2} \Ms x3 \tr{\et34\cx3{s_2}} \Ms x2 \et13\et23 &\Rightarrow_{EX}&\\
\cx4{s_3} \cz4{s_2} \cz1{s_2} \tr{\Ms x3\cx3{s_2}} \Ms x2 \et13\et23\et34 &\Rightarrow_{MX}&\\
\cx4{s_3} \cz4{s_2} \cz1{s_2} \Ms x3 \Ms x2 \et13\et23\et34
}
Note that we are not using here the $\et1{234}$ abbreviation, because the underlying structure of entanglement is not a chain. This pattern was already described in Aliferis and Leung's paper~\cite{AL04}. In their original presentation the authors actually use an explicit identity pattern (using the teleportation pattern $\mathfrak J(0,0)$ presented above), but we know from the careful presentation of composition that this is not necessary.
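The operator identity this pattern rests on is easy to confirm numerically; a minimal sketch with explicit matrices, qubit 1 being the control:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# CNOT = (I (x) H) . CZ . (I (x) H): conjugating the target of CZ by H turns Z into X
assert np.allclose(np.kron(I2, H) @ CZ @ np.kron(I2, H), CNOT)
```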
$\Box$\subsubsection$\Box$*$\Box${GHZ$\Box$.$\Box$}$\Box$ We$\Box$ present$\Box$ now$\Box$ a$\Box$ family$\Box$ of$\Box$ patterns$\Box$ preparing$\Box$ the$\Box$ GHZ$\Box$ entangled$\Box$ states$\Box$ $\Box$$$\Box$\ket$\Box${0$\Box$\ldots0$\Box$}$\Box$+$\Box$\ket$\Box${1$\Box$\ldots1$\Box$}$\Box$$$\Box$.$\Box$ One$\Box$ has$\Box$:$\Box$ $\Box$\AR$\Box${$\Box$ $\Box$\hbox$\Box${GHZ$\Box$}$\Box$(n$\Box$)$\Box$&$\Box$=$\Box$&$\Box$ $\Box$(H$\Box$_n$\Box$ $\Box$\mathop{\wedge}\hskip-.4ex$\Box$ Z$\Box$_$\Box${n$\Box$-1$\Box$ n$\Box$}$\Box$ $\Box$\ldots$\Box$ H$\Box$_2$\Box$ $\Box$\mathop{\wedge}\hskip-.4ex$\Box$ Z$\Box$_$\Box${1$\Box$ 2$\Box$}$\Box$)$\Box$\ket$\Box${$\Box$\hskip$\Box$-$\Box$.4ex$\Box$+$\Box$\hskip$\Box$-$\Box$.4ex$\Box$\ldots$\Box$\hskip$\Box$-$\Box$.4ex$\Box$+$\Box$}$\Box$ $\Box$}$\Box$ and$\Box$ by$\Box$ combining$\Box$ the$\Box$ patterns$\Box$ for$\Box$ $\Box$$$\Box$\mathop{\wedge}\hskip-.4ex$\Box$ Z$\Box$$$\Box$ and$\Box$ $\Box$$H$\Box$$$\Box$,$\Box$ we$\Box$ obtain$\Box$ a$\Box$ pattern$\Box$ with$\Box$ computation$\Box$ space$\Box$ $\Box$$$\Box$\ens$\Box${1$\Box$,2$\Box$,2$\Box$'$\Box$,$\Box$\ldots$\Box$,$\Box$ n$\Box$,$\Box$ n$\Box$'$\Box$}$\Box$$$\Box$,$\Box$ no$\Box$ inputs$\Box$,$\Box$ outputs$\Box$ $\Box$$$\Box$\ens$\Box${1$\Box$,2$\Box$'$\Box$,$\Box$\ldots$\Box$,n$\Box$'$\Box$}$\Box$$$\Box$,$\Box$ and$\Box$ the$\Box$ following$\Box$ command$\Box$ sequence$\Box$:$\Box$ $\Box$\AR$\Box${$\Box$ $\Box$\cx$\Box${n$\Box$'$\Box$}$\Box${s$\Box$_n$\Box$}$\Box$\Ms$\Box$ x$\Box${n$\Box$}$\Box$\et$\Box${n$\Box$}$\Box${n$\Box$'$\Box$}$\Box$ $\Box$\et$\Box${$\Box$(n$\Box$-1$\Box$)$\Box$'$\Box$}$\Box${n$\Box$}$\Box$ $\Box$\ldots$\Box$ $\Box$\cx$\Box${2$\Box$'$\Box$}$\Box${s$\Box$_2$\Box$}$\Box$\Ms$\Box$ x$\Box${2$\Box$}$\Box$\et$\Box${2$\Box$}$\Box${2$\Box$'$\Box$}$\Box$ $\Box$\et$\Box${1$\Box$}$\Box${2$\Box$}$\Box$ $\Box$}$\Box$ Under$\Box$ that$\Box$ form$\Box$,$\Box$ the$\Box$ only$\Box$ apparent$\Box$ way$\Box$ to$\Box$ run$\Box$ the$\Box$ pattern$\Box$ 
is to execute all commands in sequence. The situation changes completely when we bring the pattern to extended standard form:
\AR{
\cx{n'}{s_n} \Ms x{n}\et{n}{n'} \et{(n-1)'}{n} \ldots \cx{3'}{s_3} \Ms x{3} \et{3}{3'} \tr{\et{2'}{3} \cx{2'}{s_2}} \Ms x{2}\et{2}{2'} \et{1}{2}
&\Rightarrow&
\cx{n'}{s_n} \cx{2'}{s_2} \Ms x{n}\et{n}{n'} \et{(n-1)'}{n} \ldots \cx{3'}{s_3} \tr{\Ms x{3}\cz{3}{s_2}} \Ms x{2} \et{3}{3'} \et{2'}{3} \et{2}{2'} \et{1}{2} \\
&\Rightarrow&
\cx{n'}{s_n} \cx{2'}{s_2} \Ms x{n}\et{n}{n'} \et{(n-1)'}{n} \ldots \cx{3'}{s_3} \MS x{3}{}{s_2} \Ms x{2} \et{3}{3'} \et{2'}{3} \et{2}{2'} \et{1}{2} \\
&\Rightarrow^\star&
\cx{n'}{s_n} \ldots \cx{3'}{s_3} \cx{2'}{s_2} \MS x{n}{}{s_{n-1}} \ldots \MS x{3}{}{s_2} \Ms x{2} \et{n}{n'} \et{(n-1)'}{n} \ldots \et{3}{3'} \et{2'}{3} \et{2}{2'} \et{1}{2} \\
&\Rightarrow_S&
\cx{n'}{s_2+s_3+\cdots+s_n} \ldots \cx{3'}{s_2+s_3} \cx{2'}{s_2} \Ms x{n} \ldots \Ms x{3} \Ms x{2} \et{n}{n'} \et{(n-1)'}{n} \ldots \et{3}{3'} \et{2'}{3} \et{2}{2'} \et{1}{2}
}
All measurements are now independent of each other; it is therefore possible, after the entanglement phase, to do all of them in one round, and in a subsequent round to do all the local corrections. In other words, the obtained pattern has constant depth complexity $2$.
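The final signal-shifting step ($\Rightarrow_S$) simply substitutes each removed $t$-dependency into all later dependencies, which is what produces the accumulated sums $s_2+\cdots+s_k$. This bookkeeping can be mimicked symbolically; the Python sketch below is our own encoding (not from the paper), representing a signal by the set of measurement indices it sums and combining sets with symmetric difference, since signals add modulo 2:

```python
# Symbolic model of signal shifting on a measurement chain.
# A signal is a set of measured-qubit indices; signals add mod 2,
# so sets are combined with symmetric difference (^).

def shift_signals(t_deps, x_deps):
    """t_deps[k]: t-dependency of the measurement of qubit k (a set of indices).
    x_deps[q]:  s-dependency of the X correction on qubit q.
    Signal shifting removes every t-dependency by substituting
    s_k -> s_k + t_k into all later dependencies.
    Measurements are processed in increasing chain order."""
    subst = {}  # what the signal s_k stands for after shifting
    for k in sorted(t_deps):
        t = set()
        for j in t_deps[k]:
            t ^= subst.get(j, {j})  # express t_k in shifted signals
        subst[k] = {k} ^ t          # s_k now stands for s_k + t_k
    shifted = {}
    for q, deps in x_deps.items():
        acc = set()
        for j in deps:
            acc ^= subst.get(j, {j})
        shifted[q] = acc
    return shifted
```

For the chain above, the measurement of qubit $k$ carries the $t$-dependency $s_{k-1}$ and the $X$ correction on $k'$ the $s$-dependency $s_k$; shifting turns the latter into $s_2+\cdots+s_k$, as in the last line of the derivation.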

\subsubsection*{Controlled-$U$}
This final example presents another instance where standardisation obtains a low depth complexity.

For any 1-qubit unitary $U$, one has the following decomposition of $\mathop{\wedge}\hskip-.4ex U$ in terms of the generators $J(\alpha)$~\cite{generator04}:
\AR{
\mathop{\wedge}\hskip-.4ex U_{12} &=&
\Rr{1}{0}\Rr{1}{\alpha'}
\Rr{2}{0}\Rr{2}{\beta+\pi}
\Rr{2}{-\frac\ga2}\Rr{2}{-\frac\pi2}
\Rr{2}{0}\mathop{\wedge}\hskip-.4ex Z_{12}
\Rr{2}{\frac\pi2}\Rr{2}{\frac\ga2}
\Rr{2}{\frac{-\pi-\da-\beta}2}
\Rr{2}{0}\mathop{\wedge}\hskip-.4ex Z_{12}
\Rr{2}{\frac{-\beta+\da-\pi}2}
}
with $\alpha'=\alpha+\frac{\beta+\gamma+\da}2$. By translating each $J$ operator to its corresponding pattern, we get the following wild pattern for $\mathop{\wedge}\hskip-.4ex U$:
\AR{
\cx{C}{s_B}\Ms{0}{B}\et{B}{C}
\cx{B}{s_A}\Ms{-\alpha'}{A}\et{A}{B}
\cx{k}{s_j}\Ms{0}{j}\et{j}{k}
\cx{j}{s_i}\Ms{-\beta-\pi}{i}\et{i}{j} \\
\cx{i}{s_h}\Ms{\frac\ga2}{h}\et{h}{i}
\cx{h}{s_g}\Ms{\frac\pi2}{g}\et{g}{h}
\cx{g}{s_f}\Ms{0}{f}\et{f}{g}
\et{A}{f}
\cx{f}{s_e}\Ms{-\frac\pi2}{e}\et{e}{f} \\
\cx{e}{s_d}\Ms{-\frac\ga2}{d}\et{d}{e}
\cx{d}{s_c}\Ms{\frac{\pi+\da+\beta}2}{c}\et{c}{d}
\cx{c}{s_b}\Ms{0}{b}\et{b}{c}
\et{A}{b}
\cx{b}{s_a}\Ms{\frac{\beta-\da+\pi}2}{a}\et{a}{b}
}
Figure~\ref{wildgraph} shows the underlying entanglement graph for the $\mathop{\wedge}\hskip-.4ex U$ pattern. In order to run the wild form of the pattern, one needs to follow the graph structure, and hence one has to perform the measurement commands in sequence.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{graph.eps}
\caption{The underlying entanglement graph for the $\mathop{\wedge}\hskip-.4ex U$ pattern.}
\label{wildgraph}
\end{center}
\end{figure}
Extended standardisation yields:
\AR{
\cz{k}{s_i+s_g+s_e+s_c+s_a}
\cx{k}{s_j+s_h+s_f+s_d+s_b}
\cx{C}{s_B}
\cz{C}{s_A+s_e+s_c} \\
\Ms{0}{B} \Ms{-\alpha'}{A} \Ms{0}{j}
\ms{\beta-\pi}{i}{s_h+s_f+s_d+s_b}
\ms{-\frac\ga2}{h}{s_g+s_e+s_c+s_a}
\ms{\frac\pi2}{g}{s_f+s_d+s_b} \\
\Ms{0}{f}
\ms{-\frac\pi2}{e}{s_d+s_b}
\ms{\frac\ga2}{d}{s_c+s_a}
\ms{\frac{\pi-\da-\beta}2}{c}{s_b}
\Ms{0}{b}
\Ms{\frac{-\beta+\da+\pi}2}{a} \\
\et{B}{C}\et{A}{B}
\et{j}{k}\et{i}{j}\et{h}{i}
\et{g}{h}\et{f}{g}\et{A}{f}\et{e}{f}\et{d}{e}\et{c}{d}\et{b}{c}\et{a}{b}\et{A}{b}
}
Figure~\ref{dependgraph} shows the dependency structure of the resulting standard pattern for $\mathop{\wedge}\hskip-.4ex U$; one sees it has depth complexity $7$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{depend.eps}
\caption{The dependency graph for the standard $\mathop{\wedge}\hskip-.4ex U$ pattern.}
\label{dependgraph}
\end{center}
\end{figure}
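A sanity check on the generators themselves: $J(\alpha)$ is implemented by the two-qubit pattern $X_2^{s_1}\, M_1^{-\alpha}\, E_{12}$, as in the paper's earlier examples. The NumPy sketch below (our own state-vector encoding, not the paper's notation) simulates both measurement branches and checks that, after the correction, each returns $J(\alpha)\psi$ up to normalisation:

```python
import numpy as np

def J(a):
    # generator J(alpha) = (1/sqrt 2) [[1, e^{ia}], [1, -e^{ia}]]
    return np.array([[1, np.exp(1j * a)], [1, -np.exp(1j * a)]]) / np.sqrt(2)

def run_J_pattern(psi, a, s1):
    """Run X_2^{s_1} M_1^{-a} E_12 on input psi (qubit 1),
    postselecting the branch with measurement outcome s1."""
    plus = np.array([1, 1]) / np.sqrt(2)
    state = np.kron(psi, plus)               # prepare ancilla qubit 2 in |+>
    state = np.diag([1, 1, 1, -1]) @ state   # E_12: controlled-Z
    sign = 1 if s1 == 0 else -1
    # measure qubit 1 at angle -a: project on <(+/-)_{-a}| = (<0| +- e^{ia}<1|)/sqrt 2
    bra = np.array([1, sign * np.exp(1j * a)]) / np.sqrt(2)
    out = bra @ state.reshape(2, 2)          # residual state of qubit 2
    if s1 == 1:
        out = out[::-1]                      # X correction on qubit 2
    return out                               # unnormalised (norm 1/sqrt 2)
```

Both branches agree with $J(\alpha)\psi$ exactly once rescaled by $\sqrt2$, which is the equal probability $1/2$ of each outcome.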

\section{The no dependency theorems}
From standardisation we can also infer results related to dependencies. We start with a simple observation which is a direct consequence of standardisation.

\begin{lemme}
Let $\mathfrak P$ be a pattern implementing some unitary $U$, and suppose $\mathfrak P$'s command sequence has measurements only of the $M^x$ and $M^y$ kind; then $U$ has a standard implementation having only independent measurements, all of the $M^x$ and $M^y$ kind (and therefore of depth complexity at most 2).
\end{lemme}
Write $\mathfrak P'$ for the standard pattern associated to $\mathfrak P$. By equations (\ref{mx}) and (\ref{my}), the $X$-actions can be eliminated from $\mathfrak P'$, and the $Z$-actions can then be eliminated by using the extended calculus. The final pattern still implements $U$, no longer has any dependent measurements, and therefore has depth complexity at most 2. $\Box$

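The reason the $X$- and $Z$-actions can be absorbed here is that the Pauli operators merely permute the $M^x$ and $M^y$ measurement bases up to a global phase, so a dependent correction only relabels the outcomes. The small NumPy sketch below (our own check, writing $\ket{\pm_\alpha}=(\ket0\pm e^{i\alpha}\ket1)/\sqrt2$ for the $M^\alpha$ basis) confirms this for the Pauli angles, and shows it fails for a generic angle:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def basis(a):
    # measurement basis of M^alpha: |+_a>, |-_a> = (|0> +- e^{ia}|1>)/sqrt 2
    return [np.array([1,  np.exp(1j * a)]) / np.sqrt(2),
            np.array([1, -np.exp(1j * a)]) / np.sqrt(2)]

def same_up_to_phase(u, v):
    # unit vectors are equal up to a global phase iff |<u|v>| = 1
    return np.isclose(abs(np.vdot(u, v)), 1.0)

def absorbs_paulis(a):
    """True iff X and Z map the M^a basis to itself, up to phases."""
    B = basis(a)
    return all(any(same_up_to_phase(P @ b, c) for c in B)
               for P in (X, Z) for b in B)

assert absorbs_paulis(0.0)          # M^x
assert absorbs_paulis(np.pi / 2)    # M^y
assert not absorbs_paulis(0.7)      # generic angle: the X-action changes the basis
```

For a generic angle the $X$-action tilts the basis to $M^{-\alpha}$, which is why dependent corrections are essential outside the Pauli measurements.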
\begin{theo}
Let $U$ be a unitary operator; then $U$ is in the Clifford group iff there exists a pattern $\mathfrak P$ implementing $U$ and having measurements only of the $M^x$ and $M^y$ kind.
\end{theo}
The ``only if'' direction is easy, since we have seen in the example section standard patterns for $\mathop{\wedge}\hskip-.4ex X$, $H$ and $P(\frac\pi2)$ which use only $M^x$ and $M^y$ measurements. Hence any Clifford operator can be implemented by a combination of these patterns. By the lemma above, we know we can actually choose these patterns to be standard.

For the ``if'' direction, we prove that $U$ belongs to the normaliser of the Pauli group, and hence by definition to the Clifford group. In order to do so, we use the standard form of $\mathfrak P$, written as $\mathfrak P'= C_{\mathfrak P'}M_{\mathfrak P'}E_{\mathfrak P'}$, which still implements $U$ and has only $M^x$ and $M^y$ measurements.

Let $i$ be an input qubit, and consider the pattern $\mathfrak P''={\mathfrak P}C_i$, where $C_i$ is either $X_i$ or $Z_i$. Clearly $\mathfrak P''$ implements $UC_i$. First, one has:
\AR{
C_{\mathfrak P'}M_{\mathfrak P'}E_{\mathfrak P'} C_i &\Rightarrow_{EC}^\star&
C_{\mathfrak P'}M_{\mathfrak P'}C' E_{\mathfrak P'}
}
for some \emph{non-dependent} sequence of corrections $C'$, which, up to free commutations, can be written uniquely as $C'_OC''$, where $C'_O$ applies to output qubits, and therefore commutes with $M_{\mathfrak P'}$, and $C''$ applies to non-output qubits (which are therefore all measured in $M_{\mathfrak P'}$). So, by commuting $C'_O$ both through $M_{\mathfrak P'}$ and $C_{\mathfrak P'}$ (up to a global phase), one gets:
\AR{
C_{\mathfrak P'}M_{\mathfrak P'}C' E_{\mathfrak P'} &\Rightarrow^\star&
C'_OC_{\mathfrak P'}M_{\mathfrak P'}C'' E_{\mathfrak P'}
}
Using equations (\ref{mx}), (\ref{my}), and the extended calculus to eliminate the remaining $Z$-actions, one gets:
\AR{
M_{\mathfrak P'}C'' &\Rightarrow_{MC,S}^\star& SM_{\mathfrak P'}
}
for some product $S=\prod_{\ens{j\in J}}\ss j1$ of constant shiftings, applying to some subset $J$ of the non-output qubits. So:
\AR{
C'_OC_{\mathfrak P'}M_{\mathfrak P'}C'' E_{\mathfrak P'} &\Rightarrow^\star&
C'_OC_{\mathfrak P'}SM_{\mathfrak P'}E_{\mathfrak P'} \\
&\Rightarrow^\star&
C'_OC''_OC_{\mathfrak P'}M_{\mathfrak P'}E_{\mathfrak P'}
}
where $C''_O$ is a further constant correction obtained by shifting $C_{\mathfrak P'}$ with $S$. This proves that $\mathfrak P''$ also implements $C'_OC''_OU$, and therefore $UC_i=C'_OC''_OU$, which completes the proof, since $C'_OC''_O$ is a non-dependent correction. $\Box$
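The normaliser characterisation used in the ``if'' direction is easy to probe numerically for the concrete generators of the ``only if'' direction: conjugating any Pauli by $H$, $P(\frac\pi2)$ or $\mathop{\wedge}\hskip-.4ex Z$ returns a Pauli up to phase, while a non-Clifford gate such as $P(\frac\pi4)$ does not. A brief NumPy sketch (our own check, not part of the proof):

```python
import numpy as np
from itertools import product

Id = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

paulis1 = [Id, X, Y, Z]
paulis2 = [np.kron(p, q) for p, q in product(paulis1, paulis1)]

def proportional_to_pauli(M, paulis):
    # for unitary M: |tr(P^dag M)| = dim  iff  M is P up to a global phase
    d = M.shape[0]
    return any(np.isclose(abs(np.trace(P.conj().T @ M)), d) for P in paulis)

def normalises_paulis(U, paulis):
    """True iff U P U^dag is a Pauli (up to phase) for every Pauli P."""
    return all(proportional_to_pauli(U @ P @ U.conj().T, paulis) for P in paulis)

assert normalises_paulis(H, paulis1)                        # Hadamard
assert normalises_paulis(np.diag([1, 1j]), paulis1)         # P(pi/2)
assert normalises_paulis(np.diag([1, 1, 1, -1]), paulis2)   # ctrl-Z
assert not normalises_paulis(np.diag([1, np.exp(1j * np.pi / 4)]), paulis1)
```

The last line shows that $P(\frac\pi4)$ sends $X$ outside the Pauli group, consistent with the theorem: its pattern must use a non-Pauli measurement angle.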

The ``only if'' part of this theorem already appears in previous work~\cite[p.18]{mqqcs}.

We can further prove that dependencies are crucial for the universality of the model. Observe first that if a pattern has no measurements, and hence no dependencies, then it follows from (D2) that $V=O$, \textit{i.e.}, all qubits are outputs. Therefore the computation steps involve only $X$, $Z$ and $\mathop{\wedge}\hskip-.4ex Z$, and it is not surprising that they compute a unitary in the Clifford group. The general argument essentially consists in showing that when there are measurements, but still no dependencies, the measurements play no part in the result.

\begin{theo}
Let $\mathfrak P$ be a pattern implementing some unitary $U$, and suppose $\mathfrak P$'s command sequence has no dependencies; then $U$ is in the Clifford group.
\end{theo}
Write $\mathfrak P'$ for the standard pattern associated to $\mathfrak P$. Since rewriting is sound, $\mathfrak P'$ still implements $U$, and since rewriting never creates any dependency, it still has no dependencies. In particular, the corrections one finds at the end of $\mathfrak P'$, call them $C$, bear no dependencies. Erasing them from $\mathfrak P'$ results in a pattern $\mathfrak P''$ which is still standard, still deterministic, and which implements $U':=C\st U$.

Now how does the pattern $\mathfrak P''$ run on some input $\phi$? First $\phi\otimes\ket{\hskip-.4ex+\ldots+}$ goes by the entanglement phase to some $\psi\in\hil V$, and is then subjected to a sequence of independent 1-qubit measurements. Pick a basis $\mathcal B$ spanning the Hilbert space generated by the non-output qubits $\hil{V\setminus O}$ and associated to this sequence of measurements.

Since $\hil V=\hil O\otimes \hil{V\setminus O}$ and $\hil{V\setminus O}=\oplus_{\phi_b\in\mathcal B}[\phi_b]$, where $[\phi_b]$ is the linear subspace generated by $\phi_b$, by distributivity $\psi$ decomposes uniquely as:
\AR{
\psi=\sum_{\phi_b\in\mathcal B} \phi_b\otimes x_b
}
where $\phi_b$ ranges over $\mathcal B$, and $x_b\in\hil O$. Now since $\mathfrak P''$ is deterministic, there exist an $x$ and scalars $\lambda_b$ such that $x_b=\lambda_b x$. Therefore $\psi$ can be written $\psi'\otimes x$, for some $\psi'$. It follows in particular that the output of the computation will still be $x$ (up to a scalar), no matter what the actual measurements are. One can therefore choose them to be all of the $\M x{}$ kind, and by the preceding theorem $U'$ is in the Clifford group; so is $U=CU'$, since $C$ is a Pauli operator. $\Box$

From this section, we conclude in particular that any universal set of patterns has to include dependencies (by the preceding theorem), and also needs to use measurements $M^\alpha$ with $\alpha\neq0$ modulo $\frac\pi2$ (by the theorem before). This is indeed the case for the universal set $\mathfrak J(\alpha)$ and $\mathop{\wedge}\hskip-.4ex{\mathfrak Z}$.

\section{Conclusion}
We have presented a calculus for 1-qubit measurement-based quantum computing. We have seen that pattern combinations allow for a structured proof of universality, which also results in parsimonious implementations. We have further shown that our calculus defines a quadratic-time standardisation algorithm transforming any pattern to a standard form where entanglement is done first, then measurements, then local corrections. Finally, we have inferred from this procedure that patterns with no dependencies, or using only Pauli measurements, may only implement unitaries in the Clifford group.

An obvious question is whether one can extend these ideas to other measurement-based models, perhaps based on different families of entanglement operators, more general measurements, and other types of local corrections. This is a matter which we wish to explore further. For now, it is already clear that both the notation and the calculus can be extended to the teleportation model, which is based on 2-qubit measurements. This actually shows that teleportation models are embeddable in the one-way model in a very strong sense. We will return to this particular question elsewhere.

We also feel that the methods explored here can be stretched further and made relevant to the study of error propagation and error correction, but this demands using mixed states and interpreting patterns as cp-maps.

Finally, there is also a clear reading of dependencies as classical communications, while local corrections can be thought of as local quantum operations in a multipartite scenario. Along this reading, standardisation pushes non-local operations to the beginning of a distributed computation, and it seems the measurement calculus could prove useful in the area of quantum protocols.

\begin{thebibliography}{DKP04}

\bibitem[AL04]{AL04}
P.~Aliferis and D.~W. Leung.
\newblock Computation by measurements: a unifying picture.
\newblock quant-ph/0404082, April 2004.

\bibitem[BR04]{BR04}
D.~E. Browne and T.~Rudolph.
\newblock Efficient linear optical quantum computation.
\newblock quant-ph/0405157, 2004.

\bibitem[CAJ04]{CMJ04}
S.~R. Clark, C.~Moura Alves, and D.~Jaksch.
\newblock Controlled generation of graph states for quantum computation in spin chains.
\newblock quant-ph/0406150, 2004.

\bibitem[DKP04]{generator04}
V.~Danos, E.~Kashefi, and P.~Panangaden.
\newblock Robust and parsimonious realisations of unitaries in the one-way model.
\newblock quant-ph/0411071, 2004.

\bibitem[HEB04]{graphstates}
M.~Hein, J.~Eisert, and H.~J. Briegel.
\newblock Multi-party entanglement in graph states.
\newblock {\em Phys. Rev. A}, 69:62311--62333, 2004.

\bibitem[ND04]{ND04}
M.~A. Nielsen and C.~M. Dawson.
\newblock Fault-tolerant quantum computation with cluster states.
\newblock quant-ph/0405134, 2004.

\bibitem[Nie04]{Nielsen04}
M.~A. Nielsen.
\newblock Optical quantum computation using cluster states.
\newblock quant-ph/0402005, 2004.

\bibitem[RBB03]{mqqcs}
R.~Raussendorf, D.~E. Browne, and H.~J. Briegel.
\newblock Measurement-based quantum computation on cluster states.
\newblock {\em Physical Review A}, 68(022312), 2003.

\end{thebibliography}
$\Box$
\section{Appendix} We prove here that standardisation has indeed the properties quoted in the body of the paper. First, we need a lemma:
\begin{lemme}[Termination]
For all $\mathfrak P$, there exist finitely many $\mathfrak P'$ such that $\mathfrak P\Rightarrow^\star\mathfrak P'$.
\end{lemme}
Suppose $\mathfrak P$ has command sequence $A_n\ldots A_1$, and define for $A_i=\et ij$, $d(A_i)=i$, and for $A_j=\cx us$, $d(A_j)=n-j$. Define further:
\AR{
d(\mathfrak P)&=&(\sum_{E\in\mathfrak P}d(E),\sum_{C\in\mathfrak P}d(C))
}
This measure decreases lexicographically under rewriting; in other words, $\mathfrak P\Rightarrow\mathfrak P'$ implies $d(\mathfrak P)>d(\mathfrak P')$, where $<$ is the lexicographic ordering on $\mathbb N^2$. Let us inspect all cases. First, when one applies $EC$, the first coordinate strictly diminishes (the second does not always, because of the duplication involved if $C=X$); when $MC$, the second strictly diminishes and the first stays the same or diminishes; when $EA$, the first strictly diminishes (because we dropped the case when $A$ is itself an $E$), and maybe the second; when $AC$, the second strictly diminishes, and the first stays the same or diminishes (when $A=E$).

Therefore, all rewritings are finite, and since the system is finitely branching (there are no more than $n$ possible single-step rewrites on a given sequence of length $n$), we get the statement of the theorem. $$\Box$$

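The lexicographic decrease of the measure can be illustrated with a toy computation. The encoding below (command kinds as single letters, positions counted from the first-applied command $A_1$) is our own simplification of the paper's command syntax, not part of the proof:

```python
def measure(seq):
    """Termination measure d(P): seq[0] is A_1 (the first command applied);
    entanglement commands 'E' contribute their position i, corrections 'C'
    contribute n - j; measurements 'M' contribute to neither coordinate."""
    n = len(seq)
    e = sum(i for i, kind in enumerate(seq, start=1) if kind == 'E')
    c = sum(n - j for j, kind in enumerate(seq, start=1) if kind == 'C')
    return (e, c)

# An EA rewrite commutes an entanglement command past the command applied
# just before it, moving E one position towards the start of the sequence.
before = ['M', 'E', 'C']        # A_1 = M, A_2 = E, A_3 = C
after = ['E', 'M', 'C']         # E pushed to position 1
assert measure(after) < measure(before)   # (1, 0) < (2, 0): strict drop
```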
It is not too difficult to strengthen the result above, by showing that the longest possible rewriting of $\mathfrak P$ is quadratic in $n$, where $n$ is the length of $\mathfrak P$'s command sequence.

Say $\mathfrak P$ is \emph{standard} if there is no $\mathfrak P'$ such that $\mathfrak P\Rightarrow\mathfrak P'$.

\begin{prop}[Standardisation]
For all $\mathfrak P$, there exists a unique standard $\mathfrak P'$ such that $\mathfrak P\Rightarrow^\star\mathfrak P'$, and $\mathfrak P'$ satisfies the (EMC) condition.
\end{prop}
Since the rewriting system is terminating, confluence follows from local confluence (meaning that whenever two rewritings can be applied, both transforms can be rewritten further to a same third expression). Then, uniqueness of the standard form is an easy consequence (actually, for terminating rewriting systems, unicity of standard forms and confluence are equivalent). Looking for critical pairs, that is, occurrences of three successive commands where two rules can be applied simultaneously, one finds that there are only two types: $\et ijM_kC_k$ with $i$, $j$ and $k$ all distinct, and $\et ijM_kC_{l}$ with $k$ and $l$ distinct. In both cases local confluence is easily verified.

Suppose now ${\mathfrak P'}$ does not satisfy (EMC). Then either there is a pattern $EA$ with $A$ not of type $E$, or there is a pattern $AC$ with $A$ not of type $C$. In the former case, $E$ and $A$ must operate on overlapping qubits, else one may apply a free commutation rule, and $A$ may not be a $C$ since in this case one may apply an $EC$ rewrite. The only remaining case is when $A$ is of type $M$, overlapping $E$'s qubits, but this is what condition (D1) forbids, and since (D1) is preserved under rewriting, this contradicts the assumption. The latter case is even simpler. $$\Box$$

\subsection{Discussion} This is what we wanted: we have shown that under rewriting any pattern can be put in (EMC) form. We actually proved a bit more, namely that the standard form obtained is unique.

However, one has to be a bit careful about the significance of this additional piece of information. Note first that unicity is obtained because we dropped the $CC$ free commutations, and all $EE$ commutations, thus having a very rigid notion of command sequence. One cannot put them back as rewriting rules, since they obviously ruin termination and uniqueness of standard forms.

A reasonable thing to do would be to take this set of equations as generating an equivalence relation on command sequences, call it $\equiv$, and hope to strengthen the results obtained so far by proving that all reachable standard forms are equivalent.

But this is too naive a strategy, since $\et12\Cx1\Cx2\equiv\et12\Cx2\Cx1$, and:
\AR{
\et12\cx1s\cx2t &\Rightarrow^\star& \cx1s\cz2s\cx2t\cz1t\et12\\
&\equiv& \cx1s\cz1t\cz2s\cx2t\et12
}
obtaining an expression which is not symmetric in $1$ and $2$. To conclude, one has to extend $\equiv$ to include the additional equivalence $\cx1s\cz1t\equiv\cz1t\cx1s$, which fortunately is sound since these two operators are equal up to a global phase. We conjecture that this enriched equivalence is preserved.

\end{document}
\begin{document}
\title{Distinguishing quantum states using time travelling qubits in the presence of thermal environments}
\author{Bartosz Dziewit} \affiliation{Institute of Physics, University of Silesia in Katowice, 40-007 Katowice, Poland} \author{Monika Richter} \affiliation{Institute of Physics, University of Silesia in Katowice, 40-007 Katowice, Poland} \author{Jerzy Dajka} \affiliation{Institute of Physics, University of Silesia in Katowice, 40-007 Katowice, Poland} \affiliation{Silesian Center for Education and Interdisciplinary Research, University of Silesia in Katowice, 41-500 Chorz\'{o}w, Poland}
\begin{abstract} We consider quantum circuits with time travel designed for distinguishing specific non--orthogonal quantum states in the two most popular models: Deutsch's and the post--selected one. We modify these circuits by including a weakly coupled thermal environment. Using the Davies approximation we study how thermal noise affects the ability of the circuits to distinguish non--orthogonal quantum states. We show that for a purely dephasing environment the 'paradoxical power' of such circuits remains preserved. We also present a physics--based argument for the conditions of validity of the maximum entropy rule introduced by David Deutsch for resolving the uniqueness ambiguity in a circuit with time travel. \end{abstract} \pacs{03.67.-a, 03.65.Yz, 03.67.Dd, 04.20.Gz}
\maketitle
\section{Introduction}
The impossibility of distinguishing non--orthogonal quantum states is a bedrock granting the safety of quantum communication protocols~\cite{nielsen,scarani2,crypto}. This bedrock, however, can be eroded by closed time--like curves (CTC), whose existence (under certain assumptions) was predicted a long time ago~\cite{godel}. Potential time travelers could utilize the 'paradoxical power' of such curves to solve problems which are hard or even impossible to solve, cf. Ref.~\cite{brun_exp} for a recent review. In particular, they may be able to distinguish non--orthogonal quantum states~\cite{brun_disting,brun_fund}.
There are at least three non--relativistic models, utilizing the quantum circuit formalism, of how quantum computation is affected by the presence of CTCs. In other words, there are at least three models of quantum time travel useful for quantum information. {\it (i)}
David Deutsch~\cite{deutsch} was the first to investigate properties of quantum systems in the presence of CTCs. He proposed an effective (nonrelativistic) description utilizing the quantum circuit formalism to describe quantum systems built of interacting {\it chronology respecting} (CR) and {\it chronology violating} (CV) constituents. This proposal allowed one to resolve at least some of the paradoxes caused by CTCs. Despite experimental attempts at mimicking the Deutsch model~\cite{ralph,brun_exp}, this proposal remains controversial~\cite{wal_fund,allen}. The second model {\it (ii)} utilizes a nowadays experimentally accessible teleportation protocol equipped with post--selection~\cite{svet,seth_prl,seth_prd}, and the third {\it (iii)}, the most recent~\cite{allen}, uses transition probabilities. For examples of other approaches one can consult e.g. Refs.~\cite{elze_time} or ~\cite{vaidman_past}.
One can expect that the 'paradoxical' computational power of time travelers, originating from the non--linearity of quantum models in the presence of CTCs, becomes weakened by omnipresent decoherence. In this paper we consider how the ability to distinguish states of qubits is affected by the thermal environment of a time traveling qubit. We apply the Davies weak coupling approach~\cite{alicki} for a model of decoherence, which is reviewed in Sec.~2 of our paper. We limit our attention to quantum circuits distinguishing non--orthogonal qubit states in the Deutsch model in Sec.~3 of the paper and in the post--selected teleportation model in Sec.~4. In Sec.~5 we analyze the circuit for the unproven theorem~\cite{allen}, designed to exemplify a celebrated paradox of information originating out of nowhere, and we present, utilizing the general considerations of Ref.~\cite{allen}, how the effect of thermal decoherence can serve as a physical justification of Deutsch's maximum entropy rule, introduced {\it ad hoc} in Ref.~\cite{deutsch} in order to resolve the uniqueness ambiguity~\cite{allen} present in circuits with CTCs.
\section{Davies decoherence}
Quantum decoherence is caused by the environment. Its influence on the qubit $Q$ is modeled by a Hamiltonian of the form:
\begin{eqnarray}\label{hamful} H&=&H_Q+H_{env}+H_{int}, \end{eqnarray}
where $H_Q$ is the Hamiltonian of the qubit, $H_{env}$ models the environment and $H_{int}$ describes the qubit--environment interaction.
For the qubit: \begin{eqnarray}\label{hami}
H_Q&=&\frac{\omega}{2}(|1\rangle\langle 1|-|0\rangle\langle 0|), \end{eqnarray}
where $\omega$ is the energy splitting of the qubit and $|0\rangle,|1\rangle$ span a Hilbert space of $Q$.
We assume that the interaction between the qubit and its environment satisfies the Davies weak coupling conditions \cite{alicki}, dedicated to the rigorous construction of the qubit's reduced dynamics calculated with respect to the environment. This dynamics is formulated in terms of a completely positive (strictly Markovian) semigroup using parameters of the microscopic Hamiltonian of the full system~\cite{alicki}. As the Davies semigroups can be rigorously and consistently derived from microscopic models of open systems, they satisfy thermodynamic and statistical--mechanical properties of open quantum systems such as the detailed balance condition and the Gibbs canonical distribution in the stationary regime \cite{alicki}.
The Davies method has been successfully used in recent studies of various problems in quantum information and physics of open quantum systems including teleportation~\cite{kloda}, entanglement dynamics~\cite{lendi}, quantum discord~\cite{mymy,mymy2}, properties of geometric phases of qubits \cite{dav_faza}, thermodynamic properties of nano--systems \cite{dav_heat} and quantum games~\cite{dajka_game}.
In this paper we consider only certain elements of the Davies semi--groups: the Davies {\it maps} $D=D(p,A,G,\omega,t)$, which act as follows~\cite{dav}:
\begin{equation}\label{dav} \begin{array}{ll}
D\bigg[|1\rangle\langle 1|\bigg]= [1-(1-p)(1-e^{-At})]|1\rangle\langle 1|+ \\
+ (1-p)(1-e^{-At})|0\rangle\langle 0|, \\
D\bigg[|1\rangle\langle 0|\bigg]= e^{i\omega t -Gt}|1\rangle\langle 0|, \\
D\bigg[|0\rangle\langle 1|\bigg]= e^{-i\omega t -Gt}|0\rangle\langle 1|, \\
D\bigg[|0\rangle\langle 0|\bigg]= p(1-e^{-At})|1\rangle\langle 1|+[1-(1-e^{-At})p]|0\rangle\langle 0| \end{array}, \end{equation}
where $p\in[0,1/2]$ is related to the temperature $T$ of the environment via:
\begin{eqnarray}\label{p} p&=&\exp(-\omega/2T)/[\exp(-\omega/2T)+\exp(\omega/2T)]. \end{eqnarray}
We set $k_B=1$. Let us notice that in the long-time limit the Davies map transforms any qubit state $\rho$ into the equilibrium Gibbs state:
\begin{eqnarray}\label{gibb}\lim_{t\rightarrow\infty}D(p,A,G,\omega,t)\rho=p|1\rangle\langle 1|+(1-p)|0\rangle\langle 0|.\end{eqnarray}
The case $T=0$ corresponds to the value $p=0$ and for $T\to \infty$ the parameter $p\to1/2$.
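As a quick numerical sanity check of Eq.(\ref{p}) (our own sketch, not part of the original derivation; the function name \texttt{gibbs\_p} is ours), the limits $T\to 0$ and $T\to\infty$ quoted above can be verified directly. We use the algebraically equivalent, numerically stable form $p=1/(1+e^{\omega/T})$:

```python
import numpy as np

def gibbs_p(omega, T):
    """Equilibrium excited-state weight p of Eq. (p), rewritten in the
    numerically stable form p = 1/(1 + exp(omega/T)); k_B = 1."""
    return 1.0 / (1.0 + np.exp(omega / T))

# T -> 0 gives p -> 0; T -> infinity gives p -> 1/2, as stated in the text.
assert gibbs_p(1.0, 0.01) < 1e-9
assert abs(gibbs_p(1.0, 100.0) - 0.5) < 0.01
```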
The parameters $A = 1/\tau_R$ and $G = 1/\tau_D$, interpreted in terms of spin relaxation~\cite{T12}, are related to the energy relaxation time $\tau_R$ and the dephasing time $\tau_D$, respectively~\cite{dav}. There is a relation between $A$ and $G$ which guarantees that the Davies map is a trace-preserving completely positive map. It is given by the inequalities~\cite{T12}
\begin{eqnarray}\label{warun} G &\ge& A/2 \ge 0. \end{eqnarray}
The limiting case $A=0$ and $G\ne 0$ corresponds to (Markovian) pure dephasing without dissipation of energy. Let us notice that pure dephasing, despite its apparent simplicity, can be effectively applied to the modeling of realistic systems, cf. Ref.~\cite{defaz}, in which no energy dissipation occurs on a time scale significantly larger than the other time scales in the system.
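The action (\ref{dav}) of the Davies map on an arbitrary qubit density matrix, extended by linearity from the matrix units, can be sketched numerically. The implementation below, including the function name \texttt{davies\_map} and the convention that \texttt{rho[1,1]} is $\langle 1|\rho|1\rangle$, is our own illustrative sketch:

```python
import numpy as np

def davies_map(rho, p, A, G, omega, t):
    """Davies map D(p, A, G, omega, t) of Eq. (dav) on a qubit density
    matrix in the (|0>, |1>) basis, with rho[1,1] = <1|rho|1>."""
    r = np.zeros((2, 2), dtype=complex)
    decay = 1.0 - np.exp(-A * t)
    # populations relax towards the Gibbs weights (1-p, p) at rate A
    r[1, 1] = (1 - (1 - p) * decay) * rho[1, 1] + p * decay * rho[0, 0]
    r[0, 0] = (1 - p) * decay * rho[1, 1] + (1 - p * decay) * rho[0, 0]
    # coherences rotate at frequency omega and decay at rate G
    r[1, 0] = np.exp(1j * omega * t - G * t) * rho[1, 0]
    r[0, 1] = np.exp(-1j * omega * t - G * t) * rho[0, 1]
    return r

plus = np.full((2, 2), 0.5, dtype=complex)            # |+><+|
out = davies_map(plus, p=0.25, A=1.0, G=1.0, omega=1.0, t=50.0)
assert abs(np.trace(out) - 1) < 1e-12                 # trace preserving
assert abs(out[1, 1] - 0.25) < 1e-10                  # Gibbs limit Eq. (gibb)
assert abs(out[1, 0]) < 1e-12                         # coherence suppressed
```

Setting $A=0$ reproduces the pure dephasing limit discussed above: the populations are left untouched while the coherences still decay at rate $G$.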
\section{Deutschian model of CTC}
The simplest Deutsch circuit~\cite{deutsch}, designed to mimic quantum dynamics in the presence of closed time--like curves (CTC), consists of a pair of qubits: one is chronology respecting (CR) whereas the second violates chronology (CV). These two qubits are coupled by the unitary $U$. The CV qubit enters the circuit and interacts with the CR qubit. Then it {\it violates} chronology and is identified with its past. Formally, the CV time evolution reads as follows:
\begin{equation}\label{d0} \begin{array}{l@{}l} \tau &{}=\Lambda(\tau),\\ \Lambda(\cdot) &{}=\mbox{Tr}_{CR}\{U(\rho_i\otimes\cdot)U^\dagger \}, \end{array} \end{equation} with the partial trace $\mbox{Tr}_{CR}$ calculated with respect to the CR qubit.
At the same time the state of the CR qubit which enters the circuit in a state $\rho_i$ changes into its final form $\rho_f$ given by:
\begin{eqnarray}\label{d2}
\rho_f&=&\mbox{Tr}_{CV}\{U(\rho_i\otimes\tau)U^\dagger \} \end{eqnarray} with the partial trace calculated with respect to the CV qubit.
The foundations of Deutsch's consistency condition Eq.(\ref{d0}) are the subject of an important debate~\cite{wal_fund}. In this paper we do not intend to enter such philosophical topics and simply {\it assume} the ontic interpretation of quantum states, both pure and, what is probably more unconventional, mixed.
Instead, our aim is to investigate the effect of thermal noise affecting the CV qubit. We consider Deutsch's consistency condition Eq.(\ref{d0}) modified by the presence of a thermal environment in the Davies approximation discussed in the previous section. It is given by a composition $\circ$ of maps:
\begin{eqnarray}\label{d1} \tau&=&[D\circ\Lambda](\tau), \end{eqnarray}
where $D=D(p,A,G,\omega,t)$ is the Davies map Eq.(\ref{dav}), with $t$ denoting the time period during which the CV qubit interacts with the thermal bath. Equation (\ref{d1}) has a natural interpretation: the CV qubit, before it returns to its past, interacts with the thermal environment in the Markovian Davies approximation given by the map $D=D(p,A,G,\omega,t)$. Let us notice that the position of $D$ in Eq.(\ref{d1}) is not merely formal but has a physical meaning, reflecting our intention of making the time travel 'noisy'.
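A consistency condition of the form (\ref{d0})--(\ref{d1}) can be explored numerically by plain fixed-point iteration. The sketch below is our own: the unitary (a CNOT) and the input state are toy choices rather than the circuit of the next paragraph, and simple iteration is a heuristic that happens to converge here but is not guaranteed to converge in general:

```python
import numpy as np

def deutsch_cv_state(U, rho_i, noise=lambda r: r, iters=200):
    """Approximate the self-consistent CV state of Eq. (d1) by iterating
    tau -> D(Lambda(tau)), Lambda(tau) = Tr_CR[U (rho_i x tau) U^dagger].
    Convention: the CR qubit is the first tensor factor, CV the second."""
    tau = np.eye(2, dtype=complex) / 2      # start from the maximally mixed state
    for _ in range(iters):
        full = U @ np.kron(rho_i, tau) @ U.conj().T
        # partial trace over the CR (first) qubit
        lam = full.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
        tau = noise(lam)                    # noise = D; identity if noise-less
    return tau

# Toy check: U = CNOT (CR control, CV target), CR input |1><1|, no noise.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
rho_i = np.array([[0, 0], [0, 1]], dtype=complex)
tau = deutsch_cv_state(CNOT, rho_i)
lam = (CNOT @ np.kron(rho_i, tau) @ CNOT.conj().T) \
    .reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
assert np.allclose(tau, lam)                # Eq. (d0) holds at the fixed point
```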
Quantum circuits with CTCs can do tasks which are essentially inaccessible to 'ordinary' (linear) quantum mechanics. One of the most spectacular examples of such a task is the ability to distinguish non--orthogonal quantum states. This ability influences the security of most quantum key distribution protocols~\cite{scarani2}, with the celebrated archetype being B92~\cite{B92}. There is a quantum circuit with a CTC~\cite{brun_disting} which can be utilized to distinguish non--orthogonal qubit states. It is presented in Fig.\ref{fig1}. \begin{figure}
\caption{Quantum circuit which can distinguish non--orthogonal states $|0\rangle$ and $|-\rangle$ using D--CTC (Deutschian). The dotted line denotes the qubit traveling backward in time. The circuit consists of the SWAP gate and the controlled Hadamard $H$ gate. $M$ denotes a measurement of $\rho_f$ Eqs.(\ref{dd2}),(\ref{ddd2}). }
\label{fig1}
\end{figure} Formally its action is given by Eq.(\ref{d2}) and, in the presence of thermal environment, by Eq.(\ref{d1}) with the unitary $U$ given by
\begin{eqnarray}
U&=& |00\rangle\langle 00|+|01\rangle\langle 10|+|1+\rangle\langle 01|+|1-\rangle\langle 11|, \end{eqnarray}
where $|\pm\rangle=[|0\rangle\pm|1\rangle]/\sqrt{2}$. In the noise--less case, when Eq.(\ref{d0}) instead of Eq.(\ref{d1}) is used, the circuit transforms the indistinguishable states $|-\rangle$,$|0\rangle$ into $|1\rangle$,$|0\rangle$, which are orthogonal and hence can be distinguished~\cite{brun_disting}. It is not surprising that the effect of thermal noise is to weaken this ability.
In order to assess the effect of noise we compare the output $\rho_f$ Eq.(\ref{d2}) of the noise--disturbed circuit with the noise--less output (which is $\xi_-=|1\rangle\langle 1|$ for $\rho_i=|-\rangle\langle -|$ and $\xi_0=|0\rangle\langle 0|$ for $\rho_i=|0\rangle\langle 0|$, respectively).
We quantify the effect of noise by the trace distance $Q(\rho_f,\xi)=\mbox{Tr}[\sqrt{(\rho_f-\xi)^2}]/2$~\cite{nielsen}, which is known~\cite{nielsen} to indicate the distinguishability of the states $\rho_f,\xi$. For both inputs $\rho_i=|-\rangle\langle -|$ and $\rho_i=|0\rangle\langle 0|$, the corresponding states of the CV ($\tau$) and CR ($\rho_f$) qubits, with the details of their calculation, are given in the Appendix.
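The trace distance of qubit states is straightforward to evaluate from the eigenvalues of the Hermitian difference $\rho_f-\xi$. The following sketch (function name ours) reproduces the value $\sqrt{2}/2$ for the pure inputs $|0\rangle$ and $|-\rangle$ quoted later in the text:

```python
import numpy as np

def trace_distance(rho, xi):
    """Q(rho, xi) = Tr[sqrt((rho - xi)^2)]/2, computed from the
    eigenvalues of the Hermitian matrix rho - xi."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - xi)))

ket0 = np.array([1, 0], dtype=complex)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
rho0 = np.outer(ket0, ket0.conj())
rhom = np.outer(minus, minus.conj())
# for the pure inputs |0>, |-> this gives sqrt(2)/2 ~ 0.7071
assert abs(trace_distance(rho0, rhom) - np.sqrt(2) / 2) < 1e-12
```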
For $\rho_i=|-\rangle\langle -|$ the trace distance $Q_-=Q(\rho_f,\xi_-)$ reads as follows
\begin{eqnarray}\label{Q1} Q_-&=& \frac{\sqrt {2}}{2}\,{\frac { \left( {{\rm e}^{A\,t}}-1 \right) {{\rm e}^{-G
\,t}}\sqrt {8\,{{\rm e}^{2\,G\,t}}+1} \left| -1+p \right| }{2\,{ {\rm e}^{A\,t}}-1}}. \end{eqnarray}
For $\rho_i=|0\rangle\langle 0|$ the corresponding trace distance $Q_0=Q(\rho_f,\xi_0)$ is given by:
\begin{eqnarray}\label{Q2} Q_0&=& \frac{\sqrt {2}}{2}\,{\frac {p\, \left( {{\rm e}^{A\,t}}-1 \right) {{\rm e}^{ -G\,t}}\sqrt {8\,{{\rm e}^{2\,G\,t}}+1}}{2\,{{\rm e}^{A\,t}}-1}}. \end{eqnarray}
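The limiting behaviour of the closed form above can be checked directly: the factor $({\rm e}^{At}-1)$ makes $Q_0$ vanish for pure dephasing ($A=0$), and the prefactor $p$ makes it vanish at zero temperature. A small numerical sketch of ours (the function name \texttt{Q0} is an assumption, not the paper's notation):

```python
import numpy as np

def Q0(p, A, G, t):
    """Closed-form trace distance Q_0 of Eq. (Q2)."""
    return (np.sqrt(2) / 2) * p * (np.exp(A * t) - 1) * np.exp(-G * t) \
        * np.sqrt(8 * np.exp(2 * G * t) + 1) / (2 * np.exp(A * t) - 1)

assert Q0(0.25, 0.0, 1.0, 3.0) == 0.0   # pure dephasing (A = 0): Q_0 vanishes
assert Q0(0.0, 1.0, 1.0, 3.0) == 0.0    # zero temperature (p = 0): Q_0 vanishes
assert Q0(0.25, 1.0, 1.0, 3.0) > 0.0    # generic thermal noise: Q_0 > 0
```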
There are three parameters $p$, $A$ and $G$ describing the thermal environment affecting the CV qubit via the Davies map. The first two qualitatively affect the value of $Q=Q_-,Q_0$. Increasing the last, $G$, has only a quantitative impact and results in faster growth of both $Q_-$ and $Q_0$. This is not the case if one considers $A$.
The most important feature is that for {\it purely dephasing environments} the trace distance between the noisy and the noise--less output of the circuit in Fig.(\ref{fig1}) {\it vanishes}, i.e. for $A=0$ both $Q_-=0$ and $Q_0=0$. In other words, in the case of pure dephasing the CTC--assisted distinguishing of non--orthogonal quantum states works as well as in the noise--less case. Moreover, with decreasing $A$ the corresponding trace distances $Q_0,Q_-$ decrease, as presented in Fig.(\ref{fig2}). \begin{figure}
\caption{Trace distance $Q$ calculated between the output of the circuit in Fig.(\ref{fig1}) for the input $|0\rangle$ ($Q_0$ upper panel) and $|-\rangle$ ($Q_-$ lower panel) for different values of the parameter $A$ of the Davies map with $p=1/4$ and $G=1$}
\label{fig2}
\end{figure}
Let us also notice that in the low temperature limit $p=0$ the trace distance $Q_0=0$ and that for larger values of $p$ the trace distance $Q_-$ {\it grows slower} than $Q_0$. As one infers from Fig.(\ref{fig3}), for a fixed time instant $t$ and ordered values $p_1<p_2$ the corresponding time derivatives satisfy $\partial Q_0/\partial t|_{t,p_1}<\partial Q_0/\partial t|_{t,p_2}$ whereas $\partial Q_-/\partial t|_{t,p_1}>\partial Q_-/\partial t|_{t,p_2}$. \begin{figure}
\caption{Trace distance $Q$ calculated between the outputs of the circuit in Fig.(\ref{fig1}) for the input $|0\rangle$ ($Q_0$ upper panel) and $|-\rangle$ ($Q_-$ lower panel) for different values of the parameter $p$ of the Davies map and $A=G=1$}
\label{fig3}
\end{figure} This seemingly counter--intuitive property results from the particular and distinguished role played by the pure dephasing limit and related symmetry~\cite{alidef}. \begin{figure}
\caption{
Trace distance $R$ Eq.(\ref{QQQ}) calculated between the output of the circuit in Fig.(\ref{fig1}) for the thermally modified states Eq.(\ref{d2}) for the inputs $|0\rangle$ and $|-\rangle$ for different values of the parameter $A$, $p=1/4$ and $G=1$ (upper panel) and for different values of the parameter $p$ and $A=2G=2$ (lower panel) of the Davies map. The horizontal line on both panels indicate $R(\rho_{i1},\rho_{i2})=\sqrt{2}/2$.}
\label{fig34}
\end{figure}
A natural quantifier of the effect of a thermal environment on the 'paradoxical' power of distinguishing non--orthogonal states is the difference between the trace distance of the two inputs, $R(\rho_{i1},\rho_{i2})$, and that of the corresponding outputs, $R(\rho_{f1},\rho_{f2})$, with $\rho_{f1},\rho_{f2}$ calculated via Eq.(\ref{d2}). As the circuit in Fig.(\ref{fig1}) is dedicated to distinguishing two very particular states $|0\rangle,|-\rangle$, cf. Ref.~\cite{brun_disting}, with $R(\rho_{i1},\rho_{i2})=\sqrt{2}/2$, the figure of merit is the quantity $R(\rho_{f1},\rho_{f2})$, which reads as follows:
\begin{eqnarray}\label{QQQ} R&=& \frac{e^{-tG}}{4e^{At}-1}[ (1-4p+4p^2)[2e^{2At}-4e^{At}+2]
+ 4e^{2tG}]\nonumber \\ \end{eqnarray}
For $R=1$ the output states $\rho_{f1},\rho_{f2}$ are distinguishable. The smaller the value of $R$, the more ineffective the circuit in Fig.(\ref{fig1}) is. Let us notice that for $R<\sqrt{2}/2$ the distinguishability of the output states becomes, due to Davies decoherence, even worse than it was initially. The threshold condition $R(\rho_{f1},\rho_{f2})=\sqrt{2}/2$, indicated by the horizontal line in Fig.(\ref{fig34}), depends not only on the time instant $t$ but also on the parameters of the system. Decreasing $A$ (for given $G$ and $p$) allows one to keep the circuit useful despite longer exposure to decoherence. Again, for $A=0$ we obtain $R=1$, i.e. the output states in the presence of a purely dephasing environment are as distinguishable as they were in the absence of decoherence, as presented in Fig.(\ref{fig34}). It is natural to attempt to generalize this result beyond the limited class of input states for which the circuit in Fig.(\ref{fig1}) is designed. Although we cannot present a formal proof, we conjecture, based upon numerical experiments performed on randomly chosen pairs of non--orthogonal initial states, that a thermal environment never enhances state distinguishability, which is 'best' in the pure dephasing limit $A=0$. It is known that non--completely positive maps, describing e.g. the time--evolution of quantum systems initially entangled with their environment, are not contractive~\cite{distance,laine}. As the Davies map Eq.(\ref{dav}) is, under the condition Eq.(\ref{warun}), contractive, one expects that any enhancement of distinguishability would be solely due to the peculiar character of the Deutsch map Eq.(\ref{d2}) and Eq.(\ref{d0}), originating from its non--linearity.
\section{Post--selected CTC and thermal noise}
The Deutsch model~\cite{deutsch} of time travel operates essentially beyond standard quantum mechanics. However, there is a second popular circuit--based model of quantum dynamics in the presence of CTCs, in which one {\it mimics} the CV motion by a post--selected teleportation~\cite{svet,seth_prl,seth_prd,brun_fund}. Contrary to the various difficulties arising in attempts to implement the Deutsch model~\cite{ralph,brun_exp}, there are no fundamental experimental obstructions to post--selecting a desired outcome of the teleportation procedure. However, let us notice that this apparent simplification occurs at the cost of a {\it deterministic post--selection} introduced {\it ad hoc}, taking the well defined quantum teleportation protocol out of quantum mechanics {\it per se}. In Fig. (\ref{fig4}) we present a well known circuit designed to transform (in the absence of noise) the non--orthogonal states $|0\rangle$ and $|-\rangle$ into a pair of orthogonal, and hence distinguishable, states $|0\rangle$ and $|1\rangle$, respectively~\cite{brun_fund}. \begin{figure}
\caption{Quantum circuit which can distinguish the non--orthogonal states $|1\rangle$ and $|+\rangle$ using a P--CTC (post--selected CTC). The CV qubits are initially prepared in the maximally entangled state $|\Phi\rangle=[|00\rangle+|11\rangle]/\sqrt{2}$, which is then sent to the past by postselection of the $|\Phi\rangle\langle\Phi|$ outcome of a projective measurement. The other elements of the circuit are the same as in Fig.(\ref{fig1}).}
\label{fig4}
\end{figure}
The only but crucial difference between the circuits in Fig.(\ref{fig4}) and Fig.(\ref{fig1}) is the way in which the evolution of the CV qubit is modeled. Mimicking a CTC with post--selected teleportation utilizes a maximally entangled state as a resource~\cite{brun_fund} which, however, can be imperfect due to the presence of thermal noise. Here we consider a state of two qubits and we assume that only one of the parties in this resource is affected by the thermal environment. Let us notice that such a setting is {\it physically} different from the one adopted in the previous studies of the Deutsch model, where the time travel itself was assumed to be 'noisy'.
A postselected CTC with thermal noise affecting the maximally entangled Bell state $|\Phi\rangle=[|00\rangle+|11\rangle]/\sqrt{2}$ of the CV qubits, cf. Fig.(\ref{fig4}), is given by
\begin{eqnarray}\label{p1}
\rho_f&=& \mbox{Tr}_{CB}\{|\Phi\rangle\langle\Phi|_{CB}U[\rho_i\otimes\chi_{CB}]U^\dagger \},
\end{eqnarray}
where
\begin{eqnarray}\label{p2}
\chi_{CB}&=& [D\otimes I]|\Phi\rangle\langle\Phi|_{CB}, \end{eqnarray}
is the noisy Bell state obtained via the tensor product $D\otimes I$ of the Davies map and the identity map. It is assumed that only the CV qubit in $\chi_{CB}$ labeled by $C$ is coupled to the thermal Davies environment. Let us notice a formal analogy of this scenario with a recently studied thermally modified teleportation protocol~\cite{kloda} or entanglement swapping~\cite{mymy2}. In particular,
\begin{eqnarray}\label{pp2}
\chi_{CB}&=& a_0|00\rangle\langle 00|_{CB}+b_0|10\rangle\langle 10|_{CB}\nonumber \\
&+& c^*|00\rangle\langle 11|_{CB}+c|11\rangle\langle 00|_{CB} \nonumber \\
&+&a_1|01\rangle\langle 01|_{CB}+b_1|11\rangle\langle 11|_{CB} \end{eqnarray}
where
\begin{eqnarray}\label{dfun} 2b_1&=&1-(1-p)(1-e^{-At})\\ 2a_1&=&(1-p)(1-e^{-At})\\ 2c&=&e^{-i\omega t -Gt}\\ 2b_0&=&p(1-e^{-At})\\ 2a_0&=&1-(1-e^{-At})p \end{eqnarray}
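These coefficients can be cross--checked numerically. The sketch below is a non--authoritative reimplementation which assumes that the single--qubit Davies map of Eq.(\ref{dav}) relaxes populations towards the Gibbs distribution $(1-p,p)$ and damps the rotating coherences by $e^{-Gt}$; it applies $D\otimes I$ to $|\Phi\rangle\langle\Phi|$ block--wise and recovers Eqs.(\ref{pp2})--(\ref{dfun}):

```python
import numpy as np

p, A, G, w, t = 0.3, 0.7, 1.1, 2.0, 0.5   # arbitrary test parameters
e = np.exp(-A * t)

def davies(r):
    """Assumed Davies map, extended linearly to arbitrary 2x2 blocks:
    populations relax towards diag(1-p, p); coherences rotate and damp."""
    tr = r[0, 0] + r[1, 1]
    return np.array(
        [[r[0, 0] * e + (1 - p) * (1 - e) * tr, r[0, 1] * np.exp(1j * w * t - G * t)],
         [r[1, 0] * np.exp(-1j * w * t - G * t), r[1, 1] * e + p * (1 - e) * tr]])

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)   # |Phi>_{CB}, index 2c + b
bell = np.outer(phi, phi).astype(complex)

# (D x I)|Phi><Phi|: apply D to the C-submatrix for each pair (b, b') of B indices
chi = np.zeros((4, 4), dtype=complex)
for b in (0, 1):
    for bp in (0, 1):
        block = bell[np.ix_([b, 2 + b], [bp, 2 + bp])]
        chi[np.ix_([b, 2 + b], [bp, 2 + bp])] = davies(block)

a0, b0 = chi[0, 0].real, chi[2, 2].real     # |00><00|, |10><10| weights
a1, b1 = chi[1, 1].real, chi[3, 3].real     # |01><01|, |11><11| weights
c = chi[3, 0]                               # |11><00| coherence
print(np.trace(chi).real)                   # 1.0: chi is a state
```

All five coefficients agree with Eq.(\ref{dfun}) and sum to one, confirming that $\chi_{CB}$ is a valid density matrix.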
For $\rho_i=|\psi\rangle\langle \psi|$ pure, the action of the circuit in Fig.(\ref{fig4}) in the presence of the Davies environment, Eq.(\ref{p1}), reduces to the following transformation:
\begin{eqnarray}\label{p3}
\rho_f&=& \langle\Phi|_{CB}U[|\psi\rangle\langle \psi|\otimes\chi_{CB}]U^\dagger|\Phi\rangle_{CB}\nonumber\\
&=& \frac{a_0}{2}L_{I}|\psi\rangle\langle \psi|L_I^\dagger+\frac{b_0}{2}L_{II}|\psi\rangle\langle \psi|L_{II}^\dagger\nonumber \\
&+& \frac{c^*}{2}L_{III}|\psi\rangle\langle \psi|L_{IV}^\dagger+ \frac{c}{2}L_{IV}|\psi\rangle\langle \psi|L_{III}^\dagger\nonumber \\
&+&\frac{a_1}{2}L_{V}|\psi\rangle\langle \psi|L_V^\dagger+\frac{b_1}{2}L_{VI}|\psi\rangle\langle \psi|L_{VI}^\dagger
\end{eqnarray}
where (notice that $U=U_{SYS,C}$),
\begin{eqnarray}\label{ls}
L_I &=& \langle \Phi|_{CB} U|00\rangle_{CB}=\frac{1}{\sqrt{2}} |0\rangle \langle 0|\nonumber \\
L_{II} &=& \langle \Phi|_{CB} U|10\rangle_{CB} = \frac{1}{2}\left(|1\rangle\langle 0|+ |1\rangle\langle 1|\right)\nonumber \\
L_{III} &=& \langle \Phi|_{CB} U|00\rangle_{CB}=L_I\nonumber \\
L_{IV} &=& \langle \Phi|_{CB} U|11\rangle_{CB} = \frac{1}{2}\left(|1\rangle\langle 0|- |1\rangle\langle 1|\right)\nonumber \\
L_V &=& \langle \Phi|_{CB} U|01\rangle_{CB}=\frac{1}{\sqrt{2}} |0\rangle \langle 1|\nonumber \\
L_{VI} &=& \langle \Phi|_{CB} U|11\rangle_{CB} =L_{IV} \end{eqnarray}
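These matrix elements can be generated directly from the definitions: with $U=H_C\,SWAP$ of the Appendix acting on the pair (SYS, $C$), the $B$ qubit is a spectator, and $L=\langle\Phi|_{CB}U|cb\rangle_{CB}$ is a $2\times 2$ operator on the system qubit. A sketch:

```python
import numpy as np

s2 = np.sqrt(2)
I2 = np.eye(2)
P = lambda i, j: np.outer(I2[i], I2[j])            # |i><j|
Had = np.array([[1, 1], [1, -1]]) / s2
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
U = (np.kron(P(0, 0), I2) + np.kron(P(1, 1), Had)) @ SWAP   # H_C SWAP on (SYS, C)

phi = np.zeros(4); phi[0] = phi[3] = 1 / s2        # |Phi>_{CB}, index 2c + b
Ufull = np.kron(U, I2)                             # qubit order (SYS, C, B)

def L(c, b):
    """L = <Phi|_{CB} U |cb>_{CB} as a 2x2 operator on the system qubit."""
    ket_cb = np.kron(I2[c], I2[b])                 # |c>_C |b>_B
    out = np.zeros((2, 2))
    for si in range(2):
        for sj in range(2):
            v = Ufull @ np.kron(I2[sj], ket_cb)    # U |sj, c, b>
            out[si, sj] = phi @ v.reshape(2, 4)[si]  # contract with <Phi|_{CB}
    return out

L_I, L_II, L_V, L_IV = L(0, 0), L(1, 0), L(0, 1), L(1, 1)
print(np.allclose(L_I, P(0, 0) / s2), np.allclose(L_IV, (P(1, 0) - P(1, 1)) / 2))
```

All four independent operators reproduce Eq.(\ref{ls}); $L_{III}=L_I$ and $L_{VI}=L_{IV}$ hold by construction.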
In the general case the transformation in Eq.(\ref{p1}) maps non--orthogonal states into states which remain non--orthogonal, i.e. thermal Markovian noise divests the circuit in Fig.(\ref{fig4}) of its 'paradoxical' power (below we skip normalization constants):
\begin{eqnarray}
|1\rangle\langle 1|&\rightarrow& \frac{1}{2}\left( 1-p \right) \left( 1-{{\rm e}
^{-At}} \right) |0\rangle\langle 0| \nonumber \\&+&\frac{1}{2}\left[1+(2p-1) \left( 1-{{\rm e}^{-At
}} \right)\right]|1\rangle\langle 1|\nonumber \\
\\
|+\rangle\langle +|&\rightarrow& \frac{1}{2}\left[ 1+(1-2p) \left( 1-{{\rm e}^{-At}}
\right) \right] |0\rangle\langle 0| \nonumber \\ &+&\frac{1}{2}p \left( 1-{{\rm e}^{-At}} \right)|1\rangle\langle 1| \end{eqnarray}
However, if the energy exchange between the CV qubit and the environment is negligible, i.e. the circuit operates in the pure dephasing regime $A=0$, the situation changes. Non--orthogonal input states are transformed into output states which {\it are orthogonal} and hence can be distinguished: \begin{eqnarray}\label{post}
|1\rangle\langle 1| &\longrightarrow& |1\rangle\langle 1|\\
|+\rangle\langle +| &\longrightarrow& |0\rangle\langle 0| \end{eqnarray}
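This can be verified with a few lines of linear algebra: plugging the operators of Eq.(\ref{ls}) and the pure--dephasing coefficients ($a_1=b_0=0$, $2a_0=2b_1=1$, $2c=e^{-i\omega t-Gt}$) into the transformation above yields, after normalization, exactly orthogonal outputs. A sketch:

```python
import numpy as np

s2 = np.sqrt(2)
proj = lambda v: np.outer(v, np.conj(v))

# operators of Eq. (ls)
L_I = np.array([[1, 0], [0, 0]]) / s2
L_II = np.array([[0, 0], [1, 1]]) / 2
L_IV = np.array([[0, 0], [1, -1]]) / 2
L_V = np.array([[0, 1], [0, 0]]) / s2
L_III, L_VI = L_I, L_IV

# pure dephasing A = 0: a1 = b0 = 0, 2 a0 = 2 b1 = 1, 2 c = exp(-i w t - G t)
G, w, t = 1.1, 2.0, 0.5
a0 = b1 = 0.5
a1 = b0 = 0.0
c = np.exp(-1j * w * t - G * t) / 2

def output(rho):
    r = (a0 / 2) * L_I @ rho @ L_I.conj().T \
      + (b0 / 2) * L_II @ rho @ L_II.conj().T \
      + (np.conj(c) / 2) * L_III @ rho @ L_IV.conj().T \
      + (c / 2) * L_IV @ rho @ L_III.conj().T \
      + (a1 / 2) * L_V @ rho @ L_V.conj().T \
      + (b1 / 2) * L_VI @ rho @ L_VI.conj().T
    return r / np.trace(r)                      # restore normalization

r1 = output(proj(np.array([0, 1])))             # input |1>
rp = output(proj(np.array([1, 1]) / s2))        # input |+>
print(abs(np.trace(r1 @ rp)))                   # 0.0: orthogonal outputs
```

Note that the damped coherence $c$ drops out entirely for these two inputs, which is why dephasing alone cannot spoil the discrimination.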
The reason for this becomes clear if one notices that {\it both} in the original noise--less case~\cite{brun_fund} (i.e. when $D=I$) and in the pure decoherence limit $A=0$ \begin{eqnarray} a_1&=&b_0=0\\ a_0 &=& b_1 =\frac{1}{2} \end{eqnarray}
and the transformation of non--orthogonal into orthogonal states occurs since each surviving $L_i$, $i=I,\ldots,VI$, maps $|1\rangle$ either to zero or to a multiple of $|1\rangle$, and $|+\rangle$ either to zero or to a multiple of $|0\rangle$.
In the above equations we used Eqs.(\ref{p1}) and (\ref{p2}) but skipped the (non-vanishing) normalization constants, which do not affect the orthogonality of the states. From Eq.(\ref{post}) one infers that also in the postselected teleportation model pure dephasing plays a distinguished role, exactly as it does in the Deutsch model.
\section{Uniqueness ambiguity}
According to Schauder's fixed point theorem, there is a solution $\tau$ of the Deutsch consistency condition Eq.(\ref{d0}). However, such a solution may not be unique, resulting in the {\it uniqueness ambiguity}~\cite{deutsch,allen}. Using the Deutsch model of quantum time travel one faces the problem of which state $\tau$ (among many possibilities) is the 'proper' one. The original proposal of David Deutsch~\cite{deutsch} is the {\it maximum entropy rule}, which states that the physical $\tau$ is the one which contains minimum information. This condition, introduced {\it ad hoc}~\cite{allen}, is not universal and can be replaced by other proposals~\cite{politzer,dejonghe}. A circuit designed for the unproven theorem paradox can serve as an example of a Deutsch circuit with uniqueness ambiguity. It is an example of a knowledge--generating circuit: a mathematician $M$, equipped with knowledge about her/his modern mathematics read from a book $B$, becomes a time traveler $T$ and travels back in time in order to write the book $B$. The simplest example of a circuit playing such a role is presented in Fig.(\ref{fig5}), cf. Ref.~\cite{allen}. \begin{figure}
\caption{Quantum circuit for the unproven theorem~\cite{allen}. $B$ is the book, $M$ -- the mathematician and $T$ the time traveler using D--CTC and the action of the circuit is given in Eq.(\ref{book}).}
\label{fig5}
\end{figure} Such a circuit describes the interaction of three qubits: $B$ and $M$, which are CR, and $T$, which violates chronology. The interaction is given by the unitary
\begin{eqnarray}\label{book} U&=& SWAP_{MT}CNOT_{BM}CNOT_{TB}, \end{eqnarray}
and the input of the circuit is $|0\rangle_B|0\rangle_M$.
The Deutsch consistency condition Eq.(\ref{d0}) for this circuit is solved by the family of states
\begin{eqnarray}\label{tau_b0}
\tau_\alpha &=& \alpha |0\rangle\langle 0| +(1-\alpha)|1\rangle\langle 1|, \end{eqnarray}
where $\alpha\in[0,1]$; the solution is hence ambiguous. In Ref.~\cite{allen} it is shown that the effect of depolarization can resolve this ambiguity.
Here we consider probably the most natural and omnipresent source of noise: we assume that the time travel is disturbed by a thermal environment. In such a case the time travel of $T$ is affected by thermal Davies noise and the state of the time traveler is a solution of Eq.(\ref{d1}). This solution is unique and is given by the Gibbs state: \begin{eqnarray} \label{tau_b}
\tau &=& p|1\rangle\langle 1|+(1-p) |0\rangle\langle 0|. \end{eqnarray} Let us notice that in the zero--temperature limit $p=0$
one obtains $\tau\rightarrow|0\rangle\langle 0|$, i.e. $\tau_\alpha$ with $\alpha=1$. In the $p=1/2$ limit one arrives at the state which {\it maximizes} entropy. Let us also notice that, for the model of Davies decoherence considered here, Deutsch's rule holds only approximately (in the regime of high temperature) and that in general the unique solution is not always maximally mixed.
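The statement can be checked by brute force: iterate the map $\tau \mapsto D[\mbox{Tr}_{BM}(U(|00\rangle\langle 00|_{BM}\otimes\tau)U^\dagger)]$ with the circuit unitary of Eq.(\ref{book}). The sketch below (assuming, as elsewhere in this paper, a Davies map relaxing populations towards $(1-p,p)$ and damping coherences) converges to the Gibbs state of Eq.(\ref{tau_b}) from an arbitrary initial state, while without noise every $\tau_\alpha$ of Eq.(\ref{tau_b0}) is consistent:

```python
import numpy as np

p, A, G, w, t = 0.25, 0.8, 1.0, 1.5, 1.0
e = np.exp(-A * t)

def davies(r):
    # assumed single-qubit Davies map: relaxation towards diag(1-p, p)
    return np.array(
        [[r[0, 0] * e + (1 - p) * (1 - e), r[0, 1] * np.exp(1j * w * t - G * t)],
         [r[1, 0] * np.exp(-1j * w * t - G * t), r[1, 1] * e + p * (1 - e)]])

idx = lambda b, m, tt: 4 * b + 2 * m + tt           # qubit order (B, M, T)

def perm(f):
    # permutation unitary |f(b,m,t)><b m t|
    u = np.zeros((8, 8))
    for b in range(2):
        for m in range(2):
            for tt in range(2):
                u[f(b, m, tt), idx(b, m, tt)] = 1
    return u

CNOT_TB = perm(lambda b, m, tt: idx(b ^ tt, m, tt))
CNOT_BM = perm(lambda b, m, tt: idx(b, m ^ b, tt))
SWAP_MT = perm(lambda b, m, tt: idx(b, tt, m))
U = SWAP_MT @ CNOT_BM @ CNOT_TB                     # Eq. (book)

rho_BM = np.zeros((4, 4)); rho_BM[0, 0] = 1         # input |00><00|_{BM}

def step(tau):
    X = U @ np.kron(rho_BM, tau) @ U.conj().T
    red = np.einsum('aiaj->ij', X.reshape(4, 2, 4, 2))  # Tr_{BM}
    return davies(red)

tau = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)  # arbitrary start
for _ in range(300):
    tau = step(tau)
print(np.round(tau.real, 6))     # diag(1-p, p): the Gibbs state

# without noise every diagonal tau_alpha is already consistent
tau_a = np.diag([0.3, 0.7]).astype(complex)
red = np.einsum('aiaj->ij', (U @ np.kron(rho_BM, tau_a) @ U.conj().T).reshape(4, 2, 4, 2))
print(np.allclose(red, tau_a))   # True
```

The iteration contracts the populations with factor $e^{-At}$ per step, so any initial $\tau_\alpha$ flows to the same unique fixed point once $A>0$.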
The solution to the uniqueness ambiguity discussed here is essentially the same as in Ref.~\cite{allen}, but the source of noise that resolves the ambiguity is physically rather than formally motivated. In other words, the very natural condition that the CV qubit is weakly disturbed by its thermal environment can serve as a {\it physics--based} justification for the choice of the solution $\tau$ of the Deutsch consistency condition Eq.(\ref{d0}), instead of the otherwise {\it ad hoc} maximum entropy rule introduced by David Deutsch in Ref.~\cite{deutsch}. \section{Summary} If time travel were possible, the world would be essentially different. Quantum cryptography~\cite{crypto} and in particular quantum key distribution~\cite{scarani2} have essentially changed the basic objectives of communication which, compared to the pre--quantum age, has become much safer. However, most of the quantum no--go theorems -- {\it sine qua non} conditions for the security of quantum protocols~\cite{scarani1,scarani2} -- originate from the {\it linearity} of quantum mechanics~\cite{nielsen}. The existence of closed time--like curves could change (almost) everything. There are quantum circuits which in the presence of CTCs can break the security of quantum protocols. In this work we analyzed only one of them: the circuit designed to distinguish non--orthogonal qubit states and to break e.g. the B92 quantum crypto--protocol. Our aim was to check if and how such a 'paradoxical power' becomes reduced by the omnipresent decoherence caused by a thermal environment affecting time--traveling qubits. We considered only two among many approaches to CTCs: the one proposed by David Deutsch~\cite{deutsch} and the second based on the post--selected teleportation protocol~\cite{svet,seth_prl,seth_prd}. Our intention was to investigate a possibly wide class of open systems modeled in a way which is both tractable and rigorous. That is why we assumed a general type of coupling to the environment: the Davies weak coupling approach~\cite{alicki}. 
Using the Davies approach one can describe the broadest class of open quantum systems with a finite--dimensional space of states, with {\it the only restriction} imposed being that the coupling to the environment must be weak. We showed, for both the Deutsch and the post--selected models, the distinctive role played by {\it pure decoherence}: despite the presence of an environment and the resulting information loss, circuits with CTCs do not lose their 'paradoxical' power of distinguishing non--orthogonal quantum states. This result can serve as a potentially useful guideline for experimentalists who attempt to mimic circuits with CTCs in order to implement 'linearity--free quantum computations'. Physically, pure decoherence describes open quantum systems operating at time scales which are short compared with the time scale of the system--environment energy exchange~\cite{defaz}.
In addition to the practical aspect, there is also a fundamental aspect of decoherence which needs to be taken into account in all applications of quantum phenomena~\cite{schloss}. In the last section of our paper, inspired by Ref.~\cite{allen}, we investigated the circuit for an unproven theorem to show that thermal decoherence, present in any real system, can help to resolve the uniqueness ambiguity originating from the non--uniqueness of a solution of the Deutsch consistency condition Eq.(\ref{d0}). We showed that in the particular case considered in our work thermal noise not only allows one to select the 'proper' state of the chronology violating qubit, which is not necessarily maximally mixed, but also justifies the Deutsch maximum entropy rule in the regime of high temperature.
There are many physical concepts affecting human imagination, ranging from black holes confining light, the dilation of time and the butterfly effect up to teleportation and the celebrated but piteous Schr\"{o}dinger's cat. All of them are strange, but closed time--like curves are stranger than the others. We hope that our work will modestly contribute both to a better understanding of the hypothetical behavior of quantum systems in the presence of CTCs and, as a guideline, to experimental attempts at mimicking such systems.
\section*{Acknowledgments} The work has been supported by the NCN project UMO-2013/09/B/ST2/03382 (B.D. and M.R.) and the NCN grant 2015/19/B/ST2/02856 (J.D.).
\section*{Appendix} \begin{widetext} In this Appendix we provide details of the calculations leading to the results presented in Secs. III and V for the Deutschian model of CTC. Throughout the Appendix we adopt the following notation:
$p(i,j)=|i\rangle\langle j|,
p(ij,kl)=|ij\rangle\langle kl|$ and $p(ijm,kln)=|ijm\rangle\langle kln|$
where $i,j,k,l,m,n=0,1$ label our computational basis. In this notation the partial traces of a two--qubit matrix $X$ with respect to the CR and CV qubits, with
$x(ij,kl)= \mbox{Tr}(X p(kl,ij))$
read as follows:
\begin{eqnarray} \mbox{Tr}_{CR}X &=& (x(00,00)+x(10,10))p(0,0)+(x(01,00)+x(11,10))p(1,0)\nonumber \\&+&(x(00,01)+x(10,11))p(0,1)+(x(01,01)+x(11,11))p(1,1) \\ \mbox{Tr}_{CV}X &=& (x(00,00)+x(01,01))p(0,0)+(x(10,00)+x(11,01))p(1,0)\nonumber \\&+&(x(00,10)+x(01,11))p(0,1)+(x(10,10)+x(11,11))p(1,1) \end{eqnarray}
and the partial trace of a three--qubit matrix $X$ (in Sec. V) with respect to the CR qubits, with
$x(ijm,kln)= \mbox{Tr}(X p(kln,ijm))$,
read as follows:
\begin{eqnarray}\label{tr3} \mbox{Tr}_{CR}X &=& (x(000,000)+x(010,010)+x(100,100)+x(110,110))p(0,0)\nonumber\\&+&(x(000,001)+x(010,011)+x(100,101)+x(110,111))p(0,1)\nonumber\\&+&(x(001,000)+x(011,010)+x(101,100)+x(111,110))p(1,0)\nonumber\\&+&(x(001,001)+x(011,011)+x(101,101)+x(111,111))p(1,1)\end{eqnarray}
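The two--qubit index formulas above agree with the standard reshape--based partial trace; a short sketch checks them on a random two--qubit matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# qubit order (CR, CV): |ij> has index 2*i + j and x(ij,kl) = <ij|X|kl>
x = lambda i, j, k, l: X[2 * i + j, 2 * k + l]

tr_CR = np.array([[x(0, 0, 0, 0) + x(1, 0, 1, 0), x(0, 0, 0, 1) + x(1, 0, 1, 1)],
                  [x(0, 1, 0, 0) + x(1, 1, 1, 0), x(0, 1, 0, 1) + x(1, 1, 1, 1)]])
tr_CV = np.array([[x(0, 0, 0, 0) + x(0, 1, 0, 1), x(0, 0, 1, 0) + x(0, 1, 1, 1)],
                  [x(1, 0, 0, 0) + x(1, 1, 0, 1), x(1, 0, 1, 0) + x(1, 1, 1, 1)]])

# reshape-based partial traces for comparison
R = X.reshape(2, 2, 2, 2)
print(np.allclose(tr_CR, np.einsum('aiaj->ij', R)),   # True
      np.allclose(tr_CV, np.einsum('iaja->ij', R)))   # True
```

Both partial traces also preserve the full trace of $X$, as they must.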
The unitary coupling between the CV and CR qubits is a product of the controlled Hadamard $H_C$ and the $SWAP$, i.e.
$U=H_C\,SWAP$
where
\begin{eqnarray} H_C &=& p(0,0)\otimes \mathcal{I} + p(1,1) \otimes Had\\ Had &=& (p(0,0)+p(0,1)+p(1,0)-p(1,1))/\sqrt{2}\\ SWAP &=& p(00,00)+p(11,11)+p(10,01)+p(01,10) \end{eqnarray}
For an input
$\rho_i=|-\rangle\langle -|=[p(0,0)+p(1,1)-p(1,0)-p(0,1)]/2$ the corresponding CV qubit
\begin{eqnarray}\label{taug} \tau&=&ap(0,0)+(1-a)p(1,1)+[(b_r+ib_i)p(0,1)+h.c.] \end{eqnarray}
satisfying Eq.(\ref{d1}) with real $a,b_r,b_i$ can be calculated in the following steps: {\it (i)} the output of the circuit $X=U (\rho_i\otimes \tau) U^\dagger$ is traced with respect to the CR qubit, then {\it (ii)} subjected to thermal noise via Eq.(\ref{dav}) and finally {\it (iii)} self--consistently compared to the input, i.e.:
\begin{eqnarray} \tau &=& (x(00,00)+x(10,10))D[p(0,0)]+(x(01,00)+x(11,10))D[p(1,0)]\nonumber \\&+&(x(00,01)+x(10,11))D[p(0,1)]+(x(01,01)+x(11,11))D[p(1,1)] \end{eqnarray}
resulting in a set of {\it linear} equations which allows one to calculate the parameters $a,b_r,b_i$. The CV qubit $\tau=\tau[1,1]p(0,0)+\tau[1,2]p(0,1)+\tau[2,1]p(1,0)+\tau[2,2]p(1,1)$ is then given by
\begin{eqnarray}\label{dd2} \tau[1,1]&=&1-\tau[2,2]=-2\,{\frac {p{{\rm e}^{-At}}-{{\rm e}^{-At}}-p+1}{{{\rm e}^{-At}}-2}}\nonumber \\ \tau[1,2]&=&\tau[2,1]^*={\frac {p{{\rm e}^{-t \left( i\omega+A+G \right) }}-{{\rm e}^{-t \left( i\omega+ G \right) }}p-{{\rm e}^{-t \left( i\omega+A+G \right) }}+{{\rm e}^{-t
\left( i\omega+G \right) }}}{{{\rm e}^{-At}}-2}} \end{eqnarray} and the output $\rho_f$ of the circuit calculated via Eq.(\ref{d2}) reads as follows \begin{eqnarray}\label{dd1} \rho_f[1,1]&=&1-\rho_f[2,2]=-2\,{\frac {p{{\rm e}^{-At}}-{{\rm e}^{-At}}-p+1}{{{\rm e}^{-At}}-2}}\nonumber \\ \rho_f[1,2]&=&\rho_f[2,1]^*= -1/2\,{\frac { \left( p{{\rm e}^{-t \left( i\omega+A+G \right) }}-{{\rm e}^ {-t \left( i\omega+G \right) }}p-{{\rm e}^{-t \left( i\omega+A+G \right) }}+{ {\rm e}^{-t \left( i\omega+G \right) }} \right) \sqrt {2}}{{{\rm e}^{-At}}- 2}} \end{eqnarray}
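Both closed forms can be recovered by solving the self--consistency condition numerically: the map $\tau \mapsto D[\mbox{Tr}_{CR}(U(\rho_i\otimes\tau)U^\dagger)]$ is a contraction here, so simple iteration reaches the fixed point. A sketch (again assuming the Davies map relaxes populations towards $(1-p,p)$ and damps coherences by $e^{-Gt}$; the modulus of the coherence is compared so as not to depend on the sign convention for $\omega$):

```python
import numpy as np

p, A, G, w, t = 0.3, 0.6, 1.0, 1.7, 0.8
e = np.exp(-A * t)

def davies(r):
    # assumed single-qubit Davies map (trace-one input)
    return np.array(
        [[r[0, 0] * e + (1 - p) * (1 - e), r[0, 1] * np.exp(1j * w * t - G * t)],
         [r[1, 0] * np.exp(-1j * w * t - G * t), r[1, 1] * e + p * (1 - e)]])

s2 = np.sqrt(2)
Had = np.array([[1, 1], [1, -1]]) / s2
P = lambda i, j: np.outer(np.eye(2)[i], np.eye(2)[j])
HC = np.kron(P(0, 0), np.eye(2)) + np.kron(P(1, 1), Had)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
U = HC @ SWAP                                   # qubit order (CR, CV)

minus = np.array([1, -1]) / s2
rho_i = np.outer(minus, minus)                  # input |-><-|

tau = np.eye(2, dtype=complex) / 2              # arbitrary start
for _ in range(400):
    X = U @ np.kron(rho_i, tau) @ U.conj().T
    tau = davies(np.einsum('aiaj->ij', X.reshape(2, 2, 2, 2)))  # Tr_CR

# closed forms of Eq. (dd2)
t11 = -2 * (p * e - e - p + 1) / (e - 2)
t12 = (p * np.exp(-t * (1j * w + A + G)) - p * np.exp(-t * (1j * w + G))
       - np.exp(-t * (1j * w + A + G)) + np.exp(-t * (1j * w + G))) / (e - 2)
print(abs(tau[0, 0].real - t11), abs(abs(tau[0, 1]) - abs(t12)))
```

The same loop with `rho_i` replaced by $|0\rangle\langle 0|$ reproduces Eq.(\ref{ddd2}).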
For an input $\rho_i=|0\rangle\langle 0|=p(0,0)$ the CV qubit, calculated via the same steps, is given by:
\begin{eqnarray}\label{ddd2} \tau[1,1]&=&1-\tau[2,2]=-{\frac {2\,p{{\rm e}^{-At}}-{{\rm e}^{-At}}-2\,p+2}{{{\rm e}^{-At}}-2 }} \nonumber \\ \tau[1,2]&=&\tau[2,1]^*={\frac {p \left( {{\rm e}^{-t \left( i\omega+A+G \right) }}-{{\rm e}^{-t
\left( i\omega+G \right) }} \right) }{{{\rm e}^{-At}}-2}} \end{eqnarray} and the corresponding output of the circuit calculated via Eq.(\ref{d2}) reads as follows \begin{eqnarray}\label{ddd1} \rho_f[1,1]&=&1-\rho_f[2,2]=-{\frac {2\,p{{\rm e}^{-At}}-{{\rm e}^{-At}}-2\,p+2}{{{\rm e}^{-At}}-2 }} \nonumber \\ \rho_f[1,2]&=&\rho_f[2,1]^*=1/2\,{\frac {p \left( {{\rm e}^{-t \left( i\omega+A+G \right) }}-{{\rm e}^{ -t \left( i\omega+G \right) }} \right) \sqrt {2}}{{{\rm e}^{-At}}-2}}
\end{eqnarray}
In the case of the unproven theorem paradox considered in Sec. V the circuit acts as a unitary \begin{eqnarray} U&=& SWAP_{MT}CNOT_{BM}CNOT_{TB} \end{eqnarray}
with \begin{eqnarray} CNOT_{TB}&=&p(000,000)+p(101,001)+p(010,010)+p(100,100)\nonumber \\ &+& p(111,011)+p(001,101)+p(110,110)+ p(011,111) \end{eqnarray} \begin{eqnarray} CNOT_{BM}&=&p(000,000)+ p(001,001)+p(010,010)+p(110,100)\nonumber\\&+& p(011,011)+p(111,101)+p(100,110)+ p(101,111) \end{eqnarray} and \begin{eqnarray} SWAP_{MT}&=&p(000,000)+ p(010,001)+p(001,010)+p(100,100)\nonumber \\&+& p(011,011)+p(110,101)+p(101,110)+ p(111,111) \end{eqnarray}
The input is $\rho_i=p(00,00)$ and $\tau$ is given in Eq.(\ref{taug}). One then follows the same steps as in the previous case, but with $X=U (\rho_i\otimes \tau) U^\dagger$ traced with respect to the CR qubits according to Eq.(\ref{tr3}) and subjected to thermal noise via Eq.(\ref{dav}), obtaining $\tau[1,1]=1-\tau[2,2]=1-p$ and $\tau[1,2]=\tau[2,1]=0$, corresponding to the Gibbs state of the time travelling qubit. \end{widetext}
\begin{thebibliography}{35} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Nielsen}\ and\ \citenamefont
{I.~Chuang}(2000)}]{nielsen}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Nielsen}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{I.~Chuang}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Computation
and Quantum Information}}}\ (\bibinfo {publisher} {Cambridge University
Press},\ \bibinfo {year} {2000})\BibitemShut {NoStop} \bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2009)\citenamefont
{Scarani}, \citenamefont {Bechmann-Pasquinucci}, \citenamefont {Cerf},
\citenamefont {Du\ifmmode~\check{s}\else \v{s}\fi{}ek}, \citenamefont
{L\"utkenhaus},\ and\ \citenamefont {Peev}}]{scarani2}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Scarani}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bechmann-Pasquinucci}}, \bibinfo {author} {\bibfnamefont {N.~J.}\
\bibnamefont {Cerf}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Du\ifmmode~\check{s}\else \v{s}\fi{}ek}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {L\"utkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Peev}},\ }\href {\doibase 10.1103/RevModPhys.81.1301}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf
{\bibinfo {volume} {81}},\ \bibinfo {pages} {1301} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin},
\citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont
{Zbinden}}]{crypto}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Gisin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}},
\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}}, \ and\ \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\href {\doibase
10.1103/RevModPhys.74.145} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145}
(\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {G\"odel}(1949)}]{godel}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{G\"odel}},\ }\href {\doibase 10.1103/RevModPhys.21.447} {\bibfield
{journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume}
{21}},\ \bibinfo {pages} {447} (\bibinfo {year} {1949})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Brun}\ and\ \citenamefont {Wilde}(2015)}]{brun_exp}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont
{Brun}}\ and\ \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont
{Wilde}},\ }\href {http://arxiv.org/abs/1504.05911} {\bibfield {journal}
{\bibinfo {journal} {arxiv.org/abs/1504.05911}\ } (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brun}\ \emph {et~al.}(2009)\citenamefont {Brun},
\citenamefont {Harrington},\ and\ \citenamefont {Wilde}}]{brun_disting}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont
{Brun}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Harrington}}, \
and\ \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont {Wilde}},\ }\href
{\doibase 10.1103/PhysRevLett.102.210402} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo
{pages} {210402} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brun}\ and\ \citenamefont {Wilde}(2012)}]{brun_fund}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont
{Brun}}\ and\ \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont
{Wilde}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Found.
Phys.}\ }\textbf {\bibinfo {volume} {42}},\ \bibinfo {pages} {341} (\bibinfo
{year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Deutsch}(1991)}]{deutsch}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Deutsch}},\ }\href {\doibase 10.1103/PhysRevD.44.3197} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {44}},\
\bibinfo {pages} {3197} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ringbauer}\ \emph {et~al.}(2014)\citenamefont
{Ringbauer}, \citenamefont {Broome}, \citenamefont {Myers}, \citenamefont
{White},\ and\ \citenamefont {Ralph}}]{ralph}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Ringbauer}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Broome}}, \bibinfo {author} {\bibfnamefont {C.~R.}\ \bibnamefont {Myers}},
\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {White}}, \ and\
\bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont {Ralph}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Nature Communications}\
}\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {4145} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wallman}\ and\ \citenamefont
{Bartlett}(2012)}]{wal_fund}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont
{Wallman}}\ and\ \bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont
{Bartlett}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Found. Phys.}\ }\textbf {\bibinfo {volume} {42}},\ \bibinfo {pages} {656}
(\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Allen}(2014)}]{allen}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-M.~A.}\
\bibnamefont {Allen}},\ }\href {\doibase 10.1103/PhysRevA.90.042107}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {90}},\ \bibinfo {pages} {042107} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Svetlichny}(2011)}]{svet}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Svetlichny}},\ }\href {\doibase 10.1007/s10773-011-0973-x} {\bibfield
{journal} {\bibinfo {journal} {International Journal of Theoretical
Physics}\ }\textbf {\bibinfo {volume} {50}},\ \bibinfo {pages} {3903}
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lloyd}\ \emph
{et~al.}(2011{\natexlab{a}})\citenamefont {Lloyd}, \citenamefont {Maccone},
\citenamefont {Garcia-Patron}, \citenamefont {Giovannetti}, \citenamefont
{Shikano}, \citenamefont {Pirandola}, \citenamefont {Rozema}, \citenamefont
{Darabi}, \citenamefont {Soudagar}, \citenamefont {Shalm},\ and\
\citenamefont {Steinberg}}]{seth_prl}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Lloyd}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Garcia-Patron}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Shikano}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Pirandola}}, \bibinfo {author} {\bibfnamefont {L.~A.}\
\bibnamefont {Rozema}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Darabi}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Soudagar}},
\bibinfo {author} {\bibfnamefont {L.~K.}\ \bibnamefont {Shalm}}, \ and\
\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}},\ }\href
{\doibase 10.1103/PhysRevLett.106.040403} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo
{pages} {040403} (\bibinfo {year} {2011}{\natexlab{a}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Lloyd}\ \emph
{et~al.}(2011{\natexlab{b}})\citenamefont {Lloyd}, \citenamefont {Maccone},
\citenamefont {Garcia-Patron}, \citenamefont {Giovannetti},\ and\
\citenamefont {Shikano}}]{seth_prd}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Lloyd}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Garcia-Patron}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \ and\ \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Shikano}},\ }\href {\doibase
10.1103/PhysRevD.84.025007} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. D}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {025007}
(\bibinfo {year} {2011}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Elze, Hans-Thomas}}(2013)}]{elze_time}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibnamefont {{Elze, Hans-Thomas}}},\
}\href {\doibase 10.1051/epjconf/20135801013} {\bibfield {journal} {\bibinfo
{journal} {EPJ Web of Conferences}\ }\textbf {\bibinfo {volume} {58}},\
\bibinfo {pages} {01013} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vaidman}(2013)}]{vaidman_past}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Vaidman}},\ }\href {\doibase 10.1103/PhysRevA.87.052104} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{87}},\ \bibinfo {pages} {052104} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Alicki}\ and\ \citenamefont {Lendi}(2007)}]{alicki}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Alicki}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Lendi}},\
}\href@noop {} {\emph {\bibinfo {title} {Quantum Dynamical Semigroups and
Applications}}},\ Lecture Notes in Physics\ (\bibinfo {publisher}
{Springer},\ \bibinfo {year} {2007})\BibitemShut {NoStop} \bibitem [{\citenamefont {K{\l}oda}\ and\ \citenamefont {Dajka}(2014)}]{kloda}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{K{\l}oda}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dajka}},\ }\href {\doibase 10.1007/s11128-014-0831-x} {\bibfield {journal}
{\bibinfo {journal} {Quantum Information Processing}\ ,\ \bibinfo {pages}
{1}} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lendi}\ and\ \citenamefont {Wonderen}(2007)}]{lendi}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Lendi}}\ and\ \bibinfo {author} {\bibfnamefont {A.~J.~v.}\ \bibnamefont
{Wonderen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of Physics A: Mathematical and Theoretical}\ }\textbf {\bibinfo
{volume} {40}},\ \bibinfo {pages} {279} (\bibinfo {year} {2007})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Dajka}\ \emph {et~al.}(2012)\citenamefont {Dajka},
\citenamefont {Mierzejewski}, \citenamefont {\L{}uczka}, \citenamefont
{Blattmann},\ and\ \citenamefont {H{\"a}nggi}}]{mymy}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dajka}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mierzejewski}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {\L{}uczka}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Blattmann}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {H{\"a}nggi}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Journal of Physics A:
Mathematical and Theoretical}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo
{pages} {485306} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dajka}\ and\ \citenamefont
{\L{}uczka}(2013)}]{mymy2}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dajka}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{\L{}uczka}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {022301}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dajka}\ \emph {et~al.}(2011)\citenamefont {Dajka},
\citenamefont {\L{}uczka},\ and\ \citenamefont {H{\"a}nggi}}]{dav_faza}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dajka}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {\L{}uczka}}, \
and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {H{\"a}nggi}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum
Information Processing}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages}
{85} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Szel{\c a}g}\ \emph {et~al.}(2008)\citenamefont
{Szel{\c a}g}, \citenamefont {Dajka}, \citenamefont {Zipper},\ and\
\citenamefont {\L{}uczka}}]{dav_heat}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Szel{\c a}g}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Dajka}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Zipper}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {\L{}uczka}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Acta Physica Polonica B}\
}\textbf {\bibinfo {volume} {39}},\ \bibinfo {pages} {1177} (\bibinfo {year}
{2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dajka}\ \emph {et~al.}(2015)\citenamefont {Dajka},
\citenamefont {K{\l}oda}, \citenamefont {{\L}obejko},\ and\ \citenamefont
{S{\l}adkowski}}]{dajka_game}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dajka}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {K{\l}oda}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{\L}obejko}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {S{\l}adkowski}},\ }\href
{\doibase 10.1371/journal.pone.0134916} {\bibfield {journal} {\bibinfo
{journal} {PLoS ONE}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages}
{1} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Roga}\ \emph {et~al.}(2010)\citenamefont {Roga},
\citenamefont {Fannes},\ and\ \citenamefont {Zyczkowski}}]{dav}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Roga}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fannes}}, \ and\
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Zyczkowski}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reports on
Mathematical Physics}\ }\textbf {\bibinfo {volume} {66}},\ \bibinfo {pages}
{311 } (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Levitt}(2008)}]{T12}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Levitt}},\ }\href@noop {} {\emph {\bibinfo {title} {Spin Dynamics: Basics of
Nuclear Magnetic Resonance}}}\ (\bibinfo {publisher} {Wiley},\ \bibinfo
{year} {2008})\BibitemShut {NoStop} \bibitem [{\citenamefont {Schuster}\ \emph {et~al.}(2007)\citenamefont
{Schuster}, \citenamefont {Houck}, \citenamefont {Schreier}, \citenamefont
{Wallraff}, \citenamefont {Gambetta}, \citenamefont {Blais}, \citenamefont
{Frunzio}, \citenamefont {Majer}, \citenamefont {Johnson}, \citenamefont
{Devoret}, \citenamefont {Girvin},\ and\ \citenamefont {Schoelkopf}}]{defaz}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~I.}\ \bibnamefont
{Schuster}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}},
\bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Schreier}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}}, \bibinfo {author}
{\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Frunzio}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Johnson}}, \bibinfo {author} {\bibfnamefont {M.~H.}\
\bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {445}},\ \bibinfo {pages}
{515} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}(1992)}]{B92}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Bennett}},\ }\href {\doibase 10.1103/PhysRevLett.68.3121} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {68}},\ \bibinfo {pages} {3121} (\bibinfo {year}
{1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alicki}(2004)}]{alidef}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Alicki}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Open
Syst. Inf. Dyn.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {53}
(\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dajka}\ and\ \citenamefont
{\L{}uczka}(2010)}]{distance}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dajka}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{\L{}uczka}},\ }\href {\doibase 10.1103/PhysRevA.82.012341} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{82}},\ \bibinfo {pages} {012341} (\bibinfo {year} {2010})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Laine}\ \emph {et~al.}(2010)\citenamefont {Laine},
\citenamefont {Piilo},\ and\ \citenamefont {Breuer}}]{laine}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.-M.}\ \bibnamefont
{Laine}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Piilo}}, \ and\
\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont {Breuer}},\ }\href
{http://stacks.iop.org/0295-5075/92/i=6/a=60010} {\bibfield {journal}
{\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume}
{92}},\ \bibinfo {pages} {60010} (\bibinfo {year} {2010})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Politzer}(1994)}]{politzer}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~D.}\ \bibnamefont
{Politzer}},\ }\href {\doibase 10.1103/PhysRevD.49.3981} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume}
{49}},\ \bibinfo {pages} {3981} (\bibinfo {year} {1994})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {DeJonghe}\ \emph {et~al.}(2010)\citenamefont
{DeJonghe}, \citenamefont {Frey},\ and\ \citenamefont {Imbo}}]{dejonghe}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{DeJonghe}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Frey}}, \
and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Imbo}},\ }\href
{\doibase 10.1103/PhysRevD.81.087501} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo
{pages} {087501} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2005)\citenamefont
{Scarani}, \citenamefont {Iblisdir}, \citenamefont {Gisin},\ and\
\citenamefont {Ac\'{i}n}}]{scarani1}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Scarani}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Iblisdir}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Ac\'{i}n}},\ }\href {\doibase
10.1103/RevModPhys.77.1225} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {1225}
(\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schlosshauer}(2007)}]{schloss}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Schlosshauer}},\ }\href@noop {} {\emph {\bibinfo {title} {Decoherence and
the quantum-to-classical transition}}}\ (\bibinfo {publisher} {Springer},\
\bibinfo {year} {2007})\BibitemShut {NoStop} \end{thebibliography}
\end{document}
\begin{document}
\title[Rotationally typically real logharmonic mappings ]{Characterizing rotationally typically real logharmonic mappings}
\author[N. M. Alarifi]{Najla M. Alarifi}
\address{Department of Mathematics, Imam Abdulrahman Bin Faisal University, Dammam 31113, Kingdom of Saudi Arabia} \email{najarifi@hotmail.com}
\author[Z. Abdulhadi]
{Zayid AbdulHadi} \address{Department of Mathematics\\ American University of Sharjah\\ Sharjah, Box 26666\\ UAE} \email{zahadi@aus.edu}
\author[R. M. Ali]{Rosihan M. Ali}
\address{School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia} \email{rosihan@usm.my}
\begin{abstract} This paper treats the class of normalized logharmonic mappings $f(z)=zh(z)\overline{g(z)}$ in the unit disk for which $\varphi(z)=zh(z)g(z)$ is analytically typically real. Every such mapping $f$ is shown to be a product of two particular logharmonic mappings, each of which admits an integral representation. Also obtained are the radius of starlikeness and an upper estimate for the arclength. Additionally, it is shown that $f$ maps the unit disk into a domain symmetric with respect to the real axis when it is univalent and its second dilatation has real coefficients. \end{abstract} \subjclass[2010]{Primary 30C35, 30C45} \keywords{Logharmonic mappings; typically real functions; radius of starlikeness; arclength. } \maketitle \section{Introduction}
Let $\mathcal{H}(U)$ be the linear space of analytic functions defined in the unit disk $U=\{z: |z|<1\}$ of the complex plane $\mathbb{C}.$ Let $B$ denote the set of analytic self-maps $a$ of $U,$ and
$B_0$ its subclass consisting of those $a \in B$ with $a(0)=0.$ A logharmonic mapping in $U$ is a solution of the nonlinear elliptic partial differential equation \begin{equation}\label{ede} \overline{\left( \frac{f_{\overline{z}}(z)}{f(z)} \right)}=a(z)\frac{f_{z}(z)}{f(z)},
\end{equation} where the second dilatation function $a$ lies in $B$. Thus the Jacobian
\begin{equation*}
J_{f}=\left\vert f_{z}\right\vert ^{2}(1-|a|^{2})
\end{equation*} is positive, and all non-constant logharmonic mappings are sense-preserving and open in $U$.
If $f$ is a non-constant logharmonic mapping which vanishes only at $z=0$, then \cite{Abd1} $f$ admits the representation
\begin{equation} \label{eq1.2}
f(z)=z^{m}|z|^{2\beta m}h(z)\overline{g(z)},
\end{equation} where $m$ is a positive integer, ${\rm Re\ } \beta >-1/2$, and $h, g \in \mathcal{H}(U)$ satisfy $g(0)=1$ and $h(0)\neq 0.$ The exponent $\beta $ in \eqref{eq1.2} depends only on $a(0)$ and is given by \begin{equation*}
\beta =\overline{a(0)}\dfrac{1+a(0)}{1-|a(0)|^{2}}.
\end{equation*} Note that $f(0)\neq 0$ if and only if $m=0$, and that a univalent logharmonic mapping vanishes at the origin if and only if $m=1$, that is, $f$ has the form
\begin{equation*}
f(z)=z|z|^{2\beta }h(z)\overline{g(z)},
\end{equation*} where $0\notin (hg)(U).$ This class has been studied extensively over recent years in \cite{Abd1,RAli,RAli1,Abd2,Abd3,Abd4,Abd5,Abd11}.
As further evidence of its importance, note that $F(\zeta )=\log f(e^{\zeta })$ is a univalent harmonic mapping of the half-plane $\{\zeta :{\rm Re \ } \zeta <0\}$. Studies on univalent harmonic mappings, which are closely related to the theory of minimal surfaces (see \cite{Nit,Oss}), can be found in \cite{Abu,Sheil,Dur1,Dur2,Dur3,Hen1,Hen2,Jun}.
Denote by $S_{Lh}$ the class consisting of univalent logharmonic mappings $f$ in $U$ with respect to some $a \in B_0$ of the form \begin{equation*} f(z)=z h(z)\overline{g(z)},
\end{equation*} normalized by $h(0)= 1=g(0) ,$ and $h$ and $g$ are nonvanishing analytic functions in $U.$ Also let $S_{Lh}^{\ast }$ denote its subclass of univalent starlike logharmonic mappings.
An analytic function $\varphi$ in $U$ is typically real if $\varphi(z)$ is real whenever $z$ is real and nonreal elsewhere. Similarly, a logharmonic mapping $f$ in $U$ is typically real if $f(z)$ is real whenever $z$ is real and nonreal elsewhere. Investigations into typically real logharmonic mappings were initiated by Abdulhadi in \cite{Abd3}.
This paper treats the class $T_{Lh}$ of logharmonic mappings $f(z)=zh(z)\overline{g(z)}$ for which $\varphi (z)=zh(z)g(z)$ belongs to $HG$ and is analytically typically real in $U$. Here $HG$ is the class of analytic functions $\varphi(z)=zh(z)g(z),$ where $h$ and $g$ in $\mathcal{H}(U)$ are normalized by $h(0)=1=g(0),$ and $0\notin (hg)(U).$ It is evident that mappings $\varphi (z)=zh(z)g(z)$ in the class $HG$ are rotations of the corresponding logharmonic mappings $f(z)=zh(z)\overline{g(z)}$.
In Section 2, every mapping $f \in T_{Lh}$ is shown to be a product of two particular logharmonic mappings, each of which admits an integral representation. The radius of starlikeness is also obtained for the class $T_{Lh}$, as well as an upper estimate for its arclength.
For an analytic univalent function $f(z)=z+\sum^\infty_{n=2}a_n z^n,$ it is known \cite{Abd3} that $f$ is typically real if and only if the image $f(U)$ is a domain symmetric with respect to the real axis. However, this characterization no longer holds for logharmonic mappings: it is not true that a univalent logharmonic mapping $F(z)=zh(z)\overline{g(z)}$ belongs to $T_{Lh}$ if and only if the image $F(U)$ is a domain symmetric with respect to the real axis. In Section 3 we explore conditions on the dilatation $a$ ensuring that the image $f(U)$ of a univalent logharmonic mapping
$f(z)=zh(z)\overline{g(z)}\in T_{Lh}$ is symmetric with respect to the real axis. Sufficient conditions for univalent logharmonic mappings to be in the class $T_{Lh} $ are also determined.
\section{An integral representation and radius of starlikeness}
The first result establishes an integral representation for logharmonic mappings. \begin{lemma}\label{lem1} Let $f(z)=zh(z)\overline{g(z)}$ be a logharmonic mapping with respect to $a\in B,$ and let $\varphi (z)=zh(z)g(z)$ with $h, g \in \mathcal{H}(U).$ Then
\begin{align*} f(z)=\varphi (z)\exp \left(-2i\, {\rm Im\ }\int_{0}^{z}\dfrac{a(s)}{1+a(s)}\dfrac{\varphi ^{\prime }(s)}{\varphi (s)}ds\right). \end{align*} \end{lemma}
\begin{proof} Since \begin{align}\label{eq2.2} f(z)=\varphi (z)\dfrac{\overline{g(z)}}{g(z)}, \end{align} it follows from \eqref{ede} that \begin{align*} \frac{g^{\prime }(z)}{g(z)}=a(z)\left(\frac{\varphi'(z)}{\varphi(z) }-\frac{g'(z)}{g(z)} \right). \end{align*} Thus \begin{equation}\label{eqg} \dfrac{g'(z)}{g(z)}=\frac{a(z)}{1+a(z)}\frac{\varphi'(z)}{\varphi(z) },
\end{equation} which yields \begin{equation}\label{eq2.3} g(z)=\exp \int_{0}^{z}\dfrac{a(s)}{1+a(s)}\dfrac{\varphi ^{\prime }(s)}{ \varphi (s)}ds.
\end{equation} The result is readily inferred by substituting \eqref{eq2.3} into \eqref{eq2.2}. \end{proof}
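As a simple illustration of Lemma \ref{lem1} (with the particular choices $a(z)=z$ and $\varphi(z)=z/(1-z)^{2},$ made here only for illustration), one has $\varphi'(s)/\varphi(s)=(1+s)/(s(1-s)),$ so that
\begin{equation*}
\frac{a(s)}{1+a(s)}\,\frac{\varphi ^{\prime }(s)}{\varphi (s)}=\frac{s}{1+s}\cdot\frac{1+s}{s(1-s)}=\frac{1}{1-s},
\end{equation*}
and \eqref{eq2.3} yields $g(z)=1/(1-z).$ Hence
\begin{equation*}
f(z)=\varphi (z)\frac{\overline{g(z)}}{g(z)}=\frac{z}{(1-z)^{2}}\cdot \frac{1-z}{1-\overline{z}}=\frac{z}{(1-z)(1-\overline{z})}
\end{equation*}
is a logharmonic mapping in $U$ with respect to $a(z)=z.$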
Let $T_{Lh}^{0}$ denote the subclass of $T_{Lh}$ consisting of logharmonic mappings $q$ in $U$ with respect to $a \in B $ of the form $q(z)=zu(z)\overline{v(z)} $ and satisfying $ zu(z)v(z)=z/(1-z^{2})$.
It follows from Lemma \ref{lem1} that
\begin{align}\label{eq2.4} q(z)=\frac{z}{1-z^{2}}\exp \left( -2i\, {\rm Im\ } \int_{0}^{z}\frac{a(s)}{1+a(s)} \frac{1+s^{2}}{s(1-s^{2})}ds\right). \end{align}
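In particular, for the trivial dilatation $a\equiv 0,$ the exponential factor in \eqref{eq2.4} equals $1$ and $q$ reduces to the analytic typically real function
\begin{equation*}
q(z)=\frac{z}{1-z^{2}}=\sum_{n=0}^{\infty }z^{2n+1}.
\end{equation*}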
Denote by $ \mathcal{P} _{\mathbb{R}}$ the class of normalized analytic functions with positive real part and with real coefficients in $U.$ Further denote by $\mathcal{P}_{Lh} $ the class consisting of logharmonic mappings $w$ with respect to $a \in B $ of the form $w(z)=s(z)\overline{t(z)},$ where $s,t \in \mathcal{H}(U) $ are normalized by $ s(0) = 1=t(0), $ and satisfy $p(z) =s(z)t(z)\in \mathcal{P} _{\mathbb{R}} .$
Similar to the proof of Lemma \ref{lem1}, it is readily established that
\begin{equation}\label{eq2.5} w(z)=p(z)\exp \left(-2i\, {\rm Im\ } \int_{0}^{z}\frac{a(s)}{1+a(s)}\dfrac{p^{\prime }(s)}{ p(s)}ds\right).
\end{equation} Note that the class $\mathcal{P}_{Lh}$ also contains the set $\mathcal{P} _{\mathbb{R}}.$
The following result gives a representation formula for functions in the class $T_{Lh}$ in terms of functions in $T_{Lh}^0$ and $\mathcal{P}_{Lh}.$
\begin{theorem}\label{thm1} A function $f$ belongs to $T_{Lh}$ with respect to $a\in B $ if and only if $f(z)=q(z)w(z) $ for some $q\in T_{Lh}^{0}$ and some $w\in \mathcal{P}_{Lh},$ both with respect to the same $a\in B.$ \end{theorem}
\begin{proof} Let $f(z)=zh(z)\overline{g(z)}\in T_{Lh}$ with respect to $a\in B.$ It is known \cite{Rog} that every typically real analytic function $\varphi$ admits the representation $(1-z^2)\varphi(z)=zp(z)$ for some $p \in \mathcal{P}_{\mathbb{R}}$. Thus Lemma \ref{lem1} yields
\begin{align*} f(z) &=\frac{zp(z)}{1-z^{2}}\exp \left( -2i{\rm Im\ } \int_{0}^{z}\frac{a(s)}{1+a(s)} \left( \frac{1+s^{2}}{s(1-s^{2})}+\frac{p'(s)}{p(s)}\right) ds \right) \\&=\left(\frac{z}{1-z^{2}}\exp \left( -2i{\rm Im\ } \int_{0}^{z}\frac{a(s)}{ 1+a(s)}\frac{1+s^{2}}{s(1-s^{2})}ds\right) \right) \\& \quad \quad \times \left( p(z)\exp \left( -2i{\rm Im\ } \int_{0}^{z}\dfrac{a(s)}{1+a(s)}\frac{p^{\prime }(s)}{p(s)}ds\right) \right) \\&:=q(z)w(z), \end{align*} where from \eqref{eq2.4} and \eqref{eq2.5},
\[ q(z)=\frac{z}{1-z^{2}}\exp \left( -2i{\rm Im\ } \int_{0}^{z}\frac{a(s)}{ 1+a(s)}\frac{1+s^{2}}{s(1-s^{2})}ds\right) \in T_{Lh}^{0}, \] and
\[w(z)=p(z)\exp \left( -2i{\rm Im\ } \int_{0}^{z}\dfrac{a(s)}{1+a(s)}\dfrac{p^{\prime }(s)}{p(s)}ds\right) \in \mathcal{P}_{Lh}.
\]
Conversely, if $f(z)=q(z)w(z) =zh(z)\overline{ g(z)} ,$ then
\eqref{eq2.4} and \eqref{eq2.5} yield
\[h(z)=\frac{p(z)}{1-z^{2}}\exp \Bigg( - \int_{0}^{z}\frac{a(s)} {1+a(s)}\bigg( \frac{1+s^{2}}{s(1-s^{2})}+ \dfrac{p^{\prime }(s)}{p(s)}\bigg) ds\Bigg),
\] and \[g(z)=\exp \int_{0}^{z}\frac{a(s)}{ 1+a(s)}\bigg( \frac{1+s^{2}}{s(1-s^{2})}+ \dfrac{p^{\prime }(s)}{p(s)}\bigg)ds .
\] Thus $\varphi(z)=zh(z)g(z)=zp(z)/(1-z^2),$ $ p \in \mathcal{P}_{\mathbb{R}}.$ It follows from \cite{Rog} that $\varphi$ is typically real, and hence $f \in T_{Lh}.$ \end{proof}
\begin{corollary}\label{cor1} If $f $ belongs to $T_{Lh}$ with respect to $a\in B, $ then $q^2(z)/f(z)\in T_{Lh} $ for some $q\in T_{Lh}^{0}$ with respect to the same $a\in B. $ \end{corollary} \begin{proof} It follows from Theorem \ref{thm1} that $f(z) =q(z) w(z),$ where $q\in T_{Lh}^{0}$ and $w(z) =s(z)\overline{t(z)} \in\mathcal{P}_{Lh}.$ Since
\begin{align*} \overline{\Bigg(\frac{ \big(\frac{1}{w }\big)_{\overline{z}}}{ \frac{1}{w } }\Bigg)}
= \overline{\Bigg(\frac{ -(w ) _{\overline{z}}}{w } \Bigg)} =\frac{-a (w)_{z}}{w}=a \frac{\left(\frac{1}{w} \right)_{z}}{\frac{1}{w}}, \end{align*} it follows that $1/w$ is logharmonic with respect to the same $a.$ Furthermore, $ 1/w = 1/(s \overline{t })=( 1/ s ) (\overline{ 1/t }) ,$ and \begin{align*}
{\rm Re \ } \left(\frac{1}{s t }\right)&={\rm Re \ } \left(\frac{\overline{ s t }}{|s t |^2}\right)=\frac{{\rm Re \ }( s t )}{|s t |^2}>0. \end{align*} Also, $1/w$ has real coefficients. Thus the function $1/w \in \mathcal{P}_{Lh}$. Hence Theorem \ref{thm1} shows that $ q/w = q^{2}/f \in T_{Lh}.$ \end{proof}
The next result obtains an estimate for the radius of starlikeness for the class $T_{Lh}$. \begin{theorem}\label{thm2} Let $f(z)=zh(z)\overline{g(z)}\in T_{Lh} .$
Then $f$ maps the disk $|z|<3-2\sqrt{2}$ onto a starlike domain.\end{theorem}
\begin{proof}
The function $f$ maps the circle $|z|=r$ onto a starlike curve provided \[\frac{\partial}{\partial \theta} \arg f(re^{i\theta}) = {\rm Im } \left(\frac{\partial}{\partial \theta} \log f(re^{i\theta})\right) ={\rm Re\ } \frac{zf_{z}-\overline{z}f_{\overline{z}}}{f} >0.
\] With $\varphi(z)=zh(z)g(z),$ a short computation gives
\begin{equation*} {\rm Re\ } \frac{zf_{z}-\overline{z}f_{\overline{z}}}{f}= {\rm Re\ } \left( \frac{1-a(z)}{1+a(z)} \frac{z\varphi'(z)}{\varphi(z)} \right) \end{equation*} for some $a \in B.$
Next let
\[q(z)=\frac{1-a(z)}{1+a(z)} \frac{z\varphi'(z)}{\varphi(z)},
\] and $\sigma(z)=\rho_{0}z.$ Kirwan \cite{Kir} has shown that the radius of starlikeness for typically real analytic functions $\varphi$ is $\rho_{0}=\sqrt{2}-1.$ Thus ${\rm Re\ } \big(z(\varphi\circ\sigma)'(z)/(\varphi\circ\sigma)(z)\big)>0$ in $U,$ and so $q(\sigma(z))$ is subordinated to $((1+z)/(1-z))^{2}$ in $U$.
Writing $p(z)=(1+z)/(1-z),$ it follows from \cite[p.\ 84]{goodman} that
\begin{equation*}
\left|p(z)-\frac{1+r^2}{1-r^2}\right| \leq \frac{2r}{1-r^2}.
\end{equation*}
Thus $|\arg(p(z))| < \pi/4$ provided $|z| < \rho_{0},$ where $\rho_{0}=\sqrt{2}-1$ is the smallest positive root of the equation $ r^2-2\sqrt{2} r+1=0.$ The function $f(z)=zh(z)\overline{g(z)}$ is thus starlike in the disk $|z|<\rho_{0}^2=3-2\sqrt{2}.$ \end{proof}
In the next result, an upper estimate is established for arclength of all mappings $f$ in the class $T_{Lh}.$
\begin{theorem}\label{thm3} Let $f(z)=zh(z)\overline{g(z)}\in T_{Lh},$
and $|f(z)|\leq M(r)$, $0<r<1$. Then an upper bound for its arclength $L(r)$ is given by \begin{equation*} L(r)\leq 4\pi M(r)\dfrac{1+r+2r^2-2r^3}{(1-r)(1-r^{2})}.
\end{equation*} \end{theorem}
\begin{proof}
Let $C_{r}$ denote the image of the circle $|z|=r<1 $ under the mapping $w=f(z)$. Then \begin{align*}\label{eq2.9}
L(r) &=\int_{C_{r}}|df|\ =\int_{0}^{2\pi }|zf_{z}-\overline{z}f_{\overline{z
}}|d\theta \notag \\ &\leq M(r)\int_{0}^{2\pi }\left\vert \dfrac{zf_{z}-\overline{z}f_{\overline{ z}}}{f}\right\vert d\theta . \end{align*}
Since $\varphi (z)=zh(z)g(z)=zp(z)/(1-z^{2})$ for some $p \in \mathcal{P} _{\mathbb{R}},$ it follows that
\[\frac{zf_{z}-\overline{z}f_{\overline{z}}}{f}={\rm Re\ } \left( \frac{1-a(z)}{ 1+a(z)}\left( \frac{zp'(z)}{p(z)}+\frac{1+z^{2}}{1-z^{2}}\right) \right) +i{\rm Im\ } \left( \frac{zp'(z)}{p(z)}+\frac{1+z^{2}}{1-z^{2}} \right) .
\]
Therefore,
\begin{align*} \frac{L(r)}{M(r)} &\leq \int_{0}^{2\pi }\left\vert {\rm Re\ }\bigg( \dfrac{1-a(z)}{1+a(z)}\left( \dfrac{zp^{\prime }(z)}{p(z)}\right)\bigg)\right\vert d\theta \notag \\ &\quad \quad +\int_{0}^{2\pi }\left\vert {\rm Re\ }\bigg( \dfrac{1-a(z)}{1+a(z)}\left(\dfrac{1+z^{2}}{1-z^{2}}\right)\bigg) \right\vert d\theta \notag \\ &\quad \quad +\int_{0}^{2\pi }\left\vert {\rm Im\ } \dfrac{zp^{\prime }(z)}{p(z)} \right\vert d\theta + \int_{0}^{2\pi }\left\vert {\rm Im\ } \dfrac{1+z^{2}}{ 1-z^{2}}\right\vert d\theta, \notag \end{align*} that is, \begin{align}\label{eq2.10} L(r)\leq M(r)\left(I_{1}+I_{2}+I_{3}+I_{4}\right). \end{align}
The function $p$ is subordinate to $(1+z)/(1-z),$ and thus $zp'(z)/p(z)=2zw'(z)/(1-w^{2}(z))$ for some analytic self-map $w$ of $U$ with $w(0)=0$. The Schwarz-Pick inequality states
\[\frac{|w'(z)|}{1-|w(z)|^2} \leq \frac{1}{1-|z|^2}.\] Thus \begin{align*}\label{eq2.11} I_{1} &=\int_{0}^{2\pi }\left\vert {\rm Re\ } \left(\dfrac{1-a(z)}{1+a(z)}\bigg( \dfrac{zp^{\prime }(z)}{p(z)}\bigg)\right)\right\vert d\theta \notag \\ &\leq \int_{0}^{2\pi }\left\vert \dfrac{1+z}{1-z}\right\vert \left\vert \dfrac{2zw'(z)}{1-w^{2}(z)}\right\vert d\theta \notag
\leq \dfrac{1+|z|}{1-|z|} \int_{0}^{2\pi }\dfrac{2|z|}{1-|z|^{2}}d\theta \notag \\ &= \dfrac{4\pi r}{(1-r)^2}. \end{align*}
Since $[(1-a(z))/(1+a(z))][(1+z^{2})/(1-z^{2})]$ is subordinated to $((1+z)/(1-z) )^{2},$ it follows from Parseval's theorem that \begin{align*} I_{2}&=\int_{0}^{2\pi }\left\vert {\rm Re\ }\left( \dfrac{1-a(z)}{1+a(z)}\bigg(\dfrac{1+z^{2}}{1-z^{2}}\bigg)\right) \right\vert d\theta \notag\\
&\leq \int_{0}^{2\pi }\left\vert \ \left( \dfrac{1+z}{1-z}\right) ^{2}\right\vert d\theta \leq 2\pi \left( 1+4\underset{n=1}{\overset{\infty }{ \sum }}r^{2n}\right) \notag \\ &= 2\pi \left( \dfrac{1+3r^{2}}{1-r^{2}}\right). \end{align*}
Also, \begin{align*} I_{3} &=\int_{0}^{2\pi }\left\vert \ {\rm Im\ } \dfrac{zp'(z)}{p(z)} \right\vert d\theta \leq \int_{0}^{2\pi }\left\vert \dfrac{2zw'(z)}{1-w^{2}(z)}\right\vert d\theta \notag \\
&\leq \int_{0}^{2\pi }\dfrac{2|z|}{1-|z|^{2}}d\theta
=\dfrac{4\pi r}{1-r^{2}}, \end{align*}
and \begin{align*} I_{4}&=\int_{0}^{2\pi }\left\vert {\rm Im\ } \dfrac{1+z^{2}}{1-z^{2}}\right\vert d\theta \leq \int_{0}^{2\pi }\left\vert \dfrac{1+z^{2}}{1-z^{2}}\right\vert d\theta \notag \\
&\leq 2\pi \dfrac{1+r^{2}}{1-r^{2}}. \end{align*}
Substituting the bounds for \ $I_{1},$ $\ I_{2},$ $\ I_{3}$ and $I_{4}$ into \eqref{eq2.10} yields
\begin{align*} L(r)\leq 4\pi M(r)\dfrac{1+r+2r^2-2r^3}{(1-r)(1-r^{2})}. \end{align*} \end{proof}
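For the reader's convenience, we record the routine combination of the four bounds used in the last step:
\begin{align*}
I_{1}+I_{2}+I_{3}+I_{4} &=\frac{4\pi r}{(1-r)^{2}}+2\pi\,\frac{1+3r^{2}}{1-r^{2}}+\frac{4\pi r}{1-r^{2}}+2\pi\,\frac{1+r^{2}}{1-r^{2}}\\ &=4\pi\,\frac{ r(1+r)+ (1+r+2r^{2})(1-r)}{(1-r)(1-r^{2})} =4\pi\,\dfrac{1+r+2r^2-2r^3}{(1-r)(1-r^{2})}.
\end{align*}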
\section{Univalent logharmonic mappings in the class $T_{Lh}$}
As noted earlier, an analytic univalent function $\varphi$ is typically real if and only if the image $\varphi(U)$ is a domain symmetric with respect to the real axis. Such a geometric characterization no longer holds true for the class $T_{Lh}$. The following example illustrates a univalent logharmonic mapping $F(z)=zh(z)\overline{g(z)}\in T_{Lh}$ for which $F(U)$ is not a symmetric domain.
\begin{example}\label{exam1} Let \begin{equation*} F(z)=zh(z)\overline{g(z)}=z\left(1+\frac{iz}{3}\right ) \left(1+\frac{i\overline{z}}{3}\right ).
\end{equation*} It is evident that $F(0)=0,$ $h(0)= 1=g(0),$ where $h(z)=1+ iz/3 $ and $g(z)=1-i z/3 .$ Also, \begin{equation*}
|a(z)|=\left\vert F\overline{F_{\overline{z}}} /F_{z}\overline{F}\right\vert =\left\vert \dfrac{-iz\left( 3+iz\right) }{(3-iz)(3+2iz)}\right\vert <1.
\end{equation*} Thus $F$ is a normalized logharmonic mapping in $U $ with respect to $ a\in B_0.$
Let \begin{equation*}
\psi(z)= \frac{zh(z)}{g(z)}=\frac{z\left(1+\frac{iz}{3}\right)}{\left(1-\frac{iz}{3}\right)}.
\end{equation*} Then $\psi(0)=0,$ $\psi'(0)=1,$ and
\begin{equation*}
{\rm Re\ }\left(\frac{z\psi'(z)}{\psi(z)} \right)={\rm Re\ }\left(\frac{ z^2+6iz+9}{ z^2+9} \right) >
\frac{10}{(9+r^2\cos 2\theta)^2+r^4\sin ^2 {2\theta}}>0.
\end{equation*} Hence $\psi\in S^{\ast },$ and thus \cite{Abd5} shows that $F$ is in fact a univalent starlike logharmonic mapping.
Next, let $\varphi (z)=zh(z)g(z)=z\left( 1+z^{2}/9\right) .$ Evidently, for $z=x+iy,$
\begin{equation*}
{\rm Im } (\varphi (z))
=\frac{y}{9} \left(3x^2 +9-y^2\right).
\end{equation*} Then \begin{equation*} {\rm Im }(z){\rm Im } (\varphi (z))
=y^{2}\left(\dfrac{3x^{2}+(9-y^{2})}{9}\right)>0
\end{equation*} whenever ${\rm Im\ } (z)\neq 0.$ Hence $\varphi$ is typically real, and thus $F \in T_{Lh}.$
A simple calculation shows that
\begin{equation*} F(z)
=z\left(1+\frac{2i }{3} {\rm Re\ }z-\frac{|z|^2}{9}\right ).
\end{equation*} With \begin{equation*}
I(t)={\rm Im\ } F(e^{it})=\frac{ 2\cos ^2 t}{3}+\frac{8}{9}\sin t, \end{equation*} then \begin{equation*}
I'(t)=\frac{4}{3}\cos t\left( \frac{2}{3}-\sin t\right).
\end{equation*} It follows that $I'(t)=0$ when $t=\pm \pi/2,$ $t=\sin^{-1} (2/3),$ or $t=\pi-\sin^{-1} (2/3)$. Thus
\begin{equation*}
M=\max_{|t|\leq \pi}I(t)=I\left(\sin^{-1}\left( \frac{2}{3}\right )\right )=\frac{26}{27},
\end{equation*}
and
\begin{equation*}m=\min_{|t|\leq \pi}I(t)=I(-\frac{\pi}{2})=-\frac{ 8}{9},
\end{equation*} which shows that $F(U)$ is not symmetric with respect to the real axis.\end{example}
Figure \ref{pic1} shows the image of $U$ under the mapping $F(z)=z(1+iz/3)(1+i\overline{z}/3),$ which is not symmetric with respect to the real axis. \begin{figure}\label{pic1}
\end{figure}
Our next example illustrates a univalent logharmonic mapping from $U$ onto a symmetric domain $\Omega $ that does not belong to the class $T_{Lh}.$
\begin{example}\label{exam2} Consider the function
\begin{equation*}
F(z)=z\dfrac{1-\overline{z}}{1-z}\exp \left\{ {\rm Re\ } \left( \dfrac{4z}{1-z}\right) \right\}.
\end{equation*} Then $F(0)=0,$ $h(0)= 1=g(0),$ where
\begin{equation*}
h(z)=\frac{\exp \left\{\frac{2z}{1-z} \right\} }{1-z} , \quad \text{and} \quad g(z)=\exp \left\{\frac{2z}{1- z} \right\}(1-z).
\end{equation*}
Also, $|a(z)|=\left\vert F\overline{F_{\overline{z}}} /F_{z}\overline{F}\right\vert=\left\vert z\right\vert <1.$ Thus $F$ is a normalized logharmonic mapping with respect to $a $ from $U$ onto $ \mathbb{C}\backslash(-\infty ,-1/e^{2}].$
Let \begin{equation*}
\psi(z)= \frac{zh(z)}{g(z)} =\frac{z}{(1-z)^2}.
\end{equation*} Then
$\psi\in S^{\ast },$ and hence it follows from \cite{Abd5} that $F$ is in fact a univalent starlike logharmonic mapping. It is also evident that $F$ has real coefficients, that is, $F(z)=\overline{ F(\overline{z})},$ and so $F(U)$ is a symmetric domain with respect to the real axis.
Let $\varphi (z)=zh(z)g(z)=z\exp \left\{ 4z/(1-z)\right\} .$ At $z_1=(1-2/\pi)+2i/\pi,$
\begin{align*} {\rm Re }\left(\frac{1-z_1^2}{z_1}\varphi(z_1)\right) &={\rm Re }\left((1-z_1^2)\exp \left\{\frac{4z_1}{1-z_1} \right\}\right)\\ &= -\frac{4}{\pi}\exp \left\{ \pi-4 \right\}<0. \end{align*}
Thus $(1-z^2)\varphi(z)/z \notin \mathcal{P}_{\mathbb{R}},$ and it follows from \cite{Rog} that the function $\varphi$ is not typically real.\\
Figure \ref{pic2} shows the mapping $F(z)=z \exp \left\{ {\rm Re\ } \left( 4z/(1-z) \right) \right\}(1-\overline{z})/(1-z),$ which does not belong to the class $T_{Lh}$ and yet maps $U$
onto the domain $F(U),$ which is symmetric with respect to the real axis.\end{example}
\begin{figure}\label{pic2}
\end{figure}
The following result describes the geometry of a univalent logharmonic function in the class $T_{Lh}$ when its second dilatation has real coefficients.
\begin{theorem}\label{thm64} Let $f(z)=zh(z)\overline{g(z)}\in T_{Lh}$ be a univalent sense-preserving logharmonic mapping in $U$. If the second dilatation function $a$ has real coefficients, that is, $a(\overline{z})=\overline{a(z)},$ then $f(U)$ is symmetric with respect to the real axis.
\end{theorem}
\begin{proof} Let $\varphi (z)=zh(z)g(z)$ be analytically typically real. Then $\varphi$ has real coefficients. It follows from \eqref{eqg} that
\begin{equation*} \frac{g'(z)}{g(z)}=\frac{a(z)}{1+a(z)}\frac{\varphi'(z)}{\varphi (z)},
\end{equation*} which readily yields the solution $g$. Thus $g$ has real coefficients. It is also evident that $h$ has real coefficients since $h (z)=\varphi(z)/zg(z).$ Therefore, $f(z)=zh(z)\overline{g(z)}$ has real coefficients, whence $f(U)$ is symmetric with respect to the real axis. \end{proof}
The final result derives sufficient conditions for $f$ to be typically real logharmonic in some subdisk of $U$.
\begin{theorem}\label{thm65}
Let $f(z)=zh(z)\overline{g(z)}$ be a univalent sense-preserving logharmonic mapping in $U$ normalized by $h(0)=1=g(0),$ whose second dilatation function $a$ has real coefficients. Further, suppose $f(U)$ is a strictly starlike Jordan domain, that is, each radial ray from $0$ intersects the boundary $\partial \Omega$ of $\Omega=f(U)$ in exactly one point of $\mathbb{C}.$ If $f(U)$ is a symmetric domain with respect to the real axis, then $\varphi (z)=zh(z)g(z)$ is typically real in the disk $|z|<\sqrt{2}-1.$ \end{theorem} \begin{proof} Suppose $f(U)$ is a strictly starlike Jordan domain symmetric with respect to the real axis, and let $F(z) = \overline{f(\overline{z})} ,$ where $f(z)=zh(z)\overline{g(z)}$ is a univalent logharmonic mapping. Then $F $ is univalent in $U $ and $ F(U) = f (U).$
Let $F(z)=z H(z)\overline{G(z)} = z \overline{h(\overline{z})} g(\overline{z})$ with $H(z)=\overline{h(\overline{z})}$ and $G(z)=\overline{g(\overline{z})}.$ Thus $F(0) = 0,$ and $H(0) = 1 =G(0). $ With $ a^*(z) = F\overline{F_{\overline{z}}} /F_{z}\overline{F} ,$ then
{\large \begin{equation*}
a^*(z)
=\frac{\dfrac{G'(z)}{G(z)}}{\dfrac{1}{z}+\dfrac{H'(z)}{H(z)}}
=\frac{\overline{\left(\dfrac{g'(\overline{z})}{g(\overline{z})}\right)}}{\dfrac{1}{z}+\overline{\left(\dfrac{h'(\overline{z})}{h(\overline{z})}\right)}}
=\overline{a(\overline{z})}.
\end{equation*}} Since $a$ has real coefficients, it is evident that $ a^*(z)=a(z).$ Therefore, $F$ is a logharmonic mapping with respect to the same $a.$ Also, $H(0) = h(0)=1.$ It then follows from {\rm \cite[Lemma 2.4]{Abd11}} that there is only one univalent logharmonic mapping from $U$ onto $f(U)$ which is a solution of \eqref{ede} normalized by $f(0) = 0 $ and $h(0) =1 =g(0) .$ In other words,
$f(z)=F(z)=\overline{f(\overline{z})},$
and thus $f$ has real coefficients. Hence $\psi (z)=zh(z)/g(z)=f(z)/|g(z)|^2$ has real coefficients.
Direct calculations yield
\begin{equation*} \frac{g'(z)}{g(z)}=\frac{a(z)}{1-a(z)}\frac{\psi'(z)}{\psi(z) },
\end{equation*} which upon integrating leads to
\begin{equation*} g(z)=\exp \int_{0}^{z}\frac{a(t)}{1-a(t)}\dfrac{\psi ^{\prime }(t)}{\psi (t)}dt. \end{equation*}
Then $g$, and so does $h$, have real coefficients, and thus $\varphi (z)=zh(z)g(z)$ has real coefficients. Furthermore, {\rm \cite[Theorem 3.1]{RAli}} shows that $\varphi $ is starlike univalent in the disk $|z|<\rho,$ where $\rho=\sqrt{2}-1 .$ Thus $\varphi $ is typically real in the disk $|z|<\sqrt{2}-1.$ \end{proof}
\noindent \textbf{Acknowledgment.} The third author gratefully acknowledges
support from Universiti Sains Malaysia research grant
1001/PMATHS/811280.
\end{document}
\begin{document}
\title[Whittaker-Hill] {Instability intervals of the Whittaker-Hill operator}
\author{Xu-Dan Luo} \address{Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.} \email{lxd@amss.ac.cn}
\subjclass[2010]{Primary: 34L15; 41A60; 47E05}
\keywords{The Whittaker-Hill operator; Instability intervals; Asymptotics}
\begin{abstract} The spectrum of the Hill operator has a band gap structure. For the Mathieu operator, a special case, all instability intervals are open; in contrast, the instability intervals of the Whittaker-Hill operator may be open or closed. In 2007, P. Djakov and B. Mityagin gave the asymptotics of band gaps for a special Whittaker-Hill operator [P. Djakov and B. Mityagin, J. Funct. Anal., 242, 157-194 (2007).]. In this paper, a more general Whittaker-Hill operator is considered and the asymptotics of its instability intervals are derived. \end{abstract}
\maketitle
\section{Introduction and main results}
The Floquet (Bloch) theory indicates that the spectrum of the Schr\"{o}dinger operator \begin{equation} \label{E:Schrodinger operator} Lf:=-f''(x)+\nu(x)f(x), \ \ \ x\in\mathbb{R} \end{equation} with a smooth real-valued periodic potential $\nu(x)$ has a band gap structure. If we further assume that $\nu(x)$ has period $\pi$ and set \begin{equation*} \nu(x)=-\sum_{n=1}^{\infty}\theta_{n}\cos(2nx)-\sum_{m=1}^{\infty}\phi_{m}\sin (2mx), \end{equation*} where $\theta_{n}$ and $\phi_{m}$ are real, then (\ref{E:Schrodinger operator}) can be written as: \begin{equation} Lf:=-f''(x)-\left[\sum_{n=1}^{\infty}\theta_{n}\cos(2nx)+\sum_{m=1}^{\infty}\phi_{m}\sin (2mx)\right]f(x). \end{equation} Moreover, there are \cite{Eastham} two monotonically increasing infinite sequences of real numbers \begin{equation*} \lambda_{0}^{+}, \ \lambda_{1}^{+}, \ \lambda_{2}^{+}, \cdots \end{equation*} and \begin{equation*} \lambda_{1}^{-}, \ \lambda_{2}^{-}, \ \lambda_{3}^{-}, \cdots \end{equation*} such that the Hill equation \begin{equation} Lf=\lambda f \end{equation} has a solution of period $\pi$ if and only if $\lambda=\lambda_{n}^{+}$, $n=0, 1, 2, \cdots$, and a solution of semi-period $\pi$ (i.e., $f(x+\pi)=-f(x)$) if and only if $\lambda=\lambda_{n}^{-}$, $n=1, 2, 3, \cdots$. The $\lambda_{n}^{+}$ and $\lambda_{n}^{-}$ satisfy the inequalities \begin{equation*} \lambda_{0}^{+}<\lambda_{1}^{-}\leq \lambda_{2}^{-}<\lambda_{1}^{+}\leq \lambda_{2}^{+}<\lambda_{3}^{-}\leq \lambda_{4}^{-}<\lambda_{3}^{+}\leq \lambda_{4}^{+}<\cdots \end{equation*} and the relations \begin{equation*} \lim_{n\rightarrow\infty}\lambda_{n}^{+}=\infty, \ \ \ \lim_{n\rightarrow\infty}\lambda_{n}^{-}=\infty. \end{equation*} The differences $\gamma_{n}:=\lambda_{n+1}^{-}-\lambda_{n}^{-}$ for odd $n$ and $\gamma_{n}:=\lambda_{n}^{+}-\lambda_{n-1}^{+}$ for even $n$, $n\geq 1$, are referred to as band gaps or instability intervals.
It is well-known that there is an extensive theory for the Mathieu operator, where the potential $\nu(x)$ is a single trigonometric function, i.e., \begin{equation} \nu(x)=-B \cos 2x. \end{equation}
Ince \cite{Ince 3} proved that all instability intervals of the Mathieu operator are open, i.e., the Mathieu operator has no closed gaps. In 1963, Levy and Keller \cite{Levy} gave the asymptotics of $\gamma_{n}=\gamma_{n}(B)$, i.e., for fixed $n$ and real nonzero $B$, when $B \to 0$, \begin{equation} \gamma_{n}=\frac{8}{[(n-1)!]^{2}}\cdot \left(\frac{B}{8}\right)^{n} (1+O(B)). \end{equation} Eighteen years later, Harrell \cite{Harrell} gave the asymptotics of the band gaps of the Mathieu operator for fixed $B$ and $n\rightarrow\infty$, i.e., \begin{equation}
\gamma_{n}=\lambda_{n}^{+}-\lambda_{n}^{-}=\frac{8}{[(n-1)!]^{2}}\cdot \left(\frac{|B|}{8}\right)^{n}\left(1+O\left(\frac{1}{n^{2}}\right)\right). \end{equation}
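For instance, for $B>0$ the Levy--Keller formula gives for the first two gaps \begin{equation*} \gamma_{1}=B\,(1+O(B)), \ \ \ \gamma_{2}=\frac{B^{2}}{8}(1+O(B)), \end{equation*} so the $n$-th gap of the Mathieu operator opens only at order $B^{n}$.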
Compared with the Mathieu potential, the band gaps for the Whittaker-Hill potential \begin{equation} \label{E:Whittaker-Hill potential} \nu(x)=-(B\cos 2x+C\cos 4x) \end{equation} may be open or closed. Specifically, if $B=4\alpha t$ and $C=2\alpha^{2}$, for any real $\alpha$ and natural number $t$, it is already known that for odd $t=2m+1$, all the even gaps are closed except the first $m$, but no odd gap disappears; similarly, for even $t=2m$, except for the first $m$, all the odd gaps are closed, but even gaps remain open (see Theorem 11, \cite{Djakov 1} and Theorem 7.9, \cite{Maguns}).
In 2007, P. Djakov and B. Mityagin (see \cite{Djakov 2}) gave the asymptotics of the instability intervals for the above special Whittaker-Hill potential, namely, for real $B, C\neq 0$ with $B=4 \alpha t$ and $C=2 \alpha^{2}$, they obtained the following results, where both $\alpha$ and $t$ are real numbers if $C>0$, and both are pure imaginary numbers if $C<0$. \begin{theorem}[\cite{Djakov 2}] \label{T:Djakov 1} Let $\gamma_{n}$ be the $n$-th band gap of the Whittaker-Hill operator \begin{equation} Lf=-f''-[4\alpha t \cos 2x+ 2\alpha^{2} \cos 4x]f, \end{equation} where either both $\alpha$ and $t$ are real, or both are pure imaginary numbers. If $t$ is fixed and $\alpha\rightarrow 0$, then for even $n$ \begin{equation}
\gamma_{n}=\left|\frac{8\alpha^{n}}{2^{n}[(n-1)!]^{2}}\prod_{k=1}^{n/2}(t^{2}-(2k-1)^{2})\right|(1+O(\alpha)), \end{equation} and for odd $n$ \begin{equation}
\gamma_{n}=\left|\frac{8\alpha^{n}t}{2^{n}[(n-1)!]^{2}}\prod_{k=1}^{(n-1)/2}(t^{2}-(2k)^{2})\right|(1+O(\alpha)). \end{equation} \end{theorem}
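In the lowest cases, Theorem \ref{T:Djakov 1} gives \begin{equation*} \gamma_{1}=4|\alpha t|(1+O(\alpha)), \ \ \ \gamma_{2}=2\left|\alpha^{2}(t^{2}-1)\right|(1+O(\alpha)), \end{equation*} so the leading term of $\gamma_{2}$ vanishes precisely when $t=\pm 1$, consistent with the closing of even gaps described above.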
\begin{theorem}[\cite{Djakov 2}] \label{T:Djakov 2} Let $\gamma_{n}$ be the $n$-th band gap of the Whittaker-Hill operator \begin{equation} Lf=-f''-[4\alpha t \cos 2x+ 2\alpha^{2} \cos 4x]f, \end{equation} where either both $\alpha$ and $t\neq 0$ are real, or both are pure imaginary numbers. Then the following asymptotic formulae hold for fixed $\alpha$, $t$ and $n\rightarrow\infty$:
for even $n$ \begin{equation}
\gamma_{n}=\frac{8|\alpha|^{n}}{2^{n}[(n-2)!!]^{2}}\left|\cos\left(\frac{\pi}{2}t\right)\right|\left[1+O\left(\frac{\log n}{n}\right)\right], \end{equation} and for odd $n$ \begin{equation}
\gamma_{n}=\frac{8|\alpha|^{n}}{2^{n}[(n-2)!!]^{2}}\frac{2}{\pi}\left|\sin\left(\frac{\pi}{2}t\right)\right|\left[1+O\left(\frac{\log n}{n}\right)\right], \end{equation} where $(2m-1)!!=1\cdot3\cdots(2m-1)$, \ \ \ $(2m)!!=2\cdot4\cdots(2m)$. \end{theorem} In this paper, a more general Whittaker-Hill operator \begin{equation} L=-D^{2}+(bq^{m_{1}}\cos 2x+cq^{m_{2}}\cos 4x) \end{equation} is considered and the asymptotics of the instability intervals are derived, where $b$, $c$, $q$, $m_{1}$ and $m_{2}$ are real. Our theorems are stated as follows; in particular, we recover P. Djakov and B. Mityagin's results by choosing $m_{1}=1$, $m_{2}=2$, $b=-4\alpha t$ and $c=-2\alpha^{2}$. \begin{theorem} \label{T:1} Let the Whittaker-Hill operator be \begin{equation} Ly=-y''+(bq^{m_{1}}\cos 2x+cq^{m_{2}}\cos 4x)y, \end{equation} where $b$, $c$ and $q$ are real. If $q\rightarrow 0$ and $m_{1}$, $m_{2}$ are positive real parameters, then one of the following results holds:
\begin{enumerate} \item when $m_{1}> \frac{m_{2}}{2}$, \\ (i) \begin{equation}
\gamma_{2m}=\left|\frac{32\cdot(\frac{c}{2})^{m}\cdot q^{m_{2}m}}{2^{4m}[(m-1)!]^{2}}\right|+O\Big(q^{m_{2}(m-\frac{1}{2})+m_{1}}\Big), \end{equation} \\ (ii) \begin{equation}
\gamma_{1}= |bq^{m_{1}}|+O(q^{2m_{1}-\frac{m_{2}}{2}}), \ \ \ \gamma_{3}=\frac{|bcq^{m_{1}+m_{2}}|}{8}+O(q^{2m_{1}+\frac{m_{2}}{2}}), \end{equation} \begin{equation} \begin{split}
&\gamma_{2m-1}= \Big|\left(\frac{c}{2}\right)^{m-1}\cdot b\cdot q^{m_{2}(m-1)+m_{1}}\cdot \frac{8}{2^{3m}}\cdot \Big\{ \frac{1}{[(2m-3)!!]^{2}}\\
&\cdot \sum_{i=1}^{m-2} \frac{(2m-2i-3)!!\cdot (2i-1)!!}{i!\cdot (m-1-i)!} + \frac{2}{(2m-3)!!\cdot(m-1)!} \Big\}\Big|+O\Big(q^{m_{2}(m-\frac{3}{2})+2m_{1}}\Big) \ \ \ \text{for} \ \ \ m\geq 3; \end{split} \end{equation} \item when $m_{1}< \frac{m_{2}}{2}$, \begin{equation}
\gamma_{1}=\left|bq^{m_{1}}\right|+O(q^{m_{2}-m_{1}}), \ \ \ \gamma_{2}=\left|cq^{m_{2}}+\frac{b^{2}q^{2m_{1}}}{8}\right|+O(q^{m_{2}}), \end{equation} \begin{equation}
\gamma_{n}=\left|\frac{8\cdot b^{n}\cdot q^{m_{1}n}}{2^{3n}\cdot [(n-1)!]^{2}}\right|+O(q^{m_{1}n+m_{2}-2m_{1}}) \ \ \ \ \ \ \ \text{for} \ \ \ n\geq 3; \end{equation} \item when $m_{1}= \frac{m_{2}}{2}$ and $c<0$, \\ (i) \begin{equation}
\gamma_{1}=\left|bq^{m_{1}}\right|+O(q^{3m_{1}}), \ \ \ \gamma_{2}=\left|cq^{m_{2}}+\frac{b^{2}q^{2m_{1}}}{8}\right|+O(q^{4m_{1}}), \end{equation} \\ (ii) \begin{equation}
\gamma_{2m}=8\left|\frac{\prod_{k=1}^{m}\Big(\Big(\frac{b}{2}\Big)^{2}+8c\Big(k-\frac{1}{2}\Big)^{2}\Big)\cdot q^{2m_{1}\cdot m}}{2^{4m}\cdot[(2m-1)!]^{2}}\right|+O(q^{2m_{1}(m+1)}) \ \ \ \ \ \ \ \text{for} \ \ \ m\geq 2, \end{equation} \\ (iii) \begin{equation}
\gamma_{2m-1}=32\left|\frac{\frac{b}{2}\prod_{k=1}^{m-1}\Big(\Big(\frac{b}{2}\Big)^{2}+8ck^{2}\Big)\cdot q^{m_{1}\cdot (2m-1)}}{2^{4m}\cdot[(2m-2)!]^{2}}\right|+O(q^{m_{1}(2m+1)}) \ \ \ \ \ \ \ \text{for} \ \ \ m\geq 2. \end{equation} \end{enumerate} Here, $m$ is a positive integer and $\gamma_{n}$ is the n-th instability interval. \end{theorem}
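As a consistency check, take $m_{1}=1$, $m_{2}=2$, $q=\alpha$, $b=-4t$ and $c=-2$, so that the potential becomes $-(4\alpha t\cos 2x+2\alpha^{2}\cos 4x)$. This falls into the case $m_{1}=\frac{m_{2}}{2}$ with $c<0$, and part (i) gives \begin{equation*} \gamma_{1}=4|t\alpha|+O(\alpha^{3}), \ \ \ \gamma_{2}=\left|-2\alpha^{2}+\frac{16t^{2}\alpha^{2}}{8}\right|+O(\alpha^{4})=2\alpha^{2}|t^{2}-1|+O(\alpha^{4}), \end{equation*} in agreement with the $n=1,2$ cases of Theorem \ref{T:Djakov 1}.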
\begin{theorem} \label{T:2} Let the Whittaker-Hill operator be \begin{equation} Ly=-y''+(bq^{m_{1}}\cos 2x+cq^{2m_{1}}\cos 4x)y, \end{equation} where $b$, $q$ are real, $c<0$ and $m_{1}>0$. Then the following asymptotic formulae hold for fixed $b$, $c$, $q$ and $n\rightarrow\infty$: \begin{equation}
\gamma_{2m}=\frac{q^{2m_{1}\cdot m}\cdot |c|^{m}}{2^{3m-3}\cdot[(2m-2)!!]^{2}}\cdot \left|\cos\left(\frac{b\pi}{4\sqrt{-2c}}\right)\right|\cdot\left[1+O\left(\frac{\log m}{m}\right)\right], \end{equation} \begin{equation}
\gamma_{2m-1}=\frac{q^{m_{1}(2m-1)}\cdot |c|^{m-1}\cdot\sqrt{-2c}}{2^{3m-5}\cdot[(2m-3)!!]^{2}\cdot \pi}
\cdot \left|\sin\left(\frac{b\pi}{4\sqrt{-2c}}\right)\right|\cdot\left[1+O\left(\frac{\log m}{m}\right)\right]. \end{equation} Here, $m$ is a positive integer and $\gamma_{n}$ is the n-th instability interval. \end{theorem}
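As above, setting $m_{1}=1$, $q=\alpha$, $b=-4t$ and $c=-2$ recovers Theorem \ref{T:Djakov 2}: here $\sqrt{-2c}=2$, so $\frac{b\pi}{4\sqrt{-2c}}=-\frac{\pi}{2}t$, and since $\frac{|c|^{m}}{2^{3m-3}}=\frac{8}{2^{2m}}$, the even-gap formula becomes \begin{equation*} \gamma_{2m}=\frac{8|\alpha|^{2m}}{2^{2m}[(2m-2)!!]^{2}}\left|\cos\left(\frac{\pi}{2}t\right)\right|\left[1+O\left(\frac{\log m}{m}\right)\right], \end{equation*} which is the even case of Theorem \ref{T:Djakov 2} with $n=2m$; the odd case is recovered in the same way.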
\section{Some lemmas} \begin{lemma}[\cite{Djakov 2}] \label{L:1} Let the Schr\"{o}dinger operator \begin{equation} Ly=-y''+v(x)y \end{equation}
be defined on $\mathbb{R}$, with a real-valued periodic $L^{2}([0, \pi])$-potential $v(x)$. Write $v(x)=\sum_{m\in \mathbb{Z}} V(m)\exp(imx)$ with $V(m)=0$ for odd $m$, so that $\|v\|^{2}=\sum|V(m)|^{2}$.
(a) If $\|v\|<\frac{1}{9}$, then for each $n=1,2,\cdots$, there exists $z=z_{n}$ with $|z|\leq 4\|v\|$ such that \begin{equation} \label{E:length estimation}
2|\beta_{n}(z)|(1-3\|v\|^{2}/n^{2})\leq \gamma_{n}\leq 2|\beta_{n}(z)|(1+3\|v\|^{2}/n^{2}), \end{equation} where \begin{equation} \beta_{n}(z)=V(2n)+\sum_{k=1}^{\infty}\sum_{j_{1},\cdots,j_{k}\neq \pm n}\frac{V(n-j_{1})V(j_{1}-j_{2})\cdots V(j_{k-1}-j_{k})V(j_{k}+n)} {(n^{2}-j_{1}^{2}+z)\cdots(n^{2}-j_{k}^{2}+z)} \end{equation}
converges absolutely and uniformly for $|z|\leq 1$, and $\gamma_{n}$ is the n-th instability interval.
(b) If $V(0)=\frac{1}{\pi}\int_{0}^{\pi}v(x)dx=0$, then there is $N_{0}=N_{0}(v)$ such that (\ref{E:length estimation}) holds for $n\geq N_{0}$ with $z=z_{n}$, $|z_{n}|<1$.
\end{lemma}
\begin{lemma}[\cite{Volkmer}] \label{L:Volkmer} The Ince equation \begin{equation} \label{E:general Ince} (1+a\cos 2t)y''(t)+b(\sin 2t)y'(t)+(c+d\cos 2t)y(t)=0 \end{equation} can be transformed into the Whittaker-Hill equation by assuming \begin{equation} a=0,\ \ b=-4q,\ \ c=\lambda+2q^{2}, \ \ d=4(m-1)q. \end{equation} Moreover, \begin{equation} \label{E:semifinite band gap 1} \mathrm{sign} (\alpha_{2n}-\beta_{2n})=\mathrm{sign} \ q^{2}\cdot \mathrm{sign} \prod_{p=-n}^{n-1}(2p-m+1) \end{equation} and \begin{equation} \label{E:semifinite band gap 2} \mathrm{sign} (\alpha_{2n+1}-\beta_{2n+1})=\mathrm{sign} \ q \cdot \mathrm{sign} \prod_{p=-n}^{n-1}(2p-m), \end{equation} where $a$, $b$ and $d$ are real; $\alpha_{2n}$ and $\beta_{2n+2}$ are defined by the eigenvalues corresponding to non-trivial even and odd solutions with period $\pi$, respectively; and $\alpha_{2n+1}$ and $\beta_{2n+1}$ are defined by the eigenvalues corresponding to non-trivial even and odd solutions with semi-period $\pi$, respectively. \end{lemma}
\begin{lemma}[\cite{Maguns}] \label{L:Maguns} The Whittaker-Hill equation \begin{equation} f''+[\lambda+4mq\cos 2x+2q^{2}\cos 4x]f=0 \end{equation}
can have two linearly independent solutions of period $\pi$ or $2\pi$ if and only if $m$ is an integer. If $m=2l$ is even, then the odd intervals of instability on the $\lambda$ axis disappear, with at most $|l|+1$ exceptions, but no even interval of instability disappears. If $m=2l+1$ is odd, then at most $|l|+1$ even intervals of instability remain, but no odd interval of instability disappears. \end{lemma}
\begin{lemma} \label{L:2} The Whittaker-Hill operator \begin{equation} L=-D^{2}-(B\cos 2x+C\cos 4x) \end{equation} admits all even gaps closed except the first $n$ when $\pm\frac{B}{4\sqrt{2C}}=-n-\frac{1}{2}$, $n\in\mathbb{Z}_{\geq 0}$; and all odd gaps closed except the first $n+1$ when $\pm\frac{B}{4\sqrt{2C}}=-n-1$, $n\in\mathbb{Z}_{\geq 0}$. \end{lemma}
\begin{proof} By Lemma \ref{L:Maguns}, we obtain that the Whittaker-Hill equation \begin{equation} \label{E:W-H-real} f''(x)+(A+B\cos 2x+C\cos 4x)f(x)=0 \end{equation} has two linearly independent solutions of period $\pi$ or semi-period $\pi$ if and only if $\frac{B}{2\sqrt{2C}}\in \mathbb{Z}$. Moreover, we transform (\ref{E:W-H-real}) into the Ince equation \begin{equation} g''(x)+4\sqrt{\frac{C}{2}}\sin 2x\cdot g'(x)+\left[(A+C)+\left(B+4\sqrt{\frac{C}{2}}\right)\cos 2x\right]g(x)=0 \end{equation} via $f(x)=e^{-\sqrt{\frac{C}{2}}\cos 2x}\cdot g(x)$. From Lemma \ref{L:Volkmer}, we can write the parameters $q$, $\lambda$ and $m$ of equation (\ref{E:general Ince}) in terms of $A$, $B$ and $C$, i.e., \begin{equation*} q=-\sqrt{\frac{C}{2}}, \ \ \ \lambda=A, \ \ \ m=-\frac{B}{2\sqrt{2C}}. \end{equation*}
(1) If $m=2n+1$, $n\in \mathbb{N}^{+}\cup\{0\}$, i.e., $\frac{B}{2\sqrt{2C}}=-2n-1$, and the solutions satisfy the periodic boundary conditions, then we deduce from Lemma \ref{L:Volkmer} that the first $2n+1$ eigenvalues are simple, and others are double.
(2) If $m=2n+2$, $n\in \mathbb{N}^{+}\cup\{0\}$, i.e., $\frac{B}{2\sqrt{2C}}=-2n-2$, and the solutions satisfy the semi-periodic boundary conditions, then we also deduce from Lemma \ref{L:Volkmer} that the first $2n+2$ eigenvalues are simple, and others are double.
Besides, we can also transform (\ref{E:W-H-real}) into the Ince equation \begin{equation} g''(x)-4\sqrt{\frac{C}{2}}\sin 2x\cdot g'(x)+\left[(A+C)+\left(B-4\sqrt{\frac{C}{2}}\right)\cos 2x\right]g(x)=0 \end{equation} via $f(x)=e^{\sqrt{\frac{C}{2}}\cos 2x}\cdot g(x)$. Thus, we have similar conclusions. \end{proof}
In order to prove our results, let us consider all possible walks from $-n$ to $n$. Each such walk is determined by the sequence of its steps \begin{equation} x=(x_{1}, \cdots, x_{\nu+1}), \end{equation} or by its vertices \begin{equation} j_{s}=-n+\sum_{k=1}^{s}x_{k}, \ \ \ s=1, \cdots, \nu. \end{equation} The relationship between steps and vertices is given by the formulas \begin{equation} x_{1}=n+j_{1};\ \ \ x_{k}=j_{k}-j_{k-1}, \ k=2, \cdots, \nu; \ \ \ x_{\nu+1}=n-j_{\nu}. \end{equation} \begin{definition} \label{D:1} Let $X$ denote the set of all walks from $-n$ to $n$ with steps $\pm 2$ or $\pm 4$. For each $x=(x_{s})_{s=1}^{\nu+1}\in X$ and each $z\in \mathbb{R}$, we define \begin{equation} B_{n}(x,z)=\frac{V(x_{1})\cdots V(x_{\nu+1})}{(n^{2}-j_{1}^{2}+z)\cdots(n^{2}-j_{\nu}^{2}+z)}. \end{equation} \end{definition} \begin{definition} \label{D:2} Let $X^{+}$ denote the set of all walks from $-n$ to $n$ with positive steps equal to $2$ or $4$. For each $\xi\in X^{+}$, let $X_{\xi}$ denote the set of all walks $x\in X\backslash X^{+}$ such that each vertex of $\xi$ is also a vertex of $x$. For each $\xi\in X^{+}$ and $\mu\in\mathbb{N}$, let $X_{\xi,\mu}$ be the set of all $x\in X_{\xi}$ such that $x$ has $\mu$ more vertices than $\xi$. Moreover, for each $\mu$-tuple $(i_{1}, \cdots, i_{\mu})$ of integers in $I_{n}=(n+2\mathbb{Z})\setminus \{\pm n\}$, we define $X_{\xi}(i_{1}, \cdots, i_{\mu})$ as the set of all walks $x$ with $\nu+1+\mu$ steps such that $(i_{1}, \cdots, i_{\mu})$ and the sequence of the vertices of $\xi$ are complementary subsequences of the sequence of the vertices of $x$. \end{definition} From Definition \ref{D:2}, we deduce \begin{equation} X_{\xi,\mu}=\bigcup_{(i_{1}, \cdots, i_{\mu})\in (I_{n})^{\mu}}X_{\xi}(i_{1}, \cdots, i_{\mu}). \end{equation}
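To illustrate Definition \ref{D:1}, take $n=3$. The walks in $X^{+}$ from $-3$ to $3$ are $(2,2,2)$, $(2,4)$ and $(4,2)$, with vertices $(-1,1)$, $(-1)$ and $(1)$, respectively. Since $V(6)=0$, their total contribution at $z=0$ is \begin{equation*} \sum_{\xi\in X^{+}}B_{3}(\xi,0)=\frac{V(2)^{3}}{(3^{2}-1^{2})^{2}}+2\cdot\frac{V(2)V(4)}{3^{2}-1^{2}}. \end{equation*}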
\begin{lemma}[\cite{Djakov 2}] \label{L:3} If $\xi\in X^{+}$ and $n\geq 3$, then for $z\in[0,1)$ \begin{equation} 1-z\frac{\log n}{n}\leq \frac{B_{n}(\xi,z)}{B_{n}(\xi,0)}\leq 1-z\frac{\log n}{4n}, \end{equation} and for $z\in(-1,0]$ \begin{equation}
1+|z|\frac{\log n}{2n}\leq \frac{B_{n}(\xi,z)}{B_{n}(\xi,0)}\leq 1+|z|\frac{2\log n}{n}. \end{equation} \end{lemma}
\begin{lemma}[\cite{Djakov 2}] \label{L:4} For each walk $\xi\in X^{+}$ and each $\mu-$tuple $(i_{1}, \cdots, i_{\mu})\in (I_{n})^{\mu}$, \begin{equation} \sharp X_{\xi}(i_{1}, \cdots, i_{\mu})\leq 5^{\mu}. \end{equation} \end{lemma}
\begin{lemma} \label{L:5}
If $\xi\in X^{+}$ and $|z|\leq 1$, then there exists $n_{1}$ such that for $n\geq n_{1}$, \begin{equation}
\sum_{x\in X_{\xi}}|B_{n}(x,z)|\leq |B_{n}(\xi,z)|\cdot \frac{K \log n}{n}, \end{equation}
where $K=40 \left(|\frac{b}{2}q^{m_{1}}|+|\frac{c}{2}q^{m_{2}}|+|\frac{b^{2}}{2c}q^{2m_{1}-m_{2}}|\right)$. \end{lemma} \begin{proof} By Definition \ref{D:2}, we have \begin{equation}
\sum_{x\in X_{\xi}}|B_{n}(x,z)|=\sum_{\mu=1}^{\infty}\sum_{x\in X_{\xi,\mu}}|B_{n}(x,z)|. \end{equation} Moreover, \begin{equation}
\sum_{x\in X_{\xi,\mu}}|B_{n}(x,z)|\leq \sum_{(i_{1}, \cdots, i_{\mu})}\sum_{X_{\xi}(i_{1}, \cdots, i_{\mu})}|B_{n}(x,z)|, \end{equation} where the first sum on the right is taken over all $\mu-$tuples $(i_{1}, \cdots, i_{\mu})$ of integers $i_{s}\in n+2\mathbb{Z}$ such that $i_{s}\neq \pm n$.
Fix $(i_{1}, \cdots, i_{\mu})$. If $x\in X_{\xi}(i_{1}, \cdots, i_{\mu})$, then \begin{equation} \frac{B_{n}(x,z)}{B_{n}(\xi,z)}=\frac{\prod_{k}V(x_{k})}{\prod_{s}V(\xi_{s})}\cdot \frac{1}{(n^{2}-i_{1}^{2}+z)\cdots(n^{2}-i_{\mu}^{2}+z)}. \end{equation} Note that $V(\pm 2)=\frac{b}{2}q^{m_{1}}$ and $V(\pm4)=\frac{c}{2}q^{m_{2}}$. If each step of $\xi$ is a step of $x$, then \begin{equation}
\frac{\prod_{k}\left|V(x_{k})\right|}{\prod_{s}\left|V(\xi_{s})\right|}\leq C^{\mu}, \end{equation}
where $C:=|\frac{b}{2}q^{m_{1}}|+|\frac{c}{2}q^{m_{2}}|+|\frac{b^{2}}{2c}q^{2m_{1}-m_{2}}|$. For the general case, let $(j_{s})_{s=1}^{\nu}$ be the vertices of $\xi$, and let us put for convenience $j_{0}=-n$ and $j_{\nu+1}=n$. Since each vertex of $\xi$ is a vertex of $x$, for each $s$, $1\leq s\leq \nu+1$, \begin{equation} \xi_{s}=j_{s}-j_{s-1}=\sum_{k\in J_{s}}x_{k}, \end{equation}
where $x_{k}$, $k\in J_{s}$, are the steps of $x$ between the vertices $j_{s-1}$ and $j_{s}$. Fix an $s$, $1\leq s \leq \nu+1$. If $\xi_{s}=2$, then there is a step $x_{k^{*}}$, $k^{*}\in J_{s}$, such that $|x_{k^{*}}|=2$; otherwise, $\xi_{s}$ would be a multiple of $4$. Hence, $|V(\xi_{s})|=|V(x_{k^{*}})|$, which implies that \begin{equation}
\frac{\prod_{J_{s}}|V(x_{k})|}{|V(\xi_{s})|}\leq C^{b_{s}-1}, \end{equation} where $b_{s}$ is the cardinality of $J_{s}$.
If $\xi_{s}=4$, there are two possibilities. (1) If there is $k_{*}\in J_{s}$ with $|x_{k_{*}}|=4$, then $|V(\xi_{s})|=|V(x_{k_{*}})|$, so the above inequality holds. (2) There are $k', k''\in J_{s}$ such that $|x_{k'}|=|x_{k''}|=2$, hence, \begin{equation}
\frac{|V(x_{k'})V(x_{k''})|}{|V(\xi_{s})|}=\left|\frac{b^{2}}{2c}q^{2m_{1}-m_{2}}\right|\leq C, \end{equation} so the above inequality also holds. Since \begin{equation} \sum_{s}(b_{s}-1)=\mu, \end{equation} we get that \begin{equation} \frac{\prod_{k}\left|V(x_{k})\right|}{\prod_{s}\left|V(\xi_{s})\right|}\leq C^{\mu} \end{equation} holds in the general case.
By \begin{equation}
\frac{1}{|n^{2}-i^{2}+z|}\leq \frac{2}{|n^{2}-i^{2}|}, \end{equation}
where $i\neq \pm n$, $|z|\leq 1$, we have \begin{equation}
\frac{|B_{n}(x,z)|}{|B_{n}(\xi,z)|}\leq \frac{(2C)^{\mu}}{|n^{2}-i_{1}^{2}|\cdots |n^{2}-i_{\mu}^{2}|}, \ \ \ x\in X_{\xi}(i_{1}, \cdots, i_{\mu}). \end{equation} By Lemma \ref{L:4}, we derive \begin{equation}
\sum_{x\in X_{\xi}(i_{1}, \cdots, i_{\mu})}\frac{|B_{n}(x,z)|}{|B_{n}(\xi,z)|}\leq \frac{(10C)^{\mu}}{|n^{2}-i_{1}^{2}|\cdots |n^{2}-i_{\mu}^{2}|}. \end{equation} Combining this with the elementary estimate $\sum_{i\in(n+2\mathbb{Z})\setminus \{\pm n\}}\frac{1}{|n^{2}-i^{2}|}\leq\frac{1+\log n}{n}$ yields \begin{equation} \begin{split}
\sum_{x\in X_{\xi,\mu}}\frac{|B_{n}(x,z)|}{|B_{n}(\xi,z)|}&\leq \sum_{(i_{1}, \cdots, i_{\mu})}\frac{(10C)^{\mu}}{|n^{2}-i_{1}^{2}|\cdots |n^{2}-i_{\mu}^{2}|}\leq\left(\sum_{i\in(n+2\mathbb{Z})\setminus \{\pm n\}} \frac{10C}{|n^{2}-i^{2}|}\right)^{\mu}\\ &\leq (10C)^{\mu}\left(\frac{1+\log n}{n}\right)^{\mu}\leq \left(\frac{20C \log n}{n}\right)^{\mu}. \end{split} \end{equation} Thus, \begin{equation}
\sum_{x\in X_{\xi,\mu}} |B_{n}(x,z)|\leq |B_{n}(\xi,z)|\cdot \left(\frac{20C \log n}{n}\right)^{\mu}. \end{equation} Hence, \begin{equation}
\sum_{x\in X_{\xi}}\frac{|B_{n}(x,z)|}{|B_{n}(\xi,z)|}\leq \sum_{\mu=1}^{\infty}\left(\frac{20C \log n}{n}\right)^{\mu}. \end{equation} We can choose $n_{1}\in \mathbb{N}^{+}$ such that $\frac{20C \log n}{n}\leq \frac{1}{2}$ for $n\geq n_{1}$. Then \begin{equation}
\sum_{x\in X_{\xi}}\frac{|B_{n}(x,z)|}{|B_{n}(\xi,z)|}\leq \frac{40C \log n}{n}. \end{equation} Therefore, there exists $n_{1}$ such that for $n\geq n_{1}$, \begin{equation}
\sum_{x\in X_{\xi}}|B_{n}(x,z)|\leq |B_{n}(\xi,z)|\cdot \frac{K \log n}{n}, \end{equation}
where $K:=40C=40 \left(|\frac{b}{2}q^{m_{1}}|+|\frac{c}{2}q^{m_{2}}|+|\frac{b^{2}}{2c}q^{2m_{1}-m_{2}}|\right)$. \end{proof}
\section{Proof of Theorem \ref{T:1}} Note that \begin{equation} V(\pm 2)=\frac{bq^{m_{1}}}{2}, \ \ \ V(\pm 4)=\frac{cq^{m_{2}}}{2} \end{equation} and \begin{equation}
\|v\|^{2}=\frac{1}{2}\Big(b^{2}q^{2m_{1}}+c^{2}q^{2m_{2}}\Big). \end{equation} Then, by Lemma \ref{L:1}, we have \begin{equation} \gamma_{n}=\pm2\Big(V(2n)+\sum_{k=1}^{\infty} \beta_{k}(n,z)\Big)\Big(1+O(q^{2\cdot\min\{m_{1},m_{2}\}})\Big), \end{equation} where \begin{equation} \label{E:belta} \begin{split} \beta_{k}(n,z)&=\sum_{j_{1},\cdots,j_{k}\neq \pm n}\frac{V(n-j_{1})V(j_{1}-j_{2})\cdots V(j_{k-1}-j_{k})V(j_{k}+n)} {(n^{2}-j_{1}^{2}+z)\cdots(n^{2}-j_{k}^{2}+z)}\\ &=\sum_{j_{1},\cdots,j_{k}\neq \pm n}\frac{V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k})} {(n^{2}-j_{1}^{2}+z)\cdots(n^{2}-j_{k}^{2}+z)} \end{split} \end{equation} and $z=O(q)$. Moreover, all series converge absolutely and uniformly for sufficiently small $q$.
Note that \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})=2n, \end{equation} and \begin{equation} \frac{V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k})} {(n^{2}-j_{1}^{2}+z)\cdots(n^{2}-j_{k}^{2}+z)}\neq 0 \end{equation} when \begin{equation} (n+j_{1}),(j_{2}-j_{1}),\cdots,(j_{k}-j_{k-1}),(n-j_{k})\in \{\pm2, \pm4\}. \end{equation} We distinguish three cases.
{\noindent\bf Case 1.} If $m_{1}> \frac{m_{2}}{2}$, then \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree at least \begin{equation}
\frac{m_{2}}{4}\cdot\Big[|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\Big]. \end{equation} The minimum case occurs when \begin{equation} (n+j_{1}),(j_{2}-j_{1}),\cdots,(j_{k}-j_{k-1}),(n-j_{k})\in \{\pm4\}, \end{equation} then \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})\in 4\mathbb{Z}, \end{equation} while \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})=2n. \end{equation} If $n$ is even, i.e., $n=2m$, $m\in\mathbb{Z}_{>0}$, since \begin{equation} \begin{split}
&|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\\ &\geq (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})\\ &=2n=4m, \end{split} \end{equation} we obtain \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree at least $m_{2}\cdot m$. Such monomial in $q$ of degree $m_{2}\cdot m$ corresponds to a walk from $-n$ to $n$ with vertices $j_{1}, j_{2}, \cdots, j_{k}\neq \pm n$ and positive steps of length $4$. Thus, \begin{equation} \gamma_{2m}=\pm P_{2m}(t)q^{m_{2}m}+O\Big(q^{m_{2}(m-\frac{1}{2})+m_{1}}\Big), \end{equation} where \begin{equation} P_{2m}(t)q^{m_{2}m}=2\Big(V(4m)+\sum_{k=1}^{\infty} \beta_{k}(2m,z)\Big). \end{equation} We have \begin{equation} P_{2}(t)q^{m_{2}}=2 V(4)=c q^{m_{2}} \end{equation} and \begin{equation} \begin{split} &P_{2m}(t)q^{m_{2}m}=2 \sum_{k=1}^{\infty} \beta_{k}(2m,z)\\ &=2 \cdot \Big(\frac{c}{2}\Big)^{m} \cdot q^{m_{2}m}\cdot \prod_{j=1}^{m-1}\Big((4m^{2}-(-2m+4j)^{2})\Big)^{-1}\\ &=\frac{32\cdot(\frac{c}{2})^{m}\cdot q^{m_{2}m}}{2^{4m}[(m-1)!]^{2}} \end{split} \end{equation} for $m\geq 2$. Therefore, \begin{equation}
\gamma_{2m}=\left|\frac{32\cdot(\frac{c}{2})^{m}\cdot q^{m_{2}m}}{2^{4m}[(m-1)!]^{2}}\right|+O\Big(q^{m_{2}(m-\frac{1}{2})+m_{1}}\Big). \end{equation}
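For instance, for $m=2$ the leading term comes from the single walk $(4,4)$ with vertex $j_{1}=0$, so that \begin{equation*} \gamma_{4}=2\cdot\frac{(\frac{c}{2})^{2}q^{2m_{2}}}{4^{2}-0^{2}}+O\Big(q^{\frac{3}{2}m_{2}+m_{1}}\Big)=\frac{c^{2}q^{2m_{2}}}{32}+O\Big(q^{\frac{3}{2}m_{2}+m_{1}}\Big), \end{equation*} in agreement with the general formula, since $\frac{32\cdot(\frac{c}{2})^{2}}{2^{8}[(1)!]^{2}}=\frac{c^{2}}{32}$.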
If $n$ is odd, i.e., $n=2m-1$, $m\in\mathbb{Z}_{>0}$, since \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})=2n=4m-2, \end{equation} then \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree at least \begin{equation} \begin{split}
&\frac{m_{2}}{4}\cdot\Big[|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\Big]+m_{1}-\frac{m_{2}}{2}\\ &\geq \frac{m_{2}}{4}\cdot(4m-2)+m_{1}-\frac{m_{2}}{2}\\ &=m_{2}(m-1)+m_{1}. \end{split} \end{equation} Such monomial in $q$ of degree $m_{2}(m-1)+m_{1}$ corresponds to a walk from $-n$ to $n$ with vertices $j_{1}, j_{2}, \cdots, j_{k}\neq \pm n$ and positive steps. Specifically, except for one step with length $2$, the others are of length $4$. Thus, \begin{equation} \gamma_{2m-1}=\pm P_{2m-1}(t)q^{m_{2}(m-1)+m_{1}}+O\Big(q^{m_{2}(m-\frac{3}{2})+2m_{1}}\Big), \end{equation} where \begin{equation} P_{2m-1}(t)q^{m_{2}(m-1)+m_{1}}=2\Big(V(4m-2)+\sum_{k=1}^{\infty} \beta_{k}(2m-1,z)\Big). \end{equation} We obtain \begin{equation} P_{1}(t) q^{m_{1}}=2 V(2)=bq^{m_{1}}, \end{equation} \begin{equation} P_{3}(t) q^{m_{1}+m_{2}}=2 \sum_{k=1}^{\infty} \beta_{k}(3,z)=2 \left(\frac{bq^{m_{1}}}{2}\right)\left(\frac{cq^{m_{2}}}{2}\right)\left(\frac{1}{3^{2}-1^{2}}+\frac{1}{3^{2}-1^{2}}\right) =\frac{bcq^{m_{1}+m_{2}}}{8}, \end{equation} \begin{equation} \begin{split} &P_{5}(t) q^{m_{1}+2m_{2}}=2 \sum_{k=1}^{\infty} \beta_{k}(5,z)\\ &=2 \left(\frac{bq^{m_{1}}}{2}\right)\left(\frac{cq^{m_{2}}}{2}\right)^{2} \left[\frac{1}{(5^{2}-3^{2})(5^{2}-1^{2})}+\frac{1}{(5^{2}-1^{2})(5^{2}-1^{2})}+\frac{1}{(5^{2}-3^{2})(5^{2}-1^{2})}\right]\\ &=\frac{bc^{2}q^{m_{1}+2m_{2}}}{3^{2}\cdot 2^{6}}, \end{split} \end{equation} \begin{equation} \begin{split} &P_{2m-1}(t)q^{m_{2}(m-1)+m_{1}}=2 \sum_{k=1}^{\infty} \beta_{k}(2m-1,z)\\ &=2 \left(\frac{c}{2}\right)^{m-1}\cdot \left(\frac{b}{2}\right)\cdot q^{m_{2}(m-1)+m_{1}}\cdot \Big\{\sum_{i=1}^{m-2} \prod_{j=1}^{i}\Big[(2m-1)^{2}-(-2m+1+4j)^{2}\Big]^{-1}\\ &\cdot \prod_{j=i}^{m-2}\Big[(2m-1)^{2}-(-2m+3+4j)^{2}\Big]^{-1}+\prod_{j=0}^{m-2}\Big[(2m-1)^{2}-(-2m+3+4j)^{2}\Big]^{-1}\\ &+\prod_{j=1}^{m-1}\Big[(2m-1)^{2}-(-2m+1+4j)^{2}\Big]^{-1}\Big\}\\ &=2 \left(\frac{c}{2}\right)^{m-1}\cdot \left(\frac{b}{2}\right)\cdot q^{m_{2}(m-1)+m_{1}}\cdot 
\frac{8}{2^{3m}}\cdot \Big\{ \frac{1}{[(2m-3)!!]^{2}}\cdot \sum_{i=1}^{m-2} \frac{(2m-2i-3)!!\cdot (2i-1)!!}{i!\cdot (m-1-i)!} \\ &+ \frac{2}{(2m-3)!!\cdot(m-1)!} \Big\}. \end{split} \end{equation} Hence, \begin{equation}
\gamma_{1}= |bq^{m_{1}}|+O(q^{2m_{1}-\frac{m_{2}}{2}}), \ \ \ \gamma_{3}=\frac{|bcq^{m_{1}+m_{2}}|}{8}+O(q^{2m_{1}+\frac{m_{2}}{2}}), \end{equation} \begin{equation} \begin{split}
&\gamma_{2m-1}= \Big|\left(\frac{c}{2}\right)^{m-1}\cdot b\cdot q^{m_{2}(m-1)+m_{1}}\cdot \frac{8}{2^{3m}}\cdot \Big\{ \frac{1}{[(2m-3)!!]^{2}}\\ & \cdot \sum_{i=1}^{m-2} \frac{(2m-2i-3)!!\cdot (2i-1)!!}{i!\cdot (m-1-i)!}
+ \frac{2}{(2m-3)!!\cdot(m-1)!} \Big\}\Big|+O\Big(q^{m_{2}(m-\frac{3}{2})+2m_{1}}\Big) \end{split} \end{equation} for $m\geq 3$.
{\noindent\bf Case 2.} If $m_{1}< \frac{m_{2}}{2}$, then \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree at least \begin{equation}
\frac{m_{1}}{2}\cdot\Big[|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\Big]. \end{equation} The minimum case occurs when \begin{equation} (n+j_{1}),(j_{2}-j_{1}),\cdots,(j_{k}-j_{k-1}),(n-j_{k})\in \{\pm2\}, \end{equation} then \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})\in 2\mathbb{Z}, \end{equation} while \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})=2n. \end{equation} Since \begin{equation} \begin{split}
&|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\\ &\geq (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})\\ &=2n, \end{split} \end{equation} we have \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree at least $m_{1}\cdot n$. Each such monomial of degree $m_{1}\cdot n$ corresponds to a walk from $-n$ to $n$ with vertices $j_{1}, j_{2}, \cdots, j_{k}\neq \pm n$ and positive steps of length $2$. Thus, \begin{equation} \gamma_{n}=\pm P_{n}(t) q^{m_{1}n}+O(q^{m_{1}n+m_{2}-2m_{1}}), \end{equation} where \begin{equation} P_{n}(t) q^{m_{1}n}=2\Big(V(2n)+\sum_{k=1}^{\infty} \beta_{k}(n,z)\Big). \end{equation} We deduce \begin{equation} P_{1}(t)q^{m_{1}}=2V(2)=bq^{m_{1}}, \ \ \ P_{2}(t)q^{2m_{1}}=2 \left(V(4)+\frac{\Big(\frac{bq^{m_{1}}}{2}\Big)^{2}}{2^{2}}\right)=cq^{m_{2}}+\frac{b^{2}q^{2m_{1}}}{8}, \end{equation} \begin{equation} \begin{split} &P_{n}(t) q^{m_{1}n}=2\sum_{k=1}^{\infty} \beta_{k}(n,z)\\ &=2\cdot\Big(\frac{b}{2}\Big)^{n}\cdot q^{m_{1}\cdot n}\cdot \prod_{j=1}^{n-1}(n^{2}-(-n+2j)^{2})^{-1}\\ &=2\cdot \frac{(\frac{b}{2})^{n}\cdot q^{m_{1}\cdot n}}{4^{n-1}\cdot[(n-1)!]^{2}}\\ &=\frac{8\cdot b^{n}\cdot q^{m_{1}n}}{2^{3n}\cdot [(n-1)!]^{2}} \end{split} \end{equation} for $n\geq 3$. Therefore, \begin{equation}
\gamma_{1}=\left|bq^{m_{1}}\right|+O(q^{m_{2}-m_{1}}), \ \ \ \gamma_{2}=\left|cq^{m_{2}}+\frac{b^{2}q^{2m_{1}}}{8}\right|+O(q^{m_{2}}), \end{equation} \begin{equation}
\gamma_{n}=\left|\frac{8\cdot b^{n}\cdot q^{m_{1}n}}{2^{3n}\cdot [(n-1)!]^{2}}\right|+O(q^{m_{1}n+m_{2}-2m_{1}}) \end{equation} for $n\geq 3$.
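For instance, for $n=3$ the leading contribution comes from the single walk $(2,2,2)$, giving \begin{equation*} \gamma_{3}=2\cdot\frac{(\frac{b}{2})^{3}q^{3m_{1}}}{(3^{2}-1^{2})^{2}}+O(q^{m_{1}+m_{2}})=\frac{|b|^{3}q^{3m_{1}}}{256}+O(q^{m_{1}+m_{2}}), \end{equation*} which matches the general formula, since $\frac{8}{2^{9}[2!]^{2}}=\frac{1}{256}$.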
{\noindent\bf Case 3.} If $m_{1}= \frac{m_{2}}{2}$, then \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree \begin{equation}
\frac{m_{1}}{2}\cdot\Big[|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\Big]. \end{equation} Since \begin{equation} \begin{split}
&|n+j_{1}|+|j_{2}-j_{1}|+\cdots+|j_{k}-j_{k-1}|+|n-j_{k}|\\ &\geq (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})\\ &=2n, \end{split} \end{equation} we have \begin{equation} V(n+j_{1})V(j_{2}-j_{1})\cdots V(j_{k}-j_{k-1})V(n-j_{k}) \end{equation} is a monomial in $q$ of degree at least $m_{1}\cdot n$, and each such monomial of degree $m_{1}\cdot n$ corresponds to a walk from $-n$ to $n$ with vertices $j_{1}, j_{2}, \cdots, j_{k}\neq \pm n$ and positive steps of length $2$ or $4$. The minimum case occurs when $n+j_{1}$, $j_{2}-j_{1}$, $\cdots$, $j_{k}-j_{k-1}$ and $n-j_{k}$ all have the same sign, while the next smallest degree occurs when exactly one step of length $2$ has the opposite sign. Thus, \begin{equation} \gamma_{n}=\pm P_{n}(t) q^{m_{1}n}+O(q^{m_{1}(n+2)}), \end{equation} where \begin{equation} P_{n}(t) q^{m_{1}n}=2\Big(V(2n)+\sum_{k=1}^{\infty} \beta_{k}(n,z)\Big). \end{equation} We obtain \begin{equation} P_{1}(t)q^{m_{1}}=2V(2)=bq^{m_{1}}, \ \ \ P_{2}(t)q^{2m_{1}}=2 \left(V(4)+\frac{\Big(\frac{bq^{m_{1}}}{2}\Big)^{2}}{2^{2}}\right)=cq^{m_{2}}+\frac{b^{2}q^{2m_{1}}}{8}, \end{equation} \begin{equation} \begin{split} &P_{n}(t) q^{m_{1}n}=2\sum_{k=1}^{\infty} \beta_{k}(n,z)\\ &=2\cdot P_{n}\Big(\frac{b}{2}\Big)\cdot q^{m_{1}\cdot n}\cdot \prod_{j=1}^{n-1}(n^{2}-(-n+2j)^{2})^{-1}\\ &=8\cdot \frac{P_{n}\big(\frac{b}{2}\big)\cdot q^{m_{1}\cdot n}}{2^{2n}\cdot[(n-1)!]^{2}} \end{split} \end{equation} for $n\geq 3$. Therefore, \begin{equation}
\gamma_{1}=\left|bq^{m_{1}}\right|+O(q^{3m_{1}}), \ \ \ \gamma_{2}=\left|cq^{m_{2}}+\frac{b^{2}q^{2m_{1}}}{8}\right|+O(q^{4m_{1}}), \end{equation} \begin{equation}
\gamma_{n}=\left|8\cdot \frac{P_{n}\big(\frac{b}{2}\big)\cdot q^{m_{1}\cdot n}}{2^{2n}\cdot[(n-1)!]^{2}}\right|+O(q^{m_{1}(n+2)}) \end{equation} for $n\geq 3$, where $P_{n}\big(\frac{b}{2}\big)$ is a polynomial of $\frac{b}{2}$ with degree $n$ and leading coefficient $1$.
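For completeness, the denominator $2^{2n}\cdot[(n-1)!]^{2}$ above follows from the elementary identity
\begin{equation*}
n^{2}-(-n+2j)^{2}=(2n-2j)(2j)=4j(n-j),
\end{equation*}
so that
\begin{equation*}
\prod_{j=1}^{n-1}\bigl(n^{2}-(-n+2j)^{2}\bigr)=4^{n-1}\,[(n-1)!]^{2}
\quad\text{and}\quad
2\prod_{j=1}^{n-1}\bigl(n^{2}-(-n+2j)^{2}\bigr)^{-1}=\frac{8}{2^{2n}\cdot[(n-1)!]^{2}}.
\end{equation*}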
Specifically, if $n$ is even, i.e., $n=2m$, $m\in\mathbb{Z}_{>0}$, then \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})=4m, \end{equation} which implies that each walk from $-2m$ to $2m$ has an even number of steps of length $2$. We have \begin{equation} P_{2m}\Big(\frac{b}{2}\Big)=\prod_{k=1}^{m}\Big(\Big(\frac{b}{2}\Big)^{2}-x_{k}\Big), \end{equation} where $x_{k}$, $k=1,\cdots, m$, depend on $m$. By Lemma \ref{L:2}, we obtain all even gaps closed except the first $k$ if $\big(\frac{b}{2}\big)^{2}=-8c\big(k+\frac{1}{2}\big)^{2}$, which yields \begin{equation} P_{2m}\Big(\frac{b}{2}\Big)=\prod_{k=1}^{m}\Big(\Big(\frac{b}{2}\Big)^{2}+8c\Big(k-\frac{1}{2}\Big)^{2}\Big). \end{equation} Hence, \begin{equation}
\gamma_{2m}=8\left|\frac{\prod_{k=1}^{m}\Big(\Big(\frac{b}{2}\Big)^{2}+8c\Big(k-\frac{1}{2}\Big)^{2}\Big)\cdot q^{2m_{1}\cdot m}}{2^{4m}\cdot[(2m-1)!]^{2}}\right|+O(q^{2m_{1}(m+1)}) \end{equation} for $m\geq 2$. If $n$ is odd, i.e., $n=2m-1$, $m\in\mathbb{Z}_{>0}$, then \begin{equation} (n+j_{1})+(j_{2}-j_{1})+\cdots+(j_{k}-j_{k-1})+(n-j_{k})=2n=4m-2, \end{equation} which implies that each walk from $-(2m-1)$ to $2m-1$ has an odd number of steps of length $2$. We have \begin{equation} P_{2m-1}\Big(\frac{b}{2}\Big)=\frac{b}{2}\prod_{k=1}^{m-1}\Big(\Big(\frac{b}{2}\Big)^{2}-y_{k}\Big), \end{equation} where $y_{k}$, $k=1,\cdots, m-1$, depend on $m$. By Lemma \ref{L:2}, we deduce \begin{equation} P_{2m-1}\Big(\frac{b}{2}\Big)=\frac{b}{2}\prod_{k=1}^{m-1}\Big(\Big(\frac{b}{2}\Big)^{2}+8ck^{2}\Big). \end{equation} Hence, \begin{equation}
\gamma_{2m-1}=32\left|\frac{\frac{b}{2}\prod_{k=1}^{m-1}\Big(\Big(\frac{b}{2}\Big)^{2}+8ck^{2}\Big)\cdot q^{m_{1}\cdot (2m-1)}}{2^{4m}\cdot[(2m-2)!]^{2}}\right|+O(q^{m_{1}(2m+1)}) \end{equation} for $m\geq 2$.
\section{Proof of Theorem \ref{T:2}} Since $V(\pm 2)=\frac{b}{2}q^{m_{1}}$ and $V(\pm 4)=\frac{c}{2}q^{2m_{1}}$, we have \begin{equation}
\|v\|^{2}=\frac{1}{2}\Big(b^{2}q^{2m_{1}}+c^{2}q^{4m_{1}}\Big). \end{equation} By Lemma \ref{L:1}, we get \begin{equation}
\gamma_{n}=2\left|\sum_{x\in X}B_{n}(x,z)\right|\left(1+O\left(\frac{1}{n^{2}}\right)\right), \end{equation}
where $z=z_{n}$ depends on $n$, but $|z|<1$.
Set $\sigma_{n}:=\sum_{\xi\in X^{+}}B_{n}(\xi,0)=\sigma_{n}^{+}+\sigma_{n}^{-}$, where $\sigma_{n}^{\pm}:=\sum_{\xi:B_{n}(\xi,0)\gtrless 0}B_{n}(\xi,0)$. When $\xi\in X^{+}$, \begin{equation} B_{n}(\xi,0)=\frac{V(x_{1})\cdots V(x_{\nu+1})}{(n^{2}-j_{1}^{2})\cdots(n^{2}-j_{\nu}^{2})}, \end{equation} where $x_{i}=2$ or $4$ for $i=1, \cdots, \nu+1$.
Since $X\setminus X^{+}= \bigcup_{\xi\in X^{+}} X_{\xi}$, we can choose disjoint sets $X_{\xi}'\subset X_{\xi}$ so that \begin{equation} X\setminus X^{+}=\bigcup_{\xi\in X^{+}}X_{\xi}'. \end{equation} Then \begin{equation} \sum_{x\in X\setminus X^{+}}B_{n}(x,z)=\sum_{\xi\in X^{+}}\left(\sum_{x\in X_{\xi}'}B_{n}(x,z)\right), \end{equation} therefore, we have \begin{equation} \begin{split} &\sum_{x\in X}B_{n}(x,z)=\sum_{\xi\in X^{+}}\left(B_{n}(\xi,z)+\sum_{x\in X_{\xi}'}B_{n}(x,z)\right)\\ &=\sum_{\xi:B_{n}(\xi,0)>0}\left(B_{n}(\xi,z)+\sum_{x\in X_{\xi}'}B_{n}(x,z)\right) +\sum_{\xi:B_{n}(\xi,0)<0}\left(B_{n}(\xi,z)+\sum_{x\in X_{\xi}'}B_{n}(x,z)\right)\\ &=:\Sigma=\Sigma^{+}+\Sigma^{-}, \end{split} \end{equation} where $\Sigma^{\pm}:=\sum_{\xi:B_{n}(\xi,0)\gtrless 0}\left(B_{n}(\xi,z)+\sum_{x\in X_{\xi}'}B_{n}(x,z)\right)$.
By Lemma \ref{L:3} and Lemma \ref{L:5}, we deduce that there exists a constant $C_{1}>0$ such that \begin{equation} \left[1\mp C_{1}\frac{\log n}{n}\right]\sigma_{n}^{\pm}\leq\Sigma^{\pm}\leq \left[1\pm C_{1}\frac{\log n}{n}\right]\sigma_{n}^{\pm}, \end{equation} which implies \begin{equation} \label{E:estimation}
\left|\frac{\Sigma}{\sigma_{n}}-1\right|\leq C_{1} \frac{|\sigma_{n}^{-}|+\sigma_{n}^{+}}{|\sigma_{n}|} \cdot \frac{\log n}{n}. \end{equation}
If $\xi\in X^{+}$, then $V(x_{1})\cdots V(x_{\nu+1})$ is a monomial in $q$ of degree $\frac{m_{1}}{2}\cdot (x_{1}+\cdots+x_{\nu+1})=m_{1}\cdot n$. From Case 3 of Theorem \ref{T:1}, we have \begin{equation} \sigma_{2m}=\sum_{\xi\in X^{+}}B_{2m}(\xi,0)=\frac{q^{2m_{1}\cdot m}}{4^{2m-1}\cdot[(2m-1)!]^{2}}\cdot\prod_{k=1}^{m}\left(\left(\frac{b}{2}\right)^{2}+8c\left(k-\frac{1}{2}\right)^{2}\right) \end{equation} and \begin{equation} \sigma_{2m-1}=\sum_{\xi\in X^{+}}B_{2m-1}(\xi,0)=\frac{q^{m_{1}(2m-1)}}{4^{2m-2}\cdot[(2m-2)!]^{2}}\cdot\frac{b}{2} \cdot\prod_{k=1}^{m-1}\left(\left(\frac{b}{2}\right)^{2}+8ck^{2}\right). \end{equation} Moreover, $\sigma_{2m}\neq 0$ when $\frac{b}{2}\neq 2\sqrt{-2c}\cdot (k-\frac{1}{2})$ and $\sigma_{2m-1}\neq 0$ when $\frac{b}{2}\neq 2\sqrt{-2c} \cdot k$, where $c<0$. So \begin{equation} \label{E:upper bound 1}
\frac{|\sigma_{2m}^{-}|+\sigma_{2m}^{+}}{|\sigma_{2m}|}
=\frac{\prod_{k=1}^{m}\left(1-\frac{b^{2}}{8c(2k-1)^{2}}\right)}{\prod_{k=1}^{m}\left|1+\frac{b^{2}}{8c(2k-1)^{2}}\right|}
\leq \frac{\prod_{k=1}^{\infty}\left(1-\frac{b^{2}}{8c(2k-1)^{2}}\right)}{\prod_{k=1}^{\infty}\left|1+\frac{b^{2}}{8c(2k-1)^{2}}\right|}
=\left|\frac{\cosh \left(\frac{b\pi}{4\sqrt{-2c}}\right)}{\cos \left(\frac{b\pi}{4\sqrt{-2c}}\right)}\right|. \end{equation} Similarly, we have \begin{equation} \label{E:upper bound 2}
\frac{|\sigma_{2m-1}^{-}|+\sigma_{2m-1}^{+}}{|\sigma_{2m-1}|}\leq \left|\frac{\sinh \left(\frac{b\pi}{4\sqrt{-2c}}\right)}{\sin \left(\frac{b\pi}{4\sqrt{-2c}}\right)}\right|. \end{equation} By (\ref{E:estimation}), we obtain \begin{equation} \sum_{x\in X}B_{2m}(x,z)=\sigma_{2m}\left[1+O\left(\frac{\log m}{m}\right)\right]=\left(\sum_{\xi\in X^{+}}B_{2m}(\xi,0)\right)\left[1+O\left(\frac{\log m}{m}\right)\right] \end{equation} and \begin{equation} \sum_{x\in X}B_{2m-1}(x,z)=\sigma_{2m-1}\left[1+O\left(\frac{\log m}{m}\right)\right]=\left(\sum_{\xi\in X^{+}}B_{2m-1}(\xi,0)\right)\left[1+O\left(\frac{\log m}{m}\right)\right]. \end{equation}
Notice that \begin{equation} \cos \left(\frac{b\pi}{4\sqrt{-2c}}\right)=\prod_{k=1}^{\infty}\left(1+\frac{b^{2}}{8c(2k-1)^{2}}\right) \end{equation} and \begin{equation} \sin \left(\frac{b\pi}{4\sqrt{-2c}}\right)=\frac{b\pi}{4\sqrt{-2c}}\prod_{k=1}^{\infty}\left(1+\frac{b^{2}}{8c(2k)^{2}}\right), \end{equation} then \begin{equation} \cos \left(\frac{b\pi}{4\sqrt{-2c}}\right)=\prod_{k=1}^{m}\left(1+\frac{b^{2}}{8c(2k-1)^{2}}\right)\left[1+O\left(\frac{1}{m}\right)\right] \end{equation} and \begin{equation} \sin \left(\frac{b\pi}{4\sqrt{-2c}}\right)=\frac{b\pi}{4\sqrt{-2c}}\prod_{k=1}^{m-1}\left(1+\frac{b^{2}}{8c(2k)^{2}}\right)\left[1+O\left(\frac{1}{m}\right)\right]. \end{equation} Hence, \begin{equation} \sum_{\xi\in X^{+}}B_{2m}(\xi,0)=\frac{q^{2m_{1}\cdot m}\cdot(-1)^{m}\cdot c^{m}}{2^{3m-2}\cdot[(2m-2)!!]^{2}}\cdot \cos\left(\frac{b\pi}{4\sqrt{-2c}}\right)\cdot \left[1+O\left(\frac{1}{m}\right)\right] \end{equation} and \begin{equation} \sum_{\xi\in X^{+}}B_{2m-1}(\xi,0)=\frac{q^{m_{1}(2m-1)}\cdot(-1)^{m-1}\cdot c^{m-1}\cdot\sqrt{-2c}}{2^{3m-4}\cdot[(2m-3)!!]^{2}\cdot \pi} \cdot \sin\left(\frac{b\pi}{4\sqrt{-2c}}\right)\cdot \left[1+O\left(\frac{1}{m}\right)\right]. 
\end{equation} Combining (\ref{E:estimation}), (\ref{E:upper bound 1}) and (\ref{E:upper bound 2}), we deduce \begin{equation} \begin{split} &\sum_{x\in X}B_{2m}(x,z)=\left(\sum_{\xi\in X^{+}}B_{2m}(\xi,0)\right)\left[1+O\left(\frac{\log m}{m}\right)\right]\\ &=\frac{q^{2m_{1}\cdot m}\cdot(-1)^{m}\cdot c^{m}}{2^{3m-2}\cdot[(2m-2)!!]^{2}}\cdot \cos\left(\frac{b\pi}{4\sqrt{-2c}}\right)\cdot\left[1+O\left(\frac{\log m}{m}\right)\right] \end{split} \end{equation} and \begin{equation} \begin{split} &\sum_{x\in X}B_{2m-1}(x,z)=\left(\sum_{\xi\in X^{+}}B_{2m-1}(\xi,0)\right)\left[1+O\left(\frac{\log m}{m}\right)\right]\\ &=\frac{q^{m_{1}(2m-1)}\cdot(-1)^{m-1}\cdot c^{m-1}\cdot\sqrt{-2c}}{2^{3m-4}\cdot[(2m-3)!!]^{2}\cdot \pi} \cdot \sin\left(\frac{b\pi}{4\sqrt{-2c}}\right)\cdot\left[1+O\left(\frac{\log m}{m}\right)\right]. \end{split} \end{equation} Therefore, \begin{equation}
\gamma_{2m}=\frac{q^{2m_{1}\cdot m}\cdot |c|^{m}}{2^{3m-3}\cdot[(2m-2)!!]^{2}}\cdot \left|\cos\left(\frac{b\pi}{4\sqrt{-2c}}\right)\right|\cdot\left[1+O\left(\frac{\log m}{m}\right)\right] \end{equation} and \begin{equation}
\gamma_{2m-1}=\frac{q^{m_{1}(2m-1)}\cdot |c|^{m-1}\cdot\sqrt{-2c}}{2^{3m-5}\cdot[(2m-3)!!]^{2}\cdot \pi}
\cdot \left|\sin\left(\frac{b\pi}{4\sqrt{-2c}}\right)\right|\cdot\left[1+O\left(\frac{\log m}{m}\right)\right]. \end{equation}
\end{document}
\begin{document}
\renewcommand{\thelstlisting}{\arabic{lstlisting}}
\title{Monitoring with Verified Guarantees}
\author{Johann C.~Dauer\inst{1}\orcidID{0000-0002-8287-2376} \and Bernd Finkbeiner\inst{2}\orcidID{0000-0002-4280-8441} \and Sebastian Schirmer\inst{1}\orcidID{0000-0002-4596-2479}}
\authorrunning{Dauer et al.}
\institute{German Aerospace Center (DLR), Braunschweig, Germany\\ \email{\{johann.dauer, sebastian.schirmer\}@dlr.de} \and Helmholtz Center for Information Security (CISPA), Saarbrücken, Germany \email{finkbeiner@cispa.saarland}\\ }
\maketitle
\begin{abstract} Runtime monitoring is generally considered a light-weight alternative to formal verification. In safety-critical systems, however, the monitor itself is a critical component. For example, if the monitor is responsible for initiating emergency protocols, as proposed in a recent aviation standard, then the safety of the entire system critically depends on guarantees of the correctness of the monitor. In this paper, we present a verification extension to the \lola monitoring language that integrates the efficient specification of the monitor with Hoare-style annotations that guarantee the correctness of the monitor specification. We add two new operators, assume and assert, which specify assumptions of the monitor and expectations on its output, respectively. The validity of the annotations is established by an integrated \smt solver. We report on experience in applying the approach to specifications from the avionics domain, where the annotation with assumptions and assertions has led to the discovery of safety-critical errors in the specifications. The errors range from incorrect default values in offset computations to complex algorithmic errors that result in unexpected temporal patterns.
\keywords{Formal methods \and Cyber-physical systems \and Runtime Verification \and Hoare Logic.} \end{abstract}
\section{Introduction}\label{sec:introduction} Cyber-physical systems are inherently safety-critical due to their direct interaction with the physical environment -- failures are unacceptable. A means of protection against failures is the integration of reliable monitoring capabilities. A \emph{monitor} is a system component that has access to a wide range of system information, \eg sensor readings and control decisions. When the monitor detects a failure, \ie a violation of the behavior stated in its \emph{specification}, it notifies the system or activates recoveries to prevent failure propagation.
The task of the monitor is critical to the safety of the system, and its correctness is therefore of utmost importance.
Runtime monitoring approaches like \lola~\cite{lola05,VerifiedLola} address this by describing the monitor in a formal specification language, and then generating a monitor implementation that is provably correct and has strong runtime guarantees, for example on memory consumption. Formal monitoring languages typically feature temporal~\cite{R2U2} and sometimes spatial~\cite{Nenzi} operators that simplify the specification of complex monitoring behaviors. However, the specification itself, the central part of runtime monitoring, is still prone to human errors during specification development. How can we check that the monitor specification itself is correct?
In this paper, we introduce a verification feature to the \lola framework. Specifically, we extend the specification language with \emph{assumptions} and \emph{assertions}. The framework verifies that the assertions are guaranteed to hold if the input to the monitor satisfies the assumptions.
The prime application area of \lola is unmanned aviation. \lola is increasingly used for the development and operation monitoring of unmanned aircraft; for example, the \lola monitoring framework has been integrated into the DLR unmanned aircraft superARTIS\footnote{https://www.dlr.de/content/en/research-facilities/superartis-en.html}~\cite{fpgartlola}. The verification extension presented in this paper is motivated by this work. In practice, system engineers report that support for specification development is necessary, \eg sanity checks and proofs of correctness. Additionally, recent developments in unmanned aviation regulations and standards indicate a similar necessity. One such development is the upcoming industry standard ASTM F3269 (Standard Practice for Methods to Safely Bound Flight Behavior of Unmanned Aircraft Systems Containing Complex Functions). ASTM F3269 introduces a certification strategy based on a Run-Time Assurance (RTA) architecture that bounds the behavior of a complex function by a safety monitor~\cite{astm}, similar to the well-known Simplex architecture~\cite{sha}. This complex function could be a Deep Neural Network as proposed in~\cite{codann}. A simplified version of the architecture\footnote{In its original version the data is separated into assured and unassured data and data preparation components are added.} of ASTM F3269 is shown in Figure~\ref{fig:astm}.
\begin{figure}
\caption{Run-Time Assurance architecture proposed by ASTM F3269 to safely bound a complex function using a safety monitor.}
\label{fig:astm}
\end{figure}
At the core of the architecture is a safety monitor that takes the inputs and outputs of the complex function, and decides whether the complex function behaves as expected. If not, the monitor switches the control from the complex function to a matching recovery function. For instance, the flight of an unmanned aircraft could be separated into different phases: \eg take-off, cruise flight, and landing. For each of these phases, a dedicated recovery could be defined, \eg braking during take-off, the activation of a parachute during cruise flight, or a go-around maneuver during landing. Further, it is crucial that recoveries are only activated under certain conditions and that only one recovery is activated at a time. For instance, a parachute activation during a landing approach is considered safety-critical. The verification extension of \lola introduced in this paper can be used to guarantee statically that such decisions are avoided within the monitor specification. Consider the simplified \lola specification \begin{lstlisting}[] input event_a, event_b, value: Bool, Bool, Float32 assume <a1> !(event_a and event_b) output braking$~$ : Bool := ...computation... output parachute : Bool := ...computation... output go_around : Bool := ...computation... assert <a1> !(braking and parachute) \end{lstlisting} that declares an assumption on the system input \texttt{event}s and asserts that \texttt{braking} and \texttt{parachute} never evaluate to \emph{true} simultaneously.
In the following, we first give a brief introduction to the stream-based specification language \lola, then present the verification approach, and, finally, give details on the tool implementation and our tool experience with specifications that were written based on interviews with aviation experts. Our results show that standard \lola specifications are indeed prone to error, and that these errors can be caught with the formal verification introduced by our extension.
\paragraph{\textbf{Related Work}}\label{sec:related work}~\\ Most work on the verification of monitors focuses on the correct transformation into a general programming language. For example, Copilot~\cite{copilot} specifications can be compiled into C code with constant time and memory requirements. Similarly, there is a translation validation toolkit for \lola monitors implemented in Rust~\cite{VerifiedLola}, which is based on the Viper verification tool. Translation validation of this type is orthogonal to the verification approach of this paper. Instead of verifying the correctness of a transformation, our focus is to verify the specification itself. Both activities complement each other and facilitate safer future cyber-physical systems.
Our verification approach is based on classic ideas of inductive program verification~\cite{hoare69,Floyd1993}, and is closely related to the techniques used in static program verifiers like \textsc{KeY}~\cite{KeYBook2007}, VeriFast~\cite{verifast}, and Dafny~\cite{dafny}.
In a verification approach like Dafny, we are interested in functional properties of procedures,
specified as post-conditions that relate the values upon the termination of the procedure with those at the time of entry to the procedure, \eg \emph{ensures y == old(y)}.
By contrast, a stream-based language like \lola allows arbitrary access to past and future stream values. This makes it necessary to \emph{unfold} the \lola specification in order to properly relate the assumptions and assertions in time.
Most closely related to stream-based monitoring languages are synchronous programming languages like \textsc{LUSTRE}~\cite{lustre}, \textsc{ESTEREL}~\cite{esterel}, and \textsc{SIGNAL}~\cite{signal}. For these languages, the compiler is typically used for verification -- a program representing the negation of desired properties is compiled with the target program and a check for emptiness decides whether the properties are satisfied. Furthermore, a translation from past linear-time temporal logic to \textsc{ESTEREL} was proposed to simplify the specification of more complex temporal properties~\cite{esterelPLTL}. Other verification techniques exist as well, such as \smt-based \emph{k-}Induction for \textsc{LUSTRE}~\cite{smtbasedlustre} or a term rewriting system on synced effects~\cite{trsesterel}.
A key difference in our approach is that we do not rely on compilation. Our verification works on the level of an intermediate representation. Furthermore, synchronous programming languages are limited to past references, while the stream unfolding for the inductive correctness proof of the \lola specification includes both past and future temporal operators. Similar to \emph{k-}Induction, our approach is sound but not complete.
\section{Runtime Monitoring with \lola}\label{sec:preliminaries}
We now give an overview of the monitoring specification language \lola. The verification extension is presented in the next section.
\lola is a stream-based language that describes the translation from input streams to output streams: { \setlength\abovedisplayshortskip{0pt} \setlength\belowdisplayshortskip{0pt} \setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{-5pt} \begin{align*} \text{\textbf{input}}~ t_1 &: T_1\\ \vdots &\\ \text{\textbf{input}}~ t_m &: T_m\\ \text{\textbf{output}}~ s_1 &: T_{m+1} := e_1(t_1,\dots,t_m,s_1, \dots, s_n)\\ \vdots &\\ \text{\textbf{output}}~ s_n &: T_{m+n} := e_n(t_1,\dots,t_m,s_1, \dots, s_n)\\ \text{\textbf{trigger}}~ \varphi & ~ \mathit{message}\\ \end{align*} } where input streams carry synchronously arriving data from the system under scrutiny, output streams represent calculations, and triggers generate notification $\mathit{message}$s at instants where their condition $\varphi$ becomes $\mathit{true}$. Input streams $t_1,\dots, t_m$ and output streams $s_1,\dots, s_n$ are called \emph{independent} and \emph{dependent variables}, respectively. Each variable is typed: independent variables $t_i$ are typed $T_i$ and dependent variables $s_i$ are typed $T_{m+i}$. Dependent variables are computed based on \emph{stream expressions} $e_1, \dots, e_n$ over dependent and independent stream variables. A stream expression is one of the following: \begin{itemize}
\item an atomic stream expression $c$ of type $T$ if $c$ is a constant of type $T$;
\item an atomic stream expression $s$ of type $T$ if $s$ is a stream variable of type $T$;
\item a stream expression $ite(b, e_1, e_2)$ of type $T$ if $b$ is a Boolean stream expression and $e_1, e_2$ are stream expressions of type $T$. Note that $ite$ abbreviates the control construct \emph{if-then-else};
\item a stream expression $f(e_1, \dots, e_k)$ of type $T$ if $f: T_1 \times \dots \times T_k \mapsto T$ is a $k$-ary operator and $e_1, \dots, e_k$ are stream expressions of type $T_1, \dots, T_k$;
\item a stream expression $\mathit{o.offset(by: i).defaults(to: d)}$ of type $T$ if $o$ is a stream variable of type $T$, $i$ is an Integer, and $d$ is of type $T$. \end{itemize}
\noindent For example, consider the \lola specification \begin{lstlisting}[] input altitude: Float32 // in m output altitude_bound := altitude > 200.0 trigger altitude_bound "Warning: Decrease altitude!" \end{lstlisting} that notifies the system if the current \texttt{altitude} is above its operating limits, \ie \texttt{200.0} meters. Note that stream types are inferred, \ie \texttt{altitude\_bound} is of type \texttt{Bool}.
\lola uses temporal operators that allow output streams to access their own and other streams' previous and future values. The stream \begin{lstlisting}[] output alt_count := if altitude $\le$ 200.0 then 0
else alt_count.offset(by: -1).defaults(to: 0) + 1 \end{lstlisting} represents a count of consecutive altitude violations by accessing its own previous value, \ie \texttt{offset(by: x)} where a negative and positive integer \texttt{x} represents past and future stream accesses, respectively. Since temporal accesses are not always guaranteed to exist, the default operator defines values which are used instead, \ie \texttt{defaults(to: d)} where \texttt{d} has to be of the same type as the used stream. Here, at the first position of \texttt{alt\_count} the default value zero is taken. As abbreviations for the temporal operators, \texttt{alt\_count[x, d]} is used. Further, \texttt{s[x..y, d, $\circ$]} for \texttt{x $<$ y} abbreviates \texttt{s[x,d] $\circ$ s[x+1,d] $\circ~\dots~\circ$ s[y,d]} where $\circ$ is a binary operator. Using \texttt{alt\_count > 10} as a trigger condition is preferable if only persistent violations should be reported.
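The offset/default semantics of \texttt{alt\_count} can be illustrated with a small Python sketch; the function name and the sample altitude trace are our own, not part of the \lola framework:

```python
def eval_alt_count(altitudes, limit=200.0):
    """Evaluate the alt_count stream: consecutive altitude violations.

    Models alt_count.offset(by: -1).defaults(to: 0): at position 0 the
    past access is out of bounds, so the default 0 is used instead.
    """
    alt_count = []
    for j, alt in enumerate(altitudes):
        previous = alt_count[j - 1] if j > 0 else 0  # defaults(to: 0)
        alt_count.append(0 if alt <= limit else previous + 1)
    return alt_count

counts = eval_alt_count([150.0, 210.0, 220.0, 230.0, 180.0, 205.0])
# violations at positions 1-3 accumulate, position 4 resets the counter
```

Here the count rises to three during the sustained violation and drops back to zero as soon as the altitude is within bounds again, which is exactly the behavior a \texttt{alt\_count > 10} trigger condition relies on.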
In general, \lola is a specification language that allows one to specify complex temporal properties in a precise, concise, and less error-prone way. The focus is on \emph{what} properties should be monitored instead of \emph{how} a monitor should be executed. Therefore, the \lola monitor synthesis automatically infers and optimizes implementation details like evaluation order and memory management. The evaluation order~\cite{VerifiedLola} of \lola streams is automatically derived by analysis of the \emph{dependency graph}~\cite{lola05} of the specification. This makes it possible to ignore the order when taking advantage of the modular structure of \lola output streams, \eg: \begin{lstlisting}[] output alt_avg := alt_count / (position+1) output alt_count := if altitude $\le$ 200.0 then 0
else alt_count.offset(by: -1).defaults(to: 0) + 1 output position := position.offset(by: -1).defaults(to: 0) \end{lstlisting} where \texttt{position} and \texttt{alt\_count} are used before their definition. Further, the dependency graph allows the detection of invalid cyclic stream dependencies, \eg
\noindent \texttt{output a := a.offset(by: 0).defaults(to: 0)}.
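Such a cycle check can be sketched in Python. This is a simplified illustration under the assumption that a dependency cycle whose offsets sum to zero is the invalid case (as in the self-loop above, where a stream depends on its own current value); it inspects simple cycles only, whereas the full analysis works on arbitrary closed walks:

```python
def zero_weight_cycle(edges):
    """edges: dict mapping a stream name to a list of (target, offset)
    pairs, one per temporal access in the stream's defining expression.

    Returns True if some simple cycle has total offset 0, i.e., an
    invalid cyclic dependency (sketch; hypothetical helper name).
    """
    def dfs(start, node, weight, visited):
        for succ, off in edges.get(node, []):
            if succ == start and weight + off == 0:
                return True  # closed the cycle with total offset 0
            if succ not in visited:
                if dfs(start, succ, weight + off, visited | {succ}):
                    return True
        return False

    return any(dfs(s, s, 0, {s}) for s in edges)

# output a := a.offset(by: 0).defaults(to: 0)   -- invalid self-loop
invalid_spec = zero_weight_cycle({"a": [("a", 0)]})
# alt_count accesses itself with offset -1        -- valid self-loop
valid_spec = zero_weight_cycle({"alt_count": [("alt_count", -1)]})
```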
\section{Assumptions and Assertions}\label{sec:hoarelola} In this section, we present the verification extension for the \lola specification language. The extension allows the developer to annotate the \lola specification with \emph{assumptions} and \emph{assertions} in order to verify the desired guarantees on the computed streams. As an example, consider the simplified specification in Listing \ref{lst:recovery}, which is structured into stream computations in Lines 1 to 23, and assumptions and assertions from Line 26 onwards.
\input{specifications/contingencies}
The computation part specifies a safety monitor within an RTA architecture that triggers recovery functions for three different flight phases. First, the take-off recovery function is triggered (Line 21) when the targeted take-off speed was not achieved on a runway up to a predefined point (Line 13). The distance between the current position and the end of the runway with local coordinates $(0,0)$ is computed in Line 8. Second, in-flight a parachute is activated (Line 22) when virtual barriers for the aircraft, \ie a geofence, are exceeded (Line 15). For more details on a \lola geofence specification (Line 9), we refer to~\cite{geofences}. Last, during landing, up to a point of no return (\texttt{alt < 10.0}), a new landing attempt is initiated (Line 23) if the aircraft's speed is too fast or its landing gear is not yet ready. To be more robust, the current and the previous value of \texttt{landing\_gear\_ready} are taken into account (Lines 17-18).
With the verification extension, the specification assures that recoveries are not activated simultaneously (Lines 30-31), \ie for instance there is no possibility that a parachute is activated during a landing approach. The first two conjunctions in Line 30 evaluate to $\mathit{false}$ because relevant outputs use a disjoint altitude condition. The last conjunction requires an assumption. In fact, here, two assumptions are linked by the identifier $a1$ to the assertion. The assumptions specify: the known bound of received speed data (Line 27) as well as operational information (Line 26), \eg given by the concept of operation a nominal landing is only foreseen within the predefined operational airspace. Further, a second assertion is stated in Line 33 that guarantees that \emph{the parachute should only be activated when the aircraft is 100 meters above ground}. In this case, the property can be shown assumption-free. Assertions help engineers to show that certain properties are $\mathit{true}$. The given assertions indicate how specification debugging and management can benefit from the extension -- it avoids digging into potentially complex stream computations.
The extension and its verification approach are presented in the following. In general, the verification extension is used if a \lola specification is annotated in the following way: { \setlength\abovedisplayshortskip{0pt} \setlength\belowdisplayshortskip{0pt} \setlength\abovedisplayskip{5pt} \setlength\belowdisplayskip{-5pt} \begin{align*} \text{\textbf{assume}}~ \langle \alpha_1 \rangle &\quad \theta_1\\ \vdots &\\ \text{\textbf{assume}}~ \langle \alpha_m \rangle &\quad \theta_m\\ \text{\textbf{assert}}~ \langle \alpha_{m+1} \rangle &\quad \psi_1\\ \vdots &\\ \text{\textbf{assert}}~ \langle \alpha_{m+n} \rangle &\quad \psi_n\\ \end{align*} } where $\alpha_1,\dots,\alpha_{m+n} \in \Gamma$ are identifiers for $\theta_1,\dots,\theta_m,\psi_1,\dots,\psi_n$, which are Boolean stream expressions with possibly temporal operators. For convenience, we define functions which return all $\theta$ and $\psi$ that are linked to a given $\alpha$ identifier:\\
$assume(\alpha) = \{ \theta_j ~|~ \alpha_j \in \Gamma,~ \alpha_j = \alpha\}$ and $assert(\alpha) = \{ \psi_j ~|~ \alpha_j \in \Gamma,~ \alpha_j = \alpha\}$. The set of assertions $\psi_1,\dots,\psi_n$ is \emph{correct} for all input streams iff whenever an assumption is satisfied, its corresponding assertion is satisfied as well.
The verification of assertions relies on the encoding of the \lola execution in Satisfiability Modulo Theory (\smt). We define the $smt$ function that encodes a stream expression next. It can be used to encode independent and dependent variables as well as expressions of assumptions and assertions.
\begin{definition}[\smt-Encoding of Stream Expressions]\label{def-encoding}\\ Let $\Phi$ be a \lola specification over independent stream variables $t_1,\dots,t_m$ and dependent stream variables $s_1,\dots,s_n$. Further, let the natural number $N+1$ be the length of the input streams, $c$ be an \smt constant symbol, and $\tau_1^0,\dots,\tau_1^N, \dots, $ $\tau_m^0, \dots, \tau_m^N,~ \sigma_1^0, $ $\dots, \sigma_1^N, \dots, \sigma_n^0, \dots, \sigma_n^N$ be \smt variables. Then, the function $smt$ recursively encodes a stream expression $e$ at position j with $0 \leq j \leq N$ in the following way: \begin{itemize} \item Base cases: \begin{itemize}
\item $smt(c)(j) = \mathtt{c}$
\item $smt(t_i)(j) = \tau_i^j$
\item $smt(s_i)(j) = \sigma_i^j$ \end{itemize} \item Recursive cases: \begin{itemize}
\item $smt(f(e_1, \dots, e_n))(j) = \mathtt{f}(smt(e_1)(j),\dots,~smt(e_n)(j))$
\item $smt(ite(e_b, e_1, e_2))(j) = \mathtt{ite}(smt(e_b)(j),~smt(e_1)(j),~smt(e_2)(j))$
\item $
smt(e[k,c])(j) =
\begin{cases}
smt(e)(j+k) & \text{if}~ 0 \leq j +k \leq N,\\
c & \text{otherwise}
\end{cases} $ \end{itemize} \end{itemize} where $\mathtt{ite}$ is an \smt encoding of \emph{if-then-else}; $\mathtt{f}$ is an interpreted function if $f$ is from a theory supported by the \smt solver and an uninterpreted function otherwise. \end{definition}
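Definition~\ref{def-encoding} can be prototyped directly. The sketch below uses our own tuple representation of stream expressions (an assumption of this illustration, not part of the framework) and emits SMT-LIB-like strings; note how the offset case either shifts the position or falls back to the default constant:

```python
def smt(expr, j, N):
    """Recursively encode a stream expression at position j (0 <= j <= N).

    Expressions are modeled as tuples (our own encoding):
      ("const", c), ("in", i), ("out", i),
      ("f", name, e1, ..., ek), ("ite", eb, e1, e2),
      ("offset", e, k, c)   -- e[k, c] in Lola syntax.
    """
    kind = expr[0]
    if kind == "const":
        return str(expr[1])
    if kind == "in":                      # tau_i^j
        return f"tau_{expr[1]}_{j}"
    if kind == "out":                     # sigma_i^j
        return f"sigma_{expr[1]}_{j}"
    if kind == "f":
        args = " ".join(smt(e, j, N) for e in expr[2:])
        return f"({expr[1]} {args})"
    if kind == "ite":
        eb, e1, e2 = expr[1:]
        return f"(ite {smt(eb, j, N)} {smt(e1, j, N)} {smt(e2, j, N)})"
    if kind == "offset":                  # e[k, c]
        e, k, c = expr[1:]
        return smt(e, j + k, N) if 0 <= j + k <= N else str(c)
    raise ValueError(kind)

# o1[-1, 0] + 1 at positions 0 and 1 of a length-2 execution (N = 1):
inc = ("f", "+", ("offset", ("out", 1), -1, 0), ("const", 1))
enc0 = smt(inc, 0, 1)   # past access out of bounds -> default 0
enc1 = smt(inc, 1, 1)   # in bounds -> sigma_1_0
```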
Next, Proposition~\ref{the-hoare} shows how the correctness of asserted stream properties can be proven for finite input streams. If the set of assertions is correct, asserted stream properties are guaranteed to be valid in each step of the monitor execution. In practice, such specifications are preferable. In the following, let $\Phi$ be a \lola specification with verification annotations. Further, we refer to the set of input streams and computed output streams as stream execution.
\begin{proposition}[Assertion Verification of a Finite Stream Execution]\label{the-hoare}~\\ The set of assertions is correct for a finite stream execution with length $N+1$ under given assumptions, if the following formula is valid:
$\bigwedge\limits_{i:~0 \le i \leq N} ~~\Bigl(~~\bigwedge\limits_{\alpha \in \Gamma}~~\Bigl(\\ \bigwedge\limits_{\theta~\in~\mathit{assume}(\alpha)} smt(\theta)(i) ~ \wedge \bigwedge\limits_{s_k \in \Phi} \sigma_k^i = smt(e_k)(i) \rightarrow \bigwedge\limits_{\psi~\in~\mathit{assert}(\alpha)} smt(\psi)(i)~~\Bigr)\Bigr)$
\end{proposition}
The formula in Proposition~\ref{the-hoare} unfolds the complete stream execution and informally expresses that an assertion must hold in each stream position whenever its corresponding assumption and implementation are satisfied.
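For small Boolean specifications, the validity check of Proposition~\ref{the-hoare} can be approximated by exhaustive enumeration instead of an \smt solver. The toy specification below is our own illustration (not from the case study): an output \texttt{cnt} counts consecutive \emph{false} inputs, the assumption forbids two consecutive \emph{false} values, and the assertion bounds the counter:

```python
from itertools import product

def check_finite(N):
    """Exhaustively check assumption -> assertion for all executions of
    length N+1 of a hypothetical toy spec:
        input  ev  : Bool
        assume <a1> ev[-1, false] or ev
        output cnt := if ev then 0 else cnt[-1, 0] + 1
        assert <a1> cnt <= 1
    """
    for ev in product([False, True], repeat=N + 1):
        # unfold the output stream over the whole execution
        cnt = []
        for j in range(N + 1):
            prev = cnt[j - 1] if j > 0 else 0          # cnt[-1, 0]
            cnt.append(0 if ev[j] else prev + 1)
        # assumption at every position (out-of-bounds access -> false)
        if all((ev[j - 1] if j > 0 else False) or ev[j]
               for j in range(N + 1)):
            assert all(c <= 1 for c in cnt), (ev, cnt)
    return True

verified = all(check_finite(N) for N in range(5))
```

Since the assumption forces every \emph{false} input to be preceded by a \emph{true} one, the counter never exceeds one; the exhaustive check confirms this for all short executions.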
To avoid the complete unfolding and allow arbitrary stream lengths, an inductive argument is given in Proposition~\ref{def-efficient} that defines proof obligations for an annotated \lola specification. Next, we present a template for the stream unfolding that helps to define the proof obligation at the \emph{Begin}ning (Definition~\ref{def_begin}), during \emph{Run} (Definition~\ref{def_run}), and at the \emph{End} (Definition~\ref{def_end}) of a stream execution.
\begin{definition}[Template Stream Unfolding]\label{template}~\\ We define the template formula $\phi_{t}$ that states proof obligations as:
\noindent$\bigwedge\limits_{\alpha \in \Gamma}\Biggl($ $\bigwedge\limits_{i:~\mathit{c\_asm}}\Bigl(~\bigwedge\limits_{\theta~\in~\mathit{assume}(\alpha)} smt(\theta)(i)\Bigr) ~ \wedge ~ $ $\bigwedge\limits_{i:~\mathit{c\_asserted}}\Bigl(~\bigwedge\limits_{\psi~\in~\mathit{assert}(\alpha)} smt(\psi)(i)\Bigr)$
\\ $\wedge \bigwedge\limits_{i:~\mathit{c\_streams}}\Bigl(~\bigwedge\limits_{0 < k \leq n} \sigma_k = smt(e_k)(i) \Bigr) ~ \rightarrow ~ $ $\bigwedge\limits_{i:~\mathit{c\_assert}}\Bigl(~\bigwedge\limits_{\psi~\in~\mathit{assert}(\alpha)} smt(\psi)(i)\Bigr)\Biggr)$
\noindent where $\mathit{c\_asm}$, $\mathit{c\_asserted}$, $\mathit{c\_streams}$, and $\mathit{c\_assert}$ are template parameters for the unfolding of assumptions, previously proven assertions, output streams, and assertions, respectively. \end{definition}
The template formula in Definition~\ref{template} uses template parameters for the stream unfolding. For instance, the parameter assignment $\mathit{c\_asm}:=0\leq i < 10$ adds assumptions at the first ten positions of the stream execution. Further, the parameter $\mathit{c\_asserted}$ allows us to incorporate the induction hypothesis.
In the following, we will use the \lola specification \begin{lstlisting}[numbers=none] assume<a1> reset[-1, f] $\vee$ reset[1, f] input reset : Bool output o1 := if reset then 0 else o1[-1, 0] + 1 output o2 := o1[-1, 0] + o1 + o1[1, 0] assert<a1> 0 $\le$ o2 and o2 $\le$ 3 \end{lstlisting} as a running example for the template stream unfolding. Here, the input $\mathit{reset}$ represents a reset command for the output stream $\mathit{o1}$, which counts how long no $\mathit{reset}$ occurred. Output $\mathit{o1}$ is used by output $\mathit{o2}$, which aggregates over the previous, the current, and the next outcome of $\mathit{o1}$. As an assertion, we state that $\mathit{o2}$ is always positive and never larger than three, given the assumption that in each execution step either the previous or the next $\mathit{reset}$ is $\mathit{true}$. The assumption ensures that at most two consecutive $\mathit{resets}$ are $\mathit{false}$. Given the $\mathit{reset}$ sequence of input values $\langle \mathit{true}; \mathit{false}; \mathit{false}\rangle$ that satisfies the assumption, the resulting $\mathit{o1}$ stream evaluates to $\langle 0; 1; 2 \rangle$. Here, at the second position of the sequence, $\mathit{o2}$ evaluates to three. To show that the assertion also holds at the first and the last position of the sequence, out-of-bounds values must be considered.
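To make the semantics of the running example concrete, the following Python sketch (illustrative only, not part of the presented tooling) evaluates both output streams over a finite trace, substituting the specified default value 0 for out-of-bounds offset accesses:

```python
def evaluate(reset):
    """Evaluate o1 and o2 of the running example over a finite trace.

    Out-of-bounds offset accesses yield the default value 0, as written
    in the specification."""
    n = len(reset)
    o1 = []
    for i in range(n):
        prev = o1[i - 1] if i > 0 else 0        # o1[-1, 0]
        o1.append(0 if reset[i] else prev + 1)
    o2 = []
    for i in range(n):
        prev = o1[i - 1] if i > 0 else 0        # o1[-1, 0]
        nxt = o1[i + 1] if i + 1 < n else 0     # o1[1, 0]
        o2.append(prev + o1[i] + nxt)
    return o1, o2

o1, o2 = evaluate([True, False, False])
print(o1, o2)  # [0, 1, 2] [1, 3, 3]
```

For the input sequence from the text, $\mathit{o1}$ evaluates to $\langle 0; 1; 2\rangle$ and $\mathit{o2}$ stays within the asserted bounds; the values at the border positions arise precisely from the out-of-bounds defaults.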
We show how the template $\phi_{t}$ can be used at the beginning of a stream execution. Here, default values due to past stream accesses beyond the beginning of a stream need to be captured by the obligations to guarantee that the assertions hold in these cases. The combination of past out-of-bounds and future out-of-bounds default values must also be covered by the obligations in case the stream is stopped early. These scenarios are depicted for the running example in Figure~\ref{fig:starttrace}. The figure shows four finite stream executions with different lengths. All stream positions are colored gray, while only some positions contain a single red dot. These features indicate the unfolding of stream variables and annotations using the template $\phi_{t}$. A gray-colored position means that the assumptions have been unfolded at that position, and a dotted position means that the assertion has been unfolded there. Further, arrows indicate temporal stream accesses: solid lines correspond to accesses by outputs and dashed lines correspond to accesses by annotations, \ie assumptions and assertions. For each stream execution, only the arrows for a single position are depicted -- the arrows for other positions have been omitted for the sake of clarity. For example, for $N=0$, the accesses of output $o2$ are both out-of-bounds, \ie the default value zero is used. For $N=3$, the accesses at the second position are shown, where only the past access of the assumption leads to an out-of-bounds access. The figure depicts all necessary stream executions, covering all combinations of past out-of-bounds accesses, \ie with and without additional future bound violations. The described unfoldings of Figure~\ref{fig:starttrace} are formalized as proof obligations in Definition~\ref{def_begin}.
\begin{definition}[Proof Obligations for Past Out-of-bounds Accesses]\label{def_begin}~\\
Let $w_p = \sup( \{0\} \cup \{\,\left|k\right| \mid e[k,c] \in \Phi,\ k < 0 \,\} )$ be the largest magnitude of a negative (past) offset and $w_f = \sup( \{0\} \cup \{\,k \mid e[k,c] \in \Phi,\ k > 0 \,\} )$ be the greatest positive (future) offset. The proof obligations $\phi_{\mathit{Begin}}$ for past out-of-bounds accesses are defined as the conjunction of template formulas:
$\bigwedge\limits_{N: ~ 0 \leq N < \max(1,~ 2 \cdot (w_p + w_f))} ~ \phi_{t}(\mathit{c\_asm}, ~\mathit{c\_asserted}, ~\mathit{c\_streams}, ~\mathit{c\_assert})$
\noindent with template parameters:\\
\begin{tabular}{ll}
$~\bullet~ \mathit{c\_asm}$ & $:= 0 \leq i \le N$,\\
$~\bullet~ \mathit{c\_asserted}$ & $:= \mathit{false}$,\\
$~\bullet~ \mathit{c\_streams}$ & $:= 0 \leq i \le N$,\\
$~\bullet~ \mathit{c\_assert}$ & $:= 0 \leq i < \max(1,~\min(N+1,~ 2 \cdot w_p)).$
\end{tabular} \end{definition}
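The widths $w_p$ and $w_f$ and the number of unfolded stream executions in $\phi_{\mathit{Begin}}$ can be computed mechanically from the offsets occurring in a specification. A small Python sketch (illustrative only), applied to the offsets of the running example:

```python
def widths(offsets):
    """Past width w_p and future width w_f as used by phi_Begin:
    suprema over 0 and the magnitudes of negative resp. positive offsets."""
    w_p = max([0] + [-k for k in offsets if k < 0])
    w_f = max([0] + [k for k in offsets if k > 0])
    return w_p, w_f

# Running example offsets: reset[-1, f], reset[1, f], o1[-1, 0], o1[1, 0]
w_p, w_f = widths([-1, 1, -1, 1])
print(w_p, w_f)                   # 1 1
print(max(1, 2 * (w_p + w_f)))    # 4 unfolded executions, i.e. N = 0..3
```

The four unfolded executions $N = 0,\dots,3$ correspond to the four finite stream executions depicted in Figure~\ref{fig:starttrace}.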
\begin{figure}
\caption{Four stream executions of different length $N+1$ with the respective template unfolding are depicted. The stream executions consider all cases with past out-of-bounds accesses. A gray-colored box indicates that an assumption has been unfolded at this position, while a red dotted box indicates that an assertion has been unfolded at this position. Solid and dashed arrows indicate accesses by streams and annotations, respectively.}
\label{fig:starttrace}
\end{figure}
Next, the case where no out-of-bounds access occurs is considered. Hence, the obligations capture the nominal case where no default value is used. Since the assertions involving past out-of-bounds accesses have already been proven, we can use these proven assertions as assumptions. Figure~\ref{fig:runtrace} depicts a stream execution with a single dotted position, \ie the position where the assertion must be proven. As can be seen, all accesses from this position are within bounds. Further, note that the accesses of the first and the last unfolded assumption, \ie the first and the last gray-colored position, are also within bounds. The described unfolding is formalized as proof obligations in Definition~\ref{def_run}.
\begin{definition}[Proof Obligations for No Out-of-bounds Accesses]\label{def_run}~\\ The proof obligations $\phi_{\mathit{Run}}$ without out-of-bounds accesses are defined as\\ $\phi_{t}(\mathit{c\_asm}, ~\mathit{c\_asserted}, ~\mathit{c\_streams}, ~\mathit{c\_assert})$ with template parameters:\\ \begin{tabular}{ll}
$~\bullet~ \mathit{c\_asm}$ & $:= w_p \leq i \leq N - w_f$,\\
$~\bullet~ \mathit{c\_asserted}$ & $:= 2 \cdot w_p \leq i \leq N - 2\cdot w_f \wedge i \neq 3\cdot w_p$,\\
$~\bullet~ \mathit{c\_streams}$ & $:= 2 \cdot w_p \leq i \leq N - 2\cdot w_f$,\\
$~\bullet~ \mathit{c\_assert}$ & $:= i = 3\cdot w_p$,
\end{tabular}
\noindent where $N = 3 \cdot (w_p + w_f)$. \end{definition}
Last, we consider the case where only future out-of-bounds accesses occur. Hence, the respective obligations need to incorporate default values of future out-of-bounds accesses. As before, we can use the previously proven assertions as assumptions. Figure~\ref{fig:endtrace} depicts a stream execution with two dotted positions, \ie positions where the assertion must be proven. The position where arrows are given represents the case where only the assumption results in a future out-of-bounds access. The last position of the stream execution represents the case in which both the assumption and the stream result in future out-of-bounds accesses. The presented unfolding is formalized as proof obligations in Definition~\ref{def_end}.
\begin{definition}[Proof Obligations for Future Out-of-bounds Accesses]\label{def_end}~\\ The proof obligations $\phi_{\mathit{End}}$ for future out-of-bounds accesses are defined as the template formula $\phi_{t}(\mathit{c\_asm}, ~\mathit{c\_asserted}, ~\mathit{c\_streams}, ~\mathit{c\_assert})$ with template parameters:\\
\begin{tabular}{ll}
$~\bullet~ \mathit{c\_asm}$ & $:= w_p \leq i \leq N$,\\
$~\bullet~ \mathit{c\_asserted}$ & $:= 2 \cdot w_p \leq i < 3\cdot w_p$,\\
$~\bullet~ \mathit{c\_streams}$ & $:= 2 \cdot w_p \leq i \leq N$,\\
$~\bullet~ \mathit{c\_assert}$ & $:= 3\cdot w_p \leq i \leq N$
\end{tabular}
\noindent where $N = 3 \cdot w_p + w_f$. \end{definition}
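For the running example, where $w_p = w_f = 1$, the parameter ranges of $\phi_{\mathit{Run}}$ and $\phi_{\mathit{End}}$ instantiate to small concrete windows. The following Python sketch (illustrative only) spells them out:

```python
w_p, w_f = 1, 1  # widths of the running example

# phi_Run: N = 3 * (w_p + w_f) = 6
N_run = 3 * (w_p + w_f)
run_asm = list(range(w_p, N_run - w_f + 1))                # assumptions at 1..5
run_asserted = [i for i in range(2 * w_p, N_run - 2 * w_f + 1)
                if i != 3 * w_p]                           # hypothesis at 2 and 4
run_assert = [3 * w_p]                                     # proof goal at 3

# phi_End: N = 3 * w_p + w_f = 4
N_end = 3 * w_p + w_f
end_asserted = list(range(2 * w_p, 3 * w_p))               # hypothesis at 2
end_assert = list(range(3 * w_p, N_end + 1))               # proof goals at 3 and 4

print(run_asm, run_asserted, run_assert)  # [1, 2, 3, 4, 5] [2, 4] [3]
print(end_asserted, end_assert)           # [2] [3, 4]
```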
So far, we have defined proof obligations for certain positions in the stream execution with and without out-of-bounds accesses. Together, the proof obligations constitute an inductive argument for the correctness of the assertions, see Proposition~\ref{def-efficient}. Here, the base case is given by Definition~\ref{def_begin} and the induction steps are given by Definitions~\ref{def_run} and~\ref{def_end}. The induction steps use the induction hypothesis, \ie valid assertions, due to the template parameter $\mathit{c\_asserted}$.
\begin{proposition}[Assertion Verification by \lola Unfolding]\label{def-efficient}~\\ The set of assertions is correct if the formula $\phi_{\mathit{Begin}} ~\wedge~ \phi_{\mathit{Run}} ~\wedge~ \phi_{\mathit{End}}$ is valid. \end{proposition}
Proposition~\ref{def-efficient} proves the soundness of the verification approach. Soundness refers to the ability of an analyzer to prove the absence of errors --- if a \lola specification is accepted, it is guaranteed that the assertions are not violated. The converse does not hold, \ie the presented verification approach is not complete. Completeness refers to the ability of an analyzer to prove the presence of errors --- if a \lola specification is rejected, the counter-example given should be a valid stream execution that results in an assertion violation. The following \lola specification is rejected even though no assertion is violated: \begin{lstlisting}[escapechar=@] input a: Int32 assume <a1> a @$\le$@ 10 output sum := if sum[-1, 0] @$\le$@ 10 then 0 else sum[-1, 0] + a assert<a1> sum @$\le$@ 100 \end{lstlisting} Here, since the $\mathtt{if}$-condition in Line 3 evaluates to $\mathit{true}$ at the beginning of the stream execution, $\mathtt{sum}$ is a constant stream with value zero. Hence, the assertion in Line 4 is never violated. The verification approach nevertheless rejects this specification. The reason is that $\mathtt{sum} \le 100$ is added as an \emph{asserted} condition in $\phi_{\mathit{Run}}$. Therefore, the \smt solver can assign a value between $91$ and $100$ to the earliest $\mathtt{sum}$ variable of the unfolding, resulting in an assertion violation of the next $\mathtt{sum}$ variable.
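The spurious counter-example can be reproduced without an \smt solver. The following Python sketch (a brute-force stand-in for the solver query in $\phi_{\mathit{Run}}$, illustrative only) searches for values that satisfy the assumption, the asserted induction hypothesis, and the stream equation, yet violate the assertion at the next position:

```python
def spurious_counterexamples():
    """Enumerate (sum_prev, a, sum_cur) with sum_prev <= 100 (asserted
    hypothesis), a <= 10 (assumption), sum_cur computed by the stream
    equation, and sum_cur > 100 (negated assertion)."""
    found = []
    for sum_prev in range(0, 101):
        for a in range(-20, 11):
            sum_cur = 0 if sum_prev <= 10 else sum_prev + a
            if sum_cur > 100:
                found.append((sum_prev, a, sum_cur))
    return found

cex = spurious_counterexamples()
print(min(c[0] for c in cex), max(c[0] for c in cex))  # 91 100
```

Every reported tuple has $91 \le \mathtt{sum\_prev} \le 100$, matching the range stated above; none of these values is reachable in an actual execution, where $\mathtt{sum}$ is constantly zero.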
\begin{figure}
\caption{A stream execution of length $N+1$ with the corresponding template unfolding is depicted. The stream execution considers the case where no out-of-bound access occurs. Gray-colored and red dotted positions represent unfolded assumptions and assertions, respectively. Solid and dashed arrows indicate accesses by streams and annotations, respectively.}
\label{fig:runtrace}
\end{figure}
\begin{figure}
\caption{A stream execution of length $N+1$ with the corresponding template unfolding is depicted. The stream execution covers all cases where future out-of-bounds accesses occur. Gray-colored and red dotted positions represent unfolded assumptions and assertions, respectively. Solid and dashed arrows indicate accesses by streams and annotations, respectively.}
\label{fig:endtrace}
\end{figure}
\section{Application Experience in Avionics}\label{sec:experimental}
In this section, we present details about the tool implementation and our experiences with practical avionic specifications.
\paragraph{\textbf{Tool Implementation and Usage}}~\\ The tool is based on the open source \lola framework\footnote{https://rtlola.org/} written in Rust. Specifically, it uses the \lola frontend to parse a given specification into an intermediate representation. Based on this representation, the \smt formulas are created and evaluated with the Rust z3 crate\footnote{https://docs.rs/z3/0.9.0/z3/}. At the current phase of the crate's development, a combined solver is provided that internally uses either a non-incremental or an incremental solver. There is no information on the implemented tactics available, but all our requests could be solved within seconds. For functions that are not natively supported by the Rust Z3 solver, the output is arbitrarily chosen by the solver with respect to the range of the function. The tool expects a \lola specification augmented by \emph{assumptions} and \emph{assertions}. The verification is done automatically and produces a counter-example stream execution, if any exists. The counter-example can then be used by the user to debug their specification. Two different kinds of users are targeted. First, users who write the entire augmented specification. Such a user could be a systems engineer who is developing a safety monitor and wants to ensure that it satisfies critical properties. Second, users who augment an existing specification. Here, one reason could be that an existing monitor shall be composed with other critical components and certain behavioral properties are expected. Also, similar to software testing, the tasks of writing a specification and writing its assumptions and assertions could be split between two users to ensure independence.
\paragraph{\textbf{Practical Results}}~\\ To gain practical tool experience, previously written specifications based on interviews with engineers of the German Aerospace Center~\cite{schirmer2016} were extended by assumptions and assertions. The previous specifications had been tested using log-files and simulations -- the authors considered them correct.
We report several specification errors in Table \ref{tab:specification} that were detected by the presented verification extension. In fact, the detected errors would have resulted in undetected failures. After the errors in the previous specifications were fixed, all assertions were proven correct. Note that the errors could also have been found through manual reviews. However, such reviews are tedious and error-prone, especially when temporal behaviors are involved. The detected errors in Table \ref{tab:specification} can be grouped into three classes: \emph{Classical Bugs}, \emph{Operator Errors}, and \emph{Wrong Interpretations}. Classical bugs are errors that occur when implementing an algorithm. Operator errors are \lola specific errors, \eg involving temporal accesses. Last, wrong interpretations refer to gaps between the specification and the user's design intent, \eg violated assertions due to incomplete specifications. Next, we give one representative example for each group. In each case, we reduce the specification to a representative fragment.
\begin{table}[t] \centering
\begin{tabular}{|l|c|c|c|l|} \hline \textbf{Specification} & \textbf{\#o} & \textbf{\#a} & \textbf{\#g} & \textbf{Detected errors} \\ \hline $\mathit{gps\_vel\_output}$ & 14 & 6 & 6 & \quad -- \\ \hline $\mathit{gps\_pos\_output}$ & 19 & 3 & 10 & \quad -- \\ \hline $\mathit{imu\_output}$ & 18 & 6 & 6 & Wrong default value \\
& & & & Division by zero \\ \hline $\mathit{nav\_output}$ & 25 & 3 & 5 & Missing abs() \\ \hline $\mathit{tagging}$ & 6 & 2 & 2 & \quad -- \\ \hline $\mathit{ctrl\_output}$ & 25 & 7 & 8 & Wrong threshold comparisons \\ \hline $\mathit{mm\_output\_1}$ & 4 & 1 & 2 & \quad -- \\ \hline $\mathit{mm\_output\_2}$ & 17 & 6 & 9 & Missing if condition\\
& & & & Wrong default value \\ \hline $\mathit{contingency\_output}$ & 4 & 8 & 1 & Observation: both contingencies could \\
& & & & be true in case of voting, \ie both at 50\% \\ \hline $health\_output$ & 1 & 5 & 1 & \quad -- \\ \hline \end{tabular}
\caption{Detected errors by the verification extension, where \#o, \#a, and \#g represent the number of outputs, assumptions, and assertions given in the specification, respectively.} \label{tab:specification} \end{table}
\begin{example}[Classical Bug]~\\ The \lola specification in Listing \ref{lst:expr_ctrl_output} monitors the fuel level. A monitor shall notify the operator when one of three fuel levels is reached: half (Line 8), warning (Line 9), and danger (Line 10). The fuel level is computed as a percentage in Line 7. It uses the fuel level at the beginning of the flight (Line 6) as a reference for its computation. Given the documentation of the fuel sensor, it is known that \texttt{fuel} values are within $\mathbb{R}^{+}$ and decreasing. This is formalized in Line 4 as an assumption. As an invariant, we asserted that the starting fuel is greater than or equal to \texttt{fuel} (Line 15). Further, in Lines 16 to 18, we stated that once a level is reached it should remain at this level. During our experiment, the assertion led to a counter-example that pointed to the previously used and erroneous fuel level computation: \begin{lstlisting}[escapechar=@, numbers=none] output fuel_level := (start_fuel - fuel) / start_fuel \end{lstlisting} In short, the output computed the consumed fuel and not the remaining fuel. The computation could be easily fixed by converting consumed fuel into remaining fuel, see Line 7. Therefore, Listing \ref{lst:expr_ctrl_output} satisfies its assertion. Note that offset accesses were used to assert the temporal behavior of the fuel level output stream. Further, \texttt{trigger\_once} is an abbreviation stating that only the first rising edge is reported to the user.
\end{example}
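The fuel-level bug can be distilled into a few lines of Python (illustrative only; the fixed formula is our reading of ``converting consumed fuel into remaining fuel''). The buggy output reports the consumed fraction where the remaining fraction is expected:

```python
def fuel_level_buggy(start_fuel, fuel):
    return (start_fuel - fuel) / start_fuel   # fraction of fuel consumed

def fuel_level_fixed(start_fuel, fuel):
    return fuel / start_fuel                  # fraction of fuel remaining

# With 75 of 100 units left, the buggy output reads 0.25 -- which the
# level triggers would interpret as only 25% of the fuel remaining.
print(fuel_level_buggy(100.0, 75.0), fuel_level_fixed(100.0, 75.0))  # 0.25 0.75
```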
\begin{example}[Operator Error]~\\ An important monitoring property is to detect frozen values, as these indicate a deteriorated sensor. Such a specification is depicted in Listing \ref{lst:expr_imu_output}. Here, as an input, the acceleration in $x$-direction is given. The frozen value check is computed from Line 6 to Line 10. It compares previous values using \lola's offset operator. To check this computation, we added a sanity check asserting that no frozen value shall be detected (Line 13) when small changes in the input are present (Line 4). In the previous version, the frozen values were computed using the abbreviated offset operator: \begin{lstlisting}[escapechar=@, numbers=none] output frozen_ax := ax[-5..0, 0.0, =] \end{lstlisting} This resulted in a counter-example that pointed to wrong default values. Although the abbreviated version is easier to read and reduces the size of the specification, it is unfortunately not suitable for this kind of property. The tool detected the unlikely situation that the first value of \texttt{ax} is $0.0$, which would have resulted in evaluating \texttt{frozen\_ax} to $\mathit{true}$. Although unlikely, this must be avoided, as contingencies activated in such situations depend on correct results; a spurious activation could harm people on the ground. By unfolding the operator and adding a different default value to one of the past accesses, the error was resolved (Line 6). Listing \ref{lst:expr_imu_output} shows the fixed version, which satisfies its assertion.
\end{example}
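The false positive of the abbreviated offset operator can be replayed in a few lines of Python. The sketch below is illustrative only; the window semantics and the fix are our reading of the description above, comparing a five-step window with a shared default against a variant where the oldest access uses a distinct default:

```python
def frozen_abbrev(ax, i, width=5, default=0.0):
    """ax[-width..0, default, =]: all window values equal, with the same
    default substituted for every out-of-bounds past access."""
    window = [ax[j] if j >= 0 else default for j in range(i - width, i + 1)]
    return all(v == window[0] for v in window)

def frozen_fixed(ax, i, width=5):
    """Unfolded variant: the oldest access uses a distinct default (here
    1.0), so a run of defaults cannot mimic a frozen genuine reading."""
    window = [ax[j] if j >= 0 else (1.0 if j == i - width else 0.0)
              for j in range(i - width, i + 1)]
    return all(v == window[0] for v in window)

print(frozen_abbrev([0.0, 0.1, 0.2], 0))  # True  -- the reported false positive
print(frozen_fixed([0.0, 0.1, 0.2], 0))   # False -- a first reading of 0.0 is fine
print(frozen_fixed([0.0] * 6, 5))         # True  -- genuinely frozen values
```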
\begin{example}[Wrong Interpretation]~\\ In Listing \ref{lst:expr_contingency_output}, two visual sensor readings are received (Lines 2-3). Both readings report on the same observations, where \texttt{avgDist} represents the average distance to the measured obstacle, \texttt{actual} is the number of measurements, and \texttt{static} is the number of unchanged measurements. A simple rating function is introduced (Lines 5-8) that estimates the corresponding rating -- the higher the better. Using these ratings, the trust in each of the sensors is computed probabilistically (Lines 9-10). When considering the integration of such a monitor as an ASTM switch condition that decides which sensor value should be forwarded, the specification should be revised. This is the case because the assertion in Line 14 produces a counter-example which indicates that both trust triggers (Lines 11-12) can be activated at the same time. A common solution for this problem is to introduce a priority between the sensors. { \lstset {
numbers=right,
stepnumber=1, } \begin{lstlisting}[escapechar=@, caption={The \lola contingency\_output specification that uses a heuristic to decide which sensor is more trustworthy.}, label={lst:expr_contingency_output}] // Inputs input avgDist_laser, actual_laser, static_laser: Float64 input avgDist_optical, actual_optical, static_optical: Float64 // Outputs output rating_laser :=
0.2 * static_laser + 0.4 * actual_laser + 0.4 * avgDist_laser output rating_optical :=
0.2 * static_optical + 0.4 * actual_optical + 0.4 * avgDist_optical output trust_laser := rating_laser / ( rating_laser + rating_optical) output trust_optical := 1.0 - trust_laser trigger trust_laser >= 0.5 trigger trust_optical >= 0.5 // Assertions assert <a1> trust_laser != trust_optical \end{lstlisting} } \end{example}
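The assertion violation can be seen directly from the trust computation. A minimal Python sketch (illustrative only):

```python
def trusts(rating_laser, rating_optical):
    """Trust values as in the specification: a probabilistic split of 1.0."""
    trust_laser = rating_laser / (rating_laser + rating_optical)
    return trust_laser, 1.0 - trust_laser

# Equal ratings yield exactly 0.5 for both sensors, so both triggers
# (trust >= 0.5) fire at the same time -- the reported counter-example.
print(trusts(3.0, 3.0))  # (0.5, 0.5)
```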
The examples show how the presented \lola verification extension can be used to find errors in specifications. We also noticed that the annotations can serve as documentation. System assumptions are often implicitly known during development and are finally documented in natural language in separate files. Having these assumptions explicitly stated within the monitor specification potentially reduces future mistakes when reusing the specification, \eg when composing with other monitor specifications. Listing~\ref{lst:simpledoc} depicts such an example specification. Here, the monitor interfaces are clearly defined by the domain of input $a$ (Line 5) and output $o$ (Line 13). Also, $\mathit{reset}$ is assumed to be $\mathit{true}$ at least once per second (Line 5). Further, no deeper understanding of the internal computations (Lines 7-10) is required in order to safely compose this specification with others.
{ \lstset {
numbers=right,
stepnumber=1, } \begin{lstlisting}[escapechar=@, caption={\lola specification annotations describe interface properties.}, label={lst:simpledoc}] // Inputs with frequency 5Hz input a: Float64 input reset: Bool // Assumptions assume <a1> 0.0 $\le$ a $\le$ 1.0 and reset[-4..0, false, $\vee$] // Outputs output o_1 := ... ... output o_n := ... output o := o_1 + ... + o_n trigger o $\ge$ 0.5 "Warning: Output o exceeds threshold!" // Assertions assert <a1> 0.0 $\le$ o $\le$ 1.0 \end{lstlisting} }
\section{Conclusion}\label{sec:conclusion} As both the relevance and the complexity of cyber-physical systems continue to grow, runtime monitoring is an essential ingredient of safety-critical systems.
When monitors are derived from specifications it is crucial that the specifications are correct. In this paper, we have presented a verification approach for the stream-based monitoring language \lola. With this approach, the developer can formally prove guarantees on the streams computed by the monitor, and hence ensure that the monitor does not cause dangerous situations. The verification extension is motivated by upcoming aviation regulations and standards as well as by practical feedback of engineers.
The extension has been applied to previously written \lola specifications that were obtained based on interviews with aviation experts. In this process, we discovered and fixed several serious specification errors.
In the future, we plan to develop automatic invariant generation for \lola specifications. Another interesting direction for future work is to exploit the results of the analysis for the optimization of the specification and the resulting monitoring code. Finally, we plan to extend the verification approach to \mbox{\textsc{RTLola}}, the real-time extension of \lola.
\appendix \section{Lola Specifications -- Experience Report}\label{appendix:specs}
\subsection{$Specification: gps\_vel\_output$}\label{appendix:gps_vel_output} \input{specifications/experience/gps_vel_output.tex}
\subsection{$Specification: gps\_pos\_output$}\label{appendix:gps_pos_output} \input{specifications/experience/gps_pos_output.tex}
\subsection{$Specification: imu\_output$}\label{appendix:imu_output} \input{specifications/experience/imu_output.tex}
\subsection{$Specification: nav\_output$}\label{appendix:nav_output} \input{specifications/experience/nav_output.tex}
\subsection{$Specification: tagging$}\label{appendix:tagging} \input{specifications/experience/tagging.tex}
\subsection{$Specification: ctrl\_output$}\label{appendix:ctrl_output} \input{specifications/experience/ctrl_output.tex}
\subsection{$Specification: mm\_output\_1$}\label{appendix:mm_output_1} \input{specifications/experience/mm_output_1.tex}
\subsection{$Specification: mm\_output\_2$}\label{appendix:mm_output_2} \input{specifications/experience/mm_output_2.tex}
\subsection{$Specification: contingency\_output$}\label{appendix:contingency_output} \input{specifications/experience/contingency_output.tex}
\subsection{$Specification: health\_output$}\label{appendix:health_output} \input{specifications/experience/health_output.tex}
\end{document} |
\begin{document}
\title{Turing-Taylor expansions for arithmetic theories} \author{Joost J. Joosten}
\maketitle
\begin{abstract}
Turing progressions have often been used to measure the proof-theoretic strength of mathematical theories: iterate adding consistency of some weak base theory until you ``hit'' the target theory. Turing progressions based on $n$-provability give rise to a $\Pi_{n+1}$ proof-theoretic ordinal $|U|_{\Pi^0_{n+1}}$. As such, to each theory $U$ we can assign the sequence of corresponding $\Pi_{n+1}$ ordinals $\langle |U|_n\rangle_{n>0}$. We call this sequence a \emph{Turing-Taylor expansion} or \emph{spectrum} of a theory.
In this paper, we relate Turing-Taylor expansions of sub-theories of Peano Arithmetic to Ignatiev's universal model for the closed fragment of the polymodal provability logic ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$. In particular, we observe that each point in the Ignatiev model can be seen as the Turing-Taylor expansion of a formal mathematical theory.
Moreover, each sub-theory of Peano Arithmetic that allows for a Turing-Taylor expansion will define a unique point in Ignatiev's model. \end{abstract}
\section{Introduction} In his dissertation, Alan Turing considered progressions that are based on transfinitely adding consistency statements (\cite{Turing:1939:TuringProgressions}). If we disregard for the moment subtle coding and representation issues, these Turing progressions starting with some base theory $T$ were defined by \[ \begin{array}{llll} T^0 &:=& T; \\ T^{\alpha +1} & :=& T^\alpha \cup \{ \, {\tt Con}({T^\alpha}) \, \}; & \\ T^\lambda & := & \bigcup_{\alpha < \lambda} T^\alpha & \mbox{for limit $\lambda$.} \end{array} \] Here, ${\tt Con}({T^\alpha})$ denotes some natural formalization of the statement that the theory ${T^\alpha}$ cannot derive, say, $0=1$. If one starts out with a sound base theory $T$, this gives rise to a progression of increasing proof-theoretic strength. Since the consistency statements are of logical complexity $\Pi^0_1$, Turing progressions can be used to define a $\Pi^0_1$ ordinal of a theory that contains (interprets) arithmetic; one starts out with a relatively weak theory $T$ and defines the $\Pi^0_1$ ordinal of some target theory $U$ by \[
|U|_{\Pi^0_1} \ := \ \sup \{ \alpha \mid T^\alpha \subseteq U \}. \] Using stronger notions of provability this can be generalized. We shall use $[n]_T$ to denote a formalization of ``provable in $T$ together with all true $\Pi^0_n$ sentences" and $\langle n \rangle_T$ will denote the dual consistency notion $\neg [n] \neg$. Generalized Turing progressions are readily defined:
\[ \begin{array}{llll} T^0_n &:=& T; \\ T^{\alpha +1}_n & :=& T^\alpha_n \cup \{ \langle n \rangle_{T^\alpha_n} \top\}; & \\ T_n^\lambda & := & \bigcup_{\alpha < \lambda} T_n^\alpha & \mbox{for limit $\lambda$.} \end{array} \] Here, $\top$ stands for some fixed provable sentence, for example $1=1$, so that $\langle n \rangle_{T^\alpha_n} \top$ simply says that the theory $T^\alpha_n$ is consistent with all true $\Pi_n$ formulas. We can now define the $\Pi^0_{n+1}$ proof-theoretic ordinal of a theory $U$ w.r.t.\ some base theory $T$: \[
|U|_{\Pi^0_{n+1}} \ := \ \sup \{ \alpha \mid T_n^\alpha \subseteq U \}. \]
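For illustration, and again disregarding coding issues, the first steps of the generalized progression unfold as follows:

```latex
\[
T^0_n = T, \qquad
T^1_n = T \cup \{ \langle n \rangle_{T} \top \}, \qquad
T^2_n = T^1_n \cup \{ \langle n \rangle_{T^1_n} \top \}, \qquad
\ldots, \qquad
T^\omega_n = \bigcup_{k < \omega} T^k_n .
\]
```

Thus the ordinal defined above measures how far this process can be iterated while remaining inside the target theory $U$.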
Using Primitive Recursive Arithmetic as base theory, U. Schmerl proved in \cite{Schmerl:1978:FineStructure} that $|\ensuremath{{\mathrm{PA}}}\xspace|_{\Pi^0_n} = \varepsilon_0$ for all $n\in \omega$ and Beklemishev showed (\cite{Beklemishev:2003:ProofTheoreticAnalysisByIteratedReflection, Beklemishev:2004:ProvabilityAlgebrasAndOrdinals, Beklemishev:2005:Survey}) how provability logics can naturally be employed to perform and simplify the computations to obtain these ordinals.
In this paper we shall see how various theories can be written as the finite union of Turing progressions, in a way reminiscent of how real-analytic functions can be written as a countable sum of monomials in their Taylor expansion. Hence, we shall speak of \emph{Turing-Taylor} expansions of arithmetical theories. Whereas the monomials in the Taylor expansion of such a function are in a sense orthogonal, the monomials in our Turing-Taylor expansions are not. Therefore, we will sometimes also call Turing-Taylor expansions \emph{ordinal spectra} or simply \emph{spectra} of theories.
\section{Arithmetical preliminaries} We need to formalize various arguments that use cut-elimination. To this end, we assume that the base theory proves $\sf supexp$, i.e.~the totality of the super-exponential function $x\mapsto 2^x_x$, where $2^x_0 := x$ and $2^x_{y+1}:= 2^{2^x_y}$. However, we also need our base theories to be of low logical complexity, that is, the axioms should be of logical complexity at most $\Pi^0_1$.
To this end, we shall assume that any theory $T$ will be in a language that contains a function symbol for the super-exponentiation and that the recursive defining equations for this super-exponentiation are amongst the axioms of $T$.
After having fixed our language, we define the arithmetical hierarchy syntactically as usual: $\Delta_0$ formulas are those formulas that only employ bounded quantification (i.e., quantification of the form $\forall \, x{<}t$ where $t$ is some term not containing $x$); If $\phi \in \Pi_n$ ($\Sigma_n$ resp.), then $\exists \, \vec x \ \phi \in \Sigma_{n+1}$ ($\forall \vec x \phi \in \Pi_{n+1}$ resp.).
Since $T$ has a constant for super-exponentiation, $T$ will be able to \emph{prove} the totality of super-exponentiation in a trivial way using induction for $\Delta_0$ formulas. It is folklore that $\Delta_0$ induction can be axiomatized in a $\Pi_1$ fashion:
\begin{lemma} Over Robinson's arithmetic \ensuremath{{\mathrm{Q}}}\xspace the following two schemes are equivalent \begin{enumerate} \item\label{item:Pi2FormulationInduction} $\forall x\ (\forall\, y{<}x \, \phi(y) \to \phi(x)) \to \forall x \ \phi(x)$ for $\phi \in \Delta_0$;
\item\label{item:Pi1FormulationInduction} $\forall x\ \Big(\forall\, z{\leq} x\ \big[\forall\, y{<}z\ \phi (y) \to \phi(z)\big]\to \phi (x)\Big) $ for $\phi \in \Delta_0$. \end{enumerate} \end{lemma}
\begin{proof} The only non-trivial direction is $\eqref{item:Pi2FormulationInduction} \Rightarrow \eqref{item:Pi1FormulationInduction}$ which follows by applying \eqref{item:Pi2FormulationInduction} to $\phi'(x,u) \ :=\ x\leq u \to \phi(x)$. \end{proof}
In this paper we shall make heavy use of formalized provability and the corresponding provability logics. As such, for c.e.\ theories $T$ we fix natural formalizations $[n]_T$ of ``provable in $T$ together with all true $\Pi_n$ sentences'' of complexity $\Sigma_{n+1}$ and the dual consistency notion $\langle n\rangle_T$ of complexity $\Pi_{n+1}$. When the context allows, we shall drop mention of the base theory $T$; moreover, instead of writing $[0]$ ($\langle 0 \rangle$) we often write $\Box$ ($\Diamond$).
We shall typically refrain from distinguishing a formula $\phi$ from its G\"odel number, or even from a natural syntactical term denoting its G\"odel number. Also, we use the standard dot notation $\Box \, \phi(\dot x)$ to denote a formula with free variable $x$ such that for each $x$ the formula $\Box \, \phi (\dot x)$ is provably equivalent to $\Box\, n$, where $n$ is the G\"odel number of $\phi(t)$ for $t$ some term (often called a numeral) denoting $x$. Note that for non-standard $x$, the corresponding term denoting $x$ will also be non-standard.
We shall assume that each c.e.~theory $T$ that we consider comes with a $\Delta_0$ formula that defines the set of G\"odel numbers of axioms of $T$ on the standard model. A main result about formalized provability is formulated in what is nowadays called L\"ob's rule (\cite{Lob:1955:SolutionProblemHenkin}):
\begin{proposition} Let $T$ be a theory extending \ensuremath{{\rm{EA}}}\xspace. If $T\vdash \Box \phi \to \phi$, then $T\vdash \phi$. \end{proposition}
The natural way to prove statements about Turing progressions is by transfinite induction. Weaker theories, however, cannot prove transfinite induction. Schmerl (\cite{Schmerl:1978:FineStructure}) introduced a way to circumvent transfinite induction employing so-called \emph{reflexive transfinite induction}.
\begin{lemma}[Reflexive transfinite induction] Let $T$ be some theory extending say, \ensuremath{{\rm{EA}}}\xspace, so that \[ T \vdash \forall \alpha \ \Big(\Box_T \ \forall \, \beta{<}\dot \alpha \ \phi(\beta) \ \to \ \phi(\alpha)\Big). \] Then it holds that $T\vdash \forall \alpha \ \phi (\alpha)$. \end{lemma}
\begin{proof} Clearly, if $T \vdash \forall \alpha \Big(\Box_T \ \forall \, \beta{<}\dot \alpha \ \phi(\beta) \ \to \ \phi(\alpha)\Big)$, then also \[ T \vdash \Box_T \ \forall \alpha \ \phi(\alpha) \ \to \ \forall \alpha \ \phi(\alpha), \] and the result follows from L\"ob's rule. \end{proof}
For theories $U$ and $V$, we shall write $U\equiv_n V$ for the statement that $U$ and $V$ prove the same $\Pi_{n+1}$ formulas.
\section{Modal preliminaries}
We shall see that the polymodal provability logic ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ is particularly well-suited to speak about Turing progressions and finite unions thereof.
\subsection{Provability logics and worms} We first define a polymodal version of provability logic as introduced by Japaridze in \cite{Japaridze:1988}.
\begin{definition} The propositional polymodal provability logic ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ has for each $n<\omega$ a modality $[n]$ with dual modality $\langle n\rangle$ being short for $\neg [n] \neg$. The language contains the constants $\top$ and $\bot$ for logical truth and falsity respectively.
The rules of ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ are Modus Ponens and Necessitation for each $[n]$ modality: $\frac{\phi}{[n]\phi}$. The axioms are \begin{enumerate} \item All propositional tautologies in the language of ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$;
\item $[n](\phi \to \psi) \to ([n]\phi \to [n]\psi)$ for each $n<\omega$ and ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ formulas $\phi$ and $\psi$;
\item $[n]([n]\phi \to \phi) \to [n]\phi$ for each $n<\omega$ and ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ formula $\phi$;
\item $[n]\phi \to [m]\phi$ for each $n<m<\omega$ and each ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ formula $\phi$;
\item $\langle n\rangle\phi \to [m]\langle n\rangle \phi$ for each $n<m<\omega$ and each ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ formula $\phi$;
\end{enumerate} \end{definition}
\noindent It is well-known that $[n]\phi \to [n][n]\phi$ is derivable in ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ and we shall use that without specific mention. The logic ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ is sound and complete for a wide range of theories $T$ when interpreting the modal operator $[n]$ as the formalized provability predicate $[n]_T$ (\cite{Japaridze:1988, Ignatiev:1993:StrongProvabilityPredicates}).
A standing assumption throughout this paper is that all theories we consider yield soundness of ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$. Moreover, we shall assume that any theory $T$ contains $\ensuremath{{\rm{EA}}}\xspace^+$ and has a set of axioms whose set of G\"odel numbers is definable on the standard model by a $\Delta_0$ formula.
The closed fragment ${\ensuremath{\mathsf{GLP}}}\xspace_\omega^0$ of ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ consists of all those ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ theorems that do not contain propositional variables. We define \emph{worms} to be the collection of iterated consistency statements within ${\ensuremath{\mathsf{GLP}}}\xspace_\omega^0$ and denote them by $\Worms$:
\begin{definition} For each $n<\omega$, the empty worm $\top$ is in $\Worms_n$; we inductively define that if $A\in \Worms_n$ and $\omega> m\geq n$, then $\langle m\rangle A \in \Worms_n$. The set $\Worms$ of all ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ worms is just $\Worms_0$. \end{definition}
\noindent Often we shall simply identify a worm with its string of subsequent modality indices, denoting the empty string by $\top$ for convenience. We now define a convenient decomposition of worms that will allow for inductive proofs.
\begin{definition} For a ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ worm $A$, its \emph{$n$-head}, written $h_n(A)$, is the left-most part of $A$ that consists only of modalities which are at least $n$. The remaining part of $A$ is called the \emph{$n$-remainder} and is denoted by $r_n(A)$.
More formally: $h_n(\top) = \top$; and $h_n(mA) = mh_n(A)$ in case $m\geq n$ and $\top$ otherwise. Likewise: $r_n(\top)= \top$ and $r_n(mA) = r_n(A)$ in case $m\geq n$ and $mA$ otherwise. \end{definition}
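\noindent For example, unfolding the definition symbol by symbol for the worm $A = 2102$ gives \[ h_1(2102) = 2\, h_1(102) = 21\, h_1(02) = 21 \ \ \mbox{ and } \ \ r_1(2102) = r_1(102) = r_1(02) = 02, \] so that, as a string, $A$ is just the concatenation $h_1(A)\, r_1(A)$ of its $1$-head and $1$-remainder.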
\noindent The following lemma whose proof we leave as an exercise turns out to be very useful.
\begin{lemma} For each ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$-worm $A$ and for each $n<\omega$, we have \[ {\ensuremath{\mathsf{GLP}}}\xspace_\omega\vdash A \leftrightarrow h_n(A) \ \wedge \ r_n(A). \] \end{lemma}
\subsection{Lost in translation}
It is well-known (\cite{Beklemishev:2005:Survey, BeklemishevFernandezJoosten:2014:LinearlyOrderedGLP}) that worms constitute an alternative ordinal notation system if we order them by \[ A <_n B \ :\Leftrightarrow \ {\ensuremath{\mathsf{GLP}}}\xspace_\omega \vdash B \to \langle n\rangle A. \] \begin{proposition} $\langle \varepsilon_0, < \rangle \cong \langle \Worms_n/\equiv, <_n\rangle$. \end{proposition} Here, $\Worms_n/\equiv$ denotes $\Worms_n$ modulo ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$-provable equivalence and $\langle \varepsilon_0, <\rangle$ is just the ordinal $\varepsilon_0 = \sup \{ \omega, \omega^{\omega}, \omega^{\omega^{\omega}}, \ldots\}$ under the usual ordinal ordering $<$.
Worms can be related smoothly to more standard ordinal notations using the so-called hyper-exponentiation functions $e^n : {\sf On} \to {\sf On}$ (see \cite{FernandezJoosten:2012:Hyperations}) where $\sf On$ denotes the class of ordinals and $e^0$ is the identity function; $e^1 : \xi \mapsto -1 + \omega^\xi$; and $e^{n+m} = e^n\circ e^m$. The following theorem is proven in \cite{FernandezJoosten:2014:WellOrders}:
\begin{proposition}\label{theorem:wormCalculusForGLPworms} Let $o_0: \Worms \to {\sf On}$ be defined by \begin{enumerate} \item $o_0(\top) = 0$; \item $o_0(B0A) = o_0(A) + 1 + o_0(B)$; \item $o_0(n\uparrow A) = e^n(o_0(A))$. \end{enumerate} Here, $n\uparrow A$ denotes the worm that arises by simultaneously substituting any modality $m$ in $A$ by $n+m$.
Further, for any worm $A\in \Worms$ we define $o_n(n \uparrow A) := o_0(A)$ and $o_n(A) := o_n(h_n(A))$. We now have that \[ o_n: \langle \Worms_n/\equiv, <_n\rangle \cong \langle \varepsilon_0, < \rangle. \] \end{proposition}
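\noindent By way of illustration, let us compute some small values of $o_0$ directly from the three clauses. Clause 2 with $A = B = \top$ gives $o_0(0) = o_0(\top) + 1 + o_0(\top) = 1$, whence by Clause 3, $o_0(1) = o_0(1\uparrow 0) = e^1(1) = \omega$. Applying Clause 2 once more we obtain \[ o_0(01) = o_0(1) + 1 + o_0(\top) = \omega + 1 \ \ \mbox{ and } \ \ o_0(101) = o_0(1) + 1 + o_0(1) = \omega \cdot 2. \]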
In previous papers on polymodal provability logics, various proofs are rather involved since they work with classical ordinal notation systems. Worms carry additional logical and algebraic structure, which makes them the better-suited ordinal notation system in the context of provability logics and Turing progressions.
\begin{notation}\label{notation:TuringProgressionsIndexedByWorms} For $A{\in} \Worms$, by $T_n^A$ we shall denote the Turing progression $T_n^{o_n(A)}$. \end{notation}
Note that in virtue of our definitions we have that $T^A_n = T^{h_n(A)}_n$ and we shall use both notations interchangeably. Moreover, we note that a worm can denote various objects: an iterated consistency statement in modal logic, an iterated consistency statement in the language of arithmetic, and an ordinal. The context will always reveal what kind of object the occurrence of a particular worm denotes and thus we refrain from separating the different possible denotations by introducing extra notation.
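\noindent For example, for $A = 210$ we have $h_1(A) = 21$ and, since $21 = 1\uparrow 10$, \[ o_1(A) = o_1(21) = o_0(10) = o_0(\top) + 1 + o_0(1) = 1 + \omega = \omega, \] so that $T_1^{210} = T_1^{21}$ denotes the Turing progression $T_1^{\omega}$.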
\section{Turing Taylor expansions}
We shall see that various theories can be written as the finite union of simple Turing progressions which we call the \emph{Turing-Taylor expansion}. We start by looking at theories axiomatized by worms. Recall that all our theories are in the language containing a symbol for super-exponentiation and are supposed to come with a $\Delta_0$ axiomatization.
\subsection{Worms and Turing progressions}
The generalized Turing progressions $T_n^A$ are not too sensitive to adding ``small'' elements to the base theory, as is expressed by the following lemma.
\begin{lemma}\label{theorem:GeneralizedTuringProgressionsInvariantToSmallChangesBaseTheory} For any theory $T$ and for any $\sigma \in \Sigma_{n+1}$, we have provably in $\ensuremath{{\rm{EA}}}\xspace^+$ that \[ (T+\sigma)^\alpha_n \equiv (T)^\alpha_n +\sigma \ \ \ \ \ \mbox{for any $\alpha < \varepsilon_0$}. \] In particular, for any theory $T$ and for any ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ worm $A$, if $m<n<\omega$, then \[ (T+mA)^\alpha_n \equiv (T)^\alpha_n +mA \ \ \ \ \ \mbox{for any $\alpha < \varepsilon_0$}. \] \end{lemma}
\begin{proof} By a straightforward reflexive transfinite induction using provable $\Sigma_{n+1}$-completeness at the inductive step: for $n<\omega$ and $\sigma \in \Sigma_{n+1}$, we have \[ \sigma \to [n]_T \, \sigma. \] \end{proof}
\noindent The main motor for relating provability logics to Turing progressions is the following theorem.
\begin{theorem}\label{theorem:wormsAndGeneralizedTuringProgressions} Let $T$ be some elementary presented theory containing $\ensuremath{{\rm{EA}}}\xspace^+$ whose axioms have logical complexity at most $\Pi_{n+1}$ and let $A$ be some worm in $\Worms_n$. We have, provably in $\ensuremath{{\rm{EA}}}\xspace^+$, that \[ T+ A \equiv_n T_n^{A}. \] \end{theorem}
\begin{proof} By reflexive transfinite induction. We refer to \cite[Theorem 17]{Beklemishev:2005:Survey} for details. \end{proof}
Of course, in general we do not have\footnote{It is known that $\ensuremath{{\mathrm{PRA}}}\xspace + \neg \isig{1}$ and \isig{1} are $\Pi_2^0$ equivalent (see \cite[Lemma 3.4]{Joosten:2005:ClosedFragmentILPRAwithIsig1}). Clearly, $\ensuremath{{\mathrm{PRA}}}\xspace + \neg \isig{1} + \isig{1} \not \equiv_1 \isig{1}$.} that if $U\equiv_{n}V$, then $U + \psi \equiv_n V + \psi$ for theories $U$ and $V$ and formulas $\psi$. However, in the case of Turing progressions we can include ``small'' additions on both sides and preserve conservativity.
\begin{lemma}\label{theorem:addingSmallWormsToT+A} Let $T$ be some theory whose axioms have logical complexity at most $\Pi_{n+1}$ and let $A$ be some worm in $\Worms_n$. Moreover, let $B$ be any worm and $m<n$. We have, verifiably in $T$, that \[ T + A + mB \equiv_n T_n^{A} + mB. \] \end{lemma}
\begin{proof} As $m<n$ we have that $mB \in \Pi_{n}$. Whence, we can apply Theorem \ref{theorem:wormsAndGeneralizedTuringProgressions} to the theory $T+mB$ and obtain \[ T + mB + A \equiv_n (T+ mB)_n^{A} \] However, by Lemma \ref{theorem:GeneralizedTuringProgressionsInvariantToSmallChangesBaseTheory} we see that \[ (T+ mB)_n^{A}\equiv T_n^{A} + mB, \ \mbox{ whence } \ T + mB + A \equiv_n T_n^{A} + mB. \] \end{proof} \noindent From this lemma we obtain the following simple but very useful corollary.
\begin{corollary}\label{theorem:T+AIsAlmostATuringProgression} Let $T$ be some theory whose axioms have logical complexity at most $\Pi_{n+1}$. Moreover, let $A$ be any worm. We have verifiably in $T$ that \[ T + A \ \equiv_n \ T_n^{h_n(A)} + r_n(A) \ \equiv_n \ T_n^{A} + r_n(A). \] \end{corollary}
\begin{proof} Since ${\ensuremath{\mathsf{GLP}}}\xspace \vdash A \leftrightarrow h_n(A) \wedge r_n(A)$ and since by assumption ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ is sound w.r.t.\ $T$ we see that $T+ A \equiv T+ h_n(A) + r_n(A)$. The worm $r_n(A)$ is either empty or of the form $mA$ for some $m<n$. Clearly, $h_n(A) \in \Worms_n$. Thus, we can apply Lemma \ref{theorem:addingSmallWormsToT+A} and obtain \[ T+ h_n(A) + r_n(A) \equiv_n T_n^{h_n(A)} + r_n(A). \] Recall that by our notation convention (Notation \ref{notation:TuringProgressionsIndexedByWorms}) and by the definition of $o_n$ (see Proposition \ref{theorem:wormCalculusForGLPworms}), we have that $T_n^{h_n(A)} + r_n(A)\equiv T_n^{A} + r_n(A)$. \end{proof}
\subsection{Theories axiomatized by worms} From Theorem \ref{theorem:wormsAndGeneralizedTuringProgressions} we see that we can capture the $\Pi_1^0$ consequences of the $o_0(A)$-th Turing progression of $T$ by the simply axiomatized theory $T+A$.
That is, $T+A$ proves the same $\Pi^0_1$ formulas as $T_0^{A}$. However, $T+A$ will in general prove many new formulas of higher complexity. We can characterize those consequences of $T+A$ also in terms of Turing progressions, as we see in the next theorem.
\begin{theorem}\label{theorem:OmegaSequenceInTuringProgressions} Let $T$ be some $\Pi_1^0$ axiomatizable theory. Let $A$ be any ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ worm. We have, verifiably in $T$, that \begin{enumerate} \item $T + A \equiv \bigcup_{i< \omega} T_i^A$, and
\item $T + A \equiv_n \bigcup_{i=0}^{n} T_i^A$. \end{enumerate} \end{theorem}
\begin{proof} It suffices to prove the second item in the form of $T + A \equiv_n \bigcup_{i=0}^{n} T_i^{h_i(A)}$ since for any worm $A$, we have $h_i(A) =\top$ for $i$ large enough. We prove the second item by an external induction on $n$ and the base case follows directly from Theorem \ref{theorem:wormsAndGeneralizedTuringProgressions}.
For the inductive case we reason in $T$ as follows. By Corollary \ref{theorem:T+AIsAlmostATuringProgression} we know that \begin{equation}\label{IAmTiredOfTheseLengthyLables} T + A \equiv_{n+1} T_{n+1}^{h_{n+1}(A)} + r_{n+1}(A). \end{equation} In particular, as $T_{n+1}^{h_{n+1}(A)} + r_{n+1}(A) \subseteq \Pi_{n+2}$ we see that actually, $T+A$ is a $\Pi_{n+2}$-conservative extension of $T_{n+1}^{h_{n+1}(A)} + r_{n+1}(A)$, that is, \[ T + A \vdash T_{n+1}^{h_{n+1}(A)} + r_{n+1}(A). \] The induction hypothesis tells us that \begin{equation}\label{shortLabel} T + A \equiv_n \bigcup_{i=0}^{n} T_i^{h_i(A)}. \end{equation} Again, since $\bigcup_{i=0}^{n} T_i^{h_i(A)} \subseteq \Pi_{n+1}$ we obtain that \[ T + A \vdash \bigcup_{i=0}^{n} T_i^{h_i(A)}. \] Thus, $T+A \vdash \bigcup_{i=0}^{n+1} T_i^{h_i(A)}$ and in particular, if $\bigcup_{i=0}^{n+1} T_i^{h_i(A)} \vdash \pi$ then $T+A \vdash \pi$ for $\pi \in \Pi_{n+2}$.
Conversely, assume that $T+A \vdash \pi$ for some $\Pi_{n+2}$ sentence $\pi$. By \eqref{IAmTiredOfTheseLengthyLables} we see that $T_{n+1}^{h_{n+1}(A)} + r_{n+1}(A) \vdash \pi$. However, $r_{n+1}(A) \in \Pi_{n+1}$ and $T+A \vdash r_{n+1}(A)$ so, by \eqref{shortLabel} we see that $\bigcup_{i=0}^{n} T_i^{h_i(A)} \vdash r_{n+1}(A)$. Thus \[ \begin{array}{lll} \bigcup_{i=0}^{n+1} T_i^{h_i(A)} & \vdash & T_{n+1}^{h_{n+1}(A)} + r_{n+1}(A) \\ \ & \vdash & \pi. \end{array} \] as was required.
\end{proof}
It is clear that the modal reasoning in the proof of Theorem \ref{theorem:OmegaSequenceInTuringProgressions} can be extended beyond ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$. For this, one needs a (hyper-)arithmetic interpretation of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$. One such example is given in \cite{FernandezJoosten:2013:OmegaRuleInterpretationGLP} but for $\Lambda > \omega$ there are no canonical formula complexity classes around. This problem can be solved by considering a different interpretation of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ as presented in \cite{Joosten:2015:JumpsThroughProvability}.
As a nice corollary to Theorem \ref{theorem:OmegaSequenceInTuringProgressions} we get the following simple but useful lemma.
\begin{lemma}\label{theorem:HigherProgressionImpliesLowerOnesWormVersion} Let $T$ be a $\Pi_{m+1}$ axiomatized theory. For $m\leq n$ and $A\in \Worms_n$, we have, verifiably in $T$, that $T_n^A \vdash T_{m}^A$. \end{lemma}
The restriction on the complexity of $T$ can actually be dropped as was shown in \cite{Schmerl:1978:FineStructure, Beklemishev:1995:IteratedLocalReflectionVersusIteratedConsistency}.
\section{Ignatiev's model and Turing-Taylor expansions}
In this section we shall focus on sub-theories of Peano Arithmetic. We shall see that if such a theory can be written as the finite union of generalized Turing progressions, then it can be seen as ``an element'' of a well-known model for modal logic.
\subsection{Ignatiev's universal model}
The closed fragment ${\ensuremath{\mathsf{GLP}}}\xspace^0_\omega$ of ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ is a rich yet decidable structure. Ignatiev exhibited a Kripke model for ${\ensuremath{\mathsf{GLP}}}\xspace^0_\omega$ that is universal in the sense that the set of all formulas valid in all worlds of the model is exactly the set of theorems of ${\ensuremath{\mathsf{GLP}}}\xspace^0_\omega$.
We refer to the standard literature (\cite{Ignatiev:1993:StrongProvabilityPredicates, Joosten:2004:InterpretabilityFormalized, BeklemishevJoostenVervoort:2005:FinitaryTreatmentGLP}) for details and limit ourselves here to defining the model and stating its main properties.
Ignatiev's universal model $\mathcal U$ is a pair $\langle \mathcal{I}_\omega, \{ \succ_i \}_{i\in \omega} \rangle$ where $\mathcal{I}_\omega$ is a set of \emph{worlds} and for each $i\in \omega$ we have a binary relation $\succ_i$ on $\mathcal{I}_\omega$. Worlds in Ignatiev's model for ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ are sequences of ordinals, \[ \langle \alpha_0, \alpha_1, \alpha_2, \ldots \rangle \mbox{ with $\alpha_{n+1}\leq{\ell}(\alpha_n)$} \] where ${\ell} (\alpha + \omega^\beta) = \beta$ and ${\ell}(0)=0$. We define $\vec \alpha \succ_n \vec \beta$ to hold exactly when both $\alpha_i = \beta_i$ for all $i<n$ and $\alpha_n > \beta_n$. We have included a picture of a part of $\mathcal U$ in Figure \ref{fig:ignatiev}.
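\noindent For example, $\langle \omega\cdot 2, 1, 0, 0, \ldots \rangle$ is a world of $\mathcal{I}_\omega$: writing $\omega\cdot 2 = \omega + \omega^1$ we see that ${\ell}(\omega\cdot 2) = 1$, and ${\ell}(1) = {\ell}(\omega^0) = 0$, so each coordinate obeys the required bound. By contrast, $\langle \omega\cdot 2, 2, 0, \ldots \rangle$ is not a world since $2 > {\ell}(\omega\cdot 2)$.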
We recursively define $\vec \alpha \nVdash \bot$, $\vec \alpha \Vdash \phi \to \psi$ iff ($\vec \alpha \nVdash \phi$ or $\vec \alpha \Vdash \psi$), and $\vec \alpha \Vdash [n] \phi$ iff for all $\vec \beta$ with $\vec \alpha \succ_n \vec \beta$ we have $\vec \beta \Vdash \phi$, with the tacit understanding that the other connectives are defined as usual in terms of $\bot, \to$ and the $[n]$'s. Ignatiev's model $\mathcal U$ is universal in that ${\ensuremath{\mathsf{GLP}}}\xspace^0_\omega \vdash \phi \ \ \Leftrightarrow \ \ \forall \, \vec \alpha {\in} \mathcal{I}_\omega\ \ \vec \alpha \Vdash \phi$.
In the light of Proposition \ref{theorem:wormCalculusForGLPworms}, we can represent each world $\vec \alpha \in \mathcal{I}_\omega$ by a (non-unique) sequence of worms $A_n\in \Worms_n$: \[ \langle A_0, A_1, A_2, \ldots \rangle \mbox{ with $A_{n+1}\leq_{n+1}h_{n+1}(A_n)$ and $o_n(A_n) = \alpha_n$.} \] We often refer to these sequences $\vec A$ as \emph{Ignatiev sequences}, subject to the condition that $A_{n+1}\leq_{n+1}h_{n+1}(A_n)$.
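\noindent For instance, the world $\langle \omega^\omega, \omega, 1, 0, 0, \ldots \rangle$ is represented by the Ignatiev sequence $\langle 2, 2, 2, \top, \top, \ldots \rangle$: we have $o_0(2) = e^2(1) = \omega^\omega$, $o_1(2) = o_0(1) = \omega$ and $o_2(2) = o_0(0) = 1$, while the head condition holds at every coordinate since $h_1(2) = h_2(2) = 2$ and $h_3(2) = \top$.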
\subsection{Turing-Taylor expansions}
Since $\ensuremath{{\rm{EA}}}\xspace^+$ seems to be the weakest theory in which we can properly formalize Turing progressions, we define the $\Pi^0_{n+1}$ ordinal of a theory $U$ to be \[
|U|_{\Pi^0_{n+1}} \ := \ \sup \{ \alpha \mid (\ensuremath{{\rm{EA}}}\xspace^+)_n^\alpha \subseteq U \}. \] A main point of this paper is that collecting these ordinals per theory yields an interesting structure: Ignatiev's model $\mathcal U$. Therefore, we define the spectrum of a theory as expected.
\begin{definition} For $U$ a formal arithmetic theory that lies in between $\ensuremath{{\rm{EA}}}\xspace^+$ and \ensuremath{{\mathrm{PA}}}\xspace, we define its \emph{spectrum} $tt(U)$ by \[
tt(U)\ \ := \ \ \langle |U|_{\Pi^0_1}, |U|_{\Pi^0_2}, |U|_{\Pi^0_3}, \ldots \rangle. \] Suggestively, we shall also speak of the \emph{Turing-Taylor expansion} of $U$ instead of the spectrum of $U$.
In case $U \equiv \bigcup_{n=0}^\infty (\ensuremath{{\rm{EA}}}\xspace^+)_n^{|U|_{\Pi^0_{n+1}}}$ we say that $U$ has a convergent Turing-Taylor expansion. \end{definition}
We include the reference to Taylor in the name due to the analogy to Taylor expansions of $C^{\infty}$ functions, that is, functions that are infinitely many times differentiable (see acknowledgements). If $f$ is a $C^{\infty}$ function, one can consider its Taylor expansion around 0 as $f(x) = \sum_{n=0}^{\infty}a_n x^n$. Thus, each Taylor expansion is determined by its sequence $\langle a_0, a_1, a_2, \ldots \rangle$ of coefficients. In the case of a convergent Turing-Taylor expansion we fully determine the expansion by a sequence of ordinals $\langle \xi_0, \xi_1, \xi_2,\dots\rangle$ so that \[ U \equiv \bigcup_{n=0}^\infty T_n^{\xi_n}. \] We shall study which sequences of ordinals are attainable as coming from a convergent Turing-Taylor expansion.
Note that we have defined $tt(U)$ as to include only $\Pi_n^0$ sentences but this can easily be generalized to suitable sentences of higher complexities. For our current purpose, studying sub-theories of \ensuremath{{\mathrm{PA}}}\xspace, the restriction is not essential.
For Taylor expansions there is actually a uniform way of computing the coefficients as $f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n !}\ x^n$, where $f^{(n)}$ denotes the $n$-th derivative of $f$ and $f^{(0)} : = f$. For theories axiomatized by worms, we saw in Theorem \ref{theorem:OmegaSequenceInTuringProgressions} that there is also such a uniform way of computing the coefficients.
Note that the analogy to Taylor expansions is by no means perfect. In particular, in Taylor expansions we see that all the monomials $x^n$ are mutually independent, whereas in Turing progressions there will be certain dependency as we already saw in Lemma \ref{theorem:HigherProgressionImpliesLowerOnesWormVersion}. Therefore, we will rather speak of the spectrum or \emph{ordinal spectrum} of a theory instead of its Turing-Taylor expansion.
With every sequence $\vec \alpha = \langle \alpha_0, \alpha_1, \ldots \rangle$ of ordinals below $\varepsilon_0$ we can naturally associate a sub-theory $(\vec \alpha)_{\sf tt}$ of \ensuremath{{\mathrm{PA}}}\xspace as follows: \[ (\vec \alpha)_{\sf tt} := \bigcup_{n=0}^{\infty} (\ensuremath{{\rm{EA}}}\xspace^+)_n^{\alpha_n}. \] Of course we can, and shall, write the $\alpha_n$ most of the time as worms $A_n$ in $\Worms_n$. In general, we do not have that $tt((\vec A)_{\sf tt}) = \vec A$. Let us first see this in a concrete example and then prove some general theorems in the next sections.
\begin{example}\label{example:TuringTaylerProjections} For $\Pi^0_1$ axiomatized theories $T$ we have (in worm-notation) that $T^1_{1} + T_0^{01} \equiv T^1_1 + T_0^{101}$. In the classical notation system this reads $T_1^{1} + T_0^{\omega +1} \equiv T^1_1 + T_0^{\omega\cdot 2}$. \end{example}
\begin{proof} By Theorem \ref{theorem:OmegaSequenceInTuringProgressions} we have that $T^1_{1}\equiv T+\langle 1 \rangle \top$ and $T_0^{01} \equiv T+\langle 0 \rangle \langle 1 \rangle \top$. Thus, $T^1_1 + T_0^{01} \equiv T + \langle 1 \rangle \top + \langle 0 \rangle \langle 1\rangle \top$. Clearly the latter is equivalent to $T + \langle 1 \rangle \langle 0 \rangle \langle 1\rangle \top$ and we obtain our result by one more application of Theorem \ref{theorem:OmegaSequenceInTuringProgressions}.
Using Proposition \ref{theorem:wormCalculusForGLPworms} one gets the correspondence to the more familiar ordinal notation system. \end{proof}
\subsection{Each Turing-Taylor expansion corresponds to a unique point in Ignatiev's model}
We shall now prove that for each theory $U$ we have that $tt(U)$ is a sequence that occurs in $\mathcal{I}_\omega$. Most of the work in doing so is included in the following theorem.
\begin{theorem}\label{theorem:EachTheoryIsApointInIgnatievsModel} Let $T$ be some $\Pi_{n+1}$ axiomatized theory and let $A\in \Worms_{n+1}$ and $B\in \Worms_n$. We have, verifiably in $T$, that \[ T_{n+1}^A + T_n^{nB} \equiv_{n+1} T + A + nB, \] and \[ T_{n+1}^A + T_n^{nB} \equiv_n T_n^{AnB}. \] \end{theorem}
\begin{proof} Since $B\in \Worms_n$, by Theorem \ref{theorem:OmegaSequenceInTuringProgressions} and Lemma \ref{theorem:HigherProgressionImpliesLowerOnesWormVersion} we know that \[ T_n^{nB} \equiv T+nB. \] Consequently, we obtain the following equivalence. \begin{equation}\label{equation:AddingFinitelyAxiomatizedTuringProgression} T_{n+1}^A + T_n^{nB} \equiv T_{n+1}^A + nB \end{equation} Let us now see the following conservation result which proves the first part of the theorem. \begin{equation}\label{equation:ConservationBetweenTwoTPsAndTwoWorms} T_{n+1}^A + T_n^{nB} \equiv_{n+1} T+A +nB \end{equation} By \eqref{equation:AddingFinitelyAxiomatizedTuringProgression} it suffices to show that \[ T_{n+1}^A + nB \vdash \pi \ \ \Leftrightarrow \ \ T+A + nB \vdash \pi \] for any $\pi \in \Pi^0_{n+2}$. However, if $\pi \in \Pi^0_{n+2}$ we also have that $(nB \to \pi) \in \Pi^0_{n+2}$ since $nB \in \Pi^0_{n+1}$. Thus we can reason \[ \begin{array}{llll} T_{n+1}^A + nB \vdash \pi & \Leftrightarrow & T_{n+1}^A \vdash nB\to \pi& \\ & \Leftrightarrow & T+A \vdash nB\to \pi& \mbox{by Theorem \ref{theorem:wormsAndGeneralizedTuringProgressions}}\\ & \Leftrightarrow & T+A + nB \vdash \pi& \\ & \Leftrightarrow & T+AnB \vdash \pi& \mbox{since $A\in \Worms_{n+1}$.}\\ \end{array} \] This proves \eqref{equation:ConservationBetweenTwoTPsAndTwoWorms} and also $T_{n+1}^A + T_n^{nB} \equiv_{n+1} T+AnB$. We readily obtain the second claim of our theorem since by Theorem \ref{theorem:wormsAndGeneralizedTuringProgressions} we have $T+AnB\equiv_n T_n^{AnB}$. \end{proof}
\begin{corollary}\label{theorem:EachTtDefinesAUniquePoint} If $U$ is some sub-theory of \ensuremath{{\mathrm{PA}}}\xspace with a convergent Turing-Taylor expansion, so that $U \not \equiv_0 \ensuremath{{\mathrm{PA}}}\xspace$, then $tt(U)$ defines a point in $\mathcal{I}_\omega$. \end{corollary}
\begin{proof}
Since $U$ has a convergent Turing-Taylor expansion, $|U|_{\Pi^0_1}$ is well-defined. Since it is well-known that $(\ensuremath{{\rm{EA}}}\xspace^+)_0^{\varepsilon_0} \equiv_0 \ensuremath{{\mathrm{PA}}}\xspace$, by the assumption that $U \not \equiv_0 \ensuremath{{\mathrm{PA}}}\xspace$ we know that we can find a ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ worm $A$ with $(\ensuremath{{\rm{EA}}}\xspace^+)_0^A \equiv_0 U$, whence $|U|_{\Pi^0_1} = A$.
By Lemma \ref{theorem:HigherProgressionImpliesLowerOnesWormVersion} we see that each $|U|_{\Pi^0_n}<\varepsilon_0$. Thus, indeed, $tt(U)$ defines a sequence $\vec A$ of worms. W.l.o.g.~we pick $\vec A$ so that $A_n \in \Worms_n$ for each $n$.
Suppose now for a contradiction that for some $n$, the sequence $\vec A$ does not satisfy the condition that $A_{n+1}\leq_{n+1} h_{n+1}(A_n)$. As provably $A_n\leftrightarrow h_{n+1}(A_n)r_{n+1}(A_n)$, clearly, $A_n\geq_n r_{n+1}(A_n)$ whence $T_n^{A_n} \vdash T_n^{r_{n+1}(A_n)}$. By Theorem \ref{theorem:EachTheoryIsApointInIgnatievsModel} we know that $T_{n+1}^{A_{n+1}} + T_n^{r_{n+1}(A_n)}\vdash T_n^{A_{n+1}r_{n+1}(A_n)}$. But, since by assumption $A_{n+1}>_{n+1} h_{n+1}(A_n)$ we know that $A_{n+1}r_{n+1}(A_n) >_n A_n$. The latter violates the assumption that $|U|_{\Pi_n^0} = A_n$ is the supremum of all $B$ so that $(\ensuremath{{\rm{EA}}}\xspace^+)_n^B \subseteq U$. \end{proof}
\subsection{Each point in Ignatiev's model corresponds to a unique Turing-Taylor expansion}
Corollary \ref{theorem:EachTtDefinesAUniquePoint} tells us that certain points in the Ignatiev model $\mathcal{I}_\omega$ can be seen as mathematical theories with a convergent Turing-Taylor expansion. We now wish to see that \emph{every} point $\vec A$ in the Ignatiev model $\mathcal{I}_\omega$ can be interpreted naturally as a theory.
The natural candidate would of course be the theory $(\vec A)_{\sf tt}$. But, we have already seen in Example \ref{example:TuringTaylerProjections} that in general we do not have $tt((\vec A)_{\sf tt})=\vec A$. However, as we shall see, for points in the Ignatiev model the equality does hold.
We will need two technical lemmas to deal with adjacent points in $\vec A$. The first, Lemma \ref{theorem:IgnatievConditionViolated}, deals with the case where these adjacent points violate the condition $A_{n+1}\leq_{n+1}h_{n+1}(A_n)$ of Ignatiev sequences. The second, Lemma \ref{theorem:ConservationIgnatievStep}, deals with the case where no such violation occurs.
\begin{lemma}\label{theorem:IgnatievConditionViolated} Let $T$ be a $\Pi_{n+1}$ axiomatized theory. Moreover, let $A\in \Worms_{n+1}$, $B\in \Worms_n$ and suppose $A\geq_{n+1} h_{n+1}(B)$. Then, verifiably in $T$, we have \[ T_{n+1}^A + T_n^B \equiv_n T_n^{Ar_{n+1}(B)}. \] \end{lemma} \begin{proof} We reason in $T$. Since clearly $B\geq_{n}r_{n+1}(B)$ we have that $T_n^{B} \vdash T_n^{r_{n+1}(B)}$. Consequently, \[ \begin{array}{llll} T_{n+1}^A + T_n^B & \vdash & T_{n+1}^A + T_n^{r_{n+1}(B)} & \\
& \equiv_n & T_n^{Ar_{n+1}(B)}& \mbox{by Theorem \ref{theorem:EachTheoryIsApointInIgnatievsModel}.}\\ \end{array} \] Let $\pi$ be some $\Pi^0_{n+1}$ sentence. We have thus seen that if $T_n^{Ar_{n+1}(B)}\vdash \pi$, then $T_{n+1}^A + T_n^B\vdash \pi$.
For the other direction, suppose that $T_{n+1}^A + T_n^B\vdash \pi$ for some $\pi \in \Pi^0_{n+1}$. We wish to see that $T_n^{Ar_{n+1}(B)}\vdash \pi$. We start with an application of Theorem \ref{theorem:EachTheoryIsApointInIgnatievsModel} and see: \[ \begin{array}{llll} T_{n+1}^A + T_n^{r_{n+1}(B)} &\equiv_{n+1} &T + A + {r_{n+1}(B)} & \\
& \equiv_{n+1} & T + Ar_{n+1}(B) & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)\\
& \equiv_{n+1} & T_{n+1}^A + T_n^{Ar_{n+1}(B)}& \mbox{by Theorem \ref{theorem:OmegaSequenceInTuringProgressions}.}\\ \end{array} \] As $A \geq_{n+1}h_{n+1}(B)$, we see that $Ar_{n+1}(B)\geq_n h_{n+1}(B)r_{n+1}(B)$, whence $Ar_{n+1}(B)\geq_n B$. Consequently, $T_n^{Ar_{n+1}(B)} \vdash T_n^B$. Thus, if $T_{n+1}^A + T_n^B \vdash \pi$, then also $T_{n+1}^A + T_n^{Ar_{n+1}(B)} \vdash \pi$ and by $(*)$, we see $T+ Ar_{n+1}(B) \vdash \pi$. Since $\pi \in \Pi^0_{n+1}$ we get by one more application of Theorem \ref{theorem:OmegaSequenceInTuringProgressions} that $T_n^{Ar_{n+1}(B)} \vdash \pi$ as was required. \end{proof} We note that the assumption $A\geq_{n+1} h_{n+1}(B)$ does not give us any information about the relations $\geq_{n+m}$ with $m>1$ at different coordinates in the Ignatiev sequence as these signs can switch arbitrarily. For example, let $A = 220222$ and $B=2122$. We have that \[ \begin{array}{lll} A & >_0 & B\\ h_1(A) & <_1 & h_1(B)\\ h_2(A) & >_2 & h_2(B).\\ \end{array} \] The next lemma takes care of the case $A\leq_{n+1} h_{n+1}(B)$. \begin{lemma}\label{theorem:ConservationIgnatievStep} Let $T$ be some $\Pi_{n+1}$-axiomatized theory. Moreover, let $A\in \Worms_{n+1}$, $B\in \Worms_n$ and suppose $A\leq_{n+1} h_{n+1}(B)$. Then, verifiably in $T$, we have \[ T_{n+1}^A + T_n^B \equiv_n T_n^{B}. \] \end{lemma}
\begin{proof} One direction is immediate so we reason in $T$ and assume that $T_{n+1}^A + T_n^B \vdash \pi$ for some $\pi\in\Pi^0_{n+1}$; we set out to prove that $T_n^B\vdash \pi$. However, by assumption $A\leq_{n+1}h_{n+1}(B)$ so that \begin{equation}\label{equation:bigTPImpliesSmallerTP} T_{n+1}^{h_{n+1}(B)} \vdash T_{n+1}^A. \end{equation} Using this, we obtain \[ \begin{array}{llll} T+ B &\equiv& T + h_{n+1}(B) + r_{n+1}(B) & \\
& \equiv_{n+1} & T_{n+1}^{h_{n+1}(B)} + T_n^{r_{n+1}(B)}& \mbox{by Theorem \ref{theorem:EachTheoryIsApointInIgnatievsModel}}\\
& \equiv_{n+1} & T_{n+1}^{h_{n+1}(B)} + T_n^{B}& \mbox{by Theorem \ref{theorem:EachTheoryIsApointInIgnatievsModel}}\\
& \vdash & T_{n+1}^A + T_n^B & \mbox{by \eqref{equation:bigTPImpliesSmallerTP}}. \end{array} \] On the other hand, $T+B \equiv_n T_n^B$ so that if $T_{n+1}^A + T_n^B\vdash \pi$, then $T+B \vdash \pi$, whence also $T_n^B \vdash \pi$, quod erat demonstrandum. \end{proof}
Suppose that $U\equiv_n V$. As mentioned before, in general we do not have that $U+T\equiv_nV+T$. However, we do have the following easy but useful lemma.
\begin{lemma}\label{theorem:ConservativeExtensions} (In $\ensuremath{{\rm{EA}}}\xspace^+$) Suppose $U\equiv_nV$ and $T\subseteq \Sigma_{n+1}$, then also $U + T\equiv_n V+T$. \end{lemma}
\begin{proof} Immediate from the (formalized) deduction theorem. \end{proof}
\begin{lemma} Let $T$ be a $\Pi_1$-axiomatized theory and let $\vec A \in \mathcal{I}_\omega$. We have, verifiably in $T$, that $\bigcup_{i=0}^{n}T_i^{A_i}\equiv_m \bigcup_{i=0}^{m}T_i^{A_i}$ for $m\leq n$. \end{lemma}
\begin{proof} By induction using lemmata \ref{theorem:ConservationIgnatievStep} and \ref{theorem:ConservativeExtensions}. \end{proof}
\begin{lemma}\label{theorem:IgnatievSequencesStableUnderIgnatievOperator} Let $\vec A \in \mathcal{I}_\omega$. If $\bigcup_{i=0}^{n}T_i^{A_i} \vdash T_n^B$, then $B\leq_n A_n$ given that $T$ is consistent. \end{lemma}
\begin{proof} Suppose otherwise, that is $A_n< B$. Then by compactness, for a single sentence $\pi$ of complexity at most $\Pi_{n}$ we have that $T_n^{A_n} + \pi \vdash T_n^B$. Since $B>A_n$ we certainly have $T_n^{A_n} + \pi \vdash \langle n\rangle_{T_n^{A_n}}\top$ whence, by provable $\Sigma_{n+1}$ completeness also $\langle n\rangle_{T_n^{A_n}}\pi$. Since the latter is equivalent to $\langle n\rangle_{T_n^{A_n} + \pi}\top$ we get by G\"odel's second incompleteness theorem for $n$-provability that $T_n^{A_n} + \pi$ is inconsistent. \end{proof}
\begin{theorem} Let $\vec A \in \mathcal{I}_\omega$. We have that ${\sf tt}((\vec A)_{\sf tt}) = \vec A$. \end{theorem}
\begin{proof}
We need to see that $|(\vec A)_{\sf tt}|_m = A_m$. This follows directly from the previous lemmas. \end{proof}
\subsection{Ignatiev's model: A roadmap to conservation results}
Now that we know that the $\Pi_n$ consequences of formal sub-theories between $\ensuremath{{\rm{EA}}}\xspace^+$ and $\ensuremath{{\mathrm{PA}}}\xspace$ correspond to points in Ignatiev's model, we have a nice way of collecting all we know about the $\Pi_n^0$ consequences of these fragments of arithmetic into a picture: we write the corresponding theories $T$ next to the nodes in the model that correspond to the spectra $tt(T)$ of $T$. Given two theories $T_1$ and $T_2$, we can then readily read off the amount of $\Pi_{n+1}$ conservation between them: it is determined by the largest $n$ such that $\big(tt(T_1)\big)_n = \big(tt(T_2)\big)_n$.
For example, it is known that $\isig{1} \equiv \langle 2\rangle_{\ensuremath{{\rm{EA}}}\xspace^+} \top$, which corresponds to the point $\langle \omega^\omega, \omega, 1\rangle$. Similarly, \ensuremath{{\mathrm{PRA}}}\xspace corresponds to $\langle \omega^\omega, \omega, 0\rangle$ (see \cite[Corollary 4.14]{Beklemishev:2005:Survey}\footnote{There is a minor detail in that this result is formulated over $\ensuremath{{\rm{EA}}}\xspace$ while we work over $\ensuremath{{\rm{EA}}}\xspace^+$. Basic observations from \cite{Schmerl:1978:FineStructure} show that these differences are inessential for our limit ordinals.}). Parsons' Theorem is readily read off from the picture (see Figure \ref{fig:ignatiev}) since $tt(\ensuremath{{\mathrm{PRA}}}\xspace)_1 = tt(\isig{1})_1 = \omega$, so that $\ensuremath{{\mathrm{PRA}}}\xspace \equiv_{\Pi^0_2} \isig{1}$.
\begin{figure}
\caption{Ignatiev's model: the $\succ_0$ relation is represented by a single arrow, $\succ_1$ by a double and $\succ_2$ by a triple arrow.}
\label{fig:ignatiev}
\end{figure}
However, the picture does not tell us which theory proves what kind of consistency statement about which other theory. For example, although there is an arrow between $\isig{1}$ and \ensuremath{{\mathrm{PRA}}}\xspace, it is clear that \isig{1} proves no consistency of \ensuremath{{\mathrm{PRA}}}\xspace. In \cite{HermoJoosten:2015:TuringSchmerl} a model is presented where, apart from conservation, one can directly see and compare the consistency strength of the Turing progressions depicted in that model. We defer the further filling out of the Ignatiev model to a later paper.
\end{document}
\begin{document}
\markboth{Andre Ahlbrecht, Florian Richter, and Reinhard F. Werner} {Finite Roots of the Completely Depolarizing Channel}
\title{How long can it take for a quantum channel to forget everything?}
\author{Andre Ahlbrecht} \address{Institute for Theoretical Physics, Leibniz Universit\"at Hannover, Appelstra\ss e 2\\ 30167 Hannover, Germany} \email{andre.ahlbrecht@itp.uni-hannover.de}
\author{Florian Richter} \email{frichter@itp.uni-hannover.de}
\author{Reinhard F. Werner} \email{reinhard.werner@itp.uni-hannover.de}
\maketitle
\begin{abstract} We investigate quantum channels, which after a finite number $k$ of repeated applications erase all input information, i.e., channels whose $k$-th power (but no smaller power) is a completely depolarizing channel. We show that on a system with Hilbert space dimension $d$, the order is bounded by $k\leq d^2-1$, and give an explicit construction scheme for such channels. We also consider strictly forgetful memory channels, i.e., channels with an additional input and output in every step, which after exactly $k$ steps retain no information about the initial memory state. We establish an explicit representation for such channels showing that the same bound applies for the memory depth $k$ in terms of the memory dimension $d$. \end{abstract}
\section{Introduction}
Quantum channels are the mathematical description for the most general quantum information processing operations. In this paper we consider the question of how quantum information can be erased in an iterated process. In the simplest case, all information is lost after a single step of this process, i.e. the quantum channel completely depolarizes its initial state. Of course, the more interesting case is when the channel representing the one-step process acts non-trivially on the input system, but after a finite number of iterations retains no information about the input system's initialization. We refer to such a channel as a root of a completely depolarizing channel (see figure~\ref{fig:root}). One of the main objectives in this article is to show how long it can possibly take until such an iteration is completely depolarizing. An upper bound in terms of the system's dimension can be derived easily from the Jordan normal form of the channel, but since the process needs to represent a physical transformation, which is expressed by complete positivity of the corresponding map, it is not clear a priori whether this bound is attained by some channel. In order to show that this is indeed a tight bound, we develop an explicit construction scheme for maximal roots of completely depolarizing channels.
One motivation to look at this problem stems from quantum memory channels \cite{memorychannel}. These are channels which account for correlations between successive uses of the channel by introducing an additional system, referred to as the memory. A central question in this context is whether the influence of a fixed input on the memory dies out in time, i.e. whether the channel is {\em forgetful} or not \cite{memorychannel,rybar}. Moreover, if the impact of the memory input vanishes within a finite number of steps the channel is referred to as {\em strictly forgetful}. We will demonstrate a connection between the concept of roots of completely depolarizing channels and strictly forgetful memory channels. The idea is to consider the transformation of the memory as a function of the state of the external input system. When the memory channel is strictly forgetful, this must be a root of a completely depolarizing channel for all system states. The converse, however, is not true: There are memory channels which give a root of a completely depolarizing channel for all fixed inputs, but are not strictly forgetful for general sequences of possibly entangled input states. Nevertheless, by adapting our method to the setting of strictly forgetful memory channels it is possible to create a technique which yields all strictly forgetful memory channels.
\begin{figure}
\caption{The channel $S$ is a finite root of the completely depolarizing channel (CDC) since a finite number of iterations maps an arbitrary input to the maximally mixed one ($d$ denotes the dimension of the quantum system).}
\label{fig:root}
\end{figure}
As a second application of our theory, we discuss the generation of finitely correlated spin-chain states \cite{fcs} by a root of completely depolarizing channels. If the channel is a $k^{\rm th}$ root, one obtains so-called $k$-dependent states \cite{Petz,Matus}, which are defined by the property that output observables separated by more than $k$ sites are independent.
Our paper is organized as follows. We first set up some notation and background on quantum channels on finite dimensional systems. In Section~\ref{sec:roots} we derive the general upper bound, and then describe the construction of maximal roots. For qubits we give an exhaustive construction, and focus for general systems on the question how the Jordan structure of a maximal root can be realized by completely positive maps. In Section~\ref{sec:fcs} we show how to obtain $k$-dependent states, and in Section~\ref{sec:memch} we discuss the connection to memory channels.
\section{Quantum channels on finite dimensional systems} The purpose of this section is to introduce notation and to give some necessary background on the mathematical aspects of quantum channels. For a detailed introduction to this topic we refer the reader to Paulsen's book \cite{paulsen}.
Throughout this paper we deal exclusively with quantum systems which can be described by a finite dimensional Hilbert space $\hilbertH=\C^d$. By $\M_d$ we denote the set of linear operators on $\C^d$ and the physical states of the system are represented by the set of density operators $\mathcal{S}(\C^d):=\{\rho\in\M_d,\ \rho\geq 0, \ \text{tr}(\rho)=1\}$. Possible measurements on the system are associated with the set of hermitian operators $\mathcal{M}(\C^d):=\{A\in\M_d,\ A^*=A \}$, where $A^*$ denotes the adjoint of $A\in\M_d$. A quantum channel can be defined in two different ways: we can either regard it as a transformation of the physical states or as a transformation of the measurements. The first point of view, also known as the Schrödinger picture, corresponds to a linear mapping $T^*$ from the states on an input system $\hilbertH_{in}$ to states of an output system $\hilbertH_{out}$, that is, \begin{equation} T^*:\mathcal{S}(\hilbertH_{in})\mapsto \mathcal{S}(\hilbertH_{out})\, . \end{equation} In the Heisenberg picture, which is precisely the second point of view, the quantum channel is represented by a linear map $T$ from the measurements on $\hilbertH_{out}$ to measurements on $\hilbertH_{in}$, i.e., \begin{equation} T:\mathcal{M}(\hilbertH_{out})\mapsto \mathcal{M}(\hilbertH_{in})\, . \end{equation} The maps $T$ and $T^*$ are equivalent representations of the same physical transformation iff all expectation values for measurements $A$ performed on states $\rho$ coincide after application of $T$ and $T^*$, respectively. Hence, the representations in the Heisenberg and Schrödinger pictures are connected by the duality relation \begin{equation} \mathrm{tr} (T^*(\rho)A)=\mathrm{tr}(\rho\ T(A))\,. \end{equation} By linearity both maps extend to the whole spaces $\M_{d_{in}}$ and $\M_{d_{out}}$, respectively. In order to represent physical transformations, the maps $T^*$ and $T$ have to satisfy certain properties.
Both have to be completely positive, that is, the extended maps $T^*\otimes \id_n$ and $T\otimes \id_n$, where $\id_n$ denotes the identity map on $n$-dimensional matrices, preserve positivity of operators for all $n\in\N$. Additionally, it is often assumed that the quantum channel always generates an output when it is fed with a state of the input system. Mathematically, this is expressed by the assumption that $T^*$ is trace-preserving and $T$ is unital. Note that each of these two properties is a consequence of the other one together with the duality relation.
A channel which maps every input state to the same output state $\sigma$ is called a completely depolarizing channel (CDC). We denote the Schrödinger picture representation of this channel by $T_\sigma^*$; its mathematical definition reads $T^*_{\sigma}(\rho)= \mathrm{tr} (\rho) \sigma $, with $\sigma\in\mathcal{S}(\mathcal{H}_{out})$. The duality relation yields \begin{equation} \mathrm{tr} (T_{\sigma}^*(\rho)A)=\mathrm{tr} (\rho )\mathrm{tr} (\sigma A)=\mathrm{tr} (\rho \, \mathrm{tr} (\sigma A)\, \mathbbm{1}), \end{equation} thus, $T_{\sigma}(A)=\mathrm{tr} (\sigma A)\cdot \mathbbm{1}$ for every $A\in\M_{d_{out}}$ is the representation of a CDC in the Heisenberg picture. The particular case where $d_{in}=d_{out}=d$ and $\sigma=\frac{1}{d}\mathbbm{1}$ leads to the CDC which is defined by $T^*_{\mathbbm{1}/d}(\rho)=\mathrm{tr} (\rho)\frac{1}{d}\mathbbm{1}$ and $T_{\mathbbm{1}/d}(A)=\mathrm{tr} (A)\frac{1}{d}\mathbbm{1} $. We will refer to this channel as the \emph{bistochastic} CDC. \\
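The duality between the two CDC representations is easy to check numerically. The following is a minimal sketch (the helper names \texttt{random\_state}, \texttt{T\_star} and \texttt{T} are ours, not from the paper), verifying $\mathrm{tr}(T_\sigma^*(\rho)A)=\mathrm{tr}(\rho\, T_\sigma(A))$ for randomly drawn $\rho$, $A$ and $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def random_state(d):
    # a random density operator: positive semidefinite with unit trace
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

sigma = random_state(d)                        # target state of the CDC
rho = random_state(d)                          # input state
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = H + H.conj().T                             # hermitian measurement operator

T_star = lambda r: np.trace(r) * sigma         # CDC, Schroedinger picture
T = lambda a: np.trace(sigma @ a) * np.eye(d)  # CDC, Heisenberg picture

lhs = np.trace(T_star(rho) @ A)
rhs = np.trace(rho @ T(A))
assert np.isclose(lhs, rhs)                    # duality relation holds
```

Both sides equal $\mathrm{tr}(\sigma A)$ here, since $\mathrm{tr}(\rho)=1$.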
The mathematical theory of completely positive maps provides some important results leading us to different ways of specifying a quantum channel. Since we are interested in concatenable channels, we focus our attention in the following on channels with equal input and output systems. The first statement is the famous theorem of Kraus \cite{kraus}, which shows that every completely positive map $T$ admits a decomposition of the form \begin{equation} T(X)=\sum_{\alpha}K_{\alpha}^* X K_{\alpha}\,, \quad \text{where unitality amounts to}\quad \sum_{\alpha}K_{\alpha}^*K_{\alpha}=\mathbbm{1}\, . \end{equation} We refer to $\{K_{\alpha}\}$ as \emph{Kraus operators} of the channel $T$. Note that this representation involves a unitary degree of freedom, i.e., the channels defined by $\{\tilde{K_i}\}$ and $\{K_i\}$ coincide if there exists a unitary $U$ such that $K_i=\sum_jU_{ij}\tilde{K_j}$ holds. If we restrict to Kraus decompositions with a minimal number of Kraus operators, this is actually the only freedom we have in choosing the $K_i$. In other words, two minimal Kraus representations $\{\tilde{K_i}\}$ and $\{{K_i}\}$ are always connected by a unitary $U$ via the formula $K_i=\sum_jU_{ij}\tilde{K_j}$. Of course, two Kraus decompositions of a quantum channel $T$ do not necessarily consist of the same number of Kraus operators. For example, consider the convex combination $T=\lambda T_1+(1-\lambda)T_2$ of two channels $T_1$ and $T_2$ with Kraus operators $\{K_{1,i}\}$ and $\{K_{2,i}\}$, respectively. Clearly, $T$ admits a Kraus decomposition with operators $\{\sqrt{\lambda}K_{1,i}\}\cup \{\sqrt{1-\lambda}K_{2,i}\}$, but in general there exists a Kraus decomposition with fewer operators. We define the minimal number of Kraus operators as the \emph{Kraus rank} of the channel $T$ and note that a Kraus decomposition is minimal iff the operators $K_i$ are linearly independent. 
We will see in the next section that a possible Kraus decomposition of the bistochastic CDC in the case of a qubit input and output system is $T_{\mathbbm{1}/2}(X)=\frac{1}{4}\sum_{i=0}^{3}\sigma_i X\sigma_i$ with Pauli operators $\sigma_i$. In fact, this result extends readily to higher dimensions by replacing the Pauli operators with Weyl operators. This implies that the Kraus rank of the bistochastic CDC is always $d^2$, where $d$ is the system's dimension.\\
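This Pauli-form Kraus decomposition can be confirmed directly; the sketch below (helper names are ours) checks that $\frac{1}{4}\sum_{i=0}^{3}\sigma_i X \sigma_i = \frac{\mathrm{tr}(X)}{2}\mathbbm{1}$ for a random $X$, and that the four Kraus operators are linearly independent, so the decomposition is minimal and the Kraus rank is $d^2=4$:

```python
import numpy as np

# the four Pauli operators sigma_0, ..., sigma_3
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def T_cdc(X):
    # Kraus form of the bistochastic qubit CDC: (1/4) sum_i sigma_i X sigma_i
    return sum(s @ X @ s for s in paulis) / 4

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
expected = np.trace(X) / 2 * np.eye(2)   # tr(X) * (1/2) * identity

assert np.allclose(T_cdc(X), expected)
# linear independence of the Kraus operators implies Kraus rank 4 = d^2
assert np.linalg.matrix_rank(np.array([s.flatten() for s in paulis])) == 4
```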
Another characterization arises if we consider Choi's theorem \cite{choi}. The statement of the theorem is sometimes called the channel-state duality, since it states that a map $T$ is completely positive iff its corresponding \emph{Choi operator} $\choi_T:=T\otimes \id(\ket{\Omega}\bra{\Omega})=\frac{1}{d} \sum_{i,j}T(\ket{i}\bra{j})\otimes\ket{i}\bra{j} $, with $\ket{\Omega}=\frac{1}{\sqrt{d}}\sum_{i}\ket{ii}$, is positive. In fact, $\choi_{T^*}$ has unit trace; hence $T$ is completely positive iff $\choi_{T^*}$ is a state, and we will refer to $\choi_{T^*}$ as the \emph{Choi state} of $T$. Since the relation between a channel $T$ and its corresponding Choi state $\choi_{T^*}$ is invertible, any state fully determines a channel and vice versa. The Choi state and Choi operator of a CDC are then given by \begin{equation} \choi_{T_\sigma^*}=\sigma\otimes \frac{1}{d}\mathbbm{1}_d\quad \text{and}\quad \choi_{T_\sigma}= \frac{1}{d}\mathbbm{1}_d\otimes \sigma^T\, , \end{equation}
where $\sigma^T$ denotes the transpose of $\sigma$. Furthermore, the linearity of a channel $T$ allows for a representation of $T$ by a matrix $D_{T}$. For that reason we equip the vector space $\M_d$ with the Hilbert-Schmidt scalar product $\langle A\vert B \rangle_{HS}:= \mathrm{tr} (A^*B)$ and define the representation matrix of a channel as $({D_{T}})_{i,j}:=\langle A^i \vert T( A_j) \rangle_{HS}$ with $\{A_1,...,A_{d^2}\}$ as operator basis and $\{A^1,...,A^{d^2}\}$ its dual basis defined via $\mathrm{tr} ({A^i}^*A_j)=\delta_{i,j}$. We point out that usually there is no way to determine the complete positivity of a map solely from its representation matrix without any knowledge of the operator basis. However, if we choose the matrix units $E_{ij}:= \ket{i}\bra{j}$ with $i,j \in \{1,...,d\}$ as a basis for the representation, we find that the representation matrix of $T$ and its corresponding Choi operator $\choi_T$ are connected via $({D_{T}})_{nm,kl}=\langle E_{nm}\vert T(E_{kl})\rangle_{HS}=d\bra{n\otimes k}\choi_T\ket{m\otimes l}$. Thus, the representation matrix of a CDC is given in this basis by \begin{equation}\label{eq:CDC_matrix} \langle E_{nm}\vert T_\sigma(E_{kl})\rangle_{HS}=\mathrm{tr} (E_{mn}\,\mathrm{tr} (\sigma E_{kl})\,\mathbbm{1})=\delta_{n,m}\bra{l}\sigma \ket{k}. \end{equation}
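The Choi-state formula for the CDC can be verified numerically. The sketch below (helper names ours) builds the Choi state of $T_\sigma^*$ from matrix units and compares it with $\sigma\otimes\frac{1}{d}\mathbbm{1}_d$:

```python
import numpy as np

d = 3
rng = np.random.default_rng(2)
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
sigma = M @ M.conj().T
sigma /= np.trace(sigma)                  # target state of the CDC

def E(i, j):
    # matrix unit |i><j|
    out = np.zeros((d, d), dtype=complex)
    out[i, j] = 1
    return out

T_star = lambda r: np.trace(r) * sigma    # CDC in the Schroedinger picture

# Choi state: (1/d) sum_{ij} T*(E_ij) (x) E_ij
choi = sum(np.kron(T_star(E(i, j)), E(i, j))
           for i in range(d) for j in range(d)) / d
assert np.allclose(choi, np.kron(sigma, np.eye(d) / d))
```

As expected for a Choi state, `choi` is positive with unit trace.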
The divisibility of quantum channels, that is, the property of a channel $T$ to be decomposable into two non-trivial channels $S_1$ and $S_2$, has been investigated in reference \cite{dividingquantum}. In the paper at hand, we specialize this investigation to the divisibility of a completely depolarizing channel $T_{\sigma}$ into a self-concatenation of identical maps, i.e. we ask whether there exist a channel $S$ and $k\in \N$ such that $S^k=T_{\sigma}$. Finally, we draw some connections between this problem and other fields in quantum information theory.
We close this section with a mathematical definition of a $k^{\rm th}$ order root of a quantum channel. According to figure \ref{fig:root} we define: \begin{definition}[Root of a Channel] \label{def:root} A \textbf{$k^{\rm th}$ root} of the channel $T:\M_d\mapsto \M_d$ is a channel $S:\M_d\mapsto \M_d$ with \begin{equation} S^k=T \ \ \text{and} \ S^r\neq T \ \text{for} \ r<k, \quad k,r\in\mathbb{N}. \end{equation} We refer to $k$ as the \textbf{order} of a root. \end{definition}
\section{Roots of Completely Depolarizing Channels}\label{sec:roots}
We start this section with some general comments about the construction of roots of a CDC. Our aim is to obtain roots of maximal order, i.e. roots requiring the maximal number of self-concatenations. It will turn out that this maximal order is always $d^2-1$, where $d$ denotes the dimension of the underlying Hilbert space $\H$. For the bistochastic CDC it is always possible to construct a root of maximal order. After characterizing all maximal roots of the bistochastic CDC for qubit systems, we present a construction scheme leading to maximal roots of the bistochastic CDC in arbitrary dimensions.
\subsection{General upper bound} \label{Sec:GenApp}
The aim of this section is to describe the general approach we take to construct roots of a CDC. In particular, we investigate the highest possible order of a CDC-root in terms of the dimension of $\H$. Comparing the three representations introduced above, it turns out that the matrix representation of a channel is the most fruitful one for determining an upper bound on the maximal order of a root. \begin{theorem}[Boundedness of the root order] \label{thm:bound} Let $S:\M_d\mapsto \M_d$ be a channel which is a $k^{\rm th}$ root of a completely depolarizing channel $T_\sigma$. Then $k\leq d^2-1$. \end{theorem} Before we prove the theorem, we need the following statement from linear algebra. \begin{lemma}[Jordan normal form]\label{thm:jordan} \cite{LinearAlgebra} For every matrix $T\in \M_D$ there is an invertible matrix $R$, such that \begin{equation} T=R\left(\bigoplus\limits_{\ell=1}^K J_\ell(\lambda_\ell)\right) R^{-1}
=RJR^{-1}, \end{equation} with \begin{equation}
J_\ell(\lambda):=\left(
\begin{array}{cccc}
\lambda & 1 & & \\
&\ddots &\ddots & \\
& & \lambda & 1 \\
& & & \lambda \\
\end{array}
\right)\in \M_{d_\ell}, \end{equation} where the matrix $J$ is called the \textbf{Jordan normal form} of $T$ and the $J_\ell(\lambda)$ are the \textbf{Jordan blocks} of size $d_\ell$ corresponding to the eigenvalue $\lambda$. The number of Jordan blocks with the eigenvalue $\lambda$ is its \textbf{geometric multiplicity}, while the sum of the corresponding block sizes, $\sum_{\ell:\lambda_\ell=\lambda}d_\ell$, is its \textbf{algebraic multiplicity}. \end{lemma} If we express the property of $S$ being a root of a CDC in terms of the representation matrix, we can establish the upper bound in the following way:
\begin{proof}[Proof of Theorem~\ref{thm:bound}] We consider $T_\sigma$ and $S$ as operators on $\M_d$, a space of dimension $D=d^2$. The eigenvalues of the CDC $T_\sigma$ are $1$ and $0$, and the eigenvalue $1$ is simple. Hence every eigenvalue $\lambda$ of $S$ satisfies $\lambda^k\in\{0,1\}$, i.e., $\lambda$ is either zero or a $k^{\rm th}$ root of unity. Since the eigenvalue $1$ of $S^k=T_\sigma$ is simple, the roots of unity can contribute only one eigenvalue of $S$, counted with algebraic multiplicity, and since $S$ is unital this eigenvalue is $1$. Hence the algebraic multiplicity of the eigenvalue $0$ of $S$ is $d^2-1$. The only remaining question is the decomposition of this dimension into Jordan blocks. The root order is the smallest $k$ such that $J_\ell(0)^k=0$ for all $\ell$, and the smallest power $k$ for which $J_\ell(0)^k=0$ is $d_\ell$. Hence the root order is the dimension of the largest Jordan block. Clearly, this becomes largest when there is only one block, i.e., when $k=d^2-1$. \end{proof}
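The block-counting step of the proof is easy to illustrate numerically: a single nilpotent Jordan block of size $m=d^2-1$ vanishes at the $m^{\rm th}$ power but not before. A small sketch for $d=3$ (so $m=8$), with our own variable names:

```python
import numpy as np

d = 3
m = d**2 - 1                      # size of the single nilpotent Jordan block
J = np.diag(np.ones(m - 1), k=1)  # Jordan block J(0): ones on the superdiagonal

# J^{m-1} is still non-zero, while J^m vanishes: the nilpotency index is exactly m
assert np.any(np.linalg.matrix_power(J, m - 1))
assert not np.any(np.linalg.matrix_power(J, m))
```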
Although the above theorem gives an explicit upper bound for the order of a root of a CDC, it does not answer the question whether this bound is attained by some channel $S$. In order to reach this bound we additionally need to take care of the complete positivity of $S$, which cannot be decided solely from the representation matrix of $S$. However, we will see in the next sections that there always exist channels which attain the upper bound $d^2-1$ for the order of a root of the bistochastic CDC. Since for the bistochastic CDC the maps $T_{\mathbbm{1}/d}$ and $T_{\mathbbm{1}/d}^*$ coincide, we omit the $^*$ distinguishing the Heisenberg and Schrödinger pictures for the rest of this section.
\subsection{All roots of the bistochastic qubit CDC } \label{Sec:Qubit}
The most elementary case arises if we consider the input and output system to be qubits. According to theorem \ref{thm:bound}, the highest possible order of a CDC-Root is three in this case. We will give an explicit characterization of the whole set of maximal qubit roots if $T_\sigma$ is the bistochastic CDC, i.e. $\sigma=\frac{1}{d}\mathbbm{1}$. Thereby we verify that there are indeed roots of the CDC of order three.\\ In reference \cite{mary1} it is proven that every bistochastic qubit channel can be decomposed as \begin{equation}\label{eq:qubit_case}
T(\rho)=U_1\Lambda[U_2\rho U_2^*]U_1^* \end{equation} where $U_1$ and $U_2$ are unitaries and $\Lambda$ is a Pauli diagonal channel, i.e. $\Lambda[\sigma_i]=\lambda_i\sigma_i\ \text{with}\ i\in\{0,1,2,3\}$. If we represent the set of possible qubit states $\rho$ via the Bloch sphere $\rho(\vec{r})=\frac{1}{2}(\hilberteins+\vec{r}\cdot\vec{\sigma}), \Vert\vec{r}\Vert\leq 1$, and translate the action of the maps induced by $U_1,U_2$ and $\Lambda$ to maps acting on $\vec{r}$, we find that an arbitrary bistochastic qubit channel can be written as a composition of a diagonal map $\mathbf{L}=\text{diag}\{\lambda_1,\lambda_2,\lambda_3\}$ and two rotations: \begin{equation}\label{eq:bloch} T(\rho(\vec{r}))=\frac{1}{2}(\hilberteins+(\mathbf{R_1 L R_2}\vec{r})\cdot\vec{\sigma}), \ \mathbf{R_i}\in SO(3) \end{equation} Clearly, since the $\mathbf{R_i}$ are invertible, the rank of the linear map $T$ is determined by the choice of the values $\{\lambda_i\}$ of the diagonal map $\mathbf{L}$; thus, the rank of $\mathbf{L}$ completely determines the order of a potential root $T$. Indeed, if $T$ is a $k^{\rm th}$ order root of the CDC, then $k=3$ if the rank of $\mathbf{L}$ is two and $k=2$ if the rank is one, as can be seen from the corresponding Jordan normal forms.
To explore the possible configurations of the $\{\lambda_i\}$ resulting in completely positive maps, we need to take a closer look at the set of Pauli diagonal channels. It is a well-known fact that the set of Pauli diagonal channels forms a tetrahedron \cite{mary2}. To get a suitable characterization of the rank of these channels, we use the fact that every Pauli diagonal channel has a Kraus decomposition of the form $T(X)=\sum_{i=0}^3\mu_i\sigma_i X \sigma_i$. Some straightforward calculations show that these channels are indeed Pauli diagonal with the relations \begin{align}\label{eq:muuslambdas} \lambda_0&=\mu_0+\mu_1+\mu_2+\mu_3 \\ \lambda_1&=\mu_0+\mu_1-\mu_2-\mu_3 \notag\\ \lambda_2&=\mu_0-\mu_1+\mu_2-\mu_3 \notag \\ \lambda_3&=\mu_0-\mu_1-\mu_2+\mu_3 \notag \end{align} between $\{\mu_i\}$ and $\{\lambda_i\}$. If, on the other hand, we consider a map $T$ in terms of the $\lambda_i$, we can solve \eqref{eq:muuslambdas} for the $\mu_i$ to get a Kraus decomposition of $T$. It is easy to see that the eigenvectors of the Choi operator $\choi_T:=T\otimes \id(\ketbra{\Omega}{\Omega})$ are the vectors $\ket{\Omega_i}:=\hilberteins\otimes\sigma_i\ket{\Omega}$ with corresponding eigenvalue $\mu_i$. Hence, complete positivity of $T$ is expressed by the condition that all $\mu_i$ are nonnegative, and unitality of $T$ requires that the $\mu_i$ add up to one, i.e. $\lambda_0=1$. This yields the inequalities \begin{align}\label{eq:tetrahedron} \lambda_1+\lambda_2+\lambda_3& \geq -1\\ \lambda_1-\lambda_2-\lambda_3&\geq -1\notag\\ -\lambda_1+\lambda_2-\lambda_3& \geq -1\notag \\ -\lambda_1-\lambda_2+\lambda_3& \geq -1\notag \end{align} characterizing the tetrahedron formed by the set of admissible $\{\lambda_i\}$. These relations imply that the set of Pauli diagonal channels with one eigenvalue equal to zero can be represented by squares inside the tetrahedron (see figure \ref{fig:RootQubitChannel}).\\
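The relations \eqref{eq:muuslambdas} amount to a Hadamard-type transform between the Kraus weights $\mu_i$ and the eigenvalues $\lambda_i$, which can be checked directly. A sketch with our own variable names (the weights $\mu$ are an arbitrary admissible example):

```python
import numpy as np

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

mu = np.array([0.4, 0.3, 0.2, 0.1])   # example Kraus weights: nonnegative, sum 1

def T(X):
    # Pauli Kraus form: sum_i mu_i sigma_i X sigma_i
    return sum(m * (s @ X @ s) for m, s in zip(mu, paulis))

# the Hadamard-type matrix encoding the mu <-> lambda relations
H = np.array([[1, 1, 1, 1],
              [1, 1, -1, -1],
              [1, -1, 1, -1],
              [1, -1, -1, 1]])
lam = H @ mu

# the channel is indeed Pauli diagonal: T(sigma_i) = lambda_i sigma_i
for l, s in zip(lam, paulis):
    assert np.allclose(T(s), l * s)
```

Unitality corresponds to `lam[0] == 1`, and complete positivity to all entries of `mu` being nonnegative.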
\begin{figure}
\caption{The tetrahedron of Pauli diagonal channels parameterized by the eigenvalues $\lambda_i$ with $i=1,2,3$. The extremal points of the tetrahedron represent the configuration where exactly one of the Kraus weights $\mu_i$ equals one. The squares inside the tetrahedron mark the configurations, where one of the eigenvalues of the channel is equal to zero. The bistochastic CDC is represented by the intersection point of all three squares.}
\label{fig:RootQubitChannel}
\end{figure}
Furthermore, for qubit channels we find an interesting connection between the Kraus rank of a channel and its rank as a linear map. At an extremal point of the tetrahedron we find a reversible channel with Kraus rank one. If we move along the line between two extremal points we increase the Kraus rank by one while the map remains invertible. At the midpoint of such a line the rank is lowered by two, because the midpoint lies on the intersection of two squares. Hence we need at least three Kraus operators to construct a qubit channel with rank exactly one less than maximal. Since the Pauli diagonal map completely determines the rank, this holds for all bistochastic qubit channels.\\
The following theorem gives a complete characterization of maximal roots for qubit systems: \begin{theorem}[Maximal qubit roots] A bistochastic channel $T:\mathcal{S}(\C^2)\mapsto \mathcal{S}(\C^2)$ is a maximal root of the bistochastic CDC iff $T$ is of the form \begin{equation} T(\rho(\vec{r}))=\frac{1}{2}(\hilberteins+(\mathbf{R_2^{T}R_1 L R_2}\vec{r})\cdot\vec{\sigma}), \ \mathbf{R_i}\in SO(3) \end{equation} where $\mathbf{L}$ has exactly two non-zero eigenvalues, $\mathbf{R_2}\in SO(3)$ is an arbitrary rotation and there exist angles $\phi, \theta \in [0,2\pi)$ such that \begin{equation} \mathbf{R_1}= \left( \begin{array}{ccc} 0 & \cos \theta & -\sin \theta \\ \sin \phi & \cos \phi \sin \theta & \cos \phi \cos \theta \\ -\cos \phi & \sin \phi \sin \theta & \sin \phi \cos \theta \end{array} \right)\, . \end{equation} If we choose $\mathbf{L}=\text{diag}({0,\lambda_2, \lambda_3})$ we have the restrictions $\vert \lambda_2\pm\lambda_3\vert\leq 1$ for complete positivity and $\tan \phi =-\frac{\lambda_2}{\lambda_3}\tan \theta $. \end{theorem} \begin{proof} As already mentioned, for a maximal root of the bistochastic CDC we have to choose the Pauli diagonal map $\mathbf{L}$ in \eqref{eq:bloch} such that it has exactly two non-zero eigenvalues. Without loss of generality we assume $\mathbf{L}=\text{diag}({0,\lambda_2, \lambda_3})$. In the following, we determine all roots of the bistochastic CDC whose decomposition \eqref{eq:bloch} is such that $\mathbf{R_2}=\mathbbm{1}$. The general case reduces to this particular setting by the following argument: If $\mathbf{R_1L R_2}$ is a general root of the bistochastic CDC we have \begin{equation} \mathbf{0}=(\mathbf{R_1 L R_2})^n= \mathbf{R_2}^{-1}(\mathbf{ R_2 R_1 L})^n \mathbf{R_2}\, , \end{equation} which means that $\mathbf{ R L}$, with $\mathbf{ R}=\mathbf{ R_2 R_1}$, is a root as well. 
On the other hand, if $\mathbf{R_1L}$ is a root then \begin{equation} \mathbf{0}=(\mathbf{R_1 L})^n=\mathbf{R_2}(\mathbf{R_2}^{-1}\mathbf{R_1 L R_2})^n \mathbf{R_2}^{-1}\, , \end{equation} and hence $\mathbf{R_2}^{-1}\mathbf{R_1 L R_2}$ is again a root.
Hence, the task is to determine all three-dimensional orthogonal matrices $\mathbf{R}$ such that the composition
\begin{equation}
D_T=\mathbf{R}\cdot \text{diag}({0,\lambda_2, \lambda_3}) \end{equation} is nilpotent. This is equivalent to saying that all eigenvalues of $D_T$ equal zero. Since $D_T$ is of the form \begin{equation} D_T= \left( \begin{array}{ccc} 0 & \lambda_2 r_{12} & \lambda_3 r_{13} \\ 0 & \lambda_2 r_{22} & \lambda_3 r_{23} \\ 0 & \lambda_2 r_{32} & \lambda_3 r_{33} \end{array} \right) \end{equation} its characteristic polynomial can be written as $\chi (z)= \det (D_T-z\mathbbm{1})=-z\cdot (z^2 - z\text{tr} ( \Lambda) +\det ( \Lambda))$, where we introduced the submatrix \begin{equation} \Lambda= \left( \begin{array}{cc}
\lambda_2 r_{22} & \lambda_3 r_{23} \\
\lambda_2 r_{32} & \lambda_3 r_{33} \end{array} \right)\,. \end{equation} Hence, the condition that all three eigenvalues of $D_T$ equal zero is equivalent to the condition: \begin{equation} \label{eq:condition} \det (\Lambda)=0 \quad \text{and} \quad \text{tr} (\Lambda)=0 \end{equation} The first part of the condition is already satisfied if the $2\times 2$ submatrix of $\mathbf{R}$ consisting of $\{r_{22},r_{23},r_{32},r_{33}\}$ has a vanishing determinant. Therefore we choose the following ansatz: \begin{equation} \mathbf{R} = \left( \begin{array}{ccc}
& \hdots & \\ \vdots & a & b \\
& z\cdot a & z\cdot b \end{array} \right)\, ,\quad a,b,z \in \mathbbm{R} \end{equation} We get additional restrictions on $\{a,b,z\}$ from the orthogonality of $\mathbf{R}$, that is, the rows of $\mathbf{R}$ have to be normalized and mutually orthogonal. This leads to the conditions \begin{eqnarray} a^2+b^2&<&1 \\ z^2(a^2+b^2)&<&1 \nonumber \end{eqnarray} and \begin{equation} (z^2+1)(a^2+b^2)=1 \, . \end{equation} These relations suggest a parametrization in terms of trigonometric functions. If we expand the rows and columns accordingly, we find that \begin{equation} \mathbf{R}= \left( \begin{array}{ccc} 0 & \cos \theta & -\sin \theta \\ \sin \phi & \cos \phi \sin \theta & \cos \phi \cos \theta \\ -\cos \phi & \sin \phi \sin \theta & \sin \phi \cos \theta \end{array} \right)\, \text{with} \ \phi,\theta \in \mathbbm{R} \label{eq:rot} \end{equation} is a possible parametrization of all $\mathbf{R}\in SO(3)$ which fulfill the first condition of \eqref{eq:condition}. The second condition fixes a relation between the two variables, such that $\theta$ can be an arbitrary angle and $\phi$ has to satisfy $\tan \phi =-\frac{\lambda_2}{\lambda_3}\tan \theta $.
\end{proof}
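As a quick numerical illustration of this proof (a sketch added for convenience; the values of $\lambda_2,\lambda_3,\theta$ are arbitrary sample choices), one can verify that the parametrization \eqref{eq:rot} together with $\tan\phi=-\frac{\lambda_2}{\lambda_3}\tan\theta$ yields an orthogonal $\mathbf{R}$ and a nilpotent $D_T$:

```python
import numpy as np

# sample spectrum and angle (arbitrary illustrative choices)
lam2, lam3, theta = 0.3, 0.7, 0.5
# second condition of the proof: tan(phi) = -(lam2/lam3) * tan(theta)
phi = np.arctan(-(lam2 / lam3) * np.tan(theta))

# the parametrized rotation matrix R from the proof
R = np.array([
    [0.0,          np.cos(theta),                -np.sin(theta)],
    [np.sin(phi),  np.cos(phi) * np.sin(theta),   np.cos(phi) * np.cos(theta)],
    [-np.cos(phi), np.sin(phi) * np.sin(theta),   np.sin(phi) * np.cos(theta)],
])

D_T = R @ np.diag([0.0, lam2, lam3])

# R is orthogonal, and D_T is nilpotent: all eigenvalues vanish, so D_T^3 = 0
orthogonality_error = np.linalg.norm(R @ R.T - np.eye(3))
nilpotency_error = np.linalg.norm(np.linalg.matrix_power(D_T, 3))
```

The check only uses the trace and determinant conditions \eqref{eq:condition}, so it is insensitive to the particular sample values chosen above.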
\subsection{Roots via perturbation} \label{roots}
Inspired by the approach of the last section, one might try to imitate the construction of a maximal root via the composition of a rank-lowering and rotation maps in higher dimensions. For instance, one could replace the Pauli matrices by Weyl operators and consequently compose a rank-lowering Weyl diagonal map with some unitary channel representing the rotation. This procedure fails mainly due to a lack of a Bloch-sphere interpretation of the state space and the dimensional gap between $SU(d)$ and $SO(d^2-1)$, i.e., the unitary channels do not cover all possible $SO(d^2-1)$-rotations of the state space in dimensions beyond $d=2$.
Nevertheless, our aim is to construct maximal roots of the bistochastic CDC for arbitrary system dimension $d$. For this purpose, we recall that the Jordan normal form of a maximal root is, in an appropriate basis, given by the direct sum of a projector on the maximally mixed state $\sigma=\frac{1}{d}\mathbbm{1}$, representing the bistochastic CDC, and a maximal Jordan block to the eigenvalue zero. The key idea is now to consider this nilpotent Jordan block as an $\varepsilon$-weighted perturbation of the bistochastic CDC in such a way that the complete positivity remains untouched. The following theorem shows that this is indeed possible. \begin{theorem} \label{thm:rootsperturb} Let $B:=\{A_1=\mathbbm{1}_d,A_2,...,A_{d^2}\}$ be a basis of hermitian operators in $\M_d$ and $B^*=\{A^1=\frac{\mathbbm{1}_d}{d},A^2,...,A^{d^2}\}$ its dual basis with respect to the Hilbert-Schmidt scalar product $\mathrm{tr} ({A^j}^*A_i)=\delta_{i,j}$. Then, for small enough $\varepsilon\in\R$, the map \begin{equation} T_{\varepsilon}(X)=\frac{1}{d}\mathbbm{1} \mathrm{tr} X+\varepsilon \sum_{i=2}^{d^2-1}A_i\mathrm{tr} ({A^{i+1}}^* X)\,, \end{equation} is completely positive and therefore a root of the bistochastic CDC of maximal order $d^2-1$. \end{theorem} \begin{proof} First we emphasize that the dual basis $B^{*}$ is hermitian as well. To verify this, consider the $A_i$ as a basis for the real vector space of all hermitian operators and construct the unique dual basis $A^j$ as a linear combination of the $A_i$ with real coefficients. The key point is to choose $\varepsilon$ in such a way that $T_{\varepsilon}$ becomes completely positive. A way to establish the existence of such an $\varepsilon$ is to consider the Choi operator $\choi_{T_\varepsilon}$ and choose $\varepsilon$ such that $\choi_{T_\varepsilon}$ is positive. 
A straightforward calculation yields \begin{equation} \choi_{T_{\varepsilon}}= \frac{1}{d^2}\mathbbm{1} + \frac{\varepsilon}{d}\sum_{i=2}^{d^2-1}A_i\otimes \overline{A^{i+1}}\,, \end{equation} where $\overline{A^i}$ denotes the complex conjugate of $A^i$ in a fixed basis of $\C^d$. Hence, the map $T_\varepsilon$ is completely positive if $\varepsilon$ is small enough to ensure \begin{equation}\label{eq:epsbound} - \mathbbm{1} \leq \varepsilon d \sum_{i=2}^{d^2-1}A_i\otimes \overline{A^{i+1}}=:\rho_\varepsilon\,. \end{equation} Since the chosen basis of $\M_d$ and its dual are hermitian this amounts to a comparison between the eigenvalues of the hermitian operators $\rho_\varepsilon$ and $\mathbbm{1}$. The continuity of the eigenvalue distribution of $\rho_\varepsilon$ assures the existence of a proper $\varepsilon$ for arbitrary $d$ and therefore guarantees the complete positivity of $T_{\varepsilon}$ for small enough $\varepsilon$. \end{proof} The construction scheme presented in the proof of theorem \ref{thm:rootsperturb} also applies for all CDCs $T_\sigma$ such that $\sigma$ has full rank. If, however, the rank $r$ of $\sigma$ is less than maximal this construction only leads to a root of order $r^2-1$.
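In the qubit case the construction of theorem \ref{thm:rootsperturb} can be checked numerically. The following sketch (an illustration, not part of the proof) uses the Pauli basis, for which the dual basis is $A^i=A_i/2$, and the sample value $\varepsilon=0.2$; it verifies that the (unnormalized) Choi operator of $T_\varepsilon$ is positive and that the $(d^2-1)$-fold application of $T_\varepsilon$ equals the bistochastic CDC:

```python
import numpy as np

# hermitian operator basis for d = 2: identity and Pauli matrices
I = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A = [I, sx, sy, sz]                      # basis B with A_1 = identity
Adual = [M / 2 for M in A]               # dual basis w.r.t. Hilbert-Schmidt
d, eps = 2, 0.2                          # sample perturbation strength

def T_eps(X):
    # T_eps(X) = (1/d) tr(X) 1 + eps * sum_{i=2}^{d^2-1} A_i tr(A^{i+1} X)
    out = np.trace(X) / d * I
    for i in range(1, d**2 - 1):         # 0-based: i = 1, 2 <-> i = 2, 3
        out = out + eps * A[i] * np.trace(Adual[i + 1] @ X)  # dual basis hermitian
    return out

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
X = M + M.conj().T                       # random hermitian input

# (d^2 - 1)-fold application reaches the bistochastic CDC
Y = T_eps(T_eps(T_eps(X)))
cdc_error = np.linalg.norm(Y - np.trace(X) / d * I)

# unnormalized Choi operator C = sum_ij E_ij (x) T_eps(E_ij); positivity <=> CP
E = lambda i, j: np.outer(np.eye(2)[i], np.eye(2)[j]).astype(complex)
C = sum(np.kron(E(i, j), T_eps(E(i, j))) for i in range(2) for j in range(2))
min_choi_eig = min(np.linalg.eigvalsh(C))
```

The Choi convention used here differs from the one in the proof by an overall normalization, which does not affect positivity.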
Of course, for a given basis $A_i$ and its dual $A^i$ there is an explicit bound on $\varepsilon$ in terms of the eigenvalues of $\rho_\varepsilon$. This bound seems to be decreasing with the dimension $d$, due to the enlargement of the spectral radius in \eqref{eq:epsbound}, while going to larger dimensions. This suggests that all maximal roots of the bistochastic CDC in higher dimensions obtained by this construction get closer to the bistochastic CDC with increasing $d$. The following proposition refutes this statement: \begin{proposition} For arbitrary dimension $d\in\N$ there is always a maximal root $T_{\varepsilon}$ of the bistochastic CDC such that their cb-norm distance satisfies \begin{equation} \Vert T_{\varepsilon}-T_{\frac{1}{d}\mathbbm{1}}\Vert_{cb}\geq \frac{d-1}{d}\, . \end{equation} \end{proposition}
\begin{proof} We start with a further specification of the basis $A_i$ and choose $\Vert A_3\Vert_\infty=1$. This implies the following lower bound for the cb-norm difference between $T_\varepsilon$ and the bistochastic CDC: \begin{align} \Vert T_\varepsilon-T_{\frac{1}{d}\mathbbm{1}}\Vert_{cb}&\geq \sup\limits_{\Vert A\Vert_{\infty}\leq1}\Vert T_\varepsilon(A)-T_{\frac{1}{d}\mathbbm{1}}(A)\Vert_{\infty} \\ &\geq\Vert T_\varepsilon(A_3)-T_{\frac{1}{d}\mathbbm{1}}(A_3)\Vert_{\infty} \nonumber\\ &= \vert\varepsilon\vert\Vert A_2\Vert_{\infty} \nonumber \end{align} Hence, the cb-norm is bounded from below by $(d-1)/d$ if we are free to choose $\varepsilon \Vert A_2\Vert_{\infty}=(d-1)/d$. Of course, this choice should not violate the complete positivity of $T_\varepsilon$.
In order to show that this is indeed possible with some further restrictions to the choice of $A_3$, we consider the transformation $A_i\mapsto \delta^{-i+3} A_i$ and $A^i\mapsto \delta^{i-3}A^i$ with $\delta\in \R\backslash\{0\}$ and $i>3$, which still gives a maximal root of the bistochastic CDC. The Choi operator $\choi_{T_\varepsilon}$ transforms under this map according to \begin{equation} \choi_{T_\varepsilon} \mapsto \frac{1}{d^2}\mathbbm{1} + \frac{\varepsilon}{d}A_2\otimes \overline{A^{3}}+\delta\frac{\varepsilon}{d}\sum_{i=3}^{d^2-1}A_i\otimes \overline{A^{i+1}} \,. \end{equation} Since we get a maximal root of the bistochastic CDC for arbitrarily small but non-zero $\delta$, perturbation theory tells us that we only have to compare the eigenvalues of the first two parts of the sum when considering the limit $\delta\rightarrow 0$. In other words, we may neglect the influence of terms $A_i\otimes \overline{A^{i+1}}$ with $i>2$ on the eigenvalue problem if $\delta$ is chosen sufficiently small. Hence, we have to establish the inequality \begin{equation}\label{eq:firsttwo} -\mathbbm{1} \leq d \varepsilon A_2\otimes \overline{A^{3}}\,. \end{equation} By taking the operator norm on both sides of the inequality, we find the following sufficient criterion for complete positivity of $T_{\varepsilon}$: \begin{equation}\label{eq:ineq}
d\,|\varepsilon|\,\Vert A_{2}\Vert_{\infty}\Vert \overline{A^{3}}\Vert_{\infty}\leq 1\,. \end{equation}
To obtain the restrictions imposed on $A_3$ by this inequality, we furthermore assume the basis $B$ to be orthogonal, that is, $A^i=A_i/\mathrm{tr} A_i^2$. We denote the eigenvalues of $A_3$ by $a_3^i$, then, orthogonality to $A_1=\mathbbm{1}$ requires $\sum_{i}a_3^{i}=0$. Additionally, $A_3$ must satisfy $\max\limits_i\vert a^i_3\vert=1$ to guarantee $\Vert A_3\Vert_\infty=1$. With these notations \eqref{eq:ineq} changes into \begin{equation}
|\varepsilon|\Vert A_{2}\Vert_{\infty}\leq\frac{|\mathrm{tr} A_{3}^2|}{d}=\frac{\sum_i\vert a^i_3\vert^2}{d}. \end{equation} Thus, equation \eqref{eq:firsttwo} is satisfied and $T_{\varepsilon}$ is completely positive, if we choose $A_3$ such that $d-1\leq \sum_i\vert a^i_3\vert^2$. This can be satisfied in even dimensions by choosing $a_3^{i}=(-1)^{i}$, actually leading to a lower bound of $1$ for the cb-norm difference. In odd dimensions we choose $a_3^{i}=(-1)^{i}$ for $i<d$ and $a_3^d=0$. \end{proof}
\section{Finitely correlated construction of $k$-dependent states}\label{sec:fcs} The general concept of \emph{finitely correlated states} and the occurring correlations are considered in reference \cite{fcs}. We want to deal with the correlations that occur if a maximal CDC-root $S$ is used to generate a functional on the infinite spin chain. For this purpose we consider the quasi-local algebra $\mathcal{A}:=\bigotimes_{i=-\infty}^{\infty}\M_{d_i}$ generated by algebras of finite subsets $\mathcal{A}_\Lambda:=\bigotimes_{z\in\Lambda} \mathcal{A}_z$ on finite chain elements $\Lambda\subset \Z$. A $k$-dependent state $\omega$ is defined in the following way \cite{Petz,Matus}: \begin{definition}[$k$-dependent state] A state $\omega: \mathcal{A}\mapsto \C$ is called \textbf{$k$-dependent} if algebras separated by $k$ or more sites are independent, i.e. \begin{equation} \label{eq:dependent} \omega\left(A_{(-\infty,n)}\otimes\underbrace{\mathbbm{1}\otimes...\otimes\mathbbm{1}}_{k\text{-times}}\otimes A_{[n+k+1,\infty)}\right)=\omega\left(A_{(-\infty,n)}\right)\omega\left(A_{[n+k+1,\infty)}\right). \end{equation} \end{definition} To understand the application of maximal CDC-roots in this context, we first recall that every channel $T:\M_{d'}\mapsto \M_{d}$ admits a Stinespring representation \cite{stinespring}, i.e. \begin{equation}\label{eq:stinespring_dilation2} T(X)=V^*(X\otimes\mathbbm{1}_k)V\ \ \forall X\in\M_{d'}, \end{equation} where $V:\mathbb{C}^d\mapsto\mathbb{C}^{d'}\otimes\mathbb{C}^{k}$ is an isometry, that is $V^*V=\mathbbm{1}_d$. Since we want to concatenate the channel $T$, input dimension $d=\text{dim}(\hilbertH_{in})$ and output dimension $d'=\text{dim}(\hilbertH_{out})$ will be equal. Furthermore, the dimension $k$ of the ancilla system is equal to the Kraus rank of $T$. 
Keeping this representation in mind, we define the following map $\mathbbm{E}_{A}:\M_d\otimes \M_k\mapsto \M_d$: \begin{equation} \quad \mathbbm{E}_{A}(X)=V^*(X\otimes A)V\, , \end{equation} where the isometry $V$ is chosen to be the same as in \eqref{eq:stinespring_dilation2} and therefore $\mathbbm{E}_{\mathbbm{1}}(X)=T(X)$ holds. The concatenation of several $\mathbbm{E}_{A_i}$, with $i=1,\ldots ,n$, together with any $\rho\in\mathcal{S}(\C^d)$ defines a functional $\omega_n:\M_k^{\otimes n}\mapsto \C$ via \begin{equation} \omega_n(A_1\otimes A_2 \otimes ... \otimes A_n)=\mathrm{tr}\left(\rho\mathbbm{E}_{A_1}\circ\mathbbm{E}_{A_2}\circ...\circ\mathbbm{E}_{A_n}(\mathbbm{1})\right). \end{equation} Due to the unitality of $T$, we can extend this functional to the positive half chain $\mathcal{A}_+:=\bigotimes_{i=0}^{\infty}\M_k$ via: \begin{align} \omega_{n+1}\left(A_{[1,n]}\otimes\mathbbm{1}\right)&=\mathrm{tr}\left(\rho\mathbbm{E}_{A_1}\circ\mathbbm{E}_{A_2}\circ...\circ\mathbbm{E}_{A_n}\circ\mathbbm{E}_{\mathbbm{1}}(\mathbbm{1})\right)\\ &=\mathrm{tr}\left(\rho\mathbbm{E}_{A_1}\circ\mathbbm{E}_{A_2}\circ...\circ\mathbbm{E}_{A_n}(\mathbbm{1})\right) \nonumber \\ &=\omega_n\left(A_{[1,n]}\right)\, .\nonumber \end{align} Furthermore, if we choose $\rho$ as an invariant state of $T$, i.e. $\mathrm{tr}\left(\rho\mathbbm{E}_{\mathbbm{1}}(X)\right)=\mathrm{tr}\left(\rho X\right)$, we can extend the functional also to the negative half-chain $\mathcal{A}_-:=\bigotimes_{i=-\infty}^{0}\M_k$ through setting: \begin{align} \omega_{n+1}\left(\mathbbm{1}\otimes A_{[1,n]}\right)&=\mathrm{tr}\left(\underbrace{\rho\mathbbm{E}_{\mathbbm{1}}}_{\rho}\circ\mathbbm{E}_{A_1}\circ\mathbbm{E}_{A_2}\circ...\circ\mathbbm{E}_{A_n}(\mathbbm{1})\right)\\ &=\omega_n\left(A_{[1,n]}\right)\, .\nonumber \end{align} Combining these two extensions we define a functional on the infinite spin chain $\mathcal{A}$. 
If we furthermore define the \emph{shift operator} $\sigma$ by setting \begin{equation} \sigma:\mathcal{A}\mapsto \mathcal{A} \quad,\quad \sigma(A_1\otimes...\otimes A_n\otimes\mathbbm{1})=\mathbbm{1}\otimes A_1\otimes\cdots\otimes A_n, \end{equation} we find that $\omega$ is a \emph{translation invariant state}, i.e. $\omega=\omega\circ\sigma$. We refer to $\omega$ as \emph{finitely correlated state} generated by $(T,\rho)$.
Now we choose the generating channel $T$ as a maximal root of the bistochastic CDC, that is, $\mathbbm{E}_{\mathbbm{1}}^{d^2-1}(X)=\frac{\mathbbm{1}}{d}\mathrm{tr} X$. Our construction scheme for roots of the bistochastic CDC yields channels $T$ for which the maximally mixed state is an invariant state, hence, we choose $\rho=\frac{\mathbbm{1}}{d}$. This means we construct a finitely correlated state on an infinite spin chain with a certain dependency length. This length is equal to the order of the root, as the following calculation for the resulting functional $\omega$ shows: \begin{alignat}{1} &\omega\left(\ldots \otimes A_n\otimes\underbrace{\mathbbm{1}\otimes \ldots\otimes\mathbbm{1}}_{(d^2-1)\text{-times}}\otimes A_{n+d^2}\otimes \ldots\right)\\ \notag &=\mathrm{tr}\left(\rho \ldots\circ\mathbbm{E}_{A_{n}}\circ\mathbbm{E}_{\mathbbm{1}}^{d^2-1}\circ\mathbbm{E}_{A_{n+d^2}}\circ\ldots (\mathbbm{1}) \right)\\ \notag &=\mathrm{tr}\left(\rho\ldots \circ\mathbbm{E}_{A_{n}}(\mathbbm{1})\right)\mathrm{tr}\left(\rho\mathbbm{E}_{A_{n+d^2}}\circ\ldots (\mathbbm{1})\right)\\ \notag &=\omega\left(A_{(-\infty,n)}\right)\omega\left(A_{[n+d^2,\infty)}\right). \end{alignat} This matches the form of \eqref{eq:dependent} and therefore the maximal roots generate $(d^2-1)$-dependent states on an infinite spin chain, with $d$ the dimension of $\rho$.
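The factorization above can be made concrete numerically. The following sketch (an illustration added here, not taken from the original text) builds the qubit root $T_\varepsilon$ of the previous section, extracts Kraus operators $K_i$ with $T(X)=\sum_i K_i^*XK_i$ from its Choi operator, realizes $\mathbbm{E}_A(X)=\sum_{ij}\langle i|A|j\rangle K_i^*XK_j$, and checks that observables separated by $d^2-1=3$ identity sites factorize:

```python
import numpy as np

# the qubit perturbation root T_eps in explicit form (sample eps = 0.2)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
d, eps = 2, 0.2

def T(X):
    return (np.trace(X) / d * I2
            + eps * sx * np.trace(sy @ X) / 2
            + eps * sy * np.trace(sz @ X) / 2)

# Kraus operators with T(X) = sum_a K_a^dagger X K_a, from the Choi operator
E = lambda i, j: np.outer(np.eye(2)[i], np.eye(2)[j]).astype(complex)
C = sum(np.kron(E(i, j), T(E(i, j))) for i in range(2) for j in range(2))
vals, vecs = np.linalg.eigh(C)
K = [np.sqrt(v) * vecs[:, a].reshape(d, d).conj()
     for a, v in enumerate(vals) if v > 1e-12]
k = len(K)                              # ancilla (chain-site) dimension

def E_A(A, X):
    # E_A(X) = V^*(X (x) A)V = sum_{a,b} <a|A|b> K_a^dagger X K_b
    out = np.zeros((d, d), dtype=complex)
    for a in range(k):
        for b in range(k):
            out += A[a, b] * K[a].conj().T @ X @ K[b]
    return out

# sanity check: E_1 reproduces the channel T itself
kraus_error = max(np.linalg.norm(E_A(np.eye(k), E(i, j)) - T(E(i, j)))
                  for i in range(2) for j in range(2))

rho = I2 / d                            # invariant state of T
rng = np.random.default_rng(2)
def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

A_obs, B_obs = rand_herm(k), rand_herm(k)
Ik = np.eye(k, dtype=complex)
# observables separated by d^2 - 1 = 3 identity sites factorize
lhs = np.trace(rho @ E_A(A_obs, E_A(Ik, E_A(Ik, E_A(Ik, E_A(B_obs, I2))))))
rhs = np.trace(rho @ E_A(A_obs, I2)) * np.trace(rho @ E_A(B_obs, I2))
dependence_gap = abs(lhs - rhs)
```

The unitary freedom in the Kraus decomposition is irrelevant here, since only the resulting maps $\mathbbm{E}_A$ enter the functional.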
\section{Memory channels}\label{sec:memch} Commonly it is assumed that successive uses of a channel are uncorrelated in the sense that identical inputs at different time steps produce identical outputs. However, almost all real physical processes exhibit some correlation in time, i.e. the transformation of the states at some time $t$ depends to some extent on the states at previous times $t'<t$. If we consider the repeated application of a quantum channel, e.g. sending photons through some fiber, these correlations can be taken into account by introducing an additional system $\mathcal{M}$, which we refer to as the \emph{memory system}. \begin{figure}
\caption{A quantum memory channel $T$ and its $n$-fold concatenation $T_n$. $\mathcal{M}$ denotes the memory system which is used to model the interaction between the different concatenation steps.}
\label{fig:memory}
\end{figure} For the sake of clarity we define $\mathcal{S}(\hilbertH_{in})=:\mathcal{A}$ for the input system and $\mathcal{S}(\hilbertH_{out})=:\mathcal{B}$ for the output system. Then, the $n$-fold concatenation $T_n: \mathcal{M}\otimes \mathcal{A}^{\otimes n}\mapsto \mathcal{B}^{\otimes n}\otimes\mathcal{M}$ of the quantum memory channel $T:\mathcal{M}\otimes\mathcal{A} \mapsto \mathcal{B}\otimes\mathcal{M}$ can be expressed via \begin{equation}\label{eq:memory_concatenation} T_n=\left(id_{\mathcal{B}}^{\otimes n-1}\otimes T\right)\circ\ldots\circ\left(id_{\mathcal{B}}\otimes T\otimes id_{\mathcal{A}}^{\otimes n-2}\right)\circ\left(T\otimes id_{\mathcal{A}}^{\otimes n-1}\right)\, , \end{equation} where $id_{X}: X\rightarrow X$ denotes the ideal or noiseless channel on system $X$, see figure \ref{fig:memory} for an illustration. Due to this construction, the elements of the output algebras $\mathcal{B}$ will certainly be affected by the choice of the initial memory state $\rho$ of the memory system $\mathcal{M}$. Indeed, it is a natural question to ask if all elements of the output system are influenced in the same way or if the effect of the memory dies out after sufficiently many concatenation steps. This leads to the notion of \emph{forgetful} memory channels, which are those quantum memory channels where the influence of the initialization of the memory systems vanishes exponentially with the number of time steps, see reference \cite{memorychannel} for a precise definition. Our aim in this section is to construct forgetful memory channels where the effect of the initial memory system vanishes completely after a certain number of concatenations. We refer to such channels as \emph{strictly forgetful} memory channels. 
The following definition expresses this in mathematical terms: \begin{definition}[Strictly Forgetful Memory Channel] A quantum memory channel $T$ is strictly forgetful, iff there is some $n\in\mathbbm{N}$, such that \begin{equation} \label{Eq:DefStricForget}
||\mathrm{tr}_{\mathcal{B}^{\otimes n}}\left[T_n((\sigma_{\mathcal{M},1}-\sigma_{\mathcal{M},2})\otimes\sigma_{sys})\right]||_{1}=0 \end{equation} for all $\sigma_{\mathcal{M},1}$,\ $\sigma_{\mathcal{M},2} \in \mathcal{S}(\mathcal{M})$ and $\sigma_{sys}\in\mathcal{S}(\mathcal{A}^{\otimes n})$. \end{definition} This definition assumes that there is no entanglement between the initial memory state and the system. We refer to the minimal $n$, such that \eqref{Eq:DefStricForget} holds, as \emph{memory depth} of the channel $T$. Equivalent to this definition is to say that the \emph{memory branch}, i.e. the channel $T_{\mathcal{M}}:\mathcal{S}(\mathcal{M})\otimes\mathcal{S}(\mathcal{A}^{\otimes n})\mapsto \mathcal{S}(\mathcal{M})$ defined by $T_{\mathcal{M}}(\sigma_{\mathcal{M}}\otimes\sigma_{sys}):=\mathrm{tr}_{\mathcal{B}^{\otimes n}}[T_n(\sigma_{\mathcal{M}}\otimes\sigma_{sys})]$, completely depolarizes the information of the memory input state, see fig. \ref{fig:memory_2}. \begin{figure}
\caption{To characterize the strict forgetfulness of a memory channel $T$, we consider the channel's n-fold concatenation, where we neglect the output system $\mathcal{B}^{\otimes n}$ (depicted by the bins). The memory channel is strictly forgetful, iff there is an $n\in\mathbbm{N}$, such that the output on the memory system for any two different input states cannot be distinguished via an arbitrary measurement.}
\label{fig:memory_2}
\end{figure} Our aim is to construct strictly forgetful memory channels $T$ with memory depth $n$, exploiting the results about $n$-th order roots of the CDC. Similarly to the section about the construction of maximal roots in arbitrary dimensions, we construct the forgetful memory channels by expressing the problem in terms of matrix equations.
For this purpose we fix bases of operators on the memory channel's input, output and memory system, i.e., $\{M_{i}\}$ is a basis for $\mathcal{M}$ and $\{A_i\}$ respectively $\{B_i\}$ are bases of $\mathcal{A}$ respectively $\mathcal{B}$, where the number $i\in\{1,...,d^2_X\}$ of operators corresponds to the respective dimension of the Hilbert space. The matrix representation of a memory channel $T$ is then given through \begin{equation} \bra{i,j}{D_{T}}\ket{k,l}:= \mathrm{tr}({M^i}^{*}\otimes {B^j}^{*} \ T(M_k\otimes A_l))\,. \end{equation} In what follows, we try to identify the parts of the matrix which determine the forgetfulness of the corresponding memory channel $T$. For that purpose, we first consider the identity \begin{align} \mathrm{tr}_{\mathcal{B}^{\otimes 2}}\left[T_2(\sigma_{\mathcal{M}}\otimes\sigma_1\otimes\sigma_2)\right]&=\mathrm{tr}_{\mathcal{B}^{\otimes 2}}\left[\Big( id_{\mathcal{B}}\otimes T\Big)\circ\Big(T\otimes id_{\mathcal{A}}\Big)(\sigma_{\mathcal{M}}\otimes\sigma_1\otimes\sigma_2)\right]\\ &=\mathrm{tr}_{\mathcal{B}}\Big[T\Big(\mathrm{tr}_{\mathcal{B}}\left[T\left(\sigma_{\mathcal{M}}\otimes\sigma_1\right)\right]\otimes\sigma_2\Big)\Big]\, .\nonumber \end{align} If we apply this identity to the definition of the memory branch $T_{\mathcal{M}}(\sigma_{\mathcal{M}}\otimes\sigma_{sys})$ of the $n$-th concatenation acting on a separable input state, i.e. $\sigma_{sys}=\sigma_1\otimes...\otimes{\sigma_n}$, we find the following expression: \begin{align} \label{eq:concatenation_split} \notag T_{\mathcal{M}}(\sigma_{\mathcal{M}}\otimes\sigma_{sys})&=\mathrm{tr}_{\mathcal{B}^{\otimes n}}\left[T_n(\sigma_{\mathcal{M}}\otimes{\sigma_1}\otimes...\otimes{\sigma_n})\right]\\ &=\mathrm{tr}_{\mathcal{B}}\Big[T\Big(\mathrm{tr}_{\mathcal{B}}\big[T\big(\ldots \mathrm{tr}_{\mathcal{B}}\left[T(\sigma_{\mathcal{M}}\otimes\sigma_1)\right]\otimes\sigma_2\ldots\big)\big]\otimes \sigma_n\Big)\Big]. 
\end{align} By introducing the set of parameterized channels $T_{\mathcal{M},\sigma_i}:\mathcal{S}(\mathcal{M})\mapsto\mathcal{S}(\mathcal{M})$ on the memory branch, where $T_{\mathcal{M},\sigma_i}(\sigma_{\mathcal{M}}):=\mathrm{tr}_{\mathcal{B}}\left[T(\sigma_{\mathcal{M}}\otimes\sigma_i)\right]$ with $\sigma_i \in \mathcal{S}(\mathcal{A})$, we can rewrite \eqref{eq:concatenation_split} as concatenation of parameterized channels on the memory branch: \begin{equation}\label{eq:parameterized} T_{\mathcal{M}}(\sigma_{\mathcal{M}}\otimes\sigma_{sys})=T_{\mathcal{M},\sigma_{n}}\circ...\circ T_{\mathcal{M},\sigma_1}(\sigma_{\mathcal{M}}). \end{equation} We point out that so far we have just reformulated the description of the memory branch in terms of parametrized maps. If we now identify every $T_{\mathcal{M},\sigma_i}$ with its matrix representation $D_{T_{\mathcal{M},\sigma_{i}}}$ via the coefficients \begin{equation} \bra{k}D_{T_{\mathcal{M},\sigma_{i}}}\ket{l}:=\mathrm{tr}({M^k}^* \mathrm{tr}_{\mathcal{B}}[T(M_l\otimes\sigma_{i})]), \end{equation} for some basis of operators $\{M_k\}$ on the memory system, we can express \eqref{eq:parameterized} as multiplication of parametrized matrices: \begin{equation}\label{eq:conc} D_{T_{\mathcal{M}}}=D_{T_{\mathcal{M},\sigma_n}}\cdot D_{T_{\mathcal{M},\sigma_{n-1}}}\cdot\ldots \cdot D_{T_{\mathcal{M},\sigma_1}} \end{equation} Note that the left-hand-side of this equation implicitly depends on the system state $\sigma_{sys}$. Equation \eqref{eq:conc} turns out to be the crucial matrix equation to construct channels of memory depth $n$ utilizing the results of $n$-th order CDC-roots. Indeed, by the definition of a strictly forgetful channel $T$, the left-hand side needs to represent a completely depolarizing channel. 
Obviously, if the memory depth of $T$ is $n$, then $T_{\mathcal{M},\sigma}$ needs to be a root of a CDC of order $n_\sigma\leq n$ for all $\sigma\in\S(\mathcal{A})$ since we may choose $\sigma_i = \sigma$ for all $i$ in \eqref{eq:conc}. However, it is not enough to demand that $T_{\mathcal{M},\sigma}$ is a root of a CDC for all $\sigma\in\S(\mathcal{A})$ in order to construct a strictly forgetful memory channel. Indeed, there exist memory channels which are strictly forgetful for all system states of the form $\sigma_{sys}=\sigma^{\otimes n}$ but not for general system states, see the example at the end of this section.
The main obstacle to overcome is now that \eqref{eq:conc} needs to be completely depolarizing for all choices of the $\sigma_i$. Before we tackle this problem, let us argue that strict forgetfulness for all separable states of the input system $\mathcal{A}^{\otimes n}$ implies strict forgetfulness for all elements of $\S(\mathcal{A}^{\otimes n})$. Suppose $T$ is strictly forgetful for all separable states $\sigma_{sys}=\sigma_1\otimes...\otimes{\sigma_n}$ and choose an operator basis $\{\sigma_\alpha\}_{\alpha=1,\ldots,d_{\mathcal{A}}^2}$ of $\mathcal{A}$ such that each $\sigma_\alpha$ is a quantum state. An arbitrary, possibly entangled, state $\rho_{sys}$ can then be written as \begin{equation} \rho_{sys}=\sum_{\alpha_1,\ldots,\alpha_n}c_{\alpha_1\ldots\alpha_n}\sigma_{\alpha_1}\otimes\ldots\otimes\sigma_{\alpha_n}\, . \end{equation} Let $\sigma_{\mathcal{M},1}$ and $\sigma_{\mathcal{M},2}$ be arbitrary states of the memory, then we get \begin{align} \mathrm{tr}_{\mathcal{B}^{\otimes n}}\left[T_n(\sigma_{\mathcal{M},1}\otimes\rho_{sys})\right]&=\sum_{\alpha_1,\ldots,\alpha_n}c_{\alpha_1\ldots\alpha_n}\mathrm{tr}_{\mathcal{B}^{\otimes n}}\left[T_n(\sigma_{\mathcal{M},1}\otimes \sigma_{\alpha_1}\otimes\ldots\otimes\sigma_{\alpha_n})\right]\\ &=\sum_{\alpha_1,\ldots,\alpha_n}c_{\alpha_1\ldots\alpha_n}\mathrm{tr}_{\mathcal{B}^{\otimes n}}\left[T_n(\sigma_{\mathcal{M},2}\otimes \sigma_{\alpha_1}\otimes\ldots\otimes\sigma_{\alpha_n})\right] \nonumber \\ &=\mathrm{tr}_{\mathcal{B}^{\otimes n}}\left[T_n(\sigma_{\mathcal{M},2}\otimes\rho_{sys})\right]\, , \nonumber \end{align} and hence $T$ is also strictly forgetful for all $\rho_{sys}\in\S(\mathcal{A}^{\otimes n})$. Thus, we can restrict to separable states $\sigma_{sys}$ without loss of generality, which means that everything boils down to assuring that \eqref{eq:conc} equals a CDC. Hence, we need to have a closer look at the parametrized matrices $T_{\mathcal{M},\sigma_{i}}$. 
To facilitate the derivation we choose the bases $\{M_i\},\{A_i\}$ and $\{B_i\}$ to be hermitian (then the dual bases are hermitian as well) and the identity as the first element for each of them. We then find for some fixed $\sigma_i$: \begin{align} \bra{k}D_{T_{\mathcal{M},\sigma_{i}}}\ket{l}&=\mathrm{tr}_{\mathcal{M}}(M^k \ \ \mathrm{tr}_{\mathcal{B}}[T(M_l\otimes\sigma_{i})])\\ &=d_{\mathcal{B}}\cdot \mathrm{tr}_{\mathcal{MB}}(M^k\otimes \underbrace{\frac{1}{d_\mathcal{B}}\mathbbm{1}_{\mathcal{B}}}_{=B^1} \cdot T(M_l\otimes\sum_{r=1}^{d_\mathcal{A}^2}\underbrace{\mathrm{tr}(A^r \sigma_i)}_{=:\alpha_r(\sigma_i)} A_r))\nonumber \\ &=\frac{d_{\mathcal{B}}}{d_{\mathcal{A}}}\cdot \underbrace{\bra{k,1}{D_{T}}\ket{l,1}}_{=:\bra{k}X_{1,1}\ket{l}}+d_{\mathcal{B}}\cdot \sum_{r=2}^{d_{\mathcal{A}}^2}\alpha_r(\sigma_i)\underbrace{\bra{k,1}{D_{T}}\ket{l,r}}_{=:\bra{k}X_{1,r}\ket{l}}\, .\nonumber \end{align} This equation shows that for appropriate bases every matrix on the right-hand side of \eqref{eq:conc} can be expressed as the sum of the submatrix $X_{1,1}$ and a state-specific weighted combination of the submatrices $X_{1,2},\ldots,X_{1,d_\mathcal{A}^2}$ of the matrix $D_{T}$. As already mentioned, the matrix \begin{equation} \label{eq:memchan} D_{T_\mathcal{M},\sigma}=\frac{d_{\mathcal{B}}}{d_{\mathcal{A}}}\cdot X_{1,1}+d_{\mathcal{B}}\cdot\sum_{r=2}^{d_\mathcal{A}^2}\alpha_r(\sigma) X_{1,r} \end{equation} must necessarily represent a root of a CDC of order at most $n$ for arbitrary $\sigma$. Moreover, all $n$-fold products of matrices $D_{T_\mathcal{M},\sigma_i}$ with arbitrary $\sigma_i$'s must also represent the CDC. This can be assured by fixing bases and choosing the matrix blocks according to \begin{equation}\label{eq:submatrices} X_{1,l}=\left(
\begin{array}{cc}
\frac{d_\mathcal{A}}{d_\mathcal{B}}\delta_{1,l} & 0 \\
\frac{d_\mathcal{A}}{d_\mathcal{B}} v\delta_{1,l} & J_l \\
\end{array}
\right) \in M_{d_{\mathcal{M}}^2}(\mathbb{C})\, , \end{equation} where the $J_l$ are strictly upper triangular matrices of dimension $d_\mathcal{M}^2-1$ and $v$ is a real valued vector. By choosing these blocks appropriately it can be assured that the nilpotency order of the $J_l$ is $n$, which, by our results on maximal roots of the CDC, is bounded from above by $n\leq d_\mathcal{M}^2-1$. This choice ensures that every $D_{T_{\mathcal{M},\sigma_{i}}}$ in \eqref{eq:conc} is of the form \begin{equation} D_{T_{\mathcal{M},\sigma_{i}}}=\left(
\begin{array}{cc}
1 & 0 \\
v\ & \sum_{l=1}^{d_\mathcal{A}^2}\alpha_l(\sigma_i)J_l \\
\end{array}
\right). \end{equation}
If we put this into the right-hand side of \eqref{eq:conc} we find that the memory branch is indeed completely depolarizing for an arbitrary input state of the form $\sigma_{sys}=\sigma_1\otimes...\otimes \sigma_n$ and the necessary number of concatenation steps is upper bounded by $d_\mathcal{M}^2-1$.
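The effect of the chosen block structure can be illustrated with a small numerical sketch (sample dimensions and random blocks, not part of the original argument): a product of $d_\mathcal{M}^2-1$ matrices of the displayed form annihilates the nilpotent corner, so the result no longer depends on the memory input:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3                     # size of the nilpotent block, e.g. d_M^2 - 1 with d_M = 2

def D(N, v):
    # matrix representation [[1, 0], [v, N]] of a parametrized memory-branch map
    out = np.zeros((m + 1, m + 1))
    out[0, 0] = 1.0
    out[1:, 0] = v
    out[1:, 1:] = N
    return out

v = rng.normal(size=m)
# three different "system states": strictly upper triangular blocks N_i
Ns = [np.triu(rng.normal(size=(m, m)), k=1) for _ in range(m)]
P = D(Ns[2], v) @ D(Ns[1], v) @ D(Ns[0], v)

# the nilpotent corner is annihilated after m concatenations ...
block_error = np.linalg.norm(P[1:, 1:])
# ... so the output no longer depends on the memory input state
x1 = np.concatenate(([1.0], rng.normal(size=m)))   # two different memory inputs
x2 = np.concatenate(([1.0], rng.normal(size=m)))
forget_error = np.linalg.norm(P @ x1 - P @ x2)
```

The product of any $m$ strictly upper triangular $m\times m$ matrices vanishes, regardless of which blocks $N_i$ occur, which is exactly the mechanism used in the construction above.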
What remains is to show that complete positivity is not violated if we choose the sub-matrices $X_{1,k}$ in the proposed way. Here we emphasize that the behavior of the memory branch is affected only by the submatrices $X_{1,k}$, and we are completely free to choose $X_{2...d^2,k}$ to assure complete positivity. Moreover, we are free to choose the matrix blocks $J_l$ and $v$ with arbitrarily small but non-zero norm, without disturbing the forgetfulness property of $T$. Hence, the matrix blocks $J_l$ and $X_{2...d^2,k}$ can be considered as a perturbation of the bistochastic completely depolarizing channel $T_{\mathbbm{1}}$ on $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{M}$ defined via $T_{\mathbbm{1}}(\sigma_{\mathcal{MA}})=\frac{1}{d_\mathcal{B}d_\mathcal{M}}\mathbbm{1}_\mathcal{BM}$ for all $\sigma_{\mathcal{MA}}\in \S(\mathcal{MA})$.
A natural question to ask is whether this construction yields all strictly forgetful memory channels or if there are examples which cannot be transformed to the case of upper triangular matrices by clever choice of a basis. It turns out that our construction indeed covers all strictly forgetful memory channels. \begin{theorem} \label{thm:memchan} Let $T:\mathcal{M}\otimes\mathcal{A} \mapsto \mathcal{B}\otimes\mathcal{M}$ be a strictly forgetful memory channel. For appropriate bases of $\mathcal{A},\mathcal{B}$ and $\mathcal{M}$ the memory branch is of the form \eqref{eq:submatrices}. This implies that the memory depth of strictly forgetful memory channels is upper bounded by $d_\mathcal{M}^2-1$. \end{theorem} \begin{proof} We choose again bases $\{A_i\},\{B_i\}$ and $\{M_i\}$ of $\mathcal{A},\mathcal{B}$ and $\mathcal{M}$ such that the identity is the first element of the respective basis and the other elements of the basis are hermitian and tracefree. We adopt the notation of \eqref{eq:memchan} and denote the matrices with elements $\bra{k,1}{D_{T}}\ket{l,r}$ by $X_{1,r}$. Let $\widehat X_{1,r}$ denote the matrix obtained from $X_{1,r}$ by deleting first row and column. The first step in our proof is to verify that the memory channel $T$ is strictly forgetful with memory depth at most $n$ iff the matrix algebra generated by the matrices $\widehat X_{1,l}$ is nilpotent. Since we have chosen the basis $\{A_i\}$ hermitian, tracefree and $A_1=\mathbbm{1}$, there are positive numbers $r_l$ such that \begin{equation} \sigma=\frac{1}{d}A_1 +\sum_{l=2}^{d^2_{\mathcal{A}}}a_l A_l \end{equation} is a quantum state for all $\vert a_l\vert\leq r_l$. 
Thus, the condition that $T_{\mathcal{M},\sigma_n}\circ \ldots \circ T_{\mathcal{M},\sigma_1}$ is completely depolarizing for all $\sigma_1,\ldots,\sigma_n$ implies that \begin{equation} \sum_{l_1,\ldots ,l_n}a_{l_1}\ldots a_{l_n}\widehat X_{1,l_1}\cdot\ldots\cdot \widehat X_{1,l_n} =0\, , \end{equation} where the $a_{l_i}$ equal $1/d$ if $l_i=1$ and satisfy $\vert a_{l_i}\vert \leq r_{l_i}$ otherwise. If we consider this as a polynomial in the variables $a_{l_i}$ with matrix-valued coefficients we see that this equation implies that all coefficients must vanish, that is, $\widehat X_{1,l_1}\cdot\ldots\cdot \widehat X_{1,l_n}=0$ for all $l_i$. This proves that the algebra generated by the matrices $\widehat X_{1,l}$ is nilpotent.
By the theorem of Jacobson \cite{jacobson} this already implies that this algebra is simultaneously triangularizable, that is, there is a basis in which all matrices are upper triangular. The statement about the maximal memory depth of $T$ follows trivially. \end{proof} The crucial point in the proof of theorem \ref{thm:memchan} is to show that the algebra generated by the $\widehat X_{1,l}$ is nilpotent. If we were only able to prove that the subspace generated by the $\widehat X_{1,l}$ is nilpotent, which translates into the property that the memory channel $T$ is strictly forgetful for all system states of the form $\sigma_{sys}=\sigma^{\otimes n}$, we could not conclude that $T$ is strictly forgetful. In fact, there exist examples \cite{mathes} of nilpotent subspaces of matrices which are not simultaneously upper triangularizable. From such an example it is easy to construct a memory channel which is strictly forgetful for all $\sigma_{sys}=\sigma^{\otimes n}$ but not for general system states. Indeed, let all systems $\mathcal{A},\mathcal{B}$ and $\mathcal{M}$ be qubits and choose Pauli matrices as operator basis. Consider the following matrices \begin{equation} \widehat X_{1,2}= \left( \begin{array}{ccc}
0 & 0 & 0 \\
-a & 0 & 0 \\
0 & a & 0 \end{array} \right) \quad
\widehat X_{1,3}= \left( \begin{array}{ccc}
0 & b & 0 \\
0 & 0 & b \\
0 & 0 & 0 \end{array} \right)\quad
\widehat X_{1,1}= \widehat X_{1,4}=0 \end{equation} and assume all other matrix elements of $D_T$ to be zero, except $\bra{1}X_{1,1}\ket{1}=1$ which represents the trace-preserving property of $T$. Again, for small enough $a$ and $b$ this is completely positive. For a state $\sigma \in \S(\mathcal{A})$ we get the memory channel \begin{equation} D_{T_{\mathcal{M},\sigma}}= \left( \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & b\alpha_y(\sigma) & 0 \\
0 & -a\alpha_x(\sigma) & 0 & b\alpha_y(\sigma) \\
0 & 0 & a\alpha_x(\sigma) & 0 \end{array} \right)\, , \end{equation} where $\alpha_x(\sigma)$ respectively $\alpha_y(\sigma)$ denote the coefficients of $\sigma$ with respect to Pauli operator $x$ respectively $y$. Obviously, $D_{T_{\mathcal{M},\sigma}}$ is a root of the bistochastic CDC of order at most three, but the family of all $D_{T_{\mathcal{M},\sigma}}$ is not simultaneously upper triangular. This is expressed by the fact that the sequence \begin{equation} \left(D_{T_{\mathcal{M},\psi_x}} D_{T_{\mathcal{M},\psi_y}}\right) ^n= \left( \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & (-ab)^n & 0 \\
0 & 0 & 0 & (ab)^n \end{array} \right)\, , \end{equation} where $\psi_x$ respectively $\psi_y$ denote the eigenstates of Pauli operators $x$ respectively $y$ with eigenvalue $+1$, never exactly equals the CDC, although it converges exponentially in $n$ towards the CDC.
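These claims are easy to check numerically. Below is a minimal sketch with the illustrative values $a=b=0.3$ (any sufficiently small values keep the channel completely positive), assuming the bistochastic CDC is represented by $\mathrm{diag}(1,0,0,0)$ in this operator basis:

```python
import numpy as np

a = b = 0.3  # illustrative values, small enough for complete positivity

# D_{T_{M,psi_x}} (alpha_x = 1, alpha_y = 0) and D_{T_{M,psi_y}} (alpha_x = 0, alpha_y = 1)
P = np.array([[1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, -a, 0, 0],
              [0, 0, a, 0]], dtype=float)
Q = np.array([[1, 0, 0, 0],
              [0, 0, b, 0],
              [0, 0, 0, b],
              [0, 0, 0, 0]], dtype=float)
CDC = np.diag([1.0, 0.0, 0.0, 0.0])  # assumed matrix of the bistochastic CDC

# each memory channel alone is a root of the CDC of order three ...
assert np.allclose(np.linalg.matrix_power(P, 3), CDC)
assert np.allclose(np.linalg.matrix_power(Q, 3), CDC)

# ... but the alternating product only converges towards the CDC:
# its lower diagonal entries are (-ab)^n and (ab)^n, never exactly zero
M = np.linalg.matrix_power(P @ Q, 5)
assert abs(M[2, 2] - (-a * b) ** 5) < 1e-12 and M[2, 2] != 0
```

The nonzero entries $(\mp ab)^n$ shrink geometrically, which is the exponential convergence towards the CDC noted above.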
\section*{Discussion}
Our construction implies that the maximal memory depth of the channel depends only on the dimension of the memory system. Under further assumptions the bound can sometimes be improved. For example, if the memory channel is assumed to be reversible, and hence given by a unitary operator, the memory branch is a homomorphism. Reversible qubit channels have been discussed in \cite{rybar}, and the maximal memory depth was shown to be $2<3=2^2-1$. More generally, one can see that the nesting of linear subspaces implicit in the Jordan decomposition has to be replaced in the reversible case by a nesting of subalgebras. Since for these some dimensions are forbidden, one gets a tighter bound, namely depth $<2(d-1)$ \cite{GRW}.
\section*{Acknowledgments} We gratefully acknowledge financial support by the EU (projects CORNER and COQUIT) and stimulating conversations with David Gross and Thomas Salfeld.
\end{document}
\begin{document}
\title{\sc{Reconstructing Compositions}}
\pagestyle{main}
\begin{abstract} We consider the problem of reconstructing compositions of an integer from their subcompositions, which was raised by Raykova (albeit disguised as a question about layered permutations). We show that every composition $w$ of $n\ge 3k+1$ can be reconstructed from its set of $k$-deletions, i.e., the set of all compositions of $n-k$ contained in $w$. As there are compositions of $3k$ with the same set of $k$-deletions, this result is best possible. \end{abstract}
\minisec{Introduction} The Reconstruction Conjecture states that given the multiset of isomorphism types of $1$-vertex deletions (briefly, {\it $1$-deletions\/}) of a graph $G$ --- the {\it deck\/} of $G$ --- on three or more vertices, it is possible to determine $G$ up to isomorphism. The stronger set version of the conjecture due to Harary~\cite{harary:on-the-reconstr:} only allows access to the {\it set\/} of $1$-deletions and requires $G$ to have four or more vertices. These conjectures can be made even more difficult by considering $k$-deletions instead of $1$-deletions, for which we refer to Manvel~\cite{manvel:some-basic-obse:}.
Such reconstruction questions extend naturally to other combinatorial contexts. For example, Sch\"utzenberger and Simon (see Lothaire~\cite[Theorem 6.2.16]{lothaire:combinatorics-o:}) proved that every word of length $n\ge 2k+1$ can be reconstructed from its set of $k$-deletions (i.e., subwords of length $n-k$). This bound is tight because the words $(ab)^k$ (the word with $ab$ repeated $k$ times) and $(ba)^k$ have the same set of $k$-deletions: all words of length $k$ over the set $\{a,b\}$. Answering a question of Cameron~\cite{cameron:stories-from-th:}, Pretzel and Siemons~\cite{pretzel:on-the-reconstr:} considered the partition context, where they proved that every partition of $n\ge 2(k+3)(k+1)$ can be reconstructed from its set of $k$-deletions. (This bound is not known to be tight.)
Motivated by a question of Raykova~\cite{raykova:permutation-rec:} (described at the end of the paper), we consider the problem of set reconstruction for compositions (ordered partitions), establishing the following result.
\begin{theorem}\label{thm-comp-reconstruct} All compositions of $n\ge 3k+1$ can be reconstructed from their sets of $k$-deletions. \end{theorem}
Our proof of Theorem~\ref{thm-comp-reconstruct} illustrates an algorithm to perform the reconstruction. Perhaps more convincing than the proof is the Maple implementation of this algorithm, available from the author's homepage.
\minisec{Notation}
We view a composition as a word $w$ whose letters are positive integers, i.e., a word in $\mathbb{P}^*$. We denote the length of $w$ by $|w|$ and the sum of the entries of $w$ by $\|w\|$, and say that $w$ is a composition of $\|w\|$. A $1$-deletion of $w$ is a composition that can be obtained either by lowering a $\mathord{\ge}2$ entry of $w$ by $1$ or by removing an entry of $w$ that is equal to $1$. A $2$-deletion is then a $1$-deletion of a $1$-deletion, and so on.
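These deletion operations are easy to state programmatically. The following sketch (hypothetical helper names, in Python rather than the paper's Maple) builds the set of $k$-deletions by iterating $1$-deletions:

```python
def one_deletions(w):
    """All 1-deletions of a composition w (a tuple of positive integers):
    lower an entry >= 2 by one, or remove an entry equal to 1."""
    out = set()
    for i, e in enumerate(w):
        shrunk = (e - 1,) if e >= 2 else ()
        out.add(w[:i] + shrunk + w[i + 1:])
    return out

def k_deletions(w, k):
    """The set of k-deletions of w: compositions of ||w|| - k contained in w."""
    level = {tuple(w)}
    for _ in range(k):
        level = set().union(*(one_deletions(u) for u in level))
    return level
```

For instance, `one_deletions((2, 1))` is `{(1, 1), (2,)}`, and `k_deletions((2, 1, 3, 1, 2), 4)` contains `(1, 2, 1, 1)`, matching the fact (used below) that $1211$ is a $4$-deletion of $21312$.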
This notion naturally defines a partial order
\symbolfootnote[2]{This partial order was first considered by Bergeron, Bousquet-M\'elou, and Dulucq~\cite{bergeron:standard-paths-:}, and has since been studied by Snellman~\cite{snellman:saturated-chain:,snellman:standard-paths-:}, Sagan and Vatter~\cite{sagan:the-mobius-func:}, and Bj\"orner and Sagan~\cite{bjorner:rationality-of-:}.}
on compositions: $u\le w$ if $w$ contains a subword $w(i_1)w(i_2)\cdots w(i_\ell)$ of length $\ell=|u|$ such that $u(j)\le w(i_j)$ for all $1\le j\le \ell$. (We refer to the indices $i_1<\cdots<i_\ell$ as an {\it embedding\/} of $u$.) For example, $1211\le 21312$ because of the subword $2312$. If $u\le w$ then $u$ is a $(\|w\|-\|u\|)$-deletion of $w$. Returning to the previous example, $\|21312\|=9$ and $\|1211\|=5$, so $1211$ is a $4$-deletion of $21312$.
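The containment order can be decided greedily, matching each entry of $u$ against the leftmost large-enough remaining entry of $w$; a short sketch (hypothetical function name):

```python
def contains(u, w):
    """Decide u <= w in the composition order: does w have a subword
    w(i_1)...w(i_l), l = |u|, with u(j) <= w(i_j) for every j?
    Greedy matching is optimal: taking the leftmost feasible entry
    for u(j) never hurts the remaining matches."""
    i = 0
    for x in u:
        while i < len(w) and w[i] < x:
            i += 1
        if i == len(w):
            return False
        i += 1
    return True
```

On the example above, `contains((1, 2, 1, 1), (2, 1, 3, 1, 2))` returns `True`, via the embedding into the subword $2312$.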
\minisec{A lower bound} In the context of words, the fact that the sets of $k$-deletions of $(ab)^k$ and $(ba)^k$ are both equal to the set of all words of length $k$ over $\{a,b\}$ provides a lower bound on $k$-reconstructibility. Here we can use a very similar example: the sets of $k$-deletions of $(12)^k$ and $(21)^k$ are both equal to the set of all compositions of $2k$ in which no entry is greater than $2$. This implies that Theorem~\ref{thm-comp-reconstruct} is best possible.
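The claimed coincidence is quick to verify by machine for small $k$; a self-contained check for $k=3$ (illustrative sketch):

```python
def deletions(w, k):
    """The set of k-deletions of w, built by iterating 1-deletions."""
    level = {tuple(w)}
    for _ in range(k):
        nxt = set()
        for u in level:
            for i, e in enumerate(u):
                nxt.add(u[:i] + ((e - 1,) if e >= 2 else ()) + u[i + 1:])
        level = nxt
    return level

k = 3
A = deletions((1, 2) * k, k)   # k-deletions of (12)^k
B = deletions((2, 1) * k, k)   # k-deletions of (21)^k
assert A == B
# every member is a composition of 2k with no entry greater than 2
assert all(sum(d) == 2 * k and max(d) <= 2 for d in A)
```

For $k=3$ the common set has $13$ elements, the number of compositions of $6$ into parts $1$ and $2$.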
\minisec{The proof}
Our reconstruction algorithm/proof of Theorem~\ref{thm-comp-reconstruct} employs several composition statistics. One is the {\it exceedance number\/}, defined by $\operatorname{ex}(w)=\|w\|-|w|=\sum (w(i)-1)$ where the sum is over all entries $w(i)$. Another important composition statistic is the number of $1$'s in $w$, which can be approximated using its set of $k$-deletions:
\begin{lemma}\label{lem-determine-ones} The composition $w$ of $n\ge 3k+1$ has at least $k$ $1$'s if and only if either \begin{enumerate} \item[(1)] $1^{n-k}$ is a $k$-deletion of $w$, or \item[(2)] the longest $k$-deletion of $w$ is $k$ letters longer than the shortest $k$-deletion of $w$. \end{enumerate} Moreover, $w$ has precisely $k$ $1$'s if and only if one of the above conditions holds and $w$ has a $k$-deletion without $1$'s. \end{lemma} \begin{proof}
It is easy to see that if either (1) or (2) occurs then $w$ has at least $k$ $1$'s. Suppose then that $w$ has at least $k$ $1$'s. If $\operatorname{ex}(w)\le k$ then $1^{n-k}$ is a $k$-deletion of $w$, satisfying (1). On the other hand, if $\operatorname{ex}(w)>k$ then some $k$-deletion of $w$ has length $|w|$, while the fact that $w$ contains at least $k$ $1$'s guarantees that some $k$-deletion of $w$ has length $|w|-k$, satisfying (2). The second claim in the lemma is then readily verified. \end{proof}
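Lemma~\ref{lem-determine-ones} translates directly into a classifier; a sketch (hypothetical function name) taking the set of $k$-deletions as a set of tuples, together with $n$ and $k$:

```python
def count_ones_class(dels, n, k):
    """Classify the number of 1's in the unknown composition of n as
    '<k', '==k', or '>k', from its set of k-deletions (the lemma above)."""
    lengths = [len(d) for d in dels]
    at_least_k = ((1,) * (n - k) in dels            # condition (1)
                  or max(lengths) - min(lengths) == k)  # condition (2)
    if not at_least_k:
        return '<k'
    # with more than k 1's, every k-deletion keeps at least one 1,
    # so a 1-free k-deletion pins the count to exactly k
    return '==k' if any(1 not in d for d in dels) else '>k'
```

For example, on the deletion set $\{1^7\}$ (with $n=10$, $k=3$, coming from $w=1^{10}$) the classifier returns `'>k'`.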
Given a set of $k$-deletions of a composition, the first step in our algorithm is to apply Lemma~\ref{lem-determine-ones} to decide if the composition has fewer than $k$, precisely $k$, or more than $k$ $1$'s. The three cases are handled separately. The first two are relatively straightforward, while the last is more delicate.
\begin{lemma}\label{lem-few-ones} If $w$ is a composition of $n\ge 3k+1$ with fewer than $k$ $1$'s, then $w$ can be reconstructed from its set of $k$-deletions. \end{lemma} \begin{proof} Given the set of $k$-deletions of a composition $w$ satisfying these hypotheses, our algorithm can apply the result of Lemma~\ref{lem-determine-ones} to determine that $w$ has fewer than $k$ $1$'s. It then follows that $$
\operatorname{ex}(w)\ge\frac{\|w\|-(\mbox{\# of $1$'s in $w$})}{2}\ge\frac{2k+2}{2}=k+1. $$ From this we see that $w$ has the same length, say $m$, as its longest $k$-deletions, and then $\operatorname{ex}(w)$ can be easily determined: it is $k$ plus the exceedance number of one of the longest $k$-deletions.
Set $t=\operatorname{ex}(w)-k$ and define the composition $a=a(1)\cdots a(m)$ by $$ a(i)=\max\{s : \mbox{$\underbrace{1\cdots1}_{i-1}s\underbrace{1\cdots1}_{m-i}$ is, or is contained in, a $k$-deletion of $w$}\}. $$ It follows that $a$ satisfies \begin{equation}\label{eqn-u-v-w} a(i)=\min\{w(i), t+1\}. \end{equation} There are now two cases in which we are done: \begin{itemize}
\item If $\|a\|=n$ then $w$ must be equal to $a$. By \eqref{eqn-u-v-w}, this will occur if $w$ contains no entries greater than $t+1$.
\item If at most one entry of $a$ satisfies $a(i)=t+1$ --- which by \eqref{eqn-u-v-w} will occur if $w$ contains at most one entry $w(i)\ge t+1$ --- then \eqref{eqn-u-v-w} forces $w(j)=a(j)$ for all $j\neq i$ and then $w(i)$ can be calculated from the fact that $\|w\|=n$. \end{itemize} Suppose, for the sake of contradiction, that neither of these conditions holds. Thus $w$ must contain an entry $w(i)>t+1$ and another entry $w(j)\ge t+1$. We then have $$ k+t=\operatorname{ex}(w)\ge t + (t+1) + (\mbox{\# of $\mathord{\ge}2$ entries in $w$, not including $w(i),w(j)$}), $$ so \begin{equation}\label{eqn-few-ones-2} k\ge t+1+(\mbox{\# of $\mathord{\ge}2$ entries in $w$, not including $w(i),w(j)$}), \end{equation} while $$
|w|=2+(\mbox{\# of $1$'s in $w$})+(\mbox{\# of $\mathord{\ge}2$ entries in $w$, not including $w(i),w(j)$}), $$ so because $w$ contains fewer than $k$ $1$'s, \begin{equation}\label{eqn-few-ones-3}
(\mbox{\# of $\mathord{\ge}2$ entries in $w$, not including $w(i),w(j)$})\ge |w|-k-1. \end{equation}
Combining (\ref{eqn-few-ones-2}) and (\ref{eqn-few-ones-3}) shows that $|w|\le 2k-t$, but then $\operatorname{ex}(w)\ge (3k+1)-(2k-t)=k+t+1$, contradicting the definition of $t$ and completing the proof. \end{proof}
\begin{example} Suppose the reconstruction algorithm is given the set of $3$-deletions $$ \{52, 322, 412, 421, 511, 2122, 3112, 3121, 4111\} $$ of an unknown composition $w$ of $n=10$. The algorithm first checks the hypotheses of Lemma~\ref{lem-determine-ones}. The first condition does not hold because the set of $3$-deletions does not contain $1^{10-3}=1111111$, while the second condition fails because the longest $3$-deletion is only $2$ letters longer than the shortest. Therefore $w$ has fewer than $k=3$ $1$'s. Now the algorithm follows the proof of Lemma~\ref{lem-few-ones}. First we compute $\operatorname{ex}(w)$ from one of the longest $3$-deletions: $$ \operatorname{ex}(w)=\operatorname{ex}(3121)+3=6, $$ so $t=3$. Then we compute $a$: $$ \begin{array}{l} \mbox{$a(1)=4$ because $4111$ is contained in a $3$-deletion but $5111$ is not,}\\ \mbox{$a(2)=1$ because $1111$ is contained in a $3$-deletion but $1211$ is not,}\\ \mbox{$a(3)=2$ because $1121$ is contained in a $3$-deletion but $1131$ is not,}\\ \mbox{$a(4)=2$ because $1112$ is contained in a $3$-deletion but $1113$ is not.} \end{array} $$
Thus $w\ge 4122$. Since $\|4122\|=9<10=\|w\|$, we are not done reconstructing $w$ and need to account for one more exceedance. However, since $a(1)$ is the only entry of $a$ equal to $t+1=4$, $w(1)$ is the only entry of $w$ that can be greater than the corresponding entry of $a$, so we get $w=5122$. \end{example}
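The reconstruction just carried out follows a mechanical recipe. A sketch of the Lemma~\ref{lem-few-ones} procedure (hypothetical names; `contains` is the greedy test of the containment order):

```python
def contains(u, w):
    """Greedy test of the containment order u <= w."""
    i = 0
    for x in u:
        while i < len(w) and w[i] < x:
            i += 1
        if i == len(w):
            return False
        i += 1
    return True

def reconstruct_few_ones(dels, n, k):
    """Sketch of Lemma [few-ones]: the unknown composition w of n has
    fewer than k 1's, so |w| equals the length of the longest deletion."""
    m = max(len(d) for d in dels)
    ex_w = k + max(sum(e - 1 for e in d) for d in dels if len(d) == m)
    t = ex_w - k
    def a_entry(i):          # a(i) = min(w(i), t + 1), probed by queries
        s = 1
        while s <= t and any(
                contains((1,) * i + (s + 1,) + (1,) * (m - 1 - i), d)
                for d in dels):
            s += 1
        return s
    a = [a_entry(i) for i in range(m)]
    deficit = n - sum(a)
    if deficit:              # the lemma shows at most one entry is capped
        a[a.index(t + 1)] += deficit
    return tuple(a)
```

On the deletion set of the example above it returns $(5,1,2,2)$, i.e.\ $w=5122$.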
\begin{lemma}\label{lem-k-ones} If $w$ is a composition of $n\ge 3k+1$ with precisely $k$ $1$'s, then $w$ can be reconstructed from its set of $k$-deletions. \end{lemma} \begin{proof} Given the set of $k$-deletions of a composition $w$ satisfying these hypotheses, our algorithm can apply the result of Lemma~\ref{lem-determine-ones} to determine that $w$ has exactly $k$ $1$'s. With this established, the length of $w$, say $m$, can be computed as $k$ plus the length of the shortest $k$-deletion of $w$.
There is a $k$-deletion of $w$ without $1$'s, and this composition gives the $\mathord{\ge}2$ entries of $w$ in their correct order. Thus it suffices to determine where they lie in $w$. To this end define the composition $a_i$ by $$ a_i=\underbrace{1\cdots1}_{i-1}2\underbrace{1\cdots1}_{m-i}. $$ As $a_i$ is contained in a $k$-deletion of $w$ if and only if $w(i)\ge 2$, the $\mathord{\ge}2$ entries of $w$ can be discerned, completing the proof. \end{proof}
\begin{example} Suppose the reconstruction algorithm is given the set of $3$-deletions $$ \begin{array}{l} \{322, 2212, 2221, 3112, 3121, 3211, 12121, 12211, 21121,\\ \ \ 21211, 22111, 31111, 111211, 121111, 211111\}. \end{array} $$ of an unknown composition $w$ of $n=10$. Since the longest $3$-deletions in this set are $3$ letters longer than the shortest $3$-deletion, $w$ has at least $k=3$ $1$'s by Lemma~\ref{lem-determine-ones}. As the set also contains a $3$-deletion without $1$'s, the same lemma shows that $w$ has precisely $3$ $1$'s, and thus the algorithm follows the proof of Lemma~\ref{lem-k-ones}. The $3$-deletion without $1$'s --- $322$ --- gives the $\mathord{\ge}2$ entries of $w$ in their correct order. Now we form the $a_i$'s to see where these $\mathord{\ge}2$ entries lie: $$ \begin{array}{l} \mbox{$a_1=211111$ is contained in a $3$-deletion so $w(1)\ge 2$,}\\ \mbox{$a_2=121111$ is contained in a $3$-deletion so $w(2)\ge 2$,}\\ \mbox{$a_3=112111$ is not contained in a $3$-deletion so $w(3)=1$,}\\ \mbox{$a_4=111211$ is contained in a $3$-deletion so $w(4)\ge 2$,}\\ \mbox{$a_5=111121$ is not contained in a $3$-deletion so $w(5)=1$,}\\ \mbox{$a_6=111112$ is not contained in a $3$-deletion so $w(6)=1$.} \end{array} $$ Therefore we get $w=321211$. \end{example}
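The steps of this example can be replayed mechanically. A sketch of the Lemma~\ref{lem-k-ones} procedure (hypothetical names; `contains` is the greedy test of the containment order):

```python
def contains(u, w):
    """Greedy test of the containment order u <= w."""
    i = 0
    for x in u:
        while i < len(w) and w[i] < x:
            i += 1
        if i == len(w):
            return False
        i += 1
    return True

def reconstruct_exactly_k_ones(dels, n, k):
    """Sketch of Lemma [k-ones], assuming the unknown composition of n
    is already known to have exactly k entries equal to 1."""
    m = k + min(len(d) for d in dels)        # |w| = k + shortest deletion
    v = next(d for d in dels if 1 not in d)  # the >=2 entries of w, in order
    # position i holds a >=2 entry iff 1^i 2 1^(m-1-i) embeds in a deletion
    big = [i for i in range(m)
           if any(contains((1,) * i + (2,) + (1,) * (m - 1 - i), d)
                  for d in dels)]
    w = [1] * m
    for pos, val in zip(big, v):
        w[pos] = val
    return tuple(w)
```

On the deletion set of this example it returns $(3,2,1,2,1,1)$, i.e.\ $w=321211$.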
This leaves us to consider the case of compositions with many $1$'s. In this case we also need the {\it second exceedance number\/}, defined by $\operatorname{ex}_2(w)=\sum (w(i)-2)$ where the sum is over all entries $w(i)\ge 2$.
\begin{lemma}\label{lem-many-ones} If $w$ is a composition of $n\ge 3k+1$ with more than $k$ $1$'s, then $w$ can be reconstructed from its set of $k$-deletions. \end{lemma} \begin{proof} Given the set of $k$-deletions of such a composition $w$, our algorithm can apply the result of Lemma~\ref{lem-determine-ones} to conclude that it has more than $k$ $1$'s. Therefore the $k$-deletions with the fewest $1$'s contain all $\mathord{\ge}2$ entries of $w$ in the order in which they occur in $w$; let $v=v(1)\cdots v(\ell)$ denote the composition formed by these entries, so $$ w=\underbrace{1\cdots1}_{z(1)}v(1)\underbrace{1\cdots1}_{z(2)}v(2)\cdots v(\ell-1)\underbrace{1\cdots1}_{z(\ell)}v(\ell)\underbrace{1\cdots1}_{z(\ell+1)} $$ for some word $z\in\mathbb{N}^{\ell+1}$ (we take $\mathbb{N}$ to denote the nonnegative integers). Our goal is thus to determine $z$. We use techniques similar to those in the proof of Lemma~\ref{lem-few-ones}, although here we must perform two steps.
The first of these steps is to find the $0$'s in $z$. For $1\le i\le\ell+1$ let $$ a_i=\underbrace{2\cdots2}_{i-1}1\underbrace{2\cdots2}_{\ell+1-i}. $$
Since the $2$'s in $a_i$ can only embed into $\mathord{\ge}2$'s in $w$, if $a_i$ is contained in a $k$-deletion of $w$ then its $1$ must embed into an element between $v(i-1)$ and $v(i)$, implying that $z(i)\ge 1$. Conversely, if $a_i$ is not contained in a $k$-deletion of $w$ then either $\|a_i\|>n-k$ or $z(i)=0$. Simple accounting shows that $$ n-k=\left( (\mbox{\# of $1$'s in $w$}) + 2\ell + \operatorname{ex}_2(w)\right)-k, $$
so $\|a_i\|=2\ell+1\le n-k$ because $w$ has more than $k$ $1$'s, and thus \begin{equation}\label{eqn-det-z=0} z(i)=0\iff\mbox{$a_i$ is not contained in a $k$-deletion of $w$.} \end{equation}
The second step is to use these $0$'s to divine the nonzero entries of $z$. Define the composition $b_i=b_i(1)\cdots b_i(\ell)$ by $$ b_i(j)= \left\{ \begin{array}{rll} 1&\mbox{if}&\mbox{$j\le i-1$ and $z(j)=0$ or}\\ &&\mbox{$j\ge i$ and $z(j+1)=0$, or}\\ 2&\multicolumn{2}{l}{\mbox{otherwise,}} \end{array} \right. $$ and consider the possible embeddings of $b_i$ in $w$. Suppose for the sake of example that $i\ge 4$. If $z(1)\ge 1$ then $b_i(1)=2$ and thus can embed only into or to the right of $v(1)$. Otherwise if $z(1)=0$ then $b_i(1)=1$, but in this case $v(1)$ is the first entry of $w$ so again $b_i(1)$ can embed only into or to the right of $v(1)$. Continuing in this manner, if $z(2)\ge 1$ then $b_i(2)=2$, and since $b_i(2)$ can only embed into a $\mathord{\ge}2$ entry in $w$ to the right of $b_i(1)$, $b_i(2)$ can only embed into or to the right of $v(2)$. Otherwise if $z(2)=0$ then $b_i(2)=1$, but then $v(1)$ and $v(2)$ are adjacent in $w$ so since $b_i(1)$ must embed into or to the right of $v(1)$ and $b_i(2)$ must embed to the right of $b_i(1)$ we see that $b_i(2)$ must embed into or to the right of $v(2)$. Continuing in this manner it is easy to see (or more formally, to prove inductively) that: \begin{itemize} \item For all $j\le i-1$, $b_i(j)$ must embed into or to the right of $v(j)$. \item For all $j\ge i$, $b_i(j)$ must embed into or to the left of $v(j)$. \end{itemize} These two facts combine to show that $b_i(i-1)$ and $b_i(i)$ can only embed between $v(i-1)$ and $v(i)$ (inclusive). Now define the word $x\in\mathbb{N}^{\ell+1}$ by $x(i)=0$ if $z(i)=0$ and otherwise $$ x(i)=\max\{s : \mbox{$b_i(1)\cdots b_i(i-1)\underbrace{1\cdots1}_{s}b_i(i)\cdots b_i(\ell)$ is contained in a $k$-deletion of $w$}\}. $$ The analogue to \eqref{eqn-u-v-w} now follows by the conditions on embeddings of $b_i$ established above: \begin{equation}\label{eqn-many-ones-zi}
x(i)=\min\{ z(i), n-k-\|b_i\| \}. \end{equation}
Suppose $z(i)\ge 1$. In this case $\|b_i\|=2\ell-h$, where $h$ denotes the number of $0$ entries of $z$ (``holes''). Letting $k+t$ denote the number of $1$'s in $w$, we have $$ n=k+t+2\ell+\operatorname{ex}_2(w), $$ allowing us to rewrite \eqref{eqn-many-ones-zi} as \begin{equation}\label{eqn-many-ones-zi2} x(i)=\min\{ z(i), h+t+\operatorname{ex}_2(w) \}. \end{equation}
If $\|v\|+\|x\|=n$ then we must have $z=x$ and thus have successfully reconstructed $w$. By \eqref{eqn-many-ones-zi2}, this will happen if $z$ has no entries greater than $h+t+\operatorname{ex}_2(w)$. Suppose, for the sake of contradiction, that this does not occur, i.e., that $z$ contains an entry greater than $h+t+\operatorname{ex}_2(w)$. Then each of the other $(\ell+1-h)-1$ nonzero entries of $z$ corresponds to at least one $1$ in $w$, and thus we have $$ k+t = \mbox{\# of $1$'s in $w$} \ge (h+t+\operatorname{ex}_2(w)+1)+(\ell-h) = t+\ell+\operatorname{ex}_2(w)+1. $$ However, this implies that $$ 2k\ge t+2\ell+\operatorname{ex}_2(w), $$ so $$ 3k\ge (k+t)+2\ell+\operatorname{ex}_2(w)=n, $$ and this contradiction completes the proof of both the lemma and Theorem~\ref{thm-comp-reconstruct}. \end{proof}
\begin{example} Suppose the reconstruction algorithm is given the set of $3$-deletions $$ \{ 1222, 2212, 11122, 11212, 11221, 12112, 12211, 111112, 111121, 111211, 112111, 1111111\} $$ of an unknown composition $w$ of $n=10$. This set contains $1^{10-3}=1111111$ and every $3$-deletion in the set contains a $1$, so Lemma~\ref{lem-determine-ones} shows that $w$ has more than $k=3$ $1$'s. Thus we follow the proof of Lemma~\ref{lem-many-ones}. Each of the compositions with the fewest $1$'s, e.g., $2212$, gives the $\mathord{\ge}2$ entries of $w$ in their correct order, $v=222$, so $$ w=\underbrace{1\cdots1}_{z(1)}2\underbrace{1\cdots1}_{z(2)}2\underbrace{1\cdots1}_{z(3)}2\underbrace{1\cdots1}_{z(4)}. $$ We then find the $0$ entries of $z$: $$ \begin{array}{l} \mbox{$z(1)\neq 0$ because $a_1=1222$ is contained in a $3$-deletion of $w$,}\\ \mbox{$z(2)=0$ because $a_2=2122$ is not contained in a $3$-deletion of $w$,}\\ \mbox{$z(3)\neq 0$ because $a_3=2212$ is contained in a $3$-deletion of $w$,}\\ \mbox{$z(4)=0$ because $a_4=2221$ is not contained in a $3$-deletion of $w$.} \end{array} $$ Now we build the word $x\in\mathbb{N}^4$. We have that $x(2)=x(4)=0$ because the corresponding entries of $z$ are $0$. To compute the other entries of $x$ we construct $b_1=121$ and $b_3=211$ and then have $$ \begin{array}{l} \mbox{$x(1)=3$ because $111\ 121$ is contained in a $3$-deletion of $w$ but $1111\ 121$ is not,}\\ \mbox{$x(3)=1$ because $21\ 1\ 1$ is contained in a $3$-deletion of $w$ but $21\ 11\ 1$ is not.} \end{array} $$
Since $\|v\|+\|x\|=\|222\|+\|3010\|=10$, we must have $z=x$ and thus $w=1112212$. \end{example}
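The two-step procedure of Lemma~\ref{lem-many-ones}, as exercised above, can be sketched as follows (hypothetical names, in Python rather than the paper's Maple; `contains` is the greedy test of the containment order):

```python
def contains(u, w):
    """Greedy test of the containment order u <= w."""
    i = 0
    for x in u:
        while i < len(w) and w[i] < x:
            i += 1
        if i == len(w):
            return False
        i += 1
    return True

def reconstruct_many_ones(dels, n, k):
    """Sketch of Lemma [many-ones]: w has more than k 1's.
    Indices are 0-based, so zero[i] plays the role of z(i+1) = 0."""
    in_some = lambda u: any(contains(u, d) for d in dels)
    fewest = min(dels, key=lambda d: d.count(1))
    v = tuple(e for e in fewest if e >= 2)   # the >=2 entries of w, in order
    L = len(v)
    # step 1: locate the zero gaps of z via the compositions a_i
    zero = [not in_some((2,) * i + (1,) + (2,) * (L - i))
            for i in range(L + 1)]
    # step 2: measure each nonzero gap via the compositions b_i
    x = []
    for i in range(L + 1):
        if zero[i]:
            x.append(0)
            continue
        b = [1 if (j < i and zero[j]) or (j >= i and zero[j + 1]) else 2
             for j in range(L)]
        s = 0
        while in_some(tuple(b[:i]) + (1,) * (s + 1) + tuple(b[i:])):
            s += 1
        x.append(s)
    w = []                                   # interleave the gaps with v
    for gap, e in zip(x, v + (None,)):
        w.extend([1] * gap)
        if e is not None:
            w.append(e)
    return tuple(w)
```

On the deletion set of this example it returns $(1,1,1,2,2,1,2)$, i.e.\ $w=1112212$.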
\minisec{The connection to permutations} The subject of permutation patterns (see B\'ona's text~\cite{bona:combinatorics-o:} for a survey) is concerned with the following partial order on permutation: for permutations $\sigma$ of length $k$ and $\pi$ of length $n$, let $\sigma\le\pi$ if there are indices $i_1<i_2<\cdots<i_k$ such that the subsequence $\pi(i_1)\pi(i_2)\cdots\pi(i_k)$ has the same pairwise comparisons as $\sigma(1)\sigma(2)\cdots\sigma(k)$, and in such a case $\sigma$ is said to be an $(n-k)$-deletion of $\pi$. For example, $13254\le 213654798$ because of the subsequence $26598$ ($=\pi(1)\pi(4)\pi(5)\pi(8)\pi(9)$).
Given two permutations $\sigma$ and $\pi$ of lengths $m$ and $n$ respectively, their {\it direct sum\/}, $\sigma\oplus\pi$, is the permutation of length $m+n$ whose first $m$ entries form $\sigma$ and whose last $n$ entries are the copy of $\pi$ obtained by adding $m$ to each entry. For example, $213654\oplus 132=213654798$. A permutation is said to be {\it layered\/} if it can be written as the direct sum of decreasing permutations. Thus $213654798$ is layered because it can be written as $21\oplus 1\oplus 321\oplus 1\oplus 21$. There is a natural order-preserving bijection between layered permutations and compositions; for example, $213654798=21\oplus 1\oplus 321\oplus 1\oplus 21$ maps to the composition $21312$ while $13254=1\oplus 21\oplus 21$ maps to $122$, and $122\le 21312$ under the partial order on compositions.
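The bijection is easy to compute: in a layered permutation a layer closes exactly where the running maximum equals the number of entries read so far. A sketch (hypothetical function name):

```python
def layered_to_composition(perm):
    """Map a layered permutation (a sequence of the values 1..n) to its
    composition of layer lengths, e.g. 213654798 -> 21312."""
    comp, last, running_max = [], 0, 0
    for i, v in enumerate(perm, 1):
        running_max = max(running_max, v)
        if running_max == i:        # the prefix 1..i is a union of layers
            comp.append(i - last)
            last = i
    return tuple(comp)
```

This reproduces the examples above: $213654798\mapsto(2,1,3,1,2)$ and $13254\mapsto(1,2,2)$.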
Smith~\cite{smith:permutation-rec:} was the first to study multiset reconstruction for permutations. Her work was followed by Raykova~\cite{raykova:permutation-rec:} who proved that for all $k$, all sufficiently long permutations are reconstructible from their multisets of $k$-deletions. This leaves open the question of whether all sufficiently long permutations are reconstructible from their {\it sets\/} of $k$-deletions. Our work therefore answers Raykova's question about whether all sufficiently long layered permutations can be reconstructed from their sets of $k$-deletions.
\minisec{Acknowledgement} I thank Robert Brignall for his helpful comments.
\end{document}
\begin{document}
\begin{center} {\Large {\bf A Uniformization Theorem Of Complete Noncompact K\"ahler Surfaces With Positive Bisectional Curvature}} \vskip 5mm Bing--Long Chen*, Siu--Hung Tang**, Xi--Ping Zhu*
* Department of Mathematics, Zhongshan University,\\Guangzhou 510275, P. R. China, and\\
Institute of Mathematical Sciences, The Chinese University of Hong Kong, Hong Kong\\
** Institute of Mathematical Sciences, The Chinese University of Hong Kong, Hong Kong\\
\end{center}
\baselineskip=20pt
\begin{abstract} In this paper, by combining techniques from Ricci flow and algebraic geometry, we prove the following generalization of the classical uniformization theorem of Riemann surfaces. Let $M$ be a complete noncompact complex two-dimensional K\"ahler manifold of positive and bounded holomorphic bisectional curvature. If its geodesic balls have Euclidean volume growth and its scalar curvature decays to zero at infinity in the average sense, then $M$ is biholomorphic to $\bf C^2$. During the proof, we also discover an interesting gap phenomenon which says that a K\"ahler manifold as above automatically has quadratic curvature decay at infinity in the average sense. \end{abstract}
\vskip 0.5mm
\section*{\S1. Introduction}
\setcounter{section}{1} \setcounter{equation}{0}
\qquad One of the most beautiful results in complex analysis of one variable is the classical uniformization theorem of Riemann surfaces which states that a simply connected Riemann surface is biholomorphic to either the Riemann sphere, the complex line or the open unit disc. Unfortunately, a direct analog of this beautiful result to higher dimensions does not exist. For example, there is a vast variety of biholomorphically distinct complex structures on ${\bf R}^{2n}$ for $n>1$, a fact which was already known to Poincar\'e (see \cite{BSW}, \cite{Fe} for a modern treatment). Thus, in order to characterize the standard complex structures for higher dimensional complex manifolds, one must impose more restrictions on the manifolds.
From the point of view of differential geometry, one consequence of the uniformization theorem is that a positively curved compact or noncompact Riemann surface must be biholomorphic to the Riemann sphere or the complex line respectively. It is thus natural to ask whether there is a similar characterization for higher dimensional complete K\"ahler manifolds with positive ``curvature''.
That such a characterization exists in the case of compact K\"ahler manifold is the famous Frankel conjecture which says that a compact K\"ahler manifold of positive holomorphic bisectional curvature is biholomorphic to a complex projective space. This conjecture was solved by Andreotti--Frankel \cite{Fra} and Mabuchi \cite{Mab} in complex dimensions two and three respectively and the general case was then solved by Mori \cite{Mor}, and Siu--Yau \cite{SiY} independently. In this paper, we are thus interested in complete noncompact K\"ahler manifolds with positive holomorphic bisectional curvature. The following conjecture provides the main impetus.
\vskip 3mm {\bf \underline{Conjecture}} (Green--Wu \cite{GW2}, Siu\cite{Si}, Yau \cite{Y2})\quad A complete noncompact\\K\"ahler manifold of positive holomorphic bisectional curvature is biholomorphic to a complex Euclidean space. \vskip 3mm
In contrast to the compact case, very little is known about this conjecture. The first result in this direction is the following isometric embedding theorem.
\vskip 3mm{\bf \underline{Theorem}} (Mok--Siu--Yau\cite{MSY}, Mok\cite{Mo1}) \quad Let $M$ be a complete noncompact K\"ahler manifold of nonnegative holomorphic bisectional curvature of complex dimension $n\geq 2$. Suppose there exist positive constants $C_1$, $C_2$ such that for a fixed base point $x_0$ and some $\varepsilon>0$,
$$ \begin{array}{lll} \bigbreak \mbox{(i)} \qquad & \mbox{Vol}\,(B(x_0,r))\geq C_1r^{2n}\ & \qquad \qquad 0\leq r<+\infty \ ,\qquad \qquad \\ \mbox{(ii)} \qquad & R(x)\leq \displaystyle \frac{C_2} {1+d^{2+\varepsilon}(x_0,x)}\ & \qquad \qquad \mbox{on}\quad M\ ,\qquad \qquad \end{array} $$ where $\mbox{Vol}\,(B(x_0,r))$ denotes the volume of the geodesic ball $B(x_0,r)$ centered at $x_0$ with radius $r$, $R(x)$ denotes the scalar curvature and $ d(x_0,x)$ denotes the geodesic distance between $x_0$ and $x$. Then $M$ is isometrically biholomorphic to $\bf C^n$ with the flat metric.
\vskip 3mm Their method is to consider the Poincar\'e--Lelong equation $\sqrt{-1}\partial \overline{\partial }u= \mbox{Ric}$. Under the condition (ii) that the curvature has faster than quadratic decay, they proved the existence of a bounded solution $u$ to the Poincar\'e--Lelong equation. By virtue of Yau's Liouville theorem on complete manifolds with nonnegative Ricci curvature, this bounded plurisubharmonic function $u$ must be constant and hence the Ricci curvature must be identically zero. This implies that the K\"ahler metric is flat because of the nonnegativity of the holomorphic bisectional curvature. However, this argument breaks down if the faster than quadratic decay condition (ii) is weakened to a quadratic decay condition. In this case, although we can still solve the Poincar\'e--Lelong equation with logarithmic growth, the boundedness of the solution can no longer be guaranteed.
In \cite{Mo1}, Mok also developed a general scheme for compactifying complete K\"ahler manifolds of positive holomorphic bisectional curvature. This allowed him to obtain the following improvement of the above theorem. \vskip 3mm {\bf {\underline{Theorem}}} (Mok\cite{Mo1})\quad Let $M$ be a complete noncompact K\"ahler manifold of complex dimension $n$ with positive holomorphic bisectional curvature. Suppose there exist positive constants $C_1$, $C_2$ such that for a fixed base point $x_0$,
$$ \begin{array}{lll} \bigbreak \mbox{(i)} \qquad & \mbox{Vol}\,(B(x_0,r))\geq C_1r^{2n}\ & \qquad \qquad 0\leq r<+\infty \ ,\qquad \qquad \\ \mbox{(ii)}^{\prime} \qquad & 0<R(x)\leq \displaystyle \frac{C_2}{1+d^2(x_0,x)}\ & \qquad \qquad \mbox{on} \quad M\ ,\qquad \qquad \end{array} $$ then $M$ is biholomorphic to an affine algebraic variety. Moreover, if in addition the complex dimension $n=2$ and
(iii)\qquad the Riemannian sectional curvature of $M$ is positive,
\noindent then $M$ is biholomorphic to $\bf C^2$.
\vskip 3mm To the best of our knowledge, the above result of Mok and its slight improvements by To \cite{T}, and Chen-Zhu \cite{CZ2} are the best results in complex dimension two on the above stated conjecture.
Here, we would also like to recall the remark pointed out in \cite{CZ2} that there is a gap in the proof of Shi \cite{Sh3} (see \cite{CZ2} for more explanation) which would otherwise constitute a better result than that of Mok \cite{Mo1}.
In this paper, we consider only the case of complex dimension two. Our principal result is the following \vskip 3mm {\bf \underline{Main Theorem}} \quad Let $M$ be a complete noncompact complex two-dimensional K\"ahler manifold of positive and bounded holomorphic bisectional curvature. Suppose there exists a positive constant $C_1$ such that for a fixed base point $x_0$, we have
$$ \begin{array}{ll} \bigbreak \mbox{(i)}\qquad & \mbox{Vol}(B(x_0,r))\geq C_1r^4\ \qquad \ 0\leq r<+\infty \ , \\ \mbox{(ii)}^{\prime \prime }\qquad & \displaystyle \lim \limits_{r\rightarrow + \infty }\frac 1{\mbox{Vol}(B(x_0,r))}\int_{B(x_0,r)}R(x)dx=0\ , \end{array} $$ then $M$ is biholomorphic to $\bf C^2$. \vskip 3mm We remark that the condition $\mbox{(ii)}^{\prime \prime}$ means that the scalar curvature tends to zero at infinity in the average sense. In view of the classical Bonnet--Myers theorem, this condition is almost necessary to make sure that the manifold is noncompact under our positive bisectional curvature condition. It is also clear that the pointwise decay condition $$ \lim \limits_{d(x,x_0)\rightarrow + \infty }R(x)=0 $$ is stronger than $\mbox{(ii)}^{\prime \prime }.$
The proof of the Main Theorem will be divided into three parts. In the first part, we will show that $M$ is a Stein manifold homeomorphic to $\bf R^4.$ For this, we evolve the K\"ahler metric on $M$ by the Ricci flow first studied by Hamilton. Note that the underlying complex structure of $M$ is unchanged under the Ricci flow, thus we can replace the K\"ahler metric in our main theorem by any one of the evolving metrics. The advantage is that, in our case, the properties of the evolving metric are improving during the flow. Moreover, we know that the Euclidean volume growth condition (i) as well as the positive holomorphic bisectional curvature condition are preserved by the evolving metric. More importantly, by a blow up and blow down argument as in \cite{CZ1}, we can prove that the curvature of the evolving metric decays linearly in time. This implies that the injectivity radius of the evolving metric is getting bigger and bigger and any geodesic ball with radius less than half of the injectivity radius is almost pseudoconvex. By a perturbation argument as in \cite{CZ2}, we are then able to modify these geodesic balls to a sequence of exhausting pseudoconvex domains of $M$ such that any two of them form a Runge pair. From this, it follows readily that $M$ is a Stein manifold homeomorphic to $\bf R^4$.
In the second part of the proof, we consider the algebra $P(M)$ of holomorphic functions of polynomial growth on $M$ and we will prove that its quotient field has transcendence degree two over $\bf C$. For this, we first need to construct two algebraically independent holomorphic functions in the algebra $P(M)$. Using the $L^2$ estimates of
Andreotti--Vesentini \cite{AV} and H\"ormander \cite{Ho}, it suffices to construct a strictly plurisubharmonic function of logarithmic growth on $M$. Now, if the scalar curvature decays in space at least quadratically, it was known from \cite{MSY}, \cite{Mo1} that such a strictly plurisubharmonic function of logarithmic growth can be obtained by solving the Poincar\'e--Lelong equation, as we mentioned before. However, our decay assumption $\mbox{(ii)}^{\prime \prime }$ is too weak to apply their result directly. To resolve this difficulty, we make use of the Ricci flow to verify a new gap phenomenon which was already predicted by Yau in \cite{Y3}. More explicitly, by using the time decay estimate of the evolving metric in the previous part, we prove that the curvature of the initial metric must decay quadratically in space in a certain average sense. Fortunately, this turns out to be enough to ensure the existence of a strictly plurisubharmonic function of logarithmic growth. Next, by using the time decay estimate and the injectivity radius estimate of the evolving metric, we prove that the dimension of the space of holomorphic functions in $P(M)$ of degree at most $p$ is bounded by a constant times $p^2$. Combining this with the existence of two algebraically independent holomorphic functions in $P(M)$ as above, we can prove that the quotient field $R(M)$ of $P(M)$ has transcendence degree two over $\bf C$ by a classical argument of Poincar\'e--Siegel.
In other words, $R(M)$ is a finite extension field of some ${\bf C}(f_1,f_2)$, where $f_1$, $f_2 \in P(M)$ are algebraically independent over $\bf C$. Then, from the primitive element theorem, we have $R(M)={\bf C} (f_1,f_2,g/h)$ for some $g$, $h\in P(M)$. Hence the mapping $F:M\rightarrow {\bf C }^4$ given by $F=(f_1,f_2,g,h)$ defines, in an appropriate sense, a birational equivalence between $M$ and some irreducible affine algebraic subvariety $Z$ of $\bf C^4$.
In the last part of the proof, we will basically follow the approach of Mok in \cite{Mo1} and \cite{Mo3} to establish a biholomorphic map from $M$ onto a quasi--affine algebraic variety by desingularizing the map $F$. Our essential contribution in this part is to establish uniform estimates on the multiplicity and the number of irreducible components of the zero divisor of a holomorphic function in $P(M)$. Again, the time decay estimate of the Ricci flow plays a crucial role in the arguments. Based on these estimates, we can show that the mapping $F:M\rightarrow Z$ is almost surjective in the sense that it can miss only a finite number of subvarieties in $Z$, and can be desingularized by adjoining a finite number of holomorphic functions of polynomial growth. This completes the proof that $M$ is a quasi--affine algebraic variety. Finally, by combining with the fact that $M$ is homeomorphic to $\bf R^4$, we conclude that $M$ is indeed biholomorphic to $\bf C^2$ by a theorem of Ramanujam \cite{R} on algebraic surfaces.
This paper contains eight sections. In Sections 2 to 4, we study the Ricci flow and obtain several geometric estimates for the evolving metric. In Section 5, we show that the two dimensional K\"ahler manifold is homeomorphic to $\bf R^4$ and is a Stein manifold. Based on the estimates on the Ricci flow, a space decay estimate on the curvature and the existence of a strictly plurisubharmonic function of logarithmic growth are obtained in Section 6. In Section 7, we establish uniform estimates on the multiplicity and the number of irreducible components of the zero divisor of a holomorphic function of polynomial growth. Finally, in Section 8 we construct a biholomorphic map from the K\"ahler manifold onto a quasi--affine algebraic variety and complete the proof of the Main Theorem.
We are grateful to Professor L. F. Tam for many helpful discussions and Professor S. T. Yau for his interest and encouragement.
\section*{\S2. Preserving the volume growth}
\setcounter{section}{2} \setcounter{equation}{0}
\qquad Let $(M,g_{\alpha \overline{\beta }})$ be a complete, noncompact K\"ahler surface (i.e., a K\"ahler manifold of complex dimension two) satisfying all the assumptions in the Main Theorem.
We evolve the metric $g_{\alpha \overline{\beta }}$ according to the following Ricci flow equation \begin{equation} \label{2.1}\left\{ \begin{array}{ll} \bigbreak \displaystyle \frac{\partial g_{\alpha \overline{\beta }}(x,t)}{ \partial t}=-R_{\alpha \overline{\beta }}(x,t)\ & \qquad x\in M\ \quad t>0\ , \\ g_{\alpha \overline{\beta }}(x,0)=g_{\alpha \overline{\beta }}(x)\ & \qquad x\in M\ , \end{array} \right. \end{equation} where $R_{\alpha \overline{\beta }}(x,t)$ denotes the Ricci curvature tensor of the metric $g_{\alpha \overline{\beta }}(x,t)$.
Since the curvature of the initial metric is bounded, it is known from \cite{Sh1} that there exists some $T_{\max }>0$ such that (\ref{2.1}) has a maximal solution on $M\times [0,T_{\max })$, where either $T_{\max }=+\infty $, or $T_{\max }<+\infty $ and the curvature becomes unbounded as $t\rightarrow T_{\max }$. By using the maximum principle, one knows (see Mok \cite{Mo2}, Hamilton \cite{Ha1}, or Shi \cite{Sh4}) that the positivity of holomorphic bisectional curvature and the K\"ahlerity of $g_{\alpha \overline{\beta }}$ are preserved under the evolution of (\ref{2.1}). In particular, the Ricci curvature remains positive.
Our first result for the solution of the Ricci flow (\ref{2.1}) is the following proposition. \vskip 3mm{\bf \underline{Proposition 2.1}} \quad Let $(M,g_{\alpha \overline{\beta }})$ be as above. Then the maximal volume growth condition (i) is preserved under the evolution of (\ref{2.1}), i.e., \begin{equation} \label{2.2} \mbox{Vol}_t(B_t(x,r))\geq C_1r^4\ \qquad \mbox{for all} \quad r>0\ ,\quad x\in M\ , \end{equation} with the same constant $C_1$ as in condition (i). Here, $B_t(x,r)$ is the geodesic ball of radius $r$ with center at $x$ with respect to the metric $g_{\alpha \overline{\beta }}(\cdot ,t)$, and the volume $\mbox{Vol}_t$ is also taken with respect to the metric $g_{\alpha \overline{\beta }}(\cdot ,t)$.
\vskip 3mm{\bf \underline{Proof.}} \quad Define a function $F(x,t)$ on $M\times [0,T_{\max })$ as follows, $$ F(x,t)=\log \frac{\det \left( g_{\alpha \overline{\beta }}(x,t)\right) }{ \det \left( g_{\alpha \overline{\beta }}(x,0)\right) }\ . $$ By (\ref{2.1}), we have \begin{eqnarray} \label{2.3} \frac{\partial F(x,t)}{\partial t} & = & g^{\alpha \overline{\beta }}(x,t)\cdot \frac \partial {\partial t}g_{\alpha \overline{\beta }}(x,t)\nonumber \\ & = &-R(x,t) \leq 0 \ , \end{eqnarray} which implies that $F(\cdot ,t)$ is nonincreasing in time. Since $R_{\alpha \overline{\beta }}(x,t)\geq 0$, we know from (\ref{2.1}) that the metric is shrinking in time. In particular, \begin{equation} \label{2.4} g_{\alpha \overline{\beta }}(x,t)\leq g_{\alpha \overline{\beta }}(x,0)\ \qquad \mbox{on} \quad M\times [0,T_{\max })\ . \end{equation} This implies that \begin{eqnarray} \label{2.5} e^{F(x,t)}R(x,t)&=&g^{\alpha \overline{\beta }}(x,t) R_{\alpha \overline{\beta }}(x,t)\cdot \frac{\det \left( g_{\gamma \overline{\delta }}(x,t)\right) } {\det \left( g_{\gamma \overline{\delta }}(x,0)\right) } \nonumber \\ & \leq & g^{\alpha \overline{\beta }}(x,0)R_{\alpha \overline{\beta }}(x,t) \nonumber \\ & = & g^{\alpha \overline{\beta }}(x,0)\left( R_{\alpha \overline{\beta }}(x,t)-R_{\alpha \overline{\beta }}(x,0)\right) +R(x,0)\nonumber\\ & = & -\bigtriangleup _0F(x,t)+R(x,0)\ , \end{eqnarray} where $\bigtriangleup _0$ denotes the Laplace operator with respect to the initial metric $g_{\alpha \overline{\beta }}(x,0)$ and $R(x,t)$ denotes the scalar curvature of the metric $g_{\alpha \overline{\beta }}(x,t)$.
Combining (\ref{2.3}) and (\ref{2.5}) gives \begin{equation} \label{2.6}e^{F(x,t)}\frac{\partial F(x,t)}{\partial t}\geq \bigtriangleup _0F(x,t)-R(x,0)\ \qquad \mbox{on} \quad M\times [0,T_{\max })\ . \end{equation}
Next, we introduce a cutoff function which will be used several times in this paper. Now, as the Ricci curvature of the initial metric is positive, we know from Schoen and Yau (Theorem 1.4.2 in \cite{ScY}) or Shi \cite{Sh4} that there exists a positive constant $C_3$ depending only on the dimension such that for any fixed point $x_0\in M$ and any number $0<r<+\infty $, there exists a smooth function $\varphi (x)$ on $M $ satisfying \begin{equation} \label{2.7}\left\{ \begin{array}{l} \bigbreak\displaystyle e^{-C_3\left( 1+\frac{d_0(x,x_0)}r\right) }\leq \varphi (x)\leq e^{-\left( 1+\frac{d_0(x,x_0)}r\right) }\ , \\
\bigbreak \displaystyle \left| \nabla \varphi \right| _0(x)\leq \frac{
C_3}r\varphi (x)\ , \\ \displaystyle \left| \bigtriangleup _0\varphi \right| (x)\leq \frac{C_3}{r^2}\varphi (x)\ , \end{array} \right. \end{equation} for all $x\in M$, where $d_0(x,x_0)$ is the distance between $x$ and $x_0$ with respect to the initial metric $g_{\alpha \overline{\beta }}(x,0)$ and $
\left| \cdot \right| _0$ stands for the corresponding $C^0$ norm of the initial metric $g_{\alpha \overline{\beta }}(x,0)$.
Combining (\ref{2.6}) and (\ref{2.7}), we obtain \begin{eqnarray} \frac \partial {\partial t}\int_M\varphi (x)e^{F(x,t)}dV_0&\geq&\int_M\left( \bigtriangleup _0F(x,t)-R(x,0)\right) \varphi (x)dV_0\nonumber\\ &\geq&\frac{C_3}{r^2}\int_MF(x,t)\varphi (x)dV_0-\int_MR(x,0)\varphi (x)dV_0\ ,\nonumber \end{eqnarray} where $dV_0$ denotes the volume element of the initial metric $g_{\alpha \overline{\beta }}(x,0)$.
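Here, the second inequality follows from (\ref{2.7}) after integrating by parts (the boundary terms vanish owing to the exponential decay of $\varphi $): since $F(\cdot ,t)$ is nonincreasing in time with $F(\cdot ,0)\equiv 0$, we have $F\leq 0$ and hence $$ \int_M\bigtriangleup _0F(x,t)\cdot \varphi (x)dV_0=\int_MF(x,t)\bigtriangleup _0\varphi (x)dV_0\geq -\frac{C_3}{r^2}\int_M\left| F(x,t)\right| \varphi (x)dV_0=\frac{C_3}{r^2}\int_MF(x,t)\varphi (x)dV_0\ . $$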
Recall that $F(\cdot ,t)$ is nonincreasing in time and $F(\cdot ,0)\equiv 0$ . We integrate the above inequality from $0$ to $t$ to get \begin{equation} \label{2.8}\int_M\varphi (x)\left( 1-e^{F(x,t)}\right) dV_0\leq \frac{C_3t}{ r^2}\int_M\left( -F(x,t)\right) \varphi (x)dV_0+t\int_MR(x,0)\varphi (x)dV_0\ . \end{equation} Since the metric is shrinking under the Ricci flow, we have $$ B_t(x_0,r)\supset B_0(x_0,r)\ \qquad \mbox{for} \quad t\geq 0\ , \quad 0<r<+\infty \ , $$ and \begin{eqnarray} \label{2.9} \mbox{Vol}_t(B_t(x_0,r))& \geq & \mbox{Vol}_t(B_0(x_0,r)) \nonumber \\ & = & \int_{B_0(x_0,r)}e^{F(x,t)}dV_0 \nonumber \\ & = & \mbox{Vol}_0(B_0(x_0,r))+\int_{B_0(x_0,r)}\left( e^{F(x,t)}-1\right) dV_0\ . \end{eqnarray} Then by (\ref{2.7}) and (\ref{2.8}), the last term in (\ref{2.9}) satisfies \begin{eqnarray} \label{2.10} \int_{B_0(x_0,r)}\left( e^{F(x,t)}-1\right) dV_0&\geq&e^{2C_3}\int_M\left( e^{F(x,t)}-1\right) \varphi (x)dV_0\nonumber\\ &\geq&\frac{C_3e^{2C_3}t}{r^2}\int_MF(x,t)\varphi (x)dV_0\nonumber\\ &&-e^{2C_3}t\int_MR(x,0)\varphi (x)dV_0\ . \end{eqnarray} To estimate the two terms of the right hand side of (\ref{2.10}), we consider any fixed $ T_0<T_{\max }$. Since the curvature is uniformly bounded on $M\times [0,T_0]$ , it is clear from the equation (\ref{2.3}) that $F(x,t)$ is also uniformly bounded on $M\times [0,T_0]$.
Set $$
A=\sup \left\{ \left. \left| F(x,t)\right| \right| x\in M\ ,\ t \in[0,T_0]\right\} $$ and $$ M(r)=\sup \limits_{a\geq r}\frac 1{\mbox{Vol}_0\left( B_0(x_0,a)\right) }\int_{B_0(x_0,a)}R(x,0)dV_0\ . $$ Then condition $\mbox{(ii)}^{\prime \prime }$ says that $M(r)\rightarrow 0$ as $ r\rightarrow +\infty $. By using the standard volume comparison theorem and ( \ref{2.7}), we have \begin{eqnarray} \label{2.11} \int_MR(x,0)\varphi (x)dV_0&\leq&\int_MR(x,0)e^{-\left( 1+\frac{d_0(x,x_0)}r\right) }dV_0 \nonumber \\ & = & \int_{B_0(x_0,r)}R(x,0)e^{-\left( 1+\frac{d_0(x,x_0)}r\right) }dV_0 \nonumber \\ & & + \sum\limits_{k=0}^\infty \int_{B_0(x_0,2^{k+1}r) \backslash B_0(x_0,2^kr)}R(x,0)e^{-\left( 1+\frac{d_0(x,x_0)}r\right) }dV_0 \nonumber \\ & \leq & \int_{B_0(x_0,r)}R(x,0)dV_0 + \sum\limits_{k=0}^\infty e^{-2^k}\left( 2^{k+1}\right) ^4 \cdot \nonumber \\ & & \frac{\mbox{Vol}_0\left( B_0(x_0,r)\right) }{\mbox{Vol}_0 \left( B_0(x_0,2^{k+1}r)\right) }\int_{B_0(x_0,2^{k+1}r)}R(x,0)dV_0\nonumber\\ & \leq & C_4\cdot M(r)\cdot \mbox{Vol}_0\left( B_0(x_0,r)\right) \ , \end{eqnarray} and similarly \begin{eqnarray} \label{2.12} \int_M\varphi (x)dV_0&\leq&\int_{B_0(x_0,r)}e^{-\left( 1+\frac{d_0(x,x_0)}r\right) }dV_0\nonumber\\ &&+\sum\limits_{k=0}^\infty \int_{B_0(x_0,2^{k+1}r)\backslash B_0(x_0,2^kr)}e^{-\left( 1+\frac{d_0(x,x_0)}r\right) }dV_0\nonumber\\ &\leq&\mbox{Vol}_0\left( B_0(x_0,r)\right) +\sum\limits_{k=0}^\infty e^{-2^k}\left( 2^{k+1}\right) ^4\cdot \mbox{Vol}_0\left( B_0(x_0,r)\right)\nonumber\\ &\leq&C_4\mbox{Vol}_0\left( B_0(x_0,r)\right) \ , \end{eqnarray} where $C_4$ is some positive constant independent of $r$.
Substituting (\ref{2.10}), (\ref{2.11}) and (\ref{2.12}) into (\ref{2.9}) and dividing by $r^4$, we obtain \begin{eqnarray} \frac{\mbox{Vol}_t(B_t(x_0,r))}{r^4}&\geq&\frac{\mbox{Vol}_0(B_0(x_0,r))}{r^4}-\frac{C_3e^{2C_3}AT_0}{r^2}\left( C_4\frac{\mbox{Vol}_0(B_0(x_0,r))}{r^4}\right)\nonumber\\ && -e^{2C_3}T_0\left( C_4M(r)\cdot \frac{\mbox{Vol}_0(B_0(x_0,r))}{r^4}\right)\nonumber\\ &\geq&C_1-\frac{C_3e^{2C_3}AT_0\cdot C_4C_1}{r^2}-e^{2C_3}T_0C_4C_1\cdot M(r)\nonumber \end{eqnarray} by condition (i). Then letting $r\rightarrow +\infty $, we deduce that $$ \lim \limits_{r\rightarrow +\infty }\frac{\mbox{Vol}_t\left( B_t(x_0,r)\right) }{r^4 }\geq C_1\ . $$ Hence, by using the standard volume comparison theorem we have $$ \mbox{Vol}_t\left( B_t(x,r)\right) \geq C_1r^4\ \quad \mbox{for all}\ x\in M\ ,\ 0\leq r<+\infty, \,\, t\in [0,T_0]\ . $$ Finally, since $T_0<T_{\max }$ is arbitrary, this completes the proof of the proposition.
$\Box$
\section*{\S3. Singularity Models}
\setcounter{section}{3} \setcounter{equation}{0}
\qquad In this and the next section we will continue our study of the Ricci flow (\ref{2.1}). We will use rescaling arguments to analyse the behavior of the solution of (\ref{2.1}) near the maximal time $T_{\max }.$
First of all, let us recall some basic terminology. According to Hamilton (see, for example, Definition 16.3 in \cite{Ha3}), a solution to the Ricci flow, where either the manifold is compact or at each time $t$ the evolving metric is complete and has bounded curvature, is called a singularity model if it is not flat and is of one of the following three types. Here, we have used $Rm$ to denote the Riemannian curvature tensor.
\begin{description} \item[Type I:] The solution exists for $-\infty <t<\Omega $ for some $ 0<\Omega <+\infty $ and $$
|Rm|\leq \frac \Omega {\Omega -t} $$ \hspace{6mm} everywhere with equality somewhere at $t=0$;
\item[Type II:] The solution exists for $-\infty <t<+\infty $ and $$
|Rm|\leq 1 $$ \hspace{8mm} everywhere with equality somewhere at $t=0$;
\item[Type III:] The solution exists for $-A<t<+\infty $ for some $0<A<+\infty$ and $$
|Rm|\leq \frac A{A+t} $$ \hspace{10mm} everywhere with equality somewhere at $t=0$. \end{description}
The singularity models of Type I and II are called ancient solutions in the sense that the existence time interval of the solution contains $(-\infty ,0]$.
Next, we recall the local injectivity radius estimate of Cheeger, Gromov and Taylor \cite{CGT}. Let $N$ be a complete Riemannian manifold of dimension $m$ with $\lambda \leq \mbox{sectional curvature of}\, N \leq \Lambda $ and let $r$ be a positive constant satisfying $r\leq \frac \pi {4\sqrt{\Lambda }}$ if $\Lambda >0$. Then the injectivity radius of $N$ at a point $x$ is bounded from below as follows, $$ \mbox{inj}_N(x) \geq r\cdot \frac{\mbox{Vol}(B(x,r))}{\mbox{Vol} (B(x,r))+V^m_{\lambda}(2r)}\ , $$ where $V^m_{\lambda}(2r)$ denotes the volume of a ball with radius $2r$ in the $m$-dimensional model space $V^m_{\lambda}$ with constant sectional curvature $\lambda .$ In particular, this implies that for a complete Riemannian manifold $N$ of dimension $4$ with sectional curvature bounded between $-1$ and $1$, the injectivity radius at a point $x$ can be estimated as \begin{equation} \label{3.1} \mbox{inj}_N(x) \geq \frac 12\cdot \frac{\mbox{Vol}(B(x,\frac 12))} {\mbox{Vol}(B(x,\frac 12))+V} \end{equation} for some absolute positive constant $V$. Furthermore, if in addition $N$ satisfies the maximal volume growth condition $$ \mbox{Vol} \left( B(x,r)\right) \geq C_1r^4\ ,\quad 0\leq r<+\infty \ , $$ then (\ref{3.1}) gives \begin{equation} \label{3.2} \mbox{inj}_N(x) \geq \beta >0 \end{equation} for some positive constant $\beta $ depending only on $C_1$ and $V.$
Now, return to our setting. Let $(M,g_{\alpha \overline{\beta }})$ be a complete, noncompact K\"ahler surface satisfying the same assumptions as in the Main Theorem and let $g_{\alpha \overline{\beta } }(x,t)$ be the solution of the Ricci flow (\ref{2.1}) on $M\times [0,T_{\max})$. Denote $$ R_{\max }(t)=\sup \limits_{x\in M}R(x,t)\ . $$ We have shown in Proposition 2.1 that the solution $g_{\alpha \overline{ \beta }}(\cdot ,t)$ satisfies the same maximal volume growth condition (i) as the initial metric. Since condition (i) is invariant under rescaling of metrics, by a simple rescaling argument, we get the following injectivity radius estimate for the solution $g_{\alpha \overline{\beta }}(\cdot ,t),$ \begin{equation} \label{3.3} \mbox{inj}(M,g_{\alpha \overline{\beta }}(\cdot ,t))\geq \frac \beta { \sqrt{R_{\max }(t)}} \qquad \mbox{for}\;t \in [0,T_{\max }). \end{equation} Then, by applying a result of Hamilton (see Theorems 16.4 and 16.5 in \cite{Ha3}), we know that there exists a sequence of dilations of the solution which converges to one of the singularity models of Type I, II or III. We will analyse this limit in the next section.
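The invariance of condition (i) under rescaling can be checked directly: if $\widetilde{g}_{\alpha \overline{\beta }}=\lambda ^2g_{\alpha \overline{\beta }}$ for a constant $\lambda >0$, then distances are multiplied by $\lambda $, so the geodesic ball $\widetilde{B}(x,r)$ of the rescaled metric coincides with $B(x,r/\lambda )$, while volumes are multiplied by $\lambda ^4$ in real dimension four. Hence $$ \widetilde{\mbox{Vol}}\left( \widetilde{B}(x,r)\right) =\lambda ^4\mbox{Vol}\left( B(x,r/\lambda )\right) \geq \lambda ^4\cdot C_1\left( \frac r\lambda \right) ^4=C_1r^4\ , $$ with the same constant $C_1$.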
We conclude this section with the following lemma which will be very useful in our analysis of the Type I and Type II limits.
\vskip 3mm {\bf \underline{Lemma 3.1}} \quad
Suppose $(\widetilde{M},\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t))$ is a complete ancient solution to the Ricci flow on a noncompact K\"ahler surface with nonnegative and bounded holomorphic bisectional curvature for all time. Then the curvature operator of the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ is nonnegative definite everywhere on $\widetilde{M} \times (-\infty ,0]$.
\vskip 3mm {\bf \underline{Proof}} \quad Choose a local orthonormal coframe $\{\omega _1,\omega _2,\omega _3,\omega _4\}$ on an open set $U\subset \widetilde{M}$ so that $\omega _1+\sqrt{-1}\omega _2$ and $\omega_3+\sqrt{-1}\omega _4$ are $(1,0)$ forms over $U$. Then the self--dual forms $$ \varphi _1=\omega _1\land \omega _2+\omega _3\land \omega _4, \,\,\, \varphi_2=\omega _2\land \omega _3+\omega _1\land \omega _4, \,\,\, \varphi _3=\omega_3\land \omega _1+\omega _2\land \omega _4 $$ and the anti--self--dual forms $$ \psi_1=\omega _1\land \omega _2-\omega _3\land \omega _4, \,\,\, \psi _2=\omega_2\land \omega _3-\omega _1\land \omega _4, \,\,\, \psi _3=\omega _3\land \omega_1-\omega _2\land \omega _4 $$ form a basis of the space of $2$ forms over $U$. In particular, $\varphi _1,\psi _1,\psi _2$ and $\psi _3$ give a basis for the space of $(1,1)$ forms over $U$.
On a K\"ahler surface, it is well known that its curvature operator has image in the holonomy algebra $u(2) \,(\subset so(4))$ spanned by $(1,1)$ forms. Thus, the curvature operator ${\bf M}$ in the basis $\{\varphi_1,\varphi _2,\varphi _3,\psi _1,\psi _2,\psi _3\}$ has the following form, $$ {\bf M}=\left( \begin{array}{cc} \begin{array}{ccc} a & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} & \begin{array}{ccc} b_1 & b_2 & b_3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \\ \begin{array}{ccc} b_1 & 0 & 0 \\ b_2 & 0 & 0 \\ b_3 & 0 & 0 \end{array} & A \end{array} \right) \ $$ where $A$ is a $3\times 3$ symmetric matrix.
Let $V$ be a real tangent vector of the K\"ahler surface $\widetilde{M}$. Denote by $J$ the complex structure of the K\"ahler surface $\widetilde{M}$. It is clear that the complex $2$--plane $V\wedge JV$ is dual to the $(1,1)$ form $u\varphi _1+v_1\psi _1+v_2\psi _2+v_3\psi _3$ satisfying the decomposability condition $u^2=v_1^2+v_2^2+v_3^2$. Then after normalizing $u$ to 1 by scaling, we see that the holomorphic bisectional curvature is nonnegative if and only if \begin{equation} \label{3.4} a+b\cdot v+b\cdot w+{}^t\negthinspace vAw\geq 0\ , \end{equation} for any unit vectors $v=(v_1,v_2,v_3)$ and $w=(w_1,w_2,w_3)$ in ${\bf R^3}$, where $b$ is the vector $(b_1,b_2,b_3)$ appearing in ${\bf M}$.
Denote by $a_1\leq a_2\leq a_3$ the eigenvalues of $A$. Recall that $\mbox{tr}\,A=a $ by the Bianchi identity, so if we choose $v$ to be the eigenvector of $A $ with eigenvalue $a_3$ and choose $w=-v$, (\ref{3.4}) gives \begin{equation} \label{3.5}a_1+a_2\geq 0\ . \end{equation} In particular, we have $a_2\geq 0$.
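Indeed, with $w=-v$ the two terms involving $b$ in (\ref{3.4}) cancel, leaving $$ a-{}^t\negthinspace vAv=a-a_3\geq 0\ , $$ and since $\mbox{tr}\,A=a_1+a_2+a_3=a$, this is exactly $a_1+a_2\geq 0$; the inequality $a_2\geq 0$ then follows from $a_1\leq a_2$.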
To proceed further, we need to adapt the maximum principle for parabolic equations on compact manifolds in Hamilton \cite{Ha1} to $\widetilde{M}$.
Let $$ \left( a_i\right) _{\min }(t)=\inf\limits_{x\in \widetilde{M}}a_i(x,t)\ ,\qquad i=1,2,3\ $$ and $$
K=\sup \limits_{(x,t)\in \widetilde{M}\times (-\infty ,0]}\left|
Rm(x,t)\right|. $$ By assumption, the ancient solution $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ has bounded holomorphic bisectional curvature, hence $K$ is finite. Thus, by the derivative estimate of Shi \cite{Sh1} (see also Theorem 7.1 in \cite{Ha3}), the higher order derivatives of the curvature are also uniformly bounded. In particular, we can use the maximum principle of Cheng--Yau (see Proposition 1.6 in \cite{CY}) and then, as observed in \cite{Ha3}, this implies that the maximum principle of Hamilton in \cite{Ha1} actually works for the evolution equations of the curvature of $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ on the complete noncompact manifold $\widetilde{M}.$ Thus, from \cite{Ha1}, we obtain \begin{eqnarray} \frac{d\left( a_1\right) _{\min }}{dt}&\geq&\left( \left( a_1\right)_ {\min }\right) ^2+2\left( a_2\right) _{\min }\cdot \left( a_3\right) _{\min } \nonumber \\ & \geq & 3\left( \left( a_1\right) _{\min }\right) ^2 \nonumber \end{eqnarray} by (\ref{3.5}). Then, for fixed $t_0\in (-\infty ,0)$ and $t>t_0,$ \begin{eqnarray} \left( a_1\right) _{\min }(t)&\geq&\frac 1{\left( a_1\right)_ {\min }^{-1}\left( t_0\right) -3\left( t-t_0\right) } \nonumber \\ & \geq & \frac 1{-K^{-1}-3\left( t-t_0\right) }\ . \nonumber \end{eqnarray} Letting $t_0\rightarrow -\infty $, we get \begin{equation} \label{3.6} a_1\geq 0\ ,\qquad \mbox{for all} \;\;\; (x,t) \in \widetilde{M} \times (-\infty ,0]\, \end{equation} i.e. $A \geq 0$.
Finally, to prove the nonnegativity of the curvature operator $\bf{M}$, we recall its corresponding ODE from \cite{Ha1},
$$ \frac{d{\bf M}}{dt}={\bf M}^2+\left( \begin{array}{cc} 0 & 0 \\ 0 & A^{\#} \end{array} \right) \ , $$ where $A^{\#} \geq 0$ is the adjoint matrix of $A.$
Let $m_1$ be the smallest eigenvalue of the curvature operator ${\bf M}$. By using the maximum principle of Hamilton ( \cite{Ha1} or \cite {Ha3} ) again, we have $$ \frac{d\left( m_1\right) _{\min }}{dt}\geq \left( m_1\right) _{\min }^2 $$ where $\left( m_1\right) _{\min }(t)=\inf \limits_{x\in \widetilde{M} }m_1(x,t)$. Therefore, by the same reasoning in the derivation of (\ref{3.6}), we have \begin{equation} \label{3.7} m_1\geq 0\ ,\qquad \mbox{for all} \;\; (x,t)\in \widetilde{M}\times (-\infty ,0]\ . \end{equation} So ${\bf{M}} \geq 0$ and the proof of the lemma is completed.
$\Box $
\section*{\S4. Time decay estimate on curvature}
\setcounter{section}{4} \setcounter{equation}{0}
\qquad Let $(M,g_{\alpha \overline{\beta }}(x))$ be a complete noncompact K\"ahler surface satisfying all the assumptions in the Main Theorem and let $(M,g_{\alpha \overline{\beta }}(\cdot ,t))$, $t\in [0,T_{\max })$, be the maximal solution of the Ricci flow (\ref{2.1}) with $g_{\alpha \overline{\beta }}(\cdot )$ as the initial metric. Clearly, the maximal solution is of one of the following types.
\begin{description} \item[{Type I:}] \qquad $T_{\max }<+\infty $ and $\sup \limits_{t\in [0,T_{\max })}\left( T_{\max }-t\right) R_{\max }(t)<+\infty $;
\item[{Type II(a):}] $T_{\max }<+\infty $ and $\sup \limits_{t\in [0,T_{\max })}\left( T_{\max}-t\right) R_{\max }(t)=+\infty $;
\item[{Type II(b):}] $T_{\max }=+\infty $ and $\sup \limits_{t\in [0,+\infty )}tR_{\max }(t)=+\infty $;
\item[{Type III:}] \quad $T_{\max }=+\infty $ and $\sup \limits_{t\in [0,+\infty )}tR_{\max }(t)<+\infty $. \end{description}
In Section 3, we have proved that the maximal solution satisfies the following injectivity radius estimate $$ \mbox{inj}(M,g_{\alpha \overline{\beta }}(\cdot ,t))\geq \frac \beta {\sqrt{R_{\max}(t)}}\ \qquad \mbox{on} \quad [0,T_{\max }), $$ for some $\beta >0$. By applying a result of Hamilton (Theorems 16.4 and 16.5 in \cite{Ha3}), we know that there exists a sequence of dilations of the solution converging to a singularity model of the corresponding type. Note that since the maximal solution is complete and noncompact, the limit must also be complete and noncompact. The following is the main result of this section which says that this limit must be of Type III or equivalently, the maximal solution must be of Type III.
\vskip 3mm{\bf \underline{Theorem 4.1}} \quad Let $(M,g_{\alpha \overline{\beta }}(x))$ be a complete noncompact K\"ahler surface as above.
Then the Ricci flow (\ref{2.1}) with $g_{\alpha \overline{\beta }}(x)$ as the initial metric has a solution $g_{\alpha \overline{\beta }}(x,t)$ for all $t\in [0,+\infty) $ and $x\in M$. Moreover, the scalar curvature $R(x,t)$ of the solution satisfies \begin{equation} \label{4.1} 0\leq R(x,t)\leq \frac C{1+t}\ \qquad \mbox{on} \quad M\times [0,+\infty)\ , \end{equation} for some positive constant $C$.
\vskip 3mm{\bf \underline{Proof.}} \quad We argue by contradiction. Suppose the maximal solution is of Type I or Type II and let $(\widetilde{M},\widetilde{g}_{\alpha \overline{\beta }}(x,t))$ be the limit of a sequence of dilations of the maximal solution which
is then a singularity model of Type I or Type II respectively. After a study of its properties, we can blow down the singularity model and apply a dimension reduction argument to obtain the desired contradiction.
Now, recall that the maximal solution satisfies the maximal volume growth condition (i) by Proposition 2.1. Since condition (i) is also invariant under rescaling, we see that the singularity model $(\widetilde{M},\widetilde{g}_{\alpha \overline{\beta }}(x,t))$ also satisfies the maximal volume growth condition, i.e. \begin{equation} \label{4.2} \mbox{Vol}_t\left( \widetilde{B}_t(x,r)\right) \geq C_1r^4\ \qquad \mbox{for all} \quad 0\leq r< +\infty \quad \mbox{and} \quad x\in \widetilde{M}\ , \end{equation} where $\mbox{Vol}_t\left( \widetilde{B}_t(x,r)\right) $ denotes the volume of the geodesic ball $\widetilde{B}_t(x,r)$ of radius $r$ with center at $x$ with respect to the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t).$
It is clear that the limit $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ has nonnegative holomorphic bisectional curvature. Thus, from Lemma 3.1, the curvature operator of the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ is nonnegative definite everywhere.
Denote by $\widetilde{R}(\cdot ,t)$ the scalar curvature of $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ and $\widetilde{d}_t(x,x_0)$ the geodesic distance between two points $x,x_0\in \widetilde{M}$ with respect to the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot,t)$. We claim that at time $t=0,$ we have \begin{equation} \label{4.3} \limsup \limits_{\widetilde{d}_0(x,x_0)\rightarrow +\infty } \widetilde{R}(x,0)\widetilde{d}_0^2(x,x_0)=+\infty \ \end{equation} for any fixed $x_0\in \widetilde{M}.$
Suppose not; then the curvature of the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,0)$ decays quadratically. Now, by applying a result of Shi (see Theorem 8.2 in \cite{Sh4}), we know that the solution $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ of the Ricci flow exists for all $t\in (-\infty ,+\infty )$ and satisfies \begin{equation} \label{4.4} \lim \limits_{t\rightarrow +\infty }\sup \left\{ \left.
\widetilde{R}(x,t)\right| \ x\in \widetilde{M}\right\} =0\ . \end{equation} On the other hand, by the Harnack inequality of Cao \cite{Cao}, we have \begin{equation} \label{4.5} \frac{\partial \widetilde{R}}{\partial t}\geq 0\ \qquad \mbox{on} \quad \widetilde{M}\times (-\infty ,+\infty )\ . \end{equation} Thus, combining (\ref{4.4}) and (\ref{4.5}), we deduce that $$ \widetilde{R}\equiv 0\ ,\qquad \mbox{for all} \quad (x,t)\in \widetilde{M} \times (-\infty ,+\infty )\ $$ and hence $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ is flat for all $t\in (-\infty ,+\infty )$. But, by definition, a singularity model cannot be flat. This proves our claim (\ref{4.3}).
With the estimate (\ref{4.3}), we can then apply a lemma of Hamilton (Lemma 22.2 in \cite{Ha3}) to find a sequence of positive numbers $\delta _j$, $j=1,2,\cdots$, with $\delta_j\rightarrow 0$ such that
\begin{enumerate} \item[{(a)}] $\widetilde{R}(x,0)\leq (1+\delta _j)\widetilde{R}(x_j,0)$ for all $x$ in the ball $\widetilde{B}_0(x_j,r_j)$ of radius $r_j$ centered at $x_j$ with respect to the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot,0)$;
\item[{(b)}] $r_j^2\widetilde{R}(x_j,0)\rightarrow +\infty $;
\item[{(c)}] if $s_j=\widetilde{d}_0(x_j,x_0)$, then $\lambda _j=s_j/r_j\rightarrow +\infty $;
\item[{(d)}] the balls $\widetilde{B}_0(x_j,r_j)$ are disjoint. \end{enumerate}
Denote the minimum of the holomorphic sectional curvature of the metric $ \widetilde{g}_{\alpha \overline{\beta }}(\cdot ,0)$ at $x_j$ by $h_j$. We claim that the following holds \begin{equation} \label{4.6} \varepsilon _j=\frac{h_j}{\widetilde{R}(x_j,0)}\rightarrow 0\ \qquad \mbox{as} \quad j\rightarrow +\infty \ . \end{equation}
Suppose not; then there exist a subsequence $j_k\rightarrow +\infty $ and a positive number $\varepsilon >0$ such that \begin{equation} \label{4.7} \varepsilon _{j_k}=\frac{h_{j_k}}{\widetilde{R}(x_{j_k},0)}\geq \varepsilon \ \qquad \mbox{for all} \quad k=1,2,\cdots \ . \end{equation}
Since the solution $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ is ancient, it follows from the Harnack inequality of Cao \cite{Cao} that the scalar curvature $\widetilde{R}(x,t)$ is pointwise nondecreasing in time. Then, by using the local derivative estimate of Shi \cite{Sh1} (or see Theorem 13.1 in \cite{Ha3}) and (a), (b), we have \begin{eqnarray} \label{4.8} \sup \limits_{x\in \widetilde{B}_0(x_{j_k},r_{j_k})}\left| \nabla \widetilde{R}m(x,0)\right| ^2&\leq&C_5\widetilde{R}^2(x_{j_k},0) \left( \frac 1{r_{j_k}^2}+\widetilde{R}(x_{j_k},0)\right) \nonumber \\ &\leq&2C_5\widetilde{R}^3(x_{j_k},0)\ , \end{eqnarray} where $\widetilde{R}m$ is the curvature tensor of $\widetilde{g}_{\alpha \overline{\beta }}$ and $C_5$ is a positive constant depending only on the dimension.
For any $x\in \widetilde{B}_0(x_{j_k},r_{j_k})$, we obtain from (\ref{4.7}) and (\ref{4.8}) that the minimum $h_{\min }(x)$ of the holomorphic sectional curvature at $x$ satisfies \begin{eqnarray} \label{4.9} h_{\min }(x)&\geq&h_{j_k}-\sqrt{2C_5}\widetilde{R}^{3/2}(x_{j_k},0)\widetilde{d}_0(x,x_{j_k})\nonumber\\ &\geq&\widetilde{R}(x_{j_k},0)\left( \varepsilon -\sqrt{2C_5}\cdot \sqrt{\widetilde{R}(x_{j_k},0)}\cdot \widetilde{d}_0(x,x_{j_k})\right)\nonumber\\ &\geq&\frac \varepsilon 2\widetilde{R}(x_{j_k},0)\ \end{eqnarray} if $$ \widetilde{d}_0(x,x_{j_k})\leq \frac{\varepsilon}{2\sqrt{2C_5}\cdot \sqrt{ \widetilde{R}(x_{j_k},0)}}\ . $$ Thus, from (a) and (\ref{4.9}), there exists $k_0>0$ such that for any $k\geq k_0$ and $$ x \in \widetilde{B}_0(x_{j_k},\frac{\varepsilon}{2\sqrt{2C_5} \cdot \sqrt{\widetilde{R}(x_{j_k},0)}}), $$ we have \begin{equation} \label{4.10} \frac \varepsilon 2\widetilde{R}(x_{j_k},0) \leq \mbox{holomorphic sectional curvature at $x$} \leq 2\widetilde{R}(x_{j_k},0). \end{equation}
We have proved that the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,0)$ has nonnegative definite curvature operator. In particular, the sectional curvature is nonnegative. Then, by the generalized Cohn--Vossen inequality in real dimension 4 \cite{GW1}, we have \begin{equation} \label{4.11} \int_{\widetilde{M}}\Theta \leq \chi \left( \widetilde{M}\right) <+\infty \end{equation} where $\Theta $ is the Gauss--Bonnet--Chern integrand for the metric $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,0)$ and $\chi \left( \widetilde{M}\right)$ is the Euler number of the manifold $\widetilde{M}$, which has finite topological type by the soul theorem of Cheeger--Gromoll.
On the other hand, from the proof of Theorem 1.3 of Bishop--Goldberg \cite{BG} (see Page 523 of \cite{BG}), the inequality (\ref{4.10}) implies that \begin{equation} \label{4.12} \Theta (x)\geq C(\varepsilon)\widetilde{R}^2(x_{j_k},0)\ \qquad \mbox{for all} \;\;\; x\in \widetilde{B}_0(x_{j_k}, \frac{\varepsilon}{2\sqrt{2C_5}\cdot \sqrt{ \widetilde{R}(x_{j_k},0)}}), \end{equation}
where $C(\varepsilon)$ is some positive constant depending only on $\varepsilon$. Now, by combining (\ref{4.2}), (b), (d), (\ref{4.11}) and (\ref{4.12}), we get \begin{eqnarray*} +\infty >\chi \left( \widetilde{M}\right) & \geq & \sum\limits_{k=k_0}^\infty \int_{\widetilde{B}_0(x_{j_k}, \frac{\displaystyle \varepsilon}{2\sqrt{2C_5}\cdot \sqrt{ \widetilde{R}(x_{j_k},0)}})}\Theta \\ & \geq & C(\varepsilon)\sum\limits_{k=k_0}^\infty \widetilde{R}^2 (x_{j_k},0)\cdot C_1\left( \frac\varepsilon {2\sqrt{2C_5}\cdot \sqrt{\widetilde{R}(x_{j_k},0)}}\right) ^4 \\ & = & C(\varepsilon )\sum\limits_{k=k_0}^\infty \frac{C_1\varepsilon ^4}{64C_5^2}\\ & = & +\infty \end{eqnarray*} which is a contradiction. Hence our claim (\ref{4.6}) is proved.
Now, we are going to blow down the singularity model $(\widetilde{M}, \widetilde{g}_{\alpha \overline{\beta }}(x,t))$. For the above chosen $x_j$, $r_j$ and $\delta _j$, let $x_j$ be the new origin $O$, dilate the space by a factor $\lambda _j$ so that $\widetilde{R}(x_j,0)$ becomes $1$ at the origin at $t=0$, and dilate in time by $\lambda _j^2$ so that it is still a solution to the Ricci flow. The balls $\widetilde{B}_0(x_j,r_j)$ are dilated to the balls centered at the origin of radii $\widetilde{r}_j=r_j^2 \widetilde{R}(x_j,0)\rightarrow +\infty $ (by (b)). Since the scalar curvature of $\widetilde{g}_{\alpha \overline{\beta }}(x,t)$ is pointwise nondecreasing in time by the Harnack inequality, the curvature bounds on $ \widetilde{B}_0(x_j,r_j)$ also give bounds for previous times in these balls. Moreover, the maximal volume growth estimate (\ref{4.2}) and the local injectivity radius estimate of Cheeger, Gromov and Taylor \cite{CGT} imply that $$ \mbox{inj}_{\widetilde{M}}\left( x_j,\widetilde{g}_{\alpha \overline{\beta }}(\cdot,0)\right) \geq \frac \beta {\sqrt{\widetilde{R}(x_j,0)}}\ , $$ for some positive constant $\beta $ independent of $j.$
So we have all the ingredients needed to take a limit of the dilated solutions. By applying the compactness theorem in \cite{Ha2} and combining (\ref{4.2}), (\ref{4.6}), (a) and (b), we obtain a complete noncompact solution, still denoted by $(\widetilde{M},\widetilde{g}_{\alpha \overline{\beta }}(x,t))$, for $t\in (-\infty ,0]$ such that
\begin{enumerate} \item[{(e)}] the curvature operator is still nonnegative;
\item[{(f)}] $\widetilde{R}(x,t)\leq 1$, for all $x\in \widetilde{M}$, $ t\in (-\infty ,0]$, and $\widetilde{R}(0,0)=1$;
\item[{(g)}] $\mbox{Vol}_t\left( \widetilde{B}_t(x,r)\right) \geq C_1r^4$ for all $x\in \widetilde{M}$, $0\leq r<+\infty $;
\item[{(h)}] there exists a complex $2$--plane $V\wedge JV$ at the origin $ O $ so that at $t=0$, the corresponding holomorphic sectional curvature vanishes. \end{enumerate}
If we consider the universal covering of $\widetilde{M}$, the induced metric of $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ on the universal covering is clearly still a solution to the Ricci flow and satisfies all of the above properties (e), (f), (g), (h). Thus, without loss of generality, we may assume that $\widetilde{M}$ is simply connected.
Next, by using the strong maximum principle on the evolution equation of the curvature operator of $\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t)$ as in \cite{Ha1} (see Theorem 8.3 of \cite{Ha1}), we know that there exists a constant $K>0$ such that on the time interval $-\infty <t<-K$, the image of the curvature operator of $(\widetilde{M},\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t))$ is a fixed Lie subalgebra of $so(4)$ of constant rank on $\widetilde{M}$. Because $\widetilde{M}$ is K\"ahler, the possibilities are limited to $u(2)$, $so(2)\times so(2)$ or $so(2).$
In the case $u(2)$, the sectional curvature is strictly positive. Thus, this case is ruled out by (h). In the cases $so(2)\times so(2)$ or $so(2)$, according to \cite{Ha1}, the simply connected manifold $\widetilde{M}$ splits as a product $\widetilde{M}=\Sigma _1\times \Sigma _2$, where $\Sigma _1$ and $\Sigma _2$ are two Riemann surfaces with nonnegative curvature (by (e)), and at least one of them, say $\Sigma _1$, has positive curvature (by (f)).
Denote by $\widetilde{g}_{\alpha \overline{\beta }}^{(1)}(\cdot ,t)$ the corresponding metric on $\Sigma _1$. Clearly, it follows from (g) and standard volume comparison that for any $x\in \Sigma _1$, $t\in (-\infty ,-K) $, we have \begin{equation} \label{4.13} \mbox{Vol}B_{\Sigma_1}(x,r) \geq C_6 r^2 \qquad \mbox{for} \;\; 0 \leq r < +\infty \end{equation}
where both the geodesic ball $B_{\Sigma_1}(x,r)$ and the volume are taken with respect to the metric $\widetilde{g}_{\alpha \overline{\beta }}^{(1)}(\cdot ,t)$ on $\Sigma_1$, and $C_6$ is a positive constant depending only on $C_1.$ Also, as the curvature of $\widetilde{g}_{\alpha \overline{\beta }}^{(1)}(x,t)$ is positive, it follows from the Cohn--Vossen inequality that \begin{equation} \label{4.14} \int_{\Sigma _1}\widetilde{R}^{(1)}(x,t)d\sigma _t\leq 8\pi \ , \end{equation} where $\widetilde{R}^{(1)}(x,t)$ is the scalar curvature of $(\Sigma _1, \widetilde{g}_{\alpha \overline{\beta }}^{(1)}(x,t))$ and $d\sigma _t$ is the volume element of the metric $\widetilde{g}_{\alpha \overline{\beta } }^{(1)}(x,t).$
Now, the metric $\widetilde{g}_{\alpha \overline{\beta }}^{(1)}(x,t)$ is a solution to the Ricci flow on the Riemann surface $\Sigma _1$ over the ancient time interval $(-\infty ,-K)$. Thus, (\ref{4.13}) and (\ref{4.14}) imply that for each $t\in (-\infty ,-K)$, the curvature of $\widetilde{g}_{\alpha \overline{\beta }}^{(1)}(x,t)$ has quadratic decay in the average sense of Shi \cite{Sh4} and then the a priori estimate of Shi (see Theorem 8.2 in \cite{Sh4}) implies that the solution $\widetilde{g}_{\alpha \overline{\beta }}^{(1)}(x,t)$ exists for all $t\in (-\infty ,+\infty )$ and satisfies \begin{equation} \label{4.15} \lim \limits_{t\rightarrow +\infty }\sup \left\{ \left.
\widetilde{R}^{(1)}(x,t)\right| \ x\in \Sigma _1\right\} =0. \end{equation} Again, by the Harnack inequality of Cao \cite{Cao}, we know that $\widetilde{R}^{(1)}(x,t)$ is pointwise nondecreasing in time. Therefore, we conclude that $$ \widetilde{R}^{(1)}(x,t)\equiv 0\qquad \mbox{on}\quad \Sigma _1\times (-\infty,+\infty )\ . $$ This contradicts the fact that $(\Sigma _1,\widetilde{g}_{\alpha \overline{\beta }}(\cdot ,t))$ has positive curvature for $t<-K.$ Hence we have reached the desired contradiction and completed the proof of Theorem 4.1.
$\Box $
\section*{\S5. Topology and Steinness}
\setcounter{section}{5} \setcounter{equation}{0}
\qquad In this section, we use the estimates obtained in the previous sections to study the topology and the complex structure of the K\"ahler surface in our Main Theorem. Our result is
\vskip 3mm{\bf \underline{Theorem 5.1}} \quad Suppose $(M,g_{\alpha \overline{\beta }})$ is a complete noncompact K\"ahler surface satisfying the assumptions in the Main Theorem.
Then $M$ is homeomorphic to $\bf R^4$ and is a Stein manifold.
\vskip 3mm The proof of this theorem is exactly the same as in \cite{CZ2}. For the convenience of the readers, we give a sketch of the arguments and refer to the cited reference for details.
\vskip 3mm{\bf \underline{Sketch of proof.}} \qquad We evolve the metric $g_{\alpha \overline{\beta }}(x)$ by the Ricci flow (\ref{2.1}). From Theorem 4.1, the solution $g_{\alpha \overline{\beta }}(x,t)$ exists for all $ t\in [0,+\infty )$ and satisfies \begin{equation} \label{5.1} R(x,t)\leq \frac C{1+t}\ \qquad \mbox{on} \quad M\times [0,+\infty )\ \end{equation} for some positive constant $C$. Also, Proposition 2.1 tells us that the volume growth condition (i) is preserved under the Ricci flow. By using the local injectivity radius estimate of Cheeger--Gromov--Taylor, this implies that \begin{equation} \label{5.2} \mbox{inj}\left( M,g_{\alpha \overline{\beta }}(\cdot ,t)\right) \geq C_7(1+t)^{\frac 12}\ \qquad \mbox{for}\quad t\in [0,+\infty )\ \end{equation} with some positive constant $C_7$.
Since the Ricci curvature of $g_{\alpha \overline{\beta }}(x,t)$ is positive for all $x\in M$ and $t\geq 0$, the Ricci flow equation (\ref{2.1}) implies that the ball $B_t(x_0,\frac{C_7}2(1+t)^{\frac 12})$ of radius $\frac{C_7}2(1+t)^{\frac 12}$ with respect to the metric $g_{\alpha \overline{\beta }}(\cdot ,t)$ contains the ball $B_0(x_0,\frac{C_7}2(1+t)^{\frac 12})$ of the same radius with respect to the initial metric $g_{\alpha \overline{\beta }}(\cdot ,0)$. Combining this with (\ref{5.2}), we deduce that $$ \pi _p(M,x_0)=0\ \qquad \mbox{for any} \quad p\geq 1\ $$ and $$ \pi _q(M,\infty )=0\ \qquad \mbox{for} \quad 1\leq q\leq 2\ , $$ where $\pi _q(M,\infty )$ is the $q$th homotopy group of $M$ at infinity.
Thus, by the resolution of the generalized Poincar\'e conjecture on four manifolds by Freedman \cite{Fre}, we know that $M$ is homeomorphic to $\bf R^4$.
Next, the injectivity radius estimate (\ref{5.2}) also tells us that, for $t$ large enough, the exponential maps provide diffeomorphisms between large geodesic balls $B_t(x_0,\frac{C_7}2(1+t)^{\frac 12})$ in $M$ and large Euclidean balls in $\bf C^2$. The curvature estimate (\ref{5.1}) together with its derivative estimates implies that the difference between the complex structure of $M$ and the standard complex structure of $\bf C^2$ in those geodesic balls can be made arbitrarily small by taking $t$ large enough. Then, we can use the $L^2$ estimates of H\"ormander \cite{Ho} to modify the exponential map to a biholomorphism between a domain $\Omega (t)$ containing $B_t(x_0,\frac{C_7}4(1+t)^{\frac 12})$ and a Euclidean ball of radius $\frac{C_7}3(1+t)^{\frac 12}$ in $\bf C^2$. Since the solution $g_{\alpha \overline{\beta }}(\cdot ,t)$ of the Ricci flow is shrinking, the domain $\Omega (t)$ contains the geodesic ball $B_0(x_0,\frac{C_7}4(1+t)^{\frac 12})$. Thus, we can choose a sequence of $t_k\rightarrow +\infty $, such that \begin{displaymath} M=\cup _{k=0}^{+\infty }\Omega (t_k)\ ,\qquad \Omega (t_1)\subset \Omega (t_2)\subset \cdots \subset \Omega (t_k)\subset \cdots \ . \end{displaymath} Since for each $k$, $\Omega (t_k)$ is biholomorphic to the unit ball of $\bf C^2$, $(\Omega (t_k),\Omega (t_l))$ forms a Runge pair for any $k,\ l$. Finally, we can appeal to a theorem of Markoe \cite{Mar} (see also Siu \cite{Si}) to conclude that $M$ is a Stein manifold.
$\Box $
\section*{\S6. Space decay estimate on curvature and the Poincar\'e--Lelong equation}
\setcounter{section}{6} \setcounter{equation}{0}
\qquad Let $(M,g_{\alpha \overline{\beta }})$ be a complete noncompact K\"ahler surface satisfying all the assumptions in the Main Theorem. The main purpose of this section is to establish the existence of a strictly plurisubharmonic function of logarithmic growth on $M$. To this end, we first prove a curvature decay estimate at infinity of the metric $g_{\alpha \overline{\beta }}$.
\vskip 3mm{\bf \underline{Theorem 6.1}} \quad Let $(M,g_{\alpha \overline{\beta }})$ be a complete noncompact K\"ahler surface as above.
Then there exists a constant $C>0$ such that for all $x\in M,\ r>0$, we have \begin{equation} \label{6.1} \int_{B(x,r)}R(y)\,\frac 1{d^2(x,y)}\,dy \leq C\log (2+r)\ . \end{equation}
\vskip 3mm{\bf \underline{Proof.}} \quad Let $g_{\alpha \overline{\beta }}(x,t)$ be the solution of the Ricci flow (\ref{2.1}) with $g_{\alpha \overline{\beta }}(x)$ as the initial metric. From Theorem 4.1, we know that the solution exists for all times and satisfies \begin{equation} \label{6.2} R(x,t) \leq \frac{C_8}{1+t}\ \qquad \mbox{on} \quad M\times [0,+\infty ) \end{equation} for some positive constant $C_8.$
Let $$ F(x,t)=\log \frac{\det \left( g_{\alpha \overline{\beta }} (x,t)\right) }{\det \left( g_{\alpha \overline{\beta }}(x,0)\right)} $$ be the function introduced in the proof of Proposition 2.1. Since $$ -\partial _\alpha \overline{\partial }_\beta \log \frac{\det \left( g_{\gamma \overline{\delta }}(\cdot ,t)\right) }{\det \left( g_{\gamma \overline{\delta }}(\cdot ,0)\right) }=R_{\alpha \overline{\beta }}(\cdot ,t)-R_{\alpha \overline{\beta }}(\cdot ,0)\ , $$ after taking trace with the initial metric $g_{\alpha \overline{\beta }}(\cdot,0)$, we get \begin{equation} \label{6.3} R(\cdot ,0)=\bigtriangleup _0F(\cdot ,t)+g^{\alpha \overline{ \beta }}(\cdot ,0)R_{\alpha \overline{\beta }}(\cdot ,t) \end{equation} where $\bigtriangleup _0$ is the Laplace operator of the metric $g_{\alpha \overline{\beta }}(\cdot ,0).$
Since $(M,g_{\alpha \overline{\beta }}(\cdot ,0))$ has positive Ricci curvature and maximal volume growth, it is well known (see \cite{ScY}) that the Green function $G_0(x,y)$ of the initial metric $g_{\alpha \overline{\beta }}(\cdot ,0)$ exists on $M$ and satisfies the estimates \begin{equation} \label{6.4} \frac{C_9^{-1}}{d_0^2(x,y)}\leq G_0(x,y)\leq \frac{C_9}{d_0^2(x,y)} \end{equation} and \begin{equation} \label{6.5}
\left| \nabla _yG_0(x,y)\right| _0\leq \frac{C_9}{d_0^3(x,y)} \end{equation} for some positive constant $C_9$ depending only on $C_1.$
For any fixed $\overline{x}_0\in M$ and any $\alpha >0$, we denote $$
\Omega _\alpha =\left\{ \left. x\in M\right| \ G_0 (\overline{x}_0,x)\geq \alpha \ \right\} \ . $$ By (\ref{6.4}), it is not hard to see \begin{equation} \label{6.6} B_0\left( \overline{x}_0,\left( \frac{C_9^{-1}}\alpha \right) ^{\frac 12}\right) \subset \Omega _\alpha \subset B_0\left( \overline{x} _0,\left( \frac{C_9}\alpha \right) ^{\frac 12}\right) \ . \end{equation} Recall that $F$ evolves by $$ \frac{\partial F(x,t)}{\partial t}=-R(x,t)\ \qquad \mbox{on} \quad M\times [0,+\infty )\ . $$ Combining with (\ref{6.2}), we obtain \begin{equation} \label{6.7}0\geq F(x,t)\geq -C_{10}\log (1+t) \qquad \mbox{on} \quad M\times [0,+\infty )\ . \end{equation} Multiplying (\ref{6.3}) by $G_0(\overline{x} _0,x)-\alpha $ and integrating over $\Omega _\alpha $, we have \begin{eqnarray} \label{6.8} \int_{\Omega _\alpha }R(x,0)\left( G_0(\overline{x}_0,x)-\alpha \right) dx&=&\int_{\Omega _\alpha }\left( \bigtriangleup _0F(x,t)\right) \left( G_0(\overline{x}_0,x)-\alpha \right) dx \nonumber \\ & & +\int_{\Omega _\alpha }g^{\alpha \overline{\beta }}(x,0) R_{\alpha \overline{\beta }}(\cdot ,t)\left( G_0(\overline{x}_0,x)- \alpha \right) dx \nonumber \\ & = & -\int_{\partial \Omega _\alpha }F(x,t)\frac{\partial G_0 (\overline{x}_0,x)}{\partial \nu }d\sigma -F(\overline{x}_0,t) \nonumber\\ & & +\int_{\Omega _\alpha }g^{\alpha \overline{\beta }}(x,0) R_{\alpha \overline{\beta }}(\cdot ,t)\left( G_0(\overline{x}_0,x)-\alpha \right) dx\nonumber\\ & \leq & C_{10}\left( 1+C_9^{\frac 52}\alpha ^{\frac 32}\mbox{Vol}_0 \left( \partial \Omega _\alpha \right) \right) \log (1+t) \nonumber \\ & & +\int_{\Omega _\alpha }g^{\alpha \overline{\beta }}(x,0) R_{\alpha \overline{\beta }}(\cdot ,t)G_0(\overline{x}_0,x)dx\ ,\nonumber\\ \end{eqnarray} by (\ref{6.4}) and (\ref{6.7}). Here, we have used $\nu $ to denote the outer unit normal of $\partial \Omega _\alpha .$
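For clarity, we remark that the inclusions (\ref{6.6}) used above are immediate consequences of the two-sided Green function bound (\ref{6.4}):

```latex
d_0(\overline{x}_0,x)\leq \left( \frac{C_9^{-1}}\alpha \right) ^{\frac 12}
\ \Longrightarrow\
G_0(\overline{x}_0,x)\geq \frac{C_9^{-1}}{d_0^2(\overline{x}_0,x)}\geq \alpha\ ,
\qquad
G_0(\overline{x}_0,x)\geq \alpha
\ \Longrightarrow\
\alpha \leq \frac{C_9}{d_0^2(\overline{x}_0,x)}\ ,
```

and the second implication gives $d_0(\overline{x}_0,x)\leq \left( \frac{C_9}\alpha \right) ^{\frac 12}$.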
From the coarea formula, we have \begin{eqnarray} \frac 1\alpha \int_\alpha ^{2\alpha }r^{\frac 32}\mbox{Vol}_0\left( \partial \Omega _r\right) dr&\leq&2^{\frac 32}\alpha ^{\frac 12}
\int_\alpha ^{2\alpha }\int_{\partial \Omega _r}\left| \nabla
G_0(\overline{x}_0,x)\right| _0d\sigma \left| d\nu \right| \nonumber \\ & \leq & 2^{\frac 32}C_9^{\frac 52}\alpha ^2\mbox{Vol}_0\left( \Omega _\alpha\right) \nonumber \\ & \leq & 2^{\frac 32}C_9^{\frac 52}\alpha ^2\mbox{Vol}_0\left( B_0\left( \overline{x}_0,\left( \frac{C_9}\alpha \right) ^{\frac 12}\right) \right)\nonumber\\ & \leq & C_{11} \nonumber \end{eqnarray} for some positive constant $C_{11}$ by the standard volume comparison. Substituting this into (\ref{6.8}) and integrating (\ref{6.8}) from $\alpha $ to $2\alpha $, we get \begin{eqnarray} \label{6.9} \int_{\Omega _{2\alpha}}R(x,0)\left( G_0(\overline{x}_0,x)-2\alpha \right) dx&\leq&C_{10}\left( 1+C_9^{\frac 52}C_{11}\right) \log (1+t) \nonumber\\ & & +\int_{\Omega _\alpha }g^{\alpha \overline{\beta }}(x,0) R_{\alpha \overline{\beta }}(x,t)G_0(\overline{x}_0,x)dx\ .\nonumber\\ \end{eqnarray} It is easy to see that $$ \int_{\Omega _{4\alpha }}R(x,0)G_0(\overline{x}_0,x)dx\leq 2\int_{\Omega _{2\alpha }}R(x,0)\left( G_0(\overline{x}_0,x)-2\alpha \right) dx $$ and by the Ricci flow equation (\ref{2.1}), we also have \begin{eqnarray} &&\int_0^t\int_{\Omega _\alpha }g^{\alpha \overline{\beta }}(x,0) R_{\alpha \overline{\beta }}(x,t)G_0(\overline{x}_0,x)dx dt \nonumber\\ & = & \int_{\Omega _\alpha }g^{\alpha \overline{\beta }}(x,0) \left( g_{\alpha \overline{\beta }}(x,0)-g_{\alpha \overline{\beta }} (x,t)\right) G_0(\overline{x}_0,x)dx \nonumber \\ & \leq & 2\int_{\Omega _\alpha }G_0(\overline{x}_0,x)dx\ .\nonumber \end{eqnarray} Thus by integrating (\ref{6.9}) in time from $0$ to $t$ and combining the above two inequalities, we get for any $t>0,$ $$ \int_{\Omega _{4\alpha }}R(x,0)G_0(\overline{x}_0,x)dx\leq 2C_{10}\left( 1+C_9^{\frac 52}C_{11}\right) \log (1+t)+\frac 4t\int_{\Omega _\alpha }G_0( \overline{x}_0,x)dx\ . 
$$ Finally, substituting (\ref{6.4}) and (\ref{6.6}) into the above inequality, we see that there exists some positive constant $C_{12}$ such that for any $\overline{x}_0\in M$, $t>0$ and $r>0,$ \begin{equation} \label{6.10}\int_{B_0(\overline{x}_0,r)}R(x,0)\frac 1{d^2(\overline{x} _0,x)}dx\leq C_{12}\left( \log (1+t)+\frac{r^2}t\right) \ . \end{equation} Choosing $t=r^2$, we get the desired estimate.
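To spell out this final step: taking $t=r^2$ in (\ref{6.10}) and using the elementary bounds $\log (1+r^2)\leq 2\log (2+r)$ and $1\leq \frac 1{\log 2}\log (2+r)$, we obtain

```latex
\int_{B_0(\overline{x}_0,r)}R(x,0)\frac 1{d^2(\overline{x}_0,x)}dx
\leq C_{12}\left( \log (1+r^2)+1\right)
\leq C_{12}\left( 2+\frac 1{\log 2}\right) \log (2+r)\ ,
```

which is (\ref{6.1}) with $C=C_{12}\left( 2+\frac 1{\log 2}\right)$.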
$\Box $
\vskip 3mm Now we can use the estimate (\ref{6.1}) to solve the following Poincar\'e--Lelong equation on $M$ \begin{equation} \label{6.11} \sqrt{-1}\partial \overline{\partial }u= \mbox{Ric} \ \end{equation} to get the strictly plurisubharmonic function mentioned at the beginning of this section.
As in \cite{MSY} or \cite{NST}, we first study the corresponding Poisson equation on $M$ \begin{equation} \label{6.12}\bigtriangleup u=R. \end{equation} After we solve the Poisson equation (\ref{6.12}) with a solution of logarithmic growth, we will see that it is indeed a solution of the Poincar\'e--Lelong equation with logarithmic growth.
To solve (\ref{6.12}), we first construct a family of approximate solutions $u_r$ as follows.
For a fixed $x_0\in M$ and $r>0$, define $u_r(x)$ on $B(x_0,r)$ by $$ u_r(x)=\int_{B(x_0,r)}\left( G(x_0,y)-G(x,y)\right) R(y)dy $$ where $G(x,y)$ is the Green function of the metric $g_{\alpha \overline{\beta }}$ on $M$. It is clear that $$ u_r(x_0)=0 \qquad \mbox{and} \qquad \bigtriangleup u_r(x)=R(x) \quad \mbox{on} \quad B(x_0,r). $$ For $x\in B(x_0,\frac r2)$, we write \begin{eqnarray} u_r(x) & = & \left( \int_{B(x_0,r)\backslash B(x_0,2d(x,x_0))}+ \int_{B(x_0,2d(x,x_0))}\right) \left( G(x_0,y)-G(x,y)\right) R(y)dy \nonumber \\ & := &I_1+I_2\ . \nonumber \end{eqnarray} From (\ref{6.1}), we see that \begin{equation} \label{6.13}
\left| I_2\right| \leq C_{13}\log \left( 2+d(x,x_0)\right) \qquad \mbox{on} \quad B(x_0,\frac r2) \end{equation} for some positive constant $C_{13}$ independent of $x_0$, $x$ and $r.$
To estimate $I_1$, we get from (\ref{6.5}) that for $y\in B(x_0,r)\backslash B(x_0,2d(x,x_0)),$ \begin{eqnarray}
\left| G(x_0,y)-G(x,y)\right| &\leq&d(x,x_0)\cdot \sup
\limits_{z\in B(x_0,d(x,x_0))}\left| \nabla _zG(z,y)\right| \nonumber \\ & \leq & C_9d(x,x_0)\cdot \sup \limits_{z\in B(x_0,d(x,x_0))} \frac 1{d^3(z,y)} \nonumber \\ &\leq&8C_9\frac{d(x,x_0)}{d^3(y,x_0)}\ .\nonumber \end{eqnarray} Thus by (\ref{6.1}), we have \begin{eqnarray} \label{6.14}
\left|I_1\right|&\leq&8C_9d(x,x_0)\int_{B(x_0,r)\backslash B(x_0,2d(x,x_0))}\frac{R(y)}{d^3(y,x_0)}dy \nonumber\\ &\leq&8C_9d(x,x_0)\sum\limits_{k=1}^\infty \frac 1{2^kd(x,x_0)}\cdot \int_{B(x_0,2^{k+1}d(x,x_0))\backslash B(x_0,2^kd(x,x_0))} \frac{R(y)}{d^2(y,x_0)}dy \nonumber \\ & \leq & 8C_9C\sum\limits_{k=1}^\infty \frac 1{2^k}\log \left( 2+2^{k+1}d(x,x_0)\right)\nonumber\\ & \leq & C_{14}\log \left( 2+d(x,x_0)\right) \end{eqnarray} for some positive constant $C_{14}.$
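The convergence of the geometric series in (\ref{6.14}) rests on an elementary estimate: writing $d=d(x,x_0)$ and using $2+2^{k+1}d\leq 2^{k+1}(2+d)$, we have

```latex
\sum\limits_{k=1}^\infty \frac 1{2^k}\log \left( 2+2^{k+1}d\right)
\leq \sum\limits_{k=1}^\infty \frac{(k+1)\log 2+\log (2+d)}{2^k}
=3\log 2+\log (2+d)
\leq 4\log (2+d)\ ,
```

so one may take $C_{14}=32C_9C$ in (\ref{6.14}).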
Hence, by combining (\ref{6.13}) and (\ref{6.14}), we deduce \begin{equation} \label{6.15}
\left| u_r(x)\right| \leq \left( C_{13}+C_{14}\right) \log \left( 2+d(x,x_0)\right) \end{equation} for any $r\geq 2d(x,x_0).$
On the other hand, by taking the derivative of $u_r(x)$, we get \begin{eqnarray} \label{6.16}
\left| \nabla u_r(x)\right|&\leq&C_9\int_M\frac{R(y)} {d^3(x,y)}dy \nonumber \\ & \leq & C_9\left( \int\limits_{B(x,1)}\frac{R(y)}{d^3(x,y)}dy +\sum\limits_{k=1}^\infty \frac 1{2^{k-1}}\int\limits_{B(x,2^k) \backslash B(x,2^{k-1})}\frac{R(y)}{d^2(x,y)}dy\right) \nonumber\\ & \leq & C_9\left( C_{15}+\sum\limits_{k=1}^\infty \frac 1{2^{k-1}}C\log \left( 2+2^k\right) \right) \nonumber \\ & = & C_{16}\ . \end{eqnarray} Here, we have used (\ref{6.1}) and (\ref{6.5}); $C_{15}$ and $C_{16}$ are positive constants independent of $r$. Therefore, it follows from the Schauder theory of elliptic equations that there exists a sequence of $r_j\rightarrow +\infty $ such that $u_{r_j}(x)$ converges uniformly on compact subsets of $M$ to a smooth function $u$ satisfying \begin{equation} \label{6.17} \left\{ \begin{array}{ll} u(x_0)=0\qquad \mbox{and} \qquad \bigtriangleup u=R & \mbox{on} \;\; M\ , \\
|u(x)|\leq \left( C_{13}+C_{14}\right) \log \left( 2+d(x,x_0)\right) & \mbox{for} \;\; x\in M\ , \\
|\nabla u(x)|\leq C_{16}\ & \mbox{for} \;\; x\in M\ . \end{array} \right. \end{equation} Thus, we have obtained a solution $u$ of logarithmic growth to the Poisson equation (\ref{6.12}) on $M$. In the following we prove that $u$ is actually a solution of the Poincar\'e--Lelong equation (\ref{6.11}).
Recall the Bochner identity; since $\bigtriangleup u = R$, it gives \begin{eqnarray} \label{6.18}
\frac 12\bigtriangleup \left| \nabla u\right| ^2
& = &\left| \nabla ^2u\right| ^2+\left\langle \nabla u,\nabla R\right\rangle +Ric\left( \nabla u,\nabla u\right) \nonumber \\
& \geq & \left| \nabla ^2u\right| ^2+\left\langle \nabla u,\nabla R\right\rangle \ . \end{eqnarray} For any $r>0$ and any $\overline{x}_0\in M$, by multiplying (\ref{6.18}) by the cutoff function in (\ref{2.7}) and integrating by parts, we get \begin{eqnarray}
\int_M\left| \nabla ^2u\right| ^2\varphi dx
& \leq & \frac 12\int_M\left| \nabla u\right| ^2\cdot \left|
\bigtriangleup \varphi \right| dx+\int_M\left| \nabla ^2u\right| \cdot R\varphi dx \nonumber \\
& & + \int_M\left| \nabla u\right| \cdot R\cdot \left|
\nabla \varphi \right| dx \nonumber \\ & \leq & \frac{C_{16}^2}2\cdot \frac{C_3}{r^2}\int_M\varphi dx+
\frac 12\int_M\left| \nabla ^2u\right| ^2\varphi dx \nonumber \\ & & +\frac 12\int_MR^2\varphi dx+C_{16}\cdot \left( \sup \limits_MR\right) \cdot \frac{C_3}r\int_M\varphi dx\ .\nonumber \end{eqnarray} Thus, \begin{equation} \label{6.19}
\int_M\left| \nabla ^2u\right| ^2\varphi dx\leq \left(C_3C_{16}^2\cdot \frac 1{r^2}+2C_{16}C_3\left( \sup \limits_MR\right) \frac 1r\right) \int_M\varphi dx+\int_MR^2\varphi dx\ . \end{equation} By (\ref{6.1}), (\ref{2.7}) and the standard volume comparison, we have \begin{eqnarray} \label{6.20} \int_MR^2\varphi dx&\leq&\left( \sup \limits_MR\right) \int_MR(x)e^{-\left( 1+\frac{d(x,\overline{x}_0)}r\right) }dx \nonumber \\ & \leq & \left( \sup \limits_MR\right)\cdot \nonumber\\ &&\left( \int_{B(\overline{x}_0,r)}R(x)dx+\sum\limits_{k=0}^\infty e^{-2^{k-1}}\int_{B(\overline{x}_0,2^{k+1}r)\backslash B(\overline{x}_0,2^kr)}R(x)dx\right) \nonumber\\ &\leq&C_{17}r^2\log \left( 2+r\right) \end{eqnarray} and \begin{eqnarray} \label{6.21} \int_M\varphi dx&\leq&\int_Me^{-\left( 1+\frac{d(x,\overline{x}_0)}r \right) }dx \nonumber\\ & \leq & \int_{B(\overline{x}_0,r)}dx+\sum\limits_{k=0}^\infty e^{-2^{k-1}}\int_{B(\overline{x}_0,2^{k+1}r)\backslash B(\overline{x}_0,2^kr)}dx \nonumber\\ & \leq & C_{17}r^4 \end{eqnarray} for some positive constant $C_{17}$ independent of $r$ and $\overline{x}_0.$
Substituting these two inequalities into (\ref{6.19}) we have \begin{equation} \label{6.22}
\frac 1{r^4}\int_{B(\overline{x}_0,r)}\left| \nabla ^2u\right| ^2dx\leq C_{18}\left( \frac 1{r^2}+\frac 1r+\frac{\log (2+r)}{r^2}\right) \end{equation} for some positive constant $C_{18}$ independent of $r$ and $\overline{x}_0.$
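The passage from (\ref{6.19}) to (\ref{6.22}) can be made explicit (we use here the standing assumption that the cutoff function $\varphi$ of (\ref{2.7}) is bounded below by a universal positive constant on $B(\overline{x}_0,r)$): substituting (\ref{6.20}) and (\ref{6.21}) into (\ref{6.19}) gives

```latex
\int_M\left| \nabla ^2u\right| ^2\varphi dx
\leq \left( C_3C_{16}^2\cdot \frac 1{r^2}
+2C_{16}C_3\left( \sup \limits_MR\right) \frac 1r\right) C_{17}r^4
+C_{17}r^2\log (2+r)\ ,
```

and dividing by $r^4$ yields the three terms of (\ref{6.22}) after absorbing all the constants into $C_{18}$.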
Since the holomorphic bisectional curvature of $g_{\alpha \overline{\beta }}$ is positive, it was shown in \cite{MSY} that the function
$\left| \sqrt{-1}\partial \overline{\partial }u-\mbox{Ric}\right| ^2$ is subharmonic on $M$. Then by the mean value inequality and (\ref{6.20}), (\ref{6.22}), we have \begin{eqnarray}
\left| \sqrt{-1}\partial \overline{\partial }u-Ric\right| ^2(\overline{x}_0)&\leq&\frac{C_{19}}{r^4}\int_{B(\overline{x}_0,r)}
\left| \sqrt{-1}\partial \overline{\partial }u-Ric\right| ^2(x)dx \nonumber\\ & \leq & \frac{2C_{19}}{r^4}\int_{B(\overline{x}_0,r)}\left(
\left| \nabla ^2u\right| ^2+R^2\right) dx \nonumber\\ & \leq & C_{20}\left( \frac 1{r^2}+\frac 1r+\frac{\log (2+r)}{r^2}\right) \nonumber \end{eqnarray}for some positive constants $C_{19},\ C_{20}$ independent of $r$ and $\overline{x}_0.$ Since $\overline{x}_0\in M$ and $r>0$ are arbitrary, by letting $r\rightarrow +\infty $ we know that $$ \sqrt{-1}\partial \overline{\partial }u= \mbox{Ric} \qquad \mbox{on} \quad M\ . $$ In summary, we have proved the following result. \vskip 3mm{\bf \underline{Proposition 6.2}} \qquad Suppose $(M,g_{\alpha \overline{\beta }})$ is a complete noncompact K\"ahler surface satisfying all the assumptions in the Main Theorem. Then there exists a strictly plurisubharmonic function $u(x)$ on $M$ satisfying the Poincar\'e--Lelong equation (\ref{6.11}) with the estimate $$
|u(x)| \leq C\log\left(2+d(x,x_0)\right) \qquad \mbox{for all} \;\; x\in M $$ for some positive constant $C$.
\section*{\S7. Uniform estimates on multiplicity and the number of components of an ``algebraic'' divisor}
\setcounter{section}{7} \setcounter{equation}{0}
\qquad Let $(M,g_{\alpha \overline{\beta }})$ be a complete noncompact K\"ahler surface satisfying all the assumptions in the Main Theorem. In this section, we will consider the algebra $P(M)$ of holomorphic functions of polynomial growth on $M$. We first construct $f_1,\ f_2$ in $P(M)$ which are algebraically independent over $\bf C.$
In the previous section, by solving the Poincar\'e--Lelong equation, we obtained a strictly plurisubharmonic function $u$ on $M$ of logarithmic growth. As shown in \cite{Mo1}, the existence of nontrivial functions in the algebra $P(M)$ then follows readily from the $L^2$ estimates of the $\overline{\partial }$ operator on complete K\"ahler manifolds due to Andreotti--Vesentini \cite{AV} and H\"ormander \cite{Ho}. For completeness, we give the proof as follows.
Let $x\in M$ and $\left\{ (z_1,z_2)\,|\ |z_1|^2+|z_2|^2<1\right\}$ be local holomorphic coordinates at $x$ with $z_1(x)=z_2(x)=0$. Let $\eta $ be a smooth cutoff function on $\bf C^2$ with
$\mbox{Supp}\,\eta \subset \subset\left\{ |z_1|^2+|z_2|^2<1\right\}$
and $\eta \equiv 1$ on $\left\{|z_1|^2+|z_2|^2<\frac 14\right\} $. Then the function $$
\eta \log |z|=\eta \left( z_1,z_2\right) \log
\left(|z_1|^2+|z_2|^2\right) ^{\frac 12} $$ is globally defined on $M$ and is smooth except at $x$. Furthermore,
the $(1,1)$ form $\partial \overline{\partial }\,(\eta \log |z|)$ is bounded from below. Since $u$ is strictly plurisubharmonic, we can choose a sufficiently large positive constant $C$ such that $$
v=Cu+ 6 \eta \, \log \left|\, z\right| $$ is strictly plurisubharmonic on $M$. Then, for any nonzero tangent vector $\xi $ of type $(1,0)$ on $M$, we have $$ \left\langle \sqrt{-1}\partial \overline{\partial }v+\mbox{Ric},\; \xi \wedge \overline{\xi } \right\rangle >0\ . $$ Now $\overline{\partial }\,(\eta z_i)$, $i=1,2$, is a $\overline{\partial }$--closed $(0,1)$ form on the complete K\"ahler manifold $M$. Using the standard $L^2$--estimates of the $\overline{\partial }$ operator (cf. Theorem 2.1 in \cite{Mo1}), there exists a smooth function $u_i$ such that $$ \overline{\partial }u_i=\overline{\partial }(\eta z_i)\ \qquad i=1,2 $$ and $$
\int_M\left| u_i\right| ^2e^{-v}dx\leq \frac 1c\int_M\left| \overline{
\partial }(\eta z_i)\right| ^2e^{-v}dx $$ where $c$ is a positive constant satisfying $$ \left\langle \sqrt{-1} \partial \overline{\partial }v+ \mbox{Ric},\xi \wedge
\overline{\xi }\right\rangle \geq c|\xi |^2 $$ whenever $\xi$ is a tangent vector on $\mbox{Supp}\, \eta $. First of all, this estimate implies that $u_i$ is of polynomial growth as the weight function $v$ is of logarithmic growth. Secondly,
because of the singularity of $6\log \left| z\right| $ at $x$, it forces the function $u_i$ and its first-order derivatives to vanish at $x$. Therefore, the holomorphic functions $f_1=u_1-\eta z_1$ and $f_2=u_2-\eta z_2$ define a local biholomorphism at $x$. Clearly, they are algebraically independent over $\bf C$. This concludes our construction.
For later use, we also point out here that, as a consequence of the above argument, the algebra $P(M)$ separates points on $M$. In other words, for any $x_1,x_2\in M$ with $x_1 \neq x_2$, there exists $f\in P(M)$ such that $f(x_1) \neq f(x_2).$
Before we can state our main result in this section, we need the following definition. For a holomorphic function $f\in P(M)$, we define the degree of $f$, $\deg (f)$, to be the infimum of all $q$ for which the following inequality holds $$
\left| f(x)\right| \leq C(q)\left( 1+d^q(x,x_0)\right) \qquad \mbox{for all} \; x\in M, $$ where $x_0$ is some fixed point in $M$ and $C(q)$ is some positive constant depending on $q.$
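As a simple illustration of this definition (on the model space $\bf C^2$ with the Euclidean metric, which we use only to fix ideas), a coordinate function $f(z_1,z_2)=z_1$ satisfies, with $x_0$ the origin,

```latex
\left| f(x)\right| =\left| z_1(x)\right|
\leq \left( \left| z_1(x)\right| ^2+\left| z_2(x)\right| ^2\right) ^{\frac 12}
=d(x,x_0)\leq 1+d(x,x_0)\ ,
```

so $\deg (f)\leq 1$; since $\left| f\right| $ grows linearly along the $z_1$--axis, in fact $\deg (f)=1$.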
Our main result in this section is the following uniform bound on the multiplicity of the zero divisor of a function $f\in P(M)$ by its degree.
\vskip 3mm{\bf \underline{Proposition 7.1}} \qquad Let $(M,g_{\alpha \overline{\beta }})$ be a complete noncompact K\"ahler surface as above. For $f\in P(M)$, let $$ \left[ V\right] =\frac{\sqrt{-1}}{2\pi }\partial \overline{\partial }\log
\left| f\right| ^2 $$ be the zero divisor, counting multiplicity, determined by $f$. Then, there exists a positive constant $C$, independent of $f$, such that $$ \mbox{mult} \left( \left[ V\right] ,x\right) \leq C\deg (f) $$ holds for all $x\in M.$
\vskip 3mm{\bf \underline{Proof.}} \qquad Recall that the Ricci flow (\ref{2.1}) with $g_{\alpha \overline{\beta }}(x)$ as initial metric has a solution $g_{\alpha \overline{\beta }}(x,t)$ for all times $t\in [0,+\infty )$ and satisfies the following estimates \begin{equation} \label{7.1} R(x,t) \leq \frac C{1+t}\ \end{equation} and \begin{equation} \label{7.2} \mbox{inj} \left( M,g_{\alpha \overline{\beta }}(\cdot ,t)\right) \geq C_7(1+t)^{\frac 12}\ \end{equation} on $M\times [0,+\infty )$.
Let $d_t$ be the distance function from an arbitrary fixed point $\overline{x}_0\in M$ with respect to the metric $g_{\alpha \overline{\beta }}(\cdot ,t)$. By the standard Hessian comparison theorem (see \cite{ScY}), we have, for any unit real vector $v$ orthogonal to the radial direction $\partial /\partial d_t$, $$ \frac{\sqrt{\frac{\alpha _1}t}d_t}{\tan \left( \sqrt{\frac{\alpha _1}t} d_t\right) }\leq \mbox{Hess}\left( d_t^2\right) (v,v)\leq \frac{\sqrt{\frac{\alpha _1}t}d_t}{\tanh \left( \sqrt{\frac{\alpha _1}t}d_t\right) } \qquad \mbox{when}\quad d_t\leq \frac \pi 4\sqrt{\frac t{\alpha _1}}\ . $$ Here, $\alpha _1$ is some positive constant depending only on the constants $C $ and $C_7$ in (\ref{7.1}) and (\ref{7.2}). Hence, for any unit vector $\widetilde{v}$, we have $$ \frac{\sqrt{\frac{\alpha _1}t}d_t}{\tan \left( \sqrt{\frac{\alpha _1}t} d_t\right) }\leq \mbox{Hess}\left( d_t^2\right) (\widetilde{v},\widetilde{v} ) + \mbox{Hess} \left( d_t^2\right) (J\widetilde{v},J\widetilde{v}) \leq \frac{2\sqrt{\frac{\alpha _1}t}d_t}{\tanh \left( \sqrt{\frac{\alpha _1}t}d_t\right) } $$ whenever $d_t\leq \frac \pi 4\sqrt{\frac t{\alpha _1}}$. Since $M$ is K\"ahler, the above expression is equivalent to $$ \frac{\sqrt{\frac{\alpha _1}t}d_t}{\tan \left( \sqrt{\frac{\alpha _1}t} d_t\right) }\omega _t\leq \sqrt{-1}\partial \overline{\partial }d_t^2\leq \frac{2\sqrt{\frac{\alpha _1}t}d_t}{\tanh \left( \sqrt{\frac{\alpha _1}t} d_t\right) }\omega _t\ . $$ In particular, we have \begin{equation} \label{7.3} \frac 12\omega _t\leq \sqrt{-1}\partial \overline{\partial } d_t^2\leq 4\omega _t \qquad \mbox{whenever} \quad d_t \leq \frac {\pi} {4} \sqrt{\frac t{\alpha _1}}\ . \end{equation} Here, $\omega _t$ is the K\"ahler form of the metric $g_{\alpha \overline{\beta }}(\cdot ,t)$.
We next claim that \footnote{We are grateful to Professor L. F. Tam for this suggestion.} \begin{equation} \label{7.4}\sqrt{-1}\partial \overline{\partial }\log \tan \left( \sqrt{ \frac{\alpha _1}t}\frac{d_t}2\right) \geq 0\ ,\qquad \mbox{whenever} \quad d_t\leq \frac \pi 4\sqrt{\frac t{\alpha _1}}\ . \end{equation}
In fact, after rescaling, we may assume that the sectional curvature of $g_{\alpha \overline{\beta }}(\cdot ,t)$ is less than $1$ and $\sqrt{\frac{\alpha _1}t}=1$. Then by the standard Hessian comparison, we have $$ \mbox{Hess} \left( d_t\right) (v,v)\geq \frac 1{\tan d_t}\left(
\left| v\right|_t^2-\left\langle v,\frac \partial {\partial d_t}\right\rangle _t^2\right) $$ for any vector $v$ and $d_t\leq \frac \pi 4$. Thus, by a direct computation, \begin{eqnarray} & & \mbox{Hess}\left( \log \tan \left( \frac{d_t}2\right) \right) (v,v)+ \mbox{Hess} \left( \log \tan \left( \frac{d_t}2\right) \right) (Jv,Jv) \nonumber\\ & \geq &\frac 1{\left( \tan d_t\right) \tan \left( \frac{d_t}2\right) }
\left( 1+\tan d_t\right) \left| v\right| _t^2 \nonumber\\ & \geq & 0\ ,\nonumber \end{eqnarray} which is our claim (\ref{7.4}).
Now for any $0<b<a<\frac \pi 8\sqrt{\frac t{\alpha _1}}$, it follows from Stokes' theorem that \begin{eqnarray} 0&\leq&\sqrt{-1}\int\limits_{\left\{ b\leq d_t\leq a\right\} }\left[ V\right] \wedge\partial\overline\partial \log \tan \left( \sqrt{\frac{\alpha _1}t}\frac{d_t}2\right) \nonumber\\ & = &\sqrt{-1}\int\limits_{\left\{ d_t=a\right\} \ }\left[ V\right] \wedge \overline{\partial }\log \tan \left( \sqrt{\frac{\alpha _1}t}\frac{d_t}2\right) \nonumber\\ &&-\sqrt{-1}\int\limits_{\left\{ d_t=b\right\} \ }\left[ V\right] \wedge \overline{\partial }\log \tan \left( \sqrt{\frac{\alpha _1}t} \frac{d_t}2\right) \ .\nonumber \end{eqnarray} Then, it is not hard to see that for $0<b<a<\frac \pi 8\sqrt{\frac t{\alpha _1}}$, \begin{equation} \label{7.5} \frac{\sqrt{-1}}{a^2}\int\limits_{\left\{ d_t=a\right\} \ \ }\left[ V\right] \wedge \overline{\partial }\left( d_t^2\right) \geq \frac 12\cdot \frac{\sqrt{-1}}{b^2}\int\limits_{\left\{ d_t=b\right\} \ \ }\left[ V\right] \wedge \overline{\partial }\left( d_t^2\right) \ . \end{equation}
Using Stokes' theorem on the right hand side of (\ref{7.5}) and letting $b\rightarrow 0$, it follows from the Bishop--Lelong inequality that \begin{equation} \label{7.6} \frac{\sqrt{-1}}{a^2}\int\limits_{\left\{ d_t=a\right\} \ \ }\left[ V\right] \wedge \overline{\partial }\left( d_t^2\right) \geq \alpha _2\mbox{mult}\,\left( \left[ V\right] ,\overline{x}_0\right) \ , \end{equation} for some positive absolute constant $\alpha _2$.
Then by (\ref{7.3}), (\ref{7.5}), (\ref{7.6}) and Stokes' theorem, we have \begin{eqnarray} \label{7.7} & &\frac 1{a^2}\int\nolimits_{B_t(\overline{x}_0,a)\backslash B_t(\overline{x}_0,\frac a2)}\left[ V\right] \wedge \omega _t\nonumber\\ & \geq &\frac 1{4a^2}\int\nolimits_{B_t(\overline{x}_0,a)\backslash B_t(\overline{x}_0,\frac a2)}\left[ V\right] \wedge \sqrt{-1}\partial \overline{\partial }\left( d_t^2\right) \nonumber\\ & = &\frac{\sqrt{-1}}4\left( \frac 1{a^2}\int\nolimits_{\{d_t=a\}} \left[ V\right] \wedge \overline{\partial }\left( d_t^2\right) - \frac 1{4\cdot \left( \frac a2\right) ^2}\int\nolimits_{\{d_t=\frac a2\}} \left[ V\right] \wedge \overline{\partial } \left( d_t^2\right) \right)\nonumber\\ & = &\frac{\sqrt{-1}}8\left( \frac 1{a^2}\int\nolimits_{\{d_t=a\}} \left[ V\right] \wedge \overline{\partial }\left( d_t^2\right) - \frac 1{2\cdot \left( \frac a2\right) ^2}\int\nolimits_{\{d_t=\frac a2\}} \left[ V\right] \wedge \overline{\partial }\left( d_t^2\right) \right)\nonumber\\ & &+\frac{\sqrt{-1}}{8a^2}\int\nolimits_{\{d_t=a\}}\left[ V\right] \wedge \overline{\partial }\left( d_t^2\right)\nonumber\\ & \geq &\frac{\sqrt{-1}}{8a^2}\int\nolimits_{\{d_t=a\}}\left[ V\right] \wedge \overline{\partial }\left( d_t^2\right)\nonumber\\ & \geq &\frac{\alpha _2}8\mbox{mult}\,\left( \left[ V\right] ,\overline{x}_0\right) \end{eqnarray} for $0<a<\frac \pi 8\sqrt{\frac t{\alpha _1}}.$
For the function $f\in P(M)$, let $\widetilde{x}_0$ be a point close to $\overline{x}_0$ such that $f(\widetilde{x}_0)\neq 0$. By definition, for any $\delta >0$, there exists a constant $C(\delta )>0$ such that \begin{equation}
\label{7.8}\left| f(x)\right| \leq C(\delta )\left( 1+d_0^{\deg (f)+\delta }(x,\widetilde{x}_0)\right) \qquad \mbox{on} \quad M\ . \end{equation} By equation (\ref{2.1}) and estimate (\ref{7.1}), we have \begin{eqnarray} \frac{\partial g_{\alpha \overline{\beta }}(\cdot ,t)} {\partial t}&\geq&-R(\cdot ,t)g_{\alpha \overline{\beta }}(\cdot ,t) \nonumber \\ & \geq &-\frac C{1+t}g_{\alpha \overline{\beta }}(\cdot ,t)\ ,\nonumber \end{eqnarray}which implies that $$ g_{\alpha \overline{\beta }}(\cdot ,0)\leq (1+t)^Cg_{\alpha \overline{\beta } }(\cdot ,t)\qquad \mbox{for any} \quad t>0\ . $$ Hence, (\ref{7.8}) becomes \begin{equation} \label{7.9}
\left| f(x)\right| \leq C(\delta )\left\{ 1+\left[ (1+t)^{\frac C2}d_t(x,\widetilde{x}_0)\right] ^{\deg (f)+\delta }\right\} \qquad \mbox{on} \quad M\ . \end{equation}
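The passage from (\ref{7.8}) to (\ref{7.9}) uses that the metric comparison $g_{\alpha \overline{\beta }}(\cdot ,0)\leq (1+t)^Cg_{\alpha \overline{\beta }}(\cdot ,t)$ yields, for every $x\in M$, $$ d_0(x,\widetilde{x}_0)\leq (1+t)^{\frac C2}d_t(x,\widetilde{x}_0)\ , $$ upon integrating along a $g_{\alpha \overline{\beta }}(\cdot ,t)$-minimizing geodesic from $\widetilde{x}_0$ to $x$.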
We now fix $t=\frac{\alpha _1}{\pi ^2}4^{K+8}$ for each positive integer $K$. Set $$
v_K(x)=\int\nolimits_{B_t(\widetilde{x}_0,2^K)}-G_t^{(K)}(x,y)\bigtriangleup _t\log \left| f(y)\right| ^2\cdot \omega _t^2(y)\ , $$ where $G_t^{(K)}$ is the positive Green function with value zero on the boundary $\partial B_t(\widetilde{x}_0,2^K)$ with respect to the metric $g_{\alpha \overline{\beta }}(\cdot ,t)$. The function
$\log \left| f\right|^2-v_K$ is then harmonic on $B_t(\widetilde{x}_0,2^K)$. From the maximum principle and (\ref{7.9}), we have \begin{eqnarray} \label{7.10}
\log \left( \left| f(\widetilde{x}_0)\right| ^2\right) - v_K(\widetilde{x}_0)&\leq&\sup \limits_{x\in \partial B_t
(\widetilde{x}_0,2^K)}\log \left| f(x)\right| ^2 \nonumber\\ &\leq&C_{19}K\left( \deg (f)+\delta \right) +C^{\prime }(\delta ) \end{eqnarray} for some positive constants $C_{19}$, $C^{\prime }(\delta )$ independent of $K,\ f,$ and $\overline{x}_0.$
On the other hand, since the volume growth condition (i) is preserved for all times, by virtue of (\ref{6.4}) (cf. Proposition 1.1 in \cite{Mo1}), we have \begin{eqnarray} -v_K(\widetilde{x}_0) & \geq & \frac 1{C_9}\int\nolimits_{B_t(\widetilde{x}_0,
2^K)}\frac 1{d_t^2(x,\widetilde{x}_0)}
\bigtriangleup _t\log \left| f(x)\right| ^2\cdot \omega _t^2(x)
\nonumber\\ & \geq & \frac 1{C_9}\sum\limits_{j=1}^K\left( \frac 1{2^j}
\right)^2\int\nolimits_{B_t(\widetilde{x}_0,2^j)\backslash
B_t(\widetilde{x}_0,2^{j-1})}\bigtriangleup _t
\log \left| f(x)\right| ^2\cdot \omega _t^2(x)\ . \nonumber \end{eqnarray} Then, by (\ref{7.7}) and the fact that $\widetilde{x}_0$ is arbitrarily close to $\overline{x}_0,$ \begin{equation} \label{7.11} -v_K(\widetilde{x}_0)\geq C_{20}\,K\,\mbox{mult}\,\left( \left[ V\right] , \overline{x}_0\right) \end{equation} for some positive constant $C_{20}$ independent of $K,\ f$ and $\overline{x}_0.$
Therefore, by combining (\ref{7.10}) and (\ref{7.11}) and letting $K\rightarrow +\infty $ and then $\delta \rightarrow 0$, we obtain $$ \mbox{mult}\,\left( \left[ V\right] ,\overline{x}_0\right) \leq C_{21}\deg (f) $$ where $C_{21}$ is some positive constant independent of $f$ and $\overline{x}_0.$
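Explicitly, (\ref{7.10}) and (\ref{7.11}) give $$ C_{20}\,K\,\mbox{mult}\,\left( \left[ V\right] ,\overline{x}_0\right) \leq -v_K(\widetilde{x}_0)\leq C_{19}K\left( \deg (f)+\delta \right) +C^{\prime }(\delta )-\log \left( \left| f(\widetilde{x}_0)\right| ^2\right) \ ; $$ dividing by $K$, letting $K\rightarrow +\infty $ and then $\delta \rightarrow 0$ yields the stated bound with $C_{21}=C_{19}/C_{20}$.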
$\Box $
\vskip 3mm A modified version of the proof of Proposition 7.1 gives a uniform bound on the number of irreducible components of $[V].$
\vskip 3mm {\bf \underline{Proposition 7.2}} \qquad Suppose $(M,g_{\alpha \overline{\beta }})$ is a complete noncompact K\"ahler surface as assumed in Proposition 7.1. Let $f$ be a holomorphic function of polynomial growth, $$\left[ V\right] =\frac{\sqrt{-1}}{2\pi }
\partial \overline{\partial }\log\left| f\right| ^2 $$ be the corresponding zero divisor determined by $f$. Then the number of irreducible components of $[V]$ is at most $C\deg(f)$, where $C$ is the same positive constant as in Proposition 7.1.
\vskip 3mm{\bf \underline{Proof.}} \qquad Let $g_{\alpha \overline{\beta }}(\cdot ,t)$ be the evolving metric of the Ricci flow with $g_{\alpha \overline{\beta }}(\cdot)$ as the initial metric and let $[V_1],\ [V_2],\ \cdots ,\ [V_l]$ be any $l$ distinct irreducible components of $\left[ V\right] $. Fix a constant $a>0$ such that the intersection of the smooth points of $\left[ V_i \right]$ with $B_0(\overline{x}_0,a)$ is nonempty for each $1 \leq i \leq l.$
Choose $t=\frac{\alpha _1}{\pi ^2}4^{K+8}a^2$ for each positive integer $K$. As the manifold $M$ is Stein by Theorem 5.1, each $\left[ V_i\right] $ must be noncompact. Hence, for $j=1,2,\cdots ,K$, we have $$ \left[ V_i\right] \cap \left( B_t(\overline{x}_0,2^ja)\left\backslash B_t( \overline{x}_0,2^{j-1}a)\right. \right) \neq \emptyset $$ and there exists a point $x_j\in \left[ V_i\right] $ with $d_t(x_j,\overline{x}_0)=\frac 322^{j-1}a$ in the middle of $B_t(\overline{x}_0,2^ja)\left\backslash B_t( \overline{x}_0,2^{j-1}a)\right.$.
The triangle inequality implies $$ B_t(x_j, 2^{j-2}a)\subset \left( B_t(\overline{x}_0,2^ja)\left\backslash B_t(\overline{x}_0,2^{j-1}a)\right. \right) \ . $$ Applying a slight variant of (\ref{7.7}) to $\left[ V_i\right] $, we have \begin{eqnarray} \frac 1{\left( 2^{j-2}a\right) ^2}\int\nolimits_ {B_t(x_j, 2^{j-2}a)}\left[ V_i\right] \wedge \omega _t & \geq & \frac{\alpha _2}8 \mbox{mult}\,\left( \left[ V_i\right] ,x_j\right) \nonumber\\ & \geq & \frac{\alpha _2}8\ . \nonumber \end{eqnarray}
Since $\sum\limits_{i=1}^l\left[ V_i\right] $ is only a part of the divisor $\left[ V\right] $, we get $$ \frac 1{\left(2^{j-2}a\right) ^2}\int\nolimits_{B_t(\overline{x} _0,2^ja)\left\backslash B_t(\overline{x}_0,2^{j-1}a)\right. }
\bigtriangleup_t\log \left| f(x)\right| ^2\cdot \omega _t^2(x) \geq \frac{\alpha _2}8\, l\ . $$
The subsequent argument is then exactly as in the proof of Proposition 7.1. In the end, we have $$
C_{20}K\cdot l\leq -\log \left( \left| f\left( \widetilde{x}_0\right)
\right| ^2\right) +C_{19}K\left( \deg (f)+\delta \right) + C^{\prime }(\delta)\ . $$ Letting $K\rightarrow +\infty $ and then $\delta \rightarrow 0$, we get the desired estimate.
$\Box $
\section*{\S8. Proof of the main theorem}
\setcounter{section}{8} \setcounter{equation}{0}
\qquad In this section, we will basically follow the approach of Mok in \cite{Mo1}, \cite{Mo3} to accomplish the proof of the main theorem. Let $M$ be a K\"ahler surface as assumed in the Main Theorem. Recall that $P(M)$ stands for the algebra of holomorphic functions of polynomial growth on $M$. Let $R(M)$ be the quotient field of $P(M)$. By an abuse of terminology, we will call it the field of rational functions on $M$.
In the previous section, we showed that there exist two functions $f_1,\ f_2\in P(M)$ giving local holomorphic coordinates at any given point $x \in M$, and that the algebra $P(M)$ separates points on $M$. Moreover, we obtained the following basic multiplicity estimate \begin{equation} \label{8.1} \mbox{mult}\,\left( \left[ V\right] ,x\right) \leq C\deg (f) \end{equation} for all $x\in M$ and $f\in P(M)$, where $$ \left[ V\right] =\frac{\sqrt{-1}}{2\pi }\partial
\overline{\partial }\log \left| f\right| ^2 $$ is the zero divisor of $f$ and $C$ is a constant independent of $f$ and $x$. Thus, by combining these facts with the classical arguments of Poincar\'e and Siegel, we have (cf. the proof of Proposition 5.1 in \cite{Mo1}) \begin{equation} \label{8.2} \dim {}_{{\bf C}}H_p\leq 10^3Cp^2\ , \end{equation} where $H_p$ denotes the vector space of all holomorphic functions with degree $\leq p$, and the field of rational functions $R(M)$ is a finite extension of ${\bf C}(f_1,f_2)$ for some holomorphic functions $f_1,\ f_2\in P(M)$ that are algebraically independent over $\bf C$. By the primitive element theorem, we can then write $$ R(M)={\bf C}\left( f_1,f_2,\frac{f_3}{f_4}\right) $$ for some $f_3,\ f_4\in P(M)$.
Now, consider the mapping $F:M\rightarrow {\bf C}^4 $ defined by $$ F=\left( f_1,f_2,f_3,f_4\right) \ . $$ Since $R(M)$ is a finite extension field of ${\bf C}(f_1,f_2)$, $f_3$ and $f_4$ satisfy equations of the form $$ f_3^p+\sum\limits_{j=0}^{p-1}P_j(f_1,f_2)f_3^j=0\ , $$ $$ f_4^q+\sum\limits_{j=0}^{q-1}Q_j(f_1,f_2)f_4^j=0\ , $$ where $P_j(w_1,w_2)$, $Q_j(w_1,w_2)$ are rational functions of $w_1,\ w_2$. After clearing denominators, we see that $f_1, f_2, f_3, f_4$ satisfy polynomial equations $$ P(f_1,f_2,f_3,f_4) = 0 \qquad \mbox{and} \qquad Q(f_1,f_2,f_3,f_4) = 0. $$ Let $Z_0$ be the subvariety of $\bf C^4$ defined by $$
Z_0=\left\{ \left( w_1,w_2,w_3,w_4\right)\in \bf C^4 \left| \ \begin{array}{l} \bigbreak P(w_1,w_2,w_3,w_4) = 0\ \\ Q(w_1,w_2,w_3,w_4) = 0 \end{array} \right. \right\} \ , $$ and let $Z$ be the connected component of $Z_0$ containing $F(M)$. It is clear that $\dim {}_{{\bf C}}Z=2.$
In the following we will show that $F$ is an "almost injective" and "almost surjective" map to $Z$ and we can desingularize $F$ to obtain a biholomorphic map from $M$ onto a quasi--affine algebraic variety by adjoining a finite number of holomorphic functions of polynomial growth.
First of all, we claim that $Z$ is irreducible and $F$ is "almost injective", i.e., there exists a subvariety $V$ of $M$ such that
$F|_{M\backslash V}:M\backslash V\rightarrow Z$ is an injective locally biholomorphic mapping. Indeed, take $V$ to be the union of $F^{-1}(\mbox{Sing}(Z))$ and the branching locus of $F$, where $\mbox{Sing}(Z)$ denotes the singular set of $Z$. It is clear that $F$ is locally biholomorphic on $M\backslash V$. That $F$ is also injective there follows from the fact that $P(M)$ separates points and $f_1,...,f_4$ generate $P(M)$. To see the irreducibility of $Z$, note that $M \backslash F^{-1}(\mbox{Sing}(Z))$ is connected and hence $\overline{F(M \backslash F^{-1}(\mbox{Sing}(Z)))}$ is irreducible (as its set of smooth points is connected). Since $F(M) \subset \overline{F(M \backslash F^{-1}(\mbox{Sing}(Z)))}$, by the definition of $Z$, it must be irreducible.
Next, we come to the "almost surjectivity" of $F$, i.e., there exists an algebraic subvariety $T$ of $Z$ such that $F(M)$ contains $Z\backslash T$. The method of Mok \cite{Mo1} in proving the almost surjectivity of $F$ is to solve an ideal problem for each $x \in Z \backslash T_0$ missed by $F$, where $T_0$ is some fixed algebraic subvariety of $Z$ containing the singular set of $Z$. The solution of the ideal problem gives a holomorphic function $f_{x}\in P(M)$ with degree bounded independently of $x$, which corresponds to a rational function on $\bf C^4$ whose pole set passes through $x$. Then, the almost surjectivity of $F$ follows. Otherwise, one could select an infinite number of linearly independent $f_x$'s, contradicting the finite dimensionality of the space of holomorphic functions of polynomial growth with some fixed degree, cf. (\ref{8.2}).
In \cite{Mo1}, Mok used the solution $u$ of the Poincar\'e--Lelong equation as the weight function in Skoda's estimates for solving the ideal problem. In his case, because of the quadratic curvature decay condition, the growth of $u$ is bounded both from above and from below by the logarithm of the distance function on $M$. This does not work in our case, because we lack such a lower bound on $u$. However, thanks to the Steinness of $M$ by Theorem 5.1, we can adapt the argument of Mok in \cite{Mo3} to choose another weight function by resorting to Oka's theory of pseudoconvex Riemann domains.
Before carrying out the above procedure to prove the almost surjectivity of $F$, we first need to construct a nontrivial holomorphic $(2,0)$ vector field of polynomial growth on $M$.
Consider the anticanonical line bundle ${\bf K} ^{-1}$ on $M$, equipped with the induced Hermitian metric; its curvature form $\Omega ({\bf K} ^{-1})$ is then simply the Ricci form of $M$. Let $u$ be the strictly plurisubharmonic function of logarithmic growth obtained in Proposition 6.2. For any given point $\overline{x}_0\in M$, let $\left\{ z_1,z_2\right\} $ be local holomorphic coordinates at $\overline{x}_0$. Choose a smooth cutoff function $\eta $ supported in this local holomorphic coordinate chart with value $1$ in a neighborhood of $\overline{x}_0$. We study the following $\overline{\partial }$ equation for the sections of ${\bf K} ^{-1}$ on $M$, \begin{equation} \label{8.3} \overline{\partial }S=\overline{\partial }\left( \eta \frac \partial {\partial z_1}\wedge \frac \partial {\partial z_2}\right). \end{equation} Clearly, we can choose $k>0$ large enough such that $$ k\sqrt{-1}\partial \overline{\partial }u+\Omega({\bf K} ^{-1})+ 3\sqrt{-1}\partial \overline{\partial }\left( \eta \log
\left( |z_1|^2+|z_2|^2\right) \right)>0\ . $$ Then by the standard $L^2$ estimate of the $\overline{\partial }$ operator on Hermitian holomorphic line bundles (cf. Theorem 1.2 in \cite{Mo3}), equation (\ref{8.3}) has a smooth solution $S(x)$ satisfying the estimate \begin{eqnarray} \label{8.4}
& &\int_M\left| S\right| ^2e^{-ku-3\eta \log \left(
|z_1|^2+|z_2|^2\right) }\omega ^2 \nonumber\\
& \leq & C\int_M\left| \overline{\partial }\left( \eta \frac \partial {\partial z_1}\wedge \frac \partial
{\partial z_2}\right) \right| ^2e^{-ku-3\eta \log \left(
|z_1|^2+|z_2|^2\right) }\omega ^2\nonumber\\ & < & + \infty \end{eqnarray} for some positive constant $C$. Recall the Poincar\'e--Lelong equation for the section $S(x)$ of the anticanonical line bundle $\bf{K}^{-1}$, $$
\frac{\sqrt{-1}}{2\pi }\partial \overline{\partial }\log \left| S\right| ^2=[V]-\frac 1{2\pi }\mbox{Ric} \qquad \mbox{on} \quad M\ , $$
where $\left[ V\right] $ is the zero divisor of $S(x)$ (cf. \cite{Mo3}). Thus, $\log \left| S\right| ^2+u$ is subharmonic and so is
$\left| S\right|^2e^u=\exp (\log \left| S\right| ^2+u).$ Since $M$ has positive Ricci curvature and maximal volume growth, we can apply the mean value inequality of subharmonic functions, (\ref{8.4}) and the fact that $u$ has logarithmic growth to show that $S(x)$ is of polynomial growth. Set $$ v=\eta \left( \frac \partial {\partial z_1}\wedge \frac \partial {\partial z_2}\right) -S\ . $$ Then, $v$ is a nontrivial holomorphic $(2,0)$ vector field over $M$ of polynomial growth, as desired.
Now, for any $f_i,f_j\in \{f_1,f_2,f_3,f_4\}$ with $df_i\wedge df_j\not \equiv 0$, we can choose the point $\overline{x}_0$ in the above construction of $v$ so that the holomorphic function $f_{ij}$ defined by \begin{equation} \label{8.5} f_{ij}=\left\langle v,df_i\wedge df_j\right\rangle \end{equation} is a nontrivial holomorphic function of polynomial growth. Here,
we have used the fact that $\left\| df_i\wedge df_j\right\| $ grows at most polynomially, by Yau's gradient estimate for harmonic functions \cite{Y1}. It is obvious that the zero divisor of $df_i\wedge df_j$ is contained in the zero divisor of $f_{ij}$, which we denote by $V_0$. Since $M$ is Stein, so is $M \backslash V_0$.
Denote by $\pi_{ij}:Z\rightarrow \bf C^2$ the projection map given by $(w_1,w_2,w_3,w_4) \mapsto (w_i,w_j)$. Then, the map $$ \rho =\pi _{ij}\circ F:M\backslash V_0\rightarrow \bf C^2 $$ realizes the Stein manifold $M\backslash V_0$ as a Riemann domain of holomorphy over $\bf C^2.$
Let $\delta (x)$ be the Euclidean distance to the boundary as in Oka \cite{O}. Then, $-\log \delta $ is a plurisubharmonic function on $M\backslash V_0$ by a theorem of Oka \cite{O}. The function $\delta (x)$ will be used in the weight function of Skoda's estimate mentioned above. It is essential to estimate it from below in terms of the intrinsic distance $d(x,x_0)$ on $M$.
\vskip 3mm{\bf \underline{Lemma 8.1}} \qquad There exist positive constants $p$ and $C$ such that $$
\delta (x)\geq C\left| f_{ij}(x)\right| ^2\left( d(x,x_0)+1\right) ^{-p}\ . $$
\vskip 3mm{\bf \underline{Proof.}} \qquad Let $v_i,\ v_j$ be two holomorphic vector fields on $M \backslash V_0$ defined by $$ \left\langle v_k,df_l\right\rangle =\delta _{kl}\ ,\qquad k,l=i,j\ . $$ By Cramer's rule, we have $$
\left| v_k\right| \leq \frac{\left| df_i\right| +\left|
df_j\right| }{\left|df_i\wedge df_j\right| }\leq
\frac{\left| v\right| \left( \left| df_i\right|+\left|
df_j\right| \right) }{\left| f_{ij}\right| }\leq C_{22}
\frac{\left(d(x,x_0)+1\right) ^{k_1}}{\left| f_{ij}(x)\right| } \quad \mbox{on}\quad M\backslash V_0, $$ for $k=i,j$ and some positive constants $C_{22}$, $k_1$.
Since $f_{ij}$ is of polynomial growth, $|\nabla f_{ij}|$ is also of polynomial growth by the gradient estimate of Yau, i.e., $$
\max \left\{ \left| f_{ij}(x)\right| ,\left| \nabla f_{ij}(x)\right| \right\} \leq C_{23}\left( d(x,x_0)+1\right) ^{k_2} \qquad \mbox{on} \quad M\ , $$ for some positive constants $C_{23}$ and $k_2$. Take $x \in M\backslash V_0$; then for any
$y\in B\left( x,\left| f_{ij}(x)\right| \left/ 3C_{23}\left( d(x,x_0)+1\right) ^{k_2}\right. \right) $, we have \begin{eqnarray} \label{8.6}
\left| f_{ij}(y)\right|&\geq&\left| f_{ij}(x)\right| -C_{23}
\left( d(x,x_0)+2\right) ^{k_2}\cdot \frac{\left| f_{ij}(x)
\right| }{3C_{23}\left( d(x,x_0)+1\right) ^{k_2}} \nonumber\\
& \geq & \frac 12\left| f_{ij}(x)\right| \ . \end{eqnarray} This implies $$
B\left( x,\left| f_{ij}(x)\right| \left/ 3C_{23}\left( d(x,x_0)+1\right) ^{k_2}\right. \right) \subset M\backslash V_0 $$ and \begin{equation} \label{8.7}
\left| v_k(y)\right| \leq 2C_{22}\frac{\left( d(x,x_0)+1\right)
^{k_1}}{\left| f_{ij}(x)\right| }\ , \end{equation} for all
$y\in B\left( x,\left| f_{ij}(x)\right| \left/ 3C_{23}\left( d(x,x_0)+1\right) ^{k_2}\right. \right) $, $k=i,j$.
By the definition of $\delta (x)$, it suffices to prove \begin{eqnarray} \label{8.8}
& &\rho \left( B\left( x,\left| f_{ij}(x)\right| \left/ 6C_{23} \left( d(x,x_0)+1\right) ^{k_2}\right. \right) \right) \nonumber\\
& \supset & B_{\bf C^2}\left( \rho(x),C_{24}\left| f_{ij}(x)\right|^2 \left/ \left( d(x,x_0)+1\right) ^{k_1+k_2}\right. \right), \end{eqnarray} for some positive constant $C_{24}$. Here, $B_{{\bf C^2}}(a,r)$ denotes the Euclidean ball in $\bf C^2$ with center $a$ and radius $r$.
Consider the real vector field \begin{eqnarray} \xi & = & \alpha _i\left( 2\mbox{Re}\,\left( v_i\right) \right) + \alpha _j\left( 2\mbox{Re}\,\left( v_j\right) \right) + \beta _i\left( 2\mbox{Im}\,\left( v_i\right) \right) + \beta _j\left( 2\mbox{Im}\,\left( v_j\right) \right) \nonumber\\ & = & \left( \alpha _i-\sqrt{-1}\beta _i\right) v_i+ \left( \alpha _j-\sqrt{-1}\beta _j\right) v_j+ \left( \alpha _i+\sqrt{-1}\beta _i\right) \overline{v}_i \nonumber\\ & & +\left( \alpha _j+\sqrt{-1}\beta _j\right) \overline{v}_j \nonumber \end{eqnarray}
with $\left| \alpha _i\right| ^2+\left| \alpha _j\right| ^2+
\left| \beta_i\right| ^2+\left| \beta _j\right| ^2=1$. Clearly $\xi $ also satisfies (\ref{8.7}). Let $\gamma _\xi (\tau )$ be the integral curve in $M$ defined by $\xi $ and passing through $x$, i.e., \begin{equation} \label{8.9} \left\{ \begin{array}{l} \bigbreak \displaystyle \frac{d\gamma _\xi (\tau )}{d\tau }=\xi \\ \gamma _\xi (0)=x\ . \end{array} \right. \end{equation} We have $$ \frac{d\left( f_i\circ \gamma _\xi (\tau )\right) } {d\tau }=\left\langle \xi,df_i\right\rangle = \alpha _i-\sqrt{-1}\beta _i\ , $$ $$ \frac{d\left( f_j\circ \gamma _\xi (\tau )\right) } {d\tau }=\left\langle \xi,df_j\right\rangle = \alpha _j-\sqrt{-1}\beta _j\ , $$ and \begin{equation} \label{8.10}
\left| f_i\circ \gamma _\xi (\tau )-f_i(x)\right| ^2+\left|
f_j\circ \gamma _\xi (\tau )-f_j(x)\right| ^2=\tau ^2\ . \end{equation} Note that (\ref{8.10})\ implies that $\gamma _\xi (\tau )$ cannot always stay in $$
B\left(x,\left|f_{ij}(x)\right|\left/6C_{23}\left(d(x,x_0)+1 \right) ^{k_2}\right.\right), $$ otherwise $F=(f_1,f_2,f_3,f_4)$ would become unbounded in this ball. Denote by $\tau_0$ the first time when $\gamma _\xi (\tau )$ touches the boundary $$
\partial B\left( x,\left| f_{ij}(x)\right| \left/ 6C_{23}\left( d(x,x_0)+1\right) ^{k_2}\right. \right)\ . $$ Then it is easy to see that \begin{eqnarray}
\frac{\left|
f_{ij}(x)\right| }{6C_{23}\left( d(x,x_0)+1\right) ^{k_2}} & \leq & \mbox{the length of} \; \gamma _\xi \; \mbox{on} \; [0,\tau_0] \nonumber\\ & \leq & 2C_{22}\int_0^{\tau _0}\frac{\left( d(x,x_0)+1\right)
^{k_1}}{\left| f_{ij}(x)\right| }d\tau \qquad (\mbox{by} \; (\ref{8.7})) \nonumber\\
& = & 2C_{22}\tau _0\frac{\left( d(x,x_0)+1\right) ^{k_1}}{\left|
f_{ij}(x)\right| }\ .\nonumber \end{eqnarray} Thus, \begin{equation} \label{8.11}
\tau _0\geq \frac{\left| f_{ij}(x)\right| ^2}{ 12C_{22}C_{23}\left( d(x,x_0)+1\right) ^{k_1+k_2}}\ . \end{equation}
Note that the integral curve $\gamma_{\xi}$ projects under $\rho$ to a straight line passing through $\rho(x)$. Thus, when $(\alpha_i, \alpha_j, \beta_i, \beta_j)$ runs through the unit sphere in $\bf C^2$, the collection of integral curves $\gamma_{\xi}$ inside $$
B\left(x,\left|f_{ij}(x)\right|\left/6C_{23}\left(d(x,x_0)+1 \right) ^{k_2}\right.\right) $$ will project, by $\rho$, onto the Euclidean ball $$
B_{\bf C^2}\left( \rho(x),\left| f_{ij}(x)\right|^2 \left/ 12C_{22}C_{23} \left( d(x,x_0)+1\right) ^{k_1+k_2}\right. \right). $$ This proves (\ref{8.8}) and hence the lemma.
$\Box $
\vskip 3mm Now, we are ready to prove the almost surjectivity of the holomorphic map $F:M\rightarrow {\bf C^4}$. For each $1\leq i,j\leq 4$, since $f_{ij}$ is a holomorphic function of polynomial growth and $R(M)$ is generated by $f_1,...,f_4$, we can write $$ f_{ij}(x) = H_{ij}\left( f_1(x),f_2(x),f_3(x),f_4(x)\right) \qquad \mbox{on} \quad M, $$ for some rational function $H_{ij}$ on ${\bf C^4}$. Let $T_0$ be the union of the singular set of $Z$ and the zero and pole sets of all $H_{ij},\ 1\leq i,j\leq 4$. For any $b\in Z\backslash (F(M)\cup T_0)$, there exist fixed $\{i,j\}\subset \{1,2,3,4\}$ such that the projection $\pi _{ij}:Z\rightarrow {\bf C^2}$ is nondegenerate at $b$. Since $Z$ is algebraic, the number of points contained in $\pi _{ij}^{-1}\circ \pi _{ij}(b)$ is less than some fixed integer $K$ depending only on $Z$. By interpolation, there is a polynomial $h_b$ of degree $\leq K$ on ${\bf C^4}$ such that $h_b(b)=1$, and $h_b(w)=0$ for all $w\in (\pi _{ij}^{-1}\circ \pi _{ij}(b))\backslash \{b\}$. We now solve on $M\backslash V_0$ the ideal problem with unknown holomorphic functions $g_i$ and $g_j$, \begin{equation} \label{8.12} \left( f_i-b_i\right) g_i+\left( f_j-b_j\right) g_j=\left( h_b\circ F\right) ^4\ , \end{equation} where $b=(b_1,b_2,b_3,b_4).$
Let $$
\psi =-n_1\log \delta +n_2\log (1+|f_i|^2+|f_j|^2), $$ where the integers $n_1,\ n_2 > 0$ will be determined later. Clearly, $\psi $ is a strictly plurisubharmonic function on $M\backslash V_0$. By the estimate of Skoda (cf. Theorem 1.3 in \cite{Mo3}), given any $\alpha >1$, there exists a solution $\{g_i,g_j\}$ to (\ref{8.12}) such that \begin{eqnarray} \label{8.13}
& &\int\limits_{M\backslash V_0}\frac{\left( \left| g_i\right|^2+
\left| g_j\right| ^2\right) e^{-\psi }}{\left( \left| f_i-b_i\right|^2+
\left| f_j-b_j\right| ^2\right) ^{2\alpha }}\rho ^{*}dV_E \nonumber\\
& \leq & C_\alpha\int\limits_{M\backslash V_0}\frac{\left( h_b\circ F\right) ^8 e^{-\psi }}{\left( \left| f_i-b_i\right| ^2+\left| f_j-b_j\right| ^2 \right) ^{2\alpha +1}}\rho^{*}dV_E\ , \end{eqnarray} provided the right hand side is finite. Recall that $\rho = \pi_{ij} \circ F$ and here $$ \rho ^{*}dV_E=\pm \left( \frac{\sqrt{-1}}2\right) ^2df_i\wedge \overline{df}_i\wedge df_j\wedge \overline{df}_j $$ denotes the pull back of the Euclidean volume element of ${\bf C^4.}$
Let $\{\zeta _1,\zeta _2,\cdots ,\zeta _m\}=\pi _{ij}^{-1}\circ \pi _{ij}(b)$ $(m<K)$ be the preimages of $\pi _{ij}(b)$ with $\zeta _1=b$. And let $U_k\ ( 1\leq k\leq m )$ be disjoint small neighborhoods of $\zeta _k\ ( 1\leq k\leq m )$. The integral on the right hand side of (\ref{8.13}) can be decomposed into three parts \begin{eqnarray} RHS & = &\left( \int_{F^{-1}(U_1)}+\sum_{k=2}^m\int_{F^{-1}(U_k)}+ \int_{\left( M\backslash V_0\right) \left\backslash \cup _{k=1}^mF^{-1}(U_k)\right. }\right) \nonumber\\ & &\frac{\left( h_b\circ F\right) ^8e^{-\psi }}{\left(
\left| f_i-b_i\right| ^2+\left| f_j-b_j\right| ^2\right) ^{2\alpha +1}} \rho ^{*}dV_E \nonumber\\ & = & I_1+I_2+I_3\ .\nonumber \end{eqnarray}
For $I_1$, since $h_b(b)=1$ and $\delta (x)\leq \left( \left| f_i-b_i\right|
^2+\left| f_j-b_j\right| ^2\right) ^{\frac 12}$, we can choose $n_1\geq 2(2\alpha +1)$ and $U_1$ small enough so that the integral $I_1$ is finite.
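Indeed, on $F^{-1}(U_1)$ we have $e^{-\psi }\leq \delta ^{n_1}\leq \left( \left| f_i-b_i\right| ^2+\left| f_j-b_j\right| ^2\right) ^{\frac{n_1}2}$, so the integrand of $I_1$ is bounded by a constant multiple of $$ \left( \left| f_i-b_i\right| ^2+\left| f_j-b_j\right| ^2\right) ^{\frac{n_1}2-(2\alpha +1)}\ , $$ which stays bounded once $n_1\geq 2(2\alpha +1)$.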
For $I_2$, since $h_b(\zeta _k)=0$ for $2\leq k\leq m,$ we can choose $ \alpha $ such that $2(2\alpha +1)<8$ (e.g. $\alpha =1.4$). Then the integral $I_2$ is also finite.
For $I_3$, we choose $n_2\geq 10+8K+n_1$, where $h_b$ is of degree $\leq K$. Then $I_3$ can be estimated as $$
I_3\leq C_{25}\int_{{\bf C^2}}\frac 1{\left( 1+|w|^2\right) ^{10}}dV_E <+\infty\ . $$
Hence, we have obtained a solution $\{g_i,g_j\}$ of the ideal problem (\ref{8.12}) such that \begin{equation} \label{8.14}
\int\limits_{M\backslash V_0}\frac{\left( \left| g_i\right|
^2+\left| g_j\right| ^2\right) e^{-\psi }}{\left( \left| f_i-b_i\right|
^2+\left| f_j-b_j\right| ^2\right) ^{2\alpha }}\rho ^{*}dV_E<+\infty \ . \end{equation} Recall from Lemma 8.1 and (\ref{8.5}), we have $$
\delta (x)\geq C\left| f_{ij}(x)\right| ^2\left( d(x,x_0)+1\right) ^{-p} $$ and \begin{eqnarray} \rho ^{*}dV_E&=&\pm \left( \frac{\sqrt{-1}}2\right) ^2df_i\wedge df_j\wedge \overline{df}_i\wedge \overline{df}_j\nonumber\\
&\geq&\frac{\left| f_{ij}\right| ^2}{\left| v\wedge \overline{v}\right| } \omega ^2\nonumber\\
&\geq&C_{26}\frac{\left| f_{ij}(x)\right| ^2}{\left( d(x,x_0)+1\right) ^{k_3}}\omega ^2\nonumber \end{eqnarray} for some positive constants $C_{26}$ and $k_3$. Substituting these two inequalities into (\ref{8.14}) we get \begin{equation} \label{8.15}
\int\limits_{M\backslash V_0}\frac{\left( \left| g_i\right|
^2+\left| g_j\right| ^2\right) \left| f_{ij}\right| ^{2+2n_1}}{\left( d(x,x_0)+1\right) ^{k_4}}\omega ^2<+\infty \ \end{equation} where $k_4$ is some positive constant independent of $b$ and $i,\ j.$ Then, both $g_if_{ij}^{n_1+1}$ and $g_jf_{ij}^{n_1+1}$ are locally square integrable. They can thus be extended holomorphically from $M\backslash V_0$ to $M$. By the mean value inequality of subharmonic functions, we deduce also that they are of polynomial growth with degree bounded by some positive number $k_5$ independent of $b$.
Now, recall that $R(M)={\bf C}\left( f_1,f_2,f_3/f_4\right) $, so the holomorphic functions $g_if_{ij}^{n_1+1}$ and $g_jf_{ij}^{n_1+1}$ are rational functions of $f_1,\ f_2,\ f_3$ and $f_4$. Hence, we can regard the equation (\ref{8.12}) as an equation on the variety $Z\subset {\bf C^4}$, namely $$ \left( w_i-b_i\right) g_if_{ij}^{n_1+1}+\left( w_j-b_j\right) g_jf_{ij}^{n_1+1}=H_{ij}^{n_1+1}\cdot h_b^4\ . $$ Since $h_b$ is a polynomial with $h_b(b)=1$ and the point $b$ lies outside of the zero and pole sets of $H_{ij}$, either $g_if_{ij}^{n_1+1}$ or $g_jf_{ij}^{n_1+1}$, when regarded as a rational function on $\bf C^4$, must have a pole at $b$. Denote this function by $G^0$. Thus, $G^0$ is a rational function on $Z$ with $G^0(b)=\infty $ and $G^0\circ F$ is a holomorphic function on $M$ with degree $\leq k_5$. If $Z\backslash (F(M)\cup T_0 \cup \mbox{pole sets of}\, G^0)$ is empty, then we are done. Otherwise, pick any $b_1\in Z\backslash (F(M)\cup T_0 \cup \mbox{pole sets of}\, G^0)$ and repeat the same procedure to obtain a rational function $G^1$ on $Z$ with $G^1(b_1)=\infty $ and $G^1\circ F$ a holomorphic function on $M$ with degree $\leq k_5$. Proceeding in this way, we obtain a sequence of points $\{b,b_1,b_2,\cdots \}$ and rational functions $\{G^0,G^1,G^2,\cdots \}$ such that $G^k(b_k)=\infty $ and $G^l$ is regular at $b_k$ for $l<k$. So, $\{G^0,G^1,G^2,\cdots \}$ must be linearly independent over ${\bf C}$. Moreover, all of the $G^k\circ F$ are holomorphic functions with degree $\leq k_5$. Hence, by (\ref{8.2}), the above procedure must terminate in a finite number of steps. In other words, there exists an algebraic subvariety $T$ of $Z$ such that $F(M)\supset Z\backslash T$.
Moreover, $F$ establishes a quasi embedding from $M$ to a quasi--affine algebraic variety. Indeed, let $W=F^{-1}(T)$. By the definition of $T_0$ and the construction of $T$, we know that $W \supset V$, where $V$ is the union of the branching locus of $F$ and $F^{-1}(\mbox{Sing}(Z))$, and $W$ is the zero divisor of finitely many holomorphic functions of polynomial growth. Therefore, $F$ maps $M\backslash W$ biholomorphically onto $Z\backslash T.$
Finally, to complete the proof of our Main Theorem, we have to show that the mapping $F$ can be desingularized by adjoining a finite number of holomorphic functions of polynomial growth and taking normalization of the image.
We have constructed the mapping $F:M\rightarrow Z$ into an affine algebraic variety which maps $M\backslash W$ biholomorphically onto $Z\backslash T.$ Now, we use normalization of the affine algebraic variety $Z$ to resolve the codimension $1$ singularities of $F.$ Let $\mbox{Reg}(Z)$ denote the Zariski dense subset of $Z$ consisting of its regular points. It is well known that the normalization $\widetilde{Z}$ of $Z$ can be obtained by taking $\widetilde{Z}$ to be the closure of the graph of $\{Q_1,Q_2,\cdots ,Q_m\}$ on $\mbox{Reg}(Z)$ where $Q_i$ is a rational function which is holomorphic (or regular in the terminology of algebraic geometry) on $\mbox{Reg}(Z)$. The lifting of $F:M\rightarrow Z$ to $\widetilde{F}:M\rightarrow \widetilde{Z}$ is then given by $\{f_1,f_2,f_3,f_4,Q_1\circ F,\cdots ,Q_m\circ F\}$ where, as was shown in Proposition 8.1 of Mok \cite{Mo1}, for each $i$, $Q_i\circ F$ can be holomorphically extended to the whole manifold $M$ as a holomorphic function of polynomial growth.
Write $F_0=F:M\rightarrow Z$ and denote $\widetilde{F}_0:M\rightarrow \widetilde{Z}$ the normalization of $F_0$. For any smooth point $x$ on the subvariety $W$, by using the $L^2$ estimates of the $\overline{\partial }$ operator as in Section 7, one can find two holomorphic functions $g_x^1,\ g_x^2$ of polynomial growth which give local holomorphic coordinates at $x$. Adding $g_x^1,\ g_x^2$ to the map $\widetilde{F}_0$, we get a new map $F_1=(\widetilde{F}_0,g_x^1,g_x^2):M\rightarrow Z_1\subset {{\bf C}^{6+m}}$, which is nondegenerate at $x$. Write the normalization of $F_1$ as $\widetilde{F}_1:M\rightarrow \widetilde{Z}_1$ and continue in this way to get holomorphic mappings $F_i:M\rightarrow Z_i$ and their normalizations $\widetilde{F}_i:M\rightarrow \widetilde{Z}_i$ such that $$ \widetilde{W}_0\stackrel{\supset }{\neq }\widetilde{W}_1\stackrel{\supset }{ \neq }\cdots \stackrel{\supset }{\neq }\widetilde{W}_i\stackrel{\supset }{ \neq }\cdots, $$ where $\widetilde{W}_i$ is the locus of ramification of $\widetilde{F}_i.$
Note that $\widetilde{W}_i$ contains no isolated point because $\widetilde{Z}_i$ is normal. Moreover, by Proposition 7.2, $W$ has only a finite number of irreducible components because $W$ is the zero divisor of finitely many holomorphic functions of polynomial growth. This implies that the above procedure must terminate in a finite number of steps, say $l$.
Thus, we get a biholomorphism $\widetilde{F}_l$ from $M$ onto its image $\widetilde{F}_l(M)\subset \widetilde{Z}_l$. The argument in our proof of the almost surjectivity shows that $\widetilde{F}_l(M)$ can miss at most finitely many irreducible subvarieties of $\widetilde{Z}_l$, say $\widetilde{T}_1^{(l)},\cdots ,\widetilde{T}_q^{(l)}$. If $\widetilde{F}_l(M)\cap \widetilde{T}_i^{(l)}\neq \emptyset $, then it must intersect $\widetilde{T}_i^{(l)}$ in a nonempty open set because $\widetilde{F}_l$ is open. We arrange $\widetilde{T}_i^{(l)}$ so that $\widetilde{F}_l(M)\cap \widetilde{T}_i^{(l)}=\emptyset $ for $1\leq i\leq p$ and $\widetilde{F}_l(M)\cap \widetilde{T}_i^{(l)}\neq \emptyset $ for $p+1\leq i\leq q$. Note that $\widetilde{F}_l(M)$ is a Stein subset of $\widetilde{Z}_l$ because $M$ is Stein by Theorem 5.1 and $\widetilde{F}_l$ maps $M$ biholomorphically onto its image. By Hartogs' extension theorem, every holomorphic function on $\widetilde{Z}_l\backslash \cup _{1\leq i\leq q}\widetilde{T}_i^{(l)}$ extends to $\widetilde{Z}_l\backslash \cup _{1\leq i\leq p}\widetilde{T}_i^{(l)}$. Hence, we get a biholomorphic map from $M$ onto a quasi--affine algebraic variety. Finally, recall that a classical theorem of Ramanujam \cite{R} in affine algebraic geometry says that an algebraic variety homeomorphic to ${\bf R^4}$ is biregular to ${\bf C^2}$. Combining this result of Ramanujam with Theorem 5.1, we deduce that $M$ is actually biholomorphic to ${\bf C^2}$. Therefore we have completed the proof of the Main Theorem.
\end{document}
\begin{document}
\title[Example for nonexpansive semigroup] {An example for a one-parameter nonexpansive semigroup} \author[T. Suzuki]{Tomonari Suzuki} \date{} \hyphenation{kyu-shu kita-kyu-shu to-bata-ku sen-sui-cho} \address{ Department of Mathematics, Kyushu Institute of Technology, 1-1, Sensuicho, Tobataku, Kitakyushu 804-8550, Japan} \email{suzuki-t@mns.kyutech.ac.jp} \keywords{Nonexpansive semigroup, Common fixed point} \subjclass[2000]{Primary 47H20, Secondary 47H10}
\begin{abstract} In this paper,
we give an example of a one-parameter nonexpansive semigroup. This example shows that
there exists a one-parameter nonexpansive semigroup
$\{ T(t) : t \geq 0 \}$ on a closed convex subset $C$ of a Banach space $E$
such that
$$ \lim_{t \rightarrow \infty}
\left\| \frac{1}{t} \int_0^t T(s)x \; ds - x \right\| = 0 $$
for some $x \in C$ that is not a common fixed point of $\{ T(t) : t \geq 0 \}$. \end{abstract} \maketitle
\section{Introduction} \label{SC:introduction}
Throughout this paper,
we denote by $\mathbb N$ and $\mathbb R$
the sets of positive integers and of real numbers, respectively.
A family $\{ T(t): t \geq 0 \}$ of mappings
on a subset $C$ of a Banach space $E$
is called a one-parameter nonexpansive semigroup on $C$ if the following hold: \begin{enumerate} \renewcommand{\theenumi}{(sg\arabic{enumi})} \renewcommand{\labelenumi}{\theenumi} \item\label{ENUM:sg:nonex}
For each $t \geq 0$, $T(t)$ is a nonexpansive mapping on $C$, i.e.,
$$ \| T(t)x - T(t)y \| \leq \| x - y \| $$
for all $x,y \in C$; \item\label{ENUM:sg:T0}
$T(0)x = x$ for all $x \in C$; \item\label{ENUM:sg:s+t}
$T(s+t) = T(s) \circ T(t)$ for all $s, t \geq 0$; \item\label{ENUM:sg:conti}
For each $x \in C$, the mapping $t \mapsto T(t)x$ is continuous. \end{enumerate} We know that
$\{ T(t) : t \geq 0 \}$ has a common fixed point
under the assumption that
$C$ is weakly compact convex and $E$ has the Opial property;
see \cite{REF:Belluce_Kirk1967_Illinois,
REF:Browder1965_ProcNAS_3,
REF:Bruck1974_Pacific,
REF:DeMarr1963_Pacific,
REF:Gossez_LamiDazo1972_Pacific,
REF:Lim1974_Pacific,
REF:Opial1967_BullAMS} and others.
Convergence theorems for one-parameter nonexpansive semigroups
are proved in
\cite{REF:Baillon1976,
REF:Baillon_Brezis1976_Houston,
REF:Hirano1982_JMSJapan,
REF:Miyadera_Kobayasi1982_NATMA,
REF:TS2003_ProcAMS,
REF:TSP_mcbt_1_04}
and others. For example,
Baillon and Brezis in \cite{REF:Baillon_Brezis1976_Houston}
proved the following;
see also page 80 in \cite{REF:Takahashi_ybook}.
\begin{thm}[Baillon and Brezis \cite{REF:Baillon_Brezis1976_Houston}] \label{THM:Baillon-Brezis} Let $C$ be a bounded closed convex subset of a Hilbert space $E$ and
let $\{ T(t) : t \geq 0 \}$ be
a one-parameter nonexpansive semigroup on $C$. Then, for any $x \in C$,
$$ \frac{1}{t} \int_{0}^{t} T(s) x \; ds $$
converges weakly to a common fixed point of $\{ T(t) : t \geq 0 \}$
as $t \rightarrow \infty$. \end{thm}
\noindent Also, Suzuki and Takahashi in \cite{REF:TSP_mcbt_1_04}
proved the following.
\begin{thm}[Suzuki and Takahashi \cite{REF:TSP_mcbt_1_04}] \label{THM:TS-Takahashi-conv} Let $C$ be a compact convex subset of a Banach space $E$ and
let $\{ T(t) : t \geq 0 \}$ be
a one-parameter nonexpansive semigroup on $C$. Let $x_1 \in C$ and
define a sequence $\{ x_n \}$ in $C$ by
$$ x_{n+1} = \frac{\alpha_n}{t_n} \int_{0}^{t_n} T(s) x_n \; d s
+(1 - \alpha_n) x_n $$
for $n \in \mathbb N$,
where $\{ \alpha_n \} \subset [0,1]$ and $\{ t_n \} \subset (0,\infty)$
satisfy the following conditions:
$$ 0 < \liminf_{n \rightarrow \infty} \alpha_n \leq
\limsup_{n \rightarrow \infty} \alpha_n < 1, \quad
\lim_{n \rightarrow \infty} t_n = \infty,
\quad\text{and}\quad
\lim_{n \rightarrow \infty} \frac{t_{n+1}}{t_n} = 1 . $$ Then $\{ x_n \}$ converges strongly to
a common fixed point $z_0$ of $\{ T(t) : t \geq 0 \}$. \end{thm}
\noindent In its proof, the following theorem plays a very important role.
\begin{thm}[Suzuki and Takahashi \cite{REF:TSP_mcbt_1_04}] \label{THM:int-compact} Let $C$ be a compact convex subset of a Banach space $E$. Let $\{ T(t): t \geq 0 \}$ be a one-parameter nonexpansive semigroup on $C$. Then for $z \in C$,
the following are equivalent: \begin{enumerate} \item
$z$ is a common fixed point of $\{ T(t) : t \geq 0 \}$; \item
$$ \liminf_{t \rightarrow \infty} \left\|
\frac{1}{t} \int_{0}^{t} T(s) z \; d s - z
\right\| = 0 $$
holds. \end{enumerate} \end{thm}
\noindent Recently, Suzuki proved in \cite{REF:Suzuki-thm}
the following result, which is similar to Theorem \ref{THM:int-compact}. This theorem also plays a very important role
in the proof of the existence of a nonexpansive retraction
onto the set of common fixed points.
\begin{thm}[Suzuki \cite{REF:Suzuki-thm}] \label{THM:int} Let $E$ be a Banach space with the Opial property and
let $C$ be a weakly compact convex subset of $E$. Let $\{ T(t): t \geq 0 \}$ be a one-parameter nonexpansive semigroup on $C$. Then for $z \in C$,
the following are equivalent: \begin{enumerate} \item
$z$ is a common fixed point of $\{ T(t) : t \geq 0 \}$; \item
$$ \liminf_{t \rightarrow \infty} \left\|
\frac{1}{t} \int_{0}^{t} T(s) z \; d s - z
\right\| = 0 $$
holds; \item
there exists a subnet of
a net
$$ \left\{ \frac{1}{t} \int_{0}^{t} T(s) z \; ds \right\} $$
in $C$
converging weakly to $z$. \end{enumerate} \end{thm}
So, it is a natural problem
whether or not
the conclusions of Theorems \ref{THM:int-compact} and \ref{THM:int}
hold in general. In this paper,
we give one example concerning Theorems \ref{THM:int-compact}
and \ref{THM:int}. This example shows that
there exists a one-parameter nonexpansive semigroup
$\{ T(t) : t \geq 0 \}$ on a closed convex subset $C$ of a Banach space $E$
such that
$$ \lim_{t \rightarrow \infty}
\left\| \frac{1}{t} \int_0^t T(s)x \; ds - x \right\| = 0 $$
for some $x \in C$ that is not a common fixed point of $\{ T(t) : t \geq 0 \}$. That is, our answer to the problem is negative.
\section{Example} \label{SC:example}
We give an example concerning Theorems \ref{THM:int-compact} and
\ref{THM:int}. See also Example 3.7 in \cite{REF:Goebel_Kirk}.
\begin{exmp} \label{EX:int} Put $\Omega = \{ -1 \} \cup [0,\infty)$,
let $E$ be the Banach space consisting of all bounded continuous functions
on $\Omega$ with supremum norm, and
define a subset $C$ of $E$ by
$$ C = \left\{ x \in E :
\begin{array}{l}
0 \leq x(u) \leq 1 \quad \text{for } u \in \Omega \\
| x(u_1) - x(u_2) | \leq | u_1 - u_2 | \quad
\text{for } u_1, u_2 \in [0,\infty)
\end{array}
\right\} . $$ Define a nonexpansive semigroup $\{ T(t) : t \geq 0 \}$ as follows: For $t \in [0,1]$, define
$$ \big( T(t)x \big) (u)
=
\begin{cases}
x(u), & \text{if $u = -1$}, \\
x(u-t), & \text{if $u \geq t$}, \\
x(0) - t + u, & \text{if $0 \leq u \leq t$}, \\
& 1 - \alpha_x(1-t+u) \leq x(0)-t+u, \\
x(0) + t - u, & \text{if $0 \leq u \leq t$}, \\
& 1 - \alpha_x(1-t+u) \geq x(0)+t-u, \\
1 - \alpha_x(1-t+u), & \text{if $0 \leq u \leq t$}, \\
& \big| 1 - \alpha_x(1-t+u) - x(0) \big| \leq t - u,
\end{cases} $$
where
$$ \alpha_x(1-t+u) =
\sup\big\{ x(s) : s \in \{ -1 \} \cup [1-t+u,\infty) \big\} .$$ For $t \in (1,\infty)$,
there exist $m \in \mathbb N$ and
$t' \in [0,1/2)$ satisfying $ t = m / 2 + t'$. Define $T(t)$ by
$$ T(t) = T(1/2)^m \circ T(t') .$$ Then
$0 \in C$ is not a common fixed point of $\{ T(t) : t \geq 0 \}$ and \begin{equation} \label{EQU:int:int}
\lim_{t \rightarrow \infty} \left\|
\frac{1}{t} \int_{0}^{t} T(s) 0 \; d s - 0 \right\| = 0 \end{equation}
holds. \end{exmp}
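\begin{rem}
As a simple consistency check on the definition of $T(t)$ (this observation is not used in the sequel), note that the three branches for $0 \leq u \leq t$ agree at the junction $u = t$: the first two give $x(0) - t + t = x(0)$ and $x(0) + t - t = x(0)$, while the third applies only when $\big| 1 - \alpha_x(1) - x(0) \big| \leq 0$, that is, $1 - \alpha_x(1) = x(0)$, and then it also gives $x(0)$. Since the branch for $u \geq t$ gives $x(t-t) = x(0)$ at $u = t$ as well, the function $T(t)x$ is unambiguously defined and continuous at $u = t$.
\end{rem}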
Before proving Example \ref{EX:int},
we need some lemmas.
\begin{lem} \label{LEM:int:alpha}
The following hold:
\begin{enumerate}
\item
$ | \alpha_x(u_1) - \alpha_x(u_2) | \leq | u_1 - u_2 | $
for $x \in C$ and $u_1, u_2 \in [0,\infty)$;
\item
$ | \alpha_x(u) - \alpha_y(u) | \leq \| x - y \| $
for $x,y \in C$ and $u \in [0,\infty)$.
\end{enumerate} \end{lem}
\begin{proof} We first show (i). Without loss of generality, we may assume $u_1 < u_2$. For $s \in [u_1,u_2]$,
we have $| x(s) - x(u_2) | \leq | s - u_2 |$ and hence
$$ x(s)
\leq x(u_2) + | s - u_2 |
\leq \alpha_x(u_2) + | u_1 - u_2 | . $$ For $s \in [u_2,\infty)$, we have
$$ x(s) \leq \alpha_x(u_2) \leq \alpha_x(u_2) + | u_1 - u_2 | . $$ Hence,
$$ \alpha_x(u_1)
\leq \alpha_x(u_2) + | u_1 - u_2 | $$
holds. Since $ \alpha_x(u_2) \leq \alpha_x(u_1) $,
we obtain
$$ | \alpha_x(u_1) - \alpha_x(u_2) | \leq | u_1 - u_2 | .$$ We next show (ii). For each $\varepsilon > 0$,
there exists $s \in \{ -1 \} \cup [ u, \infty)$ satisfying
$ x(s) > \alpha_x(u) - \varepsilon $. We have
$$ \alpha_x(u) - \alpha_y(u)
\leq x(s) + \varepsilon - y(s)
\leq \| x - y \| + \varepsilon .$$ Since $\varepsilon$ is arbitrary,
we have
$ \alpha_x(u) - \alpha_y(u) \leq \| x - y \| $. Similarly we obtain
$ \alpha_y(u) - \alpha_x(u) \leq \| x - y \| $
and hence
$ | \alpha_x(u) - \alpha_y(u) | \leq \| x - y \| $. \end{proof}
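\begin{rem}
To illustrate Lemma \ref{LEM:int:alpha}, consider the constant-type elements of $C$: if $x(-1) = c$ and $x(u) = d$ for all $u \in [0,\infty)$ with $c, d \in [0,1]$, then $\alpha_x(u) = \max\{ c, d \}$ for every $u \in [0,\infty)$, so $\alpha_x$ is constant and (i) holds trivially. In particular, $\alpha_0 \equiv 0$ for the zero function $0 \in C$.
\end{rem}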
\begin{lem} \label{LEM:int:3} Fix $x \in C$, $t \in [0,1]$, and $u_1, u_2$ with $0 \leq u_1 \leq u_2 \leq t$. Then the following hold:
\begin{enumerate}
\item
$1 - \alpha_x(1-t+u_1) < \big( T(t)x \big)(u_2) - u_2 + u_1$
implies
$\big( T(t)x \big)(u_1) = x(0)-t+u_1$ and
$\big( T(t)x \big)(u_2) = x(0)-t+u_2$;
\item
$1 - \alpha_x(1-t+u_1) > \big( T(t)x \big)(u_2) + u_2 - u_1$
implies
$\big( T(t)x \big)(u_1) = x(0)+t-u_1$ and
$\big( T(t)x \big)(u_2) = x(0)+t-u_2$;
\item
$\left| 1 - \alpha_x(1-t+u_1) - \big( T(t)x \big)(u_2) \right|
\leq u_2 - u_1$
implies
$\big( T(t)x \big)(u_1) = 1-\alpha_x(1-t+u_1)$.
\end{enumerate} \end{lem}
\begin{rem} One and only one of the assumptions of (i), (ii) and (iii) holds. \end{rem}
\begin{proof} We first prove (i). We assume that
$ 1 - \alpha_x(1-t+u_2) > x(0) - t + u_2$. Then by the definition of $T(t)$,
$$ \big( T(t)x \big)(u_2)
= \min\{ x(0)+t-u_2, 1-\alpha_x(1-t+u_2) \} .$$ So, we have
\begin{align*}
\big( T(t)x \big)(u_2) - u_2 + u_1
&\leq 1-\alpha_x(1-t+u_2) - u_2 + u_1 \\*
&\leq 1-\alpha_x(1-t+u_1)
\end{align*}
by Lemma \ref{LEM:int:alpha}. This is a contradiction. Therefore we obtain
$ 1 - \alpha_x(1-t+u_2) \leq x(0) - t + u_2$. Hence $\big( T(t)x \big)(u_2) = x(0)-t+u_2$. Since
$$ 1 - \alpha_x(1-t+u_1) < \big( T(t)x \big)(u_2) - u_2 + u_1
= x(0) - t + u_1 ,$$
we have $\big( T(t)x \big)(u_1) = x(0)-t+u_1$. Similarly, we can prove (ii). We finally prove (iii). We assume that
$ 1 - \alpha_x(1-t+u_1) < x(0) - t + u_1$. Then by Lemma \ref{LEM:int:alpha}, we have
\begin{align*}
1 - \alpha_x(1-t+u_2)
&\leq 1 - \alpha_x(1-t+u_1) + u_2 - u_1 \\*
&< x(0) - t + u_1 + u_2 - u_1
= x(0) - t + u_2 .
\end{align*} Hence $\big( T(t)x \big)(u_2) = x(0) - t + u_2$. So,
\begin{align*}
\big( T(t)x \big)(u_2) - \big( 1 - \alpha_x(1 - t + u_1) \big)
&> \big( x(0) - t + u_2 \big) - \big( x(0) - t + u_1 \big) \\*
&= u_2 - u_1 .
\end{align*} This is a contradiction. Therefore we obtain
$ 1 - \alpha_x(1-t+u_1) \geq x(0) - t + u_1$. Similarly we can prove
$ 1 - \alpha_x(1-t+u_1) \leq x(0) + t - u_1$. Hence $\big( T(t)x \big)(u_1) = 1-\alpha_x(1-t+u_1)$. \end{proof}
\begin{proof}[Proof of Example \ref{EX:int}] It is clear that $C$ is closed and convex. We first prove that
$T(t) x \in C$ for all $t \in [0,1]$ and $x \in C$. It is clear that
$$ 0 \leq \big( T(t) x \big)(-1) = x(-1) \leq 1 $$
and
$$ 0 \leq \big( T(t) x \big)(u) = x(u-t) \leq 1 $$
for $u \in [t,\infty)$. For $u \in [0,t]$, since
$ 0 \leq 1 - \alpha_x(1-t+u) \leq 1 $,
$ x(0)-t+u \leq x(0) \leq 1$ and
$ x(0)+t-u \geq x(0) \geq 0$,
we have
$ 0 \leq \big( T(t) x \big)(u) \leq 1 $. Fix $u_1, u_2 \in [0,\infty)$ with $u_1 < u_2$. In the case of $t \leq u_1$,
we have \begin{align*}
\left| \big( T(t)x \big)(u_1) - \big( T(t)x \big)(u_2) \right|
&= | x(u_1 - t) - x(u_2 - t) | \\*
&\leq | (u_1 - t) - (u_2 - t) |
= | u_1 - u_2 | .
\end{align*} In the case of $u_2 \leq t$, by Lemma \ref{LEM:int:3},
it is easily proved that
$$ \left| \big( T(t)x \big)(u_1) - \big( T(t)x \big)(u_2) \right|
\leq | u_1 - u_2 | .$$ In the case of $u_1 \leq t \leq u_2$,
we have \begin{align*}
& \left| \big( T(t)x \big)(u_1) - \big( T(t)x \big)(u_2) \right| \\*
&\leq \left| \big( T(t)x \big)(u_1) - \big( T(t)x \big)(t) \right|
+ \left| \big( T(t)x \big)(t) - \big( T(t)x \big)(u_2) \right| \\*
&\leq | u_1 - t | + | t - u_2 |
= | u_1 - u_2 | .
\end{align*} Therefore we have shown $T(t)x \in C$ for $t \in [0,1]$ and $x \in C$. By the definition of $\{ T(t) : t \geq 0 \}$,
we have
$T(t) x \in C$ for all $t \in [0,\infty)$ and $x \in C$. We next show that
$\{ T(t) : t \geq 0 \}$ is a one-parameter nonexpansive semigroup on $C$.
\ref{ENUM:sg:nonex}: Fix $t \in [0,1]$, and $x, y \in C$. We shall prove
\begin{equation}
\label{EQU:int:nonex}
\Big| \big( T(t) x \big) (u) - \big( T(t) y \big) (u) \Big| \leq
\| x - y \|
\end{equation}
for all $u \in \Omega$. We have
$$ \big| \big( T(t) x \big) (-1) - \big( T(t) y \big)(-1) \big|
= \big| x(-1) - y(-1) \big|
\leq \| x - y \| .$$ For $u \geq t$, we have
$$ \big| \big( T(t)x \big)(u) - \big( T(t)y \big)(u) \big|
= \big| x(u-t) - y(u-t) \big|
\leq \| x - y \| .$$ Fix $u$ with $0 \leq u \leq t$. In the case of
$ 1 - \alpha_x(1-t+u) \leq x(0)-t+u $ and
$ 1 - \alpha_y(1-t+u) \leq y(0)-t+u $,
we have
\begin{align*}
\left| \big( T(t) x \big) (u) - \big( T(t) y \big) (u) \right|
&= \left| \big( x(0) - t + u \big) - \big( y(0) - t + u \big) \right| \\*
&= \left| x(0) - y(0) \right|
\leq \| x - y \| .
\end{align*} In the case of
$ 1 - \alpha_x(1-t+u) \leq x(0)-t+u $ and
$ 1 - \alpha_y(1-t+u) > y(0)-t+u $,
we have
$$ \big( T(t) y \big)(u) = \min\big\{ 1 - \alpha_y(1-t+u), y(0)+t-u \big\}
\geq y(0)-t+u . $$ Hence,
\begin{align*}
\big( T(t) x \big) (u) - \big( T(t) y \big) (u)
&\leq \big( x(0) - t + u \big) - \big( y(0) - t + u \big) \\*
&= x(0) - y(0)
\leq \| x - y \|
\end{align*}
and
\begin{align*}
\big( T(t) y \big) (u) - \big( T(t) x \big) (u)
&\leq \big( 1 - \alpha_y(1-t+u) \big) - \big( 1 - \alpha_x(1-t+u) \big) \\*
&= \alpha_x(1-t+u) - \alpha_y(1-t+u)
\leq \| x - y \|
\end{align*}
hold. Therefore \eqref{EQU:int:nonex} holds. Similarly we can prove \eqref{EQU:int:nonex} in the other cases. On the other hand,
we have
\begin{align*}
\| T(t)x - T(t)y \|
&\geq \sup\left\{ \left| \big( T(t)x \big)(u) - \big( T(t)y \big)(u) \right|
: u \in \{-1\} \cup [t,\infty) \right\} \\*
&= \sup\{ | x(u) - y(u) | : u \in \Omega \}
= \| x - y \| . \end{align*} Hence we have shown
\begin{equation}
\label{EQU:int:isometric}
\left\| T(t) x - T(t)y \right\| = \| x - y \|
\end{equation}
for $t \in [0,1]$ and $x,y \in C$. So, by the definition of $\{ T(t) : t \geq 0 \}$,
\eqref{EQU:int:isometric} holds for all $t \in [0,\infty)$
and $x,y \in C$.
\ref{ENUM:sg:T0}: It is clear that
$T(0)$ is the identity mapping on $C$.
\ref{ENUM:sg:s+t}: Fix $t_1, t_2 \in [0,1/2]$ and $x \in C$. We shall prove that
\begin{equation}
\label{EQU:int:s+t}
\big( T(t_1) \circ T(t_2) x \big) (u) = \big( T(t_1 + t_2) x \big)(u)
\end{equation}
for all $u \in \Omega$. We have
$$ \big( T(t_1) \circ T(t_2) x \big) (-1)
= \big( T(t_2) x \big) (-1)
= x(-1)
= \big( T(t_1 + t_2) x \big)(-1) . $$ For $u \geq t_2$, we have
\begin{align*}
\big( T(t_1+t_2)x \big)(t_1 + u)
&= x \big( (t_1+u) - (t_1+t_2) \big)
= x(u - t_2) \\*
&= \big( T(t_2)x \big)(u) .
\end{align*} For $u \in [0,t_2]$,
since
$t_1 + u \leq t_1 + t_2$,
$1 - \alpha_x(1-t_2+u) = 1 - \alpha_x \big( 1 - (t_1+t_2) + (t_1+u) \big)$,
$x(0) - t_2 + u = x(0) - (t_1+t_2) + (t_1+u)$ and
$x(0) + t_2 - u = x(0) + (t_1+t_2) - (t_1+u)$,
the two definitions of $\big( T(t_1+t_2)x \big)(t_1+u)$ and
$\big( T(t_2)x \big)(u)$ coincide. Therefore
$$ \big( T(t_1+t_2)x \big)(t_1+u) = \big( T(t_2)x \big)(u) . $$ So, for $u \geq t_1$,
\begin{align*}
\big( T(t_1) \circ T(t_2) x \big)(u)
&= \big( T(t_2)x \big)(u-t_1)
= \big( T(t_1+t_2) x \big) \big(t_1 + (u-t_1) \big) \\*
&= \big( T(t_1+t_2)x \big) (u) .
\end{align*} Fix $u$ with $0 \leq u \leq t_1$. Then we have
\begin{align*}
& 1 - \alpha_{T(t_2)x}(1-t_1+u) \\*
&= 1 - \sup\left\{ \big( T(t_2)x \big)(s)
: s \in \{ -1 \} \cup [1-t_1+u,\infty)\right\} \\
&= 1 - \max \Big\{ x(-1), \sup\left\{ x(s-t_2)
: s \in [1-t_1+u,\infty)\right\} \Big\} \\*
&= 1 - \alpha_x(1-t_1-t_2+u) .
\end{align*} In the case of
$1 - \alpha_{T(t_2)x}(1-t_1+u) < \big( T(t_2)x \big)(0) - t_1 + u$,
we have
$$ \big( T(t_1) \circ T(t_2)x \big)(u)
= \big( T(t_2)x \big)(0) - t_1 + u . $$ Since
\begin{align*}
1 - \alpha_x(1-t_1-t_2+u)
&= 1 - \alpha_{T(t_2)x}(1-t_1+u) \\*
&< \big( T(t_2)x \big)(0) - t_1 + u \\
&= \big( T(t_1+t_2)x \big)(t_1) - t_1 + u,
\end{align*}
we have
$$ \big( T(t_1+t_2)x \big)(u) = x(0) - t_1 - t_2 + u $$
and
$$ \big( T(t_1+t_2)x \big)(t_1) = x(0) - t_1 - t_2 + t_1
= x(0) - t_2 $$
by Lemma \ref{LEM:int:3}. So,
\begin{align*}
\big( T(t_1) \circ T(t_2)x \big)(u)
&= \big( T(t_2)x \big)(0) - t_1 + u \\*
&= \big( T(t_1+t_2)x \big)(t_1) - t_1 + u \\
&= x(0) - t_2 - t_1 + u \\*
&= \big( T(t_1+t_2)x \big)(u) .
\end{align*} Similarly, we can prove
$ \big( T(t_1) \circ T(t_2)x \big)(u) = \big( T(t_1+t_2)x \big)(u) $
in the cases of
$1 - \alpha_{T(t_2)x}(1-t_1+u) > \big( T(t_2)x \big)(0) + t_1 - u$ and
$ \big| 1 - \alpha_{T(t_2)x}(1-t_1+u) - \big( T(t_2)x \big)(0) \big|
\leq t_1 - u$. Therefore $T(t_1) \circ T(t_2) = T(t_1 + t_2)$. So, we have for $t \in [1/2,1)$,
$$ T(t) = T(1/2) \circ T(t-1/2) \quad\text{and}\quad
T(1) = T(1/2) \circ T(1/2) \circ T(0) .$$ Fix $t_1, t_2 \in [0,\infty)$. Then there exist $m_1, m_2 \in \mathbb N \cup \{ 0 \}$ and
$t_1', t_2' \in [0,1/2)$ satisfying
$t_1 = m_1 / 2 + t_1'$ and $t_2 = m_2 / 2 + t_2'$. We have
\begin{align*}
T(t_1) \circ T(t_2)
&= T(1/2)^{m_1} \circ T(t_1') \circ T(1/2)^{m_2} \circ T(t_2') \\*
&= T(1/2)^{m_1+m_2} \circ T(t_1') \circ T(t_2') \\
&= T(1/2)^{m_1+m_2} \circ T(\min\{ t_1'+t_2', 1/2 \}) \circ T(\max\{ 0, t_1'+t_2'-1/2\}) \\*
&= T(t_1 + t_2) . \end{align*}
\ref{ENUM:sg:conti}: For $x \in C$ and $t \in [0,\infty)$,
we have
\begin{align*}
\| T(t)x - x \|
&= \sup \left\{ \left| \big( T(t)x \big)(u) - x(u) \right|
: u \in [0,\infty) \right\} \\*
&= \sup \left\{ \left| \big( T(t)x \big)(u) - \big( T(t)x \big)(t+u) \right|
: u \in [0,\infty) \right\} \\*
&\leq \sup \left\{ \left| u - (t+u) \right|
: u \in [0,\infty) \right\}
= t .
\end{align*} Therefore
we obtain
$$ \| T(t_1) x - T(t_2) x \|
= \| T(|t_1-t_2|) x - x \|
\leq | t_1 - t_2 | $$
for $x \in C$ and $t_1, t_2 \in [0,1]$. Therefore $T(\cdot)x$ is continuous for all $x \in C$.
Let us prove \begin{equation} \label{EQU:int:F}
\bigcap_{t \geq 0} F \big( T(t) \big) =
\big\{ v_s : s \in [0, 1/2] \big\} \cup \big\{ w_s : s \in [0, 1/2] \big\} , \end{equation}
where
$$ v_s(u) =
\begin{cases}
1-s, & \text{if $u = -1$}, \\
s, & \text{if $u \in [0,\infty)$}
\end{cases} $$
and
$$ w_s(u) =
\begin{cases}
s, & \text{if $u = -1$}, \\
1/2, & \text{if $u \in [0,\infty)$}.
\end{cases} $$ Fix $s \in [0,1/2]$ and $t \in [0,1]$. Then we have
$$ \big| 1 - \alpha_{v_s} (1-t+u) - v_s(0) \big|
= \big| 1 - (1-s) - s \big| = 0 \leq t - u $$
and
$$ \big| 1 - \alpha_{w_s} (1-t+u) - w_s(0) \big|
= \big| 1 - 1/2 - 1/2 \big| = 0 \leq t - u $$
for $u \in [0,t]$. So
$$ \big( T(t) v_s \big) (u) = 1 - \alpha_{v_s}(1-t+u) = s = v_s(u) $$
and
$$ \big( T(t) w_s \big) (u) = 1 - \alpha_{w_s}(1-t+u) = 1/2 = w_s(u) .$$ Hence
$ T(t) v_s = v_s $ and $ T(t) w_s = w_s$. Therefore
$ v_s $ and $w_s$ are common fixed points of $\{ T(t) : t \geq 0 \}$. Conversely, we assume that $x \in C$ is a common fixed point
of $\{ T(t) : t \geq 0 \}$. Put $s = x(0)$. Then we have
$$ x(t+u) = \big( T(t) x \big) (t+u) = x(t+u-t) = x(u) $$
for all $u \in [0,\infty)$ and $t \in [0,1]$. So, $ x(u) = x(0) = s $ holds
for all $u \in [0,\infty)$. We also have
\begin{align*}
s &= x(0) = \big( T(1) x \big) (0) = 1 - \alpha_x(1-1+0) = 1 - \alpha_x(0) \\*
&= \min\{ 1 - x(-1), 1 - s \}
\end{align*} Hence $x(-1) \leq 1 - s$ and $s \leq 1/2$. If $s = 1/2$, then $x = w_{x(-1)}$. If $s < 1/2$, then $x(-1) = 1-s$ and hence $x = v_s$. Therefore we have shown \eqref{EQU:int:F}.
Define a function $f$ from $\mathbb R$ into $[0,1]$ by
$$ f(u) =
\begin{cases}
0, & \text{if $u \geq 0$}, \\
- u, & \text{if $-1 \leq u \leq 0$}, \\
u + 2, & \text{if $-2 \leq u \leq -1$}, \\
0, & \text{if $u \leq -2$}.
\end{cases} $$ We finally show \begin{equation} \label{EQU:int:T(t)0}
\big( T(t) 0 \big)(u) =
\begin{cases}
0, & \text{if $u = -1$}, \\
f(u-t), & \text{if $u \in [0,\infty)$}.
\end{cases} \end{equation} Fix $t \in [0,1]$ and $u \in [0,t]$. Then we have
$$ 1 - \alpha_0(1-t+u) = 1 \geq 0 + t - u $$
and hence
$$\big( T(t) 0 \big) (u) = 0 + t - u = t-u = f(u-t) $$
because of $-1 \leq u - t \leq 0$. Therefore
$$
\big( T(1) 0 \big)(s) =
\begin{cases}
0, & \text{if $s = -1$ or $s \geq 1$}, \\
1-s, & \text{if $0 \leq s \leq 1$}.
\end{cases} $$ Since
\begin{align*}
1 - \alpha_{T(1)0} (1-t+u)
&= 1 - (1-(1-t+u))
= 1-t+u \\
&= \big( T(1) 0 \big) (0) - t + u,
\end{align*}
we have
\begin{align*}
\big( T(t+1) 0 \big) (u)
&= \big( T(t) \circ T(1) 0 \big) (u)
= \big( T(1) 0 \big) (0) - t + u \\
&= 1 - t + u = f \big( u - (1+t) \big) .
\end{align*} Therefore
$$
\big( T(2) 0 \big)(s) =
\begin{cases}
0, & \text{if $s = -1$ or $s \geq 2$}, \\
2-s, & \text{if $1 \leq s \leq 2$}, \\
s, & \text{if $0 \leq s \leq 1$}.
\end{cases} $$ Since
$$
\Big| 1 - \alpha_{T(2)0} (1-t+u) - \big( T(2)0 \big) (0) \Big|
= | 1 - 1 - 0 |
= 0
\leq t - u,
$$
we have
\begin{align*}
\big( T(t+2) 0 \big) (u)
&= \big( T(t) \circ T(2) 0 \big) (u)
= 1 - \alpha_{T(2)0} (1 - t + u) \\
&= 0 = f \big( u - (2+t) \big) .
\end{align*} Similarly,
for $k \in \mathbb N$ with $k > 2$,
we can prove
$$ \big( T(t+k) 0 \big)(u) = 0 = f \big( u - (k+t) \big) .$$ Therefore we have shown \eqref{EQU:int:T(t)0}. So, \eqref{EQU:int:int} clearly holds. This completes the proof. \end{proof}
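\begin{rem}
For completeness, we record the estimate behind the last step of the proof. By \eqref{EQU:int:T(t)0}, for $u \in [0,\infty)$ and $t > 0$ we have
$$ \frac{1}{t} \int_0^t \big( T(s)0 \big)(u) \; ds
= \frac{1}{t} \int_0^t f(u-s) \; ds
= \frac{1}{t} \int_{u-t}^{u} f(v) \; dv
\leq \frac{1}{t} \int_{-2}^{0} f(v) \; dv
= \frac{1}{t} , $$
since $f$ is nonnegative, vanishes outside $[-2,0]$, and $\int_{-2}^{0} f(v) \; dv = 1$. As $\big( T(s)0 \big)(-1) = 0$ for all $s \geq 0$, this yields
$$ \left\| \frac{1}{t} \int_0^t T(s)0 \; ds \right\| \leq \frac{1}{t}
\longrightarrow 0 \quad (t \rightarrow \infty) , $$
which is \eqref{EQU:int:int}.
\end{rem}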
\end{document}
\begin{document}
\title{\Large{A Characterization of Groups through Isomorphism Classes of Transversals}}
\author{Vivek Kumar Jain}
\address{Department of Mathematics\\ Central University of South Bihar, Gaya, Bihar, India }
\email{jaijinenedra@gmail.com}
\author{Raja Rawat*} \address{Department of Mathematics\\ Central University of South Bihar, Gaya, Bihar, India} \email{20rawraj@gmail.com}
\subjclass{20D60, 20N05.}
\keywords{Right transversals; Right quasigroup.}
\begin{abstract}
Let $G$ be a group and $H$ a subgroup of $G$ of finite index. In this article, it is proved that if the number of isomorphism classes of right transversals of $H$ in $G$ is $5$, then the index of $H$ in $G$ is $6$ and the permutation representation of $G$ on right cosets of $H$ in $G$ is isomorphic to the alternating group on four symbols.
\end{abstract}
\maketitle
\section{Introduction} Let $H$ be a subgroup of a finite group $G$. A set $S$ obtained by choosing one and only one element from each right coset of $H$ in $G$ such that $1$, the identity element of $G$, belongs to $S$, is called a right transversal of $H$ in $G$. In \cite{vip,skm} the authors call it a normalized right transversal (NRT). The set of all right transversals of $H$ in $G$ is denoted by ${\mathcal T}(G,H)$. Each $S \in {\mathcal T}(G,H)$ is a right quasigroup with identity with respect to the induced binary operation $\circ$ defined by $\{x \circ y \}:=Hxy \cap S$. Following \cite[Theorem 3.4, p. 76]{rltr}, each right quasigroup with identity (two-sided) can be embedded as a right transversal in a group with some universal property.
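For illustration, take $G=$ Sym$(3)$ and $H=\langle (1\,2) \rangle$. The right cosets of $H$ in $G$ are $H$, $\{(1\,3), (1\,2\,3)\}$ and $\{(2\,3), (1\,3\,2)\}$, so ${\mathcal T}(G,H)$ consists of the four sets obtained by adjoining to $1$ one element from each of the two nontrivial cosets; in general, $|{\mathcal T}(G,H)|=|H|^{[G:H]-1}$ for a finite group $G$. Among these four transversals, $S=\{1,(1\,2\,3),(1\,3\,2)\}$ is itself a subgroup of $G$, and the induced operation $\circ$ on $S$ coincides with the group operation.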
Two right transversals $S, T \in {\mathcal T}(G,H)$ are called isomorphic (denoted by $S \cong T$) if their induced right quasigroup structures are isomorphic. Let ${\mathcal I}(G,H)$ be the set of isomorphism classes of right transversals of $H$ in $G$. By \cite{pss}, $|{\mathcal I}(G,H)|=1$ if and only if $H \trianglelefteq G$. By \cite{viv,vip}, $|{\mathcal I}(G,H)|\neq 2,4$. In \cite{ict2}, it is proved that if $|{\mathcal I}(G,H)|=3,$ then $H \not \trianglelefteq G$ and $[G:H]=3$. The converse of this result is also true. Recently, Surendra and Shukla (see \cite{skm}) proved that $|{\mathcal I}(G,H)|>16$ for a finite nilpotent group $G$ not isomorphic to the dihedral group of order $8$ and a non-normal subgroup $H$ of $G$. These results show the effect of the isomorphism classes of transversals on the embedding of the subgroup. In this article we prove the following results. \begin{thm}\label{main}
Let $H$ be a finite index subgroup of a group $G$ such that $|{\mathcal I}(G, H)|=5$. Then $[G: H]=6$. \end{thm} \begin{thm} \label{main2}
Suppose $G$ is a transitive subgroup of a symmetric group on six symbols, and $H$ is one point stabilizer of $G$. If $|{\mathcal I}(G, H)|=5$, then $G$ is isomorphic to the alternating group on four symbols. \end{thm} By \cite[Lemma 2.7, p. 348]{vip}, the converse of the above theorem is also true. \begin{thm} \label{main3}
Let $H$ be a finite index subgroup of a group $G$ such that $|{\mathcal I}(G, H)|=5$. Then the permutation representation of $G$ on the right cosets of $H$ in $G$ is isomorphic to the alternating group on four symbols. \end{thm}
Throughout the paper, Sym$(n)$ denotes the symmetric group on the set $\{1,2,\ldots , n\}$, and we adopt the convention that $(rs)(x)=s(r(x))$ for $r,s \in$ Sym$(n)$; moreover, Sym$(n-1)$ denotes the subgroup of Sym$(n)$ fixing the symbol $1$. We write Alt$(n)$ for the alternating group on the set $\{1,2,\ldots, n\}$, $D_n$ for the dihedral group of order $2n$ and $Z_{n}$ for the cyclic group of order $n$. Also, $N_G(H)$ and $C_G(H)$ are used for the normalizer and the centralizer of $H$ in $G$, respectively. By $Aut_{H}G$, we mean the subgroup of $Aut(G)$ consisting of those automorphisms of $G$ which take $H$ onto $H$.
To prove Theorem \ref{main}, we first reduce the problem to the finite group case and formulate a minimal counterexample in Section \ref{s3}. In Section \ref{s4}, it is proved that such a minimal counterexample must be a non-abelian finite simple group. Then, using the fact that the order of a finite simple group is larger than the order of its outer automorphism group, Theorem \ref{main} is proved in Section \ref{s6}. The rest of the theorems are proved in Section \ref{s5}. We conclude this article by proposing some problems in Section \ref{s7}.
\section{Preliminaries}
Let $H$ be a subgroup of a finite group $G$ and $S\in {\mathcal T}(G, H)$. Let $\langle S \rangle$ denote the subgroup generated by $S$, and let $\langle S \rangle \cap H$ be denoted by $H_S$. Then $\langle S \rangle=H_SS$. Let $\chi$ denote the permutation representation of $G$ on the set $X$ of all right cosets of $H$ in $G$. That is, $\chi : G\rightarrow$ Sym$(X)$ is defined by $\chi(g)(Hx)=Hxg$. Clearly, $\text{Ker}\, \chi=\text{Core}_G(H)$, the core of $H$ in $G$.
The group $\chi (H_S)$ is denoted by $G_S$ and is called the group torsion of $S$ (see \cite[Definition 3.1, p. 75]{rltr}). If $Core_G(H) = \{1\}$ and $S$ is a generating right transversal of $H$ in $G$, then $G = H_S S \cong G_SS$ and $H = H_S = G_S.$ By \cite[p. 76]{rltr}, the group torsion is trivial if and only if $S$, with respect to the induced operation, is a group. By \cite[Corollary 3.3, p. 76]{rltr}, a subgroup $H$ of a group $G$ is normal if and only if the group torsion of every right quasigroup determined by every right transversal of $H$ in $G$ is trivial.
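To illustrate the group torsion, let $G=$ Sym$(3)$ and $H=\langle (1\,2) \rangle$, so that Core$_G(H)=\{1\}$. For the transversal $S=\{1,(1\,2\,3),(1\,3\,2)\}$ we have $\langle S \rangle =$ Alt$(3)$ and $H_S=\{1\}$, so the group torsion $G_S$ is trivial and, indeed, $S$ is a group with respect to the induced operation. On the other hand, for $S'=\{1,(1\,3),(2\,3)\}$ we have $\langle S' \rangle = G$ and $H_{S'}=H$, so $G_{S'}=\chi(H)\cong Z_2$ is nontrivial, in accordance with the fact that $S'$ is not a group under the induced operation.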
The following lemma is easy to verify and is also observed in \cite[p. 163]{skm}.
\begin{lem}\label{1}
Let $N$=Core$_{G}(H)$. Assume $N\neq \{1\}$. Then the quotient map from $G$ to $G/N$ is a surjective map from ${\mathcal T}(G, H)$ to ${\mathcal T}(G/N,H/N)$ such that the corresponding transversals are isomorphic. In particular, $|{\mathcal I}(G, H)|=|{\mathcal I}(G/N,H/N)| $. \end{lem}
\begin{rem}\label{r1} Let $H$ be a subgroup of index $n$ of a finite group $G$. Denote each right coset of $H$ in $G$ by a unique number from set $\{1,2, \ldots , n\},$ and $1$ for coset $H$. Then $\chi (G)$ is a transitive subgroup of Sym$(n)$. \end{rem}
\begin{lem} \label{123} \cite[Lemma 2.2, p. 85]{vkjarxiv} Let $G$ be a finite group and $H$ a non-normal subgroup of $G$ of index $n$. Then $T,L\in {\mathcal T}(\chi(G),\chi(H))$ are isomorphic if and only if they are conjugate as a subset of Sym$(n)$.
\end{lem}
\begin{rem}\label{1234}
Let $G$ be a finite group and $H$ a non-normal subgroup of $G$ of index $n$. Then by \cite[Lemma 2.5, p. 87]{vkjarxiv}, for each $f \in Aut_{\chi(H)}\chi(G)$, there exists $\alpha \in N_{Sym(n-1)}(\chi(G))$ such that $f=i_\alpha|_{\chi(G)}$. \end{rem}
\begin{rem}\label{2.4} Let $G$ be a group and $H$ a non-normal subgroup of $G$. Then it follows from \cite{vkjarxiv} that $|\mathcal{I}(G, H)|>5$ for the following pairs:\\ (a) $G\cong Alt(4)$, $H\cong Alt(3)$; \\ (b) $G\cong Alt(5)$, $H\cong Alt(4)$; \\ (c) $G\cong$ Sym$(4)$, $H\cong$ Sym$(3)$; \\ (d) $G\cong \text{Sym}(5)$, $H\cong$ Sym$(4)$; \\ (e) $G\cong D_{n}$ for $n=4, 5, 6$, and $H$ a non-normal subgroup of $D_n$ of order $2$. \end{rem}
\begin{lem} \label{A} Let $G$ be a group and let $N$ be an $Aut_HG$-invariant subgroup of $G$ with $\{1\} \neq H \leq N$. Let $f \in Aut_HG,~K\in {\mathcal T}(N,H) $ and $L \in {\mathcal T}(G,N)$. Then\\ (a) $f(KL)=f(K)f(L) \in {\mathcal T}(G, H)$.\\ (b) if $S \in {\mathcal T}(G, H)$ is such that $S\neq K_1L_1$ for any $K_1 \in {\mathcal T}(N,H) $ and $L_1 \in {\mathcal T}(G,N)$, then there does not exist $f_1 \in Aut_HG$ such that $f_1(S)=KL$. \end{lem} \begin{proof} Since $KL \in {\mathcal T}(G, H)$, for any $f \in Aut_HG$ we have $f(KL)=f(K)f(L) \in {\mathcal T}(G, H)$; moreover, since $N$ is $Aut_HG$-invariant, $f(K) \in {\mathcal T}(N,H)$ and $f(L) \in {\mathcal T}(G,N)$. This proves $(a)$, and $(b)$ follows from $(a)$. \end{proof}
\section{Formulation of Minimal Counterexample} \label{s3}
Suppose $H$ is a finite index subgroup of a group $G$ such that $|{\mathcal I}(G, H) |=5$. Then by Lemma \ref{1}, $|{\mathcal I}(G, H) |=|{\mathcal I}(G/N,H/N)|=5$, where $N=\text{Core}_{G}(H)$. Clearly, by Remark \ref{r1}, $G/N$ is a transitive subgroup of Sym$([G: H])$. Thus, whenever there is a pair $(G, H)$ with $|{\mathcal I}(G, H) |=5$ and $[G: H]<\infty $, there is a finite group and a subgroup of index $[G: H]$ with the same number of isomorphism classes of transversals. Hence, to prove Theorem \ref{main}, it is sufficient to show that $[G: H]\not \leq 5$ and $[G: H] \not > 6$ (that is, $[G: H]=6$) for any finite group $G$ and subgroup $H$ such that $|{\mathcal I}(G, H) |=5$.
\begin{prop} \label{3}
Suppose $G$ is a finite group and $H$ is its subgroup such that $|{\mathcal I}(G, H)|=5$. Then $[G: H]\not \leq 5$. \end{prop}
\begin{proof}
Suppose $G$ is a finite group and $H$ is its subgroup such that $|{\mathcal I}(G, H)|$ $=5$. By Lemma \ref{1}, we may assume that $Core_{G}(H)=\{1\}$. If $[G: H]=1$ or $2$, then $H\trianglelefteq G$. If $[G: H]=3$, then $|{\mathcal I}(G, H)|=1$ or $3$ (by \cite{ict2}). Suppose $[G: H]=4.$ Since $Core_{G}(H)=\{1\}$, we can identify $G$ with a subgroup of $\text{Sym}(4).$ Then by the subgroup structure of Sym$(4)$, the possible choices for $(|G|,|H|)$ are $(8,2),(12,3)$ and $(24,6)$. More precisely, the choices for $(G, H)$ are $(D_{4},Z_{2}),(Alt(4),Alt(3))$ and $(\mbox{Sym}(4),\text{Sym}(3))$, where $Z_{2}$ is a non-normal subgroup of $D_4$ of order $2$. By Remark \ref{2.4}, the number of non-isomorphic right transversals is greater than $5$ in each case, a contradiction.
Suppose $[G: H]=5$. Since $Core_{G}(H)=\{1\}$, we can identify $G$ with a subgroup of $\text{Sym}(5).$ Then using \cite[Table 2.1, p. 60]{dix}, the possible choices for $(|G|,|H|)$ are $(10,2),$ $(20,4),$ $(60,12)$ and $(120,24)$. For $(|G|,|H|)=(120,24)$, $(10,2)$ and $(60,12)$, the corresponding pairs $(G, H)$ are (Sym$(5)$, Sym$(4))$, $(D_{5},Z_{2})$ and $(Alt(5),Alt(4))$ respectively, where $Z_{2}$ is a non-normal subgroup of $D_5$ of order $2$. By Remark \ref{2.4}, we get $|\mathcal{I}(G, H)| \geq 6$ in each case, a contradiction.
If $(|G|,|H|)=(20,4),$ then using \cite[Table 2.1, p. 60]{dix}, $G= \langle (1,2,3,4,5),$ $(2,3,5,4) \rangle.$ Take $H=\text{Stab}_G(1)=\langle(2,3,5,4),(2,5)(3,4) \rangle$. Clearly, by Lemma \ref{123}, the following are pairwise non-isomorphic transversals of $H$ in $G$.
\noindent $S_{1}=\{(),(1,2)(3,5),(1,3)(4,5),(1,4)(2,3),(1,5)(2,4)\}$,\\ $S_{2}=\{(),(1,2,5,4),(1,3,4,2),(1,4,3,5),(1,5,2,3)\}$,\\ $S_{3}=\{(),(1,2,3,4,5),(1,3,5,2,4),(1,4,2,5,3),(1,5,4,3,2)\}$,\\ $S_{4}=\{(),(1,2)(3,5),(1,3)(4,5),(1,4,3,5),(1,5,2,3)\}$,\\ $S_{5}=\{(),(1,2)(3,5),(1,3)(4,5),(1,4,2,5,3),(1,5,4,3,2)\}$,\\ $S_{6}=\{(),(1,2)(3,5),(1,3,4,2),(1,4,2,5,3),(1,5)(2,4)\}$.
So $|\mathcal{I}(G,H)| \geq 6$, which is a contradiction. Hence $[G: H]\not \leq 5$. \end{proof}
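As a computational sanity check (not part of the proof, and using our own cycle encoding), one can verify directly that each set $S_i$ listed above is a right transversal of $H=\text{Stab}_G(1)$ in $G$: since $H$ stabilizes the point $1$, a set of five permutations represents the five right cosets exactly when the images of $1$ are $1,\dots,5$ in some order.

```python
def perm(*cycles, n=5):
    # mapping on {1,...,n} built from disjoint cycles
    m = {i: i for i in range(1, n + 1)}
    for c in cycles:
        for u, v in zip(c, c[1:] + c[:1]):
            m[u] = v
    return m

# the six sets S_1,...,S_6 from the proof of Proposition 3.2
transversals = [
    [perm(), perm((1,2),(3,5)), perm((1,3),(4,5)), perm((1,4),(2,3)), perm((1,5),(2,4))],
    [perm(), perm((1,2,5,4)), perm((1,3,4,2)), perm((1,4,3,5)), perm((1,5,2,3))],
    [perm(), perm((1,2,3,4,5)), perm((1,3,5,2,4)), perm((1,4,2,5,3)), perm((1,5,4,3,2))],
    [perm(), perm((1,2),(3,5)), perm((1,3),(4,5)), perm((1,4,3,5)), perm((1,5,2,3))],
    [perm(), perm((1,2),(3,5)), perm((1,3),(4,5)), perm((1,4,2,5,3)), perm((1,5,4,3,2))],
    [perm(), perm((1,2),(3,5)), perm((1,3,4,2)), perm((1,4,2,5,3)), perm((1,5),(2,4))],
]

# H = Stab_G(1), so five permutations form a right transversal of H in G
# exactly when their images of the point 1 are 1,2,3,4,5 in some order
for s in transversals:
    assert sorted(p[1] for p in s) == [1, 2, 3, 4, 5]
print("all six sets are right transversals of Stab_G(1)")
```

Pairwise non-isomorphism itself requires the conjugacy test of Lemma \ref{123}; the sketch above only confirms the transversal property.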
Now, to prove Theorem \ref{main}, it is sufficient to show that $[G: H] \not > 6$ for any finite group $G$ and subgroup $H$ such that $|{\mathcal I}(G, H) |=5$. Following \cite{viv, ict2, vip, pss}, we argue by contradiction. Suppose the result is false. Then we can find a pair $(G, H)$ with $G$ finite such that \\
(i) $|{\mathcal I}(G, H)|=5$,\\
(ii) $|G|$ is minimal subject to $[G: H]>6$. \\ Among such pairs, we further assume that $[G: H]$ is minimal. \\ We call such a pair a minimal counterexample.
\section{Properties of Minimal Counterexample}\label{s4}
In this section we study the properties of a minimal counterexample.
\begin{prop}\label{5} Let $(G, H)$ be a minimal counterexample. Then \\ (i) $Core_{G}(H)=\{1\}$.\\ (ii) If $T\in {\mathcal T}(G, H)$ such that $\langle T \rangle \neq G$, then $H_T=\{1\}$.\\ (iii) There exists $S \in {\mathcal T}(G, H)$ such that $\langle S \rangle =G$. \end{prop}
\begin{proof} (i) If $N=\text{Core}_{G}(H)\neq \{1\}$, then $|G/N|< |G|$, and by Lemma \ref{1}, $|{\mathcal I}(G, H)|=|{\mathcal I}(G/N,H/N)|$. This contradicts the minimality of $|G|$. So, $Core_{G}(H)=\{1\}$.
\noindent (ii) Suppose $T\in {\mathcal T}(G, H)$ such that $\langle T \rangle \neq G$. Then ${\mathcal T}(H_TT,H_T) \subseteq {\mathcal T}(G, H)$. So $|{\mathcal I}(H_TT,H_T)|\leq 5$. Since $(G, H)$ is a minimal counterexample, $|{\mathcal I}(H_{T}T,H_T)|\neq 5$. By \cite{viv, ict2, vip}, $|{\mathcal I}(H_TT,H_T)|\neq 2,3,4$ respectively. Thus $|{\mathcal I}(H_TT,H_T)|=1$. By Main Theorem of \cite{pss}, $H_T \trianglelefteq H_TT$. So $H_{T}$ $\subseteq$ $Core_{G}(H)$=$\{1\}$. Hence, $H_T=\{1\}$.
\noindent (iii) Suppose that $\langle S \rangle=H_SS \neq G$ for each $S \in {\mathcal T}(G, H)$. Then $H_S=\{1\}$ by (ii). This implies $G_S=\chi(H_S)=\{1\}$ for each $S \in {\mathcal T}(G, H)$. By \cite[Corollary 3.3, p. 76]{rltr}, $H \trianglelefteq G$, so $|{\mathcal I}(G, H)| =1$. This is a contradiction. Hence the result follows. \end{proof}
\begin{prop} \label{iso}
Let $(G,H)$ be a minimal counterexample. Then for any $S \in \mathcal{T}(G, H)$ such that $\langle S\rangle=G$, there exists an isomorphism from $G$ to $G_S S$ which maps $H$ to $G_S$ and fixes $S$ elementwise.
\end{prop}
\begin{proof} The proof follows from \cite[Proposition 2.1, p. 1718]{viv}.
\end{proof}
\begin{prop} \label{6} Let $(G, H) $ be a minimal counterexample. Let $S \in {\mathcal T}(G, H)$ such that $\langle S \rangle =G$. Then $\text{Aut}_HG$ acts transitively on the set ${\mathcal A}=\{T \in {\mathcal T}(G, H) \mid T \cong S\}$. \end{prop}
\begin{proof} The proof follows from the proof of \cite[Proposition 2.7, p. 652]{pss}. \end{proof}
\begin{prop}\label{7} Let $(G, H)$ be a minimal counterexample. Let $N$ be a proper $\text{Aut}_{H}G$-invariant subgroup of $G$ containing $H$ properly. Let $K \in {\mathcal T}(N,H)$. Then there exists a right transversal $S$ of $H$ in $G$ containing $K$ such that $S \neq KL$ for every $L \in {\mathcal T}(G,N)$. \end{prop}
\begin{proof}
Assume that each element of $\mathcal{T}(G, H)$ containing $K$ can be written as $KL$ for some $L \in \mathcal{T}(G, N)$. Suppose that $|H|=m,[N: H]=r$ and $[G: N]=t$. Then $|\{S \in \mathcal{T}(G, H) \mid K \subseteq S\}| \leq|\mathcal{T}(G, N)|$, that is, $m^{r(t-1)} \leq(m r)^{t-1}$. Taking $(t-1)$-th roots gives $m^{r} \leq mr$, that is, $m^{r-1} \leq r$. But this holds only if $r=1$ or $m=r=2$. Since $H$ is properly contained in $N$, we have $m=r=2$ and $t \geq 2$. In particular, each $S \in \mathcal{T}(G, H)$ containing $K$ can be uniquely written as $K L$, where $L \in \mathcal{T}(G, N)$.
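The arithmetic step above, that $m^{r-1} \leq r$ forces $r=1$ or $m=r=2$, is elementary (for $m\geq 2$ the left side grows geometrically in $r$); it can be confirmed by a brute-force scan over a window of small values:

```python
# find all (m, r) with m = |H| >= 2, r = [N:H] >= 2 and m**(r-1) <= r;
# for m >= 2 the left side at least doubles with each step in r,
# so a small search window already covers all candidates
sols = [(m, r) for m in range(2, 50) for r in range(2, 50)
        if m ** (r - 1) <= r]
print(sols)  # [(2, 2)]
```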
Assume that $N$ is isomorphic to the Klein four group. Let $K=\{1, x\} \in$ $\mathcal{T}(N, H)$ and $L=\left\{1, l_2, l_3, \ldots, l_t\right\} \in \mathcal{T}(G, N)$. Then $S=K L \in \mathcal{T}(G, H)$. Let $h \in H,$ $h \neq 1$, and $S^{'} = \left( S \setminus \{xl_t\}\right) \cup\left\{h x l_t\right\}.$ By assumption, there exists $L^{'} \in \mathcal{T}(G, N)$ such that $S^{'} = K L^{'}$. Since $l_t, h x l_t \in S^{'}$ represent the same right coset of $N$ in $G$, either $l_t \in L^{'}$ or $h x l_t \in L^{'}$. Since neither $x l_t$ nor $x\left(h x l_t\right)$ is in $S^{'}$, this is a contradiction. Thus $N$ is a cyclic group.
Since $N$ is a cyclic group of order $4$, each $S \in \mathcal{T}(G, H)$ contains a generator of $N$. Thus $\langle S\rangle=G$ for each $S \in \mathcal{T}(G, H)$. Next, assume that $K L_1$ and $K L_2$ are isomorphic for some $L_1, L_2 \in \mathcal{T}(G, N)$. Then by Proposition \ref{6}, there exists $f \in Aut_H G$ such that $f(K) f\left(L_1\right)=f\left(K L_1\right)=K L_2$. Since $f(K) \in \mathcal{T}(N, H)$ and $f\left(L_1\right) \in \mathcal{T}(G, N)$, by the uniqueness of the expression, $f(K)=K$ and $f\left(L_1\right)=L_2$. Thus $|\mathcal{I}(G, N)| \leq 5$. By \cite{viv, vip}, $|{\mathcal I}(G,N)|\neq 2,4$. If $|{\mathcal I}(G,N)|=3,$ then by \cite{ict2}, $[G:N]=3$, which contradicts $[G: H]>6.$ Assume $|{\mathcal I}(G,N)|= 1$. Then by \cite{pss}, $N \trianglelefteq G$. Since $H$ is a characteristic subgroup of $N$, $H\trianglelefteq G$. This is a contradiction. Suppose $|{\mathcal I}(G,N)|= 5$. Then $[G:N]\leq 6$, for $(G,N)$ is not a minimal counterexample. Then using Proposition \ref{3}, we get $[G:N]=6$. This implies $Core_{G}(N)=\{1\}$, $N$ or $H$. If $Core_{G}(N)=H$, then $H\unlhd G$, a contradiction. If $Core_{G}(N)=N$, then $N\unlhd G$, again a contradiction (for $|\mathcal{I}(G,N)|=5$). Therefore, $Core_{G}(N)=\{1\}$ and so we can identify $G$ with a subgroup of $\text{Sym}(6)$.
Then $(|G|,|H|)= (24,2)$. We know that a non-abelian group of order $24$ has either a normal subgroup of order $4$ or a normal subgroup of order $8$. Let $H_{1}$ be a normal subgroup of order $8$. Now $H_{1} \ncong Z_{2}\times Z_{2}\times Z_{2}$, for $N$ is a cyclic group of order $4$; also, $H_{1} \ncong Q_{8}$ (the quaternion group of order $8$), for $Q_{8}$ is not isomorphic to a subgroup of $\text{Sym}(n)$ for any $n\leqslant 7$. If the Sylow $3$-subgroup $K$ is unique, then $G\cong H_{1}\times K$. Since $G$ is non-abelian, $H_1 \cong D_4$. But then the cyclic subgroup $N$ of $H_1$ of order $4$ is normal in $G$, which is not possible (for $|{\mathcal I}(G,N)|= 5$). Now, suppose the Sylow $3$-subgroup is not unique, and let $P_{i}$ ($i=1,2,3,4$) be the distinct Sylow $3$-subgroups of $G$. Since $H_{1}$ is normal in $G$, $G\cong P_1 \ltimes_{\phi} H_{1}$ (the semi-direct product of $P_{1}$ and $H_{1}$), where $\phi\colon P_1\rightarrow Aut({H_{1}})$ is a non-trivial group homomorphism. For all three possible choices of $H_1$, namely $Z_8$, $Z_4\times Z_2$ and $D_4$, the prime $3$ does not divide $|Aut(H_1)|$. So in none of these cases can a non-trivial semi-direct product of $H_{1}$ and $P_1$ be constructed, a contradiction.
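The divisibility fact used above, $3 \nmid |Aut(H_1)|$ for $H_1 \in \{Z_8,\, Z_4\times Z_2,\, D_4\}$, is standard ($|Aut(Z_8)|=4$ and $|Aut(Z_4\times Z_2)|=|Aut(D_4)|=8$). The following Python sketch confirms it by brute force, with each group encoded by an explicit multiplication rule of our choosing:

```python
from itertools import permutations

def aut_count(elems, op):
    # brute-force count of automorphisms of a group of order 8,
    # given its element list and multiplication rule
    e = next(x for x in elems if all(op(x, y) == y for y in elems))  # identity
    rest = [x for x in elems if x != e]
    count = 0
    for img in permutations(rest):
        f = dict(zip(rest, img))
        f[e] = e
        if all(f[op(a, b)] == op(f[a], f[b]) for a in elems for b in elems):
            count += 1
    return count

Z8 = (list(range(8)), lambda a, b: (a + b) % 8)
Z4xZ2 = ([(a, b) for a in range(4) for b in range(2)],
         lambda x, y: ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2))
# D4 written as words r^a s^b with the relation s r s = r^{-1}
D4 = ([(a, b) for a in range(4) for b in range(2)],
      lambda x, y: ((x[0] + (-1) ** x[1] * y[0]) % 4, (x[1] + y[1]) % 2))

counts = {name: aut_count(*grp)
          for name, grp in {"Z8": Z8, "Z4xZ2": Z4xZ2, "D4": D4}.items()}
print(counts)  # {'Z8': 4, 'Z4xZ2': 8, 'D4': 8} -- none divisible by 3
```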
Now, let us assume that the Sylow $2$-subgroup is not unique. Then $G$ has a normal subgroup $N_{1}$ (say) of order $4$. Let $Q_{1}, Q_{2}, Q_{3}$ be the distinct Sylow $2$-subgroups of $G$. Since $Z_{8}$ and $D_{4}$ have a unique subgroup of order $4$, in these two cases $N_{1}=N\unlhd G$, a contradiction (for $|\mathcal{I}(G,N)|=5$). Finally, we are left with the case when the Sylow $2$-subgroup is isomorphic to $Z_{4}\times Z_{2}$.
It can be easily checked that $|Q_{1}\cap Q_{2}|=|Q_{2}\cap Q_{3}|= |Q_{3}\cap Q_{1}|=4$. For convenience, take $Q_{1} \cong Z_{4}\times Z_{2}$. The cyclic subgroups of order $4$ of $Q_1$ are $\langle(1,1)\rangle$ and $\langle(1,0)\rangle$. Since $H\leq G$ and $H$ is contained in a cyclic subgroup of order $4$, $H=\{(0,0),(2,0)\}$. Also, $Q_{i}^{2}=H$ for all $i$. So for every $g \in G$, $gQ_{1}^{2}g^{-1}=(gQ_{1}g^{-1})^{2}= Q_{i}^{2}= H$ for some $i$. Therefore, $gHg^{-1}=H$ for all $g \in G$. That is, $H\unlhd G$, a contradiction. Hence the result follows. \end{proof}
\begin{prop} \label{3.5} Let $(G, H)$ be a minimal counterexample and $N$ an $Aut_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G$ and $[N: H]=2$. Let $T=\{1,x\} \in \mathcal{T}(N,H)$ be a subgroup of $N$. Let $L=\{1,l_{2},l_{3},...,l_{t}\} \in \mathcal{T}(G,N)$ and $1\neq h \in H$. Then the following $t+1$ transversals are pairwise non-isomorphic: $S_{1}=TL$, $S_{i}=(TL\smallsetminus \{l_{2},l_{3},...,l_{i}\})\cup \{hl_{2},hl_{3},...,hl_{i}\},~ (2\leq i \leq t)$, $S_{t+1}=(TL\smallsetminus\{x\})\cup \{hx\}$. \end{prop}
\begin{proof}
Let $S_{i},S_{j} \in \mathcal{T}(G, H)$ such that $2 \leq i<j \leq t$. Clearly, $x,xl_{i} \in S_{i}$ but $l_{i} \notin S_{i}$. So $\langle S_{i} \rangle=G$. If $S_{i}\cong S_{j}$, by Proposition \ref{6}, there exists $f \in Aut_{H}G$ such that $f(S_{i})=S_{j}$. So $f(x)=x$ (for $N$ is $Aut_{H}G$-invariant). Now consider the following sets; $A=\{l_{i+1},...,l_{t},xl_{i+1},...,xl_{t}\}$ $\supseteq$ $B=\{l_{j+1},...,l_{t},xl_{j+1},...,xl_{t}\}$. Suppose the image of an element $x^{k}l_{\alpha} \in A$ ($k=0,1$) goes to an element of $\{xl_{2},xl_{3},...,xl_{j}\}$, that is, $f(x^{k}l_{\alpha})=xl_{\beta}$, where $ i+1\leq \alpha \leq t$, $\beta \in \{2,3,...,j\}$. This implies $f(x^{k+1}l_{\alpha})=l_{\beta} \notin S_{j}$, but $x^{k+1}l_{\alpha} \in S_{i}$. This is a contradiction. So there exists $l_{s}$ or $xl_{s} \in A $ such that $f(l_{s})$ or $f(xl_{s})$ $\notin B$ (for $B\subsetneq A$). That is, $f(x^{k}l_{s})=hl_{u}$ or $f(x^{k+1}l_{s})=xhl_{u}$, for some $k=0$ or $1$, $u \in \{2,3,...,j\}$. Since $x^{k+1}l_{s} \in S_{i}$ but $xhl_{u} \notin S_{j}$, we get a contradiction. Hence $S_{i} \ncong S_{j}$ for all $ 2 \leq i<j \leq t$. If $S_{t+1}\cong S_{i}$, by Proposition \ref{6}, there exists $f \in Aut_{H}G$ such that $f(S_{i})=S_{t+1}$. Clearly, $f(x)=hx$. Suppose for $k\in \{0,1\}$, $f(l_{2})=x^{k+1}l_{j} $. Then $f(hl_{2})=f(h)x^{k+1}l_{j} \notin S_{t+1}$. Hence $S_{i} \ncong S_{t+1}$ for all $i \in \{2,3,...,t\}$. \end{proof}
\begin{cor}\label{c1} Let $(G, H)$ be a minimal counterexample and $N$ an $Aut_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G$ and $[N: H]=2$. Let $T=\{1,x\} \in \mathcal{T}(N,H)$ be a subgroup of $N$. Then $[G:N]= 4.$ \end{cor}
\begin{proof} If $[G:N] \geq 5$, then by Proposition \ref{3.5}, we have at least $5+1=6$ pairwise non-isomorphic transversals, a contradiction. So $[G:N] \leq 4$. If $[G:N]\leq 3$, then $[G: H]\leq 6$, contradicting the assumption that $[G: H]\geq 7$. Hence $[G:N]=4$. \end{proof}
\begin{cor}\label{c5} Let $(G, H)$ be a minimal counterexample and $N$ an Aut$_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G$ and $[N: H]=2$. Let $T=\{1,x\} \in \mathcal{T}(N,H)$ be a subgroup of $N$. Then each element of $\mathcal{T}(N,H)$ is a subgroup of $N$. \end{cor}
\begin{proof} Suppose $T^\prime \in \mathcal{T}(N,H)$ is not a subgroup. By Corollary \ref{c1}, $[G:N]=4$. Let $L=\{1,l_{2},l_{3},l_{4}\} \in \mathcal{T}(G,N)$ and take $h \in H\smallsetminus \{1\}$. Then by Proposition \ref{3.5}, the following $5$ transversals are pairwise non-isomorphic: $S_{1}=TL$, $S_{i}=(TL\smallsetminus \{l_{2},...,l_{i}\})\cup \{hl_{2},...,hl_{i}\},~ (2\leq i \leq 4)$, $S_{5}=(TL\smallsetminus\{x\})\cup \{hx\}$. Define $S_{6}:=T^\prime L$. Since $T^\prime$ is not a subgroup, by Proposition \ref{5}, $S_{6}$ is not a subgroup. Suppose $S_{6} \cong S_i$ for some $1\leq i\leq 5$. Then by Proposition \ref{6}, there exists $f \in $ Aut$_{H}G$ such that $f(S_{6})=S_i$. Since $N$ is Aut$_{H}G$-invariant, $f(T^\prime)=T$, which is impossible, as $T$ is a subgroup and $T^\prime$ is not. So we have $6$ pairwise non-isomorphic transversals, which is a contradiction. This proves the corollary. \end{proof}
\begin{prop}\label{p1} Let $(G, H)$ be a minimal counterexample, and $N$ an Aut$_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G.$ Suppose each element of $\mathcal{T}(N,H)$ is a subgroup. Then \\ (i) $[G: H]=8$;\\ (ii) any two non-trivial elements of $H$ are conjugate in Sym$(8)$;\\
(iii) $|H|=3$. \end{prop}
\begin{proof} By \cite[Lemma 2.4]{viv}, $[N: H]=2$, and using Corollary \ref{c1}, $[G:N]=4.$ So $[G: H]=n=8$. Suppose $h$ and $g$ are two non-trivial elements of $H$ which are not conjugate in $\text{Sym}(8)$. Take $L=\{1,l_2,l_3,l_4\}\in \mathcal{T}(G,N)$ and $T=\{1,x \} \in \mathcal{T}(N,H)$. Then by Proposition \ref{3.5}, the following five transversals of $H$ in $G$ are pairwise non-isomorphic: $S_{1}=TL$, $S_{2}=(TL\smallsetminus \{l_{2}\})\cup \{hl_{2}\}, $ $S_{3}=(TL\smallsetminus \{l_{2},l_{3}\})\cup \{hl_{2},hl_{3}\}$, $S_{4}=(TL\smallsetminus \{l_{2},l_{3},l_{4}\})\cup \{hl_{2},hl_{3},hl_{4}\}$, $S_{5}=(TL\smallsetminus\{x\})\cup \{hx\}$. Define $S_6:=(TL\smallsetminus \{x\}) \cup \{gx\}$. Clearly, $S_6$ is not a subgroup of $G$, so by Proposition \ref{5}, $S_6$ generates $G$. By Lemma \ref{A}, $S_{1}$ and $S_{6}$ are non-isomorphic. Suppose $S_6\cong S_i$ for some $i=2,3,4$. Then by Proposition \ref{6}, there exists $f \in$ Aut$_{H}G$ such that $f(S_{i})=S_6$. Then $f(x)=gx$. Suppose $f(hl_{2})=x^kl_i$, where $k=0,1$ and $i=2,3,4$. Then $f(l_{2})=f(h)^{-1}x^kl_{i}$. This implies $f(xl_{2})=f(x)f(h)^{-1}x^kl_{i}=gxf(h)^{-1}x^kl_{i} $. Now $gxf(h)^{-1}x=1$ implies $g=xf(h)x$. This is a contradiction, for $g$ and $h$ are not conjugate. Also, $gxf(h)^{-1}=1$ implies $f(h)=gx \not\in H$. This is a contradiction, for $f \in$ Aut$_HG$. Hence $S_6\not \cong S_{i}$, $i=2,3,4$.
Further, suppose $S_5\cong S_6$. Then by Proposition \ref{6}, there exists $f \in$ Aut$_{H}G$ such that $f(S_{5})=S_{6}$. Then $f(hx)=gx$, which implies $f(x)=f(h)^{-1}gx$. Suppose $f(l_i)=x^kl_j$, where $i,j\in \{2,3,4\}$ and $k\in \{0,1\}$. Then $f(xl_i)=f(h)^{-1}gx^{k+1}l_j $. Now $f(h)^{-1}gx=1$ implies $f(h)=gx \not\in H$, a contradiction; and $f(h)^{-1}g=1$ implies $f(h)=g$, which is not possible by Remark \ref{1234}. So $S_5\not \cong S_6$. Thus we have six pairwise non-isomorphic transversals of $H$ in $G$, which is a contradiction. This implies that any two non-trivial elements of $H$ are conjugate in Sym$(8)$. Hence the order of each non-trivial element of $H$ is a prime number.
By \cite[Lemma 2.4]{viv}, it follows that $H$ is an abelian group. Suppose the order of each non-trivial element of $H$ is a prime $p$. Then $|G|=8p^l$ for some natural number $l$. By \cite[Proposition 3.2 and Proposition 3.8]{skm}, $p\neq 2$, for $|{\mathcal I}(G, H)|=5$. Since $H \not \trianglelefteq G$, by the Sylow theorems, $p=3$ or $7$. Now, if $p=7$, then by the Sylow theorems the normalizer of $H$ is $H$ itself. But this is a contradiction, for $H$ is normal in $N$. This implies $p=3$. Since $H \subset G \subseteq$ Sym$(8)$ and $H$ is an elementary abelian $3$-group such that any two elements of $H$ are conjugate in Sym$(8)$, $|H|=3$.\end{proof} \begin{prop} \label{13*} Let $(G, H)$ be a minimal counterexample, and $N$ an $Aut_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G.$ Then no element of $\mathcal{T}(N,H)$ is a subgroup of $N$. \end{prop}
\begin{proof}
Assume that all members of ${\mathcal T}(N,H)$ are subgroups of $N$. By Proposition \ref{p1}, $|G|=24$, $|H|=3$, $|N|=6$ and $G \subsetneqq$ Sym$(8)$. Since $G$ is a transitive group of degree $8$, using \cite[p. 879]{butler}, the possible choices for $G$ are Sym$(4)$, $Alt(4) \times Z_{2}$ and SL$(2,3)$ (the special linear group). Now using \cite[p. 881]{butler} and \cite{gap}, it can be checked that for the pairs $Alt(4) \times Z_{2} = \langle(1,6)(2,5)(3,8)(4,7),(3,5,7)(4,6,8) \rangle$, $H =\{(),(3,5,7)(4,6,8),(3,7,5)(4,8,6)\}$ and $SL(2,3)= \langle(1,6)(2,5)(3,8)(4,7),(3,5,7)(4,6,8) \rangle,$ $H =\{(),(3,5,7)(4,6,8),(3,7,5)(4,8,6)\}$, there exists $T \in \mathcal{T}(N,H)$ which is not a subgroup of $N,$ where $H\subsetneqq N\subsetneqq G$ and $|N|=6.$ This is a contradiction. So we are left with the group Sym$(4)$, which satisfies our condition. Using \cite[p. 881]{butler} we have
\noindent $G = \langle(3,5,7)(4,6,8),(1,4)(2,3)(5,6)(7,8) \rangle$;\\ $N = \langle(3,5,7)(4,6,8),(1,2)(3,4)(5,8)(6,7) \rangle;$\\ $H = \{(),(3,5,7)(4,6,8),(3,7,5)(4,8,6)\}$.\\ Take $a=(1,2)(3,6)(4,5)(7,8),$ $b=(1,3)(2,4)(5,7)(6,8)$ and $c=(1,4)(2,3)$ $(5,6)(7,8).$
By Lemma \ref{123}, the following are pairwise non-isomorphic transversals of $H$ in $G$. \begin{align*} S_{1} =& \{(), a, b, c , (1,5)(2,6)(3,7)(4,8), (1,6)(2,5)(3,4)(7,8),(1,7)(2,8)(3,5) (4,6), \\ &(1,8)(2,7)(3,4)(5,6) \},\\ S_{2} =& \{(), a ,b, c, (1,5,3)(2,6,4), (1,6)(2,5)(3,4)(7,8),(1,7)(2,8)(3,5)(4,6),\\ &(1,8)(2,7)(3,4)(5,6) \}, \\ S_{3} =& \{(), a, b, c, (1,5,3)(2,6,4),(1,6,7,4)(2,5,8,3),(1,7)(2,8)(3,5)(4,6),\\ &(1,8)(2,7)(3,4)(5,6) \}, \\ S_{4} =& \{(), a, b, c, (1,5,3)(2,6,4),(1,6,7,4)(2,5,8,3),(1,7,5)(2,8,6),\\ &(1,8)(2,7)(3,4)(5,6) \},\\ S_{5} =& \{(), a, b, c, (1,5,3)(2,6,4),(1,6,7,4)(2,5,8,3),(1,7,5)(2,8,6),\\ &(1,8,3,6)(2,7,4,5) \},\\ S_{6}=& \{(), a, b, c, (1,5,3)(2,6,4),(1,6)(2,5)(3,4)(7,8),(1,7,5)(2,8,6),\\ &(1,8)(2,7)(3,4)(5,6) \}. \end{align*}
Thus we get $|\mathcal{I}(G, H)| \geq 6$, which is a contradiction.
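Again as a sanity check outside the argument (with our own encoding of the permutations), one can verify computationally that each $S_i$ above consists of eight elements lying in pairwise distinct right cosets of $H$, i.e. that each $S_i$ is indeed a right transversal of $H$ in $G$:

```python
def perm(*cycles, n=8):
    # mapping on {1,...,n} built from disjoint cycles
    m = {i: i for i in range(1, n + 1)}
    for c in cycles:
        for u, v in zip(c, c[1:] + c[:1]):
            m[u] = v
    return m

def mul(p, q):
    # product p*q: apply p first, then q (right-action convention)
    return {i: q[p[i]] for i in p}

def inv(p):
    return {v: k for k, v in p.items()}

x = perm((3, 5, 7), (4, 6, 8))
H = [perm(), x, mul(x, x)]  # H = {(), x, x^2}
a = perm((1, 2), (3, 6), (4, 5), (7, 8))
b = perm((1, 3), (2, 4), (5, 7), (6, 8))
c = perm((1, 4), (2, 3), (5, 6), (7, 8))

S = [
 [perm(), a, b, c, perm((1,5),(2,6),(3,7),(4,8)), perm((1,6),(2,5),(3,4),(7,8)),
  perm((1,7),(2,8),(3,5),(4,6)), perm((1,8),(2,7),(3,4),(5,6))],
 [perm(), a, b, c, perm((1,5,3),(2,6,4)), perm((1,6),(2,5),(3,4),(7,8)),
  perm((1,7),(2,8),(3,5),(4,6)), perm((1,8),(2,7),(3,4),(5,6))],
 [perm(), a, b, c, perm((1,5,3),(2,6,4)), perm((1,6,7,4),(2,5,8,3)),
  perm((1,7),(2,8),(3,5),(4,6)), perm((1,8),(2,7),(3,4),(5,6))],
 [perm(), a, b, c, perm((1,5,3),(2,6,4)), perm((1,6,7,4),(2,5,8,3)),
  perm((1,7,5),(2,8,6)), perm((1,8),(2,7),(3,4),(5,6))],
 [perm(), a, b, c, perm((1,5,3),(2,6,4)), perm((1,6,7,4),(2,5,8,3)),
  perm((1,7,5),(2,8,6)), perm((1,8,3,6),(2,7,4,5))],
 [perm(), a, b, c, perm((1,5,3),(2,6,4)), perm((1,6),(2,5),(3,4),(7,8)),
  perm((1,7,5),(2,8,6)), perm((1,8),(2,7),(3,4),(5,6))],
]

# Hs = Ht iff s * t^{-1} lies in H: check distinct cosets for every pair
for s_list in S:
    for i in range(8):
        for j in range(i + 1, 8):
            assert mul(s_list[i], inv(s_list[j])) not in H
print("each S_i is a right transversal of H")
```

As before, pairwise non-isomorphism rests on the conjugacy criterion of Lemma \ref{123}; the sketch only checks the transversal property.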
Suppose $K\in \mathcal{T}(N,H)$ is a subgroup and $U \in \mathcal{T}(N,H)$ is not a subgroup of $N$. If $[N: H]=2$, then using Corollary \ref{c5}, we arrive at a contradiction.
Now we show that $[N: H] \geq 3$ is impossible. First suppose $[N: H]=3$. Then $[G:N] \geq 3$ (for $[G: H] \geq 7$). Choose distinct elements $k_{2},k_{3} \in K \smallsetminus \{1\}$ and $l_{2},l_{3} \in L \smallsetminus \{1\},$ for some $L \in \mathcal{T}(G,N)$. Take $h \in H\smallsetminus \{1\}$. Consider $S_{1} = KL$ and $S_{2} = UL.$ By Proposition \ref{7}, there exists $S_{3} \in \mathcal{T}(G, H)$ such that $U \subseteq S_{3}$ and $S_{3} \neq UL_1$ for any $L_{1} \in \mathcal{T}(G,N)$. Also consider $S_{4}=(KL \smallsetminus \{l_{2}\}) \cup \{hl_{2}\}$, $S_{5}=(KL \smallsetminus \{l_{2},l_{3}\}) \cup \{hl_{2},hl_{3}\}$ and $S_{6}=(KL \smallsetminus \{k_{2}l_{2},k_{2}l_{3},k_{3}l_{2}\}) \cup \{hk_{2}l_{2},hk_{2}l_{3},hk_{3}l_{2}\}$. Since $U \subseteq S_{i}$ ($i=2,3$), by Proposition \ref{5}, $\langle S_{i} \rangle = G$ ($i=2,3$). For $k\in K$, $k,hl_{2} \in S_{i}$ ($i=4,5$) but $khl_{2}=h^{'}k^{'}l_{2} \notin S_{i} $ (as $N=HK$ is a subgroup), for some $k^{'} \in K,$ $h^{'} \in H$. Thus $\langle S_{i} \rangle = G$ $(i=4,5)$. Now $k_{2},l_{2} \in S_{6}$ but $k_{2}l_{2} \notin S_{6},$ which implies $\langle S_{6} \rangle = G$. We now show that $S_{6} \neq T^{'}L^{'}$ for any $T^{'} \in \mathcal{T}(N,H)$ and $L^{'} \in \mathcal{T}(G,N)$. If possible, suppose $S_{6}= T^{'}L^{'}$ for some $T^{'} \in \mathcal{T}(N,H)$ and $L^{'} \in \mathcal{T}(G,N)$. Then $T^{'}=S_{6} \cap N = K$. Since $hk_{2}l_{2} \in S_{6}$, either $hk_{2}l_{2} \in L^{'} $ or $k_{i}l_{2} \in L^{'}$ for $i \neq 2,3$. If $hk_{2}l_{2} \in L^{'}$, then $hk_{2}l_{2}=k^{'}h^{'}l_{2}$ for some $k^{'} \in K,$ $h^{'} \in H$. Then for $(k^{'})^{-1} \in K$, $(k^{'})^{-1}k^{'}h^{'}l_{2}=h^{'}l_{2} \notin S_{6}$, which is not possible. If $k_{i}l_{2} \in L^{'}$, then for some $k \in K \smallsetminus \{1\}$, $hk_{2}l_{2} =k(k_{i}l_{2}).$ This implies $hk_{2} \in K,$ a contradiction.
Now we show that the $S_{i}$ ($1 \leq i \leq 6$) are pairwise non-isomorphic. Since $U$ is not a subgroup of $N$, $S_{2}, S_{3} \ncong S_{i}$ ($i=1,4,5,6$) (as $K \subseteq S_{i}$ and $N$ is an $Aut_{H}G$-invariant subgroup of $G$). Also, by Lemma \ref{A}, $S_{1} \ncong S_{6}$.
If $S_{1} \cong S_{4}$, by Proposition \ref{6}, there exists $f \in Aut_{H}G$ such that $f(S_{1})=S_{4}$. So $f(K)=K$ (for $N$ is an $Aut_{H}G$-invariant subgroup of $G$). If $f(l)=hl_{2}$ for some $l \in L$, then for $k\in K$, $f(kl)=f(k)hl_{2}=h^{'}k^{'}l_{2} \notin S_{4}$ for some $k^{'} \in K,$ $h^{'} \in H$. If for $k\in K$, $f(kl)=hl_{2}$, then $f(l)=f(k)^{-1}hl_{2} \notin S_{4}.$ Hence $hl_{2}$ has no pre-image in $S_{1}.$ Therefore $S_{1} \ncong S_{4}$. Similarly, $S_{1} \ncong S_{5}$. If $S_{4} \cong S_{5}$, by Proposition \ref{6}, there exists $f \in Aut_{H}G$ such that $f(S_{4})=S_{5}$. So $f(K)=K$ (for $N$ is an $Aut_{H}G$-invariant subgroup of $G$). Assume that for $l \in L\smallsetminus \{1,l_2\}$, $f(l)=hl_{i}$, $i=2,3$. Then $f(kl)= f(k)hl_{i}= h^{'}k^{'} l_{i} \notin S_{5}$ for some $k^{'} \in K,$ $h^{'} \in H$. Assume that for $k \in K,l\in L$, $f(kl)=hl_{i}$, $i=2,3.$ Then $f(l)= f(k)^{-1}hl_{i} \notin S_{5}.$ Therefore at least one member of the set $\{hl_{2},hl_{3}\}$ has no pre-image in $S_{4},$ which is a contradiction. Hence $S_4 \not \cong S_5$. If $S_{5} \cong S_{6}$, by Proposition \ref{6}, there exists $f \in Aut_{H}G$ such that $f(S_{5})=S_{6}$. So $f(K)=K$ (for $N$ is an $Aut_{H}G$-invariant subgroup of $G$). Assume that for some $k\in K$ and $l\in L$, $f(kl)= hk_{i}l_{2},$ where $i=2,3$. Then $f(kl)=k^{'}h^{'}l_{2}$ for some $k^{'} \in K,$ $h^{'} \in H$. Then there exists some $k_1 \in K$ such that $f(k_1)=(k^{'})^{-1}$, and so $f(k_1kl)=h^{'}l_{2} \notin S_{6}.$ Now, if $f(l)= hk_{i}l_{2}=k^{''}h^{''}l_{2}$ for some $k^{''} \in K,$ $h^{''} \in H$, then there exists some $k \in K$ such that $f(k)=(k^{''})^{-1}$, and so $f(kl)=h^{''}l_{2} \notin S_{6}.$ Therefore at least one element of the set $\{hk_{2}l_{2},hk_{2}l_{3},hk_{3}l_{2}\}$ has no pre-image in $S_{5}$, which is a contradiction. Hence $S_{5} \ncong S_{6}.$ Using a similar argument, one can show that $S_{4} \ncong S_{6}$. This shows that $[N: H]=3$ is not possible.
Assume that $[N: H] \geq 4$. Then $[G:N] \geq 2$ (for $[G: H] \geq 7$). Choose three distinct elements $k_{2},k_{3},k_{4}\in K\smallsetminus\{1\}$ and $l \in L \smallsetminus \{1\}$, where $L \in \mathcal{T}(G,N)$. Take $S_{i}$ ($1 \leq i \leq 4$) as above. Consider $S^{'}_{5}=(KL \smallsetminus \{k_{2}l,k_{3}l \}) \cup \{hk_{2}l,hk_{3}l \}$ and $S^{'}_{6}=(KL \smallsetminus \{k_{2}l,k_{3}l,k_{4}l \}) \cup \{hk_{2}l,hk_{3}l,hk_{4}l \}$. Using arguments similar to those used for $S_{6}$, we get $S^{'}_{i} \neq K^{'}L^{'}$ for any $K^{'} \in \mathcal{T}(N,H)$, $L^{'} \in \mathcal{T}(G,N)$, and $\langle S^{'}_{i} \rangle = G$ for $i=5,6.$ By the previous paragraph, the $S_{i}$ ($1 \leq i \leq 4$) are pairwise non-isomorphic. By Lemma \ref{A}, $S_{1} \ncong S^{'}_{i}$ $ (i=5,6).$ Since $U$ is not a subgroup of $N,$ $S_{2}, S_{3} \ncong S^{'}_{i} $ $ (i=5,6).$ Using the same argument as used for the pair $(S_{4},S_{6})$, it can be shown that $S_{4} \ncong S^{'}_{i}$ $(i=5,6)$. Finally, we are left to show that $S^{'}_{5} \ncong S^{'}_{6}$. If $S^{'}_{5} \cong S^{'}_{6}$, by Proposition \ref{6} there exists $f \in Aut_{H}G$ such that $f(S^{'}_{5})=S^{'}_{6}$. So $f(K)=K$ (for $N$ is an $Aut_{H}G$-invariant subgroup of $G$). Assume that $f(l)=kl$ for some $k \in K.$ Then $f(k^{'}l)=f(k^{'})kl \in Kl$ for all $k^{'} \in K$, which is not possible, for $f$ is one-one. Hence $f(l) \in \{hk_{2}l,hk_{3}l,hk_{4}l \}.$ Suppose $f(l)=hk_{2}l= k^{'}h^{'}l$ for some $k^{'} \in K,$ $h^{'} \in H.$ Let $k^{''} \in K$ be such that $f(k^{''} )=(k^{'})^{-1}$. This implies that $f(k^{''}l)=h^{'}l \notin S^{'}_{6}.$ Similarly, $f(l) \notin \{hk_{3}l,hk_{4}l\}$. So $S^{'}_{5} \ncong S^{'}_{6} .$ This proves the proposition. \end{proof} \begin{prop} \label{14} Let $(G, H)$ be a minimal counterexample, and $N$ be an $Aut_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G.$ Then $\langle S \rangle=G$ for all $S \in{\mathcal T}(G, H)$. \end{prop}
\begin{proof} Assume that there exists $S \in{\mathcal T}(G, H)$ such that $\langle S \rangle \neq G$, that is, $S$ is a subgroup of $G$ (by Proposition \ref{5}). Then, $S\cap N \in {\mathcal T}(N,H)$ is a subgroup of $N$, which contradicts Proposition \ref{13*}. Hence the result follows. \end{proof}
\begin{prop} \label{3.7} Let $(G, H)$ be a minimal counterexample and $S \in{\mathcal T}(G, H).$ Let $N$ be an $Aut_{H}G$-invariant proper subgroup of $G$ such that $H\subsetneqq N \subsetneqq G$. Then $H \trianglelefteq N$. \end{prop}
\begin{proof}
To show that $H \trianglelefteq N$, it is sufficient to show that $|{\mathcal I}(N,H)| \leq 2$: indeed, $|{\mathcal I}(N,H)| \neq 2$ by \cite{viv}, so then $|{\mathcal I}(N,H)|=1$ and $H \trianglelefteq N$ by \cite{pss}. Suppose $|{\mathcal I}(N, H)| \geq 3$. Take $K_1, K_2, K_3 \in {\mathcal T}(N,H)$ lying in distinct $\text{Aut}_HG$-orbits of ${\mathcal T}(N,H)$, and take $L \in {\mathcal T}(G,N)$. Then $K_1L, K_2L, K_3L \in {\mathcal T}(G, H)$. By Proposition \ref{7}, there exist $S_1, S_2, S_3$ containing $K_1, K_2, K_3$ respectively such that $S_1\neq K_1L_1$, $S_2 \neq K_2L_1$ and $S_3\neq K_3L_1$ for all $L_1 \in {\mathcal T}(G,N)$. By Proposition \ref{14}, each transversal generates the group. By Lemma \ref{A} and Proposition \ref{6}, $K_1L, K_2L, K_3L, S_1, S_2, S_3$ are pairwise non-isomorphic transversals. This implies $|{\mathcal I}(G, H)| \geq 6$, which is a contradiction. This proves the result. \end{proof}
\begin{prop} \label{16} Let $(G, H)$ be a minimal counterexample. Then $G$ is characteristically simple. \end{prop}
\begin{proof}
Suppose $U$ is a non-trivial proper characteristic subgroup of $G$. First suppose $G=UH$. Then ${\mathcal T}(U,U\cap H)\subseteq {\mathcal T}(UH,H)={\mathcal T}(G, H)$. Since $U<G$ and $(G, H)$ is a minimal counterexample, $|{\mathcal I}(U,U\cap H)|\leq 4$. Clearly, $|{\mathcal I}(U,U\cap H)|=1$. By \cite{pss}, $U\cap H \trianglelefteq U$. This implies $U\cap H \trianglelefteq UH=G$. But $Core_{G}(H)=\{1\}$, so $U\cap H=\{1\}$ and hence $U\in {\mathcal T}(G, H).$ Thus any proper characteristic subgroup $U$ of $G$ with $G=UH$ is a transversal of $H$ in $G$; in particular, $|U|=[G: H]\geq7.$ Let $u_{i}$ ($2\leq i\leq 7$) be distinct non-trivial elements of $U$ such that $u_{2}u_{3}=u_{4}$. Take $h \in H\smallsetminus \{1\}$. Let $S_{1}=U$ and $S_{2}= (U\smallsetminus \{u_{2}\})\cup \{hu_{2}\}$. Then $hu_{2},u_{3} \in S_{2}$ but $hu_{2}u_{3}=hu_{4} \notin S_{2}$, which implies $\langle S_{2} \rangle=G$ and hence $S_{2}\ncong S_{1}$. Consider $S_{3}= (U\smallsetminus \{u_{2},u_{5}\})\cup \{hu_{2},hu_{5}\}$. Again $hu_{2},u_{3} \in S_{3}$ but $hu_{2}u_{3}=hu_{4} \notin S_{3}$, which implies $\langle S_{3} \rangle=G$ and hence $S_{1}\ncong S_{3}$. If $S_{2}\cong S_{3},$ then by Proposition \ref{6}, there exists $f \in Aut_{H}G$ such that $f(S_{2})=S_{3}$. Clearly, for some $u \in U$, $f(u)=hu_{i}$ ($i=2$ or $5$). Since $U$ is $Aut_{H}G$-invariant, $hu_{i} \in U$ implies $h \in U$, a contradiction. Therefore, $S_{2}\ncong S_{3}$. Consider $S_{4}= (U\smallsetminus \{u_{2},u_{5},u_{6}\})\cup \{hu_{2},hu_{5},hu_{6}\}$, $S_{5}= (U\smallsetminus \{u_{2},u_{5},u_{6},u_{7}\})\cup \{hu_{2},hu_{5},hu_{6},hu_{7}\}$ and $S_{6}= \{1\}\cup \{hu \mid u \in U\smallsetminus \{1\}\}$. Using similar arguments, it can be verified that the $S_{i}$ ($1\leq i\leq 6$) are pairwise non-isomorphic. Therefore, $|\mathcal{I}(G, H)|\geq 6,$ a contradiction. So, $G \neq UH$.
Suppose $H \not \subseteq U$. First suppose $[UH: H]=2.$ Take $x \in U \smallsetminus H.$ Then $x^{2} \in H.$ Therefore, $\langle u^{2}\mid u \in U \rangle = U^{2} \subseteq H$ and $ U^{2} \unlhd G.$ Since $Core_{G}(H)= \{1\}$, $U^{2} = \{1\}.$ Then $U$ is an elementary abelian $2$-group. Therefore, $\{1,x\} \in \mathcal{T}(UH,H)$ is a subgroup of $UH,$ a contradiction to Proposition \ref{13*}. Suppose $[UH: H] > 2$. Let $T_{1}= \{1,x,y\} \in \mathcal{T}(U,U \cap H).$ Also consider $T_{2}= \{1,hx,y\}$ and $T_{3}= \{1,hx,hy\}$ in $\mathcal{T}(UH, H),$ where $h \in H \smallsetminus \{1\}.$ There does not exist any $f \in Aut_{H}G$ such that $f(T_{i})=T_{j}$ for $1\leq i \neq j \leq 3.$ Now take $L \in \mathcal{T}(G,UH).$ Then $S_{1}=T_{1}L,$ $S_{2}=T_{2}L$ and $S_{3}=T_{3}L$ are pairwise non-isomorphic transversals of $H$ in $G$. Using Proposition \ref{7} and Lemma \ref{A}, we obtain six pairwise non-isomorphic transversals of $H$ in $G$, a contradiction. Therefore $H \subseteq U.$
Let $N_{1}$ be the smallest characteristic subgroup of $G$ containing $H$. Then $N_{1}$ is characteristically simple, and hence a direct product of isomorphic simple groups (\cite[3.3.15]{robin}). Both $N_{1}$ and $H$ are $Aut_{H}G$-invariant, and $H$ is properly contained in $N_{1}$ (otherwise $H$ would be characteristic, hence normal, in $G$). Then by Proposition \ref{3.7}, $H$ is normal in $N_{1}$. Thus there exists $K \in \mathcal T(N_{1},H)$ which is a subgroup of $N_{1}$, a contradiction to Proposition \ref{13*}. This completes the proof. \end{proof}
The following proposition can be deduced from the proof of \cite[Proposition 2.13]{vip} using the fact that $|\mathcal{I}(G, H)| \neq 4$ (\cite[Theorem 1.1, p. 346]{vip}). But for the sake of completeness, we give a detailed proof. \\
\begin{prop} \label{17} Let $(G, H)$ be a minimal counterexample, and let $S \in \mathcal{T}(G, H)$. Let $N$ be an $Aut_{H}(G)$-invariant proper subgroup of $G$ such that $H\subsetneqq N\subsetneqq G$. Then $S$ is indecomposable. \end{prop}
\begin{proof}
If possible, suppose that $S$ is decomposable and $S=S_1 \times S_2 \times \cdots \times S_n$ $(n \geq 2)$ is a Remak-Krull-Schmidt decomposition of $S$ (see Theorem 1.11, p. 648 of \cite{pss}). By Proposition $2.12$ and Remark $2.4$, p. 650 of \cite{pss}, we may identify $(G, H)$ with $\left(G_{S_1} S_1 \times\right.$ $\left.G_{S_2} S_2 \times \cdots \times G_{S_n} S_n, G_{S_1} \times G_{S_2} \times \cdots \times G_{S_n}\right)$. We claim that $\left|\mathcal{I}\left(G_{S_i} S_i, G_{S_i}\right)\right| \leq 5$ $(1 \leq i \leq n)$. If possible, assume that there exists $k$ $(1 \leq k \leq n)$ such that $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right|>5$. Let $T_1, T_2, T_3, T_4, T_5, T_6 \in \mathcal{T}\left(G_{S_k} S_k, G_{S_k}\right)$ be pairwise non-isomorphic right transversals. Then $L_i=S_1 \times \cdots \times S_{k-1} \times T_i \times S_{k+1} \times \cdots \times S_n$ $(1 \leq i \leq 6)$ are pairwise non-isomorphic right transversals of $H$ in $G$ by the Remak-Krull-Schmidt theorem (see Theorem 1.11, p. 648 of \cite{pss}), a contradiction.
Since $(G, H)$ is a minimal counterexample, $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right| \neq 5.$ Using \cite[Theorem 1.1, p. 346]{vip}, $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right| \neq 4$ for all $k \in\{1,2, \cdots ,n\}$. Also, $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right| \neq 2$ for all $k \in\{1,2, \cdots ,n\}$ by \cite{viv}. Therefore, either $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right|=1$ or $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right|=3$. In either case, we get $T_k \in \mathcal{T}\left(G_{S_k} S_k, G_{S_k}\right)$ which is a group. Let $T=T_1 \times T_2 \times \cdots \times T_n$. Then $T \in \mathcal{T}(G, H)$ and is a group. By \cite[p. 76]{rltr}, the group torsion $G_T$ is trivial. Using Proposition \ref{iso}, $H_T=G_T=\{1\}$. Therefore, $T$ is a subgroup of $G$, a contradiction to Proposition \ref{14}. \end{proof} The following proposition proves that $G$ is indecomposable if $(G,H)$ is a minimal counterexample. Notice that its proof is obtained by modifying the proof of \cite[Proposition 3.1]{vip}. \begin{prop} \label{18} Let $(G, H)$ be a minimal counterexample. Then $G$ is indecomposable. \end{prop} \begin{proof}
If possible, suppose that $G$ is decomposable. Let $G_1$ and $G_2$ be nontrivial proper normal subgroups of $G$ such that $G=G_1 G_2$ and $G_1 \cap G_2=\{1\}$. Let $\pi_i: G \rightarrow G_i$ $(i=1,2)$ be the projections, and let $\pi_i(H)=U_i$ $(i=1,2)$. The restriction $\left.\pi_i\right|_H$ of $\pi_i$ to $H$ induces an isomorphism $\sigma_i: H /\left(H \cap G_1\right)\left(H \cap G_2\right) \rightarrow\left(U_i /\left(H \cap G_i\right)\right)$ $(i=1,2)$. This gives an isomorphism $\theta=\sigma_2 \circ \sigma_1^{-1}$ from $U_1 /\left(H \cap G_1\right)$ to $U_2 /\left(H \cap G_2\right)$ given by $$\theta\left(\pi_1(h)\left(H \cap G_1\right)\right)=\pi_2(h)\big(H \cap G_2\big), h \in H.$$ Also $$H=\left\{u_1 u_2 \in U_1 U_2 \mid \theta\left(u_1\left(H \cap G_1\right)\right)=u_2\left(H \cap G_2\right)\right\}.$$ Since $Core_{G}(H) = \{1\}$ (Proposition \ref{5}), $H \cap G_i \neq G_i$ for $i=1,2$. Suppose that $H \cap G_1 = U_1.$ Then the isomorphism $\theta$ implies $H \cap G_2=U_2$. Let $S_i \in \mathcal{T}\left(G_i, U_i\right)$ $(i=1,2)$. Now, as argued in the second paragraph of the proof of Proposition 2.6, p. 650 of \cite{pss}, we get $S=S_1 S_2 \in \mathcal{T}(G, H)$ which is decomposable. If there exists an $Aut_H G$-invariant subgroup $N$ of $G$ such that $H \varsubsetneqq N \varsubsetneqq G$, then this is a contradiction (Proposition \ref{17}). Thus, there does not exist any $Aut_{H}G$-invariant subgroup $N$ of $G$ such that $H \varsubsetneqq N \varsubsetneqq G$.
Next, we prove that there is a member of $\mathcal{T}(G, H)$ which is not a subgroup of $G$ and is decomposable. Assume that each member of $\mathcal{T}\left(G_i, U_i\right)$ $(i=1,2)$ is a subgroup of $G_i$. Then by Lemma 2.4, p. 1719 of \cite{viv}, $\left|S_1\right|=\left|S_2\right|=2$. Hence $|S|=4$. Since $Core_{G}(H)=\{1\}$, we can identify $G$ with a subgroup of Sym$(4)$. By the subgroup structure of Sym$(4)$, there is no non-abelian decomposable subgroup of Sym$(4)$. This is a contradiction. Therefore, there exist $S_i^{\prime} \in \mathcal{T}\left(G_i, U_i\right)$ $(i=1,2)$ such that at least one of them is not a subgroup of $G$. This implies that $S^{\prime}=S_1^{\prime} S_2^{\prime} \in \mathcal{T}(G, H)$ is decomposable and is not a subgroup of $G$.
Let $S^{\prime}=S_1 \times S_2 \times \cdots \times S_n$ $(n \geq 2)$ be a Remak-Krull-Schmidt decomposition of $S^{\prime}$ (see Theorem 1.11, p. 648 of \cite{pss}). Since $\operatorname{Core}_{G}(H)=\{1\}$, $G = H_{S^{\prime}} S^{\prime} \cong G_{S^{\prime}} S^{\prime}$. By Proposition $2.1$ and Remark $2.4$ of \cite{pss}, we identify $(G, H)$ with \\ $\left(G_{S_1} S_1 \times G_{S_2} S_2 \times \cdots \times G_{S_n} S_n, G_{S_1} \times G_{S_2} \times \cdots \times G_{S_n}\right)$. Also, since $Core_{G}(H)=\{1\}$, \\ $Core _{H_{S_i} S_i}\left(H_{S_i}\right)=\{1\}$ $(1 \leq i \leq n)$. By arguments similar to those in the first paragraph of the proof of Proposition \ref{17}, either $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right|=1$ or $\left|\mathcal{I}\left(G_{S_k} S_k, G_{S_k}\right)\right|=3$ for all $k \in\{1, 2, \cdots ,n\}.$ Since $H_{S^{\prime}} S^{\prime}=G$, there exists $1 \leq i \leq n$ such that $G_{S_i} \neq\{1\}$. Without loss of generality, we may assume that $i=1$. Then by Theorem A, p. 2025 of \cite{ict2}, $\left|G_{S_1}\right|=2$, $\left|S_1\right|=3$ and $G_{S_1} S_1 \cong \operatorname{Sym}(3)$. Further, assume that $G_{S_k} \neq\{1\}$ for some $k>1$. We may assume that $k=2$. Then, as argued above, $G_{S_2} S_2 \cong \operatorname{Sym}(3)$ and $\left|G_{S_2}\right|=2$. Thus $G_{S^{\prime}} S^{\prime} \cong \operatorname{Sym}(3) \times \operatorname{Sym}(3) \times G_{S_3} S_3 \times \cdots \times G_{S_n} S_n$. Let $T_{1},T_{2},T_{3} \in\mathcal{T}(G_{S_{1}}S_{1},G_{S_{1}})=\mathcal{T}(\text{Sym}(3),\text{Sym}(2))$ and $T_{1}^{'},T_{2}^{'},T_{3}^{'} \in\mathcal{T}(G_{S_{2}}S_{2},G_{S_{2}})=\mathcal{T}(\text{Sym}(3),\text{Sym}(2))$ be such that $T_{1}$ and $T_{1}^{'}$ are subgroups of $G_{S_{1}}S_{1}$ and $G_{S_{2}}S_{2}$ respectively. Since $|\mathcal{I}(\text{Sym}(3),\text{Sym}(2))|=3,$ we may assume that $T_{2}\ncong T_{3}^{'}$ and $T_{3}\ncong T_{2}^{'}$. Let $K=S_{3}\times S_{4}\times \cdots \times S_{n}$. 
Let $S_{rq}=T_{r}\times T_{q}^{'}\times K \in \mathcal{T}(G, H)$ $(1 \leq r,q \leq 3)$. Then, using \cite[Theorem $1.11$, p. $648$]{pss}, $S_{11}, S_{12}, S_{13}, S_{22}, S_{23}, S_{33}$ are pairwise non-isomorphic, which is a contradiction (for $|\mathcal{I}(G, H)|=5$). Therefore $G_{S_{j}}=\{1\}$ for $2\leq j \leq n$. This means that $S_j$ ($2\leq j \leq n$) is a subgroup of $G$.
Assume that $S_j$ $(2 \leq j \leq n)$ are perfect. Hence, the solvable radical $R$ of $G$ is isomorphic to $G_{S_1} S_1$ and $N=H R=H S_1$ is an $Aut_H G$-invariant proper subgroup of $G$ containing $H$ properly. This is again a contradiction. Thus, there exists $r$ $ (2 \leq r \leq n)$ such that $S_r$ is not a perfect group. Now, consider the commutator $[G, G]$ of the group $G$. Since $[G, G] \varsubsetneqq T_1 \times S_2 \times \cdots \times S_n$ (for $\left[S_r, S_r\right] \neq S_r$ ), $[G, G] \notin \mathcal{T}(G, H)$. This implies that $H[G, G]$ is an $\operatorname{Aut}_H G$-invariant proper subgroup of $G$ containing $H$ properly. This is a contradiction.
Thus, we may now assume that $H \cap G_i \neq U_i$ $(i=1,2)$. Suppose that $U_1=G_1$ and $U_2=G_2$. Then, as argued in the third paragraph of the proof of Proposition $2.6$, p. $650$ of \cite{pss}, replacing Corollary $2.3$ of \cite{pss} by Proposition \ref{5} (i), we get $G_2 \in \mathcal{T}(G, H)$ with $H \cong G_1 \cong G_2$. If there exists an $Aut_H G$-invariant subgroup $N$ such that $H \varsubsetneqq N \varsubsetneqq G$, then this is a contradiction (Proposition \ref{14}). Thus, there does not exist any $Aut_{H}G$-invariant subgroup $N$ of $G$ such that $H \varsubsetneqq N \varsubsetneqq G$. Now, assume that $G_2$ is not characteristically simple. Let $K$ be a non-trivial proper characteristic subgroup of $G_2$. Then $H \varsubsetneqq H K \varsubsetneqq G$. Since $H K$ is an $Aut_H G$-invariant subgroup of $G$, this is a contradiction. Hence $G_2$ is characteristically simple. Since $G_2 \cong H$, $H$ is characteristically simple. By $3.3.15$ of \cite{robin}, $G_2$ is a direct product of isomorphic finite simple groups. Assume that $G_2$ is an elementary abelian $p$-group. This implies that $G$ is a $p$-group. Then the center $Z(G)$ of the group $G$ is non-trivial. Note that $Z(G) \notin \mathcal{T}(G, H)$ (for otherwise $G \cong H \times Z(G)$). Since $H Z(G)$ is an $Aut_H G$-invariant proper subgroup of $G$ containing $H$ properly, we get a contradiction. Thus $G_2$ is non-abelian. Let $H=H_1 \times \cdots \times H_n$ and $G_2=L_1 \times \cdots \times L_n$, where all $H_i$ and $L_j$ $(1 \leq i, j \leq n)$ are isomorphic to a fixed non-abelian simple group (see $3.3.15$ of \cite{robin}). Since $G_2$ is a direct factor of $G$, each direct factor of $G_2$ is a normal subgroup of $G$. Hence $H_i L_j$ $(1 \leq i, j \leq n)$ is a subgroup of $G$. By Theorem 1 of \cite{foguel}, $H_i L_j \cong H_i \times L_j$ $(1 \leq i, j \leq n)$. This implies that $G=H \times G_2$. This is a contradiction.
Thus, we may assume that $H \cap G_i \neq U_i$ $(i=1,2)$ and $U_1 \neq G_1$. Let $S_i \in \mathcal{T}\left(G_i, U_i\right)$ ($i=1,2)$ and $T \in \mathcal{T}\left(U_2, H \cap G_2\right)$. Then by the same argument as in the last paragraph of the proof of Proposition 2.6, p. 650 of \cite{pss}, we get that $T \in \mathcal{T}\left(U_1 U_2, H\right)$ and $S=S_1\left(T S_2\right) \in \mathcal{T}(G, H)$ which is decomposable. If there exists an $Aut_H G$-invariant subgroup $N$ of $G$ such that $H \varsubsetneqq N \varsubsetneqq G$, then this is a contradiction (Proposition \ref{17}). Thus, there does not exist any $Aut_H G$- invariant subgroup $N$ of $G$ such that $H \varsubsetneqq N \varsubsetneqq G$.
Assume that $U_2=G_2$. By arguments similar to those in the third paragraph of the above proof, we get a member of $\mathcal{T}(G, H)$ which is not a subgroup of $G$ and is decomposable. Next, assume that $U_2 \neq G_2$. Further, assume that each member of $\mathcal{T}\left(G_1, U_1\right)$ and of $\mathcal{T}\left(G_2, U_2\right)$ is a subgroup of $G_1$ and $G_2$ respectively. Then by Lemma 2.4, p. 1719 of \cite{viv}, $G_i=U_i \rtimes C_2$ $(i=1,2)$ with $U_i$ an abelian subgroup of $G_i$. Then $U_1 U_2$ is an abelian subgroup of $G$. Since $H \varsubsetneqq U_1 U_2,$ we have $H \varsubsetneqq N_G(H)$. This is again a contradiction, for $N_G(H)$ is an $Aut_H G$-invariant subgroup of $G$. Therefore, there exists $S_1^{\prime} \in \mathcal{T}\left(G_1, U_1\right)$ or $S_2^{\prime} \in \mathcal{T}\left(G_2, U_2\right)$ which is not a subgroup of $G$. This implies that $S^{\prime}=S_1^{\prime}\left(T S_2^{\prime}\right) \in \mathcal{T}(G, H)$ is decomposable and is not a subgroup of $G$. Thus, whether $U_2=G_2$ or $U_2 \neq G_2$, we always get an $S^{\prime} \in \mathcal{T}(G, H)$ which is not a subgroup of $G$ and is decomposable. But this gives a contradiction, as argued in the fourth paragraph of the above proof. \end{proof}
\begin{cor} \label{19} Let $(G, H)$ be a minimal counterexample. Then $G$ is a non-abelian simple group. \end{cor}
\begin{proof} By Proposition \ref{16}, $G$ is characteristically simple. Also, $G$ is indecomposable by Proposition \ref{18}. Then by \cite[3.3.15]{robin}, $G$ is a non-abelian simple group. \end{proof}
\section{Proof of Theorem \ref{main}} \label{s6} In this section, we prove that a minimal counterexample is not possible.
\begin{prop} \label{20}
Let $(G, H)$ be a minimal counterexample. Let $S \in \mathcal{T}(G, H)$ such that $H_{S}=H$. Let $\mathcal{A}^{'}=\{L \in \mathcal{T}(G, H) \mid L\cong S\}$. Then $|\mathcal{A}^{'}|< \frac{m^{n-1}}{16}$, where $m$ and $n$ are the order and the index of $H$ in $G$, respectively. \end{prop}
\begin{proof}
Suppose that $|\mathcal{A}^{'}|\geq \frac{m^{n-1}}{16}$. By Corollary \ref{19}, $G$ is a non-abelian simple group, so $|G|\geq 60$ and $n\geq 5$. Now $H_{S}=H$ implies $\langle S \rangle=G$. Since $Core_{G}(H)=\{1\},$ $|G|\leq n!.$ By Proposition \ref{6}, $Aut_{H}G$ acts transitively on $\mathcal{A}^{'}.$ So,
$$|Aut(G)|\geq |Aut_{H}G|\geq |\mathcal{A}^{'}|\geq \frac{m^{n-1}}{16}. $$
Next, we show that $\frac{m^{n-1}}{16}>m^{2}n^{2}=|G|^{2}$. If $n \in \{7,8,\ldots,14\}$ (recall that $[G: H]\geq 7$), then using the fact that $|G|\geq 60$ we have $\frac{m^{n-1}}{16}>m^{2}n^{2}=|G|^{2}$. Suppose $n\geq 15$. Since $m\geq 2$, we have $ \frac{m^{n-1}}{16}=\frac{m^{n-3}m^{2}}{16}\geq 2^{n-7}m^{2}$. By induction, it can be shown that $2^\frac{n-7}{2}>n$ for $n\geq15,$ and therefore $2^{n-7}>n^{2}$. Thus $\frac{m^{n-1}}{16}>m^{2}n^{2}=|G|^{2}$, that is, $|Aut(G)|> |G|^{2},$ which is a contradiction by \cite[Lemma $3.4$]{pss}. Hence the proof follows. \end{proof}
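The elementary inequality invoked in the last step can also be spot-checked numerically; the short sketch below (plain Python, an aside rather than part of the proof) confirms $2^{n-7}>n^{2}$, equivalently $2^{\frac{n-7}{2}}>n$, on an initial range, the induction in the proof supplying all larger $n$.

```python
# check 2**(n-7) > n**2 (equivalently 2**((n-7)/2) > n) on an initial range;
# the induction argument extends it to all n >= 15
ok = all(2 ** (n - 7) > n ** 2 for n in range(15, 1000))
assert ok

# the threshold matters: the inequality fails at n = 14 (2**7 = 128 < 196)
assert not (2 ** (14 - 7) > 14 ** 2)
```

The failure at $n=14$ explains why the cases $7\leq n\leq 14$ are handled separately via $|G|\geq 60$.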
\begin{prop} \label{21}
Let $(G, H)$ be a minimal counterexample. Let $\mathcal{A}_{1}=\{S \in \mathcal{T}(G, H)\mid H_{S}=\{1\}\}$. Then $|\mathcal{A}_{1}|<\frac{m^{n-1}}{4}$. \end{prop}
\begin{proof}
If $\mathcal{A}_{1}= \emptyset$, then we are done. Suppose $\mathcal{A}_{1} \neq \emptyset$. Consider $\mathcal{A}_{2}=\{L \in \mathcal{T}(G, H)\mid H_{L}=H\}.$ Then clearly $\mathcal{A}_{1}\cap \mathcal{A}_{2}= \emptyset$ and $\mathcal{A}_{1}\cup \mathcal{A}_{2}= \mathcal{T}(G, H)$, and no member of $\mathcal{A}_{1}$ is isomorphic to a member of $\mathcal{A}_{2}$. Let $S \in \mathcal{A}_{1}$. Then $H_{S}=\{1\},$ and so $S$ is a subgroup of $G$. Since $[G: H]\geq7,$ there exist distinct non-trivial elements $u_{1},u_{2},u_{3}$ of $S$ such that $u_{1}u_{2}=u_{3}$. Let $S^{'}=(S \smallsetminus \{u_{3}\})\cup \{hu_{3}\}$, where $h \in H\smallsetminus \{1\}$. Now $u_{1},u_{2} \in S^{'}$ but $u_{1}u_{2}=u_{3} \notin S^{'}$. Therefore, $\langle S^{'} \rangle=G$ and so $S^{'} \in \mathcal{A}_{2}$. By an argument similar to that in the second part of the proof of Proposition $3.5$ of \cite{ict2}, it can be shown that the map $S \mapsto S^{'}$ from
$\mathcal{A}_{1}$ to $\mathcal{A}_{2}$ is injective, and so we get \\
$|\mathcal{A}_{1}|\leq |\mathcal{A}_{2}|< 4\cdot\frac{m^{n-1}}{16}$ (by Proposition \ref{20}, since $\mathcal{A}_{2}$ is the union of at most four isomorphism classes, for $|\mathcal{I}(G, H)|=5$), that is, $|\mathcal{A}_{1}|< \frac{m^{n-1}}{4}$. \end{proof}
\begin{proof}[Proof of the Theorem \ref{main}] Let $(G, H)$ be a minimal counterexample. Suppose $\mathcal{A}_{i}$ $(1\leq i\leq 5 )$ are the distinct isomorphism classes in $\mathcal{T}(G, H)$. First, assume that $\langle S \rangle=G$ for all $S \in \mathcal{T}(G, H).$ Then by Proposition \ref{20}, we get
$m^{n-1}=|\mathcal{T}(G, H)|=|\mathcal{A}_{1}|+|\mathcal{A}_{2}|+\cdots+|\mathcal{A}_{5}|< 5\cdot\frac{m^{n-1}}{16}= \frac{5m^{n-1}}{16}$, a contradiction.
So, there exists $S \in \mathcal{T}(G, H)$ such that $\langle S \rangle \neq G$. Let $\mathcal{A}_{1}=\{S \in \mathcal{T}(G, H) \mid H_{S}=\{1\} \}$ and $\mathcal{A}_{2}=\{L \in \mathcal{T}(G, H)\mid H_{L}= H\}$. Then
$m^{n-1}=|\mathcal{T}(G, H)|=|\mathcal{A}_{1}|+|\mathcal{A}_{2}|< \frac{m^{n-1}}{4}+\frac{m^{n-1}}{4}=\frac{m^{n-1}}{2}$ (using Proposition \ref{21} and its proof), a contradiction. This completes the proof of the theorem. \end{proof}
\section{Proof of Theorem \ref{main2} and Theorem \ref{main3}} \label{s5} In this section, we prove Theorem \ref{main2} and Theorem \ref{main3}. Suppose $G$ is a transitive subgroup of Sym$(6)$ and $H=\text{Stab}_G(1)$. Following \cite[p. 60]{dix}, the following table lists the proper transitive groups of degree $6$.
\begin{table}[h]
\tiny{ \begin{tabular}{|c c c c c |} \hline SN.\!\! & Order & \!\!Description & Generators of Group $G$ & $H=\text{Stab}_G(1)$ \\ [0.5ex] \hline\hline 1 & 6 & $Z_{6}$ & $\langle (123456) \rangle$ & \{()\} \\ \hline 2 & 6 & Sym$(3)$ & $\langle (1,2)(3,4)(5,6),(1,3,5)(2,4,6) \rangle$ & $\langle \{()\} \rangle$ \\ \hline 3 & 12 & $D_{6}$ & $\langle (123456),(1,6)(2,5)(3,4) \rangle$ & $\langle(2,6)(3,5) \rangle$ \\ \hline 4 & 12 & $Alt(4)$ & $\langle (1,2,3)(4,5,6),(1,4)(2,5) \rangle$ & $\langle (2,3)(4,5)\rangle$ \\ \hline 5 & 18 & $Z_{3}$ $\times$ Sym$(3)$ & $\langle (1,2,3),(1,4)(2,5)(3,6) \rangle$ & $\langle (4,5,6) \rangle$ \\ \hline 6 & 24 & $Z_{2} \times Alt(4)$ & $\langle (1,2,3)(4,5,6),(1,4) \rangle$ & $\langle (3,6),(2,5)(3,6) \rangle$ \\ \hline 7 & 24 & Sym$(4)$ & $\langle (1,2,3)(4,5,6),(1,5,4,2) \rangle$ & $\langle (2,6,5,3),(2,5)(3,6)\rangle$ \\ \hline 8 & 36 &\! Sym$(3)$ \! $\times\!$ Sym$(3)$ & $\langle (1,2,3),\!(1,2)(4,5),\!(1,4)(2,5)(3,6) \rangle$ & $\langle (4,5,6),\!(2,3)(5,6) \rangle$ \\ \hline 9 & 36 & $Z_{3}^{2} \rtimes Z_{4}$ & $\langle (1,3,5,4)(2,6),(1,6,5) \rangle$ & $\langle (3,4)(5,6),(2,4)(5,6) \rangle$ \\ \hline 10 & 48 &$Z_{2}$ $\times$ Sym$(4)$ & $\langle (1,2),(1,3,5)(2,4,6),(1,3)(2,4) \rangle$&$\langle(3,5)(4,6),(5,6),(3,4) \rangle$ \\ \hline 11 & 60 & $Alt(5)$ & $\langle (1,2)(3,4),(1,3,4)(2,5,6) \rangle$ & $\langle (2,4)(3,6),(2,5,4,3,6)\rangle$\\ \hline 12 & 72 & Sym$(3)$ $\wr$ $Z_{2}$ & $\langle (1,2,3),(1,2),(1,4)(2,5)(3,6)\rangle$ & $\langle (5,6),(2,3),(4,6,5) \rangle$ \\ \hline 13 & 120 & Sym$(5)$ & $\langle (1,4)(2,6)(3,5),(1,2,3,4) \rangle$ & $\langle (2,6,5,3),(3,6,5,4) \rangle$ \\ \hline 14 & 360 & $Alt(6)$ & $\langle (2,3)(4,5),(1,2,3,5)(4,6) \rangle$ & $\langle (2,3)(4,5),(2,6,4,3,5) \rangle$ \\ \hline \hline \end{tabular}} \end{table}
\begin{center} Table 1: Proper transitive subgroups of Sym$(6)$ \end{center}
Here, Sym$(3)$ $\wr$ $Z_{2}$ is the wreath product of Sym$(3)$ by $Z_{2}$, and $Z_{3}^{2} \rtimes Z_{4}$ is the semidirect product of $Z_{3}^{2}$ by $Z_{4}$.\\
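Several entries of Table 1, and the basic transversal condition used throughout, can be spot-checked with a computer algebra system. The sketch below is an aside, not part of the paper's argument; it uses sympy's permutation groups (the helper \texttt{P} and the shift to 0-indexed points are ours) to verify rows 1, 3 and 5, and to confirm that the powers of the $6$-cycle form a right transversal of $H$ in $D_{6}$.

```python
from itertools import combinations
from sympy.combinatorics import Permutation, PermutationGroup

def P(*cycles):
    # permutation of Sym(6) given by 1-indexed cycles (sympy is 0-indexed)
    return Permutation([[i - 1 for i in c] for c in cycles], size=6)

# Row 1: Z_6 acts regularly, so the point stabilizer is trivial
G1 = PermutationGroup(P((1, 2, 3, 4, 5, 6)))
assert G1.order() == 6 and G1.is_transitive()
assert G1.stabilizer(0).order() == 1

# Row 3: D_6 of order 12, with Stab_G(1) = <(2,6)(3,5)>
G3 = PermutationGroup(P((1, 2, 3, 4, 5, 6)), P((1, 6), (2, 5), (3, 4)))
H3 = G3.stabilizer(0)
assert G3.order() == 12 and H3.order() == 2
assert H3.contains(P((2, 6), (3, 5)))

# Row 5: Z_3 x Sym(3) of order 18, with Stab_G(1) = <(4,5,6)>
G5 = PermutationGroup(P((1, 2, 3)), P((1, 4), (2, 5), (3, 6)))
assert G5.order() == 18 and G5.stabilizer(0).order() == 3

# A set S of [G:H] elements of G is a right transversal of H in G
# iff s*t^-1 never lies in H for distinct s, t in S; for example,
# the powers of the 6-cycle form a right transversal in row 3:
r = P((1, 2, 3, 4, 5, 6))
S = [r**k for k in range(6)]
assert all(not H3.contains(s * ~t) for s, t in combinations(S, 2))
```

The same pairwise test $st^{-1}\notin H$ can be applied verbatim to the explicit transversals displayed later in this section.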
\begin{rem} \label{rem1} Suppose $G$ is a transitive subgroup of symmetric group $\text{Sym}(n)$ and $H=\text{Stab}_{G}(1)$. The number of fixed points of $H$ is $[N_{G}(H): H]$ \cite [p. 19]{dix}. \end{rem}
\begin{lem} \label{lem1}
Suppose $G$ is a transitive subgroup of symmetric group Sym$(6)$ and $H=\text{Stab}_{G}(1)$ such that $|\mathcal{I}(G, H)|=5$. Then $H \subseteq Alt(6)$. \end{lem}
\begin{proof} Suppose $H \nsubseteq Alt(6)$. Then $H$ and each of its cosets contain half even and half odd permutations. So we can construct six transversals $S_{i}$ ($1 \leqslant i \leqslant 6$) of $H$ in $G$ by choosing $i$ even elements and $6-i$ odd elements. By Lemma \ref{123}, the $S_i$'s are pairwise non-isomorphic. This is a contradiction. So $H \subseteq Alt(6)$. \end{proof}
In $\text{Sym}(n)$ we say that two elements are of the same form if they are conjugate in $\text{Sym}(n).$ By elements of the form $2,2$-cycle we mean elements conjugate to $(1,2)(3,4).$
\begin{lem} \label{lem2} If $G$ is a transitive subgroup of the symmetric group \text{Sym}$(6)$ and $H=\text{Stab}_{G}(1)$ such that $|\mathcal{I}(G, H)|=5$, then $H$ is not transitive on the set $Y=\{2,3,4,5,6\}$. \end{lem}
\begin{proof} Suppose $H$ is transitive on the set $Y=\{2,3,4,5,6\}$, and let $H$, $Ha_{2}$, $Ha_{3}$, $Ha_{4}$, $Ha_{5}$, $Ha_{6}$ be the six cosets of $H$ in $G$, where each element of $Ha_{i}$ sends $1$ to $i$ ($2\leq i\leq 6$). Take $x \in Ha_{2}$. Since $H$ acts transitively on $Y,$ for each $i$ ($3\leq i\leq 6$) there exists $h \in H$ such that $h(2)=i$. Then $h^{-1}xh \in Ha_{i}$ and $h^{-1}Ha_{2}h=Hh^{-1}a_{2}h=Ha_{i}$. Hence all the cosets are conjugate to each other. If a coset $Ha_{i}$ ($2 \leqslant i \leqslant 6$) contains two forms of elements, then we can construct $6$ pairwise non-isomorphic transversals. So each coset contains only one form of elements, and hence all elements of $G \smallsetminus H$ have the same form. Since $H \ntrianglelefteq G$, there exists $g \in G$ such that $(G \smallsetminus H)\cap gHg^{-1} \neq \emptyset $. So each element of $G \smallsetminus H$ is an even permutation, and therefore $G \subseteq Alt(6).$ Also, since $H$ fixes $1,$ the possible forms of elements of $H$, and hence of $G$, are $3$-cycle, $5$-cycle or $2,2$-cycle.
Suppose $G \smallsetminus H$ contains a $3$-cycle. Then the only possible elements of the coset $Ha_{2}$ are $(1,2,3)$, $(1,2,4)$, $(1,2,5)$ and $(1,2,6)$. So $|H|\leq 4$. This is a contradiction, for $H$ is transitive on $Y$.
Suppose $G \smallsetminus H$ contains a $5$-cycle, say $(1,2,3,4,5) \in G \smallsetminus H.$ Then there can be at most $6$ elements in the coset $Ha_{2}$. Since $H$ is transitive, $|H|=5$, which implies $|G|=30$ and $H$ is the Sylow $5$-subgroup of $G$. So $H$ is normal in $G$. This is a contradiction, for $H \ntrianglelefteq G.$
Suppose $G \smallsetminus H$ contains a $2,2$-cycle, say $(1,2)(3,4) \in G \smallsetminus H$. Then each coset of $H$ has at most $6$ elements, so $|H|\leq 6$. Since $H$ is transitive, $|H|=5$, which implies $|G|=30$ and, as above, $H$ is normal in $G$. This is a contradiction, for $H \ntrianglelefteq G.$ Therefore $H$ cannot be transitive on $Y$. \end{proof}
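Lemma \ref{lem2} can also be sanity-checked against Table 1: for each listed group, the point stabilizer has more than one orbit on the remaining five symbols. A sketch for rows 3 and 5 (an aside using sympy, with our helper \texttt{P} shifting to 0-indexed points):

```python
from sympy.combinatorics import Permutation, PermutationGroup

def P(*cycles):
    # permutation of Sym(6) given by 1-indexed cycles (sympy is 0-indexed)
    return Permutation([[i - 1 for i in c] for c in cycles], size=6)

# rows 3 and 5 of Table 1
groups = [
    PermutationGroup(P((1, 2, 3, 4, 5, 6)), P((1, 6), (2, 5), (3, 4))),  # D_6
    PermutationGroup(P((1, 2, 3)), P((1, 4), (2, 5), (3, 6))),           # Z_3 x Sym(3)
]
for G in groups:
    H = G.stabilizer(0)
    # no orbit of H on the symbols 2,...,6 is all five of them,
    # i.e. H is not transitive on Y
    assert max(len(H.orbit(i)) for i in range(1, 6)) < 5
```

The same loop extends to the remaining rows of the table once their generators are entered.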
\begin{cor} \label{t1} If $G$ is a transitive subgroup of the symmetric group Sym$(6)$ and $H=\text{Stab}_{G}(1)$ such that $|\mathcal{I}(G, H)|=5$, then the elements of $H$ are of the form $3$-cycle or $2,2$-cycle. \end{cor}
\begin{proof} Since $H \subseteq Alt(6)$ $\cap$ Sym$(5)$, the possible forms of elements in $H$ are $2,2$-cycle, $3$-cycle and $5$-cycle. But by Lemma \ref{lem2}, a $5$-cycle is not possible. Hence each element of $H$ is of the form $3$-cycle or $2,2$-cycle. \end{proof}
\begin{lem} Suppose $G$ is a transitive subgroup of the symmetric group Sym$(6)$ and $H=\text{Stab}_{G}(1)$ such that $|\mathcal{I}(G, H)|=5$. Then $H$ fixes some symbol of the set $Y=\{2,3,4,5,6\}$. \end{lem}
\begin{proof}
Suppose $H$ does not fix any symbol of the set $Y=\{2,3,4,5,6\}$. This implies $N_{G}(H)=H$. Now $H$ has orbits of size $2$ and $3$ on $Y$, which implies that $6$ divides $|H|$. By Corollary \ref{t1}, the elements of $H$ are either of the form $3$-cycle or $2,2$-cycle. Suppose $\{a_{2}, a_{3}, a_{4}\} \subseteq Y$ forms one orbit and $\{a_{5}, a_{6}\} \subseteq Y$ forms the other. Now $\text{Stab}_{H}(a_{2})$ has at most two elements, namely $()$ and $(a_{3}, a_{4})(a_{5}, a_{6})$. Then $|H| \leq |\text{Stab}_{H}(a_{2})|\, |Orb_{H}(a_{2})| = 6$. Hence $|H|=6$, and this gives $|G|=36.$ From Table $1,$ we have two choices for $G$. For $G= \langle (1,2,3),(1,2)(4,5),(1,4)(2,5)(3,6) \rangle$ and $H=\langle (4,5,6),(2,3)(5,6) \rangle,$ take $a=(1,6,2,4,3,5)$. By Lemma \ref{123}, the following are pairwise non-isomorphic transversals of $H$ in $G$. \begin{align*} S_{1} =& \{(),(1,2,3),(1,3,2), (1,4)(2,5)(3,6), (1,5)(2,6)(3,4), (1,6)(2,5)(3,4)\}, \\ S_{2} =& \{(),(1,2,3),(1,3,2), (1,4)(2,5)(3,6), (1,5)(2,6)(3,4), a\},\\ S_{3} =& \{(),(1,2,3),(1,3,2), (1,4)(2,5)(3,6), a^{-1}, a\},\\ S_{4}=& \{(),(1,2,3),(1,3,2), (1,4,3,6,2,5),a^{-1}, a \}, \\ S_{5}=& \{(),(1,2,3),(1,3,2)(4,6,5), (1,4,3,6,2,5), a^{-1}, a\}, \\ S_{6}=& \{(),(1,2,3)(4,6,5),(1,3,2)(4,6,5), (1,4,3,6,2,5), a^{-1}, a\}. \end{align*}
Thus we get $|\mathcal{I}(G, H)| \geq 6$, which is a contradiction. For $G= \langle (1,3,5,4)(2,6),(1,6,5) \rangle$ and $H=\langle (2,3,4),(2,3)(5,6) \rangle,$ take $a=(1,2,6,3)(4,5)$, $b=(1,3)(2,6,4,5)$ and $c=(1,4,5,3)(2,6).$ By Lemma \ref{123}, the following are pairwise non-isomorphic transversals of $H$ in $G$. \begin{align*} S_{1} =& \{(), a, b, c, (1,5,6), (1,6,5)\},\\ S_{2}=& \{(), a, b, c, (1,5,6), (1,6,5)(2,4,3)\},\\ S_{3}=& \{(), a, b, c, (1,5,6)(2,4,3), (1,6,5)(2,4,3)\}, \\ S_{4}=& \{(), a, b, c, (1,5,6), (1,6)(3,4)\}, \\ S_{5}=& \{(), a, b, c, (1,5)(3,4), (1,6)(3,4)\},\\ S_{6}=& \{(), a, b, c, (1,5,6)(2,4,3), (1,6)(3,4)\}. \end{align*}
Thus we get $|\mathcal{I}(G, H)| \geq 6$, which is a contradiction. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{main2}]
Since the number of fixed points of $H$ is more than one, by Remark \ref{rem1}, $[N_{G}(H):H] > 1$. So, $[N_{G}(H): H]=2$ or $3$. That is, $H$ fixes either $2$ symbols or $3$ symbols. Suppose $H$ fixes $3$ symbols. Then the only possible elements of $H$ are $3$-cycles, so $H \cong Alt(3)$, that is, $|G|=18.$ Since $G$ is a transitive group on $6$ points, from Table $1,$ $G \cong Z_{3} \times \text{Sym}(3)$. Take $a=(1,5,2,6,3,4)$. By Lemma \ref{123}, the following are pairwise non-isomorphic transversals of $H$ in $G$. \begin{align*} S_{1}=& \{(),(1,3,2), (1,4)(2,5)(3,6), (1,2,3), (1,6,3,5,2,4), a\},\\ S_{2}=& \{(),(1,3,2)(4,5,6),(1,4)(2,5)(3,6),(1,2,3),(1,6,3,5,2,4), a\},\\ S_{3}=& \{(),(1,3,2)(4,5,6),(1,4)(2,5)(3,6),(1,2,3)(4,5,6),(1,6,3,5,2,4),a\}, \\ S_{4}=& \{(),(1,3,2),(1,4,2,5,3,6),(1,2,3),(1,6,3,5,2,4), a\}, \\ S_{5}=& \{(),(1,3,2)(4,5,6),(1,4,2,5,3,6),(1,2,3),(1,6,3,5,2,4), a\}, \\ S_{6}=& \{(),(1,3,2),(1,4)(2,5)(3,6),(1,2,3),(1,6)(2,4)(3,5), (1,5)(2,6)(3,4)\}. \end{align*}
So $|\mathcal{I}(G, H)| > 5,$ a contradiction. Hence $G \ncong Z_{3} \times \text{Sym}(3)$. Suppose $H$ fixes $2$ symbols, $1$ and $6$. Then $[N_{G}(H): H]=2,$ and $H$ moves $4$ symbols. Suppose $H$ has elements of both forms $2,2$-cycle and $3$-cycle. Then $H$ is the alternating group on these four symbols, that is, $|H|=12$. So $|G|=72$. As is clear from Table $1,$ the corresponding subgroup $H$ is not contained in $Alt(6)$, which is a contradiction to Lemma \ref{lem1}. Suppose $H$ has only elements of the form $2,2$-cycle. Then $H$ has at most the following $3$ non-trivial elements: $(2,3)(4,5)$, $(2,4)(3,5)$ and $(2,5)(3,4)$. So, if $|H|=2,$ then $|G|=12$, and from Table $1,$ $D_{6}$ and $Alt(4)$ are the only possibilities for $G$. By Remark \ref{2.4}, the number of isomorphism classes is greater than $5$ for $D_{6}$. If $|H|=4$, then $|G|=24$, and from Table $1,$ $Z_{2} \times Alt(4)$ and $\text{Sym}(4)$ are the two possibilities for $G.$ But in both cases, $H \nsubseteq Alt(6)$. Thus $G=Alt(4)$ is the only possibility. \end{proof} Now the proof of Theorem \ref{main3} can be obtained by using the above results. \begin{proof}[Proof of the Theorem \ref{main3}]
Suppose $H$ is a finite index subgroup of a group $G$ such that $|\mathcal{I}(G, H)|=5$. Take $n=[G: H]$. Let $X$ denote the set of all right cosets of $H$ in $G$. Consider the permutation representation $\chi:G \rightarrow$ Sym$(X)$ as defined in the preliminaries. Then $\text{Ker}(\chi)=Core_{G}(H)$. By Remark \ref{r1}, we can identify $X$ with $\{1,2,\ldots , n\}$ and hence Sym$(X)$ with Sym$(n)$. Also, $\chi(G)$ is a transitive subgroup of Sym$(n)$ and $\chi(H)=\text{Stab}_{\chi(G)}(1)$. By Lemma \ref{1}, $|\mathcal{I}(G, H)|= |\mathcal{I}(\chi(G), \chi(H))|=5$. Since $\chi(G)$ is a finite group and $\chi(H)$ is its subgroup of index $n$ such that $|\mathcal{I}(\chi(G), \chi(H))|=5$, by Theorem \ref{main}, $[\chi(G): \chi(H)]=6$. By Theorem \ref{main2}, $\chi(G) $ is isomorphic to the alternating group on four symbols. This proves the result. \end{proof} \section{Conclusion} \label{s7}
The problem of determining the number of non-isomorphic transversals of a finite index subgroup of a group is the same as determining the corresponding number for a transitive subgroup of a finite symmetric group and its point stabilizer. For $n \geq 3$, let $T_n$ denote the set of all non-isomorphic transitive subgroups of $\text{Sym}(n)$. Define $A_n=\{(G, H)\mid G \in T_n ~\text{and}~ H=\text{Stab}_G(1)\} $. Take $A=\cup\{A_n\mid n\geq 3\}$. Define a map $\psi: A \rightarrow \mathbb{N}$ by $\psi (G, H)=|\mathcal{I}(G, H)|$ for $(G, H) \in A$. It is an interesting problem to determine the range of this map, and several authors have worked on it. As noted in the Introduction, $1,2,4 \not \in \psi (A)$ and $\psi (G, H)=3$ if and only if $n=3$ and $G=\text{Sym}(3)$. In this article, it is shown that $\psi (G, H) =5$ if and only if $G=Alt(4) \leq \text{Sym}(6)$. We propose the following problems to prove or disprove:
\begin{enumerate} \item For $(G, H) \in A$, if $\psi(G, H) =7$, then $G = Alt(4) \leq \text{Sym}(4)$. Note that the converse of this result is true; see \cite{vkjarxiv}. \item For $(G, H) \in A$, if $\psi(G, H) =6$, then $G \cong D_4 \leq \text{Sym}(4)$ or $G \cong D_5 \leq \text{Sym}(5)$. Note that the converse of this result is also true; see \cite{vkjarxiv} and \cite{vip}. \item If $x \in \psi (A)$, then there exists $k \in \mathbb{N}$ such that $\psi ^{-1}(x) \subseteq A_{n_1}\cup A_{n_2}\cup \ldots \cup A_{n_k}$.
\item If $x \in \psi(A)$, then $|\psi ^{-1}(x) \cap A_n| \leq 1$ for all $n \geq 3$. \item The only prime numbers occurring in the range of $\psi$ are $3,5,7$. \item The range of $\psi$ does not contain powers of $2$. \end{enumerate}
\end{document} |
\begin{document}
\title{Hydrodynamical quantum state reconstruction}
\author{Lars M. Johansen \thanks{Email: lars.m.johansen@hibu.no}} \address{Buskerud College, P.O.Box 251, N-3601 Kongsberg, Norway} \date{\today} \maketitle
\begin{abstract}
The density matrix of a nonrelativistic wave-packet in an arbitrary, one-dimensional and time-dependent potential can be reconstructed by measuring hydrodynamical moments of the Wigner distribution. An $n$-th order Taylor polynomial in the off-diagonal variable is obtained by measuring the probability distribution at $n+1$ discrete time values.
\end{abstract} \pacs{PACS number(s): 03.65.Bz, 05.30. d}
This Letter presents a new and general method for reconstructing the density matrix of a massive particle in an arbitrary, one-dimensional and time-dependent potential. Such a general method may seem called for, e.g., in the reconstruction of the quantum state of particles in anharmonic, time-dependent Paul traps. The method is based upon measuring the position probability distribution for a discrete number of time values in a short time interval. Surprisingly, the method follows almost immediately from known results.
The diagonal of the density matrix can be retrieved by observing a single probability distribution. The state reconstruction problem essentially consists in obtaining the offdiagonal elements. Decoherence and the approach to the classical regime is characterized by a vanishing of the offdiagonal elements \cite{Zurek91}. The method presented here is constructed so that the density matrix is retrieved for increasing values of the off-diagonal variable by increasing the number of discrete time values for which the probability density is observed.
It was shown by Madelung \cite{Madelung26} that quantum mechanics can be reformulated in a form resembling a hydrodynamical description. He reformulated the Schr\"odinger equation as two coupled and nonlinear equations for the ``hydrodynamical" moments of probability density and probability current density. Whereas these two moments can describe a pure state \cite{purehydro}, the situation is more complicated for mixed states.
A somewhat analogous situation is found in classical statistical mechanics. Hilbert \cite{Hilbert12} demonstrated that the phase space distribution for a system in local thermal equilibrium can be expressed as a functional of the density, the current density and the kinetic energy density. These are the three lowest order velocity moments of the phase space distribution. Thus, in thermal equilibrium the phase space distribution is equivalent to it's three lowest order moments. This situation has sometimes been called the Hilbert paradox \cite{Uhlenbeck63}. In general, though, an infinite set of velocity moments is equivalent to the full phase space distribution. These moments are coupled through an infinite set of differential equations. In the case of thermal equilibrium this set is truncated, and a finite and closed set of equations is obtained.
The similarity between quantum mechanics and statistical mechanics goes beyond the observation made by Madelung, which is restricted to pure states. Wigner \cite{Wigner32} showed that quantum mechanics can be reformulated in terms of a quasi phase space distribution. This distribution, which is the Fourier transform of the density matrix, shares most of the properties of a classical phase space distribution. One important exception is that it may take on negative values. It can be shown \cite{Carruthers83} that the probability density and the probability current density are the first two velocity moments of the Wigner distribution. This is in complete analogy with classical statistical mechanics. An infinite hierarchy of velocity moments can be derived from the Wigner distribution, and they are interconnected through an infinite set of coupled equations, much like in classical statistical mechanics.
However, quantum mechanics can offer more. It has been shown \cite{Moyal49,Yvon78,Ploszajczak,Lill} that when the density matrix in position representation is expanded as a Taylor series in the off-diagonal variable, the coefficients of this expansion are velocity moments of the Wigner distribution. This in fact solves the moment problem both for classical statistical mechanics and for quantum mechanics. By ``the moment problem'' we mean here the problem of expressing a distribution in terms of its moments. This is a classical problem in statistics, and it was first raised in the context of quantum mechanics by Moyal \cite{Moyal49}. It was further explored in Ref. \cite{further}. Recently, the density matrix was expressed in terms of normally ordered moments \cite{normal}. It has also been shown \cite{calc} that the normally ordered moments can be calculated from the measured quadrature distribution. Since convergent expansions may be found \cite{Herzog96}, this gives a method for reconstructing the state of a radiation field or a particle in a harmonic oscillator potential.
The possibility of measuring quantum states has attracted a lot of attention in recent years. In particular, the method of homodyne tomography \cite{Bertrand87,Vogel89} has contributed to this interest. It has been used to reconstruct the quantum state of radiation fields \cite{opthomexp} as well as material particles propagating in free space \cite{Kurtsiefer97}. In homodyne tomography the state is retrieved from a parameterized probability distribution. Ideally, a continuous range of parameter values should be used. In the case of optical homodyne tomography, this parameter is the value of a reference phase \cite{Vogel89}, whereas for material particles it might be a time value \cite{Raymer94b,Leonhardt96}. In another recently developed reconstruction scheme, a method for the direct probing of the Wigner distribution has been found \cite{direct}. For a specific parameter value (in this case, the amplitude and phase of a probe field) a certain region of phase space is retrieved. This method has been used to reconstruct the first negative Wigner distribution \cite{Leibfried96}. Numerous other reconstruction methods have also been found \cite{survey}.
Recently, it has been shown that the density matrix of a wave-packet in an arbitrary one-dimensional potential can be reconstructed by observing the time-evolution of the position probability density \cite{Leonhardt96,Opatrny97}. In these methods, the eigenstates of the Schr\"odinger equation are first found for the potential in question. The position probability density should ideally be observed over an infinite time interval, although methods that require only a finite observation time have been considered \cite{Opatrny97,Leonhardt97}.
The density matrix is the Fourier transform of the Wigner distribution \cite{Wigner32} \begin{equation}
\langle x + y | \, \hat{\rho} \, | x - y \rangle =
\int_{-\infty}^{\infty} dp \, e^{2 i p y/\hbar} \, W(x,p,t).
\label{eq:onedimFourier} \end{equation} It follows that \cite{Moyal49} \begin{equation}
\left [ {\partial^{(n)} \langle x + y | \, \hat{\rho} \, | x - y
\rangle \over \partial y^n} \right ]_{y=0} = \left ( {2 i \over
\hbar} \right )^n \, f_n(x,t),
\label{eq:coeff} \end{equation} where $f_n$ are ``hydrodynamical'' moments of the Wigner distribution \cite{Moyal49} \begin{equation}
f_n (x,t) = \int_{-\infty}^{\infty} dp \, p^n \, W(x,p,t).
\label{eq:moments} \end{equation} Since the Wigner distribution is real, these moments are also real. The moment $f_0$ is simply the probability density in position representation, whereas $f_1/m$ is the probability current density \cite{Carruthers83}. In general, these moments cannot be given a classical hydrodynamical interpretation; for example, the moment $f_2$ may take on negative values for certain negative Wigner distributions \cite{Johansen97b}.
It follows that the unique Taylor expansion of the density matrix in the off-diagonal variable $y$ is \cite{Yvon78,Ploszajczak,Lill} \begin{equation}
\langle x + y | \, \hat{\rho} \, | x - y \rangle =
\sum_{n=0}^{\infty} {f_n(x,t) \over n!} \, \left ({2 i y \over
\hbar} \right )^n.
\label{eq:series} \end{equation} We may divide this expansion into a real and an imaginary part by \begin{eqnarray}
\langle x + y | \, \hat{\rho} \, | x - y \rangle =
\sum_{n=0}^{\infty} (-1)^n {f_{2n}(x,t) \over (2n)!} \, \left
({2 y \over \hbar} \right )^{2n} \nonumber \\ + \, i \:
\sum_{n=0}^{\infty} (-1)^n {f_{2n+1}(x,t) \over (2n+1)!} \,
\left ({2 y \over \hbar} \right )^{2n+1}.
\label{eq:complexform} \end{eqnarray} We see that the real part contains only moments $f_n$ of even order $n$, whereas the imaginary part contains only moments of odd order.
Clearly, if we are able to measure the moments $f_n$, we have a state reconstruction scheme. To this end, we recall the equation of motion of the Wigner distribution for a particle with mass $\mu$ in a one-dimensional, time-dependent potential $V(x,t)$, \cite{Wigner32,Lill} \begin{eqnarray}
{\partial \over \partial t} W(x,p,t) &=& \left \{ - {p \over
\mu} {\partial \over \partial x} + \sum_{k=0}^{\infty} \left (
{\hbar \over 2i} \right )^{2k} {1 \over (2k+1)!} \right .
\nonumber \\ &\times& \left . {\partial^{2k+1} V(x,t) \over
\partial x^{2k+1}} {\partial^{2k+1} \over \partial p^{2k+1}}
\right \} W(x,p,t).
\label{eq:wigeqmot} \end{equation} We multiply both sides by $p^n$ and integrate over all of momentum space. In this way, we obtain the infinite set of coupled equations \cite{Ploszajczak,Lill} \begin{eqnarray}
{\partial f_n \over \partial t} &=& -{1 \over \mu} {\partial
f_{n+1} \over \partial x} \nonumber \\ &-& \sum_{k=0}^{[(n-1)/2]}
\left ( {\hbar \over 2i} \right )^{2k} \left ( \begin{array}{c} n
\\ 2k+1 \end{array} \right ) {\partial^{2k+1} V \over \partial
x^{2k+1} } \: f_{n-2k-1}.
\label{eq:conserv} \end{eqnarray} For $n=0$ we retrieve the well known conservation equation for probability. By integrating this conservation equation, the probability current density can be expressed in terms of the time-derivative of the cumulative position probability \cite{Royer89} (for one-dimensional systems, that is). The idea of the present reconstruction method is simply to generalize this procedure to arbitrary moments. This will yield an iterative scheme. We therefore integrate Eq. (\ref{eq:conserv}) over the position variable and obtain \cite{classical} \begin{eqnarray}
f_{n+1}(x,t) &=& - \mu {\partial \over \partial t}
\int_{-\infty}^x dx' \, f_n(x',t) \nonumber \\ &-& \, \mu
\sum_{k=0}^{[(n-1)/2]} \left ( {\hbar \over 2i} \right
)^{2k} \left ( \begin{array}{c} n \\ 2k+1 \end{array} \right )
\nonumber \\ &\times& \int_{-\infty}^x dx' \, {\partial^{2k+1}
V(x',t) \over \partial x'^{2k+1}} f_{n-2k-1}(x',t).
\label{eq:recursive} \end{eqnarray} The moment $f_{n+1}$ is expressed in terms of lower order moments only. Therefore an arbitrary moment can be recursively calculated from the zeroth order moment, the probability density. This recursion relation gives an algorithm for reconstructing the density matrix. The algorithm can be used directly on the experimental data. In order to find $f_n$, we must know the time derivative of $f_{n-1}$. Therefore, $f_{n-1}$ must be observed for at least two different time values. This again requires that $f_{n-2}$ is known for three time values. Recursively, it follows that $f_n$ can be found by measuring $f_0$ for at least $n+1$ different time values.
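As a concrete check of the first step of this recursion (a numerical sketch we add for illustration, with units $\hbar = \mu = \omega = 1$ and all parameter values hypothetical), consider a coherent state of a harmonic oscillator: $f_0(x,t)$ is a Gaussian of fixed width following the classical trajectory $x_c(t) = A\cos(\omega t)$, and the exact first moment is $f_1 = \mu\, v_c(t)\, f_0$ with $v_c(t) = -A\omega\sin(\omega t)$. The $n=0$ recursion step, $f_1 = -\mu\,\partial_t\int_{-\infty}^x f_0\,dx'$, can be evaluated by finite differences in time:

```python
import numpy as np

# Illustrative parameters (hbar = mass = omega = 1); A is the coherent-state amplitude.
mass = omega = A = 1.0
sigma = np.sqrt(0.5)                    # ground-state width in these units

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]

def f0(t):
    """Zeroth moment: Gaussian probability density following x_c(t) = A cos(omega t)."""
    xc = A * np.cos(omega * t)
    return np.exp(-(x - xc)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def cumulative(t):
    """Trapezoid-rule cumulative integral of f0 from the left grid edge to x."""
    f = f0(t)
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))

# n = 0 step of the recursion: f1 = -mass * d/dt of the cumulative distribution
t0, dt = 0.3, 1e-4
f1_recursion = -mass * (cumulative(t0 + dt) - cumulative(t0 - dt)) / (2 * dt)

# Exact current for a coherent state: f1 = mass * v_c(t) * f0
v_c = -A * omega * np.sin(omega * t0)
f1_exact = mass * v_c * f0(t0)

print(np.max(np.abs(f1_recursion - f1_exact)))  # small discretization error
```

The same finite-difference-in-time construction applies at every order, which is why $f_0$ must be observed at $n+1$ time values to reach $f_n$.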
To illustrate the convergence of the Taylor series (\ref{eq:series}), consider the unnormalized superposition state \begin{equation}
\psi(x) = e^{- [x/(2 \sigma)]^2 + i k_0 x } + e^{- [x/(2
\sigma)]^2 - i k_0 x }. \end{equation} The corresponding density matrix is \begin{eqnarray}
\langle x+y | \, \hat{\rho} \, | x-y \rangle &=& 2 \exp \left [ -
{x^2 + y^2 \over 2\sigma^2} \right ] \nonumber \\ &\times& \left
[ \, \cos (2 k_0 x) + \cos (2 k_0 y) \, \right ]. \end{eqnarray} Note that this density matrix is real. This means, according to Eq. (\ref{eq:complexform}), that $f_n$ vanishes for all odd $n$. In Fig. \ref{fig:density} the Taylor polynomial \begin{equation}
\rho_N(x,y,t) = \sum_{n=0}^{N} {f_n(x,t) \over n!} \, \left ( {
2 i y \over \hbar} \right )^n
\label{eq:polynomial} \end{equation} has been plotted for different orders $N$ for the parameter choice $\sigma=1/\sqrt{2}$ and $k_0=2 \sqrt{2}$. The highest order polynomial is $\rho_{36}$ (Fig. \ref{fig:density} c), which involves moments up to $f_{36}$. It would be obtained by measuring the position probability distribution $f_0$ for 37 different time values using perfect detectors. It differs negligibly from the exact density matrix within the chosen plotting region.
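The convergence illustrated in Fig. \ref{fig:density} is easy to reproduce numerically (a sketch of ours, with $\hbar = 1$): since this $\langle x+y | \, \hat{\rho} \, | x-y \rangle$ factorizes into a Gaussian and a cosine part in $y$, the Taylor coefficients $f_n(x)(2i/\hbar)^n/n!$ follow from a Cauchy product of the two elementary series:

```python
import math

# Parameters of the example state (hbar = 1)
sigma = 1 / math.sqrt(2)
k0 = 2 * math.sqrt(2)

def rho_exact(x, y):
    return 2 * math.exp(-(x**2 + y**2) / (2 * sigma**2)) \
        * (math.cos(2 * k0 * x) + math.cos(2 * k0 * y))

def rho_taylor(x, y, N):
    """Order-N Taylor polynomial of rho in the off-diagonal variable y."""
    # Taylor coefficients of exp(-y^2 / (2 sigma^2)) ...
    a = [0.0] * (N + 1)
    for m in range(N // 2 + 1):
        a[2 * m] = (-1)**m / (math.factorial(m) * (2 * sigma**2)**m)
    # ... and of cos(2 k0 x) + cos(2 k0 y)
    b = [0.0] * (N + 1)
    b[0] = math.cos(2 * k0 * x)
    for m in range(N // 2 + 1):
        b[2 * m] += (-1)**m * (2 * k0)**(2 * m) / math.factorial(2 * m)
    # Cauchy product: the coefficients f_n(x) (2i/hbar)^n / n! of the series
    pref = 2 * math.exp(-x**2 / (2 * sigma**2))
    c = [pref * sum(a[j] * b[n - j] for j in range(n + 1)) for n in range(N + 1)]
    return sum(c[n] * y**n for n in range(N + 1))

for N in (10, 20, 36):
    print(N, abs(rho_taylor(0.3, 0.4, N) - rho_exact(0.3, 0.4)))  # error shrinks with N
```

Only even-order coefficients are nonzero here, reflecting that this density matrix is real, in accordance with Eq. (\ref{eq:complexform}).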
As we can see from Fig. \ref{fig:density}, one in effect probes the density matrix further away from the diagonal by increasing the number of discrete time values for which the probability distribution is observed. If, after retrieving the Taylor polynomial (\ref{eq:polynomial}) to a certain order, one finds that the density matrix goes to zero, one may use an additional number of measurements at other time values to check the consistency of the data.
The classical limit is often associated with taking $\hbar \rightarrow 0$. In this case the equation of motion (\ref{eq:wigeqmot}) reduces to a classical Liouville equation. But for the purpose of reconstructing the Taylor polynomial (\ref{eq:polynomial}), it is vital that $\hbar$ remain finite. Otherwise, assuming finite hydrodynamical moments $f_n$, every term in the polynomial of order higher than zero diverges. The specific numerical value of $\hbar$ is of no importance in this respect, since a change of $\hbar$ only implies a rescaling of the off-diagonal variable.
The principles outlined in this Letter can also be applied to other expansions of the density matrix. Starting with a density matrix in the momentum representation, we might have expanded it in terms of the moments $\int_{-\infty}^{\infty} dx \, x^n \, W(x,p,t)$. However, the corresponding set of recursion relations is generally more complicated in this case. Other expansions of the density matrix, which converge more rapidly for nearly classical states \cite{Lill}, might also be used for state reconstruction using similar techniques. The approach is also easily adapted to other areas, such as the reconstruction of quantum optical states.
\begin{figure}
\caption{The real part of the Taylor polynomial $\rho_N$ of
the density matrix for a) $N=10$, b) $N=20$ and c) $N=36$. The
density matrix is retrieved for increasing values of the
off-diagonal variable as the number of discrete time values is
increased. $\rho_{36}$ is almost indistinguishable from the
exact density matrix within the chosen plotting region.}
\label{fig:density}
\end{figure}
The expansion (\ref{eq:series}) of the density matrix in terms of quasi-hydrodynamical moments can be generalized to systems with a higher number of dimensions. However, it may not be straightforward to generalize the recursion algorithm (\ref{eq:recursive}). This is basically due to the fact that the current density is not uniquely determined from the time derivative of the probability density for systems with two or more dimensions.
In conclusion, a method was found for reconstructing the density matrix of a particle in an arbitrary, time-dependent potential. The method was based upon a Taylor expansion of the density matrix in the off-diagonal variable. The coefficients $f_n$ in this expansion are hydrodynamical moments of the Wigner distribution. A recursive algorithm was found for calculating an arbitrary moment $f_n$ from the zeroth order moment, the probability distribution. In general, an $n$-th order Taylor polynomial of the density matrix can be found by observing the probability distribution at $n+1$ discrete time values.
\end{document}
\begin{document}
\title{High-efficiency measurement of an artificial atom embedded in a parametric amplifier}
\author{A. Eddins} \altaffiliation{Author to whom correspondence should be addressed: aeddins@berkeley.edu} \affiliation{Quantum Nanoelectronics Laboratory, Department of Physics, University of California, Berkeley CA 94720, USA.} \affiliation{Center for Quantum Coherent Science, University of California, Berkeley CA 94720, USA.} \QNLauthor{J.M. Kreikebaum} \QNLauthor{D.M. Toyli} \author{E.M. Levenson-Falk} \altaffiliation[Current address: ]{Department of Physics \& Astronomy, University of Southern California, Los Angeles CA 90089, USA} \affiliation{Quantum Nanoelectronics Laboratory, Department of Physics, University of California, Berkeley CA 94720, USA.} \affiliation{Center for Quantum Coherent Science, University of California, Berkeley CA 94720, USA.} \QNLauthor{A. Dove} \QNLauthor{W.P. Livingston} \author{B.A. Levitan} \affiliation{Department of Physics, McGill University, Montreal, Quebec H3A 2T8, Canada} \author{L.C.G. Govia} \altaffiliation[Current address: ]{Raytheon BBN Technologies, 10 Moulton St., Cambridge, MA 02138, USA} \affiliation{Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637, USA} \author{A.A. Clerk} \affiliation{Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637, USA} \QNLauthor{I. Siddiqi}
\date{\today} \begin{abstract}
A crucial limit to measurement efficiencies of superconducting circuits comes from losses involved when coupling to an external quantum amplifier. Here, we realize a device circumventing this problem by directly embedding a two-level artificial atom, comprised of a transmon qubit, within a flux-pumped Josephson parametric amplifier. Surprisingly, this configuration is able to enhance dispersive measurement without exposing the qubit to appreciable excess backaction. This is accomplished by engineering the circuit to permit high-power operation that reduces information loss to unmonitored channels associated with the amplification and squeezing of quantum noise. By mitigating the effects of off-chip losses downstream, the on-chip gain of this device produces end-to-end measurement efficiencies of up to 80\%. Our theoretical model accurately describes the observed interplay of gain and measurement backaction, and delineates the parameter space for future improvement. The device is compatible with standard fabrication and measurement techniques, and thus provides a route for definitive investigations of fundamental quantum effects and quantum control protocols.
\end{abstract} \maketitle
\section{Introduction} The sum of interactions between a quantum system and all environmental channels produces a continuous flow of quantum information into the environment, causing dephasing at a rate $\Gamma_{\phi}$. For a two-level qubit described by $\hat{\sigma}_z$ and measured along that axis, the fraction of this information flux experimentally captured per unit time defines the measurement efficiency $\eta_{\text{meas}} = \Gamma_\text{meas}/2\Gamma_{\phi}$, a critical parameter for continuous quantum measurements. Here $\Gamma_\text{meas}$ is the rate at which the experimentalist learns about $\hat{\sigma}_z$, defined such that $\eta_{\text{meas}}$ ranges from 0 to 1.
Maximizing this efficiency for superconducting qubit measurements requires multiple stages of cryogenic amplification to boost information-bearing quantum microwaves above the noise floor of room-temperature electronics. The use of off-chip superconducting parametric amplifiers for the first gain stage has enabled a variety of experiments investigating quantum measurement dynamics \cite{Vijay2012StabilizingFeedback,Murch2013ObservingBit,Weber2014MappingStates,Hacohen-Gourgy2016QuantumObservables,Campagne-Ibarcq2016ObservingFluorescence,Ficheux2017DynamicsDephasing}, with improvements in efficiency reported using multi-junction circuits \cite{Walter2017RapidQubits}. However, prior to amplification the information encoded in the field is extremely fragile, such that in these configurations $\sim$30\% of the information is dissipated in lossy microwave circulators and other components en route to the amplifier, lowering the ceiling on $\eta_{\text{meas}}$. Interest in surmounting this limitation has helped spur recent progress in the development of superconducting circulators and directional amplifiers \cite{Chapman2017WidelyCircuits,Lecocq2017NonreciprocalAmplifier,Peterson2017DemonstrationCircuit,Sliwa2015ReconfigurableAmplifier,Kerckhoff2015On-ChipRotation,Ranzani2017WidebandLine,Metelmann2015NonreciprocalEngineering}. Alternatively, high efficiency may be realized by strongly measuring a second, ancillary qubit and resonator mode (e.g. \cite{Minev2018ToMid-flight}).
Here we develop a minimal circuit architecture providing on-chip parametric gain through integration of a standard Josephson parametric amplifier (JPA) with the qubit in a configuration we dub the Qubit Parametric Amplifier (QPA), removing virtually all pre-amplification loss. Previous demonstrations with on-chip amplifiers have leveraged the bifurcation dynamics of a nonlinear resonator \cite{Siddiqi2004RF-DrivenMeasurement,Schmitt2014MultiplexedAmplifiers,Krantz2016Single-shotOscillator}. In contrast, the QPA implements on-chip the parametric mode of operation that has been widely applied in continuous measurements of qubits. This scheme presents a novel challenge, as the in-situ microwave amplification and squeezing opens a parasitic measurement channel inducing excess dephasing. We model and characterize this backaction in detail, and successfully mitigate it via a weakly nonlinear design permitting fast measurement, producing steady-state efficiencies as high as $\eta_{\text{meas}} = 0.80$ with direction for further improvement.
\begin{figure}
\caption{ (a) Simplified experimental setup. The QPA consists of a transmon qubit dispersively coupled to a JPA acting as the readout resonator. A coherent measurement tone reflects off the QPA, carrying qubit-state information to a second, off-chip JPA followed by a Josephson Traveling Wave Parametric Amplifier (JTWPA). (b,c) Schematic and false-colored images of the QPA. The port at right (cyan) flux-couples a pump tone to the JPA, producing on-chip amplification.}
\label{intro_fig}
\end{figure}
A schematic of our experiment appears in Fig.~\ref{intro_fig}(a). The QPA consists of a transmon qubit \cite{Koch2007Charge-insensitiveBox} dispersively coupled to a JPA. A microwave readout tone at frequency $\omega_\text{QPA}$ reflects off the QPA, acquiring qubit-state information. A pump tone of the form $\cos(2(\omega_\text{QPA} t+\Phi))$ applied to the pump port of the QPA concurrent with the readout modulates the QPA resonance frequency, producing on-chip phase-sensitive amplification of the measurement field. Adjusting the phase of the pump tone relative to the readout tone changes which field quadrature is amplified and which is squeezed. The output of the QPA is then routed by microwave circulators to additional amplification stages including a second, off-chip JPA and a superconducting Josephson Traveling Wave Parametric Amplifier (JTWPA) \cite{Macklin2015AAmplifier} en route to room-temperature demodulation and digitization. By acting as a phase-sensitive preamplifier before the JTWPA, which necessarily adds at least half a photon of noise in standard phase-preserving operation, the off-chip JPA reduces the amount of on-chip gain required for high efficiency.
A circuit diagram and false-color photographs of the QPA are shown in Fig.~\ref{intro_fig}(b,c). The on-chip JPA design is similar to that of some off-chip JPAs \cite{Toyli2016ResonanceVacuum,Zhou2014High-gainArray}, consisting of an interdigitated capacitor in parallel with a combination of geometric and Josephson inductance to form an $LC$ resonator (purple) whose frequency $\omega_\text{QPA}$ tunes with the flux applied through the pair of SQUID loops. A superconducting coil housed below the chip enables static tuning of $\omega_\text{QPA}$, while a pump applied via the flux-line (cyan) modulates $\omega_\text{QPA}$ to produce parametric gain. Some variation in circuit parameters occurred as data were acquired over the course of multiple cooldowns; we give representative parameter values here, and list precise values for each dataset in Appendix \ref{parametersAppendix}. The QPA resonator has a zero-flux frequency of $\omega_\text{QPA,max}/2\pi = 6.970$ GHz; we tuned this down to $\omega_\text{QPA}/2\pi \leq 6.740$ GHz to increase the modulation amplitude produced by the flux-pump. Coupling capacitors and a $180\degree$ microwave hybrid couple the resonator to the readout transmission line with an effective $\kappa_\text{ext}/2\pi = 25.7$ MHz $ \gg \kappa_\text{int}/2\pi$. The transmon qubit (red) resonates at $\omega_\text{q}/2\pi = 4.271$ GHz and is capacitively coupled to the on-chip JPA with dispersive interaction strength $\chi/2\pi = 1.9$ MHz, with the convention that the AC Stark shift changes $\omega_\text{q}$ by $2\chi\bar{n}$. The paddle design of the qubit is chosen to reduce loss due to electromagnetic participation of the surface-vacuum interface, and a floating radiation shield (white) suppresses radiative decay of the qubit into other environmental modes. 
The measured lifetime $T_1 = 4.2(8)\ \mu\text{s}$ is near the expected Purcell-decay limited value $T_1 \approx 6\ \mu\text{s}$, which could be improved in future designs via integration of a Purcell filter \cite{Reed2010FastQubit}.
In the dispersive approximation and in the frame rotating at $\omega_\text{QPA}$, the internal QPA dynamics can be described by the Hamiltonian \begin{equation}\label{eq:QPAHam} \hat{H}_\text{QPA} \approx \frac{\hbar}{2}(\Delta+2\chi(\hat{a}^\dagger\hat{a}+1/2))\hat{\sigma}_z + \frac{i\lambda}{2}(\hat{a}^{\dagger 2}-\hat{a}^2), \end{equation} with $\Delta = \omega_\text{q} - \omega_\text{QPA}$. The first of the two terms is the familiar dispersive Hamiltonian that also describes the more common case of readout using a linear resonator. The second term describes the on-chip, phase-sensitive gain process, where $\lambda$ is set by the flux-pump strength and would equal the rate of squeezing if there were no dissipation ($\kappa = 0$). A succinct theoretical analysis of the system is given in Appendix \ref{theory}, with further details available in \cite{LevitanThesis}.
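As a sanity check (our sketch, not from the text; the value of $\lambda$ below is hypothetical), the Hamiltonian of Eq. (\ref{eq:QPAHam}) can be built as a matrix in a truncated Fock space tensored with the qubit. Notably, both terms commute with $\hat{\sigma}_z$, consistent with the measurement backaction being pure dephasing:

```python
import numpy as np

N = 12                                          # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
I_c, sz = np.eye(N), np.diag([1.0, -1.0])

# Angular frequencies in units of 2*pi*MHz; Delta = omega_q - omega_QPA and chi
# from the representative values in the text, lambda chosen for illustration.
hbar = 1.0
Delta = -2 * np.pi * 2469.0
chi = 2 * np.pi * 1.9
lam = 2 * np.pi * 5.0

n_op = a.conj().T @ a
H = (hbar / 2) * np.kron(Delta * I_c + 2 * chi * (n_op + 0.5 * I_c), sz) \
    + (1j * lam / 2) * np.kron(a.conj().T @ a.conj().T - a @ a, np.eye(2))

Sz = np.kron(I_c, sz)
print(np.allclose(H, H.conj().T), np.allclose(H @ Sz, Sz @ H))  # True True
```

The first check confirms Hermiticity of the squeezing term $\frac{i\lambda}{2}(\hat{a}^{\dagger 2}-\hat{a}^2)$; the second confirms that the QPA Hamiltonian conserves $\hat{\sigma}_z$.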
\begin{figure}
\caption{ (a-c) Ramsey traces acquired with three values of the on-chip gain, $G_\text{QPA}$. Increasing $G_\text{QPA}$ increases the QPA output field squeezing, thus decreasing the phase-space overlap of the output fields conditioned on the ground or excited qubit states as approximately represented by the red and blue ellipses. The decreased overlap implies faster parasitic dephasing. (d) The observed dependence of $\Gamma_{\phi}$ on $G_\text{QPA}$ (black dots) is in good agreement with Eq. \ref{eq:parasiticEq} (green curve); the thickness of the curve is the standard deviation of the background dephasing rate $1/T_2^*$.}
\label{parasiteFig}
\end{figure}
\section{Measurement backaction with on-chip gain}\label{sectDephasing} As in conventional qubit measurement setups, the dispersive interaction encodes information about the $\hat{\sigma}_z$ component of the qubit state into the mean value of one quadrature of the output field, which we refer to as the signal quadrature $Q$. During this measurement the qubit is dephased at a rate $\Gamma_{\phi} \geq \Gamma_{\phi,\text{QL}}$, where $\Gamma_{\phi,\text{QL}}$ represents quantum-limited backaction. High $\eta_{\text{meas}}$ requires this inequality to be nearly saturated. Here, however, on-chip amplification drives a second, parasitic measurement process in which $\hat{\sigma}_z$ information is encoded in other statistical moments of the output field. This dephasing mechanism is predicted to be independent of the mean field in the resonator, making it distinct from effects in resonantly current-pumped systems \cite{Ong2011CircuitDephasing,Boissonneault2012Back-actionQubit,Ong2013QuantumQubit,Boissonneault2014SuperconductingResonator,Hatridge2011DispersiveAmplifier,Levenson-Falk2013ADetection}. A rough heuristic model describes the parasitic measurement in two steps: the phase-sensitive on-chip gain squeezes the microwave vacuum noise, and the resultant output squeezed state is rotated in phase by the dispersive interaction, encoding $\hat{\sigma}_z$ information in the covariance of the output-field quadratures. These moments are largely not detected downstream, in part due to the fragility of the moments with respect to losses, and in part because the following phase-sensitive JPA typically deamplifies this information. As the parasitic measurement increases $\Gamma_{\phi}$ without increasing the room-temperature SNR, it lowers $\eta_{\text{meas}}$.
Starting from Eq. \ref{eq:QPAHam}, one can derive an expression for the parasitic dephasing rate (Appendix \ref{theory}, \cite{LevitanThesis}), \begin{equation}\label{eq:parasiticEq} \Gamma_{\phi,\text{parasitic}} = \frac{1}{2}\operatorname{Re}\left(\sqrt{D(-\lambda)}+\sqrt{D(\lambda)} \right)-\frac{\kappa}{2} + 1/T_2^*. \end{equation} Here $\lambda$ is related to the on-chip gain by \begin{equation} \lambda = \frac{\kappa}{2}\frac{\sqrt{G_\text{QPA}-1}}{\sqrt{G_\text{QPA}}+1}, \end{equation}
we have defined $D(\lambda)=(\kappa/2+\lambda+i\chi)^2-2i\chi\lambda$, and $T_2^*$ is an empirical parameter describing dephasing absent any applied drives. Several metrics are available to parameterize the gain dynamics; $G_\text{QPA}$ indicates the (phase-preserving) power gain experienced by a tone slightly detuned from $\omega_\text{QPA}$, which we measure directly using a vector network analyzer.
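These relations are straightforward to evaluate numerically. The following sketch (ours; parameter values taken from the representative numbers quoted above) reproduces the limiting behavior: with no pump, $\lambda = 0$ and $\Gamma_{\phi,\text{parasitic}}$ reduces to $1/T_2^*$, while increasing $G_\text{QPA}$ accelerates the parasitic dephasing:

```python
import cmath
import math

# Representative device parameters (angular units, 1/us):
kappa = 2 * math.pi * 25.7      # kappa/2pi = 25.7 MHz
chi = 2 * math.pi * 1.9         # chi/2pi = 1.9 MHz
gamma_bg = 0.23                 # background dephasing 1/T2* in 1/us

def lam(G):
    """Squeezing rate lambda for phase-preserving power gain G (linear units, G >= 1)."""
    return (kappa / 2) * math.sqrt(G - 1) / (math.sqrt(G) + 1)

def D(l):
    return (kappa / 2 + l + 1j * chi)**2 - 2j * chi * l

def gamma_parasitic(G):
    """Parasitic dephasing rate, Eq. (parasiticEq)."""
    l = lam(G)
    return 0.5 * (cmath.sqrt(D(-l)) + cmath.sqrt(D(l))).real - kappa / 2 + gamma_bg

print(gamma_parasitic(1.0))     # 0 dB gain: reduces exactly to 1/T2* = 0.23
print(gamma_parasitic(2.0))     # 3 dB gain: substantially faster dephasing
```

At $G_\text{QPA} = 1$ one finds $\sqrt{D(0)} = \kappa/2 + i\chi$, so the first two terms cancel and only $1/T_2^*$ survives, matching Fig.~\ref{parasiteFig}(a).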
We characterize $\Gamma_{\phi,\text{parasitic}}$ via Ramsey oscillations of the qubit simultaneous with on-chip gain for several values of $G_\text{QPA}$. Absent any gain (Fig.~\ref{parasiteFig}(a)), we observe $1/T_2^* = 0.23(7)\ \mu\text{s}^{-1}$. Applying pump power produces squeezed vacuum inside the QPA, causing $\Gamma_{\phi}$ to increase significantly (Fig.~\ref{parasiteFig}(b,c)). The inset ellipses of Fig.~\ref{parasiteFig} indicate the quadrature variances and covariance (kurtosis is not shown) of the QPA output field predicted using equations from \cite{LevitanThesis} and experimental parameters, colored red or blue depending on the qubit state. Increasing $G_\text{QPA}$ decreases the overlap of the ellipses, speeding up the parasitic measurement. We find good agreement with the predictions of our model, which has zero free parameters, over a range of $G_\text{QPA}$ values as plotted in Fig.~\ref{parasiteFig}(d), supporting the validity of the model and indicating the absence of any comparable additional dephasing mechanism for these operating conditions.
\begin{figure}\label{dephaseFig}
\end{figure}
A second set of Ramsey experiments illuminates how varying on-chip gain modifies backaction during an applied weak measurement. Our theory analysis (Appendix \ref{theory}, \cite{LevitanThesis}) predicts the total dephasing to vary according to \begin{equation} \label{eq:dephaseEqn}
\Gamma_{\phi} = \frac{2\chi^2\kappa^2P_{\text{in}}}{\hbar\omega_\text{QPA}}\left(\frac{\cos^2\Phi}{|D(-\lambda)|^2}+\frac{\sin^2\Phi}{|D(\lambda)|^2}\right) + \Gamma_{\phi, \text{parasitic}}, \end{equation} where $P_{\text{in}}$ is the power of the measurement tone incident to the QPA, and $\Gamma_{\phi,\text{parasitic}}$ is, notably, still given by Eq. \ref{eq:parasiticEq}. Absent any on-chip gain ($\lambda = 0$), Eq. \ref{eq:dephaseEqn} can be approximated by the more standard expression $\Gamma_{\phi} = 8\chi^2\bar{n}/\kappa + O(\frac{\chi}{\kappa})^4$, describing dephasing induced as $\hat{\sigma}_z$ information is encoded in the phase of the QPA output field (Fig.~\ref{dephaseFig}(a), theory). Experimentally, with $G_\text{QPA} = 0$ dB we observe dephasing at rate $\Gamma_{\phi} = 0.49\ \mu \text{s}^{-1}$ (Fig.~\ref{dephaseFig}(d)), from which we infer $P_\text{in} = -142$ dBm for this choice of drive. Keeping $P_\text{in}$ fixed, we apply a flux-pump such that $G_\text{QPA} = 3$ dB. Fig. \ref{dephaseFig}(b,c) show the expected output fields when the on-chip gain is aligned with ($\Phi=0$) or orthogonal to ($\Phi=\pi/2$) the signal quadrature. The signal size ($\langle Q_e \rangle - \langle Q_g \rangle$) is nearly constant in all three cases: since the input measurement drive lies along $I$ while the output signal lies along $Q$, the net effects of amplification and deamplification approximately cancel. In contrast, the noise fluctuations do get amplified (squeezed), such that the SNR at the QPA output depends on $\Phi$. Since amplifying the signal quadrature squeezes the photon number fluctuations in the conjugate quadrature which cause dephasing, we expect amplifier mode ($\Phi=0$) to minimize $\Gamma_{\phi}$ and squeezer mode ($\Phi=\pi/2$) to maximize it, in agreement with the comparison of Figs. \ref{dephaseFig}(e,f). 
Results of additional Ramsey measurements shown in Figure \ref{dephaseFig}(g) reveal the full dependence of $\Gamma_{\phi}$ on $\Phi$ and $G_\text{QPA}$, and verify the predictions of our theory model (Eq. \ref{eq:dephaseEqn}). We note that squeezer mode is also of interest as a means of improving SNR by reducing the quantum fluctuations of the output measurement field \cite{LevitanThesis,Peano2015IntracavityDeamplification,Govia2017EnhancedSuppression}; similar in-situ squeezing generation has been demonstrated in a recent optical experiment \cite{Korobko2017BeatingGeneration}.
We henceforth focus exclusively on amplifier mode ($\Phi=0$). The primary benefit of amplifier mode is that the noise floor of the QPA output signal quadrature is increased with minimal information loss, which enables greater overall efficiency by making the SNR insensitive to noise added downstream. A secondary effect is the deamplification of the mean field without deamplification of the signal; an interesting question is whether this effect, perhaps combined with injected orthogonally-squeezed vacuum, might enable a greater dispersive signal size for fixed mean intra-resonator photon number.
\section{Measurement efficiency}\label{sectMeasEff} The total measurement efficiency is the product of on-chip efficiency and the efficiency of the rest of the measurement chain: $\eta_{\text{meas}} = \eta_{\text{QPA}} \eta_{\text{rest}}$. Increasing on-chip gain, $G_\text{QPA}$, increases $\eta_{\text{rest}}$ as the amplified signal quadrature becomes robust to losses, but decreases $\eta_{\text{QPA}}$ due to the parasitic measurement discussed in Section \ref{sectDephasing}, such that there is an optimal $G_\text{QPA}$ value maximizing $\eta_{\text{meas}}$ for a given measurement drive. We can write the efficiency as the ratio of empirical quantities, $\eta_{\text{meas}} = \Gamma_\text{meas}/2\Gamma_{\phi}$. Here $\Gamma_\text{meas} = \frac{\text{d}}{\text{d}t}\text{SNR}^2/4$ is the rate at which the square of the room-temperature voltage SNR increases with integration time, or equivalently the rate at which $\hat{\sigma}_z$ information is acquired by our digitizer.
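Operationally, both rates in this ratio come from fits: $\Gamma_\text{meas}$ from the slope of $\text{SNR}^2$ versus integration time, and $\Gamma_{\phi}$ from the Ramsey decay. A schematic calculation with idealized, hypothetical numbers (none of the values below are measured data):

```python
import numpy as np

# Hypothetical, idealized record: SNR^2 grows linearly with integration time,
# with slope 4 * Gamma_meas; Gamma_phi comes from a Ramsey fit. Units: 1/us.
t_int = np.linspace(0.02, 0.28, 14)             # integration times (us)
snr_squared = 4 * 0.9 * t_int                   # simulated record, Gamma_meas = 0.9
gamma_meas = np.polyfit(t_int, snr_squared, 1)[0] / 4
gamma_phi = 0.5625                              # hypothetical dephasing rate

eta_meas = gamma_meas / (2 * gamma_phi)
print(eta_meas)                                 # 0.8 for these illustrative numbers
```

In practice the Gaussian-histogram fits must also account for $T_1$ decay during integration, as described below.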
\begin{figure}\label{etaFig}
\end{figure}
In order to determine steady-state $\eta_{\text{meas}}$ as a function of $G_\text{QPA}$ and measurement drive $\lvert\alpha_{\text{in}}\rvert = \sqrt{P_\text{in}/\hbar\omega_\text{QPA}}$, we extracted both $\Gamma_\text{meas}$ and $\Gamma_{\phi}$ using the pulse sequence shown in Fig.~\ref{etaFig}(a). The flux-pump was switched on in advance such that the mean intra-QPA field was deamplified along $I$ at all times, reducing the intra-QPA circulating power produced by a strong measurement drive and thus helping to minimize undesired nonlinear processes in the QPA. Next, a continuous measurement tone was turned on and sufficient time allowed to pass to ensure the cavity had stabilized before a $\pi/2$ pulse was applied to initiate Ramsey evolution. After the second $\pi/2$, $\lvert\alpha_{\text{in}}\rvert$ was increased to perform projective readout. The ensemble-averaged readout results for variable Ramsey evolution time were used to determine the dephasing rate $\Gamma_{\phi}$, as in the right inset of Fig.~\ref{etaFig}(a). Integrating the steady-state weak-measurement record from the longest Ramsey evolution for a variable amount of time $t_\text{int} \leq 280$ ns and fitting Gaussians to the resultant histograms, we determined SNR($t_\text{int}$) and thus $\Gamma_\text{meas}$. A modified Gaussian model adapted from Section III-A of \cite{Gambetta2007ProtocolsMeasurement} was used to account for $T_1$ decay with $T_1$ fixed at the independently measured value above. Note this treatment implicitly defines $\eta_{\text{meas}}$ to be independent of relaxation events, such that a greater $T_1$ would result in higher readout fidelity but the same $\eta_{\text{meas}}$.
Sweeping measurement strength and $G_\text{QPA}$, we found an ideal operating regime, indicated by the orange dashed box in Fig.~\ref{etaFig}(b), with an average $\eta_{\text{meas}} = 80\%$. To the left of the box, $G_\text{QPA}$ is too low to mitigate the effect of loss in circulators and other off-chip components. The bottom edge of the box is defined by the decrease in $\eta_{\text{meas}}$ associated with $\Gamma_{\phi,\text{parasitic}}$ becoming a larger fraction of $\Gamma_{\phi}$ as $\lvert\alpha_{\text{in}}\rvert$ is decreased. The other two sides of the box are marked by the onset of non-ideal behavior evidenced by a third peak appearing in the measurement histograms. This spurious peak is not fully understood, but seems to involve population of the third transmon level driven by large intra-QPA photon numbers occurring at too low or too high $G_\text{QPA}$ (corresponding to a large mean field or a large field variance, respectively) or at too high $\lvert\alpha_{\text{in}}\rvert$.
With the assumption that $\Gamma_{\phi,\text{parasitic}}$ remains independent of $\lvert\alpha_{\text{in}}\rvert$ at this operating point, we can express the on-chip efficiency $\eta_{\text{QPA}}$ in terms of empirical dephasing rates as $\eta_{\text{QPA}} = 1 - \Gamma_{\phi,\text{parasitic}}/\Gamma_{\phi},$ from which we calculate the values shown in Fig.~\ref{etaFig}(c). In the absence of on-chip gain, $\eta_{\text{QPA}}$ approaches unity as $\Gamma_{\phi,\text{parasitic}} = 1/T_2^* \ll \Gamma_{\phi}$. As gain is increased, $\Gamma_{\phi,\text{parasitic}}$ increases, resulting in lower $\eta_{\text{QPA}}$; as the measurement strength is increased, the parasitic dephasing becomes less significant, increasing $\eta_{\text{QPA}}$. Calculating further, we can divide these values by the $\eta_{\text{meas}}$ values in Fig.~\ref{etaFig}(b) to estimate $\eta_{\text{rest}}$, shown in Fig.~\ref{etaFig}(d). This plot is restricted to lower-power operating conditions in which the device was better behaved; over this domain, we infer that information loss downstream of the QPA decreases with $G_\text{QPA}$ and is approximately independent of $\lvert\alpha_{\text{in}}\rvert$, supporting our previous assumption that $\Gamma_{\phi,\text{parasitic}}$ is likewise independent of $\lvert\alpha_{\text{in}}\rvert$. It is encouraging that the calculated values of $\eta_{\text{rest}}$ approach 1 at $G_\text{QPA} = 4$ dB, though the current device did not permit increasing $\lvert\alpha_{\text{in}}\rvert$ sufficiently to maximally benefit from this much gain. We expect this dynamic range ceiling, and thus $\eta_{\text{meas}}$, may be raised by reducing $\chi/\kappa$ and/or increasing the number of JPA SQUIDs \cite{Eichler2014ControllingAmplifier,Boutin2017EffectAmplifiers}, suppressing deleterious Kerr effects not included in our model.
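A minimal numerical sketch of this efficiency decomposition, with hypothetical rate values:

```python
# eta_meas = eta_QPA * eta_rest; the on-chip part follows from dephasing rates.
# Numbers are illustrative, not the measured device values.

def eta_qpa(gamma_parasitic, gamma_phi):
    # eta_QPA = 1 - Gamma_phi,parasitic / Gamma_phi
    return 1.0 - gamma_parasitic / gamma_phi

def eta_rest(eta_meas, eta_qpa_val):
    # eta_rest = eta_meas / eta_QPA
    return eta_meas / eta_qpa_val

e_qpa = eta_qpa(gamma_parasitic=0.25, gamma_phi=2.5)   # -> 0.9
print(eta_rest(0.80, e_qpa))                           # -> ~0.889
```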
\section{Conclusion} We have characterized the measurement backaction on a qubit dispersively coupled to a parametric amplifier flux-pumped for gain, demonstrated how on-chip gain can mitigate off-chip sources of information loss, and observed steady-state efficiency $\eta_{\text{meas}}$ up to $80\%$. Going forward, incremental improvements in $\eta_{\text{meas}}$ may be achieved by further weakening device nonlinearities as discussed above to permit a larger measurement drive $\lvert\alpha_{\text{in}}\rvert$ and thus greater $\eta_{\text{QPA}}$. A more dramatic improvement might be realized by probing the device stroboscopically \cite{Hacohen-Gourgy2016QuantumObservables}. In a linear readout resonator, stroboscopic measurement has been shown to eliminate the undesired squeezing rotations caused by dispersive coupling \cite{Eddins2017StroboscopicIllumination}; realizing the analogous effect in the QPA would close the parasitic dephasing channel such that increasing $G_\text{QPA}$ would boost $\eta_\text{rest}$ without degrading $\eta_{\text{QPA}}$, even for small $\lvert\alpha_{\text{in}}\rvert$. Other potential near-term experiments include exploration of the effects of on-chip gain on initial transients when switching on a measurement, and an investigation of whether combining amplifier-mode operation with injected orthogonally squeezed vacuum enables greater dispersive readout SNR for a fixed intra-resonator photon number. Longer term, we envision the QPA as an enabling technology for applications demanding signal-to-noise ratios approaching the quantum limit, such as measurement-based quantum feedback \cite{Li2013OptimalityImperfections,Martin2015DeterministicFeedback,Martin2017WhatFeedback} or further studies of individual quantum trajectories, perhaps with extension to multi-qubit experiments via a chip layout similar to \cite{Schmitt2014MultiplexedAmplifiers} followed by a broadband off-chip amplifier.
\begin{acknowledgments} The authors thank R. Vijay, S. Hacohen-Gourgy, E. Flurin, and L. Martin for useful discussions, and thank MIT Lincoln Labs for fabrication of the JTWPA with support from the LPS and IARPA. Work was supported by the Army Research Office under Grant No. W911NF-14-1-0078. A.A.C. and L.C.G.G. acknowledge support from the AFOSR MURI FA9550-15-1-0029. A.E. acknowledges support from the Department of Defense through the NDSEG fellowship program. \end{acknowledgments}
\appendix
\section{Experimental Methods}
\subsection{Device fabrication} The device used for this paper is patterned on $>8000$ $\Omega$-cm intrinsic Si using photolithography and subsequent plasma etching of 100 nm thick e-beam evaporated Al. Al/AlO$_x$/Al junctions for the transmon and SQUIDs are defined in separate steps with e-beam lithography and subsequent double-angle evaporation. Adhering to the constraints of the pre-defined bond pads and QPA interdigitated capacitor, we were able to reduce the radiative loss of the qubit by introducing electrically-floating metal shielding around the qubit capacitor paddles (white in Fig.~\ref{intro_fig}(c)). The chip is mounted on OFHC copper enclosed in an Al package, surrounded by cryoperm, and mounted to the base stage of a dilution refrigerator at $\leq$35 mK. A copper wire fed through a high aspect-ratio hole in the aluminum helps thermalize the interior OFHC mount to the base stage. Flux to tune the QPA was applied with an off-chip coil.
\subsection{Circuit parameters}\label{parametersAppendix}
Circuit parameters changed slightly after thermal cycling, and also as the QPA was flux-biased to different operating frequencies $\omega_\text{QPA}$. The table below lists the precise parameters corresponding to the three data figures of the main text. The change in $\chi$ between cooldowns is not well understood.
\begin{center}
\begin{tabular}{| l | l | l | l |}
\hline
\ & Fig. 2 & Fig. 3 & Fig. 4 \\ \hline
Cooldown & B & A & B \\ \hline
$\omega_\text{QPA}/2\pi$ (GHz) & 6.740 & 6.740 & 6.700 \\ \hline
$\kappa/2\pi$ (MHz) & 25.4 & 25.7 & 28.6 \\ \hline
$\omega_\text{q}/2\pi$ (GHz) & 4.271 & 4.274 & 4.271 \\ \hline
$\chi/2\pi$ (MHz) & 1.9 & 1.7 & 2.0 \\ \hline
\end{tabular} \end{center}
\subsection{Detailed wiring diagram} Figure \ref{fullSetup} shows a complete diagram of the experiment. An experimental overview appears in (a), with individual subsystems detailed in (b) and a component legend given in (c). Most centrally, a microwave generator at $\omega_\text{QPA}$ (estimated by measuring the QPA resonance frequency while Rabi-driving the qubit) is split to drive four phase-sensitive processes: dispersive qubit measurement, on-chip amplification in the QPA, off-chip amplification in the JPA, and room-temperature demodulation of the output measurement signal. The amplification processes are highly sensitive to changes in applied pump power, such that the small change in insertion-loss associated with adjusting the phase of a phase-shifter would problematically change the amplifier gain. Several technical solutions are possible; for the QPA, we use a spectrum analyzer to ensure power-flatness as we programmatically step the phase of the flux-pump. For the JPA, we change the effective amplification phase by phase-shifting all other tones at $\omega_\text{QPA}$ while leaving the JPA pump unchanged. To realize amplifier-mode operation of the QPA, the phase of the QPA pump was first chosen to minimize $\Gamma_{\phi}$, and then the phase of the off-chip JPA pump was adjusted to maximize SNR. Stark shifts were measured in advance for all $G_\text{QPA}, \lvert\alpha_{\text{in}}\rvert$ settings used to produce the data in Fig.~\ref{etaFig}, and qubit pulse frequencies and amplitudes were programmatically adjusted accordingly at each setting. Small superconducting coils (not shown) were used to apply dc-flux biases to the QPA and JPA to tune their resonance frequencies. A vector network analyzer was used to characterize device resonance frequencies and gains. Cross-talk effects of the JPA flux-pump on the JTWPA became apparent at high JTWPA gain despite the intermediary low-pass filter, degrading JTWPA performance. 
These effects were suppressed by operating the JTWPA at reduced gain ($\sim 15$ dB).
\begin{figure}
\caption{ Detailed experimental wiring diagram. }
\label{fullSetup}
\end{figure}
\begin{figure*}\label{dephasing2d}
\end{figure*}
\subsection{$T_1$ vs on-chip gain} We briefly investigated the effect of on-chip gain on the qubit lifetime $T_1$. Results are shown in Fig. \ref{T1vsGain}. The data suggest a small decrease in $T_1$ when gain is turned on, with no clear dependence as gain is further increased.
\begin{figure}
\caption{ (a) Gain profiles of the QPA for varying amounts of on-chip gain $G_\text{QPA}$ as measured with a vector network analyzer. (b) Qubit $T_1$ measured at several values of $G_\text{QPA}$ corresponding to the color-coded gain profiles in (a). Measurements were repeated over ${\sim}12$ hours, during which the order in which the gain settings were cycled through was repeatedly randomized. Error bars indicate the standard deviations of all results at each $G_\text{QPA}$ setting.}
\label{T1vsGain}
\end{figure}
\subsection{Amplifier-mode dephasing vs $P_\text{in}$} Extending the measurement-backaction data presented in Fig.~\ref{dephaseFig}, we fixed the QPA pump phase to operate the QPA in amplifier mode ($\Phi=0$) and recorded $\Gamma_{\phi}$ for variable on-chip gain and measurement strength $\lvert\alpha_{\text{in}}\rvert$. The results are displayed alongside the theory prediction of Eq.~\ref{eq:dephaseEqn} in Fig.~\ref{dephasing2d}. Good agreement is seen for low measurement strength and on-chip gain. At high drive strengths or high gains, excess dephasing is observed intermittently, i.e., for some experimental executions. At intermediate drive strength, near the center of the plot, this undesired behavior appears to be reduced as $G_\text{QPA}$ increases from ${\sim}1$ dB to ${\sim}2$--$3$ dB.
\section{Theoretical derivations} \label{theory}
\subsection{Dephasing with on-chip gain}
Our goal is to derive an analytic expression for the qubit dephasing rate in the long-time limit, without assuming that the dispersive coupling $\chi$ is weak. We start with the master equation \begin{align}
\nonumber\dot{\hat{\rho}} = &-i \left[ \hat{H}, \hat{\rho} \right] + \kappa \mathcal{D} \left[ \hat{a} \right] \hat{\rho} \\ &+ \frac{1}{T_1} \mathcal{D} \left[ \hat{\sigma}_- \right] \hat{\rho} + \frac{1}{2T_2} \mathcal{D} \left[ \hat{\sigma}_z \right] \hat{\rho}, \label{eqn:ME} \end{align} where $\kappa$ is the resonator decay rate, $T_1$ and $T_2$ are the relaxation and pure dephasing times of the qubit, $\hat{\sigma}_-$ is the qubit lowering operator, and $\mathcal{D} \left[\hat{O}\right] \hat{\rho} = \hat{O} \hat{\rho} \hat{O}^{\dagger} - \frac{1}{2} \left\{\hat{O}^{\dagger} \hat{O}, \hat{\rho} \right\}$ is the usual dissipator. In a frame where the qubit rotates at its bare frequency $\omega_{\rm q}$, and the resonator at its static flux-biased frequency $\omega_{\rm QPA}$, the Hamiltonian is \begin{align}
\hat{H} = \frac{i \lambda}{2} \left( \hat{a}^{\dagger 2} - \hat{a}^2\right) + \chi \hat{a}^{\dagger} \hat{a} \hat{\sigma}_z
+ \sqrt{\kappa} \left( \alpha_{\rm in} \hat{a}^{\dagger} + \alpha_{\rm in}^{*} \hat{a} \right), \end{align} which contains the QPA dynamics of $\hat{H}_{\rm QPA}$ from Eq.~\ref{eq:QPAHam}, and the coherent measurement drive on the QPA resonator, characterized by the drive amplitude $\alpha_{\rm in}$.
The dephasing rate quantifies the decay of the qubit coherence in the long time limit, described by the decay of the qubit off-diagonal matrix elements of the full density matrix. If we write the full density matrix as
\begin{align}
\hat{\rho} = \sum_{\mu,\nu \in\{\uparrow,\downarrow\}}\hat{\rho}_{\mu\nu}\otimes\ket{\mu}\bra{\nu} \end{align} where $\hat{\rho}_{\mu\nu}$ is an operator on the resonator Hilbert space and $\ket{\downarrow}$ and $\ket{\uparrow}$ are the ground and excited states of the qubit, then the qubit dephasing rate is fully captured by the evolution of the part of the density matrix proportional to $\ket{\uparrow}\bra{\downarrow}$ (or its Hermitian conjugate). Thus, we are interested in the evolution of the operator $\hat{\rho}_{\uparrow\downarrow}$. As is standard, we define the dephasing rate as \begin{align}
\Gamma_{\phi} = \lim_{t \rightarrow \infty} \frac{-{\rm ln}\left({\rm Tr}\left[\hat{\rho}_{\uparrow\downarrow}(t)\right]\right)}{t}, \label{eqn:Drate} \end{align} which captures the exponential decay of the qubit coherence in the long-time limit.
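The operational content of this definition, extracting the rate from the long-time log-slope of the coherence, can be illustrated on synthetic data (a minimal sketch with an assumed exponential decay and small multiplicative noise):

```python
import numpy as np

# Estimate a dephasing rate as the long-time log-slope of the coherence,
# mirroring the limit defining Gamma_phi; parameters are illustrative.
rng = np.random.default_rng(1)
gamma_true = 2.0                          # 1/us (hypothetical)
t = np.linspace(0.05, 3.0, 60)            # us
coherence = np.exp(-gamma_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Linear fit of log|Tr rho_updown| vs t; minus the slope is the rate.
gamma_fit = -np.polyfit(t, np.log(coherence), 1)[0]
print(gamma_fit)   # ~ 2.0
```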
From Eq.~\ref{eqn:ME} we calculate the evolution equation for the operator $\hat{\rho}_{\uparrow\downarrow}$ \begin{align}
\nonumber \dot{\hat{\rho}}_{\uparrow\downarrow} &= \left[ \frac{\lambda}{2} (\hat{a}^{\dagger} \hat{a}^{\dagger} - \hat{a} \hat{a}) - \sqrt{\kappa} (\alpha_{\rm in} \hat{a}^{\dagger} - \alpha_{\rm in}^{*} \hat{a}), \hat{\rho}_{\uparrow\downarrow} \right]\\
&- i \chi \left\lbrace \hat{a}^{\dagger} \hat{a}, \hat{\rho}_{\uparrow\downarrow} \right\rbrace
+ \kappa \mathcal{D} [\hat{a}] \hat{\rho}_{\uparrow\downarrow} - \left(\frac{1}{2T_1} + \frac{1}{T_2}\right)\hat{\rho}_{\uparrow\downarrow}. \label{eqn:UpDown} \end{align} Note that this equation is not trace preserving, as it does not describe evolution of a valid density matrix. Extending beyond the results of Ref.~\cite{LevitanThesis}, we have included the effect of qubit relaxation $(T_1)$ in the evolution of $\hat{\rho}_{\uparrow\downarrow}$, and while this means the evolution of the qubit is no longer QND, the resulting equation for $\hat{\rho}_{\uparrow\downarrow}$ remains closed on itself, and can be solved analytically.
The first step in solving Eq.~\ref{eqn:UpDown} is to remove the exponential decay caused by the qubit incoherent dynamics, and we do so by defining $\hat{\rho}'_{\uparrow\downarrow} = e^{t/T_2^*}\hat{\rho}_{\uparrow\downarrow}$, where $T_2^* = 2T_1T_2/(2T_1 + T_2)$ introduced in the main text describes the intrinsic dephasing of the qubit. The evolution equation for $\hat{\rho}'_{\uparrow\downarrow}$ describes the qubit dephasing due to interaction with the resonator, and has the same form as Eq.~\ref{eqn:UpDown}, but without the last term (proportional to $1/T_2^*$).
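A quick numerical check of the rate combination behind $T_2^*$, namely $1/T_2^* = 1/(2T_1) + 1/T_2$ (times hypothetical):

```python
# T2* = 2*T1*T2 / (2*T1 + T2), equivalently a sum of incoherent rates:
# 1/T2* = 1/(2*T1) + 1/T2. Times in microseconds, illustrative values.
T1, T2 = 20.0, 15.0
T2_star = 2 * T1 * T2 / (2 * T1 + T2)
assert abs(1 / T2_star - (1 / (2 * T1) + 1 / T2)) < 1e-12
print(T2_star)   # ~ 10.91 us
```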
To solve the evolution equation for $\hat{\rho}'_{\uparrow\downarrow}$, it is more convenient to move to the Wigner representation, and obtain a partial differential equation for $W_{\rm \uparrow\downarrow} (x, p; t)$, the Wigner function representation of $\hat{\rho}'_{\uparrow\downarrow}$ \cite{WahyuUtami2008EntanglementSystem}. As Eq.~\ref{eqn:UpDown} contains terms at most quadratic in $\hat{a}$ and $\hat{a}^\dagger$ it is possible to solve this PDE with a Gaussian ansatz.
The Gaussian ansatz reduces Eq.~\ref{eqn:UpDown} to a set of coupled ODEs for the means, variances, and overall norm of $\hat{\rho}'_{\uparrow\downarrow}$. After solving these in steady state (see Ref.~\cite{LevitanThesis} for further details), with the coherent drive defined by \begin{align}
\alpha_{\rm in} = \sqrt{\frac{P_{\rm in}}{\hbar\omega_{\rm QPA}}}\left(\cos\left(\Phi\right) + i\sin\left(\Phi\right)\right), \label{eqn:alpha} \end{align} we can then use Eq.~\ref{eqn:Drate} to define the dephasing rate, which gives the expression found in Eq.~\ref{eq:dephaseEqn}. (By defining $\lvert\alpha_{\text{in}}\rvert$ in terms of $\Phi$ here, we implicitly fix the phase of the pump, in contrast to the convention used in the main text).
\subsection{Measurement rate with on-chip gain}
We now briefly outline the theoretical calculations of the measurement rate, and for further details the interested reader should consult chapter 3 of Ref.~\cite{LevitanThesis}. From standard input-output theory, the Heisenberg-Langevin equation for the resonator operator $\hat{a}$ in a frame rotating at the bare resonator frequency $\omega_{\rm QPA}$ is \begin{align}
\dot{\hat{a}} = \left( -i \chi \hat{\sigma}_z - \frac{\kappa}{2} \right) \hat{a} + \lambda \hat{a}^{\dagger} - \sqrt{\kappa} \hat{a}_{\rm in}, \label{eqn:HL} \end{align} where $\hat{a}_{\rm in}$ is the input field to the resonator. In our case, for dispersive measurement of the qubit this is ideally a coherent state, such that $\left<\hat{a}_{\rm in}\right> = \alpha_{\rm in}$, with $\alpha_{\rm in}$ defined in Eq.~\ref{eqn:alpha}. For the purposes of the intra-resonator dynamics and calculation of the measurement rate we can treat the qubit operator $\hat{\sigma}_z$ as a classical real variable $\sigma = \pm 1$, corresponding to the ground or excited state of the qubit in the $\hat{\sigma}_z$ basis. Doing so allows us to solve Eq.~\ref{eqn:HL} exactly, and from this solution extract the measurement rate.
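In steady state, Eq.~\ref{eqn:HL} with $\hat{\sigma}_z \to \sigma$ is a linear system for the field quadratures, which can be solved directly; the following is a minimal numerical sketch with illustrative parameters (the $\chi = \lambda = 0$ case recovers full reflection off a lossless one-port cavity on resonance):

```python
import numpy as np

# Steady state of (-i*chi*sigma - kappa/2) a + lam * conj(a) = sqrt(kappa) * a_in,
# written as a 2x2 real linear system for x = Re(a), y = Im(a).
def a_steady(chi, kappa, lam, sigma, a_in):
    M = np.array([[lam - kappa / 2, chi * sigma],
                  [-chi * sigma, -(lam + kappa / 2)]])
    rhs = np.sqrt(kappa) * np.array([a_in.real, a_in.imag])
    x, y = np.linalg.solve(M, rhs)
    return x + 1j * y

kappa = 2 * np.pi * 25e6   # illustrative decay rate (rad/s)
a = a_steady(chi=0.0, kappa=kappa, lam=0.0, sigma=+1, a_in=1.0 + 0j)
a_out = 1.0 + np.sqrt(kappa) * a   # input-output relation a_out = a_in + sqrt(kappa)*a
print(a_out)   # -> -1: full reflection on resonance with no gain or dispersive shift
```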
We consider two modes of operation for the QPA, ``amplifier mode'', where the input field aligns with the direction of squeezing ($\Phi = 0$), and ``squeezer mode'', where the input field aligns with the direction of amplification ($\Phi = \pi/2$). In amplifier mode, we use the QPA to amplify the size of the signal created by the qubit, which comes at the cost of also amplifying the noise in the cavity output field. In squeezer mode, we use the QPA to squeeze the noise in the quadrature containing qubit information, and while the noise can be heavily squeezed, the signal produced by the qubit is only squeezed at most by a factor of two. For a fixed input photon flux, the steady-state intra-resonator photon number is not the same for both modes of operation, but is independent of the qubit state in both cases.
The output field is related to the input field by the standard input-output relation $\hat{a}_{\rm out} = \hat{a}_{\rm in} + \sqrt{\kappa}\hat{a}$, and from the output mode we define the measured signal operator by \begin{align}
\hat{Q}(t) = \frac{e^{-i \delta} \hat{a}_{\rm out} + e^{i \delta} \hat{a}_{\rm out}^{\dagger}}{\sqrt{2}}, \label{eqn:I} \end{align} where the angle $\delta$ parameterizes the quadrature measured. We must choose the measured quadrature such that it is out-of-phase with the input coherent signal (as the qubit information will be contained in the out-of-phase quadrature), such that $\abs{\delta - \Phi} = \pi/2$.
As we are interested in the long-time limit of the QPA dynamics, rather than the SNR we will calculate the measurement rate, defined by \begin{align}
\Gamma_{\rm meas} &\equiv \lim_{\tau \rightarrow \infty} \frac{{\rm SNR}^2 (\tau)}{2 \tau} = \frac{1}{4} \frac{\left(\left<\hat{Q}\right>_{\uparrow} - \left<\hat{Q}\right>_{\downarrow}\right)^2}{(\bar{S}_{QQ, \uparrow} [0] + \bar{S}_{QQ, \downarrow} [0])} \label{eqn:GammaM}, \end{align} where $\left<\cdot\right>_{\nu}$ indicates that the expectation value is taken with respect to the cavity in steady-state and the qubit in state $\ket{\nu}$ for $\nu\in \{\uparrow,\downarrow\}$, corresponding to $\sigma = \pm 1$ respectively. $\bar{S}_{QQ, \nu} [\omega]$ is the symmetrized noise power of the detected quadrature at frequency $\omega$, defined in the standard way \cite{LevitanThesis}.
The measurement rate will depend on what mode the QPA is operated in (i.e.~the angle $\Phi$), and for our two operation modes the measurement rates are \begin{align}
&\Gamma_{\rm meas}^{\rm amp}= \frac{\frac{\chi^2 \kappa |\alpha|^2}{(\frac{\kappa}{2} - \lambda)^2 + \chi^2}}
{\frac{1}{2}\frac{ \left[ (\frac{\kappa}{2} + \lambda)^2 - \chi^2 \right]^2 + \chi^2 \kappa^2}{\left(\frac{\kappa^2}{4} - \lambda^2 + \chi^2 \right)^2} + \bar{n}_{\rm add}},\label{eqn:GammaA} \\
&\Gamma_{\rm meas}^{\rm sqz}= \frac{\frac{\chi^2 \kappa |\alpha|^2}{(\frac{\kappa}{2} + \lambda)^2 + \chi^2}}
{\frac{1}{2}\frac{ \left[ (\frac{\kappa}{2} - \lambda)^2 - \chi^2 \right]^2 + \chi^2 \kappa^2}{\left(\frac{\kappa^2}{4} - \lambda^2 + \chi^2 \right)^2} + \bar{n}_{\rm add}},\label{eqn:GammaS} \end{align} where we have added by hand a noise term $\bar{n}_{\rm add}$ to quantify noise added to the signal downstream of the QPA. For a fair comparison we parametrize the rates in terms of a constant intra-resonator photon number $\abs{\alpha}^2$, which we note requires different input photon flux for the two operation modes.
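As a numerical sanity check, the sketch below evaluates Eqs.~\ref{eqn:GammaA} and \ref{eqn:GammaS} and verifies that for small $\chi/\kappa$ they agree with the leading-order expressions derived in Eqs.~\ref{eqn:GammaA0} and \ref{eqn:GammaS0} below; all parameter values are illustrative:

```python
import numpy as np

# Full measurement rates for amplifier and squeezer mode, Eqs. (GammaA)/(GammaS).
def rates(chi, kappa, lam, alpha2, n_add):
    var = (kappa**2 / 4 - lam**2 + chi**2) ** 2
    noise_amp = 0.5 * (((kappa/2 + lam)**2 - chi**2)**2 + chi**2 * kappa**2) / var
    noise_sqz = 0.5 * (((kappa/2 - lam)**2 - chi**2)**2 + chi**2 * kappa**2) / var
    g_amp = chi**2 * kappa * alpha2 / ((kappa/2 - lam)**2 + chi**2) / (noise_amp + n_add)
    g_sqz = chi**2 * kappa * alpha2 / ((kappa/2 + lam)**2 + chi**2) / (noise_sqz + n_add)
    return g_amp, g_sqz

chi, kappa, lam, alpha2, n_add = 1e-4, 1.0, 0.3, 1.0, 0.5
g_amp, g_sqz = rates(chi, kappa, lam, alpha2, n_add)

# Leading order in chi/kappa, with sqrt(G0) = (kappa/2 + lam)/(kappa/2 - lam).
G0 = ((kappa/2 + lam) / (kappa/2 - lam)) ** 2
g_amp0 = 2 * chi**2 * alpha2 * (1 + np.sqrt(G0))**2 / (kappa * (G0 + 2 * n_add))
g_sqz0 = 2 * chi**2 * alpha2 * (1 + 1 / np.sqrt(G0))**2 / (kappa * (1/G0 + 2 * n_add))
print(g_amp / g_amp0, g_sqz / g_sqz0)   # both ratios -> 1 for chi << kappa
```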
From Eqs.~\ref{eqn:GammaA} and \ref{eqn:GammaS} we see that in both modes of operation the output contains amplified noise, no matter the value of $\lambda$. While this is by design in amplifier mode, in squeezer mode it is na\"ively unexpected, and is a result of interaction with the qubit. The dispersive interaction results in a qubit-dependent phase shift on the field exiting the resonator, such that it no longer perfectly interferes with the promptly reflected field. The effect of this is a mixing of the squeezed and amplified noise, such that all quadratures contain noise contributions from both.
However, for very small $\chi/\kappa$, squeezer mode operation does not suffer from this unwanted mixed-in amplified noise, as can be seen when we write the measurement rates to leading order in $\chi/\kappa$ \begin{align}
&\Gamma_{\rm meas}^{\rm amp} \approx \frac{2\chi^2|\alpha|^2(1+\sqrt{G_0})^2}{\kappa(G_0+2\bar{n}_{\rm add})}, \label{eqn:GammaA0} \\
&\Gamma_{\rm meas}^{\rm sqz} \approx \frac{2\chi^2|\alpha|^2(1+1/\sqrt{G_0})^2}{\kappa(1/G_0+2\bar{n}_{\rm add})}, \label{eqn:GammaS0} \end{align} where we have defined $\sqrt{G_0} = (\kappa/2 + \lambda)/(\kappa/2 - \lambda)$, with $G_0 = 1$ for zero gain. Both measurement rates should be contrasted with the zero gain measurement rate \begin{align}
\nonumber\Gamma_{\rm meas}^{0} &= \frac{2\chi^2 \kappa |\alpha|^2}{\left(\frac{\kappa^2}{4} + \chi^2\right)\left[1 + 2\bar{n}_{\rm add}\right]} \\ &\xrightarrow{\chi/\kappa \ll 1} \frac{8\chi^2 |\alpha|^2}{\kappa(1+2\bar{n}_{\rm add})}, \label{eqn:GammaSt} \end{align} found for a standard linear-resonator setup, or when the QPA is operated with zero gain.
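The reduction at $\lambda = 0$ can be verified directly: the noise factor in the denominators of Eqs.~\ref{eqn:GammaA} and \ref{eqn:GammaS} collapses to $1/2 + \bar{n}_{\rm add}$, reproducing Eq.~\ref{eqn:GammaSt}. A short check with illustrative parameters:

```python
# At lambda = 0 both operating modes reduce to the standard dispersive rate.
chi, kappa, alpha2, n_add = 0.1, 1.0, 2.0, 0.3

# Noise factor of Eqs. (GammaA)/(GammaS) at lambda = 0; the identity
# ((kappa^2/4 - chi^2)^2 + chi^2*kappa^2) = (kappa^2/4 + chi^2)^2 gives 1/2.
var = (kappa**2 / 4 + chi**2) ** 2
noise = 0.5 * (((kappa / 2) ** 2 - chi**2) ** 2 + chi**2 * kappa**2) / var
g_zero_gain = chi**2 * kappa * alpha2 / ((kappa / 2) ** 2 + chi**2) / (noise + n_add)

# Standard linear-resonator rate, Eq. (GammaSt).
g_standard = 2 * chi**2 * kappa * alpha2 / ((kappa**2 / 4 + chi**2) * (1 + 2 * n_add))
print(g_zero_gain, g_standard)   # equal
```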
In the ideal limit, where $\bar{n}_{\rm add} = 0$, amplifier mode offers little to no advantage over zero gain, as both the signal and noise are amplified by the same factor at large gain, which can be seen by comparing Eq.~\ref{eqn:GammaA0} for $\bar{n}_{\rm add} = 0$ to Eq.~\ref{eqn:GammaSt}. However, in this case squeezer mode can be advantageous, as in the large gain limit the noise is drastically reduced, while the signal is relatively unaffected. In particular, for $\chi/\kappa \ll 1$, comparing Eq.~\ref{eqn:GammaS0} for $\bar{n}_{\rm add} = 0$ to Eq.~\ref{eqn:GammaSt}, we see that the measurement rate is enhanced by a large factor proportional to $G_0$. Accounting for effects beyond first order in $\chi/\kappa$ by using the full expression of Eq.~\ref{eqn:GammaS}, we find that the squeezer mode measurement rate is enhanced by the factor $\Gamma_{\rm meas}^{\rm sqz}/\Gamma_{\rm meas}^{0} = \kappa/\chi$ at the optimal value of $\lambda$.
Conversely, in the non-ideal situation where $\bar{n}_{\rm add}$ is large, squeezer mode offers no advantage, as the noise can never be reduced below the noise floor set by $\bar{n}_{\rm add}$, as clearly indicated by Eq.~\ref{eqn:GammaS0}. In this situation amplifier mode is beneficial, as by amplifying both the signal and noise leaving the QPA, the output becomes insensitive to noise added downstream (concretely, $G_0 \gg \bar{n}_{\rm add}$ in the denominator of Eq.~\ref{eqn:GammaA0}). As shown in the main text, this is the mode of operation we find gives the greatest efficiency for our current setup.
\begin{thebibliography}{41} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Vijay}\ \emph {et~al.}(2012)\citenamefont {Vijay},
\citenamefont {Macklin}, \citenamefont {Slichter}, \citenamefont {Weber},
\citenamefont {Murch}, \citenamefont {Naik}, \citenamefont {Korotkov},\ and\
\citenamefont {Siddiqi}}]{Vijay2012StabilizingFeedback}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Vijay}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Macklin}},
\bibinfo {author} {\bibfnamefont {D.~H.}\ \bibnamefont {Slichter}}, \bibinfo
{author} {\bibfnamefont {S.~J.}\ \bibnamefont {Weber}}, \bibinfo {author}
{\bibfnamefont {K.~W.}\ \bibnamefont {Murch}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Naik}}, \bibinfo {author} {\bibfnamefont
{A.~N.}\ \bibnamefont {Korotkov}}, \ and\ \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Stabilizing Rabi oscillations in a superconducting qubit using
quantum feedback}},}\ }\href {\doibase 10.1038/nature11505} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {490}},\
\bibinfo {pages} {77--80} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Murch}\ \emph {et~al.}(2013)\citenamefont {Murch},
\citenamefont {Weber}, \citenamefont {Macklin},\ and\ \citenamefont
{Siddiqi}}]{Murch2013ObservingBit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~W.}\ \bibnamefont
{Murch}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Weber}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Macklin}}, \ and\
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Observing single quantum trajectories
of a superconducting quantum bit}},}\ }\href {\doibase 10.1038/nature12539}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {502}},\ \bibinfo {pages} {211--214} (\bibinfo {year}
{2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weber}\ \emph {et~al.}(2014)\citenamefont {Weber},
\citenamefont {Chantasri}, \citenamefont {Dressel}, \citenamefont {Jordan},
\citenamefont {Murch},\ and\ \citenamefont
{Siddiqi}}]{Weber2014MappingStates}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{Weber}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Chantasri}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Dressel}}, \bibinfo
{author} {\bibfnamefont {A.~N.}\ \bibnamefont {Jordan}}, \bibinfo {author}
{\bibfnamefont {K.~W.}\ \bibnamefont {Murch}}, \ and\ \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Mapping the optimal route between two quantum states}},}\
}\href {\doibase 10.1038/nature13559} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {511}},\ \bibinfo {pages}
{570--573} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hacohen-Gourgy}\ \emph {et~al.}(2016)\citenamefont
{Hacohen-Gourgy}, \citenamefont {Martin}, \citenamefont {Flurin},
\citenamefont {Ramasesh}, \citenamefont {Whaley},\ and\ \citenamefont
{Siddiqi}}]{Hacohen-Gourgy2016QuantumObservables}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Shay}\ \bibnamefont
{Hacohen-Gourgy}}, \bibinfo {author} {\bibfnamefont {Leigh~S}\ \bibnamefont
{Martin}}, \bibinfo {author} {\bibfnamefont {Emmanuel}\ \bibnamefont
{Flurin}}, \bibinfo {author} {\bibfnamefont {Vinay~V}\ \bibnamefont
{Ramasesh}}, \bibinfo {author} {\bibfnamefont {K~Birgitta}\ \bibnamefont
{Whaley}}, \ and\ \bibinfo {author} {\bibfnamefont {Irfan}\ \bibnamefont
{Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Quantum
dynamics of simultaneously measured non-commuting observables}},}\ }\href
{\doibase 10.1038/nature19762} {\bibfield {journal} {\bibinfo {journal}
{Nature}\ }\textbf {\bibinfo {volume} {538}},\ \bibinfo {pages} {491}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campagne-Ibarcq}\ \emph {et~al.}(2016)\citenamefont
{Campagne-Ibarcq}, \citenamefont {Six}, \citenamefont {Bretheau},
\citenamefont {Sarlette}, \citenamefont {Mirrahimi}, \citenamefont
{Rouchon},\ and\ \citenamefont
{Huard}}]{Campagne-Ibarcq2016ObservingFluorescence}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Campagne-Ibarcq}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Six}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Bretheau}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sarlette}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Mirrahimi}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Rouchon}}, \ and\ \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Huard}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Observing Quantum State Diffusion by Heterodyne Detection
of Fluorescence}},}\ }\href {\doibase 10.1103/PhysRevX.6.011002} {\bibfield
{journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo
{volume} {6}},\ \bibinfo {pages} {011002} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ficheux}\ \emph {et~al.}(2017)\citenamefont
{Ficheux}, \citenamefont {Jezouin}, \citenamefont {Leghtas},\ and\
\citenamefont {Huard}}]{Ficheux2017DynamicsDephasing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Q}~\bibnamefont
{Ficheux}}, \bibinfo {author} {\bibfnamefont {S}~\bibnamefont {Jezouin}},
\bibinfo {author} {\bibfnamefont {Z}~\bibnamefont {Leghtas}}, \ and\ \bibinfo
{author} {\bibfnamefont {B}~\bibnamefont {Huard}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Dynamics of a qubit while simultaneously
monitoring its relaxation and dephasing}},}\ }\href
{https://arxiv.org/pdf/1711.01208.pdf} {\bibfield {journal} {\bibinfo
{journal} {arXiv:1711.01208}\ } (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Walter}\ \emph {et~al.}(2017)\citenamefont {Walter},
\citenamefont {Kurpiers}, \citenamefont {Gasparinetti}, \citenamefont
{Magnard}, \citenamefont {Poto{\v{c}}nik}, \citenamefont {Salath{\'{e}}},
\citenamefont {Pechal}, \citenamefont {Mondal}, \citenamefont {Oppliger},
\citenamefont {Eichler},\ and\ \citenamefont
{Wallraff}}]{Walter2017RapidQubits}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Walter}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Kurpiers}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gasparinetti}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Magnard}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Poto{\v{c}}nik}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Salath{\'{e}}}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Pechal}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Mondal}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Oppliger}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Eichler}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Wallraff}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Rapid High-Fidelity Single-Shot Dispersive Readout of
Superconducting Qubits}},}\ }\href {\doibase 10.1103/PhysRevApplied.7.054020}
{\bibfield {journal} {\bibinfo {journal} {Physical Review Applied}\
}\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {054020} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chapman}\ \emph {et~al.}(2017)\citenamefont
{Chapman}, \citenamefont {Rosenthal}, \citenamefont {Kerckhoff},
\citenamefont {Moores}, \citenamefont {Vale}, \citenamefont {Mates},
\citenamefont {Hilton}, \citenamefont {Lalumi{\`{e}}re}, \citenamefont
{Blais},\ and\ \citenamefont {Lehnert}}]{Chapman2017WidelyCircuits}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Benjamin~J.}\
\bibnamefont {Chapman}}, \bibinfo {author} {\bibfnamefont {Eric~I.}\
\bibnamefont {Rosenthal}}, \bibinfo {author} {\bibfnamefont {Joseph}\
\bibnamefont {Kerckhoff}}, \bibinfo {author} {\bibfnamefont {Bradley~A.}\
\bibnamefont {Moores}}, \bibinfo {author} {\bibfnamefont {Leila~R.}\
\bibnamefont {Vale}}, \bibinfo {author} {\bibfnamefont {J. A. B.}\
\bibnamefont {Mates}}, \bibinfo {author} {\bibfnamefont {Gene~C.}\
\bibnamefont {Hilton}}, \bibinfo {author} {\bibfnamefont {Kevin}\
\bibnamefont {Lalumi{\`{e}}re}}, \bibinfo {author} {\bibfnamefont
{Alexandre}\ \bibnamefont {Blais}}, \ and\ \bibinfo {author} {\bibfnamefont
{K. W.}\ \bibnamefont {Lehnert}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Widely Tunable On-Chip Microwave Circulator for Superconducting
Quantum Circuits}},}\ }\href {\doibase 10.1103/PhysRevX.7.041043} {\bibfield
{journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo
{volume} {7}},\ \bibinfo {pages} {041043} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lecocq}\ \emph {et~al.}(2017)\citenamefont {Lecocq},
\citenamefont {Ranzani}, \citenamefont {Peterson}, \citenamefont {Cicak},
\citenamefont {Simmonds}, \citenamefont {Teufel},\ and\ \citenamefont
{Aumentado}}]{Lecocq2017NonreciprocalAmplifier}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Lecocq}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ranzani}},
\bibinfo {author} {\bibfnamefont {G. A.}\ \bibnamefont {Peterson}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Cicak}}, \bibinfo
{author} {\bibfnamefont {R. W.}\ \bibnamefont {Simmonds}}, \bibinfo
{author} {\bibfnamefont {J. D.}\ \bibnamefont {Teufel}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Aumentado}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Nonreciprocal Microwave Signal Processing with
a Field-Programmable Josephson Amplifier}},}\ }\href {\doibase
10.1103/PhysRevApplied.7.024028} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Applied}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo
{pages} {024028} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peterson}\ \emph {et~al.}(2017)\citenamefont
{Peterson}, \citenamefont {Lecocq}, \citenamefont {Cicak}, \citenamefont
{Simmonds}, \citenamefont {Aumentado},\ and\ \citenamefont
{Teufel}}]{Peterson2017DemonstrationCircuit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G. A.}\ \bibnamefont
{Peterson}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Lecocq}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Cicak}}, \bibinfo
{author} {\bibfnamefont {R. W.}\ \bibnamefont {Simmonds}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Aumentado}}, \ and\ \bibinfo
{author} {\bibfnamefont {J. D.}\ \bibnamefont {Teufel}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Demonstration of Efficient
Nonreciprocity in a Microwave Optomechanical Circuit}},}\ }\href {\doibase
10.1103/PhysRevX.7.031001} {\bibfield {journal} {\bibinfo {journal}
{Physical Review X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages}
{031001} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sliwa}\ \emph {et~al.}(2015)\citenamefont {Sliwa},
\citenamefont {Hatridge}, \citenamefont {Narla}, \citenamefont {Shankar},
\citenamefont {Frunzio}, \citenamefont {Schoelkopf},\ and\ \citenamefont
{Devoret}}]{Sliwa2015ReconfigurableAmplifier}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont
{Sliwa}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hatridge}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Narla}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Shankar}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \bibinfo {author} {\bibfnamefont
{R.~J.}\ \bibnamefont {Schoelkopf}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.~H.}\ \bibnamefont {Devoret}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Reconfigurable Josephson circulator/directional amplifier}},\
}\href {https://link.aps.org/doi/10.1103/PhysRevX.5.041020} {\bibfield
{journal} {\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo
{volume} {5}},\ \bibinfo {pages} {041020} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kerckhoff}\ \emph {et~al.}(2015)\citenamefont
{Kerckhoff}, \citenamefont {Lalumi{\`{e}}re}, \citenamefont {Chapman},
\citenamefont {Blais},\ and\ \citenamefont
{Lehnert}}]{Kerckhoff2015On-ChipRotation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Joseph}\ \bibnamefont
{Kerckhoff}}, \bibinfo {author} {\bibfnamefont {Kevin}\ \bibnamefont
{Lalumi{\`{e}}re}}, \bibinfo {author} {\bibfnamefont {Benjamin~J.}\
\bibnamefont {Chapman}}, \bibinfo {author} {\bibfnamefont {Alexandre}\
\bibnamefont {Blais}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~W.}\
\bibnamefont {Lehnert}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{On-Chip Superconducting Microwave Circulator from Synthetic Rotation}},}\
}\href {https://link.aps.org/doi/10.1103/PhysRevApplied.4.034002} {\bibfield
{journal} {\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo
{volume} {4}},\ \bibinfo {pages} {034002} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ranzani}\ \emph {et~al.}(2017)\citenamefont
{Ranzani}, \citenamefont {Kotler}, \citenamefont {Sirois}, \citenamefont
{DeFeo}, \citenamefont {Castellanos-Beltran}, \citenamefont {Cicak},
\citenamefont {Vale},\ and\ \citenamefont
{Aumentado}}]{Ranzani2017WidebandLine}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Leonardo}\
\bibnamefont {Ranzani}}, \bibinfo {author} {\bibfnamefont {Shlomi}\
\bibnamefont {Kotler}}, \bibinfo {author} {\bibfnamefont {Adam~J.}\
\bibnamefont {Sirois}}, \bibinfo {author} {\bibfnamefont {Michael~P.}\
\bibnamefont {DeFeo}}, \bibinfo {author} {\bibfnamefont {Manuel}\
\bibnamefont {Castellanos-Beltran}}, \bibinfo {author} {\bibfnamefont
{Katarina}\ \bibnamefont {Cicak}}, \bibinfo {author} {\bibfnamefont
{Leila~R.}\ \bibnamefont {Vale}}, \ and\ \bibinfo {author} {\bibfnamefont
{José}\ \bibnamefont {Aumentado}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Wideband Isolation by Frequency Conversion in a Josephson-Junction
Transmission Line}},}\ }\href {\doibase 10.1103/PhysRevApplied.8.054035}
{\bibfield {journal} {\bibinfo {journal} {Physical Review Applied}\
}\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {054035} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Metelmann}\ and\ \citenamefont
{Clerk}(2015)}]{Metelmann2015NonreciprocalEngineering}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Metelmann}}\ and\ \bibinfo {author} {\bibfnamefont {A. A.}\ \bibnamefont
{Clerk}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Nonreciprocal
Photon Transmission and Amplification via Reservoir Engineering}},}\ }\href
{\doibase 10.1103/PhysRevX.5.021025} {\bibfield {journal} {\bibinfo
{journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo
{pages} {021025} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Minev}\ \emph {et~al.}(2018)\citenamefont {Minev},
\citenamefont {Mundhada}, \citenamefont {Shankar}, \citenamefont {Reinhold},
\citenamefont {Gutierrez-Jauregui}, \citenamefont {Schoelkopf}, \citenamefont
{Mirrahimi}, \citenamefont {Carmichael},\ and\ \citenamefont
{Devoret}}]{Minev2018ToMid-flight}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.~K.}\ \bibnamefont
{Minev}}, \bibinfo {author} {\bibfnamefont {S.~O.}\ \bibnamefont {Mundhada}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Shankar}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Reinhold}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Gutierrez-Jauregui}}, \bibinfo {author}
{\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Mirrahimi}}, \bibinfo {author}
{\bibfnamefont {H.~J.}\ \bibnamefont {Carmichael}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.~H.}\ \bibnamefont {Devoret}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{To catch and reverse a quantum jump
mid-flight}},}\ }\href {http://arxiv.org/abs/1803.00545} {\bibfield
{journal} {\bibinfo {journal} {arXiv:1803.00545}\ } (\bibinfo {year}
{2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Siddiqi}\ \emph {et~al.}(2004)\citenamefont
{Siddiqi}, \citenamefont {Vijay}, \citenamefont {Pierre}, \citenamefont
{Wilson}, \citenamefont {Metcalfe}, \citenamefont {Rigetti}, \citenamefont
{Frunzio},\ and\ \citenamefont {Devoret}}]{Siddiqi2004RF-DrivenMeasurement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Siddiqi}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vijay}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pierre}}, \bibinfo
{author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Metcalfe}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Rigetti}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Frunzio}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.~H.}\ \bibnamefont {Devoret}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{RF-Driven Josephson Bifurcation Amplifier for Quantum
Measurement}},}\ }\href {\doibase 10.1103/PhysRevLett.93.207002} {\bibfield
{journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo
{volume} {93}},\ \bibinfo {pages} {207002} (\bibinfo {year}
{2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schmitt}\ \emph {et~al.}(2014)\citenamefont
{Schmitt}, \citenamefont {Zhou}, \citenamefont {Juliusson}, \citenamefont
{Royer}, \citenamefont {Blais}, \citenamefont {Bertet}, \citenamefont
{Vion},\ and\ \citenamefont {Esteve}}]{Schmitt2014MultiplexedAmplifiers}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Schmitt}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhou}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Juliusson}}, \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Royer}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Bertet}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Vion}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Esteve}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Multiplexed readout of transmon qubits with Josephson bifurcation
amplifiers}},}\ }\href {\doibase 10.1103/PhysRevA.90.062333} {\bibfield
{journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo
{volume} {90}},\ \bibinfo {pages} {062333} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Krantz}\ \emph {et~al.}(2016)\citenamefont {Krantz},
\citenamefont {Bengtsson}, \citenamefont {Simoen}, \citenamefont
{Gustavsson}, \citenamefont {Shumeiko}, \citenamefont {Oliver}, \citenamefont
{Wilson}, \citenamefont {Delsing},\ and\ \citenamefont
{Bylander}}]{Krantz2016Single-shotOscillator}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Philip}\ \bibnamefont
{Krantz}}, \bibinfo {author} {\bibfnamefont {Andreas}\ \bibnamefont
{Bengtsson}}, \bibinfo {author} {\bibfnamefont {Michaël}\ \bibnamefont
{Simoen}}, \bibinfo {author} {\bibfnamefont {Simon}\ \bibnamefont
{Gustavsson}}, \bibinfo {author} {\bibfnamefont {Vitaly}\ \bibnamefont
{Shumeiko}}, \bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont
{Oliver}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}},
\bibinfo {author} {\bibfnamefont {Per}\ \bibnamefont {Delsing}}, \ and\
\bibinfo {author} {\bibfnamefont {Jonas}\ \bibnamefont {Bylander}},\
}\bibfield {title} {\enquote {\bibinfo {title} {{Single-shot read-out of a
superconducting qubit using a Josephson parametric oscillator}},}\ }\href
{\doibase 10.1038/ncomms11417} {\bibfield {journal} {\bibinfo {journal}
{Nature Communications}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages}
{11417} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Koch}\ \emph {et~al.}(2007)\citenamefont {Koch},
\citenamefont {Yu}, \citenamefont {Gambetta}, \citenamefont {Houck},
\citenamefont {Schuster}, \citenamefont {Majer}, \citenamefont {Blais},
\citenamefont {Devoret}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{Koch2007Charge-insensitiveBox}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Jens}\ \bibnamefont
{Koch}}, \bibinfo {author} {\bibfnamefont {Terri~M.}\ \bibnamefont {Yu}},
\bibinfo {author} {\bibfnamefont {Jay}\ \bibnamefont {Gambetta}}, \bibinfo
{author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}}, \bibinfo {author}
{\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{Alexandre}\ \bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont {M.~H.}\
\bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Charge-insensitive qubit design derived from the Cooper pair box}},}\
}\href {\doibase 10.1103/PhysRevA.76.042319} {\bibfield {journal} {\bibinfo
{journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo
{pages} {042319} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Macklin}\ \emph {et~al.}(2015)\citenamefont
{Macklin}, \citenamefont {O'Brien}, \citenamefont {Hover}, \citenamefont
{Schwartz}, \citenamefont {Bolkhovsky}, \citenamefont {Zhang}, \citenamefont
{Oliver},\ and\ \citenamefont {Siddiqi}}]{Macklin2015AAmplifier}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Chris}\ \bibnamefont
{Macklin}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {O'Brien}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hover}}, \bibinfo {author}
{\bibfnamefont {M.~E.}\ \bibnamefont {Schwartz}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Bolkhovsky}}, \bibinfo {author}
{\bibfnamefont {X.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{W.~D.}\ \bibnamefont {Oliver}}, \ and\ \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{A near--quantum-limited Josephson traveling-wave parametric
amplifier}},}\ }\href {\doibase 10.1126/science.aaa8525} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{350}},\ \bibinfo {pages} {307} (\bibinfo {year} {2015})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Toyli}\ \emph {et~al.}(2016)\citenamefont {Toyli},
\citenamefont {Eddins}, \citenamefont {Boutin}, \citenamefont {Puri},
\citenamefont {Hover}, \citenamefont {Bolkhovsky}, \citenamefont {Oliver},
\citenamefont {Blais},\ and\ \citenamefont
{Siddiqi}}]{Toyli2016ResonanceVacuum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Toyli}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Eddins}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boutin}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Puri}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Hover}}, \bibinfo {author} {\bibfnamefont
{V.}~\bibnamefont {Bolkhovsky}}, \bibinfo {author} {\bibfnamefont {W.~D.}\
\bibnamefont {Oliver}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Blais}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Resonance
Fluorescence from an Artificial Atom in Squeezed Vacuum}},}\ }\href {\doibase
10.1103/PhysRevX.6.031004} {\bibfield {journal} {\bibinfo {journal}
{Physical Review X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages}
{031004} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2014)\citenamefont {Zhou},
\citenamefont {Schmitt}, \citenamefont {Bertet}, \citenamefont {Vion},
\citenamefont {Wustmann}, \citenamefont {Shumeiko},\ and\ \citenamefont
{Esteve}}]{Zhou2014High-gainArray}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Zhou}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Schmitt}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bertet}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Vion}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Wustmann}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Shumeiko}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Esteve}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{High-gain weakly nonlinear flux-modulated Josephson
parametric amplifier using a SQUID array}},}\ }\href {\doibase
10.1103/PhysRevB.89.214517} {\bibfield {journal} {\bibinfo {journal}
{Physical Review B}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages}
{214517} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reed}\ \emph {et~al.}(2010)\citenamefont {Reed},
\citenamefont {Johnson}, \citenamefont {Houck}, \citenamefont {DiCarlo},
\citenamefont {Chow}, \citenamefont {Schuster}, \citenamefont {Frunzio},\
and\ \citenamefont {Schoelkopf}}]{Reed2010FastQubit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont
{Reed}}, \bibinfo {author} {\bibfnamefont {B.~R.}\ \bibnamefont {Johnson}},
\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {DiCarlo}}, \bibinfo {author}
{\bibfnamefont {J.~M.}\ \bibnamefont {Chow}}, \bibinfo {author}
{\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Fast reset and suppressing spontaneous emission
of a superconducting qubit}},}\ }\href {\doibase 10.1063/1.3435463}
{\bibfield {journal} {\bibinfo {journal} {Applied Physics Letters}\
}\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {203110} (\bibinfo
{year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Levitan}(2015)}]{LevitanThesis}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Benjamin}\
\bibnamefont {Levitan}},\ }\emph {\bibinfo {title} {Dispersive qubit
measurement using an on-chip parametric amplifier}},\ \href
{http://digitool.library.mcgill.ca/R/-?func=dbin-jump-full&object_id=138943&silo_library=GEN01}
{Master's thesis},\ \bibinfo {school} {McGill University} (\bibinfo {year}
{2015})\BibitemShut {NoStop} \bibitem [{\citenamefont {Ong}\ \emph {et~al.}(2011)\citenamefont {Ong},
\citenamefont {Boissonneault}, \citenamefont {Mallet}, \citenamefont
{Palacios-Laloy}, \citenamefont {Dewes}, \citenamefont {Doherty},
\citenamefont {Blais}, \citenamefont {Bertet}, \citenamefont {Vion},\ and\
\citenamefont {Esteve}}]{Ong2011CircuitDephasing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~R.}\ \bibnamefont
{Ong}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Boissonneault}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Mallet}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Palacios-Laloy}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Dewes}}, \bibinfo {author}
{\bibfnamefont {A.~C.}\ \bibnamefont {Doherty}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Bertet}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Vion}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Esteve}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Circuit QED with a Nonlinear Resonator: ac-Stark Shift and Dephasing}},}\
}\href {\doibase 10.1103/PhysRevLett.106.167002} {\bibfield {journal}
{\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume}
{106}},\ \bibinfo {pages} {167002} (\bibinfo {year} {2011})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Boissonneault}\ \emph {et~al.}(2012)\citenamefont
{Boissonneault}, \citenamefont {Doherty}, \citenamefont {Ong}, \citenamefont
{Bertet}, \citenamefont {Vion}, \citenamefont {Esteve},\ and\ \citenamefont
{Blais}}]{Boissonneault2012Back-actionQubit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Maxime}\ \bibnamefont
{Boissonneault}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Doherty}}, \bibinfo {author} {\bibfnamefont {F.~R.}\ \bibnamefont {Ong}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bertet}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Vion}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Esteve}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Blais}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Back-action of a driven nonlinear resonator on a
superconducting qubit}},}\ }\href {\doibase 10.1103/PhysRevA.85.022305}
{\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf
{\bibinfo {volume} {85}},\ \bibinfo {pages} {022305} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ong}\ \emph {et~al.}(2013)\citenamefont {Ong},
\citenamefont {Boissonneault}, \citenamefont {Mallet}, \citenamefont
{Doherty}, \citenamefont {Blais}, \citenamefont {Vion}, \citenamefont
{Esteve},\ and\ \citenamefont {Bertet}}]{Ong2013QuantumQubit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~R.}\ \bibnamefont
{Ong}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Boissonneault}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Mallet}}, \bibinfo
{author} {\bibfnamefont {A.~C.}\ \bibnamefont {Doherty}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Vion}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Esteve}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Bertet}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Quantum Heating
of a Nonlinear Resonator Probed by a Superconducting Qubit}},}\ }\href
{\doibase 10.1103/PhysRevLett.110.047001} {\bibfield {journal} {\bibinfo
{journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {110}},\
\bibinfo {pages} {047001} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boissonneault}\ \emph {et~al.}(2014)\citenamefont
{Boissonneault}, \citenamefont {Doherty}, \citenamefont {Ong}, \citenamefont
{Bertet}, \citenamefont {Vion}, \citenamefont {Esteve},\ and\ \citenamefont
{Blais}}]{Boissonneault2014SuperconductingResonator}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Maxime}\ \bibnamefont
{Boissonneault}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Doherty}}, \bibinfo {author} {\bibfnamefont {F.~R.}\ \bibnamefont {Ong}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bertet}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Vion}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Esteve}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Blais}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {{Superconducting qubit as a probe of squeezing in a
nonlinear resonator}},}\ }\href {\doibase 10.1103/PhysRevA.89.022324}
{\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf
{\bibinfo {volume} {89}},\ \bibinfo {pages} {022324} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hatridge}\ \emph {et~al.}(2011)\citenamefont
{Hatridge}, \citenamefont {Vijay}, \citenamefont {Slichter}, \citenamefont
{Clarke},\ and\ \citenamefont {Siddiqi}}]{Hatridge2011DispersiveAmplifier}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hatridge}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vijay}},
\bibinfo {author} {\bibfnamefont {D.~H.}\ \bibnamefont {Slichter}}, \bibinfo
{author} {\bibfnamefont {John}\ \bibnamefont {Clarke}}, \ and\ \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{Dispersive magnetometry with a quantum limited
SQUID parametric amplifier}},}\ }\href {\doibase 10.1103/PhysRevB.83.134501}
{\bibfield {journal} {\bibinfo {journal} {Physical Review B}\ }\textbf
{\bibinfo {volume} {83}},\ \bibinfo {pages} {134501} (\bibinfo {year}
{2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Levenson-Falk}\ \emph {et~al.}(2013)\citenamefont
{Levenson-Falk}, \citenamefont {Vijay}, \citenamefont {Antler},\ and\
\citenamefont {Siddiqi}}]{Levenson-Falk2013ADetection}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont
{Levenson-Falk}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vijay}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Antler}}, \ and\ \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {{A dispersive nanoSQUID magnetometer for
ultra-low noise, high bandwidth flux detection}},}\ }\href {\doibase
10.1088/0953-2048/26/5/055015} {\bibfield {journal} {\bibinfo {journal}
{Superconductor Science and Technology}\ }\textbf {\bibinfo {volume} {26}},\
\bibinfo {pages} {055015} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peano}\ \emph {et~al.}(2015)\citenamefont {Peano},
\citenamefont {Schwefel}, \citenamefont {Marquardt},\ and\ \citenamefont
{Marquardt}}]{Peano2015IntracavityDeamplification}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Peano}}, \bibinfo {author} {\bibfnamefont {H. G. L.}\ \bibnamefont
{Schwefel}}, \bibinfo {author} {\bibfnamefont {Ch.}\ \bibnamefont
{Marquardt}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Marquardt}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Intracavity
Squeezing Can Enhance Quantum-Limited Optomechanical Position Detection
through Deamplification}},}\ }\href {\doibase 10.1103/PhysRevLett.115.243603}
{\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\
}\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages} {243603} (\bibinfo
{year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Govia}\ and\ \citenamefont
{Clerk}(2017)}]{Govia2017EnhancedSuppression}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Luke C.~G.}\
\bibnamefont {Govia}}\ and\ \bibinfo {author} {\bibfnamefont {Aashish~A.}\
\bibnamefont {Clerk}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Enhanced qubit readout using locally generated squeezing and inbuilt
Purcell-decay suppression}},}\ }\href {\doibase 10.1088/1367-2630/aa5f7b}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {19}},\ \bibinfo {pages} {023044} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Korobko}\ \emph {et~al.}(2017)\citenamefont
{Korobko}, \citenamefont {Kleybolte}, \citenamefont {Ast}, \citenamefont
{Miao}, \citenamefont {Chen},\ and\ \citenamefont
{Schnabel}}]{Korobko2017BeatingGeneration}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Korobko}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Kleybolte}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ast}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Miao}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Schnabel}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {{Beating the Standard Sensitivity-Bandwidth Limit of Cavity-Enhanced
Interferometers with Internal Squeezed-Light Generation}},}\ }\href {\doibase
10.1103/PhysRevLett.118.143601} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Letters}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo
{pages} {143601} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gambetta}\ \emph {et~al.}(2007)\citenamefont
{Gambetta}, \citenamefont {Braff}, \citenamefont {Wallraff}, \citenamefont
{Girvin},\ and\ \citenamefont
{Schoelkopf}}]{Gambetta2007ProtocolsMeasurement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Jay}\ \bibnamefont
{Gambetta}}, \bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {Braff}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}}, \bibinfo
{author} {\bibfnamefont {S.~M.}\ \bibnamefont {Girvin}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Protocols for optimal readout of qubits
using a continuous quantum nondemolition measurement}},}\ }\href {\doibase
10.1103/PhysRevA.76.012325} {\bibfield {journal} {\bibinfo {journal}
{Physical Review A}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages}
{012325} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Eichler}\ and\ \citenamefont
{Wallraff}(2014)}]{Eichler2014ControllingAmplifier}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Christopher}\
\bibnamefont {Eichler}}\ and\ \bibinfo {author} {\bibfnamefont {Andreas}\
\bibnamefont {Wallraff}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{Controlling the dynamic range of a Josephson parametric amplifier}},}\
}\href {\doibase 10.1140/epjqt2} {\bibfield {journal} {\bibinfo {journal}
{EPJ Quantum Technology}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages}
{2} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boutin}\ \emph {et~al.}(2017)\citenamefont {Boutin},
\citenamefont {Toyli}, \citenamefont {Venkatramani}, \citenamefont {Eddins},
\citenamefont {Siddiqi},\ and\ \citenamefont
{Blais}}]{Boutin2017EffectAmplifiers}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Samuel}\ \bibnamefont
{Boutin}}, \bibinfo {author} {\bibfnamefont {David~M.}\ \bibnamefont
{Toyli}}, \bibinfo {author} {\bibfnamefont {Aditya~V.}\ \bibnamefont
{Venkatramani}}, \bibinfo {author} {\bibfnamefont {Andrew~W.}\ \bibnamefont
{Eddins}}, \bibinfo {author} {\bibfnamefont {Irfan}\ \bibnamefont {Siddiqi}},
\ and\ \bibinfo {author} {\bibfnamefont {Alexandre}\ \bibnamefont {Blais}},\
}\bibfield {title} {\enquote {\bibinfo {title} {{Effect of Higher-Order
Nonlinearities on Amplification and Squeezing in Josephson Parametric
Amplifiers}},}\ }\href {\doibase 10.1103/PhysRevApplied.8.054030} {\bibfield
{journal} {\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo
{volume} {8}},\ \bibinfo {pages} {054030} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Eddins}\ \emph {et~al.}(2017)\citenamefont {Eddins},
\citenamefont {Schreppler}, \citenamefont {Toyli}, \citenamefont {Martin},
\citenamefont {Hacohen-Gourgy}, \citenamefont {Govia}, \citenamefont
{Ribeiro}, \citenamefont {Clerk},\ and\ \citenamefont
{Siddiqi}}]{Eddins2017StroboscopicIllumination}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Andrew}\ \bibnamefont
{Eddins}}, \bibinfo {author} {\bibfnamefont {Sydney}\ \bibnamefont
{Schreppler}}, \bibinfo {author} {\bibfnamefont {David~M.}\ \bibnamefont
{Toyli}}, \bibinfo {author} {\bibfnamefont {Leigh~S.}\ \bibnamefont
{Martin}}, \bibinfo {author} {\bibfnamefont {Shay}\ \bibnamefont
{Hacohen-Gourgy}}, \bibinfo {author} {\bibfnamefont {Luke C.~G.}\
\bibnamefont {Govia}}, \bibinfo {author} {\bibfnamefont {Hugo}\ \bibnamefont
{Ribeiro}}, \bibinfo {author} {\bibfnamefont {Aashish~A.}\ \bibnamefont
{Clerk}}, \ and\ \bibinfo {author} {\bibfnamefont {Irfan}\ \bibnamefont
{Siddiqi}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Stroboscopic
qubit measurement with squeezed illumination}},}\ }\href
{http://arxiv.org/abs/1708.01674} {\ (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Li}\ \emph {et~al.}(2013)\citenamefont {Li},
\citenamefont {Shabani}, \citenamefont {Sarovar},\ and\ \citenamefont
{Whaley}}]{Li2013OptimalityImperfections}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Hanhan}\ \bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {Alireza}\ \bibnamefont {Shabani}},
\bibinfo {author} {\bibfnamefont {Mohan}\ \bibnamefont {Sarovar}}, \ and\
\bibinfo {author} {\bibfnamefont {K.~Birgitta}\ \bibnamefont {Whaley}},\
}\bibfield {title} {\enquote {\bibinfo {title} {{Optimality of qubit
purification protocols in the presence of imperfections}},}\ }\href {\doibase
10.1103/PhysRevA.87.032334} {\bibfield {journal} {\bibinfo {journal}
{Physical Review A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages}
{032334} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Martin}\ \emph {et~al.}(2015)\citenamefont {Martin},
\citenamefont {Motzoi}, \citenamefont {Li}, \citenamefont {Sarovar},\ and\
\citenamefont {Whaley}}]{Martin2015DeterministicFeedback}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Leigh}\ \bibnamefont
{Martin}}, \bibinfo {author} {\bibfnamefont {Felix}\ \bibnamefont {Motzoi}},
\bibinfo {author} {\bibfnamefont {Hanhan}\ \bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {Mohan}\ \bibnamefont {Sarovar}}, \ and\ \bibinfo
{author} {\bibfnamefont {K.~Birgitta}\ \bibnamefont {Whaley}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Deterministic generation of remote
entanglement with active quantum feedback}},}\ }\href {\doibase
10.1103/PhysRevA.92.062321} {\bibfield {journal} {\bibinfo {journal}
{Physical Review A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages}
{062321} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Martin}\ \emph {et~al.}(2017)\citenamefont {Martin},
\citenamefont {Sayrafi},\ and\ \citenamefont
{Whaley}}]{Martin2017WhatFeedback}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Leigh}\ \bibnamefont
{Martin}}, \bibinfo {author} {\bibfnamefont {Mahrud}\ \bibnamefont
{Sayrafi}}, \ and\ \bibinfo {author} {\bibfnamefont {K~Birgitta}\
\bibnamefont {Whaley}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{{What is the optimal way to prepare a Bell state using measurement and
feedback?}}}\ }\href {\doibase 10.1088/2058-9565/aa804c} {\bibfield
{journal} {\bibinfo {journal} {Quantum Science and Technology}\ }\textbf
{\bibinfo {volume} {2}},\ \bibinfo {pages} {044006} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wahyu~Utami}\ and\ \citenamefont
{Clerk}(2008)}]{WahyuUtami2008EntanglementSystem}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Wahyu~Utami}}\ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Clerk}},\ }\bibfield {title} {\enquote {\bibinfo {title} {{Entanglement
dynamics in a dispersively coupled qubit-oscillator system}},}\ }\href
{\doibase 10.1103/PhysRevA.78.042323} {\bibfield {journal} {\bibinfo
{journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo
{pages} {042323} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \end{thebibliography}
\end{document}
\begin{document}
\baselineskip = 14pt
\title[Probabilistic global well-posedness of vNLW] {Probabilistic global well-posedness for a viscous nonlinear wave equation modeling fluid-structure interaction}
\author[J.~Kuan, T.~Oh, and S. \v{C}ani\'{c}] {Jeffrey Kuan, Tadahiro Oh, and Sun\v{c}ica \v{C}ani\'{c}}
\address{Jeffrey Kuan\\ Department of Mathematics\\ University of California Berkeley\\ Berkeley, CA 94720-3840\\ USA}
\email{jeffreykuan@berkeley.edu}
\address{Tadahiro Oh, School of Mathematics\\ The University of Edinburgh\\ and The Maxwell Institute for the Mathematical Sciences\\ James Clerk Maxwell Building\\ The King's Buildings\\ Peter Guthrie Tait Road\\ Edinburgh\\ EH9 3FD\\
United Kingdom}
\email{hiro.oh@ed.ac.uk}
\address{Sun\v{c}ica \v{C}ani\'{c}\\ Department of Mathematics\\ University of California Berkeley\\ Berkeley, CA 94720-3840\\ USA}
\email{canics@berkeley.edu}
\subjclass[2020]{35L05, 35L71, 35R60, 60H15}
\keywords{viscous nonlinear wave equation; Wiener randomization; probabilistic well-posedness}
\begin{abstract} We prove probabilistic well-posedness for a 2D viscous nonlinear wave equation modeling fluid-structure interaction between a 3D incompressible, viscous Stokes flow and nonlinear elastodynamics of a 2D stretched membrane. The focus is on (rough) data, often arising in real-life problems,
for which it is known that the deterministic problem is ill-posed.
We show that random perturbations of such data almost surely give rise to a unique solution. More specifically, we prove almost sure global well-posedness for a viscous nonlinear wave equation with initial data in the Sobolev space $\mathcal{H}^s (\mathbb{R}^2)$, $s > - \frac 15$, randomly perturbed via Wiener randomization. This result shows ``robustness'' of nonlinear FSI models and provides confidence that, even for rough data (data in $\mathcal{H}^s$, $s > -\frac 1 5$), random perturbations of such data (due, e.g., to randomness in real-life data, numerical discretization, etc.) will almost surely provide a unique solution which depends continuously on the data in the $\mathcal{H}^s$ topology.
\end{abstract}
\maketitle
\section{Background}
This paper is motivated by the study of fluid-structure interaction (FSI) and the impact of {\emph{rough data}} and {\emph{random perturbations}} of such data on the solution of a nonlinear FSI problem, where the nonlinearity may come, e.g., from a nonlinear forcing of the structure. The main motivation derives from real-life applications that often exhibit such data and nonlinear forcing, e.g., coronary arteries contracting and expanding on the surface of a moving heart, where the forcing comes from the surrounding heart tissue.
In particular, we are interested in the flow of a viscous incompressible fluid modeled by the Stokes equations in a channel that is bounded by a two-dimensional membrane, modeled by a nonlinear wave equation. See Fig.~\ref{domain}. The competition between the dissipative effects coming from the fluid viscosity and the dispersive and nonlinear effects coming from the structure model is of particular interest, especially for the (rough) initial data for which one can show that the Sobolev ${\mathcal{H}^s}$ norm gets inflated very quickly after a small time $t_\epsilon > 0$ (the ${\mathcal{H}^s}$ norm exceeds $1/\epsilon$), even for initial data with a small ${\mathcal{H}^s}$ norm (less than $\epsilon$). For example, due to the nonlinearity in the problem, ``small'' oscillations in the initial data get amplified quickly, giving rise to a problem that is ill-posed in the sense of Hadamard. This is often the case for initial data in ${\mathcal{H}^s}$ with the Sobolev exponent $s$ below a critical exponent $s_\text{crit}$ (rough initial data), where $s_\text{crit}$ is determined by the natural scaling of the problem. In this paper we show that for a range of Sobolev exponents {\emph{below the critical exponent}}, $s_\text{min} < s < s_\text{crit}$, where $s_\text{min} < 0$, random perturbations of such initial data via Wiener randomization still give rise, almost surely, to a globally well-posed problem. This result shows ``robustness'' of nonlinear FSI models and provides confidence that, even for rough data (data in $\mathcal{H}^s$, $s\in (s_\text{min}, s_\text{crit})$), random perturbations due to a combination of factors (e.g., randomness in real-life data, numerical discretization, etc.) will almost surely provide a solution which depends continuously on the data in the $\mathcal{H}^s$ topology.
\begin{figure}
\caption{A sketch of the reference configurations for the structure and fluid (left), and the fluid-structure interaction system with nonzero vertical displacement of the structure (right).}
\label{domain}
\end{figure}
More precisely, we study a prototype equation capturing dispersive, dissipative, and nonlinear (forcing) effects in fluid-structure interaction problems between the flow of an incompressible, viscous fluid modeled by the 3D Stokes equations, and the elastodynamics of a 2D elastic membrane modeled by a nonlinear wave equation. The prototype equation is the following {\emph{viscous nonlinear wave equation}} (vNLW) defined on $\mathbb{R}^2$, given by a 2D nonlinear wave equation (NLW) augmented by the viscoelastic effects modeled by the {\emph{fractional Laplacian operator}}
(Dirichlet-to-Neumann operator) $D = |\nabla| = \sqrt{-\Delta}$ applied to the time derivative $\partial_t u$, where $u$ denotes vertical membrane displacement: \begin{align} \begin{cases}
\partial_t^2 u - \Delta u + 2\mu D \partial_t u + |u|^{p-1}u = 0\\
(u, \partial_t u)|_{t = 0} = (u_0, u_1) \end{cases} \qquad (x, t) \in \mathbb{R}^2 \times \mathbb{R}_+. \label{vNLW1} \end{align}
\noindent Here $\mu > 0$ is a constant denoting fluid viscosity, and $\mathbb{R}_+ = [0, \infty)$.
\if 1 = 0 The viscous nonlinear wave equation arises from a prototypical model for fluid-structure interaction, and models wave dynamics under the influence of viscous regularizing effects. Fluid-structure interaction (FSI) is a physical phenomenon involving the coupled dynamical interaction between a solid and a fluid, where the solid is for instance, deformable with elastic or viscoelastic properties. Such problems feature mathematical difficulties, in terms of the coupling between the solid and fluid equations, and additional geometric nonlinearities that appear in problems in which the fluid domain evolves over time, giving rise to a moving boundary problem. As such, the nonlinear viscous wave equation gives a simplified prototypical model in which one can study well-posedness properties of the FSI system, by isolating the effects of fluid viscosity and nonlinear effects on the elastic structure.
The mathematical study of FSI is an area of extensive research, motivated by the presence of numerous physical applications. In recent years, many FSI problems motivated by real-life physical systems have been considered in the mathematics literature, including hemodynamics in a curved compliant artery with a coronary stent \cite{CanicCMAME}, the dynamics of floating objects on bodies of water \cite{Lannes}, multilayered poroelastic structures interacting with fluid flow as a model for a bioartificial pancreas \cite{BCMW}, and flow-structure interactions motivated by applications in aeroelasticity \cite{LasieckaAbsorbing, LasieckaSupersonic, LasieckaLongTime, LasieckaRotational, WebsterSubsonic}.
The nature of the two-way coupling between the partial differential equations (PDEs)
describing the dynamics of the fluid and the elastodynamics of the structure is crucial in the study of FSI. In mathematical models of FSI, one can consider either linear coupling or nonlinear coupling. The mathematical study of FSI first involved the well-posedness analysis of models with the assumption of \textit{linear coupling}, including models involving an elastic solid interacting with a fluid described by the linear Stokes equations \cite{Gunzburger}, or the full Navier-Stokes equations \cite{BarGruLasTuff2,BarGruLasTuff,KukavicaTuffahaZiane}. Even though the deformation or displacement of the structure often affects the fluid domain in FSI problems, \textit{linear coupling} is a linearization that assumes the fluid-structure interface is fixed, and evaluates the coupling conditions along the fixed (linearized) fluid-structure interface. Other works have considered the more general case of \textit{nonlinear coupling}, in which the fluid domain is not known a priori. Thus, one must solve a moving boundary problem that has additional geometric nonlinearities, which complicate the analysis of the problem. See, for example, \cite{BdV1,Lequeurre,FSIforBIO_Lukacova,CDEM,CG,MuhaCanic13, BorSun3d,LengererRuzicka,CSS1,CSS2,Kuk,ChenShkoller,ChengShkollerCoutand,IgnatovaKukavica,Raymod,ignatova2014well, BorSunSlip,BorSunNonLinearKoiter,BorSunMultiLayered,Grandmont16, CanicCMAME}.
The viscous nonlinear wave equation, first considered in \cite{KC}, arises naturally as a prototypical model for linearly coupled fluid-structure interaction between an elastic structure with nonlinear forcing effects and a fluid that isolates the interaction between the viscous effects of the fluid, nonlinear effects, and the hyperbolic dynamics of the elastic structure. This model features an elastic membrane and a fluid modeled by the stationary Stokes equations, coupled together. Even though this FSI model has two components (the structure and the fluid), the various assumptions in the model, including the linear coupling assumption, give rise to a self-contained equation for the elastodynamics of the structure without reference to the fluid, which is the viscous nonlinear wave equation \eqref{vNLW1}.
The viscous nonlinear wave equation \eqref{vNLW1}, which arises from this fluid-structure interaction model, is given by the usual nonlinear wave equation, considered for example in \cite{TAO}, augmented by the fractional Laplacian operator $D = \sqrt{-\Delta}$ acting on the structure velocity $\partial_{t}u$. The operator $D$ arises naturally as the Dirichlet-Neumann operator for the lower half-plane in $\mathbb{R}^{d}$ (see, for example, \cite{CS} for more information about the fractional Laplacian operator). The presence of this operator in the viscous nonlinear wave equation represents the (parabolic) regularizing effects of the fluid viscosity on the structure dynamics. For this reason, we study the viscous nonlinear wave equation \eqref{vNLW1} by making use of both the dispersive and dissipative properties of the dynamics. \fi
{\bf{Background on the viscous nonlinear wave equation.}} The viscous nonlinear wave equation \eqref{vNLW1} was derived in \cite{KC} by coupling the elastodynamics of a 2D elastic, prestressed membrane whose reference configuration is given by the infinite plane \begin{equation*} \Gamma = \{(x, y, 0) \in \mathbb{R}^{3}\}, \end{equation*} with the flow of an incompressible, viscous Newtonian fluid residing in the lower half space, which we will denote by \begin{equation*} \Omega = \{(x, y, z) \in \mathbb{R}^{3} : z < 0\}. \end{equation*}
\noindent See Figure \ref{domain}. The membrane and the fluid are \textit{linearly coupled}, namely, the fluid domain remains fixed over time. The structure is assumed to only experience displacement in the $z$ direction, which is denoted by $u$, where $u$ satisfies the following wave equation \begin{equation}\label{wave} \partial_{t}^{2}u - \Delta u = f, \qquad (x, y, t) \in \mathbb{R}^{2} \times \mathbb{R}_{+}. \end{equation} Here $f$ is the external loading force on the elastic membrane, which can be nonlinear, as we specify later.
The membrane interacts with the flow of an incompressible, viscous Newtonian fluid, defined on the domain $\Omega$, which is fixed in time due to the assumption of linear coupling. In order to isolate the dynamical effect of the fluid viscosity on the structure, we model the fluid velocity $\boldsymbol{v} = (v_1, v_2, v_3)$ and pressure $\pi$ by the stationary Stokes equations \begin{align}\label{stokes} \begin{cases} \nabla \pi = \mu \Delta \boldsymbol{v} \\ \nabla \cdot \boldsymbol{v} = 0 \end{cases} \qquad \text{ on } \Omega, \end{align} where the constant, $\mu > 0$, denotes the fluid viscosity. See Figure \ref{domain}.
The fluid and structure are coupled via a two-way coupling, specified by the following two coupling conditions:\begin{enumerate} \item The \textit{kinematic coupling condition}, which in our problem is a no-slip condition (the trace of the fluid velocity at the interface $\Gamma$ is equal to the structure velocity):
\begin{equation} \boldsymbol{v} = (\partial_{t}u) \boldsymbol{e_{z}}, \qquad \text{ on } \Gamma, \label{kin1} \end{equation}
\noindent where $ \boldsymbol{e_{z}} = (0, 0, 1)$; and
\item The \textit{dynamic coupling condition}, which states that the elastodynamics of the membrane is driven by the normal component of the fluid Cauchy stress $\boldsymbol{\sigma}$ at $\Gamma$ and by the external forcing $F_{\text{ext}}$: \begin{equation*}
f = -\boldsymbol{\sigma} \boldsymbol{e_{z}} \cdot \boldsymbol{e_{z}}|_{\Gamma} + F_{\text{ext}}, \end{equation*} where \begin{equation}\label{kinematic} \boldsymbol{\sigma} = -\pi \textup{\bf Id} + 2\mu \boldsymbol{D}(\boldsymbol{v}). \end{equation} Thus, the structure equation with the dynamic coupling condition reads \begin{equation}\label{dynamic}
\partial_{t}^{2} u - \Delta u = -\boldsymbol{\sigma} \boldsymbol{e_{z}} \cdot \boldsymbol{e_{z}}|_{\Gamma} + F_{\text{ext}}. \end{equation} \end{enumerate}
For completeness, we summarize the derivation here.
We start by noting that the fluid load is given \textit{entirely by the pressure}, due to the particular geometry of this model. Specifically, \begin{equation*}
-\boldsymbol{\sigma}\boldsymbol{e_{z}} \cdot \boldsymbol{e_{z}} |_{\Gamma}
= \bigg( \pi - 2\mu \frac{\partial v_{3}}{\partial z}\bigg)\bigg|_{\Gamma} = \pi|_{\Gamma}, \end{equation*}
which follows from $\frac{\partial v_{3}}{\partial z}|_\Gamma = 0$ by the incompressibility condition, the kinematic coupling condition \eqref{kin1}, and the fact that $v_{1} = v_{2} = 0$ on $\Gamma$. Hence, we obtain \begin{equation}\label{pressuredyn}
\partial_{t}^{2} u - \Delta u = \pi|_{\Gamma} + F_{\text{ext}}. \end{equation} One can then ``solve'' the stationary Stokes equations \eqref{stokes} with the boundary condition \eqref{kin1} on $\Gamma$ for $\pi$ using a Fourier transform argument, to obtain the final result: \begin{equation}\label{pressure}
\pi|_{\Gamma} = -2\mu D \partial_t u. \end{equation} We will only sketch the main steps of this derivation in the following, and refer readers to the full derivation in \cite{KC}.
From \eqref{pressuredyn}, we see that the goal is to express $\pi|_{\Gamma}$ in terms of the structure displacement $u$ and its derivatives. We use the fact that $\pi$ and $\boldsymbol{v}$ satisfy the stationary Stokes equations~\eqref{stokes}
with a boundary condition provided by the kinematic coupling condition~\eqref{kin1}. At infinity, we require that $\boldsymbol{v}$ remains bounded and that $\pi$ decays to zero. From the stationary Stokes equations \eqref{stokes}, one concludes that $\pi$ is a harmonic function in $\Omega$ with a normal derivative along $\Gamma$ given by \begin{equation}\label{Neumann}
\frac{\partial \pi}{\partial z}\bigg|_{\Gamma}
= \bigg(\mu \Delta_{x, y} v_{3} + \mu \frac{\partial^{2}v_{3}}{\partial z^{2}}\bigg)\bigg|_{\Gamma} .
\end{equation}
\noindent Hence,
we can find $\pi|_{\Gamma}$ in \eqref{pressure} by inverting the Dirichlet-Neumann operator on the
lower half space $\Omega$.
Our main goal is to express
$v_{3}$ in terms of $u$ and its derivatives, as this will give the Neumann boundary condition for the harmonic function $\pi$ in \eqref{Neumann}.
By taking the Laplacian of the first equation in \eqref{stokes} and recalling that $\pi$ is harmonic,
we see that $v_3$ satisfies the biharmonic equation: \begin{equation} \Delta^{2} v_{3} = 0 \label{kin3} \end{equation} with boundary conditions given by the kinematic boundary condition (see \eqref{kin1}): \begin{equation}
v_{3}|_{\Gamma} = \partial_t u. \label{kin4} \end{equation}
Furthermore, there is a boundary condition at infinity that $v_{3}$ must be bounded in
$\Omega$.
By taking the Fourier transform of \eqref{kin3} in the $x$ and $y$ variables but not the $z$ variable, we can establish that \begin{equation}
\widehat v_3 (\xi, z) = \widehat {\partial_t u}(\xi )e^{|\xi|z} - |\xi|\widehat{\partial_t u}(\xi)ze^{|\xi| z}, \label{kin5} \end{equation}
\noindent where $\xi$ denotes the frequency variable corresponding to the $x$ and $y$ variables. For more details, see the explicit calculation in \cite{KC}. Then, by taking the Fourier transform of \eqref{Neumann} in the $x$ and $y$ variables and using \eqref{kin4} and \eqref{kin5}, we obtain \begin{equation}
\frac{\partial \widehat{\pi}}{\partial z}\Big\vert_{\Gamma} = -2\mu|\xi|^{2} \widehat {\partial_t u }(\xi). \label{kin6} \end{equation}
\noindent By taking the inverse Fourier transform, this gives
the Neumann boundary condition for the harmonic function $\pi$
in terms of (derivatives) of $u$. Recall that the Dirichlet-Neumann operator for the lower half plane with the vanishing boundary condition at infinity is given by $D = \sqrt{-\Delta}$; see \cite{CS}. By inverting this operator, we see that the Neumann-Dirichlet operator with the same boundary condition at infinity is given by the Riesz potential $D^{-1} = (-\Delta)^{-\frac 12}$
with a Fourier multiplier $|\xi|^{-1}$. Therefore, by applying the
Neumann-Dirichlet operator to (the inverse Fourier transform of) \eqref{kin6},
we obtain the desired result in \eqref{pressure}.
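For the reader's convenience, we record the computation behind \eqref{kin5} and the final inversion; this is our condensed summary of the derivation in \cite{KC}. Taking the Fourier transform of \eqref{kin3} in $(x, y)$ turns the biharmonic equation into an ODE in $z$:
\[
\big(\partial_z^2 - |\xi|^2\big)^2\, \widehat{v}_3(\xi, z) = 0, \qquad z < 0,
\]
whose solutions bounded on $\{z < 0\}$ take the form $\widehat{v}_3(\xi, z) = \big(A(\xi) + B(\xi)\, z\big) e^{|\xi| z}$. The trace condition \eqref{kin4} gives $A = \widehat{\partial_t u}$, while incompressibility together with $v_1 = v_2 = 0$ on $\Gamma$ forces $\partial_z \widehat{v}_3(\xi, 0) = |\xi| A + B = 0$, i.e.~$B = -|\xi| A$, which is precisely \eqref{kin5}. Finally, since $\pi$ is harmonic in $\Omega$ and decays at infinity, we have $\widehat{\pi}(\xi, z) = \widehat{\pi}|_{\Gamma}(\xi)\, e^{|\xi| z}$, and hence \eqref{kin6} yields
\[
|\xi|\, \widehat{\pi}\big|_{\Gamma}(\xi)
= \frac{\partial \widehat{\pi}}{\partial z}\Big|_{\Gamma}(\xi)
= -2\mu |\xi|^{2}\, \widehat{\partial_t u}(\xi)
\quad \Longrightarrow \quad
\widehat{\pi}\big|_{\Gamma}(\xi) = -2\mu |\xi|\, \widehat{\partial_t u}(\xi),
\]
which is \eqref{pressure} after inverting the Fourier transform.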
As a model for nonlinear restoring external forcing effects, we consider a defocusing power nonlinearity of the form \begin{equation}\label{Fext}
F_{\text{ext}}(u) = -|u|^{p - 1} u, \end{equation} for positive integers $p > 1$. Such a power-type nonlinearity has been studied extensively for dispersive equations
such as the nonlinear Schr\"{o}dinger equations and the nonlinear wave equations; see, for example, \cite{TAO}. Combining \eqref{pressuredyn}, \eqref{pressure}, and \eqref{Fext} gives the final form of the viscous nonlinear wave equation, as stated in \eqref{vNLW1}, in dimension $d = 2$. Although $d = 2$ corresponds
to the scenario described in this fluid-structure interaction model, the equation \eqref{vNLW1} can be stated in full generality for arbitrary dimension $d$.
{\bf{Critical exponent and ill-posedness.}} Let us now turn to analytical aspects of the viscous NLW \eqref{vNLW1}. When $\mu \geq 1$, this equation is purely parabolic; the general solution to the homogeneous linear equation \begin{align*} \partial_t^2 u - \Delta u + 2\mu D \partial_t u = 0
\end{align*}
\noindent with initial data $(u, \partial_{t}u)|_{t = 0} = (u_{0}, u_{1})$, is given by
\[ u(t) = e^{\big(-\mu|\nabla| + \sqrt{(\mu^2 - 1) |\nabla|^2}\big)\, t}
f_1 + e^{\big(-\mu|\nabla| - \sqrt{(\mu^2 - 1) |\nabla|^2}\big)\, t} f_2, \]
\noindent where $f_1$ and $f_2$ are determined by the initial data $(u_0, u_1)$.
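To see where these exponents come from, take the Fourier transform in the spatial variable and insert the ansatz $\widehat{u}(\xi, t) = e^{\lambda t}$ into the linear equation; a standard computation yields the characteristic equation
\[
\lambda^2 + 2\mu |\xi| \lambda + |\xi|^2 = 0,
\qquad \text{i.e.} \qquad
\lambda_{\pm}(\xi) = -\mu|\xi| \pm \sqrt{(\mu^2 - 1)|\xi|^2}.
\]
For $\mu \geq 1$, both roots are real and negative, so every frequency decays monotonically (purely parabolic behavior), while for $0 < \mu < 1$ we have $\lambda_{\pm}(\xi) = \big(\!-\mu \pm i \sqrt{1 - \mu^2}\,\big)|\xi|$, i.e.~damped oscillations, which is the mixed dissipative-dispersive regime studied below.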
\noindent Noting that $-\mu |\xi| + \sqrt{(\mu^2 - 1) |\xi|^2} \sim -\mu^{-1} |\xi|$ in this case ($\mu \ge 1$), the solution theory can be studied by simply using the Schauder estimate for the Poisson kernel (see Lemma \ref{LEM:Sch} below). We will not pursue this direction here. Instead, our main interest in this paper is to study the combined effect of the dissipative-dispersive mechanism appearing in \eqref{vNLW1}. As such, we will restrict our attention to $0 < \mu < 1$. Without loss of generality, we set $\mu = \frac 12$ as in \cite{KC} and focus on the following version of vNLW: \begin{align} \begin{cases}
\partial_t^2 u - \Delta u + D \partial_t u + |u|^{p-1} u = 0\\
(u, \partial_t u) |_{t = 0} = (u_0, u_1). \end{cases} \label{vNLW1b} \end{align}
As in the case of the usual NLW: \begin{align}
\partial_t^2 u - \Delta u + |u|^{p-1}u = 0, \label{vNLW2} \end{align}
\noindent the viscous NLW in \eqref{vNLW1b} enjoys the following scaling symmetry. If $u(x, t)$ is a solution to~\eqref{vNLW1b}, then $u^\lambda (x, t) = \lambda^\frac{2}{p-1} u(\lambda x, \lambda t)$ is also a solution to \eqref{vNLW1b} for any $\lambda > 0$. This induces the critical Sobolev regularity $s_\text{crit}$ on $\mathbb{R}^d$ given by \[ s_\text{crit} = \frac d2 - \frac 2{p-1} \]
\noindent such that the homogeneous Sobolev norm $\dot H^{s_\text{crit}}(\mathbb{R}^d)$ remains invariant under this scaling symmetry. This scaling heuristic leads to the common conjecture
that an evolution equation is well-posed in $H^s$ for $s > s_\text{crit}$,
while it is ill-posed for $s < s_\text{crit}$. Indeed, for many dispersive PDEs, ill-posedness below a scaling critical regularity is known. In particular, the following form of strong ill-posedness, known as {\it norm inflation}, is established for many dispersive PDEs, including NLW; see \cite{CCT, BT1, CK, Kishimoto, O1, OW, CP, Ok, Tzvet, OOTz, FO}. Norm inflation in the case of the wave equation on $\mathbb{R}^d$ states the following: given any $\varepsilon > 0$, there exist a solution $u$ to \eqref{vNLW2} and $t_\varepsilon \in (0, \varepsilon) $ such that \begin{align*}
\| (u, \partial_t u)(0) \|_{\mathcal{H}^s} < \varepsilon \qquad \text{ but } \qquad \| (u, \partial_t u)(t_\varepsilon)\|_{\mathcal{H}^s} > \varepsilon^{-1},
\end{align*}
\noindent where \[\mathcal{H}^s (\mathbb{R}^d) = H^s(\mathbb{R}^d) \times H^{s-1}(\mathbb{R}^d) .\]
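Returning to the scaling symmetry above, it is worth recording the (routine) computation that identifies $s_\text{crit}$. If $u^\lambda(x, t) = \lambda^{\frac{2}{p-1}} u(\lambda x, \lambda t)$, then each term of \eqref{vNLW1b} scales by the common factor $\lambda^{\frac{2}{p-1} + 2}$, while the homogeneous Sobolev norm of the initial datum transforms as
\[
\| u^\lambda(0) \|_{\dot H^s(\mathbb{R}^d)}
= \lambda^{\frac{2}{p-1} + s - \frac{d}{2}} \, \| u(0) \|_{\dot H^s(\mathbb{R}^d)},
\]
which is independent of $\lambda > 0$ precisely when $s = s_\text{crit} = \frac d2 - \frac{2}{p-1}$. In particular, for the quintic nonlinearity $p = 5$ in dimension $d = 2$ considered below, we have $s_\text{crit} = \frac 12$, so the range of exponents in our main results lies well below the scaling-critical regularity.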
In \cite{KC}, Kuan and \v{C}ani\'c studied this issue for vNLW \eqref{vNLW1b}. Due to the presence of the viscous term in \eqref{vNLW1b}, which induces some smoothing property, one may expect to have a different ill-posedness result but this was shown not to be the case. More precisely,
Kuan and \v{C}ani\'c proved norm inflation for vNLW \eqref{vNLW1b} in $\mathcal{H}^s(\mathbb{R}^d)$ for $0 < s < s_\text{crit}$ (for any odd integer $p\geq 3$) as in the case of the usual NLW. Moreover, they showed that the viscous contribution has the potential to slow down the speed of the norm inflation. See \cite{KC} for details. It is of interest to see if norm inflation in negative Sobolev spaces for the usual NLW \cite{CCT, OOTz, FO} carries over to the viscous NLW. See \cite{dROk}.
This norm inflation for vNLW \eqref{vNLW1b} shows that the equation is ill-posed in $\mathcal{H}^s(\mathbb{R}^d)$ for $0 < s < s_\text{crit}$, and hence there is no hope of
studying well-posedness in this low regularity space in a deterministic manner.
However, we can go beyond the limit of deterministic analysis and consider our Cauchy problem with randomized initial data. The area of nonlinear dispersive equations with randomized initial data has become rather active in recent years \cite{BO96, BT1, CO, LM, BT3, BOP1, BOP2, Poc, OOP, BOP3}. See also a survey paper \cite{BOP4} in this direction.
In fact, in \cite{KC} Kuan and \v{C}ani\'c considered the Cauchy problem \eqref{vNLW1b} with $p = 5$ and dimension $d = 2$, and proved almost sure local well-posedness for {\emph{randomized initial data}} in $\mathcal{H}^s(\mathbb{R}^2)$ for $ s> -\frac 16$. In this manuscript we extend this result in two important directions: \begin{enumerate} \item We prove {\emph{global}} rather than local well-posedness in the probabilistic sense for randomized initial data in $\mathcal{H}^s(\mathbb{R}^2)$, $s > s_\text{min}$, where $s_\text{min} = -1/5$; and \item We lower the regularity threshold from $s_\text{min} = -1/6$ to $s_\text{min} = -1/5$, where the threshold $s_\text{min} = -1/5$ seems to be sharp; see Remark \ref{Remark3.6}(i). \end{enumerate}
Since the randomized initial data considered in this work are given in terms of Wiener randomization, we provide a brief description of Wiener randomization next.
{\bf{Wiener randomization.}} \label{SUBSEC:Wiener} Let $\psi \in \mathcal{S}(\mathbb{R}^d)$ be such that $\supp \psi \subset [-1, 1]^d$, $\psi(-\xi ) = \overline{\psi(\xi)}$, and \[ \sum_{n \in \mathbb{Z}^d} \psi(\xi - n) \equiv 1 \quad \text{for all }\xi \in \mathbb{R}^d.\]
\noindent Then, any function $f$ on $\mathbb{R}^d$ can be written as \begin{equation} f = \sum_{n \in \mathbb{Z}^d} \psi(D-n) f, \label{B1}
\end{equation}
\noindent where $ \psi (D-n) $ denotes the Fourier multiplier operator with symbol $\psi (\,\cdot\, -n)$. Hence, $\psi(D - n)f$ localizes $f$ in the frequency space around the frequency $n \in \mathbb{Z}^{d}$ over a unit scale. We recall a particular example of Bernstein's inequality: \begin{align}
\|\psi(D-n) f \|_{L^q(\mathbb{R}^d)}
\lesssim \|\psi(D-n) f \|_{L^p(\mathbb{R}^d)} \label{B2} \end{align}
\noindent for any $ 1\leq p \leq q \leq \infty$. This classical inequality follows from the localization in the frequency space due to the compact support of $\psi$, and Young's convolution inequality (see, for example, Lemma 2.1 in \cite{LM}).
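For completeness, we sketch the short proof of \eqref{B2}. Fix $\widetilde\psi \in \mathcal{S}(\mathbb{R}^d)$ with $\widetilde\psi \equiv 1$ on $\supp \psi$, so that $\psi(D-n) f = \widetilde\psi(D-n)\, \psi(D-n) f$. Writing $\widetilde\psi(D-n)$ as convolution with the kernel $K_n = \big(\widetilde\psi(\cdot - n)\big)^{\vee}$, and noting that $|K_n| = |\widetilde\psi^{\vee}|$ is independent of $n$ (modulation only changes the phase), Young's inequality gives
\[
\|\psi(D-n) f\|_{L^q(\mathbb{R}^d)}
\le \|K_n\|_{L^r(\mathbb{R}^d)} \|\psi(D-n) f\|_{L^p(\mathbb{R}^d)}
= \big\|\widetilde\psi^{\vee}\big\|_{L^r(\mathbb{R}^d)} \|\psi(D-n) f\|_{L^p(\mathbb{R}^d)},
\]
where $1 + \frac 1q = \frac 1r + \frac 1p$, and $\widetilde\psi^{\vee} \in \mathcal{S}(\mathbb{R}^d) \subset L^r(\mathbb{R}^d)$ for every $1 \le r \le \infty$.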
We now introduce a randomization adapted to the uniform decomposition \eqref{B1}. For $j = 0, 1$, let $\{g_{n,j}\}_{n \in \mathbb{Z}^d}$ be a sequence of mean-zero complex-valued random variables on a probability space $(\Omega, \mathcal{F}, P)$ such that \begin{align} g_{-n,j}=\overline{g_{n,j}} \label{K1} \end{align} for all $n\in\mathbb{Z}^d$, $j=0,1$. In particular, $g_{0, j}$ is real-valued. Moreover, we assume that $\{g_{0,j}, \Re g_{n,j}, \Im g_{n,j}\}_{n\in\mathcal I, j=0,1}$ are independent, where the index set $\mathcal{I}$ is defined by \begin{equation*} \mathcal I=\bigcup_{k=0}^{d-1} \mathbb{Z}^k\times \mathbb{Z}_{+}\times \{0\}^{d-k-1}.
\end{equation*}
\noindent Note that $\mathbb{Z}^d = \mathcal I \cup (-\mathcal I)\cup \{0\}$. Then, given a pair $(u_0, u_1)$ of functions on $\mathbb{R}^d$, we define the \emph{Wiener randomization} $(u_0^\omega, u_1^\omega)$ of $(u_0,u_1)$ by \begin{align} (u_0^\omega, u_1^\omega) & = \bigg(\sum_{n \in \mathbb{Z}^d} g_{n,0} (\omega) \psi(D-n) u_0, \sum_{n \in \mathbb{Z}^d} g_{n,1} (\omega) \psi(D-n) u_1\bigg). \label{R1} \end{align}
\noindent See \cite{ZF, LM, BOP1, BOP2}. We emphasize that thanks to \eqref{K1}, this randomization has the desirable property that if $u_0$ and $u_1$ are real-valued, then their randomizations $u_0^\omega$ and $u_1^\omega$ defined in~\eqref{R1} are also real-valued.
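Indeed, this can be verified directly on the Fourier side. If $u_0$ is real-valued, then $\widehat{u_0}(-\xi) = \overline{\widehat{u_0}(\xi)}$, and hence, using $\psi(-\xi) = \overline{\psi(\xi)}$ and \eqref{K1},
\[
\overline{\widehat{u_0^\omega}(\xi)}
= \sum_{n \in \mathbb{Z}^d} \overline{g_{n,0}(\omega)}\, \overline{\psi(\xi - n)}\, \overline{\widehat{u_0}(\xi)}
= \sum_{n \in \mathbb{Z}^d} g_{-n,0}(\omega)\, \psi(n - \xi)\, \widehat{u_0}(-\xi)
= \widehat{u_0^\omega}(-\xi)
\]
after the change of summation index $n \mapsto -n$, which shows that $u_0^\omega$ is real-valued; the same computation applies to $u_1^\omega$.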
We make the following assumption on the
probability distributions $\mu_{n,j}$ of $g_{n, j}$:
there exists $c>0$ such that \begin{equation}
\int e^{\gamma \cdot x}d\mu_{n,j}(x)\leq e^{c|\gamma|^2}, \quad j = 0, 1, \label{B3} \end{equation}
\noindent for all $n \in \mathbb{Z}^d$, where (i) $\gamma$ ranges over $\mathbb{R}$ when $n = 0$, and (ii) $\gamma$ ranges over $\mathbb{R}^2$ when $n \in \mathbb{Z}^d \setminus \{0\}$ (identifying the complex-valued $g_{n,j}$ with $\mathbb{R}^2$-valued random variables). Note that \eqref{B3} is satisfied by standard complex-valued Gaussian random variables, standard Bernoulli random variables, and any random variables with compactly supported distributions.
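For instance, for a standard real-valued Gaussian random variable, the bound \eqref{B3} is the classical moment generating function computation: completing the square $\gamma x - \frac{x^2}{2} = -\frac{(x - \gamma)^2}{2} + \frac{\gamma^2}{2}$ gives
\[
\int_{\mathbb{R}} e^{\gamma x}\, d\mu(x)
= \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{\gamma x - \frac{x^2}{2}}\, dx
= e^{\frac{\gamma^2}{2}},
\]
so \eqref{B3} holds with any $c \ge \frac 12$; the complex-valued case follows by applying this to the independent real and imaginary parts.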
It is easy to see that, if $(u_0,u_1) \in \mathcal{H}^s(\mathbb{R}^d)$ for some $s \in \mathbb{R}$, then the Wiener randomization $(u_0^\omega, u_1^\omega)$ is almost surely in $\mathcal{H}^s(\mathbb{R}^d)$. Note that, under some non-degeneracy condition on the random variables $\{g_{n, j}\}$,
there is almost surely no gain from randomization in terms of differentiability (see, for example, Lemma B.1 in \cite{BT1}). Instead, the main feature of the Wiener randomization
\eqref{R1} is that $(u_0^\omega, u_1^\omega)$ behaves better in terms of integrability. More precisely, if $u_j \in L^2(\mathbb{R}^d)$, $j=0,1$, then the randomized function $u_j^\omega$ is almost surely in $L^p(\mathbb{R}^d)$ for any finite $p \geq 2$. See \cite{BOP1}.
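This integrability gain can be traced through the following standard chain of estimates (see \cite{BOP1} for the precise statements), which combines Minkowski's integral inequality, the Khintchine-type bound $\big\|\sum_n g_n a_n\big\|_{L^q(\Omega)} \lesssim \sqrt{q}\, \big(\sum_n |a_n|^2\big)^{\frac 12}$ implied by \eqref{B3}, and Bernstein's inequality \eqref{B2}: for $2 \le p \le q < \infty$ and $u \in L^2(\mathbb{R}^d)$, with $u^\omega = \sum_n g_n(\omega) \psi(D-n) u$,
\begin{align*}
\big\| \| u^\omega \|_{L^p_x} \big\|_{L^q(\Omega)}
& \le \big\| \| u^\omega \|_{L^q(\Omega)} \big\|_{L^p_x}
\lesssim \sqrt{q}\, \bigg\| \Big( \sum_{n \in \mathbb{Z}^d} |\psi(D-n) u|^2 \Big)^{\frac 12} \bigg\|_{L^p_x} \\
& \le \sqrt{q}\, \Big( \sum_{n \in \mathbb{Z}^d} \|\psi(D-n) u\|_{L^p_x}^2 \Big)^{\frac 12}
\lesssim \sqrt{q}\, \Big( \sum_{n \in \mathbb{Z}^d} \|\psi(D-n) u\|_{L^2_x}^2 \Big)^{\frac 12}
\sim \sqrt{q}\, \| u \|_{L^2_x},
\end{align*}
where the first and third steps use Minkowski's inequality ($q \ge p$ and $p \ge 2$, respectively), the last step uses the almost orthogonality of the projections $\psi(D-n)$, and finiteness of all moments then yields $u^\omega \in L^p(\mathbb{R}^d)$ almost surely.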
Using the Wiener randomization of the initial data, we prove the main results of this paper, which are the local and global almost sure existence of a unique solution for the quintic vNLW on $\mathbb{R}^2$. More precisely, we have the following main results.
{\bf{Main results.}} Fix $(u_0, u_1) \in \mathcal{H}^s(\mathbb{R}^2)$ for some $s \in \mathbb{R}$ and let $(u_0^\omega, u_1^\omega)$ denote the Wiener randomization of $(u_0, u_1)$ defined in \eqref{R1}. Consider the following defocusing quintic vNLW on $\mathbb{R}^2$ with the random initial data: \begin{align} \begin{cases} \partial_t^2 u - \Delta u + D \partial_t u + u^5 = 0\\
(u, \partial_t u) |_{t = 0} = (u_0^\omega, u_1^\omega) \end{cases} \quad (x, t) \in \mathbb{R}^2\times \mathbb{R}_+. \label{NLW1} \end{align}
\begin{theorem}\label{THM:LWP1} Let $s > -\frac 15$. Then, the quintic vNLW \eqref{NLW1} is almost surely {\bf{locally}} well-posed with respect to the Wiener randomization $(u_0^\omega, u_1^\omega)$ as initial data. More precisely, there exist $ C, c, \gamma>0$ and $0 < T_0 \ll1 $ such that for each $0< T \le T_0$, there exists a set $\Omega_T \subset \Omega$ with the following properties:
\begin{itemize} \item[\textup{(i)}]
$\displaystyle P(\Omega_T^c) < C \exp\bigg(-\frac{c}{T^\gamma \|(u_0, u_1)\|_{\mathcal{H}^s}^2 }\bigg)$,
\item[\textup{(ii)}] For each $\omega \in \Omega_T$, there exists a \textup{(}unique\textup{)} local-in-time solution $u$ to \eqref{NLW1}
with $(u, \partial_t u) |_{t = 0} = (u_0^\omega, u_1^\omega)$ in the class \begin{align*} V(t)(u_0^\omega, u_1^\omega) + C([0, T]; H^{s_0} (\mathbb{R}^2)) \cap L^{5+\delta}([0, T]; L^{10}(\mathbb{R}^2))
\end{align*}
\noindent for some $s_0 = s_0(s)> \frac 35$, sufficiently close to $\frac 35$, and small $\delta> 0$ such that $s_0 \geq 1 - \frac 1{5+\delta} - \frac 2{10}$. Here, $V(t)$ denotes the linear propagator for the viscous wave equation defined in \eqref{A1}.
\end{itemize}
\end{theorem}
\begin{remark}\rm
Let $k_0$ be the smallest integer such that $k_0 \ge T_0^{-1}$. Then, by setting \[ \Sigma = \bigcup_{k = k_0}^\infty \Omega_{k^{-1}}, \]
\noindent we have: \begin{enumerate} \item $P(\Sigma) = 1$, and \item for each $\omega\in \Sigma$, there exists a (unique) local-in-time solution~$u$ to \eqref{NLW1}
with $(u, \partial_t u) |_{t = 0} = (u_0^\omega, u_1^\omega)$ on the time interval $[0, T_\omega]$ for some $T_\omega > 0$. More specifically, for $\omega \in \Omega_{k^{-1}}$, the random local existence time $T_\omega$ is given by $T_\omega = k^{-1}$. \end{enumerate} \end{remark}
The proof of Theorem \ref{THM:LWP1} is based on
the first order expansion \cite{BO96, BT1, CO, BOP1, BOP2, KC}: \begin{align} u = z + v, \label{exp1} \end{align}
\noindent where $z = z^\omega$ denotes the random {\emph{linear}} solution given by \begin{align} z(t) = V(t) (u_0^\omega, u_1^\omega). \label{z1} \end{align}
\noindent Then, \eqref{NLW1} can be rewritten as \begin{align} \begin{cases} \partial_t^2 v - \Delta v + D \partial_t v + (v + z)^5 =0\\
(v, \partial_t v) |_{t = 0} = (0, 0) \end{cases} \label{NLW5} \end{align}
\noindent and we study the fixed point problem \eqref{NLW5} for $v$. In contrast with \cite{KC}, where the proof of almost sure local well-posedness was based on the Strichartz estimate for the viscous wave equation with the diagonal Strichartz space $L^6([0, T]; L^6(\mathbb{R}^2))$, we prove Theorem \ref{THM:LWP1} using the Schauder estimate for the Poisson kernel (Lemma \ref{LEM:Sch}) and working in the non-diagonal space $ L^{5+\delta}([0, T]; L^{10}(\mathbb{R}^2))$. This was important for obtaining higher regularity of $\vec{v}$ (namely, $\mathcal{H}^1$ regularity), which allowed us to show boundedness of the energy; this regularity is not otherwise attainable using Strichartz estimates alone. See Section \ref{SEC:LWP} for details.
Once local almost sure well-posedness is established, we obtain the following global almost sure well-posedness result:
\begin{theorem}\label{THM:GWP1} Let $s > -\frac 15$. Then, the defocusing quintic vNLW \eqref{NLW1} is almost surely globally well-posed with respect to the Wiener randomization $(u_0^\omega, u_1^\omega)$ as initial data. More precisely, there exists a set $\Sigma \subset \Omega$ with $P(\Sigma) = 1$ such that, for each $\omega \in \Sigma$, there exists a \textup{(}unique\textup{)} global-in-time solution $u$ to \eqref{NLW1}
with $(u, \partial_t u) |_{t = 0} = (u_0^\omega, u_1^\omega)$ in the class \begin{align*} V(t)(u_0^\omega, u_1^\omega) + C(\mathbb{R}_+; H^{s_0} (\mathbb{R}^2)) \end{align*}
\noindent for some $s_0 > \frac 35$.
\end{theorem}
Here, the uniqueness holds in the following sense. Given any $t_0 \in \mathbb{R}_+$, there exists a random time interval $I(t_0, \omega) \ni t_0$ such that the solution $u = u^\omega$ constructed in Theorem~\ref{THM:GWP1} is unique in \begin{align*} V(t)(u_0^\omega, u_1^\omega) + C(I(t_0, \omega); H^{s_0} (\mathbb{R}^2)) \cap L^{5+\delta}(I(t_0, \omega); L^{10}(\mathbb{R}^2)), \end{align*}
\noindent where $s_0 >\frac 35$ and $\delta > 0$ are as in Theorem \ref{THM:LWP1}.
The main idea of the proof of Theorem \ref{THM:GWP1} is based on a probabilistic energy estimate; see, e.g., \cite{BT1, OP}. With $\vec v = (v, \partial_t v)$, a smooth solution $\vec v $ to the defocusing vNLW \eqref{NLW5} (with $z \equiv 0$ and general initial data) satisfies monotonicity in time of the standard NLW energy: \begin{align}
E(\vec v) = \frac 12 \int_{\mathbb{R}^2} |\nabla v|^2 dx + \frac 12 \int_{\mathbb{R}^2} (\partial_t v)^2 dx + \frac 16 \int_{\mathbb{R}^2} v^6 dx . \label{E0} \end{align}
\noindent Indeed, a simple integration by parts with \eqref{NLW5} (with $z \equiv 0$) shows \begin{align*}
\partial_t E(\vec v) = - \|\partial_t v\|_{\dot H^\frac{1}{2}}^2 \leq 0. \end{align*}
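\noindent In more detail, multiplying \eqref{NLW5} (with $z \equiv 0$) by $\partial_t v$ and integrating over $\mathbb{R}^2$, we have
\begin{align*}
0 = \int_{\mathbb{R}^2} \partial_t v \big( \partial_t^2 v - \Delta v + D \partial_t v + v^5 \big)\, dx
= \partial_t E(\vec v) + \big\| D^\frac{1}{2} \partial_t v \big\|_{L^2}^2,
\end{align*}

\noindent where we integrated by parts in the second term and used the self-adjointness of $D^\frac{1}{2}$ in the third.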
\noindent For our problem, we proceed with
the first order expansion \eqref{exp1}
and thus the residual term $v = u-z$ only satisfies the perturbed vNLW \eqref{NLW5}. As such, the monotonicity of the energy $E(\vec v)$ no longer holds. Nonetheless, by using the time integration by parts trick introduced by Oh and Pocovnicu \cite{OP}, we establish a Gronwall type estimate for $E(\vec v)$ to prove almost sure global well-posedness.
One important point to note is that, as written, the local theory (Theorem \ref{THM:LWP1}) does not provide sufficient regularity (i.e.~$\mathcal{H}^1(\mathbb{R}^2)$) for $\vec v$ to guarantee finiteness of the energy $E(\vec v)$. By using the Schauder estimate (Lemma \ref{LEM:Sch}), however, we can show that the residual term $\vec v(t)$ is smoother and indeed lies in $\mathcal{H}^1(\mathbb{R}^2)$ for strictly positive times. It is at this step that the dissipative nature of the equation plays an important role in the globalization argument. See Subsection~\ref{SUBSEC:GWP1} for details.
We conclude this introduction with a few remarks.
First, we expect that probabilistic {\emph{continuous dependence}}, a notion introduced by Burq and Tzvetkov in \cite{BT3}, see also \cite{Poc}, can be extended from the range $s > -\frac 16$, proved by Kuan and \v{C}ani\'c in \cite{KC}, to the entire range $s > -\frac 15$. We omit details here.
Secondly, we note that it is also possible to establish almost sure global well-posedness with respect to the Wiener randomization for the defocusing vNLW~\eqref{vNLW1b} on $\mathbb{R}^2$
with a {\emph{general defocusing nonlinearity}} $|u|^{p-1} u$ for $p < 5$, provided that $s > - \frac 1p$. For $p \leq 3$, a straightforward Gronwall type argument by Burq and Tzvetkov \cite{BT3} applies. See also \cite{Poc}. For $3 < p < 5$, one can adapt the argument in Sun and Xia \cite{SX} which interpolates the $p = 3$ case \cite{BT3} and the $p = 5$ case \cite{OP} in the context of the usual NLW. See Remark \ref{REM:2}\,(ii) for a discussion on the $p > 5$ case.
Finally, we remark that the derivation discussed above, but with a {\emph{random external forcing}} $F_{\text{ext}}$ in \eqref{pressuredyn}, leads to a stochastic version of vNLW. In \cite{KC2}, Kuan and \v{C}ani\'c studied the following stochastic vNLW on $\mathbb{R}^2$ with a multiplicative space-time white noise forcing: \begin{align} \partial_t^2 u - \Delta u + 2\mu D \partial_t u = F(u) \xi , \label{vNLWx} \end{align} where $\xi$ denotes a space-time white noise on $\mathbb{R}^2\times \mathbb{R}_+$. Under a suitable assumption on $F$, they proved global well-posedness of \eqref{vNLWx}. We also note that well-posedness of stochastic vNLW with a (singular) additive noise on the two-dimensional torus $\mathbb{T}^2= (\mathbb{R}/\mathbb{Z})^2$ was recently considered in \cite{LO, Liu}.
The results of this work shed new light on this active and important research area by providing the first global well-posedness result for a prototype fluid-structure interaction problem with randomly perturbed rough initial data in a regime where the corresponding deterministic problem is ill-posed.
We begin by presenting the estimates that will be used to obtain the local and global almost sure well-posedness results.
\section{Basic estimates}
In this section, we go over the deterministic and probabilistic linear estimates that will be the basis of the proofs of the main results. For this purpose, we introduce the following {\bf{notation}}:
\begin{itemize} \item We write $ A \lesssim B $ to denote an estimate of the form $ A \leq CB $ for some $C > 0$. Similarly, we write $ A \sim B $ to denote $ A \lesssim B $ and $ B \lesssim A $ and use $ A \ll B $ when we have $A \leq c B$ for small $c > 0$. \item We define the operators $D$ and $\jb{\nabla}$ by setting \begin{align}
D = |\nabla| = \sqrt{-\Delta} \qquad \text{and}\qquad \jb{\nabla} = \sqrt{ 1- \Delta}, \label{nb} \end{align}
viewed as Fourier multiplier operators with multipliers $|\xi|$ and $\jb{\xi}$, respectively. \end{itemize}
\subsection{Linear operators and the relevant linear estimates}
By writing \eqref{NLW1} in the Duhamel formulation, we have \begin{align*} u(t) = V(t) (u_0^\omega, u_1^\omega) - \int_0^t W(t - t') u^5(t') dt',
\end{align*}
\noindent where the linear propagator $V(t)$ is defined by \begin{align} V(t) (u_0, u_1) = e^{- \frac{D}{2}t} \bigg(\cos \big(\tfrac{\sqrt{3}}{2} Dt\big) + \frac{1}{\sqrt{3}}\sin \big(\tfrac{\sqrt{3}}{2} Dt\big) \bigg)u_0 + e^{- \frac{D}{2}t}\frac{\sin \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\tfrac{\sqrt{3}}{2} D} u_1, \label{A1} \end{align}
\noindent and $W(t)$ is defined by \begin{align} W(t) = e^{- \frac{D}{2}t} \frac{\sin \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\tfrac{\sqrt{3}}{2} D}. \label{A2} \end{align}
\noindent By letting \begin{align} P(t) = e^{-\frac{D}{2}t} \label{A2a} \end{align}
\noindent
denote the Poisson kernel (with a parameter $\frac{t}{2}$) and \begin{align*} S(t) = \frac{\sin \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\tfrac{\sqrt{3}}{2} D},
\end{align*}
\noindent we have \begin{align*} W(t) = P(t)\circ S(t) .
\end{align*}
\noindent By defining $U(t)$ by \begin{align*} U (t) (u_0, u_1) = \Big(\cos \big(\tfrac{\sqrt{3}}{2} Dt\big) + \frac{1}{\sqrt{3}}\sin \big(\tfrac{\sqrt{3}}{2} Dt\big) \Big)u_0 + \frac{\sin \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\tfrac{\sqrt{3}}{2} D} u_1,
\end{align*}
\noindent we have \begin{align} V(t) = P(t) \circ U(t) . \label{A6} \end{align}
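The formulas \eqref{A1}--\eqref{A2} and the factorization \eqref{A6} can be read off from an elementary computation on the Fourier side. Taking the spatial Fourier transform of the linear viscous wave equation $\partial_t^2 u - \Delta u + D \partial_t u = 0$ gives, for each fixed $\xi$, the ODE
\begin{align*}
\partial_t^2 \widehat u(\xi, t) + |\xi| \partial_t \widehat u(\xi, t) + |\xi|^2 \widehat u(\xi, t) = 0,
\end{align*}

\noindent whose characteristic roots are
\begin{align*}
\lambda_\pm (\xi) = \frac{- |\xi| \pm \sqrt{|\xi|^2 - 4 |\xi|^2}}{2} = - \frac{|\xi|}{2} \pm i \frac{\sqrt 3}{2} |\xi|.
\end{align*}

\noindent Hence every solution is a linear combination of $e^{-\frac{|\xi|}{2} t} e^{\pm i \frac{\sqrt 3}{2}|\xi| t}$; imposing the initial conditions $(\widehat u, \partial_t \widehat u)|_{t = 0} = (\widehat u_0, \widehat u_1)$ yields \eqref{A1}, and the common damping factor $e^{-\frac{|\xi|}{2}t}$ is precisely the symbol of the Poisson kernel $P(t)$ in \eqref{A2a}.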
We first recall the Strichartz estimates for the homogeneous linear viscous wave equation (Theorem 3.2 in \cite{KC}). Given $\sigma > 0$, we say that a pair $(q, r)$ is $\sigma$-admissible if $2 \leq q, r \leq \infty$ with $(q, r, \sigma) \ne (2, \infty, 1)$ and \begin{align} \frac 2q + \frac{2\sigma}{r} \leq \sigma. \label{admis1} \end{align}
\begin{lemma}\label{LEM:Str}
Given $\sigma > 0$, let $(q, r)$ be a $\sigma$-admissible pair with $r < \infty$. Then, a solution $u$ to the homogeneous linear wave equation on $\mathbb{R}^d$: \begin{align*} \begin{cases} \partial_t^2 u - \Delta u + D \partial_t u = 0\\
(u, \partial_t u) |_{t = 0} = (u_0, u_1) \end{cases} \end{align*}
\noindent satisfies \begin{align}
\| (u, \partial_t u) \|_ {L^\infty(\mathbb{R}_+; \mathcal{H}^s_x(\mathbb{R}^d)) } +
\| u \|_{L^q(\mathbb{R}_+; L^r_x(\mathbb{R}^d))} \lesssim
\|(u_0, u_1) \|_{\mathcal{H}^s(\mathbb{R}^d)}, \label{hStr} \end{align}
\noindent provided that the following scaling condition holds: \begin{align}
\frac{1}{q} + \frac dr = \frac d2- s. \label{admis2} \end{align}
\end{lemma}
\begin{remark}\rm In view of the scaling condition \eqref{admis2}, if a pair $(q, r)$ satisfies \eqref{admis2} for some $s \geq 0$, then it is $\sigma$-admissible with $\sigma = d$. \end{remark}
\begin{remark}\rm We remark that the bounding constant in the estimate \eqref{hStr} depends only on $\sigma > 0$. See \cite{KC} for details. \end{remark}
\begin{remark}\rm In the usual Strichartz estimates for the homogeneous wave equation, one must impose the additional restriction $0 \le s \le 1$. This restriction is not present in the corresponding estimate for the homogeneous viscous wave equation in Lemma \ref{LEM:Str}. Although a restriction on $s$ is not explicitly stated in Lemma \ref{LEM:Str}, $s$ does have a limited range of possible values, due to the constraints $\sigma > 0$ and $2 \le q, r \le \infty$ with $(q, r, \sigma) \ne (2, \infty, 1)$ in \eqref{admis1}, together with the scaling condition \eqref{admis2}. The exponent $s$ can take values in the range $-\frac 12 < s \le \frac d2$, depending on the choice of parameters. One attains the lower end of the range by taking $q = 2$ and $r$ arbitrarily close to $2$, while the upper endpoint $s = \frac d2$ is attained by taking $q = r = \infty$. \end{remark}
Next, we state a Schauder-type estimate for the Poisson kernel $P(t)$, which allows us to exploit the dissipative nature of the dynamics.
\begin{lemma}\label{LEM:Sch} Let $ 1 \leq p \leq q \leq \infty$ and $\alpha \geq 0$. Then, we have \begin{align}
\| D^\alpha P(t) f\|_{L^q(\mathbb{R}^d)} \lesssim t^{- \alpha - d(\frac{1}{p} - \frac{1}{q})}\| f\|_{L^p(\mathbb{R}^d)} \label{P1} \end{align}
\noindent for any $t > 0$. \end{lemma}
\begin{proof} Let $K_t(x)$ denote the kernel for $P(t)$, whose Fourier transform is given by
$\widehat K_t(\xi) = e^{-\frac{|\xi|}{2}t}$. Then, we have \begin{align}
K_t(x) = t^{-d} K_1(t^{-1}x),
\label{P2} \end{align}
\noindent where $K_1(x)$ satisfies
\[ K_1(x) = \frac{c_1}{(c_2 + |x|^2)^{\frac{d + 1}{2}}}\]
\noindent for some $c_1, c_2 > 0$. In particular, we have $K_1 \in L^r(\mathbb{R}^d)$ for any $1\leq r \leq \infty$.
We first consider the case $\alpha = 0$. For $1 \leq r \leq \infty$ with $\frac{1}{r} = \frac{1}{q} - \frac{1}{p} + 1$, it follows from \eqref{P2} that \begin{align}
\| K_t\|_{L^r} =t^{-d (1 - \frac{1}{r})}
\| K_1\|_{L^r} = C_r t^{-d (\frac{1}{p} - \frac{1}{q})}. \label{P3} \end{align}
\noindent Then, \eqref{P1} follows from Young's inequality and \eqref{P3}.
Next, we consider the case $\alpha > 0$. Noting that $D^\alpha P(t) f = (D^\alpha K_t) *f$, we need to study the scaling property of $D^\alpha K_t$. On the Fourier side, we have
\[ \widehat{D^\alpha K_t}(\xi) = |\xi|^{\alpha} e^{-\frac{|\xi|}{2}t}
= t^{-\alpha} (t |\xi|)^{\alpha} e^{-\frac{|\xi|t}{2}} = t^{-\alpha} \widehat {D^\alpha K_1}(t \xi). \]
\noindent Namely, we have \begin{align} D^\alpha K_t(x) = t^{-d-\alpha} (D^\alpha K_1)(t^{-1}x). \label{P4} \end{align}
\noindent Then, proceeding as before, the bound \eqref{P1} follows from Young's inequality and \eqref{P4}. \end{proof}
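For instance, taking $p = q = r$ in \eqref{P1} yields
\[ \|D^\alpha P(t) f \|_{L^r(\mathbb{R}^d)} \lesssim t^{-\alpha} \|f\|_{L^r(\mathbb{R}^d)} \]

\noindent for $t > 0$; that is, the Poisson kernel trades $\alpha$ derivatives for a time singularity $t^{-\alpha}$, which is integrable in $L^q_t$ near $t = 0$ precisely when $q\alpha < 1$. This is the form in which Lemma \ref{LEM:Sch} is applied in the probabilistic estimates below.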
\subsection{Probabilistic estimates}
In this subsection, we establish certain probabilistic Strichartz estimates. See also Lemma 5.3 in \cite{KC}.
We first recall the following probabilistic estimate. See \cite{BT1} for the proof.
\begin{lemma} \label{LEM:R1} Given $j = 0, 1$, let $\{g_{n, j}\}_{n\in \mathbb{Z}^d}$ be a sequence of mean-zero complex-valued random variables satisfying \eqref{B3}, as in Subsection \ref{SUBSEC:Wiener}.
Then, there exists $C>0$ such that
Then, there exists $C>0$ such that
\[ \bigg\| \sum_{n \in \mathbb{Z}^d} g_{n, j}(\omega) c_n\bigg\|_{L^p(\Omega)}
\leq C \sqrt{p} \| c_n\|_{\ell^2_n(\mathbb{Z}^d)}\]
\noindent for any $j = 0, 1$, any finite $p \geq 2$, and any sequence $\{c_n\} \in \ell^2(\mathbb{Z}^d)$. \end{lemma}
We now establish the first probabilistic Strichartz estimate.
\begin{proposition}\label{PROP:PS}
Given $(u_0, u_1) \in \mathcal{H}^0(\mathbb{R}^d)$, let $(u_0^\omega, u_1^\omega)$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{B3}. Then, given any $2\leq q, r<\infty$ and $\alpha \geq 0$, satisfying $q\alpha < 1$, there exist $C, c>0$ such that \begin{align}
P\Big(\|D^\alpha V(t) (u_0^\omega,& u_1^\omega) \|_{L^q([0, T]; L^r_x)}> \lambda\Big)
\leq C\exp\Bigg(-c\frac{\lambda^2}{T^{\frac 2 q -2\alpha } \| (u_0, u_1) \|_{\mathcal{H}^0}^{2}}\Bigg) \label{PS1} \end{align}
\noindent for any $T > 0$ and $\lambda > 0$.
\end{proposition}
\begin{remark}\label{REM:PS1}\rm (i) From \eqref{PS1}, we conclude that
\[ P\Big( \|D^\alpha V(t) (u_0^\omega, u_1^\omega)\|_{L^q([0, T]; L^r_x)} \le \lambda \Big) \longrightarrow 1, \]
\noindent
as $\lambda \to \infty$ for fixed $T > 0$, or as $T \searrow 0$ for fixed $\lambda > 0$.
\noindent (ii) Let $\alpha_0 \ge 0$ and $q\alpha_0 < 1$. Then, by applying Proposition \ref{PROP:PS} with $\alpha = 0$ and $\alpha = \alpha_0$, we have \begin{align}
P\Big(\|\jb{\nabla}^{\alpha_0} V(t) (u_0^\omega,& u_1^\omega) \|_{L^q([0, T]; L^r_x)}> \lambda\Big)
\leq C\exp\Bigg(-c\frac{\lambda^2}{T^{\frac 2 q -2\alpha_0 } \| (u_0, u_1) \|_{\mathcal{H}^0}^{2}}\Bigg) \label{PS1a} \end{align}
\noindent for any $0 < T \le 1$ and $\lambda > 0$, where $\jb{\nabla} = \sqrt {1-\Delta}$ is as in \eqref{nb}. We also have \begin{align}
P\Big(\|\jb{\nabla}^{\alpha_0} V(t) (u_0^\omega,& u_1^\omega) \|_{L^q([0, T]; L^r_x)}> \lambda\Big)
\leq C\exp\Bigg(-c\frac{\lambda^2}{T^{\frac 2 q } \| (u_0, u_1) \|_{\mathcal{H}^0}^{2}}\Bigg) \label{PS1b} \end{align}
\noindent for any $ T \ge 1$ and $\lambda > 0$.
\end{remark}
See also Lemma 5.3 in \cite{KC}, where the case $q = r = 6$ was treated. The proof of Proposition \ref{PROP:PS} follows the usual proofs of the probabilistic Strichartz estimates via Minkowski's integral inequality \cite{BT1, CO, BOP1} but also utilizes
the Schauder estimate (Lemma \ref{LEM:Sch}).
\begin{proof}
From \eqref{A6} and Lemma \ref{LEM:Sch} followed by Minkowski's integral inequality, we have \begin{align} \begin{split}
\Big\| \|D^\alpha V(t) (u_0^\omega,& u_1^\omega) \|_{L^q_t([0, T]; L^r_x)} \Big\|_{L^p(\Omega)}
\lesssim
\Big\| \big\| t^{-\alpha} \|U (t) (u_0^\omega, u_1^\omega) \|_{L^r_x} \big\|_{L^q_t([0, T])}\Big\|_{L^p(\Omega)}\\
& \leq
\Big \| \big\| t^{-\alpha} \|U (t) (u_0^\omega, u_1^\omega) \|_{L^p(\Omega)} \big\|_{L^r_x} \Big\|_{L^q_t([0, T])} \end{split} \label{PS2} \end{align}
\noindent for any finite $p \geq \max (q, r)$. By Lemma \ref{LEM:R1}, Minkowski's integral inequality, Bernstein's unit-scale inequality \eqref{B2},
and the boundedness of $U(t)$ from $\mathcal{H}^0(\mathbb{R}^d)$ into $L^2(\mathbb{R}^d)$, we obtain \begin{align} \begin{split} \eqref{PS2} & \lesssim \sqrt{p}\,
\Big \| t^{-\alpha} \big\| \|\psi(D-n) U(t) (u_0, u_1) \|_{\ell^2_n} \big\|_{L^r_x} \Big\|_{L^q_t([0, T])}\\ & \leq \sqrt{p}\,
\Big \| t^{-\alpha} \big\| \|\psi(D-n) U(t) (u_0, u_1) \|_{L^r_x} \big\|_{\ell^2_n} \Big\|_{L^q_t([0, T])}\\ & \lesssim \sqrt{p}\,
\Big \| t^{-\alpha} \| U(t) (u_0, u_1) \|_{L^2_x} \Big\|_{L^q_t([0, T])}\\ & \lesssim \sqrt{p} \, T^{\frac 1q - \alpha}
\| (u_0, u_1) \|_{\mathcal{H}^0}, \end{split} \label{PS3} \end{align}
\noindent where we used $q\alpha < 1$ in the last step. Then, the tail estimate \eqref{PS1} follows from \eqref{PS3} and Chebyshev's inequality. See the proof of Lemma 3 in \cite{BOP1}.\footnote{Lemma 2.2 in the arXiv version.} \end{proof}
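For the reader's convenience, we sketch this last step. Write $X = \|D^\alpha V(t) (u_0^\omega, u_1^\omega) \|_{L^q([0, T]; L^r_x)}$ and $K = T^{\frac 1q - \alpha} \| (u_0, u_1) \|_{\mathcal{H}^0}$, so that \eqref{PS3} reads $\|X\|_{L^p(\Omega)} \leq C_1 \sqrt{p}\, K$ for all finite $p \geq \max(q, r)$ and some $C_1 > 0$. Then, by Chebyshev's inequality,
\[ P(X > \lambda) \leq \lambda^{-p}\, \mathbb{E}\big[X^p\big] \leq \bigg( \frac{C_1 \sqrt{p}\, K}{\lambda}\bigg)^p. \]

\noindent Choosing $p = \big(\frac{\lambda}{C_1 e K}\big)^2$, which satisfies $p \geq \max(q, r)$ for $\lambda \gtrsim K$ (the bound for $\lambda \lesssim K$ being trivial after adjusting the constants $C$ and $c$), we obtain $P(X > \lambda) \leq e^{-p} = \exp\big(\! -c \frac{\lambda^2}{K^2}\big)$, which is \eqref{PS1}.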
In establishing almost sure global well-posedness, we need to introduce several additional linear operators.
Define $\widetilde V(t) $ by \begin{align} \begin{split} \widetilde V(t) (u_0, u_1) & = \jb{\nabla}^{-1} \partial_t V(t) \\ & = - \frac{2\sqrt 3}{3}\frac{D}{\jb{\nabla}} e^{- \frac{D}{2}t} \sin \big(\tfrac{\sqrt{3}}{2} Dt\big) u_0\\ & \quad + e^{- \frac{D}{2}t} \bigg(-\frac 12 \frac{D}{\jb{\nabla}}\frac{\sin \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\tfrac{\sqrt{3}}{2} D} + \frac{\cos \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\jb{\nabla}} \bigg)
u_1.
\end{split} \label{z3} \end{align}
\noindent Then, \noindent defining $\widetilde U(t)$ by \begin{align*} \begin{split} \widetilde U (t) (u_0, u_1) & = - \frac{2\sqrt 3}{3}\frac{D}{\jb{\nabla}} \sin \big(\tfrac{\sqrt{3}}{2} Dt\big) u_0\\ & \quad + \bigg(-\frac 12 \frac{D}{\jb{\nabla}}\frac{\sin \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\tfrac{\sqrt{3}}{2} D} + \frac{\cos \big(\tfrac{\sqrt{3}}{2} Dt\big)}{\jb{\nabla}} \bigg)
u_1,
\end{split}
\end{align*}
\noindent we have \begin{align} \widetilde V(t) = P(t) \circ \widetilde U(t) . \label{z5} \end{align}
Next, we state
a probabilistic estimate involving the $L^\infty_t$-norm,
which plays an important role in establishing an energy bound
for almost sure global well-posedness. The proof is based on an adaptation of the proof of Proposition 3.3 in \cite{OP} combined with the Schauder estimate (Lemma \ref{LEM:Sch}).
\begin{proposition} \label{PROP:PS2}
Given a pair $(u_0, u_1)$ of real-valued functions defined on $\mathbb{R}^2$, let $(u_0^{\omega}, u_1^\omega)$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{B3}. Fix $T \gg 1 \geq T_0 > 0$ and let $V^*(t) = V(t)$ or $\widetilde V(t)$ defined in \eqref{A1} and \eqref{z3}, respectively. Then, given any $2\leq r\le \infty$, $\alpha \geq 0$, and $\varepsilon_0 > 0$, there exist $C, c>0$ such that \begin{align*}
P\Big(\|D^\alpha V^*(t) (u_0^\omega,& u_1^\omega) \|_{L^\infty([T_0, T]; L^r_x)}> \lambda\Big)
\leq CT \exp\Bigg(-c\frac{\lambda^2}{T^2 T_0^{-2\alpha} \| (u_0, u_1) \|_{\mathcal{H}^{\varepsilon_0}}^{2}}\Bigg)
\end{align*}
\noindent for any $\lambda > 0$.
\end{proposition}
\begin{proof} Let $U^*(t) = P(-t) \circ V^*(t)$. Then, from Lemma \ref{LEM:Sch} with \eqref{A6} or \eqref{z5}, we have \begin{align*} \begin{split}
\|D^\alpha V^*(t) (u_0^\omega,& u_1^\omega) \|_{L^\infty_t([T_0, T]; L^r_x)}
\lesssim
\big\| t^{-\alpha} \|U^* (t) (u_0^\omega, u_1^\omega) \|_{L^r_x} \big\|_{L^\infty_t([T_0, T])}\\
& \leq T_0^{-\alpha}
\|U^*(t) (u_0^\omega, u_1^\omega) \|_{L^\infty_t([T_0, T]; L^r_x)}. \end{split}
\end{align*}
\noindent As in the proof of Proposition 3.3 in \cite{OP}, the rest follows from Lemma 3.4 in \cite{OP}, which established similar $L^\infty_t$-bounds for the half-wave operators $e^{\pm i tD}$. \end{proof}
\begin{remark}\rm It is also possible to prove Proposition \ref{PROP:PS2}, using the Garsia-Rodemich-Rumsey inequality (\cite[Theorem A.1]{FV}). See, for example, Lemma 2.3 in \cite{GKOT} in the context of the stochastic nonlinear wave equation. \end{remark}
\section{Local well-posedness}\label{SEC:LWP}
In this section, we present the proof of Theorem \ref{THM:LWP1}. Instead of \eqref{NLW5} with the zero initial data, we study \eqref{NLW5} with general (deterministic) initial data $ (v_0, v_1)$:
\begin{align} \begin{cases} \partial_t^2 v - \Delta v + D \partial_t v + (v + z)^5 =0\\
(v, \partial_t v) |_{t = 0} = (v_0, v_1). \end{cases} \label{NLW6} \end{align} We recall from \eqref{z1} and \eqref{A1} that $z = V(t) (u_0^\omega, u_1^\omega)$ is the random linear solution with the randomized initial data $(u_{0}^{\omega}, u_{1}^{\omega})$ obtained via the Wiener randomization \eqref{R1} of the given deterministic initial data $(u_{0}, u_{1}) \in \mathcal{H}^{s}(\mathbb{R}^{2})$.
\begin{theorem}\label{THM:LWP2} Let $s > -\frac 15$. Fix $(v_0, v_1) \in \mathcal{H}^{s_0}(\mathbb{R}^2)$ for some $s_0 = s_0(s) > \frac 35$ sufficiently close to~$\frac 35$. Then, there exist $ C, c, \gamma>0$ and $0 < T_0 \ll1 $ such that for each $0< T \le T_0$, there exists a set $\Omega_T \subset \Omega$ with the following properties:
\begin{itemize} \item[\textup{(i)}] The following probability bound holds: \begin{equation}\label{probbound1}
\displaystyle P(\Omega_T^c) < C \exp\bigg(-\frac{c}{T^\gamma \|(u_0, u_1)\|_{\mathcal{H}^s}^2 }\bigg). \end{equation}
\item[\textup{(ii)}] For each $\omega \in \Omega_T$, there exists a \textup{(}unique\textup{)} solution $(v, \partial_t v)$ to \eqref{NLW6}
with $(v, \partial_t v) |_{t = 0} = (v_0, v_1)$ in the class \begin{align}
(v, \partial_t v) \in C([0, T]; \mathcal{H}^{s_0} (\mathbb{R}^2)) \qquad \text{and}\qquad v\in L^{5+\delta}([0, T]; L^{10}(\mathbb{R}^2)) \label{class1} \end{align}
\noindent for small $\delta> 0$ such that $s_0 \geq 1 - \frac 1{5+\delta} - \frac 2{10}$.
\end{itemize}
\end{theorem}
In Subsection \ref{SUBSEC:3.1}, we first state several linear estimates. We then present the proof of Theorem \ref{THM:LWP2} in Subsection \ref{SUBSEC:3.2}.
\subsection{Linear estimates} \label{SUBSEC:3.1}
In this subsection, we establish several nonhomogeneous linear estimates, which are slightly different from those in Theorem 3.3 in \cite{KC}.
\begin{lemma}\label{LEM:lin1} Let $W(t)$ be as in \eqref{A2}. Then, given sufficiently small $\delta > 0$, we have \begin{align}
\bigg\|\int_0^t W(t - t')F(t') dt'\bigg\|_{L^{5+\delta }_t([0, T]; L^{10}_x(\mathbb{R}^2))}
\lesssim \|F\|_{L^1([0, T]; L^2_x(\mathbb{R}^2))} \label{lin1} \end{align}
\noindent for any $0 < T \leq 1$.
\end{lemma}
\begin{proof} Let $\mathbf{P}_{\lesssim 1}$ be a smooth\footnote{Namely, given by a smooth Fourier multiplier.} projection onto
spatial frequencies $\{|\xi|\leq 1\}$ and set $\mathbf{P}_{\gg 1} = \textup{\bf Id} - \mathbf{P}_{\lesssim 1}$. In the following, we separately estimate the contributions from $\mathbf{P}_{\lesssim 1} F$ and $\mathbf{P}_{\gg 1}F$.
Let us first estimate the low frequency contribution. By Minkowski's integral inequality and Bernstein's unit-scale inequality \eqref{B2} with $|\sin x| \leq x$ for $x \geq 0$, we have \begin{align} \begin{split}
\bigg\|\int_0^t & W(t - t') \mathbf{P}_{\lesssim 1}F(t') dt'\bigg\|_{L^{5+\delta}_t([0, T]; L^{10}_x)}\\ & \leq
\bigg\|\int_0^t \|\mathbf 1_{[0, t]}(t') W(t - t')\mathbf{P}_{\lesssim 1}F(t')\|_{L^{10}_x}
dt'\bigg\|_{L^{5+\delta}_t([0, T])}\\ & \leq
\bigg\|\int_0^t (t-t') \|\mathbf 1_{[0, t]}(t')\mathbf{P}_{\lesssim 1}F(t')\|_{L^{2}_x}
dt'\bigg\|_{L^{5+\delta}_t([0, T])}\\
& \lesssim T^\theta \|F\|_{L^1([0, T]; L^2_x)} \end{split} \label{lin2} \end{align}
\noindent for some $\theta > 0$.
Next, we estimate the high frequency contribution. Note that the pair $(5+\delta, 10)$ is $\sigma$-admissible for $\sigma \geq \frac 12$ in the sense of \eqref{admis1}. Let \begin{align} s_0 = 1 - \frac{1}{5+\delta} - \frac{2}{10} = \frac{3}{5} + \delta_0 \label{lin2a} \end{align}
\noindent for some small $ \delta_0 = \delta_0(\delta)> 0$. Then, by Minkowski's integral inequality and the homogeneous Strichartz estimate (Lemma \ref{LEM:Str}), we have \begin{align} \begin{split}
\bigg\|\int_0^t& W(t - t')\mathbf{P}_{\gg1} F(t') dt'\bigg\|_{L^{5+\delta}_t([0, T]; L^{10}_x)}\\ & \leq
\int_0^T \|\mathbf 1_{[0, t]}(t') W(t - t')\mathbf{P}_{\gg 1}F(t')
\|_{L^{5+\delta}_t([0, T]; L^{10}_x)} dt'\\ & \lesssim
\int_0^T \|\mathbf{P}_{\gg1} F(t')
\|_{H^{s_0-1}_x} dt'\\
&
\lesssim \|F\|_{L^1([0, T]; L^2_x)}. \end{split} \label{lin3} \end{align}
The desired bound \eqref{lin1} then follows from \eqref{lin2} and \eqref{lin3}. \end{proof}
\begin{lemma}\label{LEM:lin2} Let $W(t)$ be as in \eqref{A2}. Then, given $0 \leq s \leq 1$, we have \begin{align}
\bigg\|\int_0^t W(t - t')F(t') dt'\bigg\|_{C([0, T]; H^s_x(\mathbb{R}^2))}
& \lesssim \|F\|_{L^1([0, T]; L^2_x(\mathbb{R}^2))}, \label{lin4}\\
\bigg\|\partial_t \int_0^t W(t - t')F(t') dt'\bigg\|_{C([0, T]; H^{s-1}_x(\mathbb{R}^2))}
& \lesssim \|F\|_{L^1([0, T]; L^2_x(\mathbb{R}^2))}, \label{lin5} \end{align}
\noindent for any $0 < T \leq 1$.
\end{lemma}
\begin{proof} The first estimate \eqref{lin4} follows from Minkowski's integral inequality with \eqref{A2}. As for the second estimate \eqref{lin5}, we first note from \eqref{A2} that \[ \partial_t \int_0^t W(t - t')F(t') dt' = \int_0^t \partial_t W (t - t') F(t') dt', \]
\noindent where \[ \partial_t W(t) = e^{- \frac{D}{2}t} \bigg(\cos \big(\tfrac{\sqrt{3}}{2} Dt\big) - \frac{1}{\sqrt{3}}\sin \big(\tfrac{\sqrt{3}}{2} Dt\big) \bigg).\]
\noindent Then, the second estimate \eqref{lin5} follows from Minkowski's integral inequality and the boundedness of $\partial_t W(t)$ on $H^{s-1}(\mathbb{R}^2)$. \end{proof}
\subsection{Local well-posedness} \label{SUBSEC:3.2}
We now present the proof of Theorem \ref{THM:LWP2}.
\begin{proof}[Proof of Theorem \ref{THM:LWP2}] Fix $s > -\frac{1}{5}$ and $(u_0, u_1) \in \mathcal{H}^s(\mathbb{R}^2)$. Then, there exists small $\delta > 0$ such that \begin{equation}\label{delta} s > -\frac{1}{5 + \delta}, \end{equation} and we fix this choice of $\delta > 0$ for the remainder of the proof.
Fix $C_0 > 0$ and define the event $\Omega_T = \Omega_T(C_0)$ by setting \begin{equation}
\Omega_{T} = \big\{ \omega \in \Omega: \|z\|_{L^{5+\delta}([0, T]; L^{10}_{x})} \le C_0
\big\}.
\label{O1} \end{equation}
\noindent Then, from
the probabilistic Strichartz estimate (Proposition \ref{PROP:PS})
(see also \eqref{PS1a})
with \eqref{z1}, \eqref{R1}, and
\eqref{delta}
(which guarantees $\alpha_0 q < 1$ in invoking \eqref{PS1a} with $\alpha_0 = -s$ and $q = 5+\delta$),
we have \begin{align} P(\Omega_T^c)
\leq C\exp\Bigg(-c\frac{C_0^2 }{T^{\frac 2 q +2s } \| (u_0, u_1) \|_{\mathcal{H}^s}^{2}}\Bigg) \label{PS1x} \end{align}
\noindent for any $0 < T \le 1$.
We remark that the choice of $C_0 > 0$ does not matter; its specific value affects only the size of $T_{0} \ll 1$ and the constants in the estimate~\eqref{probbound1}.
By writing \eqref{NLW6} in the Duhamel formulation, we have \begin{align*} v(t) = \Gamma_{(v_0, v_1), z} (v)(t) := V(t) (v_0, v_1) - \int_0^t W(t - t') (v+z)^5(t') dt'.
\end{align*}
\noindent For simplicity, we set $\Gamma = \Gamma_{(v_0, v_1), z} $. Let $\vec \Gamma(v) = (\Gamma(v), \partial_t \Gamma(v))$. Let $s_0 = s_0(\delta) = \frac 35+\delta_0$ as in \eqref{lin2a}. Then, given $T > 0$, define the solution space $Z(T)$ by setting \begin{align*} Z(T) = X(T) \times Y(T), \end{align*}
\noindent where $X(T)$ and $Y(T)$ are defined by \begin{align*} X(T) & = C([0, T]; H^{s_0}(\mathbb{R}^2))\cap L^{5+\delta}([0, T]; L^{10}(\mathbb{R}^2))\\ Y(T) & = C([0, T]; H^{s_0-1}(\mathbb{R}^2)).
\end{align*}
\noindent In order to prove Theorem \ref{THM:LWP2}, we show that there exists small $ 0 < T_{0} \ll 1$ such that $\vec \Gamma: (v, \partial_{t}v) \mapsto (\Gamma(v), \partial_t \Gamma(v))$ is a strict contraction on an appropriate closed ball in $Z(T)$ for any $0 < T \le T_{0}$ and for any $\omega \in \Omega_T$, where $\Omega_T$ is as in \eqref{O1}. The probability estimate~\eqref{probbound1} on $\Omega_T^c$ follows from \eqref{PS1x}.
Fix arbitrary $\omega \in \Omega_T$ for $0 < T \le T_{0}$, where $T_0$ is to be determined later.
Recall $\vec \Gamma(v) = (\Gamma(v), \partial_t \Gamma(v))$. Note that the ordered pair $(5 + \delta, 10)$ is $\sigma$-admissible for $\sigma \ge \frac{1}{2}$ in the sense of \eqref{admis1} and furthermore, it satisfies the scaling condition \eqref{admis2} with $s_{0}$ as defined in \eqref{lin2a}. Then, by Lemmas \ref{LEM:Str}, \ref{LEM:lin1}, and \ref{LEM:lin2} with \eqref{O1}, we have \begin{align*}
\|\vec \Gamma(v)\|_{Z(T)}
& \lesssim \| (v_0, v_1) \|_{\mathcal{H}^{s_0}}
+ \|(v+z)^5\|_{L^1([0, T]; L^2_x)}\\
& \lesssim \| (v_0, v_1) \|_{\mathcal{H}^{s_0}}
+ T^\theta \Big(\|v\|_{L^{5+\delta}([0, T]; L^{10}_x)}^5
+ \|z\|_{L^{5+\delta}([0, T]; L^{10}_x)}^5\Big)\\
& \lesssim \| (v_0, v_1) \|_{\mathcal{H}^{s_0}}
+ T^\theta \Big(\|\vec v\|_{Z(T)}^5 + C_0^5 \Big) \end{align*}
\noindent for some $\theta > 0$, where $\vec v = (v, \partial_t v)$.
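Here, the factor $T^\theta$ arises from H\"older's inequality in time: since $\|(v+z)^5(t)\|_{L^2_x} = \|(v+z)(t)\|_{L^{10}_x}^5$, we have
\[ \|(v+z)^5\|_{L^1([0, T]; L^2_x)} = \|v+z\|_{L^5([0, T]; L^{10}_x)}^5 \leq T^{\frac{\delta}{5+\delta}} \|v+z\|_{L^{5+\delta}([0, T]; L^{10}_x)}^5, \]

\noindent so that we may take $\theta = \frac{\delta}{5+\delta} > 0$.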
A similar computation yields the following difference estimate: \begin{align*}
\|\vec \Gamma(v) - \vec \Gamma(w)\|_{Z(T)} & \lesssim
\|(v+z)^5 - (w+z)^5\|_{L^1([0, T]; L^2_x)}\\
& \lesssim T^\theta \Big( \| v\|_{L^{5+\delta}([0, T]; L^{10}_x)}^4
+ \| w\|_{L^{5+\delta}([0, T]; L^{10}_x)}^4\\
& \hphantom{XXXX}+ \|z\|_{L^{5+\delta}([0, T]; L^{10}_x)}^4\Big)
\| v - w\|_{L^{5+\delta}([0, T]; L^{10}_x)}\\
& \lesssim T^\theta \Big( \|\vec v\|_{Z(T)}^4
+ \|\vec w\|_{Z(T)}^4 + C_0^4\Big)
\|\vec v - \vec w\|_{Z(T)}. \end{align*}
\noindent Hence
by choosing $T_{0} > 0$ sufficiently small, depending on the initial choice of $C_0 > 0$ and $\|(v_{0}, v_{1})\|_{\mathcal{H}^{s_{0}}}$, we see that $\vec \Gamma = \vec \Gamma_{(v_0, v_1), z} $ is a strict contraction on the ball in $Z(T)$ of radius $\sim
1 + \| (v_0, v_1) \|_{\mathcal{H}^{s_0}}$,
whenever $\omega \in \Omega_T$ and $0 < T \le T_{0}$. This proves almost sure local well-posedness of \eqref{NLW6} (and \eqref{NLW1}) for $s > -\frac 15$. This concludes the proof of Theorem~\ref{THM:LWP2} (and hence of Theorem \ref{THM:LWP1}). \end{proof}
Let us conclude this section by stating some corollaries and remarks. Given $N \in \mathbb{N}$, let $\mathbf{P}_{\le N}$ denote a smooth projection onto the (spatial) frequencies $\{|\xi|\leq N\}$. Then, consider the following perturbed vNLW: \begin{align} \begin{cases} \partial_t^2 v_N - \Delta v_N + D \partial_t v_N + (v_N + z_N)^5 =0\\
(v_N, \partial_t v_N) |_{t = 0} = (\mathbf{P}_{\le N}v_0, \mathbf{P}_{\le N}v_1), \end{cases} \label{NLW6b} \end{align}
\noindent where $z_N$ denotes the truncated random linear solution defined by \begin{align*} z_N(t) = V(t) (\mathbf{P}_{\le N}u_0^\omega, \mathbf{P}_{\le N}u_1^\omega).
\end{align*}
\noindent Then, a slight modification of the proof of Theorem \ref{THM:LWP2} yields the following approximation result.
\begin{corollary}\label{COR:LWP3} Let $s > -\frac 15$ and $s_0 > \frac 35$ be as in Theorem \ref{THM:LWP2}. Fix $(v_0, v_1) \in \mathcal{H}^{s_0}(\mathbb{R}^2)$. Let $\Omega_T$ be as in Theorem~\ref{THM:LWP2}. Furthermore, for each $\omega \in \Omega_T$, let
$(v, \partial_t v)$ be the solution to \eqref{NLW6} on $[0, T]$
with $(v, \partial_t v) |_{t = 0} = (v_0, v_1)$ constructed in Theorem \ref{THM:LWP2}. By possibly shrinking the local existence time $T$ by a constant factor \textup{(}while keeping the definition \eqref{O1} of $\Omega_T$ unchanged\textup{)}, for each $\omega \in \Omega_T$, the solution
$(v_N, \partial_t v_N)$
to \eqref{NLW6b} converges to $(v, \partial_t v)$ in the class \eqref{class1} as $N \to \infty$.
\end{corollary}
Next, consider the following perturbed vNLW: \begin{align} \begin{cases} \partial_t^2 v - \Delta v + D \partial_t v + (v + f)^5 =0\\
(v, \partial_t v) |_{t = t_0} = (v_0, v_1), \end{cases} \label{NLW6a} \end{align}
\noindent where $f$ is a given deterministic function. As a corollary to the proof of Theorem \ref{THM:LWP2}, we have the following local well-posedness result of \eqref{NLW6a}.
\begin{corollary}\label{COR:LWP4} Let $s > -\frac 15$, $s_0 > \frac 35$, and small $\delta > 0$ be as in Theorem \ref{THM:LWP2}. Fix $(v_0, v_1) \in \mathcal{H}^{s_0}(\mathbb{R}^2)$ and fix $t_0\in \mathbb{R}_+$. Suppose that \[f \in L^{5+\delta}([t_0, t_0 + 1]; L^{10}(\mathbb{R}^2)).\]
Then, there exists $T = T\big(\| (v_0, v_1) \|_{\mathcal{H}^{s_0}},
\|f\|_{L^{5+\delta}([t_0, t_0+ T]; L^{10}_x)}\big) >0$ and
a \textup{(}unique\textup{)} solution $(v, \partial_t v)$ to \eqref{NLW6a} on the time interval $[t_0, t_0 + T]$
with $(v, \partial_t v) |_{t = t_0} = (v_0, v_1)$ in the class \begin{align*}
(v, \partial_t v) \in C([t_0, t_0 + T]; \mathcal{H}^{s_0} (\mathbb{R}^2)) \qquad \text{and}\qquad v\in L^{5+\delta}([t_0, t_0 + T]; L^{10}(\mathbb{R}^2)).
\end{align*}
\end{corollary}
\begin{remark}\label{Remark3.6}\rm (i) In terms of the current approach based on the first order expansion \eqref{exp1}, the threshold $s = -\frac 15$ seems to be sharp. Since we need to measure the quintic power in $L^1$ in time, this forces us to measure the random linear solution essentially in $L^5$ in time. In view of Proposition \ref{PROP:PS}, local-in-time integrability of $t^s$ in $L^5$ requires $s > -\frac 15$. It is worthwhile to note that the regularity restriction $s > -\frac 15$ comes only from the temporal integrability and does not have anything to do with the spatial integrability.
With a $p$th power nonlinearity $|u|^{p-1}u$, $p > 1$, (in place of the quintic power $u^5$), a similar argument shows almost sure local well-posedness of \eqref{NLW1} for $s > -\frac 1p$, which is essentially sharp (in terms of the first order expansion). For $p \notin 2\mathbb{N} + 1$, the nonlinearity is not algebraic and thus we need to proceed as in \cite{OOP}, where probabilistic well-posedness of the nonlinear Schr\"odinger equations with non-algebraic nonlinearities was studied. See~\cite{Liu} for details. See also Remark \ref{REM:2}.
\noindent (ii) It would be of interest to investigate if higher order expansions, such as those in \cite{BOP3, OPTz}, give any improvement over Theorem \ref{THM:LWP2} on almost sure local well-posedness. One may also adapt the paracontrolled approach used for the stochastic NLW \cite{GKO2, OOTol, Bring, OOTol2} to study vNLW with random initial data. \end{remark}
\section{Global well-posedness}\label{SEC:GWP}
In this section, we prove almost sure global well-posedness of \eqref{NLW1}. As noted in \cite{CO, BOP2}, it suffices to prove the following
``almost'' almost sure global well-posedness result.
\begin{proposition}\label{PROP:aasGWP} Let $s> -\frac 15$. Given $(u_0, u_1) \in \mathcal{H}^s(\mathbb{R}^2)$, let $(u_0^\omega, u_1^\omega)$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{B3}. Then, given any $T, \varepsilon > 0$, there exists a set $ \Omega_{T, \varepsilon}\subset \Omega$ such that \begin{itemize} \item[\textup{(i)}] $P( \Omega_{T, \varepsilon}^c) < \varepsilon$,
\item[\textup{(ii)}] For each $\omega \in \Omega_{T, \varepsilon}$, there exists a \textup{(}unique\textup{)} solution $u$ to \eqref{NLW1} on $[0, T]$
with $(u, \partial_t u)|_{t = 0} = (u_0^\omega, u_1^\omega)$.
\end{itemize}
\end{proposition}
It is easy to see from the Borel-Cantelli lemma that almost sure global well-posedness (Theorem \ref{THM:GWP1}) follows once we prove ``almost'' almost sure global well-posedness stated in Proposition \ref{PROP:aasGWP} above. See~\cite{CO, BOP2}. Hence, the remaining part of this section is devoted to the proof of Proposition \ref{PROP:aasGWP}.
Fix $T \gg1 $. In order to extend our local-in-time result for the initial value problem \eqref{NLW5} to a result on $[0, T]$ for arbitrary $T > 0$, we consider \eqref{NLW6a} with $f = z = V(t)(u_{0}^{\omega}, u_{1}^{\omega})$, given explicitly by \begin{align} \begin{cases} \partial_t^2 v - \Delta v + D \partial_t v + (v + z)^5 =0\\
(v, \partial_t v) |_{t = t_0} = (v_0, v_1), \end{cases} \label{NLWglobal} \end{align} where $t_{0} \in \mathbb{R}_+$. In view of Corollary \ref{COR:LWP4} and the almost sure boundedness of the $L^{5+\delta}([0, T]; L^{10}_x(\mathbb{R}^2))$-norm of the random linear solution $z (t)= V(t) (u_0^\omega, u_1^\omega)$, guaranteed by the probabilistic Strichartz estimate (Proposition \ref{PROP:PS}), it suffices to control the $\mathcal{H}^{s_0}$-norm of the remainder term $\vec v = (v, \partial_t v)$, where $s_0 = \frac 35 +\delta_0$ is as in \eqref{lin2a} and the remainder term $v$ satisfies the initial value problem \eqref{NLW5}. This will allow us to extend the local-in-time result in Theorem \ref{THM:LWP2} to $[0, T]$ by iteratively applying Corollary \ref{COR:LWP4}. In the next subsection, we first show a gain of regularity: $(v(t), \partial_t v(t))$ indeed belongs to $\mathcal{H}^1(\mathbb{R}^2)$ as soon as $t > 0$. The problem is then reduced to controlling the growth of the energy $E(\vec v)$ in~\eqref{E0}, associated with the standard nonlinear wave equation, since the energy $E(\vec v)$ controls the $\mathcal{H}^{1}$-norm (and hence the $\mathcal{H}^{s_{0}}$-norm) of the remainder term $(v, \partial_{t}v)$, as needed to establish the result. See Subsection \ref{SUBSEC:GWP2}.
\subsection{Gain of regularity}\label{SUBSEC:GWP1} Consider the initial value problem \eqref{NLWglobal}. Fix $s > -\frac 15$ and $s_0 = \frac 35+\delta_0$ with small $\delta_0 > 0$ as in (the proof of) Theorem \ref{THM:LWP2}. Let $(u_0^\omega, u_1^\omega)$ be the Wiener randomization of a given deterministic pair $(u_0, u_1)\in \mathcal{H}^s(\mathbb{R}^2)$ and fix $(v_0, v_1) \in \mathcal{H}^{s_0}(\mathbb{R}^2)$.
Let $T \gg 1$ and let $z(t) = V(t) (u_0^\omega, u_1^\omega)$ be
the random linear solution.
Then, it follows from
the probabilistic Strichartz estimate (Proposition \ref{PROP:PS})
(see also \eqref{PS1b}) that there exists an almost surely finite
random constant $C_\omega = C_\omega (T)> 0$ such that \begin{align}
\| z\|_{L^{5+\delta}([0, T]; L^{10}_x)} \leq C_\omega. \label{X1} \end{align}
\noindent Fix a good $\omega\in \Omega$ such that $C_\omega$ in \eqref{X1} is finite. Then, from Corollary \ref{COR:LWP4}, we see that there exist $\tau_\omega > 0$ and a unique solution $\vec v = (v, \partial_t v)$ to \eqref{NLWglobal} on the time interval $[t_0, t_0 + \tau_\omega]$
with $(v, \partial_t v) |_{t = t_0} = (v_0, v_1)$ in the class \begin{align*}
(v, \partial_t v) \in C([t_0, t_0 + \tau_\omega]; \mathcal{H}^{s_0} (\mathbb{R}^2)) \qquad \text{and}\qquad v\in L^{5+\delta}([t_0, t_0 + \tau_\omega]; L^{10}(\mathbb{R}^2)).
\end{align*}
We show that the solution $\vec v = (v, \partial_t v)$ to~\eqref{NLWglobal} in fact belongs to $C((t_0, t_0 + \tau_\omega]; \mathcal{H}^1(\mathbb{R}^2))$, thanks to the smoothing due to the Poisson kernel $P(t)$ in \eqref{A2a}. Fix $t > t_0$. By \eqref{A6} and Lemma \ref{LEM:Sch}, we have \begin{align}
\|V(t-t_0) (v_0, v_1) \|_{\mathcal{H}^1}
\lesssim (t-t_0)^{-1+s_0} \|(v_0, v_1 )\|_{\mathcal{H}^{s_0}}. \label{G1} \end{align}
\noindent
Then, from \eqref{G1}, Lemma \ref{LEM:lin2} with $s = 1$, and \eqref{X1}, we have, for any $t_0 < t \leq t_0 +\tau_\omega$, \begin{align*}
\|\vec v(t)\|_{\mathcal{H}^1}
& \lesssim (t-t_0)^{-1+s_0} \|(v_0, v_1 )\|_{\mathcal{H}^{s_0}}
+ \|(v+z)^5\|_{L^1([t_0, t_0+\tau_\omega]; L^2_x)}\\
& \lesssim (t-t_0)^{-1+s_0}\| (v_0, v_1) \|_{\mathcal{H}^{s_0}}
+ \tau_\omega^\theta \Big(\|v\|_{L^{5+\delta}([t_0, t_0+\tau_\omega]; L^{10}_x)} + C_\omega\Big)\\ & < \infty. \end{align*}
\noindent This proves the gain of regularity for $\vec v = (v, \partial_t v)$.\footnote{Here, we did not show the continuity in time of $\vec v$ in $\mathcal{H}^1(\mathbb{R}^2)$ but this can be done by a standard argument, which we omit.} In the following, our main goal is to control the $\mathcal{H}^1$-norm of $\vec v(t)$ on $[0, T]$ for any given $T \gg 1$.
\subsection{Energy bound} \label{SUBSEC:GWP2}
Fix $\varepsilon > 0$. Then, it follows from Theorem \ref{THM:LWP2} that there exists $\Omega_{T_0}$ with sufficiently small $T_0 = T_0(\varepsilon) > 0$ such that \begin{align} P( \Omega_{T_0}^c) < \frac \varepsilon 2 \label{G2} \end{align}
\noindent and, for each $\omega \in \Omega_{T_0}$,
the local well-posedness of \eqref{NLW1} holds on $[0, T_0]$.
Fix a large target time $T \gg 1$. In the following, by further excluding a set of small probability, we construct the solution $\vec v = (v, \partial_t v)$ on the time interval $[T_0, T]$ and hence on $[0, T]$. Our goal is to control the growth of the $\mathcal{H}^1$-norm of $\vec v(t)$ on $[T_0, T]$. To do this, in the remainder of this section, we closely follow the procedure for a similar energy bound for a defocusing quintic nonlinear wave equation in \cite{OP}. Since the same argument as in the proof of Proposition 4.1 in \cite{OP} applies to establishing the energy bound in the current context, we only summarize the main steps below and refer the reader to \cite{OP} for details.
\textbf{Step 1:} \textit{Reduction to an energy bound.} In order to control the $\mathcal{H}^{1}$ norm of $\vec v(t)$ on $[T_{0}, T]$, it suffices to control the $\dot{\mathcal{H}}^{1}$ norm on $[T_{0}, T]$, where $\dot{\mathcal{H}}^1(\mathbb{R}^2) := \dot H^1(\mathbb{R}^2) \times L^2(\mathbb{R}^2)$. This is due to the fundamental theorem of calculus:
\begin{align*}
\|v (t)\|_{L^2_x}
&=\bigg\|v(0) + \int_0^t\partial_t v(t')dt'\bigg\|_{L^2_x}
\leq \|v(0)\|_{L^2_x} + T\|\partial_t v\|_{L^\infty_T L^2_x},
\end{align*}
\noindent for $ 0 < t \leq T$. The $\dot{\mathcal{H}}^{1}$ norm of $\vec v$ is further controlled by the energy $E(\vec v)$ in \eqref{E0}. \textit{Hence, it suffices to control the energy $E(\vec v)$ on $[T_{0}, T]$.}
\textbf{Step 2:} \textit{Statement of the desired energy growth inequality on $[T_{0}, T]$.} We will derive an energy inequality estimating $E(\vec v)(t) - E(\vec v)(T_{0})$ for $t \in [T_{0}, T]$. We remark that if we were considering the case of a cubic nonlinearity (rather than the quintic nonlinearity), we could follow the Gronwall argument of Burq and Tzvetkov \cite{BT3}.
In the current quintic case, however, this argument fails. To overcome this difficulty, we employ the integration-by-parts trick introduced by Pocovnicu and the second author~\cite{OP} in studying almost sure global well-posedness of the energy-critical defocusing quintic NLW on $\mathbb{R}^3$.
Let $z(t) = V(t) (u_0^\omega, u_1^\omega)$ be the random linear solution defined in \eqref{z1}. With $\widetilde V(t)$ as in~\eqref{z3}, define $\widetilde z$ by \begin{align} \widetilde z (t) = \jb{\nabla}^{-1} \partial_t z(t) = \widetilde V(t) (u_0^\omega, u_1^\omega). \label{z2} \end{align}
\noindent Then, given $0 < T_0 < T$, we set $A(T_0, T)$ as \begin{align} \begin{split} A(T_0, T)
& = 1 + \|z \|^2_{L^\infty([T_0, T];L^\infty_x)} +
\|z \|^{10}_{L^{10}([T_0, T]; L^{10}_x )}
+ \|z\|_{L^\infty([T_0, T]; L^6_x)}^6 \\
& \quad + \|\widetilde z \|_{L^6([T_0, T]; L^6_x)}^6
+ \big\|\jb{\nabla}^{s_1} \widetilde z\big\|_{L^\infty([T_0, T]; L^\infty_x)}, \end{split} \label{E1a} \end{align}
\noindent where $s_1 > \frac 12$ is sufficiently close to $\frac 12$ (to be chosen later). Since we can control $A(T_{0}, T)$ with high probability by the probabilistic Strichartz estimate in Proposition \ref{PROP:PS2}, our goal is to obtain an energy inequality of the following form and apply Gronwall's inequality: \begin{align}\label{energycontrol}
E(\vec v)(t) \lesssim E(\vec v)(T_0) + A(T_0, T) + A(T_0, T) \int_{T_0}^t E(\vec v)(t') dt'. \end{align}
\textbf{Step 3:} \textit{Calculation of $\frac{d}{dt}E(\vec v)$.} By using the equation \eqref{NLW5}, we have \begin{equation} \frac{d}{dt} E(\vec v)(t) =- \int_{\mathbb{R}^2}(D^\frac{1}{2}\partial_t v)^2 dx - \int_{\mathbb{R}^2}\partial_t v \big((v+z)^5 - v^5\big) dx. \label{E1b} \end{equation} By using the fact that $- \int_{\mathbb{R}^2}(D^\frac{1}{2}\partial_t v)^2 dx \le 0$ and proceeding as in \cite{OP}, we deduce that \begin{align} \begin{split} E( \vec v)(t) - & E(\vec v)(T_0)
\leq - \int_{T_0}^t \int_{\mathbb{R}^2} z (t') \partial_t ( v(t') ^5) dt' dx - \int_{T_0}^t\int_{\mathbb{R}^2} \partial_t v(t') \mathcal N(z, v)(t') dx dt' \\ & =:\hspace{0.5mm}\text{I}\hspace{0.2mm}(t) +\text{I \hspace{-2.8mm} I} (t) \end{split}
\label{E1} \end{align} for any $t \in [T_0, T]$, where $\mathcal{N}(z, v)$ denotes the lower order terms in $v$: \[\mathcal{N}(z, v) = 10 z^2 v^3 + 10 z^3 v^2 + 5 z^4 v + z^5.\]
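\noindent For the reader's convenience, we record the algebra behind the splitting in \eqref{E1}: the binomial expansion gives
\begin{align*}
(v+z)^5 - v^5 = 5 z v^4 + \mathcal{N}(z, v),
\end{align*}
\noindent and the top order term contributes
\begin{align*}
- \int_{\mathbb{R}^2} \partial_t v \cdot 5 z v^4 \, dx = - \int_{\mathbb{R}^2} z \, \partial_t \big( v^5 \big) \, dx,
\end{align*}
\noindent which, after integration in time, is precisely the term $\hspace{0.5mm}\text{I}\hspace{0.2mm}(t)$.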
\textbf{Step 4:} \textit{Estimate of $\text{I \hspace{-2.8mm} I} (t)$.} We have reduced the goal of showing the energy bound estimate \eqref{energycontrol} to estimating the quantities $\hspace{0.5mm}\text{I}\hspace{0.2mm}(t)$ and $\text{I \hspace{-2.8mm} I} (t)$. Using the same arguments in \cite{OP}, we have that \begin{align} \begin{split}
|\text{I \hspace{-2.8mm} I} (t)| & \lesssim \big(1+ \|z \|^2_{L^\infty([T_0, T];L^\infty_x)} \big) \int_{T_0}^t E(\vec v)(t') dt'
+ \|z \|^{10}_{L^{10}([T_0, T]; L^{10}_x )}\\ & \le A(T_0, T) \int_{T_0}^t E(\vec v)(t') dt' + A(T_0, T). \end{split} \label{E2} \end{align}
\textbf{Step 5:} \textit{Estimate of $\hspace{0.5mm}\text{I}\hspace{0.2mm}(t)$.} The estimate of $\hspace{0.5mm}\text{I}\hspace{0.2mm}(t)$ is more involved. We proceed by using the integration-by-parts trick in \cite{OP} and integrating by parts in time, to obtain two quantities $\hspace{0.5mm}\text{I}\hspace{0.2mm}_1(t)$ and $\hspace{0.5mm}\text{I}\hspace{0.2mm}_2(t)$: \begin{align} \hspace{0.5mm}\text{I}\hspace{0.2mm}(t) =
- \int_{\mathbb{R}^2} z (t') v(t') ^5 dx\bigg|_{T_0}^t + \int_{\mathbb{R}^2} \int_{T_0}^t \partial_t z (t') v(t') ^5dt' dx
=:\hspace{0.5mm}\text{I}\hspace{0.2mm}_1(t')\Big|_{T_0}^t +\hspace{0.5mm}\text{I}\hspace{0.2mm}_2(t). \label{E3} \end{align}
\noindent As for the first term $\hspace{0.5mm}\text{I}\hspace{0.2mm}_1$, we use Young's inequality with exponents $6$ and $6/5$ to obtain \begin{align} \begin{split}
|\hspace{0.5mm}\text{I}\hspace{0.2mm}_1 (t) - \hspace{0.5mm}\text{I}\hspace{0.2mm}_1(T_0)|
& \lesssim \varepsilon_0^{-6}\|z (T_0)\|_{L^6_x}^6 + \varepsilon_0^\frac{6}{5} \|v(T_0)\|^6_{L^6_x} + \varepsilon_0^{-6}\|z (t)\|_{L^6_x}^6 + \varepsilon_0^\frac{6}{5} \|v(t)\|^6_{L^6_x}\\
& \lesssim \varepsilon_0^{-6} \|z\|_{L^\infty([T_0, T]; L^6_x)}^6 + \varepsilon_0^\frac{6}{5} E(\vec v)(T_0)
+ \varepsilon_0^\frac{6}{5} E(\vec v)(t) \end{split} \label{E4} \end{align}
\noindent for some small constant $\varepsilon_0 > 0$ (to be chosen later).
Next, we consider the second term $\hspace{0.5mm}\text{I}\hspace{0.2mm}_2$ in \eqref{E3}. While we closely follow the argument in~\cite{OP}, we summarize the procedure here for readers' convenience. From \eqref{z2}, we have \begin{equation} \hspace{0.5mm}\text{I}\hspace{0.2mm}_2 (t) = \int_{T_0}^t \int_{\mathbb{R}^2} \jb{\nabla} \widetilde z (t') \cdot v(t') ^5 dx dt'. \label{E4a} \end{equation}
\noindent Given dyadic $M \geq 1$, let $\mathbf{Q}_M$ denote the \textit{nonhomogeneous} Littlewood-Paley projector onto the (spatial)
frequencies $\{|\xi|\sim M\}$. This means that $\mathbf{Q}_1$ is a smooth projector onto the (spatial)
frequencies $\{|\xi|\lesssim 1\}$ and by convention, $\mathbf{Q}_{2^{-1}} = 0$. Then, define $\mathcal{I}(t)$ by \begin{equation*} \mathcal{I} (t) := \int_{\mathbb{R}^2} \jb{\nabla} \widetilde z (t) \cdot v(t) ^5 dx \end{equation*} and note that by using a Littlewood-Paley frequency decomposition, we have \begin{equation}\label{dyadicI2} \mathcal{I}(t) \sim \sum_{\substack{M \geq 1\\ \text{dyadic}}}\mathcal{I}^M(t), \end{equation} where, for dyadic $M = 2^{k}$ with $k$ a nonnegative integer, \begin{equation*} \mathcal{I}^{M}(t) := \sum_{j = -1}^1 M \int_{\mathbb{R}^2} \mathbf{Q}_{2^j M} \widetilde z (t) \mathbf{Q}_M\big( v(t) ^5\big) dx. \end{equation*}
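\noindent The dyadic decomposition \eqref{dyadicI2} is a standard almost orthogonality statement; we sketch it here. Writing $v(t)^5 = \sum_{M \ge 1} \mathbf{Q}_M \big( v(t)^5 \big)$ and using that, by Plancherel's theorem, $\mathbf{Q}_{M'} \widetilde z$ pairs nontrivially with $\mathbf{Q}_M \big( v^5 \big)$ only when the frequency supports overlap, we have
\begin{align*}
\mathcal{I}(t) = \sum_{\substack{M \geq 1\\ \text{dyadic}}} \, \sum_{j = -1}^{1} \int_{\mathbb{R}^2} \mathbf{Q}_{2^j M} \big( \jb{\nabla} \widetilde z (t) \big) \, \mathbf{Q}_M \big( v(t)^5 \big) \, dx.
\end{align*}
\noindent Since $\jb{\xi} \sim M$ on the frequency support of $\mathbf{Q}_{2^j M}$ for $j = -1, 0, 1$ and $M \geq 1$, each summand is comparable to the corresponding term of $\mathcal{I}^M(t)$, which accounts for the factor $M$ in its definition.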
\noindent We also set \[\mathcal{I}^{M\geq 2}(t) = \mathcal{I}(t) - \mathcal{I}^1(t).\]
\noindent $\bullet$ {\bf Case 1:} $M = 1$. \quad In this case, we can bound the contribution
to $|\hspace{0.5mm}\text{I}\hspace{0.2mm}_2(t)|$ by Young's inequality as \begin{align} \begin{split}
\bigg|\int_{T_0}^t \mathcal{I}^1(t') dt'\bigg|
& \lesssim \|\widetilde z \|_{L^6([T_0, T]; L^6_x)}^6
+ \int_{T_0}^t \|v (t') \|_{L^6_{x}}^6 dt'\\ & \lesssim A(T_0, T) + \int_{T_0}^t E(\vec v)(t') dt'. \end{split} \label{E4b} \end{align}
\noindent $\bullet$ {\bf Case 2:} $M \geq 2$. \quad We can follow the full details given in \cite{OP} and apply the Littlewood-Paley decomposition on each factor of $v$ in the quintic term $v^{5}$, in order to obtain the following estimate: \begin{equation}
|\mathcal{I}^{M\ge 2} (t)| \lesssim
\big\|\jb{\nabla}^{s_2-\theta} \widetilde z(t)\big\|_{L^\infty_x}E(\vec v)(t), \label{E4c} \end{equation}
\noindent provided that $2(1-s_2+\theta+) \leq 1$. Hence, by setting $s_1 = s_2 -\theta$, it follows from \eqref{E1a} and \eqref{E4c} that \begin{align}
\bigg|\int_{T_0}^t \mathcal{I}^{M\ge 2} (t')dt'\bigg| & \lesssim A(T_0, T) \int_{T_0}^t E(\vec v)(t') dt', \label{E4d} \end{align}
\noindent provided that $2(1-s_2+\theta+) \leq 1$, namely $s_2 \ge \frac 12 + \theta+$, which is satisfied by choosing $s_2 > \frac 12$ and $\theta > 0$ sufficiently small.
This determines the choice of $s_1 = s_2 - \theta$ in \eqref{E1a}.
Therefore, from
\eqref{E4a}, \eqref{dyadicI2}, \eqref{E4b}, and \eqref{E4d}, we obtain \begin{align} \begin{split}
|\hspace{0.5mm}\text{I}\hspace{0.2mm}_2 (t)| & \lesssim
\|\widetilde z \|_{L^6([T_0, T]; L^6_x)}^6 +
\Big(1+\big\|\jb{\nabla}^{s_1} \widetilde z\big\|_{L^\infty([T_0, T]; L^\infty_x)}\Big) \int_{T_0}^t E(\vec v)(t') dt'\\ & \lesssim A(T_0, T) + A(T_0, T) \int_{T_0}^t E(\vec v)(t') dt'. \end{split} \label{E5} \end{align}
\textbf{Step 6:} \textit{Final estimate and Gronwall inequality.} Putting \eqref{E1}, \eqref{E2}, \eqref{E3}, \eqref{E4}, and \eqref{E5} together and choosing sufficiently small $\varepsilon_0 > 0$ in \eqref{E4}, we obtain \begin{align*}
E(\vec v)(t) \lesssim E(\vec v)(T_0) + A(T_0, T) + A(T_0, T) \int_{T_0}^t E(\vec v)(t') dt', \end{align*}
\noindent for any $t \in [T_0, T]$. Therefore, from Gronwall's inequality, we conclude that \begin{align}
E(\vec v)(t) \lesssim C\big(T_0, T, E(\vec v)(T_0), A(T_0, T)\big) \label{Ex} \end{align}
\noindent for any $t \in [T_0, T]$.
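\noindent More precisely, if $C$ denotes the implicit constant in the energy inequality above, then Gronwall's inequality yields, for any $t \in [T_0, T]$,
\begin{align*}
E(\vec v)(t) \le C \big( E(\vec v)(T_0) + A(T_0, T) \big) \exp\Big( C A(T_0, T) (t - T_0) \Big),
\end{align*}
\noindent which is finite with a bound depending only on $T_0$, $T$, $E(\vec v)(T_0)$, and $A(T_0, T)$, as recorded in \eqref{Ex}.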
\begin{remark}\label{REM:2}\rm (i) In order to justify the formal computation in this subsection, we need to proceed with the smooth solution $(v_N , \partial_t v_N)$ associated with the frequency truncated random initial data (for example, to guarantee finiteness of the term $- \int_{\mathbb{R}^2}(D^\frac{1}{2}\partial_t v)^2 dx$ in~\eqref{E1b}) and then take $N \to \infty$, using the approximation argument (Corollary \ref{COR:LWP3}). This argument, however, is standard and thus we omit details. See, for example, \cite{OP}.
\noindent (ii) In this section, we followed the argument in \cite{OP} to obtain an energy bound in the quintic case.
In this argument, the first term after the first inequality in \eqref{E2} provides the restriction $p \leq 5$
on the degree of the nonlinearity $|u|^{p-1} u $. For $p > 5$, we will need to apply the integration by parts trick to lower order terms as well. See for example \cite{Latocca} in the context of the standard NLW. In a recent preprint \cite{Liu}, Liu extended Theorems \ref{THM:LWP1} and \ref{THM:GWP1} to the super-quintic case ($p > 5$) and proved almost sure global well-posedness of the defocusing vNLW \eqref{vNLW1} in $\mathcal{H}^s(\mathbb{R}^2)$ for $s > -\frac 1p$.
\end{remark}
\subsection{Proof of Proposition \ref{PROP:aasGWP}}
Fix a target time $T \gg 1$ and small $\varepsilon > 0$. Then, let $T_0$ be as in \eqref{G2}. With $A(T_0, T)$ as in \eqref{E1a}, set \[A_\lambda = \big\{ \omega \in \Omega : A(T_0, T) < \lambda\big\}\]
\noindent for $\lambda > 0$. From Proposition \ref{PROP:PS2}, there exists $\lambda_0 \gg 1$ such that \begin{align} P(A_{\lambda_0}^c) < \frac \varepsilon 2. \label{G4} \end{align}
\noindent Now, set $\Omega_{T, \varepsilon} = \Omega_{T_0} \cap A_{\lambda_0}$. Then, from \eqref{G2} and \eqref{G4}, we have $P(\Omega_{T, \varepsilon}^c) < \varepsilon$.
Let $\omega \in \Omega_{T, \varepsilon}$. From \eqref{E1a} and H\"older's inequality in time, we see that $A(T_0, T)$ controls the $L^{5+\delta}([T_0, T]; L^{10}_x)$-norm of $z$:
\[ \|z \|_{L^{5+\delta}([T_0, T]; L^{10}_x )} \lesssim T^\theta \lambda_0^\frac{1}{10}\]
\noindent for some $\theta > 0$, where $\delta > 0$ is as in (the proof of) Theorem \ref{THM:LWP2}. Then, together with the energy bound \eqref{Ex} and the discussion in Section \ref{SUBSEC:GWP1}, we can iteratively apply Corollary \ref{COR:LWP4} (see also the discussion right after Proposition~\ref{PROP:aasGWP}) and construct a solution $u = z + v$ to \eqref{NLW1} on $[0, T]$
with $(u, \partial_t u)|_{t = 0} = (u_0^\omega, u_1^\omega)$ for each $\omega \in \Omega_{T, \varepsilon}$. This proves Proposition \ref{PROP:aasGWP} and hence almost sure global well-posedness (Theorem \ref{THM:GWP1}).
\begin{ackno}\rm
This work was partially supported by the National Science Foundation under grants DMS-1853340 and DMS-2011319 (\v{C}ani\'{c} and Kuan), and by the European Research Council under grant number 864138 ``SingStochDispDyn'' (Oh).
\end{ackno}
\end{document}
\begin{document}
\begin{abstract}
The cohomological dimension of a field is the largest degree with
non-vanishing Galois cohomology. Serre's ``Conjecture II'' predicts
that for every perfect field of cohomological dimension $2$, every torsor
over the field for a semisimple, simply connected algebraic group is
trivial.
A field is perfect and ``pseudo algebraically closed'' (PAC) if
every geometrically irreducible curve over the field has a rational
point. These have cohomological dimension $1$. Every
transcendence degree $1$ extension of such a field has cohomological
dimension $2$. We prove Serre's ``Conjecture II'' for such fields of
cohomological dimension $2$ provided either the field is of
characteristic $0$ or the field contains primitive roots of unity
for all orders $n$ prime to the characteristic. The
method uses ``rational simple connectedness'' in an essential way.
With the same method, we prove that such fields are $C_2$-fields,
and we
prove that ``Period equals Index'' for the Brauer groups of such
fields. Finally, we use a similar method to reprove and extend a
theorem of Fried-Jarden: every
perfect PAC field of positive characteristic is $C_2$. \end{abstract}
\maketitle
\section{Statement of Results} \label{sec-int} \marpar{sec-int}
\noindent For a field $L$, the \emph{cohomological dimension} is the supremum (possibly infinite) over all integers $n$ such that there exists a discrete Galois module with non-vanishing degree $n$ Galois cohomology. For every finite extension $L'/L$, the cohomological dimension of $L'$ is no greater than the cohomological dimension of $L$, \cite[Proposition II.10, p. 83]{GalCoh}. The cohomological dimension equals $0$ if and only if the field is separably closed. A \emph{Severi-Brauer variety} of dimension $n-1$ over $L$ is a smooth, projective $L$-scheme $X$ such that $X\times_{\text{Spec } L} \text{Spec }(L^{\text{sep}})$ is isomorphic to $\mbb{P}^{n-1}_{L^{\text{sep}}}$. These are in bijection with the torsors over $L$ for the semisimple adjoint group $\textbf{PGL}_n=\text{Aut}(\mbb{P}^{n-1})$, which is connected but not simply connected, via the $L$-scheme of isomorphisms between $X$ and $\mbb{P}^{n-1}_L$. The \emph{period}, or \emph{exponent}, of $X$ equals the smallest integer $d>0$ such that there exists an invertible sheaf on $X$ whose base change to $X\otimes_L L^{\text{sep}} \cong \mbb{P}^{n-1}_{L^{\text{sep}}}$ is isomorphic to $\mc{O}_{\mbb{P}^{n-1}_{L^{\text{sep}}}}(d)$. The \emph{index} equals the smallest integer $m$ such that there exists a closed subscheme $Y$ of $X$ whose base change in $\mbb{P}^{n-1}_{L^{\text{sep}}}$ is a linear subvariety of dimension $m-1$. The period divides the index, the index divides $n$, and the period and index have the same prime factors \cite[Proposition 4.5.13]{GilleSzamuely}. The \emph{Period-Index Problem} asks for the smallest integer $e$ (assuming one exists), such that always the index divides the period raised to the power $e$.
\noindent For a perfect field $L$, the cohomological dimension of $L$ is $\leq 1$ if and only if for every finite separable extension $L'/L$, every Severi-Brauer variety over $L'$ has period $1$, i.e., every $\textbf{PGL}_n$-torsor over $L'$ is trivial \cite[Proposition II.5, p. 78]{GalCoh}. For imperfect fields, typically this condition is taken as the definition of \emph{dimension $\leq 1$}, as opposed to ``cohomological dimension $\leq 1$''. Serre formulated a strong converse, ``Conjecture I'': for every field of dimension $\leq 1$, every torsor over $L$ for every semisimple and connected algebraic group is trivial. This was proved by Steinberg, \cite{Steinberg}. For a perfect field $L$, by a theorem of Merkurjev-Suslin \cite[Corollary 24.9]{Suslin84}, the cohomological dimension is $\leq 2$ if and only if all $\textbf{SL}_D$-torsors over $L$ are trivial for those semisimple and \emph{simply connected} algebraic groups $\textbf{SL}_D$ arising as inner forms of $\textbf{SL}_n$ over $L$. Serre formulated a strong converse, ``Conjecture II'': for every perfect field of cohomological dimension $\leq 2$, every torsor over $L$ for every semisimple and simply connected algebraic group is trivial. Serre also formulated a version of his conjecture for imperfect fields of characteristic $p$ \cite[Section 5.5]{SerrePP}: Serre adds the hypotheses that $[L:L^p]\leq p^2$ and that $H^3_p(L')$ is zero for all finite, separable extensions $L'/L$.
\noindent A field $L$ is \emph{perfect}, resp. \emph{perfect and
pseudo-algebraically closed} (PAC), if every quasi-projective $L$-scheme that is geometrically irreducible and zero-dimensional, resp. one-dimensional, has an $L$-point. Since every perfect PAC field $L$ is infinite, Bertini theorems imply that every quasi-projective $L$-scheme $X$ that is geometrically irreducible contains a closed subscheme that is geometrically irreducible of dimension $1$ (or dimension $0$ if $X$ has dimension $0$). Thus, for a perfect PAC field, every quasi-projective $L$-scheme $X$ that is geometrically irreducible has an $L$-point. Via Weil's restriction of scalars, every finite extension of a perfect PAC field is again a perfect PAC field. Since Severi-Brauer varieties are geometrically irreducible, every Severi-Brauer variety over a perfect PAC field has a rational point, and thus it has index $1$ (so also it has period $1$). Therefore, every perfect PAC field has dimension $\leq 1$. Thus, every function field $K/L$ of transcendence degree $1$ over a perfect PAC field $L$ has cohomological dimension $\leq 2$, \cite[Proposition II.11, p. 83]{GalCoh}. If $L$ has characteristic $p$, then $K$ is imperfect. Nonetheless, $[K:K^p]$ equals $p$, so Serre's modified version of ``Conjecture II'' predicts triviality for all $G$-torsors over $K$ for semisimple and simply connected algebraic groups $G$. A perfect PAC field is \emph{nice} if either it has characteristic $0$ or if it has positive characteristic $p$ and it contains a primitive root of unity of order $n$ for every integer $n$ prime to $p$. Jarden and Pop proved the following theorem under the hypothesis that the field has characteristic $0$ \emph{and} contains all roots of unity, \cite{JardenPop}.
\begin{thm} \label{thm-SerreIIpac} \marpar{thm-SerreIIpac} For every perfect PAC field $L$ that either has characteristic zero or contains a primitive root of unity of order $n$ for every integer $n$ prime to the characteristic $p$, for every function field $K/L$ of transcendence degree $1$, every torsor over $K$ for every semisimple and simply connected algebraic group is trivial. \end{thm}
\noindent The same method of proof also proves Period equals Index for $K$. We thank Max Lieblich who shared with us his independent (and different) proof of the following theorem. This theorem can also be proved using the Hasse principle of Efrat, \cite{Efrat}.
\begin{thm} \label{thm-PeriodIndexpac} \marpar{thm-PeriodIndexpac} For every perfect PAC field $L$ that is nice, for every function field $K/L$ of transcendence degree $1$, every Severi-Brauer variety over $K$ has period equal to index. \end{thm}
\noindent A smooth, projective, geometrically connected scheme $X$ over a field $K$ is \emph{Fano}, resp. \emph{$2$-Fano}, etc., if the first graded piece of the Chern character of $T_{X/K} = (\Omega_{X/K})^\vee$ is positive, resp. if the first two graded pieces are positive, etc., cf. \cite{dJS9}. A field $K$ is $C_1$, resp. $C_2$, etc., if every $K$-scheme in $\mathbb{P}^{n-1}_K$ that is a specialization of Fano complete intersections, resp. $2$-Fano complete intersections, has a $K$-point. The method proves that every function field $K$ of transcendence degree $1$ over a nice, perfect PAC field $L$ is a $C_2$-field, first proved by Fried-Jarden, \cite[Theorem 21.3.6]{FriedJarden}. In fact, there are by now many examples of rationally simply connected varieties beyond $2$-Fano complete intersections. The following formulation includes one such family discovered by Robert Findley, \cite{Findley}.
\begin{thm} \label{thm-C2nicepac} \marpar{thm-C2nicepac} Let $L$ be a perfect PAC field that is nice, and let $K/L$ be a function field of transcendence degree $1$. Let $X$ be a $K$-scheme, and let $\mathcal{L}$ be an invertible sheaf on $X$. Then $X$ has a $K$-point in either of the following cases: first, $X\otimes_K K^{\text{sep}}$ is isomorphic to the common zero locus in $\mbb{P}^{n-1}_{K^{\text{sep}}}$ of $c$ homogeneous polynomials of degrees $(d_1,\dots,d_c)$ such that $d_1^2 + \dots + d_c^2 < n$, and the base change of $\mathcal{L}$ is the restriction of $\mc{O}_{\mbb{P}^{n-1}}(1)$. Second, there is a $K$-point if $X\otimes_K K^{\text{sep}}$ is isomorphic to the intersection of a degree $d$ hypersurface and $\text{Grass}_{K^{\text{sep}}}(r,(K^{\text{sep}})^{\oplus n})$, with its Pl\"{u}cker embedding, such that $(3r-1)d^2 - d < n-4r-1$, and the base change of $\mathcal{L}$ is the restriction of the Pl\"{u}cker $\mc{O}(1)$. \end{thm}
\noindent Now let $(R,\mathfrak{m}_R)$ be a Henselian DVR whose fraction field $K$ has characteristic $0$ and whose residue field $L$ is a perfect PAC field of characteristic $p$ that contains a primitive root of unity of order $n$ for every integer $n$ prime to $p$. The Ax-Kochen method of Denef, \cite{DenefAxKochen}, yields the following.
\begin{thm} \label{thm-AKnicepac} \marpar{thm-AKnicepac} The field $K$ has cohomological dimension $\leq 2$. For Severi-Brauer varieties over $K$, the period equals the index. There exists an integer $p_0$ such that for every such field with characteristic $p\geq p_0$, Serre's ``Conjecture II'' holds for $K$. For every integer $n$ and sequence of integers $(d_1,\dots,d_c)$ with $d_1^2 + \dots + d_c^2 \leq n-1$, there exists an integer $p_0=p_0(n;d_1,\dots,d_c)$ such that for every field $K$ as above of characteristic $p\geq p_0$, every closed subscheme of $\mbb{P}^{n-1}_K$ defined by equations of degrees $(d_1,\dots,d_c)$ has a $K$-point. Finally, for all triples of integers $(n,r,d)$ with $(3r-1)d^2-d<n-4r-1$, there exists an integer $p_0=p_0(n,r,d)$ such that for every $K$ as above of characteristic $p\geq p_0$, for every polarized $K$-scheme $(X_K,\mathcal{L}_K)$ whose base change to $\overline{K}$ is a degree $d$ hypersurface in $\text{Grass}_{\overline{K}}(r,\overline{K}^{\oplus n})$ with its Pl\"{u}cker invertible sheaf, $X_K$ has a $K$-point. \end{thm}
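\noindent For instance, with $c=1$ and $d_1=2$ the condition $d_1^2 \leq n-1$ holds exactly when $n\geq 5$, so for $p$ sufficiently large every quadric hypersurface in $\mbb{P}^{n-1}_K$ with $n\geq 5$ has a $K$-point; this is the same numerical range as in the classical Ax-Kochen theorem for $p$-adic fields.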
\noindent Finally, a variant of the method implies a similar result for perfect PAC fields of characteristic $p$ that are not necessarily nice. The $C_2$ result was first proved by Fried-Jarden, \cite[Theorem 21.3.6]{FriedJarden}, but there are by now many rationally simply connected varieties beyond $2$-Fano complete intersections.
\begin{thm} \label{thm-C2pac} \marpar{thm-C2pac} Let $L$ be a perfect PAC field that is not necessarily nice. Let $X$ be an $L$-scheme, and let $\mathcal{L}$ be an invertible sheaf on $X$. Then $X$ has an $L$-point in either of the following cases: first, $X\otimes_L \overline{L}$ together with $\mathcal{L}$ is isomorphic to the common zero locus in $\mathbb{P}^{n-1}_{\overline{L}}$ of $c$ homogeneous polynomials of degrees $(d_1,\dots,d_c)$ such that $d_1^2 + \dots + d_c^2 < n$ together with the restriction of $\mc{O}_{\mbb{P}^{n-1}}(1)$. Second, there is an $L$-point if $X\otimes_L \overline{L}$ is isomorphic to the intersection of a degree $d$ hypersurface and $\text{Grass}_{\overline{L}}(r,\overline{L}^{\oplus n})$, with its Pl\"{u}cker embedding, such that $(3r-1)d^2 - d < n-4r-1$, and the base change of $\mathcal{L}$ is the restriction of the Pl\"{u}cker $\mc{O}(1)$. \end{thm}
\noindent The method uses very much the notions of \emph{rationally connected} and \emph{rationally simply connected} varieties, together with their specializations. The full results are considerably stronger than the formulations above: they also include results about arbitrary specializations, and they give height bounds for rational points.
\noindent \textbf{Acknowledgments.} I am very grateful to Chenyang Xu; this article builds on the earlier joint work in \cite{StarrXu}. I am very grateful to Max Lieblich who explained his independent proof that ``Period equals Index'' for function fields over PAC fields. I am grateful to Yi Zhu with whom I discussed the technique to transport from characteristic $0$ to characteristic $p$ results around Serre's ``Conjecture II'' using Nisnevich's solution of the Grothendieck-Serre conjecture in dimension $1$. During the development of this article I was supported by NSF Grants DMS-0846972 and DMS-1405709, as well as a Simons Foundation Fellowship.
\section{A Bertini Theorem over Non-Algebraically Closed Fields} \label{sec-Bertini} \marpar{sec-Bertini}
\noindent The following Bertini theorem extends \cite[Corollary 2.2]{GS} to arbitrary fields. Let $k$ be a field. Let $B$ be a separated, geometrically integral $k$-scheme of dimension $m\geq 1$. Up to replacing $B$ by a dense, open subscheme, assume that $B$ is a normal scheme. Let $u:B\to \mbb{P}^N_k$ be a generically unramified, finite type morphism. Let $h:B'\to B$ be a generically finite, finite type morphism. Let $c$ be an integer satisfying $0\leq c \leq \max(0,m-1)$, and let $r$ denote $N-c$. Denote by $\text{Grass}_k(\mbb{P}^r,\mbb{P}^N)$ the Grassmannian parameterizing linear subspaces of $\mbb{P}^N_k$ of dimension $r$, i.e., of codimension $c$. This is the Hilbert scheme of $\mbb{P}^N_k$ over $k$ for the numerical polynomial $P_r(t)$ such that $P_r(d) = \binom{d+r}{r}$ for every integer $d\geq -r$. Denote by $\Lambda \subset \text{Grass}_k(\mbb{P}^r,\mbb{P}^N)\times_{\text{Spec } k} \mbb{P}^N_k$ the universal linear subspace. For every field extension $K/k$, for every $[L]\in \text{Grass}_k(\mbb{P}^r,\mbb{P}^N)(\text{Spec } K)$, the associated morphism $$ h_L: B'\times_{\mbb{P}^N_k} L\to B\times_{\mbb{P}^N_k} L, $$ is a morphism of finite type $K$-schemes.
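\noindent As a check of the numerical polynomial, for $r=2$ one has $P_2(d) = \binom{d+2}{2} = \frac{(d+1)(d+2)}{2}$, which equals $h^0(\mbb{P}^2_k,\mc{O}_{\mbb{P}^2}(d))$ for every $d\geq 0$, as it must for the Hilbert polynomial of a linear $\mbb{P}^2\subset \mbb{P}^N_k$.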
\begin{defn} \label{defn-insepsec} \marpar{defn-insepsec} An \emph{inseparable section} of $h$ is an integral
closed subscheme $Z\subset B'$ such that $h|_Z:Z\to B$ is dominant and the field extension $k(Z)/k(B)$ is purely inseparable. A \emph{rational section} is an inseparable section such that the extension $k(Z)/k(B)$ is an isomorphism of fields.
The \emph{domain of definition} is the maximal open subscheme of $B$ over which $h|_Z$ is flat and finite. Equivalently, this is the maximal open subscheme over which $h|_Z$ is faithfully flat. \end{defn}
\noindent For every inseparable section, resp. rational section, $Z$ with domain of definition $W$, if $B\times_{\mbb{P}^N_k} L$ intersects $W\times_{\text{Spec } k} \text{Spec } K$ in a dense open subset of $B\times_{\mbb{P}^N_k} L$, then the associated reduced scheme of $Z\times_{\mbb{P}^N_k} L$ is an inseparable section, resp. rational section, of $h_L$.
\begin{thm} \label{thm-ffBertini} \marpar{thm-ffBertini} There exists a dense Zariski open subset $U\subset \text{Grass}_k(\mbb{P}^r,\mbb{P}^N)$ and a finite, Galois extension $k'/k$ such that for every field extension $K/k$ that is linearly disjoint from $k'$, for every $[L]\in U(\text{Spec } K)$, the restriction map from the set of inseparable sections of $h$ to the set of inseparable sections of $h_L$ is well-defined and is a bijection. If $k$ is perfect, the same holds with ``rational sections'' in place of ``inseparable sections''. \end{thm}
\begin{proof} If $c$ equals $0$, then $r$ equals $N$, and the result is vacuously true with $k'=k$. Thus, assume that $m\geq 2$, and assume that $1\leq c \leq m-1$.
\noindent \textbf{Restriction Map Well-Defined and Injective.} By \cite[Th\'{e}or\`{e}me 4.10, 6.10]{Jou}, there exists a dense open subscheme $U_0\subset \text{Grass}_k(\mbb{P}^r,\mbb{P}^N)$ such that for every $K/k$ and for every $L\subset \mbb{P}^N_K$ with $[L]\in U_0(K)$, $B\times_{\mbb{P}^N_k} L$ is a geometrically integral $K$-scheme.
\noindent There are only finitely many inseparable sections of $h$, resp. rational sections of $h$. For each, the maximal domain of definition is a dense open subset whose complement is a proper closed subset. Since $u$ is generically finite, the image of this proper closed subset in $\mathbb{P}^N_k$ is contained in a proper closed subset. Similarly, for any two distinct inseparable sections, resp. rational sections, the intersection of the closures of the images is a closed subset of $B'$ whose image in $B$ does not contain the generic point. Thus, the image of this closed subset in $\mathbb{P}^N_k$ is contained in a proper closed subset. Since $\mathbb{P}^N_k$ is irreducible, the union of finitely many proper closed subsets is a proper closed subset, say $C$. The Fano scheme parameterizing linear subspaces contained in $C$ is a proper closed subset of $\text{Grass}_k(\mbb{P}^r,\mbb{P}^N)$. Replace $U_0$ by the relative complement in $U_0$ of this Fano scheme. Then for every $K/k$, for every $L\subset \mbb{P}^N_K$ with $[L]\in U_0(\text{Spec } K)$, $B\times_{\mbb{P}^N_k} L$ is a geometrically integral $K$-scheme that intersects the domain of definition of each inseparable section, resp. rational section, of $h$ and such that for any two inseparable sections, resp. rational sections, $B\times_{\mbb{P}^N_k} L$ is not contained in the closure of the locus over which the two sections are equal. Thus, the base change over $K$ of every inseparable section, resp. rational section, of $h$ restricts to a well-defined inseparable section, resp. rational section, of $h_L$, and this restriction map is injective.
\noindent \textbf{Surjectivity of Restriction Map. Noetherian Induction.} It remains to prove that there exists a dense open subset $U$ of $U_0$ and a finite Galois extension $k'/k$ such that for every finite $K/k$ that is linearly disjoint from $k'$, for every $L\subset \mbb{P}^N_K$ with $[L]\in U(\text{Spec } K)$, the restriction map is surjective. This is proved by Noetherian induction for restrictions of $h$ to closed subsets of $B'$. If $B'$ equals $C\cup Y$ for proper closed subsets $C$ and $Y$, and if the result is proved for $C$ and $Y$, then define $k'/k$ to be the compositum of $k'_C/k$ and $k'_Y/k$, and define $U = U_C\cap U_Y$. Since $B$ is integral, resp. since $B\times_{\mbb{P}^N_k}L$ is integral, every inseparable section, resp. rational section, has image in $C$ or in $Y$. Thus, the results for $C$ and $Y$ imply the result for $B'$. Thus, assume that $B'$ is irreducible.
\noindent Similarly, if $B'$ is nonreduced, every inseparable section, resp. rational section, over $B$, resp. over $B\times_{\mbb{P}^N_k} L$, factors through the associated reduced scheme of $B'$. Thus, assume that $B'$ is irreducible and reduced.
\noindent \textbf{Case I. Morphism not Dominant.} If $h(B')$ is contained in a proper closed subset of $B$, then the same argument as above shows that, for $d=1$ and for $U$ a dense open subset of $U_0$, no $B\times_{\mbb{P}^N_k}L$ is contained in $h(B')$. Thus, the restriction map on sections is the unique set map from the empty set to the empty set, and the result is proved.
\noindent \textbf{Case II. Morphism Birational.} Similarly, if $h:B'\to B$ is birational, then $h_L$ is also birational since the domain of definition of the inverse rational section intersects the integral scheme $B\times_{\mbb{P}^N_k} L$. Thus, the set of sections for each is a singleton set, and the restriction map is a bijection.
\noindent \textbf{Case III. Morphism Purely Inseparable, not Birational.} If $h:B'\to B$ is dominant and purely inseparable of degree $a>1$, then $B'$ is an inseparable section. So again, the restriction map on inseparable sections is a bijection between singleton sets.
\noindent For rational sections, assume that $k$ is perfect (otherwise the argument is much more technical). Since $B'$ is integral and $k$ is perfect, $B'$ is generically smooth over $k$. Up to shrinking $B$ and $B'$, assume that $B$ and $B'$ are $k$-smooth, and assume that $h$ is finite and flat. Then $dh^\dagger:h^*\Omega_{B/k}\to \Omega_{B'/k}$ is a homomorphism of locally free sheaves of rank $m$, and it is not surjective. Up to shrinking further, assume that the cokernel is locally free, so that also the image of $dh^\dagger$ is locally free. Thus, also the kernel $\mathcal{T}^\vee_h$ of $dh^\dagger$ is locally free of positive rank $e\geq 1$. The fiber product $$ \Lambda_{U_0}' = U_0\times_{\text{Grass}_k(\mbb{P}^r,\mbb{P}^N)} \Lambda \times_{\mbb{P}^N_k} B', $$ parameterizes pairs $([L],x)$ of a linear space $L$ and a point $x\in B'$ such that $u(h(x))\in L$. By generic flatness, up to replacing $U_0$ by a dense open subscheme, the projection morphism $\Lambda_{U_0}'\to U_0$ is flat.
\noindent By \cite[Th\'{e}or\`{e}me 4.10, 6.10]{Jou}, there is a dense open subset $V_0\subset \Lambda_{U_0}'$ parameterizing pairs $([L],x)$ such that $B\times_{\mbb{P}^N_k} L$ is smooth of dimension $m-c$ at $h(x)$. For every such $([L],x)$, the tangent space to $B\times_{\mbb{P}^N_k} L$ at $h(x)$ gives a point in the Grassmannian bundle of $(m-c)$-dimensional subspaces of the Zariski tangent space $T_{h(x)}B$. The associated morphism from $V_0$ to the Grassmannian bundle over $B$ of the tangent bundle is dominant. Thus, there exists a dense open $V\subset V_0$ parameterizing $([L],x)$ such that the tangent space to $B\times_{\mbb{P}^N_k} L$ at $h(x)$ is not contained in the annihilator of $\mathcal{T}_h^\vee$ (this annihilator is a subspace of the Zariski tangent space of codimension $\geq 1$). Since $\Lambda_{U_0}'\to U_0$ is flat, the image in $U_0$ of $V$ is a dense Zariski open subscheme $U\subset U_0$. Set $d$ equal to $1$. For every field extension $K/k$ and for every $[L]\in U(K)$, for the generic point of $B\times_{\mbb{P}^N_k} L$, the derivative of $h_L$ is not surjective. Therefore $h_L$ admits no rational section.
\noindent \textbf{Case IV. Morphism Separable, not Birational.} Finally, assume that $B'\to B$ is dominant and the separable closure $L$ of $k(B)$ in $k(B')$ has degree $a>1$. Denote by $B''$ the integral closure of $B$ in $L$. There is a factorization $B'\to B''\to B$, and $B''\to B$ is dominant and generically \'{e}tale of degree $>1$. Up to shrinking $B$ and $B''$, assume that $B$ is regular, and assume that $B''\to B$ is finite and \'{e}tale of degree $a$. Thus, there is no inseparable section nor rational section of $B''/B$. The goal is to find $k'/k$ and $U$ such that for every finite extension $K/k$ that is linearly disjoint from $k'/k$ and for every $[L]\in U(\text{Spec } K)$, also the finite \'{e}tale morphism $B''\times_{\mbb{P}^N_k} L\to B\times_{\mbb{P}^N_k} L$ has no rational section (and thus no inseparable section). Then the restriction map is again the unique map between empty sets, which is a bijection. Up to replacing $B'$ by $B''$, assume that $h:B'\to B$ is finite and \'{e}tale of degree $a>1$.
\noindent \textbf{Case IVa. Base Field not Separably Closed in Extension. $[k':k]>1$.} Since $B$ is generically smooth over $k$ (being geometrically integral), $k(B)/k$ is a separable field extension. Thus also $k(B')/k$ is a separable field extension. It is also a finitely generated field extension. Thus the algebraic closure of $k$ in $k(B')$ is a finite, separable extension $\kappa/k$. Denote by $k'/k$ the Galois closure of $\kappa/k$. Note that $[\kappa:k]$ is greater than $1$ if and only if $[k':k]$ is greater than $1$. In this case, set $U$ equal to $U_0$. For every finite field extension $K/k$ that is linearly disjoint from $k'/k$, and thus also linearly disjoint from $\kappa/k$, the composite morphisms $$ B'\times_{\mbb{P}^N_k}L\to B\times_{\mbb{P}^N_k} L \to L \to \text{Spec } K, $$ and $$ B'\times_{\mbb{P}^N_k} L \to B' \to \text{Spec } \kappa, $$ establish that, as a $K$-scheme, $B'\times_{\mbb{P}^N_k} L$ factors through the nontrivial field extension $K\otimes_k \kappa / K$. Finally, $K$ is algebraically closed in $K(B\times_{\mbb{P}^N_k}L)$, since $B\times_{\mbb{P}^N_k}L$ is geometrically integral over $K$. Thus, there is no rational section of $h_L$ (and thus there is no inseparable section).
\noindent \textbf{Case IVb. Base Field Separably Closed in Extension. $k'=k$.} In the final case, assume that $k$ is already algebraically closed in $k(B')$. Set $k'$ equal to $k$. Now repeat the proof of \cite[Corollary 2.2]{GS}. The composite morphism $u\circ h:B'\to \mbb{P}^N_k$ is generically unramified. Thus, repeating the argument above, there exists a dense open subset $U\subset U_0$ such that for every $K/k$ and every $[L]\in U(K)$, $B'\times_{\mbb{P}^N_k} L$ is geometrically integral over $K$. Finally, $h_L$ is a finite, flat morphism of degree $a>1$ between geometrically integral $K$-schemes. Thus, there is no rational section. This completes the proof by Noetherian induction. \end{proof}
\begin{defn} \label{defn-PACsec} \marpar{defn-PACsec} As above, let $B$ be a finite type scheme over a field $k$, and assume that $B$ is separated and normal. Let $f:X\to B$ be a finite type morphism. A \emph{PAC section} of $f$ is an integral closed subscheme $Y\subset X$ such that the restriction of $f$, $f_Y:Y\to B$, is dominant with irreducible (but possibly nonreduced) geometric generic fiber. The \emph{domain of definition} is the maximal open subscheme of $B$ over which $f_Y:Y\to B$ is faithfully flat. \end{defn}
\begin{cor} \label{cor-ffBertini} \marpar{cor-ffBertini} For every finite type morphism $f:X\to B$, if $f$ has no PAC section, then there exists a finite Galois extension $k'/k$ and a dense open subset $U\subset \text{Grass}_k(\mbb{P}^r,\mbb{P}^N_k)$ such that for every field extension $K/k$ that is linearly disjoint from $k'/k$, for every $[L]\in U(\text{Spec } K)$, the restriction $f_L:X\times_{\mbb{P}^N_k}L\to B\times_{\mbb{P}^N_k} L$ also has no PAC section. \end{cor}
\begin{proof} There exists a finite type, surjective monomorphism $i:X'\to X$ such that every connected component $X'_i$ of $X'$ is regular and separated. Every PAC section of $X'\to B$ maps under $i$ to a PAC section of $X\to B$. Conversely, for every PAC section $Y$ of $X\to B$, the generic point $\eta_Y$ lifts uniquely to $X'$, and the closure of this generic point in $X'$ gives a PAC section of $X'\to B$. The same argument holds for $X'\times_{\mbb{P}^N_k} L \to X\times_{\mbb{P}^N_k} L \to B\times_{\mbb{P}^N_k} L$. Thus, up to replacing $X$ by $X'$, assume that every connected component $X_i$ of $X$ is regular and separated, and assume that each restriction morphism
$f|_{X_i}:X_i\to B$ has no PAC section.
\noindent
Up to shrinking $B$, assume that every $f|_{X_i}:X_i\to B$ is flat, and that $B$ is regular. The separable closure of $k(B)$ in $k(X_i)$ is a finite, separable extension of $k(B)$. Denote by $h_i:B_i\to B$ the integral closure of $B$ in this finite, separable extension. Since $B$ is finite type over a field, $B$ is excellent. Thus, $h_i$ is finite. By construction, $h_i$ is generically \'{e}tale. Up to shrinking $B$ further, assume that $h_i$ is everywhere finite and \'{e}tale.
\noindent Since $X_i$ is regular, it is normal. Thus, $f|_{X_i}$ factors through $h_i$, i.e., there exists a finite type, dominant morphism $g_i:X_i\to B_i$ such that $f|_{X_i}$ equals $h_i\circ g_i$.
By construction, the geometric generic fiber of $g_i$ is irreducible. By \cite[Th\'{e}or\`{e}me 4.10, 6.10]{Jou}, there exists a dense open subscheme of $B_i$ over which $X_i$ is faithfully flat with irreducible geometric fibers. Since $B_i$ is finite over $B$, up to shrinking $B$ further, assume that this dense open subscheme equals all of $B_i$. Thus, every PAC section of $f|_{X_i}$ maps under $g_i$ to a PAC section of $h_i$. Conversely, for every PAC section $Z$ of
$h_i$, the inverse image $g_i^{-1}(Z)$ is a PAC section of $f|_{X_i}$. In particular, since $f|_{X_i}$ has no PAC section, also $h_i$ has no PAC section. Since $h_i$ is finite, PAC sections are the same as inseparable sections. So $h_i$ has no inseparable sections. Denote by $h:B'\to B$ the disjoint union of the finitely many morphisms $h_i$.
\noindent By Theorem \ref{thm-ffBertini}, there exists a finite Galois extension $k'/k$ and a dense open subset $U\subset \text{Grass}_k(\mbb{P}^r,\mbb{P}^N)$ such that for every field extension $K/k$ that is linearly disjoint from $k'/k$, for every $[L]\in U(\text{Spec } K)$, also $h_L$ has no inseparable section. Every PAC section of $f_L:X\times_{\mbb{P}^N_k} L \to B\times_{\mbb{P}^N_k}L$ maps under the morphisms $g_i$ to a PAC section of $h_L$. Since $h_L$ is finite and has no inseparable section, it has no PAC section. Thus, $f_L$ has no PAC section. \end{proof}
\section{Fields that Admit Rational Points on Specializations of Rationally Connected Varieties} \label{sec-RCsolv} \marpar{sec-RCsolv}
\noindent After James Ax introduced PAC fields, he asked whether every specialization of Fano hypersurfaces in $\mathbb{P}^{n-1}$ over a perfect PAC field $L$ has an $L$-point \cite[Problem 3]{Ax}. A projective variety over a field $F$ is \emph{rationally connected}, resp. \emph{separably rationally
connected}, if for every algebraically closed field extension $E/F$, resp. for every separably closed field extension $E/F$, every pair of $E$-points of the variety is contained in the image of an $E$-morphism from $\mathbb{P}^1_{E}$. In characteristic $0$, these two definitions agree. Sufficiently general Fano hypersurfaces are \emph{separably rationally connected
varieties}, \cite{KMM} (in characteristic $0$), \cite{Zhu2} (in arbitrary characteristic). Thus, Ax was asking about rational points on specializations of certain separably rationally connected varieties.
\noindent The most common separably rationally connected varieties have unobstructed deformations, e.g., all Fano manifolds in characteristic $0$, (standard) projective homogeneous spaces in all characteristics, Fano complete intersections of ample divisors in (standard) projective homogeneous spaces in all characteristics, etc. However, to formulate results that also apply to those rationally connected varieties with obstructed deformations, it is necessary to address \emph{ramification}, particularly in mixed characteristic. For DVRs $(\Lambda,\mathfrak{m}_{\Lambda})$ and $(R,\mathfrak{m}_R)$, a local homomorphism $\phi:\Lambda \to R$ is \emph{regular} if \begin{enumerate} \item[(i)] $\phi(\mathfrak{m}_{\Lambda})R$ equals $\mathfrak{m}_R$, i.e., $\phi$
is \emph{weakly unramified} (note that this implies that $\phi$ is
injective), \item[(ii)] the residue field extension $\Lambda/\mathfrak{m}_\Lambda \to
R/\mathfrak{m}_R$ is separable (note that this holds automatically if
$\Lambda/\mathfrak{m}_\Lambda$ is perfect), and \item[(iii)] the fraction field extension is separable (note that this
holds automatically if the fraction field has characteristic $0$). \end{enumerate} If the local homomorphism is essentially of finite type and if $\Lambda$ is complete, then the first two hypotheses imply the third. The importance of regularity here has to do with \emph{smooth
parameter spaces}.
\begin{defn} \label{defn-param} \marpar{defn-param} For an integral scheme $S$, a \emph{parameter space} over $S$ is a triple $(M \to S, f_M:X_M\to M,\mathcal{L})$ of a smooth $S$-scheme $M$ of pure relative dimension $m$, a flat, projective morphism $f_M$, and an invertible sheaf $\mathcal{L}$ on the fiber of $X_M$ over $\text{Spec } \text{Frac}(S)$. \end{defn}
\begin{lem} \label{lem-ft} \marpar{lem-ft} Let $(R,\mathfrak{m}_R)$ be a DVR, let $X_R$ be a flat, projective $R$-scheme, and let $\mathcal{L}_{\text{Frac}(R)}$ be an invertible sheaf on $X_{\text{Frac}(R)}$. Let $(\Lambda,\mathfrak{m}_\Lambda)\to (R,\mathfrak{m}_R)$ be a local homomorphism of DVRs that is regular. There exists a parameter space over $S=\text{Spec } \Lambda$, and there exists a dominant $S$-morphism $\zeta:\text{Spec } R \to M$ such that $(X_R,\mathcal{L}_{\text{Frac}(R)})$ is the pullback by $\zeta$ of $(X_M,\mathcal{L})$. \end{lem}
\begin{proof} For any subring $L$ of $R$, by the usual limit arguments, there exists a subring $A\subset R$ containing $L$ such that $L\to A$ is finitely generated, there exists a flat, projective scheme $X_A\to \text{Spec } A$ whose base change to $R$ is $X_R$, and there exists an invertible sheaf on $X_{\text{Frac}(A)}$ whose base change to $X_{\text{Frac}(R)}$ is $\mathcal{L}_{\text{Frac}(R)}$. Without loss of generality, also assume that $A$ contains a generator for the principal ideal $\mathfrak{m}_R \subset R$.
\noindent Now, set $L$ equal to $\phi(\Lambda)$. Since $\text{Frac}(A)$ is a subextension of $\text{Frac}(\Lambda)\to \text{Frac}(R)$, which is a separable extension by (iii), also $\text{Frac}(\Lambda)\to \text{Frac}(A)$ is separably generated. Since $A\otimes_\Lambda \text{Frac}(\Lambda)$ is finitely generated over $\text{Frac}(\Lambda)$ with separably generated fraction field, there exists $a\in A\setminus\{0\}$ such that $A[1/a]\otimes_\Lambda \text{Frac}(\Lambda)$ is a smooth algebra over $\text{Frac}(\Lambda)$. Every uniformizing element $\pi$ of $\Lambda$ is also a uniformizing element of $R$ by (i). Thus, for $a\in A\subset R$, there exists an integer $e\geq 0$ and there exists $u\in R\setminus \mathfrak{m}_R$ such that $a$ equals $u\pi^e$. Adjoining $u$ to $A$ does not change $A\otimes_\Lambda \text{Frac}(\Lambda)$. Thus, assume that $u$ is in $A$. Then $A[1/u]\otimes_\Lambda \text{Frac}(\Lambda)$ equals $A[1/a]\otimes_\Lambda \text{Frac}(\Lambda)$. Since $u$ is in $R\setminus \mathfrak{m}_R$, also $1/u$ is in $R\setminus \mathfrak{m}_R$. Thus, adjoin $1/u$ to $A$, and assume that $A\otimes_\Lambda \text{Frac}(\Lambda)$ is smooth over $\text{Frac}(\Lambda)$.
\noindent Since $A$ is a finite type, flat $\Lambda$-algebra such that $A\otimes_\Lambda \text{Frac}(\Lambda)$ is smooth over $\text{Frac}(\Lambda)$, and by hypotheses (i) and (ii), there exists a N\'{e}ron desingularization, \cite[Tag 0BJ6]{stacks-project}. Precisely, there exist finitely many ``N\'{e}ron blowups'', $A\mapsto A[\mathfrak{p}/\pi]$ where $\mathfrak{p}$ equals $\mathfrak{m}_R\cap A$, after which $\Lambda \to A$ is smooth at $\mathfrak{p}$, i.e., $A/\mathfrak{m}_\Lambda A$ is smooth over $\Lambda/\mathfrak{m}_\Lambda$ at the prime $\mathfrak{p}/\mathfrak{m}_\Lambda A$. Thus, there exists $v\in A\setminus \mathfrak{p}$ such that $A[1/v]$ is smooth over $\Lambda$. Since $v$ is in $R\setminus \mathfrak{m}_R$, $1/v$ is also in $R\setminus \mathfrak{m}_R$. Thus, $A[1/v]$ is a subring of $R$. So after replacing $A$ by $A[1/v]$, now $A$ is a subring of $R$ that is a finitely generated, smooth $\Lambda$-algebra. Define $M$ to be $\text{Spec } A$. \end{proof}
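\noindent To illustrate a single N\'{e}ron blowup, suppose that $2$ is invertible in $\Lambda$, let $\pi$ be a uniformizer of $\Lambda$, and let $A = \Lambda[t]/\langle t^2 - \pi^2 \rangle$ with the section $t\mapsto \pi$ of $\text{Spec } A$ over $\text{Spec } \Lambda$, so that $\mathfrak{p} = \langle \pi, t\rangle$. The closed fiber $(\Lambda/\mathfrak{m}_\Lambda)[t]/\langle t^2\rangle$ is not smooth, but adjoining $s = t/\pi$ gives $A[\mathfrak{p}/\pi] = \Lambda[s]/\langle s^2 - 1\rangle$, which is \'{e}tale over $\Lambda$, and the section factors through $s=1$.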
\begin{defn} \label{defn-primereg} \marpar{defn-primereg} A \emph{prime finite} DVR $(\Lambda,\mathfrak{m}_\Lambda)$ is a DVR whose residue field $\Lambda/\mathfrak{m}_\Lambda$ is a finite extension of the prime subfield, i.e., either the residue field is a finite field if the characteristic is positive, or it is a number field if the characteristic is $0$. A DVR $(R,\mathfrak{m}_R)$ is \emph{prime regular}, or \emph{regular over a DVR whose residue field is finite over
the prime subfield}, if there exists a prime finite DVR $(\Lambda,\mathfrak{m}_\Lambda)$ and a local homomorphism $\phi:\Lambda\to R$ that is regular. \end{defn}
\begin{lem} \label{lem-equi} \marpar{lem-equi} Every equicharacteristic DVR is prime regular. \end{lem}
\begin{proof} Let $(R,\mathfrak{m}_R)$ be an equicharacteristic DVR. Denote by $F\subset R$ the prime subfield. Let $\theta\in \mathfrak{m}_R$ be a generator. Since $\theta$ is not an invertible element of $R$, $\theta$ is not in $F$. In fact, $\theta$ is transcendental over $F$, for otherwise a minimal polynomial $m_\theta(t) = t^d + \dots + a_1t+ a_0$ has degree $d\geq 1$ and gives a relation $1= -a_0^{-1}\theta(a_1+\dots+\theta^{d-1})$. This implies that $\theta$ is invertible in $R$, contradicting $\theta\in\mathfrak{m}_R$. Thus, $F[\theta]$ is a copy of the polynomial ring in $R$. Since the multiplicative system $F[\theta]\setminus \theta F[\theta]$ is contained in $R\setminus \mathfrak{m}_R$, $R$ contains the ring of fractions $\Lambda = F[\theta]_{\langle \theta \rangle}$. The inclusion of DVRs $(\Lambda,\mathfrak{m}_\Lambda)\to (R,\mathfrak{m}_R)$ is a local homomorphism. It is weakly unramified since $\theta\in \mathfrak{m}_\Lambda$ is a generator of $\mathfrak{m}_R$. Since the residue field $F$ of $\Lambda$ is perfect, every field extension of the residue field is separable. Thus, it only remains to check that $K=\text{Frac}(R)$ is separable over $F(\theta)$.
\noindent In characteristic $0$ this is automatic. Assume the characteristic equals $p$. Since $F(\theta)^{1/p}$ equals $F(\theta)[t]/\langle t^p-\theta \rangle$, we need to prove that $A=K[t]/\langle t^p-\theta \rangle$ contains no nonzero nilpotent $\alpha$ with $\alpha^p$ equal to $0$. The $K$-vector space $A$ is free with basis $(1,t,\dots,t^{p-1})$. So every element $\alpha$ of $A$ has a unique decomposition, $$ \alpha = a_0 + a_1t + \dots + a_{p-1}t^{p-1}. $$
\noindent Let $\alpha$ be a nonzero element, i.e., some $a_i$ is nonzero. Denote by $e\in \mbb{Z}$ the minimum of the valuations of those $a_i\in K$ that are nonzero. Then up to replacing $\alpha$ by $\theta^{-e}\alpha$, assume that every $a_i$ is in $R$, and at least one $a_i$ has valuation $0$. Let $\ell$ be the minimal $i$ with $0\leq i \leq p-1$ such that $a_i$ has valuation $0$. Then $a_\ell$ is invertible, so that also $a_\ell^p$ is invertible. Therefore $a_\ell^p\theta^\ell$ has valuation $\ell$.
\noindent On the other hand, for every $m$ with $0\leq m\leq p-1$ and $m\neq \ell$, either $a_m$ equals $0$ so that $a_m^p\theta^m$ equals $0$, or $\text{val}(a_m)>0$ so that $a_m^p\theta^m$ has valuation $\geq p > \ell$, or $\text{val}(a_m)$ equals $0$ but $m>\ell$ so that again $a_m^p\theta^m$ has valuation $m>\ell$. So also the sum $$ \sum_{0\leq m \leq p-1, m\neq \ell} a_m^p\theta^m, $$ is either zero or has valuation $\geq \ell+1$. Thus the full sum, $$ \alpha^p = a_\ell^p \theta^\ell + \sum_{0\leq m\leq p-1,m\neq \ell} a_m^p \theta^m $$ is nonzero of valuation $\ell$. Therefore, $A$ contains no nonzero nilpotent elements, and $K$ is separable over $F(\theta)$. \end{proof}
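\noindent For instance, with $p=3$ and $\alpha = \theta + t$, so that $a_0 = \theta$, $a_1 = 1$, $a_2 = 0$, and $\ell = 1$, the Frobenius gives $$ \alpha^3 = \theta^3 + t^3 = \theta^3 + \theta, $$ which has valuation $1 = \ell$, as predicted.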
\noindent Because of the lemma, the only DVRs that are not prime regular are mixed characteristic DVRs that are not weakly unramified over a Cohen ring, e.g., for $e>1$, the localization of $\mbb{Z}[x,y]/\langle y^e-px \rangle$ at the height one prime generated by $p$ and $y$.
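\noindent Indeed, in that localization $R$ the element $x$ is invertible, since $x$ does not lie in the prime $\langle p, y\rangle$, so $p = x^{-1}y^e$ and $\mathfrak{m}_R = \langle y \rangle$. Thus $p$ has valuation $e>1$ in $R$, while every Cohen ring $\Lambda$ has $\mathfrak{m}_\Lambda = \langle p \rangle$, so every local homomorphism $\phi:\Lambda\to R$ satisfies $\phi(\mathfrak{m}_\Lambda)R = \mathfrak{m}_R^e \subsetneq \mathfrak{m}_R$, i.e., $\phi$ is not weakly unramified.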
\noindent For a DVR $(\Lambda,\mathfrak{m}_\Lambda)$, for a regular local homomorphism $(\Lambda,\mathfrak{m}_\Lambda) \to (R,\mathfrak{m}_R)$, for every pair $(X_R\to \text{Spec } R,\mathcal{L}_{\text{Frac}(R)})$ of a flat, projective $R$-scheme $X_R$ and an invertible sheaf on the generic fiber, there exists a parameter space $(M\to \text{Spec } \Lambda,f_M:X_M\to M,\mathcal{L})$ and a $\Lambda$-morphism $\zeta:\text{Spec } R\to M$ pulling back $(X_M,\mathcal{L})$ to $(X_R,\mathcal{L}_{\text{Frac}(R)})$, by Lemma \ref{lem-ft}. Thus every extension field of $R/\mathfrak{m}_R$ admits a morphism to $M_0=M\times_{\text{Spec } \Lambda} \text{Spec } (\Lambda/\mathfrak{m}_\Lambda)$, and this morphism is even dominant.
\noindent For a DVR $(\Lambda,\mathfrak{m}_\Lambda)$ and for a parameter space as above, let $F$ be a field (not necessarily finite), let $E$ be the function field of a geometrically integral $F$-scheme of dimension $d$, and let $z:\text{Spec } E\to M_0$ be a morphism, not necessarily dominant. Let $M^o_{\eta}$ be a specified dense open subset of the generic fiber $M_\eta=M\times_{\text{Spec } \Lambda} \text{Spec } \text{Frac}(\Lambda)$ (for instance the entire generic fiber, but there are applications when $M^o_{\eta}$ is a smaller dense open).
\begin{defn} \label{defn-int} \marpar{defn-int} An \emph{integral extension} of $z$ is a triple $$ (B\to \text{Spec } \Lambda, \pi^o_B:C^o_B\to B,z_B:C^o_B\to M), $$ and a pair $$ (\psi_B:\text{Frac}(B_0)\to F,\psi_C:F(C^o_F)\to E) $$ consisting of a smooth, quasi-projective, surjective morphism $B\to \text{Spec } \Lambda$ with integral closed fiber and integral generic fiber, a smooth, quasi-projective, surjective morphism $\pi^o_B$ of relative dimension $d$ with geometrically integral fibers, and a $\Lambda$-morphism $z_B$ together with a field homomorphism $\psi_B:\text{Frac}(B_0)\to F$ for the fraction field of the integral scheme $B_0=B\otimes_\Lambda \Lambda/\mathfrak{m}_\Lambda$ and an isomorphism of $F$-extensions $\psi_C: F(C^o_F) \to E$ for the fraction field of the integral scheme $C^o_F=C^o_B\times_B \text{Spec } F$ such that \begin{enumerate} \item[(i)] $z_B^{-1}(M^o_\eta)$ contains the generic fiber $C^o_B\times_{\text{Spec } \Lambda}\text{Spec } \text{Frac}(\Lambda)$, and \item[(ii)] the morphism $z$ equals the composition of $\text{Spec } \psi_C:\text{Spec } E \to \text{Spec } F(C^o_F)$ and the morphism $\text{Spec } F(C^o_F) \to M$ induced by $z_B$. \end{enumerate} \end{defn}
\begin{rmk} \label{rmk-integralreg} \marpar{rmk-integralreg} For an integral extension, the stalk $R$ of the structure sheaf of $B$ at the generic point of the closed fiber $B_0=B\times_{\text{Spec } \Lambda} \text{Spec } (\Lambda/\mathfrak{m}_\Lambda)$ is a DVR that is regular over $\Lambda$ (and also essentially of finite type) since $B$ is smooth over $\Lambda$. \end{rmk}
\begin{lem} \label{lem-extend} \marpar{lem-extend} For a prime finite DVR $(\Lambda,\mathfrak{m}_\Lambda)$, for a parameter space over $\Lambda$ and a specified dense open subset $M^o_\eta$ of the generic fiber, for every pair $(E/F,z:\text{Spec } E\to M_0)$ as above, there exists an integral extension. \end{lem}
\begin{proof} Notice first that, since $F$ is algebraically closed in $E$, the algebraic closure in $E$ of the prime subfield is actually a subfield of $F$. Thus, since $\Lambda/\mathfrak{m}_\Lambda$ is a finite extension of the prime field, the subfield $\Lambda/\mathfrak{m}_\Lambda$ of $E$ is actually a subfield of $F$. By limit arguments there exists a subring $A_0\subset F$ and a smooth morphism $\pi^o_{B_0}:C^o_{B_0}\to \text{Spec } A_0$ such that $E$ equals $F(C^o_F)$ for $C^o_F=C^o_{B_0}\times_{\text{Spec } A_0}\text{Spec } F$, and such that $\Lambda/\mathfrak{m}_\Lambda \to A_0$ is of finite type. Up to adjoining finitely many elements of $F$ to $A_0$, up to replacing $C^o_{B_0}$ by the base change over this larger ring $A_0$, and up to replacing $C^o_{B_0}$ by a dense Zariski open subscheme, there also exists a $\Lambda$-morphism $z_{B_0}:C^o_{B_0}\to M$ that induces $z$. Also, since $\Lambda/\mathfrak{m}_\Lambda$ is perfect, up to inverting one nonzero element of $A_0$, the scheme $B_0=\text{Spec } A_0$ is smooth over $\Lambda/\mathfrak{m}_\Lambda$.
\noindent It remains to extend the triple $(B_0=\text{Spec } A_0$, $\pi^o_{B_0}:C^o_{B_0}\to B_0,z_{B_0}:C^o_{B_0}\to M)$ from $\Lambda/\mathfrak{m}_\Lambda$ to all of $\Lambda$. This follows by the method of \cite[Section 3]{StarrXu}. First, since $A_0$ is a smooth, finite type algebra over $\Lambda/\mathfrak{m}_\Lambda$, up to a further localization, it is the quotient of a polynomial ring over $\Lambda/\mathfrak{m}_\Lambda$ by an ideal generated by a regular sequence. Lifting the coefficients of the polynomials in this regular sequence, there exists a $\Lambda$-smooth algebra $A'$ with $A'\otimes_\Lambda \Lambda/\mathfrak{m}_\Lambda$ equal to $A_0$. If $d$ equals $0$, define $C'\to \text{Spec } A'$ to be the identity. For $d\geq 1$, up to localizing $A_0$ and replacing $C^o_{B_0}$ by a dense Zariski open, realize $C^o_{B_0}$ as a dense open subset of a hypersurface in $\mbb{P}^{d+1}_{A_0}$. For a general lift to $A'$ of the coefficients of the defining polynomial of this hypersurface, the lift of the hypersurface in $\mbb{P}^{d+1}_{A'}$ is flat over a dense open subset of $\text{Spec } A'$ that contains $\text{Spec } A_0$, and it is smooth over a dense open subset. Up to localizing $A_0$ and $A'$ further, this hypersurface is $A'$-flat. Since the geometric generic fiber is a smooth hypersurface of dimension $d\geq 1$ in projective space, it is integral. Define $C'$ to be a dense open subset of this hypersurface that intersects the fiber over $\text{Spec } A_0$ and that is smooth over $\text{Spec } A'$. Since the geometric generic fiber is integral, up to shrinking $C'$ further, $C'$ has geometrically integral fibers over $\text{Spec } A'$.
\noindent Now consider the graph of $z_{B_0}$ as a closed subscheme of the fiber product $C'\times_{\text{Spec } \Lambda} M$. Up to shrinking further, it is an irreducible component (of multiplicity $1$) of a complete intersection of ample divisors in the closed fiber of $C'\times_{\text{Spec } \Lambda} M$. As above, lift the coefficients of the defining equations to lift $z_{B_0}$ to a closed subscheme $C$ of $C'\times_{\text{Spec } \Lambda} M$ whose generic fiber over $\text{Frac}(\Lambda)$ is a general complete intersection of ample divisors. Since $\text{Frac}(\Lambda)$ is an infinite field, for an appropriate choice of the lifts of the coefficients, $C$ is smooth and the image in $M_\eta$ intersects $M^o_\eta$.
\noindent There is an issue about irreducibility of geometric fibers. Choosing projective models over $\Lambda$ of all of the schemes, the Stein factorization $\overline{B}$ of $C\to \text{Spec } A'$ (roughly the integral closure of $A'$ in the fraction field of $C$) may be nontrivial. However, since the closed fiber $C_0$ is smooth over $A_0$ with geometrically integral fibers, $\text{Spec } A_0$ is an irreducible component of the closed fiber $\overline{B}_0$. Replace $\overline{B}$ by the open complement in $\overline{B}$ of the union of the finitely many irreducible components of $\overline{B}_0$ different from $\text{Spec } A_0$. The restriction of $C$ over $\overline{B}$ now has geometrically irreducible fibers. Define $C^o_B$ to be the open subset of $C$ that is the smooth locus of the morphism to $\overline{B}$. Define $B$ to be the open image in $\overline{B}$ of this smooth morphism. Define $z_B$ to be the restriction to the locally closed subscheme $C^o_B$ of $C\times_{\text{Spec } \Lambda} M$ of the projection to $M$. By construction, the inverse image of $M^o_\eta$ in the generic fiber $C^o_\eta = C^o_B\otimes_{\text{Spec } \Lambda}\text{Spec } \text{Frac}(\Lambda)$ is a dense open. The complement is a proper closed subset $D_\eta$. Since $C^o_B$ is flat over $\text{Spec } \Lambda$, the closure $D$ of $D_\eta$ is flat over $\text{Spec } \Lambda$. Thus, $D$ cannot contain the irreducible component $C^o_{B_0}$. After replacing $C^o_B$ by the open complement of $D$, this triple $(B\to \text{Spec } \Lambda,\pi^o_B:C^o_B\to B,z_B:C^o_B\to M)$ is an integral extension of $z$. \end{proof}
\noindent Here is a precise formulation of existence of rational points for specializations of separably rationally connected varieties.
\begin{defn} \label{defn-RCfld} \marpar{defn-RCfld} A field $L$ is \emph{RC solving}, or \emph{admits rational points on
specializations of separably rationally connected varieties}, if for every projective, flat scheme $X_R$ over a prime regular DVR $R$ such that $X_R\times_{\text{Spec } R} \text{Spec } \overline{\text{Frac}(R)}$ is smooth, integral, and separably rationally connected, for every field extension $z^*:R/\mathfrak{m}_R \hookrightarrow L$, $X_R\times_{\text{Spec } R} \text{Spec } L$ has an $L$-rational point. The field $L$ is \emph{characteristic $0$ RC
solving} if the condition holds for every prime regular DVR $R$ whose fraction field has characteristic $0$. \end{defn}
\begin{rmk} \label{rmk-reminder} \marpar{rmk-reminder} To summarize the lemmas above, the prime regular hypothesis is automatic if $(R,\mathfrak{m}_R)$ is an equicharacteristic DVR. Also, $L$ is RC solving if and only if, for every prime finite DVR $(\Lambda,\mathfrak{m}_\Lambda)$, for every parameter space over $\text{Spec } \Lambda$ such that there is a dense open subset $M^o_\eta$ of the generic fiber over which $f_M$ is smooth with geometric fibers that are integral and separably rationally connected, for every morphism to the closed fiber $\text{Spec } L \to M_0$, the pullback $X_M\times_M \text{Spec } L$ has an $L$-point. \end{rmk}
\begin{thm}\cite[Lemma 2.5]{GHMS} \label{thm-RCfib}
\marpar{thm-RCfib} For every algebraically closed field $k$, every function field $k(C)$ of a geometrically integral, smooth, projective $k$-curve $C$ is RC solving. \end{thm}
\begin{proof} The notation is as in the definition. By hypothesis there exists a prime finite DVR $(\Lambda,\mathfrak{m}_\Lambda)$ and a local homomorphism $(\Lambda,\mathfrak{m}_\Lambda) \to (R,\mathfrak{m}_R)$. By Lemma \ref{lem-ft}, there exists a parameter space $(M\to S,f_M:X_M\to M, \mathcal{L})$ and a $\Lambda$-morphism $\zeta:\text{Spec } R\to M$ pulling back $X_M$ to $X_R$. In particular, the geometric generic fiber $X_R\otimes_R \overline{\text{Frac}(R)}$ is the base change of the fiber of $f_M$ over the image under $\zeta$ of the generic point of $\text{Spec } R$. Each of the following properties of a proper scheme over an algebraically closed field holds if and only if it holds after base change to an arbitrary algebraically closed field extension: smoothness, integrality, separable rational connectedness. Thus, these properties all hold for that fiber of $f_M$. Thus the image under $\zeta$ of the generic point of $\text{Spec } R$ is contained in the maximal open subscheme $M^o_\eta$ of the generic fiber $M_\eta = M\otimes_\Lambda \text{Frac}(\Lambda)$ over which the geometric fibers of $f_M$ are smooth, integral, and separably rationally connected. So $M^o_\eta$ is a dense open subset of the generic fiber $M_\eta$.
\noindent Define $z:\text{Spec } k(C) \to M$ to be the composition of $\text{Spec } k(C) \to \text{Spec } R/\mathfrak{m}_R$ and the restriction to $\text{Spec } R/\mathfrak{m}_R$ of $\zeta$. By Lemma \ref{lem-extend}, there exists an integral extension of $z$. Denote by $f_C:X_C\to C^o_B$ the pullback of $X_M$ by $z_B$. By construction, $\pi^o_B:C^o_B\to B$ is smooth, quasi-projective of relative dimension $1$, and the geometric generic fiber of $f_C$ is a smooth, integral, separably rationally connected variety. Thus, by \cite{GHS}, \cite{dJS}, there exists a finite extension $K'$ of the fraction field of $B$ and a $C^o_B$-morphism $s:\text{Spec } K'\times_B C^o_B\to X_C$. Consider the DVR $(\mc{O},\mathfrak{n})$ that is the stalk $\mc{O}_{B,\eta_{B_0}}$ of the structure sheaf of $B$ at the generic point of the closed fiber $B_0 = B\times_{\text{Spec }
\Lambda}\text{Spec }(\Lambda/\mathfrak{m}_\Lambda)$. By the Krull-Akizuki theorem, there exists a DVR $(\mc{O}',\mathfrak{n}')$ with fraction field $K'$ that dominates $(\mc{O},\mathfrak{n})$ (but $\mc{O}'$ is not necessarily a finite $\mc{O}$-module). Since $f_C$ is proper, by the valuative criterion of properness, the maximal domain $V$ of definition of the rational transformation $s:\text{Spec } \mc{O}' \times_B C^o_B\to X_C$ intersects the closed fiber $\text{Spec }(\mc{O}'/\mathfrak{n}') \times_B C^o_B$. Thus, $s$ gives a rational section of $f_C$ over the base change of $C^o_{B_0}$ to the extension field $\mc{O}'/\mathfrak{n}'$ of the fraction field $\text{Frac}(B_0)$. So the same holds after extension to any bigger field, e.g., the algebraic closure of $\mc{O}'/\mathfrak{n}'$.
\noindent Existence of a section of a morphism of finite type schemes over an algebraically closed field is a property that holds if and only if it holds after an arbitrary extension to an algebraically closed field. Thus, since it holds after extension from $\text{Frac}(B_0)$ to the algebraic closure of $\mc{O}'/\mathfrak{n}'$, it also holds after extension from $\text{Frac}(B_0)$ to $k$. Therefore there is a $k(C)$-point of $X_R\otimes_R k(C)$. \end{proof}
\begin{thm}\cite{Esnaultpadic}, \cite{EsnaultXu} \label{thm-Esnault} \marpar{thm-Esnault} Every finite field is RC solving. \end{thm}
\begin{rmk} \label{rmk-Esnault} \marpar{rmk-Esnault} Every subfield of a finite field is a finite field. So for a finite field $L$, for every DVR $(R,\mathfrak{m}_R)$ and field homomorphism $R/\mathfrak{m}_R\to L$, already $(R,\mathfrak{m}_R)$ is prime finite. So $(R,\mathfrak{m}_R)$ is prime regular. \end{rmk}
\begin{proof} When $R$ has mixed characteristic, this follows from \cite{Esnaultpadic}. When $R$ is equicharacteristic, this follows from \cite{EsnaultXu}. \end{proof}
\begin{thm}\cite{HogadiXu} \label{thm-HX} \marpar{thm-HX} Every PAC field of characteristic $0$ is RC solving. \end{thm}
\begin{thm}\cite[Theorem 1.1]{SPAC} \label{thm-StarrAx}
\marpar{thm-StarrAx} A PAC field of characteristic $p$ is RC solving if it contains a primitive root of unity of order $n$ for every integer $n$ prime to the characteristic. \end{thm}
\begin{rmk} Please note: in \cite{SPAC} the hypothesis that the DVR is prime regular is missing. This is a mistake. The proof is only valid under the hypothesis that the DVR is prime regular: for the ``bifurcation'' in Step 3 in Section 2 and the use of \cite[Lemma 2.5]{GHMS} in the argument preceding Lemma 1.12, the argument holds only if the local homomorphism $S_{\mathfrak{m}_S}\to \mc{O}_{P,Q}$ is weakly unramified. For Corollary 1.2, and similar applications, this is irrelevant: there is a parameter space over $\text{Spec } \mbb{Z}$ for Fano complete intersections. \end{rmk}
\begin{proof} The proof in \cite{SPAC} depends on a symmetry of ``Bertini theorems''. Here is a proof that instead uses the finite field Bertini theorem.
\noindent By Lemma \ref{lem-ft} and Lemma \ref{lem-extend} (with $d=0$) assume we have the following: a DVR $(\Lambda,\mathfrak{m}_\Lambda)$ whose residue field is a finite field, a quasi-projective, smooth $\Lambda$-scheme $B$ of relative dimension $m$, a projective, flat morphism $g_B:X_B\to B$, and a field homomorphism $\text{Frac}(B_0)\hookrightarrow L$ from the fraction field of the (integral) closed fiber $B_0=B\times_{\text{Spec } \Lambda} \text{Spec } (\Lambda/\mathfrak{m}_\Lambda)$ to a perfect PAC field $L$ containing a primitive root of unity of every order prime to the characteristic. Replace $\Lambda$ by its Henselization, and base change $B$ and $X_B$ over the Henselization. Then there is a unique connected component of $B$ that contains $B_0$. Up to replacing $B$ by this open and closed subset, assume that $B\to \text{Spec } \Lambda$ has irreducible geometric generic fiber.
\noindent Since $B$ is smooth over $\text{Spec } \Lambda$, up to replacing $B$ by an open subset that is dense in all fibers, there exists an \'{e}tale morphism $f:B\to \mbb{A}^m_\Lambda$. Denote $X_B\times_B B_0$ by $X_0$, and denote by $g_0:X_0\to B_0$ the restriction of $g_B$. By the proof of Corollary \ref{cor-ffBertini}, there exists a finite type, surjective monomorphism $i:X'_0\to X_0$ such that every connected component of $X_0'$ is regular and separated. Up to shrinking $B$ and $B_0$, assume that the composition $g_0\circ i:X_0'\to B_0$ is flat. For each of the finitely many connected components, the algebraic closure of the field $\Lambda/\mathfrak{m}_\Lambda$ in the function field of the component is a finite extension obtained by adjoining a root of unity whose order $n$ is prime to $p$. Thus the compositum of these fields is obtained by adjoining a root of unity $\alpha$.
\noindent Denote by $m_\alpha(t)$ the minimal polynomial of $\alpha$ over $\Lambda/\mathfrak{m}_\Lambda$. This is an irreducible, separable, monic polynomial. Let $m(t)\in \Lambda[t]$ be any monic element reducing to $m_\alpha(t)$. Then $\Lambda'=\Lambda[t]/\langle m(t) \rangle$ is an \'{e}tale extension of $\Lambda$. Since the quotient $\Lambda'/\mathfrak{m}_\Lambda \Lambda'$ is a field, the ideal $\mathfrak{m}_{\Lambda'} = \mathfrak{m}_\Lambda \Lambda'$ is a maximal ideal. Since $\mathfrak{m}_\Lambda$ is principal, also $\mathfrak{m}_{\Lambda'}$ is principal. Finally, by hypothesis the field $L$ contains all roots of unity of order prime to $p$, so $\Lambda/\mathfrak{m}_\Lambda \to L$ factors through $\Lambda'/\mathfrak{m}_{\Lambda'}$. Thus, up to replacing $\Lambda$ by $\Lambda'$ and replacing every scheme as above by its base change by the finite and regular local homomorphism $\Lambda\to \Lambda'$, assume that $\Lambda/\mathfrak{m}_\Lambda$ is algebraically closed in the fraction field of every irreducible component of $X_0'$. Thus the field extension in Corollary \ref{cor-ffBertini} is an isomorphism. So every field extension of $\Lambda/\mathfrak{m}_\Lambda$ satisfies the hypothesis of the corollary.
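\noindent For concreteness, here is a sketch of the basic instance of this lift, with $\Lambda = \mbb{Z}_p$, so $\Lambda/\mathfrak{m}_\Lambda = \mathbb{F}_p$; the general case has the same form. For $\alpha$ a primitive $n$-th root of unity with $n$ prime to $p$, the minimal polynomial $m_\alpha(t)$ is an irreducible factor of the cyclotomic polynomial $\Phi_n(t)$ modulo $p$, and its degree equals the multiplicative order $d$ of $p$ in $(\mbb{Z}/n\mbb{Z})^\times$. For any monic lift $m(t)\in \mbb{Z}_p[t]$ of $m_\alpha(t)$, $$ \Lambda' = \mbb{Z}_p[t]/\langle m(t)\rangle, \ \ \Lambda'/\mathfrak{m}_{\Lambda'} \cong \mathbb{F}_{p^d} = \mathbb{F}_p(\alpha), $$ and $\Lambda'$ is the unramified extension of $\mbb{Z}_p$ of degree $d$, with $\mathfrak{m}_{\Lambda'} = p\Lambda'$ still principal.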
\noindent Denote by $k$ the residue field $\Lambda/\mathfrak{m}_\Lambda$. If $X'_0\to B_0$ has a PAC section $Y_0\subset X_0'$, then the base change $Y_0\times_{B_0} \text{Spec } L$ is a geometrically irreducible, finite type $L$-scheme. Since $L$ is a perfect PAC field, this scheme has an $L$-point, which then maps under $i$ to an $L$-point of $X_0\times_{B_0}\text{Spec } L$, as desired.
\noindent Thus, by way of contradiction, assume that there is no PAC section of $g$. Then by Corollary \ref{cor-ffBertini} with $k'$ equal to $k$, there exists a dense, Zariski open subset $U\subset \text{Grass}_k(\mbb{P}^1,\mbb{P}^m)$ such that for every field extension $K/k$ and for every $[P]\in U(K)$, the inverse image $f^{-1}(P)$ is geometrically irreducible, and the restriction of $g$ over $f^{-1}(P)$ has no PAC section. By hypothesis, the family $X_B\to B\to \text{Spec }\Lambda$ is a parameter space such that the geometric generic fiber of $g_B$ is smooth, integral, and separably rationally connected. Thus, by Theorem \ref{thm-RCfib}, up to replacing $K$ by a finite extension $K'/K$ obtained by adjoining a root of unity, there is a rational section of the restriction of $g$ over $f^{-1}(P)$. This contradiction implies that $g$ does have a PAC section. Therefore $X_0\times_{B_0} \text{Spec } L$ has an $L$-point. \end{proof}
\section{Specializations of Rationally Simply Connected Varieties} \label{sec-RSCspec} \marpar{sec-RSCspec}
\noindent The first new result is an analogous result for perfect PAC fields of positive characteristic that do not necessarily contain all roots of unity, but where rational connectedness is replaced by \emph{rational simple
connectedness}. For an algebraically closed field $\overline{Q}$ of characteristic $0$, a \emph{rationally simply connected fibration} over a $\overline{Q}$-curve is a pair $(f_{\overline{Q}}:X_{\overline{Q}}\to C_{\overline{Q}},\mathcal{L}_{\overline{Q}})$ where $C_{\overline{Q}}$ is a smooth, projective, connected $\overline{Q}$-curve, where $f_{\overline{Q}}$ is a proper, flat morphism, and where $\mathcal{L}_{\overline{Q}}$ is an invertible sheaf that satisfies the following six hypotheses. First, $X_{\overline{Q}}$ is smooth over $\overline{Q}$. Second, every geometric fiber of $f_{\overline{Q}}$ is irreducible. Third, $\mathcal{L}_{\overline{Q}}$ is $f_{\overline{Q}}$-ample. The final three hypotheses involve the geometric generic fiber $Y$ of $f_{\overline{Q}}$, a scheme over the algebraic closure $k$ of the fraction field of $C_{\overline{Q}}$, together with the pullback $\mathcal{L}_Y$ of $\mathcal{L}_{\overline{Q}}$ as an invertible sheaf on $Y$. The fourth hypothesis is that for the parameter space $\Kgnb{0,1}(Y/k,1)$ of $1$-pointed, genus $0$ stable maps to $Y$ having $\mathcal{L}_Y$-degree $1$, i.e., ``lines'' $\ell$, for the maximal open subscheme $Y_{\text{free}}$ of $Y$ over which the evaluation morphism $$ \text{ev}_1:\Kgnb{0,1}(Y/k,1)\to Y, \ \ ([\ell],p)\mapsto p, $$ is smooth (automatically this is a dense open in $Y$), the fiber of $\text{ev}_1$ over every geometric point of $Y_{\text{free}}$ is nonempty, irreducible, and rationally connected. There is a second important morphism, the ``forgetful'' morphism, $$ \Phi:\Kgnb{0,1}(Y/k,1)\to \Kgnb{0,0}(Y/k,1), \ \ ([\ell],p)\mapsto [\ell]. 
$$ For every integer $m\geq 1$ there is a parameter $k$-scheme $\text{FreeChains}_2(Y/k,m)$ of ordered $m$-tuples $$ (([\ell_1],p_{1,0},p_{1,\infty}),\dots,([\ell_m],p_{m,0},p_{m,\infty})) $$ of triples $([\ell_i],p_{i,0},p_{i,\infty})$ of $2$-pointed lines in $Y$ such that $p_{i,0}$ and $p_{i,\infty}$ are in the dense open $Y_{\text{free}}$, and such that $p_{i,\infty}$ equals $p_{i+1,0}$ as points of $Y_{\text{free}}$ for every $i=1,\dots,m-1$. There is an evaluation morphism $$ \text{ev}_2:\text{FreeChains}_2(Y/k,m)\to Y\times_{\text{Spec }(k)} Y, $$ that sends each ordered $m$-tuple as above to the ordered pair $(p_{1,0},p_{m,\infty})$. The fifth hypothesis is that there exists an integer $m\geq 1$ and a dense open $V$ of $Y\times_{\text{Spec }(k)}Y$ such that the fiber of $\text{ev}_2$ over every geometric point of $V$ is nonempty, irreducible, and ``birationally rationally connected'', i.e., there exists one projective model of this quasi-projective variety that is rationally connected (hence every projective model is rationally connected). If this holds for one $m$, then there exists an integer $m_0$ such that it holds for every $m\geq m_0$. The final hypothesis is that $(Y,\mathcal{L}_Y)$ contains a very twisting scroll, cf. \cite[Definition 12.7]{dJHS}. This hypothesis is equivalent to existence of a morphism $\zeta:\mbb{P}^1_k\to \Kgnb{0,1}(Y/k,1)$ such that all of the following hold, \begin{enumerate} \item[(i)] the composition $\text{ev}_1\circ \zeta:\mbb{P}^1_k\to Y$ is
free, i.e., $(\text{ev}_1\circ \zeta)^*T_{Y/k}$ is globally
generated, \item[(ii)] the morphism $\text{ev}_1$ is smooth at every point in
$\zeta(\mbb{P}^1_k)$, \item[(iii)] the pullback by $\zeta$ of the relative tangent sheaf
$T_{\text{ev}_1}$ is ample, and \item[(iv)] the pullback by $\zeta$ of $T_{\Phi}$ is globally generated. \end{enumerate} For a characteristic $0$ field $Q$, a pair $(f_Q:X_Q\to C_Q,\mathcal{L}_Q)$ is a \emph{rationally simply connected fibration} over a $Q$-curve if the base change of the pair to the algebraic closure $\overline{Q}$ is a rationally simply connected fibration over a $\overline{Q}$-curve as above. The main theorem of \cite{dJHS}, Theorem 13.1 (cf. also \cite[Definition 4.8 and Theorem 4.9]{SStrsbg}), gives an integer $\epsilon$ and a sequence $(Z_{Q,e})_{e\geq \epsilon}$ of irreducible components $Z_{Q,e}$ of the Hilbert scheme $\text{Hilb}^{et+1-g(C_Q)}_{X_Q/Q}$ satisfying all of the following. \begin{enumerate} \item[(i)] The geometric generic point of $Z_{Q,e}$ parameterizes the closed image of a section $\sigma:C_Q\to X_Q$ of $f_Q$ of $\mathcal{L}_Q$-degree $e$ that is $(g)$-free, i.e., the deformations of the section relative to a fixed divisor of degree $\max(2g(C_Q),1)$ are unobstructed. \item[(ii)] The restriction to $Z_{Q,e}$ of the Abel map of $\mathcal{L}_Q$,
$\alpha_{\mathcal{L}_Q}|_Z:Z_{Q,e} \to \Pic{e}{C_Q/Q}$, has fiber over the geometric generic point of $\Pic{e}{C_Q/Q}$ that is nonempty, irreducible, and rationally connected. \item[(iii)] After arbitrary base change from $Q$ to an algebraically closed field, for every section $\sigma$ as above that is $(g)$-free, after attaching to the closed image $\sigma(C_Q)$ sufficiently many general lines in general fibers of $f_Q$, the resulting curve is parameterized by $Z_{Q,e}$ for the appropriate $\mathcal{L}_Q$-degree $e$. \end{enumerate}
\noindent In order to state the result, it is useful to specify a bounded family of polarized schemes. Let $S$ be an integral, Noetherian, regular scheme of dimension $\leq 1$ whose function field $Q$ has characteristic $0$; usually $S$ will be a dense Zariski open subset of $\text{Spec } \mathfrak{o}_Q$, for $\mathfrak{o}_Q$ the ring of integers of a number field $Q$. Fix integers $m,c\geq 1$.
\begin{defn}\cite[Definition 1.9]{StarrXu} \label{defn-pd} \marpar{defn-pd} A \emph{parameter datum} over $S$ with a codimension $>c$ compactification is a datum $$ ((M,f_M:X_M\to M,\mathcal{L}_{M_Q}),(\overline{M},\mc{O}_{\overline{M}}(1),i)) $$ of a smooth $S$-scheme $M$ of constant relative dimension $m>1$, a proper, flat morphism $f_M$, an $f_M$-very ample invertible sheaf $\mathcal{L}_{M_Q}$ on $X_M\times_S \text{Spec}(Q)$, and a codimension $>c$ compactification $(\overline{M},\mc{O}_{\overline{M}}(1),i)$ of $M$ over $S$, i.e., $q:\overline{M}\to S$ is flat and proper, $\mc{O}_{\overline{M}}(1)$ is $q$-very ample, and $i:M\to \overline{M}$ is a dense open immersion such that $\partial \overline{M}:=\overline{M}\setminus M$ intersects every generic fiber of $q$ in a subscheme of codimension $>c$ in that fiber. \end{defn}
\noindent For every integer $r\geq 1$, the complete linear system $H^0(\overline{M}_Q,\mc{O}_{\overline{M}_Q}(r))$ induces a closed immersion into projective space over $\text{Spec }(Q)$. For the corresponding Grassmannian parameterizing flat families of linear subvarieties of this projective space of codimension $m-1$, there is a dense open subset $G_r$ parameterizing linear subvarieties whose inverse images in $\overline{M}_Q$ are smooth, irreducible curves that are entirely contained in the open subset $M_Q$.
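\noindent The codimension $>c$ hypothesis on the boundary is what guarantees that such curves avoid $\partial \overline{M}$; the dimension count is standard. Since the generic fiber $\overline{M}_Q$ has dimension $m$ and $\partial\overline{M}_Q$ has codimension $>c\geq 1$, a general linear subvariety $N$ of codimension $m-1$ satisfies, by generic transversality in characteristic $0$, $$ \dim\left(\partial\overline{M}_Q\cap N\right) \leq \dim \partial\overline{M}_Q - (m-1) \leq (m-c-1)-(m-1) = -c < 0, $$ so the curve $\overline{M}_Q\cap N$ is disjoint from $\partial\overline{M}_Q$, i.e., it is a projective curve contained in the open subset $M_Q$.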
\begin{defn}\cite[Definition 1.10]{StarrXu} \label{defn-RSCprop}
\marpar{defn-RSCprop} A parameter datum as above satisfies the \emph{RSC property} if there exists an integer $r_0$ and a sequence $(W_r)_{r\geq r_0}$ of dense open subschemes $W_r \subset G_r$ such that for every algebraically closed extension $\overline{K}$ of $Q$ and for every $\overline{K}$-point of $W_r$ parameterizing a smooth curve $C_{\overline{K}}$ in $M_{\overline{K}}$, the pullback family $f:C_{\overline{K}} \times_M X_M\to C_{\overline{K}}$ together with the pullback of $\mathcal{L}_{M_Q}$ is a rationally simply connected fibration over $C_{\overline{K}}$. \end{defn}
\noindent The results of \cite{StarrXu} are stated with respect to a \emph{finite field} $F$. However, the results hold for any RC solving field. When the field is the function field of a curve over an algebraically closed field, this is already in the proofs of \cite[Corollary 13.2 and Lemma 13.4]{dJHS}.
\begin{prop}\cite[Theorem 1.6]{StarrXu} \label{prop-concordmain} \marpar{prop-concordmain} Let $(R,\mathfrak{m}_R)$ be a Henselian DVR that is prime regular with characteristic $0$ fraction field $K$, and let $R/\mathfrak{m}_R\to L$ be a field extension such that $L$ is RC solving. Let $C_R\to \text{Spec } R$ be a generically smooth $R$-curve. Let $f_R:X_R\to C_R$ be a projective, surjective morphism, and let $\mathcal{L}_K$ be an $f_R$-ample invertible sheaf on the generic fiber $X_K =X_R\times_{\text{Spec } R} \text{Spec } K$. If $(f_{\overline{K}}:X_{\overline{K}}\to C_{\overline{K}},\mathcal{L}_{\overline{K}})$ is a rationally simply connected fibration, then there exists a sequence $(U_e)_{e\geq \epsilon}$ of dense open subschemes $U_e\subset \Pic{e}{C_K/K}$ such that $X_L/C_L$ admits rational points compatibly with $(U_e)_{e\geq
\epsilon}$. In particular, for every generic point $\eta$ of the smooth locus of $C_L=C_R\times_{\text{Spec } R} \text{Spec } L$, the $L(\eta)$-scheme $X_R\times_{C_R} \text{Spec } L(\eta)$ has an $L(\eta)$-point. \end{prop}
\begin{proof} The proof in \cite{StarrXu} applies in the RC solving case. By \cite[Theorem 13.1]{dJHS}, there exists a sequence $(Z_{R,e})_{e\geq \epsilon}$ of geometrically integral closed subschemes of the Hilbert scheme $\text{Hilb}_{X_R/R}$ whose geometric generic point parameterizes the image of a section of $X_K\to C_K$ and such that the restriction of the natural Abel map $\alpha_e:Z_{K,e}\to \Pic{e}{C_K/K}$ is surjective with geometric generic fiber an integral scheme that is rationally connected. Since $K$ has characteristic $0$, by resolution of singularities, there exists a projective morphism $\widetilde{Z}_{R,e}\to Z_{R,e}$ such that $\widetilde{Z}_{K,e}$ is $K$-smooth. Then $U_e$ is defined as the maximal open subscheme over which $\widetilde{Z}_{K,e}\to \Pic{e}{C_K/K}$ is smooth and the fibers intersect the dense open parameterizing closed subschemes of $X_K$ that are sections of $X_K\to C_K$. For every $K$-point $[\mathcal{A}]$ of $U_e$ (and these do exist for $e$ sufficiently positive and divisible using the fact that $R$ is Henselian) the closure $\widetilde{Z}_{R,\mathcal{A}}$ in $\widetilde{Z}_{R,e}$ of the fiber of $\alpha_e$ over this $K$-point is a projective, flat scheme over $\text{Spec } R$ whose geometric generic fiber is smooth, integral and separably rationally connected. Thus, since $L$ is RC solving, there exists an $L$-point of $\widetilde{Z}_{R,\mathcal{A}}\times_{\text{Spec } R} \text{Spec } L$. Using \cite[Lemma 4.1]{StarrXu}, this $L$-point gives an $L(\eta)$-point of $X_R\times_{C_R} \text{Spec } L(\eta)$. \end{proof}
\noindent The various complements to the main theorem from \cite{StarrXu} also extend.
\begin{prop} \label{prop-concord} \marpar{prop-concord} Let $S$ be a regular, integral, Noetherian scheme of dimension $\leq 1$ whose function field has characteristic $0$ and whose residue fields at closed points are either finite fields or characteristic $0$ fields. Fix a parameter datum over $S$ with a codimension $>1$ compactification. Then the following results of \cite{StarrXu} hold with the finite field $F$ replaced by any RC solving field $L$: Proposition 1.12 and Corollary 1.13. Assuming that $S$ is a dense open of the spectrum of the ring of integers of a number field, Corollary 1.14 also holds. Finally, for a parameter datum as above, for a closed point $\mathfrak{s}\in S$ and a regular local homomorphism of DVRs $\mc{O}_{S,\mathfrak{s}} \to R$, for $\pi_R:C_R\to \text{Spec } R$, for an $S$-morphism $\zeta:\text{Spec } \mc{O}_{C_R,\eta} \to \overline{M}$, and for a projective flat morphism $f_\eta:X_\eta \to \text{Spec } \mc{O}_{C_R,\eta}$ as in Proposition 1.17, there exists a modification of the parameter datum over $S$ such that $\zeta$ extends to a regular morphism $\zeta'$ to $M'$ and such that the pullback by $\zeta'$ of $X'_{M'}\to M'$ equals $f_\eta$. In particular, if the parameter datum satisfies the RSC property, for every field homomorphism from the residue field of a closed point of $S$ to $L$, for every function field $E=L(\eta)$ of a geometrically integral $L$-curve, and for every $S$-morphism $z_E:\text{Spec } E \to M$, the pullback $X_E$ of $X_M$ by $z_E$ has an $E$-rational point. \end{prop}
\begin{proof} The proof of Proposition 1.12 is essentially Corollary 3.15, which is valid over any field $F$.
\noindent The proof of Corollary 1.13 appears to use the fact that the field is finite, so that the image $z_0$ under $z:\text{Spec } L\Sem{t}\to \overline{M}$ of the closed point is a closed point of $\overline{M}$. That can certainly fail if $L$ is not a finite field. However, since the construction of the curve $C_e$ is made with respect to the base change $\overline{M}_L=\overline{M}\times_R\text{Spec } L$, the image point $z_0$ in $\overline{M}_L$ is an $L$-point, and that is a closed point since $\overline{M}_L$ is a finite type $L$-scheme. Let $X_{\overline{M}}\to \overline{M}$ be any projective model of $X_M\to M\subset \overline{M}$. By \cite[Theorem 1.10]{ArtinApprox}, for every integer $e$ there exists an integral closed curve $C_e$ in $\overline{M}_L\times_{\text{Spec } L} \mbb{P}^1_L$ that approximates to order $e$ the graph of $z:\text{Spec } L\Sem{t} \to \overline{M}_L\times_{\text{Spec } L} \mbb{P}^1_L$, considered as a formal section of $\text{pr}_2:\overline{M}_L\times_{\text{Spec } L} \mbb{P}^1_L \to \mbb{P}^1_L$ over the closed point $0\in\mbb{P}^1_L(\text{Spec } L)$. Since $z$ maps the generic point of $L\Sem{t}$ to $M$, also there exists such $C_e$ that intersects the open $M_L\times_{\text{Spec }
L}\mbb{P}^1_L$ in a dense open $C_e^o$; in fact this condition will be automatic for all $e\gg 0$. Proposition 1.12 then gives a section of the pullback $X_M\times_M C_e^o \to C_e^o$, which extends to a section of $X_{\overline{M}}\times_{\overline{M}} C_e \to C_e$ by the valuative criterion of properness. Then by \cite[Theorem 1]{Greenberg}, also $X_{\overline{M}}\times_{\overline{M}} \text{Spec } L\Sem{t} \to \text{Spec } L\Sem{t}$ has a section whose generic fiber is a section of $X_M\times_M \text{Spec } L\Semr{t} \to \text{Spec } L\Semr{t}$.
\noindent Corollary 1.14 uses \cite{DenefAxKochen}. Although the article is focused on completions at closed points of $\mbb{Z}$, resp. $\mathbb{F}_p[t]$, the proofs have no hypotheses that the rings involved should be finite over the residue field, etc. Granting the beautiful general arguments there, the key computation is the isomorphism $\tau_p:\mathcal{MR}(\mbb{Z}_p) \to \mathcal{MR}(\mathbb{F}_p\Sem{t})$. This is completely general. Let $(A,\theta A)$ and $(B, \pi B)$ be Henselian DVRs, and let $\overline{\tau}:A/\theta A\to B/\pi B$ be a field isomorphism. For every unit $u\in A\setminus \theta A$, there exists a unit $v\in B \setminus \pi B$ such that $\overline{\tau}(\overline{u})$ equals $\overline{v}$. Thus, there is a unique bijection $\tau_{\theta,\pi}:\mathcal{MR}(A)\to \mathcal{MR}(B)$ such that for every integer $n\geq 0$ and every unit $u$, $\tau_{\theta,\pi}(\text{mres}(u\theta^n))$ equals $\text{mres}(v\pi^n)$. This bijection is all that is used in the Transfer of Residues Lemma, and thus also in the Transfer of Surjectivity Theorem. In particular, this applies for $A$ equal to $L\Sem{t}$ with $\theta$ equal to $t$ and $B$ equal to a Cohen ring for $L$ with $\pi$ equal to $p$ (the ring of Witt vectors is one explicit construction of such a Cohen ring).
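\noindent Spelled out in the motivating instance $A=\mbb{Z}_p$, $\theta=p$, $B=\mathbb{F}_p\Sem{t}$, $\pi=t$, with $\overline{\tau}$ the identity of $\mathbb{F}_p$: every nonzero element of $\mbb{Z}_p$ factors as $u p^n$ with $u$ a unit, and $$ \tau_p\left(\text{mres}(u\, p^n)\right) = \text{mres}(v\, t^n), $$ where $v\in \mathbb{F}_p\Sem{t}$ is any unit whose constant term $v(0)$ equals the residue $\overline{u}\in \mathbb{F}_p$.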
\noindent The proof of Proposition 1.17 works in the same way. In order to apply the N\'{e}ron desingularization, it is important that the local homomorphism of DVRs is regular. \end{proof}
\noindent To extend \cite[Proposition 1.18]{StarrXu}, we apply Lemma \ref{lem-extend} to the parameter space of complete intersection curves in a specified parameter datum. Thus, let $S$ be the spectrum of a Dedekind domain whose fraction field $Q$ has characteristic $0$ and all of whose residue fields at closed points are finite. For a specified closed point of $S$, denote by $(\Lambda,\mathfrak{m}_\Lambda)$ the stalk of the structure sheaf at that point. Thus $(\Lambda,\mathfrak{m}_\Lambda)$ is a DVR whose residue field $\kappa$ is a finite field. Let there be specified a parameter datum over $S$. Denote by $X_{\overline{M}}\to \overline{M}$ a projective morphism (not necessarily flat) that restricts to $X_M$ over $M$.
\noindent Assume that the parameter datum satisfies the RSC property. For every integer $r\geq r_0$, denote by $C_{W_r}\subset W_r\times_{\text{Spec } Q} M_Q$ the universal closed subscheme. Denote by $C_{G_r} \subset G_{\Lambda,r}\times_{\text{Spec } \Lambda} \overline{M}$ the closure of $C_{W_r}$. Denote the restrictions to $C_{W_r}$ of the two projections as follows, $$ \rho_{W_r}:C_{W_r}\to W_r, $$ $$ \rho_{M,r}:C_{W_r} \to M_Q. $$ By hypothesis, $\rho_{W_r}:C_{W_r}\to W_r$ is smooth and projective with geometric fibers that are irreducible curves. For every integer $e$, denote by $\Pic{e}{C/W_r}\to W_r$ the relative Picard scheme of $\rho_{W_r}$ parameterizing invertible sheaves of degree $e$ on these curves. The relative degree over $W_r$ of the pullback to $C_{W_r}$ of $\mc{O}_{\overline{M}_Q}(1)$ equals $e(r) = e_0 r^{m-1}$ for an integer $e_0$. For every integer $\ell$, the pullback of $\mc{O}_{\overline{M}_Q}(\ell)$ defines a global section of $\Pic{\ell e(r)}{C/W_r}$ over $W_r$.
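\noindent As a hedged sanity check on the shape $e(r) = e_0 r^{m-1}$ (under the assumption, suggested by the construction, that the curves parameterized by $W_r$ are complete intersections of $m-1$ hypersurface sections of degree $r$ in the $m$-dimensional $\overline{M}_Q$), B\'{e}zout's theorem gives
$$ e(r) \;=\; \deg_{C}\mc{O}_{\overline{M}_Q}(1) \;=\; (rH)^{m-1}\cdot H \;=\; r^{m-1}\,(H^m), \qquad H = c_1\!\left(\mc{O}_{\overline{M}_Q}(1)\right), $$
so that $e_0$ is the degree of $\overline{M}_Q$ with respect to $\mc{O}_{\overline{M}_Q}(1)$.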
\noindent Denote by $f_{r}:X_{r}\to C_{W_r}$ the pullback by $\rho_{M,r}$ of the universal family $f_{M_Q}:X_{M_Q}\to M_Q$. Denote by $\overline{f}_r:\overline{X}_r \to C_{G_r}$ the closure of $X_r$ in $C_{G_r}\times_{\overline{M}} X_{\overline{M}}$. Denote by $\mathcal{L}_r$ the pullback of $\mathcal{L}_Q$ to $X_r$. Consider the base change by the generic point $\text{Spec }(Q(W_r))\to W_r$ of $X_r$, $C_{W_r}$ and $\mathcal{L}_r$. By the RSC hypothesis, these base changes form a rationally simply connected fibration. Thus, by \cite[Theorem 13.2]{dJHS}, there exists a sequence $(Z_{e,Q(W_r)})_{e\geq \epsilon'(r)}$ of irreducible components of the relative Hilbert scheme $\text{Hilb}^{et+1-g_r}_{X_r/W_r}\times_{W_r} \text{Spec } Q(W_r)$ satisfying the conditions listed above. For every $e\geq \epsilon'(r)$, define $Z_e$ to be the closure of $Z_{e,Q(W_r)}$ in the relative Hilbert scheme $\text{Hilb}^{et+1-g(C_Q)}_{\overline{X}_r/G_r}$. Define $\epsilon(r)$ to be the smallest multiple $\ell e(r)$ of $e(r)$ such that $\epsilon(r)\geq \epsilon'(r)$.
\begin{prop}\cite[Proposition 1.18]{StarrXu}
\label{prop-concordheight} \marpar{prop-concordheight} For every integer $r>0$, there exists an integer $\epsilon(r)>0$ with the following property. For every closed point of $S$ with residue field $\kappa$, for every field extension $L/\kappa$ such that $L$ is RC solving, for every integral curve $C_L \subset \overline{M}_L$ intersecting $M_L$, if $\text{deg}_{C_L}(\mc{O}_{\overline{M}}(1)) \leq r$, then there exists a curve $\mathcal{C}_L\subset f_{\overline{M}}^{-1}(C_L)$ with $f:\mathcal{C}_L\to C_L$ an isomorphism over the generic point and with $\text{deg}_{\mathcal{C}_L}(\mathcal{L}_{\overline{M}}) \leq \epsilon(r)$. \end{prop}
\begin{proof} Having specified $\epsilon(r)$ independent of $\kappa$, we are now free to prove the proposition after replacing $S$ by $\text{Spec } \Lambda$. Since $Q(W_r)$ has characteristic $0$, by resolution of singularities, there exists a projective morphism $\widetilde{Z}_e\to Z_e$ such that the geometric generic fiber of $\widetilde{Z}_e\to G_{\Lambda,r}$ is smooth. There is an Abel map, $$ \widetilde{\alpha}_e:\widetilde{Z}_e\times_{G_r} W_r \to \Pic{e}{C/W_r}. $$ Denote by $U_e\subset \Pic{e}{C/W_r}$ the maximal open subscheme over which $\widetilde{\alpha}_e$ is smooth and such that the fiber of $\widetilde{\alpha}_e$ over every geometric point of $U_e$ intersects the dense open subset of $\widetilde{Z}_e$ parameterizing closed subschemes that are images of sections of $f_r$. In particular, for $e$ equal to $\epsilon(r)$, define $W^o_r\subset W_r$ to be the image of the dense open $U_e$ under the smooth morphism $\Pic{e}{C/W_r}\to W_r$.
\noindent By \cite[Corollary 3.15]{StarrXu}, the curve $C_L$ is an irreducible component of multiplicity $1$ in a curve in $\overline{M}_L$ parameterized by a $\Lambda$-morphism $z:\text{Spec } L \to G_{\kappa,r}$. By Lemma \ref{lem-extend} applied to the parameter space $G_{\Lambda,r}\to \text{Spec } \Lambda$ and the dense open subscheme $W^o_r$ of the generic fiber $G_{Q,r}$, for $d=0$ and $E=F=L$, there exists an integral extension of $z$, say $$ ((B\to \text{Spec } \Lambda, z_B:B\to G_r),\psi_B:\text{Spec } L \to B_0). $$ Since $\Lambda$ is Henselian, $B$ has geometrically integral closed and generic fibers over $\text{Spec } \Lambda$.
\noindent Define $R$ to be the Henselization of the DVR $\mc{O}$ obtained as the stalk of the structure sheaf of $B$ at the generic point of $B_0$. Then $\Lambda \to R$ is a regular local homomorphism, $z_B$ induces a $\Lambda$-morphism, $z_R:\text{Spec } R \to G_r$, mapping the generic point into $W^o_r$, and there exists a $\Lambda$-morphism $\psi_R:\text{Spec } L\to \text{Spec } R$ whose composition with $z_R$ equals $z$. By construction, for $e=\epsilon(r) = \ell e(r)$, the relative Picard scheme $\Pic{e}{C/W_r}$ has a section over $W_r$ coming from $\mc{O}_{\overline{M}}(\ell)$. Thus the pullback by $z_R$ has a section over $\text{Spec } \text{Frac}(R)$. Since $R$ is Henselian, and since the relative Picard scheme is smooth, there exists a section $[\mathcal{A}]$ over $R$ that maps into the open subscheme $U_e$.
\noindent Define $\widetilde{Z}_{R,\mathcal{A}}$ to be the closure in $\widetilde{Z}_e\times_{G_r} \text{Spec } R$ of the fiber of $\widetilde{\alpha}_e$ over $[\mathcal{A}]$. Then $\widetilde{Z}_{R,\mathcal{A}}\to \text{Spec } R$ is projective and flat. By construction, the geometric generic fiber is separably rationally connected. Since $R$ is a regular Henselian DVR, and since $L$ is RC solving, there exists an $L$-point of $\widetilde{Z}_{R,\mathcal{A}}\times_{\text{Spec } R} \text{Spec } L$. By \cite[Proposition 4.1]{StarrXu}, the image of this $L$-point in the Hilbert scheme $\text{Hilb}^{et+1-g(C_Q)}_{\overline{X}_r/G_r}\times_{G_r} \text{Spec } L$ parameterizes a closed subscheme of $\overline{X}_r\times_{G_r}\text{Spec } L$ whose restriction over a dense open $C_L^o$ in $C_L$ is the image of a rational section of $X_M\times_M C_L^o \to C_L^o$. By construction, this closed subscheme has total degree $\epsilon(r)$. Thus, the closed image of the rational section has degree $\leq \epsilon(r)$. \end{proof}
\section{Perfect PAC fields and Global Function Fields} \label{sec-PACglob} \marpar{sec-PACglob}
\noindent Let $S$ be $\text{Spec } \Lambda$, where $\Lambda$ is a Henselian DVR with finite residue field $\kappa$. Fix a parameter datum over $S$ with a codimension $>1$ compactification. A \emph{global function field over $M$} is a function field over $M$, $$ (\text{Spec }(R)\to S, E/F,z:\text{Spec } E\to M), $$ such that the residue field extension $\kappa \to F= R/\mathfrak{m}_R$ is finite, i.e., $F$ is a finite field. Thus, the field $E$ -- a function field of a geometrically integral $F$-curve -- is a global function field.
\begin{defn} \label{defn-global} \marpar{defn-global} The parameter datum has \emph{rational points over global function
fields} if for every global function field over $M$ as above, the base change $X_E = \text{Spec } E\times_M X_M$ has an $E$-point. \end{defn}
\begin{prop} \label{prop-integral} \marpar{prop-integral} If a parameter datum has rational points over global function fields, then for every irreducible closed subset of the closed fiber $B\subset M_0$ with pullback family $X_B=B\times_M X_M$, there exists a PAC section of $X_B\to B$. Thus, for every field extension $\kappa \to L$, for every $S$-morphism $\text{Spec } L\to M_0$, the pullback family $X_L = \text{Spec } L\times_M X_M$ has a PAC section over $\text{Spec } L$. In particular, if $L$ is a perfect PAC field, then $X_L$ has an $L$-point. \end{prop}
\begin{proof} First consider the case that $B$ has dimension $0$, i.e., as a $\kappa$-scheme this equals $\text{Spec } F$ for a finite field extension $F/\kappa$. Define $E$ to be $F(t)$, the function field of $\mbb{P}^1_F$. Define $z:\text{Spec } E \to M$ to be the composition of $\text{Spec } E\to \text{Spec } F$ and the specified closed point $\text{Spec } F \to M$. Since the parameter datum has rational points over global function fields, there exists a lifting of $z$ to an $F$-morphism $s:\text{Spec } E\to X_B$. The closure $Y$ of the image of this morphism is the image of an $F$-morphism from $\mbb{P}^1_F$, and hence $Y$ is geometrically irreducible over $\text{Spec } F$. Thus $Y$ is a PAC section of $X_B \to B$.
\noindent Thus, without loss of generality, assume that $B$ has dimension $m\geq 1$. Define $F$ to be the algebraic closure of $\kappa$ in the fraction field of $B$. Thus $B$ is a geometrically integral $F$-scheme. Let $u:B\to \mbb{P}^N_F$ be a generically unramified, finite type morphism. Set $c$ equal to $m-1$, and set $r$ equal to $N-c$. Thus, $\text{Grass}_F(\mbb{P}^r,\mbb{P}^N)$ parameterizes linear subspaces of codimension $c=m-1$, so that the inverse image under $u$ has dimension $\geq 1$.
\noindent By way of contradiction, assume that $f_B:X_B\to B$ has no PAC section. Then by Corollary \ref{cor-ffBertini}, there exists a finite Galois extension $F'/F$ and a dense open subset $U\subset \text{Grass}_F(\mbb{P}^r,\mbb{P}^N)$ such that for every field extension $K/F$ that is linearly disjoint from $F'/F$, for every $[L]\in U(\text{Spec } K)$, the curve $C=B\times_{\mbb{P}^N_F} L$ is a geometrically integral curve over $K$ and for $X_C := X_B \times_B C$, the projection morphism $f_C:X_C \to C$ has no PAC section. By elementary considerations of the intersection of $U$ with any of the open affine spaces in $\text{Grass}_F(\mbb{P}^r,\mbb{P}^N)$ forming an open Bruhat cell, or by the more refined Lang-Weil estimates, there exists an integer $d_0$ such that for every finite field extension $K/F$ of degree $\geq d_0$, $U(K)$ is nonempty. In particular, there exists such an extension of the finite field $F$ whose degree is prime to $d=[F':F]$, e.g., of degree $dd_0+1$. Since $X_C\to C$ has no PAC section, it also has no rational section. This contradicts the hypothesis that the parameter datum has rational points over all global function fields. This contradiction implies that $X_B\to B$ does have a PAC section.
\noindent Now for a field extension $\kappa \to L$ and a $S$-morphism $z:\text{Spec } L \to M_0$, define $B\subset M_0$ to be the closure of the image of $z$. Then $B$ is an integral closed subscheme of $M_0$. By the argument above, $X_B\to B$ has a PAC section. The base change of this PAC section by the dominant morphism $\text{Spec } L\to B$ is a PAC section of $X_L\to \text{Spec } L$. \end{proof}
\noindent By \cite[Proposition 1.12]{StarrXu}, if the parameter datum satisfies the RSC property, then the parameter datum has rational points over global function fields.
\begin{cor} \label{cor-integral} \marpar{cor-integral} For every parameter datum that satisfies the RSC property, the parameter datum has rational points over global function fields. Thus, for every extension field $\kappa \to L$ for which $L$ is a perfect PAC field, for every $S$-morphism $z:\text{Spec } L \to M$, $X_L$ has an $L$-point. \end{cor}
\section{The Grothendieck-Serre Conjecture and Serre's ``Conjecture
II'' in Positive Characteristic} \label{sec-GrothSerre} \marpar{sec-GrothSerre}
\noindent The proof of Serre's ``Conjecture II'' for the function field $k(S)$ of a surface over an algebraically closed field $k$ of arbitrary characteristic was completed in \cite[Theorem 1.5]{dJHS}. Since this is crucial for establishing Serre's ``Conjecture II'' for function fields of curves over perfect PAC fields that are nice, we briefly recall the proof and clarify one point.
\noindent The main step there is the proof of Serre's ``Conjecture II'' for those semisimple, connected, and simply connected groups over $k(S)$ that are of the form $G_0\times_{\text{Spec } k} \text{Spec } k(S)$, with $G_0$ a semisimple, connected, and simply connected group over $k$. Such a group is necessarily split, since $k$ is algebraically closed. In particular, since the automorphism group of the split group $G_0$ of type $E_8$ is precisely $G_0$ acting by conjugation, it follows that every group $G$ over $k(S)$ of type $E_8$ is isomorphic to $G_0\times_{\text{Spec } k} \text{Spec } k(S)$. Thus, the split case of Serre's ``Conjecture II'' implies the $E_8$ case of Serre's ``Conjecture II''. Combined with tremendous earlier work on Serre's ``Conjecture II'', the $E_8$ case of Serre's ``Conjecture II'' settles the full case of Serre's ``Conjecture II'' over $k(S)$, when $k$ has \emph{characteristic $0$}, cf. \cite[Theorem 1.2(v)]{CTGP}.
\noindent In fact, the proof in characteristic $0$ implies the proof in arbitrary characteristic via the type of lifting results from \cite{dJS8} and further explored in \cite{StarrXu} and this article.
\begin{thm} \label{thm-redcharp} \marpar{thm-redcharp} The characteristic $0$ case of Serre's ``Conjecture II'' for function fields of surfaces implies the characteristic $p$ version. Precisely, for every algebraically closed field $F$ of characteristic $p$, for every function field $E/F$ of a geometrically integral $F$-scheme of dimension $2$, for every semisimple algebraic group $G_E$ over $\text{Spec } E$ that is connected and simply connected, for every torsor $\mathcal{T}_E$ over $\text{Spec } E$ for $G_E$, there exists an $E$-point of $\mathcal{T}_E$. \end{thm}
\begin{proof} Let $(\Lambda,\mathfrak{m}_\Lambda)$ be a Henselian DVR whose residue field $\kappa$ is a finite field of characteristic $p$ and whose fraction field $Q$ has characteristic $0$. Let $H_0$ be a semisimple group that is connected and split. Let $\kappa \to F$ be a field extension (not yet assumed algebraically closed), let $E/F$ be the function field of a geometrically integral $F$-scheme of dimension $d$ (not yet assumed equal to $2$), let $G_E$ be a linear algebraic group over $E$ such that $G_E\times_{\text{Spec } E}\text{Spec } \overline{E}$ is isomorphic to $H_0\times_{\text{Spec } \kappa} \text{Spec } \overline{E}$ as a group scheme over $\overline{E}$, and let $\mathcal{T}_E$ be a $G_E$-torsor over $\text{Spec } E$. By the proof of \cite[Corollary 1.22]{StarrXu} and Lemma \ref{lem-extend}, there exists an integral model, $$ (B\to \text{Spec } \Lambda, \pi^o_B:C^o_B\to B,(G_{C}\to C^o_B,\mathcal{T}_{C}\to C^o_B)), $$ and a pair $$ (\psi_B:\text{Frac}(B_0)\to F,\psi_E:F(C^o_F)\to E) $$ as in Definition \ref{defn-int}, and where $G_{C}\to C^o_B$, resp. $\mathcal{T}_{C}\to C^o_B$, is a semisimple group scheme, resp. is a torsor for this group scheme, such that the base change by $\psi_E$ is $(G_E,\mathcal{T}_E)$. The choice of parameter datum $M$ for this integral extension involves the automorphism group scheme of $H_0$ and is fully explored in \cite[Proof of Corollary 1.22]{StarrXu}.
\noindent Now assume that $d$ equals $2$. The function field $Q(B)$ has characteristic $0$, so by the characteristic $0$ case of Serre's ``Conjecture II'', there exists a finite extension $Q(B')/Q(B)$ such that after base change by this extension, the torsor is trivial. After replacing $B$ by a dense Zariski open whose complement has codimension $\geq 2$, which thus has nonempty intersection with the closed fiber $B_0$, there exists a finite, flat morphism $B'\to B$ such that $B'$ is integral and the associated extension of fraction fields is the extension $Q(B')/Q(B)$ above. It may well happen that $B'\times_{\text{Spec } \Lambda} \text{Spec } \kappa$ is reducible. Choose one irreducible component $B'_0$, and replace $B'$ by the open complement of the remaining irreducible components. Then the residue field $\kappa(B'_0)$ is a finite algebraic extension of $\kappa(B_0)$.
\noindent Assume now that $F$ is algebraically closed. Then there is a factorization of $\psi_B:\text{Spec } F\to \text{Spec } \kappa(B_0)$ through $\psi_{B'}:\text{Spec } F \to \text{Spec } \kappa(B'_0)$. Thus, up to replacing $B$ by $B'$, assume that the generic fiber of $\mathcal{T}_{C}\to C^o_B$ is a trivial torsor over the fraction field $Q(C^o_B)$. Denote by $\mc{O}$ the DVR that is the stalk of the structure sheaf of $C^o_B$ at the generic point of the closed fiber $C^o_0=C^o_B\times_{\text{Spec } \Lambda} \text{Spec } \kappa$. Then the pullbacks of $G_{C}$ and $\mathcal{T}_{C}$ over $\text{Spec } \mc{O}$ form a semisimple group scheme and a torsor for that group scheme over a DVR. By construction, this torsor is trivial when restricted to the fraction field $Q(C^o_B)$ of $\mc{O}$. Thus, by Nisnevich's solution of the Grothendieck-Serre Conjecture over DVRs, \cite{Nisnevich}, the restriction of the torsor over the residue field of $\mc{O}$ is also trivial. Taking the further base change by $\psi_E$, it follows that the original torsor $\mathcal{T}_E$ over $\text{Spec } E$ is trivial. \end{proof}
\section{Proofs of Theorems \ref{thm-SerreIIpac}, \ref{thm-PeriodIndexpac}, and \ref{thm-C2nicepac}} \label{sec-proofs} \marpar{sec-proofs}
\noindent By Theorem \ref{thm-HX} and Theorem \ref{thm-StarrAx}, every perfect PAC field $L$ that is nice is RC solving. Thus, by Proposition \ref{prop-concord}, for every regular, integral Noetherian scheme $S$ of dimension $\leq 1$ whose function field has characteristic $0$, for every parameter datum over $S$ with a codimension $>1$ compactification that satisfies the RSC property, for every function field $E=L(\eta)$ of a geometrically integral $L$-curve, for every $S$-morphism $z_E:\text{Spec } E\to M$, the pullback $X_E$ of the universal family $X_M\to M$ by $z_E$ has an $E$-rational point.
\noindent \textbf{The $C_2$ Property and the Period-Index Theorem.} By \cite[Proposition 1.19]{SPAC}, there is a parameter datum as above whose universal family is the family of $2$-Fano complete intersections in projective space, resp. the family of minimal homogeneous varieties. Thus, every function field $L(\eta)$ of a geometrically integral curve over a perfect PAC field that is nice is $C_2$. Examples of minimal homogeneous spaces are the generalized Severi-Brauer varieties, i.e., smooth projective $E$-schemes $X_E$ such that $X_E\times_{\text{Spec } E}\text{Spec } \overline{E}$ is isomorphic to a (standard $A_n$-type) Grassmannian $\text{Grass}_{\overline{E}}(r,\overline{E}^n)$ whose associated Isom torsor reduces to the group of inner automorphisms (there are outer automorphisms only if $n$ equals $2r$, $r>1$). Thus, each generalized Severi-Brauer variety $X_E$ has an $E$-point if (and only if) it has vanishing elementary obstruction, i.e., if there exists an invertible sheaf $\mathcal{L}_E$ on $X_E$ whose base change to $X_E\times_{\text{Spec } E} \text{Spec } \overline{E}$ generates the Picard group. As explained in \cite[Theorem 11.1]{SStrsbg} (based on joint work with de Jong from \cite{dJS8}), this implies that Period equals Index for Severi-Brauer varieties over $E$.
\noindent \textbf{The Split Case of Serre's ``Conjecture II''. Full Serre's
``Conjecture II'' in Characteristic $0$.} Next, as explained in the proof of \cite[Theorem 1.4]{dJHS}, existence of $E$-rational points on minimal homogeneous varieties implies Serre's ``Conjecture II'' for torsors over $E$ for semisimple groups that are connected, simply connected, and \emph{split}. As explained in the proof of Theorem \ref{thm-redcharp}, when $E$ is of characteristic $0$, this implies the full Serre's ``Conjecture II''. Thus, it only remains to prove Serre's ``Conjecture II'' in case $L$ has characteristic $p$. Since $L$ is nice, $L$ contains the algebraic closure $\overline{\kappa}$ of the prime field.
\noindent \textbf{Full Serre's ``Conjecture II'' in Positive Characteristic.} By the proof of Theorem \ref{thm-redcharp}, for every semisimple algebraic group $G_E$ over $E$ that is connected and simply connected, and for every $G_E$-torsor $\mathcal{T}_E$, there exists an integral model. In particular, the closed fiber of this integral model gives a smooth, quasi-projective $\kappa$-scheme $B_0$ that is integral (but typically not geometrically irreducible), a quasi-projective, smooth morphism $C_0^o\to B_0$ whose geometric fibers are irreducible curves, a semisimple group scheme $G_{C,0}\to C_0^o$ whose geometric fibers are connected and simply connected, and a torsor $\mathcal{T}_{C,0}\to C_0^o$ under $G_{C,0}$. Moreover, there is a homomorphism of $\kappa$-extensions $\psi_B:\kappa(B_0) \to L$ and an associated isomorphism of $F$-extensions $\psi_E:F(C_F^o)\to E$ such that the base changes by $\psi_E$ of $G_{C,0}$, resp. $\mathcal{T}_{C,0}$ equal $G_E$, resp. $\mathcal{T}_E$.
\noindent Since $B_0$ is a quasi-projective, smooth, irreducible $\kappa$-scheme, $\kappa(B_0)/\kappa$ is a finitely generated field extension. Thus the algebraic closure $\kappa'$ of $\kappa$ in $\kappa(B_0)$ is a finite extension of $\kappa$, and so it is again a finite field. Note that $B_0$ is geometrically integral over $\kappa'$. Via the morphisms to $B_0$, the schemes $C_0^o$, $G_{C,0}$ and $\mathcal{T}_{C,0}$ are all quasi-projective $\kappa'$-schemes.
\noindent Denote by $C_0$ a projective compactification of $C_0^o$. Up to replacing $B_0$ by a dense open subscheme, assume that $C_0\to B_0$ is projective and flat. Also, up to normalizing (all schemes are of finite type over $\kappa'$, so normalization is finite), assume that $C_0$ is normal. Then the non-regular locus has codimension $\geq 2$. Since $C_0$ has relative dimension $1$ over $B_0$, the image of the non-regular locus in $B_0$ is a closed subset of codimension $\geq 1$. Up to shrinking $B_0$ once more, assume that $C_0$ is regular.
\noindent Denote by $\overline{\mathcal{T}}_{C,0}$ a projective compactification of $\mathcal{T}_{C,0}$. As above, the non-flat locus of $\overline{\mathcal{T}}_{C,0}\to C_0$ has codimension $\geq 2$ in $C_0$. The image of this subset in $B_0$ is a closed subset of codimension $\geq 1$. Thus, up to shrinking $B_0$ once more, assume that $\overline{\mathcal{T}}_{C,0} \to C_0$ is projective and flat.
\noindent Inside the relative Hilbert scheme $\text{Hilb}_{\overline{\mathcal{T}}_{C,0}/B_0}$, there is a locally closed subscheme $\text{Sec}$ such that for every $B_0$-scheme $T$, the $T$-points of $\text{Sec}$ are precisely those closed subschemes $\overline{Z}\subset T\times_{B_0} \overline{\mathcal{T}}_{C,0}$ satisfying the following condition: for the intersection $Z$ of $\overline{Z}$ with the open subscheme $T\times_{B_0} \mathcal{T}_{C,0}$, and for the projection $Z\to T\times_{B_0} C_0$, the maximal open subset of $T\times_{B_0}C_0$ over which this projection is an isomorphism surjects onto $T$. Denote by $Z^o$ the inverse image in $Z$ of this open subset, so that $Z^o\to T\times_{B_0} C_0$ is an open immersion. Thus, $Z^o$ defines a rational section of the base change morphism, $$ \mathcal{T}_{C,0}\times_{B_0}\text{Sec} \to C_0\times_{B_0} \text{Sec}. $$ Of course the scheme $\text{Sec}$ has countably many connected components $(\text{Sec}_i)_{i\in I}$, indexed by the countably many possible Hilbert polynomials of $Z$.
\noindent The claim is that there exists $i\in I$ such that the morphism $$ \phi_i:\text{Sec}_i\times_{\text{Spec } \kappa'} \text{Spec } \overline{\kappa}' \to B_0\times_{\text{Spec }
\kappa'} \text{Spec } \overline{\kappa}', $$ has a PAC section. Assuming the claim, since $L$ is assumed to contain $\overline{\kappa}'$, the base change $\text{Spec } L \times_{B_0} \text{Sec}$ has an $L$-point. For this $L$-point, the rational section $Z^o$ defines an $E$-point of $\mathcal{T}_E$. Thus, it suffices to prove the claim.
\noindent If $B_0$ has dimension $0$, then the claim is true. Indeed, $B_0\to \text{Spec } \kappa'$ is an isomorphism, and $C_0$ is a geometrically integral curve over the finite field $\kappa'$. By Steinberg's Theorem, \cite{Steinberg}, or by \cite{dJS}, after base change to $\text{Spec } \overline{\kappa}'$, the pullback of the torsor $\mathcal{T}_{C,0}$ over this curve has a rational section. Thus, assume that $B_0$ has dimension $m\geq 1$.
\noindent The existence or non-existence of a PAC section of $\phi_i$ is preserved by base change from the algebraically closed field $\overline{\kappa}'$ by any extension $\overline{\kappa}'\hookrightarrow k$ with $k$ an algebraically closed field. Let $u:B_0\to \mbb{P}^N_{\kappa'}$ be a generically unramified, finite type morphism. Define $c$ to be $m-1$, and define $r$ to be $N-c$. Define $k$ to be the algebraic closure of the function field of $\text{Grass}_{\kappa'}(\mbb{P}^r,\mbb{P}^N)$. Define $[H]\in \text{Grass}_{\kappa'}(\mbb{P}^r,\mbb{P}^N)(\text{Spec } k)$ to be the $k$-point corresponding to the universal linear space. By the proof of Theorem \ref{thm-ffBertini}, the corresponding curve $B_H=B_0\times_{\mbb{P}^N_{\kappa'}} H$ is a smooth, irreducible, quasi-projective curve over $k$.
\noindent The base change of $C_0\to B_0$ by $B_H\to B_0$ gives a projective, flat, generically smooth morphism $C_H\to B_H$ with geometrically irreducible fibers. Since $B_H$ is itself a smooth, irreducible, quasi-projective curve over the algebraically closed field $k$, $C_H$ is a generically smooth, irreducible, quasi-projective surface over $k$. The pullback of $\mathcal{T}_{C,0}$ over the surface $C_H$ is a torsor for a semisimple, connected, and simply connected algebraic group over $C_H$. By Theorem \ref{thm-redcharp}, there exists a rational section of this torsor. The closure of this section in the pullback of $\overline{\mathcal{T}}_{C,0}$ defines a closed subscheme $Z$. The projection $Z\to B_H$ is flat, since $B_H$ is a smooth curve. Over a dense open subset of $B_H$, this closed subscheme defines a section of the restriction of $\phi_i$, for $i$ equal to the Hilbert polynomial of the fibers of $Z\to B_H$.
\noindent Note that $\overline{\kappa}'$ is already algebraically closed, so every finite Galois extension of it is trivial, thus automatically linearly disjoint from every field extension $K/\overline{\kappa}'$. For this choice of $i$, if there is no PAC section of $\phi_i$, then by Corollary \ref{cor-ffBertini}, there exists a dense, Zariski open subset $U_i \subset \text{Grass}_{\overline{\kappa}'}(\mbb{P}^r,\mbb{P}^N)$ such that for every field extension $K/\overline{\kappa}'$ and for every $[H']\in U_i(\text{Spec } K)$, there is no PAC section of the restriction of $\phi_i$ over the curve $B_0\times_{\mbb{P}^N_{\kappa'}} H'.$ For $K$ equal to $k$ and for $[H']$ equal to $[H]$, there is a PAC section of the restriction of $\phi_i$ over $B_0\times_{\mbb{P}^N_{\kappa'}} H$. Therefore, there does exist a PAC section of $\phi_i$. Thus, the torsor $\mathcal{T}_E$ has an $E$-point.
\noindent \textbf{Low Degree Complete Intersections in Grassmannians.} By \cite[Proposition 1.19]{StarrXu}, there exists a parameter datum $M$ with a codimension $>1$ compactification for pairs $(X,\mathcal{L})$ of polarized schemes whose base change to an algebraic closure is isomorphic to $\text{Grass}(r,K^{\oplus n})$ with its Pl\"{u}cker invertible sheaf. Denote by $P\to M$ the projective bundle parameterizing $1$-dimensional subspaces of $H^0(X,\mathcal{L}^{\otimes
d})$. This admits a codimension $>1$ compactification $P\hookrightarrow \overline{P}$ by the same GIT construction used to construct the codimension $>1$ compactification of $M$. Inside $X_M\times_M P$, define $Y_P$ to be the closed subscheme that is the zero scheme of the corresponding global section of $\mathcal{L}^{\otimes d}$. The projection $Y_P\to X_M$ is a projective bundle. Thus $Y_P$ is smooth over $S$. Thus, by \cite[Lemma 4.4]{StarrXu}, the datum satisfies the first hypothesis of the RSC property. The inequality $(3r-1)d^2-d<n-4r-1$ implies the inequality $r(n-r) \geq 3$. Thus, by \cite[Corollary 4.6]{StarrXu}, the datum satisfies the second hypothesis of the RSC property. The third hypothesis of the RSC property follows by construction: since $\mathcal{L}$ is relatively ample for $X_M\to M$, it is also relatively ample for $X_M\times_M P \to P$, and thus its restriction to $Y_P$ is relatively ample for $Y_P\to P$. Finally, hypotheses four, five, and six are verified in the PhD thesis of Robert Findley, \cite{Findley}.
\section{Proof of Theorem \ref{thm-AKnicepac}} \label{sec-proofsAK} \marpar{sec-proofsAK}
\noindent Let $(R,\mathfrak{m}_R)$ be a Henselian DVR with residue field $k$ (no hypothesis yet on $k$) and with fraction field $K$. Assume that $R$ is excellent, i.e., $\text{Frac}(\widehat{R})/K$ is a separable extension. This holds automatically if $K$ has characteristic $0$ or if $R$ is complete.
\begin{lem} \label{lem-CDtrans} \marpar{lem-CDtrans} If the residue field $k$ is separably closed, then $K$ has cohomological dimension $\leq 1$. If $k$ is algebraically closed, then $K$ has dimension $\leq 1$, and it is even $C_1$. If the cohomological dimension of $k$ is $\leq 1$, then the cohomological dimension of $K$ is $\leq 2$ under either of the following conditions: if $K$ has characteristic $0$ or if $R$ is complete. If $k$ is perfect of dimension $\leq 1$, then for every finite extension $K'/K$, for every Severi-Brauer variety over $K'$, the Period equals the Index. \end{lem}
\begin{proof} Let $K'/K$ be any finite algebraic extension. Since $R$ is excellent, the integral closure $R'$ of $R$ in $K'$ is a finite $R$-module. Moreover, $R'$ is a semilocal ring whose localization at any maximal ideal is a DVR. The residue fields of $R'$ are finite extensions of $k$. The above hypotheses on $k$ are each preserved under finite field extension. Thus, the localizations of $R'$ satisfy the same (respective) hypotheses as $R$. Thus, every argument below for $K$ also applies to $K'$.
\noindent If $K$ is complete, and if $k$ has cohomological dimension $\leq 0$, resp. $\leq 1$, then $K$ has cohomological dimension $\leq 1$, resp. $\leq 2$, \cite[Proposition II.12, p. 85]{GalCoh}.
\noindent Next consider the case that $K$ is not complete. For every prime $\ell$ different from the characteristic of $K$, $\text{cd}_\ell(K)\leq 1$ if and only if $\text{Br}(K)[\ell]=\{0\}$, \cite[Proposition II.4, p. 76]{GalCoh}. Given a Severi-Brauer variety $X_K\to \text{Spec } K$ whose period equals $\ell$, there is a projective, flat model $X_R\to \text{Spec } R$ (typically ramified over the closed point). If $\text{Frac}(\widehat{R})$ has cohomological dimension $\leq 1$, then $X_R\times_{\text{Spec } R} \text{Spec } \widehat{R}$ has an $\widehat{R}$-point. Then by approximation \cite[Theorem 1]{Greenberg}, also $X_R$ has an $R$-point. If $K$ has characteristic $p$, then the $p$-cohomological dimension of $K$ is $\leq 1$, \cite[Proposition II.3, p. 75]{GalCoh}. Thus, if $\text{Frac}(\widehat{R})$ has cohomological dimension $\leq 1$, then also $K$ has cohomological dimension $\leq 1$. Please note: this argument does not say anything about $\text{Br}(K)[p]$.
\noindent By \cite{Lang52}, if $R$ is complete and $k$ is algebraically closed, then $K$ is a $C_1$-field. Again applying \cite[Theorem 1]{Greenberg}, this also holds if $R$ is excellent but not necessarily complete.
\noindent Next assume that $K$ has characteristic $0$ and that $k$ has cohomological dimension $\leq 1$. Then $\text{Frac}(\widehat{R})$ has cohomological dimension $\leq 2$. By the Merkurjev-Suslin Theorem, \cite[Corollary 24.9]{Suslin84}, the field $K$, resp. $\text{Frac}(\widehat{R})$, has cohomological dimension $\leq 2$ if and only if, for every central simple algebra over $K$, resp. over $\text{Frac}(\widehat{R})$, the reduced norm is surjective. For a central simple algebra $A_K$ over $K$, this extends to a finite, flat $R$-algebra $A_R$ (possibly ramified at the closed point). For every $r\in R$, the equation $\text{Nrd}_{A_R/R}(x) = r$ has a solution in $\widehat{R}$, since $\text{Frac}(\widehat{R})$ has cohomological dimension $\leq 2$. Thus, again applying \cite[Theorem 1]{Greenberg}, there is also a solution over $R$. Thus $K$ has cohomological dimension $\leq 2$.
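\noindent To illustrate the reduced norm condition in the simplest case (an illustration, not part of the argument): for a quaternion algebra $A_K=(a,b)_K$ with $K$-basis $1,i,j,ij$, where $i^2=a$, $j^2=b$, and $ij=-ji$, the reduced norm is the quadratic form
$$ \text{Nrd}_{A_K/K}(x_0 + x_1 i + x_2 j + x_3 ij) \;=\; x_0^2 - a\,x_1^2 - b\,x_2^2 + ab\,x_3^2, $$
so surjectivity of the reduced norm is a concrete statement about the values of explicit homogeneous forms of degree $\deg(A_K)$ over $K$.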
\noindent Finally, assume that $k$ is perfect. Since $R$ is Henselian, for every $n\geq 0$, the pullback map $$ H^n_{\text{\'{e}t}}(\text{Spec } k,\mathbb{G}_m) \to H^n_{\text{\'{e}t}}(\text{Spec } R,\mathbb{G}_m) $$ is an isomorphism. By \cite[Proposition 2.1, p. 93]{BrauerIII}, there is a restriction exact sequence $$ \begin{CD} 0 @>>> \text{Br}(k) @>>> \text{Br}(K) @>>> H^1_{\text{\'{e}t}}(\text{Spec } k,\mbb{Q}/\mbb{Z}) @>>> H^3_{\text{\'{e}t}}(\text{Spec } k,\mathbb{G}_m) \dots \end{CD} $$ If $k$ has cohomological dimension $\leq 1$, this gives an isomorphism, $$ \text{Br}(K)\xrightarrow{\cong} H^1_{\text{\'{e}t}}(\text{Spec } k,\mbb{Q}/\mbb{Z}). $$ Via the short exact sequence of Abelian groups, $$ \begin{CD} 0 @>>> (1/d)\mbb{Z}/\mbb{Z} @>>> \mbb{Q}/\mbb{Z} @> d >> \mbb{Q}/\mbb{Z} @>>> 0, \end{CD} $$ the $d$-torsion in the Brauer group is identified with $H^1_{\text{\'{e}t}}(\text{Spec } k,(1/d)\mbb{Z}/\mbb{Z}).$ Thus, for every element $\alpha$ of order $d$ in the Brauer group, the associated torsor over $k$ gives a Galois field extension $k'/k$ with cyclic Galois group of order $d$. There is an associated \'{e}tale extension $R\to R'$ of degree $d$ that is also a local homomorphism of DVRs. The pullback of $\alpha$ to $\text{Spec } k'$ is the zero class. Thus, comparing exact sequences for $R$ and $R'$, the pullback of $\alpha$ to $R'$ is the zero class. Thus, the index of $\alpha$ equals $d$. \end{proof}
\noindent Let $S$ be a dense open subscheme of the spectrum of the ring of integers of a number field. For every parameter datum over $S$, by the extension Proposition \ref{prop-concord} of \cite[Corollary 1.4]{StarrXu}, which in turn relies on \cite[Section 7]{DenefAxKochen}, there exists an integer $p_0$ such that for every closed point $\text{Spec } \kappa \to S$ of characteristic $p\geq p_0$, for every field extension $\kappa\hookrightarrow L$ with $L$ a perfect PAC field that is nice, for every pair of Henselian DVRs over $S$, $\text{Spec } A\to S$ with $\mathfrak{m}_A = \theta A$, resp. $\text{Spec } R\to S$ with $\mathfrak{m}_R = \pi R$, each having isomorphic residue field extensions of $\kappa$, $\overline{\tau}:A/\mathfrak{m}_A\xrightarrow{\cong} R/\mathfrak{m}_R$, both equal to $\kappa \hookrightarrow L$, for every $S$-morphism $z_A: \text{Spec } A\to M$, the pullback $z_A^*X_M$ has an $A$-point if and only if for every $S$-morphism $z_R:\text{Spec } R\to M$, the pullback $z_R^*X_M$ has an $R$-point. This uses the following isomorphism of commutative monoids under multiplication $\mathcal{MR}(A) = A/(1+\mathfrak{m}_A)$, resp. $\mathcal{MR}(R)=R/(1+\mathfrak{m}_R)$, given by $$ \tau_{\theta,\pi}:\mathcal{MR}(A)\to \mathcal{MR}(R), \ \ \tau_{\theta,\pi}(\text{mres}(u\theta^n)) = \text{mres}(v\pi^n), $$ where $u\in A\setminus \mathfrak{m}_A$, resp. $v\in R\setminus\mathfrak{m}_R$, are units such that $\overline{\tau}(\overline{u})$ equals $\overline{v}$ as elements in $L$.
\noindent Assume now that the parameter datum has a codimension $>1$ compactification and has the RSC property. By the extension of \cite[Corollary 1.13]{StarrXu}, for $A=L\Sem{t}$, for every morphism $z_A:\text{Spec } L\Sem{t}\to M$, $z_A^*X_M$ does have an $A$-point. Thus, for every Henselian DVR $R$ over $S$ with residue field extension $\kappa\hookrightarrow L$, for every $S$-morphism $z_R:\text{Spec } R\to M$, also $z_R^*X_M$ has an $R$-point.
\noindent Applying this to the parameter datum from the previous section for complete intersections in projective space, resp. hypersurfaces in Grassmannians (both of which are defined over $\text{Spec } \mbb{Z}$), for every $(n;d_1,\dots,d_c)$ with $d_1^2+\dots + d_c^2 < n-1$, resp. for every $(n,r,d)$ with $(3r-1)d^2 - d < n-4r-1$, there exists an integer $p_0$ such that for every $p\geq p_0$, for every Henselian DVR $R$ with residue field $L$ a perfect PAC field of characteristic $p$ that contains a primitive root of unity of order $n$ for every integer $n$ prime to $p$, there is a $K$-point of every $(X_K,\mathcal{L}_K)$ over $\text{Spec } K$ whose base change to $\overline{K}$ is the common zero scheme in $\mbb{P}^{n-1}_{\overline{K}}$ of hypersurfaces of degrees $(d_1,\dots,d_c)$ together with the restriction of $\mc{O}_{\mbb{P}^{n-1}}(1)$, resp. the base change is isomorphic to a degree $d$ hypersurface in $\text{Grass}_{\overline{K}}(r,\overline{K}^{\oplus n})$ together with its Pl\"{u}cker invertible sheaf.
\noindent Finally, applying this to the parameter datum for torsors for the split group of type $E_8$, there exists an integer $p_0$ such that for every $p\geq p_0$, for every DVR as above with $L$ of characteristic $p\geq p_0$, every torsor over $\text{Spec } K$ for the split group of type $E_8$ is trivial. Since the center of this group is trivial, and since there are no outer automorphisms for this group, also every torsor for the automorphism group of this group is a trivial torsor. Thus every form of $E_8$ over $\text{Spec } K$ is isomorphic to the split form. Therefore, every torsor for a group of type $E_8$ over $\text{Spec } K$ is a trivial torsor. By Lemma \ref{lem-CDtrans} also the characteristic $0$ field $K$ has cohomological dimension $2$ and satisfies Period equals Index. By \cite[Theorem 1.2]{CTGP}, Serre's ``Conjecture II'' holds for $K$.
\section{Proof of Theorem \ref{thm-C2nicepac}} \label{sec-proofs2} \marpar{sec-proofs2}
\noindent For $2$-Fano complete intersections in projective space, resp. for low degree hypersurfaces in Grassmannians, there exists a parameter datum for these schemes over $\text{Spec } \mbb{Z}$ with a codimension $>1$ compactification, and the parameter datum satisfies the RSC property, cf. the previous section. Thus, by Corollary \ref{cor-integral}, every perfect PAC field of positive characteristic has rational points for these schemes.
\end{document} |
\begin{document}
\title{Self-testing nonlocality without entanglement}
\author{Ivan \v{S}upi\'{c}} \affiliation{CNRS, LIP6, Sorbonne Universit\'{e}, 4 place Jussieu, 75005 Paris, France} \email{ivan.supic@lip6.fr} \author{Nicolas Brunner} \affiliation{Département de Physique Appliquée, Université de Genève, 1211 Genève, Switzerland}
\date{\today}
\begin{abstract} Quantum theory allows for nonlocality without entanglement. Notably, there exist bipartite quantum measurements whose eigenstates are all product states, yet which cannot be implemented via local quantum operations and classical communication. In the present work, we show that a measurement exhibiting nonlocality without entanglement can be certified in a device-independent manner. Specifically, we consider a simple quantum network and construct a self-testing procedure. This result also demonstrates that genuine network quantum nonlocality can be obtained using only non-entangled measurements. From a more general perspective, our work establishes a connection between the effect of nonlocality without entanglement and the area of Bell nonlocality. \end{abstract}
\maketitle
\section{Introduction}
The (in)ability to distinguish certain quantum states via measurements is a central aspect of quantum theory, and key to applications such as quantum key distribution~\cite{BB84,Ekert} and data-hiding~\cite{QDH}. This question is of particular interest when considering composite systems. One may consider for instance two remote observers, Alice and Bob, sharing a state chosen from a certain set. Alice and Bob should try to identify which state they received. In this context, a natural limitation is that Alice and Bob are restricted to local measurements (or operations) assisted by classical communication (LOCC). Surprisingly, in this case there exist sets of product states forming a basis which nevertheless cannot be perfectly distinguished by any LOCC measurement. While Ref.~\cite{PeresWootters} provided preliminary results in this direction, the first examples were constructed by Bennett et al.~\cite{Bennett1999}, who coined the term ``nonlocality without entanglement''. This shows that separable quantum measurements (where all eigenstates are non-entangled) are strictly stronger than LOCC measurements. These ideas have been generalized in many different directions, see e.g.~\cite{UPB,Walgate2002,Niset,Cohen,FengShi,Childs,Croke}, with connections to the notion of unextendible product bases and bound entanglement~\cite{UPB,DiVincenzo}. In recent years, renewed interest has been devoted to these ideas, with the discovery of stronger forms of this effect, in particular in multipartite systems, see e.g.~\cite{Halder2019,Banik2,Ha}.
In this work, we discuss the question of certifying the effect of ``nonlocality without entanglement'' (NLWE) in a black-box setting. Specifically, we consider the quantum measurement featuring NLWE introduced in~\cite{Bennett1999} (for a two-qutrit system), and show that it can be certified in a device-independent manner. For this we consider the simple quantum network of entanglement swapping~\cite{Zukowski1993} (also known as ``bilocality'' network~\cite{Branciard_2010}), where the middle party performs the NLWE measurement (see Fig.~\ref{fig:domino}). Based on the assumption that the two quantum sources present in the network are independent, the standard assumption in network nonlocality~\cite{Branciard_2010,Fritz_2012}, we can show that NLWE can be certified using concepts and tools from self-testing~\cite{MayersYao,SupicBowles}, a framework for the device-independent certification of quantum resources. In fact, the full quantum setup can be self-tested, including also the shared entangled states and the local measurements of the side parties. Finally, we discuss how our self-testing scheme can be generalized.
Our result shows that a strong form of nonlocal quantum correlations in networks, known as genuine network quantum nonlocality~\cite{Supic2022}, can be obtained without the need for entangled measurements (as traditionally used in network nonlocality). From a more general point of view, our work also connects the effect of ``nonlocality without entanglement'' with the area of Bell nonlocality~\cite{Bell,review}.
\section{Problem}
\begin{figure}
\caption{We consider the entanglement swapping (or bilocality) network represented in (a), where the middle party Bob performs measurements on two incoming subsystems. Our goal is to self-test one of these measurements, denoted $\mathsf{M}'_{\lozenge}$, which features nonlocality without entanglement. That is, the measurement consists of only product eigenstates---as shown by the domino tiles in (b)---yet it cannot be implemented by LOCC. The red tile corresponds to the two projectors $\mathsf{M}'_{0,1|\lozenge}$ and $\mathsf{M}'_{0,2|\lozenge}$ from Eq. \eqref{dominomeas}, and so on.
}
\label{fig:scenario}
\label{fig:domino}
\end{figure}
We consider the entanglement swapping experiment~\cite{Zukowski1993} consisting of two sources and three parties (nodes), as shown in Fig.~\ref{fig:scenario}. We call the central party Bob, and the two lateral ones Alice and Charlie. The focus of our interest is the correlations obtained in this setting, the so-called bilocality network~\cite{Branciard_2010,branciard2012bilocal}, where the two sources are assumed to be independent of each other. Note that the characterisation of local and quantum correlations in networks featuring independent sources has attracted growing attention in recent years, see e.g.~\cite{Branciard_2010,Fritz_2012,gisin2020constraints,Renou_2019, Pozas_Kerstjens_2019} and~\cite{ArminReview} for a review.
In this work, our main goal is to certify in a device-independent manner that Bob performs a quantum measurement featuring NLWE. That is, assuming only the independence of the two sources, we will show that, from observed data alone, the presence of such a measurement can be demonstrated, up to irrelevant local transformations. We use the tools and concepts of self-testing~\cite{MayersYao,SupicBowles}, in particular the results of Ref.~\cite{Yang}. While self-tests have been developed for specific joint entangled measurements (such as the well-known Bell-state measurement)~\cite{renou2018self,Bancal_BSM}, we show here that a similar construction is possible for relevant measurements with only separable eigenstates.
Self-testing is a procedure which establishes a form of equivalence between two experiments. First, we have the physical experiment, which corresponds to the actual experiment performed in the laboratory, featuring a priori unknown states and measurements to be certified. The second experiment, termed the reference (or idealized) experiment, is the target of the certification. We say that the physical experiment simulates the reference experiment if it reproduces exactly its statistics. If such a simulation is enough to infer the existence of a local isometry mapping the physical experiment to the reference experiment, we say that the reference experiment is self-tested. In other words, this shows that the physical experiment must be equivalent (up to irrelevant local transformations) to the reference experiment.
We start by describing the reference experiment. The left source, connecting Alice and Bob, prepares a pair of maximally entangled qutrits. The right source, connecting Bob and Charlie, prepares the same state. Hence, the reference states are given by \begin{align}
\ket{\psi'}^{A'B_1'} = \ket{\psi'}^{B_2'C'} = \ket{\phi_+} \end{align} where $\ket{\phi_+} = (\ket{00}+\ket{11}+\ket{22})/\sqrt{3}$. Note that we use superscripts to identify the various systems, and that we distinguish the two subsystems of Bob by $B_1'$ and $B_2'$. The reference measurements for Alice are given by three ternary measurements:
\begin{align*}
\mathsf{M}'_{0} &= \left\{\mathsf{M}'_{0|0} = \proj{0}, \mathsf{M}'_{1|0} = \proj{1}, \mathsf{M}'_{2|0} = \proj{2} \right\},\\
\mathsf{M}'_{1} &= \left\{\mathsf{M}'_{0|1} = \proj{+}_{0,1}, \mathsf{M}'_{1|1} = \proj{-}_{0,1}, \mathsf{M}'_{2|1} = \proj{2} \right\},\\
\mathsf{M}'_{2} &= \left\{\mathsf{M}'_{0|2} = \proj{0}, \mathsf{M}'_{1|2} = \proj{+}_{1,2}, \mathsf{M}'_{2|2} = \proj{-}_{1,2} \right\}, \end{align*}
where $\proj{\pm}_{j,k} \equiv (\ket{j}\pm \ket{k})(\bra{j}\pm \bra{k})/{2}$. Here $\mathsf{M}'_{x}$ denotes the projective measurement for input $x=0,1,2$. Each measurement produces a ternary output $a$, with POVM elements denoted $\mathsf{M}'_{a|x}$. The reference measurements for Charlie are the same as for Alice, and we will denote them by $\mathsf{M}'_{z}$ with elements $\mathsf{M}'_{c|z}$.
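As a sanity check, each of the reference measurements above is projective and complete. The following minimal sketch (plain Python; the helper functions are ours, not part of the paper) verifies that each eigenbasis is orthonormal and that the three projectors of each measurement sum to the identity:

```python
import math

# Rank-one projector |u><u| as a nested list, for a real vector u.
def proj(u):
    return [[ui * uj for uj in u] for ui in u]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
e0, e1, e2 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
plus01, minus01 = [s, s, 0], [s, -s, 0]
plus12, minus12 = [0, s, s], [0, s, -s]

# Eigenbases of the three reference measurements M'_0, M'_1, M'_2.
bases = {
    0: [e0, e1, e2],
    1: [plus01, minus01, e2],
    2: [e0, plus12, minus12],
}

for x, vecs in bases.items():
    # Orthonormality: distinct outcomes orthogonal, each vector normalized.
    for i in range(3):
        for j in range(3):
            target = 1.0 if i == j else 0.0
            assert abs(inner(vecs[i], vecs[j]) - target) < 1e-12
    # Completeness: the three projectors sum to the 3x3 identity.
    total = [[sum(proj(v)[i][j] for v in vecs) for j in range(3)]
             for i in range(3)]
    assert all(abs(total[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(3) for j in range(3))
```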
The reference measurements of Bob are of two types. First, we have four measurements with a clear subsystem separation. On the subsystem $B_1$, these are used to self-test the state shared with Alice, as well as Alice's measurements. On the subsystem $B_2$, these are used to self-test the state shared with Charlie, and Charlie's measurements. More precisely, these measurements take the form $\mathsf{M}'_{b_1,b_2|y} = \mathsf{M}'_{b_1|y}\otimes \mathsf{M}'_{b_2|y}$ for $b_1,b_2=0,1,2$ and $y = 0,1,2,3$, where the POVM elements $\mathsf{M}'_{b_1|y}$ and $\mathsf{M}'_{b_2|y}$ correspond to the eigenvectors of the following four operators:
\begin{align*}
\frac{1}{\sqrt{2}}\begin{bmatrix}
1 & \pm1 & 0\\ \pm1 & \mp1 & 0\\ 0& 0 & \sqrt{2}
\end{bmatrix}, \qquad \frac{1}{\sqrt{2}}\begin{bmatrix}
\sqrt{2} & 0 & 0\\ 0 & 1 & \pm 1\\ 0& \pm 1 & \mp 1
\end{bmatrix}. \end{align*}
The second type of measurement for Bob is our main object of interest. It is a single extra measurement, corresponding to the input $y=\lozenge$. This measurement exhibits the property of nonlocality without entanglement, and is denoted ${\mathcal{M}'_{\lozenge}} = \{\mathsf{M}'_{b_1,b_2|\lozenge}\}_{b_1,b_2=0}^2$. It consists of nine eigenstates, which are all product with respect to the partition $B_1$ vs $B_2$, given by
\begin{widetext} \begin{align} \nonumber
\mathsf{M}'_{0,0|\lozenge} = \proj{1}\otimes\proj{1}, \qquad \mathsf{M}'_{0,1|\lozenge} = \proj{0}\otimes\proj{+}_{0,1}, \qquad \mathsf{M}'_{0,2|\lozenge} = \proj{0}\otimes\proj{-}_{0,1},\\ \label{dominomeas}
\mathsf{M}'_{1,0|\lozenge} = \proj{2}\otimes\proj{+}_{1,2}, \qquad \mathsf{M}'_{1,1|\lozenge} = \proj{2}\otimes\proj{-}_{1,2}, \qquad \mathsf{M}'_{1,2|\lozenge} = \proj{+}_{1,2}\otimes\proj{0},\\ \nonumber
\mathsf{M}'_{2,0|\lozenge} = \proj{-}_{1,2}\otimes\proj{0}, \qquad \mathsf{M}'_{2,1|\lozenge} = \proj{+}_{0,1}\otimes\proj{2}, \qquad \mathsf{M}'_{2,2|\lozenge} = \proj{-}_{0,1}\otimes\proj{2}. \end{align} \end{widetext}
The key property of this set of states is that they cannot be perfectly distinguished via local measurements assisted by classical communication (so-called LOCC measurements) \cite{Bennett1999}. Therefore, the measurement $\mathcal{M}'_{\lozenge}$ cannot be implemented via LOCC, illustrating the fact that separable measurements are strictly stronger than LOCC measurements.
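While the nine states of Eq.~\eqref{dominomeas} cannot be distinguished by LOCC, they do form an orthonormal product basis of $\mathbb{C}^3\otimes\mathbb{C}^3$ (the ``domino'' basis), so $\mathcal{M}'_{\lozenge}$ is a valid complete projective measurement. A small sketch checking this (plain Python; the list ordering follows Eq.~\eqref{dominomeas}):

```python
import math

s = 1 / math.sqrt(2)
k0, k1, k2 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
p01, m01 = [s, s, 0], [s, -s, 0]
p12, m12 = [0, s, s], [0, s, -s]

# Tensor product of two C^3 vectors, flattened to C^9.
def kron(u, v):
    return [ui * vj for ui in u for vj in v]

# The nine domino eigenstates, in the outcome order of Eq. (dominomeas).
domino = [
    kron(k1, k1),  kron(k0, p01), kron(k0, m01),
    kron(k2, p12), kron(k2, m12), kron(p12, k0),
    kron(m12, k0), kron(p01, k2), kron(m01, k2),
]

# Pairwise orthonormality: nine orthonormal product vectors in C^9 form a
# basis, so the corresponding projectors sum to the identity.
for i, u in enumerate(domino):
    for j, v in enumerate(domino):
        ip = sum(a * b for a, b in zip(u, v))
        assert abs(ip - (1.0 if i == j else 0.0)) < 1e-12
```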
Combining the above states and measurements, we obtain the statistics of the reference experiment, given by the joint conditional probability distribution \begin{multline} \label{ReferenceCorrelations}
p'(a,b_1,b_2,c|x,y,z) = \\ = \textrm{Tr}\left[\left({\mathsf{M}'}_{a|x}^{A'}\otimes {\mathsf{M}'}_{b_1,b_2|y}^{B_1'B_2'}\otimes {\mathsf{M}'}_{c|z}^{C'} \right){\phi}_+^{A'B'_1} \otimes {\phi}_+^{B'_2C'}\right]. \end{multline} where $x,z=0,1,2$, $y=0,1,2,3,\lozenge$ and $a,b_1,b_2,c=0,1,2$. Note that when computing the above equation, one should be careful about the order of the subsystems.
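Since every outcome appearing above is a rank-one product projector, and the two sources are independent, each reference probability factorizes over the two $\ket{\phi_+}$ pairs, using $\langle \alpha\otimes\beta|\phi_+\rangle = \tfrac{1}{\sqrt{3}}\sum_i \alpha_i \beta_i$ for real vectors. A short sketch (plain Python; the helper \texttt{prob} is ours) reproducing two such entries:

```python
import math

s = 1 / math.sqrt(2)
k0, k1 = [1, 0, 0], [0, 1, 0]
p01 = [s, s, 0]   # |+>_{0,1}

# For real vectors: <alpha (x) beta | phi_+> = (1/sqrt(3)) sum_i alpha_i beta_i.
def amp(alpha, beta):
    return sum(a * b for a, b in zip(alpha, beta)) / math.sqrt(3)

# Probability of a rank-one product outcome: alpha on A, beta1 (x) beta2 for
# Bob, gamma on C; it factorizes because the two sources are independent.
def prob(alpha, beta1, beta2, gamma):
    return (amp(alpha, beta1) * amp(beta2, gamma)) ** 2

# Inputs (x,z)=(0,0), outputs (a,b1,b2,c)=(1,0,0,1): all projectors are |1><1|.
assert abs(prob(k1, k1, k1, k1) - 1 / 9) < 1e-12
# Inputs (0,1), outputs (0,0,1,0): Bob's M'_{0,1|<>} = |0><0| (x) |+><+|_{0,1}.
assert abs(prob(k0, k0, p01, p01) - 1 / 9) < 1e-12
```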
\section{Main result}
We now present our main result, namely that the reference experiment is a self-test. Consider a physical experiment with a priori unknown states $\ket{\psi}^{AB_1}$ and $\ket{\psi}^{B_2C}$ and measurements $\{\mathsf{M}_{a|x}\}$, $\{\mathsf{M}_{b_1,b_2|y}\}$ and $\{\mathsf{M}_{c|z}\}$, resulting in observed correlations \begin{multline} \label{PhysicalCorrelations}
p(a,b_1,b_2,c|x,y,z) = \\ = \textrm{Tr}\left[\left({\mathsf{M}}_{a|x}^{A}\otimes {\mathsf{M}}_{b_1,b_2|y}^{B_1B_2}\otimes {\mathsf{M}}_{c|z}^{C} \right){\psi}^{AB_1} \otimes {\psi}^{B_2C}\right]. \end{multline} Below we show that if these statistics correspond to those of the reference experiment, as given in Eq.~\eqref{ReferenceCorrelations}, then all states and measurements of the physical experiment are equivalent (up to irrelevant local transformations) to the reference states and measurements. In particular, this implies that the measurement $y=\lozenge$ for Bob must feature NLWE. More formally, we have the following theorem.
\begin{thm}\label{theorem} Consider a physical experiment such that its statistics, as given in Eq. \eqref{PhysicalCorrelations}, match exactly the statistics of the reference experiment given in Eq. \eqref{ReferenceCorrelations}. Then there exists a local isometry $\tilde{\Phi}$ mapping \begin{itemize}
\item Bob's measurement $\mathsf{M}_\lozenge$ to the NLWE measurement $\mathcal{M}'_{\lozenge}$:
\begin{multline}\label{stdomino}
\tilde{\Phi}\left(\mathsf{M}_{b|\lozenge}^{B_1B_2}\ket{\psi}^{AB_1}\otimes\ket{00}^{A'B_1'}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{B_2'C'}\right) = \\
= \ket{\tilde{\xi}}^{AB_1B_2C}\otimes{\mathsf{M}'}_{b|\lozenge}^{B_1'B_2'}\ket{\phi_+}^{A'B_1'}\otimes\ket{\phi_+}^{B_2'C'}, \end{multline} where $\ket{\tilde{\xi}}^{AB_1B_2C} = \ket{\xi_1}^{AB_1}\otimes\ket{\xi_2}^{B_2C}$ is a valid quantum state.
\item the states $\ket{\psi}^{AB_1}$ and $\ket{\psi}^{B_2C}$ to the reference states (i.e. pairs of maximally entangled qutrits), and Alice's and Charlie's measurements to the corresponding reference measurements $\mathsf{M}'_j$:
\begin{multline}\label{iso1}
\tilde{\Phi}\left(\mathsf{M}_{a|x}\ket{\psi}^{AB_1}\otimes\mathsf{M}_{c|z}\ket{\psi}^{B_2C}\otimes\ket{0000}^{A'B_1'B_2'C'}\right) = \\ = \ket{\tilde{\xi}}^{AB_1B_2C}\otimes\mathsf{M}'_{a|x}\ket{\phi_+}^{A'B_1'}\otimes\mathsf{M}'_{c|z}\ket{\phi_+}^{B_2'C'}. \end{multline}
\end{itemize} \end{thm} \begin{proof}
The theorem consists of two self-testing results, and we start by proving the second one. The idea is to first prove that reproducing some of the marginals of the reference correlations, specifically $\{p(a,b_1|x,y)\}$, implies the equivalence between $\ket{\psi}^{AB_1}$ and $\ket{\phi_+}$. The formal statement is given in the following lemma.
\begin{table}
\begin{tabular}{|c|c|}
\hline
$\quad$ Input set $(x,z)$ $\quad$ & $\quad$ Matching output set $(a,b_1,b_2,c)$ $\quad$ \\
\hline
\hline
$(0,0)$ & $(1,0,0,1)$ \\
\hline
$(0,1)$ & $(0,0,1,0)$\\
& $(0,0,2,1)$\\
\hline
$(0,2)$ & $(2,1,0,1)$\\
& $(2,1,1,2)$\\
\hline
$(2,0)$ & $(1,1,2,0)$\\
& $(2,2,0,0)$\\
\hline
$(1,0)$ & $(0,2,1,2)$\\
& $(1,2,2,2)$\\
\hline
\end{tabular}
\caption{For Bob's input $y=\lozenge$, certain sets of inputs $(x,z)$ for Alice and Charlie imply specific output patterns.}\label{table} \end{table}
\begin{lem}\label{lem2}
Let the state $\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}$ and the measurements $\{\mathsf{M}_{a|x}\}$ and $\{\mathsf{M}_{b_1|y}\}$, where $\mathsf{M}_{b_1|y} = \sum_{b_2}\mathsf{M}_{b_1,b_2|y}$, produce the correlations $p(a,b_1|x,y)$. If those correlations are such that
\begin{multline}
p(a,b_1|x,y) = \emph{\textrm{Tr}}\left[\left({\mathsf{M}'}_{a|x}^{A'}\otimes {\mathsf{M}'}_{b_1|y}^{B'_1}\right)\left({\phi}_+^{A'B'_1}\right)\right], \end{multline}
for every $a,b_1,x,y$, then there exists an isometry $\Phi$ such that
\begin{multline}\label{iso}
\Phi\left(\mathsf{M}_{a|x}\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{A'B_1'}\right) = \\ = \ket{\xi}^{ABC}\otimes\left(\mathsf{M}'_{a|x}\ket{\phi_+}^{A'B_1'}\right), \end{multline}
where $\ket{\xi}^{ABC}$ is a valid quantum state. \end{lem}
The proof can be found in Appendix~\ref{suppmat1}; it is directly inspired by the self-testing of a maximally entangled pair of qutrits presented in~\cite{Yang}. Combining this with methods for self-testing joint measurements introduced in~\cite{Supic2021} allows us to obtain the self-testing result for both states, as well as all measurements performed by Alice and Charlie, as stated in eq.~\eqref{iso1}. The proof of this equation is given in Appendix~\ref{suppmat2}.
Based on these self-testing results, we move on to the second part of the proof, certifying the additional measurement of Bob, namely $\mathsf{M}_{\lozenge}$ from Eq.~\eqref{stdomino}. First, note that the simulation of the reference correlations implies
\begin{align}\nonumber
p(b|\lozenge) &= \textrm{Tr}\left[\mathsf{M}_{b|\lozenge}^{B}\left(\psi^{AB_1}\otimes\psi^{B_2C}\right)\right] = \frac{1}{9} \qquad \forall b. \end{align}
This implies that the norm of the vectors $\mathsf{M}_{b|\lozenge}\left(\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}\right)$ equals $1/3$ for all outcomes $b$. Similarly, from the simulation of the reference correlations we have:
\begin{equation}
p(a,c|x,z) = \frac{1}{9} \qquad \forall a,c,x,z, \end{equation}
implying that the norm of the vectors $\mathsf{M}_{a|x}\ket{\psi}^{AB_1}\otimes \mathsf{M}_{c|z}\ket{\psi}^{B_2C}$ for all $a,c,x,z$ equals $1/3$.
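These norm claims can be verified directly in the reference experiment: applying a rank-one projector to one half of $\ket{\phi_+}$ yields a vector of norm $1/\sqrt{3}$, and norms multiply across the two independent sources. A sketch (plain Python; helper functions are ours):

```python
import math

s3 = 1 / math.sqrt(3)
# |phi_+> on C^3 (x) C^3, flattened with index 3*i + j.
phi = [s3 if i == j else 0.0 for i in range(3) for j in range(3)]

# Apply a rank-one projector |u><u| (real u) on the first tensor factor.
def project_first(u, state):
    overlap = [sum(u[i] * state[3 * i + j] for i in range(3)) for j in range(3)]
    return [u[i] * overlap[j] for i in range(3) for j in range(3)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

k1 = [0, 1, 0]
v = project_first(k1, phi)  # (M'_{1|0} (x) 1)|phi_+>
assert abs(norm(v) - 1 / math.sqrt(3)) < 1e-12
# Norms multiply across the two independent sources:
# || M_{a|x}|psi> (x) M_{c|z}|psi> || = (1/sqrt(3)) * (1/sqrt(3)) = 1/3.
assert abs(norm(v) * norm(v) - 1 / 3) < 1e-12
```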
Let us define for certain input sets $(x,\lozenge,z)$ their matching output sets $(a,b_1,b_2,c)$, as given in Table~\ref{table}. The simulation of the reference correlations imposes that for every set of inputs from Table~\ref{table}, the matching set of outputs happens with probability $1/9$. Let us now concentrate on the set of inputs $(0,\lozenge,0)$ and its matching set of outputs $(1,0,0,1)$, \emph{i.e.}~$p(1,0,0,1|0,\lozenge,0) = 1/9$. Given eq.~\eqref{iso} we obtain the following set of relations:
\begin{widetext} \begin{align*}
p(1,0,0,1|0,\lozenge,0) &= \bra{\psi}^{AB_1}\otimes\bra{\psi}^{B_2C}\mathsf{M}^A_{1|0}\otimes \mathsf{M}^{B_1B_2}_{0,0|\lozenge}\otimes\mathsf{M}^C_{1|0}\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}\\
&= \left(\bra{\psi}^{AB_1}\otimes\bra{00}^{A'B_1'}\otimes\bra{\psi}^{B_2C}\otimes\bra{00}^{B'_2C'}\mathsf{M}^A_{1|0}\otimes\mathsf{M}^C_{1|0}\otimes\mathds{1}^{A'B_1B_2B'_1B'_2C'}\right)\Phi^\dagger\times \\ &\qquad\qquad\times \Phi\left(\mathsf{M}_{0,0|\lozenge}^{B_1B_2}\otimes\mathds{1}^{AA'B_1'B_2'CC'}\ket{\psi}^{AB_1}\otimes\ket{00}^{A'B_1'}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{B_2'C'}\right)\\
&= \bra{\xi_1}^{AB_1}\otimes\bra{\xi_2}^{B_2C}\otimes\bra{\phi_+}^{A'B'_1}{\mathsf{M}'}_{1|0}^{A'}\otimes\bra{\phi_+}^{B'_2C'}{\mathsf{M}'}^{C'}_{1|0}\times\\ &\qquad\qquad\times\Phi\left(\mathsf{M}_{0,0|\lozenge}^{B_1B_2}\otimes\mathds{1}^{AA'B_1'B_2'CC'}\ket{\psi}^{AB_1}\otimes\ket{00}^{A'B_1'}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{B_2'C'}\right) = \frac{1}{9} \end{align*} \end{widetext}
The second relation is a consequence of the fact that isometries preserve the scalar product. Given that both vectors in the scalar product in the third equality have norm $1/3$, the scalar product attains the product of the norms; this saturation of the Cauchy-Schwarz inequality implies
\begin{multline}
\Phi\left(\mathsf{M}_{0,0|\lozenge}^{B_1B_2}\ket{\psi}^{AB_1}\otimes\ket{00}^{A'B_1'}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{B_2'C'}\right) = \\
= \frac{1}{3}\ket{\xi_1}^{AB_1}\otimes\ket{\xi_2}^{B_2C}\otimes\ket{11}^{A'B_1'}\otimes\ket{11}^{B_2'C'} \end{multline}
By a similar argumentation we obtain the following set of relations for all outcomes $b = (b_1,b_2)$:
\begin{align*}
&\Phi\left(\mathsf{M}_{b|\lozenge}^{B_1B_2}\ket{\psi}^{AB_1}\otimes\ket{00}^{A'B_1'}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{B_2'C'}\right) = \\
& = \frac{1}{3}\ket{\xi_1}^{AB_1}\otimes\ket{\xi_2}^{B_2C}\otimes\ket{\psi'_{b}}^{A'}\otimes\ket{\psi'_{b}}^{B_1'}\otimes\ket{\phi'_{b}}^{B_2'}\otimes\ket{\phi'_{b}}^{C'},
\end{align*}
where the states $\ket{\psi'_{b}}$ and $\ket{\phi'_{b}}$ are such that $\mathsf{M}'_{b|\lozenge} = \proj{\psi'_{b}}\otimes\proj{\phi'_{b}}$. All these equations, together with eq.~\eqref{finalst}, imply the self-testing result we need, as given in eq.~\eqref{stdomino}.
\end{proof}
\section{General construction}
The above construction can be generalized to other measurements featuring NLWE, for higher dimensions (still in the bipartite case) and to the multipartite case. The general idea is that the above procedure allows one to self-test essentially any measurement with product rank-one eigenstates. When the latter involve only real parameters, the construction is rather straightforward, while the general case with complex coefficients is more challenging, as usual in self-testing~\cite{Supic2021}.
For bipartite NLWE measurements, the bilocality network can be readily used.
If the measurement $\mathsf{M}'_{b|\lozenge}$ acts now on $\mathbb{C}^{d_A}\otimes\mathbb{C}^{d_B}$, the local dimensions of two maximally entangled states distributed in the network must be adapted (to $d_A$ and $d_B$ respectively). These states, and the local measurements of Alice and Charlie, can be self-tested using any of the available methods (see for example~\cite{Yang,Supic2021}). In turn, Alice and Charlie can remotely prepare for Bob (via their certified local measurements acting on half of the shared maximally entangled pairs) the pair of input states in order to match any of the eigenstates of the measurement $\mathsf{M}'_{b|\lozenge}$.
Moving to the multipartite case will involve a star network, where the central node performs the measurement with NLWE. For an $N$-party measurement on qudits, we consider a star network with $N$ branches. On each branch, a maximally entangled state of two qudits must first be self-tested, as well as the local measurements of the lateral nodes. Second, the central NLWE measurement is self-tested as above.
\section{Discussion}
We discussed the device-independent certification of the effect of ``nonlocality without entanglement''. Specifically, we showed that a quantum measurement featuring only separable eigenstates, but which cannot be implemented via an LOCC procedure, can be certified in a quantum network with independent sources, based only on observed statistics.
A point worth noting is that our self-test construction has interesting consequences from the perspective of network nonlocality. In particular, this example shows that genuine quantum network nonlocality \cite{Supic2022}, a form of quantum nonlocality that can only arise in networks, is in fact possible without involving any entangled measurement.
Let us point out that previous works have discussed the certification of non-classical quantum measurements in a partially device-independent setting, considering prepared quantum states of limited dimension \cite{Vertesi2011,Bennet2014}, also with an experimental demonstration \cite{Bennet2014}. A key difference with our work (besides the stronger assumptions), is that these previous results could only certify that a measurement is not achievable via LOCC, but could not certify the property of NLWE, as we do here.
From a more general perspective, our work establishes a connection between two forms of nonlocality in quantum theory, namely Bell nonlocality and the effect of nonlocality without entanglement.
\begin{acknowledgments} The authors acknowledge financial support from the Starting ERC grant QUSCO and the Swiss National Science Foundation (project $2000021\_192244/1$ and NCCR SwissMAP). \end{acknowledgments}
\onecolumngrid \appendix
\section{Proof of Lemma~\ref{lem2}}\label{suppmat1}
In this section we prove Lemma~\ref{lem2}. The proof is largely inspired by the proof offered in~\cite{Yang}. The most important part of our contribution is the self-testing of the measurements as well, and not just the state as in~\cite{Yang}. It will be convenient to define the marginal physical measurement operators for Bob, $\mathsf{M}^B_{b_1|y} = \sum_{b_2}\mathsf{M}^B_{b_1,b_2|y}$. Note that this physical coarse-grained measurement can, in principle, act on both Hilbert spaces $\mathcal{H}^{B_1}$ and $\mathcal{H}^{B_2}$. Let us further introduce the following notation:
\begin{align}
\mathsf{Z}_{0,1}^A = \mathsf{M}^A_{0|0} - \mathsf{M}^A_{1|0}, \qquad \mathsf{Z}_{1,2}^A = \mathsf{M}^A_{1|0} - \mathsf{M}^A_{2|0}, &\qquad \mathsf{X}_{0,1}^A = \mathsf{M}^A_{0|1} - \mathsf{M}^A_{1|1}, \qquad \mathsf{X}_{1,2}^A = \mathsf{M}^A_{1|2} - \mathsf{M}^A_{2|2},\\
\mathsf{D}_{0,1}^B = \mathsf{M}^B_{0|0} - \mathsf{M}^B_{1|0}, \qquad \mathsf{D}_{1,2}^B = \mathsf{M}^B_{1|2} - \mathsf{M}^B_{2|2}, &\qquad \mathsf{E}_{0,1}^B = \mathsf{M}^B_{0|1} - \mathsf{M}^B_{1|1}, \qquad \mathsf{E}_{1,2}^B = \mathsf{M}^B_{1|3} - \mathsf{M}^B_{2|3},\\ \label{hatoperators}
{\hat{\mathsf{Z}}}^B_{0,1} = \frac{\mathsf{D}_{0,1}^B + \mathsf{E}_{0,1}^B}{\sqrt{2}}, \qquad {\hat{\mathsf{X}}}^B_{0,1} = \frac{\mathsf{D}_{0,1}^B - \mathsf{E}_{0,1}^B}{\sqrt{2}},&\qquad {\hat{\mathsf{Z}}}^B_{1,2} = \frac{\mathsf{D}_{1,2}^B + \mathsf{E}_{1,2}^B}{\sqrt{2}}, \qquad {\hat{\mathsf{X}}}^B_{1,2} = \frac{\mathsf{D}_{1,2}^B - \mathsf{E}_{1,2}^B}{\sqrt{2}}. \end{align}
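In the ideal case, restricted to the $\{\ket{0},\ket{1}\}$ block, Bob's observables are $\mathsf{D} = (\sigma_z+\sigma_x)/\sqrt{2}$ and $\mathsf{E} = (\sigma_z-\sigma_x)/\sqrt{2}$, so the hat operators of Eq.~\eqref{hatoperators} recover $\sigma_z$ and $\sigma_x$ exactly. A two-dimensional numeric sketch (an illustration under this ideal-case assumption, not part of the proof):

```python
import math

s = 1 / math.sqrt(2)
sz = [[1, 0], [0, -1]]
sx = [[0, 1], [1, 0]]

# (A + c*B)/sqrt(2), entrywise, for 2x2 matrices.
def comb(A, B, c):
    return [[s * (A[i][j] + c * B[i][j]) for j in range(2)] for i in range(2)]

# Ideal Bob observables on the {|0>,|1>} block.
D = comb(sz, sx, +1)   # (sigma_z + sigma_x)/sqrt(2)
E = comb(sz, sx, -1)   # (sigma_z - sigma_x)/sqrt(2)

# Hat operators of Eq. (hatoperators).
Zhat = comb(D, E, +1)  # (D + E)/sqrt(2)
Xhat = comb(D, E, -1)  # (D - E)/sqrt(2)

# They recover the Pauli operators exactly.
for i in range(2):
    for j in range(2):
        assert abs(Zhat[i][j] - sz[i][j]) < 1e-12
        assert abs(Xhat[i][j] - sx[i][j]) < 1e-12
```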
The physical operators $\mathsf{Z}^A_{i,j}$ and $\mathsf{X}_{i,j}^A$ are supposed to act as the Pauli operators $\sigma_z$ and $\sigma_x$, respectively, on the qubit subspaces spanned by the basis vectors $\{\ket{i}^A,\ket{j}^A\}$, hence the notation. Of course, these operators are uncharacterized, and only after the self-testing result is proven is such notation justified. The operators $\hat{\mathsf{Z}}^B_{i,j}$ and $\hat{\mathsf{X}}_{i,j}^B$ are supposed to act as $\sigma_z$ and $\sigma_x$, respectively, on the qubit subspaces spanned by $\{\ket{i}^B,\ket{j}^B\}$, but we note from the outset that these operators, unlike $\mathsf{Z}^A_{i,j}$ and $\mathsf{X}_{i,j}^A$, need not even be unitary. Let us now define operators which, in the ideal case, act as identities on the designated qubit subspaces:
\begin{align}
\mathds{1}_{0,1,Z}^A = \mathsf{M}^A_{0|0} + \mathsf{M}^A_{1|0}, \qquad \mathds{1}_{1,2,Z}^A = \mathsf{M}^A_{1|0} + \mathsf{M}^A_{2|0}, &\qquad \mathds{1}_{0,1,X}^A = \mathsf{M}^A_{0|1} + \mathsf{M}^A_{1|1}, \qquad \mathds{1}_{1,2,X}^A = \mathsf{M}^A_{1|2} + \mathsf{M}^A_{2|2},\\
\mathds{1}_{0,1,D}^B = \mathsf{M}^B_{0|0} + \mathsf{M}^B_{1|0}, \qquad \mathds{1}_{1,2,D}^B = \mathsf{M}^B_{1|2} + \mathsf{M}^B_{2|2}, &\qquad \mathds{1}_{0,1,E}^B = \mathsf{M}^B_{0|1} + \mathsf{M}^B_{1|1}, \qquad \mathds{1}_{1,2,E}^B = \mathsf{M}^B_{1|3} + \mathsf{M}^B_{2|3} \end{align}
The operators $\mathds{1}_{0,1,D}^B$, $\mathds{1}_{0,1,E}^B$, $\mathds{1}_{1,2,D}^B$ and $\mathds{1}_{1,2,E}^B$ are projectors onto the subspaces spanned by the eigenvectors of $\mathsf{D}_{0,1}^B$, $\mathsf{E}_{0,1}^B$, $\mathsf{D}_{1,2}^B$ and $\mathsf{E}_{1,2}^B$, respectively. In a similar fashion we would like to define projectors onto the subspaces associated with ${\hat{\mathsf{Z}}}^B_{0,1}$, ${\hat{\mathsf{X}}}^B_{0,1}$, ${\hat{\mathsf{Z}}}^B_{1,2}$ and ${\hat{\mathsf{X}}}^B_{1,2}$. However, as we said earlier, those operators are not necessarily unitary, so we define their regularized versions in the following way:
\begin{equation}
{\mathsf{Z}}^B_{0,1} = \frac{{\hat{\mathsf{Z}}}^B_{0,1}}{|{\hat{\mathsf{Z}}}^B_{0,1}|}, \qquad {\mathsf{X}}^B_{0,1} = \frac{{\hat{\mathsf{X}}}^B_{0,1}}{|{\hat{\mathsf{X}}}^B_{0,1}|}, \qquad {\mathsf{Z}}^B_{1,2} = \frac{{\hat{\mathsf{Z}}}^B_{1,2}}{|{\hat{\mathsf{Z}}}^B_{1,2}|}, \qquad {\mathsf{X}}^B_{1,2} = \frac{{\hat{\mathsf{X}}}^B_{1,2}}{|{\hat{\mathsf{X}}}^B_{1,2}|}. \end{equation}
Such a renormalization of eigenvalues is not possible if any of the operators appearing in the numerators has an eigenvector with eigenvalue equal to zero. If such a case occurs, we simply change all such eigenvalues from $0$ to $1$. In this way we obtain unitary operators ${\mathsf{Z}}^B_{i,j}$ and ${\mathsf{X}}^B_{i,j}$. Now we define the subspace $\mathcal{B}_{i,j}$ spanned by the ranges of the operators $\mathsf{D}_{i,j}^B$ and $\mathsf{E}_{i,j}^B$, and let $\mathds{1}_{i,j}^B$ be the projector onto $\mathcal{B}_{i,j}$. The definition of ${{\mathsf{Z}}}^B_{i,j}$ and ${{\mathsf{X}}}^B_{i,j}$ implies that their range is exactly the subspace $\mathcal{B}_{i,j}$, and since all their eigenvalues are either $1$ or $-1$ the following equations hold
\begin{equation}
{{\mathsf{Z}}^B_{i,j}}^2 = \mathds{1}_{i,j}^B, \qquad {{\mathsf{X}}^B_{i,j}}^2 = \mathds{1}_{i,j}^B \end{equation}
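As an aside, the regularization $\hat{\mathsf{Z}}\mapsto\hat{\mathsf{Z}}/|\hat{\mathsf{Z}}|$ with zero eigenvalues replaced by $1$ can be sketched numerically (an illustration only, not part of the proof; the function name is ours):

```python
import numpy as np

def regularize(op, tol=1e-12):
    """Replace each eigenvalue of a Hermitian operator by its sign
    (this equals op/|op| on the support), and zero eigenvalues by +1."""
    vals, vecs = np.linalg.eigh(op)
    signs = np.where(np.abs(vals) < tol, 1.0, np.sign(vals))
    return vecs @ np.diag(signs) @ vecs.conj().T

# A Hermitian operator with eigenvalues 0.8, -0.3 and 0:
op = np.diag([0.8, -0.3, 0.0])
Z = regularize(op)
assert np.allclose(Z, np.diag([1.0, -1.0, 1.0]))  # eigenvalues are +-1
assert np.allclose(Z @ Z, np.eye(3))              # hence Z is unitary
```

The resulting operator squares to the identity, which is the property used in the equation above.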
Let us now define the following subnormalized states:
\begin{align}
\ket{\psi_{i,j,Z,A}} = \mathds{1}_{i,j,Z}^A\ket{\psi} &\qquad \ket{\psi_{i,j,X,A}} = \mathds{1}_{i,j,X}^A\ket{\psi}\\
\ket{\psi_{i,j,D,B}} = \mathds{1}_{i,j,D}^B\ket{\psi} &\qquad \ket{\psi_{i,j,E,B}} = \mathds{1}_{i,j,E}^B\ket{\psi},\qquad
\ket{\psi_{i,j,B}} = \mathds{1}_{i,j}^B\ket{\psi}, \end{align}
where we use the notation $\ket{\psi}\equiv \ket{\psi}^{ABC} = \ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}$. Whenever a state is written without any superscript, we mean the whole state distributed in the network. Let us now analyse the consequences of the fact that the physical experiment simulates the reference one. In the first step we concentrate on the following set of correlations:
\begin{align} \label{jedan}
\bra{\psi}\mathsf{M}_{2|0}^A\otimes \mathds{1}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{2|1}^A\otimes \mathds{1}^B\ket{\psi} = \bra{\psi}\mathds{1}^A\otimes\mathsf{M}_{2|0}^B\ket{\psi} = \bra{\psi}\mathds{1}^A\otimes\mathsf{M}_{2|1}^B\ket{\psi} = \frac{1}{3}\\ \label{dva}
\bra{\psi}\mathsf{M}_{0|0}^A\otimes \mathds{1}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{0|2}^A\otimes \mathds{1}^B\ket{\psi} = \bra{\psi}\mathds{1}^A\otimes\mathsf{M}_{0|2}^B\ket{\psi} = \bra{\psi}\mathds{1}^A\otimes\mathsf{M}_{0|3}^B\ket{\psi} = \frac{1}{3}\\ \label{tri}
\bra{\psi}\mathsf{M}_{2|0}^A\otimes \mathsf{M}_{2|0}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{2|1}^A\otimes \mathsf{M}_{2|0}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{2|0}^A\otimes\mathsf{M}_{2|1}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{2|1}^A\otimes\mathsf{M}_{2|1}^B\ket{\psi} = \frac{1}{3}\\ \label{cetiri}
\bra{\psi}\mathsf{M}_{0|0}^A\otimes \mathsf{M}_{0|2}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{0|2}^A\otimes \mathsf{M}_{0|2}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{0|0}^A\otimes\mathsf{M}_{0|3}^B\ket{\psi} = \bra{\psi}\mathsf{M}_{0|2}^A\otimes\mathsf{M}_{0|3}^B\ket{\psi} = \frac{1}{3} \end{align}
Given that in the DI scenario all measurement operators can be taken to be projectors, eqs.~\eqref{jedan},\eqref{dva} imply that the norms of the vectors $\mathsf{M}_{2|0}^A\otimes \mathds{1}^B\ket{\psi}$, $\mathsf{M}_{2|1}^A\otimes \mathds{1}^B\ket{\psi}$, $\mathsf{M}_{0|0}^A\otimes \mathds{1}^B\ket{\psi}$, $\mathsf{M}_{0|2}^A\otimes \mathds{1}^B\ket{\psi}$, $\mathds{1}^A\otimes\mathsf{M}_{2|0}^B\ket{\psi}$, $\mathds{1}^A\otimes\mathsf{M}_{2|1}^B\ket{\psi}$, $\mathds{1}^A\otimes\mathsf{M}_{0|2}^B\ket{\psi}$ and $\mathds{1}^A\otimes\mathsf{M}_{0|3}^B\ket{\psi}$ are all equal to $1/\sqrt{3}$. Eqs.~\eqref{tri},\eqref{cetiri} show that the inner product of two such vectors of norm $1/\sqrt{3}$ is equal to $1/3$, which by saturation of the Cauchy–Bunyakovsky–Schwarz inequality implies that the vectors appearing in each inner product are parallel, and in fact equal:
\begin{align}\label{peta}
\mathsf{M}_{2|0}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{2|0}^B\ket{\psi}, \qquad &\mathsf{M}_{2|0}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{2|1}^B\ket{\psi},\\
\mathsf{M}_{2|1}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{2|0}^B\ket{\psi}, \qquad &\mathsf{M}_{2|1}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{2|1}^B\ket{\psi},\\ \label{sedma}
\mathsf{M}_{0|0}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{0|2}^B\ket{\psi}, \qquad &\mathsf{M}_{0|0}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{0|3}^B\ket{\psi},\\ \label{osma}
\mathsf{M}_{0|2}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{0|2}^B\ket{\psi}, \qquad &\mathsf{M}_{0|2}^A\otimes \mathds{1}^B\ket{\psi} = \mathds{1}^A\otimes\mathsf{M}_{0|3}^B\ket{\psi} \end{align}
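The saturation argument can be illustrated numerically in the ideal realization, where $\ket{\psi}$ is the maximally entangled pair of qutrits and the relevant measurement operators are computational-basis projectors (a sketch under these assumptions only):

```python
import numpy as np

d = 3
kron = np.kron
# Maximally entangled pair of qutrits (1/sqrt(3)) * sum_j |jj>
psi = sum(kron(np.eye(d)[:, j], np.eye(d)[:, j]) for j in range(d)) / np.sqrt(d)

M2 = np.zeros((d, d)); M2[2, 2] = 1   # ideal projector |2><2|

u = kron(M2, np.eye(d)) @ psi         # Alice-side projection
v = kron(np.eye(d), M2) @ psi         # Bob-side projection

assert np.isclose(np.linalg.norm(u), 1 / np.sqrt(3))   # norms 1/sqrt(3)
assert np.isclose(np.vdot(u, v).real, 1 / 3)           # inner product 1/3
# Cauchy-Schwarz is saturated, so the two vectors coincide:
assert np.allclose(u, v)
```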
The tensor product form of the state $\ket{\psi} = \ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}$ and eqs.~\eqref{peta}-\eqref{osma} imply that measurements $\mathsf{M}_{2|0}^B$, $\mathsf{M}_{2|1}^B$, $\mathsf{M}_{0|2}^B$ and $\mathsf{M}_{0|3}^B$ act nontrivially only on Hilbert space $\mathcal{H}^{B_1}$. Given that for all measurements it holds $\sum_{i}\mathsf{M}_{i|j} = \mathds{1}$, eqs.~\eqref{peta}-\eqref{osma} imply the following relations:
\begin{align}\label{skoro}
\ket{\psi_{0,1,Z,A}} = \ket{\psi_{0,1,D,B}} = \ket{\psi_{0,1,E,B}} = \ket{\psi_{0,1,X,A}} \equiv \ket{\psi_{0,1}}\\
\ket{\psi_{1,2,Z,A}} = \ket{\psi_{1,2,D,B}} = \ket{\psi_{1,2,E,B}} = \ket{\psi_{1,2,X,A}} \equiv \ket{\psi_{1,2}} \end{align}
and furthermore the norms of $\ket{\psi_{0,1}}$ and $\ket{\psi_{1,2}}$ are equal to $\sqrt{\frac{2}{3}}$. Also, since $\ket{\psi_{0,1,D,B}} = \ket{\psi_{0,1,E,B}}$ and $\ket{\psi_{1,2,D,B}} = \ket{\psi_{1,2,E,B}}$, given the definitions of $\mathds{1}_{0,1}^B$ and $\mathds{1}_{1,2}^B$, it must hold that
\begin{equation}
\ket{\psi_{0,1,B}} = \ket{\psi_{0,1}}, \qquad \ket{\psi_{1,2,B}} = \ket{\psi_{1,2}}. \end{equation}
The simulation of the reference correlations implies:
\begin{equation}
\bra{\psi}\mathsf{Z}_{0,1}^A\otimes\mathsf{D}_{0,1}^B + \mathsf{Z}_{0,1}^A\otimes\mathsf{E}_{0,1}^B + \mathsf{X}_{0,1}^A\otimes\mathsf{D}_{0,1}^B - \mathsf{X}_{0,1}^A\otimes\mathsf{E}_{0,1}^B\ket{\psi} = \frac{2}{3}\cdot2\sqrt{2} \end{equation}
A variant of the sum-of-squares (SOS) decomposition of the correspondingly shifted Bell operator reads:
\begin{multline}\label{sos}
\sqrt{2}\left[\frac{\mathds{1}^A_{0,1,Z} + \mathds{1}^A_{0,1,X} + \mathds{1}^B_{0,1,D} + \mathds{1}^B_{0,1,E}}{\sqrt{2}} - \left(\mathsf{Z}_{0,1}^A\otimes\left(\mathsf{D}_{0,1}^B + \mathsf{E}_{0,1}^B\right)+ \mathsf{X}_{0,1}^A\otimes\left(\mathsf{D}_{0,1}^B-\mathsf{E}_{0,1}^B\right) \right)\right] = \\ = \left(\mathsf{Z}^A_{0,1} - \frac{\mathsf{D}_{0,1}^B + \mathsf{E}_{0,1}^B}{\sqrt{2}}\right)^2 + \left(\mathsf{X}^A_{0,1} - \frac{\mathsf{D}_{0,1}^B - \mathsf{E}_{0,1}^B}{\sqrt{2}}\right)^2, \end{multline}
where we used the fact that ${\mathsf{Z}^A_{0,1}}^2 = \mathds{1}^A_{0,1,Z}$, ${\mathsf{X}^A_{0,1}}^2 = \mathds{1}^A_{0,1,X}$, ${\mathsf{D}^B_{0,1}}^2 = \mathds{1}^B_{0,1,D}$ and ${\mathsf{E}^B_{0,1}}^2 = \mathds{1}^B_{0,1,E}$. Given that
\begin{equation} \bra{\psi}\mathds{1}^A_{0,1,Z} + \mathds{1}^A_{0,1,X} + \mathds{1}^B_{0,1,D} + \mathds{1}^B_{0,1,E}\ket{\psi} = \frac{8}{3}, \end{equation}
the expectation value of the l.h.s. of eq.~\eqref{sos} on $\ket{\psi}$ is equal to $0$, which means that the expectation values of both terms on the r.h.s. must vanish as well, since they are squares of Hermitian operators and hence nonnegative. Recalling the notation introduced in~\eqref{hatoperators}, this implies:
\begin{equation}\label{zz}
\mathsf{Z}^{A}_{0,1}\ket{\psi} = {\hat{\mathsf{Z}}}^{B}_{0,1}\ket{\psi}, \qquad \mathsf{X}^{A}_{0,1}\ket{\psi} = {\hat{\mathsf{X}}}^{B}_{0,1}\ket{\psi} \end{equation}
Since $\ket{\psi} = \ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}$, and on the l.h.s. of eqs.~\eqref{zz} the identity acts on $\ket{\psi}^{B_2C}$, we can conclude that the operators ${\hat{\mathsf{Z}}}^{B}_{0,1}$ and ${\hat{\mathsf{X}}}^{B}_{0,1}$ act nontrivially only on the Hilbert space $\mathcal{H}^{B_1}$.
Since the operators ${\hat{\mathsf{Z}}}_{0,1}^{B}$ and ${\hat{\mathsf{X}}}_{0,1}^{B}$ anticommute by construction, the operators ${\mathsf{Z}}_{0,1}^{A}$ and ${\mathsf{X}}_{0,1}^{A}$ anticommute on the support of $\textrm{Tr}_B[\proj{\psi}]$:
\begin{align} \nonumber \{{{\mathsf{X}}}_{0,1}^{A},{{\mathsf{Z}}}_{0,1}^{A}\}\ket{\psi} &= {{\mathsf{X}}}_{0,1}^{A}{{\mathsf{Z}}}_{0,1}^{A}\ket{\psi} + {{\mathsf{Z}}}_{0,1}^{A}{{\mathsf{X}}}_{0,1}^{A}\ket{\psi}\\ \nonumber &= {{\mathsf{X}}}_{0,1}^{A}{\hat{\mathsf{Z}}}_{0,1}^{B}\ket{\psi} + {{\mathsf{Z}}}_{0,1}^{A}{\hat{\mathsf{X}}}_{0,1}^{B}\ket{\psi}\\ \nonumber &= {\hat{\mathsf{Z}}}_{0,1}^{B}{{\mathsf{X}}}_{0,1}^{A}\ket{\psi} + {\hat{\mathsf{X}}}_{0,1}^{B}{{\mathsf{Z}}}_{0,1}^{A}\ket{\psi}\\ \nonumber &= {\hat{\mathsf{Z}}}_{0,1}^{B}{\hat{\mathsf{X}}}_{0,1}^{B}\ket{\psi} + {\hat{\mathsf{X}}}_{0,1}^{B}{\hat{\mathsf{Z}}}_{0,1}^{B}\ket{\psi}\\ \label{xz01} &= 0. \end{align}
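For illustration only, the SOS identity~\eqref{sos} can be checked numerically in the ideal two-qubit-block realization, where all block identities reduce to full identities and $\mathsf{D},\mathsf{E}$ are the rotated CHSH-type measurements (a sketch under these assumptions):

```python
import numpy as np

# Ideal qubit-block operators: Alice's Z, X are Pauli matrices,
# Bob's D, E are the rotated measurements (Z +- X)/sqrt(2).
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2)
D = (Z + X) / np.sqrt(2)
E = (Z - X) / np.sqrt(2)

kron = np.kron
bell = kron(Z, D + E) + kron(X, D - E)                   # Bell operator
# Shifted operator: here the four block identities are all I (x) I
lhs = np.sqrt(2) * (np.sqrt(2) * 2 * kron(I, I) - bell)
sq1 = kron(Z, I) - kron(I, (D + E) / np.sqrt(2))
sq2 = kron(X, I) - kron(I, (D - E) / np.sqrt(2))
rhs = sq1 @ sq1 + sq2 @ sq2                              # sum of squares
assert np.allclose(lhs, rhs)
```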
The same procedure can be repeated for the correlations among $\mathsf{Z}^A_{1,2}$, $\mathsf{X}^A_{1,2}$, $\mathsf{D}^B_{1,2}$ and $\mathsf{E}^B_{1,2}$ to obtain:
\begin{align}\label{zz12}
\mathsf{Z}^{A}_{1,2}\ket{\psi} = {\hat{\mathsf{Z}}}^{B}_{1,2}\ket{\psi}, &\qquad \mathsf{X}^{A}_{1,2}\ket{\psi} = {\hat{\mathsf{X}}}^{B}_{1,2}\ket{\psi}\\ \label{xz12}
\{{{\mathsf{X}}}_{1,2}^{A},{{\mathsf{Z}}}_{1,2}^{A}\}\ket{\psi} = 0, &\qquad \{{\hat{\mathsf{X}}}_{1,2}^{B},{\hat{\mathsf{Z}}}_{1,2}^{B}\}\ket{\psi} = 0 \end{align}
Again, as is the case for ${\hat{\mathsf{Z}}}^{B}_{0,1}$ and ${\hat{\mathsf{X}}}^{B}_{0,1}$, the operators ${\hat{\mathsf{Z}}}^{B}_{1,2}$ and ${\hat{\mathsf{X}}}^{B}_{1,2}$ act nontrivially only on the Hilbert space $\mathcal{H}^{B_1}$. As noted earlier, the hatted operators do not necessarily have eigenvalues $-1$ and $1$; with the aim of using unitary operators to build the self-testing isometry we defined the regularized operators $\mathsf{Z}_{0,1}^B$, $\mathsf{X}_{0,1}^B$, $\mathsf{Z}_{1,2}^B$ and $\mathsf{X}_{1,2}^B$, which are unitary by construction and act on $\ket{\psi}$ in the same way as ${\hat{\mathsf{Z}}}_{0,1}^B$, ${\hat{\mathsf{X}}}_{0,1}^B$, ${\hat{\mathsf{Z}}}_{1,2}^B$ and ${\hat{\mathsf{X}}}_{1,2}^B$, respectively. The proof of this is described in detail in Appendix A2 of~\cite{SupicBowles}. Let us now introduce the projective operators
\begin{align}
\mathsf{P}_{0}^B &= \frac{\mathds{1}_{0,1}^B + \mathsf{Z}_{0,1}^B}{2},\\
\mathsf{P}_{1}^B &= \frac{\mathds{1}_{0,1}^B - \mathsf{Z}_{0,1}^B}{2} = \frac{\mathds{1}_{1,2}^B + \mathsf{Z}_{1,2}^B}{2},\\
\mathsf{P}_{2}^B &= \frac{\mathds{1}_{1,2}^B - \mathsf{Z}_{1,2}^B}{2} \end{align}
which together with $\omega = \exp{\frac{i2\pi}{3}}$ form the operators used to build the self-testing isometry
\begin{align}\label{za}
\mathsf{Z}^A &= \sum_{j=0}^2\omega^j\mathsf{M}^A_{j|0},\\ \label{zb}
\mathsf{Z}^B &= \sum_{j=0}^2\omega^j\mathsf{P}^B_{j},\\ \label{x1}
{\mathsf{X}^{(1)}}^{A} &= \mathsf{X}_{0,1}^{A} + \mathds{1}^{A} - \mathds{1}_{0,1,X}^{A},\qquad {\mathsf{X}^{(1)}}^{B} = \mathsf{X}_{0,1}^{B} + \mathds{1}^{B} - \mathds{1}_{0,1}^{B},\\
{\mathsf{X}^{(2)}}^{A} &= {\mathsf{X}^{(1)}}^{A}\left(\mathds{1}^{A} - \mathds{1}_{1,2,X}^{A}+\mathsf{X}_{1,2}^{A}\right), \qquad {\mathsf{X}^{(2)}}^{B} = {\mathsf{X}^{(1)}}^{B}\left(\mathds{1}^{B} - \mathds{1}_{1,2}^{B}+\mathsf{X}_{1,2}^{B}\right). \label{x2} \end{align}
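As a consistency check, the algebraic properties claimed for $\mathsf{P}_j^B$, $\mathsf{Z}^B$ and ${\mathsf{X}^{(1)}}^B$ can be verified numerically in the ideal qutrit realization (an illustration under this assumption only):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# Ideal qutrit block operators on {|0>,|1>} and {|1>,|2>}
Z01 = np.diag([1.0, -1.0, 0.0]); I01 = np.diag([1.0, 1.0, 0.0])
Z12 = np.diag([0.0, 1.0, -1.0]); I12 = np.diag([0.0, 1.0, 1.0])

P = [(I01 + Z01) / 2, (I01 - Z01) / 2, (I12 - Z12) / 2]
assert np.allclose(P[1], (I12 + Z12) / 2)   # the two expressions for P_1 agree
assert np.allclose(sum(P), np.eye(3))       # complete set of orthogonal projectors

Z = sum(w**j * P[j] for j in range(3))      # ideal version of eq. (zb)
assert np.allclose(Z @ Z.conj().T, np.eye(3))   # Z is unitary

# Spectral inversion recovers the projectors: P_j = (1/3) sum_k w^{-jk} Z^k
for j in range(3):
    Pj = sum(w**(-j * k) * np.linalg.matrix_power(Z, k) for k in range(3)) / 3
    assert np.allclose(Pj, P[j])

X1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1.0]])  # ideal X^{(1)}: X'_{0,1} + |2><2|
assert np.allclose(X1 @ X1.conj().T, np.eye(3))     # unitary
```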
The operators $\mathsf{Z}^{A/B}$, ${\mathsf{X}^{(1)}}^{A/B}$ and ${\mathsf{X}^{(2)}}^{A/B}$ are unitary. Indeed, the operators $\mathsf{Z}^{A/B}$ are constructed as sums of mutually orthogonal projectors weighted by the unit-modulus phases $\omega^j$. For the operators ${\mathsf{X}^{(1)}}^{A}$ and ${\mathsf{X}^{(2)}}^{A}$ unitarity can be proven in the following way
\begin{align}
{\mathsf{X}^{(1)}}^{A}{{\mathsf{X}^{(1)}}^\dagger}^{A} &=\left(\mathsf{X}_{0,1}^{A} + \mathds{1}^{A} - \mathds{1}_{0,1,X}^{A}\right)\left(\mathsf{X}_{0,1}^{A} + \mathds{1}^{A} - \mathds{1}_{0,1,X}^{A}\right)\\
&= {\mathsf{X}_{0,1}^{A}}^2 + \mathds{1}^{A} - \mathds{1}_{0,1,X}^{A} = \mathds{1}\\
{\mathsf{X}^{(2)}}^{A}{{\mathsf{X}^{(2)}}^\dagger}^{A} &= {\mathsf{X}^{(1)}}^{A}\left(\mathds{1}^{A} - \mathds{1}_{1,2,X}^{A}+\mathsf{X}_{1,2}^{A}\right)\left(\mathds{1}^{A} - \mathds{1}_{1,2,X}^{A}+{\mathsf{X}_{1,2}^\dagger}^{A}\right){{\mathsf{X}^{(1)}}^\dagger}^{A}\\
&= {\mathsf{X}^{(1)}}^{A}{{\mathsf{X}^{(1)}}^\dagger}^{A}\\
&= \mathds{1}. \end{align}
The same unitarity proof holds for ${\mathsf{X}^{(1)}}^{B}$ and ${\mathsf{X}^{(2)}}^{B}$. Note that ${\mathsf{X}^{(1)}}^{A}$ can alternatively be written as
\begin{align}\label{x1Aalt}
{\mathsf{X}^{(1)}}^{A} = {\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}, \end{align}
while for ${\mathsf{X}^{(1)}}^{B}$ holds:
\begin{equation}\label{x1Balt}
{\mathsf{X}^{(1)}}^{B}\ket{\psi} = {\mathsf{X}^{B}_{0,1}}\ket{\psi} + \mathsf{M}^{B}_{2|0}\ket{\psi}. \end{equation}
Similarly we obtain the following relations:
\begin{align}\label{X12alt}
\mathds{1}^{A} - \mathds{1}_{1,2,X}^{A}+\mathsf{X}_{1,2}^{A} &= \mathsf{M}_{0|2}^A + \mathsf{X}_{1,2}^{A},\\
\left(\mathds{1}^{B} - \mathds{1}_{1,2}^{B}+\mathsf{X}_{1,2}^{B}\right)\ket{\psi} &= \left(\mathsf{M}_{0|2}^B + \mathsf{X}_{1,2}^{B}\right)\ket{\psi}. \end{align}
Eqs.~\eqref{zz} and \eqref{zz12} imply
\begin{align}\label{parallelz}
\mathsf{Z}^A\ket{\psi} &= \mathsf{Z}^B\ket{\psi},\\
{\mathsf{X}^{(1)}}^{A}\ket{\psi} &= {\mathsf{X}^{(1)}}^{B}\ket{\psi}\\
{\mathsf{X}^{(2)}}^{A}\ket{\psi} &= {{\mathsf{X}^{(2)}}^\dagger}^{B}\ket{\psi}. \end{align}
Eq.~\eqref{parallelz} implies
\begin{equation}\label{deltaz}
\mathsf{M}_{j|0}^A\otimes\mathsf{P}_k^B\ket{\psi} = \delta_{j,k}\mathsf{M}_{j|0}^A\ket{\psi}. \end{equation}
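Eq.~\eqref{deltaz} can be checked directly in the ideal realization, with $\mathsf{M}_{j|0}^A$ and $\mathsf{P}_k^B$ the computational-basis projectors and $\ket{\psi}$ the maximally entangled pair of qutrits (an illustration only):

```python
import numpy as np

d = 3
kron = np.kron
psi = sum(kron(np.eye(d)[:, j], np.eye(d)[:, j]) for j in range(d)) / np.sqrt(d)

def proj(j):
    P = np.zeros((d, d)); P[j, j] = 1
    return P

# On the maximally entangled state: M_{j|0} (x) P_k |psi> = delta_{jk} M_{j|0} |psi>
for j in range(d):
    for k in range(d):
        lhs = kron(proj(j), proj(k)) @ psi
        rhs = (1 if j == k else 0) * kron(proj(j), np.eye(d)) @ psi
        assert np.allclose(lhs, rhs)
```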
\begin{figure}
\caption{Isometry for self-testing maximally entangled pair of qutrits.
}
\label{fig:iso}
\end{figure}
The self-testing isometry $\Phi = \Phi_A\otimes\Phi_{B_1}$ is shown in Fig.~\ref{fig:iso}. Note that the operators $\mathsf{Z}^B$ and ${\mathsf{X}^{(1/2)}}^B$ are built from measurements acting nontrivially only on the Hilbert space $\mathcal{H}^{B_1}$, which means that the same holds for the isometry $\Phi_{B_1}$. The operator $F$ is the Fourier transform, acting in the following way: $F\ket{j} = \frac{1}{\sqrt{3}}\sum_{k=0}^2\omega^{jk}\ket{k}$. The controlled gates $C\mathsf{S}^{A/B}$ and $C\mathsf{R}^{A/B}$ are defined as:
\begin{align}
C\mathsf{S}^{A/B}\ket{j}^{A'/B_1'}\ket{\psi} &= \ket{j}^{A'/B_1'}{\mathsf{Z}^j}^{A/B}\ket{\psi},\\
C\mathsf{R}^{A/B}\ket{j}^{A'/B_1'}\ket{\psi} &= \ket{j}^{A'/B_1'}{\mathsf{X}^{(j)}}^{A/B}\ket{\psi}, \end{align}
where $\mathsf{Z}^j$ is simply the $j$-th power of $\mathsf{Z}$, $\mathsf{X}^{(0)} = \mathds{1}$, and $\mathsf{X}^{(j)}$ are given in eqs.~\eqref{x1},\eqref{x2}. The output state of the isometry is:
\begin{align}
\Phi(\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) = \sum_{j,k=0}^2{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}^{ABC}\otimes\ket{jk}^{A'B_1'}, \end{align}
where we used the identities
\begin{align}
\mathsf{M}_{0|0}^A = \frac{1}{3}\left({\mathsf{Z}^0}^A + {\mathsf{Z}^1}^A+ {\mathsf{Z}^2}^A\right), \quad &\mathsf{M}_{1|0}^A = \frac{1}{3}\left({\mathsf{Z}^0}^A + \omega^*{\mathsf{Z}^1}^A+ \omega{\mathsf{Z}^2}^A\right),\quad \mathsf{M}_{2|0}^A = \frac{1}{3}\left({\mathsf{Z}^0}^A + \omega{\mathsf{Z}^1}^A+ \omega^*{\mathsf{Z}^2}^A\right),\\
\mathsf{P}_{0}^B = \frac{1}{3}\left({\mathsf{Z}^0}^B + {\mathsf{Z}^1}^B+ {\mathsf{Z}^2}^B\right), \quad &\mathsf{P}_{1}^B = \frac{1}{3}\left({\mathsf{Z}^0}^B + \omega^*{\mathsf{Z}^1}^B+ \omega{\mathsf{Z}^2}^B\right),\quad \mathsf{P}_{2}^B = \frac{1}{3}\left({\mathsf{Z}^0}^B + \omega{\mathsf{Z}^1}^B+ \omega^*{\mathsf{Z}^2}^B\right), \end{align}
as per the definitions of $\mathsf{Z}^{A/B}$ given in eqs.~\eqref{za},\eqref{zb}. Eq.~\eqref{deltaz} implies that the only surviving terms of the sum are those containing $\ket{jk}^{A'B_1'}$ with $j=k$. Hence, the full output state $\Phi(\ket{\psi}^{AB}\otimes\ket{00}^{A'B_1'})$ explicitly reads
\begin{align}
\Phi(\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) = \mathsf{M}_{0|0}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'} + {\mathsf{X}^{(1)}}^A\mathsf{M}_{1|0}^A\otimes{\mathsf{X}^{(1)}}^B\ket{\psi}^{ABC}\otimes\ket{11}^{A'B_1'} + {\mathsf{X}^{(2)}}^A\mathsf{M}_{2|0}^A\otimes{\mathsf{X}^{(2)}}^B\ket{\psi}^{ABC}\otimes\ket{22}^{A'B_1'} \end{align}
We simplify the second term on the r.h.s. of the last equation:
\begin{align}
{\mathsf{X}^{(1)}}^A\mathsf{M}_{1|0}^A\otimes{\mathsf{X}^{(1)}}^B\ket{\psi} &= \left({\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}\right)\frac{\mathds{1}_{0,1,Z}-\mathsf{Z}_{0,1}^A}{2}\otimes\left({\mathsf{X}^{B}_{0,1}} + \mathsf{M}^{B}_{2|0}\right)\ket{\psi}\\&=
\frac{\mathds{1}_{0,1,Z}+\mathsf{Z}_{0,1}^A}{2}\left({\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}\right)\otimes\left({\mathsf{X}^{B}_{0,1}} + \mathsf{M}^{B}_{2|0}\right)\ket{\psi}\\
&= \mathsf{M}_{0|0}^A\ket{\psi} \end{align}
In the first equality we used eqs.~\eqref{x1Aalt} and \eqref{x1Balt}. To get the second line we used the anticommutation between $\mathsf{X}_{0,1}^A$ and $\mathsf{Z}_{0,1}^A$. The last line was obtained from~\eqref{zz} and the fact that ${\mathsf{X}_{0,1}^A}^2\ket{\psi} = \mathds{1}_{0,1,X}^A\ket{\psi} = \ket{\psi_{0,1}}$, together with the relations $\mathsf{M}_{0|0}^A = \frac{\mathds{1}_{0,1,Z}+\mathsf{Z}_{0,1}^A}{2}$ and $\ket{\psi_{0,1}} = (\mathsf{M}_{0|0}^A+\mathsf{M}_{1|0}^A)\ket{\psi}$.
Hence, we get simplified output state
\begin{align}
\Phi(\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) = \mathsf{M}_{0|0}^A\ket{\psi}^{ABC}\otimes\left(\ket{00}^{A'B_1'} + \ket{11}^{A'B_1'}\right) + {\mathsf{X}^{(2)}}^A\mathsf{M}_{2|0}^A\otimes{\mathsf{X}^{(2)}}^B\ket{\psi}^{ABC}\otimes\ket{22}^{A'B_1'}. \end{align}
Now we take care of the last term:
\begin{align}
{\mathsf{X}^{(2)}}^A\mathsf{M}_{2|0}^A\otimes{\mathsf{X}^{(2)}}^B\ket{\psi} &= \left({\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}\right)\left(\mathsf{M}_{0|2}^{A}+\mathsf{X}_{1,2}^{A}\right)\frac{\mathds{1}_{1,2,Z}-\mathsf{Z}^A_{1,2}}{2}\otimes\left({\mathsf{X}^{B}_{0,1}} + \mathsf{M}^{B}_{2|0}\right)\left(\mathsf{M}_{0|2}^{B}+\mathsf{X}_{1,2}^{B}\right)\ket{\psi}\\
&= \left({\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}\right)\frac{\mathds{1}_{1,2,Z}+\mathsf{Z}^A_{1,2}}{2}\left(\mathsf{M}_{0|2}^{A}+\mathsf{X}_{1,2}^{A}\right)\otimes\left({\mathsf{X}^{B}_{0,1}} + \mathsf{M}^{B}_{2|0}\right)\left(\mathsf{M}_{0|2}^{B}+\mathsf{X}_{1,2}^{B}\right)\ket{\psi}\\
&= \left({\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}\right)\frac{\mathds{1}_{0,1,Z}-\mathsf{Z}^A_{0,1}}{2}\otimes\left({\mathsf{X}^{B}_{0,1}} + \mathsf{M}^{B}_{2|0}\right)\ket{\psi}\\
&= \frac{\mathds{1}_{0,1,Z}+\mathsf{Z}^A_{0,1}}{2}\left({\mathsf{X}^{A}_{0,1}} + \mathsf{M}^{A}_{2|0}\right)\otimes\left({\mathsf{X}^{B}_{0,1}} + \mathsf{M}^{B}_{2|0}\right)\ket{\psi}\\
&= \mathsf{M}_{0|0}^A\ket{\psi}. \end{align}
In the first line we used eqs.~\eqref{X12alt}. In the second line we used the anticommutation relation between $\mathsf{Z}_{1,2}^A$ and $\mathsf{X}_{1,2}^A$ (cf.~\eqref{xz12}). To get the third line we used the fact that $\left(\mathsf{M}_{0|2}^{A}+\mathsf{X}_{1,2}^{A}\right)\otimes \left(\mathsf{M}_{0|2}^{B}+\mathsf{X}_{1,2}^{B}\right)\ket{\psi} = \ket{\psi}$, together with the equality $\mathds{1}_{1,2,Z}+\mathsf{Z}^A_{1,2} = \mathds{1}_{0,1,Z}-\mathsf{Z}^A_{0,1}$. The last two lines follow from the anticommutation relation between $\mathsf{Z}_{0,1}^A$ and $\mathsf{X}_{0,1}^A$ (cf.~\eqref{xz01}) and the relation $\left(\mathsf{M}_{2|0}^{A}+\mathsf{X}_{0,1}^{A}\right)\otimes \left(\mathsf{M}_{2|0}^{B}+\mathsf{X}_{0,1}^{B}\right)\ket{\psi} = \ket{\psi}$. Finally, we obtain the self-testing statement for the state:
\begin{align}\label{ststate}
\Phi(\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) = \sqrt{3}\mathsf{M}_{0|0}^A\ket{\psi}^{ABC}\otimes\ket{\phi_+}^{A'B_1'} \equiv \ket{\xi}^{ABC} \otimes \ket{\phi_+}^{A'B_1'}, \end{align}
where we introduced notation $\ket{\xi}^{ABC} = \left(\sqrt{3}\mathsf{M}_{0|0}^A\ket{\psi}^{AB_1}\right)\otimes\ket{\psi}^{B_2C}$, which is a valid quantum state, because the norm of $\mathsf{M}_{0|0}^A\ket{\psi}^{AB_1}$ is equal to $1/\sqrt{3}$ (cf.~\eqref{sedma}).
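For illustration, the action of the isometry can be simulated numerically in the ideal realization, restricted to the qutrit pair $AB_1$ (the factor $\ket{\psi}^{B_2C}$ is omitted); the output is computed directly from the expansion $\sum_{j,k}{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}\otimes\ket{jk}$:

```python
import numpy as np

d = 3
kron = np.kron
psi = sum(kron(np.eye(d)[:, j], np.eye(d)[:, j]) for j in range(d)) / np.sqrt(d)

def proj(j):
    P = np.zeros((d, d)); P[j, j] = 1
    return P

swap01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1.0]])
swap12 = np.array([[1.0, 0, 0], [0, 0, 1], [0, 1, 0]])
# Ideal versions of X^{(j)}, cf. eqs. (x1)-(x2); X^{(0)} is the identity
X = {0: np.eye(d), 1: swap01, 2: swap01 @ swap12}

# Output of the isometry on |psi> (x) |00>
out = np.zeros(d**4, dtype=complex)
for j in range(d):
    for k in range(d):
        branch = kron(X[j] @ proj(j), X[k] @ proj(k)) @ psi
        out += kron(branch, kron(np.eye(d)[:, j], np.eye(d)[:, k]))

phi_plus = psi.copy()                                 # (1/sqrt(3)) sum_j |jj>
xi = np.sqrt(3) * kron(proj(0), np.eye(d)) @ psi      # = |00>
assert np.allclose(out, kron(xi, phi_plus))           # matches eq. (ststate)
```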
Now we move to the self-testing of measurements. Let us start with measurement $\mathsf{M}_{l|0}^A$, and see how the self-testing isometry maps the state $\mathsf{M}_{l|0}^A\ket{\psi}^{ABC}$:
\begin{align}\nonumber
\Phi(\mathsf{M}_{l|0}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \sum_{j,k=0}^2{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\mathsf{M}_{l|0}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}^{ABC}\otimes\ket{jk}^{A'B_1'}\\ \nonumber
&= \sum_{j,k=0}^2{\mathsf{X}^{(j)}}^A\delta_{j,l}\mathsf{M}_{l|0}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}^{ABC}\otimes\ket{jk}^{A'B_1'}\\ \nonumber
&= \ket{\xi}^{ABC}\otimes\frac{1}{\sqrt{3}}\ket{ll}^{A'B_1'}\\ \label{stmeas0}
&= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{l|0}^{A'}\ket{\phi_+}^{A'B_1'}, \end{align}
which is exactly the self-testing statement for $\mathsf{M}'_{l|0}$. To get the second line we used the orthogonality of the projectors $\{\mathsf{M}_{j|0}\}_j$, and the following two lines just reproduce the proof of self-testing of the state. Exactly the same proof holds for the measurement operators $\mathsf{M}_{2|1}^A$ and $\mathsf{M}_{0|2}^{A}$, as they act on $\ket{\psi}$ in the same way as $\mathsf{M}_{2|0}^A$ and $\mathsf{M}_{0|0}^A$, respectively. Concerning the self-testing of $\mathsf{M}^A_{0|1}$ and $\mathsf{M}^A_{1|1}$: given that $\mathsf{X}_{0,1}^A = \mathsf{M}^A_{0|1} - \mathsf{M}^A_{1|1}$ and $\mathds{1}_{0,1,X}^A = \mathsf{M}^A_{0|1} + \mathsf{M}^A_{1|1}$, self-testing $\mathsf{X}_{0,1}^A$ and $\mathds{1}_{0,1,X}^A$ is equivalent to self-testing $\mathsf{M}^A_{0|1}$ and $\mathsf{M}^A_{1|1}$. Given that $\mathds{1}_{0,1,X}^A\ket{\psi} = \mathds{1}_{0,1,Z}^A\ket{\psi}$ and $\mathds{1}_{0,1,Z} = \mathsf{M}_{0|0}^A + \mathsf{M}_{1|0}^A$, the self-testing statement for $\{\mathsf{M}_{j|0}^A\}_j$ allows us to conclude
\begin{align}\label{idd01st}
\Phi(\mathds{1}_{0,1,X}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes (\proj{0}^{A'} + \proj{1}^{A'})\ket{\phi_+}^{A'B_1'}. \end{align}
We now turn to the self-testing of $\mathsf{X}_{0,1}^A$:
\begin{align}
\Phi(\mathsf{X}_{0,1}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \sum_{j,k=0}^2{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}^{ABC}\otimes\ket{jk}^{A'B_1'}\\
&= \sum_{j,k=0}^1{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}^{ABC}\otimes\ket{jk}^{A'B_1'}. \end{align}
Note that in the second line the sums run over a smaller range. Let us first prove that all terms of the sum corresponding to $k=2$ vanish:
\begin{align}
\sum_{j=0}^2{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(2)}}^B\mathsf{P}_2^B\ket{\psi}^{ABC}\otimes\ket{j2}^{A'B_1'} &= \sum_{j=0}^2{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\mathsf{X}_{0,1}^A\mathsf{M}_{2|0}^A\otimes{\mathsf{X}^{(2)}}^B\ket{\psi}^{ABC}\otimes\ket{j2}^{A'B_1'}\\
&=\sum_{j=0}^2{\mathsf{X}^{(j)}}^A\mathsf{M}_{j|0}^A\left(\mathsf{M}_{0|1}^A-\mathsf{M}_{1|1}^A\right)\mathsf{M}_{2|1}^A\otimes{\mathsf{X}^{(2)}}^B\ket{\psi}^{ABC}\otimes\ket{j2}^{A'B_1'}\\ &= 0, \end{align}
where in the first line we used eq.~\eqref{deltaz}, and in the second eq.~\eqref{skoro} together with the definition of $\mathsf{X}_{0,1}^A$. Finally, to get the last line we used the orthogonality of projectors corresponding to the same measurement. Now we consider the terms of the sum corresponding to $j=2$:
\begin{align}
\sum_{k=0}^1{\mathsf{X}^{(2)}}^A\mathsf{M}_{2|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\ket{\psi}^{ABC}\otimes\ket{2k}^{A'B_1'}
&= \sum_{k=0}^1{\mathsf{X}^{(2)}}^A\mathsf{M}_{2|0}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\mathsf{X}_{0,1}^B\ket{\psi}^{ABC}\otimes\ket{2k}^{A'B_1'}\\
&= \sum_{k=0}^1{\mathsf{X}^{(2)}}^A\otimes{\mathsf{X}^{(k)}}^B\mathsf{P}_k^B\mathsf{X}_{0,1}^B\mathsf{P}_2^B\ket{\psi}^{ABC}\otimes\ket{2k}^{A'B_1'}\\
&= 0. \end{align}
In the first line we used eq.~\eqref{zz} and the fact that hatted and nonhatted operators on Bob's side act on $\ket{\psi}$ in the same way. In the second line we used eq.~\eqref{parallelz}, and to get the last line we used the fact that the ranges of $\mathsf{P}_2^B$ and $\mathsf{X}_{0,1}^B$ are orthogonal.
Let us write the whole remaining state
\begin{align}\nonumber
\Phi(\mathsf{X}_{0,1}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \mathsf{M}_{0|0}^A\mathsf{X}_{0,1}^A\otimes\mathsf{P}_0^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'} + \mathsf{M}_{0|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(1)}}^B\mathsf{P}_1^B\ket{\psi}^{ABC}\otimes\ket{01}^{A'B_1'} + \\ \nonumber &\qquad + {\mathsf{X}^{(1)}}^A\mathsf{M}_{1|0}^A\mathsf{X}_{0,1}^A\otimes\mathsf{P}_0^B\ket{\psi}^{ABC}\otimes\ket{10}^{A'B_1'} + {\mathsf{X}^{(1)}}^A\mathsf{M}_{1|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(1)}}^B\mathsf{P}_1^B\ket{\psi}^{ABC}\otimes\ket{11}^{A'B_1'}\\ \nonumber &= \mathsf{X}_{0,1}^A\mathsf{M}_{1|0}^A\otimes\mathsf{P}_0^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'} + \mathsf{X}_{0,1}^A\mathsf{M}_{1|0}^A\otimes{\mathsf{X}^{(1)}}^B\mathsf{P}_1^B\ket{\psi}^{ABC}\otimes\ket{01}^{A'B_1'} + \\ \nonumber &\qquad + {\mathsf{X}^{(1)}}^A\mathsf{X}_{0,1}^A\mathsf{M}_{0|0}^A\otimes\mathsf{P}_0^B\ket{\psi}^{ABC}\otimes\ket{10}^{A'B_1'} + {\mathsf{X}^{(1)}}^A\mathsf{X}_{0,1}^A\mathsf{M}_{0|0}^A\otimes{\mathsf{X}^{(1)}}^B\mathsf{P}_1^B\ket{\psi}^{ABC}\otimes\ket{11}^{A'B_1'}\\ \nonumber
&= \mathsf{X}_{0,1}^A\mathsf{M}_{1|0}^A\otimes{\mathsf{X}^{(1)}}^B\mathsf{P}_1^B\ket{\psi}^{ABC}\otimes\ket{01}^{A'B_1'} + {\mathsf{X}^{(1)}}^A\mathsf{X}_{0,1}^A\mathsf{M}_{0|0}^A\otimes\mathsf{P}_0^B\ket{\psi}^{ABC}\otimes\ket{10}^{A'B_1'}\\ \nonumber
&=\mathsf{M}_{0|0}^A\mathsf{X}_{0,1}^A\otimes{\mathsf{X}^{(1)}}^B\ket{\psi}^{ABC}\otimes\ket{01}^{A'B_1'} + \mathsf{M}_{0|0}^A
\mathds{1}_{0,1,X}^A\otimes\mathsf{P}_0^B\ket{\psi}^{ABC}\otimes\ket{10}^{A'B_1'}\\ \nonumber
&= \mathsf{M}_{0|0}^A\ket{\psi}^{ABC}\otimes (\ket{01}^{A'B_1'} + \ket{10}^{A'B_1'})\\ \label{stX01}
&= \ket{\xi}^{ABC}\otimes \left(\ketbra{0}{1}^{A'} + \ketbra{1}{0}^{A'}\right)\ket{\phi_+}^{A'B_1'}. \end{align}
The second equality (the third and fourth lines) is obtained by using the anticommutation relation between $\mathsf{X}_{0,1}^A$ and $\mathsf{Z}_{0,1}^A$ (cf.~\eqref{xz01}). In the fifth line we used the facts that $\mathsf{M}_{1|0}^A\otimes\mathsf{P}_0^B\ket{\psi} = 0$ and $\mathsf{M}_{0|0}^A\otimes\mathsf{P}_1^B\ket{\psi} = 0$ (cf.~\eqref{deltaz}). To get the sixth and seventh lines we again used the anticommutation between $\mathsf{Z}_{0,1}^A$ and $\mathsf{X}_{0,1}^A$ and the relation $\mathsf{X}_{0,1}^A{\mathsf{X}^{(1)}}^A\ket{\psi} = \mathds{1}_{0,1,X}\ket{\psi}$. Given the definitions of $\mathsf{M}'_{0|1}$ and $\mathsf{M}'_{1|1}$, eqs.~\eqref{idd01st} and~\eqref{stX01} imply
\begin{align}\label{stmeas1}
\Phi(\mathsf{M}_{j|1}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{j|1}^{A'}\ket{\phi_+}^{A'B_1'}, \end{align}
for $j = 0,1,2$. A completely analogous proof holds for self-testing Alice's third measurement:
\begin{align}\label{stmeas2}
\Phi(\mathsf{M}_{j|2}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{j|2}^{A'}\ket{\phi_+}^{A'B_1'}, \end{align}
for $j = 0,1,2$.
Eqs.~\eqref{stmeas0},\eqref{stmeas1} and~\eqref{stmeas2} together imply:
\begin{align}\label{stmeasAlice}
\Phi(\mathsf{M}_{a|x}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{a|x}^{A'}\ket{\phi_+}^{A'B_1'}. \end{align}
The set of eqs.~\eqref{peta}-\eqref{osma} shows that the operators $\mathsf{M}_{2|0}^B$, $\mathsf{M}_{2|1}^B$, $\mathsf{M}_{0|2}^B$ and $\mathsf{M}_{0|3}^B$ act on $\ket{\psi}$ in the same way as the corresponding Alice's measurements self-tested in~\eqref{stmeasAlice}, which implies:
\begin{align}\label{jedanmeasB}
\Phi(\mathsf{M}_{2|0}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{2|0}^{B_1'}\ket{\phi_+}^{A'B_1'}, \\
\Phi(\mathsf{M}_{2|1}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{2|1}^{B_1'}\ket{\phi_+}^{A'B_1'}, \\
\Phi(\mathsf{M}_{0|2}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{0|2}^{B_1'}\ket{\phi_+}^{A'B_1'}, \\ \label{cetirimeasB}
\Phi(\mathsf{M}_{0|3}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{0|3}^{B_1'}\ket{\phi_+}^{A'B_1'}. \end{align}
Let us now define reference operators:
\begin{align} \mathsf{X}'_{0,1} = \begin{bmatrix}
0 & 1 & 0\\ 1 & 0 & 0\\ 0& 0 & 0
\end{bmatrix}, \qquad \mathsf{Z}'_{0,1} = \begin{bmatrix}
1 & 0 & 0\\ 0 & -1 & 0\\ 0& 0 & 0
\end{bmatrix}, \qquad \mathsf{X}'_{1,2} = \begin{bmatrix}
0 & 0 & 0\\ 0 & 0 & 1\\ 0& 1 & 0
\end{bmatrix}, \qquad \mathsf{Z}'_{1,2} = \begin{bmatrix}
0 & 0 & 0\\ 0 & 1 & 0\\ 0& 0 & -1
\end{bmatrix}. \end{align}
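These reference operators indeed behave as Pauli matrices on their respective qubit blocks, as a quick numerical check confirms (an illustration only):

```python
import numpy as np

# Reference operators acting on the qubit blocks {|0>,|1>} and {|1>,|2>}
X01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0.0]])
Z01 = np.diag([1.0, -1.0, 0.0])
X12 = np.array([[0.0, 0, 0], [0, 0, 1], [0, 1, 0]])
Z12 = np.diag([0.0, 1.0, -1.0])

# Each pair anticommutes, as sigma_x and sigma_z do:
assert np.allclose(X01 @ Z01 + Z01 @ X01, np.zeros((3, 3)))
assert np.allclose(X12 @ Z12 + Z12 @ X12, np.zeros((3, 3)))
# Their squares are the block identities:
assert np.allclose(X01 @ X01, np.diag([1.0, 1.0, 0.0]))
assert np.allclose(Z12 @ Z12, np.diag([0.0, 1.0, 1.0]))
```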
Eq.~\eqref{stmeasAlice} implies:
\begin{align}\label{jedanOp}
\Phi(\mathsf{X}_{0,1}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{X}'}_{0,1}^{A'}\ket{\phi_+}^{A'B_1'},\\
\Phi(\mathsf{Z}_{0,1}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{Z}'}_{0,1}^{A'}\ket{\phi_+}^{A'B_1'},\\
\Phi(\mathsf{X}_{1,2}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{X}'}_{1,2}^{A'}\ket{\phi_+}^{A'B_1'},\\ \label{cetiriOp}
\Phi(\mathsf{Z}_{1,2}^A\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{Z}'}_{1,2}^{A'}\ket{\phi_+}^{A'B_1'}. \end{align}
Now eqs.~\eqref{zz} and \eqref{zz12} together with the set of eqs.~\eqref{jedanOp}-\eqref{cetiriOp} lead to:
\begin{align}\label{jedanOpB}
\Phi(\mathsf{X}_{0,1}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{X}'}_{0,1}^{B_1'}\ket{\phi_+}^{A'B_1'},\\
\Phi(\mathsf{Z}_{0,1}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{Z}'}_{0,1}^{B_1'}\ket{\phi_+}^{A'B_1'},\\
\Phi(\mathsf{X}_{1,2}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{X}'}_{1,2}^{B_1'}\ket{\phi_+}^{A'B_1'},\\ \label{cetiriOpB}
\Phi(\mathsf{Z}_{1,2}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{Z}'}_{1,2}^{B_1'}\ket{\phi_+}^{A'B_1'}, \end{align}
and equivalently
\begin{align}\label{jedanOpD}
\Phi(\mathsf{D}_{0,1}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes\frac{{\mathsf{X}'}_{0,1}^{B_1'}+{\mathsf{Z}'}_{0,1}^{B_1'}}{\sqrt{2}}\ket{\phi_+}^{A'B_1'},\\ \Phi(\mathsf{E}_{0,1}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes\frac{{\mathsf{Z}'}_{0,1}^{B_1'}-{\mathsf{X}'}_{0,1}^{B_1'}}{\sqrt{2}}\ket{\phi_+}^{A'B_1'},\\
\Phi(\mathsf{D}_{1,2}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes\frac{{\mathsf{X}'}_{1,2}^{B_1'}+{\mathsf{Z}'}_{1,2}^{B_1'}}{\sqrt{2}}\ket{\phi_+}^{A'B_1'},\\ \label{cetiriOpE}
\Phi(\mathsf{E}_{1,2}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes\frac{{\mathsf{Z}'}_{1,2}^{B_1'}-{\mathsf{X}'}_{1,2}^{B_1'}}{\sqrt{2}}\ket{\phi_+}^{A'B_1'}. \end{align}
The last set of equations together with definitions of $\mathsf{D}_{j,k}$ and $\mathsf{E}_{j,k}$ and eqs. \eqref{jedanmeasB}-\eqref{cetirimeasB} imply:
\begin{align}\label{stmeasBob}
\Phi(\mathsf{M}_{b_1|y}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{b_1|y}^{B_1'}\ket{\phi_+}^{A'B_1'}. \end{align}
Eqs.~\eqref{stmeasAlice} and~\eqref{stmeasBob} together give
\begin{align}\label{stmeasAB}
\Phi(\mathsf{M}_{a|x}^A\otimes\mathsf{M}_{b_1|y}^B\ket{\psi}^{ABC}\otimes\ket{00}^{A'B_1'}) &= \ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{a|x}^{A'}\otimes{\mathsf{M}'}_{b_1|y}^{B_1'}\ket{\phi_+}^{A'B_1'}, \end{align}
where
\begin{align}
\ket{\xi}^{ABC} &= \left(\sqrt{3}\mathsf{M}_{0|0}^A\ket{\psi}^{AB_1}\right)\otimes\ket{\psi}^{B_2C}\\
&\equiv \ket{\xi_1}^{AB_1}\otimes\ket{\psi}^{B_2C}. \end{align}
\section{Proof of Eq. (6)}\label{suppmat2}
Eq.~\eqref{ststate} together with~\eqref{stmeasBob} leads to
\begin{equation}
\Phi_{B_1}^\dagger\left(\mathsf{M}^B_{b_1|y}\otimes\mathds{1}^{B_1'}\right) = \mathds{1}^{B}\otimes{\mathsf{M}'}_{b_1|y}^{B'_1}. \end{equation}
Since the $\mathsf{M}'_{b_1|y}$ are projective, it must hold that $\Phi_{B_1}^\dagger\left(\mathsf{M}^B_{b_1,b_2|y}\otimes\mathds{1}^{B_1'}\right) = \mathsf{K}_{b_1,b_2|y}^{B}\otimes{\mathsf{M}'}_{b_1|y}^{B'_1}$, where $\mathsf{K}_{b_1,b_2|y}$ is positive semidefinite and $\sum_{b_2}\mathsf{K}_{b_1,b_2|y} = \mathds{1}$. With this insight, eqs.~\eqref{stmeasAlice} and \eqref{stmeasBob}, together with the equivalence between the reference correlations given in eq.~\eqref{ReferenceCorrelations} and the physical correlations given in~\eqref{PhysicalCorrelations}, imply for every collection of inputs $x,y,z$ and outputs $a,b_1,b_2,c$:
\begin{align}
\textrm{Tr}\left[\left({\mathsf{M}'}_{a|x}^{A'}\otimes {\mathsf{M}'}_{b_1|y}^{B'_1}\right){\phi}_+^{A'B'_1}\right]\textrm{Tr}\left[\left({\mathsf{M}'}_{b_2|y}^{B'_2}\otimes{\mathsf{M}'}_{c|z}^{C'}\right){\phi}_+^{B'_2C'}\right] = \bra{\psi}^{AB_1}\otimes\bra{\psi}^{B_2C}{\mathsf{M}}_{a|x}^{A}\otimes {\mathsf{M}}_{b_1,b_2|y}^{B_1B_2}\otimes\mathsf{M}_{c|z}^C\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C},\\
= \bra{\xi}^{ABC}\mathds{1}^{A}\otimes\mathsf{K}_{b_1,b_2|y}^{B}\otimes \mathsf{M}_{c|z}^C\ket{\xi}^{ABC} \bra{\phi_+}^{A'B_1'}{\mathsf{M}'}_{a|x}^{A'}\otimes {\mathsf{M}'}_{b_1|y}^{B'_1}\ket{\phi_+}^{A'B_1'}. \end{align}
By cancelling the common factor on the two sides of this equality we obtain:
\begin{equation}
\textrm{Tr}\left[\left({\mathsf{M}'}_{b_2|y}^{B'_2}\otimes{\mathsf{M}'}^{C'}_{c|z}\right){\phi}_+^{B'_2C'}\right] = \bra{\xi}^{ABC}\mathds{1}^{A}\otimes\mathsf{K}_{b_1,b_2|y}^{B}\otimes \mathsf{M}_{c|z}^C\ket{\xi}^{ABC}. \end{equation}
Since for every fixed $b_1$ the set of operators $\{\mathsf{K}_{b_1,b_2|y}\}_{b_2}$ represents a valid measurement, it defines, together with Charlie's measurements and the state $\ket{\xi}$, a physical experiment which satisfies the conditions of Lemma~\ref{lem2}. Hence, we can repeat exactly the same procedure as in Appendix~\ref{suppmat1} to establish the existence of an isometry $\Phi'= \Phi_{B_2}\otimes\Phi_C$ such that $\Phi_{B_2}$ acts nontrivially only on the Hilbert space $\mathcal{H}^{B_2}$, and the whole isometry transforms the state $\ket{\xi}^{ABC}$ as follows:
\begin{equation}\label{isoprime}
\Phi'\left(\mathsf{K}_{b_1,b_2|y}\otimes\mathsf{M}_{c|z}\ket{\xi}^{ABC}\otimes\ket{00}^{B_2'C'}\right) = \ket{\tilde{\xi}}^{ABC}\otimes\left(\mathsf{M}'_{b_2|y}\otimes\mathsf{M}'_{c|z}\ket{\phi_+}^{B_2'C'}\right), \end{equation}
which is the analogue of~\eqref{stmeasAB} and where $\ket{\tilde{\xi}}$ is defined as
\begin{align}
\ket{\tilde{\xi}}^{ABC} &= \ket{\xi_1}^{AB_1}\otimes\left(\sqrt{3}\mathsf{M}_{0|0}^C\ket{\psi}^{B_2C}\right)\\
&\equiv \ket{\xi_1}^{AB_1}\otimes\ket{\xi_2}^{B_2C}. \end{align}
Let us now apply $\mathsf{M}_{c|z}^C$ to both sides of eq.~\eqref{stmeasAB}:
\begin{align}
\Phi\left(\mathsf{M}_{a|x}^A\otimes\mathsf{M}_{b_1|y}^B\otimes\mathsf{M}_{c|z}^C\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{A'B_1'}\right) &= \mathsf{M}_{c|z}^C\ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{a|x}^{A'}\otimes{\mathsf{M}'}_{b_1|y}^{B_1'}\ket{\phi_+}^{A'B_1'}. \end{align}
Insights from the beginning of this Appendix allow us to write further
\begin{align}
\Phi\left(\mathsf{M}_{a|x}^A\otimes\mathsf{M}_{b_1,b_2|y}^B\otimes\mathsf{M}_{c|z}^C\ket{\psi}^{AB_1}\otimes\ket{\psi}^{B_2C}\otimes\ket{00}^{A'B_1'}\right) &= \mathsf{K}_{b_1,b_2|y}^B\otimes\mathsf{M}_{c|z}^C\ket{\xi}^{ABC}\otimes{\mathsf{M}'}_{a|x}^{A'}\otimes{\mathsf{M}'}_{b_1|y}^{B_1'}\ket{\phi_+}^{A'B_1'}. \end{align}
By acting with $\Phi'$ on the r.h.s. of this equation, using eq.~\eqref{isoprime}, taking into account that $\Phi_{B_1}$ acts nontrivially only on $\mathcal{H}^{B_1}$ while $\Phi_{B_2}$ acts nontrivially only on $\mathcal{H}^{B_2}$, and summing over $b_1$ and $b_2$, we obtain:
\begin{equation}\label{finalst}
\Phi_A\otimes\Phi_{B_1}\circ\Phi_{B_2}\otimes\Phi_C\left(\mathsf{M}_{a|x}\ket{\psi}^{AB_1}\otimes\mathsf{M}_{c|z}\ket{\psi}^{B_2C}\otimes\ket{0000}^{A'B_1'B_2'C'}\right) = \ket{\tilde{\xi}}^{AB_1B_2C}\otimes\mathsf{M}'_{a|x}\ket{\phi_+}^{A'B_1'}\otimes\mathsf{M}'_{c|z}\ket{\phi_+}^{B_2'C'}, \end{equation}
which completes the proof of eq.~\eqref{iso1} if we denote $\tilde{\Phi} = \Phi_A\otimes\Phi_{B_1}\circ\Phi_{B_2}\otimes\Phi_C$.
\end{document} |
\begin{document}
\author{Robin Blume-Kohout} \affiliation{Theoretical Division, Los Alamos National Laboratory; Los Alamos, NM 87545} \author{Peter S. Turner} \affiliation{Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan 113-0033}
\title{The curious nonexistence of Gaussian 2-designs}
\begin{abstract} 2-designs -- ensembles of quantum pure states whose 2nd moments equal those of the uniform Haar ensemble -- are optimal solutions for several tasks in quantum information science, especially state and process tomography. We show that Gaussian states cannot form a 2-design for the continuous-variable (quantum optical) Hilbert space $L^2(\mathbb{R})$. This is surprising because the affine symplectic group HWSp (the natural symmetry group of Gaussian states) is irreducible on the symmetric subspace of two copies. In finite dimensional Hilbert spaces, irreducibility guarantees that HWSp-covariant ensembles (such as mutually unbiased bases in prime dimensions) are always 2-designs. This property is violated by continuous variables, for a subtle reason: the (well-defined) HWSp-invariant ensemble of Gaussian states does not have an average state because the averaging integral does not converge. In fact, no Gaussian ensemble is even \emph{close} (in a precise sense) to being a 2-design. This surprising difference between discrete and continuous quantum mechanics has important implications for optical state and process tomography. \end{abstract}
\maketitle
A quantum $t$-design is an ensemble of pure quantum states, whose $t^\mathrm{th}$ (and lower) moments mimic those of the unitarily invariant Haar ensemble, which captures the notion of `random' pure states in Hilbert space. Designs have a variety of applications in quantum information science and the foundations of quantum theory, most notably in quantum \emph{tomography} \cite{WoottersAP89,ScottJPA06,GrasslENDM05,MedendorpPRA11}. To date, designs have been used primarily in finite dimensional Hilbert spaces, with optical coherent states \cite{GlauberPRA63,Perelomov86,LvovskyRMP09} (a 1-design for continuous variables) as an outstanding exception. While 1-designs are useful, for example as resolutions of the identity operator, 2-designs seem to be the most useful and interesting of designs. They are far superior to 1-designs -- often optimal -- for a variety of tasks, including quantum state and process tomography \cite{ScottJPA06,ScottJPA08}, and maximal unclonability \cite{Fuchs03}. Relatively few applications \cite{AmbainisCCC07,HarrowLNCS09} are known for higher-order designs. Finite-dimensional 2-designs include mutually unbiased bases (MUBs) (see, e.g. \cite{WoottersAP89,GibbonsPRA04}) and symmetric informationally complete positive operator valued measures (SICPOVMs) \cite{RenesJMP04}.
In this paper, we attempt to construct a continuous-variable 2-design from Gaussian states (including coherent states \emph{and} their squeezed cousins). There is ample reason to suspect that such a construction is possible. The simplest 2-design construction in $d$-dimensional finite Hilbert spaces comprises $(d+1)$ mutually unbiased bases in prime power dimensions ($d = p^n$) \cite{WoottersAP89}. The basis states' discrete Wigner functions form lines in discrete phase space. These constructions, and indeed the whole idea of discrete phase space, were motivated by continuous-variable phase space and the associated Hilbert space $L^2(\mathbb{R})$.
Thus, we are surprised to report that Gaussian 2-designs do \emph{not} exist. At first inspection (Section \ref{sec:sp}), it appears that they should because their natural transitive symmetry group -- the affine symplectic group -- acts irreducibly on the symmetric subspace of $L^2(\mathbb{R})\otimes L^2(\mathbb{R})$. Schur's Lemma implies that a symplectically-invariant ensemble of Gaussians should therefore form a 2-design. However, an explicit calculation of matrix elements (Section \ref{sec:no}) shows that it is impossible to construct a 2-design out of Gaussian states. In fact, it's impossible even to get close! The resolution (Section \ref{sec:discuss}) involves the non-convergence of an integral over the symplectic group. Because this integral fails to converge, a symplectically-invariant mixture of Gaussian states does not exist -- which is a sort of end run around Schur's Lemma. We point out that this is not merely due to the noncompactness of the symplectic group, for integrations over noncompact groups are often well-defined and yield invariant quantities (as demonstrated by the coherent states).
\section{Designs and representations} \label{sec:background}
A set of states $\mathcal{M}$ on a Hilbert space $\mathcal{H}$ is a $t$-design for $\mathcal{H}$ if its $t^\mathrm{th}$ moments are identical to those of the unitarily invariant (Haar) ensemble of pure states on $\mathcal{H}$: \begin{equation} \mathop{\mathrm{Avg}}_{\psi\in\mathcal{M}}\left(\proj{\psi}^{\otimes t}\right) = \int_{\psi\in\mathrm{Haar}}{\proj{\psi}^{\otimes t}\mathrm{d}\!\psi}. \label{eq:tdesign1} \end{equation} The Haar average on the R.H.S. of Eq. \ref{eq:tdesign1} can be calculated easily using Schur's Lemma, which states: if a nonzero operator on an irreducible representation (irrep) space of a group $G$ commutes with every element of that irrep, it must be proportional to $1\!\mathrm{l}$ on that space. The R.H.S. commutes (by construction) with every $U^{\otimes t}$, so it must be a sum of irrep projectors. Since $\proj{\psi}^{\otimes t}$ lies entirely in the symmetric subspace $\mathcal{H}_{\mathrm{symm}}^{(t)}$ of $\mathcal{H}^{\otimes t}$ (which is an irrep space), the Haar average is proportional to the projector onto $\mathcal{H}_{\mathrm{symm}}^{(t)}$, and the $t$-design condition is \begin{equation} \mathop{\mathrm{Avg}}_{\psi\in\mathcal{M}}\left(\proj{\psi}^{\otimes t}\right) = \frac{\Pi_{\mathrm{symm}}^{(t)}}{\mathrm{Tr}\left(\Pi_{\mathrm{symm}}^{(t)}\right)}. \label{eq:tdesign} \end{equation}
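As an illustrative numerical check of Eq.~\eqref{eq:tdesign} (ours, not part of the paper; the choice of ensemble is an assumption for illustration), in $d=2$ the six eigenstates of the Pauli operators -- three mutually unbiased bases -- form a 2-design, so their second moment equals the normalized projector onto the symmetric subspace:

```python
import numpy as np

# Illustration (not from the paper): the six single-qubit stabilizer states
# (eigenstates of Pauli X, Y, Z) form a 2-design for d = 2, so the average of
# |psi><psi|^{otimes 2} equals Pi_symm / Tr(Pi_symm) with Tr(Pi_symm) = 3.
states = [
    np.array([1, 0], dtype=complex),                  # Z eigenstates
    np.array([0, 1], dtype=complex),
    np.array([1, 1], dtype=complex) / np.sqrt(2),     # X eigenstates
    np.array([1, -1], dtype=complex) / np.sqrt(2),
    np.array([1, 1j], dtype=complex) / np.sqrt(2),    # Y eigenstates
    np.array([1, -1j], dtype=complex) / np.sqrt(2),
]

# Average of |psi><psi|^{otimes 2} over the ensemble
avg = sum(np.outer(np.kron(p, p), np.kron(p, p).conj()) for p in states) / 6

# Normalized projector onto the symmetric subspace of C^2 x C^2
swap = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        swap[2 * i + j, 2 * j + i] = 1        # SWAP |i,j> = |j,i>
pi_symm = (np.eye(4) + swap) / 2              # rank-3 projector
target = pi_symm / np.trace(pi_symm)

assert np.allclose(avg, target)
```

The analogous check fails for any single orbit of a 1-design, which is the phenomenon analyzed in the sections below.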
Here we are concerned with Gaussian states on the Hilbert space $L^2(\mathbb{R})$, and whether it is possible to construct a 2-design from them. The coherent states are a simple example of a Gaussian 1-design, and we will review this construction in order to introduce some simple, useful group theory tricks.
\section{Heisenberg-Weyl covariant 1-designs} \label{sec:h}
Schur's Lemma applies to any group. This suggests an elegant way to construct designs. Choose a group $G$ that has a unitary representation $T$ on $\mathcal{H}$, \begin{equation*} g\to T_g, \quad g\in G. \end{equation*} If the natural $t$-copy tensor product representation \begin{equation*} g\to T_g^{\otimes t} \end{equation*} is irreducible on the symmetric subspace\footnote{Note that the $\{T_g^{\otimes t}\}$ representation commutes with permutations of the $t$ copies, so its action on $\mathcal{H}^{\otimes t}$ is reducible onto the irrep spaces of the symmetric group $S_t$. $\mathcal{H}_{\mathrm{symm}}^{(t)}$ is one of these.} of $\mathcal{H}^{\otimes t}$, then Schur's Lemma implies that the ensemble \begin{equation*} \mathcal{M} = \{T_g\ket{\psi_0}\ \forall\ g\in G\} \end{equation*} is a $t$-design for any $\ket{\psi_0}\in\mathcal{H}$.
$\mathcal{M}$ will be a 1-design if $T_g$ itself is irreducible on $\mathcal{H}$. For example, the Heisenberg-Weyl group on one degree of freedom, HW$(1)$ (or simply HW, hereafter), has an irreducible representation on $\mathcal{H} = L^2(\mathbb{R})$ as translations in phase space (displacement operators) \begin{equation*} \{T_{x,p,\phi}\} \equiv \{e^{-i(\phi + x\mathbf{P} + p\mathbf{X})/\hbar}\ \forall\ x,p\in\mathbb{R}\ \mathrm{and}\ \phi\in[0,2\pi]\}, \end{equation*} where $\mathbf{X}$ and $\mathbf{P}$ are the position and momentum operators on $L^2(\mathbb{R})$. HW is a noncompact Lie group. Its natural Haar measure is $\mu = \mathrm{d}\! x\mathrm{d}\! p\mathrm{d}\!\phi$ -- which is simply the Lebesgue measure over phase space and over the phase $\phi$. Because we use state \emph{projectors} $\proj{\psi}$ exclusively, central phases such as $\phi$ vanish. We can safely treat all group representations as projective, and will denote this phase-space-translation representation of the Heisenberg-Weyl group by $\{T_{x,p}\}$.
Since $\{T_{x,p}\}$ is irreducible on $L^2(\mathbb{R})$, any ``fiducial state'' $\ket{\psi_0}$ can be used to generate a Heisenberg-Weyl (HW)-covariant 1-design \begin{equation*} \mathcal{M}_1 = \{T_{x,p}\ket{\psi_0}\}, \end{equation*} weighted according to the invariant measure $\mu$ of its defining group. The coherent state POVM (positive operator valued measure\footnote{A pedagogical note on the term ``POVM'' may be useful here. A POVM is a measure over a sample space; the sample space is the set of possible outcomes of a quantum measurement. To each [measurable] subset of outcomes, the POVM assigns a positive operator (whereas conventional measures would assign a positive number). Often, in quantum information science, the sample space is finite -- and so measure theory is mostly superfluous, and practitioners often forget just what ``POVM'' means. Here, our sample space is the continuously infinite manifold of a Lie group, and [basic] measure theory is required.}) of quantum optics is generated by choosing $\ket{\psi_0}$ to be the ``vacuum'' state $\ket{0}$ of a dimensionless ($m=\omega=\hbar=1$) harmonic oscillator Hamiltonian $H = \mathbf{X}^2 + \mathbf{P}^2$, with wavefunction \begin{equation*} \psi_0(x) = \braket{x}{\psi_0} = \frac{e^{-x^2/2}}{(\pi)^{1/4}}. \end{equation*}
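The 1-design property of the coherent states can be checked numerically. With the standard normalization $\frac{1}{\pi}\int\proj{\alpha}\,\mathrm{d}^2\alpha = 1\!\mathrm{l}$ and $|\braket{n}{\alpha}|^2 = e^{-|\alpha|^2}|\alpha|^{2n}/n!$, every diagonal Fock-basis matrix element of the average should equal 1. The following sketch (ours, not from the paper; cutoff radius and step count are arbitrary choices) does the radial integral with the trapezoid rule:

```python
import math

# Illustration (not from the paper): the coherent-state 1-design resolves the
# identity.  Since |<n|alpha>|^2 = exp(-|alpha|^2) |alpha|^{2n} / n!, the
# angular part of (1/pi) \int d^2 alpha is trivial and the radial integral
# (1/pi) * 2 pi \int_0^inf exp(-r^2) r^{2n+1} / n! dr should equal 1.
def fock_diagonal(n, rmax=12.0, steps=200000):
    total, dr = 0.0, rmax / steps
    for i in range(steps + 1):
        r = i * dr
        f = 2.0 * math.exp(-r * r) * r ** (2 * n + 1) / math.factorial(n)
        total += f * dr * (0.5 if i in (0, steps) else 1.0)  # trapezoid weights
    return total

for n in (0, 1, 5, 10):
    assert abs(fock_diagonal(n) - 1.0) < 1e-6
```

(The exact integral is $\frac{2}{n!}\int_0^\infty e^{-r^2}r^{2n+1}\,\mathrm{d}r = 1$ after substituting $u=r^2$.)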
The coherent states are not unique; we can generate a HW-covariant Gaussian 1-design from any Gaussian fiducial $\ket{\psi_0}$. But none of them are 2-designs, and the proof is rather instructive.
\section{Heisenberg-Weyl is insufficient for 2-designs}
To evaluate whether an ensemble $\mathcal{M}$ is a 2-design, we consider \begin{equation*} \mathcal{H}^{\otimes 2} = \mathcal{H}\otimes\mathcal{H} = L^2(\mathbb{R})\otimes L^2(\mathbb{R}), \end{equation*} containing square-integrable functions $\psi(x_1,x_2)$ of \emph{two} real variables. For any group $G$, the tensor product representation $T_g\otimes T_g$ is reducible onto the symmetric and antisymmetric subspaces of $\mathcal{H}^{\otimes 2}$, because $T_g\otimes T_g$ commutes with the SWAP operator $\pi$ that permutes the two systems. The SWAP operator becomes very simple if we change coordinates from $(x_1,x_2)$ to $\left( x_+ = \frac{x_1+x_2}{\sqrt2}, x_- = \frac{x_1-x_2}{\sqrt2} \right)$, which defines a refactorization of the Hilbert space as \begin{equation*} L^2(x_1)\otimes L^2(x_2) \to L^2(x_+)\otimes L^2(x_-). \end{equation*} Now SWAP has no effect on the $x_+$ subsystem (since $x_2+x_1 = x_1+x_2$), but acts on the $x_-$ subsystem as the parity operator $P:x\to -x$, because $x_2-x_1 = -(x_1-x_2)$. The projectors onto the symmetric and antisymmetric subspaces are thus: \begin{eqnarray*} \Pi_{\mathrm{symm}}^{(2)} &=& 1\!\mathrm{l}_{+}\otimes\frac{(1\!\mathrm{l} + P)_-}{2} \\ \Pi_{\mathrm{antisymm}}^{(2)} &=& 1\!\mathrm{l}_{+}\otimes\frac{(1\!\mathrm{l} - P)_-}{2} \end{eqnarray*} But the HW representation $\{T_{x,p}^{\otimes 2}\}$ is \emph{not} irreducible on these subspaces. Its elements act on $L^2(x_+)\otimes L^2(x_-)$ as \begin{eqnarray*} T_{x,p}\otimes T_{x,p}
&=& e^{-i(x\mathbf{P}_1 + x\mathbf{P}_2 + p\mathbf{X}_1 + p\mathbf{X}_2)/\hbar} \\
&=& e^{-i\sqrt2(x\mathbf{P}_+ + p\mathbf{X}_+)/\hbar}\otimes1\!\mathrm{l}_-, \end{eqnarray*} so they act faithfully on $L^2(x_+)$, but trivially on $L^2(x_-)$. \emph{Any} 1-dimensional subspace of $L^2(x_-)$ -- e.g., $\mathrm{span}\{\ket{\psi}_-\}$ -- is an irrep space of this representation, \begin{equation*} L^2(x_+) \otimes \mathrm{span}\{\ket{\psi}_-\}. \end{equation*} Note that this structure is slightly different from the usual ``direct sum of irrep spaces'' structure. As a direct sum of uncountably many copies of the irrep $\{T_{\sqrt{2}x,\sqrt{2}p}\}$, it is much more naturally described as a tensor product.
This simple representation structure makes it easy to evaluate whether a Heisenberg-Weyl-covariant ensemble is a 2-design. We tensor two copies of the fiducial state $\ket{\psi_0}$ and rewrite it in the refactored Hilbert space using its Schmidt decomposition: \begin{equation*} \ket{\psi_0}\ket{\psi_0} = \sum_k{ c_k \ket{k}_+\otimes \ket{k}_- }, \end{equation*} where $\ket{k}_\pm$ are elements of orthonormal bases for $L^2(x_\pm)$. This state is generally entangled, with more than one Schmidt coefficient $c_k$. Now, the average of $\proj{\psi}^{\otimes 2}$ is equal to the averaged action of the Heisenberg-Weyl group on $\proj{\psi_0}^{\otimes 2}$, \begin{equation*} \mathop{\mathrm{Avg}}_\mathcal{M}\left( \proj{\psi}^{\otimes 2} \right) = \mathop{\mathrm{Avg}}_{\mathrm{HW}}\left(T_{x,p}^{\otimes 2}\proj{\psi_0}^{\otimes 2}\left(T^\dagger_{x,p}\right)^{\otimes 2}\right) \end{equation*} Since HW acts irreducibly on $L^2(x_+)$, it will completely depolarize the $L^2(x_+)$ subsystem, and destroy all correlation with the $L^2(x_-)$ subsystem: \begin{equation*}
\mathop{\mathrm{Avg}}_\mathcal{M}\left( \proj{\psi}^{\otimes 2} \right) \propto 1\!\mathrm{l}_+ \otimes \sum_k{ |c_k|^2 \proj{k}_- }. \end{equation*} This is proportional to $\Pi_{\mathrm{symm}}^{(2)}$ if and only if $\ket{\psi_0}\ket{\psi_0}$ is maximally entangled between $L^2(x_+)$ and the positive-parity subspace of $L^2(x_-)$. Such states exist for many (perhaps all) \emph{finite}-dimensional Hilbert spaces, which admit a discrete analogue of the Heisenberg-Weyl group, but finding them is an open challenge known as the SICPOVM problem \cite{RenesJMP04}.
But when $\ket{\psi_0}$ is a Gaussian state, $\ket{\psi_0}\ket{\psi_0}$ is actually a product state of $L^2(x_+)$ and $L^2(x_-)$. For a completely general Gaussian wavefunction \begin{equation*} \psi_0(x) \propto e^{-(\alpha+i\beta)x^2 + (\gamma+i\delta)x}, \end{equation*} the wavefunction for $\ket{\psi_0}\ket{\psi_0}$ is (in the $x_{\pm}$ coordinates) \begin{eqnarray*} \psi_0^{\otimes2}(x_+,x_-) &=& \psi_0\left(\frac{x_+ + x_-}{\sqrt2}\right)\psi_0\left(\frac{x_+ - x_-}{\sqrt2}\right) \\ \nonumber &\propto & e^{-(\alpha+i\beta)x_+^2 + \sqrt2(\gamma+i\delta)x_+} \cdot e^{-(\alpha+i\beta)x_-^2}. \end{eqnarray*} So $\psi_0^{\otimes2}(x_+,x_-) =: \psi_+(x_+)\psi_-(x_-)$ is a product state, and \begin{equation*} \mathop{\mathrm{Avg}}_\mathcal{M}(\proj{\psi}^{\otimes 2}) \propto 1\!\mathrm{l}_+ \otimes \proj{\psi_-} \neq \Pi_{\mathrm{symm}}^{(2)}. \end{equation*} In fact, the two-copy average state is about as far as possible from the projector on the symmetric subspace, since it is rank-1 on $L^2(x_-)$. The Heisenberg-Weyl group is clearly insufficient for the production of Gaussian 2-designs.
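The cancellation of the cross terms between $x_+$ and $x_-$ can be verified numerically. The following sketch (our illustration; the parameter ranges are arbitrary) checks that the exponent of $\psi_0^{\otimes 2}$ separates into an $x_+$ part and an $x_-$ part:

```python
import math, random

# Illustration (not from the paper): for psi_0(x) ~ exp(-(alpha+i beta) x^2
# + (gamma+i delta) x), the exponent of psi_0(x1) psi_0(x2) separates in the
# rotated coordinates x_pm = (x1 +/- x2)/sqrt(2), since x1^2 + x2^2 =
# x_+^2 + x_-^2 and x1 + x2 = sqrt(2) x_+.
random.seed(1)
for _ in range(100):
    a = complex(random.uniform(0.1, 2), random.uniform(-2, 2))  # alpha + i*beta
    c = complex(random.uniform(-2, 2), random.uniform(-2, 2))   # gamma + i*delta
    xp, xm = random.uniform(-3, 3), random.uniform(-3, 3)
    x1 = (xp + xm) / math.sqrt(2)
    x2 = (xp - xm) / math.sqrt(2)
    # exponent of psi_0(x1) psi_0(x2) ...
    lhs = -a * (x1 ** 2 + x2 ** 2) + c * (x1 + x2)
    # ... equals (x_+ part) + (x_- part), with no cross term
    rhs = (-a * xp ** 2 + math.sqrt(2) * c * xp) + (-a * xm ** 2)
    assert abs(lhs - rhs) < 1e-10
```

The absence of any $x_+x_-$ cross term is exactly the product-state structure that rules out Heisenberg-Weyl-covariant Gaussian 2-designs.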
\section{The symplectic group, and the existence of Gaussian 2-designs} \label{sec:sp}
Each HW-covariant design contains only a subset of the Gaussian states. Since its elements are related by translation, they all have the same shape in phase space, inherited from the fiducial $\ket{\psi_0}$ (which, without loss of generality, we may take to be a ``squeezed vacuum'' state with $\expect{x} = \expect{p} = 0$). We need a richer ensemble, containing states with different shapes. To get it, we observe that every squeezed vacuum state can be obtained from the vacuum state by applying a particular \emph{linear symplectic} transformation. These are linear transformations on phase space that preserve symplectic area $\mathrm{d}\! x\mathrm{d}\! p$, and they form the symplectic group Sp$(1,\mathbb{R})$. It is isomorphic to the special linear group SL$(2,\mathbb{R})$, whose fundamental representation is the $2\times2$ real matrices with unit determinant \cite{Lang85}. This 3-parameter Lie group is the natural symmetry group of squeezed vacuum states, on which it acts transitively, and it will be central to our discussion.
Sp$(1,\mathbb{R})$, which we will denote by Sp for convenience, contains \emph{all} the linear transformations of phase space (which means all the transformations that preserve Gaussianity). This includes: \begin{enumerate} \item rotations (``elliptic transformations''), of the form $R(\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$, \item squeezings (``hyperbolic transformations''), of the form $U(u) = \begin{pmatrix} u^{-1/2} & 0 \\ 0 & u^{1/2} \end{pmatrix}$, \item shearings (``parabolic transformations''), of the form $V(v) = \begin{pmatrix} 1 & 0 \\ -v & 1 \end{pmatrix}$. \end{enumerate} Rotations, squeezings, and shearings each form a 1-parameter subgroup of Sp, but these subgroups are \emph{not} normal. In fact, the projective symplectic group is simple (has no nontrivial normal subgroups), so it cannot be factored as a group. However, if viewed as a 3-parameter manifold, it \emph{can} be factored, using the Iwasawa decomposition \cite{IwasawaAM49}, usually written in the notation $\mathrm{Sp} = KAN$, which identifies each element $g$ of a group $G$ with three elements of subgroups $K,A,N$: $g \sim \{k,a,n\}$. The Iwasawa decomposition writes Sp as a direct product of the three sets $K = \{k\}$, $A = \{a\}$, and $N = \{n\}$. The group element corresponding to $\{k,a,n\}$ is just the composition (matrix product) of the three subgroup elements, $g = kan$, and the three subgroups $K,A,N$ are precisely the ones listed above -- rotations ($K$), squeezings ($A$), and shearings ($N$).
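A short numerical sketch of the $KAN$ factorization (ours, not from the paper; the sample matrix is arbitrary, and the conventions for $R(\theta)$, $U(u)$, $V(v)$ follow the list above):

```python
import math

# Illustration: factor g in SL(2,R) as g = R(theta) U(u) V(v), i.e. the
# Iwasawa decomposition Sp = KAN in the conventions of the text.  Since
# U(u) V(v) is lower triangular with positive diagonal, theta is fixed by
# requiring that R(theta)^{-1} g be lower triangular.
def iwasawa(g):
    (a, b), (c, d) = g
    theta = math.atan2(b, d)            # makes the (1,2) entry of R(-theta) g vanish
    ct, st = math.cos(theta), math.sin(theta)
    l21 = st * a + ct * c               # entries of L = R(-theta) g = U(u) V(v)
    l22 = st * b + ct * d               # l22 = u^{1/2} > 0
    u = l22 ** 2
    v = -l21 / l22                      # since l21 = -v * u^{1/2}
    return theta, u, v

def compose(theta, u, v):
    ct, st = math.cos(theta), math.sin(theta)
    r = [[ct, st], [-st, ct]]                                  # R(theta)
    l = [[u ** -0.5, 0.0], [-v * u ** 0.5, u ** 0.5]]          # U(u) V(v)
    return [[sum(r[i][k] * l[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g = [[2.0, 0.5], [1.0, 0.75]]           # det = 1.5 - 0.5 = 1, so g is in SL(2,R)
theta, u, v = iwasawa(g)
h = compose(theta, u, v)
assert all(abs(h[i][j] - g[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Because $\det g = 1$, the diagonal of the triangular factor is automatically $(u^{-1/2}, u^{1/2})$, so the three parameters $(\theta, u, v)$ are uniquely determined.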
Since Sp is transitive on squeezed vacuum states, any squeezed vacuum state can be obtained by applying some $g\in$ Sp to the $\ket{0}$ state. Adding the generators of the Heisenberg-Weyl group (whose action on phase space is affine, not linear) yields the \emph{affine symplectic group}. This semidirect product group, HWSp = $\mathrm{HW} \rtimes \mathrm{Sp}(1,\mathbb{R})$, is transitive on \emph{all} Gaussian states. So there is a unique HWSp-covariant design $\mathcal{M}_\mathrm{Sp}$ containing \emph{all} Gaussian states, weighted by a HWSp-invariant measure, which has the potential to form a Gaussian 2-design.
\subsection{The affine symplectic group is irreducible on the symmetric subspace of $L^2(\mathbb{R})^{\otimes 2}$}
It's fairly straightforward to show that HWSp is irreducible on $\mathcal{H}_{\mathrm{symm}}^{(2)}$ -- which would appear to imply that $\mathcal{M}_\mathrm{Sp}$ is a 2-design. An element of HWSp is a projective linear transformation on phase space, and is completely characterized by its action on the position ($\mathbf{X}$) and momentum ($\mathbf{P}$) operators, which form a symplectic vector \begin{equation*} \vec{\mathbf{Z}} = \pmat{ \mathbf{X} \\ \mathbf{P} }. \end{equation*} Heisenberg-Weyl transformations act on $\vec{\mathbf{Z}}$ additively, \begin{equation*} h_{x,p}\left[\pmat{ \mathbf{X} \\ \mathbf{P} }\right] = \pmat{ \mathbf{X}+x \\ \mathbf{P}+p }, \end{equation*} while symplectic transformations (represented as $2\times 2$ matrices with unit determinant) act on $\vec{\mathbf{Z}}$ by matrix multiplication: \begin{equation*} s_{a,b,c,d}\left[\pmat{ \mathbf{X} \\ \mathbf{P} }\right] = \pmat{a&b\\c&d} \pmat{ \mathbf{X} \\ \mathbf{P} } = \pmat{a\mathbf{X}+b\mathbf{P} \\ c\mathbf{X}+d\mathbf{P}}. \end{equation*} So an arbitrary element of HWSp acts on $\vec{\mathbf{Z}}$ as \begin{equation*} \pmat{ \mathbf{X} \\ \mathbf{P} }\to \pmat{a\mathbf{X}+b\mathbf{P} + x \\ c\mathbf{X}+d\mathbf{P} + p}. \end{equation*} The 2-copy tensor product action is thus \begin{equation*} \pmat{ \mathbf{X}_1 \\ \mathbf{P}_1 \\ \mathbf{X}_2 \\ \mathbf{P}_2 }\to \pmat{a\mathbf{X}_1+b\mathbf{P}_1 + x \\ c\mathbf{X}_1+d\mathbf{P}_1 + p \\ a\mathbf{X}_2+b\mathbf{P}_2 + x \\ c\mathbf{X}_2+d\mathbf{P}_2 + p}, \end{equation*} so its action on $\mathbf{X}_\pm$ and $\mathbf{P}_\pm$ is \begin{equation} \pmat{ \mathbf{X}_+ \\ \mathbf{P}_+ \\ \mathbf{X}_- \\ \mathbf{P}_- }\to \pmat{a\mathbf{X}_+ +b\mathbf{P}_+ + \sqrt2x \\ c\mathbf{X}_+ + d\mathbf{P}_+ + \sqrt2p \\ a\mathbf{X}_- + b\mathbf{P}_- \\ c\mathbf{X}_- + d\mathbf{P}_-}. \label{eq:HWSpaction} \end{equation} Eq. 
\ref{eq:HWSpaction} shows that, while the Sp subgroup acts faithfully on \emph{both} factors, the Heisenberg-Weyl subgroup acts faithfully on $L^2(x_+)$ but trivially on $L^2(x_-)$. Irreducibility follows from three observations. \begin{enumerate} \item HWSp contains HW as a subgroup, and HW is irreducible on $L^2(x_+)$, so HWSp is irreducible on this factor. \item HWSp contains a Sp subgroup that acts faithfully on $L^2(x_-)$. Sp contains an SO$(2)$ subgroup, represented as rotations on phase space. These are generated by the harmonic oscillator Hamiltonian $H = \mathbf{X}_-^2 + \mathbf{P}_-^2$, so the irrep spaces of the SO$(2)$ representation are its 1-dimensional eigenspaces $\{\ket{n}\}$. The irrep spaces of Sp on $L^2(x_-)$ must be coarse-grainings of the SO$(2)$ irrep spaces, so they are direct sums of harmonic oscillator eigenspaces. \item Sp also contains squeezing transformations, which act as \begin{equation*} \pmat{ \mathbf{X} \\ \mathbf{P} }\to \pmat{u^{-1/2}\mathbf{X} \\ u^{1/2}\mathbf{P}}. \end{equation*} Any nontrivial squeezing transformation has nonzero matrix elements between $\ket{0}$ and $\ket{2n}$ for every $n$ -- which is to say that it maps $\ket{2n}\to\ket{0}$ and vice-versa with some amplitude. Squeezing transformations therefore mix together all the even number states. They span the positive-parity subspace of $L^2(x_-)$, which therefore has no Sp-invariant subspaces. \end{enumerate} Observations 2 and 3 imply that HWSp is irreducible on the positive-parity subspace of $L^2(x_-)$. Together with Observation 1, this implies that HWSp is irreducible on the symmetric subspace of $L^2(\mathbb{R})^{\otimes 2}$.
This proof is entirely nonconstructive; it does not even suggest what is the HWSp-invariant measure over Gaussian states. Since HWSp is noncompact, the measure will not be normalizable. This doesn't necessarily pose problems -- the same is true of the Heisenberg-Weyl group, but the coherent state POVM is perfectly well-defined and physically meaningful. However, when we attempt to \emph{construct} the 2-design whose existence seems to be implied by the irreducibility of HWSp, we shall see that things get rather confused.
\section{Explicit constructions, and the nonexistence of Gaussian 2-designs} \label{sec:no}
Let us try to construct a Gaussian 2-design $\mathcal{M}$. To keep things clear and simple, we will not try (yet) to compute the invariant measure over HWSp. Instead, we simply assume that $\mathcal{M}$ is invariant under the Heisenberg-Weyl subgroup, and under the SO$(2)$ subgroup of phase space rotations. These invariances imply: \begin{enumerate} \item The ensemble-average state will be maximally mixed on the $L^2(x_+)$ factor (which can therefore be ignored) because HW is irreducible on that factor. \item $\mathcal{M}$ comprises Heisenberg-Weyl \emph{orbits}, each defined by (i) applying a specific linear symplectic transformation $g\in$ Sp to $\ket{0}^{\otimes 2}$ to get a Gaussian $\ket{\psi_g}^{\otimes 2}$, and (ii) applying all $T_{x,p}\in\mathrm{HW}$ to $\ket{\psi_g}^{\otimes2}$. Each orbit is a tensor product, between a Heisenberg-Weyl covariant 1-design on $L^2(x_+)$, and a single squeezed vacuum state $\proj{\psi_g}$ on $L^2(x_-)$. This is true because HW acts trivially on the $L^2(x_-)$ factor, while Sp acts faithfully on the $L^2(x_-)$ factor. \end{enumerate} Now, each squeezed vacuum state $\ket{\psi_g}$ on $L^2(x_-)$ can be described by its degree of squeezing $s$ (see Eq. \ref{eq:sdef}), and the angle $\theta$ in phase space between its major axis and the $x$-axis. The HWSp-invariant measure over states will be \emph{something} of the form \begin{equation} \mu(s,\theta)\mathrm{d}\! s\mathrm{d}\!\theta. \end{equation} So, when we calculate \begin{equation*} \mathop{\mathrm{Avg}}_{\ket{\psi}\in\mathcal{M}}\left(\proj{\psi}^{\otimes 2}\right), \end{equation*} we will be adding up a lot of 2-mode Gaussian states. 
And, thanks to the first two points above, we know that as long as the ensemble is Heisenberg-Weyl-covariant, \begin{equation*} \mathop{\mathrm{Avg}}_{\ket{\psi}\in\mathcal{M}}\left(\proj{\psi}^{\otimes 2}\right) \propto 1\!\mathrm{l}_{+}\otimes\mathop{\mathrm{Avg}}_{\ket{\psi_g}\in\mathcal{M}}\proj{\psi_g}_{-}, \end{equation*} where for each $\ket\psi$ in $\mathcal{M}$, the corresponding $\ket{\psi_g}$ is a squeezed vacuum state with the same shape (but $\expect{x}=\expect{p}=0$). $\mathcal{M}$ is a 2-design if and only if \begin{equation*} \mathop{\mathrm{Avg}}_{\ket{\psi_g}\in\mathcal{M}}\proj{\psi_g} \propto \Pi = \frac{(1\!\mathrm{l}+P)_{-}}{2}. \end{equation*} The R.H.S. is the positive-parity projector on $L^2(x_-)$, i.e., the $L^2(x_-)$ factor of the projector onto the symmetric subspace. In the ``number basis'' $\{\ket{k}:k=0,1,2,\ldots\}$ (eigenstates of $H = \mathbf{X}^2 + \mathbf{P}^2$), parity acts as $P\ket{k} = (-1)^k\ket{k}$, so the projector onto positive parity states is \begin{equation*} \Pi = \sum_{k=0}^\infty{\proj{2k}}. \end{equation*} But this operator cannot be built as a sum of Gaussian projectors $\proj{\psi_g}$, as we will now show by computing the diagonal matrix entries of a single $\proj{\psi_g}$ -- where $g\sim(s,\theta)$ -- in the number basis.
\subsection{Detailed calculation of overlaps}
Suppose that $\ket{\psi_g} = \ket{\psi_{\theta,s}}$ is an arbitrary squeezed vacuum state, and let \begin{equation*}
p_k(\theta,s) = |\braket{k}{\psi_{\theta,s}}|^2. \end{equation*} Now, $\ket{\psi_{\theta,s}}$ is parity-symmetric, and if $k$ is odd then $\ket{k}$ is parity-antisymmetric. So $p_k=0$ for odd $k$. Furthermore, $\proj{k}$ is invariant under phase space rotations, so $\theta$ has no impact on $p_k$. We thus write it as $p_k(s)$, and assume without loss of generality that $\ket{\psi_s}$ is squeezed along the $x$ axis. The corresponding wavefunction is \begin{equation*} \psi_s(x) = \braket{x}{\psi_{\theta=0,s}} = (\pi s^2)^{-1/4} e^{-x^2/2s^2}, \end{equation*} yielding squeezed quadratures \begin{equation} \pmat{ \Delta x^2 & \Delta xp \\ \Delta xp & \Delta p^2 } = \frac12\pmat{ s^2 & 0 \\ 0 & 1/s^2}. \label{eq:sdef} \end{equation}
The wavefunction for a number state $\ket{k}$ is the product of a Hermite polynomial and the ground-state Gaussian wavefunction: \begin{equation*} \phi_k(x) = \braket{x}{k} = \frac{H(k,x) e^{-x^2/2}}{(\pi)^{1/4}\sqrt{2^k k!}}, \end{equation*} where \begin{equation*} H(k,x) = \left[ \left(\frac{\partial}{\partial \, t}\right)^k e^{2xt-t^2} \right]_{t=0}. \end{equation*} Using these wavefunctions, for $k$ even, \begin{eqnarray}
\sqrt{p_k(s)} &=& |\braket{k}{\psi_s}| \nonumber \\ &=& \int{\phi_k(x)\psi_s(x)\mathrm{d}\! x} \nonumber \\ &=& \frac{1}{\sqrt{\pi 2^{k} k! s}} \left(\frac{\partial}{\partial \, t}\right)^k \left[ \int{ e^{2xt-t^2-x^2(1+1/s^2)/2} \mathrm{d}\! x} \right]_{t=0} \nonumber \\ &=& \frac{1}{\sqrt{\pi 2^{k} k! s}} \left( \frac{\partial}{\partial \, t}\right)^k \left[ \frac{s\sqrt{2\pi}}{\sqrt{s^2+1}}e^{t^2(s^2-1)/(s^2+1)} \right]_{t=0} \nonumber \\ &=& \sqrt{\frac{s}{2^{k-1} k! (s^2+1)}} \left[ \left(\frac{s^2-1}{s^2+1}\right)^{k/2} \frac{k!}{(k/2)!} \right]\nonumber \\ &=& \sqrt{\frac{2^{k+1} s (s^2-1)^k}{\pi k! (s^2+1)^{k+1}}}\Gamma\left(\frac{k+1}{2}\right). \nonumber \end{eqnarray} Squaring this expression and rewriting $k! = \Gamma(k+1)$ yields \begin{equation*} p_k = \frac{2^{k+1}\Gamma\left(\frac{k+1}{2}\right)^2s}{\pi\Gamma(k+1)(s^2+1)}\left(\frac{s^2-1}{s^2+1}\right)^k. \end{equation*} We now use the identity \begin{equation*} \Gamma(2z) = \frac{1}{\sqrt{2\pi}} 2^{2z-1/2}\Gamma(z)\Gamma(z+1/2) \end{equation*} to rewrite the denominator using \begin{equation*} \Gamma(k+1) = \pi^{-1/2}2^k\Gamma\left(\frac{k}{2}+\frac12\right)\Gamma\left(\frac{k}{2}+1\right), \end{equation*} which yields \begin{equation*} p_k = \frac{2\Gamma\left(\frac{k+1}{2}\right)s}{\sqrt{\pi}\Gamma\left(\frac{k}{2}+1\right)(s^2+1)}\left(\frac{s^2-1}{s^2+1}\right)^k. \end{equation*} Finally, we observe that $p_k$ is symmetric under $s \to 1/s$ -- as it should be -- and with this in mind we rewrite it in terms of the squeezed vacuum state's energy \begin{equation*} E = \frac{\expect{X^2}+\expect{P^2}}{2} = \frac{\Delta x^2 + \Delta p^2}{2} = \frac14\left(s^2 + \frac{1}{s^2}\right). \end{equation*} Solving for $s$ gives $s = \sqrt{2E\pm\sqrt{4E^2-1}}$, and therefore \begin{eqnarray*} \frac{s}{s^2+1} &=& \frac{1}{\sqrt{4E+2}} \\ \frac{s^2-1}{s^2+1} &=& \sqrt{\frac{2E-1}{2E+1}}. \end{eqnarray*} This yields our final result: \begin{eqnarray}
p_k(E) &\equiv& |\braket{k}{\psi_{s,\theta}}|^2 \nonumber \\ &=& \sqrt{\frac{2}{\pi}}\frac{\Gamma\left(\frac{k}{2}+\frac12\right)}{\Gamma\left(\frac{k}{2}+1\right)}\cdot\frac{1}{\sqrt{2E+1}}\cdot\sqrt{\frac{2E-1}{2E+1}}^k \label{eq:pk} \end{eqnarray} (for even $k$; $p_k$ vanishes when $k$ is odd).
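As a sanity check on this closed form (our addition, not part of the derivation), the following sketch compares Eq.~(\ref{eq:pk}) against direct numerical overlap integrals, assuming the conventions above: number-state wavefunctions built from physicists' Hermite polynomials and the squeezed vacuum $\psi_s(x)=(\pi s^2)^{-1/4}e^{-x^2/2s^2}$.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import eval_hermite, gammaln

def phi(k, x):
    # number-state wavefunction phi_k(x) = H_k(x) e^{-x^2/2} / (pi^{1/4} sqrt(2^k k!))
    return eval_hermite(k, x) * np.exp(-x**2 / 2) / (np.pi**0.25 * np.sqrt(2.0**k * factorial(k)))

def psi(s, x):
    # squeezed vacuum wavefunction (assumed convention: width parameter s)
    return (np.pi * s**2) ** -0.25 * np.exp(-x**2 / (2 * s**2))

def p_closed(k, E):
    # closed form of Eq. (pk): nonzero only for even Fock index k
    if k % 2:
        return 0.0
    ratio = np.exp(gammaln(k / 2 + 0.5) - gammaln(k / 2 + 1))
    return np.sqrt(2 / np.pi) * ratio / np.sqrt(2 * E + 1) * ((2 * E - 1) / (2 * E + 1)) ** (k / 2)

s = 1.7
E = 0.25 * (s**2 + 1 / s**2)
for k in range(8):
    amp, _ = quad(lambda x: phi(k, x) * psi(s, x), -np.inf, np.inf)
    assert abs(amp**2 - p_closed(k, E)) < 1e-7
```

The probabilities are also normalized: summing `p_closed` over even $k$ gives 1 for any $E\ge\frac12$, consistent with $\psi_s$ having no odd-state components.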
\subsection{There are no Gaussian 2-designs} \label{sec:fail}
Let us now consider the implications of Eq.~(\ref{eq:pk}). Our proposed ensemble will contain Gaussian states with some measure $f(E)\mathrm{d}\! E$ over $E$, which has not yet been defined. But states with each value of $E\in[\tfrac12,\infty)$ will contribute to the average of $\proj{\psi}$ on $L^2(x_-)$, according to \begin{eqnarray} \mathop{\mathrm{Avg}}_{\psi\in\mathcal{M}}\left(\proj{\psi}\right) &=& \int{\left(\sum_{k=0}^{\infty}{p_{2k}(E)\proj{2k}}\right)f(E)\mathrm{d}\! E} \nonumber \\ &=& \sum_{k=0}^{\infty}{\int{p_{2k}(E)f(E)\mathrm{d}\! E}\proj{2k}}. \label{eq:Eintegral} \end{eqnarray} This equation is entirely correct as long as the measure over Gaussian states is invariant under rotations in phase space (in which case the off-diagonal elements vanish by symmetry); otherwise it is only correct for the diagonal elements, which is still sufficient to demonstrate a problem.
Now, $p_k$ splits into a factor which depends on the squeezing (energy) of the Gaussian state, \begin{equation*} \frac{1}{\sqrt{2E+1}}\cdot\sqrt{\frac{2E-1}{2E+1}}^k, \end{equation*} and a factor that is completely independent of the state, \begin{equation*} \sqrt{\frac{2}{\pi}}\,\frac{\Gamma\left(\frac{k}{2}+\frac12\right)}{\Gamma\left(\frac{k}{2}+1\right)}, \end{equation*} so \begin{equation*} \mathop{\mathrm{Avg}}_{\psi\in\mathcal{M}}\left(\proj{\psi}\right) = \sqrt{\frac{2}{\pi}}\sum_{k=0}^{\infty}{\frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)}w_k\proj{2k}} \end{equation*} (the sum runs over the even number states $\ket{2k}$, whose weight is $p_{2k}$), where $w_k$ contains all the $E$-dependence of this average state, and is given by \begin{equation*} w_k = \int{\frac{1}{\sqrt{2E+1}}\left(\frac{2E-1}{2E+1}\right)^k f(E)\mathrm{d}\! E}. \end{equation*} Now, without saying anything about the measure over $E$, we can see a problem. The integrand of $w_k$ is a nonincreasing function of $k$ -- for \emph{every} finite value of $E$ -- so $w_k$ is nonincreasing with $k$. But the $E$-independent coefficient is \emph{also} strictly decreasing; the first few values of \begin{equation*} \frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)} \end{equation*} are \begin{equation*} \sqrt{\pi},\ \frac{\sqrt{\pi}}{2},\ \frac{3\sqrt{\pi}}{8},\ \frac{5\sqrt{\pi}}{16},\ \frac{35\sqrt{\pi}}{128},\ \frac{63\sqrt{\pi}}{256},\ \frac{231\sqrt{\pi}}{1024}, \ldots \end{equation*} and by $k=2$ it is already within $1\%$ of its asymptotic form \begin{equation*} \frac{\Gamma\left(k+\frac12\right)}{\Gamma(k+1)} \sim \frac{1}{\sqrt{k+1/4}}\quad(k\to\infty). \end{equation*} So the average state is bounded above by the product of an operator with strictly decreasing eigenvalues and \begin{equation} \sum_{k=0}^\infty{\frac{\proj{2k}}{\sqrt{k+1/4}}}.\label{eq:sqrtop} \end{equation} This is \emph{not} equal to the projector onto the positive-parity subspace -- which would have a flat spectrum. So it is \emph{not} possible to build a 2-design out of Gaussian states.
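The listed coefficients and the claimed asymptotic form can be verified numerically; the sketch below (our addition) computes the Gamma ratio in log space for stability.

```python
import numpy as np
from scipy.special import gammaln

def coeff(k):
    # Gamma(k + 1/2) / Gamma(k + 1), computed via log-gammas
    return np.exp(gammaln(k + 0.5) - gammaln(k + 1))

# the exact values listed in the text
exact = [np.sqrt(np.pi) * c for c in (1, 1/2, 3/8, 5/16, 35/128, 63/256, 231/1024)]
for k, e in enumerate(exact):
    assert abs(coeff(k) - e) < 1e-12

# from k = 2 onward the ratio is within 1% of 1/sqrt(k + 1/4)
for k in range(2, 50):
    assert abs(coeff(k) * np.sqrt(k + 0.25) - 1) < 0.01
```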
In fact, it's not even possible to get close. The best we can do is to choose an ensemble of very highly squeezed states, so that $E\gg 1$ for all states. The ratio $(2E-1)/(2E+1)$ in Eq.~(\ref{eq:pk}) goes to 1 (the remaining $E$-dependence is an overall, $k$-independent, normalization), and we are left with something proportional to the operator in Eq.~(\ref{eq:sqrtop}) -- which is not in any sense ``close'' to the projector that would signify a 2-design.
\section{So what is going on?}\label{sec:discuss}
The previous sections appear to present a contradiction. On one hand, HWSp is irreducible on the symmetric subspace of $L^2(\mathbb{R})\otimes L^2(\mathbb{R})$, so Schur's Lemma seems to imply that a HWSp-invariant ensemble of Gaussian states must be a 2-design. On the other hand, the explicit calculation of the last section makes it clear that no mixture of Gaussian states can yield a 2-design.
The resolution to this tension is to observe that Schur's Lemma states only that \emph{if} an operator $X$ on an irreducible representation space of $G$ commutes with every element of the representation $\{T_g\}$, then $X \propto 1\!\mathrm{l}$ on that space. It does \emph{not} guarantee the existence of $X$. If the $G$-twirled operator \begin{equation*} \rho = \int{T_g\proj{\psi}T^\dagger_g\mathrm{d}\! g} \end{equation*} exists, then (by construction) it must commute with every $T_g$ (and then Schur's Lemma completes the proof). But it may not exist! When $G$ is not compact, the integral may or may not converge. When it does not converge -- as turns out to be the case here -- the $G$-twirled state does not exist, and Schur's Lemma is not applicable.
The symplectic group provides a particularly interesting (and -- to a physicist -- irritating) case, because: \begin{enumerate} \item Sp \emph{does} have a well-defined Haar measure (see next section), \item there is in fact an Sp-invariant operator on $L^2(\mathbb{R})$ (the identity operator) and a HWSp-invariant operator on $L^2(\mathbb{R})\otimes L^2(\mathbb{R})$ (the projector on the symmetric subspace), \item $G$-twirling frequently \emph{does} converge, even for noncompact groups $G$ (e.g., when $G = \mathrm{HW}$, as we showed in Section \ref{sec:h}). \end{enumerate} The last point is illustrated by the Heisenberg-Weyl group. While it is noncompact, that does not prevent us from HW-twirling the vacuum state to get \begin{equation*} \int_{\mathrm{HW}}{T_{x,p}\proj{0}T_{x,p}^\dagger\mathrm{d}\! x\mathrm{d}\! p} \propto 1\!\mathrm{l}. \end{equation*}
Actually, a rather subtle trick is required to make this integral converge. If we define a sequence of partial integrals over $|x+ip|<r$ for $r = 1,2,4,8,16,\ldots$, then the sequence of partial integrals is not Cauchy -- i.e., it does not converge to an operator in $\mathcal{B}(L^2(\mathbb{R}))$. However, if we project these partial integrals onto \emph{any} finite subspace of $L^2(\mathbb{R})$, then the sequence of projections converges to $1\!\mathrm{l}$ on that subspace.
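This behavior can be made concrete (a numerical sketch, our addition): in the Fock basis, the partial HW-twirl of the vacuum over the disk $|\alpha|<R$ is diagonal, with $\langle n|\int_{|\alpha|<R}\proj{\alpha}\,\mathrm{d}^2\alpha|n\rangle = \pi\,P(n+1,R^2)$, where $P$ is the regularized lower incomplete gamma function (this follows from the standard coherent-state overlap $|\braket{n}{\alpha}|^2 = e^{-|\alpha|^2}|\alpha|^{2n}/n!$). Each fixed diagonal entry converges to $\pi$ as $R\to\infty$, even though the trace $\pi R^2$ diverges.

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def partial_twirl_diag(n, R):
    # <n| Int_{|alpha|<R} |alpha><alpha| d^2 alpha |n>; off-diagonals vanish by phase symmetry
    return np.pi * gammainc(n + 1, R**2)

# on any fixed finite subspace the partial twirls converge to pi * identity
for n in range(9):
    assert abs(partial_twirl_diag(n, 6.0) / np.pi - 1) < 1e-5
```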
In contrast, the Sp-twirling integral diverges on \emph{every} finite subspace. Let us show this explicitly by (finally) deriving the Sp-invariant measure over Gaussian states.
\section{The symplectically-invariant measure}
Although Sp is a 3-parameter Lie group, we are really only concerned with its action on the 2-parameter manifold of Gaussian squeezed vacuum states. So we do not need the whole group, nor do we need to calculate the Haar measure over the entire group. However, we need a subgroup that is transitive on squeezed vacuum states. The 2-parameter parabolic subgroup of Sp generated by squeezings $U$ and shearings $V$ (see Sec.\ref{sec:sp}) is exactly such a group \cite{SimonPRA88}.
An arbitrary element of the parabolic subgroup is given by $S(u,v) = V(v)U(u)$. Composition of two elements is given by \begin{equation*} S(u',v')S(u,v) = S(u'u,u'v+v'). \end{equation*} Viewed as a left action, this implies that the measure $\mu(u,v)\,\mathrm{d}\! u\mathrm{d}\! v$ transforms under $S(u',v')$ as \begin{equation*}
\mu(u,v) \mapsto \mu(u'u,u'v+v') |u'^2|, \end{equation*}
where $|u'^2|$ is the Jacobian of the transformation. Let us choose normalization so that the measure equals $\mathrm{d}\! u\mathrm{d}\! v$ at the identity element ($u=1$, $v=0$), so $\mu(1,0)=1$. Then we have $\mu(u',v')u'^2=1$, so the left invariant Haar measure is \begin{equation}\label{eq:uvmeas} \mu = \frac{1}{u^2}\, \mathrm{d}\! u\, \mathrm{d}\! v. \end{equation}
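The left-invariance of this measure can be checked symbolically (a SymPy sketch, our addition):

```python
import sympy as sp

u, v, up, vp = sp.symbols("u v up vp", positive=True)

# left translation by S(u', v'): (u, v) -> (u'u, u'v + v')
U, V = up * u, up * v + vp
J = sp.Matrix([[sp.diff(U, u), sp.diff(U, v)],
               [sp.diff(V, u), sp.diff(V, v)]]).det()
assert sp.simplify(J - up**2) == 0  # Jacobian is u'^2

mu = lambda a, b: 1 / a**2          # candidate density
# invariance: mu(u'u, u'v + v') * |J| == mu(u, v)
assert sp.simplify(mu(U, V) * J - mu(u, v)) == 0
```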
Note that, although Sp$(1,\mathbb{R})$ is unimodular (its left and right invariant measures are equal), the parabolic subgroup is not; it has modulus $1/|u'|$ when $S(u',v')$ acts on the right.
Since this group is transitive on squeezed vacuum states, we can parameterize them using $(u,v)$ as well. To do so, we observe that a squeezed vacuum state is completely specified by a real, symmetric, positive definite matrix $G$ with unit determinant (the state's covariance matrix is $\frac12G^{-1}$). $G$ defines the state's Wigner function on phase space (parameterized by $Z=[x,p]^\mathrm{T}$) as \begin{equation*} W_G(Z) = \frac{1}{\pi} \exp(-Z^\mathrm{T} G Z). \end{equation*} We can apply a symplectic transformation contragrediently to $Z$, as $Z\mapsto S^{-1}Z$, or we can apply it to $G$ as $G\mapsto (S^{-1})^\mathrm{T} G S^{-1}.$ So, to find the matrix $G(u,v)$ for a specific squeezed state, we apply squeezing and shearing transformations to the vacuum state ($G=1\!\mathrm{l}$), obtaining \begin{eqnarray*} G(u,v) &=& V(-v)^\mathrm{T}U(1/u)^\mathrm{T}U(1/u)V(-v) \nonumber \\ &=& \begin{pmatrix} u+v^2/u & v/u \\ v/u & 1/u \end{pmatrix} \end{eqnarray*} for a general squeezed vacuum state. We could reach the same state by squeezing and then rotating, \begin{equation*} G(s,\theta) = R(-\theta)^\mathrm{T} U(s^2)^\mathrm{T} U(s^2) R(-\theta), \end{equation*} provided that \begin{eqnarray*} u &=& \frac{s^2}{s^4\cos^2(\theta)+\sin^2(\theta)} ,\\ v &=& \frac{(s^4-1)\sin(\theta)\cos(\theta)}{s^4\cos^2(\theta)+\sin^2(\theta)}. \end{eqnarray*} The measure in Eq.~(\ref{eq:uvmeas}) becomes \begin{equation*}
\mu = \frac{2|1-s^4|}{s^3} \mathrm{d}s \mathrm{d}\theta, \end{equation*} and if we substitute $s = \sqrt{2E\pm\sqrt{4E^2-1}}$, we find (after some algebra) \begin{equation} \mu = 4\mathrm{d}\! E\mathrm{d}\!\theta. \label{eq:HWSpmeasure} \end{equation} The average energy of the invariant ensemble is divergent, and -- since every energy contributes equally -- it is dominated by states with arbitrarily large squeezing. Furthermore, every nonzero matrix element in the ensemble-average state defined by Eq.~(\ref{eq:Eintegral}) diverges. This explains (and resolves) the apparent contradiction with Schur's Lemma: there \emph{is} no ensemble-average positive operator, since the integral diverges. Since the average doesn't exist, it need not be proportional to an irrep projector.
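The change of variables from $s$ to $E$ can be verified directly (a SymPy sketch, our addition, taking $s>1$ so that $|1-s^4|=s^4-1$):

```python
import sympy as sp

s = sp.symbols("s", positive=True)
E = (s**2 + 1 / s**2) / 4   # energy of the squeezed vacuum, E = (s^2 + 1/s^2)/4

# radial factor of the invariant measure: 2(s^4 - 1)/s^3 ds = 4 dE  (for s > 1),
# since dE/ds = (s^4 - 1)/(2 s^3)
assert sp.simplify(2 * (s**4 - 1) / s**3 - 4 * sp.diff(E, s)) == 0
```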
Note that the measure defined in Eq.~(\ref{eq:HWSpmeasure}) is perfectly well-defined and well-behaved. Some functions have well-defined averages over it, but some (notably the positive-operator-valued function $\proj{\psi}^{\otimes 2}$) do not. Consider, as a simple example of the same phenomenon, the standard Lebesgue measure $\mathrm{d}\! x$. Some functions, such as $f(x) = (x^2+1)^{-1}$, have well-defined integrals over $\mathrm{d}\! x$. Others, such as $f(x) = x^{-1}$, do not. Although physicists can often use tricks to evaluate ill-defined integrals (e.g., setting $\int_{-\infty}^{\infty}{x^{-1}\mathrm{d}\! x}=0$ by symmetry), some integrals over $\mathrm{d}\! x$ are truly ill-defined -- like that of $f(x)=1$. For squeezed Gaussians, the ensemble-average state is such a case.
\section{Conclusion}
The main result of this paper is, of course, that Gaussian 2-designs do not exist. There \emph{is} a natural measure over all Gaussian states, which is left-invariant under the entire affine symplectic group, but neither this measure nor any other can yield a 2-design. This is not, of course, to say that there are no 2-designs for $L^2(\mathbb{R})$ -- just that they cannot be built of Gaussian states.
Equation~(\ref{eq:sqrtop}) shows that Gaussian ensembles can't even get arbitrarily close to the 2-design condition. This is significant for state and process tomography, since the spectral non-uniformity of Eq.~(\ref{eq:sqrtop}) means that any Gaussian POVM will be less than optimally sensitive to variations in certain matrix elements. \emph{However}, this by no means implies that squeezing does not help in quantum tomography. While squeezed states do not form a 2-design, our analysis has shown that they come far closer to doing so than do coherent states. Squeezed measurements promise major improvements in the statistical efficiency with which non-classical states can be reconstructed -- just not quite as much improvement as (in principle) could be gotten with non-Gaussian states.
In particular, we note that the eigenvectors of position, momentum, and $\mathbf{X}\cos\theta + \mathbf{P}\sin\theta$ (for uniformly distributed $\theta$), which are measured in the theoretical idealization of homodyne quantum state tomography \cite{VogelPRA89}, can be seen as infinitely squeezed Gaussian states (or, more precisely, as an $s\to\infty$ limit of Gaussian states). This ``homodyne POVM'', rotationally invariant and dominated by infinitely squeezed states, is essentially identical to the HWSp-invariant measure. So while it is still rather far from being a 2-design, it is as close as \emph{any} Gaussian POVM can be -- and far more uniformly sensitive than the coherent-state POVM.
We still find the nonexistence of Gaussian 2-designs to be quite ``curious''; the representation-theoretic argument given in Section \ref{sec:sp} had us convinced for quite some time that they \emph{did} exist. The counterargument is unique to noncompact groups and infinite-dimensional Hilbert spaces. It also demonstrates a fundamental difference between the symplectic phase space structure on finite-dimensional Hilbert spaces, where irreducible representations \emph{do} reliably yield designs, and $L^2(\mathbb{R})$, where, as we have shown, irreducibility does not necessarily imply that $\proj{\psi}^{\otimes 2}$ can be integrated over the group.
\begin{acknowledgments} P.S.T. acknowledges useful discussions with S. Bartlett, A. Harrow, and J. Repka, as well as support from JSPS Research Fellowships for Young Scientists, JSPS KAKENHI (20549002) for Scientific Research (C). R.B.K. is supported by LANL's LDRD program.
\end{acknowledgments}
\end{document}
\begin{document}
\begin{abstract} We consider in the whole plane the following Hamiltonian system of coupled Schr\"{o}dinger equations \begin{equation*}\left\{ \begin{array}{ll} -\Delta u+V_0u=g(v)\\ -\Delta v+V_0v=f(u)\\ \end{array} \right. \end{equation*} where $V_0>0$, $f,g$ have critical growth in the sense of Moser. We prove that the (nonempty) set $S$ of ground state solutions is compact in $H^1(\mathbb R^2)\times H^1(\mathbb R^2)$ up to translations. Moreover, for each $(u,v)\in S$, one has that $u,v$ are uniformly bounded in $L^\infty(\mathbb R^2)$ and uniformly decaying at infinity. Then we prove that actually the ground state is positive and radially symmetric. We apply those results to prove the existence of semiclassical ground state solutions to the singularly perturbed system \begin{equation*} \begin{cases} -\varepsilon^2\Delta \varphi+V(x)\varphi=g(\psi)&\\ -\varepsilon^2\Delta \psi+V(x)\psi=f(\varphi) \end{cases} \end{equation*} where $V\in \mathcal{C}(\mathbb{R}^2)$ is a Schr\"{o}dinger potential bounded away from zero. Namely, as the dimensionless Planck constant $\varepsilon\to 0$, we prove the existence of minimal energy solutions which concentrate around the closest local minima of the potential with some precise asymptotic rate. \end{abstract}
\maketitle
\section{Introduction} \noindent Consider in the whole $\mathbb{R}^2$ the following system of coupled Schr\"odinger equations \begin{equation}\label{q1} \left\{ \begin{array}{ll} \displaystyle -\varepsilon^2\Delta \varphi+V(x)\varphi=\frac{\partial H(\varphi,\psi)}{\partial\psi}\\ \displaystyle -\varepsilon^2\Delta \psi+V(x)\psi=-\frac{\partial H(\varphi,\psi)}{\partial\varphi} \end{array} \right. \end{equation} where $\varepsilon>0$ and the external Schr\"odinger potential $V\in C(\mathbb R^2,\mathbb R)$ satisfies the following condition: \begin{itemize}
\item [$(V)$] $0<V_0:=\inf_{\mathbb R^2}V(x)<\lim_{|x|\rightarrow\infty}V(x)=V_\infty\leq\infty$. \end{itemize} The Hamiltonian has the following form $H(\varphi,\psi)=G(\psi)-F(\varphi)$, with $F(t)=\int_0^t f(\tau)\, \mathrm{d} \tau$ and $G(t)=\int_0^t g(\tau)\, \mathrm{d} \tau$ and the nonlinearities $f,g\in C(\mathbb R,\mathbb R)$ satisfy the following hypotheses: \begin{itemize} \item [$(H1)$] $f(t)=o(t)$ and $g(t)=o(t)$, as $t\rg0$; \item [$(H2)$] There exists $\theta>2$ such that for any $t\not=0$, \begin{center} $0<\theta F(t)\le f(t)t$ and $0<\theta G(t)\le g(t)t$; \end{center} \item [$(H3)$] There exists $M>0$ such that for any $t\not=0$, \begin{center} $0<F(t)\le Mf(t)$ and $0<G(t)\le Mg(t)$; \end{center}
\item [$(H4)$] $f(t)/|t|$ and $g(t)/|t|$ are strictly increasing for $t\not=0$. \end{itemize}
\noindent As a consequence of the Pohozaev-Trudinger-Moser inequality, by which the Sobolev space $H^1$ embeds into the space of functions such that $e^{\alpha u^2}\in L^1$, the following notion of critical growth in dimension two was first introduced in \cite{AY,DMR} (in the case of bounded domains): \begin{definition}\label{cgdef} A function $f:\mathbb R^+\rightarrow\mathbb R^+$ has critical growth in the sense of the Pohozaev-Trudinger-Moser inequality if there exists $\alpha_0>0$ such that \begin{align*} \lim_{t\to + \infty} \frac {f(t)}{e^{\alpha t^2}}= \begin{cases} 0 & \text{if } \alpha > \alpha_0 \\ + \infty & \text{if } \alpha < \alpha_0 \end{cases} \end{align*} \end{definition}
\noindent The following growth assumption will be crucial in what follows:
\begin{itemize}
\item [$(H5)$] $\displaystyle\liminf_{|t|\rightarrow\infty}\frac{tf(t)}{e^{\alpha_0t^2}}\ge\beta_0>\frac{2e}{\alpha_0}V_0$ and $\displaystyle\liminf_{|t|\rightarrow\infty}\frac{tg(t)}{e^{\alpha_0t^2}}\ge\beta_0>\frac{2e}{\alpha_0}V_0.$ \end{itemize}
\noindent It is well known, both from the theoretical point of view as well as from that of applications, that minimal energy solutions, the so-called ground states, play a fundamental role, see e.g. \cite{BF}. In what follows we will focus on this class of solutions. In particular, to investigate the sign of ground state solutions to \re{q1}, we require in addition the following condition: \begin{itemize}
\item [$(H6)$] There exist $p, q>1$ such that $f(t)\geq t^q$ and $g(t)\geq t^p$ for small $t>0$; \end{itemize}
\noindent As a reference model, take $F(t)=|t|^p (\mathrm{e}^{4\pi t^2}-1)$ and $G(t)=|t|^q(\mathrm{e}^{4\pi t^2}-1)$ with $p,q>2$ and $\alpha_0=4\pi$, which clearly satisfy $(H1)$-$(H6)$.
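For this reference model, the structural conditions can be verified symbolically; the sketch below (our addition) checks the $F$-half of $(H1)$ and $(H2)$ with $\theta=p$ on $t>0$ (the computation for $G$ is identical), using the identity $tf(t)-pF(t)=8\pi t^{p+2}e^{4\pi t^2}>0$.

```python
import sympy as sp

t, p = sp.symbols("t p", positive=True)
F = t**p * (sp.exp(4 * sp.pi * t**2) - 1)   # reference model, restricted to t > 0
f = sp.diff(F, t)

# (H2) with theta = p:  t f(t) - p F(t) = 8 pi t^{p+2} e^{4 pi t^2} > 0
assert sp.simplify(t * f - p * F - 8 * sp.pi * t**(p + 2) * sp.exp(4 * sp.pi * t**2)) == 0

# (H1): f(t) = o(t) as t -> 0+, sample check at p = 3
assert sp.limit((f / t).subs(p, 3), t, 0, "+") == 0
```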
Our main result reads as follows: \begin{theorem}\label{Th1} Assume condition $(V)$ and that $f,g$ have critical growth in the sense of Definition \ref{cgdef} and satisfy $(H1)$--$(H5)$. Then, for sufficiently small $\varepsilon>0$, $(\ref{q1})$ admits a least energy solution $z_\varepsilon=(\varphi_\varepsilon,\psi_\varepsilon)\in H^1(\mathbb R^2)\times H^1(\mathbb R^2)$. Moreover, the following properties hold: \begin{itemize}
\item [$(i)$] let $x_\varepsilon^1, x_\varepsilon^2, x_\varepsilon$ be any maximum point of $|\varphi_\varepsilon|, |\psi_\varepsilon|, |\varphi_\varepsilon|+|\psi_\varepsilon|$ respectively, then, setting $$\mathcal{M}\equiv\{x\in \mathbb R^2: V(x)=V_0\}$$ one has $$ \lim_{\varepsilon\rightarrow
0}\mbox{dist}(x_\varepsilon,\mathcal{M})=0\:\text{ and }\: \lim_{\varepsilon\rg0}|x_\varepsilon^i-x_\varepsilon|=0,\quad i=1,2. $$ Furthermore, $(\varphi_\varepsilon(\varepsilon x+x_\varepsilon),\psi_\varepsilon(\varepsilon x+x_\varepsilon))$ converges (up to a subsequence) as $\varepsilon\to 0$ to a ground state solution of \begin{align*} \left\{ \begin{array}{ll} -\Delta u+V_0u=g(v)\\ -\Delta v+V_0v=f(u) \end{array} \right. \end{align*} \item [$(ii)$] if in addition $(H6)$ holds, then, replacing $f$ and $g$ above with their odd extensions, for $\varepsilon>0$ small enough and up to a change of sign, $\varphi_\varepsilon, \psi_\varepsilon>0$ in $\mathbb R^2$ and $x_\varepsilon^1,x_\varepsilon^2$ are the unique global maximum points of $\varphi_\varepsilon,\psi_\varepsilon$ respectively, which also satisfy
$$\lim_{\varepsilon\rg0}|x_\varepsilon^1-x_\varepsilon^2|/\varepsilon=0.$$
Moreover, for some $c,C>0$ one has $$|\varphi_\varepsilon(x)|\le C\exp(-\frac{c}{\varepsilon}|x-x_\varepsilon^1|),\,\,|\psi_\varepsilon(x)|\le C\exp(-\frac{c}{\varepsilon}|x-x_\varepsilon^2|), \,\, x\in\mathbb R^2;$$ \end{itemize} \end{theorem} (Without loss of generality, throughout the paper we may assume $0\in\mathcal{M}$.) \begin{remark}\label{remark1} Let us make a few comments on the conditions we assume in Theorem \ref{Th1}: \begin{itemize} \item Actually the Ambrosetti-Rabinowitz condition $(H2)$ can be replaced by the following slightly weaker assumption: \begin{itemize} \item [$(H2)'$] There exists $\theta>2$ such that for any $t\not=0$, \begin{center} $0<2F(t)\le f(t)t$ and $0<\theta G(t)\le g(t)t$, \end{center} or, symmetrically, \begin{center} $0<\theta F(t)\le f(t)t$ and $0<2G(t)\le g(t)t$. \end{center} \end{itemize}
\item We also point out that conditions $(H2)$ and $(H4)$ are weaker than the following assumption: \begin{itemize} \item [$(H)$] $f,g\in C^1(\mathbb R,\mathbb R)$ and there exists $\delta'>0$ such that for any $s\not=0$, $$ 0<(1+\delta')f(s)s\le f'(s)s^2\,\,\mbox{and}\,\,\, 0<(1+\delta')g(s)s\le g'(s)s^2 $$ which appears in the literature, see \cite{dsr,Pisto,Ramos1}. \end{itemize}
\item Hypotheses $(H6)$ and $(H)$ can also be found in \cite{dsr}. Clearly hypothesis $(H)$ yields $sf(s)\le f(1)|s|^{2+\delta'}$ and $sg(s)\le g(1)|s|^{2+\delta'}$ if $|s|\le1$. Let us point out that in the present paper we do not require $sf(s),sg(s)$ to be less than $|s|^r$ near the origin for some $r>2$. \end{itemize} \end{remark} Systems of the form \eqref{q1} have been largely investigated in the last three decades, being a prototype in many different applications, where they model for instance the minimal energy interaction between nonlinear fields, see \cite{BF,yang}. The scenario changes remarkably from the higher dimensional case $N\geq 3$ to the planar case $N=2$. In particular, $N=2$ affects the notion of critical growth, which is the maximal admissible growth for the nonlinearities in order to preserve the variational structure of the problem; we refer to \cite{dcassani2,CST,CT} for a discussion on this topic and to \cite{bst,ruf} for a survey on systems of the form \eqref{q1} in the case of bounded domains. As far as minimal energy solutions in the whole space are concerned, existence results were first established in \cite{boyan}, see also \cite{Weth}, in the higher dimensional case and then recently extended to $N=2$ in \cite{DJJ}, where the Trudinger-Moser critical case is covered, see also \cite{bsrt,dsr}. Qualitative properties of minimal energy solutions such as symmetry and positivity have been investigated in the higher dimensional case in \cite{Sirakov1,syso,dsr}, see also \cite{dgp,QS} for closely related results. Still in dimension $N\geq 3$, a priori estimates have been obtained in \cite{DY}. A priori bounds open the way to investigate the existence and concentrating behavior, as $\varepsilon\to 0$, of the so-called semiclassical states.
From the point of view of Physics, these solutions live on the interface between quantum and classical Mechanics, in the sense that the field behaves like a Newtonian particle as $\varepsilon\to 0$, see \cite{EvansZ} for a survey on the topic and references therein. Semiclassical states for singularly perturbed Schr\"{o}dinger systems have been studied in the higher dimensional case in \cite{ASY,Ramos1,dlz}.
\noindent Finally, let us mention that the question of whether the ground state we find is unique seems to be out of reach at the moment. This is still a challenging open problem even in the subcritical case, as well as in higher dimensions, in which uniqueness of positive solutions (not necessarily ground states) is known just in a few particular cases such as Lane-Emden systems \cite{rd}. More generally, the matter of uniqueness of ground states, even in cases in which one has multiplicity of positive solutions, remains open even for the single equation.
\subsection*{Overview} The paper is organized as follows: in Section \ref{lp} we begin by studying a limit problem for system \eqref{q1}. Here we complete the work initiated in \cite{DJJ}, where the existence of ground states is proved, by establishing a priori estimates, regularity, symmetry and qualitative properties of solutions. We use a suitable Nehari manifold approach in the spirit of Pankov \cite{Pankov} combined with Moser type techniques, as everything is set in dimension two and in the presence of Moser critical growth. In particular, we exploit those preliminary results to prove positivity of ground state solutions in a quite general setting, as developed throughout Section \ref{sign_s}. Then, Section \ref{semiclassical_s} is devoted to applying the information previously obtained on the limit problem to the analysis of the concentrating behavior of semiclassical solutions, both from the point of view of localizing bumps and of deriving the asymptotic rate of concentration. Here the presence of critical Moser growth requires some delicate energy estimates, which we then apply to establish compactness. \section{The limit problem}\label{lp}
\noindent By denoting $u_\varepsilon(x)=\varphi(\varepsilon x),v_\varepsilon(x)=\psi(\varepsilon x)$ and $V_\varepsilon(x)=V(\varepsilon x)$, \re{q1} is equivalent to \begin{align*} \left\{ \begin{array}{ll} -\Delta u_\varepsilon+V_\varepsilon(x)u_\varepsilon=g(v_\varepsilon)\\ -\Delta v_\varepsilon+V_\varepsilon(x)v_\varepsilon=f(u_\varepsilon) \end{array} \right. \end{align*} in the whole plane. Let $x_0\in\mathbb R^2$ and assume $u_\varepsilon(\cdot+\frac{x_0}{\varepsilon})\rightarrow u$, $v_\varepsilon(\cdot+\frac{x_0}{\varepsilon})\rightarrow v$ in $C^1_{loc}(\mathbb R^2)$; if $V_0=V(x_0)$ then one has \begin{equation}\label{q11} \left\{ \begin{array}{ll} -\Delta u+V_0u=g(v)\\ -\Delta v+V_0v=f(u) \end{array} \right. \end{equation} which is the so-called limit problem of \re{q1}. Recently, D.\ G. De\ Figueiredo, J. M. do \'O and J. Zhang established in \cite{DJJ} the existence of ground state solutions to \re{q11}, precisely \begin{theoremletter}\label{a} {\rm (Theorem 1.3 in \cite{DJJ})} {\it Suppose that $f,g$ have critical growth and satisfy $(H1)$--$(H5)$. Then \eqref{q11} admits a ground state solution $(u,v)\in H^1(\mathbb R^2)\times H^1(\mathbb R^2)$.} \end{theoremletter} \noindent Denote by $\mathcal{S}$ the set of ground state solutions to system \re{q11}; then by Theorem A, $\mathcal{S}\not=\emptyset$. Here we investigate the regularity and qualitative properties of the ground state solutions to \re{q11}. Precisely, we prove the following results:
\begin{theorem}\label{Th2} Suppose $f,g$ have critical growth and satisfy $(H1)$--$(H5)$. Then the following hold true: \begin{itemize} \item [$(i)$] $(u,v)\in \mathcal{S}\Rightarrow u,v\in L^{\infty}(\mathbb R^2)\cap C_{loc}^{1,\gamma}(\mathbb R^2)$ for some $\gamma\in(0,1)$;
\item [$(ii)$] let $x_z\in\mathbb R^2$ be the maximum point of $|u(x)|+|v(x)|$, then the set $$\{(u(\cdot+x_z),v(\cdot+x_z))\,|\, (u,v)\in \mathcal{S}\}$$ is compact in $H^1(\mathbb R^2)\times H^1(\mathbb R^2)$;
\item [$(iii)$] $0<\inf\{\|u\|_{\infty},\|v\|_{\infty}\,|\, (u,v)\in \mathcal{S}\}\le \sup\{\|u\|_{\infty},\|v\|_{\infty}\,|\, (u,v)\in \mathcal{S}\}<\infty$;
\item [$(iv)$] $u(x+x_z)\rightarrow 0$ and $v(x+x_z)\rightarrow 0$, as $|x|\rightarrow\infty$ uniformly for any $z=(u,v)\in \mathcal{S}$, where $x_z$ is given in $(ii)$; \item [$(v)$] for any $(u,v)\in \mathcal{S}$, the following Pohozaev-type identity holds $$ \int_{\mathbb R^2}(F(u)+G(v)-V_0uv)\,\mathrm{d} x=0. $$ \end{itemize} \end{theorem} \begin{theorem}\label{sign} Assume, in addition to the hypotheses of Theorem \ref{Th2}, that $(H6)$ holds. Then, replacing $f$ and $g$ in Theorem \ref{Th2} with their odd extensions, for any $(u,v)\in\mathcal{S}$ one has $u,v\in C^2(\mathbb R^2)$ and $uv>0$ in $\mathbb R^2$. Moreover, there exists some point
$x_0\in\mathbb R^2$ such that $u,v$ are radially symmetric with respect to the same point $x_0$, namely $u(x)=u(|x-x_0|)$, $v(x)=v(|x-x_0|)$ and setting $r=|x-x_0|$, one has for $r>0$ $$ \frac{\partial u}{\partial r}<0\quad \text{ and }\quad \frac{\partial v}{\partial r}<0 $$ as well as $$\Delta u(x_0)<0\quad \text{ and }\quad \Delta v(x_0)<0.$$
Moreover, there exist $C,c>0$, independent of $z=(u,v)\in \mathcal{S}$, such that $$|D^{\alpha}u(x)|+|D^{\alpha}v(x)|\le C\exp(-c|x-x_0|),\quad x\in \mathbb R^2,\,|\alpha|=0,1$$ \end{theorem}
\subsection{The functional setting: a generalized Nehari manifold}
\renewcommand{\theequation}{2.\arabic{equation}}
Let $H^1(\mathbb R^2)$ be the usual Sobolev space endowed with the inner product $$
(u,v)_{H^1}:=\int_{\mathbb R^2}\nabla u\nabla v+V_0uv,\ \ \|u\|_{H^1}^2:=(u,u)_{H^1}, u,v\in H^1(\mathbb R^2). $$ and set $E=H^1(\mathbb R^2)\times H^1(\mathbb R^2)$ with the inner product $$ (z_1,z_2):=(u_1,u_2)_{H^1}+(v_1,v_2)_{H^1},\ \ z_i=(u_i,v_i)\in E, i=1,2. $$ Clearly we have the space decomposition $E=E^+\oplus E^-$, where $$
E^+:=\{(u,u)\,|\, u\in H^1(\mathbb R^2)\}\ \ \ \mbox{and}\ \ \ E^-:=\{(u,-u)\,|\, u\in H^1(\mathbb R^2)\}. $$ For each $z=(u,v)\in E$, one has $$z=z^++z^-=((u+v)/2,(u+v)/2)+((u-v)/2,(v-u)/2).$$
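The decomposition $z=z^++z^-$ can be checked componentwise (a trivial SymPy sketch, our addition; orthogonality of $E^+$ and $E^-$ in the product inner product follows from $(w,q)_{H^1}+(w,-q)_{H^1}=0$):

```python
import sympy as sp

u, v = sp.symbols("u v")
zp = ((u + v) / 2, (u + v) / 2)   # component in E^+ = {(w, w)}
zm = ((u - v) / 2, (v - u) / 2)   # component in E^- = {(q, -q)}
assert sp.simplify(zp[0] + zm[0] - u) == 0
assert sp.simplify(zp[1] + zm[1] - v) == 0
```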
\vskip0.1in Weak solutions to \eqref{q11} are the critical points of the associated energy functional $$ \Phi(z):=\int_{\mathbb R^2}\nabla u\nabla v+V_0uv-I(z),\ \ z=(u,v)\in E, $$ where $I(z)=\int_{\mathbb R^2}F(u)+G(v)$. Using the above notation we have \begin{equation}\label{y1}
\Phi(z):=\frac{1}{2}\|z^+\|^2-\frac{1}{2}\|z^-\|^2-I(z), \end{equation} which emphasizes the strongly indefinite nature of $\Phi$, which, however, by the hypotheses on $f$ and $g$, is of class $C^1(E,\mathbb R)$ and \begin{equation}\label{y2} I(0)=0, \ \langle I'(z), z\rangle>2I(z)>0,\ \ \mbox{for all}\ \ z\in E\setminus\{0\}. \end{equation} On one hand, if $z=(u,v)\in E\setminus \{0\}$ such that $\Phi'(z)=0$, then by $(H2)$ \begin{equation*} \Phi(z)=\Phi(z)-\frac{1}{2}\langle \Phi'(z),z\rangle=\int_{\mathbb R^2}\frac{1}{2}f(u)u-F(u)+\frac{1}{2}g(v)v-G(v)>0. \end{equation*} On the other hand, if $z=(u,-u)\in E^-$, we have by $(H2)$ $$
\Phi(z)=-\int_{\mathbb R^2}(|\nabla u|^2+V_0u^2)-\int_{\mathbb R^2} F(u)+G(-u)\le 0. $$ As a consequence, if $z\in E$ is a nontrivial critical point of $\Phi$, then necessarily $z\in E\setminus E^-$. This motivates the introduction of the following generalized Nehari manifold, due to Pankov \cite{Pankov} and then used also in \cite{Szulkin, Weth, DJJ}: $$ \mathcal{N}:=\{z\in E\setminus E^-: \langle \Phi'(z),z\rangle=0, \langle \Phi'(z),\varphi\rangle=0\ \mbox{for all}\ \ \varphi\in E^-\}. $$ Let $$ c_\ast:=\inf_{z\in\mathcal{N}}\Phi(z) $$ then $c_\ast$ is called the least energy level of system \re{q11}. In \cite{DJJ} the authors proved that $c_\ast\in(0,4\pi/{\alpha_0})$ and that it is achieved on $\mathcal{N}$.
\subsection{Proof of Theorem \ref{Th2}} \renewcommand{\theequation}{3.\arabic{equation}} Let $\{z_n\}\subset \mathcal{S}$, namely \begin{equation}\label{pss} \Phi(z_n)=c_\ast \ \ \mbox{and}\ \ \Phi'(z_n)=0, \quad \forall n\in\mathbb{N}. \end{equation} We carry out the proof of $(ii)$ of Theorem \ref{Th2} through the following four steps: \begin{itemize} \item We first prove that $\{z_n\}$ is bounded in $E$ (Proposition \ref{o1}); \item In Proposition \ref{nv} we prove that there exist $\{y_n\}\subset\mathbb R^2$ and $z_0\not={\bf 0}$ such that $z_n(\cdot+y_n)\rightharpoonup z_0$ in $E$ and $z_n(\cdot+y_n)\xrightarrow{a.e.}z_0$ in $\mathbb R^2$, as $n\rightarrow\infty$; \item In Proposition \ref{o11} we show that $z_0$ is actually a critical point of $\Phi$; \item Finally in Proposition \ref{con} we prove that $z_0\in \mathcal{S}$ and that actually $z_n(\cdot+y_n)\longrightarrow z_0$ strongly in $E$, as $n\to \infty$. \end{itemize} In the proof of Proposition \ref{o1} below we will use the following lemma, which we borrow from \cite{Fi}: \begin{lemmaletter}\label{RUF1} {\it The following inequality holds \[
s\text{ }t\leq \left\{ \begin{array}{ll} (e^{t^{2}}-1)+s(\log s)^{1/2}, \; & \text{ for all }t\geq 0 \text{ and }s\geq e^{1/4}; \\ (e^{t^{2}}-1)+\frac{1}{2}s^{2}, \; & \text{ for all } t \geq 0 \text{ and }0 \leq s\leq e^{1/4}. \end{array} \right. \]} \end{lemmaletter} \noindent The proofs of Proposition \ref{o1} and \ref{o11} are similar to \cite{DJJ}, however for the sake of completeness we give the details. \begin{proposition}\label{o1} There exists $C>0$ such that for all $n\in\mathbb{N}$: \begin{itemize}
\item [$1)$] $\|z_n\|=\|(u_n,v_n)\|\le C$;
\item [$2)$] $\int_{\mathbb R^2}f(u_n)u_n\, \mathrm{d} x\le C$ and $\int_{\mathbb R^2}g(v_n)v_n\, \mathrm{d} x\le C$;
\item [$3)$] $\int_{\mathbb R^2}F(u_n)\, \mathrm{d} x\le C$ and $\int_{\mathbb R^2}G(v_n)\, \mathrm{d} x\le C$. \end{itemize} \end{proposition} \begin{proof} From $\langle\Phi'(z_n),z_n\rangle=0$ we have \begin{equation}\label{pitomba}
2 \int_{\mathbb R^2} (\nabla u_n \nabla v_n+V_0u_nv_n)\, \mathrm{d} x - \int_{\mathbb R^2} f(u_n)u_n\, \mathrm{d} x - \int_{\mathbb R^2} g(v_n)v_n \, \mathrm{d} x =0. \end{equation} Recalling that $$ \Phi(z_n)=\int_{\mathbb R^2}(\nabla u_n \nabla v_n+V_0u_nv_n)\, \mathrm{d} x-\int_{\mathbb R^2}(F(u_n)+G(v_n))\, \mathrm{d} x=c_\ast $$ we obtain by $(H_3)$ the following \begin{align*} \int_{\mathbb R^2} [f(u_n)u_n + g(v_n)v_n]\, \mathrm{d} x &= 2\int_{\mathbb R^2} [F(u_n) + G(v_n)]\, \mathrm{d} x + 2c_\ast \\ & \leq \frac{2}{\theta}\int_{\mathbb R^2} [f(u_n)u_n + g(v_n)v_n]\, \mathrm{d} x + 2c_\ast. \end{align*} Thus \begin{equation}\label{goiaba} \int_{\mathbb R^2} [f(u_n)u_n + g(v_n)v_n]\, \mathrm{d} x \leq \frac{2c_\ast\theta}{\theta-2}. \end{equation} From $ \langle\Phi'(z_n),(v_n,0)\rangle=0$ and $ \langle\Phi'(z_n),(0,u_n)\rangle=0$, we have $$
\| v_n \|^2-\int_{\mathbb R^2}
f(u_n)v_n\, \mathrm{d} x=0\ \ \mbox{and}\ \ \| u_n \|^2-\int_{\mathbb R^2} g(v_n)u_n\, \mathrm{d} x=0. $$
Let $U_n=u_n/ \| u_n\|$ and $ V_n = v_n / \| v_n \| $, then \begin{align}
\| v_n \| & = \int_{\mathbb R^2} f(u_n)V_n\, \mathrm{d} x \label{laranja},\\
\| u_n \| & = \int_{\mathbb R^2} g(v_n)U_n\, \mathrm{d} x . \label{limao} \end{align} By $(H1)$, there exist $\beta>0$ and $C_\beta>0$ such that $$ f(t)\le C_\beta e^{\beta t^2}\quad \text{ and }\quad g(t)\le C_\beta e^{\beta t^2}\ \ \mbox{for all} \ \ t\ge0. $$ Moreover, there exists $C_1>0$ such that for all $n$ $$ f(u_n(x))\le C_1 u_n(x)\ \ \mbox{for}\ \ x\in\{x\in\mathbb R^2 : f(u_n(x))/C_\beta \leq e^{1/4}\}. $$ Setting $ t = V_n $ and $ s = f(u_n)/C_\beta $ in Lemma \ref{RUF1}, by $(H1)$-$(H2)$ together with the Pohozaev-Trudinger-Moser inequality we get \begin{align*} \int_{\mathbb R^2} f(u_n)V_n\, \mathrm{d} x& \leq C_\beta \int_{\{x \in \mathbb R^2 : f(u_n(x))/C_\beta \geq e^{1/4} \}} \frac{1}{C_\beta}f(u_n) \left[\log (\frac{1}{C_\beta} f(u_n))\right]^{1/2}\, \mathrm{d} x \\ &+ \frac{1}{2}\int_{\{x \in \mathbb R^2 : f(u_n(x))/C_\beta \leq e^{1/4} \}} \frac{1}{C_\beta} (f(u_n))^2\, \mathrm{d} x+C_\beta \int_{\mathbb R^2} (e^{V_n^{2}}-1) \, \mathrm{d} x \\ & \leq C_2 + (\beta^{1/2}+C_1/(2C_\beta)) \int_{ \mathbb R^2 } f(u_n) u_n \, \mathrm{d} x, \end{align*} for some constant $ C_2>0$. This estimate together with (\ref{laranja}) implies, for some constant $ c_1 > 0 $, that \begin{equation}\label{maravilha}
\| v_n \| \leq c_1(1 + \int_{ \mathbb R^2 } f(u_n) u_n\, \mathrm{d} x) \end{equation} and similarly \begin{equation}\label{macacheira}
\| u_n \| \leq c_1(1 + \int_{ \mathbb R^2 } g(v_n) v_n\, \mathrm{d} x). \end{equation} From \re{maravilha}, \re{macacheira} and \re{goiaba} we obtain $\|u_n\|+\|v_n\|\le 2c_1\left(1+\frac{c_\ast\theta}{\theta-2}\right)$, which proves claim $1)$. Then, by (\ref{goiaba}) and $(H_3)$ we obtain the remaining bounds $2)$ and $3)$. \end{proof} Next we prove that, up to translations, $\{z_n\}$ has a nontrivial weak limit. Clearly $(u_n,v_n)$ satisfies exactly one of the following conditions: \begin{itemize}
\item[] ({\it Vanishing}) $ \quad \lim_{n\rightarrow\infty}\sup_{y\in\mathbb R^2}\int_{B_R(y)}(u_n^2+v_n^2)\, \mathrm{d} x=0\ \ \mbox{for all}\ \ R>0; $
\item[] ({\it Nonvanishing}) there exist $\nu>0$, $R_0>0$ and $\{y_n\}\subset\mathbb R^2$ such that $$ \lim_{n\rightarrow\infty}\int_{B_{R_0}(y_n)}(u_n^2+v_n^2)\, \mathrm{d} x\ge\nu. $$ \end{itemize}
We borrow from \cite{Fi} the following lemma: \begin{lemmaletter}\label{l3.3}{\it Let $\Omega\subset\mathbb R^2$ be a bounded domain and $f\in C(\mathbb R,\mathbb R)$. Let $\{u_n\}\subset L^1(\Omega)$ be such that $u_n\rightarrow u$ strongly in $L^1(\Omega)$, $$
f(u_n)\in L^1(\Omega)\ \ \mbox{and}\ \ \ \int_{\Omega}|f(u_n)u_n|\, \mathrm{d} x\le C\ \ \mbox{for all}\ \ n\ge1 $$ for some $C>0$. Then, up to a subsequence we have $$ f(u_n)\rightarrow f(u)\ \ \mbox{strongly in}\ \ L^1(\Omega)\ \ \mbox{as}\ \ n\rightarrow\infty. $$} \end{lemmaletter}
\begin{proposition}\label{nv} Vanishing does not occur. \end{proposition} \begin{proof} We know from \cite{DJJ} that $c_\ast\in(0,4\pi/\alpha_0)$, hence for some $\delta>0$ sufficiently small one has $c_\ast\in(0,4\pi/\alpha_0-\delta)$. Assume by contradiction that vanishing occurs, namely $$ \lim_{n\rightarrow\infty}\sup_{y\in\mathbb R^2}\int_{B_R(y)}(u_n^2+v_n^2)\, \mathrm{d} x=0\ \ \mbox{for all}\ \ R>0, $$ then Lions's lemma \cite{lionslemma} yields $u_n\rg0, v_n\rg0$ strongly in $L^s(\mathbb R^2)$ for any $s>2$.
Let us divide the proof into two steps: \vskip0.1in {\bf Step 1.} We claim $$ \lim_{n\rightarrow\infty}\int_{\mathbb R^2}F(u_n)\, \mathrm{d} x=0\ \ \mbox{and}\ \ \lim_{n\rightarrow\infty}\int_{\mathbb R^2}G(v_n)\, \mathrm{d} x=0. $$ Indeed, by Lemma \ref{l3.3}, for any $R>0$ one has $f(u_n)\rg0$ and $g(v_n)\rg0$ strongly in $L^1(B_R(0))$ as $n\rightarrow\infty$. Then by $(H3)$ and the Lebesgue dominated convergence theorem, \begin{equation}\label{y4.1} \lim_{n\rightarrow\infty}\int_{B_R(0)}F(u_n)\, \mathrm{d} x=0\ \ \mbox{and}\ \ \lim_{n\rightarrow\infty}\int_{B_R(0)}G(v_n)\, \mathrm{d} x=0. \end{equation} In order to prove the claim, it is enough to prove that for any $\varepsilon>0$, there exists $R>0$ such that for $n$ large enough, \begin{equation}\label{y4.2} \int_{\mathbb R^2\setminus B_R(0)}F(u_n)\, \mathrm{d} x\le\varepsilon\quad\text{and}\quad \int_{\mathbb R^2\setminus B_R(0)}G(v_n)\, \mathrm{d} x\le\varepsilon. \end{equation} By $(H3)$ and Proposition \ref{o1}, for any $K>0$ and $n$, $$
\int_{\{x\in\mathbb R^2\setminus B_R(0): |u_n(x)|\ge K\}}F(u_n)\le\frac{M}{K}\int_{\{x\in\mathbb R^2\setminus B_R(0): |u_n(x)|\ge K\}}f(u_n)u_n\le\frac{MC}{K}. $$ Then choosing $K>0$ large enough, we get that for all $n$ \begin{equation}\label{y4.3}
\int_{\{x\in\mathbb R^2\setminus B_R(0): |u_n(x)|\ge K\}}F(u_n)\le\frac{\varepsilon}{2}. \end{equation} By $(H1)$, for any $\rho>0$ there exists $C_{\rho,K}>0$ such that $$
F(t)\le \rho t^2+C_{\rho,K}t^4,\ \ \ |t|\le K. $$ Recalling that $u_n\rightarrow 0$ strongly in $L^4(\mathbb R^2)$, we obtain $$
\limsup_{n\rightarrow\infty}\int_{\{x\in\mathbb R^2\setminus B_R(0): |u_n(x)|\le K\}}F(u_n)\le\rho\sup_{n}\|u_n\|_2^2. $$ By Proposition \ref{o1} and since $\rho$ is arbitrary, for $n$ large enough we get \begin{equation}\label{y4.4}
\int_{\{x\in\mathbb R^2\setminus B_R(0): |u_n(x)|\le K\}}F(u_n)\le\frac{\varepsilon}{2}. \end{equation} Thus \re{y4.3} and \re{y4.4} yield the first bound in \re{y4.2} and similarly one gets the second bound.
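The two estimates of Step 1 combine as follows; a short summary of the splitting (written for $F$; the argument for $G$ is identical), using only \re{y4.1} and \re{y4.2}:

```latex
\limsup_{n\rightarrow\infty}\int_{\mathbb R^2}F(u_n)\,\mathrm{d} x
 \le\lim_{n\rightarrow\infty}\int_{B_R(0)}F(u_n)\,\mathrm{d} x
  +\limsup_{n\rightarrow\infty}\int_{\mathbb R^2\setminus B_R(0)}F(u_n)\,\mathrm{d} x
 \le 0+\varepsilon,
```

and since $\varepsilon>0$ is arbitrary, $\int_{\mathbb R^2}F(u_n)\,\mathrm{d} x\rightarrow 0$ as $n\rightarrow\infty$.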
\vskip0.1in {\bf Step 2.} We claim that $c_\ast=0$, which contradicts the fact that $c_\ast>0$. We need the following inequality, used in \cite[Lemma 4.1]{Souza}: \begin{equation}\label{inequa} t\ s\le t^2(e^{t^2}-1)+s(\log{s})^{\frac{1}{2}},\ \ \mbox{for all}\ \ (t,s)\in[0,\infty)\times[e^{\frac{1}{\sqrt[3]{4}}},\infty). \end{equation} By Step 1, \begin{equation}\label{y4.5} \lim_{n\rightarrow\infty}\int_{\mathbb R^2}(\nabla u_n\nabla v_n+V_0u_nv_n)=c_\ast\,.
\end{equation} If $u_n\rightarrow 0$ or $v_n\rg0$ strongly in $H^1(\mathbb R^2)$ as $n\rightarrow\infty$, then \re{y4.5} directly yields $c_*=0$. Therefore, let us assume $\inf_{n\ge1}\|u_n\| \ge b > 0$. Note that \begin{equation}\label{unvn4}
\| u_n \|^2 = \int_{\mathbb R^2} g(v_n) u_n \, \mathrm{d} x. \end{equation} By $(H1)$, for any fixed $\varepsilon>0$, there exists $C_\varepsilon>0$ such that $$ f(t), g(t)\le C_\varepsilon e^{(\alpha_0+\varepsilon)t^2}\ \ \mbox{for}\ \ t\ge 0. $$
Let $ \overline{u}_n = (4\pi/\alpha_0 - \delta)^{1/2} u_n / \| u_n \|$. Using inequality \re{inequa} with $s = g(v_n)/C_\varepsilon$ and $t = \sqrt{\alpha_0}\, |\overline{u}_n|$, we obtain {\allowdisplaybreaks \begin{align*}
&(4\pi/\alpha_0 - \delta)^{1/2} \| u_n \| \le \int_{\mathbb R^2}g(v_n)|\overline{u}_n|\, \mathrm{d} x \\
&=\frac{C_\varepsilon}{\sqrt{\alpha_0}}\int_{\mathbb R^2}\frac{g(v_n)}{C_\varepsilon}\sqrt{\alpha_0}|\overline{u}_n|\, \mathrm{d} x \\ & \leq\frac{C_\varepsilon}{\sqrt{\alpha_0}}\int_{\{x \in\mathbb R^2 : g(v_n(x))/C_\varepsilon \geq e^{1/\sqrt[3]{4}} \}} \frac{g(v_n)}{C_\varepsilon} [\log(\frac{ g(v_n)}{C_\varepsilon})]^{1/2}\, \mathrm{d} x \\
&\ \ +\int_{\{x \in\mathbb R^2 : g(v_n(x))/C_\varepsilon \leq e^{1/\sqrt[3]{4}} \}}g(v_n)|\overline{u}_n|\, \mathrm{d} x+C_\varepsilon\sqrt{\alpha_0}\int_{\mathbb R^2}\overline{u}_n^2 ( e^{\alpha_0 \overline{u}_n^{2}} - 1 )\, \mathrm{d} x\\ & \leq\sqrt{\frac{\alpha_0+\varepsilon}{\alpha_0}}\int_{\{x \in\mathbb R^2 : g(v_n(x))/C_\varepsilon \geq e^{1/\sqrt[3]{4}} \}} g(v_n)v_n\, \mathrm{d} x+C_\varepsilon\sqrt{\alpha_0}\int_{\mathbb R^2}\overline{u}_n^2 ( e^{\alpha_0 \overline{u}_n^{2}} - 1 )\, \mathrm{d} x \\
&\ \ +\int_{\{x \in\mathbb R^2 : g(v_n(x))/C_\varepsilon \leq e^{1/\sqrt[3]{4}} \}}g(v_n)|\overline{u}_n|\, \mathrm{d} x\\ &\le \sqrt{\frac{\alpha_0+\varepsilon}{\alpha_0}}\int_{\mathbb R^2}g(v_n)v_n\, \mathrm{d} x+I_{1,n}+I_{2,n}, \end{align*} } where $I_{1,n}:=C_\varepsilon\sqrt{\alpha_0}\int_{\mathbb R^2}\overline{u}_n^2 ( e^{\alpha_0 \overline{u}_n^{2}} - 1 )\, \mathrm{d} x$ and $I_{2,n}:=\int_{\{x \in\mathbb R^2 : g(v_n(x))/C_\varepsilon \leq e^{1/\sqrt[3]{4}} \}}g(v_n)|\overline{u}_n|\, \mathrm{d} x$.
Recall that $\overline{u}_n\rg0$ strongly in $L^s(\mathbb R^2)$ for any $s>2$. Since $\| \overline{u}_n \|^2 = 4\pi/\alpha_0 - \delta$, there exists $p>1$ (close to $1$) such that $p\alpha_0(4\pi/\alpha_0 - \delta)<4\pi$. Thus, by the Pohozaev-Trudinger-Moser inequality, as $n\rightarrow\infty$, $$
I_{1,n}\le C_\varepsilon\sqrt{\alpha_0}\left(\int_{\mathbb R^2}|\overline{u}_n|^{2q}\right)^{1/q}\left(\int_{\mathbb R^2}( e^{p\alpha_0 \overline{u}_n^{2}}-1)\right)^{1/p}\rg0, $$ where $1/p+1/q=1$, namely, $I_{1,n}=o_n(1)$. Note that by $(H1)$-$(H2)$, for any $\rho>0$, there exists $C_{\rho,\varepsilon}>0$ such that $$
g(v_n(x))\le\rho|v_n(x)|+C_{\rho,\varepsilon}v_n^2,\ \ \mbox{for any}\ x\in\mathbb R^2\ \mbox{with}\ \ g(v_n(x))/C_\varepsilon \leq e^{1/\sqrt[3]{4}}. $$ Then \begin{align*}
I_{2,n}\le\int_{\mathbb R^2}(\rho|v_n|+C_{\rho,\varepsilon}v_n^2)|\overline{u}_n|\le\left[\rho\left(\int_{\mathbb R^2}|v_n|^2\right)^{1/2}+
C_{\rho,\varepsilon}\left(\int_{\mathbb R^2}|v_n|^4\right)^{1/2}\right]\left(\int_{\mathbb R^2}|\overline{u}_n|^2\right)^{1/2}. \end{align*} Recalling $v_n\rg0$ strongly in $L^4(\mathbb R^2)$, $$ \limsup_{n\rightarrow\infty}I_{2,n}\le C'\rho, $$ where $C'>0$ is independent of $\rho$. By the arbitrary choice of $\rho$, $I_{2,n}=o_n(1)$. Hence, \begin{equation}\label{barao4}
(4\pi/\alpha_0 - \delta)^{1/2} \| u_n \| \leq o_n(1) + (1 + \frac{\varepsilon}{\alpha_0})^{1/2}\int_{\mathbb R^2 } g(v_n) v_n . \end{equation} Similarly, we have \begin{equation}\label{CPV4}
(4\pi/\alpha_0 - \delta)^{1/2} \| v_n \| \leq o_n(1) + (1 + \frac{\varepsilon}{\alpha_0})^{1/2} \int_{\mathbb R^2 } f(u_n) u_n. \end{equation} Note that $$ \langle\Phi'(z_n),z_n\rangle=2\int_{\mathbb R^2}(\nabla u_n \nabla v_n+V_0u_nv_n)-\int_{ \mathbb R^2 } f(u_n) u_n + \int_{ \mathbb R^2 } g(v_n) v_n=0 $$ and that by \re{y4.5} we get $$\int_{ \mathbb R^2 } f(u_n) u_n+\int_{ \mathbb R^2 } g(v_n) v_n=2c_\ast+o_n(1).$$ It follows from (\ref{barao4})-(\ref{CPV4}) that $$
(4\pi/\alpha_0 - \delta)^{1/2}(\| u_n \| + \| v_n \|)\le 2c_\ast(1 + \frac{\varepsilon}{\alpha_0})^{1/2}+o_n(1). $$ Since $c_\ast<4\pi/\alpha_0 - \delta$, for $\varepsilon > 0$ sufficiently small and $n$ large enough we have $$
\| u_n \| + \| v_n \|\le 2(4\pi/\alpha_0 - \delta/2)^{1/2}. $$
Then, arguing as above, by the Pohozaev-Trudinger-Moser inequality and $u_n\rg0$ strongly in $L^q(\mathbb R^2)$ for any $q>2$, we have $\int_{\mathbb R^2}g(v_n)u_n\rg0$, which implies by (\ref{unvn4}) that $ u_n \rightarrow 0 $ strongly in $ H^1(\mathbb R^2)$. Thus, it follows from \re{y4.5} that $c_\ast=0$, a contradiction; hence vanishing does not occur. \end{proof}
\noindent As a consequence of Proposition \ref{nv}, up to a subsequence, there exist $\{y_n\}\subset\mathbb R^2$ and $z_0\not\equiv 0$ such that $z_n(\cdot+y_n)\rightharpoonup z_0$ in $E$ and $z_n(\cdot+y_n)\xrightarrow{a.e.}z_0$ in $\mathbb R^2$, as $n\rightarrow\infty$.
\begin{proposition}\label{o11} The weak limit $z_0$ is a critical point of $\Phi$. \end{proposition} \begin{proof} By $(H1)$, there exist $a>0$ and $\alpha>\alpha_0$ such that $$
|f(t)|\le a|t|+(e^{\alpha t^2}-1)\ \ \mbox{for all} \ \ t\in\mathbb R. $$ Then by the Pohozaev-Trudinger-Moser inequality $f(\bar{u}_n)\in L_{loc}^1(\mathbb R^2)$ and $g(\bar{v}_n)\in L_{loc}^1(\mathbb R^2)$, where $\bar{z}_n=(\bar{u}_n,\bar{v}_n)=(u_n(\cdot+y_n),v_n(\cdot+y_n))$. From Lemma \ref{l3.3} and Proposition \ref{o1} we get, as $n\rightarrow\infty$, $$ \int_{\mathbb R^2}(f(\bar{u}_n)\varphi+g(\bar{v}_n)\phi)\rightarrow\int_{\mathbb R^2}(f(u_0)\varphi+g(v_0)\phi), $$ for any $(\varphi,\phi)\in C_0^\infty(\mathbb R^2)\times C_0^\infty(\mathbb R^2)$. Noting that $\Phi'(\bar{z}_n)=0$, it follows that $$ \int_{\mathbb R^2}(\nabla u_0\nabla \phi+\nabla v_0\nabla \varphi+V_0u_0\phi+V_0v_0\varphi)\, \mathrm{d} x=\int_{\mathbb R^2}(f(u_0)\varphi+g(v_0)\phi)\, \mathrm{d} x, $$ for any $(\varphi,\phi)\in C_0^\infty(\mathbb R^2)\times C_0^\infty(\mathbb R^2)$. Thus, $\Phi'(z_0)=0$ in $E$ and $z_0=(u_0,v_0)$ is a critical point of $\Phi$. \end{proof} \vskip0.1in \begin{proposition}\label{con} $z_0\in \mathcal{S}$ and $z_n(\cdot+y_n)\longrightarrow z_0$ in $E$, as $n\rightarrow\infty$, thus $\mathcal{S}$ is a compact set. \end{proposition} \begin{proof} Thanks to the invariance of $\Phi$ by translation, let us write for simplicity $z_n$ in place of $z_n(\cdot+y_n)$ and let $z_n=(u_n,v_n)$, $z_0=(u_0,v_0)$. By $(H2)$, $f(s)s-2F(s)\ge0$ and $g(s)s-2G(s)\ge0$ for any $s\in\mathbb R$. 
Then by Fatou's Lemma, {\allowdisplaybreaks \begin{align}\label{fatou} c_\ast&=\Phi(z_n)-\frac{1}{2}\langle \Phi'(z_n),z_n\rangle\nonumber\\ &=\lim_{n\rightarrow\infty}\left(\int_{\mathbb R^2}\frac{1}{2}f(u_n)u_n-F(u_n)+\int_{\mathbb R^2}\frac{1}{2}g(v_n)v_n-G(v_n)\right)\nonumber\\ &\ge\limsup_{n\rightarrow\infty}\int_{\mathbb R^2}\frac{1}{2}f(u_n)u_n-F(u_n)+\liminf_{n\rightarrow\infty}\int_{\mathbb R^2}\frac{1}{2}g(v_n)v_n-G(v_n)\nonumber\\ &\ge\liminf_{n\rightarrow\infty}\int_{\mathbb R^2}\frac{1}{2}f(u_n)u_n-F(u_n)+\liminf_{n\rightarrow\infty}\int_{\mathbb R^2}\frac{1}{2}g(v_n)v_n-G(v_n)\\ &\ge\int_{\mathbb R^2}\frac{1}{2}f(u_0)u_0-F(u_0)+\int_{\mathbb R^2}\frac{1}{2}g(v_0)v_0-G(v_0)\nonumber\\ &=\Phi(z_0)-\frac{1}{2}\langle \Phi'(z_0),z_0\rangle=\Phi(z_0).\nonumber \end{align}}
On the other hand, since $z_0\not\equiv 0$ and $\Phi'(z_0)=0$ one has $\Phi(z_0)\ge c_\ast$; together with \re{fatou} this gives $\Phi(z_0)=c_\ast$. Therefore, $z_0$ is a ground state solution of \eqref{q11}, namely, $z_0\in \mathcal{S}$. \vskip0.1in \noindent Next we prove that $z_n\rightarrow z_0$ in $E$. By \re{fatou} and $\Phi(z_0)=c_\ast$ we have \begin{equation}\label{ff} \lim_{n\rightarrow\infty}\int_{\mathbb R^2}\frac{1}{2}f(u_n)u_n-F(u_n)=\int_{\mathbb R^2}\frac{1}{2}f(u_0)u_0-F(u_0) \end{equation} and \begin{equation}\label{gg} \lim_{n\rightarrow\infty}\int_{\mathbb R^2}\frac{1}{2}g(v_n)v_n-G(v_n)=\int_{\mathbb R^2}\frac{1}{2}g(v_0)v_0-G(v_0). \end{equation} By $(H_3)$ we get $$ 0\le\frac{\theta-2}{2}F(u_n)\le \frac{1}{2}f(u_n)u_n-F(u_n),\quad 0\le\frac{\theta-2}{2}G(v_n)\le \frac{1}{2}g(v_n)v_n-G(v_n), \quad n\geq 1 $$ and the Lebesgue dominated convergence theorem together with \re{ff} and \re{gg} yields \begin{equation}\label{fg1} \lim_{n\rightarrow\infty}\int_{\mathbb R^2}F(u_n)=\int_{\mathbb R^2}F(u_0),\quad \lim_{n\rightarrow\infty}\int_{\mathbb R^2}G(v_n)=\int_{\mathbb R^2}G(v_0). \end{equation} Then, by \re{ff} and \re{gg} one has \begin{equation}\label{fg2} \lim_{n\rightarrow\infty}\int_{\mathbb R^2}f(u_n)u_n=\int_{\mathbb R^2}f(u_0)u_0,\quad \lim_{n\rightarrow\infty}\int_{\mathbb R^2}g(v_n)v_n=\int_{\mathbb R^2}g(v_0)v_0. \end{equation} Since $z_n,z_0\in \mathcal{S}$, we have $$ \int_{\mathbb R^2}\nabla u_n\nabla v_n+V_0u_nv_n=c_\ast+\int_{\mathbb R^2}F(u_n)+G(v_n), $$ $$ \int_{\mathbb R^2}\nabla u_0\nabla v_0+V_0u_0v_0=c_\ast+\int_{\mathbb R^2}F(u_0)+G(v_0). $$ Thanks to \re{fg1}, \begin{equation}\label{fg3} \lim_{n\rightarrow\infty}\int_{\mathbb R^2}\nabla u_n\nabla v_n+V_0u_nv_n=\int_{\mathbb R^2}\nabla u_0\nabla v_0+V_0u_0v_0. \end{equation} By $\langle\Phi'(u_n,v_n),(u_n,u_n)\rangle=0$ and \re{fg2}-\re{fg3} we obtain \begin{equation}\label{fg4}
\int_{\mathbb R^2}|\nabla u_n|^2+V_0u_n^2=\int_{\mathbb R^2}f(u_0)u_0+g(v_n)u_n-\int_{\mathbb R^2}\nabla u_0\nabla v_0+V_0u_0v_0+o_n(1). \end{equation} At the same time from $\langle\Phi'(u_n,v_n),(u_n,-u_n)\rangle=0$ and $\langle\Phi'(u_0,v_0),(u_0,-u_0)\rangle=0$, we have \begin{equation}\label{fg5} \int_{\mathbb R^2}f(u_n)u_n=\int_{\mathbb R^2}g(v_n)u_n,\quad \int_{\mathbb R^2}f(u_0)u_0=\int_{\mathbb R^2}g(v_0)u_0. \end{equation} This implies by \re{fg2} that $\lim_{n\rightarrow\infty}\int_{\mathbb R^2}g(v_n)u_n=\int_{\mathbb R^2}g(v_0)u_0$. As a consequence, by \re{fg4} we obtain $$
\lim_{n\rightarrow\infty}\int_{\mathbb R^2}|\nabla u_n|^2+V_0u_n^2=\int_{\mathbb R^2}f(u_0)u_0+g(v_0)u_0-\int_{\mathbb R^2}\nabla u_0\nabla v_0+V_0u_0v_0. $$ Recalling that $\langle\Phi'(u_0,v_0),(u_0,u_0)\rangle=0$, namely
$$\int_{\mathbb R^2}|\nabla u_0|^2+V_0u_0^2=\int_{\mathbb R^2}f(u_0)u_0+g(v_0)u_0-\int_{\mathbb R^2}\nabla u_0\nabla v_0+V_0u_0v_0,$$
we deduce $$\lim_{n\rightarrow\infty}\int_{\mathbb R^2}|\nabla u_n|^2+V_0u_n^2=\int_{\mathbb R^2}|\nabla u_0|^2+V_0u_0^2$$ and hence $u_n\rightarrow u_0$ in $H^1(\mathbb R^2)$. Similarly, $v_n\rightarrow v_0$ in $H^1(\mathbb R^2)$. \end{proof}
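The last step uses the standard Hilbert-space fact that weak convergence together with convergence of norms implies strong convergence; with the (equivalent) norm $\|u\|^2=\int_{\mathbb R^2}(|\nabla u|^2+V_0u^2)\,\mathrm{d} x$, a sketch of this step:

```latex
\|u_n-u_0\|^2
 =\|u_n\|^2-2\int_{\mathbb R^2}\big(\nabla u_n\cdot\nabla u_0+V_0u_nu_0\big)\,\mathrm{d} x+\|u_0\|^2
 \longrightarrow\|u_0\|^2-2\|u_0\|^2+\|u_0\|^2=0,
```

since $u_n\rightharpoonup u_0$ makes the middle term converge to $\|u_0\|^2$, while the first term converges to $\|u_0\|^2$ by the limit of the norms established above.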
\noindent Next we prove $(i), (iii)$ of Theorem \ref{Th2} through the following three steps: \begin{itemize} \item In Proposition \ref{bo1} we prove regularity, namely for any fixed $z=(u,v)\in \mathcal{S}$ we prove that $u,v\in L^{\infty}(\mathbb R^2)\cap C_{loc}^{1,\gamma}(\mathbb R^2)$ for some $\gamma\in(0,1)$; \item In Proposition \ref{bo2} we prove that for any $\{z_n\}\subset \mathcal{S}$, $z_n=(u_n,v_n)$, for which there exists $y_n\in \mathbb{R}^2$ such that $z_n(\cdot +y_n)\to z_0\in \mathcal{S}$, one has
$$\sup_{n\ge1}(\|u_n\|_\infty+\|v_n\|_\infty)<\infty;$$ \item Finally, in Proposition \ref{pro_apriori} we prove the following a priori estimates
$$0<\inf_{z=(u,v)\in \mathcal{S}}\min\{\|u\|_\infty,\|v\|_\infty\}< \sup_{z=(u,v)\in \mathcal{S}}(\|u\|_\infty+\|v\|_\infty)<\infty.$$ \end{itemize} \begin{proposition}\label{bo1} Let $(u,v)\in \mathcal{S}$, then $u,v\in L^{\infty}(\mathbb R^2)\cap C_{loc}^{1,\gamma}(\mathbb R^2)$ for some $\gamma\in(0,1)$. \end{proposition} \begin{proof} For any $r>0$, let $B_1=B_r(0), B_2=B_{2r}(0)$. Noting that $u$ is a weak solution of the following problem \begin{equation}\label{v0} -\Delta U+V_0U=g(v)\ \mbox{in}\ B_2,\ U-u\in H_0^1(B_2), \end{equation} by the Pohozaev-Trudinger-Moser inequality one has $g(v)\in L^p(B_2)$ for all $p\ge2$. By the Calderon-Zygmund inequality, see e.g. \cite[Theorem 9.9]{GT}, one has $u\in W^{2,p}(B_2)$. It follows from classical interior $L^p$-estimates that \begin{equation}\label{v1}
\|u\|_{W^{2,p}(B_1)}\le C\left(\|g(v)\|_{L^p(B_2)}+\|u\|_{L^p(B_2)}\right), \end{equation} where $C$ only depends on $r,p$. Meanwhile, by the Sobolev embedding theorem, if $p>2$ we get that $u\in C^{1,\gamma}(\overline{B_1})$ for some $\gamma\in(0,1)$ and there exists $c$ (independent of $u$) such that
\begin{equation}\label{v2}\|u\|_{C^{1,\gamma}(\overline{B_1})}\le c\|u\|_{W^{2,p}(B_1)}. \end{equation}
\noindent Next we prove that $u$ vanishes at infinity, namely that for any $\delta>0$, there exists $R>0$ such that $|u(x)|\le \delta$ for all $|x|\ge R$. Indeed, otherwise there exists $\{x_j\}\subset\mathbb R^2$ with $|x_j|\rightarrow\infty$ as $j\rightarrow\infty$ and $\liminf_{j\rightarrow\infty}|u(x_j)|>0$. Let $u_j(x)=u(x+x_j)$ and $v_j(x)=v(x+x_j)$, then $\|u_j\|=\|u\|$ and \begin{equation}\label{v3} -\Delta u_j+V_0u_j=g(v_j),\quad u_j\in H^1(\mathbb R^2). \end{equation} We may assume $u_j \rightharpoonup u_0$ weakly in $H^1(\mathbb R^2)$; we claim that $u_0\not\equiv 0$. In fact, noting that $u_j$ is a weak solution of \eqref{v0} with $g(v)$ replaced by $g(v_j)$, it follows from \eqref{v1} and \eqref{v2} that, up to a subsequence, $u_j\rightarrow u_0$ uniformly in $\overline{B_1}$. Hence, $$ |u_0(0)|=\lim_{j\rightarrow\infty}|u_j(0)|=\lim_{j\rightarrow\infty}|u(x_j)|\not=0, $$ which implies that $u_0\not\equiv 0$. On the other hand, for any fixed $R>0$ and $j$ large enough, we have \begin{align*} \int_{\mathbb R^2}u^2 \, \mathrm{d} x &\ge \int_{B_R(0)}u^2 \, \mathrm{d} x +\int_{B_R(x_j)}u^2 \, \mathrm{d} x\\ &=\int_{B_R(0)}u^2 \, \mathrm{d} x+\int_{B_R(0)}u_j^2\, \mathrm{d} x\\ &=\int_{B_R(0)}u^2\, \mathrm{d} x +\int_{B_R(0)}u_0^2\, \mathrm{d} x +o_j(1), \end{align*}
where $o_j(1)\rightarrow 0$, as $j\rightarrow\infty$. Since $R$ is arbitrary, we get $u_0\equiv 0$, which is a contradiction. Thus, $u(x)\rightarrow 0$, as $|x|\rightarrow\infty$. Moreover, since $u\in C(\overline{B_r})$ for any $r>0$, we have $u\in L^\infty(\mathbb R^2)$. Similarly, $v\in L^\infty(\mathbb R^2)$. \end{proof} \begin{proposition}\label{bo2} Let $\{z_n\}\subset \mathcal{S}$, $z_n=(u_n,v_n)$, be such that $\bar{z}_n=z_n(\cdot+y_n)\rightarrow z_0=(u_0,v_0)\in \mathcal{S}$ in $E$, then
$$\sup_{n\ge1}(\|u_n\|_\infty+\|v_n\|_\infty)<\infty.$$ \end{proposition} \begin{proof} Let $\bar{u}_n=u_n(\cdot+y_n), \bar{v}_n=v_n(\cdot+y_n)$. As above, $\bar{u}_n$ is a weak solution of the following problem \begin{equation}\label{vb0} -\Delta U+V_0U=g(\bar{v}_n)\ \mbox{in}\ B_2,\ U-\bar{u}_n\in H_0^1(B_2). \end{equation} Moreover, for any $p\ge2$ we have \begin{equation}\label{vb1}
\|\bar{u}_n\|_{W^{2,p}(B_1)}\le C\left(\|g(\bar{v}_n)\|_{L^p(B_2)}+\|\bar{u}_n\|_{L^p(B_2)}\right), \end{equation} where $C$ only depends on $r,p$. By the Sobolev embedding theorem, if $p>2$ we get $\bar{u}_n\in C^{1,\gamma}(\overline{B_1})$ for some $\gamma\in(0,1)$ and there exists $c$ (independent of $n$) such that
\begin{equation}\label{vb2}\|\bar{u}_n\|_{C^{1,\gamma}(\overline{B_1})}\le c\|\bar{u}_n\|_{W^{2,p}(B_1)}. \end{equation} Then by \re{vb1}-\re{vb2}, we get
\begin{equation}\label{vb3}\|\bar{u}_n\|_{C^{1,\gamma}(\overline{B_1})}\le c\left(\|g(\bar{v}_n)\|_{L^p(\mathbb R^2)}+\|\bar{u}_n\|_{L^p(\mathbb R^2)}\right). \end{equation}
\noindent By $(H1)$, for $\beta>\alpha_0$ and some $C>0$, we have $|g(t)|\le C\big(|t|+e^{\beta t^2}-1\big)$ for all $t\in\mathbb R$. Recalling that $\bar{v}_n\rightarrow v_0$ in $H^1(\mathbb R^2)$, we next prove that \begin{equation}\label{yf1}
\lim_{n\rightarrow\infty}\int_{\mathbb R^2}|\exp(p\beta \bar{v}_n^2)-\exp(p\beta v_0^2)| \, \mathrm{d} x=0. \end{equation} In fact, since $v_0\in L^{\infty}(\mathbb R^2)$, there exists $c>0$ such that \begin{align*}
&\int_{\mathbb R^2}|e^{(p\beta \bar{v}_n^2)}-e^{(p\beta v_0^2)}| \, \mathrm{d} x \\
&\le c\int_{\mathbb R^2}e^{(2p\beta |\bar{v}_n-v_0|^2)}|\bar{v}_n^2-v_0^2|\, \mathrm{d} x \\
&= c\int_{\mathbb R^2}[e^{(2p\beta |\bar{v}_n-v_0|^2)}-1]|\bar{v}_n^2-v_0^2| \, \mathrm{d} x +o_n(1)\\
&\le c\left(\int_{\mathbb R^2}\left[e^{(4p\beta |\bar{v}_n-v_0|^2)}-1\right]\, \mathrm{d} x \right)^{{1}/{2}}\left(\int_{\mathbb R^2}\left|\bar{v}_n^2-v_0^2\right|^2\, \mathrm{d} x \right)^{{1}/{2}}+o_n(1), \end{align*}
where $o_n(1)\rightarrow 0$, as $n\rightarrow\infty$. From $\|\bar{v}_n-v_0\|_{H^1}\rg0$ as $n\rightarrow\infty$ and the Pohozaev-Trudinger-Moser inequality, there exists $C$ such that $$\int_{\mathbb R^2}\left[e^{(4p\beta |\bar{v}_n-v_0|^2)}-1 \right]\, \mathrm{d} x \le C$$ for $n$ large enough; thus \eqref{yf1} follows.
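The first inequality in the chain above comes from the elementary mean value bound $|e^a-e^b|\le e^{\max\{a,b\}}|a-b|$; a sketch of the pointwise step, using $\max\{\bar v_n^2,v_0^2\}\le 2|\bar v_n-v_0|^2+2v_0^2$ and $v_0\in L^\infty(\mathbb R^2)$:

```latex
\big|e^{p\beta\bar v_n^2}-e^{p\beta v_0^2}\big|
 \le p\beta\,e^{p\beta\max\{\bar v_n^2,\,v_0^2\}}\,\big|\bar v_n^2-v_0^2\big|
 \le p\beta\,e^{2p\beta\|v_0\|_\infty^2}\,e^{2p\beta|\bar v_n-v_0|^2}\,\big|\bar v_n^2-v_0^2\big|,
```

so that the constant $c$ above absorbs $p\beta\,e^{2p\beta\|v_0\|_\infty^2}$.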
Recalling that $\bar{z}_n\rightarrow z_0$ in $E$, by \re{yf1} we get $\|g(\bar{v}_n)\|_{L^p(\mathbb R^2)}\rightarrow\|g(v_0)\|_{L^p(\mathbb R^2)}$ as $n\rightarrow\infty$. Hence, by \re{vb3}, \begin{equation}\label{vb4}
\sup_{n\ge1}\|\bar{u}_n\|_{C^{1,\gamma}(\overline{B_1})}<\infty. \end{equation} \vskip0.1in
\noindent Next we prove that $\bar{u}_n(x)\rg0$, uniformly as $|x|\rightarrow\infty$. It is enough to prove that for any $\delta>0$, there exists $R>0$ such that $|\bar{u}_n(x)|\le \delta,\ \forall n\ge1, |x|\ge R$. Suppose this does not occur, so that there exists
$\{x_n\}\subset\mathbb R^2$ with $|x_n|\rightarrow\infty$, as $n\rightarrow\infty$ and $\liminf_{n\rightarrow\infty}|\bar{u}_n(x_n)|>0$. Let $\tilde{u}_n(x)=\bar{u}_n(x+x_n)$ and $\tilde{v}_n(x)=\bar{v}_n(x+x_n)$, then \begin{equation}\label{vb5} -\Delta \tilde{u}_n+V_0\tilde{u}_n=g(\tilde{v}_n),\quad \tilde{u}_n\in H^1(\mathbb R^2). \end{equation} We may assume $\tilde{u}_n \rightharpoonup \tilde{u}_0$ weakly in $H^1(\mathbb R^2)$ and we claim $\tilde{u}_0\not\equiv 0$. For any $n\ge1$, $\tilde{u}_n$ is a weak solution to the following problem \begin{equation}\label{vbb0} -\Delta U+V_0U=g(\tilde{v}_n)\ \mbox{in}\ B_2,\ U-\tilde{u}_n\in H_0^1(B_2). \end{equation} Moreover, \begin{equation}\label{vbb1}
\|\tilde{u}_n\|_{W^{2,4}(B_1)}\le C\left(\|g(\tilde{v}_n)\|_{L^4(B_2)}+\|\tilde{u}_n\|_{L^4(B_2)}\right) \end{equation} where $C$ depends on $r$ only. At the same time, by the Sobolev embedding theorem, we get $\tilde{u}_n\in C^{1,\gamma}(\overline{B_1})$ for some $\gamma\in(0,1)$ and there exists $c$ (independent of $n$) such that
\begin{equation}\label{vbb2}\|\tilde{u}_n\|_{C^{1,\gamma}(\overline{B_1})}\le c\|\tilde{u}_n\|_{W^{2,4}(B_1)}. \end{equation} Then by \re{vbb1}-\re{vbb2}, we get
$$\|\tilde{u}_n\|_{C^{1,\gamma}(\overline{B_1})}\le c\left(\|g(\tilde{v}_n)\|_{L^4(\mathbb R^2)}+\|\tilde{u}_n\|_{L^4(\mathbb R^2)}\right). $$
Then, as for \re{vb4}, $\sup_{n\ge1}\|\tilde{u}_n\|_{C^{1,\gamma}(\overline{B_1})}<\infty$. Hence, up to a subsequence, $\tilde{u}_n\rightarrow \tilde{u}_0$ uniformly in $\overline{B_1}$. Thus, $$ |\tilde{u}_0(0)|=\lim_{n\rightarrow\infty}|\tilde{u}_n(0)|=\lim_{n\rightarrow\infty}|\bar{u}_n(x_n)|\not=0, $$ which implies that $\tilde{u}_0\not\equiv 0$. On the other hand, for any fixed $R>0$ and $n$ large enough, we have \begin{align*} &o_n(1)+\int_{\mathbb R^2}u_0^2 \, \mathrm{d} x=\int_{\mathbb R^2}\bar{u}_n^2 \, \mathrm{d} x\\ &\ge \int_{B_R(0)}\bar{u}_n^2 \, \mathrm{d} x +\int_{B_R(x_n)}\bar{u}_n^2 \, \mathrm{d} x\\ &=\int_{B_R(0)}\bar{u}_n^2 \, \mathrm{d} x+\int_{B_R(0)}\tilde{u}_n^2\, \mathrm{d} x\\ &=\int_{B_R(0)}u_0^2\, \mathrm{d} x +\int_{B_R(0)}\tilde{u}_0^2\, \mathrm{d} x +o_n(1), \end{align*}
where $o_n(1)\rightarrow 0$, as $n\rightarrow\infty$ and we have used the fact that $\bar{u}_n=u_n(\cdot+y_n)\rightarrow u_0$ in $H^1(\mathbb R^2)$. Since $R$ is arbitrary, we get $\tilde{u}_0\equiv 0$, which is a contradiction. Thus, $\bar{u}_n(x)\rightarrow 0$, uniformly as $|x|\rightarrow\infty$, which immediately implies by \re{vb4} that
$\sup_{n\ge1}\|u_n\|_\infty=\sup_{n\ge1}\|\bar{u}_n\|_\infty<\infty$. Similarly, $\sup_{n\ge1}\|v_n\|_\infty<\infty$. \end{proof} \begin{proposition}\label{pro_apriori} The following a priori estimates hold \begin{equation}\label{aprioribound}
0<\inf_{z=(u,v)\in \mathcal{S}}\min\{\|u\|_\infty,\|v\|_\infty\}<\sup_{z=(u,v)\in \mathcal{S}}(\|u\|_\infty+\|v\|_\infty)<\infty. \end{equation} \end{proposition} \begin{proof} The upper bound is a consequence of Proposition \ref{bo2} and the fact that $\mathcal{S}$ is compact (Proposition \ref{con}).
\noindent In order to prove the lower bound we argue by contradiction and thus assume $$\inf_{z=(u,v)\in \mathcal{S}}\min\{\|u\|_\infty,\|v\|_\infty\}=0.$$
Then, there exists $\{z_n\}\subset \mathcal{S}$, $z_n=(u_n,v_n)$, such that, without loss of generality, $\|v_n\|_\infty\rg0$, as $n\rightarrow\infty$. From $$
\int_{\mathbb R^2}|\nabla u_n|^2+V_0u_n^2=\int_{\mathbb R^2}g(v_n)u_n, $$ by $(H1)$ we have $$
\int_{\mathbb R^2}|\nabla u_n|^2+V_0u_n^2\leq o_n(1)\left(\int_{\mathbb R^2}v_n^2\right)^{1/2}\left(\int_{\mathbb R^2}u_n^2\right)^{1/2} $$ and hence $u_n\rg0$ in $H^1(\mathbb R^2)$. From $$
\int_{\mathbb R^2}|\nabla v_n|^2+V_0v_n^2=\int_{\mathbb R^2}f(u_n)v_n\le\left(\int_{\mathbb R^2}v_n^2\right)^{1/2}\left(\int_{\mathbb R^2}[f(u_n)]^2\right)^{1/2}, $$ together with the fact that $u_n\rg0$ in $H^1(\mathbb R^2)$, which implies $\int_{\mathbb R^2}[f(u_n)]^2\rightarrow 0$, we also get $v_n\rg0$ in $H^1(\mathbb R^2)$. Finally, as $(u_n,v_n)\in \mathcal{S}$, we obtain a contradiction from the following: $$ 0<c_\ast=\lim_{n\rightarrow\infty}\left(\int_{\mathbb R^2}\nabla u_n\nabla v_n+V_0u_nv_n-\int_{\mathbb R^2}F(u_n)+G(v_n)\right)=0. $$ \end{proof} In order to complete the proof of Theorem \ref{Th2} it remains to show that ground states vanish at infinity and that they enjoy a suitable Pohozaev-type identity in the whole plane; we prove these results in Propositions \ref{vanishing_R} and \ref{stanislav} of the next section.
\subsection{Vanishing and Pohozaev-type identity}
\begin{proposition}{\rm(Uniform vanishing)}\label{vanishing_R}
Let $x_z\in\mathbb R^2$ be a maximum point of $|u(x)|+|v(x)|$, where $z=(u,v)\in \mathcal{S}$. Then $u(x+x_z)\to 0$ and $v(x+x_z)\rightarrow 0$, as $|x|\rightarrow\infty$, uniformly for $(u,v)\in \mathcal{S}$. \end{proposition} \noindent In order to prove Proposition \ref{vanishing_R} we need the following technical lemma:
\begin{lemma}\label{boo4}
For any $\{z_n\}\subset \mathcal{S}, z_n=(u_n,v_n)$, up to a subsequence, $z_n(\cdot+x_n)\rightarrow z_1$ in $E$, as $n\rightarrow\infty$, where $\{x_n\}\subset\mathbb R^2$ is such that $|u_n(x_n)|+|v_n(x_n)|=\max_{x\in\mathbb R^2}(|u_n(x)|+|v_n(x)|).$ \end{lemma} \begin{proof} We first claim that there exist $\mu>0$ and $R_1>0$ such that \begin{equation}\label{nvv} \lim_{n\rightarrow\infty}\int_{B_{R_1}(x_n)}(u_n^2+v_n^2)\, \mathrm{d} x\ge\mu. \end{equation} We argue by contradiction: if not, then for some $\{z_n\}\subset\mathcal{S}$ and every $R>0$ we get $$ \lim_{n\rightarrow\infty}\int_{B_R(x_n)}(u_n^2+v_n^2)\, \mathrm{d} x=0. $$ Let $\hat{u}_n=u_n(\cdot+x_n)$ and $\hat{v}_n=v_n(\cdot+x_n)$, then $\hat{u}_n,\hat{v}_n\rg0$ in $L_{loc}^2(\mathbb R^2)$, as $n\rightarrow\infty$. As above, $\hat{u}_n$ is a weak solution of the following problem $$ -\Delta U+V_0U=g(\hat{v}_n)\ \mbox{in}\ B_2,\ U-\hat{u}_n\in H_0^1(B_2). $$ By standard elliptic regularity we get $\hat{u}_n\in C^{1,\gamma}(\overline{B_1})$ for some $\gamma\in(0,1)$ and there exists $c$ (independent of $n$) such that for $p>2$,
\begin{equation}\label{vbb3}\|\hat{u}_n\|_{C^{1,\gamma}(\overline{B_1})}\le c\left(\|g(\hat{v}_n)\|_{L^p(\mathbb R^2)}+\|\hat{u}_n\|_{L^p(\mathbb R^2)}\right).
\end{equation} By Proposition \ref{con}, up to a subsequence, $\bar{z}_n:=z_n(\cdot+y_n)\rightarrow z_0$ in $E$ for a suitable $\{y_n\}\subset\mathbb R^2$; then, by \re{yf1} and translation invariance, $\|g(\hat{v}_n)\|_{L^p(\mathbb R^2)}=\|g(\bar{v}_n)\|_{L^p(\mathbb R^2)}\rightarrow\|g(v_0)\|_{L^p(\mathbb R^2)}$, as $n\rightarrow\infty$. Then we have \begin{equation}\label{vbb4}
\sup_{n\ge1}\|\hat{u}_n\|_{C^{1,\gamma}(\overline{B_1})}<\infty, \end{equation} which implies by $\hat{u}_n\rg0$ in $L^2(B_1)$ that $\hat{u}_n\rg0$ uniformly in $B_1$. In particular, $\hat{u}_n(0)=u_n(x_n)\rg0$. Similarly, we have $\hat{v}_n(0)=v_n(x_n)\rg0$. Finally we obtain $$
\lim_{n\rightarrow\infty}\max_{x\in\mathbb R^2}(|u_n(x)|+|v_n(x)|)=\lim_{n\rightarrow\infty}(|u_n(x_n)|+|v_n(x_n)|)=0, $$ which implies $$
\lim_{n\rightarrow\infty}\min\{\|u_n\|_\infty,\|v_n\|_\infty\}=0, $$ contradicting the lower bound in \re{aprioribound}.
Now, by \re{nvv}, $\lim_{n\rightarrow\infty}\int_{B_{R_1}(0)}(\hat{u}_n^2+\hat{v}_n^2)\, \mathrm{d} x\ge\mu$, which, combined with the compactness of the local embedding $H^1(\mathbb R^2)\hookrightarrow L_{loc}^2(\mathbb R^2)$, yields, up to a subsequence, $z_n(\cdot+x_n)=(\hat{u}_n,\hat{v}_n)\rightharpoonup z_1\not={\bf 0}$ in $E$ and $z_n(\cdot+x_n)\to z_1$ a.e. in $\mathbb R^2$, as $n\rightarrow\infty$. Then, arguing as in Propositions \ref{o11}-\ref{con}, we get $z_1\in \mathcal{S}$ and $z_n(\cdot+x_n)\rightarrow z_1$ in $E$, as $n\rightarrow\infty$, and this completes the proof. \end{proof} \noindent{\it Proof of Proposition \ref{vanishing_R}.}
Let us prove that for any $\delta>0$, there exists $R>0$ such that $|u(x+x_z)|+|v(x+x_z)|\le \delta$ for $|x|\ge R$ and any $z=(u,v)\in \mathcal{S}$, where $x_z\in\mathbb R^2$ is a maximum point of $|u(x)|+|v(x)|$. If not, there exist $z_n=(u_n,v_n)\in \mathcal{S}$ and $\{x_n\}\subset\mathbb R^2$ such that $|x_n|\rightarrow\infty$ as $n\rightarrow\infty$ and $$\liminf_{n\rightarrow\infty}(|u_n(x_n+x_{z_n})|+|v_n(x_n+x_{z_n})|)>0,$$ where $x_{z_n}\in\mathbb R^2$ is a maximum point of $|u_n(x)|+|v_n(x)|$. Without loss of generality, we may assume $\liminf_{n\rightarrow\infty}|u_n(x_n+x_{z_n})|>0$. Let $\tilde{u}_n(x)=u_n(x+x_n+x_{z_n})$ and $\tilde{v}_n(x)=v_n(x+x_n+x_{z_n})$. We may assume $\tilde{u}_n \rightharpoonup \tilde{u}_0$ weakly in $H^1(\mathbb R^2)$; we claim that $\tilde{u}_0\not\equiv 0$. Indeed, by Lemma \ref{boo4}, up to a subsequence, there exists $z\in\mathcal{S}$ such that $(u_n(\cdot+x_{z_n}),v_n(\cdot+x_{z_n}))\rightarrow z$ strongly in $E$. Then, as in the proof of the above lemma, by elliptic estimates, up to a subsequence, for some $\tilde{u}_0\in H^1(\mathbb R^2)$ and $\gamma\in(0,1)$, $\tilde{u}_n\rightarrow \tilde{u}_0$ in $C_{loc}^{1,\gamma}(\mathbb R^2)$, as $n\rightarrow\infty$. Hence, $$ |\tilde{u}_0(0)|=\lim_{n\rightarrow\infty}|\tilde{u}_n(0)|=\lim_{n\rightarrow\infty}|u_n(x_n+x_{z_n})|\not=0, $$ which implies that $\tilde{u}_0\not\equiv 0$. On the other hand, proceeding as in Proposition \ref{bo2}, we get $\tilde{u}_0\equiv 0$, which is a contradiction. \qed
\begin{proposition}{\rm(Pohozaev-type identity)}\label{stanislav} For any $z=(u,v)\in \mathcal{S}$, the following Pohozaev-type identity holds true \begin{equation}\label{idpo} \int_{\mathbb R^2}(F(u)+G(v)-V_0uv)\,\mathrm{d} x=0. \end{equation} \end{proposition} \begin{proof} By the proof of Proposition \ref{bo1}, we know that $u,v\in W_{\operatorname{\rm loc}}^{2,p}(\mathbb R^2)$ for any $p\ge2$. Then $\Delta u=V_0u-g(v)$ a.e. in $\mathbb R^2$ and $\Delta v=V_0v-f(u)$ a.e. in $\mathbb R^2$. Following \cite{Pucci, van} we get \begin{align}\label{poha} &\oint_{\partial B_r}\nabla u\nabla v\cdot(x,{\bf n})\,\mathrm{d} s-\oint_{\partial B_r}\left(\sum_{i,j=1}^2 x_j\left(\frac{\partial u}{\partial x_j}\frac{\partial v}{\partial x_i}+\frac{\partial v}{\partial x_j}\frac{\partial u}{\partial x_i}\right),{\bf n}\right)\,\mathrm{d} s\\ &=2\int_{B_r}(V_0uv-F(u)-G(v))\,\mathrm{d} x,\nonumber \end{align}
where $B_r(0):=\{x\in\mathbb R^2:|x|<r\}, r>0$ and $\bf{n}$ is the outward unit normal of $\partial B_r$ at $x$. Since $\nabla u,\nabla v\in L^2(\mathbb R^2)$, by virtue of the coarea formula, there exists a sequence $r_n$ such that $r_n\rightarrow\infty$ and $$
r_n\oint_{\partial B_{r_n}}\left|\frac{\partial u}{\partial x_j}\frac{\partial v}{\partial x_i}\right|\,\mathrm{d} s\rg0,\,\,\hbox{for any}\,\,i,j=1,2. $$ As a consequence, as $n\rightarrow\infty$, $$
\left|\oint_{\partial B_{r_n}}\nabla u\nabla v\cdot(x,{\bf n})\,\mathrm{d} s\right|\le r_n\oint_{\partial B_{r_n}}\left|\nabla u\nabla v\right|\,\mathrm{d} s\rg0 $$ and hence $$ \oint_{\partial B_r}\left(\sum_{i,j=1}^2 x_j\left(\frac{\partial u}{\partial x_j}\frac{\partial v}{\partial x_i}+\frac{\partial v}{\partial x_j}\frac{\partial u}{\partial x_i}\right),{\bf n}\right)\,\mathrm{d} s\rg0. $$ Then, let $r=r_n$ in \re{poha} to get, as $n\rightarrow\infty$, identity \eqref{idpo}. \end{proof}
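For the reader's convenience, the divergence structure behind \re{poha}, in the spirit of \cite{Pucci, van}, can be sketched as follows (a formal computation only; no claim of full rigor is made here):

```latex
% Multiply the first equation by x\cdot\nabla v, the second by x\cdot\nabla u,
% add, and integrate over B_r. The zero-order terms are handled through
\begin{aligned}
x\cdot\nabla F(u)&=f(u)\,x\cdot\nabla u,\qquad
x\cdot\nabla G(v)=g(v)\,x\cdot\nabla v,\\
\operatorname{div}\bigl(x\,\Phi_0\bigr)&=2\,\Phi_0+x\cdot\nabla\Phi_0,
\qquad \Phi_0:=V_0uv-F(u)-G(v),
\end{aligned}
```

while the gradient terms contribute only the two boundary integrals on the left-hand side of \re{poha}: in dimension $N=2$ the volume term $(N-2)\int_{B_r}\nabla u\cdot\nabla v\,\mathrm{d} x$ vanishes.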
\subsection{Sign and symmetry properties}\label{sign_s} This section is devoted to proving Theorem \ref{sign}. To investigate positivity and radial symmetry of ground state solutions to \re{q11}, without loss of generality, throughout this section we assume that $f,g$ are odd functions.
\noindent By Theorem \ref{Th2}, $$\kappa:=\sup\{\|u\|_{\infty},\|v\|_{\infty}: (u,v)\in \mathcal{S}\}<\infty.$$ By $(H1)$ and $(H6)$, there exist small $a_0,b_0\in[0,1)$ and $k_1,k_2>0$ with $$k_1=\max_{a_0<|t|\le\kappa}|f(t)|/|t|^q,\,\, k_2=\max_{b_0<|t|\le\kappa}|g(t)|/|t|^p,$$ such that $f(t)\le t$ for $t\in[0,a_0]$ and $g(t)\le t$ for $t\in[0,b_0]$. Moreover, $f(a_0)=k_1a_0^q$ and $g(b_0)=k_2b_0^p$. In fact, if $\limsup_{t\rg0}|f(t)|/|t|^q<\infty$, we can choose $a_0=0$; otherwise there exists $a_0\in(0,1)$ such that $f(a_0)/a_0^q=\max_{t\in[a_0,\kappa]}f(t)/t^q.$ Let $$ f_k(t)=\left\{ \begin{array}{ll} f(t),\ \ & \mbox{if}\ t\in[0,a_0]\\ \min\{f(t),k_1t^q\},\ \ \ & \mbox{if}\ t\in(a_0,\infty) \end{array} \right. $$
and $f_k(t)=-f_k(-t)$ for $t\le0$ and similarly for $g$. Then, $f_k,g_k\in C(\mathbb R,\mathbb R)$ and $f_k(t)=f(t),g_k(t)=g(t)$ if $|t|\le\kappa$, $0<f_k(t)\le f(t),0<g_k(t)\le g(t)$ for all $t>0$. At the same time, there exists $\beta>0$ such that \begin{equation}\label{gj} \left\{ \begin{array}{ll}
|f_k(t)|\ge\beta|t|^q\text{ and }|g_k(t)|\ge\beta|t|^p,\ \ \mbox{for any}\ t\in\mathbb R\\
|f_k(t)|=|f(t)|\le|t|\,\,\,\mbox{if}\ |t|\le a_0,\ \ |g_k(t)|=|g(t)|\le|t|\,\ \mbox{if}\ |t|\le b_0\\
|f_k(t)|\le k_1|t|^q\,\,\mbox{if}\ |t|\ge a_0,\ \ |g_k(t)|\le k_2|t|^p\,\ \mbox{if}\ |t|\ge b_0. \end{array} \right. \end{equation} Moreover, it is easy to check that $f_k, g_k$ satisfy $(H1)$, $(H4)$ and \begin{equation}\label{am1} 0<2F_k(t)\le f_k(t)t,\,\,0<2G_k(t)\le g_k(t)t,\,\,t\not=0, \end{equation} \begin{equation}\label{am2}
\lim_{|t|\rightarrow\infty}\frac{F_k(t)}{t^2}=\infty,\,\,\lim_{|t|\rightarrow\infty}\frac{G_k(t)}{t^2}=\infty, \end{equation} where $F_k(t)=\int_0^tf_k(\tau)\,\mathrm{d}\tau$ and $G_k(t)=\int_0^tg_k(\tau)\,\mathrm{d}\tau$.
Now consider the truncated problem \begin{equation}\label{qk1} \left\{ \begin{array}{ll} &-\Delta u+V_0u=g_k(v)\\ &-\Delta v+V_0v=f_k(u) \end{array} \right. \end{equation} whose associated energy functional is $$ \Phi_k(z):=\int_{\mathbb R^2}(\nabla u\nabla v+V_0uv)\,\mathrm{d} x-\int_{\mathbb R^2}(F_k(u)+G_k(v))\,\mathrm{d} x,\ \ z=(u,v)\in E. $$ Recall the generalized Nehari manifold $$ \mathcal{N}_k:=\{z\in E\setminus E^-: \langle \Phi_k'(z),z\rangle=0, \langle \Phi_k'(z),\varphi\rangle=0\ \mbox{for all}\ \ \varphi\in E^-\} $$ and the least energy $$ c_\ast^k:=\inf_{z\in\mathcal{N}_k}\Phi_k(z). $$ Since any $(u,v)\in \mathcal{S}$ is also a solution to \re{qk1}, we have $c_\ast^k\le c_\ast$. For $z\in E\setminus E^-$, set $$ \hat{E}(z)=E^-\oplus\mathbb R^+z=E^-\oplus\mathbb R^+z^+. $$ From \cite{DJJ,Szulkin,Weth} we have
\begin{lemma}\label{lk54.1}\ \begin{itemize}
\item [1)] For any $z\in \mathcal{N}_k$, $\Phi_k|_{\hat{E}(z)}$ has a unique maximum point which occurs exactly at $z$;
\item [2)] For any $z\in E\setminus E^-$, the set $\hat{E}(z)$ intersects $\mathcal{N}_k$ at exactly one point $\hat{m}_k(z)$, which is the unique global maximum point of $\Phi_k|_{\hat{E}(z)}$;
\item [3)] $$ c_\ast^k=\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi_k(\omega). $$ \end{itemize} \end{lemma} \noindent From $0\le G_k(t)\le G(t)$ and $0\le F_k(t)\le F(t)$ for any $t\in\mathbb R$, we have $$ c_\ast^k\ge\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi(\omega)=c_\ast, $$ and thus $c_\ast^k=c_\ast>0$.
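In other words, the identity $c_\ast^k=c_\ast$ is obtained from the two opposite inequalities established above (recall that $f_k=f$ and $g_k=g$ on $[-\kappa,\kappa]$, so that $\Phi_k=\Phi$ on $\mathcal{S}$); compactly:

```latex
\begin{aligned}
c_\ast^k&\le\Phi_k(z)=\Phi(z)=c_\ast
&&\text{for any }z\in\mathcal{S}\subset\mathcal{N}_k,\\
c_\ast^k&=\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi_k(\omega)
\ge\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi(\omega)=c_\ast
&&\text{since }F_k\le F,\ G_k\le G.
\end{aligned}
```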
\noindent Next define $$ \hat{m}_k: z\in E\setminus E^-\mapsto\hat{m}_k(z)\in\hat{E}(z)\cap\mathcal{N}_k. $$
There exists $\delta>0$ such that $\|z^+\|\ge\delta$ for all $z\in\mathcal{N}_k$; in particular one has $$
\|\hat{m}_k(z)^+\|\ge\delta\ \ \ \mbox{for all}\ \ z\in E\setminus E^-. $$ Moreover, for each compact subset $\mathcal{W}\subset E\setminus E^-$, there exists a constant $C_{\mathcal{W}}>0$ such that $$
\|\hat{m}_k(z)\|\le C_{\mathcal{W}}\ \ \ \mbox{for all}\ \ z\in\mathcal{W}. $$ \noindent Define $$
S^+:=\{z\in E^+: \|z\|=1\}, $$ then $S^+$ is a $C^1$-submanifold of $E^+$ and the tangent space of $S^+$ at $z\in S^+$ is given by $$ T_z(S^+)=\{\omega\in E^+: (\omega,z)=0\}. $$ Let $$
m_k:=\hat{m}_k|_{S^+}: S^+\longrightarrow\mathcal{N}_k, $$ then $\hat{m}_k$ is continuous and $m_k$ is a homeomorphism between $S^+$ and $\mathcal{N}_k$. Define $$ \Psi_k: S^+\longrightarrow\mathbb R, \quad \Psi_k(z):=\Phi_k(m_k(z)), \ z\in S^+; $$ then, by \cite[Corollary 4.3]{Weth} we have \begin{proposition}\label{pk5.5}\noindent \begin{itemize}
\item [1)] $\Psi_k\in C^1(S^+,\mathbb R)$ and $$
\langle\Psi_k'(z),\omega\rangle=\|m_k(z)^+\|\langle\Phi_k'(m_k(z)),\omega\rangle,\ \ \mbox{for all}\ \ \omega\in T_z(S^+); $$
\item [2)] If $\{\omega_n\}\subset S^+$ is a Palais-Smale sequence for $\Psi_k$, then $\{m_k(\omega_n)\}\subset \mathcal{N}_k$ is a Palais-Smale sequence for $\Phi_k$. Namely, if $\Psi_k(\omega_n)\rightarrow d$ for some $d>0$ and $\|\Psi_k'(\omega_n)\|_\ast\rightarrow 0$ as $n\rightarrow\infty$, then $\Phi_k(m_k(\omega_n))\rightarrow d$ and $\|\Phi_k'(m_k(\omega_n))\|\rg0$ as $n\rightarrow\infty$, where
$$
\|\Psi_k'(\omega_n)\|_\ast=\sup_{\stackrel{\phi\in T_{\omega_n}(S^+)}{\|\phi\|=1}}\langle\Psi_k'(\omega_n),\phi\rangle\ \ \mbox{and}\ \ \|\Phi_k'(m_k(\omega_n))\|=\sup_{\stackrel{\phi\in E}{\|\phi\|=1}}\langle\Phi_k'(m_k(\omega_n)),\phi\rangle;
$$
\item [3)] $\omega\in S^+$ is a critical point of $\Psi_k$ if and only if $m_k(\omega)\in \mathcal{N}_k$ is a critical point of $\Phi_k$;
\item [4)] $\inf_{S^+}\Psi_k=\inf_{\mathcal{N}_k}\Phi_k$. \end{itemize} \end{proposition} \noindent It follows from the Ekeland Variational Principle (see \cite[Theorem 3.1]{E}) that there exists $\{z_n^k\}\subset\mathcal{N}_k$ such that \begin{equation}\label{pkss4} \Phi_k(z_n^k)\rightarrow c_\ast>0 \ \ \mbox{and}\ \ \Phi_k'(z_n^k)\rightarrow 0,\ \ \mbox{as}\ \ n\rightarrow\infty. \end{equation} Next we prove that $\{z_n^k\}$ is uniformly bounded in $E$. Precisely we have the following \begin{lemma}\label{o14}
There exists $C>0$ such that $\|z_n^k\|=\|(u_n^k,v_n^k)\|\le C$, for all $n\in\mathbb{N}$. \end{lemma}
\begin{proof} Let $z_n^k=z_n^++z_n^-$, where $z_n^+\in E^+,\,\, z_n^-\in E^-$. Noting that $z_n^k\in\mathcal{N}_k$, we have $\|z_n^+\|^2\ge\|z_n^k\|^2/2$ for all $n\in\mathbb{N}$. Let $w_n^k=w_n^++w_n^-=z_n^k/\|z_n^k\|$, where $w_n^+\in E^+$ with $Rw_n^+\in\hat{E}(z_n^k)$ for $R>0$, $w_n^-\in E^-$ and $w_n^+=(\tilde{w}_n,\tilde{w}_n)$; then $\|w_n^+\|^2\ge1/2$. By Lemma \ref{lk54.1}, for some $R>2\sqrt{c_\ast}$, we have \begin{align*} c_\ast+o_n(1)&=\Phi_k(z_n^k)=\max_{w\in\hat{E}(z_n^k)}\Phi_k(w)\ge\Phi_k(R w_n^+)\\ &\ge R^2/4-\int_{\mathbb R^2}(F_k(R\tilde{w}_n)+G_k(R\tilde{w}_n))\,\mathrm{d} x, \end{align*} which implies $$ \liminf_{n\rightarrow\infty}\int_{\mathbb R^2}(F_k(R\tilde{w}_n)+G_k(R\tilde{w}_n))\,\mathrm{d} x>0. $$
By Lions' Lemma, up to translations, $\tilde{w}_n\rightharpoonup w\not=0$ weakly in $H^1(\mathbb R^2)$ as $n\rightarrow\infty$. Assume that $w_n^k\rightharpoonup (u,v)$ weakly in $H^1(\mathbb R^2)\times H^1(\mathbb R^2)$ as $n\rightarrow\infty$; then $u+v=2w$. If $\|z_n^k\|\rightarrow\infty$ as $n\rightarrow\infty$, then $|u_n^k(x)|\rightarrow\infty$ if $u(x)\not=0$ as $n\rightarrow\infty$ and, by Fatou's Lemma and \re{am2}, $$
\liminf_{n\rightarrow\infty}\int_{\mathbb R^2}\left(\frac{F_k(u_n^k)}{\|z_n^k\|^2}+\frac{G_k(v_n^k)}{\|z_n^k\|^2}\right)\mathrm{d} x=+\infty, $$ which yields $\Phi_k(z_n^k)\rightarrow-\infty$ as $n\rightarrow\infty$. This is a contradiction and therefore $\{z_n^k\}$ stays bounded in $E$. \end{proof} Up to a subsequence, we may assume $z_n^k\rightharpoonup z^k$ weakly in $E$, as $n\rightarrow\infty$. It is standard to check that $\Phi_k'(z^k)=0$. \begin{proposition}\label{tk1} The truncated problem \re{qk1} admits a ground state solution. \end{proposition} \begin{proof} If $z^k\not=0$, then by \re{am1} and Fatou's Lemma one has \begin{align*} c_\ast+o_n(1)&=\Phi_k(z_n^k)-\frac{1}{2}\langle \Phi_k'(z_n^k),z_n^k\rangle\\ &=\int_{\mathbb R^2}\frac{1}{2}f_k(u_n^k)u_n^k-F_k(u_n^k)+\int_{\mathbb R^2}\frac{1}{2}g_k(v_n^k)v_n^k-G_k(v_n^k)\\ &\ge\int_{\mathbb R^2}\frac{1}{2}f_k(u^k)u^k-F_k(u^k)+\int_{\mathbb R^2}\frac{1}{2}g_k(v^k)v^k-G_k(v^k)+o_n(1)\\ &=\Phi_k(z^k)-\frac{1}{2}\langle \Phi_k'(z^k),z^k\rangle+o_n(1)\\ &=\Phi_k(z^k)+o_n(1)\ge c_\ast+o_n(1), \end{align*} so that $z^k$ is a ground state solution to \re{qk1}.
\noindent If $z^k=0$, we claim there exist $\nu>0$, $R_0>0$ and $\{y_n\}\subset\mathbb R^2$ such that \begin{equation}\label{nonvanishing}
\lim_{n\rightarrow\infty}\int_{B_{R_0}(y_n)}(|u_n^k|^2+|v_n^k|^2)\, \mathrm{d} x\ge\nu. \end{equation} Suppose the claim holds true and set $\tilde{u}_n^k(\cdot):=u_n^k(\cdot+y_n)$ and $\tilde{v}_n^k(\cdot):=v_n^k(\cdot+y_n)$, so that \begin{equation}\label{yk22}
\lim_{n\rightarrow\infty}\int_{B_{R_0}(0)}(|\tilde{u}_n^k|^2+|\tilde{v}_n^k|^2)\, \mathrm{d} x\ge\nu, \end{equation} and $\Phi_k(\tilde{z}_n^k)\rightarrow c_\ast>0$ and $\Phi_k'(\tilde{z}_n^k)\rightarrow 0$, as $n\rightarrow\infty$, where $\tilde{z}_n^k=(\tilde{u}_n^k,\tilde{v}_n^k)$. Clearly $\{\tilde{z}_n^k\}$ is bounded in $E$ and, up to a subsequence, by \re{yk22} we may assume that $\tilde{z}_n^k\rightharpoonup \tilde{z}^k\not=0$ weakly in $E$, where $\tilde{z}^k$ is a ground state solution of \re{qk1}.
\noindent It remains to prove the claim \re{nonvanishing}, which we do by contradiction. Indeed, if \re{nonvanishing} does not hold, then $$
\lim_{n\rightarrow\infty}\sup_{y\in\mathbb R^2}\int_{B_R(y)}(|u_n^k|^2+|v_n^k|^2)\, \mathrm{d} x=0\ \ \mbox{for all}\ \ R>0, $$ and then, by Lions' Lemma, $u_n^k\rg0, v_n^k\rg0$ strongly in $L^s(\mathbb R^2)$ for any $s>2$. By $(H1)$ and \re{gj} we have $$
\int_{\mathbb R^2}(|\nabla u_n^k|^2+V_0|u_n^k|^2)\,\mathrm{d} x=\int_{\mathbb R^2}g_k(v_n^k)u_n^k\,\mathrm{d} x\rightarrow 0,\,\,\mbox{as}\,\, n\rightarrow\infty. $$ Namely, $u_n^k\rg0$ strongly in $H^1(\mathbb R^2)$, as $n\rightarrow\infty$. It follows that $$
\int_{\mathbb R^2}(|\nabla v_n^k|^2+V_0|v_n^k|^2)\,\mathrm{d} x=\int_{\mathbb R^2}f_k(u_n^k)v_n^k\,\mathrm{d} x\rightarrow 0,\,\,\mbox{as}\,\, n\rightarrow\infty. $$ Namely, $v_n^k\rg0$ strongly in $H^1(\mathbb R^2)$, as $n\rightarrow\infty$. So we get $c_\ast+o_n(1)=\Phi_k(z_n^k)\rg0$, as $n\rightarrow\infty$, which is a contradiction. \end{proof} \noindent Denote by $\mathcal{S}_k$ the set of ground state solutions to system \re{qk1}; then $\mathcal{S}_k\not=\emptyset$. As above, for any $z=(u,v)\in \mathcal{S}_k$, $u,v\in L^{\infty}(\mathbb R^2)\cap C_{loc}^{1,\gamma}(\mathbb R^2)$ for some $\gamma\in(0,1)$. Recalling that $c_\ast=c_\ast^k$, we get $\mathcal{S}\subseteq \mathcal{S}_k$. In order to prove the reverse inclusion, let us recall the following results from \cite{DJJ}.
\begin{lemma}\label{l54.1}{\rm \cite{DJJ}} With the assumptions in Theorem \ref{Th2}, we have: \begin{itemize}
\item [1)] for any $z\in \mathcal{N}$, $\Phi|_{\hat{E}(z)}$ admits a unique maximum point which is precisely at $z$;
\item [2)] for any $z\in E\setminus E^-$, the set $\hat{E}(z)$ intersects $\mathcal{N}$ at exactly one point $\hat{m}(z)$, which is the unique global maximum point of $\Phi|_{\hat{E}(z)}$;
\item [3)] $$c_\ast=\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi(\omega).$$ \end{itemize} \end{lemma}
\noindent Let $m:=\hat{m}|_{S^+}: S^+\longrightarrow\mathcal{N}$ and $$ \Psi: S^+\longrightarrow\mathbb R, \quad \Psi(z):=\Phi(m(z)), \ z\in S^+; $$
then $\hat{m}$ is continuous and $m$ is a homeomorphism between $S^+$ and $\mathcal{N}$. As in \cite{Weth}, $m$ is invertible and its inverse is given by $$m^{-1}(z)=\frac{z^+}{\|z^+\|},\,\, z=z^++z^-\in\mathcal{N},\,\, z^+\in E^+,\,\ z^-\in E^-.$$ Similar to Proposition \ref{pk5.5}, we have \begin{proposition}\label{pk5.6}\noindent \begin{itemize}
\item [1)] $\Psi\in C^1(S^+,\mathbb R)$ and $$
\langle\Psi'(z),\omega\rangle=\|m(z)^+\|\langle\Phi'(m(z)),\omega\rangle\ \ \mbox{for all}\ \ \omega\in T_z(S^+); $$
\item [2)] If $\{\omega_n\}\subset S^+$ is a Palais-Smale sequence for $\Psi$, then $\{m(\omega_n)\}\subset \mathcal{N}$ is a Palais-Smale sequence for $\Phi$. Namely, if $\Psi(\omega_n)\rightarrow d$ for some $d>0$ and $\|\Psi'(\omega_n)\|_\ast\rightarrow 0$ as $n\rightarrow\infty$, then $\Phi(m(\omega_n))\rightarrow d$ and $\|\Phi'(m(\omega_n))\|\rg0$ as $n\rightarrow\infty$, where
$$
\|\Psi'(\omega_n)\|_\ast=\sup_{\stackrel{\phi\in T_{\omega_n}(S^+)}{\|\phi\|=1}}\langle\Psi'(\omega_n),\phi\rangle\ \ \mbox{and}\ \ \|\Phi'(m(\omega_n))\|=\sup_{\stackrel{\phi\in E}{\|\phi\|=1}}\langle\Phi'(m(\omega_n)),\phi\rangle;
$$
\item [3)] $\omega\in S^+$ is a critical point of $\Psi$ if and only if $m(\omega)\in \mathcal{N}$ is a critical point of $\Phi$;
\item [4)] $\inf_{S^+}\Psi=\inf_{\mathcal{N}}\Phi$. \end{itemize} \end{proposition}
\begin{proposition}\label{sk} $$\mathcal{S}_k=\mathcal{S}.$$ \end{proposition}
\begin{proof} For any $z^k\in \mathcal{S}_k$, we have $z^k\in \mathcal{N}_k$, so by Lemma \ref{lk54.1} $\Phi_k|_{\hat{E}(z^k)}$ admits its unique maximum point at $z^k$ and $$ c_\ast^k=\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi_k(\omega)=\max_{\omega\in\hat{E}(z^k)}\Phi_k(\omega). $$
Since $z^k\in E\setminus E^-$, by Lemma \ref{l54.1} the set $\hat{E}(z^k)$ intersects $\mathcal{N}$ at exactly one point $\hat{m}(z^k)$, which is the unique global maximum of $\Phi|_{\hat{E}(z^k)}$. Let $\hat{m}(z^k)=(\hat{u}^k,\hat{v}^k)$; then, by $0\le f_k(t)\le f(t)$ and $0\le g_k(t)\le g(t)$ for $t\ge0$, we have \begin{align*} c_\ast^k&=\max_{\omega\in\hat{E}(z^k)}\Phi_k(\omega)\ge\Phi_k(\hat{m}(z^k))\\ &=\Phi(\hat{m}(z^k))+\int_{\mathbb R^2}[F(\hat{u}^k)-F_k(\hat{u}^k)]\,\mathrm{d} x+\int_{\mathbb R^2}[G(\hat{v}^k)-G_k(\hat{v}^k)]\,\mathrm{d} x\\ &=\max_{\omega\in\hat{E}(z^k)}\Phi(\omega)+\int_{\mathbb R^2}[F(\hat{u}^k)-F_k(\hat{u}^k)]\,\mathrm{d} x+\int_{\mathbb R^2}[G(\hat{v}^k)-G_k(\hat{v}^k)]\,\mathrm{d} x\\ &\ge\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\Phi(\omega)\ge c_\ast, \end{align*} which implies $F(\hat{u}^k(x))\equiv F_k(\hat{u}^k(x))$ and $G(\hat{v}^k(x))\equiv G_k(\hat{v}^k(x))$ for a.e. $x\in\mathbb R^2$, and $$\max_{\omega\in\hat{E}(z^k)}\Phi_k(\omega)=\Phi_k(\hat{m}(z^k))=\Phi(\hat{m}(z^k))=c_\ast.$$ Then $\Psi(m^{-1}(\hat{m}(z^k)))=\Phi(\hat{m}(z^k))=c_\ast$. Notice that $m^{-1}(\hat{m}(z^k))\in S^+$. Then, by Proposition \ref{pk5.6}, $m^{-1}(\hat{m}(z^k))$ is a minimizer of $\Psi$ on the $C^1$-manifold $S^+$. Thus $$ \langle\Psi'(m^{-1}(\hat{m}(z^k))),\omega\rangle=0\ \ \mbox{for all}\ \ \omega\in T_{m^{-1}(\hat{m}(z^k))}(S^+). $$
It follows from $3)$ of Proposition \ref{pk5.6} that $\Phi'(\hat{m}(z^k))=0$, which yields $\hat{m}(z^k)\in \mathcal{S}$. By the uniqueness of the global maximum point of $\Phi_k|_{\hat{E}(z^k)}$, we get $z^k=\hat{m}(z^k)$ and hence $z^k\in \mathcal{S}$. Therefore, $\mathcal{S}_k=\mathcal{S}$. \end{proof} In the last part of this section, in the spirit of \cite{dsr}, we prove that $uv>0$ in $\mathbb R^2$ for any $z=(u,v)\in \mathcal{S}_k$.
\noindent Let $h(s):=g_k^{-1}(s)$ and $H$ denote the primitive function of $h$. By \re{gj}, for some $c,C>0$, \begin{equation}\label{gj2} \left\{ \begin{array}{ll}
h(s)s\le C|s|^{(p+1)/p}\,\,\ &\mbox{for}\,\,s\in\mathbb R,\\
h(s)s\ge s^2/2\,\,\,&\mbox{if}\,\,|s|\le g(b_0),\\
h(s)s\ge c|s|^{(p+1)/p}\,\,\,&\mbox{if}\,\, |s|>g(b_0). \end{array} \right. \end{equation} and clearly the same estimates hold for $H(s)$ as well. Consider the Schr\"odinger operator $L:=-\Delta +V_0$ and the Sobolev space $W^{2,(p+1)/p}(\mathbb R^2)$ endowed with the norm
$$\interleave{u\interleave}=\left(\int_{\mathbb R^2}|Lu|^{\frac{p+1}{p}}\,\mathrm{d} x\right)^{\frac{p}{p+1}}.$$ The following embeddings hold: $$ W^{2,\frac{s+1}{s}}(\mathbb R^N)\hookrightarrow L^r(\mathbb R^N),\,\, \mbox{for any}\,\, r\ge\frac{s+1}{s},\,s>1,\,\,\mbox{if}\,\, s(N-2)\le 2; $$ in particular $W^{2,(p+1)/p}(\mathbb R^2)\hookrightarrow L^2(\mathbb R^2)\cap L^{p+1}(\mathbb R^2)\cap L^{q+1}(\mathbb R^2)$. For $u\in W^{2,(p+1)/p}(\mathbb R^2)$, define $$ J_k(u)=\int_{\mathbb R^2}(H(Lu)-F_k(u))\,\mathrm{d} x; $$ then $J_k$ is of class $C^1$ and $$ \langle J_k'(u),\varphi\rangle=\int_{\mathbb R^2}(h(Lu)L\varphi-f_k(u)\varphi)\,\mathrm{d} x,\,\, u,\varphi\in W^{2,(p+1)/p}(\mathbb R^2). $$ \begin{proposition}\label{equ} $(u,v)\in E$ is a critical point of $\Phi_k$ if and only if $u$ is a critical point of $J_k$ and $v=h(Lu)$. Moreover, one has $\Phi_k(u,v)=J_k(u)$. \end{proposition} \noindent Define $$ c_1(\mathbb R^2)=\inf_{u\in\mathcal{N}_J}J_k(u),\,\,\,\mbox{where}\,\,\,\mathcal{N}_J:=\{u\in W^{2,(p+1)/p}(\mathbb R^2)\setminus\{0\}: \langle J_k'(u),u\rangle=0\}, $$ which under our assumptions might not be well defined. We overcome this difficulty by considering an approximation via bounded domains. Precisely, for any $R>0$ let us consider the problem \begin{equation}\label{qk2} \left\{ \begin{array}{ll} -\Delta u+V_0u=g_k(v)\\ -\Delta v+V_0v=f_k(u) \end{array} \right. \end{equation} with $u,v\in H_0^1(B_R(0))$, whose associated energy functional is $$ I_R(z):=\int_{B_R(0)}(\nabla u\nabla v+V_0uv)\,\mathrm{d} x-\int_{B_R(0)}(F_k(u)+G_k(v))\,\mathrm{d} x, $$ where $z=(u,v)\in E_R:=H_0^1(B_R(0))\times H_0^1(B_R(0))$.
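The identity $\Phi_k(u,v)=J_k(u)$ in Proposition \ref{equ} can be read as a Legendre (duality) transform: assuming, as in the construction above, that $g_k$ is an odd increasing bijection of $\mathbb R$ with $h=g_k^{-1}$, one has

```latex
% Since H'=h=g_k^{-1} and G_k'=g_k, the function s\,h(s)-G_k(h(s)) has
% derivative h(s) and vanishes at s=0, whence
\begin{aligned}
H(s)+G_k(h(s))&=s\,h(s),\qquad s\in\mathbb R.
\end{aligned}
```

Substituting $s=Lu$ and $v=h(Lu)$, and using $\int_{\mathbb R^2}(Lu)\,v\,\mathrm{d} x=\int_{\mathbb R^2}(\nabla u\nabla v+V_0uv)\,\mathrm{d} x$, one formally recovers $J_k(u)=\int_{\mathbb R^2}(H(Lu)-F_k(u))\,\mathrm{d} x=\Phi_k(u,v)$.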
\noindent We can define as above $E_R^+,E_R^-,\hat{E}_R(z)$ and $$ \mathcal{N}_R:=\{z\in E_R\setminus E_R^-:\langle I_R'(z),z\rangle=0,\,\,\langle I_R'(z),\phi\rangle=0\,\,\, \mbox{for all}\,\,\phi\in E_R^-\}. $$ Denote by $c_\ast(B_R(0))$ the corresponding least energy associated to the energy functional $I_R$. Similar to Lemma \ref{o14}, every Palais-Smale sequence for $I_R$ is bounded in $E_R$. Then $c_\ast(B_R(0))$ is the ground state critical level associated to $I_R$. Moreover, $$ c_\ast(B_R(0))=\inf_{z\in E_R\setminus E_R^-}\max_{\omega\in\hat{E}_R(z)}I_R(\omega). $$ \begin{remark} If $z=(u,v)\in\mathcal{N}_R$, we have $\langle I_R'(z),(\varphi,-\varphi)\rangle=0$ for all $\varphi\in H_0^1(B_R(0))$. In general, $\langle I_R'(z),(\varphi,-\varphi)\rangle=0$ does not hold for all $\varphi\in H^1(\mathbb R^2)$. Then, $\mathcal{N}_R$ is not a subset of $\mathcal{N}$, so it is not clear if $c_\ast(B_R(0))$ is greater than $c_\ast$. \end{remark} \noindent Let $$X_R=W^{2,(p+1)/p}(B_R(0))\cap W_0^{1,(p+1)/p}(B_R(0))$$
endowed with the norm $$\interleave{u\interleave}=\left(\int_{B_R(0)}|Lu|^{\frac{p+1}{p}}\,\mathrm{d} x\right)^{\frac{p}{p+1}}$$ and let $$ J_R(u)=\int_{B_R(0)}(H(Lu)-F_k(u))\,\mathrm{d} x,\,\,\ u\in X_R. $$
\begin{proposition} $z=(u,v)\in E_R$ is a critical point of $I_R$ if and only if $u$ is a critical point of $J_R$ and $v=h(Lu)$. Moreover, $I_R(u,v)=J_R(u)$. \end{proposition} \noindent Let $$ \mathcal{N}_{J_R}:=\{u\in X_R\setminus\{0\}: \langle J_R'(u),u\rangle=0\},\,\, c_1(B_R(0)):=\inf_{u\in\mathcal{N}_{J_R}}J_R(u). $$ Notice that $\mathcal{N}_{J_R}$ might not be a $C^1$-manifold, so we next borrow some ideas from \cite{Weth} to overcome this difficulty and prove the existence of ground states corresponding to the functional $J_R$ on $\mathcal{N}_{J_R}$ for any $R$. Then, by passing to the limit, we show that $c_1(\mathbb R^2)$ is the ground state critical value.
\begin{lemma}\label{mountain} For any $u\in X_R\setminus\{0\}$, $J_R(tu)\rightarrow-\infty$, as $t\rightarrow+\infty$, and the set $\mathbb R^+u$ intersects $\mathcal{N}_{J_R}$ at exactly one point, denoted by $\hat{m}_R(u)$, which is the unique global maximum point of $t\mapsto J_R(tu)$, $t>0$. In particular, $\hat{m}_R(u)=u$ if and only if $u\in\mathcal{N}_{J_R}$. Moreover, there exist $a_R,b_R>0$ such that $$ \interleave{u\interleave}\ge a_R\,\,\mbox{for any}\,\,u\in\mathcal{N}_{J_R}\,\,\mbox{and}\,\,c_1(B_R(0))\ge b_R. $$ \end{lemma} \begin{proof} {\bf Step 1.} By \re{gj} and \re{gj2}, for any $u\in X_R\setminus\{0\}$ and $t>0$, $$
J_R(tu)\le Ct^{(p+1)/p}\int_{B_R(0)}|Lu|^{(p+1)/p}-\frac{\beta}{q+1} t^{q+1}\int_{B_R(0)}|u|^{q+1}\rightarrow-\infty,\,\,t\rightarrow+\infty, $$ and for any $\gamma>0$ small, there exists $c_\gamma>0$ such that \begin{align*}
J_R(tu)\ge& \frac{t^2}{2}\int_{\{|Lu|\le g(b_0)\}}|Lu|^2+ct^{(p+1)/p}\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}\\
&-\gamma t^2\int_{B_R(0)}|u|^2-c_\gamma t^{q+1}\int_{B_R(0)}|u|^{q+1}>0,\,\,|t|\ll1, \end{align*} where $$
\{|Lu|\le g(b_0)\}:=\{x\in B_R(0): |Lu(x)|\le g(b_0)\}. $$ For any $u\in\mathcal{N}_{J_R}$, let $\theta(t)=J_R(tu)$, then $\theta(0)=0$ and $\theta'(1)=0$. Recalling that $g_k(s)/s$ is strictly increasing for $s>0$, $h(s)/s$ is strictly decreasing for $s>0$. Obviously, $Lu=0$ if and only if $u=0$. Then for any $t>1$, thanks to $(H4)$, $(H6)$, \begin{align*} \theta'(t)&=\int_{B_R(0)}h(tLu)Lu-\int_{B_R(0)}f_k(tu)u\\
&=\int_{B_R(0)}h(t|Lu|)|Lu|-\int_{B_R(0)}f_k(t|u|)|u|\\
&=\int_{B_R(0)}\frac{h(t|Lu|)}{t|Lu|}t|Lu|^2-\int_{B_R(0)}\frac{f_k(t|u|)}{t|u|}t|u|^2\\
&<t\int_{B_R(0)}h(|Lu|)|Lu|-t\int_{B_R(0)}f_k(|u|)|u|\\ &=t\int_{B_R(0)}h(Lu)Lu-t\int_{B_R(0)}f_k(u)u=0. \end{align*} Similarly, $\theta'(t)>0$ for $0<t<1$; namely, $J_R(u)=\max_{t\ge0}J_R(tu)$. In the same way, for any $u\in X_R\setminus\{0\}$, $J_R(tu)\rightarrow-\infty$ as $t\rightarrow+\infty$ and the set $\mathbb R^+u$ intersects $\mathcal{N}_{J_R}$ at exactly one point, which is the unique global maximum point of $J_R(tu)$ for $t>0$. \vskip0.1in {\bf Step 2.} We prove that there exists $a_R>0$ such that $$ \interleave{u\interleave}\ge a_R\,\,\mbox{for any}\,\,u\in\mathcal{N}_{J_R}. $$ For any $u\in X_R\setminus\{0\}$, by \re{gj2} one has \begin{align}\label{biao1}
\int_{B_R(0)}h(Lu)Lu&\ge\frac{1}{2}\int_{\{|Lu|\le g(b_0)\}}|Lu|^2+c\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}\nonumber\\
&\ge\frac{1}{2}|B_R(0)|^{\frac{1-p}{1+p}}\left(\int_{\{|Lu|\le g(b_0)\}}|Lu|^{(p+1)/p}\right)^{2p/(p+1)}\\
&\ \ \ +c\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}.\nonumber \end{align} Moreover, by $(H1)$, for any small $\gamma>0$, there exist $c_\gamma>0$ and $C>0$ (independent of $\gamma$) such that \begin{align}\label{biao2}
\int_{B_R(0)}f_k(u)u\le\int_{B_R(0)}(\gamma u^2+c_\gamma |u|^{q+1})\le C\interleave{u\interleave}^2(\gamma+c_\gamma\interleave{u\interleave}^{q-1}). \end{align} Here we used the embedding of $X_R$ into $L^r(B_R(0))$ for $r=2$ and $r=q+1$. By choosing
$$\gamma=2^{-\frac{4p+2}{p+1}}|B_R(0)|^{\frac{1-p}{1+p}}C^{-1},$$ and for any $u\in\mathcal{N}_{J_R}$, if $\interleave{u\interleave}^{q-1}\le\gamma c_\gamma^{-1}$, by \re{biao1} and \re{biao2}, \begin{align*}
&\frac{1}{4}|B_R(0)|^{\frac{1-p}{1+p}}\left(\int_{\{|Lu|\le g(b_0)\}}|Lu|^{(p+1)/p}\right)^{2p/(p+1)}+c\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}\\
&\le C\gamma 2^{2p/(p+1)}\left(\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}\right)^{2p/(p+1)}. \end{align*}
Since $u\not=0$, we have $\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}>0$ and then $$
\int_{\{|Lu|>g(b_0)\}}|Lu|^{(p+1)/p}\ge\left(\frac{c}{C\gamma 2^{2p/(p+1)}}\right)^{\frac{p+1}{p-1}}>0. $$ So for any $u\in\mathcal{N}_{J_R}$ the following holds $$ \interleave{u\interleave}\ge\min\left\{(\gamma c_\gamma^{-1})^{\frac{1}{q-1}},\left(\frac{c}{C\gamma 2^{2p/(p+1)}}\right)^{\frac{p}{p-1}}\right\}:=a_R>0. $$ \vskip0.1in {\bf Step 3.} We prove that there exists $b_R>0$ such that $c_1(B_R(0))\ge b_R.$ Obviously, $c_1(B_R(0))\ge0$. Assume by contradiction that there exists $\{u_n\}\subset\mathcal{N}_{J_R}$ such that $J_R(u_n)\rg0$, as $n\rightarrow\infty$. We claim that $\{u_n\}$ is bounded in $X_R$. Indeed, if not, we may assume $\interleave{u_n\interleave}\rightarrow\infty$, as $n\rightarrow\infty$. Let $v_n=u_n/\interleave{u_n\interleave}$ and assume that $v_n\rightharpoonup v$ weakly in $X_R$. If $v=0$, then by compactness of the embedding of $X_R$ into $L^r(B_R(0))$ for $r=2$ and $r=q+1$, we get $\int_{B_R(0)}F_k(v_n)\rg0$, as $n\rightarrow\infty$. Then by Step 1, \begin{align*} J_R(u_n)=\max_{t\ge0}J_R(tu_n)\ge J_R(v_n)=\int_{B_R(0)}H(Lv_n)+o_n(1). \end{align*} Namely, $\int_{B_R(0)}H(Lv_n)=o_n(1)$. On the other hand, similar to \re{biao1}, \begin{align*}
\int_{B_R(0)}H(Lv_n)&\ge\frac{1}{2}|B_R(0)|^{\frac{1-p}{1+p}}\left(\int_{\{|Lv_n|\le g(b_0)\}}|Lv_n|^{(p+1)/p}\right)^{2p/(p+1)}\\
&\ \ \ +c\int_{\{|Lv_n|>g(b_0)\}}|Lv_n|^{(p+1)/p}. \end{align*} It follows that $v_n\rg0$ strongly in $X_R$, which contradicts the fact $\interleave{v_n\interleave}=1$. So $v\not=0$ and by \re{am2}, \re{gj2} and Fatou's Lemma, $$
o_n(1)=\frac{J_R(u_n)}{\interleave{u_n\interleave}^{\frac{p+1}{p}}}\le C-\int_{B_R(0)}\frac{F_k(u_n)}{|u_n|^{(p+1)/p}}|v_n|^{(p+1)/p}\rightarrow-\infty. $$ This is a contradiction. Hence, $\{u_n\}$ is bounded in $X_R$. We may assume, up to a subsequence, $u_n\rightharpoonup u$ weakly in $X_R$ and strongly in $L^2(B_R(0))$. Noting that $h(t)/t$ is strictly decreasing for $t>0$, we have $0<h(t)t\le2H(t)$ for all $t\not=0$. Then by $(H2)$, \begin{align*} o_n(1)&=J_R(u_n)-\frac{1}{2}\langle J_R'(u_n),u_n\rangle\\ &=\int_{B_R(0)}H(Lu_n)-\frac{1}{2}h(Lu_n)Lu_n+\frac{1}{2}\int_{B_R(0)}f_k(u_n)u_n-2F_k(u_n)\\
&\ge\frac{1}{2}\int_{B_R(0)}f_k(u_n)u_n-2F_k(u_n)\ge\frac{\theta-2}{2}\int_{\{x\in B_R(0): |u_n|\le a_0\}}F(u_n)\\
&\rightarrow\frac{\theta-2}{2}\int_{\{x\in B_R(0): |u|\le a_0\}}F(u),\,\,\mbox{as}\,\,n\rightarrow\infty. \end{align*} It follows that $$
\int_{\{x\in B_R(0): |u|\le a_0\}}F(u)=0. $$ Since $u\in X_R$, from elliptic regularity we get $u\in C^{0,2/(p+1)}(\overline{B_R(0)})$, which yields $u=0$. Analogously we get $\int_{B_R(0)}F_k(u_n)\rg0$, as $n\rightarrow\infty$ and \begin{align*} \int_{B_R(0)}H(Lu_n)=J_R(u_n)+o_n(1)=o_n(1). \end{align*} Similar to \re{biao1}, \begin{align*}
\int_{B_R(0)}H(Lu_n)&\ge\frac{1}{2}|B_R(0)|^{\frac{1-p}{1+p}}\left(\int_{\{|Lu_n|\le g(b_0)\}}|Lu_n|^{(p+1)/p}\right)^{2p/(p+1)}\\
&\ \ \ +c\int_{\{|Lu_n|>g(b_0)\}}|Lu_n|^{(p+1)/p}. \end{align*} Thus $u_n\rg0$ strongly in $X_R$, which contradicts the fact $\interleave{u\interleave}\ge a_R$ for all $u\in\mathcal{N}_{J_R}$. \end{proof} \noindent Define $$ \hat{m}_R: u\in X_R\setminus\{0\}\mapsto\hat{m}_R(u)\in\mathbb R^+u\cap\mathcal{N}_{J_R}. $$ Similarly to \cite{Szulkin}, we have the following \begin{lemma}\label{l2.4} There exists $\delta>0$ such that $\interleave{u\interleave}\ge\delta$ for all $u\in\mathcal{N}_{J_R}$. In particular, $$ \interleave{\hat{m}_R(u)\interleave}\ge\delta,\ \ \ \mbox{for all}\ \ u\in X_R\setminus\{0\}. $$ Moreover, for each compact subset $\mathcal{W}\subset X_R\setminus\{0\}$, there exists a constant $C_{\mathcal{W}}>0$ such that $$ \interleave{\hat{m}_R(u)\interleave}\le C_{\mathcal{W}},\ \ \ \mbox{for all}\ \ u\in\mathcal{W}. $$ \end{lemma} \begin{proof} By \re{gj2}, for any $u\in\mathcal{N}_{J_R}$, we have $$ b_R\le J_R(u)\le\int_{B_R(0)}H(Lu)\le C\interleave{u\interleave}^{(p+1)/p}. $$ Thus, there exists $\delta>0$ such that $\interleave{u\interleave}\ge\delta$ for any $u\in\mathcal{N}_{J_R}$. Moreover, since $\hat{m}_R(u)=\hat{m}_R(u/\interleave{u\interleave})$ for any $u\not=0$, without loss of generality, we may assume $\mathcal{W}\subset S_R:=\{u\in X_R: \interleave{u\interleave}=1\}$. In the following, we claim that there exists $C_{\mathcal{W}}>0$ such that \begin{equation}\label{claim} \hbox{$J_R\le 0$ on $\mathbb R^+u\setminus B_{C_{\mathcal{W}}}(0)$, for all $u\in\mathcal{W}$,} \end{equation} where $B_{C_{\mathcal{W}}}(0)=\{v\in X_R: \interleave{v\interleave}\le C_{\mathcal{W}}\}$. If the claim \re{claim} is true, then noting that $J_R(\hat{m}_R(u))\ge b_R>0$ for all $0\not=u\in X_R$, we have
$\interleave{\hat{m}_R(u)\interleave}=\interleave{\hat{m}_R(u/\interleave{u\interleave})\interleave}\le C_{\mathcal{W}}$ for any $u\in \mathcal{W}$. \vskip0.1in \noindent So let us prove \re{claim}. Assume by contradiction that there exist $\{u_n\}\subset\mathcal{W}\subset S_R$ with $u_n\rightarrow u$ strongly in $X_R$ and $\omega_n=t_nu_n\in\mathbb R^+u_n$ with $t_n\rightarrow\infty$ such that $J_R(\omega_n)\ge0$ for all $n$. For $n$ large enough, by \re{gj2} one has \begin{align}\label{y3} 0\le\frac{J_R(\omega_n)}{\interleave{\omega_n\interleave}^{(p+1)/p}}&\le C
-\int_{B_R(0)}\frac{F_k(t_nu_n)}{|t_nu_n|^{(p+1)/p}}|u_n|^{(p+1)/p}. \end{align} Noting that $u_n\xrightarrow{a.e.}u\not=0$, it follows from Fatou's Lemma and \re{y3} that $$\frac{J_R(\omega_n)}{\interleave{\omega_n\interleave}^{(p+1)/p}}\rightarrow-\infty$$
as $n\rightarrow\infty$, which is a contradiction. \end{proof}
\noindent Let $m_R:=\hat{m}_R|_{S_R}: S_R\longrightarrow\mathcal{N}_{J_R}$ and $$ K: S_R\longrightarrow \mathbb R,\quad K(u):=J_R(m_R(u)), u\in S_R, $$ then $\hat{m}_R$ is continuous and $m_R$ is a homeomorphism between $S_R$ and $\mathcal{N}_{J_R}$. \begin{proposition}\label{pkr5.6}\noindent \begin{itemize}
\item [1)] $K\in C^1(S_R,\mathbb R)$ and $
\langle K'(u),\omega\rangle=\interleave{m_R(u)\interleave}\langle J_R'(m_R(u)),\omega\rangle$, for all $\omega\in T_u(S_R)$;
\item [2)] If $\{\omega_n\}\subset S_R$ is a Palais-Smale sequence for $K$, then $\{m_R(\omega_n)\}\subset \mathcal{N}_{J_R}$
is a Palais-Smale sequence for $J_R$. Namely, if $K(\omega_n)\rightarrow d$ for some $d>0$ and $\|K'(\omega_n)\|_\ast\rightarrow 0$, as $n\rightarrow\infty$,
then $J_R(m_R(\omega_n))\rightarrow d$ and $\|J_R'(m_R(\omega_n))\|\rg0$, as $n\rightarrow\infty$, where
$$
\|K'(\omega_n)\|_\ast=\sup_{\stackrel{\phi\in T_{\omega_n}(S_R)}{\interleave{\phi\interleave}=1}}\langle K'(\omega_n),\phi\rangle\ \
\mbox{and}\ \ \|J_R'(m_R(\omega_n))\|=\sup_{\stackrel{\phi\in X_R}{\interleave{\phi\interleave}=1}}\langle J_R'(m_R(\omega_n)),\phi\rangle.
$$
\item [3)] $\omega\in S_R$ is a critical point of $K$ if and only if $m_R(\omega)\in \mathcal{N}_{J_R}$ is a critical point of $J_R$;
\item [4)] $\inf_{S_R}K=\inf_{\mathcal{N}_{J_R}}J_R$. \end{itemize} \end{proposition} \begin{lemma} For any $R>0$, $c_1(B_R(0))\ge c_\ast(B_R(0)).$ \end{lemma} \begin{proof} Observing that $S_R$ is a $C^1$-manifold in $X_R$, by virtue of the Ekeland variational principle (see \cite[Theorem 3.1]{E}), there exists $\{u_n\}\subset\mathcal{N}_{J_R}$ such that \begin{equation}\label{pkrss4} J_R(u_n)\rightarrow c_1(B_R(0))>0 \ \ \mbox{and}\ \ J_R'(u_n)\rightarrow 0,\ \ \mbox{as}\ \ n\rightarrow\infty. \end{equation} It is standard to show that $\{u_n\}$ is bounded in $X_R$; thus, up to a subsequence, $u_n\rightharpoonup u$ weakly in $X_R$, as $n\rightarrow\infty$. By means of the compactness of $X_R\hookrightarrow L^{r}(B_R(0))$ for any $r\ge(p+1)/p$, $u_n\rightarrow u$ strongly in $L^{q+1}(B_R(0))$. Then \begin{align}\label{nonzero} \liminf_{n\rightarrow\infty}\int_{B_R(0)}h(Lu_n)Lu_n=\liminf_{n\rightarrow\infty}\int_{B_R(0)}f_k(u_n)u_n=\int_{B_R(0)}f_k(u)u. \end{align} By \re{gj2}, we also have $$
\int_{B_R(0)}h(Lu_n)Lu_n\ge\frac{1}{2}\int_{|Lu_n|\le g(b_0)}|Lu_n|^2+c\int_{|Lu_n|>g(b_0)}|Lu_n|^{(p+1)/p}. $$ We claim that $u\not\equiv0$. Indeed, otherwise by \re{nonzero} we get $$
\lim_{n\rightarrow\infty}\int_{|Lu_n|\le g(b_0)}|Lu_n|^2=0\,\,\mbox{and}\,\,\ \lim_{n\rightarrow\infty}\int_{|Lu_n|>g(b_0)}|Lu_n|^{(p+1)/p}=0. $$ Hence \begin{align*}
\lim_{n\rightarrow\infty}&\int_{B_R(0)}|Lu_n|^{(p+1)/p}\le\lim_{n\rightarrow\infty}\int_{|Lu_n|>g(b_0)}|Lu_n|^{(p+1)/p}\\
&+\lim_{n\rightarrow\infty}\left(\int_{|Lu_n|\le g(b_0)}|Lu_n|^2\right)^{(p+1)/(2p)}|B_R(0)|^{(p-1)/(2p)}=0, \end{align*} which implies $J_R(u_n)\rg0$, as $n\rightarrow\infty$. This is a contradiction.
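For the reader's convenience, the H\"older estimate used above on the set $\{|Lu_n|\le g(b_0)\}$, with the conjugate exponents $\frac{2p}{p+1}$ and $\frac{2p}{p-1}$, reads explicitly
$$
\int_{|Lu_n|\le g(b_0)}|Lu_n|^{(p+1)/p}\le\left(\int_{|Lu_n|\le g(b_0)}|Lu_n|^{2}\right)^{(p+1)/(2p)}\left|\{|Lu_n|\le g(b_0)\}\right|^{(p-1)/(2p)}
\le\left(\int_{|Lu_n|\le g(b_0)}|Lu_n|^{2}\right)^{(p+1)/(2p)}|B_R(0)|^{(p-1)/(2p)}.
$$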
\noindent Next let $u_0=\hat{m}_R(u)u$ and $v_n=\hat{m}_R(u)u_n$. By $(H7)$, $H$ is convex. Therefore $$ \liminf_{n\rightarrow\infty}\int_{B_R(0)}H(Lv_n)\ge\int_{B_R(0)}H(Lu_0)\,\,\mbox{and}\,\, \lim_{n\rightarrow\infty}\int_{B_R(0)}F(v_n)=\int_{B_R(0)}F(u_0). $$ As $u_0\in\mathcal{N}_{J_R}$, on the one hand one has \begin{align*} \liminf_{n\rightarrow\infty}J_R(v_n)\ge\int_{B_R(0)}H(Lu_0)-\int_{B_R(0)}F(u_0)\ge c_1(B_R(0)). \end{align*} On the other hand, it follows from Lemma \ref{mountain} and $u_n\in\mathcal{N}_{J_R}$ that $$ \liminf_{n\rightarrow\infty}J_R(v_n)\le\liminf_{n\rightarrow\infty}\max_{t\ge0}J_R(tu_n)=\liminf_{n\rightarrow\infty}J_R(u_n)=c_1(B_R(0)), $$ and in turn $J_R(u_0)=c_1(B_R(0))$. By \re{pkrss4}, $J_R'(u_0)=0$ and, by Proposition \ref{equ}, $(u_0,v_0)$ is a nontrivial critical point of $I_R$, namely $(u_0,v_0)\in\mathcal{N}_R$, where $v_0=h(Lu_0)$. Finally, $$ c_\ast(B_R(0))\le I_R(u_0,v_0)=J_R(u_0)=c_1(B_R(0)). $$ \end{proof} Arguing as in \cite{dsr}, one can prove the reverse inequality, which yields the following \begin{lemma}\label{crr} For any $R>0$, $$c_\ast(B_R(0))=c_1(B_R(0)).$$ \end{lemma} \begin{lemma}\label{signr} Let $(u_R,v_R)$ be any ground state for the functional $I_R$; then $u_Rv_R>0$ in $B_R(0)$. \end{lemma}
\begin{proof} Recalling that $\mathcal{S}=\mathcal{S}_k$, it is enough to prove $uv>0$ in $\mathbb R^2$ for any $(u,v)\in\mathcal{S}_k$. For any $R>0$ and any ground state $(u_R,v_R)$ for the functional $I_R$, by Lemma \ref{crr} and Proposition \ref{equ}, $u_R$ is a ground state for the functional $J_R$. Let $\omega=L^{-1}(|Lu_R|)$, then $\omega>0$
and $\omega\ge|u_R|$. Moreover, $\langle J_R'(t\omega),\omega\rangle=0$, where $t=\hat{m}_R(\omega)>0$. On the other hand, \begin{align*}
c_1(B_R(0))&\le J_R(t\omega)=J_R(tu_R)+\int_{B_R(0)}F_k(t|u_R|)-F_k(t\omega)\\
&\le c_1(B_R(0))+\int_{B_R(0)}F_k(t|u_R|)-F_k(t\omega). \end{align*} Hence
$\int_{B_R(0)}F_k(t|u_R|)-F_k(t\omega)\ge0$. It follows from $(H7)$ that $|u_R|=\omega>0$. If $u_R>0$ in $B_R(0)$, then by means of the maximum principle, $v_R>0$ in $B_R(0)$ and $u_Rv_R>0$ in $B_R(0)$. Similarly, if $u_R<0$ in $B_R(0)$, $u_Rv_R>0$ in $B_R(0)$. \end{proof} \noindent As a consequence of Lemma \ref{crr} and Lemma \ref{signr}, see also \cite[Remark 4.11]{dsr}, we have \begin{lemma}\label{compare} The map $R\mapsto c_\ast(B_R(0))$ is decreasing for $R>0$. \end{lemma} \begin{lemma}\label{cast} For any $R>0$, we have $c_\ast(B_R(0))\ge c_\ast(\mathbb R^2)$. \end{lemma} \begin{proof} For any $R>0$, let $z_R=(u_R,v_R)$ be a ground state solution of $I_R$. Namely, $I_R(z_R)=c_\ast(B_R(0))$ and $I_R'(z_R)=0$. We extend $z_R\in E_R$ to $z_R\in E$ by zero extension outside $B_R(0)$. Then, as in Lemma \ref{o14}, $\{z_R\}$ turns out to be bounded in $E$. Up to a subsequence, we may assume $z_R\rightharpoonup z_0$ weakly in $E$, as $R\rightarrow\infty$, then $z_0=(u_0,v_0)\in E$ is a nonnegative solution to \re{qk1}, namely $\Phi_k'(z_0)=0$.
If $z_0\not=0$, by $(H2)$ and Fatou's Lemma, we have for any $r\le R$, \begin{align*} c_\ast(B_r(0))&\ge\lim_{R\rightarrow\infty}c_\ast(B_R(0))=\lim_{R\rightarrow\infty}\left(I_R(z_R)-\frac{1}{2}\langle I_R'(z_R),z_R\rangle\right)\\ &=\lim_{R\rightarrow\infty}\left(\int_{B_R(0)}\frac{1}{2}f_k(u_R)u_R-F_k(u_R)+\int_{B_R(0)}\frac{1}{2}g_k(v_R)v_R-G_k(v_R)\right)\\ &\ge\int_{\mathbb R^2}\frac{1}{2}f_k(u_0)u_0-F_k(u_0)+\int_{\mathbb R^2}\frac{1}{2}g_k(v_0)v_0-G_k(v_0)\\ &=\Phi_k(z_0)-\frac{1}{2}\langle \Phi_k'(z_0),z_0\rangle=\Phi_k(z_0)\ge c_\ast(\mathbb R^2). \end{align*} If $z_0=0$, then $\{z_R\}$ satisfies one of the following alternatives: \begin{itemize}
\item [(1)] ({\it Vanishing}) $$ \lim_{R\rightarrow\infty}\sup_{y\in\mathbb R^2}\int_{B_r(y)}(u_R^2+v_R^2)\, \mathrm{d} x=0,\ \ \mbox{for all}\ \ r>0; $$
\item [(2)] ({\it Nonvanishing}) there exist $\nu>0$, $r_0>0$ and $\{y_R\}\subset\mathbb R^2$ such that $$ \lim_{R\rightarrow\infty}\int_{B_{r_0}(y_R)}(u_R^2+v_R^2)\, \mathrm{d} x\ge\nu. $$ \end{itemize}
As in Proposition \ref{nv}, {\it Vanishing} does not occur. Let $\tilde{u}_R:=u_R(\cdot+y_R)$ and $\tilde{v}_R:=v_R(\cdot+y_R)$, then $\tilde{z}_R=(\tilde{u}_R,\tilde{v}_R)$ is bounded in $H^1(\mathbb R^2)$ and $\tilde{z}_R\rightharpoonup \tilde{z}_0\not=0$ weakly in $H^1(\mathbb R^2)$. Moreover, writing $\tilde{z}_0=(\tilde{u}_0,\tilde{v}_0)$, both $\tilde{u}_0$ and $\tilde{v}_0$ are nonnegative. Obviously, $|y_R|\le R+r_0$. Assume that, up to a rotation,
$y_R/|y_R|\rightarrow (0,-1)\in\mathbb R^2$ and $(\tilde{u}_0,\tilde{v}_0)\in H_0^1(\Omega)\times H_0^1(\Omega)$ satisfies \begin{equation}\label{qkj2} \begin{cases} -\Delta \tilde{u}_0+V_0\tilde{u}_0=g_k(\tilde{v}_0)\\ -\Delta \tilde{v}_0+V_0\tilde{v}_0=f_k(\tilde{u}_0) \end{cases} \end{equation} where either $\Omega=\mathbb R^2$ or $\Omega=\{(x_1,x_2)\in\mathbb R^2:x_2>d\}$, with $d:=\liminf_{R\rightarrow\infty}\mbox{dist}(y_R,\partial B_R(0))$. If $\Omega=\mathbb R^2$, the proof is complete. If $\Omega=\{(x_1,x_2)\in\mathbb R^2:x_2>d\}$, then by the Hopf Lemma, $\partial\tilde{u}_0/\partial \eta<0$ and $\partial\tilde{v}_0/\partial \eta<0$ on $\partial\Omega$, where $\eta$ is the outward pointing unit normal to $\partial\Omega$; in particular, the product $(\partial\tilde{u}_0/\partial \eta)(\partial\tilde{v}_0/\partial \eta)$ is strictly positive on $\partial\Omega$. On the other hand, the Pohozaev type identity proved in \cite[Proposition 1.2]{Lions} (see also \cite[Lemma 3.1]{Pisto}) gives $$ \int_{\partial\Omega}\frac{\partial\tilde{u}_0}{\partial \eta}\frac{\partial\tilde{v}_0}{\partial \eta}=0, $$ which is a contradiction. \end{proof}
\begin{proof}[Proof of Theorem \ref{sign} completed] Thanks to Lemma \ref{cast} any ground state solution $(u,v)$ to \re{q11} does not change sign. Assume $u>0$ and $v>0$ in $\mathbb R^2$. Setting $$ f_1(u,v)=g(v)-V_0u\,\,\,\mbox{and}\,\,\ f_2(u,v)=f(u)-V_0v, $$ as a consequence of \cite[Theorem 1]{Sirakov1} and $(H1)$, $(u,v)$ is radially symmetric and strictly decreasing with respect to the same point, which we denote by $x_0$. Clearly, $\Delta u(x_0)\le0$ and $\Delta v(x_0)\le0$. To complete the proof of Theorem \ref{sign}, we next prove that actually $\Delta u(x_0)<0$ and $\Delta v(x_0)<0$. Indeed, if not, without loss of generality we may assume $\Delta u(x_0)=0$ and then $g(v(x_0))=V_0u(x_0)$. Let $u_1=u-u(x_0)$, then $u_1(x)\le0$ in $\mathbb R^2$ and \begin{align*} &-\Delta u_1=-\Delta u=g(v)-V_0u\\ &\le g(v(x_0))-V_0u(x_0)-V_0u_1\\ &=-V_0u_1. \end{align*}
Namely, $-\Delta u_1+V_0 u_1\le 0$ in $\mathbb R^2$. Noting that $u_1(x_0)=0$, by the maximum principle, $u_1\equiv0$ in $\mathbb R^2$, which is a contradiction. Therefore, $\Delta u(x_0)<0$. Similarly, one has $\Delta v(x_0)<0$ as well. {Finally, by Proposition \ref{vanishing_R}, $u(x+x_z), v(x+x_z)\rightarrow 0$, as $|x|\rightarrow\infty$, uniformly with respect to $z=(u,v)\in \mathcal{S}$. Since $u,v$ do not change sign, using the maximum principle, we conclude that there exist $C,c>0$, independent of $z=(u,v)\in \mathcal{S}$, such that $$|D^{\alpha}u(x)|+|D^{\alpha}v(x)|\le C\exp(-c|x-x_0|),\quad x\in \mathbb R^2,\,|\alpha|=0,1.$$ } \end{proof}
\section{Proof of Theorem \ref{Th1}}\label{semiclassical_s} \renewcommand{\theequation}{5.\arabic{equation}}
\subsection{Functional setting} By setting $u(x)=\varphi(\varepsilon x),v(x)=\psi(\varepsilon x)$ and $V_\varepsilon(x)=V(\varepsilon x)$, \re{q1} is equivalent to \begin{equation}\label{q51} \left\{ \begin{array}{ll} -\Delta u+V_\varepsilon(x)u=g(v)\\ -\Delta v+V_\varepsilon(x)v=f(u) \end{array} \right. \end{equation} We next consider \re{q51}. Let $H_\varepsilon$ be the completion of $C_0^\infty(\mathbb R^2)$ with respect to the inner product $$ (u,v)_{1,\varepsilon}:=\int_{\mathbb R^2}\nabla u\nabla v+V_\varepsilon(x)uv$$ and the norm
$$\|u\|_{1,\varepsilon}^2:=(u,u)_{1,\varepsilon},\quad u\in H_\varepsilon. $$ Let $E_\varepsilon:=H_\varepsilon\times H_\varepsilon$ with the inner product $$ (z_1,z_2)_\varepsilon:=(u_1,u_2)_{1,\varepsilon}+(v_1,v_2)_{1,\varepsilon},\ \ z_i=(u_i,v_i)\in E_\varepsilon,\: i=1,2, $$
and the norm $\|z\|_\varepsilon^2=\|(u,v)\|_\varepsilon^2=\|u\|_{1,\varepsilon}^2+\|v\|_{1,\varepsilon}^2.$ We have the orthogonal space decomposition $E_\varepsilon=E_\varepsilon^+\oplus E_\varepsilon^-$, where $$
E_\varepsilon^+:=\{(u,u)\,|\, u\in H_\varepsilon\}\ \ \ \mbox{and}\ \ \ E_\varepsilon^-:=\{(u,-u)\,|\,u\in H_\varepsilon\}. $$ For each $z=(u,v)\in E_\varepsilon$, $$z=z^++z^-=((u+v)/2,(u+v)/2)+((u-v)/2,(v-u)/2).$$ Weak solutions of \re{q51} are critical points of the associated energy functional $$ \Phi_\varepsilon(z):=\int_{\mathbb R^2}\nabla u\nabla v+V_\varepsilon(x)uv-I(z),\ \ z=(u,v)\in E_\varepsilon, $$ where $I(z)=\int_{\mathbb R^2}F(u)+G(v)$. Then $\Phi_\varepsilon\in C^1(E_\varepsilon,\mathbb R)$ and $$ \langle\Phi_\varepsilon'(z),w\rangle=\int_{\mathbb R^2}(\nabla u\nabla w_2+\nabla v\nabla w_1+V_\varepsilon(x)uw_2+V_\varepsilon(x)vw_1)-\int_{\mathbb R^2}(f(u)w_1+g(v)w_2), $$ for all $z=(u,v),w=(w_1,w_2)\in E_\varepsilon$. Moreover, $\Phi_\varepsilon$ can be rewritten as follows \begin{equation}\label{y51}
\Phi_\varepsilon(z):=\frac{1}{2}\|z^+\|_\varepsilon^2-\frac{1}{2}\|z^-\|_\varepsilon^2-I(z). \end{equation} We know that if $z\in E_\varepsilon$ is a nontrivial critical point of $\Phi_\varepsilon$, then $z\in E_\varepsilon\setminus E_\varepsilon^-$. In the spirit of \cite{Szulkin}, we define the generalized Nehari Manifold $$ \mathcal{N}_\varepsilon:=\{z\in E_\varepsilon\setminus E_\varepsilon^-: \langle \Phi_\varepsilon'(z),z\rangle_\varepsilon=0, \langle \Phi_\varepsilon'(z),\varphi\rangle_\varepsilon=0\ \mbox{for all}\ \ \varphi\in E_\varepsilon^-\}. $$ Let $$ c_\varepsilon:=\inf_{z\in\mathcal{N}_\varepsilon}\Phi_\varepsilon(z), $$ then $c_\varepsilon$ is the least energy for system \re{q51}, the so-called ground state level.
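\noindent For the reader's convenience, the identity \re{y51} can be checked directly: writing $z^+=(w,w)\in E_\varepsilon^+$ and $z^-=(\zeta,-\zeta)\in E_\varepsilon^-$, so that $u=w+\zeta$ and $v=w-\zeta$, one has $\nabla u\nabla v=|\nabla w|^2-|\nabla \zeta|^2$ and $uv=w^2-\zeta^2$, whence
$$
\int_{\mathbb R^2}\nabla u\nabla v+V_\varepsilon(x)uv=\|w\|_{1,\varepsilon}^2-\|\zeta\|_{1,\varepsilon}^2=\frac{1}{2}\|z^+\|_\varepsilon^2-\frac{1}{2}\|z^-\|_\varepsilon^2,
$$
since $\|z^+\|_\varepsilon^2=2\|w\|_{1,\varepsilon}^2$ and $\|z^-\|_\varepsilon^2=2\|\zeta\|_{1,\varepsilon}^2$.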
\noindent For $z\in E_\varepsilon\setminus E_\varepsilon^-$, set $$ \hat{E}_\varepsilon(z)=E_\varepsilon^-\oplus\mathbb R^+z=E_\varepsilon^-\oplus\mathbb R^+z^+, $$ where $\mathbb R^+z:=\{tz: t\ge0\}$. From \cite{DJJ,Szulkin,Weth} we have the following properties of $\mathcal{N}_\varepsilon$, which will be used later.
\begin{lemma}\label{l5.1} Under the assumptions in Theorem \ref{Th1}, we have: \begin{itemize}
\item [1)] for any $z\in \mathcal{N}_\varepsilon$, $\Phi_\varepsilon|_{\hat{E}_\varepsilon(z)}$ admits a unique maximum point which occurs precisely at $z$;
\item [2)] for any $z\in E_\varepsilon\setminus E_\varepsilon^-$, the set $\hat{E}_\varepsilon(z)$ intersects $\mathcal{N}_\varepsilon$ at exactly one point $\hat{m}_\varepsilon(z)$, which is the unique global maximum point of $\Phi_\varepsilon|_{\hat{E}_\varepsilon(z)}$. \end{itemize} \end{lemma} \subsection{Lower and upper bounds for $c_\varepsilon$} \begin{proposition}\label{co51} There exists $c_0>0$ (independent of $\varepsilon$) such that for $\varepsilon>0$ sufficiently small, $$c_\varepsilon=\inf_{z\in E_\varepsilon\setminus E_\varepsilon^-}\max_{\omega\in\hat{E}_\varepsilon(z)}\Phi_\varepsilon(\omega)\in(c_0,4\pi/\alpha_0).$$ \end{proposition} \begin{proof} The min-max characterization is standard and we refer to \cite{DJJ}. Here we are concerned with estimating the critical level $c_\varepsilon$ from below and above.
\noindent {\it Lower bound.} On one hand, for any $z\in E_\varepsilon$, we know $\hat{E}_\varepsilon(z)=\hat{E}_\varepsilon(z^+)$. Then, for any $a>0$ \begin{align*} c_\varepsilon&=\inf_{z\in E_\varepsilon\setminus E_\varepsilon^-}\max_{\omega\in\hat{E}_\varepsilon(z)}\Phi_\varepsilon(\omega)=\inf_{z\in E_\varepsilon^+\setminus\{0\}}\max_{\omega\in\hat{E}_\varepsilon(z)}\Phi_\varepsilon(\omega)\\ &=\inf_{z\in S_{a,\varepsilon}^+}\max_{\omega\in\hat{E}_\varepsilon(z)}\Phi_\varepsilon(\omega)\ge\inf_{z\in S_{a,\varepsilon}^+}\max_{\omega\in\mathbb R^+ z}\Phi_\varepsilon(\omega), \end{align*} where $
S_{a,\varepsilon}^+:=\{z\in E_\varepsilon^+: \|z\|_\varepsilon=a\}. $ On the other hand, recalling that $f,g$ have critical growth with critical exponent $\alpha_0$, by $(H1)$, for some $\alpha'>\alpha_0$, there exists $C>0$ such that \begin{equation}\label{fg}
F(t)\le \frac{1}{4}V_0|t|^2+C |t|^3\left(e^{\alpha't^2}-1\right),\quad G(t)\le \frac{1}{4}V_0|t|^2+C |t|^3\left(e^{\alpha't^2}-1\right),\quad\mbox{for all}\ t\in\mathbb R. \end{equation} By the Pohozaev-Trudinger-Moser inequality, there exists $a>0$ sufficiently small such that $$ \int_{\mathbb R^2}\left(e^{2\alpha'u^2}-1\right)\le1, $$
for any $u\in H^1(\mathbb R^2)$ with $\|u\|_{H^1}\le a$. Then, for any $z=(u,u)\in S_{a,\varepsilon}^+$, {\allowdisplaybreaks \begin{align*}
&\max_{\omega\in\mathbb R^+ z}\Phi_\varepsilon(\omega)\ge\Phi_\varepsilon(z)=\int_{\mathbb R^2}|\nabla u|^2+V_\varepsilon(x)u^2-\int_{\mathbb R^2}F(u)+G(u)\\
&\ge\|u\|_{1,\varepsilon}^2-\frac{V_0}{2}\int_{\mathbb R^2} u^2-2C\int_{\mathbb R^2} |u|^3\left(e^{\alpha'u^2}-1\right)\\
&\ge C'\|u\|_{1,\varepsilon}^2-2C\left(\int_{\mathbb R^2} u^6\right)^{1/2}\ge\|u\|_{1,\varepsilon}^2(C'-2C C_6^3\|u\|_{1,\varepsilon}), \end{align*} }
where $C'=\min\{1,V_0\}/2$ and $C_6$ is the Sobolev's constant of the embedding $H^1(\mathbb R^2)\hookrightarrow L^6(\mathbb R^2)$. Thus, taking $a>0$ fixed but small enough, for any $z=(u,u)\in S_{a,\varepsilon}^+$, we have $\|u\|_{1,\varepsilon}^2=a^2/2$ and $$
\max_{\omega\in\mathbb R^+ z}\Phi_\varepsilon(\omega)\ge\|u\|_{1,\varepsilon}^2\left[C'-2CC_6^3\|u\|_{1,\varepsilon}\right]\ge a^2/6>0. $$ Thus, for any $\varepsilon>0$, $c_\varepsilon\ge c_0=a^2/6$.
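\noindent Let us spell out the intermediate estimate used above. By the Cauchy-Schwarz inequality and the elementary bound $(e^{s}-1)^2\le e^{2s}-1$ for $s\ge0$, whenever $\|u\|_{H^1}\le a$ one has
$$
\int_{\mathbb R^2}|u|^3\left(e^{\alpha'u^2}-1\right)\le\left(\int_{\mathbb R^2}u^6\right)^{1/2}\left(\int_{\mathbb R^2}\left(e^{2\alpha'u^2}-1\right)\right)^{1/2}\le\left(\int_{\mathbb R^2}u^6\right)^{1/2}\le C_6^3\|u\|_{1,\varepsilon}^3,
$$
with $C_6$ as above.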
\noindent {\it Upper bound.} By $(H5)$ and $V(0)=V_0$, we can fix $r>0$ and $\varepsilon_0>0$ such that \begin{equation} \label{ChoiceOfBeta}
\beta_0>\frac{4e^{\frac{r^2}{2}\max_{|x|\le\varepsilon r}V(x)}}{\alpha_0 r^2},\quad\varepsilon\in(0,\varepsilon_0). \end{equation} We consider the following so-called Moser sequence \begin{equation} \omega_k(x) = \frac{1}{\sqrt{2\pi}}\left\{ \begin{array}{ll} (\log k)^{1/2}, & \mid x \mid \leq r/k;\\ \frac{ \log \frac{r}{\mid x \mid} }{(\log k)^{1/2}}, & r/k \leq \mid x \mid \leq r; \\ 0, & \mid x \mid \geq r. \end{array} \right. \end{equation}
Then, one easily checks that $\|\nabla\omega_k\|_2=1$ and $\|\omega_k\|_2^2=r^2/(4\log{k})+o(r^2/\log{k})$. Let $d_k(r):= r^2/4 + o_k(1)$ where $o_k(1) \to 0$, as $k \to + \infty$ and $\tilde{\omega}_{k,\varepsilon}:=\omega_k/\|\omega_k\|_{1,\varepsilon}$, then $\|\tilde{\omega}_{k,\varepsilon}\|_{1,\varepsilon}=1$ and for $k$ large enough, \begin{equation} \label{EstOfwn}
\tilde{\omega}_{k,\varepsilon}^2(x)\geq \frac 1{2 \pi} \: \Bigl( \log k - \: d_{k,\varepsilon}(r) \Bigr) \quad \text{for } |x| \leq \frac r k, \end{equation}
where $d_{k,\varepsilon}(r)=d_k(r)\max_{|x|\le\varepsilon r}V(x)\ge V_0d_k(r)$.
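\noindent In fact, the normalization $\|\nabla\omega_k\|_2=1$ follows from a direct computation in polar coordinates: $|\nabla\omega_k(x)|=\frac{1}{\sqrt{2\pi\log k}}\frac{1}{|x|}$ for $r/k\le|x|\le r$ and $\nabla\omega_k=0$ elsewhere, so that
$$
\int_{\mathbb R^2}|\nabla\omega_k|^2=\int_{r/k}^{r}\frac{1}{2\pi\log k}\,\frac{1}{s^2}\,2\pi s\, \mathrm{d} s=\frac{1}{\log k}\int_{r/k}^{r}\frac{\mathrm{d} s}{s}=1.
$$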
\noindent Suppose by contradiction that for some fixed $\varepsilon\in(0,\varepsilon_0)$ and for all $ k $, $$ \sup_{z\in\hat{E}_\varepsilon((\tilde{\omega}_{k,\varepsilon},\tilde{\omega}_{k,\varepsilon}))}\Phi_\varepsilon(z) \geq 4\pi/\alpha_0. $$ Then $\Phi_\varepsilon(\hat{m}_\varepsilon((\tilde{\omega}_{k,\varepsilon},\tilde{\omega}_{k,\varepsilon})))\geq 4\pi/\alpha_0$ for all $ k $, where $\hat{m}_\varepsilon((\tilde{\omega}_{k,\varepsilon},\tilde{\omega}_{k,\varepsilon}))\in\mathcal{N}_\varepsilon$ and $$ \hat{m}_\varepsilon((\tilde{\omega}_{k,\varepsilon},\tilde{\omega}_{k,\varepsilon})) = \tau_k(\tilde{\omega}_{k,\varepsilon},\tilde{\omega}_{k,\varepsilon}) + (u_k,-u_k)\in\hat{E}_\varepsilon((\tilde{\omega}_{k,\varepsilon},\tilde{\omega}_{k,\varepsilon})). $$ Namely, \begin{equation}\label{Itatiaia}
\tau^2_{k} - \int_{\mathbb R^2}(|\nabla u_{k}| ^2+V_\varepsilon(x)u_k^2) -
\int_{\mathbb R^2} [F(\tau_{k}\tilde{\omega}_{k,\varepsilon} + u_{k})+G(\tau_{k}\tilde{\omega}_{k,\varepsilon}-u_{k})] \geq 4\pi/\alpha_0 \end{equation} and \begin{equation}\label{Itatuba} \tau^2_{k} -
\int_{\mathbb R^2}(|\nabla u_{k}| ^2+V_\varepsilon(x)u_k^2) =
\int_{\mathbb R^2} [f(\tau_{k}\tilde{\omega}_{k,\varepsilon} + u_{k})(\tau_{k}\tilde{\omega}_{k,\varepsilon} + u_{k})
+ g(\tau_{k}\tilde{\omega}_{k,\varepsilon} -u_{k})(\tau_{k}\tilde{\omega}_{k,\varepsilon} -u_{k})]. \end{equation}
\noindent {\it Claim:} $\lim_{k\rightarrow\infty}\tau_k^2=4 \pi / \alpha_0 $. Indeed, from (\ref{Itatiaia}), we get $\tau_{k}^2 \geq 4 \pi / \alpha_0 $. From $(H5)$, given $ \rho>0 $, there exists $ R_\rho $ such that \[ tf(t) \geq (\beta_0 - \rho) e^{\alpha_0 t^2} \mbox{ for all } t \geq R_\rho, \] and the same holds true for $tg(t)$.
Noting that $$\tau_k \tilde{\omega}_{k,\varepsilon}=\frac{\tau_k}{\|\omega_k\|_\varepsilon}\frac{\sqrt{\log{k}}}{\sqrt{2\pi}}\rightarrow+\infty,\ \ \mbox{as}\ \ k\rightarrow\infty,\ \ \mbox{for}\ \ x\in B_{r/k},$$ by choosing $k$ sufficiently large, we get $\max{\{\tau_{k}\tilde{\omega}_{k,\varepsilon} +u_{k}, \; \tau_{k}\tilde{\omega}_{k,\varepsilon} -u_{k}\}} \geq R_\rho $ for all $ x \in B_{r/k} $. Hence, by \re{EstOfwn}, \begin{align}\label{bound} \tau_k^2&\ge\int_{B_{r/k}} [f(\tau_{k}\tilde{\omega}_{k,\varepsilon} + u_{k})(\tau_{k}\tilde{\omega}_{k,\varepsilon} + u_{k})
+ g(\tau_{k}\tilde{\omega}_{k,\varepsilon} -u_{k})(\tau_{k}\tilde{\omega}_{k,\varepsilon} -u_{k})]\nonumber\\ &\geq (\beta_0- \rho) \int_{B_{r/k}}e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x\nonumber\\ &\ge\pi r^2(\beta_0 - \rho)\: e^{\frac{\alpha_0}{2\pi}\tau_k^2[\log k - d_{k,\varepsilon}(r)]-2\log{k}}, \end{align} which implies that $\{\tau_k\}$ is bounded. By \re{bound}, as a consequence of the boundedness of $\{\tau_k\}$, we know $\limsup_{k\rightarrow\infty}\tau_k^2\le4 \pi / \alpha_0$. In fact, if not we have $$ \limsup_{k\rightarrow\infty}e^{\frac{\alpha_0}{2\pi}\tau_k^2[\log k - d_{k,\varepsilon}(r)]-2\log{k}}=\infty, $$ which is a contradiction, and the claim is proved.
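\noindent For the reader's convenience, the last line of \re{bound} follows from \re{EstOfwn} together with the identity $|B_{r/k}|=\pi r^2/k^2=\pi r^2 e^{-2\log k}$:
$$
\int_{B_{r/k}}e^{\alpha_0(\tau_k\tilde{\omega}_{k,\varepsilon})^2}\, \mathrm{d} x\ge |B_{r/k}|\, e^{\frac{\alpha_0}{2\pi}\tau_k^2[\log k-d_{k,\varepsilon}(r)]}=\pi r^2\, e^{\frac{\alpha_0}{2\pi}\tau_k^2[\log k-d_{k,\varepsilon}(r)]-2\log k}.
$$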
\noindent As $\omega_k \to 0 $ a.e. in $\mathbb R^2$, by the Lebesgue dominated convergence theorem $$\int_{\{x\in B_r:\tau_k\tilde{\omega}_{k,\varepsilon} < R_\rho\}} \min\{f(\tau_k\tilde{\omega}_{k,\varepsilon}) \tau_k\tilde{\omega}_{k,\varepsilon}, g(\tau_k\tilde{\omega}_{k,\varepsilon}) \tau_k\tilde{\omega}_{k,\varepsilon}\}\, \mathrm{d} x \rg0,\ \ \mbox{as}\ \ k\rightarrow\infty$$ and $$\int_{\{x\in B_r:\tau_k\tilde{\omega}_{k,\varepsilon} < R_\rho\}} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x\rightarrow\pi r^2.$$ Then, from \eqref{Itatuba} and $(H4)$ we obtain \begin{align*} \tau_k^2 & \ge\int_{B_r} [f(\tau_k\tilde{\omega}_{k,\varepsilon}+u_k)(\tau_k\tilde{\omega}_{k,\varepsilon}+u_k)+g(\tau_k\tilde{\omega}_{k,\varepsilon}-u_k)(\tau_k\tilde{\omega}_{k,\varepsilon}-u_k)] \, \mathrm{d} x\\ & \geq (\beta_0- \rho) \int_{B_r} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x - (\beta_0- \rho) \int_{\{x\in B_r:\tau_k\tilde{\omega}_{k,\varepsilon} < R_\rho\}} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x\\ &\ \ \ \ +\int_{\{x\in B_r:\tau_k\tilde{\omega}_{k,\varepsilon} < R_\rho\}}\min\{f(\tau_k\tilde{\omega}_{k,\varepsilon}) \tau_k\tilde{\omega}_{k,\varepsilon}, g(\tau_k\tilde{\omega}_{k,\varepsilon}) \tau_k\tilde{\omega}_{k,\varepsilon}\}\, \mathrm{d} x\\ &=(\beta_0- \rho)\Big[\int_{B_r} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x-\pi r^2\Big]. \end{align*} In the following, we estimate the term $\int_{B_r} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x$. Observe first that from \eqref{EstOfwn} one has $$\int_{B_{r/k}} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x \geq \pi r^2 \: e^{\frac{\alpha_0}{2\pi}\tau_k^2[\log{k}- d_{k,\varepsilon}(r)]-2\log{k}}.$$ Noting that $\tau_k^2\ge4\pi/\alpha_0$ and $\tau_k^2\rg4\pi/\alpha_0$, we have
$$\liminf_{k\rightarrow\infty}\int_{B_{r/k}} e^{\alpha_0 (\tau_k\tilde{\omega}_{k,\varepsilon})^2}\, \mathrm{d} x \geq \pi r^2e^{-\max_{|x|\le\varepsilon r}V(x)r^2/2}.$$
Secondly, by using the change of variable $s=r e^{-\|\omega_k\|_\varepsilon \sqrt{\log k} \: t}$, one has \begin{equation*} \begin{split}
\int_{B_r \setminus B_{r/k}} e^{4 \pi (\tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x & = 2 \pi r^2 \|\omega_k\|_\varepsilon \sqrt{\log k} \: \int_{0}^{\frac{\sqrt{\log k}}{\|\omega_k\|_\varepsilon}} \: e^{2(\:t^2 - \|\omega_k\|_\varepsilon \sqrt{\log k} \: t \:)} \, \mathrm{d} t \\
& \geq 2 \pi r^2 \|\omega_k\|_\varepsilon \sqrt{\log k} \: \int_{0}^{\frac{\sqrt{\log k}}{\|\omega_k\|_\varepsilon}} \: e^{-2\|\omega_k\|_\varepsilon \sqrt{\log k} \: t} \, \mathrm{d} t \\ & = \pi r^2 \bigl( 1 - e^{-2 \log k}\bigr). \end{split} \end{equation*}
Thus $$\liminf_{k\rightarrow\infty}\int_{B_r} e^{\alpha_0(\tau_k \tilde{\omega}_{k,\varepsilon})^2} \, \mathrm{d} x \geq \pi r^2(e^{-\max_{|x|\le\varepsilon r}V(x)r^2/2}+1),$$ which implies
$$4\pi/\alpha_0= \lim_{k \to + \infty} \tau_k^2 \geq (\beta_0 - \rho) \pi r^2e^{-\max_{|x|\le\varepsilon r}V(x)r^2/2}.$$ As $\rho$ is arbitrary, we have
$$\beta_0 \leq \frac{4e^{\frac{r^2}{2}\max_{|x|\le\varepsilon r}V(x)}}{\alpha_0 r^2},$$ which contradicts \eqref{ChoiceOfBeta}. Therefore, $c_\varepsilon<4\pi/\alpha_0$ for $\varepsilon\in(0,\varepsilon_0)$. \end{proof}
\subsection{Existence of solutions to system \re{q51}}
\noindent Let us define $$ \hat{m}_\varepsilon: z\in E_\varepsilon\setminus E_\varepsilon^-\mapsto\hat{m}_\varepsilon(z)\in\hat{E}_\varepsilon(z)\cap\mathcal{N}_\varepsilon. $$ \begin{lemma}\label{l5.4}
There exists $\delta>0$ (independent of $\varepsilon$) such that $\|z^+\|_\varepsilon\ge\delta$ for all $z\in\mathcal{N}_\varepsilon$. In particular, $$
\|\hat{m}_\varepsilon(z)^+\|_\varepsilon\ge\delta\ \ \ \mbox{for all}\ \ z\in E_\varepsilon\setminus E_\varepsilon^-. $$ Moreover, for each compact subset $\mathcal{W}\subset E_\varepsilon\setminus E_\varepsilon^-$, there exists a constant $C_{\mathcal{W},\varepsilon}>0$ such that $$
\|\hat{m}_\varepsilon(z)\|_\varepsilon\le C_{\mathcal{W},\varepsilon}\ \ \ \mbox{for all}\ \ z\in\mathcal{W}. $$ \end{lemma} \noindent Let $$
S_\varepsilon^+:=\{z\in E_\varepsilon^+: \|z\|_\varepsilon=1\}, $$ then $S_\varepsilon^+$ is a $C^1$-submanifold of $E_\varepsilon^+$ and the tangent manifold of $S_\varepsilon^+$ at $z\in S_\varepsilon^+$ is $$ T_z(S_\varepsilon^+)=\{\omega\in E_\varepsilon^+: (\omega,z)_\varepsilon=0\}. $$ Let $$
m_\varepsilon:=\hat{m}_\varepsilon|_{S_\varepsilon^+}: S_\varepsilon^+\longrightarrow \mathcal{N}_\varepsilon, $$ then by Lemma \ref{l5.4}, $\hat{m}_\varepsilon$ is continuous and $m_\varepsilon$ is a homeomorphism between $S_\varepsilon^+$ and $\mathcal{N}_\varepsilon$. Define $$ \Psi_\varepsilon: S_\varepsilon^+\longrightarrow\mathbb R,\quad \Psi_\varepsilon(z):=\Phi_\varepsilon(m_\varepsilon(z)),\: z\in S_\varepsilon^+, $$ then, as a consequence of \cite[Corollary 4.3]{Weth}, for any fixed $\varepsilon>0$, we have the following \begin{proposition}\label{p5.5}\noindent \begin{itemize}
\item [1)] $\Psi_\varepsilon\in C^1(S_\varepsilon^+,\mathbb R)$ and $$
\langle\Psi_\varepsilon'(z),\omega\rangle_\varepsilon=\|m_\varepsilon(z)^+\|_\varepsilon\langle\Phi_\varepsilon'(m_\varepsilon(z)),\omega\rangle_\varepsilon\ \ \mbox{for all}\ \ \omega\in T_z(S_\varepsilon^+); $$
\item [2)] If $\{\omega_n\}\subset S_\varepsilon^+$ is a Palais-Smale sequence for $\Psi_\varepsilon$, then $\{m_\varepsilon(\omega_n)\}\subset \mathcal{N}_\varepsilon$ is a Palais-Smale sequence for $\Phi_\varepsilon$. Namely, if $\Psi_\varepsilon(\omega_n)\rightarrow d$ for some $d>0$ and $\|\Psi_\varepsilon'(\omega_n)\|_\ast\rightarrow 0$, as $n\rightarrow\infty$, then $\Phi_\varepsilon(m_\varepsilon(\omega_n))\rightarrow d$ and $\|\Phi_\varepsilon'(m_\varepsilon(\omega_n))\|\rg0$, as $n\rightarrow\infty$, where
$$
\|\Psi_\varepsilon'(\omega_n)\|_\ast=\sup_{\stackrel{\phi\in T_{\omega_n}(S_\varepsilon^+)}{\|\phi\|_\varepsilon=1}}\langle\Psi_\varepsilon'(\omega_n),\phi\rangle_\varepsilon\ \ \mbox{and}\ \ \|\Phi_\varepsilon'(m_\varepsilon(\omega_n))\|=\sup_{\stackrel{\phi\in E_\varepsilon}{\|\phi\|_\varepsilon=1}}\langle\Phi_\varepsilon'(m_\varepsilon(\omega_n)),\phi\rangle_\varepsilon;
$$
\item [3)] $\omega\in S_\varepsilon^+$ is a critical point of $\Psi_\varepsilon$ if and only if $m_\varepsilon(\omega)\in \mathcal{N}_\varepsilon$ is a critical point of $\Phi_\varepsilon$;
\item [4)] $\inf_{S_\varepsilon^+}\Psi_\varepsilon=\inf_{\mathcal{N}_\varepsilon}\Phi_\varepsilon$. \end{itemize} \end{proposition} \noindent Since $S_\varepsilon^+$ is a regular $C^1$-submanifold of $E_\varepsilon^+$, by Proposition \ref{co51} and Proposition \ref{p5.5}, it follows from the Ekeland variational principle (see \cite[Theorem 3.1]{E}) that there exists $\{\omega_n\}\subset S_\varepsilon^+$ such that $$
\Psi_\varepsilon(\omega_n)\rightarrow c_\varepsilon>0 \ \ \mbox{and}\ \ \|\Psi_\varepsilon'(\omega_n)\|_\ast\rightarrow 0,\ \ \mbox{as}\ \ n\rightarrow\infty. $$ Let $z_n=m_\varepsilon(\omega_n)\in\mathcal{N}_\varepsilon$, then \begin{equation}
\Phi_\varepsilon(z_n)\rightarrow c_\varepsilon>0 \ \ \mbox{and}\ \ \|\Phi_\varepsilon'(z_n)\|\rightarrow 0,\ \ \mbox{as}\ \ n\rightarrow\infty. \end{equation} As in \cite{DJJ}, one has the following two propositions: \begin{proposition}\label{o54} There exists $C>0$ (independent of $\varepsilon$) such that for all $\varepsilon>0$ and $n\in \mathbb{N}$: \begin{itemize}
\item [1)] $\|z_n\|_\varepsilon=\|(u_n,v_n)\|_\varepsilon\le C(1+c_\varepsilon)$;
\item [2)] $\int_{\mathbb R^2}f(u_n)u_n\, \mathrm{d} x\le C(1+c_\varepsilon)$ and $\int_{\mathbb R^2}g(v_n)v_n\, \mathrm{d} x\le C(1+c_\varepsilon)$ ;
\item [3)] $\int_{\mathbb R^2}F(u_n)\, \mathrm{d} x\le C(1+c_\varepsilon)$ and $\int_{\mathbb R^2}G(v_n)\, \mathrm{d} x\le C(1+c_\varepsilon)$. \end{itemize} \end{proposition} \noindent Up to a subsequence, there exists $z_\varepsilon=(u_\varepsilon,v_\varepsilon)\in E_\varepsilon$ such that $z_n\rightharpoonup z_\varepsilon$ in $E_\varepsilon$ and $z_n\rightarrow z_\varepsilon$ a.e. in $\mathbb R^2$, as $n\rightarrow\infty$; the limit $z_\varepsilon$ is actually a weak solution to \re{q51}. Precisely, we have \begin{proposition}\label{o51} The weak limit $z_\varepsilon$ is a critical point of $\Phi_\varepsilon$. \end{proposition}
\subsection{Asymptotic behavior of $c_\varepsilon$}
By Proposition \ref{o51}, it suffices to show $z_\varepsilon\not\equiv0$. For this purpose, in the following, we investigate the relation between $c_\ast$ and $c_\varepsilon$, where $c_\ast$ and $c_\varepsilon$ are the least energies of systems \re{q11} and \re{q51}, respectively. \begin{lemma}\label{l5.9} With the assumptions of Theorem \ref{Th1}, we have $$\limsup_{\varepsilon\rg0}c_\varepsilon\le c_\ast.$$ \end{lemma} \begin{proof} By Theorem \ref{a}, there exists $z=(u,v)\in\mathcal{N}$ such that $$c_\ast=\max_{\omega\in\hat{E}(z)}\Phi(\omega)=\max_{\omega\in\hat{E}(z^+)}\Phi(\omega).$$ Noting that $z\in E\setminus E^-$, we know that for any $\varepsilon>0$, $z\in E_\varepsilon\setminus E_\varepsilon^-$. Then, by Lemma \ref{l5.1}, for any $\varepsilon>0$, \begin{align*} c_\varepsilon\le\max_{\omega\in\hat{E}_\varepsilon(z)}\Phi_\varepsilon(\omega)=\Phi_\varepsilon(\hat{m}_\varepsilon(z)). \end{align*}
Recalling that $\hat{m}_\varepsilon(z)\in\hat{E}_\varepsilon(z)\cap\mathcal{N}_\varepsilon$, there exist $s_\varepsilon\ge0$, $t_\varepsilon\in\mathbb R$ and $\varphi_\varepsilon\in H_\varepsilon$, $\|\varphi_\varepsilon\|_\varepsilon=1$ such that $\hat{m}_\varepsilon(z)=s_\varepsilon z+t_\varepsilon(\varphi_\varepsilon,-\varphi_\varepsilon)$.\vskip0.1in {\bf Step 1.} We borrow some ideas from \cite{Ramos1} to prove that $t_\varepsilon,s_\varepsilon$ are bounded for $\varepsilon>0$ sufficiently small. We proceed by contradiction and distinguish between two cases.
\noindent {\it Case I.} Both $s_\varepsilon,t_\varepsilon$ are unbounded for $\varepsilon$ small. If $|t_\varepsilon|/s_\varepsilon\rightarrow\infty$, as $\varepsilon\rg0$, then \begin{align*} c_\varepsilon&\le\Phi_\varepsilon(s_\varepsilon z+t_\varepsilon(\varphi_\varepsilon,-\varphi_\varepsilon))\\
&=s_\varepsilon^2\|z\|_\varepsilon^2-t_\varepsilon^2+t_\varepsilon s_\varepsilon O(1)-\int_{\mathbb R^2}F(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)+G(s_\varepsilon v-t_\varepsilon\varphi_\varepsilon)\\
&\le s_\varepsilon^2\|z\|_\varepsilon^2-t_\varepsilon^2+t_\varepsilon s_\varepsilon O(1)=t_\varepsilon^2(o(1)-1)\rightarrow-\infty, \end{align*}
which contradicts the fact that $c_\varepsilon\ge c_0>0$. If $|t_\varepsilon|/s_\varepsilon\rg0$, as $\varepsilon\rg0$, then \begin{align*}
c_\varepsilon&\le s_\varepsilon^2\|z\|_\varepsilon^2-t_\varepsilon^2+t_\varepsilon s_\varepsilon O(1)-\int_{\mathbb R^2}F(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)+G(s_\varepsilon v-t_\varepsilon\varphi_\varepsilon)\\ &=s_\varepsilon^3\left(o(1)-\int_{\mathbb R^2}\frac{F(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)}{s_\varepsilon^3}+\frac{G(s_\varepsilon v-t_\varepsilon\varphi_\varepsilon)}{s_\varepsilon^3}\right). \end{align*} Since $c_\varepsilon\ge c_0>0$, as $\varepsilon\rg0$ we have $$ \int_{\mathbb R^2}\frac{F(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)}{s_\varepsilon^3}\rg0,\,\,\frac{G(s_\varepsilon v-t_\varepsilon\varphi_\varepsilon)}{s_\varepsilon^3}\rg0. $$
Recalling that $f$ has Moser critical growth at infinity, there exists $C>0$ such that $|F(t)|\ge C|t|^3$ for $|t|\ge1$. Let $A_\varepsilon:=\{x\in\mathbb R^2: |s_\varepsilon u(x)+t_\varepsilon\varphi_\varepsilon(x)|\ge1\}$, then $$
\int_{A_\varepsilon}\frac{F(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)}{s_\varepsilon^3}\ge C\int_{A_\varepsilon}\left|u(x)+\frac{t_\varepsilon}{s_\varepsilon}\varphi_\varepsilon(x)\right|^3, $$
where the left hand side vanishes as $\varepsilon\rg0$, which yields $\lim_{\varepsilon\rg0}\int_{A_\varepsilon}|u(x)|^3=0$. At the same time, \begin{align*}
&\int_{\mathbb R^2\setminus A_\varepsilon}|u(x)|^3\le\int_{\mathbb R^2\setminus A_\varepsilon}u^2(x)\left(\frac{1}{s_\varepsilon}+\frac{|t_\varepsilon|}{s_\varepsilon}|\varphi_\varepsilon|\right)\\
&\le\frac{1}{s_\varepsilon}\int_{\mathbb R^2}u^2(x)+\frac{|t_\varepsilon|}{s_\varepsilon}\left(\int_{\mathbb R^2}u^4(x)\right)^{1/2}\left(\int_{\mathbb R^2}\varphi^2_\varepsilon(x)\right)^{1/2}\rg0,\hbox{ as $\varepsilon\rg0$.} \end{align*}
Hence $\int_{\mathbb R^2}|u|^3=0$ and in turn $u\equiv0$. Similarly, $v\equiv0$. Thus $c_\ast=0$, which is a contradiction. If $|t_\varepsilon|/s_\varepsilon\rightarrow l>0$, as $\varepsilon\rg0$, then following the same lines as above,
$$\int_{A_\varepsilon}\left|u(x)+\frac{t_\varepsilon}{s_\varepsilon}\varphi_\varepsilon(x)\right|^3\rg0.$$ Moreover, \begin{align*}
\int_{\mathbb R^2\setminus A_\varepsilon}\left|u(x)+\frac{t_\varepsilon}{s_\varepsilon}\varphi_\varepsilon(x)\right|^3\le\frac{1}{s_\varepsilon}\int_{\mathbb R^2\setminus A_\varepsilon}\left|u(x)+\frac{t_\varepsilon}{s_\varepsilon}\varphi_\varepsilon(x)\right|^2\rg0,\hbox{ as $\varepsilon\rg0$.} \end{align*} Then $$
\int_{\mathbb R^2}\left|u(x)+\frac{t_\varepsilon}{s_\varepsilon}\varphi_\varepsilon(x)\right|^3\rg0,\,\,\varepsilon\rg0 $$ and analogously $$
\int_{\mathbb R^2}\left|v(x)-\frac{t_\varepsilon}{s_\varepsilon}\varphi_\varepsilon(x)\right|^3\rg0,\,\,\varepsilon\rg0. $$
So we get $\int_{\mathbb R^2}|u+v|^3=0$, that is $u=-v$. This implies $z=(u,v)\in E^-$ which contradicts the fact $z\in\mathcal{N}$.
\noindent {\it Case II.} Exactly one of $s_\varepsilon$ and $t_\varepsilon$ stays bounded for $\varepsilon$ small. If $|t_\varepsilon|/s_\varepsilon\rightarrow\infty$, as $\varepsilon\rg0$, then $|t_\varepsilon|\rightarrow\infty$, as $\varepsilon\rg0$, and as above one has \begin{align*}
c_\varepsilon\le s_\varepsilon^2\|z\|_\varepsilon^2-t_\varepsilon^2+t_\varepsilon s_\varepsilon O(1)=t_\varepsilon^2(O(1)-1)\rightarrow-\infty, \end{align*}
which contradicts the fact $c_\varepsilon\ge c_0>0$. If $|t_\varepsilon|/s_\varepsilon$ is bounded for $\varepsilon$ small, then $s_\varepsilon\rightarrow\infty$ and $|t_\varepsilon|/s_\varepsilon\rg0$, as $\varepsilon\rg0$. Reasoning as in {\it Case I}, we get $u=v=0$ and $c_\ast=0$, which is again a contradiction. \vskip0.1in {\bf Step 2.} Recall that \begin{align*} c_\varepsilon\le\max_{\omega\in\hat{E}_\varepsilon(z)}\Phi_\varepsilon(\omega)=\Phi_\varepsilon(\hat{m}_\varepsilon(z)) \end{align*} where $\hat{m}_\varepsilon(z)=s_\varepsilon z+t_\varepsilon(\varphi_\varepsilon,-\varphi_\varepsilon)$. Then \begin{align*} c_\varepsilon\le&\Phi_\varepsilon(s_\varepsilon z+t_\varepsilon(\varphi_\varepsilon,-\varphi_\varepsilon))=\Phi(s_\varepsilon z+t_\varepsilon(\varphi_\varepsilon,-\varphi_\varepsilon))\\ &+\int_{\mathbb R^2}(V_\varepsilon(x)-1)(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)(s_\varepsilon v-t_\varepsilon\varphi_\varepsilon)\\ \le&\max_{\omega\in\hat{E}(z)}\Phi(\omega)+I_\varepsilon=c_\ast+I_\varepsilon, \end{align*} where $I_\varepsilon:=\int_{\mathbb R^2}(V_\varepsilon(x)-1)(s_\varepsilon u+t_\varepsilon\varphi_\varepsilon)(s_\varepsilon v-t_\varepsilon\varphi_\varepsilon)$. Since $0\in\mathcal{M}$, by Lebesgue's dominated convergence theorem and Step 1, we have \begin{align*} I_\varepsilon&=\int_{\mathbb R^2}(V_\varepsilon(x)-1)[s_\varepsilon^2uv-t_\varepsilon^2\varphi_\varepsilon^2+t_\varepsilon s_\varepsilon (v-u)\varphi_\varepsilon]\\ &\le\int_{\mathbb R^2}(V_\varepsilon(x)-1)[s_\varepsilon^2uv+t_\varepsilon s_\varepsilon (v-u)\varphi_\varepsilon]\\
&\le s_\varepsilon^2\int_{\mathbb R^2}(V_\varepsilon(x)-1)uv+|t_\varepsilon s_\varepsilon|\left(\int_{\mathbb R^2}|V_\varepsilon(x)-1|^2(v-u)^2\right)^{1/2}\left(\int_{\mathbb R^2}\varphi_\varepsilon^2\right)^{1/2}\rg0\,\,\hbox{ as $\varepsilon\rg0$.} \end{align*} Therefore, $\limsup_{\varepsilon\rg0}c_\varepsilon\le c_\ast$. \end{proof} \subsection{Existence of ground state solutions for \re{q51}}
For any $\lambda>0$, let us consider the following problem in $\mathbb{R}^2$ \begin{equation}\label{q110} \left\{ \begin{array}{ll} -\Delta u+\lambda u=g(v)\\ -\Delta v+\lambda v=f(u) \end{array} \right. \end{equation} whose corresponding energy functional is $$ \Phi_\lambda(z):=\int_{\mathbb R^2}\nabla u\nabla v+\lambda uv-I(z),\ \ z=(u,v)\in E. $$ As above one can define the generalized Nehari Manifold $\mathcal{N}_\lambda$ and the least energy $$ c_\lambda:=\inf_{z\in\mathcal{N}_\lambda}\Phi_\lambda(z). $$ Moreover, under the assumptions of Theorem \ref{Th2}, if $c_\lambda\in(0,4\pi/\alpha_0)$ for some $\lambda>0$, then there exists $z_\lambda=(u_\lambda,v_\lambda)\in\mathcal{N}_\lambda$ such that $\Phi_\lambda(z_\lambda)=c_\lambda$. \begin{lemma}\label{bj1} Under the assumptions of Theorem \ref{Th2}, the map $\lambda\mapsto c_\lambda\in(0,4\pi/\alpha_0)$ is strictly increasing in $\lambda>0$. \end{lemma} \begin{proof} For any $\lambda>0$ with $c_\lambda\in(0,4\pi/\alpha_0)$, let $z_\lambda=(u_\lambda,v_\lambda)$ be a solution of \re{q110}; then $\tilde{z}_\lambda=(\tilde{u}_\lambda,\tilde{v}_\lambda)=(u_\lambda(\cdot/\sqrt{\lambda}),v_\lambda(\cdot/\sqrt{\lambda}))$ satisfies in the whole plane the following system \begin{equation}\label{q111} \left\{ \begin{array}{ll} -\Delta \tilde{u}_\lambda+\tilde{u}_\lambda=\lambda^{-1}g(\tilde{v}_\lambda)\\ -\Delta \tilde{v}_\lambda+\tilde{v}_\lambda=\lambda^{-1}f(\tilde{u}_\lambda) \end{array} \right. \end{equation} whose corresponding energy functional is $$ \tilde{\Phi}_\lambda(\tilde{z}_\lambda):=\int_{\mathbb R^2}\nabla\tilde{u}_\lambda\nabla\tilde{v}_\lambda+\tilde{u}_\lambda\tilde{v}_\lambda-\lambda^{-1}I(\tilde{z}_\lambda). $$ As above, we can define the generalized Nehari Manifold $\mathcal{\tilde{N}}_\lambda$ and the least energy $$ \tilde{c}_\lambda:=\inf_{z\in\mathcal{\tilde{N}}_\lambda}\tilde{\Phi}_\lambda(z). $$ We have $c_\lambda=\tilde{c}_\lambda\in(0,4\pi/\alpha_0)$. 
Then \re{q111} admits a ground state solution $\tilde{z}_\lambda=(\tilde{u}_\lambda,\tilde{v}_\lambda)$. Moreover, $$ \tilde{c}_\lambda:=\inf_{z\in E\setminus E^-}\max_{\omega\in\hat{E}(z)}\tilde{\Phi}_\lambda(\omega)=\max_{\omega\in\hat{E}(\tilde{z}_\lambda)}\tilde{\Phi}_\lambda(\omega). $$
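For the reader's convenience, the change of variables behind \re{q111} can be checked directly: writing $\tilde{u}_\lambda(x)=u_\lambda(x/\sqrt{\lambda})$, we have $\Delta\tilde{u}_\lambda(x)=\lambda^{-1}(\Delta u_\lambda)(x/\sqrt{\lambda})$, so that \begin{align*} -\Delta\tilde{u}_\lambda(x)+\tilde{u}_\lambda(x)=\lambda^{-1}\big(-\Delta u_\lambda+\lambda u_\lambda\big)(x/\sqrt{\lambda})=\lambda^{-1}g(v_\lambda(x/\sqrt{\lambda}))=\lambda^{-1}g(\tilde{v}_\lambda(x)), \end{align*} and analogously for the second equation.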
To show that $c_\lambda$ is strictly increasing, it is enough to prove that $\tilde{c}_\lambda$ is strictly increasing. For any $0<\mu<\lambda$, the set $\hat{E}(\tilde{z}_\lambda)$ intersects $\mathcal{\tilde{N}}_\mu$ at exactly one point $\hat{m}_\mu(\tilde{z}_\lambda)$, which is the unique global maximum point of $\tilde{\Phi}_\mu|_{\hat{E}(\tilde{z}_\lambda)}$. Since $F(s), G(s)>0$ for any $s\not=0$, \begin{align*} \tilde{c}_\mu&\le\max_{\omega\in\hat{E}(\tilde{z}_\lambda)}\tilde{\Phi}_\mu(\omega)=\tilde{\Phi}_\mu(\hat{m}_\mu(\tilde{z}_\lambda))\\ &<\tilde{\Phi}_\lambda(\hat{m}_\mu(\tilde{z}_\lambda))\le\max_{\omega\in\hat{E}(\tilde{z}_\lambda)}\tilde{\Phi}_\lambda(\omega)=\tilde{c}_\lambda. \end{align*} Therefore, $c_\mu<c_\lambda$. \end{proof} \noindent We are now in a position to prove that the weak limit obtained in Proposition \ref{o51} is nontrivial; precisely, \begin{lemma}\label{bj2} $z_\varepsilon\not\equiv0$ provided $\varepsilon>0$ is sufficiently small. \end{lemma} \begin{proof} Assume by contradiction that $z_\varepsilon=0$ for $\varepsilon>0$ small; then $z_n=(u_n,v_n)\rightharpoonup 0$ in $E_\varepsilon$ and $z_n\xrightarrow{a.e.}0$ in $\mathbb R^2$, as $n\rightarrow\infty$. It is well known that $\{z_n\}$ satisfies exactly one of the following alternatives: \begin{itemize}
\item [1)] (Vanishing) $$ \lim_{n\rightarrow\infty}\sup_{y\in\mathbb R^2}\int_{B_R(y)}(u_n^2+v_n^2)\, \mathrm{d} x=0\ \ \mbox{for all}\ \ R>0; $$
\item [2)] (Nonvanishing) there exist $\nu>0$, $R_0>0$ and $\{y_n\}\subset\mathbb R^2$ such that $$ \lim_{n\rightarrow\infty}\int_{B_{R_0}(y_n)}(u_n^2+v_n^2)\, \mathrm{d} x\ge\nu. $$ \end{itemize}
Since $c_\varepsilon\in(c_0,4\pi/\alpha_0)$, we can rule out {\it Vanishing}, so that {\it Nonvanishing} occurs. Let $\tilde{u}_n(\cdot):=u_n(\cdot+y_n)$ and $\tilde{v}_n(\cdot):=v_n(\cdot+y_n)$, then $|y_n|\rightarrow\infty$ as $n\rightarrow\infty$ and \begin{equation}\label{y522} \lim_{n\rightarrow\infty}\int_{B_{R_0}(0)}(\tilde{u}_n^2+\tilde{v}_n^2)\, \mathrm{d} x\ge\nu. \end{equation} Let $\tilde{z}_n=(\tilde{u}_n,\tilde{v}_n)$; then $\{\tilde{z}_n\}$ is bounded in $E$. Up to a subsequence, by \re{y522} we assume that $\tilde{z}_n\rightarrow \tilde{z}\not=0$ weakly in $E$ for some $\tilde{z}=(\tilde{u},\tilde{v})\in E$ and $\Phi_{V_\infty}'(\tilde{z})=0$, where $$ \Phi_{V_\infty}(z)=\int_{\mathbb R^2}\nabla u\nabla v+V_\infty uv-I(z),\,\, z=(u,v)\in E. $$ By $(H2)$ and Fatou's Lemma, for fixed $\varepsilon>0$, \begin{align*} c_\varepsilon+o_n(1)&=\Phi_\varepsilon(\tilde{z}_n)-\frac{1}{2}\langle \Phi_\varepsilon'(\tilde{z}_n),\tilde{z}_n\rangle\\ &=\int_{\mathbb R^2}\frac{1}{2}f(\tilde{u}_n)\tilde{u}_n-F(\tilde{u}_n)+\int_{\mathbb R^2}\frac{1}{2}g(\tilde{v}_n)\tilde{v}_n-G(\tilde{v}_n)\\ &\ge\int_{\mathbb R^2}\frac{1}{2}f(\tilde{u})\tilde{u}-F(\tilde{u})+\int_{\mathbb R^2}\frac{1}{2}g(\tilde{v})\tilde{v}-G(\tilde{v})+o_n(1)\\ &=\Phi_{V_\infty}(\tilde{z})-\frac{1}{2}\langle \Phi_{V_\infty}'(\tilde{z}),\tilde{z}\rangle+o_n(1)\ge c_{V_\infty}+o_n(1). \end{align*} It follows that $c_\varepsilon\ge c_{V_\infty}$ for $\varepsilon>0$ small enough. By Lemma \ref{l5.9} and Lemma \ref{bj1}, we get $c_{V_\infty}>c_\ast$. Again by Lemma \ref{l5.9} we get a contradiction. \end{proof} \noindent By virtue of Lemma \ref{bj2} we immediately obtain the following \begin{corollary} For $\varepsilon>0$ small enough, $\Phi_\varepsilon(z_\varepsilon)=c_\varepsilon$, namely $z_\varepsilon$ is a ground state solution of \re{q51}. \end{corollary} \subsection{Concentration} Reasoning as in Proposition \ref{bo1} we have \begin{proposition}\label{boc1}
Let $\varepsilon>0$ and $z_\varepsilon=(u_\varepsilon,v_\varepsilon)$ be a ground state solution to \re{q51}. Then, $u_\varepsilon, v_\varepsilon\in L^{\infty}(\mathbb R^2)\cap C_{loc}^{1,\gamma}(\mathbb R^2)$ for some $\gamma\in(0,1)$. {Moreover, $u_\varepsilon(x), v_\varepsilon(x)\rightarrow 0$, as $|x|\rightarrow\infty$.} \end{proposition}
\noindent By Proposition \ref{boc1}, there exists $y_\varepsilon\in\mathbb R^2$ such that $$|u_\varepsilon(y_\varepsilon)|+|v_\varepsilon(y_\varepsilon)|=\max_{x\in\mathbb R^2}(|u_\varepsilon(x)|+|v_\varepsilon(x)|).$$ Moreover, $x_\varepsilon:=\varepsilon y_\varepsilon$ is a maximum point of $|\varphi_\varepsilon(x)|+|\psi_\varepsilon(x)|$, where $(\varphi_\varepsilon(\cdot),\psi_\varepsilon(\cdot))=(u_\varepsilon(\cdot/\varepsilon),v_\varepsilon(\cdot/\varepsilon))$ is a ground state solution of the original problem \re{q1}. We conclude the proof of Theorem \ref{Th1} by proving Propositions \ref{boc2}, \ref{boc3} and \ref{boc4} below. {\begin{proposition}\label{boc2}\ \begin{itemize} \item [1)] $\lim_{\varepsilon\rightarrow 0}\mbox{dist}(x_\varepsilon,\mathcal{M})=0$; \item [2)] $(u_\varepsilon(\cdot+x_\varepsilon/\varepsilon), v_\varepsilon(\cdot+x_\varepsilon/\varepsilon))$ converges (up to a subsequence) to a ground state solution of \begin{align}\label{luv} \left\{ \begin{array}{ll} -\Delta u+V_0u=g(v)&\\ &\text{ in } \mathbb{R}^2\\ -\Delta v+V_0v=f(u) \end{array} \right. \end{align}
\item [3)] $u_\varepsilon(x+x_\varepsilon/\varepsilon), v_\varepsilon(x+x_\varepsilon/\varepsilon)\rightarrow 0$, uniformly as $|x|\rightarrow\infty$, for $\varepsilon>0$ sufficiently small. \end{itemize} \end{proposition}} \begin{proof}
\noindent By virtue of Proposition \ref{o54} and Fatou's Lemma, there exists $C>0$ (independent of $\varepsilon$) such that $\|(u_\varepsilon,v_\varepsilon)\|_\varepsilon\le C$ for all $\varepsilon\in(0,\varepsilon_0)$. Up to a subsequence, we may assume $z_\varepsilon=(u_\varepsilon, v_\varepsilon)\rightharpoonup z_0=(u_0, v_0)$ in $E$ and $(u_\varepsilon, v_\varepsilon)\xrightarrow{a.e.}(u_0, v_0)$ in $\mathbb R^2$, as $\varepsilon\rg0$. Due to $c_\varepsilon\in(c_0,4\pi/\alpha_0)$ for $\varepsilon>0$ sufficiently small, as in Lemma \ref{bj2}, we have $u_0\not\equiv0, v_0\not\equiv0$. Moreover, $\Phi'(z_0)=0$. By $(H2)$ and Fatou's Lemma, \begin{align*} c_\varepsilon&=\Phi_\varepsilon(z_\varepsilon)-\frac{1}{2}\langle \Phi_\varepsilon'(z_\varepsilon),z_\varepsilon\rangle\\ &=\int_{\mathbb R^2}\frac{1}{2}f(u_\varepsilon)u_\varepsilon-F(u_\varepsilon)+\int_{\mathbb R^2}\frac{1}{2}g(v_\varepsilon)v_\varepsilon-G(v_\varepsilon)\\ &\ge\int_{\mathbb R^2}\frac{1}{2}f(u_0)u_0-F(u_0)+\int_{\mathbb R^2}\frac{1}{2}g(v_0)v_0-G(v_0)+o_\varepsilon(1)\\ &=\Phi(z_0)-\frac{1}{2}\langle \Phi'(z_0),z_0\rangle+o_\varepsilon(1)\ge c_\ast+o_\varepsilon(1). \end{align*} Thanks to Lemma \ref{l5.9}, $\Phi(z_0)=c_\ast$, namely $(u_0, v_0)$ is a ground state solution of \re{luv}. Thanks to Fatou's Lemma again, $$ \lim_{\varepsilon\rg0}\int_{\mathbb R^2}\frac{1}{2}f(u_\varepsilon)u_\varepsilon-F(u_\varepsilon)=\int_{\mathbb R^2}\frac{1}{2}f(u_0)u_0-F(u_0) $$ and $$ \lim_{\varepsilon\rg0}\int_{\mathbb R^2}\frac{1}{2}g(v_\varepsilon)v_\varepsilon-G(v_\varepsilon)=\int_{\mathbb R^2}\frac{1}{2}g(v_0)v_0-G(v_0). $$
Repeating the argument in Proposition \ref{con}, we get $\|u_\varepsilon\|_\varepsilon\rightarrow\|u_0\|_{H^1}$ and $\|v_\varepsilon\|_\varepsilon\rightarrow\|v_0\|_{H^1}$, as $\varepsilon\rg0$. This implies $(u_\varepsilon,v_\varepsilon)\rightarrow (u_0,v_0)$ strongly in $E$ as $\varepsilon\rg0$. Then, as in Proposition \ref{bo2} and \ref{pro_apriori}, $\{\|u_\varepsilon\|_\infty,\|v_\varepsilon\|_\infty\}$ is uniformly bounded for $\varepsilon>0$ small and $$\liminf_{\varepsilon\rg0}\min\{\|u_\varepsilon\|_\infty,\|v_\varepsilon\|_\infty\}>0.$$ As in Proposition \ref{boo4}, there exists $R_2>0$ such that $$ \lim_{\varepsilon\rg0}\int_{B_{R_2}(x_\varepsilon/\varepsilon)}(u_\varepsilon^2+v_\varepsilon^2)\, \mathrm{d} x>0. $$
Now, we claim that $\{x_\varepsilon\}$ is bounded for $\varepsilon>0$ small enough. Suppose this does not occur, so that $|x_\varepsilon|\rightarrow\infty$, as $\varepsilon\rg0$. Let $\bar{u}_\varepsilon(\cdot)=u_\varepsilon(\cdot+x_\varepsilon/\varepsilon)$ and $\bar{v}_\varepsilon(\cdot)=v_\varepsilon(\cdot+x_\varepsilon/\varepsilon)$; up to a subsequence, $(\bar{u}_\varepsilon,\bar{v}_\varepsilon)\rightarrow \bar{z}=(\bar{u},\bar{v})$ weakly in $E$, as $\varepsilon\rg0$, with $\bar{u},\bar{v}\not\equiv0$. Moreover, $\Phi_{V_\infty}'(\bar{z})=0$. As in Lemma \ref{bj2} we get a contradiction. Therefore $\{x_\varepsilon\}$ is bounded for $\varepsilon>0$ small. Up to a subsequence, assume $x_\varepsilon\rightarrow x_0$ as $\varepsilon\rg0$, and let $\hat{u}_\varepsilon(\cdot)=u_\varepsilon(\cdot+x_\varepsilon/\varepsilon)$, $\hat{v}_\varepsilon(\cdot)=v_\varepsilon(\cdot+x_\varepsilon/\varepsilon)$. Then, up to a subsequence, $\hat{z}_\varepsilon=(\hat{u}_\varepsilon,\hat{v}_\varepsilon)\rightarrow\hat{z}=(\hat{u},\hat{v})\not=0$ weakly in $E$, as $\varepsilon\rg0$ and $\Phi_{V(x_0)}'(\hat{z})=0$, where $$ \Phi_{V(x_0)}(z)=\int_{\mathbb R^2}\nabla u\nabla v+V(x_0) uv-I(z),\,\, z=(u,v)\in E. $$ By $(H2)$ and Fatou's Lemma, \begin{align*} c_\varepsilon&=\Phi_\varepsilon(z_\varepsilon)-\frac{1}{2}\langle \Phi_\varepsilon'(z_\varepsilon),z_\varepsilon\rangle\\ &=\int_{\mathbb R^2}\frac{1}{2}f(\hat{u}_\varepsilon)\hat{u}_\varepsilon-F(\hat{u}_\varepsilon)+\int_{\mathbb R^2}\frac{1}{2}g(\hat{v}_\varepsilon)\hat{v}_\varepsilon-G(\hat{v}_\varepsilon)\\ &\ge\int_{\mathbb R^2}\frac{1}{2}f(\hat{u})\hat{u}-F(\hat{u})+\int_{\mathbb R^2}\frac{1}{2}g(\hat{v})\hat{v}-G(\hat{v})+o_\varepsilon(1)\\ &=\Phi_{V(x_0)}(\hat{z})-\frac{1}{2}\langle \Phi_{V(x_0)}'(\hat{z}),\hat{z}\rangle+o_\varepsilon(1)\ge c_{V(x_0)}+o_\varepsilon(1). 
\end{align*} Recalling that $\limsup_{\varepsilon\rg0}c_\varepsilon\le c_\ast$, we get $c_{V(x_0)}=c_\ast$ and hence $(\hat{u},\hat{v})$ is a ground state solution of \re{luv}. Thanks to Lemma \ref{bj1}, $V(x_0)=V_0$, namely $x_0\in\mathcal{M}$ and $\lim_{\varepsilon\rightarrow 0}\mbox{dist}(x_\varepsilon,\mathcal{M})=0$. {Moreover, $(\hat{u}_\varepsilon,\hat{v}_\varepsilon)\rightarrow (\hat{u},\hat{v})$ strongly in $E$, as $\varepsilon\rg0$. As in Proposition \ref{bo2}, $u_\varepsilon(x+x_\varepsilon/\varepsilon)$ and $v_\varepsilon(x+x_\varepsilon/\varepsilon)$ vanish at infinity uniformly in $\varepsilon$.} \end{proof}
{\begin{proposition}\label{boc3} Let $(\varphi_\varepsilon,\psi_\varepsilon)$ be a ground state solution to \re{q1} and let $x_\varepsilon^1, x_\varepsilon^2$ be maximum points of $|\varphi_\varepsilon|$ and $|\psi_\varepsilon|$, respectively. Then, $$
\hbox{$\lim_{\varepsilon\rightarrow 0}\mbox{dist}(x_\varepsilon^i,\mathcal{M})=0,\quad \lim_{\varepsilon\rg0}|x_\varepsilon^i-x_\varepsilon|=0,\quad i=1,2$.} $$ If in addition $f$ and $g$ are odd and $(H6)$ holds, then for $\varepsilon>0$ small enough, $\varphi_\varepsilon\psi_\varepsilon>0$ in $\mathbb R^2$ and
$$\lim_{\varepsilon\rg0}|x_\varepsilon^1-x_\varepsilon^2|/\varepsilon=0.$$
Moreover, for some $c,C>0$, $$|\varphi_\varepsilon(x)|\le C\exp(-\frac{c}{\varepsilon}|x-x_\varepsilon^1|),\quad |\psi_\varepsilon(x)|\le C\exp(-\frac{c}{\varepsilon}|x-x_\varepsilon^2|), \,\, x\in\mathbb R^2.$$ \end{proposition}} \begin{proof} Note that $x_\varepsilon^1/\varepsilon,x_\varepsilon^2/\varepsilon$ are maximum points of $u_\varepsilon,v_\varepsilon$, respectively. Thanks to the decay of $u_\varepsilon,v_\varepsilon$ and the fact that $$
\liminf_{\varepsilon\rg0}\min\{\|u_\varepsilon\|_\infty,\|v_\varepsilon\|_\infty\}>0, $$
we get that $|x_\varepsilon^i-x_\varepsilon|/\varepsilon$ is bounded for $i=1,2$ and $\varepsilon>0$ small enough. Then $\lim_{\varepsilon\rightarrow 0}\mbox{dist}(x_\varepsilon^i,\mathcal{M})=0$ and $\lim_{\varepsilon\rg0}|x_\varepsilon^i-x_\varepsilon|=0$ for $i=1,2$, and $\lim_{\varepsilon\rg0}|x_\varepsilon^1-x_\varepsilon^2|=0$.
\noindent Next we assume {that $f$ and $g$ are odd and that $(H6)$ holds}, and also that, up to a subsequence, $(x_\varepsilon^1-x_\varepsilon^2)/\varepsilon\rightarrow y_0\in\mathbb R^2$, as $\varepsilon\rg0$. Let $\tilde{u}_\varepsilon(\cdot)=u_\varepsilon(\cdot+x_\varepsilon^1/\varepsilon)$ and $\tilde{v}_\varepsilon(\cdot)=v_\varepsilon(\cdot+x_\varepsilon^2/\varepsilon)$, then $(\tilde{u}_\varepsilon(\cdot),\tilde{v}_\varepsilon(\cdot+(x_\varepsilon^1-x_\varepsilon^2)/\varepsilon))\rightarrow (u,v)\not=0$ strongly in $E$ and in $C_{loc}^1(\mathbb R^2)$, as $\varepsilon\rg0$. Moreover, $(u,v)$ is a ground state solution of \re{q11}. Without loss of generality, we assume $u>0$, $v>0$ in $\mathbb R^2$. Since $0$ is a maximum point of $\tilde{u}_\varepsilon$, $0$ is also a maximum point of $u$. By virtue of Theorem \ref{sign}, $0$ is the unique maximum point of $u$ and $v$. On the other hand, up to a subsequence, $(\tilde{u}_\varepsilon(\cdot+(x_\varepsilon^2-x_\varepsilon^1)/\varepsilon),\tilde{v}_\varepsilon(\cdot))\rightarrow (\tilde{u},\tilde{v})\not=0$ strongly in $E$ and in $C_{loc}^1(\mathbb R^2)$, as $\varepsilon\rg0$. Then $(\tilde{u}(\cdot),\tilde{v}(\cdot))=(u(\cdot-y_0),v(\cdot-y_0))$, which is a ground state solution of \re{q11}. Since $0$ is a maximum point of $\tilde{v}_\varepsilon$, $0$ is the unique maximum point of $\tilde{v}$. Therefore, $y_0=0$.
\noindent Finally, we prove that $u_\varepsilon,v_\varepsilon$ do not change sign for $\varepsilon>0$ sufficiently small. Setting $$ \bar{u}_\varepsilon=u_\varepsilon(\cdot+x_\varepsilon^1/\varepsilon),\,\,\, \bar{v}_\varepsilon=v_\varepsilon(\cdot+x_\varepsilon^1/\varepsilon), $$ it is enough to prove $\bar{u}_\varepsilon\bar{v}_\varepsilon>0$ in $\mathbb R^2$. We assume $(\bar{u}_\varepsilon,\bar{v}_\varepsilon)\rightarrow(u,v)\in\mathcal{S}$ strongly in $E$ and uniformly in $C_{loc}^2(\mathbb R^2)$, as $\varepsilon\rg0$ and $0$ is the unique maximum point of $u,v$. By Theorem \ref{sign}, $uv>0$ in $\mathbb R^2$. Without loss of generality, we assume $u>0$ and $v>0$ in $\mathbb R^2$. Then there exist $R>0$ and $\varepsilon_0>0$ such that $\bar{u}_\varepsilon,\bar{v}_\varepsilon>0$ in $B_R(0)$ for $\varepsilon<\varepsilon_0$. Define $$
R_\varepsilon(\bar{u}_\varepsilon):=\sup\{r\,|\,\bar{u}_\varepsilon(x)>0,\,\, \forall\, x\in B_r(0)\},\,\,R_\varepsilon(\bar{v}_\varepsilon):=\sup\{r\,|\, \bar{v}_\varepsilon(x)>0,\,\,\forall\ x\in B_r(0)\} $$
and $R_\varepsilon:=\min\{R_\varepsilon(\bar{u}_\varepsilon),R_\varepsilon(\bar{v}_\varepsilon)\}$, then $R_\varepsilon\ge R$ for any $\varepsilon<\varepsilon_0$. If $R_\varepsilon=\infty$ for every $\varepsilon<\varepsilon_0$, the proof is complete. Otherwise, there exists a sequence $\varepsilon_n>0$ such that $\varepsilon_n\rg0$ as $n\rightarrow\infty$ and $R_n:=R_{\varepsilon_n}<\infty$ for any fixed $n$. Then, by the maximum principle, $R_{\varepsilon_n}(\bar{u}_{\varepsilon_n}), R_{\varepsilon_n}(\bar{v}_{\varepsilon_n})<\infty$ for any fixed $n\in\mathbb{N}$. Hence $\inf_{x\in\mathbb R^2}\bar{u}_{\varepsilon_n}(x)<0$ and $\inf_{x\in\mathbb R^2}\bar{v}_{\varepsilon_n}(x)<0$ for any $n\in\mathbb{N}$. Noting that $\bar{u}_{\varepsilon_n}(x),\bar{v}_{\varepsilon_n}(x)\rg0$, as $|x|\rightarrow\infty$, there exist $y_n,z_n\in\mathbb R^2$ such that $\bar{u}_{\varepsilon_n}(y_n)=\min_{x\in\mathbb R^2}\bar{u}_{\varepsilon_n}(x)<0$ and $\bar{v}_{\varepsilon_n}(z_n)=\min_{x\in\mathbb R^2}\bar{v}_{\varepsilon_n}(x)<0$. Then we have $$ g(\bar{v}_{\varepsilon_n}(y_n))\le V_0\bar{u}_{\varepsilon_n}(y_n),\,\,\,f(\bar{u}_{\varepsilon_n}(z_n))\le V_0\bar{v}_{\varepsilon_n}(z_n). $$ By Remark \ref{remark1} we have $$ V_0\bar{u}_{\varepsilon_n}(y_n)\ge g(\bar{v}_{\varepsilon_n}(y_n))\ge g(\bar{v}_{\varepsilon_n}(z_n)) \ge g\left(\frac{f(\bar{u}_{\varepsilon_n}(z_n))}{V_0}\right)\ge g\left(\frac{f(\bar{u}_{\varepsilon_n}(y_n))}{V_0}\right), $$
which, by $(H1)$, yields $\inf_n|\bar{u}_{\varepsilon_n}(y_n)|>0$. {Observe that $\bar{u}_{\varepsilon_n}(x)\rg0$, as $|x|\rightarrow\infty$, uniformly in $\varepsilon$, and thus $\sup_n|y_n|<\infty$, namely $|y_n|<R_n$ for $n$ sufficiently large.} Hence $\bar{u}_{\varepsilon_n}(y_n)>0$, which is a contradiction. {Finally, since $u_\varepsilon,v_\varepsilon$ do not change sign, the standard comparison principle gives the uniform exponential decay at infinity.} \end{proof} In order to complete the proof of Theorem \ref{Th1}, we need to prove the uniqueness of the maximum points of $\varphi_\varepsilon,\psi_\varepsilon$.
\begin{proposition}\label{boc4} Let $x_\varepsilon^1, y_\varepsilon^1$ be any two maximum points of $\varphi_\varepsilon$. {Assume $f$ and $g$ are odd and $(H6)$ holds. Then $x_\varepsilon^1=y_\varepsilon^1$ for $\varepsilon>0$ sufficiently small.} Namely, the maximum point of $\varphi_\varepsilon$ is unique. The same holds for $\psi_\varepsilon$. \end{proposition} \begin{proof} Let $$ \bar{u}_\varepsilon=u_\varepsilon(\cdot+x_\varepsilon^1/\varepsilon),\,\,\, \bar{v}_\varepsilon=v_\varepsilon(\cdot+x_\varepsilon^1/\varepsilon). $$ Then $(\bar{u}_\varepsilon,\bar{v}_\varepsilon)\rightarrow (u,v)\in\mathcal{S}$ strongly in $E$ and uniformly in $C_{loc}^2(\mathbb R^2)$, as $\varepsilon\rg0$. Moreover, there exist $c,C>0$ such that $$
|\bar{u}_\varepsilon(x)|\le C\exp{(-c|x|)},\,\, x\in\mathbb R^2. $$
Hence $\|\bar{u}_\varepsilon\|_\infty\le C\exp{(-c|y_\varepsilon^1-x_\varepsilon^1|/\varepsilon)}$. As a consequence we have $$\limsup_{\varepsilon\rg0}|y_\varepsilon^1-x_\varepsilon^1|/\varepsilon<\infty.$$
Indeed, otherwise $\|\bar{u}_\varepsilon\|_\infty\rg0$, as $\varepsilon\rg0$, which yields $$
\int_{\mathbb R^2}[|\nabla\bar{v}_\varepsilon|^2+V_\varepsilon(x+x_\varepsilon^1/\varepsilon)|\bar{v}_\varepsilon|^2]\,\mathrm{d} x=\int_{\mathbb R^2}f(\bar{u}_\varepsilon)\bar{v}_\varepsilon\,\mathrm{d} x\rg0. $$
Namely, $\|v_\varepsilon\|_{1,\varepsilon}\rg0$ as $\varepsilon\rg0$, from which $\Phi_\varepsilon(u_\varepsilon,v_\varepsilon)\rg0$ as $\varepsilon\rg0$, contradicting Proposition \ref{co51}. Therefore $|y_\varepsilon^1-x_\varepsilon^1|/\varepsilon$ stays bounded for $\varepsilon>0$ small. As in Proposition \ref{boc3}, $|y_\varepsilon^1-x_\varepsilon^1|/\varepsilon\rg0$, as $\varepsilon\rg0$. Observe that $\nabla\bar{u}_\varepsilon(0)=\nabla\bar{u}_\varepsilon((y_\varepsilon^1-x_\varepsilon^1)/\varepsilon)=0$. By Theorem \ref{sign}, $\Delta u(0)<0$. Since $u(x)=u(|x|)$ is radial, $u'(0)=0$ and $u''(r)<0$ for $r=|x|$ small. On the other hand, since $g\in C^1$, $\bar{u}_\varepsilon\in C^2$ and $\bar{u}_\varepsilon\rightarrow u$ in $C_{loc}^2(\mathbb R^2)$, as $\varepsilon\rg0$, it follows from \cite[Lemma 4.2]{Ni-Takagi1} that $y_\varepsilon^1=x_\varepsilon^1$ for $\varepsilon>0$ sufficiently small. \end{proof}
\end{document}
\begin{document}
\title[Approximations of a quantum diffusion equation]{Entropy-stable and entropy-dissipative approximations of a fourth-order quantum diffusion equation}
\author{Mario Bukal} \address{Institute for Analysis and Scientific Computing, Vienna University of
Technology, Wiedner Hauptstra\ss e 8--10, 1040 Wien, Austria} \email{mbukal@asc.tuwien.ac.at}
\author{Etienne Emmrich} \address{Institute for Mathematics, Technical University of Berlin,
Stra{\ss}e des 17. Juni 136, 10623 Berlin, Germany} \email{emmrich@math.tu-berlin.de}
\author{Ansgar J\"ungel} \address{Institute for Analysis and Scientific Computing, Vienna University of
Technology, Wiedner Hauptstra\ss e 8--10, 1040 Wien, Austria} \email{juengel@tuwien.ac.at}
\date{\today}
\thanks{The first and last author acknowledge partial support from the Austrian Science Fund (FWF), grants P20214, P22108, and I395, and the Austrian-French Project of the Austrian Exchange Service (\"OAD)}
\begin{abstract} Structure-preserving numerical schemes for a nonlinear parabolic fourth-order equation, modeling the electron transport in quantum semiconductors, with periodic boundary conditions are analyzed. First, a two-step backward differentiation formula (BDF) semi-discretization in time is investigated. The scheme preserves the nonnegativity of the solution, is entropy stable and dissipates a modified entropy functional. The existence of a weak semi-discrete solution and, in a particular case, its temporal second-order convergence to the continuous solution is proved. The proofs employ an algebraic relation which implies the G-stability of the two-step BDF. Second, an implicit Euler and $q$-step BDF discrete variational derivative method are considered. This scheme, which exploits the variational structure of the equation, dissipates the discrete Fisher information (or energy). Numerical experiments show that the discrete (relative) entropies and Fisher information decay even exponentially fast to zero. \end{abstract}
\keywords{Derrida-Lebowitz-Speer-Spohn equation, discrete entropy-dissipation i\-ne\-qua\-li\-ty, Fisher information, BDF time discretization, numerical convergence, discrete variational derivative method.}
\subjclass[2000]{65M06, 65M12, 65M15, 35Q40, 82D37.}
\maketitle
\section{Introduction}
This paper is devoted to the study of novel structure-preserving temporal higher-order numerical schemes for the fourth-order quantum diffusion equation \begin{equation}\label{qde}
n_t + \operatorname{div}\left(n\na\left(\frac{\Delta\sqrt{n}}{\sqrt{n}}\right)\right) = 0,
\quad x\in{{\mathbb T}^d},\ t>0, \quad n(0)=n_0, \end{equation} where ${{\mathbb T}^d}$ is the $d$-dimensional torus. This equation is the zero-temperature and zero-field limit of the quantum drift-diffusion model, which describes the evolution of the electron density $n(t)=n(t,\cdot)$ in a quantum semiconductor device; see \cite{Jue09}. It was derived in \cite{DMR05} from a relaxation-time Wigner equation using a Chapman-Enskog expansion around the quantum equilibrium. For smooth positive solutions, \eqref{qde} can be written in a symmetric form for the variable $\log n$: \begin{equation}\label{dlss}
n_t + \frac12\pa_{ij}^2(n\pa_{ij}^2\log n) = 0,\quad x\in{{\mathbb T}^d},\ t>0,\quad
n(0)=n_0, \end{equation} where here and in the following, we employ the summation convention over repeated indices and the notation $\pa_i=\pa/\pa x_i$, $\pa_{ij}^2=\pa^2/\pa x_i\pa x_j$. This is the multidimensional form of the so-called Derrida-Lebowitz-Speer-Spohn (DLSS) equation. Its one-dimensional version was derived in \cite{DLSS91} in a suitable scaling limit from the time-discrete Toom model and the variable $n$ is related to a limit random variable.
The main difficulties in the analysis of \eqref{qde} (or \eqref{dlss}) are the highly nonlinear structure, originating from the quantum potential term $\Delta\sqrt{n}/\sqrt{n}$ in \eqref{qde}, and the fourth-order differential operator, which lacks a maximum principle.
These difficulties have been overcome by exploiting the rich mathematical structure of \eqref{dlss}. First, equation \eqref{dlss} preserves the nonnegativity of the solutions \cite{JuMa08}: Starting from a nonnegative initial datum, the weak solution stays nonnegative for all time. Second, \eqref{dlss} allows for a class of Lyapunov functionals and so-called entropy dissipation estimates. More precisely, the functionals $$
E_\alpha[n] = \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} n^\alpha \mathrm{d}x \quad
(\alpha\neq 0,1),\quad E_1[n] = \int_{{\mathbb T}^d}\big(n(\log n-1)+1\big)\mathrm{d}x $$ are Lyapunov functionals along solutions to \eqref{dlss}, i.e.\ ${\mathrm{d}} E_\alpha[n]/{\mathrm{d}} t\le 0$ if $(\sqrt{d}-1)^2/(d+2)\le\alpha\le (\sqrt{d}+1)^2/(d+2)$, and the entropy dissipation inequality \begin{equation}\label{1.est1}
\frac{{\mathrm{d}}}{{\mathrm{d}} t}E_\alpha[n] + 2\kappa_\alpha
\int_{{\mathbb T}^d}(\Delta n^{\alpha/2})^2\mathrm{d}x \le 0 \end{equation} holds if $(\sqrt{d}-1)^2/(d+2)<\alpha < (\sqrt{d}+1)^2/(d+2)$. The constant $\kappa_\alpha>0$ can be computed explicitly; see Lemma \ref{lem.H2} below. For $\alpha=1$, inequality \eqref{1.est1} can be interpreted as the dissipation of the physical entropy. Third, equation \eqref{qde} is the gradient flow of the Fisher information \begin{equation}\label{1.F}
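As an illustrative aside (not part of the analysis below), the admissible range of $\alpha$ in the entropy dissipation inequality \eqref{1.est1} can be tabulated for low dimensions with a few lines of Python:

```python
import math

def alpha_range(d):
    """Open interval of exponents alpha for which the entropy
    dissipation inequality holds in dimension d, namely
    ((sqrt(d) - 1)^2 / (d + 2), (sqrt(d) + 1)^2 / (d + 2))."""
    s = math.sqrt(d)
    return ((s - 1) ** 2 / (d + 2), (s + 1) ** 2 / (d + 2))

for d in (1, 2, 3):
    lo, hi = alpha_range(d)
    print(f"d = {d}: alpha in ({lo:.4f}, {hi:.4f})")
```

In particular, the physical entropy exponent $\alpha=1$ lies in the open interval for every $d\ge1$, since $(\sqrt{d}-1)^2<d+2<(\sqrt{d}+1)^2$.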
F[n] = \int_{{\mathbb T}^d}|\na\sqrt{n}|^2\mathrm{d}x \end{equation} with respect to the Wasserstein metric \cite{GST09}. As the variational derivative of the Fisher information equals $\delta F[n]/\delta n=-\Delta\sqrt{n}/\sqrt{n}$, a straightforward computation shows that the Fisher information is dissipated along solutions to \eqref{qde}, \begin{equation}\label{1.est2}
\frac{{\mathrm{d}}}{{\mathrm{d}} t}F[n] + \int_{{\mathbb T}^d} n\Big|\na\Big(\frac{\delta F[n]}{\delta n}
\Big)\Big|^2\mathrm{d}x = 0. \end{equation} Since the Fisher information can be interpreted as the quantum energy, the latter can be seen as an energy dissipation identity.
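For smooth positive solutions, the straightforward computation mentioned above can be spelled out: using $n_t=-\operatorname{div}(n\na(\Delta\sqrt{n}/\sqrt{n}))$, $\delta F[n]/\delta n=-\Delta\sqrt{n}/\sqrt{n}$, and an integration by parts on the torus (no boundary terms), \begin{align*} \frac{{\mathrm{d}}}{{\mathrm{d}} t}F[n] &=\int_{{\mathbb T}^d}\frac{\delta F[n]}{\delta n}\,n_t\,\mathrm{d}x =\int_{{\mathbb T}^d}\Big(-\frac{\Delta\sqrt{n}}{\sqrt{n}}\Big) \Big(-\operatorname{div}\Big(n\na\frac{\Delta\sqrt{n}}{\sqrt{n}}\Big)\Big)\mathrm{d}x\\ &=-\int_{{\mathbb T}^d} n\Big|\na\Big(\frac{\Delta\sqrt{n}}{\sqrt{n}}\Big)\Big|^2\mathrm{d}x =-\int_{{\mathbb T}^d} n\Big|\na\Big(\frac{\delta F[n]}{\delta n}\Big)\Big|^2\mathrm{d}x, \end{align*} which is \eqref{1.est2}.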
Whereas the local-in-time existence of positive classical solutions for strictly positive $W^{1,p}({{\mathbb T}^d})$ initial data with $p>d$ could be proved using semigroup theory \cite{BLS94}, global-in-time existence results were based on estimates \eqref{1.est1} and \eqref{1.est2}. More precisely, the global existence of a nonnegative weak solution was achieved in \cite{JuPi00} in the one-dimensional case. This result was later extended to several space dimensions in \cite{JuMa08}, employing entropy dissipation inequalities, and in \cite{GST09}, exploring the variational structure of the equation.
From a numerical viewpoint, it is desirable to design numerical approximations which preserve the above structural properties like positivity preservation, entropy stability, and entropy or energy dissipation on a discrete level. For a constant time step size $\tau >0$, let $t_k = k\tau$ $(k\ge 0)$. If $n_k$ approximates the solution $n(t_k)$ to \eqref{dlss} at time $t_k$, we call a numerical scheme {\em entropy dissipating} if $E_\alpha[n_{k+1}]\le E_\alpha[n_k]$ for all $k\ge 0$ with $\alpha$ in a certain parameter range, and {\em entropy stable} if there exists a constant $C>0$ such that $E_\alpha[n_k]\le C$ for all $k\ge 0$. In this paper, we investigate the entropy stability and entropy dissipation of backward differentiation formulas (BDF).
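To fix ideas, the "entropy dissipating" property can be illustrated by a small self-contained Python sketch (purely illustrative; the toy densities below are not solutions of any of the schemes analyzed in this paper): it evaluates a Riemann-sum version of $E_2[n_k]$ on the one-dimensional torus for densities $n_k(x)=1+a_k\cos(2\pi x)$ with shrinking amplitudes $a_k$ and checks the monotone decay of the entropy values.

```python
import math

def discrete_entropy(n_vals, dx, alpha=2.0):
    # Riemann-sum approximation of E_alpha[n] = 1/(alpha(alpha-1)) int n^alpha dx
    # on the one-dimensional torus (periodic uniform grid, no boundary terms).
    return sum(v ** alpha for v in n_vals) * dx / (alpha * (alpha - 1))

def toy_sequence(num_steps, grid_size=256):
    # Toy densities n_k(x) = 1 + a_k cos(2 pi x) with a_k = 0.5 * 0.5**k;
    # data with decreasing oscillation, chosen only to illustrate the definition.
    dx = 1.0 / grid_size
    xs = [j * dx for j in range(grid_size)]
    return [[1.0 + 0.5 * 0.5 ** k * math.cos(2.0 * math.pi * x) for x in xs]
            for k in range(num_steps)], dx

densities, dx = toy_sequence(6)
entropies = [discrete_entropy(n, dx) for n in densities]
# "Entropy dissipating" in the sense above: E[n_{k+1}] <= E[n_k] for all k.
assert all(e1 <= e0 for e0, e1 in zip(entropies, entropies[1:]))
```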
In the literature, most of the numerical schemes proposed for \eqref{dlss} are based on an implicit Euler discretization in one space dimension. In \cite{JuPi01}, the convergence of a positivity-preserving semi-discrete Euler scheme was shown. A fully discrete finite-difference scheme which preserves the positivity, mass, and physical entropy was derived in \cite{CJT03}. D\"uring et al.\ \cite{DMM10} employed the variational structure of \eqref{dlss} on a fully discrete level and introduced a discrete minimizing movement scheme. This approach implies the decay of the discrete Fisher information and the nonnegativity of the discrete solutions. Finally, a positivity-preserving finite-volume scheme in several space dimensions for a stationary quantum drift-diffusion model was suggested in \cite{CGJ11}.
Positivity-preserving and entropy-consistent numerical schemes have been investigated in the literature also for other nonlinear fourth- and second-order equations. For instance, a positivity-preserving finite-difference approximation of the thin-film equation was proposed by Zhornitskaya and Bertozzi \cite{ZhBe00}. Finite element techniques for the same equation were employed by Barrett, Blowey, and Garcke \cite{BBG98}, imposing the nonnegativity property as a constraint such that at each time level a variational inequality has to be solved. Furthermore, entropy-consistent finite volume--finite element schemes were suggested and analyzed by Gr\"un and Rumpf \cite{Gru03,GrRu00}. Furihata and Matsuo \cite{FuMa10} developed the discrete variational derivative method to derive conservative or dissipative schemes for a variety of evolution equations possessing a variational structure. Entropy-dissipative fully discrete schemes for electro-reaction-diffusion systems were derived by Glitzky and G\"artner \cite{GlGa09}.
In most of these works, the time discretization is restricted to the implicit Euler method, motivated by the fact that the solutions often lack regularity. However, high-order schemes often still yield smaller time errors than the Euler scheme, and this improved accuracy is vital to match the spatial approximation errors. A difficulty of the analysis is that the time discretization has to be compatible with the entropy structure of the equation. This is the case for the first-order implicit Euler discretization. Indeed, multiplying the semi-discrete scheme \begin{equation}\label{1.euler}
\frac{1}{\tau}(n_{k+1}-n_k) + \frac12\pa_{ij}^2(n_{k+1}\pa_{ij}^2\log n_{k+1})
= 0, \quad k\ge 0, \end{equation} where $\tau>0$ is the time step and $n_k$ approximates $n(t_k)$ with $t_k=\tau k$, by $\log n_{k+1}$ and using the elementary inequality \begin{equation}\label{euler.ineq}
(x-y)\log x\ge x\log x-y\log y-(x-y)\quad \mbox{for }x,y>0 \end{equation} (which follows from the convexity of $x\mapsto x(\log x-1)$; the additional term $x-y$ vanishes after integration since the scheme conserves mass), it was shown in \cite[Lemma 4.1]{JuMa08} that $$
E_\alpha[n_{k+1}] + 2\tau\kappa_\alpha\int_{{\mathbb T}^d} (\Delta n_{k+1}^{\alpha/2})^2\mathrm{d}x
\le E_\alpha[n_k], \quad k\ge 0. $$ As a consequence, $k\mapsto E_\alpha[n_k]$ is nonincreasing and the entropy dissipation structure is preserved. It is less clear whether higher-order approximations yield entropy dissipating numerical schemes. In this paper, we prove this property for the two-step BDF method.
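The integrated form of this convexity argument can be sanity-checked numerically: for any two positive discrete densities with equal total mass, $\sum_i(x_i-y_i)\log x_i\ge\sum_i(x_i\log x_i-y_i\log y_i)$. A minimal sketch (the grid size, the random densities, and the tolerance are illustrative assumptions, not part of the paper):

```python
import math
import random

random.seed(1)
ok = True
for _ in range(1000):
    N = 50
    n_old = [random.uniform(0.1, 5.0) for _ in range(N)]
    n_new = [random.uniform(0.1, 5.0) for _ in range(N)]
    # enforce discrete mass conservation: rescale n_new to the mass of n_old
    s = sum(n_old) / sum(n_new)
    n_new = [s * x for x in n_new]
    lhs = sum((x - y) * math.log(x) for x, y in zip(n_new, n_old))
    # discrete entropy difference sum(x log x) - sum(y log y)
    rhs = (sum(x * math.log(x) for x in n_new)
           - sum(y * math.log(y) for y in n_old))
    ok = ok and (lhs >= rhs - 1e-9)
print(ok)
```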
Two-step BDF (or BDF2) methods have been employed in the literature to approximate various evolution equations in different contexts. We just mention numerical schemes for incompressible Navier-Stokes problems \cite{Emm04,GiRa81,HiSu00}, semilinear and quasilinear parabolic equations \cite{Emm05,Moo94}, and nonlinear evolution problems governed by monotone operators \cite{Emm09,Kre78}. To our knowledge, temporal higher-order schemes for the quantum diffusion equation \eqref{qde} have not been considered so far.
In the following, we detail our main results. First, we analyze the BDF2 time approximation of the DLSS equation, written in the form \begin{equation}\label{alpha_dlss}
\frac{2}{\alpha}n^{1-\alpha/2}(n^{\alpha/2})_t
+ \frac12\pa_{ij}^2(n\pa_{ij}^2\log n) = 0, \end{equation} which was already used in \cite{JuVi07} in a different context. Introducing the variable $v_k:=n_k^{\alpha/2}$, which approximates $n(t_k)^{\alpha/2}$, the semi-discrete BDF2 scheme for \eqref{alpha_dlss} reads as \begin{equation}\label{disc.dlss}
\frac{2}{\alpha\tau}v_{k+1}^{2/\alpha-1}\left(\frac32 v_{k+1} - 2v_k
+ \frac12 v_{k-1}\right) + \frac{1}{2}\pa_{ij}^2(n_{k+1}\pa_{ij}^2
\log n_{k+1}) = 0 \quad\mbox{in }{{\mathbb T}^d},\ k\geq 1. \end{equation} Here, $v_0=n_0^{\alpha/2}$ is given by the initial datum $n_0$, and $v_1$ is the solution to the implicit Euler scheme \begin{equation}\label{alpha_euler}
\frac{2}{\alpha\tau}v_1^{2/\alpha-1}\big(v_{1} - v_0\big)
+ \frac12\pa_{ij}^2(n_{1}\pa_{ij}^2\log n_{1}) = 0
\quad\mbox{in }{{\mathbb T}^d}. \end{equation} The existence of a weak solution to the scheme \eqref{disc.dlss}--\eqref{alpha_euler} is provided by the following theorem.
\begin{theorem}[Existence of solutions and entropy stability]\label{thm.bdf2.ex} Let $1\le d\le 3$, $1 \leq \alpha < (\sqrt{d}+1)^2/(d+2)$, and let $n_0 \in L^3({{\mathbb T}^d})$ be a nonnegative function. Then there exists a weak solution $v_1 = n_1^{\alpha/2}$ of the implicit Euler scheme \eqref{alpha_euler} and a sequence $(v_k)=(n_k^{\alpha/2})$ of weak nonnegative solutions to \eqref{disc.dlss} satisfying $v_k \geq 0$ in ${{\mathbb T}^d}$, $v_k\in H^2({{\mathbb T}^d})$, and for all $\phi\in W^{2,\infty}({{\mathbb T}^d})$, \begin{align}\label{weak_alpha}
\frac{1}{\alpha\tau}\int_{{{\mathbb T}^d}} & v_{k+1}^{2/\alpha-1}\left(\frac32 v_{k+1}
- 2v_k + \frac12 v_{k-1}\right)\phi\mathrm{d}x \\
&{}+ \int_{{{\mathbb T}^d}}\left(\frac{1}{2\alpha}v_{k+1}^{2/\alpha-1}\pa_{ij}^2v_{k+1}
- \frac{\alpha}{2}\pa_i (v_{k+1}^{1/\alpha})\pa_j
(v_{k+1}^{1/\alpha})\right)\pa_{ij}^2\phi\mathrm{d}x = 0. \nonumber \end{align} If $\alpha > 1$, the scheme \eqref{disc.dlss} is entropy stable and the a priori estimate \begin{equation}\label{ent.stab}
E_\alpha[n_m] + \frac43\kappa_\alpha\tau\sum_{k=1}^{m}\int_{{{\mathbb T}^d}}
\big(\Delta(n_{k}^{\alpha/2})\big)^2\mathrm{d}x
\le E_\alpha[n_0],\quad m\geq 1, \end{equation} holds, where $\kappa_\alpha>0$ is defined in Lemma \ref{lem.H2}. \end{theorem}
By redefining the entropy, we are able to prove entropy dissipation of the semi-discrete scheme. To this end, we introduce the modified entropy $$
E^G_\alpha[n_k,n_{k-1}] = \frac{1}{2\alpha(\alpha-1)}\int_{{\mathbb T}^d}
\big(n_k^\alpha + (2n_k^{\alpha/2}-n_{k-1}^{\alpha/2})^2\big)\mathrm{d}x, \quad k\ge 1. $$ This definition is motivated by the inequality $$
2\left(\frac32 a-2b+\frac12 c\right)a \ge \frac12\big(a^2+(2a-b)^2\big)
- \frac12\big(b^2+(2b-c)^2\big) \quad\mbox{for all }a,b,c\in{\mathbb R}, $$ which implies the G-stability of the BDF2 method; see \cite{Dah78} and Lemma \ref{lem.ineq}. The entropies $E_\alpha$ and $E_\alpha^G$ are formally related by $E^G_\alpha[n_k,n_{k-1}] = E_\alpha[n_k] + O(\tau)$ as $\tau\to 0$ for $k\ge 2$.
\begin{corollary}[Entropy dissipation]\label{coro.bdf2} Let the assumptions of Theorem \ref{thm.bdf2.ex} hold for $\alpha>1$. Then the scheme \eqref{disc.dlss} is entropy dissipative in the sense of \begin{equation}\label{ent.diss}
E^G_\alpha[n_{k+1},n_k] + 2\kappa_\alpha\tau\int_{{{\mathbb T}^d}}
\big(\Delta(n_{k+1}^{\alpha/2})\big)^2\mathrm{d}x
\le E^G_\alpha[n_k,n_{k-1}],\quad k\geq 1. \end{equation} In particular, $k\mapsto E^G_\alpha[n_k,n_{k-1}]$ is nonincreasing. \end{corollary}
We stress that the implicit Euler scheme \eqref{1.euler} dissipates {\em all} admissible entropies, whereas the BDF2 scheme dissipates just {\em one} entropy, $E^G_\alpha[n_k,n_{k-1}]$, where $\alpha$ has been fixed in the scheme.
The proof of Theorem \ref{thm.bdf2.ex} is based on the semi-discrete entropy stability inequality \eqref{ent.stab} and the Leray-Schauder fixed-point theorem. Instead of \eqref{euler.ineq}, we employ the algebraic inequalities \eqref{ineq1} and \eqref{ineq2} (see Section \ref{sec.bdf2}). We have not been able to obtain similar inequalities for BDF$k$ methods with $3\le k\le 6$. The reason might be that the only G-stable BDF methods are the BDF1 (implicit Euler) and BDF2 discretizations \cite{Dah78}. Moreover, we have not been able to prove entropy dissipation for $\alpha=1$ since in this case, inequalities \eqref{ineq1} and \eqref{ineq2} cannot be used.
If $\alpha=1$, we prove that the semi-discrete solution to the BDF2 scheme converges to the continuous solution with second-order rate.
\begin{theorem}[Second-order convergence]\label{thm.bdf2.conv} Let the assumptions of Theorem \ref{thm.bdf2.ex} hold, let $\alpha=1$, and let $(v_k)$ be the sequence of solutions to \eqref{disc.dlss}-\eqref{alpha_euler} constructed in Theorem \ref{thm.bdf2.ex}. We assume that there exist values $\mu_k>0$ such that $v_k\ge \mu_k>0$ in ${{\mathbb T}^d}$. Furthermore, let $n$ be a strictly positive solution to \eqref{dlss} satisfying $\sqrt{n}\in H^3(0,T;L^2({{\mathbb T}^d}))\cap W^{2,\infty}(0,T; L^2({{\mathbb T}^d}))$. Then there exists a constant $C>0$, depending only on the $L^2(0,T;L^2({{\mathbb T}^d}))$ norm of $(\sqrt{n})_{ttt}$, the $L^\infty(0,T;L^2({{\mathbb T}^d}))$ norm of $(\sqrt{n})_{tt}$, and $T$, but not on $\tau$, such that $$
\|v_k - \sqrt{n(t_k,\cdot)}\|_{L^2({{\mathbb T}^d})} \le C\tau^2, $$ where $0<\tau<1/8$ is the time step and $t_k=\tau k$, $k\ge 0$. \end{theorem}
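The second-order rate can be illustrated on a scalar model problem rather than on the DLSS equation itself. The following sketch (the test equation $u'=\lambda u$, the parameters, and the error ratio test are illustrative assumptions) uses an implicit Euler first step, mirroring the initialization of the semi-discrete scheme, and checks that halving the time step divides the final error by about four:

```python
import math

def bdf2_solve(lam, u0, T, M):
    # BDF2 for the scalar test problem u' = lam*u on [0, T] with M steps;
    # the first step is implicit Euler, as in the initialization above
    tau = T / M
    u_prev = u0
    u = u_prev / (1.0 - tau * lam)          # implicit Euler step
    for _ in range(M - 1):
        # (3/2 u_new - 2 u + 1/2 u_prev) / tau = lam * u_new
        u_new = (2.0 * u - 0.5 * u_prev) / (1.5 - tau * lam)
        u_prev, u = u, u_new
    return u

lam, u0, T = -2.0, 1.0, 1.0
exact = u0 * math.exp(lam * T)
e1 = abs(bdf2_solve(lam, u0, T, 100) - exact)
e2 = abs(bdf2_solve(lam, u0, T, 200) - exact)
ratio = e1 / e2
print(3.5 < ratio < 4.5)  # error ratio close to 4: second-order convergence
```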
It is shown in \cite[Theorem 6.2]{BLS94} that the solution $n$ to \eqref{dlss} is smooth locally in time if the initial datum is positive and an element of $W^{1,\infty}({{\mathbb T}^d})$. The proof of Theorem \ref{thm.bdf2.conv} is based on local truncation error estimates and the monotonicity of the formal operator $A(v)=v^{1-2/\alpha}\pa_{ij}^2(v^2\pa_{ij}^2\log v)$ for $\alpha=1$ \cite{JuPi03}. If $\alpha\neq 1$, the operator $A$ does not seem to be monotone, and our proof does not apply. Possibly, second-order convergence for $\alpha\neq 1$ could be achieved by applying suitable nonlinear semigroup estimates.
Next, we investigate a fully discrete numerical scheme which dissipates the Fisher information. To this end, we employ the discrete variational derivative method of Furihata and Matsuo \cite{FuMa10}. The method is based on the variational structure of the DLSS equation, \begin{equation}\label{dlss.gf}
n_t + \operatorname{div}\left(n\na\frac{\delta F[n]}{\delta n}\right) = 0, \quad t>0. \end{equation} The dissipation of the Fisher information $F[n]$ (see \eqref{1.F}) follows from (formally) integrating by parts in $$
\frac{{\mathrm{d}}}{{\mathrm{d}} t}F[n]
= \int_{{\mathbb T}^d}\frac{\delta F[n]}{\delta n}n_t \mathrm{d}x
= -\int_{{\mathbb T}^d} n\left|\na\left(\frac{\delta F[n]}{\delta n}\right)\right|^2\mathrm{d}x \le
0 $$ (see \eqref{1.est2}). The idea of the method is to derive a discrete formula for the variational derivative $\delta F[n]/\delta n$ in such a way that the above integration by parts formula and consequently the dissipation property hold on a discrete level. We provide such formulas for spatial finite difference and temporally higher-order BDF approximations.
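On a periodic grid, the key ingredient is a discrete analogue of this integration by parts, namely summation by parts: $\sum_i g_i\,D^-(n\,D^+g)_i\,h = -\sum_i n_i\,(D^+g_i)^2\,h \le 0$, where $D^\pm$ denote one-sided differences. A minimal sketch (the grid, the random data, and the placeholder $g$ standing in for the discrete variational derivative are illustrative assumptions):

```python
import random

def dplus(f, h):
    # forward difference on a periodic grid
    N = len(f)
    return [(f[(i + 1) % N] - f[i]) / h for i in range(N)]

def dminus(f, h):
    # backward difference on a periodic grid
    N = len(f)
    return [(f[i] - f[(i - 1) % N]) / h for i in range(N)]

random.seed(3)
N, h = 32, 1.0 / 32
n = [random.uniform(0.1, 2.0) for _ in range(N)]
g = [random.uniform(-1.0, 1.0) for _ in range(N)]  # stand-in for dF/dn
flux = [ni * d for ni, d in zip(n, dplus(g, h))]
lhs = sum(gi * di for gi, di in zip(g, dminus(flux, h))) * h
rhs = -sum(ni * d * d for ni, d in zip(n, dplus(g, h))) * h
# the identity holds exactly, and the right-hand side is nonpositive
print(abs(lhs - rhs) < 1e-8 and rhs <= 0.0)
```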
The numerical approximation for equation (\ref{qde}), derived in \cite{DMM10}, takes advantage of the gradient-flow structure in the sense that the variational structure was discretized instead of equation (\ref{qde}) itself. The method is based on the minimizing movement (steepest descent) scheme and consequently dissipates the discrete Fisher information. In each time step, a constrained quadratic optimization problem for the Fisher information needs to be solved on a finite-dimensional space. Each subproblem has to be solved iteratively, leading to a sequential quadratic programming method. In general, this structure-preserving approach, known as ``first discretize, then minimize'', has good stability properties and captures well other structural features of equations, like those presented in \cite{WW10}.
The strategy of the discrete variational derivative method is the standard ``first minimize, then discretize'' approach, i.e., equation (\ref{qde}), understood as the minimality condition of the variational setting, is discretized directly. To some extent this is simpler than the above approach, since in each time step only a discrete nonlinear system has to be solved, and the main structural property remains preserved. Furthermore, we derive temporally higher-order discretizations, whereas the scheme in \cite{DMM10} is of first order only.
To simplify the notation, we consider the spatially one-dimensional case only. The extension to the multidimensional situation is straightforward if we assume rectangular grids. Let $x_0,\ldots,x_N$ be equidistant grid points of ${\mathbb T}$ with mesh size $h>0$ and $x_0\cong x_N$. Let $U_i^k$ be an approximation of $n(t_k,x_i)$ and set $U^k=(U^k_0,\ldots,U^k_{N-1})$, $U^k_N=U^k_0$. Furthermore, let $\delta_k^{1,q}$ be the $q$-step BDF operator at time $t_k$; for instance, \begin{align}
\delta_{k+1}^{1,q}U_i^{k+1} &= \frac{1}{\tau}(U^{k+1}_i-U^{k}_i)
\quad\mbox{if }q=1, \label{1.bdf1} \\
\delta^{1,q}_{k+1}U_i^{k+1}
&= \frac{1}{\tau}\left(\frac32 U^{k+1}_i - 2U^k_i + \frac12 U^{k-1}_i\right)
\quad\mbox{if }q=2. \label{1.bdf2} \end{align} We denote by $\delta^{\langle 1\rangle}_i$ the central finite-difference operator at $x_i$, i.e.\ $\delta^{\langle 1\rangle}_i U^k = (U_{i+1}^k-U_{i-1}^k)/(2h)$. Then, following \eqref{dlss.gf}, we propose the fully discrete scheme \begin{equation}\label{dvdm.q}
\delta_{k+1}^{1,q}U_i^{k+1} = \delta^{\langle 1\rangle}_i
\left(U^{k+1}\delta^{\langle 1\rangle}_i\left(
\frac{\delta F_d}{\delta(U^{k+1},\ldots,U^{k-q+1})}\right)\right), \quad
k\ge q-1, \end{equation} where $i=0,\ldots,N-1$. The discrete variational derivative $\delta F_d/\delta(U^{k+1},\ldots,U^{k-q+1})\in{\mathbb R}^N$ is defined in such a way that a discrete chain rule holds (see \eqref{q1.dvd} and \eqref{gen.dvd} in Section \ref{sec.dvdm} for the precise definitions), yielding the dissipation of the discrete Fisher information $F_d[U^k]$ in the sense of the following theorem.
\begin{theorem}[Dissipation of the Fisher information]\label{thm.dvdm} Let $N\in{\mathbb N}$, let $U^0\in{\mathbb R}^N$ be some nonnegative initial datum with unit mass, $\sum_{i=0}^{N-1} U_i^0 h=1$, and let $U^1,\ldots,U^{q-1}\in{\mathbb R}^N$ be starting values with unit mass and $F_d[U^{q-1}]\le\cdots\le F_d[U^0]<\infty$. Then the scheme \eqref{dvdm.q}, with the discrete variational derivative $\delta F_d/\delta(U^{k+1},\ldots,U^{k-q+1})$ defined by \eqref{gen.dvd}, is consistent of order $(q,2)$ with respect to the time-space discretization. Furthermore, $U^k$ is bounded uniformly in $k$, has unit mass, and the discrete Fisher information is dissipated in the sense of $$
\delta^{1,q}_k F_d[U^k] \le 0 \quad\mbox{for all }k\ge q. $$ Moreover, for $q=1$, with the discrete variational derivative defined by \eqref{q1.dvd}, the scheme \eqref{dvdm.q} is consistent of order $(1,2)$, and the discrete Fisher information is nonincreasing, $F_d[U^{k+1}]\le F_d[U^k]$ for all $k\ge 1$. \end{theorem}
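The unit-mass property in the theorem rests on the divergence form of the scheme: on a periodic grid, central differences telescope, so the right-hand side sums to zero and the update leaves the total mass unchanged. A minimal sketch (the random data and the placeholder $w$ standing in for the discrete variational derivative are illustrative assumptions; the telescoping argument is independent of the $1/(2h)$ scaling):

```python
import random

def dc(f, h):
    # periodic central difference (f[i+1] - f[i-1]) / (2h)
    N = len(f)
    return [(f[(i + 1) % N] - f[(i - 1) % N]) / (2.0 * h) for i in range(N)]

random.seed(2)
N, h = 64, 1.0 / 64
U = [random.uniform(0.1, 2.0) for _ in range(N)]
w = [random.uniform(-1.0, 1.0) for _ in range(N)]  # placeholder variational derivative
rhs = dc([Ui * gi for Ui, gi in zip(U, dc(w, h))], h)
# the update adds a multiple of rhs to U, so total mass sum(U)*h is unchanged
print(abs(sum(rhs) * h) < 1e-8)
```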
We say that a scheme is consistent of order $(q,m)$ if the truncation error is of the order $O(\tau^q)+O(h^m)$ for $\tau\to 0$ and $h\to 0$.
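The temporal order $q=2$ of the BDF2 operator can be verified on a smooth function: the residual $\delta^{1,2}u(t)-u'(t)$ behaves like $O(\tau^2)$, so halving $\tau$ reduces it by about four. A minimal sketch (the test function and step sizes are illustrative assumptions):

```python
import math

def bdf2_residual(u, du, t, tau):
    # truncation error of the two-step BDF operator at time t for smooth u
    approx = (1.5 * u(t) - 2.0 * u(t - tau) + 0.5 * u(t - 2.0 * tau)) / tau
    return abs(approx - du(t))

r1 = bdf2_residual(math.sin, math.cos, 1.0, 1e-2)
r2 = bdf2_residual(math.sin, math.cos, 1.0, 5e-3)
print(3.5 < r1 / r2 < 4.5)  # halving tau reduces the residual by about 4
```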
The paper is organized as follows. The analysis of the BDF2 time approximation is performed in Section \ref{sec.bdf2}, and Theorems \ref{thm.bdf2.ex} and \ref{thm.bdf2.conv} are proved. The fully discrete variational derivative method is detailed in Section \ref{sec.dvdm}, and Theorem \ref{thm.dvdm} is proved. Numerical experiments in Section \ref{sec.num} illustrate the entropy stability, entropy dissipation, and energy (Fisher information) dissipation, even in situations not covered by the above theorems.
\section{BDF2 time approximation}\label{sec.bdf2}
First, we collect some auxiliary results. The following lemma is needed to show a priori bounds for the semi-discrete solutions to the DLSS equation.
\begin{lemma}\label{lem.ineq} For all $a$, $b$, $c\in{\mathbb R}$, \begin{align}
2\left(\frac32 a-2b+\frac12 c\right)a &\ge \frac32 a^2-2b^2+\frac12 c^2
+ (a-b)^2 - (b-c)^2, \label{ineq1} \\
2\left(\frac32 a-2b+\frac12 c\right)a &\ge \frac12\big(a^2+(2a-b)^2\big)
- \frac12\big(b^2+(2b-c)^2\big). \label{ineq2} \end{align} \end{lemma}
\begin{proof} We calculate $$
2\left(\frac32 a-2b+\frac12 c\right)a = \frac32 a^2-2b^2+\frac12 c^2
+ (a-b)^2 - (b-c)^2 + \frac12(a-2b+c)^2, $$
which proves the first assertion. Because of $$
2\left(\frac32 a-2b+\frac12 c\right)a = \frac12\big(a^2+(2a-b)^2\big)
- \frac12\big(b^2+(2b-c)^2\big) + \frac12(a-2b+c)^2, $$ the second assertion follows as well. \end{proof}
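Both identities in the proof share the nonnegative remainder $\frac12(a-2b+c)^2$ and can be verified numerically. A minimal sketch (the random sampling and the tolerance are illustrative assumptions):

```python
import random

random.seed(0)
ok = True
for _ in range(10_000):
    a, b, c = (random.uniform(-10.0, 10.0) for _ in range(3))
    lhs = 2.0 * (1.5 * a - 2.0 * b + 0.5 * c) * a
    r = 0.5 * (a - 2.0 * b + c) ** 2          # common nonnegative remainder
    rhs1 = 1.5 * a**2 - 2.0 * b**2 + 0.5 * c**2 + (a - b) ** 2 - (b - c) ** 2
    rhs2 = 0.5 * (a**2 + (2.0 * a - b) ** 2) - 0.5 * (b**2 + (2.0 * b - c) ** 2)
    ok = ok and abs(lhs - rhs1 - r) < 1e-8 and abs(lhs - rhs2 - r) < 1e-8
print(ok)
```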
We also recall the following inequality (see \cite[Lemma 2.2]{JuMa08} for a proof).
\begin{lemma}\label{lem.H2} Let $d\ge 2$ and $\sqrt{n}\in H^2({{\mathbb T}^d})\cap W^{1,4}({{\mathbb T}^d})\cap L^\infty({{\mathbb T}^d})$ with $\inf_{{\mathbb T}^d} n>0$. Then, for any $(\sqrt{d}-1)^2/(d+2)<\alpha<(\sqrt{d}+1)^2/(d+2)$, $\alpha\neq 1$, $$
\frac{1}{4(\alpha-1)}\int_{{\mathbb T}^d} n\pa_{ij}^2(\log n)\pa_{ij}^2(n^{\alpha-1})\mathrm{d}x
\ge \kappa_\alpha\int_{{\mathbb T}^d}(\Delta n^{\alpha/2})^2 \mathrm{d}x $$ and for $\alpha=1$, $$
\frac14\int_{{\mathbb T}^d} n(\pa_{ij}^2(\log n))^2 \mathrm{d}x \ge \kappa_1\int_{{\mathbb T}^d}(\Delta\sqrt{n})^2\mathrm{d}x, $$ where $$
\kappa_\alpha = \frac{p(\alpha)}{\alpha^2(p(\alpha)-p(0))}>0 \quad\mbox{and}\quad
p(\alpha) = -\alpha^2 + \frac{2(d+1)}{d+2}\alpha - \left(\frac{d-1}{d+2}\right)^2. $$ \end{lemma}
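The positivity of $\kappa_\alpha$ can be read off from the formula: the roots of $p$ are exactly the endpoints $(\sqrt d\mp 1)^2/(d+2)$ of the admissible interval, and $p(0)\le 0$. A minimal numerical sketch for $d=2,3$ (the sampling grid is an illustrative assumption):

```python
def p(alpha, d):
    return -alpha**2 + 2.0 * (d + 1) / (d + 2) * alpha - ((d - 1) / (d + 2)) ** 2

def kappa(alpha, d):
    return p(alpha, d) / (alpha**2 * (p(alpha, d) - p(0.0, d)))

ok = True
for d in (2, 3):
    lo = (d**0.5 - 1.0) ** 2 / (d + 2)   # left endpoint: root of p
    hi = (d**0.5 + 1.0) ** 2 / (d + 2)   # right endpoint: root of p
    for j in range(1, 100):
        alpha = lo + (hi - lo) * j / 100.0
        ok = ok and kappa(alpha, d) > 0.0
print(ok)
```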
\begin{proof}[Proof of Theorem \ref{thm.bdf2.ex}.] Given $v_0=n_0^{\alpha/2}$, the existence of a nonnegative weak solution $v_1\in H^2({{\mathbb T}^d})$ to \eqref{alpha_euler} is shown in \cite{JuMa08}. Assume that $v_2,\ldots,v_{k}\in H^2({{\mathbb T}^d})$ are solutions to \eqref{weak_alpha}. We introduce the variable $y$ by $v_{k+1}=e^{\alpha y/2}$ such that $n_{k+1}=e^y$. First, we prove the existence of a weak solution $y\in H^2({{\mathbb T}^d})$ to the regularized equation \begin{equation}\label{reg.form}
\frac{2}{\alpha\tau}e^{(1-\alpha/2)y}\left(\frac32e^{\alpha y/2} - 2v_k
+ \frac12 v_{k-1}\right) + \frac12\pa_{ij}^2(e^y\pa_{ij}^2y)
+ \eps L(y) = 0, \end{equation} where $\eps>0$ and $$ L(y) = \Delta^2 y
- \operatorname{div}(|\nabla y|^2\nabla y)
+ y. $$
{\em Step 1: Definition of the fixed-point operator.} Given $z\in W^{1,4}({{\mathbb T}^d})$ and $\sigma\in [0,1]$, we define on $H^2({{\mathbb T}^d})$ the forms \begin{align*}
a(y,\phi) &= \frac{1}{2}\int_{{{\mathbb T}^d}}e^z\pa_{ij}^2y\pa_{ij}^2\phi \mathrm{d}x
+ \eps\int_{{{\mathbb T}^d}}\big(\Delta y\Delta\phi
+ |\nabla z|^2\na y\cdot\na\phi + y\phi\big)\mathrm{d}x, \\
f(\phi) &= -\frac{2\sigma}{\alpha\tau}\int_{{{\mathbb T}^d}}e^{(1-\alpha/2)z}
\left(\frac32 e^{\alpha z/2} - 2v_{k} + \frac12 v_{k-1}\right)\phi \mathrm{d}x. \end{align*} Since $H^2({{\mathbb T}^d})\hookrightarrow W^{1,4}({{\mathbb T}^d})\hookrightarrow L^\infty({{\mathbb T}^d})$ with continuous embeddings (remember that $d\le3$), these mappings are well defined and continuous. Furthermore, by the Poincar\'e inequality for periodic functions with constant $C_P>0$, the bilinear form $a$ is coercive, \begin{align*}
\eps\|y\|_{H^2({{\mathbb T}^d})}^2 &= \eps\int_{{\mathbb T}^d}\big(|\na^2 y|^2 + |\na y|^2 + y^2\big)\mathrm{d}x
\le \eps\int_{{\mathbb T}^d}\big((C_P^2+1)|\na^2 y|^2 + y^2\big)\mathrm{d}x \\
&= \eps\int_{{\mathbb T}^d}\big((C_P^2+1)(\Delta y)^2 + y^2\big)\mathrm{d}x
\le \eps(C_P^2+1)\int_{{\mathbb T}^d}((\Delta y)^2 + y^2)\mathrm{d}x \\
&\le (C_P^2+1) a(y,y). \end{align*} By the Lax-Milgram lemma, there exists a unique solution $y\in H^2({{\mathbb T}^d})$ to $$
a(y,\phi) = f(\phi) \quad\mbox{for all }\phi\in H^2({{\mathbb T}^d}). $$ This defines the fixed-point operator $S:W^{1,4}({{\mathbb T}^d})\times[0,1]\to W^{1,4}({{\mathbb T}^d})$, $S(z,\sigma)=y$. We have $S(y,0)=0$ for all $y\in W^{1,4}({{\mathbb T}^d})$, and $S$ is continuous and compact, in view of the compact embedding $H^2({{\mathbb T}^d})\hookrightarrow W^{1,4}({{\mathbb T}^d})$. In order to apply the Leray-Schauder theorem, it remains to show that there exists a uniform bound in $W^{1,4}({{\mathbb T}^d})$ for all fixed points of $S(\cdot,\sigma)$.
{\em Step 2: A priori bound.} Let $y\in H^2({{\mathbb T}^d})$ be a fixed point of $S(\cdot,\sigma)$ for some $\sigma\in[0,1]$. We employ the test function $\phi=y$ in \eqref{reg.form}. This gives \begin{align}
0 &= \frac{2\sigma}{\alpha\tau} \int_{{{\mathbb T}^d}}e^{(1-\alpha/2)y}\left(\frac32 e^{\alpha y/2} - 2v_{k}
+ \frac12 v_{k-1}\right)y \mathrm{d}x \nonumber\\
&\phantom{xx}{}
+ \frac{1}{2}\int_{{{\mathbb T}^d}}e^y (\pa_{ij}^2y)^2\mathrm{d}x
+ \eps\int_{{{\mathbb T}^d}}\big((\Delta y)^2 + |\nabla y|^4 + y^2 \big)\mathrm{d}x. \label{reg.weak1} \end{align} To estimate the first integral, we distinguish the domains $\{y<0\}$ and $\{y\ge 0\}$: \begin{align*}
\frac{2\sigma}{\alpha\tau} \int_{{{\mathbb T}^d}}e^{(1-\alpha/2)y}&\left(\frac32 e^{\alpha y/2} - 2v_{k}
+ \frac12 v_{k-1}\right)y \mathrm{d}x \\
&= \frac{\sigma}{\alpha\tau}\int_{\{y < 0\}}\big(3e^{y}y - 4e^{(1-\alpha/2)y}v_{k}y
+ e^{(1-\alpha/2)y}v_{k-1}y\big)\mathrm{d}x \\
&\phantom{xx}{} + \frac{\sigma}{\alpha\tau}\int_{\{y\geq 0\}}
\big(3e^{y}y - 4e^{(1-\alpha/2)y}v_{k}y
+ e^{(1-\alpha/2)y}v_{k-1}y\big)\mathrm{d}x. \end{align*} The first integral on the right-hand side is estimated by using the Young inequalities $-4e^{(1-\alpha/2)y}v_{k}y\ge -2e^{(2-\alpha)y} y^2 - 2v_k^2$ and $e^{(1-\alpha/2)y}v_{k-1}y\ge -\frac12 e^{(2-\alpha)y} y^2 - \frac12v_{k-1}^2$: \begin{align*}
\frac{\sigma}{\alpha\tau}\int_{\{y < 0\}} & \big(3e^{y}y - 4e^{(1-\alpha/2)y}v_{k}y
+ e^{(1-\alpha/2)y}v_{k-1}y\big)\mathrm{d}x \\
&\ge \frac{\sigma}{\alpha\tau}\int_{\{y < 0\}}\left(3e^y y - \frac52 e^{(2-\alpha)y} y^2 - 2v_k^2
-\frac12 v_{k-1}^2\right)\mathrm{d}x \\
&= \frac{\sigma}{\alpha\tau}\int_{\{y < 0\}}\left(e^y(y-1) + 1
+ \left(1+2y\right)e^y -\frac52e^{(2-\alpha)y} y^2 - 1 - 2v_k^2 - \frac12v_{k-1}^2\right)\mathrm{d}x. \end{align*} Since $y\mapsto (1+2y)e^y -\frac52e^{(2-\alpha)y} y^2 - 1$, $y<0$, is bounded from below (remember that $\alpha<2$), we find that \begin{align*}
\frac{\sigma}{\alpha\tau}\int_{\{y < 0\}} &\big( 3e^{y}y - 4e^{(1-\alpha/2)y}v_{k}y
+ e^{(1-\alpha/2)y}v_{k-1}y\big)\mathrm{d}x\\
&\ge \frac{\sigma}{\alpha\tau}\int_{\{y < 0\}} \big(e^y(y-1) + 1\big)\mathrm{d}x - \frac{\sigma}{\alpha\tau}c_1
- \frac{\sigma}{\alpha\tau}\int_{\{y < 0\}}
\left(2v_k^2 + \frac12 v_{k-1}^2\right)\mathrm{d}x, \end{align*} where $c_1>0$ depends only on the lower bound of $y\mapsto (1+2y)e^y -\frac52 e^{(2-\alpha)y} y^2 - 1$, $y<0$, and $\mbox{meas}({{\mathbb T}^d})$. For the remaining integral over $\{y\ge 0\}$, we employ the Young inequalities $-4e^{(1-\alpha/2)y} v_k y\ge -2e^{(2-\alpha)y} - y^4 - v_k^4$ and $e^{(1-\alpha/2)y}v_{k-1}y\ge -\frac12 e^{(2-\alpha)y} - \frac14 y^4 - \frac14 v_{k-1}^4$: \begin{align*}
\frac{\sigma}{\alpha\tau}\int_{\{y \ge 0\}} & \big(3e^{y}y - 4e^{(1-\alpha/2)y}v_{k}y
+ e^{(1-\alpha/2)y}v_{k-1}y\big)\mathrm{d}x \\
&\ge \frac{\sigma}{\alpha\tau}\int_{\{y \ge 0\}}\Big(3e^y y - \frac52 e^{(2-\alpha)y}
- \frac54 y^4 - v_k^4 - \frac14 v_{k-1}^4\Big)\mathrm{d}x \\
&= \frac{\sigma}{\alpha\tau}\int_{\{y \ge 0\}}\Big(e^y(y-1) + 1 + \Big((1+2y)e^y - \frac52 e^{(2-\alpha)y} - \frac54 y^4 - 1\Big) \\ &\phantom{xxxxxxxxxxx}{} - v_k^4 - \frac14 v_{k-1}^4\Big)\mathrm{d}x. \end{align*} The mapping $y\mapsto (1+2y)e^y - \frac52 e^{(2-\alpha)y} - \frac54 y^4 - 1$, $y\geq 0$, is bounded from below which implies the existence of a constant $c_2>0$ such that \begin{align*}
\frac{\sigma}{\alpha\tau}\int_{\{y \ge 0\}} &\big(3e^{y}y - 4e^{(1-\alpha/2)y}v_{k}y
+ e^{(1-\alpha/2)y}v_{k-1}y\big)\mathrm{d}x\\
&\ge \frac{\sigma}{\alpha\tau}\int_{\{y \ge 0\}}\big(e^y(y-1)+1\big)\mathrm{d}x
- \frac{\sigma}{\alpha\tau}c_2 - \frac{\sigma}{\alpha\tau}\int_{\{y \ge 0\}}
\left(v_k^4+\frac14 v_{k-1}^4\right)\mathrm{d}x. \end{align*} Summarizing the estimates for both integrals over $\{y<0\}$ and $\{y\ge 0\}$, it follows that \begin{align}
\frac{2\sigma}{\alpha\tau} \int_{{{\mathbb T}^d}} & e^{(1-\alpha/2)y}\left(\frac32 e^{\alpha y/2} - 2v_{k}
+ \frac12 v_{k-1}\right)y \mathrm{d}x \label{aux1} \ge \frac{\sigma}{\alpha\tau}\int_{{\mathbb T}^d} \big(e^y(y-1)+1\big)\mathrm{d}x\\
&\phantom{xxxxxxx}{} - \frac{\sigma}{\alpha\tau}\int_{{\mathbb T}^d}
\left(2v_k^2 + v_k^4 + \frac12 v_{k-1}^2 + \frac14 v_{k-1}^4\right)\mathrm{d}x
- \frac{\sigma}{\alpha\tau}(c_1+c_2). \nonumber \end{align}
For the second integral in \eqref{reg.weak1}, we use Lemma \ref{lem.H2}: $$
\frac12\int_{{{\mathbb T}^d}}e^y (\pa_{ij}^2y)^2\mathrm{d}x
\geq 2\kappa_1\int_{{{\mathbb T}^d}}\big(\Delta e^{y/2}\big)^2\mathrm{d}x, $$ where $\kappa_1>0$ depends only on the space dimension $d$. With this estimate and \eqref{aux1}, equation \eqref{reg.weak1} implies that \begin{align*}
\frac{\sigma}{\alpha\tau}\int_{{{\mathbb T}^d}} & \big(e^y(y-1)+1\big)\mathrm{d}x
+ 2\kappa_1\int_{{{\mathbb T}^d}}\big(\Delta e^{y/2}\big)^2\mathrm{d}x
+ \eps \int_{{{\mathbb T}^d}}\big((\Delta y)^2 + |\nabla y|^4 + y^2 \big)\mathrm{d}x \\
&\leq \frac{\sigma}{\alpha\tau}\int_{{{\mathbb T}^d}}\left(2v_{k}^2 + v_{k}^4
+ \frac12 v_{k-1}^2 + \frac14 v_{k-1}^4\right)\mathrm{d}x + \frac{\sigma}{\alpha\tau}(c_1 + c_2). \end{align*} By the definition of the entropy, this inequality can be written as \begin{align}
E_1[n] &+ \frac{2\alpha\tau\kappa_1}{\sigma}\int_{{{\mathbb T}^d}}\big(\Delta e^{y/2}\big)^2\mathrm{d}x
+ \frac{\eps\alpha\tau}{\sigma} \int_{{{\mathbb T}^d}} \big((\Delta y)^2 + |\nabla y|^4 + y^2\big)\mathrm{d}x \nonumber \\
&\quad\leq \int_{{{\mathbb T}^d}}\left(2v_{k}^2 + v_{k}^4 + \frac12 v_{k-1}^2
+ \frac14 v_{k-1}^4\right)\mathrm{d}x + c_1 + c_2. \label{main.ap2} \end{align} The right-hand side gives a uniform (with respect to $\sigma$) bound since $v_{k-1}$, $v_k\in W^{1,4}({{\mathbb T}^d})$. Hence, by the Poincar\'e inequality we obtain the $H^2$-bound $$
\|y\|_{H^2({{\mathbb T}^d})}^2 \le C\int_{{\mathbb T}^d}\big((\Delta y)^2 + y^2\big)\mathrm{d}x \le C, $$
where the constant $C>0$ depends on $\alpha$, $\eps$, $\tau$, $v_k$, and $v_{k-1}$ but not on $\sigma$. The continuous embedding $H^2({{\mathbb T}^d})\hookrightarrow W^{1,4}({{\mathbb T}^d})$ then implies the desired uniform bound, $\|y\|_{W^{1,4}({{\mathbb T}^d})} \le C$. The Leray-Schauder fixed-point theorem provides the existence of a fixed point $y_\eps$ of $S(\cdot,1)$, i.e.\ of a solution to \eqref{reg.form}.
{\em Step 3: Limit $\eps\to 0$.} Let $y_\eps$ be a solution to \eqref{reg.form}, constructed in the previous steps. Set $v_\eps:=e^{\alpha y_\eps/2}$ and $n_\eps:=e^{y_\eps}$. Then $v_\eps$ solves \begin{equation}\label{veps}
\frac{2}{\alpha\tau}v_\eps^{2/\alpha-1}
\left(\frac32 v_\eps - 2v_k + \frac12 v_{k-1}\right)
+ \pa_{ij}^2\left(\frac{1}{\alpha}v_\eps^{2/\alpha-1}\pa_{ij}^2 v_\eps
- \alpha \pa_i(v_\eps^{1/\alpha})\pa_j(v_\eps^{1/\alpha})\right)
+ \eps L(y_\eps) = 0. \end{equation} The goal is to pass to the limit $\eps\to 0$ in this equation.
Let $\alpha > 1$. We employ the test function $e^{(\alpha-1)y_\eps}/(\alpha-1)\in H^2({{\mathbb T}^d})$ in \eqref{reg.form} and find that \begin{align*}
0 &= \frac{2}{\alpha(\alpha-1)}\int_{{\mathbb T}^d}
\left(\frac32 v_\eps - 2v_k + \frac12 v_{k-1}\right)v_\eps\mathrm{d}x
+ \frac{\tau}{2(\alpha-1)}\int_{{\mathbb T}^d} e^{y_\eps}\pa_{ij}^2 y_\eps\pa_{ij}^2
(e^{(\alpha-1)y_\eps})\mathrm{d}x \\
&\phantom{xx}{}
+ \frac{\eps\tau}{\alpha-1}\langle L(y_\eps),e^{(\alpha-1)y_\eps}\rangle_{H^{-2},H^2}. \end{align*} Inequality \eqref{ineq1} shows that \begin{align*}
\frac{2}{\alpha(\alpha-1)}\int_{{\mathbb T}^d}
\left(\frac32 v_\eps - 2v_k + \frac12 v_{k-1}\right)v_\eps\mathrm{d}x
&\ge \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d}\left(\frac32 v_\eps^2 - 2v_k^2
+ \frac12 v_{k-1}^2\right)\mathrm{d}x \\
&\phantom{xx}{}+ \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d}\big((v_\eps-v_k)^2
- (v_k-v_{k-1})^2\big)\mathrm{d}x. \end{align*} The integral involving the second derivatives is again estimated by using Lemma \ref{lem.H2}: $$
\frac{\tau}{2(\alpha-1)}\int_{{\mathbb T}^d} e^{y_\eps}\pa_{ij}^2 y_\eps\pa_{ij}^2
(e^{(\alpha-1)y_\eps})\mathrm{d}x
\ge 2 \kappa_\alpha\tau\int_{{\mathbb T}^d}(\Delta e^{\alpha y_\eps/2})^2 \mathrm{d}x
= 2\kappa_\alpha\tau\int_{{\mathbb T}^d}(\Delta v_\eps)^2 \mathrm{d}x. $$ Now let us consider the $\eps$-term and show that $\langle L(y_\eps),e^{(\alpha-1)y_\eps}\rangle_{H^{-2},H^2}$ is bounded from below uniformly in $\eps$. By construction, $v_\eps$ and $n_\eps$ are strictly positive since $y_\eps\in H^2({{\mathbb T}^d})\hookrightarrow L^\infty({{\mathbb T}^d})$. Therefore, we can write (cf.~\cite[Section 4.1]{JuMa08}) \begin{align*}
\langle L(y_\eps),e^{(\alpha-1)y_\eps}&\rangle_{H^{-2},H^2} = 4(\alpha-1)\int_{{{\mathbb T}^d}}e^{(\alpha-1)y_\eps}\Big(\frac{\Delta e^{y_\eps/2}}{e^{y_\eps/2}} - (2-\alpha)\Big|\frac{\nabla e^{y_\eps/2}}{e^{y_\eps/2}}\Big|^2\Big)^2\mathrm{d}x\\
&\phantom{xx}{} + 4(\alpha^2-1)(3-\alpha)\int_{{{\mathbb T}^d}}e^{(\alpha-1)y_\eps} \Big|\frac{\nabla e^{y_\eps/2}}{e^{y_\eps/2}}\Big|^4\mathrm{d}x
+ \int_{{{\mathbb T}^d}}y_\eps e^{(\alpha-1)y_\eps}\mathrm{d}x \geq -C, \end{align*} where $C>0$ depends only on $\alpha$. We have used the fact that $x e^{(\alpha-1)x}\geq -1/((\alpha-1)e)$ for all $x\in{\mathbb R}$.
Summarizing the above inequalities, we obtain \begin{align}
\frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} & \left(\frac32 v_\eps^2 - 2v_k^2
+ \frac12 v_{k-1}^2\right)\mathrm{d}x
+ \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d}\big((v_\eps-v_k)^2 - (v_k-v_{k-1})^2\big)\mathrm{d}x\nonumber \\
&{}+ 2\tau\kappa_\alpha\int_{{\mathbb T}^d}(\Delta v_\eps)^2 \mathrm{d}x
\le C\eps.\label{main.ap1} \end{align} Inequality \eqref{main.ap1} provides the estimate for $(v_\eps)$ in $H^2({{\mathbb T}^d})$ uniformly in $\eps$. Therefore, there exists a limit function $v\in H^2({{\mathbb T}^d})$ such that, up to a subsequence, as $\eps\to 0$, \begin{align*}
v_\eps \rightharpoonup v &\quad\mbox{weakly in }H^2({{\mathbb T}^d}), \\
v_\eps\to v &\quad\mbox{strongly in }W^{1,4}({{\mathbb T}^d})\mbox{ and }L^\infty({{\mathbb T}^d}). \end{align*} Consequently, since $2/\alpha-1>0$, \begin{equation}\label{conv1}
v_\eps^{2/\alpha-1}\pa_{ij}^2 v_\eps \rightharpoonup
v^{2/\alpha-1}\pa_{ij}^2 v \quad\mbox{weakly in } L^2({{\mathbb T}^d}),\ i,j=1,\ldots,d. \end{equation}
According to the Lions-Villani lemma on the regularity of the square root of Sobolev functions (see the version in \cite[Lemma 26]{BJM11}), there exists $C>0$ independent of $\eps$ such that $$
\|\sqrt{v_\eps}\|_{W^{1,4}({{\mathbb T}^d})}^2 \le C\|v_\eps\|_{H^2({{\mathbb T}^d})}\le C. $$ Since $1/2 < 1/\alpha < 1$, Proposition A.1 in \cite{JuMi09} shows that the strong convergence $v_\eps\to v$ in $H^1({{\mathbb T}^d})$ and the boundedness of $(\sqrt{v_\eps})$ in $W^{1,4}({{\mathbb T}^d})$ imply that $$
v_\eps^{1/\alpha} \to v^{1/\alpha} \quad\mbox{strongly in }W^{1,2\alpha}({{\mathbb T}^d}). $$ Hence, we have \begin{equation}\label{conv2}
\pa_i(v_\eps^{1/\alpha})\pa_j(v_\eps^{1/\alpha})\to
\pa_i(v^{1/\alpha})\pa_j(v^{1/\alpha}) \quad\mbox{strongly in }
L^\alpha({{\mathbb T}^d}),\ i,j=1,\ldots,d. \end{equation} Estimate \eqref{main.ap2} and $E_1[n]\ge0$ provide the uniform bound $$
\sqrt{\eps}\|y_\eps\|_{H^2({{\mathbb T}^d})} + \sqrt[4]{\eps}\|\na y_\eps\|_{L^4({{\mathbb T}^d})} \le C, $$ which shows that \begin{equation}\label{conv3}
\eps L(y_\eps)\rightharpoonup 0 \quad\mbox{ weakly in }H^{-2}({{\mathbb T}^d}). \end{equation}
Using $\phi\in W^{2,\infty}({{\mathbb T}^d})$ as a test function in the weak formulation of \eqref{veps}, the convergence results \eqref{conv1}-\eqref{conv3} allow us to pass to the limit $\eps\to 0$ in the resulting equation, which yields \eqref{weak_alpha} for $v_{k+1}:=v$. In fact, it is sufficient to use test functions $\phi\in W^{2,\alpha/(\alpha-1)}({{\mathbb T}^d})$.
If $\alpha=1$, the convergence result follows similarly as above, based on the uniform bound $\|e^{y_\eps/2}\|_{H^2}\leq C$, which is obtained from the a priori estimate \eqref{main.ap2}, using the elementary inequality $s \leq s(\log s - 1) + e$ for all $s \geq 0$; this gives a uniform $L^1$-bound for $e^{y_\eps}$, i.e.\ a uniform $L^2$-bound for $e^{y_\eps/2}$. In that case, the test functions $\phi\in H^2({{\mathbb T}^d})$ can be used in \eqref{veps}.
{\em Step 4: Entropy stability.} Let $\alpha>1$. Using the test function $v_1^{2-2/\alpha}/(\alpha-1)$ in \eqref{alpha_euler}, it follows that \begin{equation*}
\frac{1}{\tau\alpha(\alpha-1)}\int_{{\mathbb T}^d}\big(v_1^2 - v_0^2 + (v_1-v_0)^2\big)\mathrm{d}x
+ \frac{1}{2(\alpha-1)}\int_{{\mathbb T}^d} v_1^{2/\alpha}\pa_{ij}^2(\log v_1^{2/\alpha})
\pa_{ij}^2(v_1^{2-2/\alpha})\mathrm{d}x = 0. \end{equation*} By Lemma \ref{lem.H2}, we infer that \begin{equation}\label{5.1}
\frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} \big(v_1^2 + (v_1-v_0)^2\big) \mathrm{d}x
+ 2\tau\kappa_\alpha\int_{{\mathbb T}^d}(\Delta v_1)^2\mathrm{d}x
\le \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} v_0^2 \mathrm{d}x. \end{equation} This gives an $H^2$-bound for $v_1$.
Next, let $k\ge 1$ and let $y_\eps$ be a weak solution to \eqref{reg.form}. Set $v_\eps=e^{\alpha y_\eps/2}$. The convergence results of Step 3 allow us to pass to the limit $\eps\to 0$ in \eqref{main.ap1}.
Using the weak lower semicontinuity of $u\mapsto\|\Delta u\|_{L^2({{\mathbb T}^d})}^2$ on $H^2({{\mathbb T}^d})$, it follows that \begin{align}
\frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} & \left(\frac32 v_{k+1}^2 - 2v_k^2
+ \frac12 v_{k-1}^2\right)\mathrm{d}x
+ \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d}\big((v_{k+1}-v_k)^2 - (v_k-v_{k-1})^2\big)\mathrm{d}x
\nonumber \\
&{}+ 2\kappa_\alpha\tau\int_{{\mathbb T}^d}(\Delta v_{k+1})^2 \mathrm{d}x \le 0, \label{5.k} \end{align} where, as before, $v_{k+1}=\lim_{\eps\to 0}v_\eps$. Summing \eqref{5.1} and \eqref{5.k} over $k=1,\ldots,m-1$, some terms cancel and we end up with $$
\frac{3}{2\alpha(\alpha-1)}\int_{{\mathbb T}^d} v_m^2 \mathrm{d}x
+ 2\kappa_\alpha\tau\sum_{k=0}^{m-1}\int_{{\mathbb T}^d}(\Delta v_{k+1})^2\mathrm{d}x
\le \frac{1}{2\alpha(\alpha-1)}\int_{{\mathbb T}^d}(v_{m-1}^2 + v_1^2 + v_0^2)\mathrm{d}x. $$
Set $a_m=\|v_m\|_{L^2({{\mathbb T}^d})}^2$ for $m\ge 0$. By \eqref{5.1}, $a_1\le a_0$. Then, the above estimate shows that $a_m\le \frac13 a_{m-1} + \frac23 a_0$. A simple induction argument gives $a_m\le a_0$ for all $m\ge 1$. Therefore, $$
\frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} v_m^2\mathrm{d}x
+ \frac43\kappa_\alpha\tau\sum_{k=1}^m\int_{{\mathbb T}^d}(\Delta v_k)^2\mathrm{d}x
\le \frac{1}{\alpha(\alpha-1)}\int_{{\mathbb T}^d} v_0^2 \mathrm{d}x. $$ This implies the entropy stability estimate \eqref{ent.stab}. \end{proof}
The proof of Corollary \ref{coro.bdf2} is a consequence of the above proof. Indeed, employing inequality \eqref{ineq2} instead of \eqref{ineq1}, we can replace \eqref{5.k} by \begin{align*}
\frac{1}{2\alpha(\alpha-1)}\int_{{\mathbb T}^d} & \big(v_{k+1}^2 + (2v_{k+1}-v_k)^2\big)\mathrm{d}x
+ 2\kappa_\alpha\tau\int_{{\mathbb T}^d}(\Delta v_{k+1})^2 \mathrm{d}x \\
&\le \frac{1}{2\alpha(\alpha-1)}\int_{{\mathbb T}^d} \big(v_{k}^2 + (2v_{k}-v_{k-1})^2\big)\mathrm{d}x, \end{align*} which is precisely \eqref{ent.diss}.
Next, we prove that, if $\alpha=1$, the solutions $v_k$ are smooth as long as they are strictly positive.
\begin{lemma}\label{lem.smooth} Let $\alpha=1$ and let $(v_k)$ be the sequence of weak solutions constructed in Theorem~\ref{thm.bdf2.ex} satisfying $v_k\ge \mu_k$ in ${{\mathbb T}^d}$ for some constants $\mu_k>0$, $k\ge 1$. Then $v_k\in C^\infty({{\mathbb T}^d})$. \end{lemma}
\begin{proof} We recall that the weak form \eqref{weak_alpha} for $\alpha=1$ reads as $$
\int_{{\mathbb T}^d} v_{k+1}\left(\frac32 v_{k+1}-2v_k+\frac12 v_{k-1}\right)\phi \mathrm{d}x
+ \frac{\tau}{2}\int_{{\mathbb T}^d}\big(v_{k+1}\pa_{ij}^2v_{k+1}-\pa_i v_{k+1}\pa_j v_{k+1}\big)
\pa_{ij}^2\phi \mathrm{d}x = 0 $$ for $\phi\in H^2({{\mathbb T}^d})$. Since $v_k$ is assumed to be strictly positive, we can write $$
v_{k+1}\pa_{ij}^2v_{k+1}-\pa_i v_{k+1}\pa_j v_{k+1}
= \frac12 n_{k+1}\pa_{ij}^2\log n_{k+1}, $$ where $n_{k+1}=v_{k+1}^2$ and consequently, \begin{equation}\label{vk}
v_{k+1}\left(\frac32 v_{k+1}-2v_k+\frac12 v_{k-1}\right)
+ \frac{\tau}{4}\pa_{ij}^2(n_{k+1}\pa_{ij}^2\log n_{k+1}) = 0
\quad\mbox{in }H^{-2}({{\mathbb T}^d}). \end{equation} With the identity $$
\pa_{ij}^2(n_{k+1}\pa_{ij}^2\log n_{k+1})
= \Delta^2 n_{k+1} - \pa_i\left(2\frac{\pa_{ij}^2n_{k+1}\pa_j n_{k+1}}{n_{k+1}}
- \frac{(\pa_j n_{k+1})^2\pa_i n_{k+1}}{n_{k+1}^2}\right), $$ it follows that $n_{k+1}$ solves \begin{equation}\label{delta2}
\Delta^2 n_{k+1} = \pa_i\left(2\frac{\pa_{ij}^2n_{k+1}\pa_j n_{k+1}}{n_{k+1}}
- \frac{(\pa_j n_{k+1})^2\pa_i n_{k+1}}{n_{k+1}^2}\right)
- \frac{4}{\tau}v_{k+1}\left(\frac32 v_{k+1}-2v_k+\frac12 v_{k-1}\right) \end{equation} in the sense of $H^{-2}({{\mathbb T}^d})$. The second term on the right-hand side is an element of $H^2({{\mathbb T}^d})$. The continuity of the Sobolev embedding $H^2({{\mathbb T}^d})\hookrightarrow W^{1,6}({{\mathbb T}^d})$ (for $d\le 3$) implies that $(\pa_j n_{k+1})^2\pa_i n_{k+1}/n_{k+1}\in L^2({{\mathbb T}^d})$ and $\pa_{ij}^2n_{k+1}\pa_j n_{k+1}/n_{k+1}\in L^{3/2}({{\mathbb T}^d})\hookrightarrow H^{-1/2}({{\mathbb T}^d})$ for all $i,j=1,\ldots,d$. This proves that $$
\Delta^2 n_{k+1}\in H^{-3/2}({{\mathbb T}^d}). $$ The regularity theory for elliptic operators on ${{\mathbb T}^d}$ (e.g., using Fourier transforms on the torus) yields $n_{k+1}\in H^{5/2}({{\mathbb T}^d})$, which improves the previous regularity $n_{k+1}\in H^2({{\mathbb T}^d})$. Taking into account the improved regularity and the embedding $H^{5/2}({{\mathbb T}^d})\hookrightarrow W^{2,3}({{\mathbb T}^d})$, we infer that the right-hand side of \eqref{delta2} lies in $H^{-1}({{\mathbb T}^d})$, i.e. $$
\Delta^2 n_{k+1}\in H^{-1}({{\mathbb T}^d}), $$ which implies that $n_{k+1}\in H^3({{\mathbb T}^d})$. By bootstrapping, we conclude that $n_{k+1}\in H^m({{\mathbb T}^d})$ for all $m\in{\mathbb N}$. \end{proof}
Now, we are in a position to prove Theorem \ref{thm.bdf2.conv}.
\begin{proof}[Proof of Theorem \ref{thm.bdf2.conv}] Let $(v_k)$ be a sequence of weak solutions to \eqref{weak_alpha}. Since we have assumed that $v_k$ is strictly positive, Lemma \ref{lem.smooth} shows that $v_k$ is smooth. As a consequence, $v_k$ solves (see \eqref{vk}) $$
\frac32 v_{k+1} - 2v_k + \frac12 v_{k-1} + \frac{1}{v_{k+1}}
\pa_{ij}^2\big(v_{k+1}^2\pa_{ij}^2\log v_{k+1}\big) = 0 \quad\mbox{in }{{\mathbb T}^d}. $$ Let $n=v^2$ be a solution to \eqref{dlss} with the regularity indicated in the theorem. By Taylor expansion, $$
v_t(t_{k+1}) = \frac{1}{\tau}\left(\frac32 v(t_{k+1}) - 2v(t_k)
+ \frac12 v(t_{k-1})\right) + \frac{f_k}{\tau}, \quad k\ge 1, $$ where $$
f_k = -\int_{t_k}^{t_{k+1}} v_{ttt}(s)(t_k-s)^2 {\mathrm{d}} s
+ \frac14\int_{t_{k-1}}^{t_{k+1}}v_{ttt}(s)(t_{k-1}-s)^2{\mathrm{d}} s $$ can be interpreted as the local truncation error. We estimate $f_k$ as follows: \begin{equation}\label{f.k}
\sum_{k=1}^{m-1}\|f_k\|_{L^2({{\mathbb T}^d})}^2 \le C_R\|v_{ttt}\|_{L^2(0,T;L^2({{\mathbb T}^d}))}^2\tau^5, \end{equation} where $C_R>0$ does not depend on $\tau$ or $m$. Similarly, we have $$
v_t(t_1) = \frac{1}{\tau}(v(t_1)-v(t_0)) + \frac{f_0}{\tau}, \quad\mbox{where }
f_0 = \int_0^\tau v_{tt}(s)s{\mathrm{d}} s, $$ and \begin{equation}\label{f.0}
\|f_0\|_{L^2({{\mathbb T}^d})}
\le \int_0^\tau \|v_{tt}(s)\|_{L^2({{\mathbb T}^d})} s {\mathrm{d}} s
\le \frac{\tau^2}{2}\|v_{tt}\|_{L^\infty(0,T;L^2({{\mathbb T}^d}))}. \end{equation} Replacing the time derivative $v_t$ in \eqref{dlss}, written as $v_t + v^{-1}\pa_{ij}^2(v^2\pa_{ij}^2\log v) = 0$, by the above expansions, it follows that \begin{align}
v(t_1)-v(t_0) + \frac{\tau}{v(t_1)}\pa_{ij}^2\big(v(t_1)^2\pa_{ij}^2\log v(t_1)\big)
&= -f_0, \label{v.0} \\
\frac32 v(t_{k+1}) - 2v(t_k) + \frac12 v(t_{k-1}) + \frac{\tau}{v(t_{k+1})}
\pa_{ij}^2\big(v(t_{k+1})^2\pa_{ij}^2\log v(t_{k+1})\big)
&= -f_k, \label{v.k} \end{align} for $k\ge 1$. Taking the difference of \eqref{alpha_euler}, multiplied by $v_1^{-1}$, and \eqref{v.0}, and the difference of \eqref{disc.dlss}, multiplied by $v_{k+1}^{-1}$, and \eqref{v.k}, we obtain the error equations for $e_k:=v_k-v(t_k)$: \begin{align*}
e_1-e_0 + \tau\big(A(v_1)-A(v(t_1))\big) &= f_0, \\
\frac32 e_{k+1} - 2e_k + \frac12 e_{k-1}
+ \tau\big(A(v_{k+1})-A(v(t_{k+1}))\big) &= f_k, \quad k\ge 1, \end{align*} where we have introduced the operator $$
A:D(A)\to H^{-2}({{\mathbb T}^d}), \quad A(v) = \frac{1}{v}\pa_{ij}^2(v^2\pa_{ij}^2\log v), $$ with domain $D(A)=\{v\in H^2({{\mathbb T}^d}):v>0$ in ${{\mathbb T}^d}\}$.
We multiply the error equations by $e_1$ and $e_{k+1}$, respectively, integrate over ${{\mathbb T}^d}$, and sum over $k=0,\ldots,m-1$: \begin{align}
\int_{{\mathbb T}^d} & (e_1-e_0)e_1 \mathrm{d}x + \sum_{k=1}^{m-1}\int_{{\mathbb T}^d}
\left(\frac32 e_{k+1}-2e_k+\frac12 e_{k-1}\right)e_{k+1}\mathrm{d}x \nonumber \\
&{}+ \tau\sum_{k=0}^{m-1}\int_{{\mathbb T}^d}\big(A(v_{k+1})-A(v(t_{k+1}))\big)
(v_{k+1}-v(t_{k+1}))\mathrm{d}x = \sum_{k=0}^{m-1}\int_{{\mathbb T}^d} f_k e_{k+1}\mathrm{d}x. \label{err} \end{align} Using $e_0=0$ and inequality \eqref{ineq1}, the first two integrands can be estimated by \begin{align*}
(e_1-e_0)e_1 +& \sum_{k=1}^{m-1}
\left(\frac32 e_{k+1}-2e_k+\frac12 e_{k-1}\right)e_{k+1} \\
&\ge e_1^2 + \sum_{k=1}^{m-1}\left(\frac34 e_{k+1}^2 - e_k^2 + \frac14 e_{k-1}^2
+ \frac12(e_{k+1}-e_k)^2 - \frac12(e_k-e_{k-1})^2\right) \\
&= e_1^2 + \frac34 e_m^2 - \frac34 e_1^2 - \frac14e_{m-1}^2 + \frac14 e_0^2
+ \frac12(e_m-e_{m-1})^2 - \frac12(e_1-e_0)^2 \\
&= \frac34 e_m^2 - \frac14 e_{m-1}^2 - \frac14 e_1^2 + \frac12(e_m-e_{m-1})^2 \\
&\ge \frac34 e_m^2 - \frac14 e_{m-1}^2 - \frac14 e_1^2. \end{align*} For the third integral in \eqref{err}, we employ the monotonicity of the operator $A$. In fact, it is proved in \cite[Lemma 3.5]{JuPi03} that for positive functions $w_1$, $w_2\in H^4({{\mathbb T}^d})$, $$
\int_{{\mathbb T}^d}(A(w_1)-A(w_2))(w_1-w_2)\mathrm{d}x = \int_{{\mathbb T}^d}\frac{1}{w_1w_2}
\left|\operatorname{div}\left(w_1^2\na\left(\frac{w_1-w_2}{w_1}\right)\right)\right|^2\mathrm{d}x \ge 0. $$ The right-hand side of \eqref{err} is estimated by Young's inequality: \begin{align*}
\int_{{{\mathbb T}^d}}f_0 e_{1}\mathrm{d}x
&\le 2\|f_0\|_{L^2({{\mathbb T}^d})}^2 + \frac18\|e_1\|_{L^2({{\mathbb T}^d})}^2, \\
\int_{{{\mathbb T}^d}}f_k e_{k+1}\mathrm{d}x
&\le \frac{1}{2\tau}\|f_k\|_{L^2({{\mathbb T}^d})}^2
+ \frac{\tau}{2}\|e_{k+1}\|_{L^2({{\mathbb T}^d})}^2,\quad k\geq 1. \end{align*} Summarizing the above estimates and taking into account \eqref{f.k} and \eqref{f.0}, we find that \begin{align*}
\frac34\|e_m\|_{L^2({{\mathbb T}^d})}^2
&\le \frac14\|e_{m-1}\|_{L^2({{\mathbb T}^d})}^2 + \frac14\|e_1\|_{L^2({{\mathbb T}^d})}^2
+ 2\|f_0\|_{L^2({{\mathbb T}^d})}^2 + \frac18\|e_1\|_{L^2({{\mathbb T}^d})}^2 \\
&\phantom{xx}{}+ \frac{1}{2\tau}\sum_{k=1}^{m-1}\|f_k\|_{L^2({{\mathbb T}^d})}^2
+ \frac{\tau}{2}\sum_{k=1}^{m-1}\|e_{k+1}\|_{L^2({{\mathbb T}^d})}^2 \\
&\le \frac14\|e_{m-1}\|_{L^2({{\mathbb T}^d})}^2 + \frac38\|e_1\|_{L^2({{\mathbb T}^d})}^2
+ C\tau^4 + \frac{\tau}{2}\sum_{k=2}^{m}\|e_{k}\|_{L^2({{\mathbb T}^d})}^2, \end{align*} where $C>0$ depends on the $L^2(0,T;L^2({{\mathbb T}^d}))$ norm of $v_{ttt}$ and the $L^\infty(0,T;L^2({{\mathbb T}^d}))$ norm of $v_{tt}$ but not on $\tau$. Taking the maximum over $m=1,\ldots,M$, we infer that $$
\frac34\max_{m=1,\ldots,M}\|e_m\|_{L^2({{\mathbb T}^d})}^2
\le \frac58\max_{m=1,\ldots,M}\|e_{m-1}\|_{L^2({{\mathbb T}^d})}^2
+ C\tau^4 + \frac{\tau}{2}\sum_{k=2}^{M}\|e_{k}\|_{L^2({{\mathbb T}^d})}^2. $$ The first term on the right-hand side is controlled by the left-hand side, leading to $$
\|e_M\|_{L^2({{\mathbb T}^d})}^2 \le \max_{m=1,\ldots,M}\|e_m\|_{L^2({{\mathbb T}^d})}^2
\le 8C\tau^4 + 4\tau\sum_{k=2}^{M}\|e_{k}\|_{L^2({{\mathbb T}^d})}^2. $$ We separate the last summand in the sum, $$
(1-4\tau)\|e_M\|_{L^2({{\mathbb T}^d})}^2
\le 8C\tau^4 + 4\tau\sum_{k=2}^{M-1}\|e_{k}\|_{L^2({{\mathbb T}^d})}^2, $$ and apply the inequality $1+x\le e^x$ for all $x\ge 0$ and the discrete Gronwall lemma (see, e.g., \cite[Theorem 4]{WiWo65}): \begin{align*}
\|e_M\|_{L^2({{\mathbb T}^d})}^2 &\le \frac{8C\tau^4}{1-4\tau}
\left(1+\frac{4\tau}{1-4\tau}\right)^{M-2}
\leq \frac{8C\tau^4}{1-4\tau}\exp\Big(\frac{4t_{M-2}}{1-4\tau}\Big)
\le 16C\tau^4 \exp(8t_{M-2}). \end{align*} The result follows for all $0<\tau<1/8$ with the constant $4\sqrt{C}\exp(4T)$, where $T>0$ is the terminal time. \end{proof}
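As a sanity check on the truncation-error bound \eqref{f.k}, the second-order accuracy of the BDF2 difference quotient can be verified numerically. The following Python sketch (our own illustration, with an arbitrary smooth test function, not part of the proof) approximates $v_t(t)$ by $\tau^{-1}\big(\frac32 v(t)-2v(t-\tau)+\frac12 v(t-2\tau)\big)$ and observes the expected $O(\tau^2)$ behavior:

```python
import numpy as np

def bdf2_quotient(v, t, tau):
    # two-step BDF approximation of v'(t)
    return (1.5 * v(t) - 2.0 * v(t - tau) + 0.5 * v(t - 2.0 * tau)) / tau

t = 1.0
errs = [abs(bdf2_quotient(np.sin, t, tau) - np.cos(t)) for tau in (1e-2, 5e-3)]
# halving tau should divide the error by about 4 (second order)
rate = np.log2(errs[0] / errs[1])
print(rate)
```

The observed rate is close to $2$, consistent with the leading error term $-\frac{\tau^2}{3}v_{ttt}(t)$ of the quotient.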
\section{Fully discrete variational derivative method}\label{sec.dvdm}
In this section, we explore the variational structure of the DLSS equation on a discrete level, using the discrete variational derivative method of \cite{FuMa10}. In order to explain the idea, we consider first the implicit Euler discretization.
Let $x_i=ih$, $i=0,\ldots,N-1$, be an equidistant grid on the one-dimensional torus ${\mathbb T}\cong[0,1)$, let $t_k=k\tau$ with $\tau>0$, and let $U_i^k$ approximate $n(t_k,x_i)$. Set $U^k=(U_0^k,\ldots,U_{N-1}^k)\in{\mathbb R}^N$ and $U_\ell = U_{\ell \bmod N}$ for all $\ell\in{\mathbb Z}$. We introduce the following difference operators for $U=(U_i)\in{\mathbb R}^N$:
\begin{center} \begin{tabular}{ll}
forward difference: & $\delta_i^+ U = h^{-1}(U_{i+1}-U_i)$, \\[1mm]
backward difference: & $\delta_i^- U = h^{-1}(U_i-U_{i-1})$, \\[1mm]
central difference: & $\delta^{\langle1\rangle}_i U =
(2h)^{-1}(U_{i+1}-U_{i-1})$, \\[1mm] second-order central difference: &
$\delta^{\langle2\rangle}_i U = \delta_i^+\delta_i^- U = \delta_i^-\delta_i^+ U$. \end{tabular} \end{center}
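On the periodic grid these operators are plain index shifts. The following NumPy sketch (our own illustration; the function names are not from the paper) implements them and checks the identity $\delta_i^+\delta_i^- = \delta_i^{\langle2\rangle}$ used below:

```python
import numpy as np

N = 64
h = 1.0 / N
x = h * np.arange(N)

def d_plus(U):     # forward difference, periodic indexing via np.roll
    return (np.roll(U, -1) - U) / h

def d_minus(U):    # backward difference
    return (U - np.roll(U, 1)) / h

def d_central(U):  # central difference delta^<1>
    return (np.roll(U, -1) - np.roll(U, 1)) / (2.0 * h)

def d2(U):         # second-order central difference delta^<2>
    return (np.roll(U, -1) - 2.0 * U + np.roll(U, 1)) / h**2

U = np.sin(2.0 * np.pi * x)
# delta^+ delta^- equals delta^<2> exactly (up to round-off)
assert np.allclose(d_plus(d_minus(U)), d2(U))
```

For this test function, $\delta^{\langle2\rangle}$ reproduces $-(2\pi)^2\sin(2\pi x)$ up to $O(h^2)$.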
The first step is to define the discrete Fisher information. We choose a symmetric form for the derivative, $v_x^2(x_i)\approx \frac12((\delta_i^+ V)^2+(\delta^-_i V)^2)$, where $V=(V_i)=(\sqrt{U_i})\in{\mathbb R}^N$. The Fisher information $F[v^2]=\int_{\mathbb T} v_x^2 \mathrm{d}x$ is approximated using the first-order quadrature rule $\int_{\mathbb T} w(x)\mathrm{d}x\approx \sum_{i=0}^{N-1}w(x_i)h$. Actually, this rule is of second order $O(h^2)$ here since, due to the periodic boundary conditions, it coincides with the trapezoidal rule $(w(x_0)+w(x_N))h/2+\sum_{i=1}^{N-1}w(x_i)h$. Therefore, the discrete Fisher information reads as $$
F_d[U] = \frac{1}{2}\sum_{i=0}^{N-1}\big((\delta_i^+ V)^2 + (\delta_i^- V)^2\big)h,
\quad U=(U_i)\in{\mathbb R}^N. $$
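As a quick consistency check (our own illustration; the function name \texttt{fisher\_d} is ours), the discrete functional reproduces $F[v^2]=\int_{\mathbb T} v_x^2\,\mathrm{d}x$ up to $O(h^2)$ for a smooth positive density:

```python
import numpy as np

def fisher_d(U, h):
    # F_d[U] = (h/2) * sum_i ((d^+ V)_i^2 + (d^- V)_i^2) with V_i = sqrt(U_i)
    V = np.sqrt(U)
    dp = (np.roll(V, -1) - V) / h
    dm = (V - np.roll(V, 1)) / h
    return 0.5 * h * np.sum(dp**2 + dm**2)

N = 400
h = 1.0 / N
x = h * np.arange(N)
v = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
# exact Fisher information of n = v^2 is int_T v_x^2 dx = pi^2 / 2
print(fisher_d(v**2, h))
```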
The second step is the definition of the discrete variational derivative. Applying the discrete variation procedure and using summation by parts (see \cite[Prop.\ 3.2]{FuMa10}), we calculate \begin{align}
F_d[U^{k+1}] - F_d[U^k]
&= \frac12\sum_{i=0}^{N-1}\left((\delta_i^+V^{k+1})^2 - (\delta_i^+V^k)^2
+ (\delta_i^-V^{k+1})^2 - (\delta_i^-V^k)^2\right)h \nonumber \\
&= \frac12\sum_{i=0}^{N-1}\big[\delta_i^+(V^{k+1} + V^k)\delta_i^+(V^{k+1} - V^k)
\nonumber \\
&\phantom{xx}{}+ \delta_i^-(V^{k+1} + V^k)\delta_i^-(V^{k+1} - V^k)\big]h
\nonumber \\
&= -\sum_{i=0}^{N-1}\delta_i^{\langle 2\rangle}(V^{k+1} + V^k)(V_i^{k+1} -
V_i^k)h \nonumber \\
&= -\sum_{i=0}^{N-1}\frac{\delta_i^{\langle 2\rangle}(V^{k+1} + V^k)}
{V_i^{k+1} + V_i^k}(U_i^{k+1} - U_i^k)h, \quad k\geq 0. \label{dcr} \end{align}
This motivates the definition of the discrete variational derivative \begin{equation}\label{q1.dvd}
\frac{\delta F_d}{\delta(U^{k+1},U^k)_i}
= -\frac{\delta_i^{\langle 2\rangle}(V^{k+1} + V^k)}{V_i^{k+1} + V_i^k}, \quad
i=0,\ldots,N-1, \end{equation} since this implies the discrete chain rule $$
F_d[U^{k+1}]-F_d[U^k] = \sum_{i=0}^{N-1}\frac{\delta F_d}{\delta(U^{k+1},U^k)_i}
(U_i^{k+1}-U_i^k)h. $$ Observe that \eqref{q1.dvd} is a Crank-Nicolson type approximation of the variational derivative $\delta F[n]/\delta n=-(\sqrt{n})_{xx}/\sqrt{n}=-v_{xx}/v$, where $n=v^2$. The implicit Euler discrete variational derivative (DVD) method for the DLSS equation is then given by the nonlinear system with unknowns $U^{k+1} = (V^{k+1})^2$: \begin{equation}\label{bdf1.dvdm}
\frac{1}{\tau}(U_i^{k+1} - U_i^k)
= \delta_i^{\langle 1\rangle}\left(U^{k+1}\delta_i^{\langle 1\rangle}
\left(\frac{\delta F_d}{\delta(U^{k+1},U^k)}\right)\right), \quad
i=0,\ldots,N-1,\ k\geq 0. \end{equation}
The initial condition $n_0$ is approximated by its projection on the discrete grid, defining the starting vector $U^0\in{\mathbb R}^N$. Multiplying the above scheme by $\delta F_d/\delta(U^{k+1},U^k)_i$, summing over $i=0,\ldots,N-1$, and employing the discrete chain rule \eqref{dcr}, we infer the discrete dissipation property \begin{equation}\label{dvd.diss}
\frac{1}{\tau}(F_d[U^{k+1}] - F_d[U^k])
+ \sum_{i=0}^{N-1}U_i^{k+1}\left(\delta_i^{\langle 1\rangle}
\left(\frac{\delta F_d}{\delta(U^{k+1},U^k)}\right)\right)^2h = 0. \end{equation} In fact, this proves the monotonicity of the discrete Fisher information for $q=1$.
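The following Python sketch is our own minimal illustration of one implicit Euler DVD step \eqref{bdf1.dvdm}; the dense finite-difference Newton solver and all function names are ours (the experiments below use a Powell-hybrid solver instead). It exhibits the two structural properties of the scheme: exact mass conservation and the Fisher dissipation \eqref{dvd.diss}:

```python
import numpy as np

N, tau = 32, 1e-5
h = 1.0 / N
x = h * np.arange(N)

def d1(W):  # central difference delta^<1>, periodic
    return (np.roll(W, -1) - np.roll(W, 1)) / (2.0 * h)

def d2(W):  # second-order central difference delta^<2>, periodic
    return (np.roll(W, -1) - 2.0 * W + np.roll(W, 1)) / h**2

def fisher_d(U):
    V = np.sqrt(U)
    dp = (np.roll(V, -1) - V) / h
    dm = (V - np.roll(V, 1)) / h
    return 0.5 * h * np.sum(dp**2 + dm**2)

def residual(U1, U0):
    # scheme: (U1 - U0)/tau = delta^<1>( U1 * delta^<1>(dFd) ), where the
    # discrete variational derivative is dFd = -delta^<2>(V1 + V0)/(V1 + V0)
    S = np.sqrt(U1) + np.sqrt(U0)
    dFd = -d2(S) / S
    return (U1 - U0) / tau - d1(U1 * d1(dFd))

def euler_dvd_step(U0, tol=1e-8, maxit=40):
    U = U0.copy()
    for _ in range(maxit):
        R = residual(U, U0)
        if np.max(np.abs(R)) < tol:
            break
        # Jacobian assembled column by column via finite differences
        J = np.empty((N, N))
        for j in range(N):
            Up = U.copy()
            Up[j] += 1e-7
            J[:, j] = (residual(Up, U0) - R) / 1e-7
        U = U - np.linalg.solve(J, R)
    return U

U0 = 0.5 + 0.25 * np.cos(2.0 * np.pi * x)
U1 = euler_dvd_step(U0)
# mass is conserved and the discrete Fisher information decreases
print(h * (U1 - U0).sum(), fisher_d(U0) - fisher_d(U1))
```

Mass conservation follows because the right-hand side of the scheme is a central difference of a periodic quantity, which sums to zero exactly.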
\begin{remark}\rm Observe that we could have taken a different approximation for the discrete Fisher information, e.g.\ $\widetilde{F}_d[U] = \sum_{i=0}^{N-1}(\delta_i^{\langle 1\rangle} V)^2h$. This would lead to a different variational derivative $\delta \widetilde{F}_d/\delta(U^{k+1},U^k)$ and eventually to another scheme \eqref{bdf1.dvdm}, with $F_d$ replaced by $\widetilde{F}_d$, which dissipates $\widetilde{F}_d$ instead. Besides the symmetry, which yields second-order consistency in space, the above choice of the discrete Fisher information is motivated by the fact that $\delta_i^+\delta_i^- = \delta_i^{\langle 2\rangle}$, used in the discrete variation procedure.
\qed \end{remark}
In the following, we consider temporally higher-order discretizations. There are several ways to generalize the above DVD method. In order to stay in the spirit of Section \ref{sec.bdf2}, we derive higher-order DVD methods, which are based on backward differentiation formulas. The function $f(\xi,\eta)=(\xi^2+\eta^2)/2$ represents both the Fisher information $F[n]=\int_{\mathbb T} f(v_x,v_x)\mathrm{d}x$ and the discrete Fisher information $F_d[U]=\sum_{i=0}^{N-1}f(\delta_i^+V,\delta_i^-V)h$. The definition of $f$ is motivated by the following formal representation of the variational derivative, $$
\frac{\delta F[n]}{\delta n}
= -\frac{v_{xx}}{v}
= -\frac{1}{2v}\left(\pa_x\pa_\xi f\big|_{\xi=v_x}
+ \pa_x\pa_\eta f\big|_{\eta=v_x}\right). $$ This formula gives an idea how to approximate the variational derivative in general. We denote by $\delta^{1,q}_k$ the $q$-th step BDF operator at time $t_k$. For instance, the formulas for $q=1$ and $q=2$ are given in \eqref{1.bdf1} and \eqref{1.bdf2}, respectively. The discrete variational derivative of order $q$ is defined componentwise by \begin{equation}\label{gen.dvd}
\frac{\delta F_d}{\delta(U^{k+1},\ldots,U^{k-q+1})_i}
= -\frac{1}{2V_i^{k+1}}\left(\delta_i^-(\pa_\xi^{{\mathrm{d}}} f)
+ \delta_i^+(\pa_\eta^{{\mathrm{d}}} f)\right), \quad k\geq q-1, \end{equation} where the discrete operators $\pa^{\mathrm{d}}_\xi f$ and $\pa_\eta^{\mathrm{d}} f$ are given by \begin{align*}
(\pa_\xi^{{\mathrm{d}}} f)_i
&= \pa_\xi f\big|_{\xi = \delta_i^+V^{k+1}}
+ r_{\rm corr}\delta_{k+1}^{1,q}(\delta_i^+U^{k+1})
= \delta_i^+V^{k+1} + r_{\rm corr}\delta_{k+1}^{1,q}(\delta_i^+U^{k+1}), \\
(\pa_\eta^{{\mathrm{d}}} f)_i
&= \pa_\eta f\big|_{\eta = \delta_i^-V^{k+1}}
+ r_{\rm corr}\delta_{k+1}^{1,q}(\delta_i^-U^{k+1})
= \delta_i^-V^{k+1} + r_{\rm corr}\delta_{k+1}^{1,q}(\delta_i^-U^{k+1}), \end{align*} and $r_{\rm corr}$ is a correction term, which has to be determined in such a way that the discrete chain rule $$
\delta_{k+1}^{1,q} F_d[U^{k+1}]
= \sum_{i=0}^{N-1}\frac{\delta F_d}{\delta(U^{k+1},\ldots,U^{k-q+1})_i}
\delta_{k+1}^{1,q}U_i^{k+1}h $$ holds. The role of the correction term is not only to satisfy the discrete chain rule but also to increase the temporal accuracy of the discrete variational derivative. Straightforward computations with the above expressions using summation by parts formulas and periodic boundary conditions yield \begin{align}
& \frac{\delta F_d}{\delta(U^{k+1},\ldots,U^{k-q+1})_i}
= -\frac{\delta_i^{\langle2\rangle}V^{k+1}}{V_i^{k+1}}
- r_{\rm corr}
\frac{\delta_{k+1}^{1,q}\delta_i^{\langle2\rangle}U^{k+1}}{V_i^{k+1}}, \quad
k\geq q-1, \label{dvd.bdfq} \\
& r_{\rm corr}
= \frac{\delta_{k+1}^{1,q}F_d[U^{k+1}] - \sum_{i=0}^{N-1}\delta_i^+V^{k+1}
\delta_i^+\Big(\frac{\delta_{k+1}^{1,q}U^{k+1}}{V^{k+1}}\Big)h}{\sum_{i=0}^{N-1}
(\delta_i^+\delta_{k+1}^{1,q}U^{k+1})
\delta_i^+\Big(\frac{\delta_{k+1}^{1,q}U^{k+1}}{V^{k+1}}\Big)h}. \label{rcorr} \end{align} We note that for $q=1$, this definition generally does not coincide with the discrete variational derivative \eqref{q1.dvd}. The temporally BDF$q$ discrete variational derivative (BDF$q$ DVD) method is then defined by the following nonlinear system in the unknowns $U^{k+1}=(V^{k+1})^2$: \begin{equation}\label{bdfq.dvdm}
\delta_{k+1}^{1,q}U_i^{k+1}
= \delta_i^{\langle1\rangle}\left(U^{k+1}\delta_i^{\langle1\rangle}
\left(\frac{\delta F_d}{\delta(U^{k+1},\ldots,U^{k-q+1})}\right)\right), \quad
i=0,\ldots,N-1,\ k\geq q-1. \end{equation} In particular, for $q=1$, we obtain two methods: the BDF1 DVD scheme \eqref{bdfq.dvdm} and the DVD scheme \eqref{bdf1.dvdm}.
\begin{proof}[Proof of Theorem \ref{thm.dvdm}.] Let $n=v^2$ be a smooth positive solution to \eqref{qde} with $d=1$. According to \cite{BLS94}, such a solution exists at least in a small time interval if the initial datum is smooth and positive. Furthermore, let $q\in{\mathbb N}$, $q\ge 2$ (and typically $q\le 6$), be the order of the backward differentiation formula.
First, we consider the discrete variational derivative \eqref{q1.dvd}. A Taylor expansion around $(t_{k+1},x_i)$ yields \begin{align*}
\frac{\delta F_d}{\delta(n(t_{k+1}), n(t_k))}\Big|_{x=x_i}
&= -\frac{\delta_i^{\langle2\rangle}(v(t_{k+1}, x_i)
+ v(t_k, x_i))}{v(t_{k+1},x_i) + v(t_k, x_i)}
= \frac{v_{xx}}{v}(t_{k+1},x_i) + O(\tau) + O(h^2) \\
&= \frac{\delta F}{\delta n}[n](t_{k+1},x_i) + O(\tau) + O(h^2), \end{align*} where $i=0,\ldots,N-1$, $k\ge 0$. Similarly, $$
\delta_i^{\langle1\rangle}\left(n(t_{k+1})\delta_i^{\langle1\rangle}
\left(\frac{\delta F_d}{\delta(n(t_{k+1}),n(t_k))}\right)\right)\bigg|_{x=x_i}
= \left(n\left(\frac{\delta F[n]}{\delta n}\right)_x\right)_x(t_{k+1},x_i)
+ O(\tau) + O(h^2). $$ Thus, the local truncation error of the right-hand side in \eqref{bdf1.dvdm} is of order $O(\tau)+O(h^2)$. Since the left-hand side is of order $O(\tau)$ in time and exact at spatial grid points $x_i$, the local truncation error of scheme \eqref{bdf1.dvdm} is of order $O(\tau)+O(h^2)$. The monotonicity of the discrete Fisher information is shown in \eqref{dvd.diss}.
The mass conservation is an obvious consequence of the scheme. To prove the uniform boundedness, we observe that, by the discrete $H^1$-seminorm, $$
\sum_{i=0}^{N-1}(\delta_i^+ V^k)^2 h \le F_d[U^0] < \infty\quad\text{for all
}k\geq1. $$ Then, according to the discrete Poincar\'e-Wirtinger inequality, for $i=0,\ldots,N-1$, $k\ge 1$, \cite[Lemma 3.3]{FuMa10}, $$
|V_i^k - M_k|^2 \le \sum_{i=0}^{N-1}(\delta_i^+ V^k)^2 h\le F_d[U^0] $$
with $M_k = \sum_{i=0}^{N-1}V_i^kh$. Jensen's inequality for the quadratic function and the mass conservation property of the method give $M_k\leq 1$ for all $k\ge 0$. Finally, by the triangle inequality, $|V_i^k|\le F_d[U^0]^{1/2}+1$ and thus,
$|U_i^k|\le 2F_d[U^0]+2$.
Next, we consider scheme \eqref{bdfq.dvdm} with the discrete variational derivative \eqref{dvd.bdfq}. By construction, the left-hand side of \eqref{dvd.bdfq} is of order $q$ in time and exact at the spatial grid points $x_i$. Thus, it remains to prove that the right-hand side is of order $(q,2)$ with respect to time-space discretization.
Taylor expansions show, with a slight abuse of notation, that \begin{align}
\delta_i^\pm v(t_{k+1},x_i)
&= v_x(t_{k+1},x_i) \pm \frac{h}{2}v_{xx}(t_{k+1},x_i) + O(h^2),
\label{bdfq.aux.12} \\
-\frac{\delta_i^{\langle2\rangle} v(t_{k+1},x_i)}{v(t_{k+1},x_i)}
&= -\frac{v_{xx}}{v}(t_{k+1},x_i) + O(h^2), \label{bdfq.aux.3} \\
\frac{\delta_{k+1}^{1,q}\delta_i^{\langle2\rangle}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}
&= \frac{n_{txx}}{v}(t_{k+1},x_i) + O(\tau^q) + O(h^2), \label{bdfq.aux.4} \\
\delta_i^\pm\delta_{k+1}^{1,q}n(t_{k+1},x_i)
&= n_{tx}(t_{k+1},x_i) \pm \frac{h}{2}n_{txx}(t_{k+1},x_i) + O(\tau^q) + O(h^2),
\label{bdfq.aux.56} \\
\delta_i^\pm\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)
&= 2v_{tx}(t_{k+1},x_i) \pm hv_{txx}(t_{k+1},x_i) + O(\tau^q) + O(h^2).
\label{bdfq.aux.78} \end{align} We prove that $r_{\rm corr}$ is of order $(q,2)$. Let $r_n$ and $r_d$ denote the numerator and denominator of $r_{\rm corr}$, respectively, replacing $V^{k+1}_i$ by $v(t_{k+1},x_i)$ and $U^{k+1}_i$ by $n(t_{k+1},x_i)$. Taking into account the periodic boundary conditions, we find that \begin{align*}
\sum_{i=0}^{N-1} & (\delta_i^+\delta_{k+1}^{1,q}n(t_{k+1},x_i))\delta_i^+
\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)h \\
&= \sum_{i=0}^{N-1}(\delta_i^-\delta_{k+1}^{1,q}n(t_{k+1},x_i))\delta_i^-
\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)h. \end{align*} Therefore, we can split $r_d$ into two parts: \begin{align*}
r_d &= \frac12\sum_{i=0}^{N-1}\Bigg[(\delta_i^+\delta_{k+1}^{1,q}n(t_{k+1},x_i))
\delta_i^+\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right) \\
&\phantom{xx}{}+ (\delta_i^-\delta_{k+1}^{1,q}n(t_{k+1},x_i))
\delta_i^-\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)
\Bigg]h. \end{align*} In view of \eqref{bdfq.aux.56}-\eqref{bdfq.aux.78}, it follows that $$
r_d = 2\sum_{i=0}^{N-1}(n_{tx}v_{tx})(t_{k+1},x_i)h + O(\tau^q) + O(h^2). $$ The numerator $r_n$ is treated in a similar way. Using \eqref{bdfq.aux.12}, the first term in $r_n$ can be written as \begin{align*}
\delta_{k+1}^{1,q} F_d[n(t_{k+1})]
&= \frac12\frac{{\mathrm{d}}}{{\mathrm{d}} t}\sum_{i=0}^{N-1}\big((\delta_i^+v(t,x_i))^2
+ (\delta_i^-v(t,x_i))^2\big)h\Big|_{t=t_{k+1}} + O(\tau^q) \\
&= \sum_{i=0}^{N-1}(v_xv_{xt})(t_{k+1},x_i)h
+ O(\tau^q) + O(h^2). \end{align*} For the second term in $r_n$, we observe that, because of the periodic boundary conditions, $$
\sum_{i=0}^{N-1}\delta_i^+v(t_{k+1},x_i)
\delta_i^+\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)h
=\sum_{i=0}^{N-1}\delta_i^-v(t_{k+1},x_i)
\delta_i^-\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)h, $$ and hence, employing \eqref{bdfq.aux.12} and \eqref{bdfq.aux.78}, \begin{align*}
\sum_{i=0}^{N-1}\delta_i^+v(t_{k+1},x_i)
\delta_i^+\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)h
&= \frac12\sum_{i=0}^{N-1}\Bigg[\delta_i^+v(t_{k+1},x_i)
\delta_i^+\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right) \\
&\phantom{xx}{}+ \delta_i^-v(t_{k+1},x_i)\delta_i^-
\left(\frac{\delta_{k+1}^{1,q}n(t_{k+1},x_i)}{v(t_{k+1},x_i)}\right)\Bigg]h \\
&= 2\sum_{i=0}^{N-1}(v_xv_{xt})(t_{k+1},x_i)h + O(\tau^q) + O(h^2). \end{align*} Summarizing these identities yields $r_{\rm corr}=O(\tau^q) + O(h^2)$. Finally, \eqref{bdfq.aux.3}-\eqref{bdfq.aux.4} imply that \begin{align*}
\frac{\delta F_d}{\delta(n(t_{k+1}),\ldots,n(t_{k+1-q}))}\Big|_{x=x_i}
&= -\frac{\delta_i^{\langle2\rangle} v(t_{k+1},x_i)}{v(t_{k+1},x_i)}
- r_{\rm corr}
\frac{\delta_{k+1}^{1,q}\delta_i^{\langle2\rangle}n(t_{k+1},x_i)}{v(t_{k+1},x_i)} \\
&= \frac{\delta F[n]}{\delta n}(t_{k+1}, x_i) + O(\tau^q) + O(h^2). \end{align*} This shows that the discrete variational derivative \eqref{dvd.bdfq} is of order $q$ in time, finishing the proof. \end{proof}
\section{Numerical examples}\label{sec.num}
In this section, we present some numerical examples which illustrate the decay properties of the entropy functionals and Fisher information as well as the convergence properties of the schemes presented in the previous sections.
\subsection{BDF2 finite-difference scheme}
The DLSS equation \eqref{dlss} is approximated by the BDF2 method in time and central finite differences in space. The scheme is given by the following nonlinear system with unknowns $V_i^k=(U_i^k)^{\alpha/2}$: For $i=0,\ldots,N-1$ and $k=1$, $$
(V_i^{1})^{2/\alpha-1}\big(V_i^{1} - V_i^0 \big)
+ \tau\delta_i^{\langle 2\rangle}\left((V_i^{1})^{2/\alpha}
\delta_i^{\langle 2\rangle}\log V_i^{1}\right) = 0 $$ and for $i=0,\ldots,N-1$, $k\ge 2$, $$
(V_i^{k+1})^{2/\alpha-1} \left(\frac32 V_i^{k+1} - 2V_i^k + \frac12 V_i^{k-1}
\right) + \tau\delta_i^{\langle 2\rangle}\left((V_i^{k+1})^{2/\alpha}
\delta_i^{\langle 2\rangle}\log V_i^{k+1}\right) = 0. $$ The initial datum $(V_i^0)$ is given by $(n_0(x_i)^{\alpha/2})$. For $k=1$, the scheme corresponds to the implicit Euler discretization, needed to initialize the BDF2 scheme for $k\ge 2$. The above nonlinear system, with periodic boundary conditions, is solved using the Newton method.
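To illustrate how this nonlinear system can be solved, the sketch below is our own minimal Python implementation for $\alpha=1$ (so that $V=\sqrt{U}$) on a coarse grid, using a small dense Newton iteration with a finite-difference Jacobian; it is a toy version of the actual experiments, not the code used for them. It performs the Euler starting step followed by one BDF2 step:

```python
import numpy as np

N, tau = 32, 1e-6
h = 1.0 / N
x = h * np.arange(N)

def d2(W):  # second-order central difference, periodic
    return (np.roll(W, -1) - 2.0 * W + np.roll(W, 1)) / h**2

def entropy_1(V):
    # discrete entropy E_{1,d}[U] with U = V^2
    U = V**2
    return h * np.sum(U * (np.log(U) - 1.0) + 1.0)

def solve(res, V_guess, tol=1e-10, maxit=30):
    # small dense Newton iteration with a finite-difference Jacobian
    V = V_guess.copy()
    for _ in range(maxit):
        R = res(V)
        if np.max(np.abs(R)) < tol:
            break
        J = np.empty((N, N))
        for j in range(N):
            Vp = V.copy()
            Vp[j] += 1e-7
            J[:, j] = (res(Vp) - R) / 1e-7
        V = V - np.linalg.solve(J, R)
    return V

V0 = np.sqrt(0.5 + 0.25 * np.cos(2.0 * np.pi * x))
# implicit Euler starting step (k = 1)
V1 = solve(lambda V: V * (V - V0) + tau * d2(V**2 * d2(np.log(V))), V0)
# BDF2 step (k >= 2)
V2 = solve(lambda V: V * (1.5 * V - 2.0 * V1 + 0.5 * V0)
           + tau * d2(V**2 * d2(np.log(V))), V1)
print(entropy_1(V0), entropy_1(V1), entropy_1(V2))
```

For this smooth positive initial datum the computed iterates stay positive, and the discrete entropy $E_{1,{\mathrm{d}}}$ decreases, in line with the behavior reported in Figure \ref{fig.bdf2.stab}.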
We choose the initial datum $n_0(x)=0.001+\cos^{16}(\pi x)$, $x\in[0,1]$. The spatial mesh size is $h=0.005$ ($N=200$) and the time step $\tau=10^{-6}$. The (continuous) entropies $E_\alpha[n]$ are dissipated for $1\le\alpha<3/2$. Figure \ref{fig.bdf2.stab} (a) illustrates the stability and, in fact, decay of the discrete entropies $E_{\alpha,{\mathrm{d}}}$, defined below, for various values of $\alpha$. Although Theorem~\ref{thm.bdf2.ex} does not provide a stability estimate for $\alpha=1$, the numerical results indicate that the discrete entropy $E_{1,{\mathrm{d}}}[U]=\sum_{i=0}^{N-1}(U_i(\log U_i-1)+1)h$ is decreasing. Figure \ref{fig.bdf2.stab} (b) shows that the decay of the discrete relative entropy is exponential, and even the discrete Fisher information converges exponentially fast to zero. Here, the discrete relative entropy is defined by $$
E_{\alpha,{\mathrm{d}}}^{rel}[U^k] = E_{\alpha,{\mathrm{d}}}[U^k] - E_{\alpha,{\mathrm{d}}}[\bar U], \quad
\mbox{where }
E_{\alpha,{\mathrm{d}}}[U^k] = \sum_{i=0}^{N-1}(U_i^k)^\alpha h, \
\bar U = \sum_{i=0}^{N-1}U^k_i h. $$
\begin{figure}
\caption{ (a) Entropy stability (decay) for the BDF2 finite-difference scheme. (b) Exponential decay of the discrete relative entropy and the discrete Fisher information for the BDF2 finite-difference scheme.}
\label{fig:entropy_decay}
\label{fig:rel_entropy_decay}
\label{fig.bdf2.stab}
\end{figure}
According to Theorem \ref{thm.bdf2.conv}, the semi-discrete BDF2 scheme converges in second order if $\alpha=1$. This may not be the case for the fully discrete scheme, since the discretization may destroy the monotonicity structure of the spatial operator. However, Figure \ref{fig.bdf2.conv} shows that the numerical convergence rate is close to 2, even for $\alpha\neq 1$. The numerical convergence rates $cr$ have been obtained by the linear regression method. The convergence of the method is measured in the discrete $\ell^2$-norm $$
\|e_m\|_2 := \left(\sum_{i=0}^{N-1}(V_{{\rm ex},i}^m-V_i^m)^2 h\right)^{1/2}, $$ and the numerical solutions are compared at time $t=5\cdot 10^{-5}$. Here, the ``exact'' solution $V_{{\rm ex},i}^m$ is computed by the above scheme using the very small time step $\tau=10^{-10}$.
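The rate extraction amounts to a least-squares fit of $\log\|e_m\|_2$ against $\log\tau$. A minimal sketch with hypothetical error data of a clean second-order method (the numbers below are illustrative, not measurements from these experiments):

```python
import numpy as np

# hypothetical (tau, error) pairs for a second-order scheme
taus = np.array([4e-7, 2e-7, 1e-7])
errs = np.array([3.2e-6, 8.1e-7, 2.0e-7])
# the convergence rate cr is the slope of log(err) versus log(tau)
cr = np.polyfit(np.log(taus), np.log(errs), 1)[0]
print(cr)
```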
\begin{figure}
\caption{Temporal convergence of the BDF2 finite-difference scheme for various values of $\alpha$; the convergence rate is denoted by $cr$.}
\label{fig.bdf2.conv}
\end{figure}
\subsection{Discrete variational derivative method}
We present some numerical results obtained from the DVD and BDF$q$ DVD schemes derived in Section \ref{sec.dvdm}. The initial datum and the numerical parameters are chosen as in the previous subsection. In order to solve the discrete nonlinear systems, we employ here the NAG toolbox routine {\tt c05nb}, which is based on a modification of the Powell hybrid method. It turned out that this routine is at least three times faster than the standard MATLAB routine {\tt fsolve}.
In Figure \ref{fig.dvdm.fish}, the temporal evolution of the discrete relative entropies $E_\alpha^{\rm rel}[U^k]$ and the discrete Fisher information $F_d[U^k]$ are depicted for (a) the implicit Euler scheme \eqref{bdf1.dvdm} and (b) the BDF2 scheme \eqref{bdfq.dvdm}. We observe that the decay is in all cases exponential. This holds also true for the BDF3 scheme (results not shown).
\begin{figure}
\caption{Exponential decay of the discrete Fisher information and relative entropies using (a) the DVD scheme and (b) the BDF2 DVD scheme.}
\label{fig:dvdm_fisher_decay}
\label{fig:bdf2_dvdm_fisher_decay}
\label{fig.dvdm.fish}
\end{figure}
Next, we test numerically the convergence in time of the DVD scheme. Figure \ref{fig.dvdm.conv} illustrates the $\ell^2$-errors of the methods. We have chosen the mesh size $h=0.01$, and we compared the numerical solutions at time $t_m=5\cdot 10^{-5}$. The ``exact'' solutions are computed by the respective method taking the time step $\tau=10^{-9}$. The numerical convergence rates, computed by the linear regression method, are given in Table \ref{tab.conv}. We note that the BDF3 DVD scheme gives only slightly better results than the BDF2 DVD scheme. The reason is that the first step is initialized by the first-order scheme \eqref{bdf1.dvdm}, and this initialization error cannot be compensated by the higher-order accuracy of the local approximation. In order to obtain a third-order scheme, we need to initialize the scheme with a second-order discretization.
\begin{figure}
\caption{Temporal convergence of the DVD, BDF2 DVD, and BDF3 DVD schemes.}
\label{fig.dvdm.conv}
\end{figure}
\begin{table}[ht] \centering\begin{tabular}{cc} \hline Scheme & Convergence rate\\ \hline DVD & $1.020$\\ BDF2 DVD & $1.824$\\ BDF3 DVD & $1.977$ \\ \hline \end{tabular}
\caption{Numerical temporal convergence rates for the discrete variational derivative methods.} \label{tab.conv} \end{table}
\end{document}
\begin{document}
\pagestyle{plain} \title{Path-Fault-Tolerant Approximate\\
Shortest-Path Trees\thanks{Research partially supported by the Italian Ministry
of University and Research
under the Research Grants: 2010N5K7EB PRIN 2010 ``ARS TechnoMedia'' (Algoritmica
per le Reti Sociali Tecno-mediate), and 2012C4E3KT PRIN 2012 ``AMANDA''
(Algorithmics for MAssive and Networked DAta).}} \newcommand{\lcomment}[1]{ \tag{#1} }
\begin{abstract} Let $G=(V,E)$ be an $n$-node non-negatively real-weighted undirected graph. In this paper we show how to enrich a {\em single-source shortest-path tree} (SPT) of $G$ with a \emph{sparse} set of \emph{auxiliary} edges selected from $E$, in order to create a structure which effectively tolerates a \emph{path failure} in the SPT. This consists of a simultaneous fault of a set $F$ of at most $f$ adjacent edges along a shortest path emanating from the source, and it is recognized as one of the most frequent disruptions in an SPT. We show that, for any integer parameter $k \geq 1$, it is possible to provide a very sparse (i.e., of size $O(kn\cdot f^{1+1/k})$) auxiliary structure that carefully approximates
(i.e., within a stretch factor of $(2k-1)(2|F|+1)$) the true shortest paths from the source during the lifetime of the failure. Moreover, we show that our construction can be further refined to get a stretch factor of $3$ and a size of $O(n \log n)$ for the special case $f=2$, and that it can be converted into a very efficient \emph{approximate-distance sensitivity oracle}, which allows one to quickly (even in optimal time, if $k=1$) reconstruct the shortest paths (w.r.t.\ our structure) from the source after a path failure, thus permitting the needed rerouting operations to be performed promptly. Our structure compares favorably with previously known solutions, as we discuss in the paper, and it is also very effective in practice, as we assess through a large set of experiments. \end{abstract}
\section{Introduction} Broadcasting data from a source node to every other node of a network is one of the most basic communication primitives in modern networked applications.
Given the widespread diffusion of such applications, in the recent past there has been an increasing demand for more and more efficient, i.e., scalable and reliable, methods to implement this fundamental feature.
The natural solution is that of modeling the network as a graph (nodes as vertices and links as edges) and building a (fast and compact) structure to be used to transmit the data. In particular, the most common approach of this kind is that of computing a {\em shortest-path tree} (SPT), rooted at the desired source node, of such graph.
However, the SPT, as any tree-based topology, is prone to unpredictable events that might occur in practice, such as failures of nodes and/or links. Therefore, the use of SPTs might result in a high sensitivity to malfunctioning, which unavoidably causes the undesired effect of disconnecting sets of nodes from the source and thus the interruption of the broadcasting service.
Therefore, a general approach to cope with this scenario is to make the SPT \emph{fault-tolerant} against a given number of simultaneous component failures, by adding to it a set of suitably selected edges from the underlying graph, so that the resulting structure will remain connected w.r.t.\ the source. In other words, the selected edges can be used to build up alternative paths from the root, each of them replacing a corresponding original shortest path affected by the failure. However, if these paths are constrained to be \emph{shortest}, then it can easily be seen that for a non-negatively real-weighted and undirected graph of $n$ nodes and $m$ edges, this may require as many as $\Theta(m)$ additional edges, even when $m=\Theta(n^2)$. In other words, the set-up costs of the strengthened network may become unaffordable.
Thus, a reasonable compromise is to build a \emph{sparse} fault-tolerant structure which \emph{approximates} the shortest paths from the source, i.e., one containing paths which are guaranteed to be longer than the corresponding shortest paths by at most a given \emph{stretch} factor, for any possible edge/vertex failure that has to be handled. In this way, the obtained structure can be viewed as a 2-level communication network: a first \emph{primary} level, i.e., the SPT, which is used when all the components are operational, and an \emph{auxiliary} level which comes into play as soon as a component undergoes a failure.
In this paper, we show that an efficient structure of this sort exists for a prominent class of failures in an SPT, namely those involving a set of adjacent edges along a shortest path emanating from the source of the SPT. Our study is motivated by several applications, such as, for instance, traffic engineering in optical networks or path-congestion management in road-networks, where failures in the above form often affect the SPT~\cite{BW09,DDFLP13,MCFF09}.
For this kind of failure, also known as a \emph{path failure}\footnote{Notice that this is a small abuse of nomenclature, since failures we consider are restricted to the path's edges only.}, we show that it is possible not only to obtain resilient sparse structures, but also that these can be pre-computed efficiently, and that they can return quickly the auxiliary network level.
\subsection{Related Work} In the recent past, many efforts have been dedicated to devising single and multiple edge/vertex fault-tolerant structures. More formally, let $r$ denote a distinguished source vertex of a non-negatively real-weighted and undirected graph $G=(V(G),E(G))$, with $n$ nodes and $m$ edges. We say that a spanning subgraph $H$ of $G$ is an \emph{Edge/Vertex-fault-tolerant $\alpha$-Approximate SPT} (in short, $\alpha$-{\ttfamily E/VASPT}), with $\alpha>1$, if it satisfies the following condition: For each edge $e \in E(G)$ (resp., vertex $v \in V(G)$), all the distances from $r$ in the subgraph $H-e$, i.e., $H$ deprived of edge $e$ (resp., the subgraph $H-v$, i.e., $H$ deprived of vertex $v$ and all its incident edges) are $\alpha$-stretched (i.e., at most $\alpha$ times longer) w.r.t. the corresponding distances in $G-e$ (resp., $G-v$).
An early work on the matter is \cite{NPW03}, where the authors showed that by adding at most $n-1$ edges to the SPT, a 3-{\ttfamily EASPT} can be obtained. This was shown to be very useful in order to compute a recovery scheme needing only one backup routing table at each node \cite{IIOY03}. In \cite{GP07}, the authors showed instead how to build a 1-{\ttfamily EASPT} in $\widetilde{O}(m n)$ time\footnote{The $\widetilde{O}$ notation hides poly-logarithmic factors in $n$.}. Notice that, a 1-{\ttfamily EASPT} contains \emph{exact} replacement paths from the source, but of course its size might be $\Theta(n^2)$ if $G$ is dense. Then, in \cite{BK13}, Baswana and Khanna devised a $3$-{\ttfamily VASPT} of size $O(n \log n)$. Later on, a significant improvement to this result was provided in \cite{BGLP14}, where the authors showed the existence of a $(1+\varepsilon)$-{\ttfamily E/VASPT}, for any $\varepsilon >0$, of size $O(\frac{n \log n}{\varepsilon^2})$.
Concerning \emph{unweighted} graphs, in \cite{BK13} the authors give a $(1+\varepsilon)$-{\ttfamily VABFS} (where BFS stands for \emph{breadth-first search tree}) of size $O(\frac{n}{\varepsilon^3}+n \log n)$ (actually, such a size can be easily reduced to $O(\frac{n}{\varepsilon^3})$). Then, Parter and Peleg in \cite{PP14} present a set of lower and upper bounds to the size of a $(\alpha,\beta)$-{\ttfamily EABFS}, namely a structure for which the length of a path is stretched by at most a factor of $\alpha$, plus an additive term of $\beta$. More precisely, they construct a $(1,4)$-{\ttfamily EABFS} of size $O(n^{4/3})$. Moreover, assuming at most $f=O(1)$ edge failures can take place, they show the existence of a $(3(f +1),(f+1) \log n)$-{\ttfamily EABFS} of size $O(fn)$. This improves on the general fault-tolerant \emph{spanner} construction given in \cite{CLPR09}, which, for weighted graphs and for any integer parameter $k \geq 1$, is resilient to up to $f$ edge failures, with a stretch factor of $2k-1$ and size $O(f \cdot n^{1+1/k})$.
On the other hand, concerning \emph{approximate-distance sensitivity oracles} (simply \emph{$\alpha$-oracles} in the following, where $\alpha$ denotes the guaranteed approximation ratio w.r.t. true distances), researchers aimed at computing, with a \emph{low} preprocessing time, a \emph{compact} data structure able to \emph{quickly} answer to some distance query following an edge/vertex failure. The vast literature dates back to the work \cite{TZ05} of Thorup and Zwick, who showed that, for any integer $k \geq 1$, any undirected graph with non-negative edge weights can be preprocessed in $O(km \cdot n^{1/k})$ time to build a $(2k-1)$-oracle of size $O(k\cdot n^{1+1/k})$, answering in $O(k)$ time to a post-failure distance query, recently reduced to $O(1)$ time in \cite{DBLP:conf/stoc/Chechik14}. Due to the long-standing girth conjecture of Erd\H{o}s \cite{Erd64}, this is essentially optimal. Concerning the failure of a set $F$ of at most $f$ edges, in \cite{CLPR10} the authors built, for any integer $k \geq 1$, a $(8k - 2)(f + 1)$-oracle of size $O(fk \cdot n^{1+1/k} \log (n W))$, where $W$ is the ratio of the maximum to the minimum edge weight in $G$, and with a query time of
$\widetilde{O}(|F| \cdot \log \log d)$, where $d$ is the actual distance between the queried pair of nodes in $G-F$. As far as \emph{SPT oracles} (i.e., returning distances/paths only from a source node) are concerned, in \cite{BK13} it is shown how to build in $O(m \log n + n \log^2 n)$ time an SPT oracle of size $O(n \log n)$, that for any single-vertex-failure returns a 3-stretched replacement path in time proportional to the path's size. Finally, for directed graphs with integer positive edge weights bounded by $M$, in \cite{GW12} the authors show how to build in $\widetilde{O}(M n^{\omega})$ time and $\Theta(n^2)$ space a randomized single-edge-failure SPT oracle returning \emph{exact} distances in $O(1)$ time, where $\omega< 2.373$ denotes the matrix multiplication exponent.
\subsection{Our Results} In this paper, we consider the specific, yet interesting, problem of making an SPT resilient to the failure of any sub-path of size (i.e., number of edges) at most $f \geq 1$ emanating from its source.
In more detail, let $F$ be a set of cascading edges of a given SPT, where
$0<|F|\leq f$. We say that a spanning subgraph $H$ of $G$ is a \emph{Path-Fault-Tolerant $\alpha$-Approximate SPT} (in short, $\alpha$-\texttt{PASPT}), with $\alpha\ge 1$, if, for each vertex $z \in V(G)$, the following inequality holds:
$ d_{H-F}(z) \leq \alpha \cdot d_{G-F}(z) $, where $d_{G-F}(z)$ (resp., $d_{H-F}(z)$) denotes the distance from $r$ to $z$ in $G-F$ (resp., $H-F$).
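The defining inequality can be checked mechanically on a toy instance. The following minimal, self-contained sketch (the graph, weights, and the single auxiliary edge are our own illustration, not one of the constructions analyzed later) verifies the \texttt{PASPT} bound for a path failure of two edges emanating from the source:

```python
import heapq

def dijkstra(adj, src):
    # Standard Dijkstra: distances from src in a weighted undirected graph.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def build(edges):
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    return adj

G = [('r', 'a', 1), ('a', 'b', 1), ('b', 't', 1), ('r', 'b', 2), ('r', 't', 5)]
H = [('r', 'a', 1), ('a', 'b', 1), ('b', 't', 1), ('r', 't', 5)]  # SPT + one auxiliary edge
F = {('r', 'a'), ('a', 'b')}                                      # path failure, |F| = 2

drop = lambda E: [e for e in E if (e[0], e[1]) not in F and (e[1], e[0]) not in F]
dG = dijkstra(build(drop(G)), 'r')
dH = dijkstra(build(drop(H)), 'r')
alpha = 2 * len(F) + 1
assert all(dH[z] <= alpha * dG[z] for z in dG)  # H behaves as a (2|F|+1)-PASPT here
```

Here $d_{G-F}(t)=3$ (via the edge $(r,b)$, which is absent from $H$) while $d_{H-F}(t)=5$, a stretch of $5/3 \le 2|F|+1 = 5$.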
For any integer parameter $k \geq 1$, we can provide the following results:
\begin{itemize}
\item We give an algorithm for computing, in $O(n\cdot(m +f^2))$ time, a
\pspt{(2k-1)(2|F|+1)} containing $O(k n\cdot f^{1+\frac{1}{k}})$ edges;
\item We give an algorithm for computing, in $O(n \cdot (m + f^2) )$ time, an oracle of size $O(k n\cdot f^{1+\frac{1}{k}})$ which is able to return: (i) a $(2k-1)(2|F|+1)$-approximate distance in $G-F$ between $r$ and a generic vertex $z$ in $O(k)$ time; (ii) the associated path in $O(k +f + \ell)$ time, where $\ell$ is the number of its edges; if $k=1$, this can be further reduced to $O(\ell)$ time. \end{itemize}
Concerning the former result, it compares favorably with both the aforementioned general fault-tolerant spanner construction given in \cite{CLPR09} and the unweighted {\ttfamily EABFS} structures provided in \cite{PP14}, while the latter result compares favorably with the fault-tolerant oracle given in \cite{CLPR10}. For the sake of fairness, we point out, however, that all these structures were designed to cope with edge failures arbitrarily spread across $G$.
Besides that, we also analyze in detail the special case when at most $f = 2$ failures of cascading edges can occur, for which we are able to achieve a significantly better stretch factor. More precisely, we design: (i) an algorithm for computing, in $O(n \cdot (m +n \log n))$ time, a 3-\texttt{PASPT} containing $O(n \log n)$ edges; (ii) an algorithm for computing, in $O(n \cdot (m +n \log n))$ time, an oracle of size $O(n \log n)$ which is able to return a $3$-approximate distance in $G-F$ between $r$ and a generic vertex $z$ in constant time, and the associated path in a time proportional to the number of its edges. Some of the proofs related to these latter results will be given in the appendix.
Finally, we provide an experimental evaluation of the proposed structures, to assess their performance in practice w.r.t.\ both size and quality of the stretch. \section{Notation} \label{sec:preliminaries} In what follows, we give our notation for the considered problem.
We are given a non-negatively real-weighted, undirected graph $G=(V(G),E(G))$ with
$|V(G)|=n$ vertices and $|E(G)|=m$ edges. We denote by $w_G(e)$ or $w_G(u,v)$ the weight of the edge $e=(u,v) \in E(G)$.
Given an edge $e=(u,v)$, we denote by $G - e$ or $G - (u,v)$ the graph obtained from $G$ by removing the edge $e$. Similarly, for a set $F$ of edges, $G-F$ denotes the graph obtained from $G$ by removing the edges in $F$. Furthermore, given a vertex $v \in V(G)$, we denote by $G - v$ the graph obtained from $G$ by removing vertex $v$ and all its incident edges.
Given a graph $G$, we call $\pi_G(x,y)$ a shortest path between two vertices $x,y \in V(G)$, $d_G(x,y)$ its weighted length (i.e., the distance from $x$ to $y$ in $G$), $\tree{G}{r}$ a shortest path tree (SPT) of $G$ rooted at a certain distinguished source vertex $r$. Moreover, we denote by $\subtree{G}{r}{x}$ the subtree of $\tree{G}{r}$ rooted at vertex $x$.
Whenever the graph $G$ and/or the source vertex $r$ are clear from the context, we might omit them, i.e., we write $\pi(u)$ and $d(u)$ instead of $\pi_G(r,u)$ and $d_G(r,u)$, respectively. When considering an edge $(x,y)$ of an SPT, we assume $x$ and $y$ to be the closest and the furthest endpoints from $r$, respectively.
Furthermore, if $P$ is a path from $x$ to $y$ and $Q$ is a path from $y$ to $z$, with $x,y,z \in V(G)$, we denote by $P \circ Q$ the path from $x$ to $z$ obtained by concatenating $P$ and $Q$. We also denote by $w(P)$ the total weight of the edges in $P$.
For the sake of simplicity we consider only edge weights that are strictly positive. However, our entire analysis also extends to non-negative weights. Throughout the rest of the paper, we assume that, when multiple shortest paths exist, ties are broken in a consistent manner. In particular we fix an SPT $T=\tree{G}{r}$ of $G$ and, given a graph $H \subseteq G$ and $x,y \in V(H)$, whenever we compute the path $\pi_H(x,y)$ and ties arise, we prefer edges in $E(T)$.
A path between any two vertices $u,v \in V(G)$ is said to be an $\alpha$--approximate shortest path if its length is at most $\alpha$ times the length of the shortest path between $u$ and $v$ in $G$.
For the sake of simplicity, we assume that, if a set of at most $f$ edge failures has to be handled, the original graph is $(f+1)$--edge connected. Indeed, if this is not the case, we can guarantee the $(f+1)$--edge connectivity by adding at most $O(nf)$ edges of weight $+\infty$ to $G$. Notice that this is not actually needed by any of the proposed algorithms.
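For concreteness, one way to realize the consistent tie-breaking rule above is to run Dijkstra's algorithm on lexicographic pairs (distance, number of non-tree edges), so that among equal-length paths the one using more edges of $E(T)$ wins. This is only one possible implementation, of our own choosing; any fixed consistent rule works:

```python
import heapq

def dijkstra_prefer_tree(adj, tree_edges, src):
    # Labels are pairs (distance, #non-tree edges); comparing them
    # lexicographically breaks distance ties in favor of tree edges.
    INF = (float('inf'), float('inf'))
    best = {src: (0, 0)}
    pq = [(0, 0, src)]
    while pq:
        d, nt, u = heapq.heappop(pq)
        if (d, nt) > best.get(u, INF):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            hop = 0 if (u, v) in tree_edges or (v, u) in tree_edges else 1
            cand = (d + w, nt + hop)
            if cand < best.get(v, INF):
                best[v] = cand
                heapq.heappush(pq, (cand[0], cand[1], v))
    return best

# Two shortest r-t paths of length 2 exist; the one inside T is preferred.
adj = {'r': [('a', 1), ('b', 1)], 'a': [('r', 1), ('t', 1)],
       'b': [('r', 1), ('t', 1)], 't': [('a', 1), ('b', 1)]}
tree = {('r', 'a'), ('a', 't'), ('r', 'b')}
best = dijkstra_prefer_tree(adj, tree, 'r')
```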
\section{Our \texttt{PASPT} Structure and the Corresponding Oracle} \label{sec:f_structure}
In what follows, we give a high-level description of our algorithm for computing a \pspt{(2|F|+1)}, namely $H$ (see Algorithm~\ref{alg:f_path}), where
$|F| \le f$.
We define the level $\ell(v)$ of a vertex $v \in V(G)$ to be the hop-distance between $r$ and $v$ in $T = \tree{G}{r}$, i.e., the number of edges of the unique path from $r$ to $v$ in $T$.
Note that, when a failure of $|F|$ consecutive edges occurs on a shortest path, $T$ will be broken into a forest $\ensuremath{\mathcal{C}}\xspace$ of $|F|+1$ subtrees. We consider these subtrees as rooted according to $T$, i.e., each tree $T_i$ is rooted at vertex $r_i$ that minimizes $\ell(r_i)$.
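In a possible implementation, the root of the subtree of the forest containing a given vertex can be found by climbing parent pointers in $T$ until a failed edge is met. A minimal sketch (data layout and naming are ours):

```python
def subtree_root(u, parent, F):
    # Climb toward the source until the edge to the parent has failed
    # (or u is the global root r, which has no parent entry).
    while u in parent and (parent[u], u) not in F:
        u = parent[u]
    return u

# T is the path r - a - b - t; failing F = {(r,a),(a,b)} splits T into
# |F| + 1 = 3 subtrees, rooted at r, a, and b respectively.
parent = {'a': 'r', 'b': 'a', 't': 'b'}
F = {('r', 'a'), ('a', 'b')}
roots = {u: subtree_root(u, parent, F) for u in ('r', 'a', 'b', 't')}
```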
Roughly speaking, the algorithm considers all possible path failures $F^*$ of $f$ edges by fixing the deepest endpoint $v$ of the failing path. It then reconnects the resulting $f + 1$ subtrees of $T-F^*$ by selecting at most $O(f^2)$ edges into a graph $U$, one for each pair of trees $T^*_i, T^*_j$ of the forest. These edges are either directly added to the structure $H$, or they are first sparsified into a graph $U^\prime$ by using a suitable multiplicative $(2k-1)$-spanner, so that only $O(k f^{1+\frac{1}{k}})$ of them are added to $H$.
In particular, it is known that, given an $n$-vertex graph and an integer $k \ge 1$, both a $(2k-1)$--spanner and a $(2k-1)$--approximate distance oracle of size $O(kn^{1+\frac{1}{k}})$ can be built in $O(n^2)$ time. The oracle can report an approximate distance between two vertices in $O(k)$ time, and the corresponding approximate shortest path in time proportional to the number of its edges. For further details we refer the reader to \cite{DBLP:conf/soda/BaswanaS04,DBLP:journals/talg/BaswanaS06,DBLP:conf/icalp/RodittyTZ05}. Recently, it has been shown in \cite{DBLP:conf/stoc/Chechik14} that a randomized $(2k-1)$--approximate distance oracle of \emph{expected} size $O(kn^{1+\frac{1}{k}})$ can be built, so that answering a distance query requires only constant time. In what follows, however, we only describe results which are based on deterministic construction and provide a worst case guarantee on the size of the resulting structures.
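As an illustration of the spanner subroutine, here is the classical greedy $(2k-1)$-spanner construction (Althöfer et al.). This simple, non-optimized sketch is not the $O(n^2)$-time algorithm cited above, and the function names are ours:

```python
import heapq

def dijkstra_limited(adj, src, dst, limit):
    # Shortest src-dst distance, abandoning partial paths longer than `limit`.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')) or d > limit:
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float('inf')

def greedy_spanner(edges, k):
    # Scan edges by non-decreasing weight; keep an edge only if the current
    # spanner has no path of length <= (2k-1)*w between its endpoints.
    adj, kept = {}, []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dijkstra_limited(adj, u, v, (2 * k - 1) * w) > (2 * k - 1) * w:
            kept.append((u, v, w))
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    return kept

# On the unit-weight complete graph K4 with k=2, a star of 3 edges survives.
S = greedy_spanner([(i, j, 1) for i in range(4) for j in range(i + 1, 4)], k=2)
```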
\begin{algorithm}[t] \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{A graph $G$, $r \in V(G)$, an SPT $T = \tree{G}{r}$, an integer $f$}
\Output{A \pspt{(2|F|+1)} of $G$ rooted at $r$} \BlankLine $H \gets T = \tree{G}{r}$\; \ForEach{$v \in V(G)$ \label{ln:main_loop}}{
Let $\langle r = z_0, z_1, \dots, z_{\ell(v)} \rangle$ be the path from $r$ to $v$ in $T$\;
\tcp{$F^*$ contains last $\min\{f, \ell(v) \}$ edges of the path}
Let $F^* = \{(z_{i-1}, z_i) : i > \ell(v) - \min\{\ell(v), f\}\}$ \;
Let $\ensuremath{\mathcal{C}}\xspace^* = \{T^*_1, T^*_2, \dots\}$ be the set of connected components of $T-F^*$\;
\BlankLine
\tcp{Build an auxiliary graph $U$ associated with $v$}
$U \gets ( \{r^*_i \, : r^*_i \mbox{ is the root of } T^*_i \} ,\emptyset)$ \;
\ForEach{$T^*_i,T^*_j \in \ensuremath{\mathcal{C}}\xspace^* \, : \, T^*_i \neq T^*_j$}{
Let $E_{i,j} = \{ (u,v) \in E(G) \setminus F^* : u \in V(T^*_i), v \in V(T^*_j)\}$ \;
\BlankLine
$(x',y') \gets \underset{(x,y) \in E_{i,j}}{\arg\min} \{d_T(r^*_i, x) + w_G(x,y) + d_T(y,r^*_j)\}$ \label{ln:formula}\;
\tcp{We say that $(x', y') \in E(G)$ is associated to $(r^*_i, r^*_j) \in E(U)$}
$E(U) \gets E(U) \cup \{ (r^*_i, r^*_j) \}$\;
$w_U(r^*_i,r^*_j) = d_T(r^*_i, x') + w_G(x',y') + d_T(y',r^*_j)$\;
}
\BlankLine
\tcp{Optional step, executed only if $k \neq 1$. Otherwise, let $U^\prime = U$.}
$U^\prime \gets $ Compute a $(2k-1)$-spanner of $U$ \label{ln:sparsify}\;
$E(H) \gets E(H) \cup E(U^\prime)$ \; } \Return $H$
\caption{Algorithm for building a \pspt{(2|F|+1)}. Notice that an optional integer parameter $k \ge 1$ is used. By default we set $k=1$.} \label{alg:f_path} \end{algorithm}
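The inner loop of Algorithm~\ref{alg:f_path} (building $U$ for a fixed $v$) amounts to a single scan over the edge set, keeping, for every pair of subtrees, the cheapest connecting edge according to the selection rule of line~\ref{ln:formula}. The sketch below uses our own simplified data layout, with component indices and root distances assumed precomputed:

```python
def build_aux_graph(edges, comp, root_dist, Fstar):
    # comp[u]: index of u's subtree of T - F*;
    # root_dist[u]: d_T(root of u's subtree, u).
    # Keeps, per pair of subtrees, the edge minimizing
    # d_T(r*_i, x) + w(x, y) + d_T(y, r*_j).
    best = {}  # (i, j) with i < j  ->  (weight of U-edge, associated edge of G)
    for x, y, w in edges:
        if (x, y) in Fstar or (y, x) in Fstar or comp[x] == comp[y]:
            continue
        key = (min(comp[x], comp[y]), max(comp[x], comp[y]))
        cost = root_dist[x] + w + root_dist[y]
        if cost < best.get(key, (float('inf'), None))[0]:
            best[key] = (cost, (x, y))
    return best

# Toy instance: T is r - a - b, with c a child of b; F* = {(r,a),(a,b)}
# leaves the subtrees {r}, {a}, {b,c}.
comp = {'r': 0, 'a': 1, 'b': 2, 'c': 2}
root_dist = {'r': 0, 'a': 0, 'b': 0, 'c': 1}
edges = [('r', 'a', 1), ('a', 'b', 1), ('b', 'c', 1),
         ('r', 'c', 3), ('r', 'b', 5), ('a', 'c', 1)]
Fstar = {('r', 'a'), ('a', 'b')}
U = build_aux_graph(edges, comp, root_dist, Fstar)
```

Note that $(r,c)$ beats $(r,b)$ for the pair of subtrees $\{r\}$ and $\{b,c\}$, since its cost accounts for the tree distance from $c$ back to the root $b$.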
We start by bounding the running time of Algorithm~\ref{alg:f_path}: \begin{lemma}
Algorithm~\ref{alg:f_path} requires $O(n(m+f^2))$ time. \end{lemma} \begin{proof}
Notice that the loop in line~\ref{ln:main_loop} considers each vertex of $G$ at most once.
We bound the time required by each iteration.
For each vertex $v$ a complete auxiliary graph $U$ of $O(f)$ vertices is built.
Moreover, the weights of all the edges of $U$ can be computed in $O(m)$ time by scanning
all the edges of $E(G) \setminus F^*$ while keeping track, for each pair of vertices $r^*_i, r^*_j \in V(U)$,
of the minimum value of the formula in line~\ref{ln:formula}.
Finally, the optional spanner construction invoked by line~\ref{ln:sparsify} requires $O(f^2)$ time.
This concludes the proof. \end{proof}
We now bound the size of the returned structure: \begin{lemma} \label{lemma:f_path:size} The structure $H$ returned by Algorithm~\ref{alg:f_path} contains $O(k n \cdot f^{1+\frac{1}{k}})$ edges. \end{lemma} \begin{proof} At the beginning of the algorithm, $H$ coincides with $T = \tree{G}{r}$, so
$|E(H)|=O(n)$. Therefore, we only need to bound the number of edges added to $H$ during the execution of the algorithm. Notice that, for each vertex $v \in V(G)$, Algorithm~\ref{alg:f_path} considers at most $f+1$ connected components of
$\mathcal{C}^*$. For each pair of components, at most one edge is added to $U$, hence $|E(U)|=O(f^2)$. Either $k=1$ and $U^\prime = U$, or $k > 1$ and $U^\prime$ is a $(2k-1)$--spanner of $U$. In both cases we have $|E(U^\prime)| = O(k |V(U)|^{1+\frac{1}{k}}) = O(k f^{1+\frac{1}{k}})$. As only the edges of $U^\prime$ get added to $H$, the claim follows. \end{proof}
We now upper-bound the distortion provided by the structure $H$. For the sake of clarity, we first discuss the case where the step of line~\ref{ln:sparsify} of Algorithm~\ref{alg:f_path} is omitted, i.e., we simply set $k=1$ and $U^\prime = U$. At the end of this section we will argue about the general case.
For each path failure $F$ of $|F| \leq f$ edges, and for each target vertex $t$, we will consider a suitable path $P$ in $G-F$ whose length is at most $(2|F|+1)$ times the distance $d_{G-F}(t)$. Then, since $P$ might not be entirely contained in $H-F$, we will show that its length must be an upper bound to the length of a path $Q$ in $H-F$ between $r$ and $t$, and hence to $d_{H-F}(t)$.
We first discuss how $P$ is defined: consider the forest $\ensuremath{\mathcal{C}}\xspace$ of the connected components of $T-F$.
Let $\pi = \pi_{G-F}(t)$, let $r_0 = r$, and let $t_0$ be the last vertex of $\pi$ belonging to $T_0$. W.l.o.g., we assume $t \not\in V(T_0)$, as otherwise we have $d_{H-F}(t) = d_{G-F}(t)$. Moreover, we call $(t_0, s_1)$ the edge following vertex $t_0$ in $\pi$.
Initially, we set $P_0 = \pi_T(r,t_0) \circ (t_0, s_1)$ and $i=1$. We proceed iteratively: let $T_i$ be the subtree of $\ensuremath{\mathcal{C}}\xspace$ which contains $s_i$, and let $t_i$ be the last vertex of $\pi$ such that $t_i$ belongs to $T_i$, i.e., $t_i$ is in the same subtree as $s_i$ (notice that it may be that $s_i=t_i$). Call $r_i$ the root of $T_i$. If $t_i = t$ we set $P = P_{i-1} \circ \pi_T(s_i, r_i) \circ \pi_T(r_i, t_i)$, and we are done. Otherwise, let $(t_i, s_{i+1})$ be the edge following $t_i$ in $\pi$. We set $P_i = P_{i-1} \circ \pi_T(s_i, r_i) \circ \pi_T(r_i, t_i) \circ (t_i, s_{i+1})$, we increment $i$ by one, and we repeat the whole procedure. Figure~\ref{fig:aux_path} shows an example of such a path $P$. Let $h$ be the final value of $i$ at the end of this procedure, so that $t=t_h \in V(T_h)$.
\begin{figure}
\caption{Example of construction of $P$. The path $P$ is shown in bold, while the path $\pi$ is composed of both the light subpaths and of the bold edges with endpoint in different subtrees. In this example $P$ traverses $4$ subtrees and hence $h=3$.}
\label{fig:aux_path}
\end{figure}
Notice that, by construction, the path $P$ does not contain any failed edge. We now argue that the length $w(P)$ of $P$, is always at most
$(2|F|+1)$ times the distance $d_{G-F}(t)$.
\begin{lemma} \label{lemma:f_path:stretch_1}
$d_P(t) \leq (2|F|+1) \cdot d_{G-F}(t)$, for every $t \in V(G)$. \end{lemma}
\begin{proof} We proceed by showing, by induction on $i$, that $d_P(t_i) \leq
(2i+1) \cdot d_{G-F}(t_i)$. The claim follows since $t=t_h$ and $h \le |F|$.
The base case is trivially true, as we have $d_P(t_0) = 1 \cdot d_{G-F}(t_0)$, since $t_0$ belongs to the same subtree $T_0$ as $r$. Now, suppose that the claim is true for $i-1$. We can prove that it is true also for $i$ by writing: \begin{align*}
d_P(t_i) & = d_P(t_{i-1}) + d_P(t_{i-1}, s_i) + d_P(s_i, r_i) + d_P(r_i, t_i) \\
& \le (2i-1) \cdot d_{G-F}(t_{i-1}) + d_{G-F}(t_{i-1}, s_i) + d_G(s_i, r_i) + d_G(r_i, t_i) \\
& \le (2i-1) \cdot d_{G-F}(t_{i-1}) + d_{G-F}(t_{i-1}, s_i) + d_G(s_i, t_i) + 2 d_G(r_i, t_i) \\
& \le (2i-1) \cdot d_{G-F}(t_i) + 2 d_{G}(t_i) \le (2i+1) \cdot d_{G-F}(t_i). \end{align*} \end{proof}
It remains to show that, even though $P$ might not be entirely contained in $H-F$, its length $w(P)$ is always an upper bound to $d_{H-F}(t)$.
Let $v$ be the deepest endpoint (w.r.t.\ level) among the endpoints of the edges in $F$. Moreover, let $F^*$ be the set of failed edges considered by Algorithm~\ref{alg:f_path} when $v$ is examined at line~\ref{ln:main_loop}, and let $U$ be the corresponding auxiliary graph. Notice that $F \subseteq F^*$, as $F^*$ always contains $\min\{ \ell(v), f \}$ edges. As a consequence, $T_0 \in \ensuremath{\mathcal{C}}\xspace$ contains, in general, several trees in $\ensuremath{\mathcal{C}}\xspace^*$. We let $R$ be the set of the roots of all the subtrees of $T_0$ which are in $\ensuremath{\mathcal{C}}\xspace^*$. Notice that every other tree $T_j \in \ensuremath{\mathcal{C}}\xspace$ such that $T_j \neq T_0$ belongs to $\ensuremath{\mathcal{C}}\xspace^*$ (see Figure~\ref{fig:path_to_ri}).
Remember that $r_h$ is the root of the subtree $T_h \in \ensuremath{\mathcal{C}}\xspace^* = T-F^*$ which contains $t$. Let $r^\prime_0$ be the root of the last tree $T^\prime_0 \in \ensuremath{\mathcal{C}}\xspace^*$ which is contained in $T_0$ and is traversed by $\pi_{G-F}(r_h)$. It follows that $r^\prime_0 \in V(P)$. We now construct another path $Q$, which will be entirely contained in $H-F$. We choose a special vertex $r^*_0 \in R$, as follows: \begin{equation}
\label{eq:choice_of_r^*_0}
r^*_0 = \arg \min_{z \in R} \{ d_T(z) + d_U(z, r_h) \}. \end{equation}
\begin{figure}
\caption{An example of path $Q$ contained in $H-F$ (left) and of the corresponding
edges of $U$ (right). The length of $Q$ is upper-bounded by that of $P$.}
\label{fig:path_to_ri}
\end{figure}
The path $Q$ is composed of three parts, i.e., $Q = Q_1 \circ Q_2 \circ Q_3$. The first one, $Q_1$, coincides with $\pi_T(r^*_0)$. The second one is obtained by considering the shortest path $\pi_U(r^*_0, r_h)$ and by replacing each edge going from a vertex $r^*_i \in V(U)$ to a vertex $r^*_j \in V(U)$ with the path $\pi_T(r^*_i, x') \circ (x',y') \circ \pi_T(y', r^*_j)$, where $(x',y')$ is the edge associated to $(r^*_i, r^*_j)$ by Algorithm~\ref{alg:f_path} when $v$ is considered. Finally, $Q_3 = \pi_T(r_h, t)$. In Figure~\ref{fig:path_to_ri}, we show an example of how such a path $Q$ can be obtained.
We now prove that: \begin{lemma} \label{lemma:f_path:stretch_2} $d_{H-F}(r,t) \le w(Q) \le w(P)$ \end{lemma} \begin{proof} Notice that the path $Q$ is in $H$ and does not contain any failed edge, hence $d_{H-F}(r,t) \le w(Q)$ is trivially true.
To prove $w(Q) \le w(P)$, notice that $P$ can also be decomposed into the three subpaths $P_1 = P[r, r^\prime_0]$, $P_2 = P[r^\prime_0, r_h]$ and $P_3 = P[r_h,t]$. We have that $P_3 = Q_3$ and that the endpoints of $P_2$ coincide with the endpoints of $Q_2$.
By the choice of $r^*_0$, we must have $w(Q_1) + w(Q_2) \le w(P_1) + w(P_2)$ as the (weighted length of) path $P_1 \circ P_2$ is considered in equation \eqref{eq:choice_of_r^*_0} when $z=r^\prime_0$.
This implies that $w(Q) = w(Q_1) + w(Q_2) + w(Q_3) \le w(P_1) + w(P_2) + w(P_3) = w(P)$. \end{proof}
By combining~Lemma~\ref{lemma:f_path:size} with Lemma~\ref{lemma:f_path:stretch_1} and \ref{lemma:f_path:stretch_2}, it immediately follows: \begin{theorem} Algorithm \ref{alg:f_path} computes, in $O(n(m+f^2))$ time, a
\pspt{(2|F|+1)} of size $O(n f^2)$, for any $|F|\le f$. \end{theorem}
We now relax the assumption that $U = U^\prime$. Indeed, if $k \neq 1$, Algorithm~\ref{alg:f_path} computes, in line~\ref{ln:sparsify}, a $(2k-1)$--spanner $U'$ of the graph $U$. In this case, we can construct a path $Q^\prime$ in a similar way as we did for $Q$, with the exception that we now use the graph $U^\prime$ instead of $U$. Once we do so, it is easy to prove that a more general version of Lemma~\ref{lemma:f_path:stretch_2} holds:
\begin{lemma}\label{lemma:f_path:stretch_2-bis}
$d_{H-F}(r,t) \le (2k-1) w(Q^\prime) \le (2k-1) w(P)$ \end{lemma}
Lemma \ref{lemma:f_path:stretch_2-bis}, combined with Lemma~\ref{lemma:f_path:stretch_1}, immediately implies that $d_{H-F}(r,t) \le (2k-1)(2|F|+1) d_{G-F}(r,t)$. This discussion allows us to show an interesting trade-off between the size of the returned structure and the multiplicative stretch provided, as summarized by the following theorem:
\begin{theorem} Let $k\ge 1$ be an integer. Then, Algorithm \ref{alg:f_path} can compute, in
$O(n(m+f^2))$ time, a \pspt{(2k-1)(2|F|+1)} of size $O(n k\cdot f^{1+\frac{1}{k}})$. \end{theorem}
\subsection{Oracle Setting} In what follows, we show how Algorithm \ref{alg:f_path} can be used to compute an approximate distance oracle of size $O(n f^2)$ (see Algorithm \ref{alg:f_path_oracle}). We also show that a smaller-size oracle can be obtained (see Algorithm \ref{alg:f_path_oracle_2}) if we allow for a slightly larger query time.
\begin{algorithm}[t] \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
Preprocess $T = \tree{G}{r}$ to answer LCA queries as shown in \cite{HT84}\; For each vertex $v \in V(G)$, compute and store its level $\ell(v)$. \BlankLine
\ForEach{$v \in V(G)$}{
Let $\langle r = z_0, z_1, \dots, z_{\ell(v)} \rangle$ be the path from $r$ to
$v$ in $T$\;
\BlankLine
Build graph $U$ associated with vertex $v$ as in Algorithm~\ref{alg:f_path} \;
Compute and store the solution to the all-pairs shortest paths problem on $U$\;
\BlankLine
\ForEach{$\eta = 1, \dots, \min\{f, \ell(v)\}$}{
\ForEach{ $r_h : h > \ell(v)-\eta$ }{
$R \gets \{ z_i : 0 \le i \le \ell(v)-\eta \}$ \;
Let $r^*_0$ be the vertex of $R$ minimizing
Equation~\eqref{eq:choice_of_r^*_0} \;
Store $r^*_0$ with key $(v, \eta, r_h)$ \label{ln:store_r*_0}\;
}
} }
\caption{Algorithm for building an oracle with constant query time.} \label{alg:f_path_oracle} \end{algorithm}
\begin{algorithm}[t] \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
Preprocess $T$ to answer LCA queries as shown in \cite{HT84}\; For each vertex $v \in V(G)$, compute and store its level $\ell(v)$. \BlankLine
\ForEach{$v \in V(G)$}{ Build graph $U$ associated with vertex $v$ as in Algorithm~\ref{alg:f_path} \; Build and store a distance sensitivity oracle of $U$ with stretch $2k-1$ \; } \caption{Algorithm for building an oracle with $O(f)$ query time.} \label{alg:f_path_oracle_2} \end{algorithm}
\begin{theorem} \label{thm:f_path_oracle}
Let $F$ be a path failure of $|F| \le f$ edges and $t \in V(G)$. Algorithm~\ref{alg:f_path_oracle} builds, in $O(n(m+f^2))$ time, an oracle of size $O(n f^2)$ which is able to return: \begin{itemize}
\item a $(2|F|+1)$-approximate distance in $G-F$ between $r$ and $t$ in constant time; \item the associated path in a time proportional to the number of its edges.
\end{itemize} \end{theorem} \begin{proof} In order to answer a query we need to find: (i) the root $r^*_0$ of the subtree of $\ensuremath{\mathcal{C}}\xspace^*$ which contains $t_0$, (ii) the root $r_h$ of the subtree of $\ensuremath{\mathcal{C}}\xspace^*$ containing $t$.
In order to find $r_h$, we perform a LCA query on $T$ to find the least common ancestor $u$ between $v$ and $t$. Either $\ell(v) \ge \ell(u) > \ell(v) -
|F|$, in which case $u = r_h$, or $\ell(u) \le \ell(v)-|F|$ which means that $t$ belongs to $T_0$.
As in the latter case we can simply return $d_T(t)$, we focus on the former one.
To find $r^*_0$ we look for the vertex associated with the triple $(v,
|F|, r_h)$ stored by Algorithm~\ref{alg:f_path_oracle} at line \ref{ln:store_r*_0}.
We answer a distance query with the quantity $d_T(r^*_0) + d_{U}(r^*_0, r_h) + d_T(r_h, t)$, which can be computed in constant time by accessing the distances stored in the shortest-path tree $T$, plus the solution of the APSP problem on $U$ computed by Algorithm~\ref{alg:f_path_oracle} when vertex $v$ was considered.
To answer a path query we simply construct, and return, the path $Q$, by expanding the edges of the graph $U^\prime$ into paths which are in $G-F$, as explained before. This clearly takes a time proportional to the number of edges of $Q$. \end{proof}
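A compact sketch of the distance-query logic just described, with a naive level-climbing LCA standing in for the constant-time scheme of \cite{HT84}, and with hypothetical stored data for a toy failure (all names and values below are our own illustration):

```python
# T is the path r - a - b - t, with unit weights.
parent = {'a': 'r', 'b': 'a', 't': 'b'}
ell = {'r': 0, 'a': 1, 'b': 2, 't': 3}   # levels in T
dT = {'r': 0, 'a': 1, 'b': 2, 't': 3}    # distances from r in T

def lca(x, y):
    # Naive LCA by climbing parent pointers, level by level.
    while ell[x] > ell[y]: x = parent[x]
    while ell[y] > ell[x]: y = parent[y]
    while x != y: x, y = parent[x], parent[y]
    return x

def query_distance(t, v, f_size, stored_r0, dU):
    u = lca(v, t)
    if ell[u] <= ell[v] - f_size:   # t still hangs off T_0: tree distance survives
        return dT[t]
    r_h = u                         # root of the subtree containing t
    r0 = stored_r0[(v, f_size, r_h)]
    # d_T(r_h, t) = dT[t] - dT[r_h], since r_h is an ancestor of t in T.
    return dT[r0] + dU[(r0, r_h)] + (dT[t] - dT[r_h])

# Hypothetical precomputed answers for the failure F = {(r,a),(a,b)} (v = b):
stored_r0 = {('b', 2, 'b'): 'r'}    # best re-entry root for target subtree b
dU = {('r', 'b'): 4}                # recovery distance through auxiliary edges
```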
If we allow for a query time that is proportional to $O(f+k)$, we can reduce the size of the oracle by computing a distance sensitivity oracle (DSO) of $U$ (see Algorithm~\ref{alg:f_path_oracle_2}). In this case, we can still find vertex $r_h$ using the LCA query, as shown in the proof of Theorem~\ref{thm:f_path_oracle}, while vertex $r^*_0$ is guessed among the (up to) $f$ roots of the trees in $G-F^*$ which are contained in $T_0$. The resulting oracle is summarized by the following:
\begin{theorem} \label{thm:f_path_spanned_oracle}
Let $F$ be a path failure of $|F| \le f$ edges, let $t \in V(G)$ and let $k \ge 1$ be an integer. Algorithm~\ref{alg:f_path_oracle_2} builds, in $O(n(m+f^2))$ time, an oracle of size $O(n k f^{1+\frac{1}{k}})$ which is able to return:
\begin{itemize}
\item a $(2k-1)(2|F|+1)$-approximate distance in $G-F$ between $r$ and $t$ in $O(f+k)$ time; \item the corresponding path in $O(\ell + k + f)$ time, where $\ell$ is the number of its edges.
\end{itemize} \end{theorem}
\section{Our \pspt{3} Structure for Paths of 2 Edges} \label{sec:2_structure} \newcommand{\ensuremath{\texttt{FirstLast}}}{\ensuremath{\texttt{FirstLast}}}
In what follows, we provide an algorithm which builds a \pspt{3} (see Algorithm~\ref{alg:2_path}) for the special case of at most $f = 2$ cascading edge failures. This structure improves, w.r.t. the quality of the stretch, over the general
\pspt{(2|F|+1)} of Section~\ref{sec:f_structure}.
\begin{algorithm}[t] \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{A graph $G$, $r \in V(G)$, an SPT $T = \tree{G}{r}$} \Output{A \pspt{3} of $G$ rooted at $r$} \BlankLine $H \gets \tree{G}{r}$ \; $\hat{T} \gets$ compute a $3$-EASPT of $\tree{G}{r}$ as shown in \cite{NPW03} \; $E(H) \gets E(H) \cup E(\hat{T})$ \; Compute a path decomposition $\mathcal{P}$ of $\tree{G}{r}$ by recursively applying Lemma~\ref{lemma:path_decomposition_one_path}\; \BlankLine \ForEach{Path $P \in \mathcal{P}$}{
\ForEach{$x \in V(P) \, : x$ is not a leaf and $x \neq r$}{
Let $z$ be the (unique) child of $x$ in $P$\;
Let $\hat{e}$ be the edge connecting $x$ and its parent in $T$\;
\BlankLine
\tcp{Protect vertex $x$}
$E(H) \gets E(H) \cup \ensuremath{\texttt{FirstLast}}(\pi_{G-\hat{e}}(x), \subtree{G}{r}{z})$ \label{ln:protecting_x_start}\;
\If{$\pi_{G-\hat{e}}(x)$ contains an edge $e^\prime$ in $C(x)$}
{
$E(H) \gets E(H) \cup \ensuremath{\texttt{FirstLast}}(\pi_{G-\hat{e}-e^\prime}(x), \subtree{G}{r}{z})$ \label{ln:protecting_x_end}\;
}
\BlankLine
\tcp{Protect vertex $z$}
$E(H) \gets E(H) \cup E(\pi_{G-\hat{e}}(z))$ \label{ln:protecting_z_start}\;
\ForEach{$e^\prime \in \{\pi_{G-\hat{e}}(z) \cap C(x)\}$}{ $E(H) \gets E(H) \cup E(\pi_{G - \hat{e}- e^\prime }(z))$ \label{ln:protecting_z_end} \;
}
\BlankLine
\tcp{Protect all the other children of $x$}
\ForEach{child $z_i$ of $x \,:\, z_i \neq z$ \label{ln:protecting_children_start}}
{
Let $(u,q)$ be the first edge of $\pi_{G-\hat{e}-(x,z_i)}(x,z_i)$ with $q \in V(\subtree{G}{r}{z_i})$\;
$E(H) \gets E(H) \cup \{ (u,q) \}$ \label{ln:protecting_children_end} \;
}
\BlankLine
\tcp{Protect vertices whose shortest paths do not contain $x$}
$T^\prime \gets \tree{G-x}{r}$ with edges oriented towards the leaves \label{ln:protecting_other_start} \;
$E(H) \gets E(H) \cup \{ (x_1, x_2) \in E(T^\prime) \, : \, x_2 \not\in \subtree{G}{r}{z} \}$ \label{ln:protecting_other_end} \;
} }
\Return $H$ \caption{Algorithm for building a \pspt{3} for the case of $f=2$.} \label{alg:2_path} \end{algorithm}
The algorithm starts with a 3-\texttt{EASPT} with $O(n)$ edges \cite{NPW03} and proceeds as follows. As an initial building block, it considers a suitable path $P$ in the shortest-path tree $\tree{G}{r}$, and constructs a structure $H$ that is able to handle the failure of a pair of edges $\{e_1,e_2\}$, such that $e_1 \in P$, and guarantees $3$-stretched distances from $r$, for each vertex in $G$.
Then, we make use of the following result of \cite{BK13}: \begin{lemma}[\cite{BK13}] \label{lemma:path_decomposition_one_path} There exists an $O(n)$ time algorithm to compute an ancestor-leaf path $Q$ in $\tree{G}{r}$ whose removal splits $\tree{G}{r}$ into a set of disjoint subtrees $\subtree{G}{r}{r_1},\dots,\subtree{G}{r}{r_j}$ such that, for each $i\le j$: \begin{itemize}
\item $| \subtree{G}{r}{r_i}| < n/2$ and $V(Q) \cap V(\subtree{G}{r}{r_i}) = \emptyset$ \item $\subtree{G}{r}{r_i}$ is connected to $Q$ through some edge \end{itemize} \label{lemma:path_dec} \end{lemma}
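For intuition, a path with the guarantees of the lemma can be obtained by always descending into a child with the largest subtree: any subtree hanging off such a path is rooted at a non-heaviest child, so it holds fewer than half of its parent's vertices, hence fewer than $n/2$. The Python sketch below illustrates this heavy-path argument; it is not the actual construction of \cite{BK13}.

```python
def subtree_sizes(children, root):
    """Compute subtree sizes with an iterative post-order traversal."""
    sizes, stack = {}, [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            sizes[v] = 1 + sum(sizes[c] for c in children.get(v, []))
        else:
            stack.append((v, True))
            stack.extend((c, False) for c in children.get(v, []))
    return sizes

def heavy_path(children, sizes, root):
    """Return a root-to-leaf path Q such that every subtree hanging off Q
    has fewer than n/2 vertices: always descend into a largest child."""
    path, v = [root], root
    while children.get(v):
        v = max(children[v], key=lambda c: sizes[c])  # heaviest child
        path.append(v)
    return path
```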
This allows us to incrementally add edges to $H$ by considering a set $\mathcal{P}$ of edge-disjoint paths. This set can be obtained by recursively using the path decomposition technique of Lemma~\ref{lemma:path_dec} on the shortest-path tree $\tree{G}{r}$. We show that, in this way, we are able to build a \pspt{3} of size $O(n \log n)$. Given a path $\pi = \langle s, \dots, t \rangle$ and a tree $T^\prime$, we denote by $\ensuremath{\texttt{FirstLast}}(\pi, T^\prime)$ the edges of the subpaths of $\pi$ going (i) from $s$ to the first vertex of $\pi$ in $V(T^\prime)$, and (ii) from the last vertex of $\pi$ in $V(T^\prime)$ to $t$. If these vertices do not exist, i.e., $V(\pi) \cap V(T^\prime) = \emptyset$, then we define $\ensuremath{\texttt{FirstLast}}(\pi, T^\prime) = E(\pi)$. Moreover, we denote by $C(x)$ the edges connecting vertex $x$ to its children in \tree{G}{r}. We are able to prove the following theorem, whose proof is given in the appendix:
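The operator $\ensuremath{\texttt{FirstLast}}$ as just defined admits a direct implementation; the following Python sketch (paths represented as vertex lists, edges as vertex pairs) is illustrative and not part of the paper's code.

```python
def first_last(pi, tree_vertices):
    """FirstLast(pi, T'): edges of the prefix of pi up to its first vertex
    in V(T'), plus the suffix of pi from its last vertex in V(T') onwards.
    If pi never meets T', all of pi's edges are returned.
    pi: list of vertices; tree_vertices: the vertex set of T'."""
    hits = [i for i, v in enumerate(pi) if v in tree_vertices]
    edges = list(zip(pi, pi[1:]))
    if not hits:
        return set(edges)
    first, last = hits[0], hits[-1]
    return set(edges[:first]) | set(edges[last:])
```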
\begin{theorem} \label{thm:2_path_preproc}
Let $F$ be a path failure of $|F| \le 2$ edges and $t \in V(G)$. Algorithm \ref{alg:2_path} computes, in $O(nm + n^2 \log n)$ time, a \pspt{3} of size $O(n \log n)$. \end{theorem} Notice that it is possible to modify Algorithm~\ref{alg:2_path} in order to build an oracle of size $O(n \log n)$ which is able to report, with optimal query time, both a $3$-stretched shortest path in $G-F$ and its distance, when $F$ contains two consecutive edges in $T$. Both the description of the modified algorithm and the proof of the following theorem are given in the appendix. \begin{theorem} \label{thm:oracle_2_path}
Let $F$ be a path failure of $|F| \le 2$ edges and $t \in V(G)$. A modification of Algorithm~\ref{alg:2_path} builds, in $O(nm + n^2 \log n)$ time, an oracle of size $O(n \log n)$ which is able to return: \begin{itemize} \item a $3$-approximate distance in $G-F$ between $r$ and $t$ in constant time; \item the associated path in a time proportional to the number of its edges. \end{itemize} \end{theorem}
\section{Experimental Study}\label{sec:experiments} In this section, we present an experimental study to assess the performance, w.r.t. both the quality of the stretch and the size (in terms of edges), of the proposed structures, implemented within SageMath (v. 6.6) under GNU/Linux.
As input to our algorithms, we used weighted undirected graphs belonging to the following graph categories: (i) \emph{Uncorrelated Random Graphs} (ERD): generated by the general \emph{Erd\H{o}s-R\'enyi\xspace} algorithm~\cite{B01}; (ii) \emph{Power-law Random Graphs} (BAR): generated by the \emph{Barab\'asi-Albert\xspace} algorithm~\cite{AB99}; (iii) \emph{Quadrangular Grid Graphs} (GRI): graphs whose topology is induced by a two-dimensional grid formed by squares.
For each of the above synthetic graph categories we generated three input graphs of different size and density. We assigned weights to the edges at random, with uniform probability, within $[100,100\,000]$.
We also considered two real-world graphs. In detail: (i) a graph (CAI) obtained by parsing the \emph{CAIDA IPv4 topology dataset}~\cite{caida}, which describes a subset of the Internet topology at router level (weights are given by round trip times); (ii) the road graph of Rome (ROM) taken from the 9th Dimacs Challenge Dataset\footnote{http://www.dis.uniroma1.it/challenge9} (weights are given by travel times).
\renewcommand{\arraystretch}{1.1}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{d}[1]{D{.}{\cdot}{#1}}
\newcommand{\ccol}[1]{\multicolumn{1}{c|}{#1}} \newcommand{\ccolnop}[1]{\multicolumn{1}{c}{#1}}
\newcommand{\szccolp}[2]{\multicolumn{1}{C{#1cm}|}{#2}} \newcommand{\szccolnop}[2]{\multicolumn{1}{C{#1cm}}{#2}}
Then, for each input graph, we built both the \pspt{(2k -1)(2|F|+1)}, for which we focused on the basic case of $k=1$, and the \pspt{3}, as follows: we randomly chose a root vertex, computed the SPT and enriched it by using the corresponding procedures (i.e. Algorithms~\ref{alg:f_path} and~\ref{alg:2_path}, resp.). We measured the total number of edges of the resulting structures.
Regarding Algorithm~\ref{alg:f_path}, we set $f=10$, as such a value has already been considered in previous works focused on the effect of path-like disruptions on shortest paths~\cite{BW09,DDFLP14}.
Then, we randomly selected path failures of $|F|$ edges to perform on the input graphs, with $|F|$ uniformly chosen at random within the range $[2,f]$. We removed the edges belonging to the path failure from both the original graph and the computed structure.
Regarding Algorithm~\ref{alg:2_path}, we simply chose at random a pair of edges and removed them from both the original graph and the computed structure.
After the removal, we computed distances, from the root vertex, in both the original graph and the fault tolerant structure, and measured the resulting average stretch. In order to be fair, we considered only those nodes that get disconnected as a consequence of the failures. Our results are summarized in Table~\ref{table:results}, where, for each input graph, we report the number of vertices and edges, the average size (number of edges) of the two fault tolerant structures and the corresponding provided average stretch.
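The measurement step just described can be sketched as follows. This is illustrative Python with a plain Dijkstra; the adjacency-dictionary encoding and all names are ours, not part of the actual experimental code.

```python
import heapq

def dijkstra(adj, src):
    """Single-source distances; adj[u] = {v: weight}, undirected."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def remove_edges(adj, F):
    """Copy adj with the (undirected) edges of F removed."""
    out = {u: dict(nb) for u, nb in adj.items()}
    for u, v in F:
        out.get(u, {}).pop(v, None)
        out.get(v, {}).pop(u, None)
    return out

def avg_stretch(G, H, F, r, affected):
    """Average of d_{H-F}(r,t) / d_{G-F}(r,t) over the affected nodes
    (those disconnected from r in the SPT by the failure F)."""
    dG = dijkstra(remove_edges(G, F), r)
    dH = dijkstra(remove_edges(H, F), r)
    ratios = [dH[t] / dG[t] for t in affected if t in dG]
    return sum(ratios) / len(ratios)
```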
\begin{table}[t] \centering
\begin{tabular}{c|c|c|c|c|c|c} \multirow{2}{*}{\bf G} &
\multirow{2}{*}{\bf |V(G)|} &
\multirow{2}{*}{\bf |E(G)|} &
\multicolumn{2}{c|}{\pspt{(2|F|+1)}}& \multicolumn{2}{c}{\pspt{3}}\\ \cline{4-7} & & & \bf \#edges & \bf avg stretch& \bf \#edges & \bf avg stretch\\ \hline ERD-1 & 500 & 50\,000 & 3\,980 &1.8015 & 957 & 1.0000\\ \hline ERD-2 & 1\,000 &50\,000 & 8\,899 &1.1360 & 1\,924 & 1.0000\\ \hline ERD-3 & 5\,000 & 50\,000 & 20\,198 &1.0903 & 9\,501 & 1.0035 \\ \hline BAR-1 & 500 & 1\,491 & 1\,366 &1.0003 & 949 & 1.0041\\ \hline BAR-2 & 1\,000 & 2\,991 & 2\,765&1.0034 & 1\,871 & 1.0005 \\ \hline BAR-3 & 5\,000 &14\,991 & 13\,349&1.0040 & 9\,459 & 1.0000\\ \hline GRI-1 & 500 & 1\,012 & 1\,008 & 1.0005 & 868& 1.0000\\ \hline GRI-2 & 1\,000 &1\,984 & 1\,973 & 1.0000 &1\,749 & 1.0000\\ \hline GRI-3 & 5\,000 &9\,940 & 9\,884& 1.0000 &8\,826 & 1.0000\\ \hline CAI &5\,000 & 6\,328 & 6\,033 & 1.0000 & 6\,026 & 1.0000\\
\hline ROM & 3\,353 & 4\,831 & 4\,796 & 1.0000 & 4\,780 & 1.0000 \\ \end{tabular} \caption{Average number of edges and stretch factor for both the
\pspt{(2|F|+1)} and the \pspt{3}. } \label{table:results} \end{table}
First of all, our results show that the quality of the stretch, provided by both the \pspt{(2|F|+1)} and the \pspt{3} in practice, is always by far better than the estimation given by the worst-case bound (i.e. $2|F|+1$ and $3$, resp.). In detail, the average stretch is always very close to $1$ and depends neither on the input size nor on the number of failures. This is probably due to the fact that those cases considered in the worst-case analysis are quite rare.
Similar considerations can be made w.r.t. the number of edges that are added to the SPT by Algorithms~\ref{alg:f_path} and~\ref{alg:2_path}. In fact, also in this case, the structures behave better than what the worst-case bound suggests. For instance, the number of edges of the \pspt{(2|F|+1)} (the \pspt{3}, resp.) is much smaller than $n f^2$ ($n \log n$, resp.).
In summary, our experiments suggest that the proposed fault tolerant structures might be suitable to be used in practice.
\appendix
\section{Omitted Proofs} In this section, we analyze Algorithm~\ref{alg:2_path}. In detail, we prove that, given a set of two failures $F = \{e_1,e_2\}$, $d_{H-F}(t) \le 3\cdot d_{G-F}(t)$ for every $t\in V(G)$, and that $H$ contains $O(n\cdot \log n)$ edges.\footnote{We only focus on exactly two edge faults since $H$ already contains a 3-\texttt{EASPT}.} W.l.o.g. we assume that $e_1=(y,x)$, $e_2=(x, k)$, where $x$ is a child of $y$ and $k$ is a child of $x$ in $T$.
Notice that every possible edge $e_1$ of a pair of failures that can occur on $\tree{G}{r}$ is considered exactly once as, during the construction phase, we make use of the path decomposition technique of~\cite{BK13}. Let $P \in \mathcal{P}$ be the path of the path decomposition $\mathcal{P}$ which contains $e_1$ and let $z$ be the vertex following $x$ in $P$.\footnote{Note that vertex $z$ always exists as the last vertex of $P$ must be a leaf in $T$, while $x$ is an internal vertex.} Notice that the other failed edge $e_2=(x,k)$ might or might not belong to the very same path $P$.
We now bound the distance $d_{H-F}(t)$ between $r$ and a generic \emph{target vertex} $t \in V(G)$. We assume, w.l.o.g., that $t$ belongs to $\subtree{G}{r}{x}$ as otherwise we trivially have $d_{H-F}(t) = d_{G-F}(t)$. For the sake of clarity, we divide the proof into parts, depending on the position of $t$ in $\tree{G}{r}-F$ and on the structure of the path $\pi_{G-F}(t)$.
\begin{lemma} \label{lemma:2_path_t_in_Tz} For every $t \in V(\subtree{G}{r}{z})$, there exists a path $\pi^*(t)$ between $r$ and $t$ in $H-F$ such that $w(\pi^*(t)) \le 3\cdot d_{G-F}(t)$. \end{lemma} \begin{proof} The edges added to $H$ at Lines~\ref{ln:protecting_z_start}--\ref{ln:protecting_z_end} of Algorithm~\ref{alg:2_path} guarantee that $d_{H-F}(z)$ equals $d_{G-F}(z)$ for every possible pair of failures. It follows that we can choose $\pi^*(t) = \pi_{G-F}(z) \circ \pi_{G}(z,t)$, as we have: \begin{align*} w(\pi^*(t)) & = d_{H-F}(z) + d_{H-F}(z,t)\\
& \le d_{H-F}(z) + d_{G}(z,t)\lcomment{$\pi_G(z,t) = \pi_{H-F}(z,t)$}\\
& \le d_{G-F}(z) + d_{G}(z,t)\lcomment{By Lines~\ref{ln:protecting_z_start}--\ref{ln:protecting_z_end} of Alg.~\ref{alg:2_path}}\\
& \le d_{G-F}(t) + d_{G-F}(t,z) + d_G(z,t)\lcomment{By triang. ineq.}\\
& \le d_{G-F}(t) + 2d_{G}(t,z)\lcomment{$\pi_G(z,t) = \pi_{G-F}(z,t)$}\\
& \le d_{G-F}(t) + 2d_{G}(t) \le 3d_{G-F}(t). \lcomment{$z \in V(\pi_G(t))$} \end{align*} \end{proof}
\begin{lemma} \label{lemma:2_path_x} There exists a path $\pi^*(x)$ between $r$ and $x$ in $H-F$ such that $w(\pi^*(x)) \le 3\cdot d_{G-F}(x)$ if $e_2=(x,z)$ and $w(\pi^*(x)) = d_{G-F}(x)$ otherwise. \end{lemma} \begin{proof}
If $V(\pi_{G-F}(x)) \cap V(\subtree{G}{r}{z}) = \emptyset$ we set $\pi^*(x) = \pi_{G-F}(x)$ and we are done as $\pi^*(x)$ gets added to $H$ by Lines~\ref{ln:protecting_x_start}--\ref{ln:protecting_x_end} of Algorithm~\ref{alg:2_path}.
Otherwise, if $V(\pi_{G-e_1-e_2}(x)) \cap V(\subtree{G}{r}{z}) \neq \emptyset$, let $q,q^\prime$ be the first and last vertex of $\pi=\pi_{G-e_1-e_2}(x)$ that is in $V(\subtree{G}{r}{z})$, respectively. If $e_2 \neq (x,z)$ then it suffices to choose $\pi^*(x)=\pi$. Indeed, by construction, $\pi$ is in $H$ since both $\pi[r,q]$ and $\pi[q,x]=\pi_G(q,z)$ are in $H$.
Finally, if $e_2 = (x,z)$, then $\pi^*(x) = \pi^*(q^\prime) \circ \pi[q^\prime, x]$, where $\pi^*(q^\prime)$ is the path of Lemma~\ref{lemma:2_path_t_in_Tz}. The path $\pi^*(x)$ is in $H$ and we can bound its length as follows:
\[
w(\pi^*(x)) = w(\pi^*(q^\prime)) + d_{G-F}(q^\prime, x) \le 3 d_{G-F}(q^\prime) + d_{G-F}(q^\prime, x) \le 3 d_{G-F}(x).
\] \end{proof}
\begin{lemma} \label{lemma:2_path_t_not_in_Tz_not_x} For every $t \not\in V(\subtree{G}{r}{z}) \cup \{x\}$ such that $x \not \in V(\pi_{G-F}(t))$, there exists a path $\pi^*(t)$ between $r$ and $t$ in $H-F$ satisfying $w(\pi^*(t)) \le 3\cdot d_{G-F}(t)$. \end{lemma} \begin{proof} First of all notice that it must hold that $\pi_{G-F}(t) = \pi_{G-x}(t)$. If $\pi_{G-x}(t)$ does not contain any vertex of $\subtree{G}{r}{z}$ we are done, as we can set $\pi^*(t) = \pi_{G-x}(t)$ (by Lines~\ref{ln:protecting_other_start}--\ref{ln:protecting_other_end} of Algorithm~\ref{alg:2_path}). Otherwise, let us call $q$ the last vertex of $\pi_{G-x}(t)$ that belongs to $\subtree{G}{r}{z}$. We set $\pi^*(t) = \pi^*(q) \circ \pi_{G-x}(q,t)$, where $\pi^*(q)$ is the path of Lemma~\ref{lemma:2_path_t_in_Tz}. We have \begin{align*} w(\pi^*(t)) & = w(\pi^*(q)) + d_{H-F}(q,t)\\
& \le 3d_{G-F}(q) + d_{H-F}(q,t) \lcomment{By Lemma~\ref{lemma:2_path_t_in_Tz}} \\
& \le 3d_{G-F}(q) + d_{G-F}(q,t) \lcomment{By Lines~\ref{ln:protecting_other_start}--\ref{ln:protecting_other_end} of Alg.~\ref{alg:2_path}} \\
& \le 3d_{G-F}(q) + 3d_{G-F}(q,t) = 3d_{G-F}(t)\lcomment{Since $q \in V(\pi_{G-F}(t))$} \end{align*} \end{proof}
\begin{lemma} \label{lemma:2_path_t_not_in_Tz_x} For every $t \not\in V(\subtree{G}{r}{z}) \cup \{x\}$ such that $x \in V(\pi_{G-F}(t))$, there exists a path $\pi^*(t)$ between $r$ and $t$ in $H-F$ satisfying $w(\pi^*(t)) \le 3\cdot d_{G-F}(t)$. \end{lemma} \begin{proof} Notice that $t$ belongs to a subtree $\subtree{G}{r}{z_i}$ for exactly one child $z_i \neq z$ of $x$ in $\tree{G}{r}$. If $(x,z_i) \neq e_2$, we have that $\pi_{G}(x,t)=\pi_{G-F}(x,t)=\pi_{H-F}(x,t)$. We set $\pi^*(t) = \pi^*(x) \circ \pi_{G}(x,t)$ where $\pi^*(x)$ is the path of Lemma~\ref{lemma:2_path_x}. We have: \[
w(\pi^*(t)) = w(\pi^*(x)) + d_G(x,t) \le 3 d_{G-F}(x) + d_{G-F}(x,t) \le 3 d_{G-F}(t) \]
Otherwise, $e_2 = (x,z_i)$, which means that $t$ belongs to a subtree of $\tree{G}{r}$ which gets disconnected from $x$ by the removal of $e_2$.
Since $e_2 \neq (x, z)$, we know that the path $\pi^*(x)$ of Lemma~\ref{lemma:2_path_x} satisfies \linebreak $w(\pi^*(x)) = d_{G-F}(x)$. Moreover, the shortest path $\pi_{G-F}(x, z_i)$ traverses at most one other subtree (other than $\subtree{G}{r}{z_i}$) rooted at a child of $x$. This is because $H-F$ contains the shortest paths from $x$ to every vertex in $V(\subtree{G}{r}{x}) \setminus V(\subtree{G}{r}{z_i})$. Let $(u,q)$ be the first edge of the path $\pi_{G-F}(x,z_i)$ such that $q \in V(\subtree{G}{r}{z_i})$ and notice that this edge belongs to $H$ (Lines~\ref{ln:protecting_children_start}--\ref{ln:protecting_children_end} of Algorithm~\ref{alg:2_path}). By the choice of $(u,q)$ we have $\pi_{H-F}(x,q) = \pi_{G}(x,u) \circ (u,q)$. We set $\pi^*(t) = \pi^*(x) \circ \pi_{G}(x,u) \circ (u,q) \circ \pi_{G}(q,z_i) \circ \pi_{G}(z_i,t)$. \begin{align*}
w(\pi^*(t)) & = w(\pi^*(x)) + d_{G}(x,u) + w(u,q) + d_{G}(q,z_i) + d_{G}(z_i,t)\\
& \le d_{G-F}(x) + d_{G-F}(x,q) + d_{G-F}(q,z_i) + d_{G}(z_i,t) \\
& \le d_{G-F}(x) + d_{G-F}(x, z_i) + d_{G}(z_i,t)\lcomment{Since $q \in V(\pi_{G-F}(x,z_i))$}\\
& \le d_{G-F}(x) + d_{G-F}(x, t) + 2d_{G}(z_i,t) \lcomment{By triang. ineq.}\\
& \le d_{G-F}(x) + d_{G-F}(x, t) + 2d_{G}(x,t) \lcomment{$z_i \in V(\pi_G(x,t))$} \\
& \le d_{G-F}(x) + d_{G-F}(x, t) + 2d_{G-F}(x,t)\\
& = d_{G-F}(x) +3 d_{G-F}(x,t) \le 3 d_{G-F}(t).\lcomment{Since $x \in V(\pi_{G-F}(t))$} \end{align*} \end{proof}
We now bound the size of $H$. In order to do so, it is useful to split the vertices of $T$ into components, depending on the vertex $x$ that is currently considered by Algorithm~\ref{alg:2_path}. More formally, when a pair of edges $(y,x), (x,z)$ is considered we can partition the vertices of $T-x$ into three distinct sets (see Figure~\ref{fig:2_failure_structure}): \begin{itemize}
\item $U_x$, which contains the vertices which are in the same subtree as $r$ in $T-x$;
\item $D_x$, which contains the vertices which are in the subtree of $T$ rooted at $z$;
\item $O_x$, which contains all the vertices which are in the subtree rooted at some child $z_i \neq z$ of $x$ in $T$. \end{itemize} We are now ready to prove:
\begin{lemma} \label{lemma:2_path:size} The structure $H$ returned by Algorithm \ref{alg:2_path} contains $O(n\cdot \log n)$ edges. \end{lemma} \begin{proof} To prove the claim we fix a generic path $P = \langle u, \dots, v \rangle$ (of at least two edges) of the path decomposition, where $v$ is a leaf and $u$ is one of its ancestors in $T$. We show that, when Algorithm~\ref{alg:2_path} considers $P$, the total number of edges added to $H$ is
$O(|V(\subtree{G}{r}{u})|)$.
\begin{figure}
\caption{Left: a view of the partition of the vertices induced by the removal of a pair of edges of $E(\tree{G}{r})$. Right: A path decomposition of a tree. Paths of the decomposition are highlighted. Edges connecting the roots of the resulting subtrees to a path of the decomposition are dashed.}
\label{fig:2_failure_structure}
\end{figure}
For the sake of the analysis, imagine the edges of paths considered by the algorithm as if they were directed. Notice that no new edge entering a vertex in $U_x$ can be added to $H$, as the shortest paths towards vertices in $U_x$ cannot change, and $H$ contains a shortest path tree $T$ of $G$. Hence, in the following, we ignore all the edges entering vertices in $U_x$.
In Lines~\ref{ln:protecting_x_start}--\ref{ln:protecting_x_end}, the edges of at most two paths are added to $H$. Moreover, by definition of $\ensuremath{\texttt{FirstLast}}(\cdot, \cdot)$, at most one edge of each path enters a vertex in $D_x$. This implies that the number of new edges is at most $O(|O_x|)$. In Lines~\ref{ln:protecting_z_start}--\ref{ln:protecting_z_end}, at most $3$ paths are considered as $\{\pi_{G-e_1}(z) \cap C(x)\}$ contains at most $2$ edges. Each of those paths has at most one new edge which enters a vertex $q$ in $D_x$ since, once this happens, the shortest path from $q$ to $z$ of $T$ is already in $H$. Again, the number of new edges is at most $O(|O_x|)$. In Lines~\ref{ln:protecting_children_start}--\ref{ln:protecting_children_end}, at most one edge for each child $z_i \neq z$ of $x$ is added to $H$, and all those children belong to $O_x$. Finally, in Lines~\ref{ln:protecting_other_start}--\ref{ln:protecting_other_end} only new edges entering vertices in $O_x$ are added to $H$, so their overall number is $O(|O_x|)$.
As all the sets $O_x$ associated to the different vertices $x$ of $P$ are pairwise vertex disjoint, we immediately have that at most
$O(|V(\subtree{G}{r}{u})|)$ edges are added to $H$ when path $P$ is examined.
The first path $P$ considered by Algorithm~\ref{alg:2_path} is the one obtained by applying Lemma~\ref{lemma:path_decomposition_one_path} on $T$. The removal of this path splits $T$ into a number $h$ of subtrees $T_1, \dots, T_h$ having $\eta_1, \dots, \eta_h$ vertices respectively. Moreover we know that $\eta_i \le \frac{n}{2} \; \forall i=1,\dots, h$ and that $\sum_{i=1}^h \eta_i \le n$. If we reapply the procedure recursively, we get the following recurrence equation describing the overall number of new edges: \[
S(n) = \sum_{i=1}^h S(\eta_i) + n \] which can be solved to show that $S(n) = O(n \log n)$.
To conclude the proof, we only need to notice that the set of paths $\mathcal{P}$ used by Algorithm~\ref{alg:2_path} is defined exactly in this very same recursive fashion, and that the tree $\hat{T}$ has $O(n)$ edges. \end{proof}
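As a sanity check on the recurrence, the balanced case (an even two-way split at every level, the worst case for the above bound) can be verified numerically; this sketch is illustrative only.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n):
    """The recurrence S(n) = sum_i S(eta_i) + n with eta_i <= n/2,
    modelled in the balanced worst case of two halves of size n/2."""
    if n <= 1:
        return 1
    return 2 * S(n // 2) + n
```

For powers of two this solves exactly to $S(n) = n \log_2 n + n$, confirming the $O(n \log n)$ bound.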
Finally, we bound the running time of Algorithm~\ref{alg:2_path}: \begin{lemma} Algorithm~\ref{alg:2_path} requires $O(nm + n^2 \log n)$ time. \label{lemma:2_path:complexity} \end{lemma} \begin{proof} First of all, observe that a rough estimate of the time needed for computing the path decomposition $\mathcal{P}$ is $O(n^2)$ and that the time needed to build $\hat{T}$ is $O(n m)$ \cite{NPW03}. Moreover each vertex $x$ gets considered at most once.
When the algorithm is considering a vertex $x$, a constant number of different shortest paths are needed. Those can be computed in $O(m+n \log n)$ time using Dijkstra's algorithm where, for each vertex $v$, we also store the last edge of its shortest path that (i) leaves the same connected component of $r$ in $T-F$, (ii) leaves $\subtree{G}{r}{z}$, and (iii) enters the same connected component as $v$ in $T-F$. This allows us to implement $\ensuremath{\texttt{FirstLast}}(\cdot)$ and to add the edges needed in Lines~\ref{ln:protecting_children_start}--\ref{ln:protecting_children_end}, \ref{ln:protecting_other_start}--\ref{ln:protecting_other_end} in time proportional to the vertices in $O_x$. Hence, the overall time spent adding edges to $H$ is again $O(n \log n)$. \end{proof} \noindent By Lemmata~\ref{lemma:2_path_t_in_Tz}--\ref{lemma:2_path:size}, Theorem~\ref{thm:2_path_preproc} follows. \section{Oracle Setting for $f=2$ and Proof of Theorem~\ref{thm:oracle_2_path}} We here give a brief description of how to modify Algorithm~\ref{alg:2_path} in order to build an oracle of size $O(n \cdot \log n)$ which is able to report, with optimal query time, both a $3$-stretched shortest path in $G-F$ and its distance, when $F$ contains two consecutive edges in $T$.
In order to do so, we first add an additional step to Algorithm~\ref{alg:2_path} which computes an $O(n)$ size structure which is able to answer LCA queries in $O(1)$ time \cite{HT84}. Then we store the tree $T$ and, for each vertex $x$, its child $z$ on the path decomposition.
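A constant-time LCA structure as in \cite{HT84} can be emulated with the classical Euler-tour plus sparse-table reduction to range-minimum queries. The sketch below uses $O(n \log n)$ space for simplicity, whereas the cited structure achieves $O(n)$; it is an illustration, not the construction of \cite{HT84}.

```python
class LCA:
    """Euler tour + sparse table: O(n log n) preprocessing, O(1) queries."""
    def __init__(self, children, root):
        self.first, euler = {}, []          # first occurrence, Euler tour
        stack = [(root, 0, iter(children.get(root, [])))]
        while stack:
            v, d, it = stack[-1]
            if v not in self.first:
                self.first[v] = len(euler)
            euler.append((d, v))            # visit v at depth d
            nxt = next(it, None)
            if nxt is None:
                stack.pop()
            else:
                stack.append((nxt, d + 1, iter(children.get(nxt, []))))
        # sparse table of range minima (by depth) over the tour
        m = len(euler)
        self.table = [euler]
        for k in range(1, m.bit_length()):
            prev, half = self.table[-1], 1 << (k - 1)
            self.table.append([min(prev[i], prev[i + half])
                               for i in range(m - 2 * half + 1)])

    def query(self, a, b):
        """Return the least common ancestor of a and b in O(1)."""
        i, j = sorted((self.first[a], self.first[b]))
        k = (j - i + 1).bit_length() - 1
        return min(self.table[k][i], self.table[k][j - (1 << k) + 1])[1]
```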
Whenever we are considering a vertex $x$ and its child $z \in P$, we also store each path, say $\pi$, towards a vertex, say $u$, considered in Lines~\ref{ln:protecting_x_start}--\ref{ln:protecting_x_end}, \ref{ln:protecting_z_start}--\ref{ln:protecting_z_end}, using a \emph{compact representation}. To be more precise, let $s$ be the last vertex of $\pi$ which belongs to the same component as $r$ in $T-F$, and let $q,q^\prime$ be the first and last vertex of $\pi$ which belong to $T(z)$. We only store (i) the vertices $s,q,q^\prime$, (ii) the subpaths $\pi[s:q]$, $\pi[q^\prime, u]$ along with their lengths, and (iii) a reference to the position of $x$ in the subpaths of $\pi$, if any. If $q,q^\prime$ do not exist, we simply store $s$, $\pi[s:u]$, $w(\pi[s,u])$, and a reference to $x$.
In Lines~\ref{ln:protecting_children_start}--\ref{ln:protecting_children_end}, we add one edge $(u,q)$ for each child $z_i \neq z$ of $x$. We store $(u,q)$ alongside $z_i$.
Finally, in Lines~\ref{ln:protecting_other_start}--\ref{ln:protecting_other_end}, we add some edges of the shortest path tree $\tree{G-x}{r}$.
For each vertex $u \in O_x$, we store (i) the edge leading to its parent in $\tree{G-x}{r}$, (ii) the last vertex $q$ of $\pi_{G-x}(u)$ which is either in $U$ or in $V(\subtree{G}{r}{z})$, (iii) the length of $\pi_{G-x}(u)[q,u]$, and (iv) the root of the subtree containing $u$ in $T-x$.
Since the amount of memory used to do so is always proportional to the vertices in $O_x$ we have that the overall size is still $O(n \log n)$. It is easy to see that, given a path failure\footnote{Once again, we focus on the failure of exactly two edges. To handle the failure of only one edge $e$, it suffices to store a single backup edge associated with $e$, as shown in \cite{NPW03}.} $F = \{ (y,x), (x,k) \}$ and a vertex $t$, we can answer a query by building (or computing the distance of) $\pi^*(t)$ as described in the appropriate lemma in Lemmata \ref{lemma:2_path_t_in_Tz}--\ref{lemma:2_path_t_not_in_Tz_x}. In order to do so we need to know: \begin{itemize} \item The root of the subtree of $T-x$ containing $t$. \item Whether $\pi_{G-F}(t)$ contains $x$. \end{itemize} The former can be easily done by querying, in constant time, the least common ancestors of the pairs $t,z$ and $t,x$ in $T$ to determine if $t$ belongs to $U$ or to $\subtree{G}{r}{z}$. If that is not the case, then the root of the sought subtree was explicitly stored and can be retrieved. As for the latter, we consider both cases. That is, we compute two candidate paths, we discard the one containing $(x,z_i)$, if any (this is done using the pointers to $x$), and we return the shortest of the remaining paths (or its distance). The above reasoning suffices to prove Theorem~\ref{thm:oracle_2_path}.
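The subtree-membership tests used by the query (does $t$ lie in $U$ or in $\subtree{G}{r}{z}$?) can equivalently be reduced to constant-time interval checks after a single DFS; the following Python sketch of this standard trick is illustrative.

```python
def preorder_intervals(children, root):
    """One DFS assigning each vertex a half-open interval [tin, tout);
    u is an ancestor of v (or v itself) iff tin[u] <= tin[v] < tout[u]."""
    tin, tout, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            tout[v] = clock
        else:
            tin[v] = clock
            clock += 1
            stack.append((v, True))
            stack.extend((c, False) for c in reversed(children.get(v, [])))
    return tin, tout

def in_subtree(tin, tout, u, v):
    """True iff v belongs to the subtree rooted at u."""
    return tin[u] <= tin[v] < tout[u]
```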
\end{document}
\begin{document}
\title{ Shared randomness and device-independent dimension witnessing} \author{Julio I. de Vicente} \email{jdvicent@math.uc3m.es} \affiliation{Departamento de Matem\'aticas, Universidad Carlos III de Madrid, Avda. de la Universidad 30, E-28911, Legan\'es (Madrid), Spain}
\begin{abstract} It has been shown that the conditional probability distributions obtained by performing measurements on an uncharacterized physical system can be used to infer its underlying dimension in a device-independent way both in the classical and quantum setting. We analyze several aspects of the structure of the sets of probability distributions corresponding to a certain dimension taking into account whether shared randomness is available as a resource or not. We first consider the so-called prepare-and-measure scenario. We show that quantumness and shared randomness are not comparable resources. That is, on the one hand, there exist behaviours that require a quantum system of arbitrarily large dimension in order to be observed while they can be reproduced with a classical physical system of minimal dimension together with shared randomness. On the other hand, there exist behaviours which require exponentially larger dimensions classically than quantumly even if the former is supplemented with shared randomness. We also show that in the absence of shared randomness, the sets corresponding to a sufficiently small dimension are negligible (zero-measure and nowhere dense) both classically and quantumly. This is in sharp contrast to the situation in which this resource is available and explains the exceptional robustness of dimension witnesses in the setting in which devices can be taken to be uncorrelated. We finally consider the Bell scenario in the absence of shared randomness and prove some non-convexity and negligibility properties of these sets for sufficiently small dimensions. This shows again the enormous difference induced by the availability or not of this resource. \end{abstract}
\maketitle
\section{Introduction}
Is it possible to estimate the degrees of freedom of an uncharacterized physical system? This question has received much attention in recent years in what is known as device-independent dimension witnessing (DIDW). It turns out that it is indeed possible to make tests about the underlying dimension of a physical system without making any assumption on it nor on the internal functioning of the measurement devices used to interact with it. Dimension estimates can be constructed based only on the measurement data, i.e.\ on the observed probabilities of obtaining certain outcomes conditioned on the different possible choices of measurement. These results are not only interesting from the fundamental point of view but also play a role in quantum information processing. Besides allowing for experimental tests of the physical dimension \cite{experiments}, which might be considered as a resource, these investigations allow one to constrain the correlations that are achievable when the setting limits the underlying dimension of the physical systems used in a protocol. These scenarios are known as semi-device-independent quantum information processing: no assumption is made on the working of the devices nor on the physical systems used except for their dimension. Ideas from DIDW have made it possible to prove the security of certain cryptographic schemes \cite{qkd} and to provide randomness-expansion protocols in this framework \cite{racs}. Moreover, DIDW is intimately related to the field of quantum communication complexity, which studies the minimal amount of communication that parties have to exchange to successfully carry out distributed computational tasks \cite{review}. Indeed, communication can be quantified by the dimensionality of the physical systems used to encode the messages.
The first proposals for DIDW considered the Bell scenario of quantum nonlocality since violating Bell inequalities by a certain amount might require quantum systems of at least a certain dimension \cite{bellwit}. Subsequently, the structure of quantum correlations under dimensionality constraints has been extensively studied \cite{qdim}. Although other settings have been considered \cite{other}, a different general and simple formalism for DIDW was presented in \cite{gallego} in the so-called prepare-and-measure scenario, which has been largely explored afterwards \cite{dallarno,didwpm}. Both the Bell and the prepare-and-measure scenarios rely on different parties holding devices that interact with the physical system. It is usually assumed that the action of these devices might be correlated by the parties having access to a common random variable. This induces convexity into the sets of observable probability distributions corresponding to a given dimension and separation theorems can be used to obtain linear functionals that enable DIDW. However, shared randomness can be viewed as a resource and in certain settings it might be more natural to assume that all devices are independent (this is the case, for example, when the devices are trusted and are not jointly conspiring to mimic higher-dimensional behaviours). Conditions for DIDW with uncorrelated devices have been presented in \cite{brunner} (prepare-and-measure scenario) and more recently in \cite{sikora1} (Bell scenario) and \cite{sikora2} (prepare-and-measure scenario).
In this paper we explore the differences for DIDW depending on whether shared randomness is available as a resource or not. In order to do this, we analyze in detail the structure of the sets of probability distributions corresponding to a certain dimension, taking into account both possibilities. We first consider the prepare-and-measure scenario and show that quantumness and shared randomness are not comparable resources. That is, on the one hand, there exist behaviours that require a quantum system of arbitrarily large dimension in order to be observed while they can be reproduced with a classical physical system of minimal dimension together with shared randomness. On the other hand, using results from communication complexity it can be seen that there exist behaviours which require exponentially larger dimensions classically than quantumly, even if the former is supplemented with shared randomness. We also show another clear difference depending on whether shared randomness is available or not. In its absence, the sets corresponding to a sufficiently small dimension are negligible both classically and quantumly: they are zero-measure and nowhere-dense subsets of the set of all possible behaviours. However, this is never the case when shared randomness is available: the corresponding sets are not negligible, no matter how small the dimension might be. This negligibility property also explains the exceptional robustness of dimension witnesses when devices are taken to be independent, as observed in \cite{brunner}. In the second part of this article, we consider the Bell scenario. The availability or not of shared randomness is known to make a difference, and non-convexity results for the sets of observable probability distributions of a fixed dimension are known \cite{vertesi,wolfe}. 
Here we extend these results and systematically prove non-convexity properties of these sets for sufficiently small underlying dimension when the parties do not have access to shared randomness. Furthermore, contrary again to the case of correlated devices, we also show that in this case these sets have measure zero and are nowhere dense in the set of all quantum behaviours. In order to obtain all these results we use some very simple dimension estimates based on the rank of a matrix.
\section{Prepare-and-measure scenarios}\label{pms}
The prepare-and-measure scenario \cite{gallego} for witnessing dimensions in a device-independent way is the following. There are two parties, Alice (or A) and Bob (or B), who receive inputs $x$ and $y$ from finite alphabets $\mathcal{X}$ and $\mathcal{Y}$ respectively. Their only chance to communicate is by A sending a classical or quantum physical system to B depending on her input. The dimension of this system, to be defined precisely below, quantifies the amount of communication used. Upon receipt of the message, B interacts with the system by performing a measurement depending on his input and produces an output $b$ which can take values in a finite alphabet. For simplicity, we will consider this output to be binary, i.e.\ $b\in\{0,1\}$. Then, we can record the conditional probabilities with which each output occurs for any given pair of inputs: $P(b|xy)$. This is the main object in a device-independent scenario and we will refer to it as a behaviour and denote it by $\textbf{P}$. Of course, as conditional probabilities, behaviours are characterized by $P(b|xy)\geq0$ $\forall b,x,y$ and $\sum_bP(b|xy)=1$ $\forall x,y$.
The question to be addressed in this setting is the following. Without using any knowledge of how A and B process their information, what is the minimal amount of classical or quantum communication sent from A to B that is compatible with the observation of a given behaviour? The possible classical messages $m(x)$ are given by dits, i.e.\ $m\in\{1,\ldots,d\}$. Thus, the amount of classical communication is measured by the dimension of the message $d$. A always has the chance to use a random strategy, i.e.\ she can send a message $m$ given $x$ with probability $s(m|x)$, and so does B, i.e.\ he can produce an output $b$ given $y$ and the reception of $m$ with probability $t(b|ym)$. In the quantum case A sends quantum states $\rho_x$. The dimension of her message is thus \begin{equation}\label{quantumdim} d=\dim \sum_x\textrm{supp\,}\rho_x, \end{equation} where $\textrm{supp\,}$ stands for the support of an operator. In order to produce his output, B can interact with the message through a quantum measurement conditioned on his input.
Thus, we define the set of behaviours obtained by sending classical messages of dimension at most $d$ by $\mathcal{C}_d$ (this and the other sets to be defined below also depend on $|\mathcal{X}|$ and $|\mathcal{Y}|$, which we drop to ease the notation since these quantities should be in general clear from the context). In other words, $\textbf{P}\in\mathcal{C}_d$ when \begin{equation}\label{setc}
P(b|xy)=\sum_{m=1}^{d}s(m|x)t(b|my). \end{equation} On the other hand, $\mathcal{Q}_d$ denotes the set of behaviours achievable by sending quantum states of dimension at most $d$. That is, $\textbf{P}\in\mathcal{Q}_d$ if there exist measurements for B, $\{\Pi_b^y\geq0\}$ with $\sum_b\Pi_b^y=\mbox{$1 \hspace{-1.0mm} {\bf l}$}$ $\forall y$, such that \begin{equation}\label{setq}
P(b|xy)=\mathrm{tr}(\rho_x\Pi_b^y) \end{equation} where the $\{\rho_x\}$ are of dimension less than or equal to $d$ (cf.\ Eq.\ (\ref{quantumdim})).
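As an illustration of the two kinds of strategies (a toy sketch of our own, not from the paper; sizes and the random strategies are arbitrary choices, and NumPy is assumed), the following generates a classical behaviour via Eq.\ (\ref{setc}) and a quantum one via Eq.\ (\ref{setq}), and checks that both are normalized conditional distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, d = 4, 3, 2  # |X|, |Y| and message dimension: arbitrary toy sizes

# Classical behaviour, Eq. (\ref{setc}): P(b|xy) = sum_m s(m|x) t(b|my).
s = rng.random((d, nx)); s /= s.sum(axis=0)      # s(m|x): columns sum to 1
t = rng.random((2, ny, d)); t /= t.sum(axis=0)   # t(b|ym): normalized in b
P_cl = np.einsum('mx,bym->bxy', s, t)            # P_cl[b, x, y]

# Quantum behaviour, Eq. (\ref{setq}): P(b|xy) = tr(rho_x Pi_b^y), taking
# pure states and rank-one two-outcome measurements for simplicity.
def rand_proj(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())                 # rank-1 projector

rhos = [rand_proj(d) for _ in range(nx)]         # states of dimension d
povms = []
for _ in range(ny):
    Pi0 = rand_proj(d)
    povms.append((Pi0, np.eye(d) - Pi0))         # Pi0 + Pi1 = identity
P_q = np.array([[[np.trace(rho @ Pi).real for Pi in povm]
                 for povm in povms] for rho in rhos])   # P_q[x, y, b]

# Both are valid conditional distributions: sum_b P(b|xy) = 1 for all x, y.
assert np.allclose(P_cl.sum(axis=0), 1)
assert np.allclose(P_q.sum(axis=2), 1)
```

The particular normalization and the choice of projective measurements are only for brevity; any POVM and mixed states would serve equally well.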
As already mentioned in the introduction, this does not exhaust all possibilities. Depending on the physical setting, A and B may be granted another resource with which to build their strategy: shared randomness. This means that A and B may pre-establish the strategy each will follow depending on the value of a random variable they both have access to. This boils down to the fact that they can prepare any convex combination of their previously allowed behaviours. Thus, we define the sets of behaviours obtained by sending classical or quantum messages of dimension at most $d$ together with shared randomness by \begin{equation} \mathcal{C}'_d=\textrm{Conv}(\mathcal{C}_d),\quad\mathcal{Q}'_d=\textrm{Conv}(\mathcal{Q}_d), \end{equation} where $\textrm{Conv}(\cdot)$ stands for the convex hull. The sets $\mathcal{C}_d$ and $\mathcal{Q}_d$ need not be convex, so in general we have strict inclusions $\mathcal{C}_d\subset\mathcal{C}'_d$ and $\mathcal{Q}_d\subset\mathcal{Q}'_d$ \cite{dallarno}. This means that shared randomness is indeed a resource, one which can allow some tasks to be performed using less communication. On the other hand, we clearly have the inclusions $\mathcal{C}_d\subseteq\mathcal{Q}_d$ and $\mathcal{C}'_d\subseteq\mathcal{Q}'_d$ as well, which can also be seen to be in general strict. That is, quantum strategies are also a resource over classical strategies in order to reduce the amount of communication.
\begin{figure}
\caption{For a fixed value of $d$, all inclusions among sets are clear except for those corresponding to classical communication together with shared randomness and quantum communication without shared randomness.}
\end{figure}
Notice that if $d\geq|\mathcal{X}|$, the scenario is trivial. A can unambiguously encode the value of her input into her message to B, who can then use his private randomness to output any possible behaviour. Hence, $\mathcal{C}_{|\mathcal{X}|}=\mathcal{Q}_{|\mathcal{X}|}=\mathcal{C}'_{|\mathcal{X}|}=\mathcal{Q}'_{|\mathcal{X}|}$, which coincide with the set of all behaviours in a given setting. Thus, for any valid behaviour there always exist values of the dimension for which it is realizable in any of the aforementioned sets, since in the worst case we have $d=|\mathcal{X}|$. Therefore, it is always well defined to ask what the minimal value of the dimension is to obtain some behaviour in any of the four sets of possible strategies. Taking into account the inclusions pointed out above, it comes as a natural question what the relation between $\mathcal{C}'_d$ and $\mathcal{Q}_d$ is (see Fig.\ 1). Moreover, given the status of both quantumness and shared randomness as resources, it is interesting to know if one can exchange one for the other or if one is strictly more powerful than the other. Actually, this is a standard question in the context of communication complexity \cite{randvsquant}. Here, one usually seeks differences in $d$ which are larger than a logarithmic cost over $|\mathcal{X}|$, as this is considered negligible with respect to the size of the input: the so-called exponential separations. We will show that there exist scenarios in which $\mathcal{C}'_d\nsubseteq\mathcal{Q}_d$ and $\mathcal{Q}_d\nsubseteq\mathcal{C}'_d$, with both separations being exponential (or even arbitrary). The second separation is a straightforward observation from known results in communication complexity. In order to establish the first one, we will first observe in the following subsection some very simple dimension estimates based on the rank of a matrix associated to $\textbf{P}$. 
Using again these estimates, we will finish this section by showing the negligibility of low-dimensional sets in the absence of shared randomness. This explains the exceptional robustness to noise of dimension witnessing in this scenario and provides a clear contrast to the case where this resource is available.
\subsection{Dimension estimates}\label{estimates}
The fact that $\mathcal{C}'_{d}$ and $\mathcal{Q}'_{d}$ are convex sets allows one to separate each set from its complement by linear functionals on the behaviours. This gives rise to the so-called linear dimension witnesses \cite{gallego,dallarno}. The case of $\mathcal{C}_{d}$ and $\mathcal{Q}_{d}$ was recently addressed in \cite{brunner}, which obtained non-linear dimension witnesses for these non-convex sets. Specifically, they consider the scenario in which $|\mathcal{X}|=2|\mathcal{Y}|=2k$ and show that the $k\times k$ matrix $W_k$ with entries $W_k(i,j)=P(0|2j-1,i)-P(0|2j,i)$ ($i,j=1,\ldots,k$) is such that $\det W_k=0$ for all behaviours in $\mathcal{C}_d$ ($\mathcal{Q}_d$) with $d\leq k$ ($d\leq\sqrt{k}$). Thus, the determinant of $W_k$ being non-zero allows one to establish non-trivial lower bounds on the required dimensionality both in the classical and the quantum case.
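This witness is easy to check numerically. The following sketch (our own toy example, assuming NumPy; the random classical strategy is arbitrary) builds $W_k$ for a classical behaviour of dimension $d\leq k$ and verifies that its determinant vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 4, 3                         # |Y| = k, |X| = 2k, and d <= k messages

# Random classical strategy of dimension d, Eq. (\ref{setc}).
s = rng.random((d, 2 * k)); s /= s.sum(axis=0)   # s(m|x)
t = rng.random((2, k, d)); t /= t.sum(axis=0)    # t(b|ym)
P = np.einsum('mx,bym->bxy', s, t)               # P[b, x, y], zero-indexed

# W_k(i,j) = P(0|2j-1, i) - P(0|2j, i) in the paper's one-based indexing.
W = np.array([[P[0, 2 * j, i] - P[0, 2 * j + 1, i] for j in range(k)]
              for i in range(k)])

# For any classical behaviour with d <= k, the witness vanishes
# (up to floating-point rounding).
assert abs(np.linalg.det(W)) < 1e-12
```

The vanishing determinant reflects that $W$ factors through the $d$ messages, so its rank is at most $d<k$.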
In the following we obtain more refined estimates. In order to do so and to deal with behaviours, we will arrange the array of numbers given by $\textbf{P}$ into a matrix $P\in\mathbb{R}^{|\mathcal{X}|\times2|\mathcal{Y}|}$ according to the rule \begin{equation}\label{behavior}
P=\sum_{bxy}P(b|xy)|x\rangle\langle yb|, \end{equation}
where in the standard notation of quantum mechanics $|yb\rangle=|y\rangle\otimes|b\rangle$ and $\{|y\rangle\}$ denotes the computational basis of $\mathbb{R}^{|\mathcal{Y}|}$ and similarly for the other alphabet elements. In other words, $P$ takes the form \begin{widetext} \begin{equation} P=\left(
\begin{array}{ccccccc}
P(0|11) & P(1|11) & P(0|12) & P(1|12) & \cdots & P(0|1|\mathcal{Y}|) & P(1|1|\mathcal{Y}|) \\
P(0|21) & P(1|21) & P(0|22) & P(1|22) & \cdots & P(0|2|\mathcal{Y}|) & P(1|2|\mathcal{Y}|) \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
P(0||\mathcal{X}|1) & P(1||\mathcal{X}|1) & P(0||\mathcal{X}|2) & P(1||\mathcal{X}|2) & \cdots & P(0||\mathcal{X}||\mathcal{Y}|) & P(1||\mathcal{X}||\mathcal{Y}|) \\
\end{array}
\right). \end{equation} \end{widetext}
Now, in the case $\textbf{P}\in\mathcal{C}_d$, we can make the following immediate observation. Since the behaviour is given by Eq.\ (\ref{setc}), we have that \begin{align}
P&=\sum_{m=1}^d\left(\sum_xs(m|x)|x\rangle\right)\left(\sum_{by}t(b|my)\langle yb|\right)\nonumber\\ &=\sum_{m=1}^du_mv_m^T \end{align}
for some real (actually non-negative) vectors $\{u_m\}$ and $\{v_m\}$ of size $|\mathcal{X}|$ and $2|\mathcal{Y}|$ respectively. Thus, we clearly see that if $\textbf{P}\in\mathcal{C}_d$, then $\textrm{rank\,} P\leq d$. On the other hand, if the behaviour is quantum, Eq.\ (\ref{setq}) tells us that its entries are given by the Hilbert-Schmidt inner product of pairs of Hermitian matrices of size $d\times d$. This set of matrices forms a subspace which is isomorphic to $\mathbb{R}^{d^2}$. Thus, there exist vectors $\{w_x\}$ and $\{t_{by}\}$ in this space such that $P(b|xy)=w_x^Tt_{by}$. Therefore, we now have that if $\textbf{P}\in\mathcal{Q}_d$, then $\textrm{rank\,} P\leq d^2$.
\begin{observation}\label{obsest} If $\textbf{P}\in\mathcal{C}_d$, then $d\geq\textrm{rank\,} P$ while if $\textbf{P}\in\mathcal{Q}_d$, then $d\geq\sqrt{\textrm{rank\,} P}$. \end{observation}
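Observation \ref{obsest} is easy to verify numerically. The following sketch (toy sizes of our own choosing, assuming NumPy) builds a random classical behaviour of dimension $d$, arranges it into the matrix $P$ of Eq.\ (\ref{behavior}), and checks the rank bound.

```python
import numpy as np

rng = np.random.default_rng(2)
nx, ny, d = 6, 4, 3                   # arbitrary toy sizes, d messages

s = rng.random((d, nx)); s /= s.sum(axis=0)     # s(m|x)
t = rng.random((2, ny, d)); t /= t.sum(axis=0)  # t(b|ym)

# Arrange P(b|xy) into the |X| x 2|Y| matrix of Eq. (\ref{behavior}):
# row x, columns ordered (y=1,b=0), (y=1,b=1), (y=2,b=0), ...
P = np.zeros((nx, 2 * ny))
for x in range(nx):
    for y in range(ny):
        for b in range(2):
            P[x, 2 * y + b] = sum(s[m, x] * t[b, y, m] for m in range(d))

# A classical behaviour of dimension d satisfies rank P <= d.
assert np.linalg.matrix_rank(P) <= d
```

For generic random strategies the rank equals $d$ exactly, so the classical bound of Observation \ref{obsest} is typically tight.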
These simple observations generalize the aforementioned result of \cite{brunner} in several ways. First, we do not put any constraint on the size of the alphabets $\mathcal{X}$ and $\mathcal{Y}$. Second, the estimates are finer: we do not rely on whether a matrix has full rank or not, as the bound is sensitive to the different possible values of the rank. It should be mentioned that the result of Bowles \emph{et al.} based on the $W$ matrix is also obtained by showing that the entries of this matrix are given by the inner product of a set of vectors. Thus, rank estimates are also possible in this case. In more detail, one obtains that $d\geq\textrm{rank\,} W_k+1$ if $\textbf{P}\in\mathcal{C}_d$ and $d\geq\sqrt{\textrm{rank\,} W_k+1}$ if $\textbf{P}\in\mathcal{Q}_d$. Hence, it is natural to ask whether it is better to use $P$ or $W$ to get the strongest estimate. Notice that, in the $|\mathcal{X}|=2|\mathcal{Y}|=2k$ setting, the maximal possible rank of $P$ is $k+1$ (this is because several columns are surely linearly dependent due to the condition $\sum_bP(b|xy)=1$ $\forall x,y$) while for $W$ it is obviously $k$. Thus, in the case of maximal rank both approaches yield equal estimates. In the appendix we show that $\textrm{rank\,} W\leq\textrm{rank\,} P$. This suggests that it is generally better to use $P$. In fact, it can only be worse in cases for which $\textrm{rank\,} W=\textrm{rank\,} P$. However, this can only lead to a difference of one in the estimate (and not always in the quantum case, since it holds for many natural numbers $n$ that $\lceil\sqrt{n+1}\rceil=\lceil\sqrt{n}\rceil$).
It is worth mentioning that stronger bounds on $d$ can be placed by using generalizations of the rank \cite{psd}. Recent literature has established an intimate relation between the non-negative rank and the classical dimension and the positive semidefinite rank and the quantum dimension in similar scenarios \cite{psd,ranks}. It is immediate to check that the nonnegative rank of $P$ and the positive semidefinite rank of $P$ are lower bounds for $d$ in the classical and quantum case respectively in the scenario considered here. Reference \cite{sikora2} also offers related strategies to bound the dimension. However, we stick here to the weaker rank estimates because they seem much easier to use. In fact, we do not know efficient algorithms to compute these other notions of the rank \cite{vavasis,psd}.
\subsection{Behaviours more expensive quantumly than classically together with shared randomness}\label{srmoreq}
In this subsection we will show that $\mathcal{C}'_d\nsubseteq\mathcal{Q}_d$ with an arbitrary separation: there exist behaviours requiring a constant amount of classical communication together with shared randomness (actually just one bit) while the necessary quantum communication increases at least as $\sqrt{|\mathcal{Y}|}$. In more detail, we will consider general scenarios such that $|\mathcal{Y}|=k$ and $|\mathcal{X}|=m\geq k+1$ (this condition only serves to make it possible for the matrices of behaviours to have the largest possible rank, $k+1$) and we will construct behaviours $\textbf{P}_k$ for any natural $k$ which all belong to $\mathcal{C}'_2$ but cannot belong to $\mathcal{Q}_{\lfloor\sqrt{k}\rfloor}$. The idea is to mix a sufficient number of behaviours in $\mathcal{C}_2$ such that the corresponding matrix $P_k$ has its rank as large as possible, so that Observation \ref{obsest} leads us to conclude that $d\geq\sqrt{k+1}$ in order for $\textbf{P}_k\in\mathcal{Q}_d$ to hold.
An example of such a construction goes as follows. Here and throughout this paper we will denote by $e_i^{(n)}$ the vector of $\mathbb{R}^n$ that has zeroes everywhere except a 1 in the $i$th entry. Consider the behaviour $\textbf{D}_1$ in the aforementioned setting whose $m\times2k$ matrix is given by (to ease the notation we drop the dependence on $k$) \begin{widetext} \begin{equation}\label{ex} D_1=\left(
\begin{array}{cccc}
(e_2^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\vdots & \vdots & \ddots & \vdots \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\vdots & \vdots & \vdots & \vdots \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\end{array}
\right)
=\left(
\begin{array}{c}
0 \\
1 \\
\vdots \\
1 \\
\end{array}
\right)\left(
\begin{array}{cccc}
(e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\end{array}
\right)+e_1^{(m)}\left(
\begin{array}{cccc}
(e_2^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\end{array}
\right). \end{equation} \end{widetext} The second way of writing $D_1$ clearly shows that $\textbf{D}_1\in\mathcal{C}_2$: B always outputs $b=0$ except possibly when $y=1$, depending on the bit sent by A, whose action relies on whether she gets the input $x=1$ or any other. We can similarly define the behaviours $\textbf{D}_i$ ($i=1,\ldots,k$) whose matrices are all made of $1\times2$ blocks given by $(e_1^{(2)})^T$ except at position $(i,i)$, where the block is $(e_2^{(2)})^T$. By the same arguments as above, we have that $\textbf{D}_i\in\mathcal{C}_2$ $\forall i$. It is easy to see that any non-trivial mixture of all these behaviours has maximal rank. Taking for instance \begin{equation} \textbf{P}_k=\sum_{i=1}^k\frac{1}{k}\textbf{D}_i, \end{equation} we find that \begin{equation} P_k=\left(
\begin{array}{cccc}
c_k^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
(e_1^{(2)})^T & c_k^T & \cdots & (e_1^{(2)})^T \\
\vdots & \vdots & \ddots & \vdots \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & c_k^T \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\vdots & \vdots & \vdots & \vdots \\
(e_1^{(2)})^T & (e_1^{(2)})^T & \cdots & (e_1^{(2)})^T \\
\end{array}
\right), \end{equation} where \begin{equation} c_k^T=\left(
\begin{array}{cc}
1-1/k & 1/k \\
\end{array}
\right). \end{equation} Now, one can see that $P_k$ has the largest possible rank, i.e.\ $\textrm{rank\,} P_k=k+1$. This is because, on the one hand, all even columns are clearly linearly independent, i.e.\ $col_{2j}(P_k)=e_j^{(m)}/k$ for $j=1,2,\ldots, k$. On the other hand, if we add to this set any other odd column, the set remains linearly independent because this column has non-zero entries where all the others have a zero entry (from the $(k+1)$th entry to the $m$th). Thus, using Observation \ref{obsest}, we finally obtain that if $\textbf{P}_k\in\mathcal{Q}_d$ it must hold that $d\geq\sqrt{k+1}$ while, by construction, $\textbf{P}_k\in\mathcal{C}'_2$ $\forall k$. In passing, since obviously $\textbf{P}_k\in\mathcal{Q}'_2$ $\forall k$, this also shows the non-convexity of $\mathcal{Q}_d$ (see also \cite{dallarno} and \cite{sikora2}).
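The construction above can be reproduced numerically (a sketch assuming NumPy; the value of $k$ below is an arbitrary choice):

```python
import numpy as np

k = 5; m = k + 1                    # |Y| = k, |X| = m >= k + 1

e1 = np.array([1.0, 0.0])           # block (e_1^{(2)})^T: output b = 0 surely
e2 = np.array([0.0, 1.0])           # block (e_2^{(2)})^T: output b = 1 surely

def D(i):
    """Matrix of D_i: all blocks equal to e1 except block (i, i) = e2."""
    M = np.tile(e1, (m, k))         # m x 2k matrix of e1 blocks
    M[i, 2 * i:2 * i + 2] = e2
    return M

# Each D_i lies in C_2, so the uniform mixture P_k lies in C'_2 ...
P_k = sum(D(i) for i in range(k)) / k

# ... yet P_k has the maximal rank k + 1, forcing d >= sqrt(k + 1)
# for membership in Q_d by the rank estimates.
assert np.linalg.matrix_rank(P_k) == k + 1
```

The diagonal blocks of `P_k` are exactly $c_k^T=(1-1/k,\ 1/k)$, matching the displayed matrix.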
\subsection{Behaviours cheaper quantumly than classically together with shared randomness}\label{qmoresr}
The results of \cite{brunner} discussed before show that there is a quadratic gap between $\mathcal{C}$ and $\mathcal{Q}$. Interestingly, this gap can be seen to be exponential and extended to $\mathcal{C}'$. Testing classical and quantum dimensions in the prepare-and-measure scenario is intimately connected to the field of communication complexity when restricted to the scenario of one-way communication complexity. In fact, this setting is the same, the only difference being that it is task-oriented. In this case, under the same restrictions, A and B now have the goal of evaluating with high probability of success a binary function $f$ of their inputs that is known to both of them. That is, their strategies should aim at preparing behaviours $P(b|xy)$ for which the result $b=f(x,y)$ is much more likely than $b\neq f(x,y)$. The field of communication complexity studies the least amount of communication (from A to B in the one-way case) necessary to evaluate different functions. The possible benefits of using quantum communication over classical communication have been extensively studied in recent years and there are several scenarios for which it is known that certain functions can be evaluated with a given probability of success requiring exponentially less communication in the quantum case than in the case of classical messages \cite{review}. Interestingly, the one-way scenario is not an exception, and Refs.\ \cite{raz,montanaro} provide instances of this situation for the case of partial functions. In more detail, \cite{montanaro} considers a function, $f_P$, for which A receives an $n$-bit string $x$ (i.e.\ $|\mathcal{X}|=2^n$) and B an $n\times n$ permutation matrix $M$ (i.e.\ $|\mathcal{Y}|=n!$). The goal is to output 1 if $Mx=x$ and 0 if $Mx$ and $x$ are sufficiently different (in a precise way which is irrelevant here). 
This is an example of a partial function, or a function with a promise: A and B are guaranteed to receive inputs from a strict subset of the possible $x$ and $y$ (those for which one of the above conditions holds). In \cite{montanaro}, it is shown that a quantum strategy solves this function using $O(\log n)$ qubits of communication (i.e.\ $d=O(n)$) while there cannot exist any classical strategy solving $f_P$ using fewer than on the order of $n^{7/16}$ bits. This immediately implies that $\mathcal{Q}_d\nsubseteq\mathcal{C}'_d$. To see this, consider any behaviour corresponding to the aforementioned quantum strategies that solve $f_P$. It must then fulfill $\textbf{P}_n\in\mathcal{Q}_d$ for some $d=O(n)$. However, it cannot be that $\textbf{P}_n\in\mathcal{C}'_d$, as this would contradict the result of \cite{montanaro}. Indeed, any behaviour in $\mathcal{C}'_d$ cannot have the same entries as $\textbf{P}_n$ over the subset of promised inputs $x$ and $y$ since it could then be used to solve $f_P$. Moreover, it must be that $\textbf{P}_n\in\mathcal{C}'_{d'}$ with $d'$ scaling at least as $2^{n^{7/16}}$. Interestingly, there is roughly no difference between $\mathcal{C}_d$ and $\mathcal{C}'_d$ for the evaluation of functions. Newman's theorem \cite{newman} shows in the one-way communication scenario that classical strategies with shared randomness that solve some function can be turned into successful strategies without shared randomness at just a logarithmic overhead. Thus, if there exists some exponential gap for the solution of a function with quantum and classical resources, it must persist if we allow classical resources supplemented with shared randomness.
In light of communication complexity, the reader might wonder whether the results of the previous section, showing behaviours which require overwhelmingly more quantum communication to be prepared than classical communication together with shared randomness, could be used to devise functions whose solution has a similar gap, i.e.\ functions that are at least exponentially cheaper to solve classically if shared randomness is allowed than quantumly. However, Newman's theorem forbids this possibility. When it comes to the evaluation of functions, the differences with and without shared randomness can be at most logarithmic in the one-way scenario, even if one just uses classical messages.
\subsection{Structure of $\mathcal{C}_d$ and $\mathcal{Q}_d$ and robustness of dimensionality detection}
As discussed in the introduction, the prepare-and-measure scenario was introduced to certify the dimension of uncharacterized physical systems in a device-independent way, i.e.\ based solely on the observed statistics and without any assumption on the internal working of the devices used. In this context any condition expressed in terms of the observed behaviour that guarantees that A and B exchange physical systems of at least a certain dimension is usually referred to as a dimension witness. The rank estimates introduced in Sec.\ \ref{estimates} are therefore an example of such an object. In practice, the measurement device cannot be perfectly isolated from external noise, which introduces errors in the experimentally reconstructed behaviour and can lower the dimension that is certified from it. The robustness of a dimension witness characterizes its noise tolerance in these scenarios and plays a crucial role in dimensionality certification. Although not strictly necessary, in this setting a natural assumption is that the preparing and measuring devices are uncorrelated (i.e.\ the preparer and the measurer are not maliciously conspiring to fool the certifier) and, hence, in this case one assumes that shared randomness is not available. Thus, the problem here boils down to identifying what is the smallest $d$ such that $\textbf{P}\in\mathcal{C}_d$ or $\textbf{P}\in\mathcal{Q}_d$. This was the motivation of \cite{brunner} to introduce the dimension witness based on the determinant of the matrix $W_k$ that we have reviewed in Sec.\ \ref{estimates}. It was observed there that these witnesses are extraordinarily robust, tolerating arbitrary amounts of noise. In this section we investigate the structure of the sets $\mathcal{C}_d$ and $\mathcal{Q}_d$ from this point of view and find reasons for this exceptional robustness. Low-dimensional sets are negligibly small in the set of all possible behaviours: they are nowhere dense and have measure zero. 
Hence, very contrived forms of noise are required to drastically reduce the dimension. This is in sharp contrast to the case where shared randomness is available since, as we also discuss here, the sets $\mathcal{C}'_d$ and $\mathcal{Q}'_d$ are not negligible $\forall d\geq2$. We finish this section by observing that rank estimates are also extremely robust under any physically reasonable form of noise.
Notice that evaluating the rank of a matrix is an ill-conditioned problem. Due to the estimates presented in Sec.\ \ref{estimates}, this indicates that small perturbations of a behaviour could considerably increase the dimension required to prepare it. Furthermore, for general matrices it is well known that lower-rank matrices are of measure zero and nowhere dense among matrices of higher rank. This suggests that lower-dimensional sets of behaviours might be negligible. We formalize this in the following.
\begin{theorem}\label{obsnegl}
In every scenario with $|\mathcal{Y}|=k$ and $|\mathcal{X}|=m\geq k+1$, the sets $\mathcal{C}_d$ with $d<k+1$ and $\mathcal{Q}_d$ with $d<\sqrt{k+1}$ have measure zero and are nowhere dense in the set of all possible behaviours $\mathcal{C}_m=\mathcal{Q}_m$. \end{theorem} \begin{proof} We will show that the set of rank-deficient behaviours (i.e.\ $\textrm{rank\,} P<k+1$), which we will denote by $S$, is of measure zero and nowhere dense in $\mathcal{C}_m=\mathcal{Q}_m$. Since in the classical case we have seen that $\textrm{rank\,} P\leq d$, when $d<k+1$ we have that $\mathcal{C}_d\subset S$ and the result follows. The same applies to the quantum case. The proof of the claim for rank-deficient behaviours follows in a straightforward manner from the analogous statement for general matrices.
Let us first show that $S$ has measure zero. Notice that the set of behaviours is a subset of $\mathbb{R}^{km}$ determined by specifying an arbitrary collection of values $0\leq P(0|xy)\leq1$ $\forall x,y$, and it has non-zero Lebesgue measure. When $P\in S$, this additionally imposes that the determinants of certain square submatrices vanish; each such determinant is a polynomial in the matrix entries, i.e.\ in the $\{P(0|xy)\}$. However, the zero set of a polynomial must have measure zero (unless it is the zero polynomial). Hence, $S$ has Lebesgue measure equal to zero.
Let us now see that $S$ is nowhere dense. For this we have to see that the closure of $S$, $\overline{S}$, has empty interior. Since we are dealing with finite-dimensional matrices, for these topological considerations we can take any matrix norm $||\cdot||$. First of all, it is useful to notice that $S$ is closed. This is because $S$ is characterized by the determinant of all $(k+1)\times(k+1)$ submatrices of $P$ being zero. Hence, the set is the preimage of a closed set under a continuous map (the determinant is a polynomial of the matrix entries) and it is therefore closed. Now, since $S=\overline{S}$, we just need to check that $S$ has an empty interior. Clearly, general rank-deficient matrices can always be approximated by full-rank matrices. It remains to verify the same while ensuring that the full-rank approximation can be chosen to be a behaviour as well. For this, take any $P\in S$ and define for any $\epsilon\in[0,1]$ \begin{equation}\label{pepsilon} P_\epsilon=(1-\epsilon)P+\epsilon Q, \end{equation} where \begin{equation}\label{pepsilonq} Q=\sum_{j=1}^{k}e_j^{(m)}v_j^T+\left(\sum_{j=k+1}^me_j^{(m)}\right)v_{k+1}^T \end{equation} with the $2k$-dimensional vectors $\{v_j\}$ defined by \begin{widetext} \begin{equation}\label{conspms} v_1=\left(
\begin{array}{c}
e_2^{(2)} \\
e_1^{(2)} \\
\vdots \\
e_1^{(2)} \\
\end{array}
\right), v_2=\left(
\begin{array}{c}
e_1^{(2)} \\
e_2^{(2)} \\
e_1^{(2)} \\
\vdots \\
e_1^{(2)} \\
\end{array}
\right),\ldots, v_k=\left(
\begin{array}{c}
e_1^{(2)} \\
\vdots \\
e_1^{(2)} \\
e_2^{(2)} \\
\end{array}
\right), v_{k+1}=\left(
\begin{array}{c}
e_1^{(2)} \\
\vdots \\
e_1^{(2)} \\
e_1^{(2)} \\
\end{array}
\right). \end{equation} \end{widetext}
Notice that $Q$ is a valid behaviour and, therefore, so is $P_\epsilon$ $\forall\epsilon$. It is important to notice that the $\{v_j\}$ are linearly independent (LI) vectors. That the first $k$ of them are LI is clear because each of them has a nonzero entry where all the others are zero. To see that adding $v_{k+1}$ to the set keeps it LI we notice the following. This vector has its second entry equal to zero, which is the case for all of the others except $v_1$. Thus, if $v_{k+1}$ could be obtained as a linear combination of the other vectors, the weight of $v_1$ would have to be zero. Iterating this argument for all even entries of $v_{k+1}$ we obtain the claim. Now, because $P$ and $Q$ are behaviours and the $\{v_j\}$ are nonnegative vectors, we have that $Pv_j$ and $Qv_j$ are nonnegative vectors too $\forall j$. Moreover, by construction $Qv_j\neq0$ $\forall j$ and, therefore, $P_\epsilon v_j\neq0$ $\forall j$ and all $\epsilon>0$. Since the $\{v_j\}$ are LI this implies that $\dim\ker P_\epsilon\leq k-1$ and, hence, given that the dimension of the kernel and the rank must add up to the number of columns $2k$, $\textrm{rank\,} P_\epsilon=k+1$ $\forall\epsilon>0$, i.e.\ $P_\epsilon\notin S$ $\forall\epsilon>0$. Thus, we finally see that $\forall\delta>0$ and $\forall P\in S$, $\exists P'\notin S$ such that $||P-P'||<\delta$ (for this it suffices to take $P'=P_\epsilon$ with $\epsilon$ sufficiently small). Hence, $S$ does not contain any nonempty open set, i.e.\ it has empty interior as we wanted to prove \footnote{Actually, the fact that $S$ is closed and has measure zero is already enough to prove that it is nowhere dense. However, similar constructions to that of Eq.\ (\ref{pepsilonq}) will be used throughout the paper.}. \end{proof}
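The perturbation argument in the proof can be checked numerically. The sketch below (assuming NumPy; the rank-deficient behaviour $P$ and the values of $k$ and $\epsilon$ are our own arbitrary choices) builds $Q$ from the vectors of Eq.\ (\ref{conspms}) and verifies that $P_\epsilon$ has full rank for every $\epsilon>0$ tested.

```python
import numpy as np

k = 4; m = k + 1
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def v(j):
    """The 2k-dimensional vectors from the proof, zero-indexed:
    all blocks e1, except block j of v_j (for j < k) which equals e2."""
    w = np.tile(e1, k)
    if j < k:
        w[2 * j:2 * j + 2] = e2
    return w

# Q of Eq. (\ref{pepsilonq}): row j is v_j for j < k, remaining rows v_{k+1}.
Q = np.vstack([v(j) for j in range(k)] + [v(k)] * (m - k))

# A rank-deficient behaviour P, e.g. the deterministic "always b = 0" one.
P = np.tile(e1, (m, k))
assert np.linalg.matrix_rank(P) == 1

# P_eps = (1 - eps) P + eps Q has full rank k + 1 for every eps > 0.
for eps in (1e-6, 1e-3, 0.5):
    assert np.linalg.matrix_rank((1 - eps) * P + eps * Q) == k + 1
```

Even for a tiny admixture of $Q$, the perturbed behaviour escapes the rank-deficient set $S$, as the proof shows.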
Thus, the sets $\mathcal{C}_d$ with $d<k+1$ and $\mathcal{Q}_d$ with $d<\sqrt{k+1}$ are negligibly small and $\mathcal{C}_m\backslash\mathcal{C}_d$ and $\mathcal{Q}_m\backslash\mathcal{Q}_d$ have full measure and a dense interior. It might be that the conditions $d<k+1$ and $d<\sqrt{k+1}$ are an artifact of the proof due to the rank estimates and that the above claim can be extended to larger values of $d<m$. It could moreover be that $\mathcal{C}_{d-1}$ ($\mathcal{Q}_{d-1}$) has zero measure and is nowhere dense in $\mathcal{C}_d$ ($\mathcal{Q}_d$) for all $d$ such that $2\leq d\leq m$.
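The measure-zero statement of Theorem \ref{obsnegl} can also be illustrated numerically (a sanity check, not a proof; sizes and sampling are our own choices): behaviours sampled uniformly essentially never land in the rank-deficient set $S$.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3; m = k + 1
trials = 1000

full_rank = 0
for _ in range(trials):
    p0 = rng.random((m, k))        # sample P(0|xy) uniformly in [0, 1]
    P = np.empty((m, 2 * k))
    P[:, 0::2] = p0                # columns for b = 0
    P[:, 1::2] = 1 - p0            # normalization fixes the b = 1 columns
    if np.linalg.matrix_rank(P) == k + 1:
        full_rank += 1

# S has measure zero: random draws land outside it with probability one.
assert full_rank == trials
```

Note that $k+1$ is the maximal possible rank here, since the $b=1$ columns are affine functions of the $b=0$ columns.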
The result of Theorem \ref{obsnegl} is in sharp contrast to the case when shared randomness is available. The sets $\mathcal{C}'_d$ and $\mathcal{Q}'_d$ are not negligible in the set of all possible behaviours $\forall d\geq2$, as we show below. Thus, this negligibility property marks a crucial difference for DIDW depending on whether or not shared randomness is available.
\begin{proposition} In every scenario with $|\mathcal{Y}|=k$ and $|\mathcal{X}|=m\geq k+1$, the sets $\mathcal{C}'_d$ and $\mathcal{Q}'_d$ $\forall d\geq2$ have nonzero measure and are not nowhere dense in the set of all possible behaviours $\mathcal{C}_m=\mathcal{Q}_m$. \end{proposition} \begin{proof} First of all, notice that $\mathcal{C}'_2\subset\mathcal{C}'_d$ ($d>2$) and $\mathcal{C}'_2\subset\mathcal{Q}'_d$ ($d\geq2$). Hence, it suffices to prove the claim for $\mathcal{C}'_2$. Notice moreover that both $\mathcal{C}'_2$ and $\mathcal{C}_m$ are (convex) polytopes \cite{gallego}. We have discussed in the proof of Theorem \ref{obsnegl} that $\dim\mathcal{C}_m=km$. Therefore, one only needs to see that $\dim\mathcal{C}'_2=km$ too. For this, we have to find $km+1$ points in $\mathcal{C}'_2$ which are affinely independent. We give such a construction in the following. Notice that, as in Eq.\ (\ref{ex}), behaviours whose matrix has all rows except one equal are in $\mathcal{C}_2\subset\mathcal{C}'_2$. By analogy with Eq.\ (\ref{ex}), we then denote by $\{\textbf{D}_{ij}\}$ ($1\leq i\leq m$, $1\leq j\leq k$) the behaviours whose $m\times2k$ matrices have $1\times2$ blocks equal to $(e_1^{(2)})^T$, except for the block at position $(i,j)$ which is equal to $(e_2^{(2)})^T$. We will also consider the behaviour $\textbf{D}_{0}$, whose matrix has all blocks equal to $(e_1^{(2)})^T$. Arguing similarly to the set of vectors of Eq.\ (\ref{conspms}), it is easy to see that the $km+1$ points $\{\textbf{D}_{0},\textbf{D}_{ij}\}$ (which all happen to be vertices of the polytope $\mathcal{C}'_2$) are LI and, hence, affinely independent. \end{proof}
Theorem \ref{obsnegl} should not be interpreted as a physical impossibility of preparing low-dimensional behaviours. If the setting limits the amount of communication A can send to B, we are bound to observe such a low-dimensional behaviour. What we learn from it is that if the underlying dimension is sufficiently large, and B's measurements are subject to noise, the noise must have a very particular form in order to drastically reduce the dimension of the observed behaviour. Actually, for any behaviour $P$ one can see that under any physically reasonable form of noise $P_n$, the observed behaviour, \begin{equation}\label{noisy} P_\eta=\eta P+(1-\eta)P_n\quad(\eta\in[0,1]), \end{equation} maintains the rank $\forall\eta>0$. That is, $\textrm{rank\,} P_\eta=\textrm{rank\,} P$ $\forall\eta>0$ and the rank estimates are completely robust against noise. Thus, if $P$ can be certified to have dimension $d\geq k+1$ in the classical case (or $d\geq \sqrt{k+1}$ in the quantum case) by means of its rank, this will not change for its noisy version (except in the extremal case of full noise $\eta=0$).
We finish this section by proving the above claim that $\textrm{rank\,} P_\eta=\textrm{rank\,} P$ $\forall\eta>0$. First we need to discuss what $P_n$ can be. The most reasonable and general form for the noise is that it is independent of A's input, i.e.\ $P_n(b|xy)=P_n(b|y)$ $\forall b,x,y$. This is because the errors only occur in the measurement process carried out by B and A has no control over it to affect the encoding of her message. Thus, $P_n={\bf 1}v^T$ where ${\bf 1}$ is a column vector in $\mathbb{R}^{|\mathcal{X}|}$ with all entries equal to 1 and $v=\sum_{b,y}P_n(b|y)|yb\rangle$. This implies that the noisy behaviour $P_\eta$ is given by a rank-one perturbation to $P$. This means that the rank of $P$ and $P_\eta$ can at most differ by one. However, we will see now that they are actually equal (as long as $\eta\neq0$). Since the image of $P_n$ is spanned by ${\bf 1}$, $\textrm{rank\,} P_\eta=\textrm{rank\,} P-1$ can only hold if there exists some vector $u$ such that $Pu\propto{\bf 1}$ in such a way that $P_\eta u=0$. Clearly, this vector must be of the form $u=\alpha{\bf 1}+w$ where $\alpha\in\mathbb{R}$ and $w\in\ker P$ because $P{\bf 1}=|\mathcal{Y}|{\bf 1}$ for any behaviour. However, since we also have that $P_\eta{\bf 1}=|\mathcal{Y}|{\bf 1}$, if it is possible to have a vector in the kernel of $P_\eta$ that is not in the kernel of $P$, $P_\eta u=[\alpha|\mathcal{Y}|+(1-\eta)(v^Tw)]{\bf 1}=0$, it must be such that $v^Tw\neq0$. In this case we then have that $P_\eta w\neq0$, that is, we can also find a vector such that it is in the kernel of $P$ but not in that of $P_\eta$. This shows that $\textrm{rank\,} P_\eta\neq\textrm{rank\,} P-1$ when $\eta\neq0$. A similar argument shows that $\textrm{rank\,} P\neq\textrm{rank\,} P_\eta-1$ when $\eta\neq0$. Hence, we obtain the desired result.
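This rank robustness is easy to check numerically. The following sketch (a toy illustration, not part of the paper; it assumes NumPy and a small hypothetical scenario with $|\mathcal{Y}|=2$ and $|\mathcal{X}|=3$) builds a behaviour matrix, mixes it with input-independent noise $P_n={\bf 1}v^T$, and confirms that the rank is unchanged for every $\eta>0$:

```python
import numpy as np

# Hypothetical prepare-and-measure behaviour with |Y| = 2, |X| = 3:
# rows are inputs x, and each 1x2 block is the distribution P(b|x,y).
P = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])

# Input-independent noise P_n(b|xy) = P_n(b|y): a rank-one matrix 1 v^T,
# here taken uniform over b for each y.
v = np.array([0.5, 0.5, 0.5, 0.5])
P_n = np.ones((3, 1)) @ v[None, :]

for eta in (0.9, 0.5, 0.1):  # any eta > 0 preserves the rank
    P_eta = eta * P + (1 - eta) * P_n
    assert np.linalg.matrix_rank(P_eta) == np.linalg.matrix_rank(P)
```

The assertion mirrors the argument above: the noise is a rank-one perturbation along ${\bf 1}$, and since $P_\eta{\bf 1}=|\mathcal{Y}|{\bf 1}$ the kernel cannot grow.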
\section{Bell scenarios}\label{bs}
The first scheme \cite{bellwit} that was proposed to test in a device-independent way the dimension of a quantum system used the Bell scenario of quantum non-locality \cite{reviewnl}. In this setting we also have two parties A and B, but they cannot communicate in this case. However, both A and B perform measurements dependent respectively on some inputs $x$ and $y$ on a bipartite quantum state $\rho$ they share. Each measurement leads to outputs $a$ and $b$ for A and B respectively. These scenarios can also be catalogued according to the (finite) size of the input and output alphabets $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{A}$ and $\mathcal{B}$. In the following we will consider that $|\mathcal{X}|=|\mathcal{Y}|=m$ and $|\mathcal{A}|=|\mathcal{B}|=n$ and will refer to $(m,n)$ scenarios. Similarly to the prepare-and-measure setting, the object to which we have access here is the set of conditional probabilities of obtaining the outputs $(a,b)$ given the choice of inputs $(x,y)$, $P(ab|xy)$. We will use again the term behaviour to refer to this collection of numbers. Obviously, it must hold that $P(ab|xy)\geq0$ $\forall a,b,x,y$ and $\sum_{a,b}P(ab|xy)=1$ $\forall x,y$. All behaviours attainable classically (together with shared randomness) satisfy \begin{equation}
P(ab|xy)=\sum_\lambda p_\lambda P^A_\lambda(a|x)P^B_\lambda(b|y)\,\forall a,b,x,y, \end{equation}
for some convex weights $\{p_\lambda\}$ and sets of conditional probabilities $\{P^A_\lambda\}$ and $\{P^B_\lambda\}$. Alternatively, the set of all such behaviours, the so-called local set $\mathcal{L}$, can be characterized to be the convex hull of all local deterministic behaviours (LDBs). The LDBs correspond to all possible deterministic uncorrelated behaviours, i.\ e.\ to those of the form $D(ab|xy)=\delta_{a,f(x)}\delta_{b,g(y)}$, where $f$ is any function mapping elements of $\mathcal{X}$ to $\mathcal{A}$ and similarly for $g$. That is, for every party a unique output occurs with probability 1 for every choice of input. Given a scenario, there is a finite number (actually $n^{2m}$) of possible LDBs and, hence, $\mathcal{L}$ is a polytope. We will denote by $\mathcal{Q}$ here the set of all behaviours that can be obtained by performing measurements on bipartite quantum states $\rho_{AB}$, i.\ e.\ \begin{equation}
P(ab|xy)=\mathrm{tr}(\rho_{AB} E_a^x\otimes F_b^y) \end{equation} for some positive semidefinite operators $\{E_a^x,F_b^y\}$ such that $\sum_aE_a^x$ and $\sum_bF_b^y$ equal the identity in each party's Hilbert space $\forall x,y$. The celebrated conclusion of Bell's theorem is that $\mathcal{L}\subsetneq\mathcal{Q}$.
For a fixed $(m,n)$ setting, one can now define $\mathcal{Q}_d$ here as the set of all behaviours in $\mathcal{Q}$ which are obtainable by a quantum state such that $\min_{X=A,B}(\dim\textrm{supp\,}\rho_X)\leq d$, i.\ e.\ all behaviours obtainable by measuring quantum states of minimum local dimension at most $d$. As discussed in the introduction, the characterization of these sets has attracted considerable attention from the point of view of both dimensionality certification and semi-device-independent quantum information protocols. It is interesting to notice that, when the dimension of the physical system is not restricted, shared randomness is not a resource. Its availability is irrelevant to which behaviours can be observed with quantum preparations because $\mathcal{Q}=\mathcal{Q}_\infty$ is a convex set \cite{reviewnl}. However, this is not the case when the physical dimension of the underlying system plays a role. If shared randomness is freely available, this leads one to consider the sets $\textrm{Conv}(\mathcal{Q}_d)$. Thus, it is interesting to explore the differences given by whether this resource is available or not and, in particular, whether $\mathcal{Q}_d\neq\textrm{Conv}(\mathcal{Q}_d)$ in general. In fact, it is easy to see that $\mathcal{Q}_1$ is non-convex in every scenario \cite{wolfe}. By definition, this set can only include uncorrelated behaviours. However, the convex hull of all LDBs gives rise to the full local polytope $\mathcal{L}$ and it is well-known that this set includes correlated behaviours. Reference \cite{vertesi} was the first one to observe that the sets $\mathcal{Q}_d$ need not be convex in general. In more detail, it is shown therein that in the scenarios $(m,2)$ with $m$ even, every set $\mathcal{Q}_d$ such that $d<\sqrt{m+1}$ is non-convex. In particular, this implies that $\mathcal{Q}_2$ is non-convex already in the reasonably simple scenario $(4,2)$.
It has been observed in \cite{wolfe} that this is the case even in the simplest possible scenario $(2,2)$ \footnote{The published version of \cite{wolfe} uses numerical evidence to provide a region in $\textrm{Conv}(\mathcal{Q}_2)$ which is not in $\mathcal{Q}_2$. A later version [arXiv:1506.01119v4] mentions that the tools of \cite{sikora1} can be used to see that some points in this region are indeed not in $\mathcal{Q}_2$.}. In the following we prove non-convexity properties of the sets $\mathcal{Q}_d$ in the general scenario $(m,n)$. By analogy with Sec.\ \ref{pms}, we will finish this section by proving that the sets $\mathcal{Q}_d$ are negligible in $\mathcal{Q}$ when the dimension is sufficiently small, a property which is not true for $\textrm{Conv}(\mathcal{Q}_d)$.
\subsection{Dimension estimates}
In order to prove the non-convexity and negligibility of $\mathcal{Q}_d$ in $(m,n)$ scenarios we first derive dimension estimates. By analogy with the previous section and adapting the techniques of \cite{vertesi}, we will obtain lower bounds on the quantum dimension in terms of the rank of a matrix associated with the behaviour. More explicitly, for every $(m,n)$ scenario we will arrange every behaviour $\textbf{P}$ to form the $mn\times mn$ real matrix \begin{equation}\label{Pmatrix}
P=\sum_{abxy}P(ab|xy)|xa\rangle\langle yb|, \end{equation}
where in the standard notation of quantum mechanics $|xa\rangle=|x\rangle\otimes|a\rangle$ and $\{|x\rangle\}$ denotes the computational basis of $\mathbb{R}^{m}$ and similarly for the other alphabet elements. Thus, $P$ can be partitioned as a block matrix $$P=\left(
\begin{array}{ccc}
P_{11} & \cdots & P_{1m} \\
\vdots & \ddots & \vdots \\
P_{m1} & \cdots & P_{mm} \\
\end{array}
\right)\in\mathbb{R}^{mn\times mn}$$ with blocks $$P_{xy}=\left(
\begin{array}{ccc}
P(11|xy) & \cdots & P(1n|xy) \\
\vdots & \ddots & \vdots \\
P(n1|xy) & \cdots & P(nn|xy) \\
\end{array}
\right)\in\mathbb{R}^{n\times n}.$$
It will be relevant in the next subsection to note that the matrix associated to LDBs $D(ab|xy)=\delta_{a,f(x)}\delta_{b,g(y)}$ is of rank one, i.\ e.\ \begin{align}
D&=\left(\sum_{ax}\delta_{a,f(x)}|xa\rangle\right)\left(\sum_{by}\delta_{b,g(y)}\langle yb|\right)\nonumber\\
&=\left(\sum_{x}|xf(x)\rangle\right)\left(\sum_{y}\langle yg(y)|\right). \end{align}
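The rank-one structure of LDBs is straightforward to verify numerically. The sketch below (an illustration under assumed toy choices of $m$, $n$, $f$ and $g$; NumPy is assumed) builds $D$ as the outer product above and checks it entry-wise against the delta definition:

```python
import numpy as np

m, n = 2, 3                    # hypothetical (m, n) scenario
f = [0, 2]                     # some deterministic strategy f: X -> A
g = [1, 0]                     # some deterministic strategy g: Y -> B

# sum_x |x f(x)> and sum_y <y g(y)| in the computational basis of R^(mn)
u = np.zeros(m * n)
for x in range(m):
    u[x * n + f[x]] = 1.0
w = np.zeros(m * n)
for y in range(m):
    w[y * n + g[y]] = 1.0

D = np.outer(u, w)             # D = (sum_x |x f(x)>)(sum_y <y g(y)|)
assert np.linalg.matrix_rank(D) == 1

# Entry-wise agreement with D(ab|xy) = delta_{a,f(x)} delta_{b,g(y)}
for x in range(m):
    for y in range(m):
        for a in range(n):
            for b in range(n):
                assert D[x * n + a, y * n + b] == float(a == f[x]) * float(b == g[y])
```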
Suppose now that $\textbf{P}\in\mathcal{Q}_d$ and that the optimal quantum state is such that $d=\dim\textrm{supp\,}\rho_A$. This means that the operators $\{E_a^x\}$ act on $\mathbb{C}^d$. Since they are Hermitian, they must then belong to a real vector space of dimension $d^2$ and, thus, at most $d^2$ of them can be linearly independent. In other words, we can express all the $\{E_a^x\}$ as real linear combinations of a fixed set of $d^2$ Hermitian operators (e.\ g.\ the identity and the generators of SU$(d)$). By linearity of the trace, this means that there are at most $d^2$ linearly independent rows in the matrix $P$ and, hence, $\textrm{rank\,} P\leq d^2$. If instead $d=\dim\textrm{supp\,}\rho_B$, we can apply the same reasoning to the operators $\{F_b^y\}$ and the columns of $P$, arriving again at the same conclusion that $d\geq\sqrt{\textrm{rank\,} P}$.
\begin{observation}\label{obsest2} If $\textbf{P}\in\mathcal{Q}_d$, then $d\geq\sqrt{\textrm{rank\,} P}$. \end{observation}
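The bound of Observation \ref{obsest2} can be probed numerically. The following sketch (illustrative only; NumPy assumed, with an arbitrary random realization, not one from the paper) samples a quantum strategy of local dimension $d=2$ in a $(4,3)$ scenario and checks that the rank of the resulting behaviour matrix never exceeds $d^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 2, 4, 3                        # local dimension and scenario size

def inv_sqrt(S):
    """Inverse square root of a positive definite Hermitian matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

def random_povm():
    """A random n-outcome POVM on C^d (elements sum to the identity)."""
    Ms = []
    for _ in range(n):
        G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        Ms.append(G.conj().T @ G)        # positive semidefinite
    N = inv_sqrt(sum(Ms))
    return [N @ M @ N for M in Ms]       # normalized so that sum_a E_a = I

E = [random_povm() for _ in range(m)]    # A's measurements {E_a^x}
F = [random_povm() for _ in range(m)]    # B's measurements {F_b^y}
G = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
rho = G @ G.conj().T
rho /= np.trace(rho)                     # random bipartite state on C^d x C^d

# Behaviour matrix P of Eq. (Pmatrix): P[(x,a),(y,b)] = tr(rho E_a^x o F_b^y)
P = np.zeros((m * n, m * n))
for x in range(m):
    for y in range(m):
        for a in range(n):
            for b in range(n):
                P[x * n + a, y * n + b] = np.trace(rho @ np.kron(E[x][a], F[y][b])).real

assert np.linalg.matrix_rank(P) <= d * d   # rank P <= d^2, so d >= sqrt(rank P)
```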
It is important to notice for the following that the largest rank a matrix of a behaviour can attain is $mn-m+1$. This is because quantum behaviours must obey the no-signaling constraints \begin{align}
\sum_bP(ab|xy)=\sum_bP(ab|xy'),\quad \forall a,x,y,y',\nonumber\\
\sum_aP(ab|xy)=\sum_aP(ab|x'y),\quad \forall b,x,x',y. \end{align} The set of all behaviours fulfilling these conditions will be denoted by $\mathcal{NS}$, which is also a polytope.
It should be mentioned that \cite{sikora1} already provides means to obtain lower bounds for the dimension and, actually, one can also use the positive semidefinite rank of $P$ for these matters. However, as in the previous section, rank estimates, although weaker, turn out to be easier to apply.
\subsection{Non-convexity of $\mathcal{Q}_d$}
In order to prove the non-convexity of $\mathcal{Q}_d$ we will use the following strategy. We will construct a local behaviour $\textbf{L}$ such that $L$ has the largest possible rank $mn-m+1$. By Observation \ref{obsest2}, this implies that $\textbf{L}\notin\mathcal{Q}_d$ if $d<\sqrt{\textrm{rank\,} L}$. However, since all local behaviours can be written as a convex combination of LDBs, it must hold that $\textbf{L}\in\textrm{Conv}(\mathcal{Q}_1)\subset\textrm{Conv}(\mathcal{Q}_d)$ $\forall d$. Thus, for sufficiently small values of $d$, $\mathcal{Q}_d$ cannot be convex.
\begin{lemma}\label{obsbell} In every $(m,n)$ scenario there exists $\textbf{L}\in\mathcal{L}$ such that $\textrm{rank\,} L=mn-m+1$. \end{lemma} \begin{proof} Consider the set of $n-1$ vectors of size $mn$ given by (we use here the same notation for the $\{e_i^{(n)}\}$ as in Sec.\ \ref{pms}) \begin{equation} v_i^{(1)}=\left(
\begin{array}{c}
e_i^{(n)} \\
e_1^{(n)} \\
\vdots \\
e_1^{(n)} \\
\end{array}
\right),\quad i=2,3,\ldots n. \end{equation} We will also consider similar sets of the same cardinality \begin{equation} \{v_i^{(2)}\}=\left\{\left(
\begin{array}{c}
e_1^{(n)} \\
e_i^{(n)} \\
e_1^{(n)} \\
\vdots \\
e_1^{(n)} \\
\end{array}
\right)\right\},\ldots, \{v_i^{(m)}\}=\left\{\left(
\begin{array}{c}
e_1^{(n)} \\
\vdots \\
e_1^{(n)} \\
e_i^{(n)} \\
\end{array}
\right)\right\}. \end{equation} Notice now that the set $\{v_i^{(j)}\}$ ($i=2,\ldots n$, $j=1,\ldots m$) contains $mn-m$ LI vectors. This can be seen by noticing that each vector has a nonzero entry where all the others are zero. Notice moreover that if we add the vector \begin{equation} v^{(0)}=\left(
\begin{array}{c}
e_1^{(n)} \\
e_1^{(n)} \\
\vdots \\
e_1^{(n)} \\
\end{array}
\right) \end{equation} to this set, the vectors are still LI (cf.\ the reasoning after Eq.\ (\ref{conspms})). Finally, notice that the matrices \begin{equation} L_0=v^{(0)}(v^{(0)})^T,\quad \{L_{ij}\}=\{v_i^{(j)}(v_i^{(j)})^T\} \end{equation} clearly correspond to LDBs in the $(m,n)$ scenario. Hence, the matrix \begin{equation}\label{lesp} L=\frac{1}{mn-m+1}\left(L_0+\sum_{ij} L_{ij}\right) \end{equation} corresponds to a local behaviour and has the desired property that $\textrm{rank\,} L=mn-m+1$. This is because by construction $Lv^{(0)}\neq0$ and $Lv_i^{(j)}\neq0$ $\forall i,j$ and, thus, $\dim\ker L\leq m-1$, which leads to the claim using that $\textrm{rank\,} L+\dim\ker L=mn$. To see that indeed none of the above vectors is in the kernel of $L$, notice that $L_0v^{(0)}=mv^{(0)}$ while $L_{ij}v^{(0)}$ is a nonnegative vector $\forall i,j$ and similarly for the $\{v_i^{(j)}\}$. \end{proof} This shows that $\textbf{L}\notin\mathcal{Q}_d$ for $d<\sqrt{mn-m+1}$ and as discussed above we obtain the following corollary. \begin{corollary}\label{cor} Every set $\mathcal{Q}_d$ in a $(m,n)$ scenario such that $d<\sqrt{mn-m+1}$ is not convex. \end{corollary}
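The construction in the proof of Lemma \ref{obsbell} can be reproduced directly. The sketch below (NumPy assumed; the scenario $(m,n)=(3,3)$ is an arbitrary choice) builds the vectors $v^{(0)}$ and $v_i^{(j)}$, forms $L$ as in Eq.\ (\ref{lesp}), and confirms that $\textrm{rank}\,L=mn-m+1$:

```python
import numpy as np

m, n = 3, 3                               # an example (m, n) scenario

def block_vector(j, i):
    """Vector of m blocks of size n: e_1 in every block, except block j, which is e_i."""
    out = np.zeros((m, n))
    out[:, 0] = 1.0                       # e_1^(n) in every block
    if j is not None:
        out[j] = 0.0
        out[j, i] = 1.0                   # e_i^(n) in block j
    return out.reshape(-1)

v0 = block_vector(None, 0)                                        # v^(0)
vs = [block_vector(j, i) for j in range(m) for i in range(1, n)]  # {v_i^(j)}
vecs = [v0] + vs
assert len(vecs) == m * n - m + 1         # mn - m + 1 linearly independent vectors

# L as in Eq. (lesp): average of the rank-one LDB matrices v v^T
L = sum(np.outer(w, w) for w in vecs) / len(vecs)
assert np.linalg.matrix_rank(L) == m * n - m + 1
```

Since the vectors are linearly independent, the sum of the rank-one projectors $v v^T$ has rank equal to their number, exactly as argued in the proof.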
Notice that in the case $n=2$ we recover the result of \cite{vertesi} that allowed one to verify the non-convexity of $\mathcal{Q}_2$ in the scenario $(4,2)$. With our result, the simplest scenario for which we can see that $\mathcal{Q}_2$ is not convex is $(2,3)$. However, as mentioned before, $\mathcal{Q}_2$ is known to be non-convex in the simplest possible scenario $(2,2)$. Since the maximal rank of the matrix of a behaviour is $mn-m+1$, Corollary \ref{cor} cannot be further improved using our techniques. Thus, more powerful constraints could in principle be established by going beyond rank-based estimates.
Lemma \ref{obsbell} also has an interpretation similar to the result of Sec.\ \ref{srmoreq} in the prepare-and-measure scenario. If A and B are bound to local preparations but have access to shared randomness, they can obtain behaviours that, in order to be accessible quantumly without this resource, require a dimension that grows arbitrarily large with the number of inputs and/or outputs. An analogous result to that of Sec.\ \ref{qmoresr} is also obviously true by Bell's theorem. There are behaviours observable quantumly without shared randomness (even with the smallest possible dimension $d=2$) which cannot be attained by local strategies no matter how much access to this resource they have.
\subsection{Negligibility of low-dimensional sets}
As in the prepare-and-measure scenario, one can show that a significant difference between the two settings is that, when shared randomness is not available, the sets of low-dimensional behaviours are negligible in the set of all quantum behaviours.
\begin{theorem}\label{obsnegl2} Every set $\mathcal{Q}_d$ in a $(m,n)$ scenario such that $d<\sqrt{mn-m+1}$ is of measure zero and nowhere dense in the full set of quantum behaviours $\mathcal{Q}$. \end{theorem} \begin{proof} The proof is given in two parts. We first show that with the given premise $\mathcal{Q}_d$ is of measure zero and nowhere dense in $\mathcal{NS}$. We then show that this implies the claim.
The first part follows closely the proof of Theorem \ref{obsnegl} and we will only outline it. Similarly, we consider the set $S$ of rank-deficient no-signaling behaviours (i.e.\ $\textrm{rank\,} P<mn-m+1$) and prove the claim for this set, which is extended to $\mathcal{Q}_d$ with $d<\sqrt{mn-m+1}$ because $\mathcal{Q}_d\subset S$. The no-signaling set $\mathcal{NS}$ is a polytope in $\mathbb{R}^t$ with \cite{pironio} \begin{equation}\label{t} t=m^2(n-1)^2+2m(n-1). \end{equation} For behaviours in $S$ some polynomials of the $t$ variables must additionally vanish and $S$ has measure zero in $\mathcal{NS}$. To see that $S$ is nowhere dense in $\mathcal{NS}$, one should follow the same argumentation as before replacing $P_\epsilon$ in Eq.\ (\ref{pepsilon}) by \begin{equation} P_\epsilon=(1-\epsilon)P+\epsilon L, \end{equation} where $L$ is given by Eq.\ (\ref{lesp}).
We finish by showing that $\mathcal{Q}_d$ ($d<\sqrt{mn-m+1}$) having zero measure and being nowhere dense in $\mathcal{NS}$ implies the same negligibility properties inside $\mathcal{Q}$. Since the local polytope $\mathcal{L}$ has the same dimension ($t$) as $\mathcal{NS}$ \cite{pironio} and $\mathcal{L}\subset\mathcal{Q}\subset\mathcal{NS}$, we have that $\mathcal{Q}$ is not of measure zero in $\mathcal{NS}$ \footnote{The fact that $\mathcal{Q}$ is a bounded convex set guarantees that it is measurable.}. Hence $\mathcal{Q}_d$ has measure zero in $\mathcal{Q}$ as well. Regarding nowhere density, we use again that $\mathcal{Q}$ is full dimensional together with the fact that it is a convex set. Corollary 6.4.1 in \cite{rockafellar} tells us then that for every $P$ in the interior of $\mathcal{Q}$, $\mathop{\mathcal{Q}}\limits^\circ$, we have that \begin{equation}\label{ast}
\exists\epsilon>0\textrm{ such that }||P-P'||<\epsilon\Longrightarrow P'\in\mathcal{Q}. \end{equation}
Let us proceed by contradiction and assume that $\mathcal{Q}_d$ is not nowhere dense in $\mathcal{Q}$. Then, there would exist a $P\in\overline{\mathcal{Q}}_d$ and $\delta>0$ such that $||P-P'||<\delta$ and $P'\in\mathcal{Q}$ implies that $P'\in\overline{\mathcal{Q}}_d$. By definition, such $P$ must belong to $\mathop{\overline{\mathcal{Q}}_d}\limits^\circ$ and since $\mathcal{Q}_d\subset\mathcal{Q}$, it holds then that $\mathop{\overline{\mathcal{Q}}_d}\limits^\circ\subset\mathop{\overline{\mathcal{Q}}}\limits^\circ=\mathop{\mathcal{Q}}\limits^\circ$, where the equality follows from the convexity of $\mathcal{Q}$ (cf.\ Theorem 6.3 in \cite{rockafellar}). Thus, by condition (\ref{ast}) we can drop the assumption $P'\in\mathcal{Q}$ if we take $\min\{\epsilon,\delta\}$, i.e.\ \begin{equation}
||P-P'||<\min\{\epsilon,\delta\}\Longrightarrow P'\in\overline{\mathcal{Q}}_d. \end{equation} This means that $\mathcal{Q}_d$ is not nowhere dense in $\mathcal{NS}$ and we have reached a contradiction \footnote{An alternative way to prove that $\mathcal{Q}_{d}$ is nowhere dense in $\mathcal{Q}$ would be to show that $\mathcal{Q}_d$ is closed.}. \end{proof}
Theorem \ref{obsnegl2} tells us that in scenarios as simple as $(4,2)$ or $(2,3)$, quantum systems with $d=2$ are not only insufficient to reproduce all quantum behaviours; they almost never suffice.
This negligibility property is again in sharp contrast to the case in which the devices of the parties can be correlated. When shared randomness is granted to the parties we have that $\mathcal{L}\subset\textrm{Conv}(\mathcal{Q}_d)$ $\forall d$. Since, as we have already used in the proof of Theorem \ref{obsnegl2}, $\dim \mathcal{L}=\dim \mathcal{Q}=\dim \mathcal{NS}$, the sets $\textrm{Conv}(\mathcal{Q}_d)$ have non-zero measure and are not nowhere dense in $\mathcal{Q}$ $\forall d$.
Looking at Theorem \ref{obsnegl2}, a natural question is whether the bound $d<\sqrt{mn-m+1}$ is optimal for the negligibility property to hold. Although we cannot answer this question completely, the above observation allows us to establish, in every scenario, a lower bound on $d$ beyond which negligibility no longer holds \footnote{Notice that a similar reasoning can be applied to the prepare-and-measure scenario. However, it yields a trivial bound since in this case $d=|\mathcal{X}|$ is enough to generate the set of all behaviours.}.
\begin{proposition} Every set $\mathcal{Q}_d$ in a $(m,n)$ scenario such that $d\geq m^2(n-1)^2+2m(n-1)+1$ has non-zero measure and is not nowhere dense in the full set of quantum behaviours $\mathcal{Q}$. \end{proposition} \begin{proof} We first notice that if $\textbf{P}\in\mathcal{Q}_d$ and $\textbf{P}'\in\mathcal{Q}_{d'}$, then $\textbf{P}_\lambda=\lambda\textbf{P}+(1-\lambda)\textbf{P}'\in\mathcal{Q}_{d+d'}$ $\forall\lambda\in(0,1)$. This is a well-known argument that is used to show that $\mathcal{Q}$ is convex. Indeed, if \begin{align}
P(ab|xy)&=\mathrm{tr}(\rho E_a^x\otimes F_b^y),\nonumber\\
P'(ab|xy)&=\mathrm{tr}(\rho' (E')_a^x\otimes (F')_b^y), \end{align} then \begin{equation}
P_\lambda(ab|xy)=\mathrm{tr}[(\lambda\rho\oplus(1-\lambda)\rho') \mathcal{E}_a^x\otimes \mathcal{F}_b^y] \end{equation} with $\mathcal{E}_a^x=E_a^x\oplus(E')_a^x$ $\forall a,x$ and $\mathcal{F}_b^y=F_b^y\oplus(F')_b^y$ $\forall b,y$.
On the other hand, Carath\'eodory's theorem \cite{rockafellar} tells us that $\forall \textbf{P}\in\mathcal{L}=\textrm{Conv}(\mathcal{Q}_1)$ we have the convex combination \begin{equation} P=\sum_{i=1}^{t+1}\lambda_iP_i,\quad P_i\in\mathcal{Q}_1\,\forall i, \end{equation} where we are using that $\dim\mathcal{L}=t$ (cf.\ Eq.\ (\ref{t})). This means that $\textrm{Conv}(\mathcal{Q}_1)\subset\mathcal{Q}_{t+1}$ and since $\textrm{Conv}(\mathcal{Q}_1)$ is not of measure zero nor nowhere dense, we obtain the claim. \end{proof}
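The direct-sum trick used at the start of the proof can be checked on toy matrices. The sketch below (NumPy assumed; the operators are arbitrary placeholders rather than POVM elements of an actual realization) verifies the identity $\mathrm{tr}[(\lambda\rho\oplus(1-\lambda)\rho')(M\oplus M')]=\lambda\,\mathrm{tr}(\rho M)+(1-\lambda)\,\mathrm{tr}(\rho' M')$ behind the construction of $P_\lambda$:

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal direct sum of two matrices."""
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

rng = np.random.default_rng(1)
rho, rhop = np.diag([0.6, 0.4]), np.diag([0.3, 0.7])   # toy density matrices
M, Mp = rng.random((2, 2)), rng.random((2, 2))          # toy operators
lam = 0.3

# trace splits over the blocks, giving exactly the convex combination
lhs = np.trace(direct_sum(lam * rho, (1 - lam) * rhop) @ direct_sum(M, Mp))
rhs = lam * np.trace(rho @ M) + (1 - lam) * np.trace(rhop @ Mp)
assert np.isclose(lhs, rhs)
```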
As in the prepare-and-measure scenario, Theorem \ref{obsnegl2} also implies an exceptional robustness in the presence of noise for DIDW when shared randomness is not available. Notice that in this case it is natural to assume that the noisy behaviour $P_\eta$ (cf.\ Eq.\ (\ref{noisy})) is subject to uncorrelated noise, i.e.\ $P_n(ab|xy)=P_A(a|x)P_B(b|y)$ $\forall a,b,x,y$, since this affects independently the devices held by A and B. Therefore, the noise induces again a rank-one perturbation and we have that $\textrm{rank\,} P -1\leq\textrm{rank\,} P_\eta\leq\textrm{rank\,} P+1$ $\forall\eta>0$.
\section{Conclusions}
The task of DIDW has been receiving a lot of attention in recent years. It enables experiments to certify the underlying dimension of an uncharacterized physical system and it provides a framework for semi-device-independent quantum information processing. The most common scenarios for DIDW involve different parties interacting with the physical system: the so-called prepare-and-measure and Bell scenarios. Depending on the context it might or might not be the case that the parties are provided with an extra resource, shared randomness, that allows them to correlate the different devices they hold. In this work we have explored the differences that may arise for the task of DIDW in these two possible settings. We have seen that shared randomness is indeed a powerful resource: certain behaviours which can be obtained by sending just one classical bit (when the devices are correlated) need quantum systems of arbitrarily large dimension in the absence of shared randomness (the necessary quantum dimension grows with the number of possible inputs while the classical dimension remains 2). On the other hand, quantum systems are also more powerful than classical ones even if the latter have access to shared randomness. There are behaviours that require exponentially larger classical dimension even though in the quantum setting the devices are not correlated. We have also shown that one of the main differences given by the availability or not of this resource is not only the lack of convexity of the corresponding sets of probability distributions but the fact that for sufficiently small dimensions these sets are negligibly small (of measure zero and nowhere dense in the set of all possible distributions) if shared randomness is not granted. These results are obtained using very simple estimates for the dimension based on the rank of a matrix.
For the future, it would be interesting to study whether the bounds on the dimension as a function of the number of possible inputs provided here for the sets to be non-convex or negligible in both the prepare-and-measure and Bell scenarios can be improved by using more sophisticated tools. The results of \cite{sikora1,sikora2} and the notions of non-negative and positive semidefinite rank might be helpful in this task.
\begin{acknowledgments} I thank A. Monr\`{a}s, C. Palazuelos and I. Villanueva for very useful discussions. This research was funded by the Spanish MINECO through grants MTM2014-54692-P and MTM2014-54240-P and by the Comunidad de Madrid through grant QUITEMAD+CM S2013/ICE-2801. \end{acknowledgments}
\begin{appendix}
\section{Relation between the ranks of $W$ and $P$}
Taking the matrices defined in Sec.\ \ref{estimates}, here we prove that $\textrm{rank\,} W\leq\textrm{rank\,} P$. In order to transform $P$ into $W$ we have to subtract from every odd row $i$ the subsequent row $i+1$. This is a so-called elementary row operation and can be achieved by multiplying $P$ from the left with the matrix $E_i$, which coincides with the identity except that the entry $(i,i+1)$ equals $-1$. Since the matrices $\{E_i\}$ are all full-rank, the matrix $\prod_{i\textrm{ odd}}E_iP$ has the same rank as $P$. To obtain $W$ it just remains to delete all even rows and columns, a process which certainly cannot increase the rank. Thus, we obtain the desired result.
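Since $W$ and $P$ themselves are defined in an earlier section, here is a generic numerical illustration of the two steps (NumPy assumed; a random matrix stands in for the behaviour matrix $P$): multiplying by the full-rank elementary matrices $E_i$ preserves the rank, and deleting rows and columns cannot increase it.

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.random((6, 6))                       # stand-in for the matrix P

# Product of the elementary matrices E_i: since each operation touches
# disjoint rows, the whole product is a single matrix with -1 above the
# diagonal in every odd row (1-based numbering).
E = np.eye(6)
for i in range(0, 6, 2):
    E[i, i + 1] = -1.0                       # subtract the subsequent row

EP = E @ P
assert np.linalg.matrix_rank(EP) == np.linalg.matrix_rank(P)   # E is full rank

keep = [0, 2, 4]                             # delete all even rows and columns
W = EP[np.ix_(keep, keep)]
assert np.linalg.matrix_rank(W) <= np.linalg.matrix_rank(P)
```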
\end{appendix}
\end{document} |
\begin{document}
\theoremstyle{plain}
\newtheorem{example}{Example} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{corollary}{Corollary} \newtheorem{conjecture}{Conjecture}
\title{A pairing in homology and the category of linear complexes of tilting modules for a quasi-hereditary algebra} \author{Volodymyr Mazorchuk and Serge Ovsienko} \date{} \maketitle
\begin{abstract} We show that there exists a natural non-degenerate pairing of the homomorphism space between two neighboring standard modules over a quasi-hereditary algebra with the first extension space between the corresponding costandard modules, and vice versa. Investigation of this phenomenon leads to a family of pairings involving standard, costandard and tilting modules. In the graded case, under some "Koszul-like" assumptions (which we prove are satisfied for example for the blocks of the category $\mathcal{O}$), we obtain a non-degenerate pairing between certain graded homomorphism and graded extension spaces. This motivates the study of the category of linear tilting complexes for graded quasi-hereditary algebras. We show that, under assumptions similar to those mentioned above, this category realizes the module category for the Koszul dual of the Ringel dual of the original algebra. As a corollary we obtain that under these assumptions the Ringel and Koszul dualities commute. \end{abstract}
\section{Introduction and description of the results}\label{s1}
Let $\Bbbk$ be an algebraically closed field. Unless stated otherwise, in this paper by a module we mean a {\em left} module, and we denote by $\Rad(M)$ the radical of a module $M$. For a ${\Bbbk}$-vector space $V$, we denote the dual space by $V^*$.
Let $A$ be a basic $\Bbbk$-algebra, which is quasi-hereditary with respect to the natural order on the indexing set $\{1,2,\dots,n\}$ of pairwise-orthogonal primitive idempotents $e_i$ (see \cite{CPS1,DR,DR2} for details). Let $P(i)$, $\Delta(i)$, $\nabla(i)$, $L(i)$, and $T(i)$ denote the projective, standard, costandard, simple and tilting $A$-modules, associated to $e_i$, $i=1,\dots, n$, respectively. Set $P=\oplus_{i=1}^n P(i)$, $\Delta=\oplus_{i=1}^n \Delta(i)$, $\nabla=\oplus_{i=1}^n \nabla(i)$, $L=\oplus_{i=1}^n L(i)$, $T=\oplus_{i=1}^n T(i)$. We remark that, even if the standard $A$-modules are fixed, the linear order on the indexing set of primitive idempotents, with respect to which the algebra $A$ is quasi-hereditary, is not unique in general. We denote by $R(A)$ and $E(A)$ the Ringel and Koszul duals of $A$ respectively. A graded algebra, $B=\oplus_{i\in\ensuremath{\mathbb{Z}}}B_i$, will be called {\em positively graded} provided that $B_i=0$ for all $i<0$ and $\Rad(B)=\oplus_{i>0}B_i$.
This paper started from an attempt to give a conceptual explanation for the equality \begin{equation}\label{introeq1} \dim\Hom_{A}(\Delta(i-1),\Delta(i))= \dim\operatorname{Ext}^1_A(\nabla(i),\nabla(i-1)), \end{equation} which is proved at the beginning of Section~\ref{s2}. Our first main result, proved also in Section~\ref{s2}, is the following statement:
\begin{theorem}\label{introt1} \begin{enumerate}[(1)] \item\label{introt1.1} Let $i,j\in\{1,\dots,n\}$ and $j<i$. Then there exists a bilinear pairing, \begin{displaymath} \langle\cdot\, , \cdot\rangle: \Hom_A\left(\Delta(j),\Delta(i)\right)\times \operatorname{Ext}_A^1\left(\nabla(i),\nabla(j)\right)\to\Bbbk. \end{displaymath} \item\label{introt1.2} If $j=i-1$, then $\langle\cdot\, ,\cdot\rangle$ is non-degenerate. \end{enumerate} \end{theorem}
Theorem~\ref{introt1} explains the origins of \eqref{introeq1} and motivates the study of $\langle\cdot\, ,\cdot\rangle$. It happens that in the general case, that is for $j<i-1$, the analogue of Theorem~\ref{introt1}\eqref{introt1.2} is no longer true. We give an example at the beginning of Section~\ref{s3}. In the same section we present some special results and a modification of $\langle\cdot\, ,\cdot\rangle$ in the general case.
An attempt to lift the above results to higher $\operatorname{Ext}$'s naturally led us to the definition of a different pairing, which uses a minimal tilting resolution of the costandard module. In Section~\ref{s4} we construct and investigate a pairing between $\operatorname{Ext}^l_A(\nabla,\nabla)$ and $\Hom_A(\Delta,T_l)$, where $T_l$ is the $l$-th component of a minimal tilting resolution of $\nabla$. In the case $l=1$ this new pairing induces the one we have constructed in Section~\ref{s2}.
The new pairing is rarely non-degenerate. In an attempt to find some conditions, which would ensure this property, we naturally came to the graded case. In Section~\ref{s5} we show that in the graded case our new pairing induces a non-degenerate pairing between the graded homomorphism and the graded first extension spaces under the condition that the costandard modules admit linear tilting resolutions. Here the linearity of the resolution means the following: we show that for a positively graded quasi-hereditary algebra all tilting modules are gradable and thus we can fix their graded lifts putting their ``middles'' in degree $0$; the linearity of the resolution now means that the $i$-th term of the resolution consists only of tilting modules, whose ``middles'' are exactly in degree $i$. This observation brings the linear complexes of tilting modules into the picture and serves as a bridge to the second part of the paper, in which we study the category of all such linear complexes.
The above-mentioned condition of the existence of a linear tilting resolution for costandard $A$-modules immediately resembles the conditions, which appeared in \cite{ADL} during the study of the following question: when is the Koszul dual of a quasi-hereditary algebra quasi-hereditary with respect to the opposite order? In \cite[Theorem~3]{ADL} it was shown that this is the case if and only if both the standard and costandard $A$-modules admit a linear projective and injective (co)resolution respectively (algebras satisfying these conditions were called {\em standard Koszul} in \cite{ADL}). This resemblance motivated us to take a closer look at the category of linear complexes of tilting $A$-modules. The most striking property of this category is the fact that it combines two objects of completely different natures: tilting modules for a quasi-hereditary algebra, which give rise to the so-called {\em Ringel duality}; and linear resolutions, which are the source of a completely different duality, namely the {\em Koszul duality}. Under some natural assumptions, which roughly mean that all objects we consider are well-defined and well-coordinated with each other, in Section~\ref{s55} we prove our second main result:
\begin{theorem}\label{introt2} Assume that $A$ is a positively graded quasi-hereditary algebra, such that \begin{enumerate}[(i)]
\item standard $A$-modules admit a linear tilting coresolution, \item costandard $A$-modules admit a linear tilting resolution. \end{enumerate} The above conditions imply that the quadratic dual $R(A)^!$ of $R(A)$ is quasi-hereditary (with respect to the same order as for $A$), and we further assume that \begin{enumerate}[(i)] \setcounter{enumi}{2} \item the grading on $R(R(A)^!)$, induced from the category of graded $R(A)^!$-modules, is positive. \end{enumerate} Then the algebras $A$, $R(A)$, $E(A)$, $R(E(A))$ and $E(R(A))$ are standard Koszul quasi-hereditary algebras; moreover, $E(R(A))\cong R(E(A))$ as quasi-hereditary algebras. In other words, Koszul and Ringel dualities commute on $A$. \end{theorem}
As a preparatory result for this theorem we show that, under the same assumptions, the category of bounded linear complexes of tilting $A$-modules is equivalent to the category of graded modules over $E(R(A))^{opp}$. Moreover, this realization preserves (in some sense) standard and costandard modules but switches simple and tilting modules.
We finish the paper by proving that all conditions of Theorem~\ref{introt2} are satisfied for the associative algebras, associated with the blocks of the BGG category $\mathcal{O}$. This is done in Section~\ref{s6}. In the same section we also derive some consequences for these algebras, in particular, about the structure of tilting modules. The paper concludes with an Appendix, written by Catharina Stroppel, where it is shown that all conditions of Theorem~\ref{introt2} are satisfied for the associative algebras, associated with the blocks of the parabolic category $\mathcal{O}$ in the sense of \cite{RC}. As the main tool in the proof of the last result, it is shown that Arkhipov's twisting functor on $\mathcal{O}$ (see \cite{AS,KM}) is gradable.
For an abelian category, $\mathcal{A}$, we denote by $D^b(\mathcal{A})$ the corresponding bounded derived category and by $K(\mathcal{A})$ the corresponding homotopy category. In particular, for an associative algebra, $B$, we denote by $D^b(B)$ the bounded derived category of $B\mathrm{-mod}$ and by $K(B)$ the homotopy category of $B\mathrm{-mod}$. For $M\in B\mathrm{-mod}$ we denote by $M^{\bullet}$ the complex defined via $M^{0}=M$ and $M^{i}=0$, $i\neq 0$.
We will say that a module, $M$, is {\em Ext-injective} (resp. {\em Ext-projective}) {\em with respect to a module}, $N$, provided that $\operatorname{Ext}^{k}(X,M)=0$, $k>0$, (resp. $\operatorname{Ext}^{k}(M,X)=0$, $k>0$) for any subquotient $X$ of $N$.
When we say that a graded algebra is Koszul, we mean that it is Koszul with respect to this grading.
\section{A bilinear pairing between $\Hom_A$ and $\operatorname{Ext}^1_A$}\label{s2}
The following observation is the starting point of this paper. Fix $1<i\leq n$. According to the classical BGG-reciprocity for quasi-hereditary algebras (see for example \cite[Lemma~2.5]{DR2}), we have that $[I(i-1):\nabla(i)]=[\Delta(i):L(i-1)]$, where the first number is the multiplicity of $\nabla(i)$ in a costandard filtration of $I(i-1)$, and the second number is the usual composition multiplicity. The quasi-heredity of $A$, in particular, implies that $\Delta(i-1)$ is Ext-projective with respect to $\Rad(\Delta(i))$ and hence $[\Delta(i):L(i-1)]=\dim\Hom_{A}(\Delta(i-1),\Delta(i))$.
The number $[I(i-1):\nabla(i)]$ can also be reinterpreted. Again, the quasi-heredity of $A$ implies that any non-zero element from $\operatorname{Ext}^1_A(\nabla(i),\nabla(i-1))$ is in fact lifted from a non-zero element of $\operatorname{Ext}^1_A(L(i),\nabla(i-1))$ via the map, induced by the projection $\Delta(i)\twoheadrightarrow L(i)$. Since $\nabla(i-1)$ has simple socle, it further follows that any non-zero element from $\operatorname{Ext}^1_A(L(i),\nabla(i-1))$ corresponds to a submodule of $I(i-1)$ with simple top $L(i)$. From this one easily derives that $[I(i-1):\nabla(i)]=\dim\operatorname{Ext}^1_A(\nabla(i),\nabla(i-1))$. Altogether, we obtain that $\dim\Hom_{A}(\Delta(i-1),\Delta(i))= \dim\operatorname{Ext}^1_A(\nabla(i),\nabla(i-1))$. In the present section we show that the spaces $\Hom_{A}(\Delta(i-1),\Delta(i))$ and $\operatorname{Ext}^1_A(\nabla(i),\nabla(i-1))$ are connected via a non-degenerate bilinear pairing in a natural way.
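For later reference, the two counts just obtained can be summarized in a single display (this merely restates the argument above):

```latex
\begin{align*}
[I(i-1):\nabla(i)] &= [\Delta(i):L(i-1)]
  = \dim\Hom_{A}(\Delta(i-1),\Delta(i))
  && \text{(BGG reciprocity and Ext-projectivity)},\\
[I(i-1):\nabla(i)] &= \dim\operatorname{Ext}^1_A(\nabla(i),\nabla(i-1))
  && \text{(submodules of $I(i-1)$ with simple top $L(i)$)}.
\end{align*}
```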
For every $i=1,\dots,n$ we fix a non-zero homomorphism, $\alpha_i:\Delta(i)\to\nabla(i)$. Remark that $\alpha_i$ is unique up to a scalar and maps the top of $\Delta(i)$ to the socle of $\nabla(i)$. For $j<i$ let $f:\Delta(j)\to\Delta(i)$ be some homomorphism and $\xi:\nabla(j)\overset{\beta}{\hookrightarrow} X \overset{\gamma}{\twoheadrightarrow}\nabla(i)$ be a short exact sequence. Consider the following diagram: \begin{equation}\label{eq2.1} \xymatrix{ 0\ar[rr] && \nabla(j)\ar[rr]^{\beta} && X\ar[rr]^{\gamma} && \nabla(i)\ar[rr] && 0 \\ && \Delta(j)\ar@{-->}[u]^{\alpha_j}\ar[rrrr]^{f} && && \Delta(i) \ar@{-->}[u]_{\alpha_i}\ar@{=>}[ull]_{\varphi} && }. \end{equation} Since $j<i$, we have $\operatorname{Ext}_A^1(\Delta(i),\nabla(j))=0$ and $\Hom_A(\Delta(i),\nabla(j))=0$. Hence \begin{displaymath} \Hom_A(\Delta(i),X)\cong \Hom_A(\Delta(i),\nabla(i)), \end{displaymath} which means that $\alpha_i$ admits a unique lifting, $\varphi:\Delta(i)\to X$, such that the triangle in \eqref{eq2.1} commutes. Further, $L(j)$ occurs exactly once in the socle of $X$ and $\beta\circ \alpha_j$ is a projection of $\Delta(j)$ onto this socle $L(j)$-component of $X$. On the other hand, since $\gamma\circ \varphi=\alpha_i$, it follows that $\varphi(\Rad(\Delta(i)))\subset\beta(\nabla(j))$. Since $[\nabla(j):L(j)]=1$, it follows that the composition $\varphi\circ f$ is a projection of $\Delta(j)$ onto the socle $L(j)$-component of $X$ as well. Since $\Bbbk$ is algebraically closed, we get that $\beta\circ \alpha_j$ and $\varphi\circ f$ differ only by a scalar (they are not the same in general as $\beta\circ \alpha_j$ does depend on the choice of $\alpha_j$ and $\varphi\circ f$ does not). Hence we can denote by $\langle f,\xi\rangle$ the unique element from ${\Bbbk}$ such that $\langle f,\xi\rangle \left(\varphi\circ f\right)= \beta\circ \alpha_j$.
\begin{lemma}\label{l2.1} \begin{enumerate}[(1)] \item Let $\xi':\nabla(j)\hookrightarrow Y\twoheadrightarrow\nabla(i)$ be a short exact sequence, which is congruent to $\xi$. Then $\langle f,\xi\rangle=\langle f,\xi'\rangle$ for any $f$ as above. In particular, $\langle \cdot \, ,\cdot\rangle$ induces a map from $\Hom_A(\Delta(j),\Delta(i))\times \operatorname{Ext}_A^1(\nabla(i),\nabla(j))$ to ${\Bbbk}$ (we will denote the induced map by the same symbol $\langle \cdot \, ,\cdot\rangle$ abusing notation). \item The map $\langle \cdot \, ,\cdot\rangle: \Hom_A(\Delta(j),\Delta(i))\times \operatorname{Ext}_A^1(\nabla(i),\nabla(j))\to {\Bbbk}$ is bilinear. \end{enumerate} \end{lemma}
\begin{proof} This is a standard direct calculation. \end{proof}
Note that the form $\langle \cdot \, ,\cdot\rangle$ is independent, up to a non-zero scalar, of the choice of $\alpha_i$ and $\alpha_j$. Since the algebras $A$ and $A^{opp}$ are quasi-hereditary simultaneously, using the dual arguments one constructs a form, \begin{displaymath} \langle \cdot \, ,\cdot\rangle': \Hom_A(\nabla(i),\nabla(j))\times \operatorname{Ext}_A^1(\Delta(j),\Delta(i))\to {\Bbbk}, \end{displaymath} and proves a dual version of Lemma~\ref{l2.1}.
\begin{theorem}\label{t2.2} Let $j=i-1$. Then the bilinear form $\langle \cdot \, ,\cdot\rangle$ constructed above is non-degenerate. \end{theorem}
We remark that in the case $j<i-1$ the analogous statement is not true in general, see the example at the beginning of Section~\ref{s3}.
\begin{proof} First let us fix a non-zero $f:\Delta(i-1)\to\Delta(i)$. Since $\Delta(i-1)$ has simple top, there exists a unique submodule $M\subset \Delta(i)$, which is maximal with respect to the condition $p\circ f\neq 0$, where $p:\Delta(i)\to \Delta(i)/M$ is the natural projection. Denote $N=\Delta(i)/M$. The module $N$ has simple socle, which is isomorphic to $L(i-1)$, and $p\circ f:\Delta(i-1)\to N$ is a non-zero projection onto the socle of $N$. Now we claim that $\Rad(N)\hookrightarrow \nabla(i-1)$. Indeed, $\Rad(N)\subset \Rad(\Delta(i))$ and hence it can have only composition subquotients of the form $L(t)$, $t<i$, since $A$ is quasi-hereditary. But since $\Rad(N)$ has the simple socle $L(i-1)$, the quasi-heredity of $A$ implies $\Rad(N)\hookrightarrow \nabla(i-1)$ as well. Let $C$ denote the cokernel of this inclusion. The module $N$ is an extension of $\Rad(N)$ by $L(i)$ and is indecomposable. This implies that the short exact sequence $\xi:\Rad(N)\hookrightarrow N\twoheadrightarrow L(i)$ represents a non-zero element in $\operatorname{Ext}_A^1(L(i),\Rad(N))$. Let us apply $\Hom_A(L(i),{}_-)$ to the short exact sequence $\Rad(N)\hookrightarrow \nabla(i-1)\twoheadrightarrow C$ and remark that $\Hom_A(L(i),C)=0$ as $[C:L(s)]\neq 0$ implies $s<i-1$ by the above. This gives us an inclusion, $\operatorname{Ext}_A^1(L(i),\Rad(N))\hookrightarrow \operatorname{Ext}_A^1(L(i),\nabla(i-1))$, and hence there exists a short exact sequence, $\xi':\nabla(i-1)\hookrightarrow N'\twoheadrightarrow L(i)$, induced by $\xi$.
\begin{lemma}\label{l2.3} $\operatorname{Ext}_A^1(L(i),\nabla(i-1))\cong \operatorname{Ext}_A^1(\nabla(i),\nabla(i-1))$. \end{lemma}
\begin{proof} We apply $\Hom_A({}_-,\nabla(i-1))$ to the short exact sequence $L(i)\hookrightarrow \nabla(i)\twoheadrightarrow D$, where $D$ is the cokernel of the inclusion $L(i)\hookrightarrow \nabla(i)$. This gives the following part in the long exact sequence: \begin{displaymath} \operatorname{Ext}_A^{1}(D,\nabla(i-1))\to\operatorname{Ext}_A^1(\nabla(i),\nabla(i-1))\to \operatorname{Ext}_A^1(L(i),\nabla(i-1))\to\operatorname{Ext}_A^{2}(D,\nabla(i-1)). \end{displaymath} But $D$ contains only simple subquotients of the form $L(s)$, $s\leq i-1$. This means that $\nabla(i-1)$ is Ext-injective with respect to $D$ because of the quasi-heredity of $A$ and proves the statement. \end{proof}
Applying Lemma~\ref{l2.3} we obtain that the sequence $\xi'$ gives rise to the unique short exact sequence $\xi'':\nabla(i-1)\hookrightarrow N''\twoheadrightarrow \nabla(i)$. Moreover, by construction it also follows that $N$ is isomorphic to a submodule in $N''$. Consider $\xi''$ with $X=N''$ in \eqref{eq2.1}. Using the inclusion $N\hookrightarrow N''$ we obtain that the composition $\varphi\circ f$ is non-zero, implying $\langle f,\xi''\rangle\neq 0$. This proves that the left kernel of the form $\langle \cdot \, ,\cdot\rangle$ is zero.
To prove that the right kernel is zero we essentially have to reverse the above arguments. Let $\eta:\nabla(i-1)\hookrightarrow X\twoheadrightarrow \nabla(i)$ be a non-split short exact sequence. Quasi-heredity of $A$ implies that $\nabla(i-1)$ is Ext-injective with respect to $\nabla(i)/\soc(\nabla(i))$. Hence $\eta$ is in fact a lifting of some non-split short exact sequence, say $\eta':\nabla(i-1)\hookrightarrow X'\twoheadrightarrow L(i)$. In particular, it follows that $X'$, and thus also $X$, has simple socle, namely $L(i-1)$. Further, applying $\Hom_A(\Delta(i),{}_-)$ to $\eta$, and using the fact that $\Delta(i)$ is Ext-projective with respect to $X$, one obtains that there is a unique (up to a scalar) non-trivial map from $\Delta(i)$ to $X$. Let $Y$ be its image. Then $Y$ has simple top, isomorphic to $L(i)$. Furthermore, all other simple subquotients of $X$ are isomorphic to $L(s)$, $s<i$, and hence $Y$ is a quotient of $\Delta(i)$. Since $\Delta(i-1)$ is Ext-projective with respect to $\Rad(\Delta(i))$, we can find a map, $\Delta(i-1)\to \Rad(\Delta(i))$, whose composition with the inclusion $\Rad(\Delta(i))\hookrightarrow \Delta(i)$ followed by the projection from $\Delta(i)$ onto $Y$ is non-zero. The composition of the first two maps gives us a map, $h:\Delta(i-1)\to\Delta(i)$, such that $\langle h,\eta\rangle\neq 0$. Therefore the right kernel of the form $\langle \cdot \, ,\cdot\rangle$ is zero as well, completing the proof. \end{proof}
\begin{corollary}\label{c2.4} \begin{enumerate}[(1)] \item $\Hom_A(\Delta(i-1),\Delta(i))\cong \operatorname{Ext}_A^1(\nabla(i),\nabla(i-1))^*$. \item $\operatorname{Ext}_A^1(\Delta(i-1),\Delta(i))\cong \Hom_A(\nabla(i),\nabla(i-1))^*$. \end{enumerate} \end{corollary}
\begin{proof} The first statement is an immediate corollary of Theorem~\ref{t2.2} and the second statement follows by duality since $A^{opp}$ is quasi-hereditary as soon as $A$ is, see \cite{CPS1}. \end{proof}
\begin{corollary}\label{c2.5} Assume that $A$ has a {\em simple preserving duality}, that is a contravariant exact equivalence, which preserves the iso-classes of simple modules. Then \begin{enumerate}[(1)] \item $\Hom_A(\Delta(i-1),\Delta(i))\cong \operatorname{Ext}_A^1(\Delta(i-1),\Delta(i))^*$. \item $\operatorname{Ext}_A^1(\nabla(i),\nabla(i-1))\cong \Hom_A(\nabla(i),\nabla(i-1))^*$. \end{enumerate} \end{corollary}
\begin{proof} Apply the simple preserving duality to the statement of Corollary~\ref{c2.4}. \end{proof}
\section{Homomorphisms between arbitrary standard modules}\label{s3}
It is very easy to see that the statement of Theorem~\ref{t2.2} does not extend to the case $j<i-1$. For example, consider the path algebra $A$ of the following quiver: \begin{displaymath} \xymatrix{ 1 && 2\ar[ll] && 3 \ar[ll]\\ }. \end{displaymath} This algebra is hereditary and thus quasi-hereditary. Moreover, it is directed and thus standard modules are projective and costandard modules are simple. One easily obtains that $\Hom_A(\Delta(1),\Delta(3))={\Bbbk}$ whereas $\operatorname{Ext}^1_A(\nabla(3),\nabla(1))=0$. The main reason why this happens is the fact that the non-zero homomorphism $\Delta(1)\to\Delta(3)$ factors through $\Delta(2)$ (note that $1<2<3$).
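For completeness, the computation behind this example can be made explicit; below we write representations of the quiver $1\longleftarrow 2\longleftarrow 3$ by their dimension vectors (a routine verification, not contained in the argument above):

```latex
\begin{gather*}
\Delta(1)=P(1)=(\Bbbk\leftarrow 0\leftarrow 0),\qquad
\Delta(2)=P(2)=(\Bbbk\leftarrow \Bbbk\leftarrow 0),\qquad
\Delta(3)=P(3)=(\Bbbk\leftarrow \Bbbk\leftarrow \Bbbk),\\
\nabla(i)=L(i),\qquad
\dim\Hom_A(\Delta(1),\Delta(3))=\dim e_1Ae_3=1,\qquad
\operatorname{Ext}^1_A(L(3),L(1))=0.
\end{gather*}
```

Here $\dim e_1Ae_3$ counts paths from $3$ to $1$, of which there is exactly one, while $\operatorname{Ext}^1_A$ between simples is counted by arrows and the quiver has no arrow $3\to 1$. The unique (up to a scalar) non-zero map $\Delta(1)\to\Delta(3)$ is given by this path and hence factors through $\Delta(2)$.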
Let us define another pairing in homology. Denote by $\overline{\alpha}_i$ the natural projection of $\Delta(i)$ onto $L(i)$ and consider (for $j<i$) the following diagram: \begin{equation}\label{eq3.1} \xymatrix{ 0\ar[rr] && \nabla(j)\ar[rr]^{\beta} && X\ar[rr]^{\gamma} && L(i)\ar[rr] && 0 \\ && \Delta(j)\ar@{-->}[u]^{\alpha_j}\ar[rrrr]^{f} && && \Delta(i) \ar@{-->}[u]_{\overline{\alpha}_i}\ar@{=>}[ull]_{\varphi} && }. \end{equation} Using this diagram, the same arguments as in Section~\ref{s2} allow us to define the map \begin{displaymath} \overline{\langle \cdot \, ,\cdot\rangle}: \Hom_A(\Delta(j),\Delta(i))\times \operatorname{Ext}_A^1(L(i),\nabla(j))\to {\Bbbk} \end{displaymath} and one can check that this map is bilinear.
\begin{proposition}\label{p3.1} Let $N$ be the quotient of $\Delta(i)$, maximal with respect to the following conditions: $[\Rad(N):L(s)]\neq 0$ implies $s\leq j$; $[\Soc(N):L(s)]\neq 0$ implies $s=j$. Then \begin{enumerate}[(1)] \item the rank of the form $\overline{\langle \cdot \, ,\cdot\rangle}$ equals the multiplicity $[\Soc(N):L(j)]$, which, in turn, is equal to $\dim\operatorname{Ext}_A^1(L(i),\nabla(j))$; \item the left kernel of $\overline{\langle \cdot \, ,\cdot\rangle}$ is the set of all morphisms $f:\Delta(j)\to\Delta(i)$ such that $\pi\circ f=0$, where $\pi:\Delta(i)\twoheadrightarrow N$ is the natural projection. \end{enumerate} \end{proposition}
\begin{proof} The proof is analogous to that of Theorem~\ref{t2.2}. \end{proof}
Analyzing the proof of Lemma~\ref{l2.3}, it is easy to see that one cannot hope for any reasonable relation between $\operatorname{Ext}_A^1(L(i),\nabla(j))$ and $\operatorname{Ext}_A^1(\nabla(i),\nabla(j))$ in general. However, we have the following:
\begin{proposition}\label{c3.3} \begin{enumerate}[(1)] \item The right kernel of $\langle \cdot \, ,\cdot\rangle$ coincides with the kernel of the homomorphism $\tau:\operatorname{Ext}_A^1(\nabla(i),\nabla(j))\to \operatorname{Ext}_A^1(L(i),\nabla(j))$ coming from the long exact sequence in homology. \item Let $j=i-2$. Then $\tau$ is surjective; the rank of $\langle \cdot \, ,\cdot\rangle$ coincides with the rank of $\overline{\langle \cdot \, ,\cdot\rangle}$; and the left kernel of $\langle \cdot \, ,\cdot\rangle$ coincides with the left kernel of $\overline{\langle \cdot \, ,\cdot\rangle}$. \end{enumerate} \end{proposition}
\begin{proof} The first statement follows from the proof of Theorem~\ref{t2.2}. To prove the second statement we remark that for $j=i-2$ we have $\operatorname{Ext}_A^{k}(X,\nabla(i-2))=0$, $k>1$, for any simple subquotient $X$ of $\nabla(i)/L(i)$. This gives the surjectivity of $\tau$, which implies all other statements. \end{proof}
We remark that all results of this section have appropriate dual analogues.
\section{A generalization of the bilinear pairing to higher $\operatorname{Ext}$'s}\label{s4}
Let us go back to the example at the beginning of Section~\ref{s3}, where we had a hereditary algebra with $\operatorname{Ext}^1_A(\nabla(3),\nabla(1))=0$, $\Hom_A(\Delta(1),\Delta(3))={\Bbbk}$, and such that any morphism from the last space factors through $\Delta(2)$. One can have the following idea: $\Hom_A(\Delta(1),\Delta(3))$ decomposes into a product of $\Hom_A(\Delta(1),\Delta(2))$ and $\Hom_A(\Delta(2),\Delta(3))$; by Theorem~\ref{t2.2} the space $\Hom_A(\Delta(1),\Delta(2))$ is dual to $\operatorname{Ext}^1_A(\nabla(2),\nabla(1))$ and the space $\Hom_A(\Delta(2),\Delta(3))$ is dual to $\operatorname{Ext}^1_A(\nabla(3),\nabla(2))$; perhaps this means that the product of $\Hom_A(\Delta(1),\Delta(2))$ and $\Hom_A(\Delta(2),\Delta(3))$ should correspond to the product of the spaces $\operatorname{Ext}^1_A(\nabla(2),\nabla(1))$ and $\operatorname{Ext}^1_A(\nabla(3),\nabla(2))$, and thus should perhaps be paired with $\operatorname{Ext}^2_A(\nabla(3),\nabla(1))$ rather than $\operatorname{Ext}^1_A(\nabla(3),\nabla(1))$? In our example this argument does not work directly either, since the algebra we consider is hereditary and thus $\operatorname{Ext}^2_A$ simply vanishes. However, one can observe that for $j=i-k$, $k\in\mathbb{N}$, one could define a $\Bbbk$-linear map from $\operatorname{Ext}_A^k(\nabla(i),\nabla(j))^*$ to $\Hom_A(\Delta(j),\Delta(i))$ via \begin{multline*} \operatorname{Ext}_A^k\left(\nabla(i),\nabla(j)\right)^*\overset{f}{\rightarrow} \bigotimes_{l=0}^{k-1} \operatorname{Ext}_A^1\left(\nabla(i-l),\nabla(i-l-1)\right)^*\cong \text{ (by Corollary~\ref{c2.4}) } \\ \cong \bigotimes_{l=0}^{k-1} \Hom_A\left(\Delta(i-l-1),\Delta(i-l)\right)\overset{g}{\rightarrow} \Hom_A\left(\Delta(j),\Delta(i)\right), \end{multline*} where $g$ is the usual composition of $k$ homomorphisms, and $f$ is the dual map to the Yoneda composition of $k$ extensions.
This map would give a bilinear pairing between $\operatorname{Ext}_A^k\left(\nabla(i),\nabla(j)\right)^*$ and $\Hom_A\left(\Delta(j),\Delta(i)\right)^*$, which could also be interesting. However, we do not study this approach in the present paper.
Instead, we are going to try to extend the pairing we discussed in the previous sections to higher extensions using some resolutions. This leads us to the following definition. Choose a minimal tilting resolution, \begin{displaymath} \mathcal{C}^{\bullet}:\quad\quad 0\longrightarrow T_k \overset{\varphi_k}{\longrightarrow}\dots \overset{\varphi_2}{\longrightarrow} T_1 \overset{\varphi_1}{\longrightarrow} T_0\overset{\varphi_0}{\longrightarrow}\nabla\longrightarrow 0, \end{displaymath} of $\nabla$ (see \cite[Section~5]{Ri} for the existence of such a resolution). Denote by $\mathcal{T}(\nabla)^{\bullet}$ the corresponding complex of tilting modules. Fix $l\in\{0,\dots,k\}$ and consider the following part of the resolution above: \begin{displaymath} \xymatrix{
&&\Delta\ar@{.>}[d]^{f} && \\ T_{l+1}\ar[rr]^{\varphi_{l+1}} && T_l\ar@{.>}[d]^{g}\ar[rr]^{\varphi_{l}} && T_{l-1} \\
&&\nabla && \\ }. \end{displaymath} For every $f\in \Hom_A(\Delta,T_l)$ and every $g\in \Hom_A(T_l,\nabla)$ the composition gives \begin{displaymath} g\circ f\in \Hom_A(\Delta,\nabla)=\oplus_{i=1}^n\Hom_A(\Delta(i),\nabla(i))= \oplus_{i=1}^n{\Bbbk}\alpha_i. \end{displaymath} Hence $g\circ f=\sum_{i=1}^n a_i \alpha_i$ for some $a_i\in\Bbbk$ and we can denote $\widetilde{\langle f,g\rangle}^{(l)}= \sum_{i=1}^n a_i\in {\Bbbk}$. Obviously $\widetilde{\langle \cdot\, ,\cdot\rangle}^{(l)}$ defines a bilinear map from $\Hom_A(\Delta,T_l)\times \Hom_A(T_l,\nabla)$ to ${\Bbbk}$. This map induces the bilinear map \begin{displaymath} \langle f,g\rangle^{(l)}: \Hom_A(\Delta,T_l)\times \Hom_{Com} \left(\mathcal{T}(\nabla)^{\bullet},\nabla^{\bullet}[l]\right)\to {\Bbbk} \end{displaymath} (where $\Hom_{Com}$ means the homomorphisms of complexes). We remark that we have an obvious inclusion $\Hom_{Com} \left(\mathcal{T}(\nabla)^{\bullet},\nabla^{\bullet}[l]\right)\subset \Hom_A(T_l,\nabla)$ since the complex $\nabla^{\bullet}[l]$ is concentrated in one degree.
\begin{theorem}\label{t4.1} Let $f\in \Hom_A(\Delta,T_l)$ and $g\in \Hom_{Com}\left(\mathcal{T}(\nabla)^{\bullet},\nabla^{\bullet}[l]\right)$. Assume that $g$ is homotopic to zero. Then $\langle f,g\rangle^{(l)}=0$. In particular, $\langle \cdot\, ,\cdot\rangle^{(l)}$ induces a bilinear map, $\Hom_A(\Delta,T_l)\times \operatorname{Ext}_A^l(\nabla,\nabla)\to {\Bbbk}$. \end{theorem}
The form constructed in Theorem~\ref{t4.1} will also be denoted by $\langle \cdot\, ,\cdot\rangle^{(l)}$, abusing notation. We remark that both the construction above and Theorem~\ref{t4.1} admit appropriate dual analogues.
\begin{proof} Since $\mathcal{T}(\nabla)^{\bullet}$ is a complex of tilting modules, the second statement of the theorem follows from the first one and \cite[Chapter III(2), Lemma~2.1]{Ha}. To prove the first statement we will need the following auxiliary statement.
\begin{lemma}\label{l4.2} Let $\beta :\Delta(i)\to T(j)$ and $\gamma:T(j)\to \nabla(k)$. Then $\gamma\circ\beta\neq 0$ if and only if $i=j=k$, $\beta\neq 0$ and $\gamma\neq 0$. \end{lemma}
\begin{proof} Using the standard properties of tilting modules, see for example \cite{Ri}, we have $[T(i):L(i)]=1$, $\dim\Hom_A(\Delta(i),T(i))=1$ and any non-zero element in this space is injective, $\dim\Hom_A(T(i),\nabla(i))=1$ and any non-zero element in this space is surjective. Hence in the case $i=j=k$ the composition of non-zero $\gamma$ and $\beta$ is a non-zero projection of the top of $\Delta(i)$ to the socle of $\nabla(i)$. This proves the ``if'' statement.
To prove the ``only if'' statement we note that $\gamma\circ\beta\neq 0$ obviously implies $i=k$. Assume that $j\neq i$ and $\gamma\circ\beta\neq 0$. The module $T(j)$ has a costandard filtration, which we fix, and $\Delta(i)$ is a standard module. Hence, by \cite[Theorem~4]{Ri}, $\beta$ is a linear combination of some maps, each of which comes from a homomorphism, which maps the top of $\Delta(i)$ to the socle of some $\nabla(i)$ in the costandard filtration of $T(j)$ (we remark that this $\nabla(i)$ is a subquotient of $T(j)$ but not a submodule in general). Since the composition $\gamma\circ\beta$ is non-zero and $\nabla(i)$ has simple socle, we have that at least one whole copy of $\nabla(i)$ in the costandard filtration of $T(j)$ survives under $\gamma$. But, by \cite[Theorem~1]{Ri}, any costandard filtration of $T(j)$ ends with the subquotient $\nabla(j)\neq \nabla(i)$. This implies that the dimension of the image of $\gamma$ must be strictly bigger than $\dim \nabla(i)$, which is impossible. The obtained contradiction shows that $i=j=k$. The rest follows from the standard facts, used in the proof of the ``if'' part. \end{proof}
We can certainly assume that $f\in\Hom_A(\Delta(i),T_l)$ and $g\in\Hom_{Com}\left(\mathcal{T}(\nabla)^{\bullet}, \nabla(i)^{\bullet}[l]\right)$ for some $i$. Consider now any homomorphism $h:T_{l-1}\to\nabla(i)$. Our aim is to show that the composition $h\circ \varphi_l\circ f=0$. Assume that this is not the case and apply Lemma~\ref{l4.2} to the components of the following two pairs: \begin{enumerate}[(a)] \item\label{ppp1} $f:\Delta(i)\to T_l$ and $h\circ \varphi_l:T_l\to \nabla(i)$ \item\label{ppp2} $\varphi_l\circ f:\Delta(i)\to T_{l-1}$ and $h:T_{l-1}\to \nabla(i)$. \end{enumerate} If $h\circ \varphi_l\circ f\neq 0$, we obtain that both $T_l$ and $T_{l-1}$ contain a direct summand isomorphic to $T(i)$, such that the map $\varphi_l$ induces a map, $\overline{\varphi}_l:T(i)\to T(i)$, which does not annihilate the unique copy of $L(i)$ inside $T(i)$. Since $T(i)$ is indecomposable, we have that $\operatorname{End}_A(T(i))$ is local and thus the non-nilpotent element $\overline{\varphi}_l\in\operatorname{End}_A(T(i))$ must be an isomorphism. This contradicts the minimality of the resolution $\mathcal{T}(\nabla)^{\bullet}$. \end{proof}
We remark that the sequence \begin{displaymath} 0\to \Hom_A(\Delta,T_k)\to\dots\to \Hom_A(\Delta,T_1)\to\Hom_A(\Delta,T_0) \to \Hom_A(\Delta,\nabla)\to 0, \end{displaymath} obtained from $\mathcal{C}^{\bullet}$ using $\Hom_A(\Delta,{}_-)$, is exact, and that Theorem~\ref{t4.1} defines a bilinear pairing between $\operatorname{Ext}_A^l(\nabla,\nabla)$ and the $l$-th element of this exact sequence. It is also easy to see that the pairing, given by Theorem~\ref{t4.1}, does not depend (up to an isomorphism of bilinear forms) on the choice of a minimal tilting resolution of $\nabla$. In particular, for every $l$ the rank of $\langle \cdot\, ,\cdot\rangle^{(l)}$ is an invariant of the algebra $A$. By linearity we have that \begin{displaymath} \langle \cdot\, ,\cdot\rangle^{(l)}=\oplus_{i,j=1}^n \langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}, \end{displaymath} where $\langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}$ is obtained by restricting the definition of $\langle \cdot\, ,\cdot\rangle^{(l)}$ to the homomorphisms from $\Delta(j)$ (instead of $\Delta$) to the tilting resolution of $\nabla(i)$ (instead of $\nabla$). The relation between $\langle \cdot\, ,\cdot\rangle^{(l)}_{(i,j)}$ and the forms we have studied in the previous section can be described as follows:
\begin{proposition}\label{p4.3} $\operatorname{rank} \langle \cdot\, ,\cdot\rangle^{(1)}_{i,i-1}= \dim\operatorname{Ext}_A^1(\nabla(i),\nabla(i-1))= \operatorname{rank} \langle \cdot\, ,\cdot\rangle$. \end{proposition}
\begin{proof} Straightforward. \end{proof}
In the general case we have the following:
\begin{corollary}\label{c4.4} $\operatorname{rank} \langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}$ equals the multiplicity of $T(j)$ as a direct summand in the $l$-th term of the minimal tilting resolution of $\nabla(i)$. \end{corollary}
\begin{proof} Let $T_l=\oplus_{k=1}^n T(k)^{l_k}$ and $p:T_l\twoheadrightarrow \oplus_{k=1}^n \nabla(k)^{l_k}$ be a projection. Since the complex $\mathcal{C}^{\bullet}$ is exact and consists of elements, having a costandard filtration, the cokernel of any map in this complex has a costandard filtration itself since the category of modules with costandard filtration is closed with respect to taking cokernels of monomorphisms, see for example \cite[Theorem~1]{DR2}. This implies that $\varphi_l$ induces a surjection from $T_l$ onto a module having a costandard filtration. Moreover, the minimality of the resolution means that this surjection does not annihilate any of the direct summands. In other words, the kernel of $\varphi_l$ is contained in the kernel of $p$. This implies that for the cokernel $N$ of $\varphi_{l+1}$ we have $\dim\Hom(N,\nabla(j))=l_j$. Using Lemma~\ref{l4.2} it is easy to see that $\dim\Hom(N,\nabla(j))$, in fact, equals $\operatorname{rank} \langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}$. This completes the proof. \end{proof}
We remark that, using Corollary~\ref{c4.4} and the Ringel duality (see \cite[Chapter~6]{Ri}), we can also interpret $\operatorname{rank} \langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}$ as the dimension of $l$-th extension space (over $R(A)$) from the $i$-th standard $R(A)$-module to the $j$-th simple $R(A)$-module. For the BGG category $\mathcal{O}$ the dimensions of these spaces are given by the Kazhdan-Lusztig combinatorics.
\section{Graded non-degeneracy in a graded case}\label{s5}
The form $\langle \cdot\, ,\cdot\rangle^{(l)}$ is degenerate in the general case. However, in this section we will show that it induces a non-degenerate pairing between the graded homomorphism and extension spaces for graded algebras under some assumptions in the spirit of Koszulity conditions.
Throughout this section we assume that $A$ is positively graded (recall that this means that $A=\oplus_{i\geq 0} A_i$ and $\Rad(A)=\oplus_{i> 0} A_i$). We remark that this automatically guarantees that the simple $A$-modules can be considered as graded modules. We denote by $A\mathrm{-gmod}$ the category of all graded (with respect to the grading fixed above) finitely generated $A$-modules. The morphisms in $A\mathrm{-gmod}$ are morphisms of $A$-modules, which {\em preserve} the grading, that is these morphisms are homogeneous morphisms of degree $0$. We denote by $\langle 1\rangle:A\mathrm{-gmod}\to A\mathrm{-gmod}$ the functor, which shifts the grading as follows: $(M\langle 1\rangle)_i=M_{i+1}$.
Forgetting the grading defines a faithful functor from $A\mathrm{-gmod}$ to $A\mathrm{-mod}$. We say that $M\in A\mathrm{-mod}$ admits the {\em graded lift} $\tilde{M}\in A\mathrm{-gmod}$ (or, simply, is {\em gradable}) provided that, after forgetting the grading, the module $\tilde{M}$ becomes isomorphic to $M$. If $M$ is indecomposable and admits a graded lift, then this lift is unique up to an isomorphism in $A\mathrm{-gmod}$ and a shift of grading, see for example \cite[Lemma~2.5.3]{BGS}.
For $M,N\in A\mathrm{-gmod}$ we set $\operatorname{ext}_A^{i}(M,N)= \operatorname{Ext}_{A\mathrm{-gmod}}^{i}(M,N)$, $i\geq 0$. It is clear that, forgetting the grading, we have \begin{equation}\label{greq} \operatorname{Ext}_A^{i}(M,N)=\oplus_{j\in\ensuremath{\mathbb{Z}}}\operatorname{ext}_A^{i}(M,N\langle j\rangle), \quad\quad i\geq 0 \end{equation} (see for example \cite[Lemma~3.9.2]{BGS}).
\begin{lemma}\label{l5.1} Let $M,N\in A\mathrm{-gmod}$. Then the non-graded trace $\mathrm{Tr}_M(N)$ of $M$ in $N$, that is the sum of the images of all (non-graded) homomorphisms $f:M\to N$, belongs to $A\mathrm{-gmod}$. \end{lemma}
\begin{proof} Any $f:M\to N$ can be written as a sum of homogeneous components $f_i:M\to N\langle i\rangle$, $i\in\ensuremath{\mathbb{Z}}$, in particular, the image of $f$ is contained in the sum of the images of all $f_i$. Since the image of a homogeneous map is a graded submodule of $N$, the statement follows. \end{proof}
\begin{corollary}\label{c5.2} All standard and costandard $A$-modules are gradable. \end{corollary}
\begin{proof} By duality it is enough to prove the statement for standard modules. The module $\Delta(i)$ is defined as the quotient of $P(i)$ modulo the trace of $P(i+1)\oplus\dots\oplus P(n)$ in $P(i)$. For positively graded algebras all projective modules are obviously graded and hence the statement follows from Lemma~\ref{l5.1}. \end{proof}
\begin{proposition}\label{p5.3} Let $M,N\in A\mathrm{-gmod}$. Then the universal extension of $M$ by $N$ (in the category $A\mathrm{-mod}$) is gradable. \end{proposition}
\begin{proof} As we have mentioned before, we have $\operatorname{Ext}_A^{1}(M,N)=\oplus_{j\in\ensuremath{\mathbb{Z}}}\operatorname{ext}_A^{1}(M,N\langle j\rangle)$. Every homogeneous extension obviously produces a gradable module. Since we can construct the universal extension of $M$ by $N$ by choosing a homogeneous basis in $\operatorname{Ext}_A^1(M,N)$, the previous argument shows that the obtained module is gradable. This completes the proof. \end{proof}
We would like to fix a grading on all modules, related to the quasi-hereditary structure. We concentrate $L$ in degree $0$ and fix the gradings on $P$, $\Delta$, $\nabla$ and $I$ such that the canonical maps $P\twoheadrightarrow L$, $\Delta\twoheadrightarrow L$, $L\hookrightarrow \nabla$ and $L\hookrightarrow I$ are all morphisms in $A\mathrm{-gmod}$. The only structural modules left are the tilting modules. However, to proceed, we first have to show that tilting modules are gradable.
\begin{corollary}\label{c5.4} All tilting $A$-modules admit graded lifts. Moreover, for $T$ this lift can be chosen such that both the inclusion $\Delta\hookrightarrow T$ and the projection $T\twoheadrightarrow \nabla$ are morphisms in $A\mathrm{-gmod}$. \end{corollary}
\begin{proof} By \cite[Proof of Lemma~3]{Ri}, the tilting $A$-module $T(i)$ is produced by a sequence of universal extensions as follows: we start from the (gradable) module $\Delta(i)$, and at each step we extend some (gradable) module $\Delta(j)$, $j<i$, by the module obtained at the previous step. Using Proposition~\ref{p5.3} and induction we see that all modules obtained during this process are gradable. The statement about the choice of the lift is obvious. \end{proof}
We fix the grading on $T$ given by Corollary~\ref{c5.4}. This automatically induces a grading on the Ringel dual $R(A)=\operatorname{End}_A(T)^{opp}$. In what follows we will always consider $R(A)$ as a graded algebra with respect to this induced grading.
Note that the same ungraded $A$-module can occur as a part of different structures, for example, a module can be projective, injective and tilting at the same time. In this case it is possible that the lifts of this module, which we fix for different structures, are different. For example, if we have a non-simple projective-injective module, then, considered as a projective module, it is graded in non-negative degrees with top being in degree $0$; considered as an injective module, it is graded in non-positive degrees with socle being in degree $0$; and, considered as a tilting module, it has non-trivial components both in negative and positive degrees.
A complex, $\mathcal{X}^{\bullet}$, of graded projective (resp. injective, resp. tilting) modules will be called {\em linear} provided that $\mathcal{X}^{i}\in\mathrm{add} (P\langle i\rangle)$ (resp. $\mathcal{X}^{i}\in\mathrm{add} (I\langle i\rangle)$, resp. $\mathcal{X}^{i}\in\mathrm{add} (T\langle i\rangle)$) for all $i\in\ensuremath{\mathbb{Z}}$.
To avoid confusion between the degree of a graded component of a module and the degree of a component in some complex, we will use the word {\em position} instead of the word degree to indicate the place of a component in a complex.
We say that $M\in A\mathrm{-gmod}$ admits an {\em LT-resolution}, $\mathcal{T}^{\bullet}\twoheadrightarrow M$, (here LT stands for linear-tilting) if $\mathcal{T}^{\bullet}$ is a linear complex of tilting modules from $A\mathrm{-gmod}$, such that $\mathcal{T}^{i}=0$, $i>0$, and the homology of $\mathcal{T}^{\bullet}$ is concentrated in position $0$ and equals $M$ in this position. One also defines {\em LT-coresolution} in the dual way. The main result of this section is the following:
\begin{theorem}\label{t5.5} Let $A$ be a positively graded quasi-hereditary algebra and $1\leq i,j\leq n$. Assume that \begin{enumerate}[(i)] \item\label{l5.5.1} $\nabla(i)$ admits an LT-resolution, $\mathcal{T}(\nabla(i))^{\bullet}\twoheadrightarrow\nabla(i)$; \item\label{l5.5.2} the induced grading on $R(A)$ is positive. \end{enumerate} Then the form $\langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}$ induces a non-degenerate bilinear pairing between \begin{displaymath} \operatorname{hom}_A(\Delta(j)\langle -l\rangle, \mathcal{T}(\nabla(i))^{-l})\quad \text{ and }\quad \operatorname{ext}_A^l(\nabla(i),\nabla(j)\langle -l\rangle). \end{displaymath} \end{theorem}
We remark that Theorem~\ref{t5.5} has a dual analogue.
\begin{proof} The assumption \eqref{l5.5.2} means that \begin{gather} \operatorname{hom}_A(\Delta\langle s\rangle,T)\neq 0 \quad\quad\mathrm{ implies } \quad\quad s\leq 0 \label{bl1}\\ \operatorname{hom}_A(\Delta(k)\langle s\rangle,T(m))\neq 0 \quad\mathrm{ and }\quad k\neq m \quad\quad\mathrm{ implies }\quad\quad s<0\label{bl2}. \end{gather} Hence, it follows that \begin{displaymath} \dim \operatorname{hom}_A\left(\Delta(j)\langle -l\rangle, \mathcal{T}(\nabla(i))^{-l}\right) \end{displaymath} equals the multiplicity of $T(j)\langle -l\rangle$ as a direct summand of $\mathcal{T}(\nabla(i))^{-l}$, which, by the dual arguments, in turn equals \begin{displaymath} \dim \operatorname{hom}_A\left(\mathcal{T}(\nabla(i))^{-l},\nabla(j)\langle -l\rangle\right). \end{displaymath} From the definition of an LT-resolution and \eqref{bl1}-\eqref{bl2} we also obtain \begin{displaymath} \operatorname{hom}_A\left(\mathcal{T}(\nabla(i))^{-l+1},\nabla(j)\langle -l\rangle\right)=0, \end{displaymath} which means that there is no homotopy from $\mathcal{T}(\nabla(i))^{\bullet}$ to $\nabla(j)\langle -l\rangle^{\bullet}$. Arguments analogous to those used in Corollary~\ref{c4.4} imply that any map from $\mathcal{T}(\nabla(i))^{-l}$ to $\nabla(j)\langle -l\rangle$ induces a morphism of complexes from $\mathcal{T}(\nabla(i))^{\bullet}$ to $\nabla(j)\langle -l\rangle^{\bullet}$. Hence \begin{displaymath} \dim \operatorname{ext}_A^l\left(\nabla(i),\nabla(j)\langle -l\rangle\right)= \dim \operatorname{hom}_A\left(\mathcal{T}(\nabla(i))^{-l},\nabla(j)\langle -l\rangle\right). \end{displaymath}
We can now interpret every $f\in \operatorname{hom}_A\left(\Delta(j)\langle -l\rangle, \mathcal{T}(\nabla(i))^{-l}\right)$ as fixing a direct summand of $\mathcal{T}(\nabla(i))^{-l}$ which is isomorphic to $T(j)\langle -l\rangle$. Projecting this summand further onto $\nabla(j)\langle -l\rangle$ shows that the left kernel of the form $\langle \cdot\, ,\cdot\rangle^{(l)}_{i,j}$ is zero. Since the dimensions of the left and the right spaces coincide by the arguments above, we obtain that the form is non-degenerate. This completes the proof. \end{proof}
It is easy to see that the condition \eqref{l5.5.2} of Theorem~\ref{t5.5} does not imply the condition \eqref{l5.5.1} in general. Further, it is also easy to see, for example for the path algebra of the following quiver: \begin{displaymath} \xymatrix{ 1 && 2\ar[rr] && 3\ar@/_1pc/[llll] && 4\ar[ll]}, \end{displaymath} that the condition \eqref{l5.5.1} (even if we assume it to be satisfied for all $i$) does not imply the condition \eqref{l5.5.2} in general. However, we do not know whether the assumptions of the existence of an $LT$-resolution for $\nabla$ and, simultaneously, an $LT$-coresolution for $\Delta$ would imply the condition \eqref{l5.5.2}.
We also would like to remark that the conditions of Theorem~\ref{t5.5} are not at all automatic even in very good cases. For example one can check that the path algebra of the following quiver: \begin{displaymath} \xymatrix{ 1 && 2\ar[ll]\ar[rr] && 3\ar[rr] && 4} \end{displaymath} is standard Koszul, however, both conditions of Theorem~\ref{t5.5} fail.
Let $A$ be a positively graded quasi-hereditary algebra. We say that $A$ is an {\em SCK-algebra} (abbreviating standard-costandard-Koszul) provided that $A$ is standard Koszul and the induced grading on $R(A)$ is positive. We say that $A$ is an {\em SCT-algebra} (abbreviating standard-costandard-tilting) provided that all standard and costandard modules admit LT-(co)resolutions. By \cite[Theorem~1]{ADL}, any standard Koszul algebra, and thus any SCK-algebra, is Koszul. We finish this section with the following observation.
\begin{theorem}\label{t5.6} Any SCK-algebra is an SCT-algebra and vice versa. \end{theorem}
\begin{proof} Our first observation is that for any SCT-algebra $A$ the induced grading on $R(A)$ is positive. To prove this it is enough to show that all subquotients in any standard filtration of the cokernel of the morphism $\Delta(i)\hookrightarrow T(i)$ have the form $\Delta(j)\langle l\rangle$, $l>0$. This follows by induction on $i$. For $i=1$ the statement is obvious, and the induction step follows from the inductive assumption applied to the first term of the linear tilting coresolution of $\Delta(i)$.
Now we claim that the Ringel dual of an SCT algebra is SCK and vice versa. Assume that $A$ is SCT. Applying $\Hom_A(T,{}_-)$ to the LT-resolution of $\nabla$ we obtain that the $k$-th component of the projective resolution of the standard $R(A)$-module is generated in degree $k$. Applying analogous arguments to the LT-coresolution of $\Delta$ we obtain that the $k$-th component of the injective resolution of the costandard $R(A)$-module is generated in degree $-k$. As we have already shown, the induced grading on $R(A)$ is positive. Furthermore, the (graded) Ringel duality maps injective $A$-modules to tilting $R(A)$-modules, which implies that the grading, induced on $A$ from $R(A)\mathrm{-gmod}$, will coincide with the original grading on $A$, and hence will be positive as well. This means that $R$ is SCK. The arguments in the opposite direction are similar.
To complete the proof it is now enough to show, say, that any SCT algebra is SCK. The existence of a linear tilting coresolution for $\Delta$ and the fact, proved above, that for an SCT-algebra $A$ the induced grading on $R(A)$ is positive imply $\operatorname{ext}^{k}(\Delta\langle l\rangle,\Delta)=0$ unless $l\leq k$. Since $A$ is positively graded, we have that the $k$-th term of the projective resolution of $\Delta$ consists of modules of the form $P(i)\langle -l\rangle$, $l\geq k$. Assume that for some $k$ a summand $P(i)\langle -l\rangle$ with $l>k$ occurs. Since every kernel and cokernel in our resolution has a standard filtration, we obtain that $\operatorname{ext}^{k}(\Delta,\Delta(i)\langle -l\rangle)\neq 0$ with $l> k$, which contradicts the vanishing above. This implies that $\Delta$ has a linear projective resolution. Analogous arguments imply that $\nabla$ has a linear injective coresolution. This completes the proof. \end{proof}
\section{The category of linear complexes of tilting modules}\label{s55}
We continue to work under the assumptions of Section~\ref{s5}, moreover, we assume, until the end of this section, that $A$ is such that both $A$ and $R(A)$ are positively graded.
The results of Section~\ref{s5} motivate the following definition: We say that $M\in A\mathrm{-gmod}$ is {\em $T$-Koszul} provided that $M$ is isomorphic in $D^b(A\mathrm{-gmod})$ to a linear complex of tilting modules. Thus any module, which admits an $LT$-(co)resolution, is $T$-Koszul.
We denote by $\mathfrak{T}=\mathfrak{T}(A)$ the category, whose objects are linear complexes of tilting modules and morphisms are all morphisms of graded complexes (which means that all components of these morphisms are homogeneous homomorphisms of $A$-modules of degree $0$). We also denote by $\mathfrak{T}^b$ the full subcategory of $\mathfrak{T}$, which consists of bounded complexes.
\begin{lemma}\label{l55.1} \begin{enumerate}[(1)] \item $\mathfrak{T}$ is an abelian category. \item $\langle -1\rangle[1]:\mathfrak{T}\to \mathfrak{T}$ is an auto-equivalence. \item The complexes $\left(T(i)^{\bullet}\right)\langle -l\rangle[l]$ constitute an exhaustive list of simple objects in $\mathfrak{T}$. \end{enumerate} \end{lemma}
\begin{proof} The assumption that the grading on $R(A)$, induced from $A\mathrm{-gmod}$, is positive, implies that the algebra $\mathrm{end}_A(T^{\bullet})$ is semi-simple. Using this it is easy to check that taking the usual kernels and cokernels of morphisms of complexes defines on $\mathfrak{T}$ the structure of an abelian category. That $\langle -1\rangle[1]:\mathfrak{T}\to \mathfrak{T}$ is an auto-equivalence follows from the definition.
The fact that $\mathrm{end}_A(T^{\bullet})$ is semi-simple and the above definition of the abelian structure on $\mathfrak{T}$ imply that any non-zero homomorphism in $\mathfrak{T}$ to the complex $\left(T(i)^{\bullet}\right)\langle -l\rangle[l]$ is surjective. Hence the objects $\left(T(i)^{\bullet}\right)\langle -l\rangle[l]$ are simple. On the other hand, it is easy to see that for any linear complex $\mathcal{T}^{\bullet}$ and for any $k\in\ensuremath{\mathbb{Z}}$ the complex $\left(\mathcal{T}^{k}\right)^{\bullet}$ is a subquotient of $\mathcal{T}^{\bullet}$ provided that $\mathcal{T}^{k}\neq 0$. Hence any simple object in $\mathfrak{T}$ should contain only one non-zero component. In order to be a simple object, this component obviously should be an indecomposable $A$-module. Therefore any simple object in $\mathfrak{T}$ is isomorphic to $\left(T(i)^{\bullet}\right)\langle -l\rangle[l]$ for some $i$ and $l$. This completes the proof. \end{proof}
Our aim is to show that $\mathfrak{T}$ has enough projective objects. However, to do this it is more convenient to switch to a different language and to prove a more general result.
Let $B=\oplus_{i\in\ensuremath{\mathbb{Z}}}B_i$ be a basic positively graded $\Bbbk$-algebra such that $\dim_{\Bbbk}B_i<\infty$ for all $i\geq 0$. Denote by $\mathfrak{B}$ the category of linear complexes of projective $B$-modules, and by $\tilde{\mathfrak{B}}$ the category, whose objects are all sequences $\mathcal{P}^{\bullet}$ of projective $B$-modules, such that $\mathcal{P}^{i}\in\mathrm{add}(P\langle i\rangle)$ for all $i\in\ensuremath{\mathbb{Z}}$, and whose morphisms are all morphisms of graded sequences (consisting of homogeneous maps of degree $0$). The objects of $\tilde{\mathfrak{B}}$ will be called {\em linear sequences of projective modules}.
Denote by $\mu: B_1\otimes_{B_0}B_1\to B_2$ the multiplication map and by $\mu^{*}:B_2^*\to B_1^*\otimes_{B_0^*}B_1^*$ the dual map. Define the algebra $\Lambda$ as the quotient of the free positively graded tensor algebra $B_0[B_1^*]$ modulo the homogeneous ideal generated by $\mu^*(B_2^*)$.
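As a sanity check (not needed for the argument, and with the illustrative grading $\deg x=1$ assumed), the construction of $\Lambda$ can be carried out by hand for the two smallest local algebras:

```latex
% Quadratic dual of the dual numbers B = k[x]/(x^2):
% here B_1 = kx and B_2 = 0, so mu^* = 0 and no relations appear;
% Lambda is the free algebra on the single generator t = x^*.
%
% For B = k[x]/(x^3) one has B_2 = kx^2 and
% mu : B_1 (x) B_1 -> B_2 is an isomorphism, so
% mu^*((x^2)^*) = x^* (x) x^* generates the relation t^2.
\begin{align*}
  B=\Bbbk[x]/(x^2) &\;\rightsquigarrow\;
      \Lambda=B_0[B_1^*]=\Bbbk[t],\\
  B=\Bbbk[x]/(x^3) &\;\rightsquigarrow\;
      \Lambda=\Bbbk[t]/(t^2).
\end{align*}
```

In particular, $\Lambda$ need not be finite-dimensional even when $B$ is, which is exactly why locally finite (rather than finitely generated) graded modules appear below.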
A graded module, $M=\oplus_{i\in\ensuremath{\mathbb{Z}}}M_i$, over a graded algebra is called {\em locally finite} provided that $\dim M_i<\infty$ for all $i$. Note that a locally finite module does not need to be finitely generated. For a graded algebra, $C$, we denote by $C\mathrm{-lfmod}$ the category of all locally finite graded $C$-modules (with morphisms being homogeneous maps of degree $0$).
The following statement was proved in \cite[Theorem~2.4]{MS}. For the sake of completeness we present a short version of the proof.
\begin{theorem}\label{t55.2} There is an equivalence of categories, $\overline{F}: B_0[B_1^*]\mathrm{-lfmod}\to \tilde{\mathfrak{B}}$, which induces an equivalence, $F:\Lambda\mathrm{-lfmod}\to \mathfrak{B}$. \end{theorem}
\begin{proof} Let $P$ denote the projective generator of $B$. We construct the functor $\overline{F}$ in the following way: Let $X=\oplus_{j\in\ensuremath{\mathbb{Z}}}X_j\in B_0[B_1^*]\mathrm{-lfmod}$. We define $\overline{F}(X)=\mathcal{P}^{\bullet}$, where $\mathcal{P}^{j}=P\langle j\rangle \otimes_{B_0} X_{j}$, $j\in \ensuremath{\mathbb{Z}}$. To define the differential $d_j:\mathcal{P}^{j}\to \mathcal{P}^{j+1}$ we note that $P\cong {}_B B$ and use the following bijections: \begin{equation}\label{eq55.4} \begin{array}{lcl} \displaystyle \{M\in B_0[B_1^*]\mathrm{-lfmod}:
M|_{B_0}= X|_{B_0} \} & \cong & \text{ (since $B_0[B_1^*]$ is free) } \\ \displaystyle \prod_{j\in\ensuremath{\mathbb{Z}}}\mathrm{hom}_{B_0-B_0}\left(B_1^*\langle j+1\rangle, \mathrm{Hom}_{\Bbbk}\left(X_{j},X_{j+1}\right)\right) & \cong & \text{ (by adjoint associativity)} \\ \displaystyle \prod_{j\in\ensuremath{\mathbb{Z}}}\mathrm{hom}_{B_0}\left(X_{j}, B_1\langle j+1\rangle\otimes_{B_0} X_{j+1}\right) & \cong & \text{ (because of grading) } \\ \displaystyle \prod_{j\in\ensuremath{\mathbb{Z}}}\mathrm{hom}_{B_0}\left(X_{j}, B\langle j+1\rangle\otimes_{B_0} X_{j+1}\right) &\cong & \text{ (by projectivity of ${}_B B$)}\\ \displaystyle \prod_{j\in\ensuremath{\mathbb{Z}}}\mathrm{hom}_B\left(B\langle j\rangle\otimes_{B_0} X_{j}, B\langle j+1\rangle\otimes_{B_0} X_{j+1} \right). & & \\ \end{array} \end{equation} Thus, starting from the fixed $X$, the bijections of \eqref{eq55.4} produce for each $j\in\ensuremath{\mathbb{Z}}$ a unique element of the space $\mathrm{hom}_B\left(B\langle j\rangle\otimes_{B_0} X_{j}, B\langle j+1\rangle\otimes_{B_0} X_{j+1} \right)$, which defines the differential in $\mathcal{P}^{\bullet}$.
Tensoring with the identity map on ${}_B B$, the correspondence $\overline{F}$, defined above on objects, extends to a functor from $B_0[B_1^*]\mathrm{-lfmod}$ to $\tilde{\mathfrak{B}}$. Since $\operatorname{hom}({}_B B,{}_B B)\cong B_0$ is a direct sum of several copies of $\Bbbk$, it follows by a direct calculation that $\overline{F}$ is full and faithful. It is also easy to derive from the construction that $\overline{F}$ is dense. Hence it is an equivalence between the categories $B_0[B_1^*]\mathrm{-lfmod}$ and $\tilde{\mathfrak{B}}$.
Now the principal question is: when is $\overline{F}(X)$ a complex? Let \begin{gather*} d_j: B\langle j\rangle\otimes_{B_0} X_{j}\to B\langle j+1\rangle\otimes_{B_0} X_{j+1},\\ d_{j-1}: B\langle j-1\rangle\otimes_{B_0} X_{j-1}\to B\langle j\rangle\otimes_{B_0} X_{j} \end{gather*} be as constructed above. Let further \begin{gather*} \delta_{j}:X_{j}\to B_1 \langle j+1\rangle\otimes_{B_0} X_{j+1},\\ \delta_{j-1}:X_{j-1}\to B_1 \langle j\rangle\otimes_{B_0} X_{j} \end{gather*} be the corresponding maps, given by \eqref{eq55.4}. Then $d_jd_{j-1}=0$ if and only if \begin{displaymath} \left(\mu\otimes \mathrm{Id}_{X_{j+1}}\right)\circ \left(\mathrm{Id}_{B_1}\otimes\delta_j\right)\circ\delta_{j-1}=0. \end{displaymath} The last equality, in turn, is equivalent to the fact that the total composition of morphisms in the following diagram is zero: \begin{displaymath} B_2^* \xrightarrow{\mu^*} B_1^*\otimes B_1^* \xrightarrow{b} \mathrm{Hom}_{\Bbbk}\left(X_{j},X_{j+1}\right)\otimes \mathrm{Hom}_{\Bbbk}\left(X_{j-1},X_{j}\right)\xrightarrow{c} \mathrm{Hom}_{\Bbbk}\left(X_{j-1},X_{j+1}\right), \end{displaymath} where the map $b$ is given by two different applications of \eqref{eq55.4} and $c$ denotes the usual composition. Hence $\overline{F}(X)$ is a complex if and only if $\mathrm{Im}(\mu^*) X=0$ or, equivalently, $X\in \Lambda\mathrm{-lfmod}$. \end{proof}
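To see the criterion at the end of the proof in action, consider the toy algebra $B=\Bbbk[x]/(x^3)$ with $\deg x=1$ (an illustrative sanity check, not part of the argument); for the dual numbers $\Bbbk[x]/(x^2)$ one has $B_2=0$, so $\mathrm{Im}(\mu^*)=0$ and every linear sequence is automatically a complex, whereas for $\Bbbk[x]/(x^3)$ the criterion genuinely cuts something out:

```latex
% B = k[x]/(x^3), deg x = 1, P = B. Consider the linear sequence
% in which each differential is right multiplication by x
% (a homogeneous morphism B<j> -> B<j+1> of degree 0).
% The composite is multiplication by x^2, which is nonzero in B,
% so this linear sequence is NOT a complex. Correspondingly, on
% the associated X (with X_0 = X_1 = X_2 = k) the generator t acts
% with t^2 . X != 0, i.e. X is a B_0[B_1^*] = k[t]-module that does
% not factor through Lambda = k[t]/(mu^*(B_2^*)) = k[t]/(t^2).
\begin{displaymath}
  B \xrightarrow{\;\cdot\, x\;} B\langle 1\rangle
    \xrightarrow{\;\cdot\, x\;} B\langle 2\rangle,
  \qquad d^2 = \cdot\, x^2 \neq 0.
\end{displaymath}
```

This matches the statement $\overline{F}(X)$ is a complex if and only if $\mathrm{Im}(\mu^*)X=0$.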
It is clear that the equivalence, constructed in the proof of Theorem~\ref{t55.2}, sends the auto-equivalence $\langle 1\rangle$ on $\Lambda\mathrm{-lfmod}$ (resp. on $B_0[B_1^*]\mathrm{-lfmod}$) to the auto-equivalence $\langle -1\rangle[1]$ on $\mathfrak{B}$ (resp. on $\tilde{\mathfrak{B}}$).
Now we are back to the original setup of this section.
\begin{corollary}\label{c55.6} Let $R=R(A)=\oplus_{i\geq 0}R_i$ and set $\Lambda=R_0[R_1^*]/(\mu^*(R_2^*))$, where $\mu$ denotes the multiplication in $R$. Then the category $\mathfrak{T}$ is equivalent to $\Lambda\mathrm{-lfmod}$. \end{corollary}
\begin{proof} Apply first the graded Ringel duality and then Theorem~\ref{t55.2}. \end{proof}
\begin{corollary}\label{c55.7} Assume that $R=R(A)$ is Koszul. Set $\Lambda=(E(R(A)))^{opp}$. Then \begin{enumerate}[(1)] \item $\mathfrak{T}$ is equivalent to the category $\Lambda\mathrm{-lfmod}$. \item The category $\mathfrak{T}^b$ is equivalent to $\Lambda\mathrm{-gmod}$. \end{enumerate} \end{corollary}
\begin{proof} If the algebra $R=\oplus_{i\geq 0}R_i$ is Koszul then, by \cite[Section~2.9]{BGS}, the formal quadratic dual algebra $R_0[R_1^*]/(\mu^*(R_2^*))$ is isomorphic to $(E(R))^{opp}$. Now everything follows from Corollary~\ref{c55.6}. \end{proof}
Corollary~\ref{c55.6} motivates the further study of the categories $\mathfrak{T}$ and $\mathfrak{T}^b$. We start with a description of the first extension spaces between the simple objects in $\mathfrak{T}$. Surprisingly enough, this result can be obtained without any additional assumptions.
\begin{lemma}\label{l55.8} Let $i,j\in\{1,\dots,n\}$ and $l\in \ensuremath{\mathbb{Z}}$. Then $\mathrm{ext}_{\mathfrak{T}}^1\left( T(i)^{\bullet},T(j)^{\bullet}\langle -l\rangle[l]\right)\neq 0$ implies $l=-1$. Moreover, \begin{displaymath} \mathrm{ext}_{\mathfrak{T}}^1\left( T(i)^{\bullet},T(j)^{\bullet}\langle 1\rangle[-1]\right)\cong \mathrm{hom}_A\left(T(i),T(j)\langle 1\rangle\right). \end{displaymath} \end{lemma}
\begin{proof} A direct calculation, using the definition of the first extension via short exact sequences and the abelian structure on $\mathfrak{T}$. \end{proof}
Recall from \cite{CPS1,DR} that an associative algebra is quasi-hereditary if and only if its module category is a highest weight category. Our goal is to establish some conditions under which $\mathfrak{T}^b$ becomes a highest weight category. To prove that a category is a highest weight category one has to determine the (co)standard objects.
\begin{proposition}\label{p55.9} \begin{enumerate}[(1)] \item Assume that $\Delta(i)$ admits an LT-coresolution, $\Delta(i)\hookrightarrow \mathcal{T}(\Delta(i))^{\bullet}$, for all $i$. Then $\mathrm{ext}_{\mathfrak{T}}^1\left(\mathcal{T}(\Delta(i))^{\bullet}, T(j)^{\bullet}\langle -l\rangle[l]\right)= 0$ for all $l\in\ensuremath{\mathbb{Z}}$ and $j\leq i$. \item Assume that $\nabla(i)$ admits an LT-resolution, $\mathcal{T}(\nabla(i))^{\bullet}\twoheadrightarrow \nabla(i)$, for all $i$. Then we have $\mathrm{ext}_{\mathfrak{T}}^1\left(T(j)^{\bullet}\langle -l\rangle[l], \mathcal{T}(\nabla(i))^{\bullet}\right)= 0$ for all $l\in\ensuremath{\mathbb{Z}}$ and $j\leq i$. \end{enumerate} \end{proposition}
\begin{proof} By duality, it is certainly enough to prove only the first statement. Using induction with respect to the quasi-hereditary structure it is even enough to show that $\mathcal{T}(\Delta(n))^{\bullet}$ is projective in $\mathfrak{T}$. By Lemma~\ref{l55.8} we can also assume that $l<0$. Let \begin{equation}\label{eq55.9.1} 0\to T(j)^{\bullet}\langle -l\rangle[l]\to \mathcal{X}^{\bullet} \to \mathcal{T}(\Delta(n))^{\bullet}\to 0 \end{equation} be a short exact sequence in $\mathfrak{T}$. Let further $d^{\bullet}$ denote the differential in $\mathcal{T}(\Delta(n))^{\bullet}$. Consider the short exact sequence \begin{equation}\label{eq55.9.2} 0\to \ker(d^{-l})\to \mathcal{T}(\Delta(n))^{-l}\to \ker(d^{-l+1})\to 0. \end{equation} Since $\mathcal{T}(\Delta(n))^{\bullet}$ is a tilting coresolution of a standard module, it follows that all modules in \eqref{eq55.9.2} have standard filtrations. Hence, applying $\Hom_A({}_-,T(j))$ to \eqref{eq55.9.2}, and using the fact that $T(j)$ has a costandard filtration, we obtain the surjection \begin{displaymath} \Hom_A\left(\mathcal{T}(\Delta(n))^{-l},T(j)\right)\twoheadrightarrow \Hom_A\left(\ker(d^{-l}),T(j)\right), \end{displaymath} which induces the graded surjection \begin{displaymath} \operatorname{hom}_A\left(\mathcal{T}(\Delta(n))^{-l},T(j)\langle -l\rangle\right)\twoheadrightarrow \operatorname{hom}_A\left(\ker(d^{-l}),T(j)\langle -l\rangle\right). \end{displaymath} The last surjection allows one to perform a base change in $\mathcal{X}^{\bullet}$, which splits the sequence \eqref{eq55.9.1}. This proves the statement. \end{proof}
For $R=R(A)$ we introduce the notation $R^{!}=R_0[R_1^*]/(\mu^*(R_2^*))$ (if $R$ is Koszul, this notation coincides with the one used for the formal quadratic dual in \cite[2.8]{BGS}). We have $\mathfrak{T}\cong R^{!}\mathrm{-lfmod}$ by Corollary~\ref{c55.6}.
\cite[Theorem~1]{ADL} states that a standard Koszul quasi-hereditary algebra is Koszul (which means that if standard modules admit linear projective resolutions and costandard modules admit linear injective resolutions, then simple modules admit both linear projective and linear injective resolutions). An analogue of this statement in our case is the following:
\begin{theorem}\label{t55.12} Let $A$ be an SCT algebra. Then \begin{enumerate}[(1)] \item\label{t55.12.1} $R^{!}$ is quasi-hereditary with respect to the usual order on $\{1,2,\dots,n\}$, or, equivalently, $\mathfrak{T}^b\simeq R^{!}\mathrm{-gmod}$ is a highest weight category; \item\label{t55.12.2} $\mathcal{T}(\Delta(i))^{\bullet}$, $i=1,\dots,n$, are standard objects in $\mathfrak{T}^b$; \item\label{t55.12.3}$\mathcal{T}(\nabla(i))^{\bullet}$, $i=1,\dots,n$, are costandard objects in $\mathfrak{T}^b$; \end{enumerate} Assume further that the algebra $R^{!}$ is SCT. Then \begin{enumerate}[(1)] \setcounter{enumi}{3} \item\label{t55.12.4} simple $A$-modules are $T$-Koszul, in particular, for every $i=1,\dots,n$ there exists a linear complex, $\mathcal{T}(L(i))^{\bullet}$, of tilting modules, which is isomorphic to $L(i)$ in $D^b(A\mathrm{-gmod})$; \item\label{t55.12.5} $\mathcal{T}(L(i))^{\bullet}$, $i=1,\dots,n$, are tilting objects with respect to the quasi-hereditary structure on $\mathfrak{T}^b$. \end{enumerate} \end{theorem}
\begin{proof} The algebra $R$ is quasi-hereditary with respect to the opposite order on $\{1,2,\dots,n\}$. Moreover, $R$ is SCK by Theorem~\ref{t5.6}, in particular, it is standard Koszul, thus also Koszul by \cite[Theorem~1]{ADL}. Hence its Koszul dual, which is isomorphic to $(R^{!})^{opp}$ by \cite[2.10]{BGS}, is quasi-hereditary with respect to the usual order on $\{1,2,\dots,n\}$ by \cite[Theorem~2]{ADL}. This certainly means that $R^{!}$ is quasi-hereditary with respect to the usual order on $\{1,2,\dots,n\}$. From Corollary~\ref{c55.7} we also obtain $R^{!}\mathrm{-gmod}\simeq \mathfrak{T}^b$. This proves the first statement.
That the objects $\mathcal{T}(\Delta(i))^{\bullet}$, $i=1,\dots,n$, are standard and the objects $\mathcal{T}(\nabla(i))^{\bullet}$, $i=1,\dots,n$, are costandard follows from Proposition~\ref{p55.9} and \cite[Theorem~1]{DR2}. This proves \eqref{t55.12.2} and \eqref{t55.12.3}.
Now we can assume that $R^{!}$ is an SCT-algebra. In particular, it is quasi-hereditary, and hence the category $\mathfrak{T}^b$ must contain tilting objects with respect to the corresponding highest weight structure. By \cite[Proof of Lemma~3]{Ri}, the tilting objects in $\mathfrak{T}^b$ can be constructed via a sequence of universal extensions, which starts with some standard object and proceeds by extending other (shifted) standard objects by objects, already constructed on previous steps. The assumption that $R^{!}$ is SCT=SCK means that new standard objects should be shifted by $\langle -l\rangle[l]$ with $l>0$. From the second statement of our theorem, which we have already proved above, it follows that the standard objects in $\mathfrak{T}^b$ are exhausted by $\mathcal{T}(\Delta(i))^{\bullet}$, $i=1,\dots,n$, and their shifts. The homology of $\mathcal{T}(\Delta(i))^{\bullet}$ is concentrated in position $0$ and in non-negative degrees. It follows that the homology of the tilting object in $\mathfrak{T}^b$, which we obtain, using this construction, will be concentrated in non-positive positions and in non-negative degrees.
On the other hand, the dual construction, that is the one which uses costandard objects, implies that the homology of the same tilting object in $\mathfrak{T}^b$ will be concentrated in non-negative positions and in non-positive degrees. This means that the homology of an indecomposable tilting object in $\mathfrak{T}^b$ is concentrated in position $0$ and in degree $0$ and hence is a simple $A$-module. This proves the last two statements of our theorem and completes the proof. \end{proof}
In the next section we will show that all the above conditions are satisfied for the associative algebras, associated with the blocks of the BGG category $\mathcal{O}$.
We remark that, under conditions of Theorem~\ref{t55.12}, in the category $\mathfrak{T}$ the standard and costandard $A$-modules remain standard and costandard objects respectively via their tilting (co)resolutions. Tilting $A$-modules become simple objects, and simple $A$-modules become tilting objects via $\mathcal{T}(L(i))^{\bullet}$.
An SCT algebra $A$ for which $R(A)^!$ is SCT will be called {\em balanced}. The results of this section allow us to formulate a new type of duality for balanced algebras (in fact, this just means that we can perform in one step the following path $A\leadsto R\leadsto R^{!}\leadsto R(R^{!})$, which consists of already known dualities for quasi-hereditary algebras).
\begin{corollary}\label{c55.88} Let $A$ be balanced and $\mathcal{T}(L(i))^{\bullet}$, $i=1,\dots,n$, be a complete list of indecomposable tilting objects in $\mathfrak{T}^b$, constructed in Theorem~\ref{t55.12}\eqref{t55.12.5}. Then $\langle -1\rangle[1]$ induces a (canonical) $\ensuremath{\mathbb{Z}}$-action on the algebra \begin{displaymath} \overline{C}(A)=\operatorname{End}_A\left(\oplus_{l\in\ensuremath{\mathbb{Z}}}\oplus_{i=1}^n \mathcal{T}(L(i))^{\bullet}\langle -l\rangle[l]\right), \end{displaymath} which makes $\overline{C}(A)$ into the covering of some algebra $C(A)$. The algebra $C(A)$ is balanced and $C(C(A))\cong A$. \end{corollary}
\begin{proof} From Theorem~\ref{t55.12} it follows that $C(A)\cong (R(R^{!}))^{opp}$. From Lemma~\ref{l55.8} and the assumption that $R(A)^!$ is SCT it follows that the grading on both $R^{!}$ and $C(A)$, induced from $\mathfrak{T}$, is positive. In particular, Theorem~\ref{t5.6} and \cite[Theorem~2]{ADL} now imply that $C(A)$ is balanced. Since both Ringel and Koszul dualities are involutive, we also have $A\cong (R(R(C(A))^{!}))^{opp}$. \end{proof}
\begin{corollary}\label{c55.89} Let $A$ be balanced. Then $A$ is standard Koszul and $C(A)\cong (A^{!})^{opp}\cong E(A)$. \end{corollary}
\begin{proof} $A$ is standard Koszul by Theorem~\ref{t5.6}, in particular, it is Koszul by \cite[Theorem~1]{ADL}. Further, since no homotopy is possible in $\mathfrak{T}$, it follows that \begin{displaymath} \operatorname{ext}_A^{l}\left(L(i),L(j)\langle -l\rangle\right)\cong \Hom_{\mathfrak{T}}\left(\mathcal{T}(L(i))^{\bullet}, \mathcal{T}(L(j))^{\bullet}\langle -l\rangle[l]\right). \end{displaymath} The last equality is obviously compatible with the $\ensuremath{\mathbb{Z}}$-actions and the compositions on both sides, which implies that the Koszul dual $(A^{!})^{opp}$ of $A$ is isomorphic to $C(A)$. \end{proof}
We can now formulate what is probably the most surprising result of this section.
\begin{corollary}\label{c55.90} Let $A$ be balanced. Then the algebras $R(A)$, $E(A)$, $E(R(A))$ and $R(E(A))$ are also balanced; moreover, \begin{displaymath} E(R(A))\cong R(E(A)) \end{displaymath} as quasi-hereditary algebras. In other words, both the Ringel and Koszul dualities preserve the class of balanced algebras and commute on this class. \end{corollary}
\begin{proof} Follows from Theorem~\ref{t55.12}, Corollary~\ref{c55.88} and Corollary~\ref{c55.89}. \end{proof}
The results presented in this section motivate the following natural question: {\em is any SCT=SCK algebra balanced?}
\section{The graded Ringel dual for the category $\mathcal{O}$}\label{s6}
In this section we prove that the conditions of Theorem~\ref{t5.5} are satisfied for the associative algebra associated with a block of the BGG category $\mathcal{O}$. To do this we will use the graded approach to the category $\mathcal{O}$, worked out in \cite{St}. So, in this section we assume that $A$ is the basic associative algebra of an indecomposable integral (not necessarily regular) block of the BGG category $\mathcal{O}$, \cite{BGG}. The (not necessarily bijective) indexing set for simple modules will be the Weyl group $W$ with the usual Bruhat order (such that the identity element is the maximal one and corresponds to the projective Verma=standard module). This algebra is Koszul by \cite{BGS,So}, and thus we can fix on $A$ the Koszul grading, which leads us to the situation described in Section~\ref{s5}. Recall that a module, $M$, is called {\em rigid} provided that its socle and radical filtrations coincide, see for example \cite{Ir}. Our main result in this section is the following:
\begin{theorem}\label{t6.1} $\operatorname{End}_A(T)$ is positively graded, moreover, it is generated in degrees $0$ and $1$. Furthermore, $\nabla$ admits an LT-resolution. \end{theorem}
\begin{proof} From \cite[Section~7]{FKM} it follows that $T\cong \mathrm{Tr}_{P(w_0)}(P)$ and thus, by Lemma~\ref{l5.1}, there is a graded submodule $T'$ of $P$, which is isomorphic to $T$ after forgetting the grading. Moreover, again by \cite[Section~7]{FKM}, the restriction from $P$ to $T'$ induces an isomorphism of $\operatorname{End}_A(P)$ and $R=\operatorname{End}_A(T)$. So, to prove that $\operatorname{End}_A(T)$ is positively graded it is enough to show that $T'\cong T\langle -l\rangle$ for some $l$. Actually, we will show that this $l$ equals the Loewy length of $\Delta(e)$.
Let $\theta_s$ denote the graded translation functor through the $s$-wall, see \cite[3.2]{St}. Let $w_0$ denote the longest element in the Weyl group. The socle of any Verma module in the category $\mathcal{O}$ is the simple Verma module $\Delta(w_0)$, see \cite[Chapter~7]{Di}. This gives, for some $l\in\ensuremath{\mathbb{Z}}$, a graded inclusion $T(w_0)\langle -l\rangle\cong \Delta(w_0)\langle -l \rangle\hookrightarrow \Delta(e)$. Moreover, since Verma modules in $\mathcal{O}$ are rigid by \cite{Ir}, and since their graded filtration is the Loewy one by \cite[Proposition~2.4.1]{BGS}, it follows that this $l$ equals the Loewy length of $\Delta(e)$. Now we would like to prove by induction that $T(w_0w)\langle -l\rangle\hookrightarrow P(w)$ for any $w\in W$. Assume that this is proved for some $w$ and let $s$ be a simple reflection such that $l(ws)>l(w)$. Translating through the $s$-wall we obtain $\theta_s T(w_0w)\langle -l\rangle\hookrightarrow \theta_s P(w)$. Further, the module $P(ws)$ is a direct summand of $\theta_s P(w)$ (after forgetting the grading). However, from \cite[Theorem~3.6]{St} it follows that the inclusion $P(ws)\hookrightarrow\theta_s P(w)$ is homogeneous and has degree $0$. The same argument implies that the inclusion $T(w_0ws)\hookrightarrow\theta_s T(w_0w)$ is homogeneous and has degree $0$. This gives us the desired inclusion $T(w_0ws)\langle -l\rangle\hookrightarrow P(ws)$ of degree $0$ and completes the induction. Adding everything up we obtain a graded inclusion of degree $0$ from $T\langle -l\rangle$ to $P$.
Recall once more that the restriction from $P$ to $T$ induces an isomorphism of $\operatorname{End}_A(P)$ and $R=\operatorname{End}_A(T)$. Since $\operatorname{End}_A(P)=A$ is positively graded and is generated in degrees $0$ and $1$, we obtain that $\operatorname{End}_A(T)$ is positively graded and is generated in degrees $0$ and $1$ as well.
It is now left to prove the existence of an LT-resolution for $\nabla$. Consider the minimal tilting resolution of $\nabla$. In Section~\ref{s5} we have defined the grading on $T$ such that the canonical projection $T\to \nabla$ is a homogeneous map of degree $0$. The kernel of this projection is thus graded and has a graded $\nabla$-filtration. Proceeding by induction we obtain that the minimal tilting resolution of $\nabla$ is graded. Let $R=R(A)$. Using the functor $F=\Hom_A(T,{}_-)$ we transfer this graded tilting resolution to a graded projective resolution of the direct sum $\Delta^{(R)}$ of standard $R$-modules. By \cite{So2} we have $A\cong R$, moreover, we have just proved that the grading on $R$, which is induced from $A\mathrm{-gmod}$, is the Koszul one. By \cite[3.11]{BGS}, the standard $A$-modules are Koszul, implying that the $l$-th term of the projective resolution of $\Delta^{(R)}$ is generated in degree $l$. Applying $F^{-1}$ we thus obtain an LT-resolution of $\nabla$. This completes the proof. \end{proof}
Catharina Stroppel gave an alternative argument for Theorem~\ref{t6.1} (see Appendix), which uses graded twisting functors. The advantage of her approach is that it can be generalized also to the parabolic analogue of the category $\mathcal{O}$ defined in \cite{RC}.
The arguments, used in the proof of Theorem~\ref{t6.1} also imply the following technical result:
\begin{corollary}\label{c6.2} \begin{enumerate}[(1)] \item The Loewy length $\mathrm{l.l.}(P(w))$ of $P(w)$ equals $2\mathrm{l.l.}(\Delta(e))-\mathrm{l.l.}(\Delta(w))$. In particular, for the regular block of $\mathcal{O}$ we have $\mathrm{l.l.}(P(w))=l(w_0)+l(w)+1$. \item The Loewy length $\mathrm{l.l.}(T(w))$ of $T(w)$ equals $2\mathrm{l.l.}(\Delta(w))-1$. In particular, for the regular block of $\mathcal{O}$ we have $\mathrm{l.l.}(T(w))=2(l(w_0)-l(w))+1$. \end{enumerate} \end{corollary}
\begin{proof} We start with the second statement. Recall that $\Delta(w)\hookrightarrow T(w)$, $T(w)\twoheadrightarrow \nabla(w)$, $[T(w):L(w)]=1$ and $L(w)$ is the simple top of $\Delta(w)$ and the simple socle of $\nabla(w)$. It follows that $\mathrm{l.l.}(T(w))\geq \mathrm{l.l.}(\Delta(w))+\mathrm{l.l.}(\nabla(w))-1= 2\mathrm{l.l.}(\Delta(w))-1$ since $\mathcal{O}$ has a simple preserving duality. However, the graded filtration of the tilting module we have just constructed certainly has semi-simple subquotients (since $A$ is positively graded with semi-simple $A_0$). All $\Delta(w')$ occurring in it have Loewy length less than or equal to that of $\Delta(w)$ and start in negative degrees since $\operatorname{End}_A(T)$ is positively graded. This implies that $\mathrm{l.l.}(T(w))\leq 2\mathrm{l.l.}(\Delta(w))-1$ and completes the proof of the second part.
Since $P(w)$ has simple top, its graded filtration is the radical one by \cite[Proposition~2.4.1]{BGS}. However, from the proof of Theorem~\ref{t6.1} and from the second part of this corollary, which we have just proved, it follows that the length of the graded filtration of $P(w)$ is exactly $2\mathrm{l.l.}(\Delta(e))-\mathrm{l.l.}(\Delta(w))$.
The computations for the regular block follow from the results of \cite{Ir1} and the proof is complete. \end{proof}
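To illustrate Corollary~\ref{c6.2}, here is a sketch of the smallest case, the regular block of $\mathcal{O}$ for $\mathfrak{sl}_2$, where $W=\{e,s\}$, $w_0=s$, $l(w_0)=1$, and hence $\mathrm{l.l.}(\Delta(e))=2$ and $\mathrm{l.l.}(\Delta(s))=1$:

```latex
% Regular block of O for sl_2: W = {e,s}, w_0 = s, l(w_0) = 1.
\mathrm{l.l.}(P(e)) = l(w_0)+l(e)+1 = 2, \qquad
\mathrm{l.l.}(P(s)) = l(w_0)+l(s)+1 = 3, \\
\mathrm{l.l.}(T(e)) = 2(l(w_0)-l(e))+1 = 3, \qquad
\mathrm{l.l.}(T(s)) = 2(l(w_0)-l(s))+1 = 1.
```

This agrees with $P(e)=\Delta(e)$, with the radical filtration $L(s),\,L(e),\,L(s)$ of the projective-injective module $P(s)$, and with $T(s)=\Delta(s)=L(s)$.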
\begin{corollary}\label{c6.3} Let $w\in W$. Then the following conditions for $T(w)$ are equivalent: \begin{enumerate}[(a)] \item\label{c.6.3.1} $T(w)$ is rigid. \item\label{c.6.3.2} $\operatorname{End}_A(T(w))$ is commutative. \item\label{c.6.3.3} $T(w)$ has simple top (or, equivalently, simple socle). \item\label{c.6.3.35} The center of the universal enveloping algebra surjects onto $\operatorname{End}_A(T(w))$. \item\label{c.6.3.4} $T(w)\hookrightarrow P(w_0)$. \item\label{c.6.3.5} $P(w_0)\twoheadrightarrow T(w)$. \item\label{c.6.3.6} $[T(w):\Delta(w')]\leq 1$ for all $w'\in W$. \item\label{c.6.3.7} $[T(w):\nabla(w')]\leq 1$ for all $w'\in W$. \item\label{c.6.3.8} $[T(w):\Delta(w_0)]=1$. \item\label{c.6.3.9} $[T(w):\nabla(w_0)]=1$. \end{enumerate} \end{corollary}
We remark that, though $\Delta(w_0)\cong \nabla(w_0)$ is a simple module, the numbers $[T(w):\Delta(w_0)]$ and $[T(w):\nabla(w_0)]$ are not the composition multiplicities, but the multiplicities in the standard and the costandard filtrations of $T(w)$ respectively.
\begin{proof} By \cite[Section~7]{FKM}, $T(w)\hookrightarrow P(w_0w)$ and the restriction induces an isomorphism for the endomorphism rings. Hence the equivalence of \eqref{c.6.3.2}, \eqref{c.6.3.3}, and \eqref{c.6.3.35} follows from the self-duality of $T(w)$ and \cite[Theorem~7.1]{St2}. That \eqref{c.6.3.3} implies \eqref{c.6.3.1} follows from \cite[Proposition~2.4.1]{BGS}. From the proof of Theorem~\ref{t6.1} and \cite[Theorem~3.6]{St} it follows that the highest and the lowest graded components of $T(w)$ are one-dimensional. Hence if $T(w)$ does not have simple top, its graded filtration, which is a Loewy one, does not coincide with the radical filtration and thus $T(w)$ is not rigid. This means that \eqref{c.6.3.1} implies \eqref{c.6.3.3}. Since $L(w_0)$ is the socle of any Verma module, it follows that \eqref{c.6.3.5} is equivalent to \eqref{c.6.3.3}. And, using the self-duality of both $T(w)$ and $P(w_0)$ we have that \eqref{c.6.3.5} is equivalent to \eqref{c.6.3.4}.
The equivalence of \eqref{c.6.3.6} and \eqref{c.6.3.7} and the equivalence of \eqref{c.6.3.8} and \eqref{c.6.3.9} follow using the simple preserving duality on $\mathcal{O}$. Since $[P(w_0):\Delta(w')]=1$ for all $w'$, we get that \eqref{c.6.3.5} implies \eqref{c.6.3.6}. Let $T(w)$ be such that \eqref{c.6.3.6} is satisfied. Then, in particular, $[T(w):\Delta(w_0)]\leq 1$. Since $L(w_0)$ is the simple socle of any Verma module, the self-duality of $T(w)$ implies $[T(w):\Delta(w_0)]=1$, which, in turn, implies that $T(w)$ has simple top, giving \eqref{c.6.3.3}. Moreover, the same argument shows that \eqref{c.6.3.8} implies \eqref{c.6.3.3}. That \eqref{c.6.3.6} implies \eqref{c.6.3.8} is obvious, and the proof is complete. \end{proof}
We remark that (in the case when the equivalent conditions of Corollary~\ref{c6.3} are satisfied) the surjection of the center of the universal enveloping algebra onto $\operatorname{End}_A(T(w))$ is graded with respect to the grading on the center, considered in \cite{So}.
\begin{corollary}\label{c6.101} Let $w\in W$, and $s$ be a simple reflection. Then $\theta_s T(w)=T(w)\langle 1\rangle \oplus T(w)\langle -1\rangle $ if $l(ws)>l(w)$ and $\theta_s T(w)\in\mathrm{add}(T)$ (as a graded module) otherwise. \end{corollary}
\begin{proof} In the case $l(ws)>l(w)$ the statement follows from \cite[Section~7]{FKM} and \cite[Section~8]{St2}. If $l(ws)< l(w)$ then Theorem~\ref{t6.1} and \cite[Section~8]{St2} imply that $\theta_s T(w)$ has a graded Verma flag, and all Verma subquotients in this flag are of the form $\Delta(x)\langle k\rangle$, $k\geq 0$. The self-duality of $\theta_s T(w)$ now implies that $\theta_s T(w)\in\mathrm{add}(T)$. \end{proof}
One more corollary of Theorem~\ref{t6.1} is the following:
\begin{proposition}\label{p6.5} $A$ is a balanced algebra, in particular, all standard, costandard, and simple $A$-modules are $T$-Koszul. \end{proposition}
\begin{proof} That standard and costandard $A$-modules are $T$-Koszul follows from the fact that $A$ is standard Koszul (see \cite{ADL}) and Theorem~\ref{t6.1}. Hence $A$ is SCT by Theorem~\ref{t6.1} and Corollary~\ref{c55.89}. Further, the Koszul grading on $A\mathrm{-mod}$ induces on $R(A)^!\mathrm{-mod}$ the Koszul grading by \cite[Theorem~3]{ADL}. In particular, from Theorem~\ref{t6.1} it follows that $R(A)^!$ is SCK, that is $A$ is balanced. That simple $A$-modules are $T$-Koszul now follows from Theorem~\ref{t55.12}. \end{proof}
With the same argument and using the result of Catharina Stroppel presented in the Appendix, one gets that the algebras of the blocks of the parabolic analogue of the category $\mathcal{O}$ in the sense of \cite{RC} are also balanced.
We also remark that projective $A$-modules are not $T$-Koszul in general. For example, already for $\mathfrak{sl}_2$ we have $P(s_{\alpha})\cong T(e)\langle -1\rangle$ and thus $P(s_{\alpha})$ is not $T$-Koszul.
\begin{corollary}\label{c6.7} Let $A$ be the associative algebra of the regular block of the category $\mathcal{O}$ endowed with Koszul grading. Then the category of linear bounded tilting complexes of $A$-modules is equivalent to $A\mathrm{-gmod}$. \end{corollary}
\begin{proof} Since $A$ has a simple preserving duality, it is isomorphic to $A^{opp}$, moreover, $A$ is Koszul self-dual by \cite{So} and Ringel self-dual by \cite{So2}. Hence the necessary statement follows from Corollary~\ref{c55.7}. \end{proof}
For singular blocks Corollary~\ref{c55.7} and \cite{BGS} imply that the category of linear bounded tilting complexes of $A$-modules is equivalent to the category of graded modules over the regular block of the parabolic category $\mathcal{O}$ with the same stabilizer (and vice versa).
\section{Appendix (written by Catharina Stroppel)}\label{sapp}
In this appendix we reprove Theorem~\ref{t6.1} in a way which implies the corresponding statement for the parabolic category $\mathcal{O}$. Our methods also provide an example for the theory developed in the paper in the context of properly stratified algebras. Since we do not use any new techniques, we refer mainly to the literature. We have to recall several constructions and definitions. We restrict ourselves to the case of the principal block to avoid even more notation.
For an algebra $A$ we denote by $\mathrm{mod-}A$ ($A\mathrm{-mod-}A$ respectively) the category of finitely generated right $A$-modules (finitely generated $A$-bimodules). If $A$ is graded, then we denote by $\mathrm{gmod-}A$ and $A\mathrm{-gmod-}A$ the corresponding categories of graded modules.
Let $\mathfrak{g}$ be a semisimple Lie algebra with fixed Borel and Cartan subalgebras $\mathfrak{b}$, $\mathfrak{h}$, Weyl group $W$ with longest element $w_0$, and corresponding category $\mathcal{O}$. Let $\mathcal{O}_0$ be the principal block of $\mathcal{O}$ with the simple modules $L(x\cdot0)$ of highest weight $x(\rho)-\rho$, where $x\in W$ and $\rho$ denotes the half-sum of positive roots. Let $P(x\cdot0)$ be the projective cover of $L(x\cdot0)$.
Let $\mathcal{H}$ denote the category of Harish-Chandra bimodules with generalized trivial central character from both sides (as considered for example in \cite{SHC}). Let $\chi$ denote the trivial central character. For any $n\in\mathbb{Z}_{>0}$ we have the full subcategories $\mathcal{H}^n$ (and $^n\mathcal{H}$ respectively) of $\mathcal{H}$ given by objects $X$ such that $X\mathrm{Ker}\chi^n=0$ ($\mathrm{Ker}\chi^n X=0$ respectively). There is an auto-equivalence $\eta$ of $\mathcal{H}$, given by switching the left and right action of $U(\mathfrak{g})$ (see \cite[6.3]{Ja}), and giving rise to equivalences $\mathcal{H}^n\cong{}^n\mathcal{H}$. For $s$ a simple reflection we have translation functors through the $s$-wall: $\theta_s$ from the left hand side and $\theta_s^r$ from the right hand side (for a definition see \cite[6.33]{Ja} or more explicitly \cite[2.1]{Sthom}). In particular, $\eta\theta_s\cong\theta_s^r\eta$. Recall the equivalence (\cite[Theorem 5.9]{BG}) $\epsilon:\mathcal{H}^1\cong\mathcal{O}_0$. We denote by $L_x=\epsilon^{-1} L(x\cdot 0)$ and consider it also as an object in $\mathcal{H}$. Note that $\eta L_x\cong L_{x^{-1}}$ (see \cite[6.34]{Ja}). Let $P^n_x$ and ${}^nP_x$ be the projective cover of $L_x$ in $\mathcal{H}^n$ and ${}^n\mathcal{H}$ respectively. In particular, $\eta P^n_x\cong {}^n P_x$.
Recall the structural functor $\mathbb{V}:\mathcal{H}\rightarrow S(\mathfrak{h})\mathrm{-mod-}S(\mathfrak{h})$ from \cite{SHC}. We equip the algebra $S=S(\mathfrak{h})$ with a $\mathbb{Z}$-grading such that $\mathfrak{h}$ is sitting in degree two. In \cite{SHC} it is proved that $\mathbb{V} P^n_x$ has a graded lift. By abuse of language, we denote the graded lift having $-l(x)$ as its lowest degree also by $\mathbb{V} P^n_x$. Let $A^n=\mathrm{End}_{S\mathrm{-gmod-}S}(\bigoplus_{x\in W}\mathbb{V} P^n_x)$. Then $A^n$ is a graded algebra such that $\mathcal{H}^n\cong\mathrm{mod-}A^n$. In particular, $A^1$ is the Koszul algebra corresponding to $\mathcal{O}_0$ (\cite{BGS}). On the other hand we have ${}^nA=\mathrm{End}_\mathcal{H}(\bigoplus_{x\in W}{}^nP_x)$ and the corresponding equivalence ${}^n\mathcal{H}\cong\mathrm{mod-}{}^nA$. Concerning the notation we will not distinguish between objects in $\mathcal{H}^n$ and $\mathrm{mod-}A^n$ or between objects in $^n\mathcal{H}$ and $\mathrm{mod-}{}^nA$. We fix a grading on $^nA$ such that $\eta$ lifts to equivalences $\tilde\eta:\mathrm{gmod-}{}^nA\cong \mathrm{gmod-}A^n$ preserving the degrees in which a simple module is concentrated. More precisely, $\tilde\eta L(x)\cong L(x^{-1})$, where $L(x)$ denotes the graded lift of $L_x$, concentrated in degree zero, in the corresponding category.
Let us fix $n$. For $s$ a simple reflection we denote by $S^s$ the $s$-invariants in $S$. We define $\tilde\theta_s:\mathrm{gmod-}A^n\rightarrow\mathrm{gmod-}A^n$ as tensoring with the graded $A^n=\mathrm{End}_{S\mathrm{-gmod-}S}(\bigoplus_{x\in W}\mathbb{V} P^n_x)$ bimodule $\mathrm{Hom}_{S\mathrm{-gmod-}S}(\bigoplus_{x\in W}\mathbb{V} P^n_x,\bigoplus_{x\in W}S\otimes_{S^s}\mathbb{V} P^n_x\langle -1\rangle)$. Because of \cite[Lemma 10]{SHC}, this is a graded lift (in the sense of \cite{St}) of the translation functor $\theta_s:\mathcal{H}^n\rightarrow\mathcal{H}^n$. As in \cite{St} we have the adjunction morphisms $\mathrm{ID}\langle 1\rangle\rightarrow\tilde\theta_s$ and $\tilde\theta_s\rightarrow\mathrm{ID}\langle -1\rangle$. Define $\tilde\theta_s^r=\eta\tilde\theta_s\eta:{}^nA\mathrm{-gmod}\rightarrow {}^nA\mathrm{-gmod}$. We have again the adjunction morphism $a_s^{(n)}:\mathrm{ID}\langle 1\rangle\rightarrow\tilde\theta_s^r$. Let $T_s^{(n)}$ denote the functor given by taking the cokernel of $a_s^{(n)}$. We fix a compatible system of surjections $P^n\twoheadrightarrow P^m$ for $n\geq m$. It gives rise to a system of graded projections $p_{n,m}:{}^nA\twoheadrightarrow {}^mA$ for $n\geq m$. Let $^\infty A=\varprojlim{}^nA$ and $^\infty T_s=\varprojlim T_s^{(n)}:\mathrm{gmod-}^\infty A\rightarrow \mathrm{gmod-}^\infty A$. Note that $^\infty T_s$ preserves the category $\mathrm{gmod-}A^1$ (considered as a subcategory of $\mathrm{gmod-}{}^\infty A$). In fact, it is a graded lift of Arkhipov's twisting functor (as considered in \cite{AS}, \cite{KM}). Let $T_s:\mathrm{gmod-}A^1\rightarrow \mathrm{gmod-}A^1$ denote the restriction of $^\infty T_s$. For $x\in W$ with reduced expression ${[x]}=s_{i_1}s_{i_2}\cdots s_{i_r}$ set $T_{[x]}=T_{s_{i_1}}T_{s_{i_2}}\cdots T_{s_{i_r}}$. Set $A=A^1$.
\begin{proposition}\label{prop} Let $x\in W$ and let $s\in W$ be a simple reflection. Then the following holds \begin{enumerate} \item The functor $T_{[x]}$ is (up to isomorphism) independent of the chosen reduced expression. \item Moreover, if $sx>x$ and $\Delta(x)\in\mathrm{gmod-}A$ denotes the graded lift of the Verma module with simple head $L(x)$ (concentrated in degree zero), then $T_s\Delta(x)\cong\Delta(sx)$ and $T_s\nabla(sx)\cong\nabla(x)$, where $\nabla(x)$ denotes the graded lift of the dual Verma module with socle $L(x)$ (concentrated in degree zero). \end{enumerate} \end{proposition}
\begin{proof} We now consider the adjunction morphism $b_s:\mathrm{ID}\rightarrow\tilde\theta_s^r$ between endofunctors on $\mathrm{mod-}{}^nA$. Let $\tilde{T}_s$ denote the functor given by taking the cokernel of $b_s$, restricted to $\mathrm{mod-}A$. Let $\tilde T_{[x]}=\tilde{T}_{s_{i_1}}\tilde{T}_{s_{i_2}}\cdots\tilde{T}_{s_{i_r}}$, where $[x]=s_{i_1}s_{i_2}\cdots s_{i_r}$. Then $\tilde{T}_{[x]}$ does not depend on the chosen reduced expression (\cite{Joseph}, \cite{KM}). If we show that $\tilde{T}_{[x]}$ is indecomposable, then a graded lift is unique up to isomorphism and grading shift, and the statement follows, for example, from the second part of the proposition. Set $G=\tilde{T}_{[x]}$. Let us prove the indecomposability: we claim that the canonical evaluation morphism $\mathrm{End}(G)\rightarrow \mathrm{End}_\mathfrak{g}(G P(w_0\cdot0))$, $\varphi\mapsto\varphi_{P(w_0\cdot0)}$, is an isomorphism. Assume $\varphi_{P(w_0\cdot0)}=0$. Let $P$ be a projective object in $\mathcal{O}_0$. Then there is a short exact sequence \begin{equation}\label{eq:ses} 0\rightarrow P\rightarrow\oplus_{I} P(w_0\cdot0)\rightarrow Q\rightarrow 0 \end{equation} for some finite set $I$ and some module $Q$ having a Verma flag. (To see this consider the projective Verma module. It is the unique Verma submodule of $P(w_0\cdot0)$, hence the desired sequence exists. The existence of the sequence for any projective object then follows using translation functors.) By \cite[Lemma 2.1]{AS}, we get an exact sequence $0\rightarrow G P\rightarrow\oplus_{I} GP(w_0\cdot0)\rightarrow GQ$. Hence $\varphi_{P(w_0\cdot0)}=0$ implies $\varphi_P=0$ for any projective object $P$. Since $G$ is right exact, it follows that $\varphi=0$. Let $g\in\mathrm{End}_\mathfrak{g}(G P(w_0\cdot0))$. Since $\mathrm{End}_\mathfrak{g}(G P(w_0\cdot0))\cong\mathrm{End}_\mathfrak{g}(P(w_0\cdot0))$ (\cite[Proposition 5.3]{AS}), $g$ defines an endomorphism of $G$ when restricted to the additive category generated by $P(w_0\cdot0)$.
Note that (by taking the injective hull of $Q$) the sequence \eqref{eq:ses} gives rise to an exact sequence \begin{equation*} 0\rightarrow P\rightarrow\oplus_{I} P(w_0\cdot0)\rightarrow \oplus_{I'} P(w_0\cdot0) \end{equation*} for some finite sets $I$, $I'$. Using again \cite[Lemma 2.1]{AS} we get an exact sequence \begin{equation*} 0\rightarrow G P\rightarrow\oplus_{I} G P(w_0\cdot0)\rightarrow\oplus_{I'} G P(w_0\cdot0). \end{equation*} Hence $g$ defines an endomorphism $g_P$ of $P$. Standard arguments show that this is independent of the chosen exact sequence. Since $G$ is right exact, $g$ extends uniquely to an endomorphism $\varphi$ of $G$. By construction $\varphi_{P(w_0\cdot0)}=g$. This proves the surjectivity. Since $\mathrm{End}_\mathfrak{g}(G P(w_0\cdot0))\cong \mathrm{End}_\mathfrak{g}(P(w_0\cdot0))$ is a local ring, the functor $G$ is indecomposable. This proves the first part of the proposition.
We have $\tilde{T}_s f(\Delta(x))\cong f(\Delta(sx))$, where $f$ denotes the grading forgetting functor. Hence, $T_s(\Delta(x))\cong \Delta(sx)\langle k\rangle$ for some $k\in\mathbb{Z}$. On the other hand $\eta T_s\eta\Delta(x^{-1})\cong\Delta(x^{-1}s)$ (\cite[Theorem~3.6]{St}). Hence $k=0$ and $T_s\Delta(x)\cong\Delta(sx)$. Forgetting the grading we have $\tilde{T}_s f(\nabla(sx))\cong f(\nabla(x))$. On the other hand $\eta T_s\eta\nabla((sx)^{-1})\cong\nabla(x^{-1})$ (\cite[Theorem 3.10]{St}). The second part of the proposition follows. \end{proof}
Since $T_{[x]}$ does not depend on the chosen reduced expression, we simply denote it by $T_x$ in what follows. Let $P(x)\in\mathrm{gmod-}A$ be the indecomposable projective module with simple head $L(x)$ concentrated in degree zero. Set $P=\bigoplus_{x\in W}P(x)$. Let $T(x)$ denote the graded lift of an indecomposable tilting module, characterized by the property that $\Delta(x)$ is a submodule and $\nabla(x)$ is a quotient. Let $T=\bigoplus_{x\in W} T(x)$.
\begin{theorem}\label{tapp} Let $x\in W$. There is an isomorphism of graded algebras \begin{eqnarray*} \mathrm{End}_{A}(P)\cong\mathrm{End}_{A}(T_x P). \end{eqnarray*} For $x=w_0$ we get in particular \begin{eqnarray*} \mathrm{End}_{A}(P)\cong\mathrm{End}_{A}(T). \end{eqnarray*} \end{theorem}
\begin{proof} The first isomorphism follows directly from \cite[Lemma 2.1]{AS} and the definition of $T_x$. For the second we claim that $T_{w_0} P(y)\cong T(w_0y)$. By Proposition~\ref{prop} we have $T_{w_0}P(e)\cong \Delta(w_0)$. Hence, the statement is true for $y=e$. Using translation functors we directly get $T_{w_0} P(y)\cong T(w_0y)\langle k\rangle$ for some $k\in\mathbb{Z}$. On the other hand $P(y)$ surjects onto $\Delta(y)$. Then $T_{w_0}P(y)$ surjects onto $T_{w_0}\Delta(y)$. The latter is isomorphic to $\nabla(w_0y)$, since $\Delta(w_0)\cong\nabla(w_0)$. Hence $k=0$, and the claim follows. \end{proof}
Let $\ensuremath{\mathfrak{p}}$ be a parabolic subalgebra of $\mathfrak{g}$ with corresponding parabolic subgroup $W_\ensuremath{\mathfrak{p}}$ of $W$. Let $\mathcal{O}_0^\ensuremath{\mathfrak{p}}$ be the full subcategory of $\mathcal{O}_0$ given by locally $\ensuremath{\mathfrak{p}}$-finite objects. If $P\in\mathcal{O}_0$ is a minimal projective generator, then its maximal quotient $P^\ensuremath{\mathfrak{p}}$ contained in $\mathcal{O}_0^\ensuremath{\mathfrak{p}}$ is a minimal projective generator of $\mathcal{O}_0^\ensuremath{\mathfrak{p}}$ and $\mathrm{End}_\mathfrak{g}(P^\ensuremath{\mathfrak{p}})$ inherits a grading from $A=\operatorname{End}_\mathfrak{g}(P)$. We will then consider the category $\mathrm{gmod-}A^\ensuremath{\mathfrak{p}}$ as the full subcategory of $\mathrm{gmod-}A$ given by all objects having only composition factors of the form $L(x)\langle k\rangle$, where $k\in\mathbb{Z}$ and $x\in W^\ensuremath{\mathfrak{p}}$, the set of shortest coset representatives of $W_\ensuremath{\mathfrak{p}}\backslash W$. Let $\Delta^\ensuremath{\mathfrak{p}}(x)\in\mathrm{gmod-}A^\ensuremath{\mathfrak{p}}$, $\nabla^\ensuremath{\mathfrak{p}}(x)$ be the standard graded lifts of the standard and costandard modules in $\mathcal{O}_0^\ensuremath{\mathfrak{p}}$ (which were denoted by $\Delta(x)$ and $\nabla(x)$ in Section~\ref{s6}). Let $T^\ensuremath{\mathfrak{p}}$ be the module $T$ from Corollary~\ref{c6.101} for the category $\mathrm{gmod}-A^\ensuremath{\mathfrak{p}}$. Then Theorem~\ref{t6.1} generalizes to the following
\begin{corollary}\label{capp} $\mathrm{End}_{A^\ensuremath{\mathfrak{p}}}(T^\ensuremath{\mathfrak{p}})$ is positively graded, moreover, it is generated in degrees $0$ and $1$. Furthermore, $\nabla^\ensuremath{\mathfrak{p}}$ admits an LT-resolution. \end{corollary}
\begin{proof} Let $w=w_0^\ensuremath{\mathfrak{p}}\in W^\ensuremath{\mathfrak{p}}$ be the longest element. Then $\Delta^\ensuremath{\mathfrak{p}}(w)$ is a tilting module and canonically a quotient of $\Delta(w)\cong T_w\Delta(e)=T_wP(e)$. Using translation functors we get that $T^\ensuremath{\mathfrak{p}}$ is a quotient of $T_w P$. Hence, there is a surjection of graded algebras from $\mathrm{End}_{A}(T_wP)\cong\mathrm{End}_{A}(P)$ onto $\mathrm{End}_{A}(T^\ensuremath{\mathfrak{p}})$. Hence $\mathrm{End}(T_w P^\ensuremath{\mathfrak{p}})\cong\operatorname{End}_{A^\ensuremath{\mathfrak{p}}}(T^\ensuremath{\mathfrak{p}})$ is positively graded and generated in degrees 0 and 1. The existence of the resolution follows using the same arguments as in the proof of Theorem~\ref{t6.1}. \end{proof}
\begin{center} {\bf Acknowledgments} \end{center}
The research was done during the visit of the second author to Uppsala University, which was partially supported by the Royal Swedish Academy of Sciences, and by The Swedish Foundation for International Cooperation in Research and Higher Education (STINT). This support and the hospitality of Uppsala University are gratefully acknowledged. The first author was also partially supported by the Swedish Research Council. We thank Catharina Stroppel for many useful remarks and comments on the preliminary version of the paper and for writing the Appendix.
\noindent Volodymyr Mazorchuk, Department of Mathematics, Uppsala University, Box 480, 751 06, Uppsala, SWEDEN, e-mail: {\tt mazor\symbol{64}math.uu.se}, web: {``http://www.math.uu.se/$\tilde{\hspace{1mm}}$mazor/''}.
\noindent Serge Ovsienko, Department of Mechanics and Mathematics, Kyiv Taras Shevchenko University, 64, Volodymyrska st., 01033, Kyiv, Ukraine, e-mail: {\tt ovsienko\symbol{64}zeos.net}.
\end{document} |
\begin{document}
\begin{center} \textbf{\LARGE{Inequalities having Seven Means \\ and Proportionality Relations}} \end{center}
\begin{center} \textbf{\large{Inder J. Taneja}}\\ Departamento de Matem\'{a}tica\\ Universidade Federal de Santa Catarina\\ 88.040-900 Florian\'{o}polis, SC, Brazil.\\ \textit{e-mail: ijtaneja@gmail.com\\ http://www.mtm.ufsc.br/$\sim $taneja} \end{center}
\begin{abstract} In 2003, Eves \cite{eve} studied seven means from a geometrical point of view. These means are the \textit{Harmonic, Geometric, Arithmetic, Heronian, Contra-harmonic, Root-mean-square and Centroidal means}. Some of these means are particular cases of Gini's \cite{gin} mean of order $r$ and $s$. In this paper we establish some proportionality relations among these means. Some inequalities among the differences arising from the seven-means inequalities are also established. \end{abstract}
\textbf{Key words:} \textit{Arithmetic mean, Geometric mean, Heronian mean, triangular discrimination, Hellinger's distance}
\textbf{AMS Classification:} 94A17; 26A48; 26D07.
\section{Seven Geometrical Means}
Let $a,\,b>0$ be two positive numbers. In 2003, Eves \cite{eve} studied the geometrical interpretation of the following seven means: \begin{enumerate} \item Arithmetic mean: $A(a,b)=\left( {a+b} \right)/2$; \item Geometric mean: $G(a,b)=\sqrt {ab}$; \item Harmonic mean: $H(a,b)=2ab/\left( {a+b} \right)$; \item Heronian mean: $N(a,b)=\left( {a+\sqrt {ab} +b} \right)/3$; \item Contra-harmonic mean: $C(a,b)=\left( {a^{2}+b^{2}} \right)/\left( {a+b} \right)$; \item Root-mean-square: $S(a,b)=\sqrt {\left( {a^{2}+b^{2}} \right)/2}$; \item Centroidal mean: $R(a,b)=2\left( {a^{2}+ab+b^{2}} \right)/3\left( {a+b} \right)$. \end{enumerate}
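As a quick numerical sanity check of these definitions, the following sketch (the Python function names are of our own choosing) evaluates the seven means at a sample pair and confirms the ordering $H\le G\le N\le A\le R\le S\le C$ established below:

```python
import math

# The seven means of Eves for a, b > 0 (function names are ours).
def A(a, b): return (a + b) / 2                       # arithmetic
def G(a, b): return math.sqrt(a * b)                  # geometric
def H(a, b): return 2 * a * b / (a + b)               # harmonic
def N(a, b): return (a + math.sqrt(a * b) + b) / 3    # Heronian
def C(a, b): return (a**2 + b**2) / (a + b)           # contra-harmonic
def S(a, b): return math.sqrt((a**2 + b**2) / 2)      # root-mean-square
def R(a, b): return 2 * (a**2 + a * b + b**2) / (3 * (a + b))  # centroidal

# The chain H <= G <= N <= A <= R <= S <= C holds for all a, b > 0.
a, b = 2.0, 8.0
values = [M(a, b) for M in (H, G, N, A, R, S, C)]
assert all(x <= y for x, y in zip(values, values[1:]))
```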
Except for 4 and 7, the above means are particular cases of the well-known Gini mean \cite{gin} of order $r$ and $s$, given by \begin{equation} \label{eq1} E_{r,s} (a,b)=\begin{cases}
{\left( {\frac{a^{r}+b^{r}}{a^{s}+b^{s}}} \right)^{\frac{1}{r-s}}} & {r\ne s} \\
{\sqrt {ab} } & {r=s=0} \\
{\exp \left( {\frac{a^{r}\ln a+b^{r}\ln b}{a^{r}+b^{r}}} \right)} & {r=s\ne 0} \\ \end{cases}. \end{equation}
In particular, we have $E_{-1,0} =H$, $E_{-1/2,1/2} =G$, $E_{0,1} =A$, $E_{0,2} =S$ and $E_{1,2} =C$. We have $E_{r,s} =E_{s,r}$, and the Gini mean $E_{r,s} (a,b)$ is an increasing function in both $r$ and $s$ \cite{czp}. In view of this we have $H\le G\le A\le S\le C$. Moreover, we can easily verify the following inequalities involving the above seven means: \begin{equation} \label{eq2} H\le G\le N\le A\le R\le S\le C. \end{equation} We can write $M(a,b)=b\,f_{M} (a/b)$, where $M$ stands for any of the above seven means; then we have \begin{equation} \label{eq3} f_{H} (x)\le f_{G} (x) \le f_{N} (x)\le f_{A} (x)\le f_{R} (x)\le f_{S} (x)\le f_{C} (x), \end{equation}
\noindent where $f_{H} (x)=2x/(x+1)$, $f_{G} (x)=\sqrt x $, $f_{N} (x)=(x+\sqrt x +1)/3$, $f_{A} (x)=(x+1)/2$, $f_{R} (x)=2(x^{2}+x+1)/3(x+1)$, $f_{S} (x)=\sqrt {(x^{2}+1)/2} $ and $f_{C} (x)=(x^{2}+1)/(x+1)$, $\forall x>0$. Equality holds in (\ref{eq3}) iff $x=1$. For simplicity, let us write \begin{equation} \label{eq4} D_{UV} =b\,f_{UV} (a/b), \end{equation}
\noindent where $f_{UV} (x)=f_{U} (x)-f_{V} (x)$, with $U\ge V$. The inequalities appearing in (\ref{eq2}) admit 21 nonnegative differences. Some of these coincide up to multiplicative constants, as given below: \begin{align} \label{eq5} & \Delta :=3D_{CR} =2D_{AH} =2D_{CA} =D_{CH} =6D_{RA} =\textstyle{3 \over 2}D_{RH},\\ \label{eq6} & h:=3D_{AN} =D_{AG} =\textstyle{3 \over 2}D_{NG} \intertext{and} \label{eq7} & D_{CG} =3D_{RN}. \end{align}
The measures $\Delta $ and $h$ appearing in (\ref{eq5}) and (\ref{eq6}) are the \textit{triangular discrimination} \cite{lec} and \textit{Hellinger's distance} \cite{hel}, respectively, given by \[ \Delta (a,b)=\frac{\left( {a-b} \right)^{2}}{a+b} \] and \[ h(a,b)=\frac{\left( {\sqrt a -\sqrt b } \right)^{2}}{2}. \] More studies on these two measures can be found in \cite{tan1, tan2, tan3}.
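As an illustration of (\ref{eq5}), the relation $2D_{AH}=\Delta$ can be verified directly from (\ref{eq4}): \begin{align*} 2D_{AH}(a,b) &= 2b\,f_{AH}(a/b) = 2b\left[ {\frac{a/b+1}{2}-\frac{2a/b}{a/b+1}} \right] = 2b\cdot \frac{\left( {a/b-1} \right)^{2}}{2\left( {a/b+1} \right)} = \frac{\left( {a-b} \right)^{2}}{a+b} = \Delta(a,b). \end{align*} The remaining relations in (\ref{eq5})--(\ref{eq7}) follow by the same elementary computations.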
We shall considerably improve the inequalities given in (\ref{eq2}). For this we first need the convexity of the differences of means. In total, there are 21 differences; some of them are equal to each other up to multiplicative constants, and some of them are convex while others are not.
\section{Convexity of Difference of Means}
Let us now prove the convexity of some of the differences of means arising from the inequalities (\ref{eq2}). To do so we shall make use of the following lemma \cite{tan1, tan2}.
\begin{lemma} Let $f:I\subset {\rm R}_{+} \to {\rm R}$ be a convex and differentiable function satisfying $f(1)=0$. Consider the function \[ \varphi_{f} (a,b)=af\left( {\frac{b}{a}} \right), \quad a,b>0, \] then the function $\varphi_{f} (a,b)$ is convex in ${\rm R}_{+}^{2} $. Additionally, if $f^{\prime }(1)=0$, then the following inequality holds: \[ 0\le \varphi_{f} (a,b)\le \left( {\frac{b-a}{a}} \right)\varphi_{{f}'} (a,b). \] \end{lemma}
In all the cases, it is easy to check that $f_{UV} (1)=f_{U} (1)-f_{V} (1)=1-1=0$. According to Lemma 2.1, it is sufficient to show the convexity of the functions $f_{UV} (x)$, i.e., that the second order derivative of $f_{UV} (x)$ is nonnegative for all $x>0$. The second order derivatives of the convex differences are given below: \begin{align} & {f}''_{CS} (x)={f}''_{C} (x)-{f}''_{S} (x)=\frac{2\left[ {2\left( {2x^{2}+2} \right)^{3/2}-\left( {x+1} \right)^{3}} \right]}{(x+1)^{3}\left( {2x^{2}+2} \right)^{3/2}}, \notag\\ & {f}''_{CN} (x)={f}''_{C} (x)-{f}''_{N} (x)=\frac{48x^{3/2}+\left( {x+1} \right)^{3}}{12x^{3/2}(x+1)^{3}}>0, \notag\\ & {f}''_{CG} (x)={f}''_{C} (x)-{f}''_{G} (x)=\frac{16x^{3/2}+(x+1)^{3}}{4x^{3/2}(x+1)^{3}}>0, \notag\\ & {f}''_{SA} (x)={f}''_{S} (x)-{f}''_{A} (x)=\frac{1}{\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }>0,\notag\\ & {f}''_{SN} (x)={f}''_{S} (x)-{f}''_{N} (x)=\frac{12x^{3/2}+\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }{12x^{3/2}\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }>0, \notag\\ & {f}''_{SG} (x)={f}''_{S} (x)-{f}''_{G} (x)=\frac{4x^{3/2}+\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }{4x^{3/2}\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }>0, \notag\\ & {f}''_{SH} (x)={f}''_{S} (x)-{f}''_{H} (x)=\frac{\left( {x+1} \right)^{3}+4\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }{\left( {x+1} \right)^{3}\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} }>0, \notag\\ & {f}''_{AG} (x)={f}''_{A} (x)-{f}''_{G} (x)=\frac{1}{4x^{3/2}}>0, \notag\\ & {f}''_{AH} (x)={f}''_{A} (x)-{f}''_{H} (x)=\frac{4}{\left( {x+1} \right)^{3}}>0 \notag \intertext{and} & {f}''_{RG} (x)={f}''_{R} (x)-{f}''_{G} (x)=\frac{16x^{3/2}+3\left( {x+1} \right)^{3}}{12x^{3/2}\left( {x+1} \right)^{3}}>0. \notag \end{align}
\noindent Since $S\ge A$, we have $S^{3}\ge A^{3}$, i.e., $\left( {\sqrt {\frac{x^{2}+1}{2}} } \right)^{3}-\left( {\frac{x+1}{2}} \right)^{3}\ge 0$, which gives $2\left( {2x^{2}+2} \right)^{3/2}-\left( {x+1} \right)^{3}\ge 0$. Thus ${f}''_{CS} (x)\ge 0$ for all $x>0$. The differences of means ${D}_{SR}$, ${D}_{NH}$ and ${D}_{GH}$ are not convex.
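For instance, the non-convexity of $D_{SR}$ can be seen from \[ {f}''_{SR} (x)={f}''_{S} (x)-{f}''_{R} (x)=\frac{2}{\left( {2x^{2}+2} \right)^{3/2}}-\frac{4}{3\left( {x+1} \right)^{3}}, \] which is positive at $x=1$ (where it equals $\frac{1}{4}-\frac{1}{6}=\frac{1}{12}$) but behaves like $\left( {2^{-1/2}-\frac{4}{3}} \right)x^{-3}<0$ as $x\to \infty $, so it changes sign on $(0,\infty )$.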
\section{Inequalities among Differences of Means}
In this section we shall derive sequences of inequalities based on the differences arising from (\ref{eq2}). We shall present this in two parts. The results given in this section are based on applications of the following lemma \cite{tan1, tan2}:
\begin{lemma} Let $f_{1} ,f_{2} :I\subset {\rm R}_{+} \to {\rm R}$ be two convex functions satisfying the assumptions:
(i) $f_{1} (1)=f_{1}^{\prime }(1)=0$, $f_{2} (1)=f_{2}^{\prime }(1)=0$;
(ii) $f_{1} $ and $f_{2} $ are twice differentiable in ${\rm R}_{+} $;
(iii) there exist real constants $\alpha ,\beta $ such that $0\le \alpha <\beta $ and \[ \alpha \le \frac{f_{1}^{\prime \prime }(x)}{f_{2}^{\prime \prime }(x)}\le \beta , \quad f_{2}^{\prime \prime }(x)>0, \] for all $x>0$. Then we have the inequalities \[ \alpha \mbox{\thinspace }\varphi_{f_{2} } (a,b)\le \varphi_{f_{1} } (a,b)\le \beta \mbox{\thinspace }\varphi_{f_{2} } (a,b), \] for all $a,b\in (0,\infty )$, where the function $\varphi_{(\cdot )} (a,b)$ is as defined in Lemma 2.1. \end{lemma}
The inequalities appearing in (\ref{eq2}) admit 21 nonnegative differences. These differences satisfy some simple inequalities, given by the following \textbf{pyramid}:
\[ D_{GH} ; \] \[ D_{NG} \le D_{NH} ; \] \[ D_{AN} \le D_{AG} \le D_{AH} ; \] \[ D_{RA} \le D_{RN} \le D_{RG} \le D_{RH} ; \] \[ D_{SR} \le D_{SA} \le D_{SN} \le D_{SG} \le D_{SH} ; \] \[ D_{CS} \le D_{CR} \le D_{CA} \le D_{CN} \le D_{CG} \le D_{CH}, \]
\noindent where $D_{GH} =G-H$, $D_{NG} =N-G$, $D_{CS} =C-S$, etc. As we have seen above, some of these differences are equal up to multiplicative constants.
The differences of means $D_{SR} $, $D_{NH} $ and $D_{GH} $ are not convex. The other, convex, measures satisfy some interesting inequalities with each other, given by the theorem below.
\begin{theorem} The following inequalities hold: \begin{equation} \label{eq19} D_{SA} \le \left\{ {\begin{array}{l}
\textstyle{3 \over 4}D_{SN} \\
\textstyle{1 \over 3}D_{SH} \le \textstyle{3 \over 4}D_{CR} \\
\end{array}} \right\}\le \left\{ {\begin{array}{l}
\textstyle{3 \over 7}D_{CN} \le \left\{ {\begin{array}{l}
D_{CS} \\
\textstyle{1 \over 3}D_{CG} \le \textstyle{3 \over 5}D_{RG} \\
\end{array}} \right. \\
\textstyle{1 \over 2}D_{SG} \le \textstyle{3 \over 5}D_{RG} \\
\end{array}} \right\}\le 3D_{AN} . \end{equation} \end{theorem}
\begin{proof} We shall prove the above result in parts, making frequent use of the second order derivatives given in Section 2. \begin{enumerate} \item \textbf{For }$\bf{D_{SA} \le \textstyle{3 \over 4}D_{SN}} $\textbf{: }Let us consider the function $g_{SA\mathunderscore SN} (x)={f}''_{SA} (x)/{f}''_{SN} (x)$. This gives \[ {g}'_{SA\mathunderscore SN} (x)=-\frac{72\left( {x^{2}-1} \right)\sqrt x \sqrt {2x^{2}+2} }{\left[ {24x^{3/2}+\left( {2x^{2}+2} \right)^{3/2}} \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right., \]
Also we have \begin{equation} \label{eq20} \beta_{SA\mathunderscore SN} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{SA\mathunderscore SN} (x)=g_{SA\mathunderscore SN} (1)=\frac{3}{4}. \end{equation}
By applying Lemma 3.1 with (\ref{eq20}) we get the required result.
\item \textbf{For }$\bf{D_{SA} \le \textstyle{1 \over 3}D_{SH}} $\textbf{: }Let us consider the function $g_{SA\mathunderscore SH} (x)={f}''_{SA} (x)/{f}''_{SH} (x)$. This gives \[ {g}'_{SA\mathunderscore SH} (x)=-\frac{12\left( {x-1} \right)\left( {x+1} \right)^{2}\sqrt {2x^{2}+2} }{\left[ {\left( {x+1} \right)^{3}+4\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} } \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right., \]
Also we have \begin{equation} \label{eq21} \beta_{SA\mathunderscore SH} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{SA\mathunderscore SH} (x)=g_{SA\mathunderscore SH} (1)=\frac{1}{3}. \end{equation}
By applying Lemma 3.1 with (\ref{eq21}) we get the required result.
\item \textbf{For }$\bf{D_{SH} \le \textstyle{9 \over 4}D_{CR}} $\textbf{: }Let us consider the function $g_{SH\mathunderscore CR} (x)={f}''_{SH} (x)/{f}''_{CR} (x)$. This gives \[ {g}'_{SH\mathunderscore CR} (x)=-\frac{9\left( {x-1} \right)\left( {x+1} \right)^{2}}{8\left( {x^{2}+1} \right)^{2}\sqrt {2x^{2}+2} } \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right.. \]
Also we have \begin{equation} \label{eq22} \beta_{SH\mathunderscore CR} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{SH\mathunderscore CR} (x)=g_{SH\mathunderscore CR} (1)=\frac{9}{4}. \end{equation}
By applying Lemma 3.1 with (\ref{eq22}) we get the required result.
\item \textbf{For }$\bf{D_{CR} \le \textstyle{4 \over 7}D_{CN}} $\textbf{: }Let us consider the function $g_{CR\mathunderscore CN} (x)={f}''_{CR} (x)/{f}''_{CN} (x)$. This gives \[ {g}'_{CR\mathunderscore CN} (x)=-\frac{48\sqrt x \left( {x-1} \right)\left( {x+1} \right)^{2}}{\left[ {48x^{3/2}+\left( {x+1} \right)^{3}} \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right.. \]
Also we have \begin{equation} \label{eq23} \beta_{CR\mathunderscore CN} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{CR\mathunderscore CN} (x)=g_{CR\mathunderscore CN} (1)=\frac{4}{7}. \end{equation}
By applying Lemma 3.1 with (\ref{eq23}) we get the required result.
\item \textbf{For }$\bf{D_{CR} \le \textstyle{2 \over 3}D_{SG}} $\textbf{: }Let us consider the function $g_{CR\mathunderscore SG} (x)={f}''_{CR} (x)/{f}''_{SG} (x)$. This gives \[ {g}'_{CR\mathunderscore SG} (x)=-\frac{16\left( {x-1} \right)\sqrt x \sqrt {2x^{2}+2} \times v_{1} (x)}{\left( {x+1} \right)^{3}\left[ {4x^{3/2}+\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} } \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right., \] where \[ v_{1} (x)=\left( {x^{2}+1} \right)^{2}\sqrt {2x^{2}+2} -8x^{5/2} =8\left[ {\left( {\sqrt {\frac{x^{2}+1}{2}} } \right)^{5}-\left( {\sqrt x } \right)^{5}} \right]>0. \]
The above expression holds since $S>G$, $\forall x>0$, $x\ne 1$. Also we have \begin{equation} \label{eq24} \beta_{CR\mathunderscore SG} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{CR\mathunderscore SG} (x)=g_{CR\mathunderscore SG} (1)=\frac{2}{3}. \end{equation}
By applying Lemma 3.1 with (\ref{eq24}) we get the required result.
\item \textbf{For }$\bf{D_{SN} \le \textstyle{4 \over 7}D_{CN}} $\textbf{: } We have $\beta_{SN\mathunderscore CN} ={f}''_{SN} (1)/{f}''_{CN} (1)=\textstyle{4 \over 7}$. Now, we have to show that $\textstyle{4 \over 7}D_{CN} -D_{SN} \ge 0$, i.e., $\textstyle{1 \over 7}\left( {4C+3N-7S} \right)\ge 0$. We can write $\textstyle{4 \over 7}D_{CN} -D_{SN} =b\,f_{CN\mathunderscore SN} (a/b)$, where \[ f_{CN\mathunderscore SN} (x)=\textstyle{4 \over 7}f_{CN} (x)-f_{SN} (x) =\frac{1}{14\left( {x+1} \right)}\times v_{2} (x), \] with \[ v_{2} (x)=10x^{2}+10+4x+2x^{3/2}+2\sqrt x -7\left( {x+1} \right)\sqrt {2x^{2}+2} . \]
In order to prove the non-negativity of $v_{2} (x)$, let us consider the function \begin{align} h_{2} (x)& =\left( {10x^{2}+10+4x+2x^{3/2}+2\sqrt x } \right)^{2}-\left( {7\left( {x+1} \right)\sqrt {2x^{2}+2} } \right)^{2}\notag\\ & =\left( {2x^{2}+48x^{3/2}+68x+48\sqrt x +2} \right)\left( {\sqrt x -1} \right)^{4}.\notag \end{align}
Since $h_{2} (x)\ge 0$, it follows that $v_{2} (x)\ge 0$, $\forall x>0$. This implies that $f_{CN\mathunderscore SN} (x)\ge 0$, $\forall x>0$, proving the required result.
\textbf{Argument:} \textit{Let }$a$\textit{ and }$b$\textit{ be two positive numbers, i.e., }$a>0$\textit{ and }$b>0$\textit{. If }$a^{2}-b^{2}\ge 0$\textit{, then }$a\ge b$\textit{, because }$a-b=(a^{2}-b^{2})/(a+b)$\textit{. We have used this argument to prove }$v_{2} (x)\ge 0, \forall x>0. $
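As a numerical spot check of the factorization of $h_{2} (x)$, take $x=4$: the first squared term equals $\left( {160+10+16+16+4} \right)^{2}=42436$, the second equals $49\cdot 25\cdot 34=41650$, so $h_{2} (4)=786$, while the factored form gives $\left( {32+384+272+96+2} \right)\left( {\sqrt 4 -1} \right)^{4}=786$ as well.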
\item \textbf{For }$\bf{D_{SN} \le \textstyle{2 \over 3}D_{SG}} $\textbf{: }Let us consider the function $g_{SN\mathunderscore SG} (x)={f}''_{SN} (x)/{f}''_{SG} (x)$. This gives \[ {g}'_{SN\mathunderscore SG} (x)=-\frac{4\sqrt x \left( {x^{2}-1} \right)\sqrt {2x^{2}+2} }{\left[ {4x^{3/2}+\left( {x^{2}+1} \right)\sqrt {2x^{2}+2} } \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right.. \]
Also we have \begin{equation} \label{eq25} \beta_{SN\mathunderscore SG} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{SN\mathunderscore SG} (x)=g_{SN\mathunderscore SG} (1)=\frac{2}{3}. \end{equation}
By applying Lemma 3.1 with (\ref{eq25}) we get the required result.
\item \textbf{For }$\bf{D_{CN} \le \textstyle{7 \over 3}D_{CS}} $\textbf{: }We have $\beta_{CN\mathunderscore CS} ={f}''_{CN} (1)/{f}''_{CS} (1)=\textstyle{7 \over 3}$. Now, we have to show that $\textstyle{7 \over 3}D_{CS} -D_{CN} \ge 0$, i.e., $\textstyle{1 \over 3}\left( {4C+3N-7S} \right)\ge 0$. This is true in view of part 6.
\item \textbf{For }$\bf{D_{CS} \le 3D_{AN}} $\textbf{: }Let us consider the function $g_{CS\mathunderscore AN} (x)={f}''_{CS} (x)/{f}''_{AN} (x)$. This gives \[ {g}'_{CS\mathunderscore AN} (x)=-\frac{18\sqrt x \left( {x-1} \right)\times v_{3} (x)}{\left( {x^{2}+1} \right)^{2}\left( {x+1} \right)^{2}\sqrt {2x^{2}+2} } \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right., \] where \begin{align} v_{3} (x)& =4\left( {x^{2}+1} \right)^{2}\sqrt {2x^{2}+2} -\left( {x+1} \right)^{5}\notag\\ & =32\left[ {\left( {\sqrt {\frac{x^{2}+1}{2}} } \right)^{5}-\left( {\frac{x+1}{2}} \right)^{5}} \right]>0.\notag \end{align}
The above expression holds since $S>A$, $\forall x>0$, $x\ne 1$. Also we have \begin{equation} \label{eq26} \beta_{CS\mathunderscore AN} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{CS\mathunderscore AN} (x)=g_{CS\mathunderscore AN} (1)=3. \end{equation}
By applying Lemma 3.1 with (\ref{eq26}) we get the required result.
\item \textbf{For }$\bf{D_{CN} \le \textstyle{7 \over 9}D_{CG}} $\textbf{: }Let us consider the function $g_{CN\mathunderscore CG} (x)={f}''_{CN} (x)/{f}''_{CG} (x)$. This gives \[ {g}'_{CN\mathunderscore CG} (x)=-\frac{16\sqrt x \left( {x-1} \right)\left( {x+1} \right)^{2}}{\left[ {16x^{3/2}+\left( {x+1} \right)^{3}} \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right.. \]
Also we have \begin{equation} \label{eq27} \beta_{CN\mathunderscore CG} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{CN\mathunderscore CG} (x)=g_{CN\mathunderscore CG} (1)=\frac{7}{9}. \end{equation}
By applying Lemma 3.1 with (\ref{eq27}) we get the required result.
\item \textbf{For }$\bf{D_{SG} \le \textstyle{6 \over 5}D_{RG}} $\textbf{: }We have $\beta_{SG\mathunderscore RG} ={f}''_{SG} (1)/{f}''_{RG} (1)=\textstyle{6 \over 5}$. Now, we have to show that $\textstyle{6 \over 5}D_{RG} -D_{SG} \ge 0$, i.e., $\textstyle{1 \over 5}\left( {6R-G-5S} \right)\ge 0$. We can write $\textstyle{6 \over 5}D_{RG} -D_{SG} =b\,f_{SG\mathunderscore RG} (a/b)$, where \[ f_{SG\mathunderscore RG} (x)=\textstyle{6 \over 5}f_{RG} (x)-f_{SG} (x) =\frac{1}{10\left( {x+1} \right)}\times v_{4} (x), \] with \[ v_{4} (x)=8\left( {x^{2}+x+1} \right)-2\sqrt x \left( {x+1} \right)-5\left( {x+1} \right)\sqrt {2x^{2}+2} . \]
In order to prove the non-negativity of $v_{4} (x)$, let us consider the function \begin{align} h_{4} (x)& =\left[ {8\left( {x^{2}+x+1} \right)-2\sqrt x \left( {x+1} \right)} \right]^{2}-\left( {5\left( {x+1} \right)\sqrt {2x^{2}+2} } \right)^{2}\notag\\ & =\left( {14x^{2}+24x^{3/2}+44x+24\sqrt x +14} \right)\left( {\sqrt x -1} \right)^{4}.\notag \end{align}
Since $h_{4} (x)\ge 0$, it follows that $v_{4} (x)\ge 0$, $\forall x>0$. The non-negativity of the expression $8\left( {x^{2}+x+1} \right)-2\sqrt x \left( {x+1} \right)$ can be shown easily along the same lines, i.e., \begin{align} & \left[ {4\left( {x^{2}+x+1} \right)} \right]^{2}-\left[ {\sqrt x \left( {x+1} \right)} \right]^{2}\notag\\ & \hspace{20pt} =16x^{4}+31x^{3}+46x^{2}+31x+16>0.\notag \end{align}
This implies that $f_{SG\mathunderscore RG} (x)\ge 0$, $\forall x>0$, hence proving the required result.
\item \textbf{For }$\bf{D_{CG} \le \textstyle{9 \over 5}D_{RG}} $\textbf{: }Let us consider the function $g_{CG\mathunderscore RG} (x)={f}''_{CG} (x)/{f}''_{RG} (x)$. This gives \[ {g}'_{CG\mathunderscore RG} (x)=-\frac{144\sqrt x \left( {x-1} \right)\left( {x+1} \right)^{2}}{\left[ {16x^{3/2}+3\left( {x+1} \right)^{3}} \right]^{2}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right.. \]
Also we have \begin{equation} \label{eq28} \beta_{CG\mathunderscore RG} =\mathop {\sup }\limits_{x\in (0,\infty )} g_{CG\mathunderscore RG} (x)=g_{CG\mathunderscore RG} (1)=\frac{9}{5}. \end{equation}
By applying Lemma 3.1 with (\ref{eq28}) we get the required result.
\item \textbf{For }$\bf{D_{RG} \le 5D_{AN}} $\textbf{: }Let us consider the function $g_{RG\mathunderscore AN} (x)={f}''_{RG} (x)/{f}''_{AN} (x)$. This gives \[ {g}'_{RG\mathunderscore AN} (x)=-\frac{24\sqrt x \left( {x-1} \right)}{\left( {x+1} \right)^{4}} \left\{ {{\begin{array}{*{20}c}
{>0} & {x<1} \\
{<0} & {x>1} \\ \end{array} }} \right.. \]
Also we have \begin{equation} \label{eq29} \beta_{RG\mathunderscore AN} =\mathop{\sup }\limits_{x\in (0,\infty )} g_{RG\mathunderscore AN} (x)=g_{RG\mathunderscore AN} (1)=5. \end{equation}
By applying Lemma 3.1 with (\ref{eq29}) we get the required result. \end{enumerate} \end{proof}
\begin{remark} The above 13 parts allow us to write the inequalities in equivalent forms: \begin{multicols}{2} \begin{enumerate} \item $\frac{2G+S}{3}\le N;$ \item $\frac{2C+7G}{9}\le N;$ \item $\frac{S+3N}{4}\le A;$ \item $\frac{2S+H}{3}\le A;$ \item $\frac{2C+3N}{7}\le R;$ \item $\frac{G+5S}{6}\le R;$ \item $\frac{4G+5C}{9}\le R;$ \item $S\le \frac{4C+3N}{7};$ \item $9R+4S\le 9C+4H;$ \item $2G+3C\le 2S+3R;$ \item $3N+C\le 3A+S;$ \item $5N+R\le 5A+G.$ \end{enumerate} \end{multicols} \end{remark}
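As a numerical illustration of the first of these equivalent forms, take $a=4$ and $b=1$: then $G=2$, $S=\sqrt {17/2} \approx 2.9155$ and $N=7/3\approx 2.3333$, so that \[ \frac{2G+S}{3}\approx 2.3052\le 2.3333\approx N, \] in agreement with item 1.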
Based on these equivalent versions, here below is an improvement of the inequalities appearing in (\ref{eq2}):
\begin{proposition} The following inequalities hold: \[ H\le G\le \left\{ {\begin{array}{l}
\textstyle{{2G+S} \over 3} \\
\textstyle{{2C+7G} \over 9} \\
\end{array}} \right\}\le N\le \textstyle{{3A+S-C} \over 3}\le \left\{ {\begin{array}{l}
\textstyle{{S+3N} \over 4} \\
\textstyle{{2S+H} \over 3} \\
\end{array}} \right\}\le A\le \left\{ {\begin{array}{l}
\textstyle{{G+5S} \over 6} \\
\textstyle{{4G+5C} \over 9} \\
\end{array}} \right\}\le \] \begin{equation} \label{eq30} \le R\le \left\{ {\begin{array}{l}
S \\
5A+G-5N \\
\end{array}} \right\}\le \textstyle{{9C+4H-9R} \over 4}\le C\le \textstyle{{2S+3R-2G} \over 3}. \end{equation} \end{proposition}
The inequalities appearing in (\ref{eq30}) can be proved using arguments similar to those of Theorem 3.1.
\subsection{Proportionality Relations among Means}
As a part of (\ref{eq19}), let us consider the following inequalities: \begin{equation} \label{eq31} \textstyle{1 \over 4}\Delta \le \textstyle{3 \over 7}D_{CN} \le \textstyle{1 \over 3}D_{CG} \le \textstyle{3 \over 5}D_{RG} \le h. \end{equation} The expression (\ref{eq31}) involves six means instead of seven. For simplicity, let us rewrite the expression (\ref{eq31}) as \begin{equation} \label{eq32} W_{1} \le W_{2} \le W_{3} \le W_{4} \le W_{5} , \end{equation}
\noindent where for example $W_{1} =\textstyle{1 \over 4}\Delta $, $W_{2} =\textstyle{3 \over 7}D_{CN} $, $W_{3} =\textstyle{1 \over 3}D_{CG} $, etc. The inequalities (\ref{eq32}) again admit 10 nonnegative differences. These differences satisfy some natural inequalities, given in the \textbf{pyramid} below:
\[ D_{W_{2} W_{1} }^{1} , \] \[ D_{W_{3} W_{2} }^{2} \le D_{W_{3} W_{1} }^{3} , \] \[ D_{W_{4} W_{3} }^{4} \le D_{W_{4} W_{2} }^{5} \le D_{W_{4} W_{1} }^{6} , \] \[ D_{W_{5} W_{4} }^{7} \le D_{W_{5} W_{3} }^{8} \le D_{W_{5} W_{2} }^{9} \le D_{W_{5} W_{1} }^{10} , \]
\noindent where $D_{W_{2} W_{1} }^{1} :=W_{2} -W_{1} $, $D_{W_{5} W_{4} }^{7} :=W_{5} -W_{4} $, etc. Interestingly, the above 10 nonnegative differences are equal to each other up to multiplicative constants: \begin{align} & \textstyle{7 \over 2}D_{W_{2} W_{1} }^{1} =\textstyle{{21} \over 8}D_{W_{3} W_{2} }^{2} =\textstyle{3 \over 2}D_{W_{3} W_{1} }^{3} =\textstyle{{15} \over 8}D_{W_{4} W_{3} }^{4} =\textstyle{{35} \over {32}}D_{W_{4} W_{2} }^{5} =\notag\\ \label{eq33} & \hspace{20pt} =\textstyle{5 \over 6}D_{W_{4} W_{1} }^{6} =\textstyle{5 \over 4}D_{W_{5} W_{4} }^{7} =\textstyle{3 \over 4}D_{W_{5} W_{3} }^{8} =\textstyle{7 \over {12}}D_{W_{5} W_{2} }^{9} =\textstyle{1 \over 2}D_{W_{5} W_{1} }^{10} =\frac{\left( {\sqrt a -\sqrt b } \right)^{4}}{8\left( {a+b} \right)}. \end{align}
Based on the expressions (\ref{eq4}), (\ref{eq5}), (\ref{eq6}) and (\ref{eq33}) we have the following proportionality relations among the six means: \begin{multicols}{2} \begin{enumerate} \item $4A=2(C+H)=3R+H;$ \item $3R=C+2A=2C+H;$ \item $3N=2A+G;$ \item $3C+2H=3R+2A;$ \item $C+6A=H+6R;$ \item $C+3N=G+3R;$ \item $3N+2A=2C+2H+G;$ \item $27R+2G=14A+9C+6N;$ \item $3\left( {N+3R} \right)=8A+3C+G;$ \item $3G+8H+9C=3R+8A+9N;$ \item $4G+14H+17C=9R+14A+12N;$ \item $5G+24H+31C=21R+24A+15N.$ \end{enumerate} \end{multicols}
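For example, at $a=4$, $b=1$ the first relation can be checked numerically: $A=5/2$, $C=17/5$, $H=8/5$ and $R=14/5$, and indeed \[ 4A=2(C+H)=3R+H=10. \]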
\end{document} |
\begin{document}
\title{On the rational real {J}acobian conjecture} \author{L. Andrew Campbell} \address{908 Fire Dance Lane \\ Palm Desert CA 92211 \\ USA} \email{lacamp@alum.mit.edu} \subjclass[2010]{ Primary 14R15; Secondary 14E05 14P10} \keywords{real rational map, {J}acobian conjecture}
\begin{abstract} Jacobian conjectures (that nonsingular implies a global inverse) for rational everywhere defined maps of $\mathbb{R}^n$ to itself are considered, with no requirement for a constant Jacobian determinant or a rational inverse. The birational case is proved and the Galois case clarified. Two known special cases of the Strong Real Jacobian Conjecture (SRJC) are generalized to the rational map context.
For an invertible map, the associated extension of rational function fields must be of odd degree and must have no nontrivial automorphisms. That disqualifies the Pinchuk counterexamples to the SRJC as candidates for invertibility. \end{abstract}
\maketitle
\section{Introduction and summary of results}\label{intro}
The {J}acobian\ Conjecture (JC) \cite{BCW82,ArnoBook} asserts that a polynomial map $F: k^n \rightarrow k^n$, where $k$ is a field of characteristic zero, has a polynomial inverse if it is a Keller map \cite{Keller}, which means that its {J}acobian\ determinant, $j(F)$, is a nonzero\ element of $k$. The JC is still not settled for any $n > 1$ and any specific field $k$ of characteristic zero.
For $k=\mathbb{R}$, the Strong Real {J}acobian\ Conjecture (SRJC) asserts that a polynomial map $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ has a real analytic inverse if it is nonsingular, meaning that $j(F)$, whether constant or not, vanishes nowhere on $\mathbb{R}^n$. However, Sergey {P}inchuk exhibited a family of counterexamples for $n=2$ \cite{Pinchuk},
so the SRJC holds only in special cases.
The Rational Real {J}acobian\ Conjecture (RRJC) is considered here. It is the extension of the SRJC to everywhere defined rational maps, as well as polynomial\ ones. Everywhere defined means that each component of the map can be expressed as the quotient of two polynomials with a nowhere vanishing denominator. That rules out rational functions such as $(x^4+y^4)/(x^2+y^2)$, which is not defined at the origin, even though it has a unique continuous extension to all of $\mathbb{R}^2$. That requirement is crucial, as $F= (x^2y^6+2xy^2, xy^3+1/y)$ is Keller and maps $(1,1)$ and $(-3,-1)$ to the same point \cite{Vitushkin}. Assume $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is such a map and is nonsingular. Then all its fibers are finite of size at most the degree of the associated finite algebraic extension of rational function fields. If $F$ also has a (necessarily real analytic) inverse, then the function field extension is of odd degree and has a trivial automorphism group. The extension degree and the maximum fiber size are of the same parity. If odd maximum fiber size is added as an additional hypothesis to the RRJC or SRJC, it disqualifies the {P}inchuk\ counterexamples, for all of which that size is $2$ \cite{aspc}. If the extension degree is $1$ (the birational case), then $F$ has an inverse that is also an everywhere defined birational nonsingular map. If the extension is {G}alois, then $F$ has an inverse if, and only if, $F$ is birational.
If the automorphism group condition is added as an additional hypothesis, the RRJC and SRJC are
true in the {G}alois case. Thus if both necessary conditions are assumed, the resulting modified RRJC and SRJC conjectures are true in the birational and {G}alois cases and have no obvious counterexamples.
Finally, two known special cases of the SRJC
are generalized to the RRJC context. They show that $F$ is invertible if $A(F)$, the set of points in the codomain over which $F$ is not locally a trivial fibration,
either is of codimension greater than $2$, or does not intersect the image of $F$.
\section{Basic properties}\label{basic}
Both the {J}acobian\ hypothesis and the conclusion of the RRJC can be restated in various equivalent ways. Principally, the former is equivalent to the assertion that $F$ is locally diffeomorphic or locally real bianalytic, and the latter to the assertion that $F$ is injective or bijective or a homeomorphism or a diffeomorphism. These are all obvious, except for the key result that injectivity, also called univalence, implies bijectivity for maps of $\mathbb{R}^n$ to itself that are polynomial or, more generally, rational and defined on all of $\mathbb{R}^n$ \cite{InjectiveReal}. That result does not generalize to semi-algebraic maps of $\mathbb{R}^n$ to itself \cite{SurjectiveSemialgebraic}. Clearly any global univalence theorems \cite{GlobU} for local diffeomorphisms can yield special cases of the conjecture. Properness suffices, and related topological considerations play a role below. But the focus of this article is on results or conjectures that require the polynomial or rational character of a map and involve properties of the associated extension of rational function fields.
The extension of function fields exists, and is algebraic of finite degree, for any dominant rational $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$, whether defined everywhere or not. $F$ is dominant if, and only if, $j(F)$ is not identically zero. The extension is the inclusion of the subfield generated over $\mathbb{R}$ by the (algebraically independent) components of $F$ in the rational function field on the domain of $F$, and will be written as $\mathbb{R}(F) \subseteq \mathbb{R}(X)$ or $\mathbb{R}(X)/\mathbb{R}(F)$. The degree $d$ of the extension is called the extension degree of $F$. If $F$ is generically $N$-to-one for a positive integer $N$, then $N$ is called the geometric degree of $F$. In general, let $t \in \mathbb{R}(X)$ be a primitive element for the extension, meaning that $\mathbb{R}(F)(t)=\mathbb{R}(X)$. For generic $y$ in the codomain, inverse images $x$ of $y$ correspond bijectively to real roots $r=t(x)$ at $y$ of the monic minimal polynomial of $t$ over $\mathbb{R}(F)$. So a generic fiber of $F$ is finite and either empty or of positive size at most $d$, but $F$ need not have a geometric degree.
By definition, an automorphism of the extension is a field automorphism of $\mathbb{R}(X)$ that fixes every element of $\mathbb{R}(F)$.
\begin{Prop}\label{needs} If the geometric degree of $F$ is $1$, then the extension has odd degree and trivial automorphism group. \end{Prop}
\begin{proof} The nonreal roots occur in complex conjugate pairs, and the degree of the monic minimal polynomial for a primitive element is $d$. If $G: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is the geometric realization of an automorphism of $\mathbb{R}(X)$ as a rational map and every element of $\mathbb{R}(F)$ is fixed by the automorphism, then $F \circ G = F$. For a generic $x$, $G$ is defined at $x$, and $F$ is defined and locally diffeomorphic at both $x$ and $x'=G(x)$. Since the geometric degree of $F$ is $1$, $G$ is the identity on an open set and therefore, because it is rational, the identity map. So the automorphism is also the identity. \end{proof}
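For instance, in dimension $n=1$ the map $f(x)=x^{3}+x$ has geometric degree $1$: it is strictly increasing, so every fiber is a single point. The extension $\mathbb{R}(X)/\mathbb{R}(f)$ has degree $3$, since $X$ satisfies $t^{3}+t-f=0$ over $\mathbb{R}(f)$, and for each real value of $f$ this cubic has exactly one real root, the other two being complex conjugates. In accordance with Proposition \ref{needs}, the extension degree is odd and the automorphism group is trivial.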
A map $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ will be called a rational nonsingular (nondegenerate) map if it is an everywhere defined rational map and $j(F)$ vanishes nowhere (resp., does not vanish identically). In either case, both of the Proposition \ref{needs} conclusions become necessary conditions for the existence of an inverse. The Pinchuk counterexamples \cite{Pinchuk} to the SRJC (and hence to the RRJC) are nonsingular polynomial\ maps of $\mathbb{R}^2$ to $\mathbb{R}^2$ with no inverse. All these {P}inchuk\ maps have the same nonconstant, everywhere positive {J}acobian\ determinant, geometric degree $2$, no point with more than $2$ inverse images, exactly $2$ points omitted in the image plane, and the same extension of degree $6$ with trivial automorphism group \cite{aspc,PMFF}.
All three conjectures discussed are true in the
dimension $n=1$ case $f: \mathbb{R} \rightarrow \mathbb{R}$.
In the JC case, $f$ is of degree $1$. In the SRJC case, $f$ is proper, since any nonconstant polynomial becomes infinite when its argument does. In the RRJC case, $f$ is monotone increasing or decreasing, hence injective, thus surjective, so unbounded above and below, and therefore proper.
In the RRJC context, the distinction between nonzero\ constant and nowhere vanishing {J}acobian\ determinants is not as critical as it may seem. If $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ satisfies the hypotheses, let $x \in \mathbb{R}^n, z \in \mathbb{R}$ and define $F^+:\mathbb{R}^{n+1} \rightarrow \mathbb{R}^{n+1}$ by $F^+(x,z)=(F(x),z/j(F)(x))$. Then $F^+$ also satisfies the hypotheses, $j(F^+)=1$, and $F^+$ is injective if, and only if, $F$ is injective. As pointed out in \cite{RealJC+SamuelsonMaps}, choosing {P}inchuk maps for $F$ yields Keller counterexamples to the RRJC in dimension $n=3$.
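The assertion $j(F^{+})=1$ can be seen from the block-triangular form of the {J}acobian\ matrix of $F^{+}$: since no component of $F$ depends on $z$, \[ DF^{+}(x,z)=\begin{pmatrix} DF(x) & 0 \\ \ast & 1/j(F)(x) \end{pmatrix}, \] where $\ast$ denotes the row of partial derivatives of $z/j(F)(x)$ with respect to $x$, so \[ j(F^{+})(x,z)=j(F)(x)\cdot \frac{1}{j(F)(x)}=1. \]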
A Samuelson map is a map with a square {J}acobian\ matrix, all of whose leading principal minors, including its determinant, vanish nowhere. A rational Samuelson map defined on all of $\mathbb{R}^n$ has an inverse \cite{RationalSamuelson}, which is necessarily Nash (semi-algebraic\ and real analytic), but is rational if, and only if, the function field extension is birational (cf. section 3). The well known real analytic example $(e^x-y^2+3,4y e^x-y^3)$ in \cite{Gale-Nikaido} shows that a Samuelson map need not be globally injective (consider $(0,2)$ and $(0,-2)$). The variation $F(x,y)=(h-y^2+3,4yh-y^3)$ in \cite{RealJC+SamuelsonMaps}, where $h$ is the function $h(x)=x+\sqrt{1+x^2}$ (positive square root intended) has the same properties and is Nash as well. So does $F^+$, which is also Keller.
Let $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a rational nonsingular map. It is a local diffeomorphism, hence an open map. Let $x \in \mathbb{R}^n$, let $y= F(x) \in \mathbb{R}^n$, and define $m(x)$ to be the number of inverse images of $y$ under $F$, allowing $+\infty$ as a possible value. Since $F$ is open, $m(x')\ge m(x)$ for $x'\in \mathbb{R}^n$ in a neighborhood of $x$. So if $A \subseteq \mathbb{R}^n$, the maximum value of $m$ on $A$ is also the maximum value of $m$ on its topological closure $\bar{A}$. So all fibers of $F$, not just generic ones, are finite of size at most $d$, where $d$ is the extension degree of $F$. The fiber size maximum, $N$, is attained on an open subset of the codomain, which must contain a point where $N$ is the number of real roots of a polynomial of degree $d$ with real coefficients. Thus $N$ and $d$ have the same parity. Note that if $N$ and $d$ are odd, then a generic fiber is nonempty because a real polynomial of odd degree has at least one real root, and so $F(\mathbb{R}^n)$ is a connected dense open semi-algebraic\ subset of the codomain. All subsets of $\mathbb{R}^n$ that can be described in the first order logic of ordered fields are semi-algebraic. The description can include real constant symbols (coefficients, values, etc.) and quantification over real variables (but not over subsets, functions or natural numbers); results for any dimension $n>0$ and involving polynomials of arbitrary degrees follow from schemas specifying first order descriptions for any fixed choice of the natural number parameters. As a first application of that principle, the $N$ subsets of the domain $\mathbb{R}^n$ on which $m(x)$ has a specified numeric value in the range $1,\ldots,N$, and the $N+1$ subsets of the codomain $\mathbb{R}^n$ on which $y$ has a specified number of inverse images in the range $0,\ldots,N$, are all semi-algebraic.
By definition, $F$ is proper at a point $y$ in its codomain if $y$ has an open neighborhood $U$, such that any compact subset of $U$ has a compact inverse image under $F$. The set of points $y$ in the codomain at which $F$ is proper is readily verified to be the open set of points at which the number of inverse images of $y$ is locally constant. That set contains all points with $N$ inverse images and has an $\epsilon$-ball first order description. Its complement $A(F)$, the asymptotic variety of $F$, is therefore closed semi-algebraic and the inclusion $A(F)\subset \mathbb{R}^n$ is strict. $A(F)$ is the union for $i=0,\ldots,N-1$ of the semi-algebraic sets consisting of points $y$ in the codomain at which $F$ is not proper and for which $y$ has exactly $i$ inverse images. At an interior point $y$ of one of these sets $F$ would be proper, contradicting $y \in A(F)$. Thus each such set has empty interior, hence is of dimension less than $n$. Consequently $\dim A(F) < n$. It follows that the complement of $A(F)$ is a finite union of disjoint connected open semi-algebraic subsets of $\mathbb{R}^n$ on each of which the number of inverse images of points is a constant, with possibly differing constants for different connected components. If $U$ is any such connected component that intersects $F(\mathbb{R}^n)$, then $F^{-1}(U)$ is nonempty, open and semi-algebraic. Let $V$ be one of its finitely many connected components. Since $V$ is an open and closed subset of $F^{-1}(U)$, the map $V \rightarrow U$ induced by $F$ is a proper local homeomorphism of connected, locally compact, and locally arcwise connected spaces and hence it is a covering map. Such a map is surjective, so all of $U$ is contained in $F(\mathbb{R}^n)$. $V$ must be exactly one of the finitely many connected components of the open semi-algebraic set $\mathbb{R}^n \setminus F^{-1}(A(F))$, since it is closed in that subset as one element of a finite cover by disjoint total spaces of covering maps.
Speaking informally, this presents a view of $F$ as a finite collection of $n$-dimensional covering maps, of possibly different degrees, glued together along semi-algebraic sets of positive codimension to form $\mathbb{R}^n$ at the total space level, whose base spaces, which may sometimes coincide for different total spaces, are similarly glued together to form $F(\mathbb{R}^n)$. $F(\mathbb{R}^n) \cap A(F)$ is in general neither empty nor all of $A(F)$, a behavior exhibited by any {P}inchuk map $F$, since then $A(F)$ is a polynomial\ curve and exactly two of its points are not in the image of $F$.
\begin{Prop} If $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a rational nonsingular map and $F$ is generically injective, then $F$ is invertible and its inverse is a nonsingular real analytic map defined on all of $\mathbb{R}^n$. \end{Prop}
\begin{proof} Suppose $F$ is injective on a nonempty Zariski open set $U \subset \mathbb{R}^n$. Let $V$ be the complement of $U$. Since $V$ is algebraic and $\dim V < n$, $F(V)$ is semi-algebraic\ of dimension at most $n-1$. So $F(V)$ is not Zariski dense and therefore the open set of points of maximum fiber size $N$ contains a point with inverse images only in $U$. It follows that $N=1$, that $F$ is injective, and hence that $F$ is surjective \cite{InjectiveReal}. $F$ is locally real bianalytic, and so its global inverse is a nonsingular real analytic map. \end{proof}
Remark. The asymptotic variety was defined by Ronen Peretz as the set of finite limits of a map along curves that tend to infinity \cite{asympvals,asymptotics}. For real polynomial\ maps, it can fail to be Zariski closed, and therefore not technically a variety \cite{GeoPinMap}. In that context it has been extensively studied by Zbigniew Jelonek as the set of points at which a map is not proper \cite{realtrans,geometry}. As one result, he shows that for a nonconstant polynomial map $F: \mathbb{R}^n \rightarrow \mathbb{R}^m$, where $n$ and $m$ are any positive integers and no other conditions are imposed,
the set $A(F)$ is $\mathbb{R}$-uniruled. By that he means that for any $a \in A(F)$ there is a nonconstant polynomial map $g: \mathbb{R} \rightarrow \mathbb{R}^m$ (a polynomial curve) such that $g(0)=a$ and $g(t) \in A(F)$ for all $t \in \mathbb{R}$. That in turn implies that every connected component of $A(F)$ is unbounded and has positive dimension. These results do not hold for everywhere defined rational maps, as shown by $y=1/(1+x^2)$, which is proper except at $y=0$.
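The one-dimensional counterexample is easy to verify symbolically. This sketch (ours) confirms that properness fails exactly at the limit value $y=0$, so $A(F)=\{0\}$ is bounded and zero-dimensional, unlike the polynomial case:

```python
import sympy as sp

x = sp.symbols("x", real=True)
f = 1 / (1 + x**2)

# Finite limits along curves tending to infinity detect non-properness.
lim_plus = sp.limit(f, x, sp.oo)    # 0
lim_minus = sp.limit(f, x, -sp.oo)  # 0

# Fibers: y = 1/2 has the compact fiber {-1, 1}; y = 0 has empty fiber.
fiber_half = sp.solveset(sp.Eq(f, sp.Rational(1, 2)), x, sp.S.Reals)
fiber_zero = sp.solveset(sp.Eq(f, 0), x, sp.S.Reals)
print(lim_plus, fiber_half, fiber_zero)  # 0 {-1, 1} EmptySet
```

Away from $y=0$ every compact set of values has compact preimage, so the map is proper there; the single non-proper point $\{0\}$ is a bounded connected component of $A(F)$, which cannot happen for a polynomial map.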
\section{The birational and {G}alois cases}\label{bi+gal}
\begin{Thm} Let $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a birational nonsingular map. Then $F$ has a global inverse, which is also a birational nonsingular map. \end{Thm}
\begin{proof} $\mathbb{R}(F)=\mathbb{R}(X)$, so the extension degree is $1$. As it bounds the size of all fibers,
$F$ is injective, hence invertible. Thus the rational inverse of $F$ extends to a real analytic map on all of $\mathbb{R}^n$. Let $g=a/b$ be a component of the inverse, where $a$ and $b$ are polynomials with no nonconstant common factor and suppose $b(x)=0$ for some $x \in \mathbb{R}^n$. Let $U$ be an open neighborhood of $x$ in $\mathbb{C}^n$ such that $g$ extends to a complex analytic function $\tilde{g}$ on $U$ satisfying $b\tilde{g}=a$. Let $c$ be an irreducible complex polynomial factor of $b$ satisfying $c(x)=0$. Then $a$ vanishes on the irreducible hypersurface $c=0$ in $\mathbb{C}^n$, because it does so in $U$. So $c$ is also an irreducible factor of $a$. Using complex conjugation, it follows easily that $a^2$ and $b^2$ have a nonconstant common factor in the real polynomial ring. But then so do $a$ and $b$, by unique factorization. This contradiction shows that $b$ vanishes nowhere. So all components of the inverse are everywhere defined rational functions. That makes the inverse an everywhere defined rational map, and it is clearly nonsingular and birational. \end{proof}
If $F$ is defined over a subfield $k \subset \mathbb{R}$, then so is its inverse, since extension degree is preserved by a faithfully flat extension of the coefficients. In that case, $F$ induces a birational bijection of $k^n$ onto $k^n$. Note that $y=x+x^3$ is polynomial, nonsingular, invertible, and defined over $\mathbb{Q}$, but the induced map from $\mathbb{Q}$ to $\mathbb{Q}$ is not surjective.
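The non-surjectivity over $\mathbb{Q}$ follows from the rational root theorem: a rational preimage of $1$ would be a rational root of $x^3+x-1$, and the only candidates are $\pm 1$. A small sympy sketch (ours):

```python
import sympy as sp

x = sp.symbols("x")
p = x**3 + x - 1  # preimages of y = 1 under y = x + x^3

# Rational root theorem: any rational root is +-1; neither works,
# so 1 is not in the image of the induced map Q -> Q.
candidates = [1, -1]
rational_preimages = [c for c in candidates if p.subs(x, c) == 0]

# Over R there is exactly one (irrational) preimage, since the map
# is strictly increasing.
real_preimages = sp.real_roots(p)
print(rational_preimages, len(real_preimages))  # [] 1
```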
Remark. In \cite{PolynomialRational}, polynomial\ maps $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ that map $\mathbb{R}^n$ bijectively onto $\mathbb{R}^n$ are considered, and the question is raised of when the inverse is rational. If so, the inverse is everywhere defined on $\mathbb{R}^n$ and $F$ is called a polynomial-rational bijection (PRB) of $\mathbb{R}^n$. A key technical result is that a polynomial\ bijection is a PRB if its natural extension to a polynomial\ map $\mathbb{C}^n \rightarrow \mathbb{C}^n$ maps only real points to real points. A PRB $F$ has a nowhere vanishing {J}acobian\ determinant $j(F)$. Conversely, it is shown that a nowhere vanishing $j(F)$ alone suffices to establish that a polynomial\ map $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ of degree two is a bijection and a PRB. A related but stronger condition is defined and shown to be sufficient, but not necessary, for polynomial\ maps of degree greater than two.
\begin{Thm} If $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a rational nonsingular map and $\mathbb{R}(X)/\mathbb{R}(F)$ is a {G}alois extension, then $F$ is invertible if, and only if, $F$ is birational. \end{Thm}
\begin{proof}
If $F$ is invertible, then the extension has no nontrivial automorphisms. So it can be {G}alois only if it is of degree $1$. In that (birational) case $F$ does have an inverse. \end{proof} If $F$ is defined over a subfield $k \subset \mathbb{R}$ and $k(X)/k(F)$ is {G}alois, then so is $\mathbb{R}(X)/\mathbb{R}(F)$.
Remark. The {G}alois case of the standard JC states that a polynomial\ Keller map with a {G}alois field extension has a polynomial inverse. It was first proved for $k=\mathbb{C}$ only \cite {GaloisCase}, using methods of the theory of several complex variables. The general characteristic zero case appears
in \cite{Razar} and, independently, in \cite{AlgebraicGaloisCase}. The theorem above is manifestly weaker.
Of course, the existence of a polynomial inverse implies the triviality of the field extension, so the JC theorem has no nontrivial instances.
In contrast, in the SRJC and RRJC contexts, the existence of an inverse does not imply the field extension is {G}alois, much less birational. For instance, if $y=x+x^3$, the field extension $\mathbb{R}(y) \subset \mathbb{R}(x)$ is neither. Even so, a {G}alois extension of degree $d \ne 1$ would represent a counterexample to the RRJC of a new, and unexpected, type.
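For $y=x+x^3$ the field extension data can be computed directly: the minimal polynomial of $x$ over $\mathbb{R}(y)$ is $t^3+t-y$, so the extension has degree $3$ and is not birational, while its negative discriminant shows that each fiber has exactly one real point, so the map is injective all the same. A sympy sketch (ours):

```python
import sympy as sp

t, y = sp.symbols("t y", real=True)

# Minimal polynomial of x over R(y) when y = x + x^3: the extension
# R(y) subset R(x) has degree 3, hence is not birational.
p = t**3 + t - y

# The discriminant -27*y^2 - 4 is negative for every real y, so each
# fiber of x |-> x + x^3 contains exactly one real point: the map is
# injective although the extension is neither birational nor Galois.
disc = sp.discriminant(p, t)
print(sp.expand(disc))  # -27*y**2 - 4
```

The two remaining roots of each fiber are complex conjugates, which is why $t^3+t-y$ cannot split inside $\mathbb{R}(x)$ and the extension fails to be Galois.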
\section{Promoted SRJC cases}\label{duo}
The two theorems below have been proved in the SRJC context and, because of their topological character, they generalize to the RRJC context almost effortlessly. In both theorems, let $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a rational nonsingular map. The theorems impose conditions on $A(F)$ that are illusory, in that they conclude that $F$ is invertible, and so $A(F)$ is actually empty. For polynomial\ $F$, the first theorem was proved by Zbigniew Jelonek \cite[Theorem 8.2]{geometry} and the second by
Christopher I. Byrnes and Anders Lindquist \cite[Remark 2]{NewProperness}.
\begin{Thm} If the dimension of $A(F)$ is less than $n-2$, then $F$ is invertible. \end{Thm}
\begin{proof} If $A \subset \mathbb{R}^n$ is a closed semi-algebraic\ set and $\dim A < n-2$, then $A^c= \mathbb{R}^n \setminus A$ is simply connected \cite[Lemma 8.1]{geometry}. This applies both to $A(F)$ and to $B(F)=F^{-1}(A(F))$, which satisfies $\dim B(F) = \dim (A(F) \cap F(\mathbb{R}^n))$. The induced map from $B(F)^c$ to $A(F)^c$ is proper, hence a covering map, and therefore a homeomorphism. Since $B(F)$ is not Zariski dense, $F$ is generically injective, and so invertible. This proof is Jelonek's; it applies to rational maps without change. \end{proof}
\begin{Thm} If $A(F) \cap F(\mathbb{R}^n) = \emptyset$, then $F$ is invertible. \end{Thm}
\begin{proof} The condition states that every point of the (connected, open) image of $F$ is a point at which $F$ is proper. Equivalently, the induced map $\mathbb{R}^n \rightarrow F(\mathbb{R}^n)$ is proper. The main result of \cite{NewProperness} is that the standard complex JC holds for polynomial maps that are proper as maps onto their image. In Remark 2 at the end of the note, that result is also proved in the SRJC context. Briefly, $\mathbb{R}^n$ is a universal covering space, of finite degree $d$, of $F(\mathbb{R}^n)$. By well known results of the branch of topology called P. A. Smith theory, there are no fixed point free homeomorphisms of $\mathbb{R}^n$ onto itself of prime period. But the fundamental group $\pi_1 (F(\mathbb{R}^n))$ is of order $d$, and contains an element of prime period unless $d=1$. So $d=1$, $F$ is injective, and therefore invertible. The assumption that $F$ is polynomial, rather than just real analytic, is used at only two points in the proof. First, it ensures that the degree of the covering map is finite, and second, that injectivity implies invertibility. Rationality is sufficient in both situations, so this proof works in the RRJC context as well. \end{proof}
\end{document}
\begin{document}
\title{Bass and Betti numbers of a module and its deficiency modules}
{\let\thefootnote\relax\footnote{{{\it Date:} \today}}} {\let\thefootnote\relax\footnote{{{\it 2020 Mathematics Subject Classification.} Primary 13C14, 13D45; Secondary 13H10, 14B15.}}} {\let\thefootnote\relax\footnote{{{\it Key words and phrases.} Generalized Cohen-Macaulay module, deficiency modules, Auslander-Reiten conjecture.}}} {\let\thefootnote\relax\footnote{{The second-named author was supported by a CAPES Doctoral Scholarship.}}}
\begin{abstract} This paper aims to provide several relations between Bass and Betti numbers of a given module and its deficiency modules. Such relations and the tools used throughout allow us to generalize some results of Foxby, characterize Cohen-Macaulay modules in equidimensionality terms, study the Cohen-Macaulay and complete intersection properties of a ring and furnish a case for the Auslander-Reiten conjecture. \end{abstract}
\section{Introduction}
In the celebrated paper \cite{F}, Foxby proved that over a Gorenstein local ring $R$ of dimension $d$, a Cohen-Macaulay $R$-module $M$ of dimension $t$ is such that $$\beta_j(M)=\mu^{j+t}(\Ext^{d-t}_R(M,R))$$ and $$\mu^j(M)=\beta_{j-t}(\Ext^{d-t}_R(M,R))$$ for all $j\geq0$. In particular, $\pd_RM<\infty$ if and only if $\id_R\Ext^{d-t}_R(M,R)<\infty$, and $\id_RM<\infty$ if and only if $\pd_R\Ext^{d-t}_R(M,R)<\infty$. Recently, Freitas and Jorge-Pérez \cite{FJP} generalized the first equivalence to local rings that are factors of Gorenstein local rings. In this paper, we shall look at these results in a wider situation as follows.
Schenzel \cite{S} generalized the notion of canonical module in the following sense. Given a Noetherian local ring $R$ which is a factor ring of an $s$-dimensional Gorenstein local ring $S$ and a finite $R$-module $M$, the \emph{$j$-th deficiency module of $M$} is defined as $$K^j(M)=\Ext^{s-j}_S(M,S)$$ for all $j=0,...,\dim_RM$. Local duality assures that these modules are well-defined. Particularly, $K(M):=K^{\dim_RM}(M)$ is called the \emph{canonical module of $M$}. In a certain sense, the deficiency modules of $M$ measure the extent of the failure of $M$ to be Cohen-Macaulay.
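For a concrete instance of these definitions (our illustration, not taken from the source), one can take $S=k[[u,v]]$, so $s=2$, let $R=S$, and let $M=S/(uv)$, a Cohen-Macaulay module of dimension $t=1$. Dualizing the free resolution $0\to S\stackrel{uv}{\to} S\to M\to 0$ into $S$ yields the deficiency modules directly:

```latex
% Our illustration, not from the source: S = k[[u,v]] is Gorenstein
% with s = 2, R = S, and M = S/(uv) has dim_R M = 1.
\[
K^1(M)=K(M)=\Ext^{1}_S(M,S)\simeq S/(uv)=M,
\qquad
K^0(M)=\Ext^{2}_S(M,S)=0,
\]
% so K^j(M) = 0 for j < depth_R M = 1, and dim_R K(M) = t = 1.
```

Here $M$ is Cohen-Macaulay and all deficiency modules below the top degree vanish, illustrating the slogan that the $K^j(M)$ measure the failure of the Cohen-Macaulay property.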
In this paper, we shall look for relations between Bass and Betti numbers of a given module and its deficiency modules. As Foxby provided the relations above for Cohen-Macaulay modules over a Gorenstein local ring, we furnish the same relations for generalized Cohen-Macaulay canonically Cohen-Macaulay modules with zeroth and first deficiency modules of positive depth over a local ring which is a factor of a Gorenstein local ring, see Theorem \ref{foxbygeneralization}. Furthermore, the theorems \ref{foxbygeneralization2} and \ref{foxbygeneralization3} show the same relations for arbitrary finite $R$-modules when certain homological conditions over its deficiency modules are imposed.
Besides such generalizations, we exhibit bounds for the Bass numbers (Betti numbers) of a module in terms of the Betti numbers (Bass numbers) of its deficiency modules, see the theorems \ref{mu<beta} and \ref{beta<mu}. They provide several applications that are worked out throughout this paper. Three examples of such applications are Corollary \ref{bassgeneralization}, providing the Cohen-Macaulay property of a local ring in terms of homological conditions on deficiency modules; Corollary \ref{CIchar}, furnishing a characterization of the complete intersection property in terms of the first and second Bass numbers of the residue field; and Corollary \ref{AR}, which states that the Auslander-Reiten conjecture holds for modules whose deficiency modules have finite injective dimension, thus generalizing a similar application given quite recently in \cite{FJP}.
Our methods are especially concerned with studying the behaviour of some spectral sequences. The first of them, \ref{foxbyss}, is called the Foxby spectral sequence, as it was first used by Foxby in \cite{F}. Its first applications concern general information on the canonical module of a generalized Cohen-Macaulay module or an equidimensional module, see Theorem \ref{GCMtheorem} and Proposition \ref{equidimensional}. These results provide sufficient conditions for the module to be canonically Cohen-Macaulay and for its canonical module to be generalized Cohen-Macaulay, see the corollaries \ref{GCMCCM} and \ref{GCMcanonicalmodule}, as well as a characterization of Cohen-Macaulay modules in Corollary \ref{CMequivalence} and a version for generalized Cohen-Macaulay modules of a result of Schenzel, see Corollary \ref{weakercmcanonicalmodule}.
\section{Generalized Cohen-Macaulay modules}
\noindent\textbf{Setup.} Throughout this paper, $R$ will always denote a commutative Noetherian local ring with non-zero unity, maximal ideal $\mathfrak{m}$ and residue class field $k$. Also, $R$ is assumed to be a factor of a Gorenstein local ring $S$ of dimension $s$, i.e., there exists a surjective ring homomorphism $S\rightarrow R$. We say that an $R$-module $M$ is \emph{finite} if it is a finitely generated $R$-module and denote by $M^\vee$ its Matlis dual.
For an $R$-module $M$, $\pd_RM$ and $\id_RM$ denote, respectively, the projective dimension and injective dimension of $M$. Further, $\beta_i(M)=\dim_k\Tor^R_i(k,M)$ is the $i$-th Betti number of $M$, $\mu^i(M)=\dim_k\Ext^i_R(k,M)$ is the $i$-th Bass number of $M$ and $\type(M)=\dim_k\Ext^{\depth_RM}_R(k,M)$ is its type.
The following spectral sequences first appeared in \cite{F}.
\begin{lemma}[Foxby spectral sequences]\label{foxbyss} Given a finite $R$-module $X$, an $R$-module $Y$ and an $S$-module $Z$, if either $\pd_RX<\infty$ or $\id_SZ<\infty$, then there exist a graded $R$-module $H$ and first quadrant spectral sequences $$E_2^{p,q}=\Ext^p_S(\Ext^q_R(X,Y),Z)\Rightarrow_p H^{q-p}$$ and $$'E_2^{p,q}=\Tor^R_p(X,\Ext^q_S(Y,Z))\Rightarrow_p H^{p-q}.$$ \end{lemma}
\begin{proof} Let $F_\bullet$ be a free $R$-resolution of $X$ and let $E^\bullet$ be an injective $S$-resolution of $Z$. The desired spectral sequences yield from the isomorphism of first quadrant double complexes $$\Hom_S(\Hom_R(F_\bullet,Y),E^\bullet)\simeq F_\bullet\otimes_R\Hom_S(Y,E^\bullet).$$ \end{proof}
The first application of the Foxby spectral sequences \ref{foxbyss} is a generalization of a well-known result about Cohen-Macaulay modules and their canonical modules, see \cite[Theorem 1.14]{S}. First, we need an auxiliary lemma.
We say that a finite $R$-module $M$ satisfies \emph{Serre's condition $S_k$}, where $k$ is a non-negative integer, provided $$\depth_{R_\mathfrak{p}}M_\mathfrak{p}\geq\min\{k,\dim_{R_\mathfrak{p}}M_{\mathfrak{p}}\}$$ for all $\mathfrak{p}\in\Supp M$.
\begin{lemma}\cite[Lemma 1.9]{S}\label{schenzellemma} Let $M$ be a finite $R$-module of dimension $t$. The modules $K^j(M)$ satisfy the following properties. \begin{itemize}
\item [(i)] $\dim_R K^j(M)\leq j$ for every integer $j$ and $\dim_RK(M)=t$;
\item [(ii)] Suppose that $M$ is equidimensional. Then, $M$ satisfies Serre's condition $S_k$ if and only if $\dim_RK^j(M)\leq j-k$, for all $0\leq j<t$. \end{itemize} \end{lemma}
A finite $R$-module $M$ is said to be \emph{generalized Cohen-Macaulay} if $H^j_\mathfrak{m}(M)$ is of finite length for all $j<\dim_RM$. Note that, by Matlis duality, this is equivalent to saying that $K^j(M)$ is of finite length for all $j<\dim_RM.$
\begin{theorem}\label{GCMtheorem} Let $M$ be a generalized Cohen-Macaulay $R$-module of dimension $t$. The following statements hold. \begin{itemize}
\item [(i)] There exists an isomorphism $$K^0(K(M))\simeq\Tor_{-t}^S(M,S);$$
\item [(ii)] There exists a five-term type exact sequence
$$\xymatrix@=1em{
\Tor^S_{-t+2}(M,S)\ar[r] & K^2(K(M))\ar[r] & K^0(K^{t-1}(M))\ar[dl] \\ & \Tor^S_{-t+1}(M,S)\ar[r] & K^1(K(M))\ar[r] & 0
}$$
\item [(iii)] There exists an exact sequence
$$\xymatrix@=1em{
0\ar[r] & K^0(K^0(M))\ar[r] & M\ar[r] & K(K(M))\ar[r] & K^0(K^1(M))\ar[r] & 0;
}$$
\item [(iv)] If $t\geq3$, then there exist isomorphisms $$K^{t-j}(K(M))\simeq K^0(K^{j+1}(M))$$ for all $1\leq j\leq t-2$. \end{itemize} \end{theorem}
\begin{proof} Consider the Foxby spectral sequences \ref{foxbyss} by taking $X=M$, regarded as an $S$-module, and $Y=Z=S$:
$$E_2^{p,q}=\Ext^p_S(\Ext^q_S(M,S),S)\Rightarrow_p H^{q-p}$$ and $$'E_2^{p,q}=\Tor^S_p(M,\Ext^q_S(S,S))\Rightarrow_p H^{p-q}.$$
Since $'E_2^{p,q}=0$ for all $q\neq0$, we have $$H^j\simeq{}'E_2^{j,0}=\Tor_j^S(M,S)$$ for all $j\geq0$, and $$E_2^{p,q}=\Ext^p_S(\Ext^q_S(M,S),S)\Rightarrow_p\Tor^S_{q-p}(M,S).$$
Since $H^j_\mathfrak{m}(M)$ is of finite length for all $j<t$, so is $K^j(M)$, and by local duality $$\Ext^p_S(\Ext^q_S(M,S),S)=\Ext^p_S(K^{s-q}(M),S)=0$$ for all $q>s-t$ and for all $p\neq s$. Also, Lemma \ref{schenzellemma} $(i)$ assures that $\dim_RK(M)=t$. Thus, $E_2$ has the following shape
$$ \xymatrix@=1em{ 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & \Ext^s_S(K^0(M),S) & 0 \\ \vdots & \vdots & \vdots & \iddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & \Ext^s_S(K^{t-1}(M),S) & 0 \\ 0 & K(K(M)) & \Ext^{s-(t-1)}_S(K(M),S) & \cdots & \Ext^s_S(K(M),S) & 0 \\ 0\ar@{--}[rrrrruuuuu] & 0 & 0 & 0 & 0 & 0. } $$
By convergence, there are isomorphisms $$K^0(K(M))=\Ext^s_S(K(M),S)\simeq E_\infty^{s,s-t}\simeq\Tor_{-t}^S(M,S), \ K^1(K(M))=\Ext^{s-1}_S(K(M),S)\simeq E_\infty^{s-1,s-t}$$ and $$K^0(K^0(M))=\Ext^s_S(K^0(M),S)\simeq E_\infty^{s,s}.$$ Thus we get item $(i)$ and by applying Matlis duality one has isomorphisms $$H^1_\mathfrak{m}(K(M))\simeq(E_\infty^{s-1,s-t})^\vee \ \mbox{and} \ H^0_\mathfrak{m}(K^0(M))\simeq(E^{s,s}_\infty)^\vee.$$ The convergence again gives us short exact sequences \begin{equation}\xymatrix@=1em{ 0\ar[r] & E_\infty^{s,s-j}\ar[r] & \Tor_{-j}^S(M,S)\ar[r] & E_\infty^{s-(t-j),s-t}\ar[r] & 0 }\label{eq:GCMconv}\end{equation} for all $j\geq0$. Further, as we pass through the pages of $E$, the differentials between the vertical and the horizontal lines of the diagram above appear. In other words, there is an exact sequence \begin{equation}\xymatrix@=1em{ 0\ar[r] & E_\infty^{s-(t-j),s-t}\ar[r] & \Ext_S^{s-(t-j)}(K(M),S)\ar[r] & \Ext^s_S(K^{j+1}(M),S)\ar[r] & E_\infty^{s,s-(j+1)}\ar[r] & 0 }\label{eq:GCMdifferentials}\end{equation} for all $0\leq j\leq t-2$.
Item $(ii)$ is exactly the five-term exact sequence of $E$. For item $(iii)$, by taking $j=0$ in both above exact sequences, we have the following exact sequences $$\xymatrix@=1em{ 0\ar[r] & \Ext^s_S(K^0(M),S)\ar[r] & M\ar[r] & E^{s-t,s-t}_\infty\ar[r] & 0 }$$ and $$\xymatrix@=1em{ 0\ar[r] & E^{s-t,s-t}_\infty\ar[r] & K(K(M))\ar[r] & \Ext^s_S(K^1(M),S)\ar[r] & E_\infty^{s,s-1}\ar[r] & 0. }$$ The result follows by splicing these sequences and noticing that $E_\infty^{s,s-1}\subseteq\Tor^S_{-1}(M,S)=0$.
The exact sequence \ref{eq:GCMconv} assures that $E_\infty^{s-(t-j),s-t}=E_\infty^{s,s-j}=0$ for all $j>0$, so that, by the exact sequence \ref{eq:GCMdifferentials}, $$K^{t-j}(K(M))=\Ext^{s-(t-j)}_S(K(M),S)\simeq\Ext^s_S(K^{j+1}(M),S)=K^0(K^{j+1}(M))$$ for all $1\leq j\leq t-2$. \end{proof}
The concept of \emph{canonically Cohen-Macaulay module} was introduced by Schenzel \cite{S2}. We say that a finite $R$-module $M$ is canonically Cohen-Macaulay if its canonical module $K(M)$ is Cohen-Macaulay.
\begin{corollary}\label{GCMCCM} Let $M$ be a generalized Cohen-Macaulay $R$-module of dimension $t$. The following statements hold. \begin{itemize}
\item [(i)] If $t>j$ with $j\in\{0,1\}$, then $\depth_RK(M)>j$;
\item [(ii)] If $t=1$, then $M$ is canonically Cohen-Macaulay and there exists the short exact sequence
$$\xymatrix@=1em{
0\ar[r] & K^0(K^0(M))\ar[r] & M\ar[r] & K(K(M))\ar[r] & 0;
}$$
\item [(iii)] If $t=2$, then $M$ is canonically Cohen-Macaulay;
\item [(iv)] If $t\geq3$, then $K(M)$ is generalized Cohen-Macaulay. \end{itemize} \end{corollary}
\begin{proof} Item $(i)$ follows immediately from Theorem \ref{GCMtheorem} $(i)$ and $(ii)$. For item $(ii)$, item $(i)$ assures that $K(M)$ is Cohen-Macaulay and Theorem \ref{GCMtheorem} $(iii)$ is the desired exact sequence. As to item $(iii)$, item $(i)$ again assures that $K(M)$ is Cohen-Macaulay. Item $(iv)$ follows directly from item $(i)$ and Theorem \ref{GCMtheorem} $(iv)$. \end{proof}
\begin{corollary}\label{GCMcanonicalmodule} If $M$ is generalized Cohen-Macaulay, then so is $K(M)$. \end{corollary}
Corollary \ref{GCMcanonicalmodule} inspires us to ask the following.
\begin{question} Given a finite $R$-module $M$, when is $K(M)$ generalized Cohen-Macaulay? \end{question}
As Corollary \ref{GCMCCM} assures that generalized Cohen-Macaulay modules of dimension at most two are canonically Cohen-Macaulay, Theorem \ref{GCMtheorem} $(iv)$ recovers a characterization from \cite{BN} for the case where the dimension is at least three.
\begin{corollary}\cite[Corollary 2.7]{BN} Let $M$ be a generalized Cohen-Macaulay $R$-module of dimension $t\geq3$. Then, the following statements are equivalent \begin{itemize}
\item [(i)] $M$ is canonically Cohen-Macaulay;
\item [(ii)] $H^j_\mathfrak{m}(M)=0$ for all $j=2,...,t-1$;
\item [(iii)] The $\mathfrak{m}$-transform functor $\D_\mathfrak{m}(M)$ is a Cohen-Macaulay $R$-module. \end{itemize} \end{corollary}
\begin{proposition}\label{equidimensional} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. The following statements hold. \begin{itemize}
\item [(i)] Assume $M$ is a generalized Cohen-Macaulay $R$-module. If $\depth_RK^j(M)>0$ for $j=0,1$, then $M\simeq K(K(M))$. In particular, this isomorphism holds true whenever $g\geq2$.
\item [(ii)] Suppose $M$ is equidimensional. If $M$ satisfies Serre's condition $S_{k+1}$ for some positive integer $k$, then $$K^j(K(M))\simeq \Tor^S_{-t+j}(M,S)$$ for all $t-k+1\leq j\leq t$. \end{itemize} \end{proposition}
\begin{proof} Item $(i)$ follows immediately from Theorem \ref{GCMtheorem} $(iii)$ and from the fact that $K^0(M)=K^1(M)=0$ in case of $g\geq2$.
For item $(ii)$, consider the Foxby spectral sequence obtained in the proof of Theorem \ref{GCMtheorem} $$E_2^{p,q}=\Ext^p_S(\Ext^q_S(M,S),S)\Rightarrow_p\Tor^S_{q-p}(M,S).$$
By Lemma \ref{schenzellemma} $(ii)$ and local duality, we have $$E_2^{s-i,s-j}=\Ext^{s-i}_S(K^j(M),S)=0$$ for all $0\leq j<t$ and $i>j-k-1$. In other words, all modules $E_2^{p,q}$ with $q\neq s-t$ above the dotted line in the diagram below must be zero
$$ \xymatrix@=1em{ 0 & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \iddots & \Ext^s_S(K^{k+1}(M),S) & 0 \\ 0 & \Ext^{s-(t-k-2)}_S(K^{t-1}(M),S) & \cdots & \vdots & 0 \\ \Ext^{s-(t-k-1)}_S(K(M),S)\ar@{--}[rrrruuu] & \Ext^{s-(t-k-2)}_S(K(M),S) & \cdots & \Ext^s_S(K(M),S) & 0 \\ 0 & \cdots & 0 & 0 & 0. } $$ The result follows from the convergence. \end{proof}
Our results also recover the well-known fact that every Cohen-Macaulay module is canonically Cohen-Macaulay, see \cite[Theorem 1.14]{S}.
\begin{corollary}\label{cmcanonicalmodule} If $M$ is Cohen-Macaulay of dimension $t$, then so is $K(M)$ and $K(K(M))\simeq M$. \end{corollary}
\begin{proof} There are two immediate ways to prove the desired result: it follows directly from Theorem \ref{GCMtheorem}, as well as from Proposition \ref{equidimensional} $(ii)$. \end{proof}
Proposition \ref{equidimensional} provides a characterization for the Cohen-Macaulay property.
\begin{corollary}\label{CMequivalence} If $M$ is a finite $R$-module, then $M$ is Cohen-Macaulay if and only if $M$ is equidimensional canonically Cohen-Macaulay satisfying Serre's condition $S_{k+1}$ for some positive integer $k$. \end{corollary}
\begin{proof} It is well-known that a Cohen-Macaulay module is equidimensional and satisfies Serre's condition $S_k$ for any $k$. Corollary \ref{cmcanonicalmodule} assures that such a module is also canonically Cohen-Macaulay. Conversely, by taking $j=t$ in Proposition \ref{equidimensional} $(ii)$, we have the isomorphism $K(K(M))\simeq M$. Since $K(M)$ is Cohen-Macaulay, Corollary \ref{cmcanonicalmodule} again assures that $M\simeq K(K(M))$ is Cohen-Macaulay. \end{proof}
The next corollary is a version of Corollary \ref{cmcanonicalmodule} for generalized Cohen-Macaulay modules.
\begin{corollary}\label{weakercmcanonicalmodule} If $M$ is a generalized Cohen-Macaulay module such that $\depth_RK^j(M)>0$ for $j=0,1$, then so is $K(M)$ and $M\simeq K(K(M))$. \end{corollary}
\begin{proof} It follows directly from Corollary \ref{GCMcanonicalmodule} and Proposition \ref{equidimensional} $(i)$. \end{proof}
\section{Bounding Bass numbers}
The Foxby spectral sequences \ref{foxbyss} are fundamental tools in our work. They provide the main result of this section.
\begin{theorem}\label{mu<beta} If $M$ is a finite $R$-module of depth $g$ and dimension $t$, then the following inequality holds for all $j\geq0$
$$\mu^j(M)\leq\sum_{i=g}^t\beta_{j-i}(K^i(M)).$$ Moreover, $\type(M)=\beta_0(K^g(M))$ and $$\mu^{g+2}(M)-\mu^{g+1}(M)\geq\beta_2(K^g(M))-\beta_1(K^g(M))-\beta_0(K^{g+1}(M)).$$ \end{theorem}
\begin{proof} Consider the Foxby spectral sequences \ref{foxbyss} by taking $X=k$, $Y=M$ and $Z=S$:
$$E_2^{p,q}=\Ext^p_S(\Ext^q_R(k,M),S)\Rightarrow_p H^{q-p}$$ and $$'E_2^{p,q}=\Tor_p^R(k,\Ext^q_S(M,S))\Rightarrow_p H^{p-q}.$$
Since $\Ext^q_R(k,M)$ is of finite length, we must have $E_2^{p,q}=0$ for all $p\neq s$, so that $$H^j\simeq E_2^{s,j+s}=\Ext^s_S(\Ext^{j+s}_R(k,M),S)$$ for every integer $j$. Since $K^{s-q}(M)=\Ext^q_S(M,S)$ for all $q\geq0$, we conclude that \begin{equation}'E_2^{p,q}=\Tor_p^R(k,K^{s-q}(M))\Rightarrow_p\Ext^s_S(\Ext^{p-q+s}_R(k,M),S).\label{eq:1}\end{equation}
Now, since $\Ext^s_S(k,S)^\vee\simeq k$, where $\_^\vee$ denotes Matlis duality over $R$, we have $$\Ext^s_S(\Ext^j_R(k,M),S)\simeq\Ext^s_S(k,S)^{\mu^j(M)}\simeq k^{\mu^j(M)}$$ as $k$-vector spaces. Therefore, by the convergence of $'E$, $$\mu^j(M)\leq\sum_{p-q+s=j}\beta_p(K^{s-q}(M))=\sum_{i=g}^t\beta_{j-i}(K^i(M))$$ for all $j\geq0$.
Now, since $K^i(M)=\Ext^{s-i}_S(M,S)=0$ for all $i<g$, then $'E_2$ has the following corner $$\xymatrix@=1em{ & \vdots & \vdots & \vdots \\ \cdots & \Tor_2^R(k,K^{g+1}(M)) & \Tor_2^R(k,K^g(M))\ar[ddl] & 0 & \cdots \\ \cdots & \Tor_1^R(k,K^{g+1}(M)) & \Tor_1^R(k,K^g(M)) & 0 & \cdots \\ \cdots & k\otimes_RK^{g+1}(M) & k\otimes_RK^g(M) & 0 & \cdots }$$ Therefore, $$k\otimes_RK^g(M)={}'E_2^{0,s-g}\simeq H^{g-s}\simeq\Ext_S^s(\Ext^g_R(k,M),S)$$ so that $\type(M)=\beta_0(K^g(M))$ and there exists a five-term-type exact sequence $$\xymatrix@=1em{ \Ext^s_S(\Ext^{g+2}_R(k,M),S)\ar[r] & \Tor_2^R(k,K^g(M))\ar[r] & k\otimes_RK^{g+1}(M)\ar[dl] \\ & \Ext^s_S(\Ext^{g+1}_R(k,M),S)\ar[r] & \Tor_1^R(k,K^g(M))\ar[r] & 0 }$$ whence the desired formula. \end{proof}
\begin{corollary}\label{finiteid} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. If $\pd_RK^i(M)<\infty$ for all $i=g,...,t$, then $\id_RM<\infty$. \end{corollary}
\begin{proof} The hypothesis means that $\beta_l(K^i(M))=0$ for all $l\gg0$ and by Theorem \ref{mu<beta} one has $$\mu^j(M)\leq\sum_{i=g}^t\beta_{j+i}(K^i(M))=0$$ for $j\gg0$, i.e., $\id_RM<\infty$. \end{proof}
Bass' conjecture \cite{B} was first proved by Peskine-Szpiro in \cite{PS} and later, in a more general setting, by Roberts \cite{R}. It states that a local ring admitting a non-zero finite module of finite injective dimension must be Cohen-Macaulay. The next corollary provides sufficient conditions, in terms of projective dimension, for a local ring to be Cohen-Macaulay.
\begin{corollary}\label{bassgeneralization} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. If $\pd_RK^i(M)<\infty$ for all $i=g,...,t$, then $R$ is Cohen-Macaulay. \end{corollary}
\begin{proof} Corollary \ref{finiteid} assures that $\id_RM<\infty$ and thus the result follows from Bass' conjecture. \end{proof}
\begin{theorem}\label{foxbygeneralization} If $M$ is a generically Cohen-Macaulay canonically Cohen-Macaulay $R$-module of dimension $t$ such that $\depth_RK^j(M)>0$ for $j=0,1$, then $$\beta_j(M)=\mu^{j+t}(K(M))$$ and $$\mu^j(M)=\beta_{j-t}(K(M))$$ for all $j\geq0$. In particular, $\pd_RM<\infty$ if and only if $\id_RK(M)<\infty$ and $\id_RM<\infty$ if and only if $\pd_RK(M)<\infty$. \end{theorem}
\begin{proof} By Lemma \ref{schenzellemma} $(i)$, $K(M)$ is Cohen-Macaulay of dimension $t$ and by Proposition \ref{equidimensional} $(i)$, $K(K(M))\simeq M$, that is, $K^i(K(M))=0$ for all $i\neq t$ and $K^t(K(M))\simeq M$. The spectral sequence \ref{eq:1} $$'E_2^{p,q}=\Tor^R_p(k,K^{s-q}(K(M)))\Rightarrow_p\Ext^s_S(\Ext^{p-q+s}_R(k,K(M)),S)$$ degenerates, so that $$\Tor_j^R(k,M)\simeq\Tor_j^R(k,K(K(M)))={}'E_2^{j,s-t}\simeq\Ext^s_S(\Ext^{j+t}_R(k,K(M)),S)$$ for all $j\geq0$. Therefore, $$\beta_j(M)=\dim_k\Tor_j^R(k,M)=\dim_k\Ext^s_S(\Ext^{j+t}_R(k,K(M)),S)=\mu^{j+t}(K(M))$$ for all $j\geq0$. The other equality follows from the fact $K(K(M))\simeq M$. \end{proof}
Theorem \ref{foxbygeneralization} generalizes \cite[Corollary 3.6]{F} and improves \cite[Corollary 3.3]{FJP}. We record this in the next corollary.
\begin{corollary}\label{foxbyresult} If $M$ is a Cohen-Macaulay $R$-module of dimension $t$, then $$\beta_j(M)=\mu^{j+t}(K(M))$$ and $$\mu^j(M)=\beta_{j-t}(K(M))$$ for all $j\geq0$. In particular, $\pd_RM<\infty$ if and only if $\id_RK(M)<\infty$ and $\id_RM<\infty$ if and only if $\pd_RK(M)<\infty$. \end{corollary}
\begin{proof} If $t\geq2$, then the result follows from Theorem \ref{foxbygeneralization}. Otherwise, Corollary \ref{cmcanonicalmodule} and the spectral sequence argument given in the proof of Theorem \ref{foxbygeneralization} yield the result. \end{proof}
The next theorem is an attempt to extend part of Theorem \ref{foxbygeneralization} to arbitrary modules. In the next section, we work on the other part.
\begin{theorem}\label{foxbygeneralization2} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. If $\pd_RK^i(M)<\infty$ for all $g\leq i<t$, then $$\mu^j(M)=\beta_{j-t}(K(M))$$ for all $j>\depth R+t$. In particular, $\id_RM<\infty$ if and only if $\pd_RK(M)<\infty$. \end{theorem}
\begin{proof} The spectral sequence \ref{eq:1} is such that $'E_2^{p,q}=0$ for all $p>\depth R$ and $s-t<q\leq s-g$, so that $$\Tor_j^R(k,K(M))={}'E_2^{j,s-t}\simeq\Ext^s_S(\Ext^{j+t}_R(k,M),S),$$ whence the result. \end{proof}
We derive other consequences of Theorem \ref{mu<beta}. In particular, we characterize exactly when the type of a finite module is one in terms of its deficiency modules.
\begin{corollary}\label{typecaracterization} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. The following statements hold. \begin{itemize}
\item [(i)] If $M$ is Cohen-Macaulay of dimension $t$, then $$\mu^{t+2}(K(M))-\mu^{t+1}(K(M))\geq\beta_2(M)-\beta_1(M).$$ In particular, if $\pd_RM<\infty$ then $\beta_1(M)\geq\beta_2(M)$.
\item [(ii)] If $\id_RM<\infty$, then $$\beta_0(K^{g+1}(M))\geq\beta_2(K^g(M))-\beta_1(K^g(M)).$$ In particular, if $M$ is also Cohen-Macaulay, then $\beta_1(K(M))\geq\beta_2(K(M))$.
\item [(iii)] $\type(M)=1$ if and only if $K^g(M)$ is cyclic. \end{itemize} \end{corollary}
\begin{proof} Item $(iii)$ follows directly from Theorem \ref{mu<beta}. Item $(i)$ follows from Corollary \ref{cmcanonicalmodule}, Theorem \ref{mu<beta} and Corollary \ref{foxbyresult}, and item $(ii)$ follows from \cite[Theorem 3.7]{BH}, Corollaries \ref{cmcanonicalmodule} and \ref{foxbyresult} and item $(i)$. \end{proof}
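As an illustration of item $(iii)$, consider $M=R$ with $R$ Cohen-Macaulay and admitting a canonical module (a standard fact, recorded here as a consistency check rather than taken from the text above): then $g=t$ and $K^g(R)=K(R)$ is the canonical module $\omega_R$, so

```latex
\type(R)=1
\iff \omega_R \text{ is cyclic}
\iff R \text{ is Gorenstein},
```

which recovers the classical characterization of Gorenstein rings as the Cohen-Macaulay local rings of type one.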
The spectral sequence \ref{eq:1} provides more information when the module involved has only two (possibly) non-zero deficiency modules.
\begin{proposition}\label{t=g+rfiniteid} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. Suppose $K^i(M)=0$ for all $i\neq g,t$. If $\id_RM<\infty$ then $\beta_j(K^g(M))=\beta_{j+g-t-1}(K(M))$ for all $j>\depth R-g+1$. \end{proposition}
\begin{proof} Write $t=g+r$. The spectral sequence \ref{eq:1} has only two vertical lines as the following diagram shows $$\xymatrix@=1em{ & & \vdots & \vdots & & \vdots & \vdots \\ \cdots & 0 & \Tor_{r+1}^R(k,K(M)) & 0 & \cdots & 0 & \Tor_{r+1}^R(k,K^g(M))\ar[ddddllll] & 0 & \cdots \\ & & \vdots & & \iddots & & \vdots \\ \cdots & 0 & \Tor_2^R(k,K(M)) & 0 & \cdots & 0 & \Tor_2^R(k,K^g(M)) & 0 & \cdots \\ \cdots & 0 & \Tor_1^R(k,K(M)) & 0 & \cdots & 0 & \Tor_1^R(k,K^g(M)) & 0 & \cdots \\ \cdots & 0 & k\otimes_RK(M) & 0 & \cdots & 0 & k\otimes_RK^g(M) & 0 & \cdots }$$
From convergence, we obtain an exact sequence $$\xymatrix@=1em{ \Ext^s_S(\Ext^{j+g}_R(k,M),S)\ar[r] & \Tor_j^R(k,K^g(M))\ar[r] & \Tor_{j-r-1}^R(k,K(M))\ar[r] & \Ext^s_S(\Ext^{j+g-1}_R(k,M),S)}$$ for all $j\geq0$. Thus, since $\id_RM=\depth R$ (see \cite[Theorem 3.7.1]{BH}), we conclude that $$\Tor_j^R(k,K^g(M))\simeq\Tor_{j-r-1}^R(k,K(M))$$ for all $j>\depth R-g+1$, whence the result. \end{proof}
Based on Corollary \ref{finiteid} and Proposition \ref{t=g+rfiniteid}, we finish this section by asking the following.
\begin{question}\label{question1} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. Is it true that $$\id_RM<\infty\Leftrightarrow\pd_RK^i(M)<\infty,\forall i=g,...,t?$$ \end{question}
\section{Bounding Betti numbers}
In the previous section, we bounded the Bass numbers of a module in terms of the Betti numbers of its deficiency modules. In this section, we prove a dual version of Theorem \ref{mu<beta}, in the following sense.
\begin{theorem}\label{beta<mu} For a finite $R$-module $M$ of depth $g$ and dimension $t$, the following inequality holds true for all $j\geq0$ $$\beta_j(M)\leq\sum_{i=g}^t\mu^{j+i}(K^i(M)).$$ Moreover, $\mu^0(K(M))=\beta_{-t}(M)$ and $$\beta_{-t+2}(M)-\beta_{-t+1}(M)\geq\mu^2(K(M))-\mu^1(K(M))-\mu^0(K^{t-1}(M)).$$ \end{theorem}
\begin{proof} By taking a free $R$-resolution $F_\bullet$ of $k$ and an injective $S$-resolution $E^\bullet$ of $S$, the tensor-hom adjunction induces a first quadrant double complex isomorphism $$\Hom_S(F_\bullet,\Hom_S(M,E^\bullet))\simeq\Hom_S(F_\bullet\otimes_Rk,E^\bullet)$$ which yields two spectral sequences as follows $$E_2^{p,q}=\Ext^p_R(k,\Ext^q_S(M,S))\Rightarrow_p H^{p+q}$$ and $$'E_2^{p,q}=\Ext^p_S(\Tor_q^R(k,M),S)\Rightarrow_p H^{p+q}.$$ Since $\Tor^R_q(k,M)$ is of finite length for all $q\geq0$, due to local duality, we must have $'E_2^{p,q}=0$ for all $p\neq s$, so that $$H^j\simeq{}'E_2^{s,j-s}=\Ext^s_S(\Tor_{j-s}^R(k,M),S)$$ for all $j\geq0$. Since $K^{s-q}(M)=\Ext^q_S(M,S)$ for all $q\geq0$, one has the spectral sequence \begin{equation}E_2^{p,q}=\Ext^p_R(k,K^{s-q}(M))\Rightarrow_p\Ext^s_S(\Tor^R_{p+q-s}(k,M),S).\label{eq:2}\end{equation} By convergence, we conclude that $$\beta_j(M)=\dim_k\Ext^s_S(\Tor^R_{(j+s)-s}(k,M),S)\leq\sum_{p+q=j+s}\dim_k\Ext^p_R(k,K^{s-q}(M))=\sum_{i=g}^t\mu^{i+j}(K^i(M)).$$
Now, since $K^i(M)=0$ for all $i<g$ or $i>t$, then $E_2^{p,q}=0$ for all $q<s-t$ or $q>s-g$. In particular, $E_2$ has a corner as follows $$\xymatrix@=1em{ \vdots & \vdots & \vdots \\ \Hom_R(k,K^{t-1}(M))\ar[drr] & \Ext^1_R(k,K^{t-1}(M)) & \Ext^2_R(k,K^{t-1}(M)) & \cdots \\ \Hom_R(k,K(M)) & \Ext^1_R(k,K(M)) & \Ext^2_R(k,K(M)) & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots}$$ Therefore, there exists the isomorphism $$\Hom_R(k,K(M))=E_2^{0,s-t}\simeq\Ext^s_S(\Tor^R_{-t}(k,M),S)$$ and a five-term type exact sequence $$\xymatrix@=1em{ 0\ar[r] & \Ext^1_R(k,K(M))\ar[r] & \Ext^s_S(\Tor^R_{-t+1}(k,M),S)\ar[r] & \Hom_R(k,K^{t-1}(M))\ar[dl] \\ & & \Ext^2_R(k,K(M))\ar[r] & \Ext^s_S(\Tor^R_{-t+2}(k,M),S) }$$ whence the result. \end{proof}
\begin{remark} It should be noticed that the estimate $\beta_j(M)\leq\sum_{i=g}^t\mu^{j+i}(K^i(M))$ is already known, see \cite[Theorem 3.2]{S}. \end{remark}
\begin{corollary}\label{t=0} The following statements hold. \begin{itemize}
\item [(i)] If $t=0$, then $\beta_0(M)=\mu^0(K(M))$ and $$\beta_2(M)-\beta_1(M)\geq\mu^2(K(M))-\mu^1(K(M)).$$ Otherwise, $\depth_RK(M)>0$;
\item [(ii)] If $t=1$, then $\beta_1(M)-\beta_0(M)\geq\mu^2(K(M))-\mu^1(K(M))-\mu^0(K^0(M))$;
\item [(iii)] If $t=2$, then $\beta_0(M)\geq\mu^2(K(M))-\mu^1(K(M))-\mu^0(K^1(M))$;
\item [(iv)] If $t>2$, then $\mu^0(K^{t-1}(M))\geq\mu^2(K(M))-\mu^1(K(M))$. \end{itemize} \end{corollary}
\begin{proof} It follows directly from Theorem \ref{beta<mu}. \end{proof}
\begin{corollary}\label{artinianlemma} If $M$ is a finite Artinian $R$-module, then $$\beta_2(M)-\beta_1(M)=\mu^2(K(M))-\mu^1(K(M)).$$ \end{corollary}
\begin{proof} By the corollaries \ref{typecaracterization} $(i)$ and \ref{t=0} $(i)$, $$\mu^2(K(M))-\mu^1(K(M))\geq\beta_2(M)-\beta_1(M)\geq\mu^2(K(M))-\mu^1(K(M)).$$ \end{proof}
\begin{lemma}\cite[Proposition 2.8.4]{H}\label{CIlemma} Suppose $R$ is $d$-dimensional with embedding dimension $e$. Then $\beta_1(R/\mathfrak{m})=e$ and the following statements are equivalent. \begin{itemize}
\item [(i)] $\beta_2(R/\mathfrak{m})=\binom{e}{2}+e-d$;
\item [(ii)] $R$ is a complete intersection. \end{itemize} \end{lemma}
\begin{corollary}\label{CIchar} If $R$ is $d$-dimensional of embedding dimension $e$, then $$\mu^2(k)-\mu^1(k)=\binom{e}{2}-d$$ if and only if $R$ is a complete intersection. \end{corollary}
\begin{proof} It follows directly from Corollary \ref{artinianlemma} and Lemma \ref{CIlemma}. \end{proof}
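As a quick consistency check of Corollary \ref{CIchar} (using the standard computation of the Bass numbers of $k$ over a regular local ring, which is not part of the text above): if $R$ is regular of dimension $d$, then $e=d$ and the Koszul resolution of $k$ gives $\mu^j(k)=\binom{d}{j}$, so

```latex
\mu^2(k)-\mu^1(k)=\binom{d}{2}-d=\binom{e}{2}-d,
```

in accordance with the fact that regular local rings are complete intersections.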
\begin{corollary}\label{finitepd} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. If $\id_RK^i(M)<\infty$ for all $i=g,...,t$, then $\pd_RM<\infty$. \end{corollary}
\begin{proof} By hypothesis, we have $\mu^l(K^i(M))=0$ for all $l\gg0$ and by Theorem \ref{beta<mu} one has $$\beta_j(M)\leq\sum_{i=g}^t\mu^{j+i}(K^i(M))=0$$ for all $j\gg0$, that is, $\pd_RM<\infty$. \end{proof}
The \emph{Auslander-Reiten conjecture} \cite{AR} states the following. Given a finite $R$-module $M$, if $$\Ext^j_R(M,M\oplus R)=0$$ for all $j>0$, then $M$ is free. This long-standing conjecture has been widely studied and several positive answers are already known, see for instance \cite{A,AY,AB,DEL,FJP,HL,LM,NS}. Corollary \ref{finitepd} provides another positive answer for the Auslander-Reiten conjecture for a class of modules. But first, we need a lemma.
\begin{lemma}\cite[Lemma 1 (iii)]{M}\label{pdfinite} Let $R$ be a local ring and let $M$ and $N$ be finite $R$-modules. If $\pd_RM<\infty$ and $N\neq0$, then
$$\pd_RM=\sup\{j:\Ext^j_R(M,N)\neq0\}.$$ \end{lemma}
\begin{theorem}\label{arconjtheorem} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. $M$ is free provided the following statements hold. \begin{itemize}
\item [(i)] $\id_RK^i(M)<\infty$ for all $i=g,...,t$;
\item [(ii)] There exists an $R$-module $N$ such that $\Ext^j_R(M,N)=0$ for all $j=1,...,d$. \end{itemize} \end{theorem}
\begin{proof} It follows directly from Corollary \ref{finitepd} and Lemma \ref{pdfinite}. \end{proof}
The next corollary proves the Auslander-Reiten conjecture for a certain class of modules. It generalizes the case of the conjecture settled in \cite{FJP}.
\begin{corollary}\label{AR} The Auslander-Reiten conjecture holds for finite modules having deficiency modules of finite injective dimension over local rings which are factors of Gorenstein local rings. \end{corollary}
\begin{proof} It follows immediately from Theorem \ref{arconjtheorem}. \end{proof} In the next theorem, as in Theorem \ref{foxbygeneralization2}, we make another attempt to remove the Cohen-Macaulayness hypotheses from Theorem \ref{foxbygeneralization}.
\begin{theorem}\label{foxbygeneralization3} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. If $\id_RK^i(M)<\infty$ for all $g\leq i<t$, then $$\beta_j(M)=\mu^{j+t}(K(M))$$ for all $j>s+\depth R-t-g$. In particular, $\pd_RM<\infty$ if and only if $\id_RK(M)<\infty$. \end{theorem}
\begin{proof} Consider the spectral sequence \ref{eq:2} $$E_2^{p,q}=\Ext^p_R(k,K^{s-q}(M))\Rightarrow_p\Ext^s_S(\Tor^R_{p+q-s}(k,M),S).$$ The hypothesis and \cite[Theorem 3.7.1]{BH} assure that $E_2^{p,q}=0$ for all $p>\depth R$ and for all $s-t<q\leq s-g$. Therefore, the convergence of $E$ implies that $$\Ext^j_R(k,K(M))\simeq\Ext^s_S(\Tor^R_{j-t}(k,M),S)$$ for all $j>s+\depth R-g$, whence the result. \end{proof}
The next proposition is an attempt to understand the converse of Corollary \ref{finitepd}.
\begin{proposition}\label{t=g+rfinitepd} Assume $K^i(M)=0$ for all $i\neq g,t$. If $\pd_RM<\infty$, then $\mu^j(K^g(M))=\mu^{j-g+t+1}(K(M))$ for all $j>\pd_RM+1$. \end{proposition}
\begin{proof} Write $r=t-g$. The spectral sequence \ref{eq:2} has only two lines as follows $$\xymatrix@=1em{ 0 & 0 & \cdots & 0 & \cdots \\ \vdots & \vdots & & \vdots \\ \Hom_R(k,K^g(M))\ar[ddrrr] & \Ext^1_R(k,K^g(M)) & \cdots & \Ext^{p+r+1}_R(k,K^g(M)) & \cdots \\ \vdots & \vdots & \ddots & \vdots \\ \Hom_R(k,K(M)) & \Ext^1_R(k,K(M)) & \cdots & \Ext^{p+r+1}_R(k,K(M)) & \cdots \\ 0 & 0 & \cdots & 0 & \cdots \\ \vdots & \vdots & & \vdots}$$ Such a shape and convergence yield an exact sequence $$\xymatrix@=1em{ \Ext^s_S(\Tor^R_{j-g}(k,M),S)\ar[r] & \Ext^j_R(k,K^g(M))\ar[r] & \Ext^{j+r+1}_R(k,K(M))\ar[r] & \Ext^s_S(\Tor^R_{j-g+1}(k,M),S)}$$ for all $j\geq0$. Thus, if $j>\pd_RM+1$, then $$\Ext^j_R(k,K^g(M))\simeq\Ext^{j+r+1}_R(k,K(M))$$ and, in particular, $\mu^j(K^g(M))=\mu^{j+r+1}(K(M))$. \end{proof}
Corollary \ref{finitepd} and Proposition \ref{t=g+rfinitepd} lead us to ask the following.
\begin{question}\label{question2} Let $M$ be a finite $R$-module of depth $g$ and dimension $t$. Is it true that $$\pd_RM<\infty \Leftrightarrow \id_RK^i(M)<\infty, \forall i=g,...,t?$$ \end{question}
\end{document} |
\begin{document}
\pagestyle{plain}
\sloppy
\title{The H-index can be easily manipulated}
\begin{abstract}
We prove two complexity results about the H-index, concerning the
Google scholar \emph{merge} operation on one's scientific articles. The results show that, although it is hard to merge one's articles in an optimal way, it is easy to merge them in such a way that one's H-index increases. This suggests the need for an alternative scientific performance measure that is resistant to this type of manipulation. \end{abstract}
\section{Introduction}
The \emph{H-index} was introduced by the physicist J.E. Hirsch in~\cite{Hir05} to `quantify an individual's scientific research output'. Recall that it is defined as the largest $x$ such that one's $x$ most cited papers are each cited at least $x$ times. (An aside: Hirsch's original definition was ambiguous, as pointed out in~\cite{Ste07}, where the current definition is proposed.) Its introduction led to an impressive literature. According to Google scholar, by 18 April 2013 this paper had been cited 3043 times. To mention just one example, \cite{Woe08} provided an axiomatic characterization of the H-index.
The H-index started to be used as a universal measure to assess and compare researchers in a given discipline. Hirsch suggested in his paper `(with large error bars) that for faculty at major research universities, $h \approx 12$ might be a typical value for advancement to tenure (associate professor) and that $h \approx 18$ might be a typical value for advancement to full professor'.
In fact, computer scientists seem to cite each other much more often. Jens Palsberg maintains at \url{http://www.cs.ucla.edu/~palsberg/h-number.html} a list of computer scientists with H-index 40 or higher (a value corresponding in Hirsch's article to Nobel prize winners). The list has more than 600 names and is based on the output generated by Google scholar.
Several people have made the obvious observation that the H-index can be boosted by such simple measures as adding your name to articles written by members of your group, splitting a long article into a couple of shorter ones, or citing one's own and each other's work. For example, \cite{BK11a} studies the problem of manipulability of the H-index by means of self-citations.
This brings us to the subject of this note. \emph{Google scholar} allows one to perform some operations on the listed articles; notably, the \emph{merge} operation allows one to combine two versions of an article even if they have different titles. By means of the merge operation, you can obviously improve your H-index. Suppose for instance that your H-index is 20. Then you can increase it by merging two articles that are each cited 11 times.
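The arithmetic of this example can be checked mechanically. The following Python sketch (ours, not part of the note) computes the H-index before and after merging the two 11-citation articles:

```python
def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    s = sorted(citations, reverse=True)
    h = 0
    while h < len(s) and s[h] >= h + 1:
        h += 1
    return h

before = [21] * 20 + [11, 11]   # H-index 20
after = [21] * 20 + [11 + 11]   # the two 11-citation articles merged
print(h_index(before), h_index(after))  # -> 20 21
```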
This suggests two natural problems, where in each case we refer to the improvement of the H-index by means of the merge operation.
\begin{itemize} \item Is it possible to improve your H-index?
\item Given a number $k$, determine whether your H-index can be improved to at least $k$.
\end{itemize}
\section{Two results}
To deal with these questions, we introduce first some notation. A researcher's output is represented as a multiset of natural numbers, each number representing a publication and its value representing the number of its citations. For example the multiset $\{1,1,2,3,4,4,5,5,5\}$ represents an output consisting of 9 publications with the corresponding H-index 4. Given a multiset $T$ of numbers we abbreviate $\sum_{x \in T} x$ to $\sum T$. So $\sum T$ is the number of citations resulting from the merge of the publications in $T$ into one.
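The H-index of such a multiset is easy to compute by sorting. A minimal Python sketch (ours, not part of the note):

```python
def h_index(citations):
    # Largest h such that at least h publications have >= h citations each.
    s = sorted(citations, reverse=True)
    h = 0
    while h < len(s) and s[h] >= h + 1:
        h += 1
    return h

# The running example from the text: 9 publications, H-index 4.
print(h_index([1, 1, 2, 3, 4, 4, 5, 5, 5]))  # -> 4
```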
To deal with the outcomes of merges we need to consider partitions of such multisets.
Fix a finite multiset $S$ of numbers from $\mathbb{N}_{> 0}$. We denote by $\bar{S}$ the singletons partition $\{\{x\}\ |\ x \in S\}$. Given a partition $\mathcal{T}$ of $S$, we define \begin{equation*}
v(\mathcal{T}) = \max\{|\mathcal{T}'|\ |\ \mathcal{T}' \subseteq \mathcal{T}, \forall T \in \mathcal{T'} : \sum T \geq |\mathcal{T}'| \}, \end{equation*}
where, as usual, $|\mathcal{T}'|$ denotes the cardinality of the multiset $\mathcal{T}'$ (which is a submultiset of a partition of $S$ in this case). In words, call a subset $\mathcal{T}'$ of the partition $\mathcal{T}$
\emph{good} if each element $T$ of $\mathcal{T}'$ after merge into a single publication yields at least $|\mathcal{T}'|$ citations. So if one allows the merge operation, then a good partition $\mathcal{T}'$
ensures that the H-index can be set to at least $|\mathcal{T}'|$. Then $v(\mathcal{T})$ is the cardinality of the largest good subset of $\mathcal{T}$, hence $v(\mathcal{T})$ is the largest H-index one can obtain once the merges prescribed by $\mathcal{T}$ are performed, while $v(\bar{S})$ is the H-index corresponding to the input multiset $S$. To put it more directly, \[
v(\bar{S}) = \max\{|T|\ |\ T \subseteq S, \ \forall x \in T \: x \geq |T| \}, \] where $T$ ranges over the submultisets of $S$.
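Since $v(\mathcal{T})$ is just the H-index computed on the block sums of the partition, it can be sketched in Python as follows (ours, not part of the note); the second call exhibits an improving partition of the running example:

```python
def v(partition):
    # v(T): the H-index computed on the merged blocks, i.e. the largest
    # size of a sub-collection all of whose blocks sum to at least that size.
    sums = sorted((sum(block) for block in partition), reverse=True)
    h = 0
    while h < len(sums) and sums[h] >= h + 1:
        h += 1
    return h

S = [1, 1, 2, 3, 4, 4, 5, 5, 5]
print(v([[x] for x in S]))                       # v(S-bar) -> 4
print(v([[1, 1, 2, 3], [4, 4], [5], [5], [5]]))  # an improving partition -> 5
```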
We call a partition $\mathcal{S}$ of $S$ an \emph{improving partition} if $v(\mathcal{S}) > v(\bar{S})$. We can now formalize the above two problems as follows, given as input a finite multiset $S$ of numbers in $\mathbb{N}_{> 0}$.
\paragraph{H-index improvement problem} Does there exist an improving partition? If yes, find it.
\paragraph{H-index achievability problem} Given a number $k$, does there exist a partition $\mathcal{T}$ of $S$, such that $v(\mathcal{T}) \geq k$? ~\\ \\ In Section \ref{sect:proofs}, we present the proofs of the following two results.
\begin{theorem} \label{thm:one} The H-index improvement problem can be solved in polynomial time. \end{theorem}
\begin{theorem} \label{thm:two} The H-index achievability problem is strongly $\mathsf{NP}$-complete.\footnote{A decision problem that involves numerical input is said to be \emph{strongly} $\mathsf{NP}$-complete if the problem is $\mathsf{NP}$-complete even if all the numbers in the input are represented in unary.} \end{theorem}
In particular, it is strongly $\mathsf{NP}$-hard to compute the maximal H-index that can be achieved through the merge operation.
From the viewpoint of manipulability, Theorem~\ref{thm:one} is bad news. Ideally, we would like to have a performance measure that is computationally difficult to manipulate. One can see a parallel with the search for voting methods that are difficult to manipulate, see, e.g. \cite{ZFBE12}. Our conclusion is that the H-index is not the last word in the ongoing quest to find a credible way to quantify one's scientific output.
\section{Proofs of the theorems}\label{sect:proofs}
In what follows, we assume that a multiset is represented as a list of possibly duplicate numbers. A different way of representing a multiset would be the more compact one, where we list only the distinct numbers that appear in the multiset, along with their respective multiplicity. We consider the latter representation to be unnatural, given the context in which we study this problem.
\noindent \textbf{Proof of Theorem~\ref{thm:one}}. Let $S$ be the given multiset. Let $S'$ be the smallest submultiset of $S$ such that $v(\bar{S}) = v(\overline{S'})$. For instance, if $S = \{5, 4, 3, 3, 3, 2\}$, then $S' = \{5, 4, 3\}$ and if $S = \{5, 3, 3, 3, 3, 2\}$, then $S' = \{5, 3, 3\}$. In both cases $v(\bar{S}) = 3$. Call a number $x \in S'$ \emph{supercritical} if $x > v(\bar{S})$ and
\emph{critical} if $x = v(\bar{S})$. Let $C_{+}$ be the multiset of all supercritical numbers in $S'$ and $C$ the multiset of all critical numbers in $S'$. Note that $C$ and $C_{+}$ partition $S'$ and that $v(\bar{S}) = |C_{+}| + |C|$. Furthermore, let $L$ denote the multiset of $|C|$ smallest numbers in $S$.
For instance, if $S = \{5, 4, 3, 3, 3, 2\}$, then $C = \{3\}$ and $L = \{2\}$, and if $S = \{5, 3, 3, 3, 3, 2\}$, then $C = \{3, 3\}$ and $L = \{3, 2\}$.
Note that below, we treat duplicate numbers in $S$ as having ``separate identities'', so that for two numbers $x,y \in S$ that are equal in magnitude, it may hold that $x \in C$ but $y \not\in C$ or $x \in L$ but $y \not\in L$. We believe that this slight informality and definitional abuse will cause no confusion to the reader.
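The sets $S'$, $C_{+}$, $C$ and $L$ can be computed by exactly this index-based bookkeeping. A Python sketch (ours, not part of the note), reproducing the two examples above:

```python
def critical_data(S):
    # Work with positions in the sorted list so that duplicate numbers keep
    # "separate identities": S' is the h largest entries, C those equal to h,
    # C_plus those above h, and L the |C| smallest entries of S.
    s = sorted(S)
    n = len(s)
    h = 0
    for x in reversed(s):
        if x >= h + 1:
            h += 1
        else:
            break
    top = s[n - h:]                     # S'
    C = [x for x in top if x == h]
    C_plus = [x for x in top if x > h]
    L = s[:len(C)]
    return C_plus, C, L

print(critical_data([5, 4, 3, 3, 3, 2]))  # -> ([4, 5], [3], [2])
print(critical_data([5, 3, 3, 3, 3, 2]))  # -> ([5], [3, 3], [2, 3])
```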
We first establish the following characterization result.
\begin{lemma}
There exists an improving partition of $S$ iff $L \cap C = \varnothing$ and $\sum S \backslash (C
\cup C_{+} \cup L) > |C| + |C_{+}|$. \end{lemma}
\noindent \emph{Proof}. Suppose there exists an improving partition $\mathcal{S}$ of $S$.
We can assume without loss of generality that the following properties then hold:
\begin{enumerate}
\item Each supercritical number in $S$ appears in a singleton set in
$\mathcal{S}$. These are the only singleton sets in $\mathcal{S}$.
Indeed, if a supercritical number $x \in S$ appears in a
non-singleton set $T \in \mathcal{S}$, then take the partition
$\mathcal{T}$ of $S$ obtained from $\mathcal{S}$ by splitting $T$
into singletons. Because $\mathcal{S}$ is an improving partition,
there are at least $v(\bar{S})$ multisets $T' \in \mathcal{S}
\backslash \{T\}$ such that $\sum T' > v(\bar{S})$. All multisets of
$\mathcal{S} \backslash \{T\}$ are in $\mathcal{T}$. Also the number
$x$ is in a singleton set of $\mathcal{T}$ and $x > v(\bar{S})$.
Therefore, there are in $\mathcal{T}$ at least $v(\bar{S}) + 1$
multisets $T'$ such that $\sum T' > v(\bar{S})$. Hence,
$\mathcal{T}$ is an improving partition.
After we have repeatedly performed the above splitting steps we
obtain an improving partition $\mathcal{S'}$ such that each
supercritical number $x \in S$ appears in a singleton set in
$\mathcal{S'}$.
Since \[
v(\mathcal{S'}) > v(\bar{S}) = |C_{+}| + |C| \geq |C_{+}|, \] there exists in $\mathcal{S'}$ a non-singleton multiset $T \in \mathcal{S'}$ that contains only non-supercritical numbers. Merging into it all singleton sets that contain a non-supercritical number yields the desired improving partition.
\item $L$ is disjoint from $C$.
By Property 1, the supercritical numbers form singleton sets in
$\mathcal{S}$, and each remaining multiset has cardinality at least
$2$. If $L$ were not disjoint from $C$, then we would have $|S| \leq
|C_{+}| + |L| + |C|$, so $|S \backslash C_{+}| \leq |L| + |C| =
2|C|$, hence the number $\ell$ of non-singleton multisets in
$\mathcal{S}$ would be at most $|C|$. This yields a contradiction,
since we would then have $v(\mathcal{S}) \leq |C_{+}| + \ell \leq
|C_{+}|+|C| = v(\bar{S})$.
\item In $\mathcal{S}$, every critical number is in a set of cardinality 2.
Indeed, by Property 1, critical numbers do not appear in singleton
sets. Further, if a critical number $x \in S$ appears in a multiset
$T \in \mathcal{S}$ of cardinality exceeding 2, then we can split
$T$ in any way so that $x$ is put in a multiset $T'$ of cardinality
$2$. It then holds that $\sum T' > v(\bar{S})$, so the resulting
partition remains an improving partition.
\item There is a bijection $\pi : C \rightarrow L$ such that $\{x, \pi(x)\} \in \mathcal{S}$ (i.e., $C$ is ``matched'' with $L$ in $\mathcal{S}$).
Indeed, by Property 3, every critical number is in a set of
cardinality 2. Now, let $x$ be a critical number and let $\{x,y\}
\in \mathcal{S}$ be the multiset of cardinality $2$ that contains
$x$. If $y$ is not in $L$, then $|C| = |L|$ implies that there is a
number $y' \in L$ that occurs in a multiset $T$ in $\mathcal{S}$
that does not contain a critical number. Because $y' \leq y$, the
operation of swapping $y'$ and $y$ in $\mathcal{S}$ does not
decrease the number of multisets that sum to at least $v(\bar{S}) +
1$. So the partition that results after this swap remains an
improving partition. \end{enumerate}
We have $v(\mathcal{S}) > v(\bar{S}) = |C_{+}| + |C|$, so by Properties 1, 2, and 4, there is a multiset $T \in \mathcal{S}$ disjoint from $C_+$, $C$, and $L$, such that $\sum T > v(\bar{S})$. Hence $\sum S \backslash (C \cup C_{+} \cup L) \geq \sum T >
v(\bar{S}) = |C| + |C_{+}|$. We conclude that if there is an improving partition, then $L \cap C = \varnothing$ and $\sum S \backslash (C
\cup C_{+} \cup L) > |C| + |C_{+}|$.
Conversely, if $L \cap C = \varnothing$ and $\sum S \backslash (C \cup C_{+} \cup L) > |C| + |C_{+}|$, then there is an improving partition. It consists of
\begin{itemize} \item the singletons, each containing an element of $C_{+}$,
\item the sets of cardinality $2$, each containing a pair of elements from $C$ and $L$,
\item the multiset $S \backslash (C \cup C_{+} \cup L)$. \end{itemize}
{$\Box$}
The proof of Theorem~\ref{thm:one} is now immediate. It is straightforward to compute $C_+$, $C$ and $L$ in polynomial time. Using the above lemma we can therefore determine in polynomial time whether an improving partition exists, and find one in polynomial time if it does.
{$\Box$}
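Putting the lemma and these computations together gives the polynomial-time test. A Python sketch (ours, not part of the note) that decides improvability by the lemma's criterion, handling duplicate 'identities' through positions in the sorted list:

```python
def can_improve(S):
    # Lemma: an improving partition exists iff L and C can be chosen with
    # disjoint identities and the leftover numbers sum to more than
    # |C| + |C_plus| = v(S-bar).
    s = sorted(S)
    n = len(s)
    h = 0
    for x in reversed(s):
        if x >= h + 1:
            h += 1
        else:
            break
    c = sum(1 for x in s[n - h:] if x == h)  # |C|
    if n < h + c:
        return False                         # L necessarily meets C
    leftover = s[c:n - h]                    # S minus C_plus, C and L
    return sum(leftover) > h

print(can_improve([5, 4, 3, 3, 3, 2]))  # -> True  (merge the two leftover 3s)
print(can_improve([5, 3, 3, 3, 3, 2]))  # -> False
```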
\noindent \textbf{Proof of Theorem~\ref{thm:two}}. The problem is clearly in $\mathsf{NP}$, so the proof will focus on establishing $\mathsf{NP}$-hardness. We do this by means of a polynomial time reduction from a strongly $\mathsf{NP}$-complete problem. The reduction is from the 3-PARTITION problem. In the 3-PARTITION problem, we are given a multiset $M$ of $3m$ positive integers, such that $\sum M = mb$ for some $b \in \mathbb{N}$. We have to decide whether it is possible to partition this set into $m$ submultisets, such that the sum of the numbers in each submultiset is exactly $b$.
Garey and Johnson \cite{gareyjohnson} prove that the 3-PARTITION problem is strongly $\mathsf{NP}$-complete, even under the assumption that $M$ is represented as above (i.e., non-concisely). This means that the 3-PARTITION problem is $\mathsf{NP}$-complete even when $b$ is bounded by some polynomial in $m$. Denote this polynomial by $p(m)$. From now on, with the SPECIAL 3-PARTITION problem we will mean the special case of the problem where $b$ is bounded by $p(m)$.
Before proceeding, one note is in order. In the original definition of the 3-PARTITION problem, the additional requirement is imposed that all sets in the partition are of cardinality 3 (and this is also where the name of the problem originates from). For convenience, we do not impose this requirement here. The reason it is not necessary to impose this requirement is that in \cite{gareyjohnson}, it is shown that strong $\mathsf{NP}$-hardness holds even when all numbers in the multiset are strictly between $b/4$ and $b/2$. This enforces that all sets in the partition are of cardinality 3. Without the cardinality constraint, the problem thus becomes more general, and is automatically strongly $\mathsf{NP}$-hard.
Given a SPECIAL 3-PARTITION instance $(S',m,b)$, we reduce it to an H-index achievability problem instance $(S,k)$ as follows. First, obtain $S''$ from $S'$ by adding $m$ to each number in $S'$. Note that $(S'',m,k)$, where $k=b+3m$, is a YES-instance of 3-PARTITION if and only if $(S',m,b)$ is a YES-instance of SPECIAL 3-PARTITION. Note also that $k-m = b+2m > 0$. Next, obtain the multiset $S$ from $S''$ by adding $k-m$ copies of $k$ to $S''$. This takes polynomial time, as $k$ is bounded by $p(m)+3m$.
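The construction of $(S,k)$ is straightforward to carry out. A Python sketch (ours, not part of the note; the toy instance below is hypothetical):

```python
def reduce_to_achievability(S_prime, m, b):
    # Build the H-index achievability instance (S, k) from a SPECIAL
    # 3-PARTITION instance (S', m, b), following the text's construction.
    S_pp = [x + m for x in S_prime]  # S'': add m to every number
    k = b + 3 * m                    # the target H-index
    S = S_pp + [k] * (k - m)         # pad with k - m papers cited k times
    return S, k

# Hypothetical toy instance: six 1s, split into m = 2 triples summing to b = 3.
S, k = reduce_to_achievability([1, 1, 1, 1, 1, 1], 2, 3)
print(k, len(S), S.count(k))  # -> 9 13 7
```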
We now show that $(S,k)$ is a YES-instance of the H-index achievability problem if and only if $(S'',m,k)$ is a YES-instance of 3-PARTITION.
If $(S'',m,k)$ is a YES-instance of 3-PARTITION, then let $\mathcal{T}$ be a certificate for that, so $\mathcal{T}$ is a partition of $S''$ into $m$ multisets such that the
sum of the numbers in each multiset is $k$. Then by adding to $\mathcal{T}$ exactly $k - m$ copies of the set $\{k\}$, we obtain a certificate that $(S,k)$ is a YES-instance of the H-index achievability problem: the resulting partition contains $k$ multisets, each summing to at least $k$.
Conversely, if $(S,k)$ is a YES-instance of the H-index achievability problem, then let $\mathcal{T}$ be a certificate for that. We can assume without loss of generality that the partition $\mathcal{T}$ contains exactly $k - m$ copies of the set $\{k\}$. Indeed, otherwise we can split each non-singleton set in $\mathcal{T}$ that contains a copy of $k$ into singleton sets. This will result in a desired certificate.
By removing all singleton sets $\{k\}$ from $\mathcal{T}$ we obtain a partition $\mathcal{T}'$ of $S''$. By the choice of $(S,k)$ this new partition $\mathcal{T}'$ contains $m$ multisets, each of which sums up to $k$. $\mathcal{T}'$ does not contain any additional multiset besides these $m$ multisets, as otherwise we would have $\sum S'' > mk$, which is not the case by construction. Therefore, $\mathcal{T}'$ is a certificate that $(S'',m,k)$ is a YES-instance of 3-PARTITION.
{$\Box$}
\end{document}
We could consider two variants of these problems: One possibility is to assume that the input multiset is represented \emph{concisely}, i.e., by a list of pairs of integers $((a_1,b_1), \ldots, (a_n, b_n))$, where $b_i$ indicates the multiplicity with which $a_i$ occurs in $S$. Information-theoretically, this is the most natural way to represent an instance of the problem.
Alternatively, we could study a non-concise version where the input multiset is represented by a list $L = (a_1, \ldots, a_n)$ of numbers, where the multiplicity with which a given number $a$ appears in the multiset is the number of times $a$ appears in $L$. This is information-theoretically an unreasonable way of representing the input. However, given the way in which the citation information is stored in Google's database, this non-concise representation is actually to be considered more realistic, and therefore we deem this non-concise version of the H-index manipulation problem the most reasonable one to study in this case. We will thus assume non-conciseness throughout.
For the H-index improvement problem, we will prove polynomial time decidability. For the H-index achievability problem, we will prove strong $\mathsf{NP}$-hardness. The latter will be treated first.
\begin{theorem} The H-index achievability problem is strongly $\mathsf{NP}$-complete. \end{theorem} \begin{proof} The problem is clearly in $\mathsf{NP}$, so the proof will focus on establishing $\mathsf{NP}$-hardness. We do this by means of a polynomial time reduction from a strongly $\mathsf{NP}$-complete problem, namely the 3-PARTITION problem. In the 3-PARTITION problem, we are given a multiset $M$ of $3m$ positive integers such that $\sum M = mb$ for some $b \in \mathbb{N}$, and we have to decide whether it is possible to partition this multiset into $m$ submultisets such that the sum of the numbers in each submultiset is exactly $b$.
Garey and Johnson \cite{gareyjohnson} prove that the 3-PARTITION problem is strongly $\mathsf{NP}$-complete, even under the assumption that $M$ is represented as above (i.e., non-concisely). This means that the 3-PARTITION problem is $\mathsf{NP}$-complete even when $b$ is bounded by some polynomial in $m$. Denote this polynomial by $p(m)$. From now on, with the SPECIAL 3-PARTITION problem we will mean the special case of the problem where $b$ is bounded by $p(m)$.
Before proceeding, one note is in order. In the original definition of the 3-PARTITION problem, the additional requirement is imposed that all sets in the partition have cardinality 3 (which is also where the name of the problem originates). For convenience, we do not impose this requirement here. It is not necessary to impose it because \cite{gareyjohnson} shows that strong $\mathsf{NP}$-hardness holds even when all numbers in the multiset are strictly between $b/4$ and $b/2$, which forces every set in the partition to have cardinality 3. Without the cardinality constraint, the problem thus becomes more general, and remains strongly $\mathsf{NP}$-hard.
Given a SPECIAL 3-PARTITION instance $(S',m,b)$, we reduce it to an H-index achievability instance $(S,k)$ as follows. First, obtain $S''$ from $S'$ by adding $m$ to each number in $S'$. Note that $(S'',m, k)$, where $k=b+3m$, is a YES-instance of 3-PARTITION if and only if $(S',m,b)$ is a YES-instance of SPECIAL 3-PARTITION. Note also that $k-m = b+2m > 0$. Next, obtain the multiset $S$ from $S''$ by adding $k-m$ copies of $k$ to $S''$. This takes polynomial time, as $k$ is bounded by $p(m)+3m$.
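The construction above is simple enough to state in code. A minimal sketch (the function name and instance encoding are ours; instances use the non-concise list representation discussed earlier):

```python
def special_3partition_to_hindex(S_prime, m, b):
    """Reduce a SPECIAL 3-PARTITION instance (S', m, b) to an
    H-index achievability instance (S, k), following the text:
    add m to every number, set k = b + 3m, pad with k - m copies of k."""
    S_dd = [a + m for a in S_prime]   # S'': shifts each triple's sum from b to b + 3m = k
    k = b + 3 * m
    S = S_dd + [k] * (k - m)          # k - m = b + 2m > 0 padding copies of k
    return S, k
```

The padding copies are exactly what makes an H-index of $k$ achievable if and only if the $m$ remaining multisets can each be made to sum to $k$.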
We now show that $(S,k)$ is a YES-instance of the H-index achievability problem if and only if $(S'',m,k)$ is a YES-instance of 3-PARTITION.
If $(S'',m,k)$ is a YES-instance of 3-PARTITION, then let $\mathcal{T}$ be a certificate for that, so $\mathcal{T}$ is a partition of $S''$ into $m$ multisets such that the
sum of the numbers in each multiset is $k$. Then by adding to $\mathcal{T}$ exactly $k - m$ copies of the set $\{k\}$, we obtain a partition of $S$ into $m + (k-m) = k$ multisets, each summing to at least $k$; this is a certificate that $(S,k)$ is a YES-instance of the H-index achievability problem.
Conversely, if $(S,k)$ is a YES-instance of the H-index achievability problem, then let $\mathcal{T}$ be a certificate for that. We can assume without loss of generality that the partition $\mathcal{T}$ contains exactly $k - m$ copies of the set $\{k\}$. Indeed, otherwise we can split each non-singleton set in $\mathcal{T}$ that contains a copy of $k$ into singleton sets. This will result in a desired certificate.
By removing all singleton sets $\{k\}$ from $\mathcal{T}$ we obtain a partition $\mathcal{T}'$ of $S''$. By the choice of $(S,k)$ this new partition $\mathcal{T}'$ contains $m$ multisets, each of which sums up to $k$. Moreover, $\mathcal{T}'$ does not contain any additional multiset besides these $m$ multisets, since otherwise we would have $\sum S'' > mk$, which is not the case by construction. Therefore, $\mathcal{T}'$ is a certificate that $(S'',m,k)$ is a YES-instance of 3-PARTITION. \end{proof}
Despite this bad news, we show that it is still possible to solve the H-index improvement problem in polynomial time: \begin{theorem} The H-index improvement problem can be solved in polynomial time. \end{theorem} \begin{proof} Let $S$ be our given multiset, and let $\bar{S}$ denote its partition into singletons. Suppose that an improvement exists; then there exists an \emph{improving partition}, i.e., a partition $\mathcal{S}$ of $S$ such that $v(\mathcal{S}) > v(\bar{S})$. We say that a number in $S$ is \emph{supercritical} if it exceeds $v(\bar{S})$, and is \emph{critical} if it is equal to $v(\bar{S})$. Let $C_+$ and $C$ be the sets of supercritical and critical numbers, respectively. Note that $v(\bar{S})$ is the cardinality of the set of critical and supercritical numbers.
$S$ can be assumed to satisfy three properties without loss of generality: \begin{itemize}
\item We can assume that all supercritical numbers in $S$ form a
singleton set in $\mathcal{S}$, because if there is a set in
$\mathcal{S}$ of cardinality at least $2$ that contains a number
strictly greater than $v(\bar{S})$, then splitting that set into
singletons will still result in an improving partition.
\item For similar reasons we can assume that in $\mathcal{S}$, all critical numbers occur in sets of cardinality $2$: each critical number in $C$ is matched with a number in $L$, where $L$ denotes the set of the $|C|$ smallest numbers of $S$. If a critical number occurred in a set of cardinality exceeding $2$, then that set could be split into two sets arbitrarily, and the resulting partition would still be improving. If a critical number occurs in a set of cardinality $2$ together with a number $a$ not in $L$, then $a$ can be swapped with a number in $L$ and the resulting partition will still be improving. \item We can assume without loss of generality that in $\mathcal{S}$, there is only a single set that does not contain any number in $C_+$ or $C$. To see why, assume w.l.o.g. that $\mathcal{S}$ satisfies the above two properties. If we remove from $\mathcal{S}$ all sets not intersecting $C_+$ and $C$ and call the resulting partition $\mathcal{S}'$, then we see that $v(\mathcal{S}') = v(\bar{S})$. Denote then by $T$ the union of all sets not intersecting $C_+$ and $C$, and observe that $v(\mathcal{S}' \cup \{T\}) > v(\bar{S})$. \end{itemize} With the insights given in the above three points, it is also straightforward to verify that any partition satisfying the above three properties is an improving partition if and only if $L$ is disjoint from $C$ and $\sum(S \backslash (C_+ \cup C \cup L)) > v(\bar{S})$.
It is straightforward to compute $C_+$ and $C$ in polynomial time, and therefore also $L$. Thus, we can construct in polynomial time a partition that satisfies the three assumptions above, and hence we can decide in polynomial time whether an improving partition exists. \end{proof}
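As a sanity check of the problem definitions (not of the polynomial-time algorithm above), one can brute-force the H-index improvement question on tiny multisets. All names here are ours, and the enumeration over all set partitions is exponential, so this is for illustration only:

```python
def h_index(values):
    """H-index of a multiset: the largest h such that at least h values are >= h."""
    vals = sorted(values, reverse=True)
    return max([0] + [h for h, v in enumerate(vals, start=1) if v >= h])

def partitions(items):
    """All set partitions of a list (exponential; small inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def improvable(S):
    """Does some partition of S beat the H-index of the singleton partition?"""
    base = h_index(S)
    return any(h_index([sum(block) for block in part]) > base
               for part in partitions(list(S)))
```

For instance, $S = \{3,3,3,1,1,1,5\}$ has singleton H-index $3$, but the partition $\{5\},\{3,1\},\{3,1\},\{3,1\}$ has four sums of at least $4$ and so achieves value $4$.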
\end{document}
\begin{document}
\title{Fully Dynamic Spanners with Worst-Case Update Time\thanks{To be presented at the European Symposium on Algorithms (ESA) 2016. This work was partially done while the authors were visiting the Simons Institute for the Theory of Computing.}} \maketitle \begin{abstract} An {\em $\alpha$-spanner} of a graph $ G $ is a subgraph $ H $ such that $ H $ preserves all distances of $ G $ within a factor of $ \alpha $. In this paper, we give fully dynamic algorithms for maintaining a spanner $ H $ of a graph $ G $ undergoing edge insertions and deletions with worst-case guarantees on the running time after each update. In particular, our algorithms maintain: \begin{itemize} \item a $3$-spanner with $ \tilde O (n^{1+1/2}) $ edges with worst-case update time $ \tilde O (n^{3/4}) $, or \item a $5$-spanner with $ \tilde O (n^{1+1/3}) $ edges with worst-case update time $ \tilde O (n^{5/9}) $. \end{itemize} These size/stretch tradeoffs are best possible (up to logarithmic factors). They can be extended to the weighted setting at very minor cost. Our algorithms are randomized and correct with high probability against an oblivious adversary. We further extend our techniques to construct a $5$-spanner with suboptimal size/stretch tradeoff, but improved worst-case update time.
To the best of our knowledge, these are the {\em first} dynamic spanner algorithms with sublinear worst-case update time guarantees. Since it is known how to maintain a spanner using small {\em amortized} but large {\em worst-case} update time \cite[Baswana et al.\ SODA'08]{BaswanaKS12}, obtaining algorithms with strong worst-case bounds, as presented in this paper, seems to be the next natural step for this problem.
\end{abstract}
\tableofcontents
\section{Introduction}
An {\em $\alpha$-spanner} of a graph $G$ is a sparse subgraph that preserves all original distances within a multiplicative factor of $\alpha$. Spanners are an extremely important and well-studied primitive in graph algorithms. They were formally introduced by Peleg and Sch{\"a}ffer \cite{PS89} in the late eighties after appearing naturally in several network problems \cite{PU89}. Today, they have been successfully applied in diverse fields such as routing schemes \cite{Cowen01, CW04, PU89, RTZ08, TZ01}, approximate shortest paths algorithms \cite{DHZ96, Elkin05, BaswanaK10}, distance oracles \cite{BaswanaK10, Chechik14, Chechik15, PR10, TZ05}, broadcasting \cite{FPZ+04}, etc. A landmark upper bound result due to Awerbuch \cite{Awerbuch85} states that for any integer $ k $, every graph has a $(2k-1)$-spanner on $O(n^{1 + 1/k})$ edges. Moreover, the extremely popular {\em girth conjecture} of Erd\H{o}s \cite{girth} implies the existence of graphs for which $\Omega(n^{1 + 1/k})$ edges are necessary in any $(2k-1)$-spanner. Thus, the primary question of the optimal sparsity of a graph spanner is essentially resolved.
The next natural question in the field of spanners is to obtain efficient algorithms for computing a sparse spanner of an input graph $G$. This problem is well understood in the static setting; notable results include \cite{Awerbuch85, BaswanaS07, RTZ05, TZ01}. However, in many of the above applications of spanners, the underlying graph can experience minor changes and the application requires the algorithm designer to have a spanner available at all times. Here, it is very wasteful to recompute a spanner from scratch after every modification. The challenge is instead to {\em dynamically maintain} a spanner under edge insertions and deletions with only a small amount of time required per update. This is precisely the problem we address in this paper.
The pioneering work on dynamic spanners was by Ausiello et al.~\cite{AusielloFI06}, who showed how to maintain a $3$- or $5$-spanner with amortized update time proportional to the maximum degree~$ \Delta $ of the graph, i.e. for any sequence of $u$ updates the algorithm takes time $O (u \cdot \Delta)$ in total. In sufficiently dense graphs, $ \Delta $ might be $\Omega(n)$. Elkin \cite{Elkin11} showed how to maintain a $(2k-1)$-spanner of optimal size using $\tilde O (mn^{-1/k}) $ expected update time, i.e. {\em super-linear} time for dense enough graphs. Finally, Baswana et al.~\cite{BaswanaKS12} gave fully dynamic algorithms that maintain $(2k-1)$-spanners with essentially optimal size/stretch tradeoff using {\em amortized} $O(k^2 \log^2 n)$ or $O(1)^k$ time per update. Their {\em worst-case} guarantees are much weaker: any individual update in their algorithm can require $\Omega(n)$ time. Notably, {\em every} previously known fully dynamic spanner algorithm carries the drawback of $\Omega(n)$ worst-case update time. It is thus an important open question whether this update time is an intrinsic part of the dynamic spanner problem, or whether this linear time threshold can be broken with new algorithmic ideas.
There are concrete reasons to prefer worst-case update time bounds to their amortized counterparts. In real-time systems, hard guarantees on update times are often needed to serve each request before the next one arrives. Amortized guarantees, meanwhile, can cause undesirable behavior in which the system periodically stalls on certain inputs. Despite this motivation, good worst-case update times often pose a veritable challenge to dynamic algorithm designers, and are thus significantly rarer in the literature. Historically, the fastest dynamic algorithms usually first come with amortized time bounds, and comparable worst-case bounds are achieved only after considerable research effort. For example, this was the case for the dynamic connectivity problem on undirected graphs~\cite{KapronKM13} and the dynamic transitive closure problem on directed graphs~\cite{Sankowski04}. In other problems, a substantial gap between amortized and worst-case algorithms remains, despite decades of research. This holds in the cases of fully dynamically maintaining minimum spanning trees~\cite{HolmLT01,Frederickson85,EppsteinGIN97}, all-pairs shortest paths~\cite{DemetrescuI04,Thorup05}, and more. Thus, strong amortized update time bounds for a problem do not at all imply the existence of strong worst-case update time bounds, and once strong amortized algorithms are found it becomes an important open problem to discover whether or not there are interesting worst-case bounds to follow.
The main result of this paper is that highly nontrivial worst-case time bounds are indeed available for fully dynamic spanners. We present the first ever algorithms that maintain spanners with essentially optimal size/stretch tradeoff and {\em polynomially sublinear} (in the number of nodes in the graph) worst-case update time. Our main technique is a very general new framework for boosting the performance of an orientation-based algorithm, which we hope can have applications in related dynamic problems.
\subsection{Our results}
We obtain fully dynamic algorithms for maintaining spanners of graphs undergoing edge insertions and deletions. In particular, in the unweighted setting we can maintain: \begin{itemize}
\item a $3$-spanner of size $ O (n^{1+1/2} \log^{1/2}{n} \log{\log{n}}) $ with worst-case update time $ O (n^{3/4} \log^{4}{n}) $, or
\item a $5$-spanner of size $ O (n^{1+1/3} \log^{2/3}{n} \log{\log{n}}) $ with worst-case update time $ O (n^{5/9} \log^{4}{n}) $, or
\item a $5$-spanner of size $ O (n^{1+1/2} \log^{1/2}{n} \log{\log{n}}) $ with worst-case update time $ O (n^{1/2} \log^{4}{n}) $. \end{itemize} Naturally, these results assume that the initial graph is empty; otherwise, a lengthy initialization step is unavoidable.
Using standard techniques, these results can be extended into the setting of arbitrary positive edge weights, at the cost of an increase in the stretch by a factor of $1 + \epsilon$ and an increase in the size by a factor of $\log_{1 + \epsilon} W$ (for any $\epsilon > 0$, where $W$ is the ratio between the largest and smallest edge weights).
Our algorithms are randomized and correct with high probability against an \emph{oblivious adversary}~\cite{Ben-DavidBKTW94} who chooses its sequence of updates independently from the random choices made by the algorithm.\footnote{In particular, this means that the adversary is \emph{not} allowed to see the current edges of the spanner.} This adversarial model is the same one used in the previous randomized algorithms with amortized update time~\cite{BaswanaKS12}. Since the girth conjecture has been proven unconditionally for $ k=2 $ and $ k=3 $ \cite{Wenger91}, the first two spanners have optimal size/stretch tradeoff (up to the $\log$ factor). The third result accepts a non-optimal size/stretch tradeoff in exchange for improved update time.
\subsection{Technical Contributions}
Our main new idea is a general technique for boosting the performance of orientation-based algorithms.
Our algorithm contains three new high-level ideas. First, let $\vec{G}$ be an arbitrary orientation of the input graph $ G $; i.e. replace every undirected edge $ \{ u, v \} $ by a directed edge, either $ (u, v) $ or $ (v, u) $. We give an algorithm ALG for maintaining either a $3$-spanner or a $5$-spanner of $ G $ with update time proportional to the maximum \emph{out-degree} of the oriented graph $ \vec{G} $. This algorithm is based on the clustering approach used in \cite{BaswanaS07}. For maintaining $3$- and $5$-spanners we only need to consider clusters of diameter at most $ 2 $ consisting of the set of neighbors of certain cluster centers.
This alone is of course not enough, as generally the maximum out-degree of $ \vec{G} $ can be as large as $ n-1 $. To solve this problem, we combine ALG with the following simple out-degree reduction technique. Partition the outgoing edges of every node into at most $ t \leq \lceil n / s \rceil $ groups of size at most $ s $ each. For any $ 1 \leq i \leq t $, we combine the edges of the $i$-th groups into a subgraph $ G_i $ and run an instance of ALG on it to maintain a $3$-spanner; since the maximum out-degree in $ \vec{G}_i $ is at most $ s $, the update time is $ O (s) $. By the decomposability of spanners, the union of all these sub-spanners $ H_1 \cup \dots \cup H_t $ is a $3$-spanner of $ G $. In this way we can obtain an algorithm for maintaining a $3$-spanner of size $ |H_1| + \dots + |H_t| = O (n^{5/2} / s) $ with worst-case update time $ O (s) $ for any $ 1 \leq s \leq n $. We remark that the general technique of partitioning a graph into subgraphs of low out-degree has been used before, e.g. \cite{BE13}; however, our recursive conversion of these subgraphs into spanners is original and an important technical contribution of this paper.
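The out-degree reduction step, taken in isolation, can be sketched as follows (a static version with our own naming). Each part has maximum out-degree at most $s$, and by spanner decomposability the union of per-part spanners is a spanner of the whole graph:

```python
def split_by_outdegree(out_edges, s):
    """Partition the directed edges (u, v) so that in each part every
    node has out-degree at most s (the out-degree reduction step).
    out_edges maps each node u to the list of its out-neighbors."""
    parts = []
    for u, outs in out_edges.items():
        for i in range(0, len(outs), s):
            # the i-th group of u's edges goes into subgraph number i // s
            j = i // s
            while len(parts) <= j:
                parts.append([])
            parts[j].extend((u, v) for v in outs[i:i + s])
    return parts
```

Running one spanner instance per part and taking the union then yields the size/update-time tradeoff described above.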
The partitioning is still not enough, as the optimal size of a $3$-spanner is $O(n^{3/2})$, which would then require $s = \Omega(n)$ worst-case update time. However, we can improve upon this tradeoff once more with a more fine-grained application of ALG. In particular, on each subgraph $ \vec{G}_i $, ALG maintains two subgraphs $ A_i^1 $ and $ \vec{B}_i^1 $, such that: \begin{itemize} \item $ A_i^1 $ is a `partial' $3$-spanner of $ G_i $ of size $ \tilde O (n^{1 + 1/2} \cdot s / n) $, and \item The maximum out-degree in $ \vec{B}_i^1 $ is considerably smaller than the maximum out-degree in $ \vec{G}_i $. \end{itemize} We then recursively apply ALG on $ \vec{B}_1^1 \cup \dots \cup \vec{B}_t^1 $ to some depth $ \ell $ at which the out-degree can no longer be reduced by a meaningful amount. Our final spanner is then the union of all the sets $A_i^j$, for $1 \le i \le t$ and $1 \le j \le \ell$, as well as the ``remainder'' graphs $\vec{B}_1^\ell \cup \dots \cup \vec{B}_t^\ell$, which have low out-degree and are thus sparse.
In principle, the recursive application of ALG could be problematic, as one update in~$ G $ could lead to several changes to the edges in the $ B_i^1 $ subgraphs, which then propagate as an increasing number of updates in the recursive calls of the algorithm. This places another constraint on ALG. We carefully design ALG in such a way that it performs only a constant number of changes to each $ B_i^1 $ with any update in $ G $, and we only recurse to depth $\ell = o(\log n)$ so that the total number of changes at each level is subpolynomial.
Overall, we remark that our framework for performing out-degree reduction is fairly generic, and seems likely applicable to other algorithms that admit the design of an ALG with suitable properties. The main technical challenges are designing ALG with these properties, and performing some fairly involved parameter balancing to optimize the running time used by the recursive calls. However, we do not know how to extend our approach to sparser spanners with larger stretches since corresponding constructions usually need clusters of larger diameter and maintaining such clusters with update time proportional to the maximum (out)-degree of the graph seems challenging.
\subsection{Other Related Work}
There has been some related work attacking the spanner problem in other models of computation. Some of the work on streaming spanner algorithms, in particular \cite{Baswana08,FeigenbaumKMSZ08}, was converted into purely \emph{incremental} dynamic algorithms, which maintain spanners under edge insertions but cannot handle deletions. This line of research culminated in an incremental algorithm with worst-case update time $ O (1) $ per edge insertion~\cite{Elkin11}. Elkin \cite{Elkin07} also gave a near-optimal algorithm for maintaining spanners in the distributed setting.
A concept closely related to spanners are \emph{emulators} \cite{DHZ96}, in which the graph $ H $ for approximately preserving the distances may contain arbitrary weighted edges and is not necessarily a subgraph of $ G $. Dynamic algorithms for maintaining emulators have been commonly used as subroutines to obtain faster dynamic algorithms for maintaining (approximate) shortest paths or distances. Some of the work on this problem includes \cite{RodittyZ12, BernsteinR11, HenzingerKNFOCS14, HenzingerKNSODA14, ACT14, AbrahamC13}.
As outlined above, one of the main technical contributions of this paper is a framework for exploiting orientations of undirected graphs. The idea of orienting undirected graphs has been key to many recent advances in dynamic graph algorithms. Examples include~\cite{NeimanS13,KopelowitzKPS14,PelegS16,AmirKLPPS15,AbrahamDKKP16}.
\section{Preliminaries}
We consider unweighted, undirected graphs $ G = (V, E) $ undergoing edge insertions and edge deletions. For all pairs of nodes $ u $ and $ v $ we denote by $ d_G (u, v) $ the distance between $ u $ and $ v $ in $ G $. An \emph{$\alpha$-spanner} of a graph $G = (V, E)$ is a subgraph $H = (V, E') \subseteq G$ such that $d_H(u, v) \le \alpha \cdot d_G(u, v)$ for all $u, v \in V$.\footnote{If $u$ and $v$ are disconnected in $G$, then $d_G(u, v) = \infty$ and so they may be disconnected in the spanner as well.} The parameter $\alpha$ is called the \emph{stretch} of the spanner. We will use the well-known fact that it suffices to only span distances over the edges of $G$. \begin{lemma} [Spanner Adjacency Lemma (Folklore)] \label{lem:span adjacency} If $H = (V, E')$ is a subgraph of $G = (V, E)$ that satisfies $d_H(u, v) \le \alpha \cdot d_G(u, v)$ for all $\{u, v\} \in E$, then $H$ is an $\alpha$-spanner of $G$. \end{lemma}
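The lemma suggests a simple stretch checker that only inspects the edges of $G$. The following sketch (our own illustration, unweighted BFS) verifies exactly the hypothesis of the lemma:

```python
from collections import deque

def bfs_dist(adj, s):
    """Single-source BFS distances in an unweighted undirected graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_spanner(nodes, g_edges, h_edges, alpha):
    """Check d_H(u, v) <= alpha for every edge {u, v} of G, which by
    the adjacency lemma implies that H is an alpha-spanner of G."""
    adj_h = {u: set() for u in nodes}
    for u, v in h_edges:
        adj_h[u].add(v)
        adj_h[v].add(u)
    return all(bfs_dist(adj_h, u).get(v, float('inf')) <= alpha
               for u, v in g_edges)
```

For example, removing one edge from a $4$-cycle leaves a subgraph that is a $3$-spanner but not a $2$-spanner.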
We will work with {\em orientations} of undirected graphs. We denote an undirected edge with endpoints $ u $ and $ v $ by $ \{u, v\} $ and a directed edge from $ u $ to $ v $ by $ (u, v) $. An \emph{orientation} $ \vec{G} = (V, \vec{E}) $ of an undirected graph $ G = (V, E) $ is a directed graph on the same set of nodes such that for every edge $ \{u, v\} $ of $ G $, $ \vec{G} $ either contains the edge $ (u, v) $ or the edge $ (v, u) $. Conversely, $ G $ is the \emph{undirected projection} of $ \vec{G} $. In an undirected graph $ G $, we denote by $ N (v) := \{ w \mid \{v, w\} \in G \} $ the set of neighbors of $ v $. In an oriented graph $ \vec{G} $, we denote by $ \mathit{Out} (v) := \{ w \mid (v, w) \in \vec{G} \} $ the set of outgoing neighbors of $ v $. Similarly, by $ \mathit{In} (v) := \{ u \mid (u, v) \in \vec{G} \} $ we denote the set of incoming neighbors of $ v $. We denote by $ \Delta^+ (\vec{G}) $ the maximum out-degree of $ \vec{G} $.
Our algorithms can easily be extended to graphs with edge weights, via the standard technique of weight binning: \begin{lemma}[Weight Binning, e.g.~\cite{BaswanaKS12}] Suppose there is an algorithm that dynamically maintains a spanner of an arbitrary unweighted graph with some particular size, stretch, and update time. Then for any $\epsilon > 0$, there is an algorithm that dynamically maintains a spanner of an arbitrary graph with positive edge weights, at the cost of an increase in the stretch by a factor of $ 1 + \epsilon $ and an increase in the size by a factor of $ O(\log_{1 + \epsilon} W)$ (and no change in the update time). Here, $W$ is the ratio between the largest and smallest edge weight in the graph. \end{lemma} Since this extension is already well known, we will not discuss it further. Instead, we will simplify the rest of the paper by focusing only on the unweighted setting; that is, all further graphs in this paper are unweighted and undirected.
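The binning underlying this lemma can be sketched as follows (a static sketch with our own naming; the dynamic algorithm simply runs one unweighted spanner instance per class):

```python
import math

def weight_classes(weighted_edges, eps):
    """Group edges into O(log_{1+eps} W) classes of near-uniform weight.
    Within a class, weights differ by a factor < (1+eps), so an
    alpha-spanner of the unweighted class graph is an alpha*(1+eps)-spanner
    of its weighted edges."""
    w_min = min(w for _, _, w in weighted_edges)
    classes = {}
    for u, v, w in weighted_edges:
        i = math.floor(math.log(w / w_min, 1 + eps))
        classes.setdefault(i, []).append((u, v, w))
    return classes
```

By spanner decomposability, the union of the per-class spanners is a spanner of the whole weighted graph.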
In our algorithms, we will use the well-known fact that good hitting sets can be obtained by random sampling. This technique was first used in the context of shortest paths by Ullman and Yannakakis~\cite{UllmanY91}. A general lemma on the size of the hitting set can be formulated as follows.
\begin{lemma}[Hitting Sets] \label{lem:random hitting set} Let $ a \geq 1 $, let $ V $ be a set of size $ n $ and let $ U_1, U_2, \ldots, U_r $ be subsets of $ V $ of size at least $ q $. Let~$ S $ be a subset of $ V $ obtained by choosing each element of $ V $ independently at random with probability $ p = \min (x / q, 1) $ where $ x = a \ln{(r n)} + 1 $. Then, with high probability (whp), i.e. probability at least $ 1 - 1/n^a $, both of the following properties hold: \begin{enumerate} \item For every $ 1 \leq i \leq r $, the set $ S $ contains a node in $ U_i $, i.e. $ U_i \cap S \neq \emptyset $.
\item $ |S| \leq 3 x n / q = O (a n \ln{(r n)} / q) $. \end{enumerate} \end{lemma}
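The sampling in the lemma is straightforward to simulate (a sketch with our own names; the seed is fixed only to make the illustration reproducible):

```python
import math
import random

def random_hitting_set(V, q, r, a=1.0, seed=0):
    """Sample each element of V independently with p = min(x/q, 1),
    where x = a*ln(r*n) + 1, exactly as in the hitting-set lemma."""
    n = len(V)
    x = a * math.log(r * n) + 1
    p = min(x / q, 1.0)
    rng = random.Random(seed)
    return {v for v in V if rng.random() < p}
```

With $n = 1000$, $q = 100$, and $r = 10$, the expected size of $S$ is roughly $xn/q \approx 100$, in line with the lemma's size bound.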
A well-known property of spanners is \emph{decomposability}. We will exploit this property to run our dynamic algorithm on carefully chosen subgraphs. \begin{lemma}[Spanner Decomposability, \cite{BaswanaKS12}]\label{lem:decomposability} Let $ G = (V, E) $ be an undirected (possibly weighted) graph, let $ E_1, \dots, E_t $ be a partition of the set of edges $ E $, and let, for every $ 1 \leq i \leq t $, $ H_i $ be an $ \alpha $-spanner of $ G_i = (V, E_i) $ for some $ \alpha \geq 1 $. Then $ H = \bigcup_{i=1}^t H_i $ is an $ \alpha $-spanner of $ G $. \end{lemma}
In our algorithms we use a reduction for getting a fully dynamic spanner algorithm for an arbitrarily long sequence of updates from a fully dynamic spanner algorithm that only works for a polynomially bounded number of updates. This is particularly useful for randomized algorithms whose high-probability guarantees are obtained by taking a union bound over a polynomially bounded number of events. \begin{lemma}[Update Extension, Implicit in~\cite{AbrahamDKKP16}]\label{lem:extending spanner to long sequence} Assume there is a fully dynamic algorithm for maintaining an $ \alpha $-spanner (for some $ \alpha \geq 1 $) of size at most $ S (m, n, W) $ with worst-case update time $ T (m, n, W) $ for up to $ 4 n^2 $ updates in $ G $. Then there also is a fully dynamic algorithm for maintaining an $ \alpha $-spanner of size at most $ O (S (m, n, W)) $ with worst-case update time $ O (T (m, n, W)) $ for an arbitrary number of updates. \end{lemma}
For completeness, we give the proof of this lemma in an appendix. We remark that it is identical to the one given in~\cite{AbrahamDKKP16}.
\section{Algorithms for Partial Spanner Computation}
Our goal in this section is to describe fully dynamic algorithms for \emph{partial} spanner computation. We prove lemmas that can informally be summarized as follows: given a graph $G$ with an orientation $\vec{G}$, one can build a very sparse spanner that only covers the edges leaving nodes with large out-degree in $\vec{G}$. There is a smooth tradeoff between the sparsity of the spanner and the out-degree threshold beyond which edges are spanned.
As a crucial subroutine, our algorithms employ a fully dynamic algorithm for maintaining certain structural information related to a \emph{clustering} of $G$. We will describe this subroutine first.
\subsection{Maintaining a clustering structure}\label{sec:maintaining clustering}
In the spanner literature, a \emph{clustering} of a graph $G = (V, E)$ is a partition of the nodes $V$ into \emph{clusters} $C_1, \dots, C_k$, as well as a ``leftover'' set of \emph{free} nodes $F$, with the following properties: \begin{itemize} \item For each cluster $C_i$, there exists a ``center'' node $x_i \in V$ such that all nodes in $C_i$ are adjacent to $x_i$. \item The free nodes $F$ are precisely the nodes that are not adjacent to any cluster center. \end{itemize}
In this paper, we will represent clusterings with a vector $c$ indexed by $V$, such that for any clustered $v \in V$ we have $c[v]$ equal to its cluster center, and for any free $v \in V$ we use the convention $c[v] = \infty$.
We will use the following subroutine in our main algorithms: \begin{lemma}\label{lem:maintaining clusters worst case} Given an oriented graph $ \vec{G} = (V, \vec{E}) $ and a set of cluster centers $ S = \{ s_1, \ldots, s_k \} $, there is a fully dynamic algorithm that simultaneously maintains: \begin{enumerate}
\item A clustering $c$ of $ G = (V, E) $ with centers $S$
\item For each node $ v $ and each cluster index $ i \in \{ 1, \ldots, k \} $, the set
\begin{equation*}
\mathit{In} (v, i) := \{ u \in \mathit{In} (v) \mid c [u] = i \}
\end{equation*}
(i.e. the incoming neighbors to $v$ from cluster $i$)
\item For every pair of cluster indices $ i, j \in \{ 1, \ldots, k \} $, the set
\begin{equation*}
\mathit{In} (i, j) := \{ (u, v) \in \vec{E} \mid c [u] = j, c [v] = i \}
\end{equation*}
(i.e. the incoming neighbors to cluster $i$ from cluster $j$). \end{enumerate} This algorithm has worst-case update time $ O (\Delta^+ (\vec{G}) \log{n}) $, where $ \Delta^+ (\vec{G}) $ is the maximum out-degree of $ \vec{G} $. \end{lemma}
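For concreteness, the maintained quantities can be computed statically from their definitions (our own code; the paper maintains them dynamically with the stated worst-case update time). Here cluster indices are the positions of the centers, centers cluster with themselves, and ties are broken by the smallest index:

```python
INF = float('inf')

def clustering_structures(nodes, directed_edges, centers):
    """Compute c, In(v, i), and In(i, j) from their definitions."""
    center_index = {s: i for i, s in enumerate(centers, start=1)}
    neighbors = {u: set() for u in nodes}
    for u, v in directed_edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    c = {}
    for v in nodes:
        if v in center_index:            # a center clusters with itself
            c[v] = center_index[v]
        else:
            adj = [center_index[s] for s in centers if s in neighbors[v]]
            c[v] = min(adj) if adj else INF   # INF marks a free node
    ids = list(center_index.values())
    in_v = {(v, i): set() for v in nodes for i in ids}
    in_c = {(i, j): set() for i in ids for j in ids}
    for u, v in directed_edges:
        if c[u] != INF:
            in_v[(v, c[u])].add(u)       # u is an incoming neighbor of v from cluster c[u]
            if c[v] != INF:
                in_c[(c[v], c[u])].add((u, v))
    return c, in_v, in_c
```

The dynamic algorithm of the lemma keeps these sets consistent under edge updates by relaying changes along outgoing edges.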
The second $\mathit{In}(v, i)$ sets will be useful for the $3$-spanner, while the third $\mathit{In}(i, j)$ sets will be useful for the $5$-spanner.
The implementation of this lemma is extremely straightforward; it is not hard to show that the necessary data structures can be maintained in the naive way by simply passing a message along the outgoing edges from $u$ and $v$ whenever an edge $(u, v)$ is inserted or deleted. Due to space constraints, we defer full implementation details and pseudocode to Appendix~\ref{apx:updating clustering algorithm}.
\subsection{Maintaining a partial $3$-spanner}
We next show how to convert Lemma~\ref{lem:maintaining clusters worst case} into a fully dynamic algorithm for maintaining a partial $3$-spanner of a graph, as described in the introduction. Specifically:
\begin{lemma}\label{lem:3 spanner worst-case fine-grained} For every integer $ 1 \leq d \leq n $, there is a fully dynamic algorithm that takes an oriented graph $\vec{G} = (V, \vec{E})$ as input and maintains subgraphs $A = (V, E_A), \vec{B} = (V, \vec{E}_B)$ (i.e. $\vec{B}$ is oriented but $A$ is not) over a sequence of $4n^2$ updates with the following properties: \begin{itemize} \item $ d_A (u, v) \leq 3 $ for every edge $ \{u, v\} $ in $E \setminus E_B$
\item $ A $ has size $ | A | = O (n^2 (\log{n}) / d + n) $ \item The maximum out-degree of $ \vec{B} $ is $ \Delta^+ (\vec{B}) \leq d $. \item With every update in $ G $, at most $ 4 $ edges are changed in $ \vec{B} $. \end{itemize} Further, this algorithm has worst-case update time $ O (\Delta^+ (\vec{G}) \log{n}) $. The algorithm is randomized, and all of the above properties hold with high probability against an oblivious adversary. \end{lemma}
Informally, this lemma states the following. Edges leaving nodes with high out-degree are easy for us to span; we maintain $A$ as a sparse spanner of these edges. Edges leaving nodes with low out-degree are harder for us to span, and we maintain $\vec{B}$ as a collection of these edges.
Note that this lemma is considerably \emph{stronger} than the existence of a $3$-spanner. In particular, by setting $ d = \sqrt{n \log{n}} $ and then using $A \cup \vec{B}$ as a spanner of $G$, we obtain a fully dynamic algorithm for maintaining a $3$-spanner: \begin{corollary}\label{cor:3 spanner worst-case} There is a fully dynamic algorithm for maintaining a $3$-spanner of size $ O (n^{1 + 1/2} \sqrt{\log{n}}) $ for an oriented graph $ \vec{G} $ with worst-case update time $ O (\Delta^+ (\vec{G}) \log{n}) $. The stretch and the size guarantee both hold with high probability against an oblivious adversary. \end{corollary} The proof is essentially immediate from Lemma~\ref{lem:3 spanner worst-case fine-grained}, so we omit it. The restriction to $4n^2$ updates is not needed in this corollary, thanks to Lemma~\ref{lem:extending spanner to long sequence}.
Looking forward, we will wait until Section~\ref{lem:bootstrapping worst case} to show precisely how the extra generality in Lemma~\ref{lem:3 spanner worst-case fine-grained} is useful towards strong worst-case update time. The rest of this subsection is devoted to the proof of Lemma~\ref{lem:3 spanner worst-case fine-grained}.
\subsubsection{Algorithm}
It will be useful in this algorithm to fix an arbitrary ordering of the nodes in the graph. This allows us to discuss the ``smallest'' or ``largest'' node in a list, etc.
We initialize the algorithm by determining a set of cluster centers $ S $ via random sampling. Specifically, every node of $ G $ is added to $ S $ independently with probability $ p = \min (x / d, 1) $ where $ x = a \ln{(4 n^5)} + 1 $ for some error parameter $ a \geq 1 $. We then use the algorithm of Lemma~\ref{lem:maintaining clusters worst case} above to maintain a clustering with $ S = \{s_1, \dots, s_k \} $ as the set of cluster centers. The subgraphs $ A $ and $ \vec{B} $ are defined according to the following three rules:
\begin{enumerate}
\item For every clustered node $ v $ (i.e. $ c [v] \neq \infty $), $ A $ contains the edge $ \{ v, c[v] \} $ from $v$ to its cluster center in $S$.\label{edges to cluster centers}
\item For every clustered node $ v $ (i.e. $ c [v] \neq \infty $) and every cluster index $ 1 \leq i \leq k $, $ A $ contains the edge $ \{ u, v \} $ to the first node $ u \in \mathit{In} (v, i) $ (unless $ \mathit{In} (v, i) = \emptyset $).\label{edges to clusters}
\item For every node $ u $ and every node $ v $ among the \emph{first} $ d $ neighbors of $ u $ in $ N (u) $ (with respect to an arbitrary fixed ordering of the nodes), $\vec{B}$ contains the edge $ (u, v) $. Alternatively, if $ | N(u) | \leq d $, then $\vec{B}$ contains all such edges $ (u, v) $.\label{edges to first outgoing neighbors} \end{enumerate}
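For concreteness, the random selection of the cluster centers $ S $ can be sketched as follows (an illustrative Python sketch; the function name and interface are ours, not part of the formal algorithm):

```python
import math
import random

def sample_centers(nodes, d, a=1.0, rng=random):
    """Sample each node independently with probability p = min(x/d, 1),
    where x = a * ln(4 n^5) + 1, as in the text above."""
    n = len(nodes)
    x = a * math.log(4 * n ** 5) + 1
    p = min(x / d, 1.0)
    # expected size of the sample is n * p = O(n (log n) / d) for d > x
    return [v for v in nodes if rng.random() < p]
```

For $ d \leq x $ the sampling probability is capped at $1$, in which case every node becomes a cluster center.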
We maintain the subgraph $ \vec{B} $ in the following straightforward way. For every node $ u $ we store $ N (u) $, the set of neighbors of $ u $, in two self-balancing binary search trees: $ N_{\leq d} (u) $ for the first $ d $ neighbors and $ N_{> d} (u) $ for the remaining neighbors. Every time an edge $ (u, v) $ or an edge $ (v, u) $ is inserted into $ \vec{G} $, we add $ v $ to $ N_{\leq d} (u) $ and we add $ (u, v) $ to $ \vec{B} $. If $ N_{\leq d} (u) $ now contains more than $ d $ nodes, we remove the largest element $ v' $, add it to $ N_{> d} (u) $, and remove $ (u, v') $ from $ \vec{B} $.\footnote{Note that the node $ v' $ that is removed from $ N_{\leq d} (u) $ might be the node $ v $ we have added in the first place.} Similarly, every time an edge $ (u, v) $ or an edge $ (v, u) $ is deleted from $ \vec{G} $, we first check if $ v $ is contained in $ N_{> d} (u) $ and if so remove it from $ N_{> d} (u) $. Otherwise, we remove $ v $ from $ N_{\leq d} (u) $ and $ (u, v) $ from $ \vec{B} $. Then we find the smallest node $ v' $ in $ N_{> d} (u) $, remove $v'$ from $ N_{> d} (u) $, add $ v' $ to $ N_{\leq d} (u) $, and add $ (u, v') $ to $ \vec{B} $.
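The maintenance of the two search trees for a single node $u$ can be sketched as follows. This is an illustrative Python sketch in which sorted lists (via the \texttt{bisect} module) stand in for the self-balancing binary search trees of the text; a real implementation would use balanced trees to meet the $O(\log n)$ bound, since sorted-list insertion is linear time.

```python
from bisect import insort, bisect_left

class FirstDNeighbors:
    """For one node u, maintain the split of N(u) into the first d
    neighbors (whose edges go into B) and the remaining neighbors."""

    def __init__(self, d):
        self.d = d
        self.lo = []  # N_{<=d}(u): first <= d neighbors, sorted
        self.hi = []  # N_{>d}(u): remaining neighbors, sorted

    def insert(self, v):
        insort(self.lo, v)
        if len(self.lo) > self.d:
            w = self.lo.pop()      # largest of the first d+1 neighbors
            insort(self.hi, w)     # demote it; edge (u, w) leaves B
        # invariant: edges (u, x) for x in self.lo are exactly B's edges out of u

    def delete(self, v):
        i = bisect_left(self.hi, v)
        if i < len(self.hi) and self.hi[i] == v:
            self.hi.pop(i)         # v was not contributing an edge to B
            return
        self.lo.remove(v)          # edge (u, v) leaves B
        if self.hi:
            w = self.hi.pop(0)     # smallest demoted neighbor
            insort(self.lo, w)     # promote it; edge (u, w) enters B
```

Each update touches at most two entries of \texttt{lo} and \texttt{hi}, matching the claim that at most a constant number of edges of $\vec{B}$ change per node per update.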
We now explain how to maintain the subgraph $ A $. As an underlying subroutine, we use the algorithm of Lemma~\ref{lem:maintaining clusters worst case} to maintain a clustering w.r.t. centers $ S $. On each edge insertion/deletion, we first update the clustering, and then perform the following steps: \begin{enumerate}
\item For every node $v$ for which $ c [v] $ has just changed from some center $ s_i $ to some other center $ s_j $, we remove the edge $ \{v, s_i\} $ from $ A $ (if $ i \neq \infty $) and add the edge $ \{v, s_j\} $ to $ A $ (if $ j \neq \infty $).\label{node changes cluster}
\item For every node $ u $ that has been added to $ \mathit{In} (v, i) $ for some node $ v $ and some $ 1 \leq i \leq k $, we check if $ u $ is now the first node in $ \mathit{In} (v, i) $. If so, we add the edge $ \{u, v\} $ to $ A $ and remove the edge $ \{u', v\} $ for the previous first node $ u' $ of $ \mathit{In} (v, i) $ (if $ \mathit{In} (v, i) $ was previously non-empty).\label{node added to list}
\item For every node $ u $ that is removed from $ \mathit{In} (v, i) $ for some node $ v $ and some $ 1 \leq i \leq k $, we check if $ u $ was the first node in $ \mathit{In} (v, i) $ and if so remove the edge $ \{u, v\} $ from $ A $ and add the edge $ \{u', v\} $ for the new first node $ u' $ of $ \mathit{In} (v, i) $ (if $ \mathit{In} (v, i) $ is still non-empty).\label{node removed from list} \end{enumerate}
\subsubsection{Analysis}
To bound the update time required by this algorithm, we will argue that we spend $ O (\Delta^+ (\vec{G}) \log{n}) $ time per update maintaining $A$, and $O(\log n)$ time per update maintaining $\vec{B}$ (which, in our applications, is always dominated by $ O (\Delta^+ (\vec{G}) \log{n}) $). By Lemma~\ref{lem:maintaining clusters worst case}, the clustering structure can be updated in time $ O (\Delta^+ (\vec{G}) \log{n}) $. Each operation in steps~\ref{node changes cluster},~\ref{node added to list}, and~\ref{node removed from list} above can be charged to the corresponding changes in $ s_i $ and $ \mathit{In} (v, i) $ and thus can also be carried out within the same $ O (\Delta^+ (\vec{G}) \log{n}) $ time bound. Updating the subgraph $ \vec{B} $ takes time $ O (\log{n}) $, since we must perform a constant number of queries and updates in the corresponding self-balancing binary search trees.
We now show that the subgraphs $ A $ and $ \vec{B} $ have all of the properties claimed in Lemma~\ref{lem:3 spanner worst-case fine-grained}. First, we will discuss the sparsity bounds on $A$ and $\vec{B}$. Observe that rule~\ref{edges to cluster centers} contributes at most $n$ edges to $A$, since every node is contained in at most one cluster. Next, recall that the number of cluster centers $ S $ is $|S| = k = O (n (\log{n}) / d) $ (by Lemma~\ref{lem:random hitting set}, with high probability). Thus, $ A $ contains only $ O(n k) = O (n^2 (\log{n}) / d) $ edges due to rule~\ref{edges to clusters}. As the only edges of $ \vec{B} $ come from rule~\ref{edges to first outgoing neighbors}, the maximum out-degree in $ \vec{B} $ is $ d $. The claimed sparsity bounds therefore hold. Furthermore, with every insertion or deletion of an edge $ \{u, v\} $ in $ G $, at most one edge is added to or removed from the first $ d $ neighbors of $ u $ and $ v $, respectively. This implies that there are at most $4$ changes to $ \vec{B} $ with every update in $ G $. It now only remains to show that $ A $ is a $3$-spanner of $ G \setminus B $.
\begin{lemma} For up to $ 4 n^3 $ updates, $ d_A (u, v) \leq 3 $ for every edge $ \{u, v\} $ in $E \setminus E_B$ with high probability. \end{lemma}
\begin{proof}
Let $ \{ u, v \} $ be an edge of $E \setminus E_B$. Assume without loss of generality that the edge is oriented from $ u $ to $ v $ in $ \vec{G} $. As $ \{ u, v \} $ is not contained in $ B $, by rule~\ref{edges to first outgoing neighbors} above we have $ | N (u) | > d $. Thus, by Lemma~\ref{lem:random hitting set}, since the cluster centers $S$ were chosen by random sampling, with high probability there exists a cluster center in the first $ d $ outgoing neighbors of each node in all of up to $ 4 n^3 $ different versions of $G$ (i.e. one version for each of the $4n^3$ updates considered). Therefore $ c[u] = i $ for some $ 1 \leq i \leq k $ and, by rule~\ref{edges to cluster centers}, $ A $ contains the edge $ \{ u, s_i \} $. Since $ c[u] = i $, and $ u $ is an incoming neighbor of $ v $ in $ \vec{G} $, we have $ \mathit{In} (v, i) \neq \emptyset $, and thus, for the first element $ u' $ of $ \mathit{In} (v, i) $, $ A $ contains the edge $ \{ u', v \} $ (by rule~\ref{edges to clusters}). As $ c [u'] = i $, $ A $ contains the edge $ \{ s_i, u' \} $ by rule~\ref{edges to cluster centers}. This means that $ A $ contains the edges $ \{ u, s_i \} $, $ \{ s_i, u' \} $, and $ \{ u', v \} $, and thus there is a path from $ u $ to $ v $ of length~$ 3 $ in $ A $ as desired. \end{proof} This now also completes the proof of Lemma~\ref{lem:3 spanner worst-case fine-grained}.
\subsection{$5$-spanner}
The $5$-spanner algorithm is very similar to the $3$-spanner algorithm above, but we define the edges of the spanner in a slightly different way. Instead of including an edge from each node to each cluster, we have an edge between each \emph{pair} of clusters. Thus, the subgraphs $ A $ and $ \vec{B} $ are defined according to the following three rules: \begin{enumerate}
\item For every clustered node $ v $ (i.e. $ c [v] \neq \infty $), $ A $ contains the edge $ \{ v, c[v] \} $ from $v$ to its cluster center in $S$.\label{edges to cluster centers 5-spanner}
\item For every pair of distinct cluster indices $ 1 \leq i, j \leq k $, $ A $ contains the edge $ \{ u, v \} $, where $ \{ u, v \} $ is the first element in $ \mathit{In} (i, j) $ (unless $ \mathit{In} (i, j) = \emptyset $).\label{edges between clusters 5-spanner}
\item For every node $ u $ and every node $ v $ among the \emph{first} $ d $ neighbors of $ u $ in $ N (u) $ (with respect to an arbitrary fixed ordering of the nodes), $\vec{B}$ contains the edge $ (u, v) $. Alternatively, if $ | N(u) | \leq d $, then $\vec{B}$ contains all such edges $ (u, v) $.\label{edges to first outgoing neighbors 5-spanner} \end{enumerate}
Beyond this slightly altered definition, we use the same approach for maintaining $ A $ and $ \vec{B} $ as in the $3$-spanner. The guarantee on the stretch can be proved as follows. \begin{lemma} For up to $ 4 n^3 $ updates, $ d_A (u, v) \leq 5 $ for every edge $ \{u, v\} $ in $E \setminus E_B$ with high probability. \end{lemma}
\begin{proof}
Let $ \{ u, v \} $ be an edge of $E \setminus E_B$. Assume without loss of generality that the edge is oriented from $ u $ to $ v $ in $ \vec{G} $. As $ \{ u, v \} $ is not contained in $ B $, by rule~\ref{edges to first outgoing neighbors 5-spanner} above we have $ | N (u) | > d $ and $ | N (v) | > d $: otherwise the edge would have been stored in $ \vec{B} $ by one of its endpoints. We now apply Lemma~\ref{lem:random hitting set} to argue that there is a cluster center in the first $ d $ neighbors of each node in up to $ 4 n^3 $ versions of the graph (one version for each update to be considered). Thus, $ N (v) $ contains a cluster center from $ S $ with high probability. Therefore $ c[v] = i $ for some $ 1 \leq i \leq k $ and, by rule~\ref{edges to cluster centers 5-spanner}, $ A $ contains the edge $ \{ v, s_i \} $. By the same argument, $ N (u) $ contains a cluster center from $ S $ with high probability and thus $ A $ contains an edge $ \{ u, s_j \} $ where $ c[u] = j $ for some $ 1 \leq j \leq k $. Since $ c[v] = i $, $ c[u] = j $, and $ u $ is an incoming neighbor of $ v $ in $ \vec{G} $, we have $ \mathit{In} (i, j) \neq \emptyset $, and thus, for the first element $ (u', v') $ of $ \mathit{In} (i, j) $, $ A $ contains the edge $ \{ u', v' \} $ (by rule~\ref{edges between clusters 5-spanner}). As $ c[v'] = i $ and $ c[u'] = j $, $ A $ contains the edges $ \{ v', s_i \} $ and $ \{ u', s_j \} $ by rule~\ref{edges to cluster centers 5-spanner}. This means that $ A $ contains the edges $ \{ u, s_j \} $, $ \{ s_j, u' \} $, $ \{ u', v' \} $, $ \{ v', s_i \} $ and $ \{ s_i, v \} $, and thus there is a path from $ u $ to $ v $ of length~$ 5 $ in $ A $ as desired. \end{proof} Note that in this proof we exploit the fact that we have cluster centers for both $ u $ and $ v $ whenever the edge $ \{u, v\} $ is missing. This motivates our design choice of considering the whole neighborhood of a node to determine its cluster.
If we only considered cluster centers in the outgoing neighbors of a node, the resulting clustering would still be good enough for the $3$-spanner, but the argument above for the $5$-spanner would break down.
All other properties of the $5$-spanner can be proved in an essentially identical manner to the $3$-spanner. We can summarize the obtained guarantees as follows. \begin{lemma}\label{lem:5 spanner worst-case fine-grained} For every integer $ 1 \leq d \leq n $, there is a fully dynamic algorithm that takes an oriented graph $\vec{G} = (V, \vec{E})$ on input and maintains subgraphs $A = (V, E_A), \vec{B} = (V, \vec{E}_B)$ (i.e. $\vec{B}$ is oriented but $A$ is not) over a sequence of $4n^2$ updates with the following properties: \begin{itemize} \item $ d_A (u, v) \leq 5 $ for every edge $ \{u, v\} $ in $E \setminus E_B$
\item $ A $ has size $ | A | = O ((n^2 \log^2{n}) / d^2 + n) $ \item The maximum out-degree of $ \vec{B} $ is $ \Delta^+ (\vec{B}) \leq d $. \item With every update in $ G $, at most $ 4 $ edges are changed in $ \vec{B} $. \end{itemize} Further, this algorithm has worst-case update time $ O (\Delta^+ (\vec{G}) \log{n}) $. The algorithm is randomized, and all of the above properties hold with high probability against an oblivious adversary. \end{lemma}
Once again, this lemma generalizes the construction of a sparse $5$-spanner. By setting $ d = (n \log^2{n})^{1/3} $ we can obtain: \begin{corollary}\label{cor:5 spanner worst-case} There is a fully dynamic algorithm for maintaining a $5$-spanner of size $ O (n^{1 + 1/3} \log^{2/3}{n}) $ for an oriented graph $ \vec{G} $ with worst-case update time $ O (\Delta^+ (\vec{G}) \log{n}) $. The stretch and the size guarantee both hold with high probability against an oblivious adversary. \end{corollary}
\section{Out-degree Reduction for Improved Update Time}\label{lem:bootstrapping worst case}
Our goal is now to use Lemmas \ref{lem:3 spanner worst-case fine-grained} and \ref{lem:5 spanner worst-case fine-grained} to obtain spanner algorithms with sublinear update time. Since we obtain our $3$-spanner and $5$-spanner in an essentially identical manner, we will explain only the $3$-spanner in full detail, and then sketch the $5$-spanner construction.
We next establish the following simple generalization of Lemma \ref{lem:3 spanner worst-case fine-grained}:
\begin{lemma}\label{lem:degree reduction 3-spanner} For every integer $ 1 \leq s \leq n $ and $ 1 \leq d \leq n $, there is a fully dynamic algorithm that takes an oriented graph $\vec{G} = (V, \vec{E})$ on input and maintains subgraphs $A = (V, E_A), \vec{B} = (V, \vec{E}_B)$ (i.e. $\vec{B}$ is oriented but $A$ is not) over a sequence of $4n^2$ updates with the following properties: \begin{itemize} \item $ d_A (u, v) \leq 3 $ for every edge $ \{u, v\} $ in $E \setminus E_B$
\item $ A $ has size $ | A | = O (\Delta^+ (\vec{G}) n^2 (\log n) / (s d)) $ \item The maximum out-degree of $ \vec{B} $ is $ \Delta^+ (\vec{B}) \leq \Delta^+ (\vec{G}) \cdot d / s $. \item With every update in $ G $, at most $ 4 $ edges are changed in $ \vec{B} $. \end{itemize} Further, this algorithm has worst-case update time $ O (s \log{n}) $. The algorithm is randomized, and all of the above properties hold with high probability against an oblivious adversary. \end{lemma} In particular, Lemma~\ref{lem:3 spanner worst-case fine-grained} is the special case of this lemma in which $s = \Delta^+ (\vec{G})$.
\begin{proof} We consider each edge of $ G $ with the orientation it has in $ \vec{G} $. We then maintain a partitioning of the (oriented) edges of $ G $ into $ t := \lceil \Delta^+ (\vec{G}) / s \rceil $ groups, such that in each group each node has at most $ s $ outgoing edges. Specifically, we perform this partitioning by maintaining the current out-degree of each node $u$ in $\vec{G}$, and we assign a new edge $(u, v)$ which is the $x$-th edge leaving $u$ in $\vec{G}$ to the subgraph $\vec{G}_{\lceil x/s \rceil}$. In this way, we form $ t $ subgraphs $ \vec{G}_1, \ldots, \vec{G}_t $ of $ \vec{G} $, each of which satisfies $\Delta^+(\vec{G}_i) \le s$.
We now run the algorithm of Lemma~\ref{lem:3 spanner worst-case fine-grained} on each $ \vec{G}_i $ to maintain, for each $ 1 \leq i \leq t $, two subgraphs $ A_i $ and $ \vec{B}_i $ as specified in the lemma. Let $ A = \bigcup A_i $ and $ \vec{B} = \bigcup \vec{B_i} $ denote the unions of these subgraphs.
Observe that every update in $ G $ only changes exactly one of the subgraphs $ \vec{G}_i $ and thus only must be executed in one corresponding instance of the algorithm of Lemma~\ref{lem:3 spanner worst-case fine-grained}. As we have ``artificially'' bounded the maximum out-degree of every subgraph $ \vec{G}_i $ by $ s $, the claimed bounds on the update time and the properties of $A$ and $\vec{B}$ now follow simply from Lemma~\ref{lem:3 spanner worst-case fine-grained}. \end{proof}
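The partitioning step of the proof can be sketched as follows (an insertion-only Python sketch; the function name is ours, and handling deletions would additionally require tracking which slot each edge occupies):

```python
from collections import defaultdict
from math import ceil

def partition_by_outdegree(edges, s):
    """Assign the x-th edge leaving u to group ceil(x/s), so that every
    node has out-degree at most s within each group."""
    outdeg = defaultdict(int)    # current out-degree of each node
    groups = defaultdict(list)   # group index -> list of oriented edges
    for (u, v) in edges:
        outdeg[u] += 1
        g = ceil(outdeg[u] / s)  # the group receiving this edge
        groups[g].append((u, v))
    return dict(groups)
```

Each inserted edge lands in exactly one group, which is why an update to $\vec{G}$ needs to be propagated to only one instance of the algorithm of Lemma~\ref{lem:3 spanner worst-case fine-grained}.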
We now recursively apply the ``out-degree reduction'' of the previous lemma to obtain subgraphs $ \vec{B} $ of smaller and smaller out-degree. Finally, at bottom level, the maximum out-degree is small enough that we can apply a ``regular'' spanner algorithm to it.
\begin{theorem} \label{thm:3 span} There is a fully dynamic algorithm for maintaining a $3$-spanner of size $ O(n^{1+1/2} \log^{1/2}{n} \log{\log{n}}) $ with worst-case update time $ O (n^{3/4} \log^{4}{n}) $. \end{theorem}
\begin{proof} Our spanner construction is as follows (we temporarily omit details related to parameter choices, which influence the resulting update time). Apply Lemma \ref{lem:degree reduction 3-spanner} to obtain subgraphs $A_1, \vec{B}_1$. Include all edges of $A_1$ in the spanner, and then recursively apply Lemma \ref{lem:degree reduction 3-spanner} to $\vec{B}_1$ to obtain $A_2, \vec{B}_2$. Repeat to depth $ \ell $ (for some parameter $\ell$ that will be chosen later). At the bottom level, instead of recursing, we apply the algorithm from Corollary~\ref{cor:3 spanner worst-case} to obtain a $3$-spanner of $\vec{B}_\ell$.
More formally, we set $ \vec{B}_0 = \vec{G} $, and for every $ 1 \leq j \leq \ell $ we let $ A_j $ and $ \vec{B}_j $ be the graphs maintained by the algorithm of Lemma~\ref{lem:degree reduction 3-spanner} on input $ \vec{B}_{j-1} $ using parameters $ s $ and $ d_j $ to be chosen later.\footnote{Note that the parameter $ s $ is the same for all levels of the recursion, whereas the parameter $ d_j $ is not.} Further, we let $ H' $ be the spanner maintained by the algorithm of Corollary~\ref{cor:3 spanner worst-case} on input $ \vec{B}_\ell $. The resulting graph maintained by our algorithm is $ H = \bigcup_{1 \leq j \leq \ell} A_j \cup H' $. Then, by Lemma \ref{lem:degree reduction 3-spanner}, we have the following properties for every $ 1 \leq j \leq \ell $: \begin{itemize} \item $ d_{A_j} (u, v) \leq 3 $ for every edge $ \{u, v\} $ in $ B_{j-1} \setminus B_j $
\item $ A_j $ has size $ | A_j | = O (\Delta^+ (\vec{B}_{j-1}) n^2 (\log n) / (s d_j)) $ \item The maximum out-degree of $ \vec{B_j} $ is $ \Delta^+ (\vec{B}_j) \leq \Delta^+ (\vec{B}_{j-1}) \cdot d_j / s $. \item With every update in $ \vec{B}_{j-1} $, at most $ 4 $ edges are changed in $ \vec{B_j} $. \end{itemize}
It is straightforward to see that the resulting graph $ H $ is a $3$-spanner of $G$: At each level $ j $ of the recursion, $ A_j $ spans all edges of $B_{j-1}$ \emph{except} those that appear in the current subgraph $\vec{B}_j$. Thus, at bottom level, the only non-spanned edges of $G$ are those in the final subgraph $\vec{B}_\ell$. For these edges we explicitly add a $3$-spanner $H'$ of $\vec{B}_\ell$ to $ H $. By Lemma \ref{lem:span adjacency}, this suffices to produce a $3$-spanner of all of $G$.
Now that we have correctness of the construction, it remains to bound the number of edges in the output spanner. First, observe that, by induction, \begin{align*} \Delta^+ (\vec{B}_{j}) \leq n \cdot \prod_{1 \leq j' \leq j} d_{j'} / s^j \end{align*} for all $ 1 \leq j \leq \ell $. Since additionally $H'$ has size $O(n^{1 + 1/2} \log^{1/2} n)$ by Corollary~\ref{cor:3 spanner worst-case}, the total number of edges in $ H $ is \begin{align*}
|H| = \sum_{1 \leq j \leq \ell} | A_j | + | H' | &\leq \sum_{1 \leq j \leq \ell} O \left( \frac{\Delta^+ (B_{j-1}) n^2 \log n}{s d_j} \right) + O(n^{1 + 1/2} \log^{1/2} n) \\
&\leq \sum_{1 \leq j \leq \ell} O \left( \frac{ \left( \prod \limits_{1 \leq j' \leq j-1} d_{j'} \right) n^3 \log n}{s^j d_j} \right) + O(n^{1 + 1/2} \log^{1/2} n) \, . \end{align*} Thus, our spanner satisfies the claimed sparsity bound so long as the union of all $\ell$ of the $A_j$ subgraphs fit within the claimed sparsity bound; this will be the case if we balance all summands.
We next bound the update time of our algorithm. Each change to some $\vec{B}_j$ causes at most $4$ changes in the next level $\vec{B}_{j+1}$, and thus the number of changes can propagate exponentially through the levels. Thus, for every $ 0 \leq j \leq \ell-1 $, a single update in $ \vec{G} $ could cause at most $ 4^j $ changes to $\vec{B}_j$. Each of the $ \ell $ instances of the algorithm of Lemma~\ref{lem:degree reduction 3-spanner} has a worst-case update time of $ O (s \log n) $ and the algorithm of Corollary~\ref{cor:3 spanner worst-case} has a worst-case update time of $ O (\Delta^+ (\vec{B}_\ell) \log n) $. Since \begin{align*} \Delta^+ (\vec{B}_\ell) \leq n \cdot \prod_{1 \leq j \leq \ell} d_j / s^\ell \end{align*} the worst-case update time of our overall algorithm is \begin{equation*} O \left( \left(\sum_{j=0}^{\ell-1} 4^j s + 4^\ell \Delta^+ (\vec{B}_\ell) \right) \cdot \log n \right) \le O \left( \left( s + \frac{n \cdot \prod \limits_{1 \leq j \leq \ell} d_j}{s^\ell} \right) \cdot 4^\ell \log n \right) \, . \end{equation*}
Our goal is now to choose the parameters $s, d_j, \ell$ to minimize this expression subject to the constraint on spanner size given above. To achieve this, we set parameters as follows: \begin{align*} \ell &= \log{\log{n}} \, ,\\ s &= n^{(3 \cdot 2^\ell - 1) / (2^{\ell+2} - 2)} \log{n} \, , \text{ and} \\ d_j &= n^{(3 \cdot 2^\ell - 2^{j-1} - 1) / (2^{\ell+2} - 2)} \log{n} \, . \end{align*} These values were obtained with the help of a computer algebra system; we omit the explicit calculations. \end{proof}
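As a sanity check on these exponents (reading the balancing condition as equating the $n$-exponent of $s$ with that of $n \cdot \prod_j d_j / s^\ell$ in the update-time bound), the following sketch verifies the identity with exact rational arithmetic; the function names are ours.

```python
from fractions import Fraction

def exponents(ell):
    """Exponents of n in the choices of s and d_1, ..., d_ell above."""
    D = Fraction(2 ** (ell + 2) - 2)
    e_s = Fraction(3 * 2 ** ell - 1) / D
    e_d = [Fraction(3 * 2 ** ell - 2 ** (j - 1) - 1) / D
           for j in range(1, ell + 1)]
    return e_s, e_d

def balanced(ell):
    """Check that s and n * (prod_j d_j) / s^ell have the same n-exponent."""
    e_s, e_d = exponents(ell)
    return e_s == 1 + sum(e_d) - ell * e_s
```

For growing $\ell$ the exponent of $s$ tends to $3/4$, which is consistent with the claimed worst-case update time of roughly $n^{3/4}$.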
We now turn our attention to the $5$-spanner. Using Lemma~\ref{lem:5 spanner worst-case fine-grained}, we can perform an out-degree reduction step for our dynamic $5$-spanner algorithm, analogous to Lemma \ref{lem:degree reduction 3-spanner} above.
\begin{lemma}\label{lem:degree reduction 5-spanner} For every integer $ 1 \leq s \leq n $ and $ 1 \leq d \leq n $, there is a fully dynamic algorithm that takes an oriented graph $\vec{G} = (V, \vec{E})$ on input and maintains subgraphs $A = (V, E_A), \vec{B} = (V, \vec{E}_B)$ (i.e. $\vec{B}$ is oriented but $A$ is not) over a sequence of $4n^2$ updates with the following properties: \begin{itemize} \item $ d_A (u, v) \leq 5 $ for every edge $ \{u, v\} $ in $E \setminus E_B$
\item $ A $ has size $ | A | = O (\Delta^+ (\vec{G}) n^2 (\log^2 n) / (s d^2)) $ \item The maximum out-degree of $ \vec{B} $ is $ \Delta^+ (\vec{B}) \leq \Delta^+ (\vec{G}) \cdot d / s $. \item With every update in $ G $, at most $ 4 $ edges are changed in $ \vec{B} $. \end{itemize} Further, this algorithm has worst-case update time $ O (s \log{n}) $. The algorithm is randomized, and all of the above properties hold with high probability against an oblivious adversary. \end{lemma} The proof of this lemma is essentially identical to the proof of Lemma \ref{lem:degree reduction 3-spanner} and has thus been omitted.
Just as in the case of the 3-spanner, we use this lemma to show: \begin{theorem} \label{thm:5 span} There is a fully dynamic algorithm for maintaining a $5$-spanner of size $ O (n^{1+1/3} \log^{2/3}{n} \log{\log{n}}) $ with worst-case update time $ O (n^{5/9} \log^{4}{n}) $. \end{theorem} \begin{proof} The proof is identical to the proof of Theorem \ref{thm:3 span}, except that the proper parameter balance is now: \begin{align*} \ell &= \log{\log{n}} \, ,\\ s &= n^{(5 \cdot 3^\ell - 2^{\ell+1}) / (3^{\ell+2} - 3 \cdot 2^{\ell+1})} \log{n} \, , \text{ and} \\ d_j &= n^{(5 \cdot 3^\ell - 3^{j-1} 2^{\ell-j+2} - 2^{\ell+1}) / (3^{\ell+2} - 3 \cdot 2^{\ell+1})} \log{n} \, . \end{align*} \end{proof}
Finally, we can also show: \begin{theorem} There is a fully dynamic algorithm for maintaining a $5$-spanner of size $ O (n^{1+1/2} \log^{1/2}{n} \log{\log{n}}) $ with worst-case update time $ O (n^{1/2} \log^{4}{n}) $. \end{theorem}
\begin{proof} The proof is identical to the proof of Theorems~\ref{thm:3 span} and~\ref{thm:5 span}, except that we now use the parameter balance \begin{align*} \ell &= \log{\log{n}} \, ,\\ s &= n^{(3^{\ell+1} - 2^\ell) / (2 \cdot 3^{\ell+1} - 2^{\ell+2})} \log{n} \, , \text{ and} \\ d_j &= n^{(3^{\ell+1} - 3^j \cdot 2^{\ell-j} - 2^\ell) / (2 \cdot 3^{\ell+1} - 2^{\ell+2})} \log{n} \, . \end{align*} and we maintain the dynamic $3$-spanner $ H' $ of size $ O (n^{1 + 1/2} \log^{1/2}{n}) $ from Corollary~\ref{cor:3 spanner worst-case} at the bottom level. \end{proof} This spanner has a non-optimal size/stretch tradeoff, but enjoys the best worst-case update time that we are currently able to achieve.
\section{Updating the Clustering}\label{apx:updating clustering algorithm}
In the following we give the straightforward algorithm for maintaining the clustering with worst-case update time proportional to the maximum out-degree of the original graph mentioned in Section~\ref{sec:maintaining clustering}. To make the presentation of this algorithm more succinct we assume that there is some (arbitrary, but fixed) ordering on the nodes. Furthermore, we assume that the nodes of $ S $ are given according to this order, i.e. $ s_1 \leq s_2 \leq \dots \leq s_k $. For every node $ v $, we maintain $ c [v] $ as the smallest $ i $ such that $ s_i $ is a neighbor of $ v $ (or $ \infty $ if no such neighbor exists). Additionally, we naturally extend the sets $ \mathit{In} (v, i) $ and $ \mathit{In} (i, j) $ to the case $ i, j \in \{1, \dots, k, \infty\} $.
We begin with an empty graph $G = (V, \emptyset)$, a cluster vector $c$ with $c[v] = \infty$ for all $v \in V$, and empty sets $\mathit{In}(v, i)$ and $\mathit{In}(i, j)$ for all cluster indices $1 \le i, j \le k$ and all nodes $v \in V$. We then modify these data structures under edge insertions and deletions as follows.
Correctness of the algorithms that follow is immediate, and is not shown formally.
\subsection{Insertion of an edge $(u, v)$}
\begin{itemize}
\item Add $ u $ to $ \mathit{In} (v, i) $ for $ i = c [u] $.
\item Add $ (u, v) $ to $ \mathit{In} (j, i) $ for $ i = c [u] $ and $ j = c [v] $
\item If $ u = s_i $ for some $ 1 \leq i \leq k $:
\begin{itemize}
\item Set $ j = c[v] $ (might be $ \infty $)
\item Add $ u $ to $ C [v] $.
\item If $ i < j $:
\begin{itemize}
\item Set $ c[v] = i $
\item For every outgoing neighbor $ v' $ of $ v $:
Remove $ v $ from $ \mathit{In} (v', j) $ and add $ v $ to $ \mathit{In} (v', i) $.
Remove $ (v, v') $ from $ \mathit{In} (i', j) $ and add $ (v, v') $ to $ \mathit{In} (i', i) $ where $ i' = c[v'] $.
\end{itemize}
\end{itemize}
\item If $ v = s_i $ for some $ 1 \leq i \leq k $:
\begin{itemize}
\item Set $ j = c[u] $ (might be $ \infty $)
\item Add $ v $ to $ C [u] $.
\item If $ i < j $:
\begin{itemize}
\item Set $ c[u] = i $
\item For every outgoing neighbor $ v' $ of $ u $:
Remove $ u $ from $ \mathit{In} (v', j) $ and add $ u $ to $ \mathit{In} (v', i) $.
Remove $ (u, v') $ from $ \mathit{In} (i', j) $ and add $ (u, v') $ to $ \mathit{In} (i', i) $ where $ i' = c[v'] $.
\end{itemize}
\end{itemize}
\end{itemize}
\subsection{Deletion of an edge $(u, v)$}
\begin{itemize}
\item Remove $ u $ from $ \mathit{In} (v, i) $ for $ i = c [u] $.
\item Remove $ (u, v) $ from $ \mathit{In} (j, i) $ for $ i = c [u] $ and $ j = c [v] $
\item If $ u = s_i $ for some $ 1 \leq i \leq k $:
\begin{itemize}
\item Remove $ u $ from $ C [v] $.
\item If $ c[v] = i $:
\begin{itemize}
\item Let $ j $ be minimal such that $ s_j $ is in $ C [v] $ (might be $ \infty $)
\item Set $ c[v] = j $
\item For every outgoing neighbor $ v' $ of $ v $:
Remove $ v $ from $ \mathit{In} (v', i) $ and add $ v $ to $ \mathit{In} (v', j) $.
Remove $ (v, v') $ from $ \mathit{In} (i', i) $ and add $ (v, v') $ to $ \mathit{In} (i', j) $ where $ i' = c[v'] $
\end{itemize}
\end{itemize}
\item If $ v = s_i $ for some $ 1 \leq i \leq k $:
\begin{itemize}
\item Remove $ v $ from $ C [u] $.
\item If $ c[u] = i $:
\begin{itemize}
\item Let $ j $ be minimal such that $ s_j $ is in $ C [u] $ (might be $ \infty $)
\item Set $ c[u] = j $
\item For every outgoing neighbor $ v' $ of $ u $:
Remove $ u $ from $ \mathit{In} (v', i) $ and add $ u $ to $ \mathit{In} (v', j) $.
Remove $ (u, v') $ from $ \mathit{In} (i', i) $ and add $ (u, v') $ to $ \mathit{In} (i', j) $ where $ i' = c[v'] $
\end{itemize}
\end{itemize}
\end{itemize}
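The insertion and deletion procedures above can be condensed into the following Python sketch. This is an illustrative simplification, not the full data structure: the pairwise lists $\mathit{In}(i, j)$ are omitted, all names are ours, and we additionally assume that a center clusters itself, which the pseudocode above does not make explicit.

```python
import math

INF = math.inf  # marks "unclustered"; every center index 1..k compares smaller

class Clustering:
    def __init__(self, n, centers):
        self.idx = {s: i for i, s in enumerate(centers, start=1)}
        self.c = {v: INF for v in range(n)}      # c[v]: cluster index of v
        self.C = {v: set() for v in range(n)}    # center indices adjacent to v
        self.out = {v: set() for v in range(n)}  # outgoing neighbors of v
        self.In = {}                             # (v, i) -> in-neighbors of v in cluster i
        for s, i in self.idx.items():            # assumption: a center clusters itself
            self.c[s] = i

    def _in(self, v, i):
        return self.In.setdefault((v, i), set())

    def _recluster(self, v, j_new):
        # move v between the In-lists of all of its outgoing neighbors
        j_old = self.c[v]
        self.c[v] = j_new
        for w in self.out[v]:
            self._in(w, j_old).discard(v)
            self._in(w, j_new).add(v)

    def _touch(self, center, other, inserting):
        i = self.idx.get(center)
        if i is None:                            # this endpoint is not a center
            return
        if inserting:
            self.C[other].add(i)
            if i < self.c[other]:                # found a smaller center index
                self._recluster(other, i)
        else:
            self.C[other].discard(i)
            if self.c[other] == i:               # lost the current center
                self._recluster(other, min(self.C[other], default=INF))

    def insert(self, u, v):
        self.out[u].add(v)
        self._in(v, self.c[u]).add(u)
        self._touch(u, v, True)
        self._touch(v, u, True)

    def delete(self, u, v):
        self.out[u].discard(v)
        self._in(v, self.c[u]).discard(u)
        self._touch(u, v, False)
        self._touch(v, u, False)
```

As in the analysis above, the dominant cost of an update is the loop over the outgoing neighbors of a reclustered node, i.e. $O(\Delta^+(\vec{G}))$ set operations per update.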
\section{Proof of Lemma \ref{lem:extending spanner to long sequence}}
We exploit the decomposability of spanners. We maintain a partition of $ G $ into two disjoint subgraphs $ G_1 $ and $ G_2 $ and run two instances $ A_1 $ and $ A_2 $ of the dynamic algorithm on $ G_1 $ and $ G_2 $, respectively. These two algorithms maintain a $ t $-spanner $ H_1 $ of $ G_1 $ and a $ t $-spanner $ H_2 $ of $ G_2 $. By Lemma \ref{lem:decomposability}, the union $ H = H_1 \cup H_2 $ is a $ t $-spanner of $ G = G_1 \cup G_2 $.
We divide the sequence of updates into phases of length $ n^2 $ each. In each phase of updates one of the two instances $ A_1 $, $ A_2 $ is in the state \emph{growing} and the other one is in the state \emph{shrinking}. $ A_1 $ and $ A_2 $ switch their states at the end of each phase. In the following we describe the algorithm's actions during one phase. Assume without loss of generality that, in the phase we are fixing, $ A_1 $ is growing and $ A_2 $ is shrinking.
At the beginning of the phase we restart the growing instance $ A_1 $. We will orchestrate the algorithm in such a way that at the beginning of the phase $ G_1 $ is the empty graph and $ G_2 = G $. After every update in $ G $ we execute the following steps: \begin{enumerate}
\item If the update was the insertion of some edge $ e $, then $ e $ is added to the graph $ G_1 $ and this insertion is propagated to the \emph{growing} instance $ A_1 $.
\item If the update was the deletion of some edge $ e $, then $ e $ is removed from the graph $ G_i $ it is contained in and this deletion is propagated to the corresponding instance $ A_i $.
\item In addition to processing the update in $ G $, if $ G_2 $ is non-empty, then one arbitrary edge~$ e $ is first removed from $ G_2 $ and deleted from instance $ A_2 $ and then added to $ G_1 $ and inserted into instance $ A_1 $. \end{enumerate} Observe that these rules indeed guarantee that $ G_1 $ and $ G_2 $ are disjoint and together contain all edges of $ G $. Furthermore, since the graph $ G_2 $ of the shrinking instance has at most $ n^2 $ edges at the beginning of the phase, the length of $ n^2 $ updates per phase guarantees that $ G_2 $ is empty at the end of the phase. Thus, the growing instance always starts with an empty graph $ G_1 $.
As both $ H_1 $ and $ H_2 $ have size at most $ S (n, m, W) $, the size of $ H = H_1 \cup H_2 $ is $ O (S (n, m, W)) $. With every update in $ G $ we perform at most $ 2 $ updates in each of $ A_1 $ and $ A_2 $. It follows that the worst-case update time of our overall algorithm is $ O (T (n, m, W)) $. Furthermore, since each of the instances $ A_1 $ and $ A_2 $ is restarted every other phase, each instance of the dynamic algorithm sees at most $ 4 n^2 $ updates before it is restarted.
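The growing/shrinking bookkeeping of this proof can be sketched as follows (an illustrative Python sketch in which the two spanner instances $A_1, A_2$ are elided and only the edge partition is tracked; all names are ours):

```python
class TwoInstancePartition:
    """Maintain the partition of G into a growing part G1 and a
    shrinking part G2, draining one edge of G2 per update."""

    def __init__(self):
        self.G1, self.G2 = set(), set()

    def update(self, op, e):
        if op == "insert":
            self.G1.add(e)                       # insertions go to the growing side
        else:
            (self.G1 if e in self.G1 else self.G2).discard(e)
        if self.G2:                              # move one arbitrary edge per update
            f = next(iter(self.G2))
            self.G2.remove(f)
            self.G1.add(f)

    def end_phase(self):
        # after n^2 updates the shrinking side is empty;
        # the old growing side now starts shrinking, restarted side grows
        self.G1, self.G2 = set(), self.G1
```

The invariant is that $G_1$ and $G_2$ stay disjoint and their union is exactly the current edge set of $G$, which is what makes the decomposability lemma applicable.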
\end{document}
\begin{document}
\title{Involutions of a canonical curve.}
{\footnotesize{\bf Authors' address:} Departamento de Algebra, Universidad de Santiago de Compostela. $15706$ Santiago de Compostela. Galicia. Spain. e-mail: {\tt pedreira@zmat.usc.es}; \\ {\tt luisfg@usc.es}\\ {\bf Abstract:} We give a geometrical characterization of the ideal of quadrics containing a canonical curve with an involution. This requires studying involutions of rational normal scrolls and of Veronese surfaces. \\ {\bf Mathematics Subject Classifications (1991):} Primary, 14H37; secondary, 14H30, 14J26.\\ {\bf Key Words:} Canonical curve, involution, rational normal scrolls.}
{\bf Introduction:} Let $C$ be a nonhyperelliptic smooth curve of genus $\pi$. An involution of $C$ is an automorphism $\varphi:C{\longrightarrow} C$ such that $\varphi^2=id$. It induces a double cover $\gamma:C{\longrightarrow} C/\varphi=X$, where $X$ is a smooth curve of genus $g$. We say that $C$ has an involution of genus $g$. By the Hurwitz formula, we know that $\pi\geq 2g-1$. It is well known that a general smooth curve of genus $\pi\geq 3$ has no nontrivial automorphisms; in particular, a smooth curve with an involution is not generic.
In this paper we give a geometric characterization of the ideal of quadrics containing the canonical model of a nonhyperelliptic curve $C$ with an involution. Let $C_{{\cal K}}\subset {\bf P}^{\pi-1}$ be the canonical model of $C$. We see that an involution of $C_{{\cal K}}$ is a harmonic involution; that is, it can be extended to ${\bf P}^{\pi-1}$.
An involution $\overline{\varphi}$ of ${\bf P}^n$ has two complementary spaces of base points $S_1$ and $S_2$. Moreover, $\overline{\varphi}$ induces an involution $\overline{\varphi}^*$ in the space of quadrics of ${\bf P}^n$. This involution has two spaces of base points: the {\em base quadrics}, that is, quadrics containing the spaces $S_1$ and $S_2$, and the {\em harmonic quadrics}, that is, quadrics such that $S_1$ and $S_2$ are polar with respect to them. A subspace $\Sigma\subset {\bf P}(H^0({\cal O}_{P^n}(2)))$ is called a {\em base-harmonic system} with respect to $S_1,S_2$ when it is a fixed space of $\overline{\varphi}^*$. In this case $\Sigma_b$ and $\Sigma_h$ will denote the base quadrics and the harmonic quadrics of $\Sigma$ respectively.
We prove the following Theorem: \\ \\ {\bf Theorem \ref{fundamental}} {\em \ \ Let $C_{{\cal K}}\subset {\bf P}^{\pi-1}$ be the canonical curve of genus $\pi$, with $\pi>4$. If $C_{{\cal K}}$ has an involution of genus $g$ then $\pi\geq 2g-1$ and the quadrics of ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ form a base-harmonic system with respect to the base spaces ${\bf P}^{g-1}$, ${\bf P}^{\pi-g-1}$, containing $(g-1)(\pi-g-2)$ independent base quadrics. Conversely, these conditions are sufficient to guarantee the existence of an involution, except when: \begin{enumerate}
\item $\pi=6,g=2$ and $C_{{\cal K}}$ has a $g^2_5$; or
\item $\pi=2g$, $2g+1$, or $2g+2$ and $C_{{\cal K}}$ is trigonal.
\end{enumerate} }
\hspace{\fill}$\rule{2mm}{2mm}$
First we prove that the conditions are sufficient when the curve $C_{{\cal K}}$ is a complete intersection of quadrics. The Enriques-Babbage Theorem says that $C_{{\cal K}}$ is the complete intersection of quadrics except when it is trigonal or $C$ is a smooth plane quintic curve. In these cases the quadrics containing the canonical curve intersect in a rational normal scroll and in the Veronese surface respectively.
In order to examine the special cases we make a study of the harmonic involutions of the rational normal scrolls and the Veronese surface. We compute the number of base quadrics in each case. From this computation we obtain the Corollaries \ref{coro1} and \ref{coro2}: \\ \\ {\bf Corollary \ref{coro1}}{\em \ \ The only involutions of a trigonal canonical curve of genus $\pi$, $\pi>4$, are of genus $\frac{\pi}{2},\frac{\pi-1}{2}$ or $\frac{\pi-2}{2}$. \hspace{\fill}$\rule{2mm}{2mm}$} \\ \\ {\bf Corollary \ref{coro2}}{\em \ \ The only involutions of a smooth plane quintic curve are of genus $2$.\hspace{\fill}$\rule{2mm}{2mm}$ }
Furthermore, we make a particular geometrical study of the canonical curves of genus $4$ with an involution of genus $2$ and genus $1$.
Note that to compute the number of independent base quadrics containing the canonical curves with an involution we need the result about the projective normality of the canonical scrolls (see \cite{pedreira2},$\S5$). We will follow the notation of \cite{fuentes} and \cite{hartshorne} to work with scrolls and ruled surfaces.
We thank Lawrence Ein for his interest in this work during his visit to our Department in November 2001.
\section{Harmonic involutions.}\label{involutions}
\begin{defin}\label{definvolucion}
Let $X\subset {\bf P}^n$ be a projective variety. An isomorphism $\varphi:X{\longrightarrow} X$ is called an involution if $\varphi^2=Id$. Moreover, if $\varphi$ is the restriction of an involution $\overline{\varphi}:{\bf P}^n{\longrightarrow} {\bf P}^n$ then $\varphi$ is called a harmonic involution.
\end{defin}
\begin{prop}\label{caracterizaarmonicas}
Let $X\subset {\bf P}^n$ be a linearly normal projective variety. An involution $\varphi:X{\longrightarrow} X$ is harmonic if and only if $\varphi^*(X\cap H)\sim X\cap H$ for every hyperplane $H$.
\end{prop} {\bf Proof:} If $\varphi$ is harmonic we clearly have that $\varphi^*(X\cap H)\sim X\cap H$.
Conversely, if $\varphi^*(X\cap H)\sim X\cap H$ for every hyperplane $H$, then we have an involution $\varphi^*:H^0({\cal O}_X(1)){\longrightarrow} H^0({\cal O}_X(1))$ that makes the following diagram commutative:
$$ \setlength{\unitlength}{5mm} \begin{picture}(15,4)
\put(2.6,3){\makebox(0,0){${\bf P}(H^0({\cal O}_X(1))^{\vee})$}} \put(3.6,0){\makebox(0,0){$X$}} \put(12.5,3){\makebox(0,0){${\bf P}(H^0({\cal O}_X(1))^{\vee})$}} \put(10.5,0){\makebox(0,0){$X$}} \put(3.6,0.7){\vector(0,1){1.5}} \put(10.5,0.7){\vector(0,1){1.5}} \put(5.5,3){\vector(1,0){4}} \put(5.5,0){\vector(1,0){4}} \put(7.5,3.5){\makebox(0,0){${\bf P}({\varphi^*}^{\vee})$}} \put(7.5,0.6){\makebox(0,0){$\varphi$}}
\end{picture} $$ so $\varphi$ extends to ${\bf P}^n$. \hspace{\fill}$\rule{2mm}{2mm}$
{\bf Examples:}
\begin{enumerate}
\item {\em Any involution of a rational curve $D_n\subset {\bf P}^n$ is harmonic.}
It is sufficient to note that $\varphi^*(D_n\cap H)$ has degree $n$; because $D_n$ is a rational curve, it follows that $\varphi^*(D_n\cap H)\sim D_n\cap H$.
\item {\em Any involution of a canonical curve $C_{{\cal K}} \subset {\bf P}^{\pi-1}$ of genus $\pi$ is harmonic.}
The linear system $|\varphi^*(C_{{\cal K}}\cap H)|$ has degree $2\pi-2$ and (projective) dimension $\pi-1$, so it is the canonical linear system and $\varphi^*(C_{{\cal K}}\cap H)\sim C_{{\cal K}}\cap H$.
\item {\em Any involution of a normal rational scroll $R_{n-1}\subset {\bf P}^n$ with invariant $e>0$ is harmonic.}
Let $S_e={\bf P}({\cal O}_{P^1}\oplus {\cal O}_{P^1}(-e))$ be the ruled surface associated to $R_{n-1}$. We know that $H\cap R_{n-1}\sim X_0+\mbox{\euf b} f$. Since $\varphi$ is an isomorphism, $\varphi^*(X_0)^2=X_0^2$ and $\varphi^*(f)^2=f^2=0$. But, in $S_e$ with $e>0$, $X_0$ is the unique curve with negative self-intersection and $f$ is the unique curve with self-intersection $0$. From this $\varphi^*(X_0)\sim X_0$ and $\varphi^*(f)\sim f$. Then: $$ \varphi^*(H\cap R_{n-1})\sim \varphi^*(X_0+\mbox{\euf b} f)\sim \varphi^*(X_0)+\mbox{\euf b} \varphi^*(f)\sim X_0+\mbox{\euf b} f\sim H\cap R_{n-1} $$
\end{enumerate}
We recall some basic facts about involutions in a projective space.
\begin{prop}\label{espaciosfijos}
Any involution $\overline{\varphi}$ of ${\bf P}^n$ has two complementary spaces $S_1,S_2\subset {\bf P}^n$ of base points. In this way, the image of a point $P$ is the fourth harmonic point of $P$ with respect to $l\cap S_1$ and $l\cap S_2$, where $l$ is the unique line passing through $P$ verifying $l\cap S_1\neq \emptyset$ and $l\cap S_2\neq \emptyset$.
Conversely, any pair of complementary spaces $S_1,S_2$ of ${\bf P}^n$ defines an involution of ${\bf P}^n$.\hspace{\fill}$\rule{2mm}{2mm}$
\end{prop}
\begin{rem}\label{expresionmatricial} { If we take a basis of ${\bf P}^n$, $W=\{ P_1,\ldots,P_{k+1},P'_1,\ldots,P'_{k'+1} \}$ where $\langle P_1,\ldots,P_{k+1} \rangle=S_1$ and $\langle P'_1,\ldots,P'_{k'+1} \rangle=S_2$, then the involution $\varphi$ is given by the matrix: $$ M_{\varphi}=\left( \begin{array}{cc} {Id}&{0}\\ {0}&{-Id}\\ \end{array}\right) $$ }\hspace{\fill}$\rule{2mm}{2mm}$
\end{rem}
\begin{defin}\label{espaciosbase}
Under the above assumptions, we say that $\varphi$ is harmonic with respect to $S_1$ and $S_2$. Moreover, $S_1$ and $S_2$ are called the base spaces of $\varphi$.
\end{defin}
\begin{rem}\label{cuadricas}
{\em The linear isomorphism $\overline{\varphi}$ induces an isomorphism: $$ \overline{\varphi}^*:{\bf P}(H^0({\cal O}_{P^n}(2))){\longrightarrow} {\bf P}(H^0({\cal O}_{P^n}(2))) $$ Because $\overline{\varphi}$ is an involution, $\overline{\varphi}^*$ is an involution too. Therefore, it has two complementary spaces of base points. Taking coordinates with respect to the basis $W$, a quadric $Q\subset {\bf P}^n$ has a matrix: $$ M_{Q}=\left( \begin{array}{cc} {A}&{C}\\ {C^t}&{B}\\ \end{array}\right) $$
We see that $\overline{\varphi}^*(Q)=Q$ if and only if $M_{\overline{\varphi}^*(Q)}=M_{\varphi}M_QM_{\varphi}=\lambda M_Q$ for some $\lambda\neq 0$, if and only if $A=B=0$ or $C=0$. In the first case $Q$ is called a base quadric and in the second case $Q$ is called a harmonic quadric. The set of harmonic quadrics will be denoted by ${\bf P}(H^0({\cal O}_{P^n}(2))_h)$ and the set of base quadrics by ${\bf P}(H^0({\cal O}_{P^n}(2))_b)$. We have that:
\begin{enumerate}
\item $Q$ is a harmonic quadric if and only if $P'^tM_QP=0$ for each $P\in S_1,P'\in S_2$; that is, if $S_1$ and $S_2$ are polar with respect to $Q$.
\item $Q$ is a base quadric if and only if $S_1,S_2\subset Q$.
\end{enumerate} } \hspace{\fill}$\rule{2mm}{2mm}$
\end{rem}
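For the reader's convenience, the dichotomy of Remark \ref{cuadricas} follows from a routine block computation:
$$ M_{\varphi}M_QM_{\varphi}=\left( \begin{array}{cc} {A}&{-C}\\ {-C^t}&{B}\\ \end{array}\right) $$
so $M_{\varphi}M_QM_{\varphi}=\lambda M_Q$ forces $\lambda=\pm 1$: for $\lambda=-1$ we get $A=B=0$, that is, $S_1,S_2\subset Q$, and for $\lambda=1$ we get $C=0$, that is, $S_1$ and $S_2$ are polar with respect to $Q$.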
\begin{defin}\label{sistemaarmonico}
Let $\Sigma\subset {\bf P}(H^0({\cal O}_{P^n}(2)))$ be a projective subspace $\Sigma={\bf P}(V)$ and let $\overline{\varphi}:{\bf P}^n{\longrightarrow}{\bf P}^n$ be a harmonic involution with respect to two spaces $S_1,S_2$. $\Sigma$ is called a base-harmonic system with respect to $S_1,S_2$ when it is a fixed space of $\overline{\varphi}^*$.
\end{defin}
\begin{rem}\label{notasistemaarmonico}
{\em If we denote $\Sigma_h=\Sigma\cap {\bf P}(H^0({\cal O}_{P^n}(2))_h)$ and $\Sigma_b=\Sigma\cap {\bf P}(H^0({\cal O}_{P^n}(2))_b)$, we see that $\Sigma$ is a base-harmonic system if and only if $\Sigma=\Sigma_h+\Sigma_b$; that is, $V$ has a basis consisting of harmonic and base quadrics.}\hspace{\fill}$\rule{2mm}{2mm}$
\end{rem}
\begin{prop}\label{equivalencia}
Let $X\subset {\bf P}^n$ be a projective variety.
\begin{enumerate}
\item If $X$ has a harmonic involution $\varphi$ with respect to two spaces $S_1,S_2$ then the system ${\bf P}(H^0(I_X(2)))$ is a base-harmonic system with respect to $S_1,S_2$.
\item Suppose that $X$ is the complete intersection of quadrics. If ${\bf P}(H^0(I_X(2)))$ is a base-harmonic system with respect to $S_1,S_2$, then $X$ has a harmonic involution with respect to $S_1,S_2$.
\end{enumerate}
\end{prop} {\bf Proof:}
\begin{enumerate}
\item Let $Q\in {\bf P}(H^0(I_X(2)))$, that is, $X$ is contained in $Q$. It is sufficient to show that $\overline{\varphi}^*(Q)$ contains $X$. Since $\overline{\varphi}(X)=X$ and $X\subset Q$, $X=\overline{\varphi}(X)\subset \overline{\varphi}(Q)$ and the conclusion follows.
\item Let $\overline{\varphi}:{\bf P}^n{\longrightarrow} {\bf P}^n$ be the harmonic involution defined by the spaces $S_1,S_2$. If $X$ is the complete intersection of quadrics, then $X=Q_1\cap\ldots\cap Q_k$ where $\{ Q_1,\ldots ,Q_k\}$ is a basis of ${\bf P}(H^0(I_X(2)))$. If ${\bf P}(H^0(I_X(2)))$ is a base-harmonic system with respect to $S_1,S_2$ we can choose a basis of fixed quadrics, so $$ \overline{\varphi}(X)=\overline{\varphi}(Q_1\cap\ldots\cap Q_k)=\overline{\varphi}(Q_1)\cap\ldots\cap\overline{\varphi}(Q_k)= Q_1\cap\ldots\cap Q_k=X $$ and we can restrict $\overline{\varphi}$ to $X$. \hspace{\fill}$\rule{2mm}{2mm}$
\end{enumerate}
\begin{prop}\label{cuadricasbase}
Let $X\subset {\bf P}^n$ be a projective variety and let $\varphi:X{\longrightarrow} X$ be a harmonic involution of $X$ with respect to the spaces $S_1,S_2$.
Let $F=\overline{\{P\in {\bf P}^n/P\in \langle x,\varphi(x) \rangle, x\in X\}}$ be the variety of lines joining points of $X$ related by the involution. Then: $$ \begin{array}{rl} {h^0(I_{X,P^n}(2))_b}&{=h^0(I_{F,P^n}(2))_b=}\\ {}&{=h^0(I_{F,P^n}(2))-h^0(I_{F\cap S_1,S_1}(2))-h^0(I_{F\cap S_2,S_2}(2))}\\ \end{array} $$
\end{prop} {\bf Proof:} Let us first prove that $H^0(I_{X,P^n}(2))_b=H^0(I_{F,P^n}(2))_b$.
Since $X\subset F$, a quadric containing $F$ contains $X$ too, so $H^0(I_{F,P^n}(2))_b\subset H^0(I_{X,P^n}(2))_b$.
Conversely, if $Q\in H^0(I_{X,P^n}(2))_b$, then $X,S_1,S_2\subset Q$. Therefore, each line $l$ of $F$ meets $Q$ in four points: $(x,\varphi(x),l\cap S_1,l\cap S_2)$, and then it is contained in $Q$. Thus, $F\subset Q$ and $H^0(I_{X,P^n}(2))_b\subset H^0(I_{F,P^n}(2))_b$.
Now, let us consider the following exact sequence: $$ 0{\longrightarrow} H^0(I_{F\cup S_1,P^n}(2)){\longrightarrow} H^0(I_{F,P^n}(2))\stackrel{{\alpha}}{{\longrightarrow}} H^0(I_{F\cap S_1,S_1}(2)) $$ Let us see that ${\alpha}$ is a surjective map. Let $Q_1\subset S_1$ be a quadric which contains $F\cap S_1$. Taking the cone of $Q_1$ over $S_2$, we obtain a quadric of ${\bf P}^n$ that contains the lines joining $Q_1$ and $S_2$ so it contains $F$. From this, we deduce that: $$ h^0(I_{F\cup S_1,P^n}(2))=h^0(I_{F,P^n}(2))-h^0(I_{F\cap S_1,S_1}(2)) $$ Similarly, we have $$ 0{\longrightarrow} H^0(I_{F\cup S_1\cup S_2,P^n}(2)){\longrightarrow} H^0(I_{F\cup S_1,P^n}(2))\stackrel{\beta}{{\longrightarrow}} H^0(I_{F\cap S_2,S_2}(2)) $$ where $H^0(I_{F\cup S_1\cup S_2,P^n}(2))=H^0(I_{F,P^n}(2))_b$ and $\beta$ is a surjective map. Therefore: $$ \begin{array}{rl} {h^0(I_{F,P^n}(2))_b}&{=h^0(I_{F\cup S_1,P^n}(2))-h^0(I_{F\cap S_2,S_2}(2))=}\\ {}&{=h^0(I_{F,P^n}(2))-h^0(I_{F\cap S_1,S_1}(2))-h^0(I_{F\cap S_2,S_2}(2))}\\ \end{array} $$\hspace{\fill}$\rule{2mm}{2mm}$
{\em We finish this section by computing the dimension of the space of base quadrics for an involution of a rational normal curve.}
Let $D_n\subset {\bf P}^n$ be a rational normal curve of degree $n$ and let $\varphi:D_n{\longrightarrow} D_n$ be an involution of degree $2$. The lines joining points related by the involution generate a rational normal ruled surface $R_{n-1}\subset {\bf P}^n$. This ruled surface has two directrix curves $C_1,C_2$ lying in the base spaces $S_1,S_2$ of the involution.
We know (\cite{hartshorne}, IV, $2.17$ and $2.19$) that $R_{n-1}\cong {\bf P}({\cal O}_{P^1}\oplus {\cal O}_{P^1}(-e))$ for some $e\geq 0$. In this way the divisor of hyperplane sections of $R_{n-1}$ is $H\sim X_0+\frac{n-1+e}{2}f$. Moreover the curve $D_n$ corresponds to a $2$-secant curve on the surface, so $D_n\sim 2X_0+kf$. Since $deg(D_n)=D_n.H=n$, we obtain $k=e+1$. Because $D_n$ is irreducible, $D_n.X_0\geq 0$ and we obtain $e\leq 1$. We deduce that $e=0$ if $n$ is odd and $e=1$ if $n$ is even.
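In detail, using the intersection numbers $X_0^2=-e$, $X_0.f=1$ and $f^2=0$ on $S_e$, these two assertions follow from the routine computations
$$ n=D_n.H=(2X_0+kf).\left(X_0+\frac{n-1+e}{2}f\right)=-2e+(n-1+e)+k=n-1-e+k, $$
which gives $k=e+1$, and
$$ 0\leq D_n.X_0=(2X_0+(e+1)f).X_0=-2e+(e+1)=1-e, $$
which gives $e\leq 1$.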
In this way we see that if $n$ is odd, the directrix curves $C_1,C_2$ have degree $(n-1)/2$ and if $n$ is even, the directrix curves have degree $(n-2)/2$ and $n/2$.
By applying Proposition \ref{cuadricasbase} we have that: $$ h^0(I_{D_n}(2))_b=h^0(I_{R_{n-1}}(2))-h^0(I_{C_1}(2))-h^0(I_{C_2}(2)) $$ The dimensions of these spaces are well known. Thus, we obtain: $$ h^0(I_{D_n}(2))_b=\left\{ \begin{array}{l} {(\frac{n-1}{2})^2\mbox{ if $n$ is odd}}\\ {}\\ {\frac{n(n-2)}{4}\mbox{ if $n$ is even}}\\ \end{array}\right. $$
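For example, these values can be recovered from the classical counts $h^0(I_{R_{n-1}}(2))={n-1 \choose 2}$ for the scroll and $h^0(I_{D_m,P^m}(2))={m \choose 2}$ for a rational normal curve of degree $m$. For odd $n=2\mu+1$:
$$ h^0(I_{D_n}(2))_b={2\mu \choose 2}-2{\mu \choose 2}=\mu(2\mu-1)-\mu(\mu-1)=\mu^2=\left(\frac{n-1}{2}\right)^2 $$
and for even $n=2\mu$:
$$ h^0(I_{D_n}(2))_b={2\mu-1 \choose 2}-{\mu-1 \choose 2}-{\mu \choose 2}=(2\mu-1)(\mu-1)-(\mu-1)^2=\mu(\mu-1)=\frac{n(n-2)}{4}. $$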
Note that when $n$ is odd, $D_n\sim 2X_0+f$, so $D_n.X_0=1$ with $C_1,C_2\sim X_0$. Thus, the base points of the involution of $D_n$ are $D_n\cap C_1$ and $D_n\cap C_2$. When $n$ is even, $D_n\sim 2X_0+2f$, so $D_n.X_0=0$ and $D_n.(X_0+f)=2$ where $C_1\sim X_0$ and $C_2\sim X_0+f$. In this case the base points of the involution of $D_n$ are the two points of $D_n\cap C_2$. \hspace{\fill}$\rule{2mm}{2mm}$
\section{Involutions of the canonical curve.}\label{involutionscanonica}
Let $C_{{\cal K}}$ be the canonical curve of genus $\pi$ and let $\varphi:C_{{\cal K}}{\longrightarrow} C_{{\cal K}}$ be an involution. We saw that it is a harmonic involution. We will use the results obtained in \cite{pedreira2}. The scroll generated by the involution is a canonical scroll $R_{\mbox{\euf b}}$. We call the genus of the ruled surface $R_{\mbox{\euf b}}$ the genus of the involution $\varphi$. Thus we have a $2:1$-morphism $\gamma:C_{{\cal K}}{\longrightarrow} X$. $R_{\mbox{\euf b}}$ has a canonical directrix curve $\overline{X_0}$ of genus $g$, and a nonspecial curve $\overline{X_1}$ of degree $\pi-1$. They lie on disjoint spaces ${\bf P}^{g-1}$ and ${\bf P}^{\pi-g-1}$. The involution of $C_{{\cal K}}$ has $2(\pi-1-2(g-1))$ base points, which are the ramification points of $\gamma$. We denote them by ${\cal B}$ and we know that ${\cal B}\sim C_{{\cal K}}\cap \overline{X_1}$.
Let us compute the number of base quadrics containing $C_{{\cal K}}$, that is, the dimension of $H^0(I_{C_{{\cal K}}}(2))_b$. By Proposition \ref{cuadricasbase} we know that:
$$ h^0(I_{C_{{\cal K}}}(2))_b=h^0(I_{R_{\mbox{\euf b}}}(2))-h^0(I_{\overline{X_0},P^{g-1}}(2))-h^0(I_{\overline{X_1},P^{\pi-g-1}}(2)) $$
We can compute this dimension because we know that (see \cite{pedreira2}): $$ \begin{array}{l} {h^0(I_{R_{\mbox{\euf b}}}(2))=h^0({\cal O}_{P^{\pi-1}}(2))-h^0({\cal O}_{S_{\mbox{\euf b}}}(2H))+dim(s(H, H))}\\ {h^0(I_{\overline{X_0}}(2))=h^0({\cal O}_{P^{g-1}}(2))-h^0({\cal O}_X(2{\cal K}))+dim(s({\cal K},{\cal K}))}\\
{h^0(I_{\overline{X_1}}(2))=h^0({\cal O}_{P^{\pi-g-1}}(2))-h^0({\cal O}_X(2\mbox{\euf b}))+dim(s(\mbox{\euf b},\mbox{\euf b}))}\\ {}\\ {dim(s(H,H))=dim(s({\cal K},{\cal K}))+dim(s(\mbox{\euf b},\mbox{\euf b}))}\\ \end{array} $$ From this, we obtain: $$ h^0(I_{C_{{\cal K}}}(2))_b=h^0({\cal O}_{P^{\pi-1}}(2))_b-h^0({\cal O}_X(\mbox{\euf b}+{\cal K})) $$ Thus the number of base quadrics containing $C_{{\cal K}}$ is: $$ h^0(I_{C_{{\cal K}}}(2))_b=(g-1)(\pi-g-2) $$ \hspace{\fill}$\rule{2mm}{2mm}$
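The last equality can be checked directly (a routine count, using that $\overline{X_1}$ has degree $\pi-1$). The base quadrics of ${\bf P}^{\pi-1}$ with respect to ${\bf P}^{g-1}$ and ${\bf P}^{\pi-g-1}$ are given by the $g\times(\pi-g)$ block $C$, so $h^0({\cal O}_{P^{\pi-1}}(2))_b=g(\pi-g)$; on the other hand $deg(\mbox{\euf b}+{\cal K})=(\pi-1)+(2g-2)>2g-2$, so $\mbox{\euf b}+{\cal K}$ is nonspecial and, by Riemann-Roch, $h^0({\cal O}_X(\mbox{\euf b}+{\cal K}))=\pi+g-2$. Therefore
$$ h^0(I_{C_{{\cal K}}}(2))_b=g(\pi-g)-(\pi+g-2)=(g-1)(\pi-g-2). $$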
\begin{teo}\label{involucioncanonica}
Let $C_{{\cal K}}\subset {\bf P}^{\pi-1}$ be a nonhyperelliptic canonical curve of genus $\pi$ which is a complete intersection of quadrics. $C_{{\cal K}}$ has an involution of degree $2$ if and only if ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ is a base-harmonic system with respect to two complementary subspaces of dimensions $k$ and $\pi-k-2$, with $k\leq \pi-k-1$. Moreover the involution has genus $g=k+1$ if and only if $h^0(I_{C_{{\cal K}}}(2))_b=(g-1)(\pi-g-2)$.
\end{teo} {\bf Proof:} The first assertion follows from Proposition \ref{equivalencia}.
By the above discussion we know the number of base quadrics $h^0(I_{C_{{\cal K}}}(2))_b=(g'-1)(\pi-g'-2)$, where $g'$ is the genus of the involution. This genus is $k+1$ or $\pi-k-1$. Suppose that $(g'-1)(\pi-g'-2)=(g-1)(\pi-g-2)$ with $g=k+1$. If $g'=\pi-k-1=\pi-g$, then we obtain $\pi=2g$, so that again $g'=g$. In either case the genus $g'$ of the involution is $k+1$. \hspace{\fill}$\rule{2mm}{2mm}$
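The implication used above is a routine expansion: with $g'=\pi-g$,
$$ (\pi-g-1)(g-2)-(g-1)(\pi-g-2)=2g-\pi, $$
so the equality of the two numbers of base quadrics forces $\pi=2g$.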
\begin{rem}\label{icompleta}
{\em It is well known that a nonhyperelliptic canonical curve of genus $\pi\geq 5$ is the complete intersection of quadrics, except when it is trigonal or when $\pi=6$ and it has a $g^2_5$. In the first case the intersection of the quadrics of ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ is a rational normal scroll; in the second case it is the Veronese surface of ${\bf P}^5$.} \hspace{\fill}$\rule{2mm}{2mm}$
\end{rem}
\subsection{Involutions of the canonical curve of genus $\pi=4$.}\label{involuciones4}
Let $C_{\cal K}\subset {\bf P}^3$ be a canonical curve of genus $4$. It is well known that this curve is the complete intersection of a quadric and a cubic surface (see \cite{hartshorne}, IV, Example $5.2.2$). Suppose that $C_{\cal K}$ has an involution $C_{\cal K}{\longrightarrow} X$ where $X$ is a smooth curve of genus $g$. We have that $\pi-1\geq 2g-2$, so the genus of $X$ can be $1$ or $2$ (if $g=0$ the curve $C_{\cal K}$ is hyperelliptic).
\begin{prop2}\label{involucion41}
A canonical curve of genus $4$ has an elliptic involution if and only if it is the complete intersection of an elliptic cubic cone $S$ and a quadric that doesn't pass through the vertex of $S$.
\end{prop2} {\bf Proof:} Suppose that $C_{\cal K}$ has an elliptic involution. We know that the involution generates a scroll $R$. In this case the directrix curve $X_0$ has degree $2g-2=0$, so it is a point. Then $R$ is an elliptic cone. If $Q$ is the unique quadric that contains $C_{\cal K}$, then necessarily $C_{\cal K}=Q\cap R$. Moreover, we know that $C_{\cal K}\cap X_0=\emptyset$, so $Q$ does not pass through the vertex of $R$. Conversely, if $C_{\cal K}=Q\cap R$ and $Q$ does not pass through the vertex of $R$, the generators of $R$ provide an elliptic involution of $C_{\cal K}$. \hspace{\fill}$\rule{2mm}{2mm}$
\begin{teo2}\label{involucion42}
A canonical curve of genus $4$ has an involution of genus $2$ if and only if it is the complete intersection of a quadric and a cubic surface which has a harmonic involution with respect to two lines that are polar with respect to the quadric.
\end{teo2} {\bf Proof:}
\begin{enumerate}
\item Suppose that $C_{\cal K}=Q_2\cap Q_3$ where $Q_2$ is a quadric and $Q_3$ is a cubic surface with a harmonic involution $\varphi:Q_3{\longrightarrow} Q_3$. Let $l$ and $l'$ be the base spaces and suppose that they are polar with respect to $Q_2$. From this, $\varphi(Q_2)=Q_2$, so $C_{\cal K}$ has a harmonic involution. Because the base spaces have dimension $1$, the involution has genus $2$.
\item Suppose that $C_{\cal K}$ has a harmonic involution $\varphi$ of genus $2$. From this, the base spaces are two lines $l$ and $l'$. Let $R\subset {\bf P}^3$ be the ruled surface generated by the involution. We know that $R$ contains $l$ and $l'$ with multiplicities $3$ and $2$ respectively. Moreover, $l'\cap C_{\cal K}=\emptyset$ and $l\cap C_{\cal K}$ consists of two points. We know that ${\bf P}(H^0(I_{C_{\cal K}}(2)))=\{ Q_2 \}$ and $dim{\bf P}(H^0(I_{C_{\cal K}}(3)))=4$. In this way $C_{\cal K}=Q_2\cap Q_3$ for any cubic surface $Q_3\in {\bf P}(H^0(I_{C_{\cal K}}(3)))$ that does not contain $Q_2$ as a component. Consider the involution $\overline{\varphi}:{\bf P}(H^0(I_{C_{\cal K}}(3))){\longrightarrow} {\bf P}(H^0(I_{C_{\cal K}}(3)))$ induced by $\varphi$. Let us see that there is a fixed irreducible element. Let $V$ be the set of reducible elements of ${\bf P}(H^0(I_{C_{\cal K}}(3)))$. Because $Q_2$ is the unique quadric containing $C_{\cal K}$, $V=\{Q_2+H; H\subset {\bf P}^3 \}$ and $dim(V)=3$. Since the base points of $\overline{\varphi}$ generate ${\bf P}(H^0(I_{C_{\cal K}}(3)))$, there exists at least one fixed irreducible element. \hspace{\fill}$\rule{2mm}{2mm}$
\end{enumerate}
\subsection{Involutions of the Veronese surface.}\label{involucionesV}
Let $v_{2,n}$ be the Veronese map of ${\bf P}^n$: $$ \begin{array}{rcl} {v_{2,n}:{{\bf P}^n}^*}&{{\longrightarrow}}&{V_{2,n}\subset {\bf P}(H^0({\cal O}_{P^n}(2)))}\\ {[x_0:\ldots :x_n]}&{{\longrightarrow}}&{[x_0^2:x_0x_1:\ldots :x_n^2]}\\ \end{array} $$ We will denote the image of this map by $V_{2,n}$. If $n=2$, it is the Veronese surface and we will denote it by $V_2$.
\begin{prop2}\label{iveronesevariedad}
The involutions of the Veronese variety $V_{2,n}$ are harmonic with respect to two base spaces which are the harmonic and base quadrics of two subspaces $S_1,S_2$ of ${\bf P}^n$.
\end{prop2} {\bf Proof:} A harmonic involution $\varphi:{\bf P}^n{\longrightarrow} {\bf P}^n$ with respect to spaces $S_1,S_2$ induces a harmonic involution $\overline{\varphi}$ in ${\bf P}(H^0({\cal O}_{P^n}(2)))$ with respect to ${\bf P}(H^0({\cal O}_{P^n}(2))_h)$ and ${\bf P}(H^0({\cal O}_{P^n}(2))_b)$.
Because $\overline{\varphi}\circ v_{2,n}=v_{2,n}\circ \varphi$, we see that $\overline{\varphi}$ restricts to $V_{2,n}$. In this way we have a harmonic involution $\overline{\varphi}:V_{2,n}{\longrightarrow} V_{2,n}$ with respect to the spaces ${\bf P}(H^0({\cal O}_{P^n}(2))_h)$ and ${\bf P}(H^0({\cal O}_{P^n}(2))_b)$.
Conversely, let $\eta$ be an involution of $V_{2,n}$. Applying the isomorphism $v_{2,n}$, we obtain an involution $\varphi^*$ of ${{\bf P}^n}^*$ with base spaces ${S_1}^*,{S_2}^*$. In this way, the dual map $\varphi=\varphi^{**}$ provides an involution $\overline{\varphi}$ of ${\bf P}(H^0({\cal O}_{P^n}(2)))$ such that
$\overline{\varphi}|_{V_{2,n}}=\eta$.\hspace{\fill}$\rule{2mm}{2mm}$
\begin{cor2}\label{iveronesesuperficie}
Any nontrivial involution of the Veronese surface $V_2$ is harmonic respect to a line which corresponds to the conics of ${\bf P}^2$ passing through a point $P$ and a line $r$, and a $3$-dimensional space $V$ corresponding to the polar conics of ${\bf P}^2$ respect to $P$ and $r$.
\end{cor2} {\bf Proof:} It is sufficient to note that an involution $\eta$ of $V_2$ is induced by an involution $\varphi$ of ${\bf P}^2$. If $\varphi$ is nontrivial, then its base spaces are a point $P$ and a line $r$.\hspace{\fill}$\rule{2mm}{2mm}$
\begin{prop2}\label{veroneseqbase}
Let $\eta$ be a nontrivial harmonic involution of the Veronese surface. Then $h^0(I_{V_2}(2))_b=2$.
\end{prop2} {\bf Proof:} Consider the Veronese map of ${\bf P}^2$: $$ \begin{array}{rcl} {v_2:{{\bf P}^2}^*}&{{\longrightarrow}}&{V_2\subset {\bf P}(H^0({\cal O}_{P^2}(2)))}\\ {[x_0:x_1:x_2]}&{{\longrightarrow}}&{[x_0^2:x_0x_1:\ldots :x_2^2]=[y_0:y_1:\ldots:y_5]}\\ \end{array} $$
Let $Y$ be the matrix: $$ \left(\begin{array}{ccc} {y_0}&{y_1}&{y_2}\\ {y_1}&{y_3}&{y_4}\\ {y_2}&{y_4}&{y_5}\\ \end{array}\right) $$ We know that $H^0(I_{V_2}(2))$ is generated by the quadrics of ${\bf P}^5$ whose equations are defined by the minors of order $2$ of the matrix $Y$.
Moreover the base spaces of $\varphi$ are a line $l$ corresponding to the conics of ${\bf P}^2$ containing a line $r$ and passing through a point $P$, and a space $V$ corresponding to the polar conics with respect to $P$ and $r$.
Taking an adequate system of coordinates we may assume that $P$ is given by the equations $\{x_1=x_2=0\}$ and $r$ by the equation $\{x_0=0\}$.
A conic containing $r$ and passing through $P$ has equation $ax_0x_1+bx_0x_2=0$. Then the equations of $l$ are $\{y_0=y_3=y_4=y_5=0\}$.
A polar conic with respect to $P$ and $r$ has equation $ax_0^2+bx_1^2+cx_1x_2+dx_2^2=0$. Then the equations of $V$ are $\{y_1=y_2=0\}$.
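Concretely, a general quadric through $V_2$ is a combination of the $2\times 2$ minors of $Y$:
$$ \lambda_1(y_0y_3-y_1^2)+\lambda_2(y_0y_4-y_1y_2)+\lambda_3(y_0y_5-y_2^2)+\lambda_4(y_1y_4-y_2y_3)+\lambda_5(y_1y_5-y_2y_4)+\lambda_6(y_3y_5-y_4^2). $$
Setting $y_1=y_2=0$, it vanishes identically on $V$ only if $\lambda_1=\lambda_2=\lambda_3=\lambda_6=0$, while the two remaining minors already vanish on $l$.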
Applying the conditions to contain $V$ and $l$ to the equations of $H^0(I_{V_2}(2))$, we obtain that the quadrics containing $V$ and $l$ are generated by $\{y_1y_4-y_2y_3=y_1y_5-y_2y_4=0\}$. From this, $h^0(I_{V_2}(2))_b=2$. \hspace{\fill}$\rule{2mm}{2mm}$
\subsection{Harmonic involutions of the rational ruled surfaces.}\label{involucionesR}
Let $R_{n-1}\subset {\bf P}^n$ be a rational normal ruled surface of degree $n-1$, with $n>3$, and let $\varphi:R_{n-1}{\longrightarrow} R_{n-1}$ be a harmonic involution of the surface. Then $\varphi$ preserves the degree of curves; hence it maps generators to generators if $n>3$. In this way, we have the following induced harmonic involutions: $$ \begin{array}{rccl} {\varphi_0:{\bf P}^1}&{{\longrightarrow}}&{{\bf P}^1}&{}\\ {P}&{{\longrightarrow}}&{Q}&{/\varphi(Pf)=Qf}\\ \end{array} $$ where ${\bf P}^1$ parameterizes the generators. $$ \begin{array}{rcl}
{\varphi_l:|D_l|}&{{\longrightarrow}}&{|D_l|}\\ {D}&{{\longrightarrow}}&{\varphi(D)}\\ \end{array} $$
where $|D_l|$ is the linear system of curves of degree $l$.
Let $k$ be the degree of the curve of minimum degree of $R_{n-1}$: \begin{enumerate}
\item If $k=\frac{n-1}{2}$, then there is a $1$-dimensional family of irreducible curves of degree $k$. The involution $\varphi_k$ has at least $2$ base points, so there are two disjoint curves $D_k$ that are invariant by $\varphi$.
\item If $k<\frac{n-1}{2}$, then there is a unique curve of minimum degree, so it is invariant by $\varphi$. Moreover, if $l=n-1-k$ the linear system $|D_l|$ has dimension $l-k$. Its generic curve is an irreducible curve disjoint from $D_k$. In particular, the set of reducible curves of $|D_l|$ is a hyperplane composed of curves of the form $D_k+\sum f_i$. Thus, we have a harmonic involution: $$ \begin{array}{rcl}
{\varphi_l:{\bf P}^{l-k}\cong |D_l|}&{{\longrightarrow}}&{|D_l|\cong {\bf P}^{l-k}}\\ \end{array} $$
We know that $\varphi_l$ has two disjoint spaces of base points. They cannot both be contained in the hyperplane (because together they generate ${\bf P}^{l-k}$), so necessarily there exists an irreducible curve in $|D_l|$ that is fixed by the involution $\varphi_l$; that is, it is invariant by $\varphi$.
\end{enumerate}
We conclude the following proposition:
\begin{prop2}\label{conclusion1}
Given a harmonic involution on a rational normal ruled surface of degree $n-1$, there exist two disjoint rational normal curves $D_k$,$D_l$ with degrees $k$ and $l$, $k+l=n-1$, that are invariant by the involution.
\end{prop2}
Let $D_k\subset {\bf P}^k$ and $D_l\subset {\bf P}^l$ be the two invariant curves. The involution $\varphi$ restricts to these spaces. Thus, we have a harmonic involution $\varphi_k:{\bf P}^k{\longrightarrow} {\bf P}^k$. It has two invariant spaces ${\bf P}^{k_1}$, ${\bf P}^{k_2}$ with $k_1+k_2+1=k$. Similarly, the harmonic involution $\varphi_l:{\bf P}^l{\longrightarrow} {\bf P}^l$ has two invariant spaces ${\bf P}^{l_1}$, ${\bf P}^{l_2}$ with $l_1+l_2+1=l$. From this, we have two possibilities for the base spaces $S_1$, $S_2$ of the involution $\varphi$: $$ \begin{array}{ll} {\begin{array}{l}
{S_1=\langle {\bf P}^{k_1},{\bf P}^{l_1} \rangle={\bf P}^{k_1+l_1+1}}\\
{S_2=\langle {\bf P}^{k_2},{\bf P}^{l_2} \rangle={\bf P}^{k_2+l_2+1}}\\ \end{array}}& {\begin{array}{l}
{S_1=\langle {\bf P}^{k_1},{\bf P}^{l_2} \rangle={\bf P}^{k_1+l_2+1}}\\
{S_2=\langle {\bf P}^{k_2},{\bf P}^{l_1} \rangle={\bf P}^{k_2+l_1+1}}\\ \end{array}}\\ \end{array} $$
Conversely, if we have two harmonic involutions in ${\bf P}^k$ and ${\bf P}^l$ we can recover an involution in ${\bf P}^n$. Note that this involution is not unique, because we have two ways to define it. Moreover, in order to restrict the involution to the ruled surface $q:R_{n-1}{\longrightarrow} {\bf P}^1$ we need that the involutions in ${\bf P}^k$ and ${\bf P}^l$ restrict to $D_k$ and $D_l$ and that they are compatible, that is, the images of the points on the same generator lie on the same generator: $q(\varphi_k(D_k\cap P f))=q(\varphi_l(D_l\cap P f)), \forall P\in {\bf P}^1$.
Thus, if $\varphi_k$ and $\varphi_l$ verify these conditions we have a harmonic involution $\varphi$ that restricts to $R_{n-1}$.
\begin{prop2}\label{conclusion2}
A harmonic involution on a normal rational ruled surface $R_{n-1}$ defines two harmonic involutions $\varphi_k$, $\varphi_l$ on two disjoint rational curves $D_k$, $D_l$ that generate the surface. Moreover, they make the diagram $(1)$ commutative.
Conversely, if two harmonic involutions $\varphi_k$, $\varphi_l$ on two rational curves generating a rational ruled surface $R_{n-1}$ verify $q(\varphi_k(D_k\cap P f))=q(\varphi_l(D_l\cap P f))$ for all $P\in {\bf P}^1$, then they define two possible harmonic involutions on $R_{n-1}$, taking the base spaces generated by the base spaces of $\varphi_k$ and $\varphi_l$.
\end{prop2}
\begin{rem2}\label{compatibilidad}
{\em In order to obtain compatible involutions $\varphi_k$, $\varphi_l$ it is sufficient to define an involution $\eta$ on ${\bf P}^1$ and to translate it to $D_k$ and $D_l$: $$ \begin{array}{c} {\varphi_k(D_k\cap P f):=D_k\cap \eta(P) f}\\ {\varphi_l(D_l\cap P f):=D_l\cap \eta(P) f}\\ \end{array} $$ for all $P\in {\bf P}^1$.
Moreover, if $\varphi_k$ and $\varphi_l$ are compatible involutions and one of them is the identity, then the other one is the identity too.}\hspace{\fill}$\rule{2mm}{2mm}$ \end{rem2}
We have described the (nontrivial) harmonic involutions on a rational normal curve $D_m{\subset} {\bf P}^m$: \begin{enumerate}
\item If $m=2\mu$ the involution is defined by two base spaces ${\bf P}^{\mu}$, ${\bf P}^{\mu-1}$ such that ${\bf P}^{\mu}\cap D_m=P\cup Q$ and ${\bf P}^{\mu-1}\cap D_m=\emptyset$ ($P,Q$ base points).
\item If $m=2\mu+1$ then the involution is defined by two base spaces ${\bf P}^{\mu}_1$, ${\bf P}^{\mu}_2$ such that ${\bf P}^{\mu}_1\cap D_m=P$ and ${\bf P}^{\mu}_2\cap D_m=Q$ ($P,Q$ base points).
\end{enumerate}
In both cases the involution generates a normal rational ruled surface $R_{m-1}$ of degree $m-1$, whose directrix curves lie in the base spaces. We call them base curves.
Thus, let $\varphi$ be a harmonic involution on $R_{n-1}$. Let $\varphi_k$, $\varphi_l$ be the harmonic involutions induced on the directrix curves $D_k$, $D_l$. Let ${\bf P}^{k_1}$, ${\bf P}^{k_2}$, ${\bf P}^{l_1}$, ${\bf P}^{l_2}$ be the base spaces of $\varphi_k$ and $\varphi_l$. We know that the base spaces of $\varphi$ are $S_1=\langle {\bf P}^{k_1},{\bf P}^{l_1} \rangle$, $S_2=\langle {\bf P}^{k_2},{\bf P}^{l_2} \rangle$. Let $C_{k_1}$, $C_{k_2}$, $C_{l_1}$, $C_{l_2}$ be the corresponding base curves. Let $F$ be the variety of lines that join the points of the involution: $F=\overline{\{P\in {\bf P}^n/P\in \langle x,\varphi(x)\rangle,x\in R_{n-1}\}}$. Let us identify $F\cap S_1$ and $F\cap S_2$.
\begin{lemma2}\label{variedadderectas}
The variety $F\cap S_1$ $(F\cap S_2)$ is a normal rational ruled surface of degree $k_1+l_1=n_1$ $(k_2+l_2=n_2)$ generated by the directrix curves $C_{k_1}$ and $C_{l_1}$ ($C_{k_2}$ and $C_{l_2}$). We call it the base ruled surface $R_{n_1-1}$ ($R_{n_2-1}$).
\end{lemma2} {\bf Proof:} Given a point $P\in R_{n-1}$, consider the line $r=\langle P,\varphi(P) \rangle$ of $F$. $r$ meets $S_1$ in a point $P_1$, which is the projection of $P$ from $S_2$ onto $S_1$. Thus, given a generator $f$ of $R_{n-1}$, the lines of $F$ defined by the points of $f$ meet $S_1$ in a line $f_1$; this line is the projection of $f$ from $S_2$. Moreover, since $f$ meets $D_k$ and $D_l$, its projection onto $S_1$ meets $C_{k_1}$ and $C_{l_1}$. In this way we see that the generators of $R_{n-1}$ project onto lines joining $C_{k_1}$ and $C_{l_1}$, so $F\cap S_1$ is the rational ruled surface defined by these directrix curves. \hspace{\fill}$\rule{2mm}{2mm}$
We have seen that a harmonic involution on a normal rational ruled surface is determined by the involutions of the directrix curves $D_k$ and $D_l$. Accordingly, we distinguish several types of involutions:
\begin{enumerate}
\renewcommand{\theenumi}{\Alph{enumi}} \renewcommand{\theenumii}{\arabic{enumii}} \renewcommand{\theenumiii}{\Alph{enumiii}} \renewcommand{\theenumiv}{\arabic{enumiv}} \renewcommand{\labelenumii}{\theenumi.\theenumii.} \renewcommand{\labelenumiii}{\theenumi.\theenumii.\theenumiii.} \renewcommand{\labelenumiv}{\theenumi.\theenumii.\theenumiii.\theenumiv}
\makeatletter \renewcommand{\p@enumiii}{\theenumi.\theenumii.} \makeatother
\item {\em $\varphi_k$ and $\varphi_l$ are the identity.}
Then the base spaces of $\varphi$ are the spaces ${\bf P}^k$ and ${\bf P}^l$ that contain the directrix curves. All the generators are invariant under $\varphi$ and the variety $F$ is the ruled surface $R_{n-1}$ itself.
\item {\em $\varphi_k$ and $\varphi_l$ are not trivial.}
\begin{enumerate}
\item $n-1=2\lambda$ (even).
\begin{enumerate}
\item $k=2\mu, l=2(\lambda-\mu)$.
Then the involutions on $D_k$ and $D_l$ have the following base spaces and base curves:
$C_{\mu}\subset {\bf P}^{\mu}$, $C_{\mu-1}\subset {\bf P}^{\mu-1}$, with $P_k,Q_k\in C_{\mu}$ base points of $D_k$.
$C_{\lambda-\mu}\subset {\bf P}^{\lambda-\mu}$, $C_{\lambda-\mu-1}\subset {\bf P}^{\lambda-\mu-1}$, with $P_l,Q_l\in C_{\lambda-\mu}$ base points of $D_l$.
Then, the base spaces of $\varphi$ are:
\begin{enumerate}
\item Case A:
$S_1=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu}\rangle={\bf P}^{\lambda+1}\ni P_k,Q_k,P_l,Q_l$.
$S_2=\langle {\bf P}^{\mu-1},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda-1}$.
Here the generators $f_P,f_Q\in {\bf P}^{\lambda+1}$ are fixed.
\item Case B: \label{B1}
$S_1=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda}\ni P_k,Q_k$.
$S_2=\langle {\bf P}^{\mu-1},{\bf P}^{\lambda-\mu}\rangle={\bf P}^{\lambda}\ni P_l,Q_l$.
Here the generators $f_P,f_Q$ are invariant (not fixed).
\end{enumerate}
\item $k=2\mu+1, l=2(\lambda-\mu)-1$.
Then the involutions on $D_k$ and $D_l$ have the following base spaces and base curves:
$P_k\in C_{\mu}\subset {\bf P}^{\mu}$, $Q_k\in C_{\mu}\subset {\bf P}^{\mu}$, with $P_k,Q_k$ base points of $D_k$.
$P_l\in C_{\lambda-\mu-1}\subset {\bf P}^{\lambda-\mu-1}$, $Q_l\in C_{\lambda-\mu-1}\subset {\bf P}^{\lambda-\mu-1}$, with $P_l,Q_l$ base points of $D_l$.
Then, the base spaces of $\varphi$ are:
\begin{enumerate}
\item Case C:
$S_1=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda}\ni P_k,P_l$.
$S_2=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda}\ni Q_k,Q_l$.
Here the generators $f_P\in S_1$, $f_Q\in S_2$ are fixed.
\item Case B: (similar to case \ref{B1}).
$S_1=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda}\ni P_k,Q_l$.
$S_2=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda}\ni P_l,Q_k$.
Here the generators $f_P,f_Q$ are invariant (not fixed).
\end{enumerate}
\end{enumerate}
\item $n-1=2\lambda-1$ (odd).
Then the curves $D_k$ and $D_l$ have degrees $k=2\mu$ and $l=2(\lambda-\mu)-1$. The base spaces and base curves are:
$C_{\mu}\subset {\bf P}^{\mu}$, $C_{\mu-1}\subset {\bf P}^{\mu-1}$, with $P_k,Q_k\in C_{\mu}$ base points of $D_k$.
$P_l\in C_{\lambda-\mu-1}\subset {\bf P}^{\lambda-\mu-1}$, $Q_l\in C_{\lambda-\mu-1}\subset {\bf P}^{\lambda-\mu-1}$, with $P_l,Q_l$ base points of $D_l$.
In any case, the base spaces of $\varphi$ are:
\begin{enumerate}
\item Case D:
$S_1=\langle {\bf P}^{\mu},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda}\ni P_k,Q_k,P_l$.
$S_2=\langle {\bf P}^{\mu-1},{\bf P}^{\lambda-\mu-1}\rangle={\bf P}^{\lambda-1}\ni Q_l$.
Here $f_P$ is a fixed generator and $f_Q$ is an invariant generator.
\end{enumerate}
\end{enumerate}
\end{enumerate}
Let $\varphi:R_{n-1}{\longrightarrow} R_{n-1}$ be a harmonic involution on the rational normal ruled surface $R_{n-1}\subset {\bf P}^n$. By Proposition \ref{equivalencia} we know that $H^0(I_{R_{n-1}}(2))$ is a base-harmonic system; that is, $H^0(I_{R_{n-1}}(2))=H^0(I_{R_{n-1}}(2))_h\oplus H^0(I_{R_{n-1}}(2))_b$. Let us compute the dimensions of these spaces. We know that $h^0(I_{R_{n-1}}(2))=\left(^{n-1}_{\;\;\, 2}\right)$. We will treat each case separately:
\begin{enumerate}
\item All generators are invariant by the involution.
We use Proposition \ref{cuadricasbase}. In this case $F=R_{n-1}$, $F\cap {\bf P}^a=D_k$ and $F\cap {\bf P}^b=D_l$. Thus, $$ h^0(I_{R_{n-1}}(2))_b=h^0(I_{R_{n-1}}(2))-h^0(I_{D_k}(2))-h^0(I_{D_l}(2))=kl $$ and $$ h^0(I_{R_{n-1}}(2))_h=h^0(I_{R_{n-1}}(2))-h^0(I_{R_{n-1}}(2))_b=\left(^{n-1}_{\;\;\, 2}\right)-kl $$
\item The generic generator is not invariant by the involution.
We use Proposition \ref{cuadricasbase}. But in this case, $F\cap S_1=R_{n_1-1}$ and $F\cap S_2=R_{n_2-1}$. Then we have: $$ h^0(I_{R_{n-1}}(2))_b=h^0(I_{F}(2))-h^0(I_{R_{n_1-1}}(2))-h^0(I_{R_{n_2-1}}(2)) \eqno(2) $$
A quadric containing $R_{n-1}\cup R_{n_1-1}$ meets each line of $F$ in at least three points, hence contains each such line. Then such a quadric contains $F$, so $H^0(I_F(2))=H^0(I_{R_{n-1}\cup R_{n_1-1}}(2))$. Consider the exact sequence: $$ 0{\longrightarrow} H^0(I_{R_{n-1}\cup R_{n_1-1}}(2)){\longrightarrow} H^0(I_{R_{n-1}}(2))\stackrel{\alpha}{{\longrightarrow}} H^0({\cal O}_{R_{n_1-1}}(2-Y)) $$ where $Y=R_{n-1}\cap R_{n_1-1}$. Then: $$ h^0(I_F(2))\geq h^0(I_{R_{n-1}}(2))-h^0({\cal O}_{R_{n_1-1}}(2-R_{n-1}\cap R_{n_1-1})) $$ and applying $(2)$ we obtain in each case:
\begin{enumerate}
\renewcommand{\theenumii}{\Alph{enumii}} \renewcommand{\labelenumii}{\theenumii.}
\item ($n-1=2\lambda, S_1={\bf P}^{\lambda-1},S_2={\bf P}^{\lambda+1},f_Q,f_P$ fixed generators.) $$ h^0(I_{R_{n-1}}(2))_b\geq \lambda (\lambda-1) $$
\item ($n-1=2\lambda, S_1={\bf P}^{\lambda}_1,S_2={\bf P}^{\lambda}_2,f_Q,f_P$ invariant (not fixed) gen\-er\-ators, with $f_P\cap {\bf P}^{\lambda}_i=P_i$, $f_Q\cap {\bf P}^{\lambda}_i=Q_i$.) $$ h^0(I_{R_{n-1}}(2))_b\geq \lambda (\lambda-1) $$
\item $(n-1=2\lambda, S_1={\bf P}^{\lambda}_1,S_2={\bf P}^{\lambda}_2,f_Q,f_P$ fixed generators, with $f_P\in {\bf P}^{\lambda}_1$, $f_Q\in {\bf P}^{\lambda}_2$.) $$ h^0(I_{R_{n-1}}(2))_b\geq \lambda (\lambda-1)+1 $$
\item $(n-1=2\lambda-1, S_1={\bf P}^{\lambda-1},S_2={\bf P}^{\lambda},f_Q$ fixed generator in ${\bf P}^{\lambda}$, and $f_P$ invariant generator, with $f_P\cap {\bf P}^{\lambda-1}=P_1$.) $$ h^0(I_{R_{n-1}}(2))_b\geq (\lambda-1)^2 $$
\end{enumerate}
\end{enumerate}
Now, let us compute the harmonic quadrics.
Let $E_k$ be a set of $k+1$ generic points on $D_k$ and $E_l$ a set of $l+1$ generic points on $D_l$. Note that a harmonic quadric that passes through a point of $R_{n-1}$ also passes through its image point. Hence a harmonic quadric passing through $E_k$ ($E_l$) meets $D_k$ ($D_l$) in $2k+2$ ($2l+2$) points, because $D_k$ ($D_l$) is invariant under the involution; since a quadric not containing $D_k$ ($D_l$) meets it in at most $2k$ ($2l$) points, such a quadric contains $D_k$ ($D_l$). Moreover, a harmonic quadric that contains $D_k$ and $D_l$ also contains the invariant (not fixed) generators, because they meet each base space in a point.
Finally, a harmonic quadric containing $D_k$ and $D_l$ and passing through $m$ generic points of $R_{n-1}$ contains their images (giving $2m$ points) and the corresponding $2m$ generators. Let $E_m$ be a set of $m$ generic points of $R_{n-1}$ and let $E=E_k\cup E_l \cup E_m$.
If $Q$ is a harmonic quadric passing through the points of $E$, then $D_k\cup D_l\cup \{$invariant generators$\}\cup\{2m$ generators$\}\subset Q\cap R_{n-1}$. If $2m>2(n-1)-(k+l)-\nu$, where $\nu$ is the number of invariant generators, then $R_{n-1}\subset Q$ and we have the exact sequence: $$ 0{\longrightarrow} H^0(I_{R_{n-1}}(2))_h{\longrightarrow} H^0({\cal O}_{{\bf P}^{n}}(2))_h{\longrightarrow} H^0({\cal O}_E(2)) $$ From this: $$ h^0(I_{R_{n-1}}(2))_h\geq h^0({\cal O}_{{\bf P}^{n}}(2))_h-(n+1+m) $$
In each case we obtain:
\begin{enumerate}
\renewcommand{\theenumi}{\Alph{enumi}}
\item There are no invariant generators. Taking $m=\lambda+1$ we have: $$ h^0(I_{R_{n-1}}(2))_h\geq \left(^{\lambda+1}_{\;\;\, 2}\right) + \left(^{\lambda+3}_{\;\;\, 2}\right)-(3\lambda+3) $$
\item There are two invariant generators. Taking $m=\lambda$ we have: $$ h^0(I_{R_{n-1}}(2))_h\geq \left(^{\lambda+2}_{\;\;\, 2}\right) + \left(^{\lambda+2}_{\;\;\, 2}\right)-(3\lambda+2) $$
\item There are no invariant generators. Taking $m=\lambda+1$ we have: $$ h^0(I_{R_{n-1}}(2))_h\geq \left(^{\lambda+2}_{\;\;\, 2}\right) + \left(^{\lambda+2}_{\;\;\, 2}\right)-(3\lambda+3) $$
\item There is an invariant generator. Taking $m=\lambda$ we have: $$ h^0(I_{R_{n-1}}(2))_h\geq \left(^{\lambda+1}_{\;\;\, 2}\right) + \left(^{\lambda+2}_{\;\;\, 2}\right)-(3\lambda+1) $$
\end{enumerate}
We see that, in every case, the sum of the bounds computed for the harmonic and the base quadrics equals $h^0(I_{R_{n-1}}(2))$, so these bounds are attained in all cases, and we obtain the number of base quadrics:
\begin{prop2}\label{baseregladaracional}
Let $R_{n-1}\subset {\bf P}^n$ be a rational normal scroll of degree $n-1$. Let $\varphi:R_{n-1}{\longrightarrow} R_{n-1}$ be a harmonic involution. Then we have the following cases:
\begin{enumerate}
\renewcommand{\theenumii}{\arabic{enumii}} \renewcommand{\theenumiii}{\Alph{enumiii}}
\item All the generators are invariant. There are two directrix curves of base points, $D_k,D_l$, with $k+l=n-1$; they lie in the base spaces ${\bf P}^k$, ${\bf P}^l$. $$ h^0(I_{R_{n-1}}(2))_b=kl $$
\item There are two invariant generators (fixed or not):
\begin{enumerate}
\item $n-1=2\lambda$ ($n-1$ even).
\begin{enumerate}
\item The base spaces are ${\bf P}^{\lambda-1}$,${\bf P}^{\lambda+1}$. There are two fixed generators in ${\bf P}^{\lambda+1}$.
$$ h^0(I_{R_{n-1}}(2))_b=\lambda(\lambda-1) $$
\item The base spaces are ${\bf P}^{\lambda}$,${\bf P}^{\lambda}$. There is a fixed generator in each of them.
$$ h^0(I_{R_{n-1}}(2))_b=\lambda(\lambda-1)+1 $$
\item The base spaces are ${\bf P}^{\lambda}$,${\bf P}^{\lambda}$. There are no fixed generators.
$$ h^0(I_{R_{n-1}}(2))_b=\lambda(\lambda-1) $$
\end{enumerate}
\item $n-1=2\lambda-1$ ($n-1$ odd).
\begin{enumerate}
\item[D.] The base spaces are ${\bf P}^{\lambda-1}$,${\bf P}^{\lambda}$. There is a fixed generator in ${\bf P}^{\lambda}$.
$$ h^0(I_{R_{n-1}}(2))_b=(\lambda-1)^2 $$
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{prop2} \hspace{\fill}$\rule{2mm}{2mm}$
\subsection{Involutions of the canonical curve of genus $\pi>4$.}\label{involucionesC}
We have investigated all the possible cases in which the quadrics containing a canonical curve form a base-harmonic system:
\begin{teo2}\label{uno}
The only cases in which the system of quadrics containing a canonical curve $C_{{\cal K}}$ of genus $\pi$, $\pi>4$, is a base-harmonic system with respect to base spaces ${\bf P}^{g-1}$, ${\bf P}^{\pi-g-1}$ ($\pi\geq 2g-1>0 $) with $b$ independent base quadrics are:
\begin{enumerate}
\item $b=(g-1) (\pi-g-2)$
\begin{enumerate}
\item If $\pi=2g,2g+1,2g+2$ and the curve has a $g^1_3$ or an involution of genus $g$, or both of them; except if $\pi=6$ and $g=2$ when $C_{{\cal K}}$ can have a $g^2_5$.
\item If $\pi\neq 2g,2g+1,2g+2$ and the curve $C_{{\cal K}}$ has a $\gamma^1_2$ of genus $g$.
\end{enumerate}
\item $b=(g-1)(\pi-g-2)+1$, $\pi=2g-1$ or $\pi=2g$, and $C_{{\cal K}}$ has a $g^1_3$ (not a $\gamma^1_2$).
\item $b=(g-1)(\pi-g-1)$ and $C_{{\cal K}}$ has a $g^1_3$ (not a $\gamma^1_2$).
\end{enumerate}
\end{teo2}
\begin{cor2}\label{coro1}
The only involutions on a trigonal canonical curve of genus $\pi$, $\pi>4$, are of genus $\frac{\pi}{2},\frac{\pi-1}{2}$ or $\frac{\pi-2}{2}$.
\end{cor2} {\bf Proof:} Let $C_{{\cal K}}\subset {\bf P}^n$ be a trigonal canonical curve and let $R_{\pi-2}$ be the ruled surface of trisecants. Suppose that $C_{{\cal K}}$ has an involution of genus $g$. Then the system of quadrics ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ is a base-harmonic system with $(g-1)(\pi-g-2)$ independent base quadrics. Since $R_{\pi-2}=\bigcap_{Q\supset C_{{\cal K}}}Q$, we have a harmonic involution over $R_{\pi-2}$. By Proposition \ref{baseregladaracional} we know that $\pi=2g$, $\pi=2g+1$ or $\pi=2g+2$ and the conclusion follows. \hspace{\fill}$\rule{2mm}{2mm}$
\begin{cor2}\label{coro2}
The only involutions on a smooth quintic plane curve are of genus $2$. \hspace{\fill}$\rule{2mm}{2mm}$
\end{cor2}
\begin{teo2}\label{casoparticular}
Let $C_{{\cal K}}\subset {\bf P}^{\pi-1}$ be a canonical curve of genus $\pi$, $\pi>4$. Then $C_{{\cal K}}$ has an involution of genus $1$ if and only if the quadrics of ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ are a base-harmonic system with respect to a point and a space ${\bf P}^{\pi-2}$, without base quadrics.
\end{teo2} {\bf Proof:} If $C_{{\cal K}}$ has an involution of genus $1$, we know that the involution generates an elliptic cone, the system of quadrics ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ is harmonic, and it has no base quadrics with respect to ${\bf P}^0$ and ${\bf P}^{\pi-2}$.
Conversely, if the system of quadrics ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ is harmonic with respect to ${\bf P}^0$ and ${\bf P}^{\pi-2}$, then necessarily it has no base quadrics: a base quadric contains the hyperplane ${\bf P}^{\pi-2}$ and is therefore reducible, and no reducible quadric contains the nondegenerate curve $C_{{\cal K}}$. If $C_{{\cal K}}$ is not trigonal we have an involution of genus $1$ on $C_{{\cal K}}$. If $C_{{\cal K}}$ is trigonal, then by Corollary \ref{coro1} $\pi=2,3,4$, contradicting the assumption $\pi>4$. \hspace{\fill}$\rule{2mm}{2mm}$
\begin{teo2}\label{fundamental}
Let $C_{{\cal K}}\subset {\bf P}^{\pi-1}$ be a canonical curve of genus $\pi$, with $\pi>4$. If $C_{{\cal K}}$ has an involution of genus $g$, then $\pi\geq 2g-1$ and the quadrics of ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ form a base-harmonic system with respect to the base spaces ${\bf P}^{g-1}$, ${\bf P}^{\pi-g-1}$ that contains $(g-1)(\pi-g-2)$ independent base quadrics. Conversely, these conditions are sufficient to guarantee the existence of an involution, except when:
\begin{enumerate}
\item $\pi=6,g=2$ and $C_{{\cal K}}$ has a $g^2_5$; or
\item $\pi=2g$,$2g+1$ or $2g+2$ and $C_{{\cal K}}$ is trigonal.
\end{enumerate}
\end{teo2} \hspace{\fill}$\rule{2mm}{2mm}$
\begin{rem2} {\em Let us study what happens at the two exceptions:
\begin{enumerate}
\item Suppose that $C_{{\cal K}}$ is a canonical curve of genus $6$ with a $g^2_5$; that is, it is isomorphic to a smooth plane curve of degree $5$. Suppose that the quadrics of ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ are a base-harmonic system with respect to the base spaces ${\bf P}^1$ and ${\bf P}^3$. This induces a harmonic involution on the Veronese surface and hence an involution on the plane.
Obviously, the generic plane curve of degree $5$ is not invariant under this involution, so in this case the hypotheses of the above theorem are not sufficient.
However, there are smooth quintic plane curves invariant under an involution. For example, we can take the quintic curve $f(x_0,x_1)-x_2^4x_0=0$ in the plane, where $f(x_0,x_1)$ is a generic homogeneous polynomial of degree $5$. This curve is smooth and it is invariant under the involution $$ \begin{array}{ccc} {x_0{\longrightarrow} x_0;}&{x_1{\longrightarrow} x_1;}&{x_2{\longrightarrow} -x_2}\\ \end{array} $$
\item Now, suppose that $C_{{\cal K}}$ is trigonal. Then it lies on a rational ruled surface $S_e={\bf P}({\cal O}_{P^1}\oplus {\cal O}_{P^1}(-e))$ in the linear system $|3X_0+af|$. The canonical embedding is obtained from the linear system $|X_0+(a-e-2)f|$ on the ruled surface.
If ${\bf P}(H^0(I_{C_{{\cal K}}}(2)))$ is a base-harmonic system then it defines a harmonic involution on the ruled surface $S_e$. Moreover, we have an induced involution in the linear system
$|3X_0+a f|$. The generic curve of this linear system is not invariant under the involution. We see that the hypotheses of the theorem are not sufficient.
On the other hand, there are smooth curves in these linear systems invariant under the involution. Let us see an example. Consider the rational ruled surface $S_0\cong {\bf P}^1\times {\bf P}^1$ with coordinates $[(x_0,x_1),(y_0,y_1)]$. We can take the curve on $S_0$ with equation: $$ x_0^ny_0^3-x_0^ny_0y_1^2+x_1^ny_1^3+x_1^ny_0^2y_1=0 $$ with $n\geq 5$ odd (for $n$ odd every monomial has even total degree in $x_1$ and $y_1$). This is a smooth curve of type $(3,n)$ in the linear system $|3X_0+nf|$. Moreover it is invariant under the involution $$ \begin{array}{cccc} {x_0{\longrightarrow} x_0;}&{x_1{\longrightarrow} -x_1;}\\ {y_0{\longrightarrow} y_0;}&{y_1{\longrightarrow} -y_1}\\ \end{array} $$
\end{enumerate} }\hspace{\fill}$\rule{2mm}{2mm}$
\end{rem2}
\end{document}
\begin{document}
\maketitle \footnote{Partially supported by a Grant of JSPS}
\begin{abstract} This paper shows some criteria for a scheme of finite type over an algebraically closed field to be non-singular in terms of jet schemes.
For the base field of characteristic zero,
the scheme is non-singular if and only if one of the truncation morphisms
of its jet schemes
is flat.
For the positive characteristic case,
we obtain a similar characterization under the reducedness condition on the
scheme.
We also obtain by a simple discussion that the scheme is non-singular if and only if one of its jet schemes is non-singular. \end{abstract}
\section{Introduction} \noindent In 1968 John F. Nash introduced the jet schemes and the arc space of an algebraic or analytic variety and
posed the Nash problem (\cite{nash}).
The jet schemes and the arc space are considered to reflect the nature of the singularities of a variety.
(The Nash problem itself concerns a connection between the arc
space and the singularities.)
By looking at the jet schemes over a variety,
we can see some properties of the
singularities of the variety (see \cite{ein}, \cite{e-Mus}, \cite{must01},
\cite{must02}) : for example, if \( X \) is
locally a complete intersection variety, the singularities of \( X
\) are canonical (resp.
terminal) if and only
if the jet scheme \( X_{m} \) is irreducible (resp. normal) for every
\( m\in {\Bbb N} \).
For a non-singular variety \( X \), the jet
schemes are distinguished: the \( m \)-jet scheme \( X_{m} \) is
non-singular for every \( m\in {\Bbb N} \) and every truncation morphism \(
\psi_{m',m}:
X_{m'}\to X_{m}\) is smooth with the fiber \( {\Bbb A}_{k}^{(m'-m)\dim X} \)
for \( m'>m\geq 0\).
Then, it is natural to ask whether these properties
characterize the smoothness of the variety \( X \).
Our results are in fact stronger: a single jet scheme or a single truncation morphism suffices to characterize the smoothness of the variety \( X \).
In this paper we prove the following:
\begin{prop} \label{sm} Let \( k \) be a field of arbitrary characteristic and \( f:X\to Y \) a morphism of \( k \)-schemes. Then the following are equivalent: \begin{enumerate}
\item[(i)] \( f \) is smooth (resp. unramified, \'etale);
\item[(ii)] For every \( m\in {\Bbb N} \), the morphism \( f_{m}:X_{m}\longrightarrow
Y_{m} \) induced from $f$ is smooth (resp. unramified, \'etale);
\item[(iii)] There is an integer \( m\in {\Bbb N} \) such that
the morphism \( f_{m}:X_{m}\longrightarrow
Y_{m} \) is smooth (resp. unramified, \'etale). \end{enumerate} \end{prop}
As a corollary of this proposition, we obtain the following:
\begin{cor} \label{smooth} Let \( k \) be a field of arbitrary characteristic. A scheme \( X \) of finite type over \( k \) is smooth if and only if there is \( m\in {\Bbb Z}_{\geq 0} \) such that \( X_{m} \) is smooth. \end{cor}
\begin{thm} \label{flat} Let \( k \) be an algebraically closed field of characteristic zero. A scheme \( X \) of finite type over \( k \)
is non-singular if and only if there is a pair of integers \( 0\leq m<m' \) such that the truncation morphism \( \psi_{m', m}: X_{m'}\longrightarrow X_{m} \) is a flat morphism. \end{thm}
Here, we note that the assumption on the characteristic of the base field in Theorem \ref{flat} is necessary. We will see a counterexample to this statement in positive characteristic (Example \ref{ex}).
If we assume that the scheme $X$ is reduced, then we have a criterion similar to Theorem \ref{flat} also in the positive characteristic case.
\begin{thm} \label{positive} Let \( k \) be an algebraically closed field of arbitrary characteristic. Assume the scheme \( X \) of finite type over \( k \) is reduced. Then $X$ is non-singular if and only if there is a pair of integers \( 0< m<m' \) such that the truncation morphism \( \psi_{m',m}:X_{m'}\to X_{m} \) is flat. \end{thm}
This paper is motivated by Kei-ichi Watanabe's question. The author expresses her hearty thanks to him. The author is also grateful to Mircea Musta\c{t}\u{a} for his helpful comments and stimulating discussions.
\section{Preliminaries on jet schemes}
In this paper, a $k$-scheme is always a separated scheme over a field $k$.
\begin{defn}
Let \( X \) be a scheme of finite type over \( k \) and $K\supset k$ a field extension.
A morphism \( \operatorname{Spec} K[t]/(t^{m+1})\to X \) is called an \( m \)-jet
of \( X \). \end{defn}
\begin{say} \label{field}
Let \( X \) be a scheme of finite type over \( k \).
Let \( {\cal S}ch/k \) be the category of \( k \)-schemes
and \( {\cal S}et \) the category of sets.
Define a contravariant functor \( {\cal F}_{m}^X: {\cal S}ch/k \longrightarrow{\cal S}et \)
by $$
{\cal F}_{m}^X(Y)=\operatorname{Hom} _{k}(Y\times_{\operatorname{Spec} k}\operatorname{Spec} k[t]/(t^{m+1}), X). $$
Then, \( {\cal F}_{m}^X \) is representable by a scheme \( X_{m} \) of finite
type over \( k \), that is $$
\operatorname{Hom} _{k}(Y, X_{m})\simeq\operatorname{Hom} _{k}(Y\times_{\operatorname{Spec} k} \operatorname{Spec} k[t]/(t^{m+1}), X). $$
This \( X_{m} \) is called the {\it scheme of \( m \)-jets} of \( X
\) or the {\it \( m \)-jet scheme} of \( X \).
For \( m<m' \) the canonical surjection \( k[t]/(t^{m'+1})\to k[t]/(t^{m+1}) \)
induces a morphism \( \psi^X_{m',m}:X_{m'}\to X_{m} \),
which we call a truncation morphism.
In particular, the truncation \( \psi^X_{m,0}: X_{m}\to X \) to level \( 0 \) is denoted by \( \pi^X_{m} \).
We denote \( \psi^X_{m',m} \) and \( \pi^X_{m} \) by \(
\psi_{m',m} \) and \( \pi_{m} \), respectively, if there is no risk
of confusion.
By \ref{field}, a point \( z \in X_{m} \) gives an \( m \)-jet \(
\alpha_{z}:
\operatorname{Spec} K[t]/(t^{m+1})
\to X \) and \( \pi^X_{m}(z)=\alpha_{z}(0) \),
where \( K \) is the residue field at \( z \) and \( 0 \) is the closed point of \( \spec K[t]/(t^{m+1}) \).
From now on we denote a point \( z \) of \( X_{m} \) and the
corresponding \( m \)-jet \( \alpha_{z} \) by the common symbol \(
\alpha \). \end{say}
\begin{say}
The canonical inclusion \( k\to k[t]/(t^{m+1}) \) induces a
section \( \sigma^X_{m}:X \hookrightarrow X_{m} \) of \( \pi^X_{m} \).
The image \( \sigma^X_{m}(x) \) of a point \( x\in X \) is the
trivial \( m \)-jet at \( x \) and is denoted by \( x_{m} \). \end{say}
\begin{say}
Let \( f:X\to Y \) be a morphism of \( k \)-schemes.
Then the canonical morphism \( f_{m}:X_{m}\to Y_{m} \)
is induced for every \( m\in {\Bbb N} \) such that the
following diagram is commutative:
\[ \begin{array}{ccc}
X_{m}& \stackrel{f_{m}}\longrightarrow & Y_{m}\\
\pi^X_{m} \downarrow\ \ \ \ \ & & \ \ \ \ \downarrow \pi^Y_{m}\\
X & \stackrel{f}\longrightarrow & Y\\
\end{array}. \]
Pointwise, for \( \alpha\in X_{m} \), \( f_{m}(\alpha)\) is the $m$-jet \[ f\circ \alpha:
\spec K[t]/(t^{m+1})\stackrel{\alpha}\longrightarrow X \stackrel{f}\longrightarrow Y. \] \end{say}
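As a small worked example of this pointwise description (our own illustration, not from the paper): take \( X=Y={\Bbb A}^1_k \), \( f(x)=x^2 \) and \( m=1 \), so that \( X_{1}=\spec k[x_{0},x_{1}] \) and a \( 1 \)-jet is \( x(t)=x_{0}+x_{1}t \) modulo \( t^{2} \).

```latex
% A 1-jet alpha of A^1 sends x to x_0 + x_1 t (mod t^2); composing with f:
f\circ\alpha:\quad x \,\longmapsto\, (x_{0}+x_{1}t)^{2}
   \;=\; x_{0}^{2} + 2x_{0}x_{1}\,t \pmod{t^{2}},
\qquad\mbox{so}\qquad
f_{1}(x_{0},x_{1}) \;=\; (x_{0}^{2},\, 2x_{0}x_{1}).
```

In particular \( \pi^Y_{1}(f_{1}(\alpha))=x_{0}^{2}=f(\pi^X_{1}(\alpha)) \), as the commutative diagram above requires.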
\section{Proof of Proposition \ref{sm}}
\noindent [{\it Proof of Proposition \ref{sm}}]
(i)\( \Rightarrow \) (ii): This implication for smooth and \'etale cases is already mentioned in \cite{BL} and \cite{EM}.
For the reader's convenience, the proof is included here.
Assume that, for an integer \( m\geq 0 \), a commutative diagram of \( k \)-schemes: \[ \begin{array}{ccc} X_{m}& \stackrel{f_{m}}\longrightarrow & Y_{m}\\ \uparrow& & \uparrow\\ Z' &\hookrightarrow& Z\\ \end{array}
\]
is given, where \( Z'\hookrightarrow Z \) is a closed immersion of affine schemes
whose defining ideal is nilpotent.
This diagram is equivalent to the following commutative diagram:
\[ \begin{array}{ccc} X& \stackrel{f}\longrightarrow &Y \\ \uparrow& & \uparrow\\ Z'\times {\Spec k[t]/(t^{m+1})} &\hookrightarrow& Z\times {\Spec k[t]/(t^{m+1})}\\ \end{array}. \] Here, we note that \( Z'\times {\Spec k[t]/(t^{m+1})} \hookrightarrow Z\times {\Spec k[t]/(t^{m+1})} \) is a closed subscheme with the nilpotent defining ideal. If \( f \) is smooth (resp. unramified, \'etale), there exists a (resp. there exists at most one, there exists a unique) morphism \( Z\times {\Spec k[t]/(t^{m+1})} \to X \) which makes the two triangles commutative. This is equivalent to the fact that there exists a (resp. there exists at most one, there exists a unique) morphism \( Z \to X_{m} \) which makes the two triangles in the first diagram commutative.
\noindent (ii)\( \Rightarrow \) (iii): trivial.
\noindent (iii)\( \Rightarrow \) (i): Assume that a commutative diagram
\begin{equation}\label{d1}
\begin{array}{ccc} X& \stackrel{f}\longrightarrow &Y \\ \varphi\uparrow\ \ \ & & \ \ \ \uparrow\psi\\ Z' &\hookrightarrow& Z \\ \end{array} \end{equation} is given, where \( Z'\hookrightarrow Z \) is a closed immersion of affine schemes whose defining ideal is nilpotent.
For an integer \( m\geq 0 \), by composing with the sections
\(\sigma_{m}^X: X\hookrightarrow X_{m} \), \( \sigma_{m}^Y:
Y\hookrightarrow Y_{m} \), we obtain the commutative diagram:
\begin{equation}\label{d2}
\begin{array}{ccc}
X_{m}&\stackrel{f_{m}}\longrightarrow& Y_{m}\\
\cup& & \cup\\ X& \stackrel{f}\longrightarrow &Y \\ \varphi\uparrow\ \ \ & & \ \ \ \uparrow\psi\\ Z' &\hookrightarrow& Z \\ \end{array}. \end{equation}
Now, if \( f_{m} \) is smooth (resp. unramified, \'etale), there exists a (resp. exists at most one, exists a unique) morphism \( Z\to X_{m} \) such that the two triangles are commutative in the diagram (\ref{d2}). By composing this morphism $Z\to X_m$ with $\pi_m^X:X_m\to X$, we obtain that there exists a (resp. exists at most one, exists a unique) morphism \( Z\to X \) such that the two triangles in the lower rectangle are commutative. \( \Box \)
\vskip.5truecm \noindent [{\it Proof of Corollary \ref{smooth}}] In Proposition \ref{sm}, let \( Y=\spec k \). \( \Box \)
\section{Jet schemes of a local analytic scheme}
For the proofs of the theorems, here we set up the jet schemes for local analytic schemes. Let $k$ be an algebraically closed field of arbitrary characteristic. The representability of the following functor follows from \cite{voj}. Here, we show the concrete form of the scheme representing the functor.
\begin{prop}
Let ${\widehat{{\Bbb A}_{k}^N}}$ be the affine scheme $\operatorname{Spec} \widehat{\o_{{\Bbb A}^N,0}}$, where $\o_{{\Bbb A}^N,0}$ is the local ring of the origin $0\in {\Bbb A}_k^N$ and $\widehat{\o_{{\Bbb A}^N,0}}$ is the completion of $\o_{{\Bbb A}^N,0}$ at the maximal ideal.
Let ${\cal F}_m^{{\widehat{{\Bbb A}_{k}^N}}}: Sch/k \to Set$ be the functor from the category of $k$-schemes to the category of sets defined as follows:
$${\cal F}_m^{{\widehat{{\Bbb A}_{k}^N}}}(Y):=\operatorname{Hom}_k(Y\times_{\spec k} {\Spec k[t]/(t^{m+1})}, {\widehat{{\Bbb A}_{k}^N}}).$$ For a morphism $u:Y\to Z$ in $Sch/k$, $${\cal F}_m^{{\widehat{{\Bbb A}_{k}^N}}}(u):\operatorname{Hom}_k(Z\times{\Spec k[t]/(t^{m+1})},{\widehat{{\Bbb A}_{k}^N}})\longrightarrow\operatorname{Hom}_k(Y\times{\Spec k[t]/(t^{m+1})},{\widehat{{\Bbb A}_{k}^N}}) $$ is defined by $f\mapsto f\circ (u\times id)$.
Then, ${\cal F}_m^{{\widehat{{\Bbb A}_{k}^N}}}$ is representable by the scheme $$({\widehat{{\Bbb A}_{k}^N}})_m:=\operatorname{Spec} k[[x_{0,1}, x_{0,2},\ldots,x_{0,N}]][x_{1,1},\ldots,x_{1,N},\ldots,x_{m,
1},\ldots,x_{m,N}]$$
$$ =\spec k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}], $$
where we denote the multivariables \( (
x_{i,1},x_{i,2},\ldots,x_{i,N})\) by \( {\bf x}_{i} \) for the
simplicity of notation. \end{prop}
\begin{pf} We may assume that $Y$ is an affine scheme $\spec R$ over $k$. Then, $$\operatorname{Hom}_k(Y\times{\Spec k[t]/(t^{m+1})},{\widehat{{\Bbb A}_{k}^N}}) \simeq \operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))$$ Here we have a bijection: $$\operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))\simeq \operatorname{Hom}_k(k[[{\bf x}_0]],R)\times R^{mN}$$ by $\varphi\mapsto (\pi_0\circ \varphi, \pi_1\varphi(x_{0,1}),...,\pi_1\varphi(x_{0,N}),..., \pi_m\varphi(x_{0,1}),...,\pi_m\varphi(x_{0,N}))$, where $$\pi_i: R[t]/(t^{m+1})\to R\ \ \ \ (i=0,1,...,m) $$ is the projection of $R[t]/(t^{m+1}) =R\oplus Rt \oplus\cdots \oplus Rt^m \simeq R^{m+1}$ to the $i$-th factor. Indeed it gives a bijection, since we have the inverse map $$\operatorname{Hom}_k(k[[{\bf x}_0]],R)\times R^{mN}\longrightarrow\operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))$$
by $$(\varphi_0, a_{1,1},..,a_{1,N},...,a_{m,1},..., a_{m,N})\mapsto \varphi$$ where $\varphi\in \operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))$ is defined as follows:
For $\gamma(x_{0,1}, x_{0,2},\ldots,x_{0,N})\in k[[{\bf x}_0]]$, substituting $\sum_{i=0}^m x_{i,j}t^i$ into $x_{0,j}$ $(j=1,...,N)$ in $\gamma$, we obtain $$\gamma(\sum {\bf x}_i t^i)=\sum_{i=0}^{\infty}\left(\sum_{\sum_{\ell} i_{\ell}=i, 1\leq j_{\ell}\leq N} \gamma_{i_1,j_1,...,i_s, j_s}x_{i_1,j_1}\cdots x_{i_s,j_s}\right)t^i$$ in $k[[{\bf x}_{0}, {\bf x}_{1},\ldots,{\bf x}_{m}, t]]$, where $\gamma_{i_1,j_1,...,i_s, j_s}\in k[[{\bf x}_0]]$. Define $\varphi(\gamma)\in R[t]/(t^{m+1})$ by $$\varphi(\gamma)=\sum_{i=0}^m\left(\sum_{\sum_{\ell} i_{\ell}=i, 1\leq j_{\ell}\leq N} \varphi_0(\gamma_{i_1,j_1,...,i_s, j_s})a_{i_1,j_1}\cdots a_{i_s,j_s}\right)t^i.$$
On the other hand, it is clear that there is a bijection $$\operatorname{Hom}_k(k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m], R)\simeq \operatorname{Hom}_k(k[[{\bf x}_0]],R)\times R^{mN}$$
by $\varphi\mapsto (\varphi|_{k[[{\bf x}_0]]}, \varphi(x_{1,1}),...,\varphi(x_{1,N}),..., \varphi(x_{m, 1}),...,\varphi(x_{m,N}))$. By this, we have $$\operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))\simeq\operatorname{Hom}_k(k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m], R),$$ which implies $$\operatorname{Hom}_k(Y\times \spec k[t]/(t^{m+1}), {\widehat{{\Bbb A}_{k}^N}}) \simeq \operatorname{Hom}_k(Y, \spec k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m]).$$ This completes the proof. \end{pf}
By this proposition, we have the following:
\begin{cor} Let $X\subset {\widehat{{\Bbb A}_{k}^N}}$ be a closed subscheme. Let $I$ be the defining ideal of $X$ in ${\widehat{{\Bbb A}_{k}^N}}$. Define a functor ${\cal F}_m^X: Sch/k\to Set$ for this $X$ in the same way as in the previous proposition.
For a power series \( f\in k[[{\bf x}_{0}]]\) we define an element
\( F_{m}\in k[[{\bf x}_{0}]][ {\bf x}_{1},\ldots,{\bf x}_{m}] \) as follows:
\[ f(\sum_{i= 0}^m {\bf x}_{i}t^i)=F_{0}+F_{1}t+F_{2}t^2+\cdots+
F_{m}t^m+\cdots . \]
Then, the functor ${\cal F}_m^X$ is represented by a scheme \( X_{m} \) defined in
\(( {\widehat{{\Bbb A}_{k}^N}})_{m}= \spec k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}]\) by
the ideal generated by \( F_{i} \)'s \( (i\leq m) \) for all \( f\in I \).
(It is sufficient to take \( F_{i} \)'s \( (i\leq m) \) for all generators $f\in I$.) \end{cor}
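To illustrate the corollary's recipe on a concrete case (our own example, not from the paper): take \( N=2 \), let \( X \) be defined by the single equation \( f=x_{0,1}x_{0,2}=0 \) (a node), and take \( m=1 \). Substituting \( x_{0,j}+x_{1,j}t \) for \( x_{0,j} \) gives

```latex
% Expanding f at a truncated arc and reading off the coefficients F_0, F_1:
f\Big(\sum_{i=0}^{1}{\bf x}_{i}t^{i}\Big)
  = (x_{0,1}+x_{1,1}t)(x_{0,2}+x_{1,2}t)
  = \underbrace{x_{0,1}x_{0,2}}_{F_{0}}
  + \underbrace{(x_{0,1}x_{1,2}+x_{1,1}x_{0,2})}_{F_{1}}\,t
  + x_{1,1}x_{1,2}\,t^{2},
```

so \( X_{1} \) is the closed subscheme of \( \spec k[[x_{0,1},x_{0,2}]][x_{1,1},x_{1,2}] \) defined by the ideal \( (F_{0},F_{1}) \); the coefficient of \( t^{2} \) is discarded, since only the \( F_{i} \) with \( i\leq m=1 \) occur.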
\begin{pf} We use the notation in the proof of the previous proposition. There, we obtained bijections: $$\operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))\stackrel{\Phi}\simeq
\operatorname{Hom}_k(k[[{\bf x}_0]],R)\times R^{mN}$$
$$\stackrel{\Psi}\simeq \operatorname{Hom}_k(k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m], R).$$ Here, for $Y=\spec R$, we have the fact that $${\cal F}_m^X(Y)=\operatorname{Hom}_k(k[[{\bf x}_0]]/I, R[t]/(t^{m+1}))$$ is the subset $$\{\varphi:k[[{\bf x}_0]]\to R[t]/(t^{m+1})\mid \varphi(\gamma)=0 \ \ \mbox{for\ generators}\ \gamma\in I\}$$ of $\operatorname{Hom}_k(k[[{\bf x}_0]], R[t]/(t^{m+1}))$. The condition $\varphi(\gamma)=0$ is equivalent to the conditions $\pi_i\circ\varphi(\gamma)=0$ $(i=0,1,...,m)$. Therefore, this subset is mapped by $\Psi\circ \Phi$ to the subset $$\big\{\varphi:k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m]\to R\mid \ \varphi(x_{i,j})=a_{i,j}, \ \mbox{for\ generators}\ \gamma\in I, $$ $$\sum_{\sum_{\ell} i_{\ell}=i, 1\leq j_{\ell}\leq N} \varphi_0(\gamma_{i_1,j_1,...,i_s, j_s})a_{i_1,j_1}\cdots a_{i_s,j_s}=0 \ (i=0,1,...,m)\big\}.$$ Let the ideal $J\subset k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m]$ be generated by $$\sum_{\sum_{\ell} i_{\ell}=i, 1\leq j_{\ell}\leq N} \gamma_{i_1,j_1,...,i_s, j_s}x_{i_1,j_1}\cdots x_{i_s,j_s}$$ for generators $\gamma\in I,$ then it follows that our subset is equal to $$\operatorname{Hom}_k(k[[{\bf x}_0]][{\bf x}_1,...,{\bf x}_m]/J,R).$$
\end{pf}
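As an aside, the passage from $f$ to the $F_i$ is a purely mechanical substitution and is easy to carry out by computer algebra. The following sympy sketch is our own illustration (all variable names are ours): it computes the defining equations of $X_2$ for the cusp $f=x_{0,2}^2-x_{0,1}^3$ in ${\widehat{{\Bbb A}_{k}^2}}$.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.symbols('x0 x1 x2')   # arc coefficients of the first coordinate
y = sp.symbols('y0 y1 y2')   # arc coefficients of the second coordinate

# truncated arcs sum_i x_i t^i substituted into the coordinates (m = 2)
xt = sum(x[i] * t**i for i in range(3))
yt = sum(y[i] * t**i for i in range(3))

# the cusp f = y^2 - x^3; X_2 is cut out by the coefficients of t^0, t^1, t^2
f_on_arc = sp.expand(yt**2 - xt**3)
F = [f_on_arc.coeff(t, i) for i in range(3)]
# F[0] = y0^2 - x0^3
# F[1] = 2*y0*y1 - 3*x0^2*x1
# F[2] = y1^2 + 2*y0*y2 - 3*x0^2*x2 - 3*x0*x1^2
```

Note that every monomial of $F_i$ has weight $i$, as one checks on the output above.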
\begin{rem} \label{algan}
Let $X\subset {\Bbb A}_k^N$ be a closed subscheme containing the origin $0$, $I_X$ the defining ideal
and $\widehat X$ the affine scheme $\spec \widehat {\o_{X,0}}$. Note that the defining ideal $I$ of $\widehat{X}$ in ${{\widehat{{\Bbb A}_{k}^N}}}$
is generated by $I_X$. For a polynomial \( f\in k[{\bf x}_{0}] \) we define an element
\( F_{m}\in k[{\bf x}_{0}, {\bf x}_{1}\ldots,{\bf x}_{m}] \) in the same way as in the previous corollary.
Then \( \widehat X_{m} \) is defined in
\(( {\widehat{{\Bbb A}_{k}^N}})_{m}= \spec k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}]\) by
the ideal generated by \( F_{i} \)'s \( (i\leq m) \) for generators \( f\in I_{X} \). \end{rem}
\begin{cor} Under the notation of Remark \ref{algan}, it follows that $$\widehat X_m=\widehat X\times_X X_m.$$ \end{cor}
\begin{pf}
Note that $F_i\in k[{\bf x}_0, {\bf x}_1,..., {\bf x}_m]$ for a generator $f$ of $I_X$ and $I$ is generated by $I_X$.
Now the expressions
$$X_{m}= \spec k[{\bf x}_{0},{\bf x}_{1},\ldots,{\bf x}_{m}]/ (F_i)_{f\in I_X}$$
$$\widehat X_{m}= \spec k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}]/ (F_i)_{f\in I_X}$$ give the required equality. \end{pf}
\begin{cor} \label{equal} Under the notation of Remark \ref{algan},
let $\pi_m^X$ and $\pi_m^{\widehat X}$ be the canonical projections
$X_m\to X$ and $\widehat X_m\longrightarrow\widehat X$, respectively.
Then, we obtain the isomorphism of schemes:
$$(\pi_m^X)^{-1}(0)\simeq (\pi_m^{\widehat X})^{-1}(0).$$ \end{cor}
\begin{cor} \label{reduction}
Under the notation of Remark \ref{algan},
replacing $X$ by a sufficiently small neighborhood of $0$, we obtain the equivalence that
the truncation morphism $X_{m'}\to X_m$ is flat
if and only if
the truncation morphism $\widehat X_{m'}\longrightarrow\widehat X_m$ is flat. \end{cor}
\begin{pf} The ``only if'' part follows from the base change property for flatness. The ``if'' part follows from the fact that the homomorphism $\o_{X,0}\longrightarrow\widehat{\o_{X,0}}$ is faithfully flat. \end{pf}
\begin{defn}
\label{weight}
A monomial \( {\bf x}=\prod_{\ell=1}^d x_{i_{\ell},j_{\ell}}\in k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}] \)
is called a monomial of {\it weight} \( w \) if \( w=\sum_{\ell=1}^d
i_{\ell}\).
For an element \( F\in k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}] \) the
order \( \operatorname{ord} F \) is defined as the lowest degree of the monomials
in \( {\bf x}_{0},\ldots,{\bf x}_{m} \) that appear in \( F \). \end{defn}
Note that every monomial in \( F_{m} \) has weight \(m \) for $f\in k[[{\bf x}_0]]$.
The next lemma follows from the definition of \( F_{m} \):
\begin{lem} \label{appear} \label{lem} Let \( f \) be a non-zero power series in \( k[[{\bf x}_{0}]] \) of order \( \geq 1 \). \begin{enumerate} \item[(i)] When char \( k \)= 0, a monomial \( \prod_{\ell=1}^r x_{0,j_{\ell}} \) appears in \( f \) if and only if for every \( i_{\ell}\geq 0 \), the monomial \[ \prod_{\ell=1}^r x_{i_{\ell},j_{\ell}} \] appears in \( F_{m} \), where \( \sum_{\ell}i_{\ell}=m \).
Hence, \( \operatorname{ord} F_{m}=\operatorname{ord} f \), and in particular \( F_{m}\neq 0 \) for every \( m \).
\item[(ii)] For any characteristic, a monomial $\displaystyle\prod_{j=1}^N x_{0,j}^{e_j}$ appears in $f$ if and only if for every \( i_{\ell}\geq 0 \), the monomial $$\prod_{j=1}^N x_{i_j,j}^{e_j}$$ appears in $F_m$, where $m=\sum_j e_j i_j$.
\end{enumerate}
\end{lem}
\begin{pf}
The ``if'' part of the statement follows immediately from the
definition of \( F_{m} \), for both (i) and (ii).
Now assume that
\( g= \prod_{\ell=1}^r x_{0,j_{\ell}} \) is a monomial in \( f \).
By substituting \( \sum_{i\geq 0}x_{i, j}t^i \)
into $x_{0,j}$ in this monomial, we obtain
\[ g(\sum_{i\geq 0}{\bf x}_{i}t^i)=G_{0}+G_{1}t+G_{2}t^2+\cdots. \]
Therefore, \( G_{m} \) is the sum of the monomials of the form \( \prod_{\ell=1}^r x_{i_{\ell},j_{\ell}} \)
with \( i_{\ell}\geq 0 \) and \( \sum_{\ell}i_{\ell}=m \).
If the characteristic of \( k \) is zero, the coefficient of each such
monomial is nonzero.
Moreover, each monomial \( \prod_{\ell=1}^r x_{i_{\ell},j_{\ell}} \) in
\(G_{m}\) is not canceled by contributions from the other monomials of \( f \),
because the collection \( (j_{1},\ldots,j_{r}) \)
determines the source monomial \( \prod_{\ell=1}^r x_{0,j_{\ell}} \).
This proves the ``only if'' part of (i).
For the proof of the ``only if'' part of (ii), let $g=\prod_j x_{0,j}^{e_j}$ and define $G_i$ in the same way as in the previous discussion.
Then, the monomial $\prod_j x_{i_j,j}^{e_j}$ appears with coefficient 1 in $G_m$ for $m=\sum_j e_j i_j$.
Therefore, the coefficient of $\prod_j x_{i_j,j}^{e_j}$ in $F_m$ is the same as the coefficient of
$\prod_j x_{0,j}^{e_j}$ in $f$. \end{pf}
\begin{rem} Statement (i) of Lemma \ref{appear} does not hold in positive characteristic. For example, let $p>0$ be the characteristic of the base field $k$ and
\( f=x_{0,1}^p \in k[[x_{0,1}]]\).
Then \( F_{m}=x_{i,1}^p \) for \( m=pi \) and \( F_{m}=0 \) for \( m\not\equiv 0 \) (mod \( p \)). \end{rem}
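This computation is easy to confirm by machine. The sketch below is our own illustration (with $p=3$ and names of our choosing): it expands $f=x_{0,1}^p$ along a truncated arc and reduces the coefficients modulo $p$, so that the Frobenius kills all cross terms.

```python
import sympy as sp

p = 3
t = sp.symbols('t')
a = sp.symbols('a0:4')                 # arc coefficients a0, ..., a3
arc = sum(a[i] * t**i for i in range(4))

# f = x^p evaluated on the arc; F_m is the coefficient of t^m
expanded = sp.expand(arc**p)

def F_mod_p(m):
    # reduce the coefficient of t^m modulo p, i.e. work over GF(p)
    return sp.Poly(expanded.coeff(t, m), *a, modulus=p)

# over GF(p): F_m = a_i^p when m = p*i, and F_m = 0 otherwise
```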
As we saw in the previous section, Corollary \ref{smooth} follows immediately from Proposition \ref{sm}. But here we give another proof of Corollary \ref{smooth} for an algebraically closed base field, since we think that it gives some useful insight into jet schemes. \vskip.5truecm \noindent [{\it Proof of Corollary \ref{smooth}}] We may assume that \( (X,0)\subset ({\widehat{{\Bbb A}_{k}^N}}, 0) \) is a closed subscheme with a singularity at $0$, where \( N \) is the embedding dimension of \( (X,0) \). Then every element \( f \in I_{X} \) has order greater than 1. By this, every element \( F_{i}\) of the defining ideal \( I_{X_{m}} \) of \( X_{m} \) in \( ({\widehat{{\Bbb A}_{k}^N}})_{m} \)
has order greater than 1. Here, note that \( I_{X_{m}}\neq 0 \), since \( I_{X}\neq 0 \) and \( F_{0}=f \) for \( f\in I_{X} \). Therefore the Jacobian matrix of \( I_{X_{m}} \) is
the zero matrix at the trivial \( m \)-jet \( 0_{m} \in X_{m} \) at \( 0 \), which shows that \( 0_{m} \) is a singular point in \( X_{m} \) for every \( m \). \( \Box \)
\section{Proofs of Theorems \ref{flat} and \ref{positive}}
\begin{say} \label{note} For the proof of the theorems, we fix the notation as follows: Let \( (X,0)\subset ({\widehat{{\Bbb A}_{k}^N}}, 0) \) be a singularity of embedding dimension \( N \). Let \( 0\leq m<m' \),
\( R_{m}=k[[{\bf x}_{0}]][{\bf x}_{1},\ldots,{\bf x}_{m}] \), \( I\subset R_{m} \) the defining ideal of \( X_{m} \) in \( ({\widehat{{\Bbb A}_{k}^N}})_{m} \), \( R_{m'}=k[[{\bf x}_{0}]][{\bf x}_{1},..,{\bf x}_{m},..{\bf x}_{m'}] \) and \( I'\subset R_{m'} \) the defining ideal of \( X_{m'} \) in \( ({\widehat{{\Bbb A}_{k}^N}})_{m'} \). Let \( M \) be the maximal ideal of \( R_{m}\) generated by \( {\bf x}_{0},\ldots,{\bf x}_{m} \). \end{say}
\begin{lem} \label{notation}
Under the notation as in \ref{note}, if there is an element \( F\in I' \cap MR_{m'} \) such that \( F\not\in MI'+IR_{m'} \), then the truncation morphism \(\psi_{m',m}: X_{m'}\to X_{m} \)
is not flat. \end{lem}
\begin{pf} The truncation morphism \(\psi_{m',m}: X_{m'}\to X_{m} \) corresponds to the canonical ring homomorphism \( R_{m}/I\to R_{m'}/I' \). The non-flatness follows from the non-injectivity of the canonical homomorphism: \[ M/I\otimes_{R_{m}/I}R_{m'}/I'\to R_{m'}/I'. \] Since we have an isomorphism of the first module \[ M/I\otimes_{R_{m}/I}R_{m'}/I'\simeq MR_{m'} /(MI'+IR_{m'}),\] the existence of an element \( F\in I'\cap MR_{m'} \) such that \( F\not\in MI'+IR_{m'} \) gives the non-injectivity. \end{pf}
[{\it Proof of Theorem \ref{flat}}] Assume that the base field \( k \) is algebraically closed and of characteristic zero and \( (X, 0) \) is a singular point of a scheme $X$ of finite type over $k$. Then we will deduce that every truncation morphism \(\psi_{m',m}: X_{m'}\to X_{m} \) \( (m'>m\geq 0) \) is not flat. For this, it is sufficient to prove that \(\psi_{m',m}: \widehat{X_{m'}}\longrightarrow\widehat{X_{m}} \) \( (m'>m\geq 0) \) is not flat by Corollary \ref{reduction}. So we may assume that $X$ is a closed subscheme of ${\widehat{{\Bbb A}_{k}^N}}$ with the embedding dimension $N$. Let $I_X$ be the defining ideal of $X$ in ${\widehat{{\Bbb A}_{k}^N}}$. We use the notation of \ref{note}. Let \( f \) be an element in \( I_{X} \) with the minimal order \( d \). Note that \( d\geq 2 \), as \( N \) is the embedding dimension. Then, by Lemma \ref{lem}, (i), \( F_{m+1} \) is not zero and presented as \[ F_{m+1}=g_{1}({\bf x}_{0})x_{m+1, 1}+\cdots+g_{N}({\bf x}_{0})x_{m+1,N}+ g'({\bf x}_{0},\ldots,{\bf x}_{m}), \] where \( \operatorname{ord} F_{m+1}=d \) and some of \( g_{i} \)'s are not zero. We should note that \( \operatorname{ord} g_{i}=d-1 \) for all non-zero \( g_{i} \)'s.
As \( \operatorname{ord} g_{i}\geq 1 \) for every \( i \) and \( \operatorname{ord} g'\geq 1 \), the element \( F_{m+1} \) is in \( MR_{m'} \). It is clear that \( F_{m+1}\in I' \). On the other hand, as \( \operatorname{ord} I=\operatorname{ord} I'=d \), it follows that $\operatorname{ord} MI'\geq d+1$ and the initial term of an element of $IR_{m'}$ of order $d$ is the initial term of an element of $I$. Hence, the initial term of an element in
\( MI'+IR_{m'} \) of order \( d \) should be the initial term of an element of $I$,
therefore it should be a polynomial in \( {\bf x}_{0},\ldots, {\bf x}_{m} \).
However, the initial term of \( F_{m+1} \) is not of this form,
which implies
\( F_{m+1}\not\in MI'+IR_{m'}\).
By Lemma \ref{notation}, the non-flatness of \(
\psi_{m',m}:X_{m'}\to X_{m} \) follows for every pair \( (m, m') \)
with \( 0\leq m<m' \). \( \Box \)
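For concreteness, here is a small worked illustration of the shape of \( F_{m+1} \) (our own example, not part of the original proof): for \( f=x_{0,1}^2+x_{0,2}^2 \), so that \( d=2 \), one finds

```latex
F_{m+1}
  =\underbrace{2x_{0,1}}_{g_{1}({\bf x}_{0})}x_{m+1,1}
  +\underbrace{2x_{0,2}}_{g_{2}({\bf x}_{0})}x_{m+1,2}
  +\underbrace{\sum_{\substack{i+i'=m+1\\ 1\leq i,i'\leq m}}
     \bigl(x_{i,1}x_{i',1}+x_{i,2}x_{i',2}\bigr)}_{g'({\bf x}_{0},\ldots,{\bf x}_{m})},
```

so that \( \operatorname{ord} g_{j}=1=d-1 \) and \( F_{m+1}\in MR_{m'} \), as claimed.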
\begin{exmp} \label{ex}
The condition char \( k=0 \) is necessary for Theorem \ref{flat};
indeed, there are counterexamples to Theorem \ref{flat} in
positive characteristic.
For example,
let \( X \) be a scheme defined by \( x_{0,1}^p \) in \(
{\Bbb A}_{k}^1=\spec k[x_{0,1}] \) over a field \( k \) of characteristic \( p \).
Let $r$ be an integer with $0<r < p$.
Then, for any positive integer $q$, we have
\[ X_{pq+r}=\spec k[x_{0,1}, x_{1,1},..., x_{pq+r, 1}]/(x_{0,1}^p,..., x_{q,1}^p) \] and
\[ X_{pq}=\spec k[x_{0,1}, x_{1,1},.., x_{pq, 1}] / (x_{0,1}^p,..., x_{q,1}^p). \]
It is clear that \( X_{pq+r} \) is flat over \( X_{pq} \), while \( X \) is
singular. \end{exmp}
\vskip.5truecm \noindent [{\it Proof of Theorem \ref{positive}}] As in the proof of the previous theorem, we will show the non-flatness of the truncation morphisms when $(X,0)$ is singular. As $X$ is reduced, if $\psi_{m',m}$ is flat, then some fiber of the truncation morphism $\psi_{m',m}:X_{m'}\to X_m$ has dimension $\leq (m'-m)\dim (X,0)$ for a small affine neighborhood $X$ of $0$. (If $X$ is equi-dimensional, then the fiber has dimension $(m'-m)\dim (X,0)$.) Hence, if $\psi_{m',m}$ is flat, by Corollaries \ref{equal} and \ref{reduction}, the dimension of the fiber over a closed point of $(\pi_{m}^{\widehat X})^{-1}(0)$ under the morphism
$\widehat{\psi_{m',m}}:\widehat{X_{m'}}\longrightarrow\widehat{X_m}$ is $\leq (m'-m)\dim (X,0)$. Keeping this fact in mind, by Corollary \ref{reduction}
we may assume that $X$ is a singular closed subscheme of ${\widehat{{\Bbb A}_{k}^N}}$, where $N$ is the embedding dimension of $(X,0)$.
First assume \( m'<d(m+1) \), where \( d=\operatorname{ord} I_{X} \).
Note that for every \( g\in I_{X} \),
\[ \overline{G}_{i}=G_{i}({\bf 0},\ldots,{\bf 0},{\bf x}_{m+1},\ldots,{\bf x}_{i})=0 \]
for \( i<d(m+1) \).
This is because every monomial in $G_i$ has a factor $x_{\ell, j}$ with $\ell\leq m$,
since the weight of \( G_{i} \) is \( i\) \( (<d(m+1))\) and
\(\operatorname{ord} G_{i}\geq d \).
Let \( 0_{m} \) be the trivial \( m \)-jet at \( 0 \).
As \( \psi_{m', m}^{-1}(0_{m}) \) is defined in \( {\Bbb A}^{(m'-m)N} \)
by the ideal generated by \( \overline{G}_{i} \)'s with \( i\leq m' \) for $g\in I_X$,
it follows that \[ \psi_{m', m}^{-1}(0_{m})\simeq {\Bbb A}^{N(m'-m)}, \]
which is a fiber of dimension \( N(m'-m)> (m'-m)\dim (X,0) \).
Therefore, $\psi_{m',m}$ is not flat,
because otherwise the fiber dimension would be $(m'-m)\dim(X,0)$ as we saw before.
Therefore, we may assume that \( m'\geq d(m+1) \), where $d=\operatorname{ord} I_X$.
Let $f\in I_X$ have the order $ d$.
Let $\prod_j x_{0,j}^{e_j}$ be a monomial with the minimal degree in $f$.
Then, $\sum_j e_j=d$ and therefore $e_j\leq d$ for every $j$.
Let $e$ be one of non-zero $e_j$'s.
By the assumption $m'\geq d(m+1)$, there is a positive integer $i$ such that $m\leq ie <m'$.
Let $s$ be minimal among such $i$'s.
It is clear that \( F_{se}\in I' \); we also have \( F_{se}\in MR_{m'} \) under the notation of \ref{note}.
Indeed, if a monomial \( \prod_{\ell=1}^u x_{i_{\ell}, j_{\ell}}
\) of \( F_{se} \) has a factor \( x_{i_{\ell}, j_{\ell}} \) with \( i_{\ell}\geq m+1
\), let this \( i_{\ell} \) be \( i_{1} \).
Then \( i_{1}\geq m+1 > (s-1)e \).
By this,
\[ \sum_{\ell\neq 1}i_{\ell}< se-(s-1)e=e\leq d\leq u. \]
Therefore, there is at least one \( \ell \) such that \( i_{\ell}\leq 1\leq m \).
Hence every monomial of \( F_{se} \) is contained in \( MR_{m'} \).
Now let $e=e_1$.
As
\[ \prod_j x_{0,j}^{e_j} \]
is a monomial of \( f \) of the minimal order \( d \), by Lemma \ref{appear},
\[ x_{s,1}^e\prod_{j\neq 1}x_{0, j}^{e_j} \]
is a monomial of \( F_{se} \).
Therefore, \( \operatorname{ord} F_{se}=d \).
This monomial does not appear in any element of \( MI'+IR_{m'} \).
Indeed, \( \operatorname{ord} MI'\geq d+1 \) and the initial term of
an element of \( IR_{m'} \) of order \( d \)
must be the initial term of an element of \( I \), because of \(
\operatorname{ord} I = d \).
Therefore, every initial monomial of an element of \( IR_{m'} \)
of order \( d \) is of the
form
\[ \prod_{\ell}x_{i_{\ell},j_{\ell}}, \ \ \ (\sum_{\ell}i_{\ell}\leq m),\]
since \( I \) is generated by \( F_{i} \)'s with \( i\leq m \) for \(
f\in I_{X} \).
As \( x_{s,1}^e\prod_{j\neq 1}x_{0, j}^{e_j} \) is not of
this form, we obtain \( F_{se}\not\in IR_{m'}+MI' \).
By this and Lemma \ref{notation}, it follows that
\( X_{m'}\to X_{m} \) is not flat for \( m'>m> 0 \). \( \Box \)
\begin{rem}
In the proof of Theorem \ref{positive}, we used the condition $m\geq 1$.
It is not clear whether the statement of Theorem \ref{positive} also holds for $m=0$
in the positive characteristic case;
i.e., if the base field is of positive characteristic, $X$ is reduced and
$\pi_{m'}=\psi_{m',0}:X_{m'}\to X$ is flat for some $m'>0$, is $X$ then non-singular?
For $m'=1$, however, it does hold.
This is seen as follows:
For an affine scheme $X$ of finite type over $k$, the fiber of a point $x\in X$ by the projection $\pi_1:X_1\to X$ is the Zariski tangent space
of the point.
Therefore $\dim \pi_1^{-1}(x)={\operatorname{embdim}} (X,x)$.
If $(X, 0)$ is singular and reduced, $\dim \pi_1^{-1}(0)> \dim(X,0)$, while there are points in a small neighborhood of $0$ where the fiber dimension is $\dim (X,0)$.
Hence, $\pi_1$ is not flat.
\end{rem}
\end{document}
\begin{document}
\title{Nonlinear steering criteria for arbitrary two-qubit quantum systems}
\author{Guo-Zhu Pan\footnote{panguozhu@wxc.edu.cn}} \affiliation{School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China}
\author{Ming Yang\footnote{mingyang@ahu.edu.cn}} \affiliation{School of Physics {\&} Materials Science, Anhui University, Hefei 230601, China}
\author{Hao Yuan} \affiliation{School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China} \affiliation{School of Physics {\&} Materials Science, Anhui University, Hefei 230601, China}
\author{Gang Zhang} \affiliation{School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China}
\author{Jun-Long Zhao} \affiliation{School of Physics {\&} Materials Science, Anhui University, Hefei 230601, China} \begin{abstract} By employing Pauli measurements, we present some nonlinear steering criteria applicable for arbitrary two-qubit quantum systems and optimized ones for symmetric quantum states. These criteria provide sufficient conditions to witness steering, which can recover the previous elegant results for some well-known states. Compared with the existing linear steering criterion and entropic criterion, ours can certify more steerable states without selecting measurement settings or correlation weights, which can also be used to verify entanglement as all steerable quantum states are entangled.
\end{abstract}
\keywords{Quantum steering, Nonlocality, Entanglement, Covariance matrices} \pacs{03.65.Ud, 03.67.Mn, 42.50.Dv} \maketitle
\section{Introduction}
Quantum steering describes the ability of one observer to nonlocally affect the other observer's state through local measurements. It was first noted by Einstein, Podolsky and Rosen (EPR) in their 1935 argument concerning the completeness of quantum mechanics \cite{ein}, and later introduced by Schr\"{o}dinger in response to the well-known EPR paradox \cite{sch}. After being formalized by Wiseman et al. with a local hidden variable (LHV)-local hidden state model in 2007 \cite{wis}, quantum steering has attracted increasing attention and been explored widely. Steerable states were shown to be advantageous for tasks involving secure quantum teleportation \cite{rei, ros}, quantum secret sharing \cite{walk, kog}, one-sided device-independent quantum key distribution \cite{bra} and channel discrimination \cite{pia}.
Quantum steering is a form of quantum correlation intermediate between quantum entanglement \cite{horo} and Bell nonlocality \cite{bell}. It has been demonstrated that a quantum state which is Bell nonlocal must be steerable, and a quantum state which is steerable must be entangled \cite{jone, brun}. One distinct feature of quantum steering, which differs from entanglement and Bell nonlocality, is asymmetry. That is, there exist cases in which Alice can steer Bob's state but Bob cannot steer Alice's state; this is referred to as one-way steering and has been demonstrated in theory \cite{bow} and experiment \cite{han, wol}.
Quantum steering describes the failure of local hidden variable-local hidden state models to reproduce the correlations between two subsystems, and it can be witnessed by quantum steering criteria. Recently, many steering criteria have been developed to distinguish steerable quantum states from unsteerable ones. In Ref. \cite{sau}, linear steering criteria were introduced for qubit states. In Ref. \cite{sch2}, steering criteria from entropic uncertainty relations were derived, which are applicable to both discrete and continuous variable systems. Subsequently, steering criteria via covariance matrices of local observables \cite{ji} and local uncertainty relations \cite{zhen} in arbitrary-dimensional quantum systems were presented. Recently, Refs. \cite{zhe1, zhe2} generalized the linear steering criteria to high-dimensional systems. Although these criteria work well for a number of quantum states, most of them require constructing appropriate measurement settings or correlation weights in practice, which inevitably increases the complexity of the detection. The development of a universal criterion to detect steering remains an open question.
In this paper, we first present some steering criteria applicable for arbitrary two-qubit quantum systems, then optimize them for symmetric quantum states, and finally we provide a broad class of explicit examples including two-qubit Werner states, Bell diagonal states, and Gisin states. Compared with the existing linear steering criterion and entropic criterion, ours can certify more steerable states without selecting measurement settings or correlation weights, which can also be used to verify entanglement as all steerable quantum states are entangled.
\section{Nonlinear steering criteria for arbitrary two-qubit quantum systems} Suppose two separate parties, Alice and Bob, share a two-qubit quantum state on a composite Hilbert space $\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}$. Steering is defined by the failure of all possible local hidden variable-local hidden state models of the form \cite{wis, jone} \begin{equation}\label{1}
P(a,b|A,B;W)=\sum_{\lambda}P(a|A;\lambda)P(b|B;\rho_{\lambda})p_{\lambda}, \end{equation}
where $P(a,b|A,B;W)$ are joint probabilities for Alice and Bob's measurements $A$ and $B$, with the results $a$ and $b$, respectively; $p_{\lambda}$ and $P(a|A;\lambda)$ denote some probability distributions involving the LHV $\lambda$, and
$P(b|B;\rho_{\lambda})$ denotes the quantum probability of outcome $b$ given measurement $B$ on state $\rho_{\lambda}$; $W$ represents the bipartite state under consideration. In other words, a quantum state is steerable if it does not satisfy Eq.~(1). Within this formulation, we propose a nonlinear steering criterion that can be used to certify a wide range of steerable states for two-qubit quantum systems.
\emph{Theorem 1.} If a given two-qubit quantum state is unsteerable from Alice to Bob (or Bob to Alice), the following inequality holds: \begin{equation} \sum\limits_{i=1}\limits^{3}\sum\limits_{j=1}\limits^{3}\langle\sigma_{i}\otimes\sigma_{j}\rangle^{2}\leq1, \end{equation} where $\sigma_{i}$ and $\sigma_{j}$ ($i,j=1,2,3$) are Pauli operators.
\emph{Proof.} Suppose Alice and Bob share a two-qubit quantum state $\rho_{AB}$ on a composite Hilbert space, and both of them perform $N$ measurements on their own subsystems, denoted by $A_{k}$ and $B_{l}$, respectively. Here $B_{l}$ is a quantum observable, while $A_{k}$ need not be; $k$ ($l$) ($k (l)=1,2,\cdots,N$) labels the \emph{k}th (\emph{l}th) measurement setting for Alice (Bob). If the state is unsteerable from Alice to Bob, we have the following inequality \begin{eqnarray}
\nonumber &&\sum\limits_{k=1}\limits^{N}\sum\limits_{l=1}\limits^{N}\langle A_{k}\otimes B_{l}\rangle^{2}\\ \nonumber
&=& \sum\limits_{k=1}\limits^{N}\sum\limits_{l=1}\limits^{N}\left(\sum\limits_{a_{k},b_{l}}a_{k}b_{l}P(a_{k},b_{l}|A_{k},B_{l};\rho_{AB})\right)^2\\ \nonumber
&\leq&\sum\limits_{\lambda}\left(p_{\lambda}\sum\limits_{k=1}\limits^{N}\left[\sum\limits_{a_{k}}a_{k}P(a_{k}|A_{k},\lambda)\right]^2\sum\limits_{l=1}\limits^{N}\left[\sum\limits_{b_{l}}b_{l}P(b_{l}|B_{l},\rho_{\lambda})\right]^2\right)\\ \nonumber &=&\sum\limits_{\lambda}p_{\lambda}\left(\sum\limits_{k=1}\limits^{N}\langle A_{k}\rangle^{2}_{\lambda}\sum\limits_{l=1}\limits^{N}\langle B_{l}\rangle^{2}_{\rho_{\lambda}}\right)\\ \nonumber &\leq&\eta \sum\limits_{\lambda}p_{\lambda}\left(\sum\limits_{k=1}\limits^{N}\langle A_{k}^{2}\rangle_{\lambda}\right)\max\limits_{\{\rho_{\lambda}\}}\left(\sum\limits_{l=1}\limits^{N}\langle B_{l}\rangle^{2}_{\rho_{\lambda}}\right)\\ &=&\eta\sum\limits_{k=1}\limits^{N}\langle A_{k}^{2}\rangle C_{B}=\eta C_{A}C_{B}, \end{eqnarray} where $C_{A}=\sum\limits_{k=1}\limits^{N}\langle A_{k}^{2}\rangle,C_{B}=\max\limits_{\{\rho_{\lambda}\}}\left(\sum\limits_{l=1}\limits^{N}\langle B_{l}\rangle^{2}_{\rho_{\lambda}}\right)$. The parameter $\eta$ ($0\leq\eta\leq1$) is a constant, which is used to adjust the value to the appropriate bound. The first inequality follows from the fact $\sum_{k=1}^{N}\sum_{l=1}^{N}(\alpha_{k}\beta_{l})^{2}\leq\sum_{k=1}^{N}\alpha_{k}^{2}\sum_{l=1}^{N}\beta_{l}^{2}$. The second inequality follows from the definition of $C_{B}$ and the fact $\langle A_{k}^{2}\rangle_{\lambda}\geq\langle A_{k}\rangle_{\lambda}^{2}$. If the observables $A_{k}$ and $B_{l}$ are restricted to Pauli matrices, i.e., $A_{k} (B_{l})=\{\sigma_{1}, \sigma_{2}, \sigma_{3}\}$, one has straightforwardly $C_{A}=3$ and $C_{B}=1$, so Eq.(3) reduces to \begin{equation} \sum\limits_{i=1}\limits^{3}\sum\limits_{j=1}\limits^{3}\langle\sigma_{i}\otimes\sigma_{j}\rangle^{2}\leq\eta'. \end{equation} where $\eta'=3\eta$.
As we know, quantum entanglement, quantum steering, and Bell nonlocality are equivalent in the case of pure states \cite{wis, jone, ysx}. For an arbitrary quantum steering criterion, it is preferable to be a sufficient and necessary condition to detect pure states \cite{zhe1, zhe2, zhen}. In order to obtain the optimal value of the parameter $\eta'$, we introduce the pure states as reference states. For any two-qubit state, it can be expressed as \begin{equation}\label{9} \rho_{AB}=\frac{1}{4}(\mathbb{I}+\sum_{i=1}^{3}c_{i0}\sigma_{i}\otimes\mathbb{I}+\sum_{j=1}^{3}c_{0j}\mathbb{I}\otimes\sigma_{j}+\sum_{i=1}^{3}\sum_{j=1}^{3}c_{ij}\sigma_{i}\otimes\sigma_{j}), \end{equation}
where $|c_{ij}|\leq1$ for $i,j=0,1,2,3$. For an arbitrary pure state $\rho_{AB}$, one has straightforwardly $\sum_{i=1}^{3}c_{i0}^{2}+\sum_{j=1}^{3}c_{0j}^{2}+\sum_{i=1}^{3}\sum_{j=1}^{3}c_{ij}^{2}=3$ due to the fact $\operatorname{tr}(\rho_{AB}^{2})=1$. Next we consider two cases. First, if $\rho_{AB}$ is a pure separable state, then $\sum_{i=1}^{3}\langle\sigma_{i}\otimes\mathbb{I}\rangle^{2}=\sum_{i=1}^{3}c_{i0}^{2}=1$, $\sum_{j=1}^{3}\langle\mathbb{I}\otimes \sigma_{j}\rangle^{2}=\sum_{j=1}^{3}c_{0j}^{2}=1$, and $\sum_{i=1}^{3}\sum_{j=1}^{3}\langle\sigma_{i}\otimes\sigma_{j}\rangle^{2}=\sum_{i=1}^{3}\sum_{j=1}^{3}c_{ij}^{2}=1$, which results in $\eta'\geq1$ due to the fact that all pure separable states are unsteerable. Second, if $\rho_{AB}$ is a pure entangled state, then $\sum_{i=1}^{3}\langle\sigma_{i}\otimes\mathbb{I}\rangle^{2}=\sum_{i=1}^{3}c_{i0}^{2}<1$, $\sum_{j=1}^{3}\langle\mathbb{I}\otimes \sigma_{j}\rangle^{2}=\sum_{j=1}^{3}c_{0j}^{2}<1$, and $\sum_{i=1}^{3}\sum_{j=1}^{3}\langle\sigma_{i}\otimes\sigma_{j}\rangle^{2}=\sum_{i=1}^{3}\sum_{j=1}^{3}c_{ij}^{2}>1$, which results in $\eta'\leq1$ due to the fact that all pure entangled states are steerable \cite{zhe1, zhe2, zhen}. So the optimal value of the parameter is $\eta'=1$. This completes the proof of Theorem 1.
In this way, we derive the steering criterion for arbitrary two-qubit quantum systems. Whatever strategies Alice and Bob choose, a violation of inequality (2) would imply steering.
In the following we further develop steering criterion by introducing quantum correlation matrix of local observables. Given a quantum state $\rho$ and observables $\{O_{k}\} (k = 1,2, . . . ,n)$, an $n\times n$ symmetric covariance matrix $\gamma$ is defined as \cite{ji} \begin{equation}\label{5} \gamma_{kk'}(\rho)=(\langle O_{k}O_{k'}\rangle+\langle O_{k'}O_{k}\rangle)/2-\langle O_{k}\rangle\langle O_{k'}\rangle. \end{equation}
Now, let us consider a composite system $\rho_{AB}$ and a set of observables $\{O_{m}\}=\{\sigma_{i}\otimes\sigma_{j}\}$ $(i,j=1,2,3$, $m=3(i-1)+j)$. Similarly, the covariance matrix can be constructed as \begin{equation}\label{6} \gamma_{mm'}(\rho_{AB})=(\langle O_{m}O_{m'}\rangle+\langle O_{m'}O_{m}\rangle)/2-\langle O_{m}\rangle\langle O_{m'}\rangle. \end{equation}
Obviously, the diagonal elements of the covariance matrix stand for the variance of the observables $\{O_{m}\}$.
\emph{Corollary 1.} If a given quantum state $\rho_{AB}$ is unsteerable, the sum of the eigenvalues of the covariance matrix $\gamma_{mm'}(\rho_{AB})$ of the observables $\{O_{m}\}=\{\sigma_{i}\otimes\sigma_{j}\}$ $(i,j=1,2,3$, $m=3(i-1)+j)$ must satisfy \begin{equation}\label{7} \sum\limits_{k=1}\limits^{9}\lambda_{k}\geq 8, \end{equation} where $\lambda_{k}$ $(k=1,\ldots,9)$ are the eigenvalues of the covariance matrix $\gamma_{mm'}(\rho_{AB})$.
\emph{Proof.} For an unsteerable state $\rho_{AB}$, one has $\sum_{i=1}^{3}\sum_{j=1}^{3}\langle\sigma_{i}\otimes\sigma_{j}\rangle^{2}\leq1$ according to Theorem 1, which results in $\sum_{i=1}^{3}\sum_{j=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{j})\geq 8$, where $\delta^{2}(\sigma_{i}\otimes\sigma_{j})=\langle(\sigma_{i}\otimes\sigma_{j})^{2}\rangle-\langle \sigma_{i}\otimes\sigma_{j}\rangle^{2}$ is the variance of the observable $\sigma_{i}\otimes\sigma_{j}$. To prove Corollary 1, we introduce principal component analysis (PCA) \cite{pea, hot, jol}, a mathematical procedure that transforms a number of possibly correlated variables into a number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. As in classical PCA, for the quantum covariance matrix $\gamma_{mm'}(\rho_{AB})$ the variances of the principal components correspond to the eigenvalues of the covariance matrix, i.e., $\sum_{k=1}^{9}\lambda_{k}=\sum_{k=1}^{9}\delta^{2}P_{k}$, where $P_{k}$ is the $k$th principal component of $\gamma_{mm'}(\rho_{AB})$. Since $\sum_{k=1}^{9}\delta^{2}P_{k}=\sum_{i=1}^{3}\sum_{j=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{j})$, one has $\sum_{k=1}^{9}\lambda_{k}=\sum_{i=1}^{3}\sum_{j=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{j})$. Thus $\sum_{k=1}^{9}\lambda_{k}\geq 8$ for an unsteerable state. A detailed proof is provided in Appendix A.
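Since $(\sigma_{i}\otimes\sigma_{j})^{2}=\mathbb{I}$, the eigenvalue sum in Corollary 1 equals $\operatorname{tr}\gamma=9-\sum_{i,j}\langle\sigma_{i}\otimes\sigma_{j}\rangle^{2}$. The following numpy sketch is our own illustration (the helper names are ours): it builds the $9\times9$ covariance matrix of Eq.~(7) and evaluates the eigenvalue sum for a product state, which sits exactly on the bound $8$, and for a Bell state, which violates it.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
obs = [np.kron(si, sj) for si in (sx, sy, sz) for sj in (sx, sy, sz)]

def cov_matrix(rho):
    # symmetrized covariance matrix of Eq. (7) for the nine observables
    means = [np.real(np.trace(rho @ O)) for O in obs]
    g = np.zeros((9, 9))
    for k, Ok in enumerate(obs):
        for l, Ol in enumerate(obs):
            sym = np.real(np.trace(rho @ (Ok @ Ol + Ol @ Ok))) / 2
            g[k, l] = sym - means[k] * means[l]
    return g

rho_sep = np.diag([1.0, 0, 0, 0])            # |00><00|, unsteerable
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(psi, psi)                # maximally steerable

# eigenvalue sums: 8 for the product state (on the bound), 6 < 8 for the Bell state
s_sep = np.linalg.eigvalsh(cov_matrix(rho_sep)).sum()
s_bell = np.linalg.eigvalsh(cov_matrix(rho_bell)).sum()
```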
\section{Optimized steering criteria for symmetric two-qubit quantum systems}
Symmetry is another central concept in quantum theory \cite{gro}, which can be used to simplify the study of the entanglement sometimes \cite{voll, stoc, toth}. A bipartite quantum state $\rho$ is called symmetric if it is permutationally invariant, i.e., $F\rho F=\rho$, here $F=\sum_{ij}|ij\rangle\langle ji|$ is the flip operator. In the following we optimize the steering criterion for symmetric two-qubit quantum states.
\emph{Theorem 2.} If a given symmetric two-qubit quantum state is unsteerable from Alice to Bob (or Bob to Alice), the following inequality holds: \begin{equation} \sum\limits_{i=1}\limits^{3}\langle\sigma_{i}\otimes\sigma_{i}\rangle^{2}\leq1, \end{equation} where $\sigma_{i}$ ($i=1,2,3$) are Pauli operators.
\emph{Proof.}
For an arbitrary symmetric two-qubit quantum state, one has $\langle\sigma_{i}\otimes\sigma_{j}\rangle=0$ for $i, j=1,2,3$, $i\neq j$. So \emph{Theorem 1} reduces to \emph{Theorem 2}.
\emph{Corollary 2.} If a given symmetric two-qubit quantum state $\rho_{AB}$ is unsteerable, the sum of the eigenvalues of the covariance matrix $\gamma_{mm'}(\rho_{AB})$ of the observables $\{O_{i}\}=\{\sigma_{i}\otimes\sigma_{i}\}$ $(i=1,2,3)$ must satisfy \begin{equation}\label{10} \sum\limits_{k=1}\limits^{3}\lambda_{k}\geq 2, \end{equation} where $\lambda_{k}$ $(k=1,2,3)$ are the eigenvalues of the covariance matrix $\gamma_{mm'}(\rho_{AB})$. A brief proof is given below.
\emph{Proof.} For a symmetric unsteerable state $\rho_{AB}$, one has $\sum_{i=1}^{3}\langle\sigma_{i}\otimes\sigma_{i}\rangle^{2}\leq1$ from Eq.(9), which results in $\sum_{i=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{i})\geq 2$. For the quantum covariance matrix $\gamma_{mm'}(\rho_{AB})$, one has $\sum_{k=1}^{3}\lambda_{k}=\sum_{i=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{i})$ according to PCA. So one gets $\sum_{k=1}^{3}\lambda_{k}\geq 2$ for a symmetric unsteerable state.
\section{Illustrations of generic examples}
(i) \emph{Werner state.} Consider two-qubit Werner states \cite{wern}, which can be written as
\begin{equation}\label{8}
\rho_{W}=p|\psi^{+}\rangle\langle\psi^{+}|+(1-p)\mathbb{I}/4,
\end{equation}
where $|\psi^{+}\rangle=(1/\sqrt{2})(|00\rangle+|11\rangle)$ is a Bell state, $\mathbb{I}$ is the identity, and $0\leq p\leq1$. The Werner states are entangled iff $p>1/3$, steerable iff $p>1/2$ \cite{wis}, and Bell nonlocal if $p>1/\sqrt{2}$. By the symmetry of the Werner state and our \emph{Theorem 2}, we find that steering is detected under the Pauli measurements $\{\sigma_{1}, \sigma_{2}, \sigma_{3}\}$ whenever $p>\sqrt{3}/3$. This agrees with the results of Refs. \cite{zhe1, zhe2, zhen}, which implies that the nonlinear steering criterion is qualified for witnessing steering.
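For the Werner state the left-hand side of Theorem 2 can be evaluated directly: $\langle\sigma_{i}\otimes\sigma_{i}\rangle=\pm p$, so the criterion reads $3p^{2}>1$. A short Python sketch (an illustrative check, not part of the derivation) confirms the threshold $p=\sqrt{3}/3$:

```python
import numpy as np

# Pauli matrices and the Bell state |psi+> = (|00> + |11>)/sqrt(2)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
proj = np.outer(psi, psi.conj())

def theorem2_lhs(p):
    """Left-hand side of Theorem 2 for the Werner state rho_W."""
    rho = p * proj + (1 - p) * np.eye(4) / 4
    return sum(np.trace(rho @ np.kron(s, s)).real ** 2 for s in paulis)

# sum_i <sigma_i x sigma_i>^2 = 3 p^2, first exceeding 1 at p = 1/sqrt(3)
print(abs(theorem2_lhs(1 / np.sqrt(3)) - 1.0) < 1e-9)
```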
(ii) \emph{Bell diagonal states.} Suppose now that Alice and Bob share a Bell diagonal state as follows: \begin{equation}
\rho_{bd}=\frac{1}{4}(\mathbb{I}+\sum_{i=1}^{3}c_{i}\sigma_{i}\otimes\sigma_{i}), \end{equation}
where $\sigma_{i}$ $(i=1,2,3)$ are Pauli operators and $|c_{i}|\leq1$ for $i=1,2,3$. According to \emph{Theorem 2}, we find that $\rho_{bd}$ is steerable if $\sum_{i}c_{i}^{2}>1$. In this case, the local uncertainty relations steering criterion can be written as $\sum_{i}[\delta^{2}(\sigma_{i}^{B})-C^{2}(\sigma_{i}^{A},\sigma_{i}^{B})/\delta^{2}(\sigma_{i}^{A})]>2$ \cite{zhen}, where $\delta^{2}(A)=\langle A^{2}\rangle-\langle A\rangle^{2}$ is the variance and $C(A,B)=\langle AB\rangle-\langle A\rangle\langle B\rangle$ is the covariance. Its violation occurs for $\sum_{i}c_{i}^{2}>1$, and the corresponding states are steerable. Similarly, for the linear criterion we have $|\sum_{i}\omega_{i}\langle\sigma_{i}^{A}\otimes\sigma_{i}^{B}\rangle|\geq\sqrt{3}$ with $\omega_{i}\in\{\pm1\}$ \cite{sau}, whose violation implies $|c_{1}\pm c_{2}\pm c_{3}|>\sqrt{3}$. For the entropic criterion we have $\sum_{i}H(\sigma_{i}^{B}|\sigma_{i}^{A})>2$ \cite{sch2}, where $H(B|A)=\sum_{a}p(a|A)H(B|A=a)$ and $H(\cdot)$ denotes the von Neumann entropy; its violation reads $\sum_{i}[(1+c_{i})\log(1+c_{i})+(1-c_{i})\log(1-c_{i})]>2$. It can be checked that our criterion performs as well as the local uncertainty relations steering criterion, and both certify more steerable states than the linear criterion and the entropic criterion (Fig.~\ref{fig1}). \begin{figure}
\caption{The performances of different quantum steering criteria for the Bell diagonal states under the condition $c_{1}=c_{3}$. The area inside the brown solid lines denotes Bell diagonal states (BDS). The red solid line, blue circled line, green dashed line and cyan dotted line are given by the nonlinear steering criterion (NLC), the local uncertainty relations criterion (LUR), the linear criterion (LC) and the entropic criterion (EC), respectively. States on the left side of these lines are steerable. The NLC performs as well as the LUR criterion, and both certify more steerable states than the LC and EC.}
\label{fig1}
\end{figure}
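For the Bell diagonal states the quantities entering Theorem 2 reduce to the coefficients themselves, $\langle\sigma_{i}\otimes\sigma_{i}\rangle=c_{i}$, which is what makes the comparison above straightforward. A small Python sketch (the values of $c_{i}$ are arbitrary illustrative choices yielding a valid state) verifies this reduction:

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def rho_bd(c):
    """Bell diagonal state (I + sum_i c_i sigma_i x sigma_i)/4."""
    r = np.eye(4, dtype=complex)
    for ci, s in zip(c, paulis):
        r = r + ci * np.kron(s, s)
    return r / 4

c = (0.5, -0.3, 0.7)        # arbitrary coefficients giving a positive semidefinite state
rho = rho_bd(c)
vals = [np.trace(rho @ np.kron(s, s)).real for s in paulis]
print(np.allclose(vals, c))  # <sigma_i x sigma_i> = c_i, so Theorem 2 reads sum_i c_i^2 > 1
```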
(iii) \emph{Asymmetric entangled states.} Let us consider Gisin states \cite{gis}, which can be expressed as \begin{equation}\label{10}
\rho_{G}=p|\psi_{\theta}\rangle\langle\psi_{\theta}|+(1-p)\rho_{s}, \end{equation}
where $|\psi_{\theta}\rangle=\sin\theta|01\rangle+\cos\theta|10\rangle$ and $\rho_{s}=\frac{1}{2}|00\rangle\langle00|+\frac{1}{2}|11\rangle\langle11|$. In Fig.~2, we show the performances of the nonlinear steering criterion (\emph{Theorem 1}), the local uncertainty relations steering criterion \cite{zhen}, the linear criterion \cite{sau} and the entropic criterion \cite{sch2} for the Gisin states. A straightforward calculation shows that the nonlinear steering criterion certifies more steerable states than the linear criterion and the entropic criterion. \begin{figure}
\caption{The performances of different quantum steering criteria for the Gisin states. The cyan dotted line, green dashed line, red solid line, blue dashed line are given by the EC, LC, NLC, LUR criterion, respectively. States above these lines are steerable. It is clear that the NLC certifies more steerable states than the LC and EC.}
\label{fig2}
\end{figure}
\section{Conclusion}
In summary, we have proposed nonlinear steering criteria applicable to arbitrary two-qubit quantum systems, and optimized ones for symmetric quantum states. These criteria can detect a wide range of steerable quantum states under Pauli measurements. Compared with the existing linear steering criterion and the entropic criterion, ours certify more steerable states without selecting measurement settings or correlation weights; they can also be used to verify entanglement, since all steerable quantum states are entangled.
\section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11947102, the Natural Science Foundation of Anhui Province under Grant Nos. 2008085MA16 and 2008085QA26, the Key Program of West Anhui University under Grant No.WXZR201819, the Research Fund for high-level talents of West Anhui University under Grant No.WGKQ202001004.
\appendix
\section{Proof of the equation $\sum_{k=1}^{9}\lambda_{k}=\sum_{i=1}^{3}\sum_{j=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{j})$} To prove the identity $\sum_{k=1}^{9}\lambda_{k}=\sum_{i=1}^{3}\sum_{j=1}^{3}\delta^{2}(\sigma_{i}\otimes\sigma_{j})$, we extend principal component analysis to the quantum covariance matrix $\gamma_{mm'}(\rho_{AB})$ of the local observables $\{O_{m}\}=\{\sigma_{i}\otimes\sigma_{j}\}$ $(i,j=1,2,3$, $m=3(i-1)+j)$. As in classical correlation analysis, the principal components on a matrix space can be expressed as \begin{equation} P_{j}=a_{1j}O_{1}+a_{2j}O_{2}+...+a_{9j}O_{9}, \end{equation} where $j=1,2,...,9$, $\sum_{i}a_{ij}^{*}a_{ij}=1$, and $\sum_{i}a_{ij}^{*}a_{ik}=0$ for $j\neq k$.
To obtain the first principal component, we use the Lagrange multiplier technique to find the maximum of a function. The Lagrangian is defined as \begin{eqnarray}
L(a)=tr[\rho(a_{11}O_{1}+a_{21}O_{2}+...+a_{91}O_{9})^{2}]-\{tr[\rho(a_{11}O_{1}+a_{21}O_{2}+...+a_{91}O_{9})]\}^{2}\nonumber \\
+\lambda_{1}(1-a_{11}^{2}-a_{21}^{2}-...-a_{91}^{2}), \end{eqnarray} where $\lambda_{1}$ is the Lagrange multiplier. The necessary conditions for the maximum are \begin{equation} \frac{\partial L}{\partial a_{11}}=0;\ \frac{\partial L}{\partial a_{21}}=0;\ ...;\ \frac{\partial L}{\partial a_{91}}=0. \end{equation} By using the properties of the trace, we obtain \begin{eqnarray}
\frac{\partial L}{\partial a_{i1}}=2a_{i1}tr(\rho O_{i}^{2})-2a_{i1}[tr(\rho O_{i})]^{2}+\sum\limits_{k=1,...,9,k\neq i}a_{k1}[tr(\rho O_{i}O_{k})+tr(\rho O_{k}O_{i})]\nonumber\\ -\sum\limits_{k=1,...,9,k\neq i}a_{k1}[tr(\rho O_{i})tr(\rho O_{k})+tr(\rho O_{k})tr(\rho O_{i})]-2\lambda_{1} a_{i1}=0. \end{eqnarray} By rearranging the above expression, we get \begin{eqnarray}
a_{i1}[tr(\rho O_{i}^{2})-(tr(\rho O_{i}))^{2}]+\{\sum\limits_{k=1,...,9,k\neq i}a_{k1}[tr(\rho O_{i}O_{k})+tr(\rho O_{k}O_{i})]\}/2\nonumber\\ -\sum\limits_{k=1,...,9,k\neq i}a_{k1}tr(\rho O_{i})tr(\rho O_{k})=\lambda_{1} a_{i1}. \end{eqnarray} For $i=1,...,9$, the following eigenvalue problem is obtained in compact form: \begin{equation}
\gamma \textbf{\emph{a}}_{1}=\lambda_{1} \textbf{\emph{a}}_{1}, \end{equation} where $\textbf{\emph{a}}_{1}=(a_{11},a_{21},...,a_{91})'$, $\gamma_{ij}=(\langle O_{i}O_{j}\rangle+\langle O_{j}O_{i}\rangle)/2-\langle O_{i}\rangle\langle O_{j}\rangle$, which is exactly the quantum covariance matrix as defined in Eq.(6). It shows that $\textbf{\emph{a}}_{1}$ should be chosen to be an eigenvector of the covariance matrix $\gamma$, with eigenvalue $\lambda_{1}$. The variance of the first principal component is \begin{equation}
V(P_{1})=tr(\textbf{\emph{a}}_{1}^{\dagger}\gamma \textbf{\emph{a}}_{1})=\lambda_{1}. \end{equation} Therefore, in order to obtain the maximum of the variance, $\textbf{\emph{a}}_{1}$ should be chosen as the eigenvector corresponding to the largest eigenvalue $\lambda_{1}$ of the covariance matrix. Similarly, for the second principal component, in order to obtain the second maximum of the variance, $\textbf{\emph{a}}_{2}$ should be chosen as the eigenvector corresponding to the second largest eigenvalue $\lambda_{2}$ of the covariance matrix. This is fully consistent with the classical principal components analysis since the variances correspond to the eigenvalues of the covariance matrix.
For an arbitrary covariance matrix $\gamma_{ij}(\rho_{AB})$ of the local observables $\{O_{m}\}=\{\sigma_{i}\otimes\sigma_{j}\}$ $(i,j=1,2,3$, $m=3(i-1)+j)$, the total variance of the observables $O_{m}$ satisfies $\sum_{m=1}^{9}\delta^{2}(O_{m})=\sum_{k=1}^{9}\delta^{2}P_{k}$, due to the fact that $\sum_{i}a_{ij}^{*}a_{ij}=1$. As $\sum_{k=1}^{9}\delta^{2}P_{k}=\sum_{k=1}^{9}\lambda_{k}$, one obtains $\sum_{m=1}^{9}\delta^{2}(O_{m})=\sum_{k=1}^{9}\lambda_{k}$.
\end{document} |
\begin{document}
\title{On three genetic repressilator topologies\thanks{Ma\v sa Dukari\'c and Valery Romanovski are supported by the Slovenian Research Agency (program P1-0306) and by a Marie Curie International Research Staff Exchange Scheme Fellowship within the 7th European Community Framework Programme, FP7-PEOPLE-2012-IRSES-316338. The work has also been partially supported by the Hungarian-Slovenian cooperation projects T\'ET\_16-1-2016-0070 and BI-HU-17-18-011. Roman Jerala and Tina Lebar are supported by Slovenian Research Agency project J1-6740 and program P4-0176. Tina Lebar is partially supported by the UNESCO-L'OREAL national fellowship ``For Women in Science''. J\'anos T\'oth also acknowledges the support by the National Research, Development and Innovation Office (SNN 125739).} }
\author{Ma\v sa Dukari\' c \and Hassan Errami \and Roman Jerala \and Tina Lebar \and Valery G. Romanovski \and J\'anos T\'oth$^*$ \and Andreas Weber }
\authorrunning{Dukari\' c, Errami, Jerala, Lebar, Romanovski, T\'oth, Weber}
\titlerunning{ On three genetic repressilator topologies }
\institute{M. Dukari\' c \at Center for Applied Mathematics and Theoretical Physics\\ University of Maribor, Mladinska 3, SI-2000, Slovenia\\ \email{masa.dukaric@gmail.com} \and H. Errami \at Institut f\"{u}r Informatik II, Universit\"{a}t Bonn, Bonn, Germany\\ \email{errami@informatik.uni-bonn.de} \and R. Jerala and T. Lebar \at Department for Synthetic Biology and Immunology, National Institute of Chemistry\\ Hajdrihova 19, 1000 Ljubljana, Slovenia\\ \email{Tina.Lebar@ki.si and Roman.Jerala@ki.si} \and V. G. Romanovski \at Faculty of Electrical Engineering and Computer Science, University of Maribor,\\ Koro\v ska cesta 46, Maribor, SI-2000 Maribor, Slovenia\\ Center for Applied Mathematics and Theoretical Physics, University of Maribor,\\ Mladinska 3, SI-2000, Slovenia\\ Faculty of Natural Science and Mathematics, University of Maribor\\ Koro\v ska cesta 160, SI-2000 Maribor, Slovenia\\ \email{Valerij.Romanovskij@um.si} \and J. T\'oth corresponding author\at Department of Mathematical Analysis, Budapest University of Technology and Economics\\ Budapest, Egry J. u. 1., Hungary, H-1111\\ Chemical Kinetics Laboratory, E\"otv\"os Lor\'and University\\ Budapest, P\'azm\'any P. s\'et\'any 1/A., Hungary, H-1117\\ Tel.: +361 463 2314\quad Fax: +361 463 3172\\ \email{jtoth@math.bme.hu} \and A. Weber \at Institut f\"{u}r Informatik II, Universit\"{a}t Bonn, Bonn, Germany\\ \email{weber@cs.uni-bonn.de}}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} Novel mathematical models of three different repressilator topologies are introduced. As designable transcription factors have been shown to bind to DNA non-cooperatively, we have chosen models containing non-co\-op\-er\-a\-tive elements. The extended topologies involve three additional transcription regulatory elements---which can be easily implemented by synthetic biology---forming positive feedback loops. This increases the number of variables to six and extends the complexity of the equations in the model. To perform our analysis we had to use combinations of modern symbolic algorithms of the computer algebra systems \textbf{\textsc{Mathematica}}\ and \textbf{\textsc{Singular}}. The study shows that all three models have simple dynamics, which can also be called regular behaviour: each model has a single asymptotically stable steady state, with small-amplitude damped oscillations in the 3D case, no oscillations in one of the 6D cases, and damped oscillations in the other 6D case. 
Using the program \textbf{\textsc{QeHopf}}\ we were able to exclude the presence of Hopf bifurcation in the 3D system. \keywords{Repressilator models \and Genetic oscillator \and Steady states \and Computer algebra \and \textbf{\textsc{Mathematica}}\ \and \textbf{\textsc{Singular}} \and \textbf{\textsc{QeHopf}}\ \and Designable repressor} \end{abstract}
\section{Introduction}\label{sec:intro}
To understand complex biological systems such as tissues and cells, extensive knowledge of molecular interactions and mechanisms is necessary. However, an important part of understanding biological complexity is also mathematical modeling, which allows researchers to investigate connections between cellular processes and to develop hypotheses for the design of new experiments.
Jacob and Monod \cite{jacobmonod} were the first to present a model of the regulation of the synthesis of a structural protein. In this model enzyme levels are regulated at the level of transcription: specific repressor proteins are produced which block the transcription of DNA into its product (mRNA, messenger ribonucleic acid), which is translated into $\beta$-galactosidase, an enzyme for the degradation of lactose into simple sugars.
Shortly after Jacob and Monod, Goodwin \cite{goodwin} proposed the first mathematical model of a more complex biological system, a genetic oscillator. The simplest formulation of the Goodwin model involves a single gene that represses its own transcription via a negative feedback loop and uses three variables, $ x, y$ and $z$, where $x$ denotes the quantity of mRNA, $y$ stands for the quantity of the repressor protein, and $z$ is the quantity of the product, which acts as a corepressor and generates the feedback loop by negative control of mRNA production: \begin{equation}\label{eq:goodwin} \deriv{x}=\frac{k_1}{k_2+z^n}-k_3x\quad \deriv{y}=k_4x-k_5y\quad \deriv{z}=k_6y-k_7z \end{equation} All synthesis and degradation rates in the model (represented by coefficients $k_1$ to $k_7$) are linear, with the exception of the repression, which takes the form of a sigmoidal Hill curve. Here $ n$ denotes the Hill exponent, which may be interpreted in biological systems as the number of ligand molecules that a receptor can bind. At the level of transcriptional regulation, this can be explained by cooperative binding of the repressor protein to DNA (formation of protein-DNA complexes). It has been demonstrated by Griffith \cite{griffith} that limit cycle oscillations can only be obtained when $n>8$, which is unrealistic in terms of transcriptional regulation, where Hill exponents are rarely higher than 3 or 4.
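A quick way to explore the Goodwin model \eqref{eq:goodwin} numerically is to locate its steady state and check it against the right-hand sides. The Python sketch below does this for the illustrative choice $k_1=\dots=k_7=1$ and $n=2$ (these parameter values are our own, not taken from the original papers):

```python
import numpy as np

# Goodwin model with all rate constants k_1..k_7 set to 1 and Hill exponent n = 2
n = 2
def rhs(v):
    x, y, z = v
    return np.array([1.0 / (1.0 + z**n) - x,   # dx/dt
                     x - y,                    # dy/dt
                     y - z])                   # dz/dt

# at a steady state x = y = z =: z0 with z0*(1 + z0^n) = 1, i.e. z0^3 + z0 - 1 = 0
z0 = [r.real for r in np.roots([1.0, 0.0, 1.0, -1.0]) if abs(r.imag) < 1e-9][0]
steady = np.full(3, z0)
print(np.allclose(rhs(steady), 0.0))
```

For these parameters the (unique positive) steady state is asymptotically stable, consistent with Griffith's result that limit cycles require $n>8$.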
A repressilator is a network of several genes linked by mutual repression in a cyclic topology; it can be thought of as an extension of the Goodwin oscillator, which is a one-gene repressilator. Models of cycles of 2--5 genes were first studied by Fraser and Tiwari \cite{frasertiwari}, while the first experimental implementation of a 3-gene repressilator in a biological system, along with a refined model, was demonstrated by Elowitz and Leibler \cite{elowitzleibler}. Let $X_i$ denote the quantity of mRNA and $Y_i$ the quantity of the repressor protein and let $ \alpha_{0}, \alpha$ and $ \beta$ represent the transcription rate of a repressed promoter, the maximal transcription rate of a free promoter and the ratio of protein and mRNA decay rate, respectively. Then the model is given by the equations: \begin{equation} \label{eq:leoleib} \begin{aligned} \deriv{X_{i}}=& \alpha_0 + \frac \alpha {1 + Y_{i-1}^n} - X_{i}\\ \deriv{Y_{i}}=& -\beta(Y_{i}-X_{i}) \quad(i=1,2,3), \end{aligned} \end{equation} where the indices 0 and 3 are identified. (Let us note that Elowitz and Leibler write $j$ instead of $i-1$, still they speak about 6 equations. However, if $i$ and $j$ run independently, then we have $3\times 3+3=12$ equations. Our modification is also in accordance with our model below.) In the paper mentioned above Elowitz and Leibler also determine the unique positive stationary point, and the parameter values at which the stationary point loses its stability. They map part of the parameter space, and find oscillations \emph{numerically}. In the Goodwin model, undamped oscillations can only occur when repression is accomplished by the co-repressor $Z$ and never directly by the protein $Y$ \cite{griffith}, probably due to the increased time delay. In the cyclic repressilator by Elowitz and Leibler, oscillations can occur without co-repressors and for Hill exponents $n$ as low as 2, which is more applicable to biological systems. 
The model also accounts for a constant basal rate of mRNA production.
A theoretical solution for introducing non-linearity into non-co\-op\-er\-a\-tive biological systems by using transcription factors, where the same proteins are able to repress one gene and activate another gene, has been proposed by M\"uller et al. \cite{mullerhofbauerendlerflammwidderschuster} and Widder et al. \cite{widdermaciasole}. Tyler et al. \cite{tylershiuwalton} continue the work of \cite{mullerhofbauerendlerflammwidderschuster} with biologically less restrictive assumptions. However, such transcription factors are extremely rare in nature and would also be hard to design by directed evolution. Recently, Lebar et al. \cite{lebarbezeljakgolobjeralakaduncpirsstrazarvuckozupancicbencinaforstnericgaberlonzaricmajerleoblaksmolejerala} have shown that non-linearity can be introduced into a biological system by introducing non-cooperative repressors in combination with activators competing for binding to the same DNA sequence, thus creating a positive feedback loop. In principle, positive feedback loops could be introduced---based on the same DNA binding domain---to build functional repressilator circuits consisting of non-cooperative repressors.
The above described oscillator circuit was experimentally constructed using three natural repressor proteins, the TetR, LacI and CI repressors. However, construction of functional biological circuits using such natural repressors requires fine-tuning due to their diverse biochemical properties. Furthermore, the low number of well-characterized natural repressor proteins does not enable construction of multiple circuits in a single cell, a fact that may support the use of stochastic models, cf. e.g. \cite{aranyitoth,erdilente,tothnagypapp}. With the developments in the field of synthetic biology in the recent years, the use of designable repressors has become more and more frequent \cite{qilarsongilbertdoudnaweissmanarkinlim, kianibealebrahimkhanihuhhallxieliweiss, lohmuellerarmelsilve, garglohmuellersilverarmel, congzhoukuocunniffzhang, gaberlebarmajerlesterdobnikarbencinajerala, lebarbezeljakgolobjeralakaduncpirsstrazarvuckozupancicbencinaforstnericgaberlonzaricmajerleoblaksmolejerala, lebarjerala,nissimprtlifridkinperezpineralu}. Such repressors can be designed to bind any DNA sequence due to their modular structure, which can be exploited to eliminate interactions with the cells' genome. Furthermore, they can be designed in almost unlimited numbers and the biochemical properties of individual repressors are very similar, making construction and modeling of synthetic circuits easier. However, the main disadvantage of designable repressors is that they are monomeric, meaning that their binding to DNA is non-cooperative and the Hill exponent $n$ is equal to 1. Under those conditions, the above described models are not expected to produce oscillations. This poses a challenge of introducing non-linearity in complex biological systems, consisting of such repressors.
Equations describing the model of the repressilator by Elowitz and Leibler, with only two variables per gene, are easy to handle. However, the addition of activators to the model increases the number of variables and thus the complexity of the model. Mathematical analysis of systems of equations with a large number of variables is harder. Such systems can be investigated using a deterministic approach based on ordinary differential equations (ODEs), with kinetics of the mass action type or other, applying the qualitative theory of ordinary differential equations to find bistability, oscillations, etc., or by calculating solutions numerically. The stochastic description \cite[Chapter 5]{erditoth}, \cite[Chapter 10]{tothnagypapp} or \cite{erdilente} usually does not allow symbolic calculations because of the complexity of the model; however, in this case one may also turn to the computer to do simulations \cite{sipostotherdi,nagypapptoth}.
In this work, we compare deterministic mathematical models of three different repressilator topologies based on non-cooperative repressors, which can be implemented in biological systems based on designed DNA binding domains such as zinc fingers, TALEs or dCas9/CRISPR fused to activation or repression domains. The models are simplified and consider reactions only at the protein level. The concentration of each repressor and activator over time is described by a separate equation of the system. In the 3D model, we perform the singular point analysis of the 3-variable equation system for the basic repressilator topology, consisting of 3 repressors. In the 6D models we expand the complexity by the addition of 3 variables, representing activators. The study of the system is non-trivial since there are no efficient methods for determining singular points of polynomial or rational systems of ODEs of high dimension depending on parameters. We perform our analysis using combinations of modern symbolic algorithms of the computer algebra systems \textbf{\textsc{Mathematica}}\ \cite{mathematica} and \textbf{\textsc{Singular}}\ \cite{deckergreuelpfisterschonemann}; this has not yet been covered in the literature and represents a novel approach to the analysis of biological circuits.
Extensive theoretical studies have already been done on the 3D repressilator circuit. The authors of \cite{kuznetsovafraimovich} treat only the special case $\alpha_0=0$ of our model; in a nonlinear model such a seemingly slight difference may cause qualitative differences. They also treat saturable degradation, i.e. cases when instead of $-k_{\mathrm{deg}}x$ one has a term $\frac{-k_{\mathrm{deg}}x}{1+x}.$ They have shown the connection between the evolution of the oscillatory solution and the formation of a heteroclinic cycle at infinity. The paper \cite{dilao} also deals with the $\alpha_0=0$ case, but its author derives the usual nonlinear term starting from a mass action model, using a Michaelis--Menten type approximation; he is mainly interested in models with delay. The model of \cite{guantespoyatos} again assumes $\alpha_0=0,$ and the rational functions are such that both the denominator and the numerator are second degree polynomials; the paper contains no general mathematical statements, only numerical simulations. On the other hand, the mathematically rigorous paper \cite{mullerhofbauerendlerflammwidderschuster} treats a large class of models including the model by Elowitz and Leibler (but not our models) and gives a detailed description of the attractors. Summarizing, none of the models in the literature cover the classes of models we are interested in; moreover, the present approach seems to be novel from the mathematical point of view and uses models based on recent experiments in synthetic biology.
Note also that \cite{tiggesmarquezlagostellingfussenegger} consider a much more complicated process; no formulae can be found in the paper itself, although its Supplement contains models, delay and stochastic effects, but no qualitative analysis at all. They estimate the parameters of the model. The authors of \cite{thieffrythomas} use the heuristic ideas (\textit{kinetic logic}) of Thomas without a mathematical treatment.
\section{A 3D model}\label{sec:3D}
First we model the basic repressilator circuit based on non-cooperative repressors, similar to the Elowitz repressilator. The difference compared to the original repressilator model is that here the Hill exponent $n$ is always equal to 1, due to the non-cooperative nature of the repressors. We consider a symmetrical system, where the biochemical properties of all repressors are similar, as expected with designed transcription factors (and \emph{not} to simplify mathematics). We simplify the system to only consider reactions on the protein level. The variables $x,$ $ y$ and $z$ represent the concentrations of each of the repressors, while the parameters $ \alpha_{0}, \alpha$ and $k_{\mathrm{deg}}$ represent the rate of protein synthesis when the promoter is repressed, the maximal rate of protein synthesis from the free promoter and protein degradation rate, respectively (Figure \ref{fig:RepTop}). \begin{figure}
\caption{The 3D repressilator topology}
\label{fig:RepTop}
\end{figure} We assume equal rates of synthesis and degradation for all three repressor proteins. Then the concentration of each repressor over time is described by the following equations: \begin{equation} \label{ss1} \begin{aligned} \frac{d x}{dt}=& \alpha_0 + \frac \alpha {1 + z} - k_{\mathrm{deg}} x\\ \frac{d y}{dt}=& \alpha_0 + \frac \alpha {1 + x} - k_{\mathrm{deg}} y\\ \frac{d z}{dt}=& \alpha_0 + \frac \alpha {1 + y} - k_{\mathrm{deg}} z. \end{aligned} \end{equation} To simplify the notation we denote $$ s=\alpha_0,\ b=\alpha, \ g=k_{\mathrm{deg}}, $$ where the parameters $s$, $b$ and $g$ are positive real numbers; below the dot denotes the derivative with respect to time.
With this notation system \eqref{ss1} is written as \begin{equation} \label{s1} \begin{aligned} \dot x= & s +\frac b{1 + z} - g x \\ \dot y=& s + \frac b{1 + x} - g y \\ \dot z= & s +\frac {b}{1 + y} - g z. \end{aligned} \end{equation} We are interested in the behavior of trajectories of system \eqref{s1} in the region $$ D=\{ (x,y,z) : x>0, \ y>0, \ z>0\}. $$
System \eqref{s1} has two singular points whose coordinates contain the expression $u=\sqrt{4 b g + (g + s)^2}.$ With this we have \begin{equation}\label{eq:bandineq} b= \frac{u^2-(g + s)^2}{4 g}{\quad{\rm and}\quad u>g+s.} \end{equation} Then the steady states of the system are $$ A=(x_0,y_0,z_0)=\left(\frac{s- g - u}{ 2 g}, \frac{s- g - u}{ 2 g}, \frac{s- g - u}{ 2 g}\right) $$ and $$ B=(x_1,y_1,z_1)=\left( \frac{s+ u-g}{ 2 g},\frac{s+ u-g}{ 2 g},\frac{s+ u-g}{ 2 g} \right). $$
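As a numerical sanity check (with arbitrary positive parameter values, not part of the symbolic analysis), one can verify that the common coordinates of $A$ and $B$ are exactly the two roots of $g x^{2}+(g-s)x-(s+b)=0$, i.e. that they annihilate the right-hand side of \eqref{s1}:

```python
import numpy as np

s, b, g = 0.3, 4.0, 0.6              # arbitrary positive parameters
u = np.sqrt(4*b*g + (g + s)**2)
x0 = (s - g - u) / (2*g)             # common coordinate of A (negative)
x1 = (s + u - g) / (2*g)             # common coordinate of B (positive)

# both coordinates must satisfy s + b/(1 + x) - g x = 0
for x in (x0, x1):
    print(abs(s + b/(1 + x) - g*x) < 1e-12)
```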
The eigenvalues of the Jacobian matrix of system \eqref{s1} at $A$ are $$ \kappa_1=\frac{2 g u}{g+s-u}, \quad \kappa_{2,3}=-\frac{g (3 g+3 s-u)}{2 (g+s-u)}\pm i \frac{\sqrt{3} g (g+s+u)}{2 (g+s-u)} $$ and the eigenvalues at $B$ are given by \begin{equation} \label{lam} \lambda_1= -\frac{2 g u}{g+s+u},\quad \lambda_{2,3}= -\frac{g (3 g+3 s+u)}{2 (g+s+u)} \pm i \frac{\sqrt{3} g \left( g+s-u\right) }{2 (g+s+u)}. \end{equation}
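The eigenvalue formulas \eqref{lam} can likewise be cross-checked numerically: at $B$ the Jacobian of \eqref{s1} is the circulant matrix with $-g$ on the diagonal and $-b/(1+x_1)^{2}$ in the cyclic off-diagonal positions. The Python sketch below (parameter values arbitrary) compares its numerical spectrum with \eqref{lam}:

```python
import numpy as np

s, b, g = 0.3, 4.0, 0.6                       # arbitrary positive parameters
u = np.sqrt(4*b*g + (g + s)**2)
x1 = (s + u - g) / (2*g)                      # common coordinate of B
c = b / (1 + x1)**2                           # magnitude of the off-diagonal entries

J = np.array([[-g, 0, -c],                    # Jacobian of (s1) at B
              [-c, -g, 0],
              [0, -c, -g]])
eig = np.sort_complex(np.linalg.eigvals(J))

lam1 = -2*g*u / (g + s + u)
re = -g*(3*g + 3*s + u) / (2*(g + s + u))
im = np.sqrt(3)*g*(u - g - s) / (2*(g + s + u))
pred = np.sort_complex(np.array([lam1, re - 1j*im, re + 1j*im]))
print(np.allclose(eig, pred))
```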
We can expect chemically relevant non-trivial behavior of trajectories in the domain $D$ if both singular points of the system are located in $D$. The necessary and sufficient condition for this is \begin{equation} \label{sem1} x_0> 0, \ x_1 > 0. \end{equation} From $u=\sqrt{4bg + (g+s)^2}$, one gets both $u>g$ and $u>s$ (since $b$, $g$ and $s$ are positive). As a consequence, $s-g-u = (s-u) - g < 0$ since $s<u$, so $A$ has negative coordinates and can be discarded. Moreover, $s+u-g = s + (u-g) > 0$ since $u>g$, so $B$ always lies in the domain $D$. For the eigenvalues \eqref{lam} of the matrix of the linear approximation of \eqref{s1} at $B$ we have
$\lambda_1<0$, $\mathrm{Re}\, \lambda_{2,3}<0$, that is, $B$ is asymptotically stable.
To conclude, in the domain $D=\{ (x,y,z) : x>0, \ y>0, \ z>0\}$ the system has exactly one steady state, the point $B$, which is a (locally) asymptotically stable attractor, and trajectories approach a neighborhood of the steady state exponentially fast. In a small neighborhood of $B$ there are damped oscillations; however, their amplitude is very small. The reason is that oscillations with a large amplitude would require the eigenvalues at $B$ to have $\mathrm{Abs(Re}\, \lambda_{2,3})$ small and $\mathrm{Abs(Im}\, \lambda_{2,3})$ large; however, it follows from \eqref{lam} that the ratio $\mathrm{Abs(Re}\, \lambda_{2,3})/\mathrm{Abs(Im}\, \lambda_{2,3})$ is always bounded from below by $1/\sqrt{3}$, so the oscillations are strongly damped. Thus, weakly damped oscillations are difficult to achieve in our system, while they would probably be facilitated in a system with a high Hill exponent $n$. In Fig. \ref{fig:damposc3d} we have chosen the parameters so as to make the oscillations as pronounced as possible. Fig. \ref{fig:damposc3d} shows the behaviour of the model for a single, specific set of the parameters, but the argument above is symbolic, i.e. valid for all sets of the parameters. \begin{figure}
\caption{Damped oscillations (overshoot) in the 3D model; $s=0.3$, $b=4$, $g=0.6$, initial concentrations: $(1,2,2)$.}
\label{fig:damposc3d}
\end{figure}
Our calculations above provided an alternative proof of a part of the statement by Allwright \cite{allwright}, who obtained stronger results: he showed, for a more general class of models including ours, the existence, uniqueness and \emph{global} asymptotic stability of the stationary point. In order to apply Allwright's results to our model one has to calculate a few quantities; this is done in Appendix \ref{subsec:D}.
\section{The forward feedback repressilator 6D model}
By a principle similar to the one demonstrated to introduce a non-linear response into a non-cooperative system \cite{lebarbezeljakgolobjeralakaduncpirsstrazarvuckozupancicbencinaforstnericgaberlonzaricmajerleoblaksmolejerala}, we devise a more complex repressilator topology (Figure \ref{fig:doscill2}). The new system consists of the same repressor topology as the 3D model, but also includes three transcriptional activators binding to the same DNA targets as the repressors. Each of the activators drives the synthesis of itself and of the next repressor in the cycle. This topology can be implemented in biological systems using a set of three DNA binding domains (X, Y, Z), their combinations with an activator (a) or a repressor (r) domain, and appropriate binding sites within the three operons. \begin{figure}
\caption{A repressilator topology, involving activators, driving the synthesis of the next repressor in the cycle.}
\label{fig:doscill2}
\end{figure} The new topology therefore includes 6 variables: the concentration---denoted by the corresponding lowercase letters---of the
3 repressors ($X_{r}, Y_{r}$ and $Z_{r}$) and 3 activators ($X_{a}, Y_{a}$ and $Z_{a}$). The Hill exponent $n$ is always equal to 1; the parameters $\alpha_{0}, \alpha$ and $k_{\mathrm{deg}}$ represent the rate of protein synthesis when the promoter is repressed, the rate of protein synthesis from the free promoter, and the protein degradation rate, respectively. We assume equal rates of synthesis and degradation for all repressor and activator proteins. In this case, the protein synthesis rate is considered maximal when the activator is bound to the promoter, so the concentrations of repressors and activators over time are given by: \begin{equation} \label{s6o} \begin{aligned}
\frac{dx_r}{dt} = & \alpha_0 + \alpha z_a/(1 + z_r + z_a ) - k_{\mathrm{deg}} x_r, \\ \frac{dz_a}{dt} = & \alpha_0 + \alpha z_a/(1 + z_r + z_a ) - k_{\mathrm{deg}} z_a,\\ \frac {dy_r }{dt} = & \alpha_0 + \alpha x_a/(1 + x_r + x_a) - k_{\mathrm{deg}} y_r, \\ \frac{dx_a}{dt} = & \alpha_0 + \alpha x_a/(1 + x_r + x_a) - k_{\mathrm{deg}} x_a, \\ \frac {dz_r }{dt} = & \alpha_0 + \alpha y_a/(1 + y_r + y_a) - k_{\mathrm{deg}} z_r,\\ \frac{ dy_a } {dt} = & \alpha_0 + \alpha y_a/(1 + y_r + y_a) - k_{\mathrm{deg}} y_a. \end{aligned} \end{equation}
Introducing the notation $$ x_1=x_{r},\ x_2= z_{a}, x_3= y_{r}, \ x_4= x_{a},\ x_5= z_{r}, \ x_6= y_{a}, \ s= \alpha_{0}, \ b=\alpha, \ g = k_{\mathrm{deg}} $$ where $b, g, s >0$ we rewrite system \eqref{s6o} in the form \begin{equation} \label{s6} \begin{aligned} \dot x_1= &s - g x_1 + \frac{b x_2}{1 + x_2 + x_5}=X(x_1,x_2,x_5) \\ \dot x_2= & s - g x_2 + \frac {b x_2}{1 + x_2 + x_5}=X(x_2,x_2,x_5) \\ \dot x_3= & s - g x_3 + \frac{b x_4}{1 + x_1 + x_4}=X(x_3,x_4,x_1) \\ \dot x_4=&s - g x_4 + \frac{b x_4}{1 + x_1 + x_4}=X(x_4,x_4,x_1) \\ \dot x_5=& s - g x_5 + \frac{b x_6}{1 + x_3 + x_6}=X(x_5,x_6,x_3) \\ \dot x_6= & s - g x_6 + \frac{b x_6}{1 + x_3 + x_6}=X(x_6,x_6,x_3) \end{aligned} \end{equation} with \begin{equation}\label{X} X(u,v,w):=s-gu+\frac{bv}{1+v+w}. \end{equation}
From the first two equations of \eqref{s6} we obtain that any steady state $(x_1,x_2,\dots, x_6)$ of the system must satisfy $x_1=x_2$. Similarly, the two other pairs of equations of \eqref{s6} yield $x_3=x_4$ and $x_5=x_6.$ Thus, the simplified stationary point
equations are: \begin{eqnarray} 0&= &s - g x_1 + \frac{b x_1}{1 + x_1 + x_5}=X(x_1,x_1,x_5)\label{eq:st1} \\ 0&= & s - g x_3 + \frac{b x_3}{1 + x_1 + x_3}=X(x_3,x_3,x_1)\label{eq:st2} \\ 0&=& s - g x_5 + \frac{b x_5}{1 + x_3 + x_5}=X(x_5,x_5,x_3).\label{eq:st3} \end{eqnarray} We first look for steady states of system \eqref{s6} using the routine \texttt{Solve} of \textbf{\textsc{Mathematica}}\ and we find 8 steady states. Two of them are \begin{equation} \label{F} F=(f,f,f,f,f,f), \quad {\rm where \quad } f= \frac{\sqrt{(b-g+2 s)^2+8 g s}+b-g+2 s}{4
g}
\end{equation} and \begin{equation} \label{H} H=(h,h,h,h,h,h), \quad {\rm where \quad } h = -\frac{\sqrt{(b-g+2 s)^2+8 g s}-b+g-2 s}{4
g}.
\end{equation}
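As a cross-check of \eqref{F} and \eqref{H}, the following \textsc{sympy} sketch verifies symbolically that $f$ and $h$ are exactly the roots of the quadratic obtained from the symmetric stationarity equation:

```python
import sympy as sp

s, b, g, x = sp.symbols('s b g x', positive=True)

# at a symmetric steady state x1 = ... = x6 = x each equation reduces to
# s - g*x + b*x/(1 + 2*x) = 0; clearing the denominator gives a quadratic
quadratic = sp.expand((s - g*x + b*x/(1 + 2*x))*(1 + 2*x))

R = sp.sqrt((b - g + 2*s)**2 + 8*g*s)
f = (R + b - g + 2*s)/(4*g)    # coordinate of the point F
h = -(R - b + g - 2*s)/(4*g)   # coordinate of the point H

# both f and h annihilate the quadratic
assert sp.simplify(quadratic.subs(x, f)) == 0
assert sp.simplify(quadratic.subs(x, h)) == 0

# at a sample parameter point f is positive while h is negative
vals = {s: 1, b: 2, g: 3}
assert f.subs(vals) > 0 and h.subs(vals) < 0
```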
However, the coordinates of the other steady states are given by long, cumbersome expressions which are not convenient to analyse. (Even after applying \texttt{Simplify} or \texttt{FullSimplify}, the result of \texttt{LeafCount} is more than thirteen thousand.) Thus, we choose another approach.
Chemically relevant steady states should satisfy the conditions \begin{multline} \label{6semi} X(x_1,x_1,x_5)= X(x_3,x_3,x_1)=X(x_5,x_5,x_3)=0,\\ s>0,\ g>0,\
b>0,\ x_1>0,\ x_3>0, \ x_5>0 . \end{multline} System \eqref{6semi} is a so-called semi-algebraic system (since it contains not only algebraic equations $X(x_1,x_1,x_5)= X(x_3,x_3,x_1)=X(x_5,x_5,x_3)=0$, but also inequalities). Nowadays powerful algorithms to solve such systems have been developed and implemented in many computer algebra systems. In particular, in \textbf{\textsc{Mathematica}}\ the routine
\texttt{Reduce} can be applied to finding solutions of
semi-algebraic systems. For algebraic functions \texttt{Reduce} constructs equivalent purely polynomial systems and then uses cylindrical algebraic decomposition (CAD) introduced by Collins in \cite{collins} for real domains and Gr\"{o}bner basis methods for complex domains.
To simplify computations we first clear the denominators on the right-hand sides of \eqref{eq:st1}--\eqref{eq:st3}, obtaining the polynomials \begin{eqnarray*}
f_1&:=&s + b x_1 - g x_1 + s x_1 - g x_1^2 + s x_5 - g x_1 x_5, \\
f_3&:=&s + s x_1 + b x_3 - g x_3 + s x_3 - g x_1 x_3 - g x_3^2, \\
f_5&:=&s + s x_3 + b x_5 - g x_5 + s x_5 - g x_3 x_5 - g x_5^2. \end{eqnarray*} Solving with \texttt{Reduce} of \textbf{\textsc{Mathematica}}\
the semi-algebraic system \begin{equation} f_1 = f_3 = f_5 = 0, x_1 > 0,
x_3 > 0, x_5 > 0, s > 0, g > 0, b > 0,\label{semiin} \end{equation} with respect to $
x_1, x_3, x_5, s, b,
g$ we obtain the solution \begin{equation}
x_1 > 0, x_1 =x_3 = x_5, s > 0, b > 0,
g =\frac s{x_1} +\frac { b}{1 + x_1 + x_5}.\label{semiout} \end{equation} The input command and the output are given in Appendix \ref{subsec:B}. The exact result may slightly differ depending on the version you use, but nevertheless, it always implies the essential relation that $x_1 =x_3 = x_5.$
Solving the last equation for $x_1$ we obtain two solutions: $$ \frac{\sqrt{(b-g+2 s)^2+8 g s}+b-g+2 s}{4 g} \quad \text{and} \quad \frac{-\sqrt{(b-g+2 s)^2+8 g s}+b-g+2 s}{4 g}. $$ However, in the second case $x_1$ is negative, so the only steady state whose coordinates satisfy \eqref{6semi} is the point $F$ defined by \eqref{F}.
Computing the eigenvalues of the Jacobian matrix of system \eqref{s6} at $F$ we find that they are \begin{eqnarray*} \kappa_{1,2,3}&=&-g, \quad \kappa_4 = -g+\frac{b}{(1 + 2 f)^2} \\ \kappa_{5,6}&=&-g+\frac{b(2 + 3f)}{2 (1 + 2 f)^2}\pm i \frac{\sqrt{3} b f}{2 (1 + 2 f)^2}, \end{eqnarray*} where $f$ is defined by \eqref{F}. A short calculation shows that all eigenvalues of the Jacobian matrix have negative real parts yielding that $F$ is asymptotically stable. Thus, we have proven the following result. \begin{theorem} Point $F$ is the only positive stationary point of system \eqref{s6} and it is asymptotically stable. \end{theorem}
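These eigenvalue formulas can be checked numerically; the following sketch (the sample parameter values $s=1$, $b=2$, $g=1$ are an arbitrary positive choice) builds the Jacobian of \eqref{s6} at $F$ and compares its spectrum with the stated $\kappa_1,\dots,\kappa_6$:

```python
import numpy as np
import sympy as sp

s_, b_, g_ = 1.0, 2.0, 1.0  # a sample positive parameter set
s, b, g = sp.symbols('s b g')
x = sp.symbols('x1:7')

# right-hand sides of system (s6)
rhs = [s - g*x[0] + b*x[1]/(1 + x[1] + x[4]),
       s - g*x[1] + b*x[1]/(1 + x[1] + x[4]),
       s - g*x[2] + b*x[3]/(1 + x[0] + x[3]),
       s - g*x[3] + b*x[3]/(1 + x[0] + x[3]),
       s - g*x[4] + b*x[5]/(1 + x[2] + x[5]),
       s - g*x[5] + b*x[5]/(1 + x[2] + x[5])]

f = (np.sqrt((b_ - g_ + 2*s_)**2 + 8*g_*s_) + b_ - g_ + 2*s_) / (4*g_)
subs = {s: s_, b: b_, g: g_, **{xi: f for xi in x}}
J = np.array(sp.Matrix(rhs).jacobian(list(x)).subs(subs).tolist(), dtype=float)
lam = np.linalg.eigvals(J)

d = (1 + 2*f)**2
expected = [-g_, -g_, -g_, -g_ + b_/d,
            -g_ + b_*(2 + 3*f)/(2*d) + 1j*np.sqrt(3)*b_*f/(2*d),
            -g_ + b_*(2 + 3*f)/(2*d) - 1j*np.sqrt(3)*b_*f/(2*d)]
key = lambda z: (z.real, z.imag)
assert np.allclose(sorted(lam, key=key), sorted(expected, key=key))
assert all(z.real < 0 for z in lam)
```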
\section{The backward feedback repressilator 6D model}
Due to the absence of oscillations in the above described model we next consider a repressilator topology with activators wired to activate transcription of the previous repressor in the cycle (Figure \ref{fig:doscill1}). The notations of the variables and the constants are the same as in the previous 6D model. Therefore, the concentrations of repressors and activators over time are as follows: \begin{equation} \label{s6sec} \begin{aligned} \dot x_1= & s - g x_1 + b x_4/(1 + x_4 + x_5) =X(x_1,x_4,x_5) \\ \dot x_2= & s - g x_2 + b x_4/(1 + x_4 + x_5) =X(x_2,x_4,x_5) \\ \dot x_3= & s - g x_3 + b x_6/(1 + x_1 + x_6) =X(x_3,x_6,x_1) \\ \dot x_4=&s - g x_4 + b x_6/(1 + x_1 + x_6) =X(x_4,x_6,x_1) \\ \dot x_5=& s - g x_5 + b x_2/(1 + x_2 + x_3) =X(x_5,x_2,x_3) \\ \dot x_6= & s - g x_6 + b x_2/(1 + x_2 + x_3) =X(x_6,x_2,x_3) \end{aligned} \end{equation} with $X(u,v,w):=s-gu+\frac{bv}{1+v+w}$, that is, $X$ is again defined by \eqref{X}, but now each right-hand side depends on three variables, differently from the previous case. \begin{figure}
\caption{The backward feedback repressilator 6D model: repressilator topology including transcriptional activators, driving the synthesis of the previous repressor in the cycle.}
\label{fig:doscill1}
\end{figure}
\subsection{Steady states of the model}
From \eqref{s6sec} it is easily seen that any stationary point of \eqref{s6sec} should fulfil
$x_2=x_1, x_4=x_3, x_6=x_5$. Then, similarly as in the case of system \eqref{s6}, computing with \textbf{\textsc{Mathematica}}\ we find that the system has singular points $F$ and $H$ defined by \eqref{F} and \eqref{H} and a few other singular points whose coordinates are given by cumbersome expressions, which are not suitable for further analysis. Therefore, again we proceed using the previous ideas.
The chemically relevant
steady states of system \eqref{s6sec} are solutions to the semi-algebraic system \begin{equation} \label{6sec_semi} f_1=f_2 =f_3=0,\ s>0,\ g>0,\ b>0,\ x_1>0, x_3>0, x_5>0 \end{equation} where $$ \begin{aligned} f_1= & s - g x_1 + b x_3 + s x_3 - g x_1 x_3 + s x_5 -
g x_1 x_5 ,\\ f_2= & s + s x_1 - g x_3 - g x_1 x_3 + b x_5 + s x_5 -
g x_3 x_5 , \\ f_3= & s + b x_1 + s x_1 + s x_3 - g x_5 - g x_1 x_5 -
g x_3 x_5 \end{aligned} $$ (that is, $f_1= X(x_1,x_3,x_5 )(1 + x_3 + x_5), f_2 = X(x_3,x_5,x_1) (1 + x_1 + x_5), \ f_3 =X(x_5,x_1,x_3) (1 + x_1 + x_3) $). But unlike in the case of the previous model, we were not able to solve system \eqref{6sec_semi} with either \texttt{Reduce} or \texttt{Solve} of \textbf{\textsc{Mathematica}}. (\texttt{Solve} provides five roots, most of them in a uselessly complicated form.) It appears that the reason is that in the previous model the steady states were determined
from the system $$ X(x_1,x_1,x_5)= X(x_3,x_3,x_1)=X(x_5,x_5,x_3)=0, $$ where each equation depended only on two variables, whereas in the present case they are to be determined from the system $$ X(x_1,x_3,x_5)=X(x_3,x_5,x_1)=X(x_5,x_1,x_3)=0, $$ where each equation depends on three variables, so the latter system is more complicated.
To find the steady states of system \eqref{s6sec} we use the computer algebra system \textbf{\textsc{Singular}}\ \cite{deckerlaplagnepfisterschonemann,deckergreuelpfisterschonemann}. We look for solutions of system \begin{equation}\label{fss} f_1(x_1,x_3,x_5,s,g,b)= f_2(x_1,x_3,x_5,s,g,b)=f_3(x_1,x_3,x_5,s,g,b)=0. \end{equation}
The polynomials $f_1,f_2,f_3$ are polynomials of six variables with rational coefficients, that is, they are polynomials of the ring $\mathbb{Q}[ s,b,g,x_1,x_3,x_5]$. In \textbf{\textsc{Singular}}\ the ring of such polynomials can be declared as
\centerline{\texttt{ring r=0,(s,b,g,x1,x3,x5),(lp)},} \noindent where \texttt{r} is the name of the ring, \texttt{0} means that the computations are performed over the field of characteristic $0$, i.e.\ the rational numbers $\mathbb{Q}$, and \texttt{lp}
means that Gr\"{o}bner basis calculations should
be performed using the lexicographic ordering.
Let $I$ be the ideal generated by $f_1,f_2,f_3$ in $\mathbb{Q}[ s,b,g,x_1,x_3,x_5]$, that is, \begin{equation} \label{I} I=\langle f_1,f_2, f_3 \rangle. \end{equation} The set of solutions of system \eqref{fss} is the variety $V(I)$ of $I$ (the zero set of all polynomials from $I$). (We give definitions and some facts about polynomial ideals and their varieties in Appendix \ref{subsec:A}.)
Then, applying the routine \texttt{minAssGTZ} of
\cite{deckergreuelpfisterschonemann}, which computes
minimal associated primes of polynomial ideals using
the algorithm of \cite{giannitragerzacharias}, we find that the variety of
$I$ consists of three components,
\begin{equation} {\bf V}(I)={\bf V}(I_1)\cup {\bf V}(I_2)\cup {\bf V}(I_3),\label{star} \end{equation} where $I_1,I_2,I_3$ are the ideals written under [1]:, [2]: and [3]:, respectively, in Appendix \ref{subsec:C}.
Since $I_1=\langle
x_3-x_5,
x_1-x_5,
2 s x_5+s+b x_5-2 g x_5^2-g x_5\rangle
$ it is easily seen that the variety ${\bf V}(I_1)$ consists of two points $F$ and $H$ defined by \eqref{F} and \eqref{H}, respectively.
From the equations for the third component we have
$s=g=b=0$, so the system degenerates.
However, the polynomials defining the second component are complicated and difficult to analyse, so we are unable to extract a useful description of the component from these polynomials.
Fortunately, there is a slightly different way to treat
the problem of solving system \eqref{fss}. Namely, we can treat polynomials $$ f_1(x_1,x_3,x_5,s,g,b), \
f_2(x_1,x_3,x_5,s,g,b), \ f_3(x_1,x_3,x_5,s,g,b) $$ as polynomials in $x_1, x_3, x_5$ depending on the parameters $s,g,b$ (which is in agreement with the meaning of $s,g,b$ in the differential system \eqref{s6sec}).
To do so, we declare the ring as
\centerline{\texttt{ring r=(0,s,b,g),(x1,x3,x5),(lp)},} \noindent where \texttt{r} is the name of the ring, \texttt{(0,s,b,g)}
means that the computations should be performed in the field
of characteristic \texttt{0} and \texttt{s,b,g} should be treated as parameters, and,
as above, \texttt{lp} means that Gr\"{o}bner basis calculations should
be performed using the lexicographic ordering.
Computing with \texttt{minAssGTZ} the minimal associated primes of the ideal
$ J=\langle f_1,f_2, f_3 \rangle $ (which looks as $I$ but now it is considered as the ideal of the ring $
\mathbb{Q}(s,b,g)[x_1,x_3,x_5] $)
we obtain that they are $$ J_1=\langle h_1, h_2, h_3\rangle $$ with \begin{equation} \label{h123} \begin{aligned} h_1=& (2 s g^3+b g^3+g^4) x_5^3+(2 s^2 g^2+2 s b g^2+5 s g^3+2 b^2 g^2+2 b g^3+2 g^4) x_5^2\\ &+(-2 s^3 g-3 s^2 b g-s^2 g^2-3 s b^2 g-2 s b g^2+2 s g^3-b^3 g+g^4) x_5\\ &+(-2 s^4-4 s^3 b-5 s^3 g-5 s^2 b^2-8 s^2 b g-4 s^2 g^2-3 s b^3-7 s b^2 g\\ &-5 s b g^2-s g^3-b^4-2 b^3 g-2 b^2 g^2-b g^3),\\ h_2= &(2 s b g+b^2 g+b g^2) x_3+(-2 s g^2-b g^2-g^3) x_5^2\\ &+(s b g-2 s g^2-b^2 g-g^3) x_5\\ &+(2 s^3+4 s^2 b+3 s^2 g+4 s b^2+6 s b g+s g^2+2 b^3+2 b^2 g+2 b g^2)\\ h_3= &(2 s b g+b^2 g+b g^2) x_1+(2 s g^2+b g^2+g^3) x_5^2\\ &+(s b g+2 s g^2+2 b^2 g+b g^2+g^3) x_5\\ &+(-2 s^3-2 s^2 b-3 s^2 g-2 s b^2-s b g-s g^2) \end{aligned}
\end{equation}
and $$ J_2=\langle
2 g x_5^2+(g-2 s-b) x_5-s,
x_1-x_5,
x_3-x_5\rangle.
$$
So the variety of the ideal consists of two components $$ {\bf V}(J)={\bf V}(J_1)\cup {\bf V}(J_2). $$
Clearly, the variety ${\bf V}(J_2)$, considered as a variety in $\mathbb{R}^3$, consists of the two points $(f,f,f)$ and $(h,h,h)$ corresponding to $F$ and $H$ defined by \eqref{F} and \eqref{H}.
Chemically relevant steady states in the component ${\bf V}(J_1)$ are determined from the semi-algebraic system
\begin{equation} \label{sas6}
b>0, \ g>0, \ s>0, \ x_1>0, \ x_3>0, \ x_5>0, \ h_1=0, \ h_2=0, \ h_3=0.
\end{equation}
Solving system \eqref{sas6} with \texttt{Reduce} we find that it has no solution (the command \texttt{Reduce} returns \texttt{False} as the output).
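A quick numerical spot-check of this \texttt{Reduce} computation, for the single sample parameter set $s=b=g=1$ (an arbitrary choice, so an illustration rather than a proof): since $h_2$ and $h_3$ are linear in $x_3$ and $x_1$ respectively, every point of ${\bf V}(J_1)$ is determined by a real root of the cubic $h_1$ in $x_5$.

```python
import numpy as np

s, b, g = 1.0, 1.0, 1.0  # sample positive parameters

# coefficients of h1 as a cubic in x5 (transcribed from (h123))
c3 = 2*s*g**3 + b*g**3 + g**4
c2 = 2*s**2*g**2 + 2*s*b*g**2 + 5*s*g**3 + 2*b**2*g**2 + 2*b*g**3 + 2*g**4
c1 = (-2*s**3*g - 3*s**2*b*g - s**2*g**2 - 3*s*b**2*g - 2*s*b*g**2
      + 2*s*g**3 - b**3*g + g**4)
c0 = -(2*s**4 + 4*s**3*b + 5*s**3*g + 5*s**2*b**2 + 8*s**2*b*g
       + 4*s**2*g**2 + 3*s*b**3 + 7*s*b**2*g + 5*s*b*g**2 + s*g**3
       + b**4 + 2*b**3*g + 2*b**2*g**2 + b*g**3)

lin = 2*s*b*g + b**2*g + b*g**2   # coefficient of x3 in h2 and of x1 in h3

found_positive = False
for x5 in np.roots([c3, c2, c1, c0]):
    if abs(x5.imag) > 1e-9:
        continue
    x5 = x5.real
    # h2 = 0 and h3 = 0 are linear in x3 and x1, respectively
    x3 = -((-2*s*g**2 - b*g**2 - g**3)*x5**2
           + (s*b*g - 2*s*g**2 - b**2*g - g**3)*x5
           + (2*s**3 + 4*s**2*b + 3*s**2*g + 4*s*b**2 + 6*s*b*g
              + s*g**2 + 2*b**3 + 2*b**2*g + 2*b*g**2)) / lin
    x1 = -((2*s*g**2 + b*g**2 + g**3)*x5**2
           + (s*b*g + 2*s*g**2 + 2*b**2*g + b*g**2 + g**3)*x5
           + (-2*s**3 - 2*s**2*b - 3*s**2*g - 2*s*b**2 - s*b*g - s*g**2)) / lin
    if x1 > 0 and x3 > 0 and x5 > 0:
        found_positive = True

# consistent with Reduce: no chemically relevant point on V(J_1)
assert not found_positive
```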
Using the analysis performed above we can prove the following result. \begin{theorem} The only steady state of system \eqref{s6sec} with positive coordinates is the point $F$ defined by \eqref{F}. \end{theorem} \begin{proof} As we have shown above, the only point from the variety ${\bf V} (J)$ satisfying the condition
\begin{equation} \label{csas6}
b>0, \ g>0, \ s>0, \ x_1>0, \ x_3>0, \ x_5>0
\end{equation}
is the point $F$ defined by \eqref{F}.
However, the complete set of steady states of system
\eqref{s6sec} is determined from the variety ${\bf V}(I)$
of the ideal $I$ defined by \eqref{I}. Thus to prove the theorem it is sufficient to show that
${\bf V} (I)$ is a subset of ${\bf V}(J)$. The first component of ${\bf V} (I)$ and the second component of ${\bf V}(J)$ are the same, and the third component of ${\bf V} (I)$ is the variety ${\bf V}(I_3)$ of the ideal $I_3=\langle s, b , g\rangle$. Obviously, if $s=b=g=0$ then all polynomials $h_1, h_2, h_3$ vanish, which means that ${\bf V} (I_3)$ is a subset of ${\bf V} (J)$.
So, we have to compare the second
components of the decompositions of ${\bf V} (I)$ and ${\bf V}(J)$,
that is, ${\bf V}(H)$ and ${\bf V}(G)$,
where $H=\langle h_1, h_2, h_3 \rangle $ with
$h_1, h_2, h_3$ defined by \eqref{h123} and
$G=\langle g_1, \dots, g_{11}\rangle$
where by $g_1, \dots, g_{11}$ we denote polynomials
of the second minimal associated prime given in Appendix \ref{subsec:C}.
First, with the command \texttt{std} of \textbf{\textsc{Singular}}\
we compute Gr\"{o}bner bases of $H$ and $G$, denoting them
$H_s$ and $G_s$, respectively.
Then with \texttt{reduce} of \textbf{\textsc{Singular}}\ we check that $H\subset G$ (since \texttt{reduce($H_s$,$G_s$)} returns $0$), which yields ${\bf V}(G)\subset {\bf V}(H)$. $\square$ \end{proof}
\begin{remark} Applying the command \texttt{reduce($G_s,H_s$)} we obtain that $H\subsetneq G$, so ${\bf V}(G)$ is in fact a strict subset of ${\bf V}(H)$ (as varieties in $\mathbb{C}^6$).
We can also find the precise difference of ${\bf V}(H)$ and ${\bf V}(G)$,
the set ${\bf V}(H)\setminus {\bf V}(G)$.
To this end, we use the fact that
$${\bf V}(H)\setminus {\bf V}(G)= {\bf V}(H:G), $$
where $H:G$ is the quotient of ideals $H$ and $G$ (see e.g. \cite{coxlittleshea} or \cite{romanovskishafer}). In \textbf{\textsc{Singular}}\
we compute
the ideal $H:G$ with the command \texttt{quotient(H,G)}, and then with \texttt{minAssGTZ} we compute the minimal associated primes of $H:G$, finding that the variety of $H:G$ consists of 5 components:
1) $g=s^2+s b+b^2= 0$;\quad 2) $b=2 s+g=0$;\quad 3) $3 b-g=3 s+2 g=0$;\quad 4) $b=g x_5-s=0$;\quad 5) $b=g x_5+s+g=0$.
Thus, we see that the varieties ${\bf V}(H)$ and ${\bf V}(G)$ differ only for sets of parameters which are not relevant for our study: $g=0$ in case 1), $b=0$ in cases 2), 4) and 5), and $s=-\frac{2}{3} g$ in case 3), which is impossible since $s$ and $g$ are positive. \end{remark}
\subsection{Stability of the positive steady state}
To study the stability properties of system \eqref{s6sec} near the point $F$ we compute the characteristic polynomial $p$ of the Jacobian matrix of system \eqref{s6sec} at $F$ and we find that it is given as \begin{eqnarray*}
p(u) &=& \frac{(g + u)^3 }{(1 + 2 f)^6}
\left(-b+g(1+2f)^2 + u (1+2f)^2\right) \\
&& \left(u^2 (1 + 2 f)^4 + u (1 + 2 f)^2 (b + 2 g (1 + 2 f)^2)\right. \\
&+& \left.g^2 (1 + 2 f)^4 + b g (1 + 2 f)^2 + b^2 (1 + 3 f + 3 f^2)\right), \end{eqnarray*} where $f$ is defined by \eqref{F}. All coefficients of the quadratic factor are positive, so its roots have negative real parts; hence, in order to prove that all the roots of the characteristic polynomial have negative real parts it is enough to show that $-b+g(1+2f)^2>0,$ which can be easily proven, e.g.\ using \texttt{Reduce}.
To sum up, for any $s,b,g >0$ all roots of $p$ have negative real parts. Therefore, we have proven the following statement. \begin{theorem} The only positive steady state $F$ of system \eqref{s6sec} is asymptotically stable. \end{theorem} We can get a more precise conclusion about the eigenvalues of $F$. Computing the discriminant of the second degree factor of the above polynomial we find that it is $-3 b^2 (1 + 2 f)^6<0,$ which means that the polynomial $p$ always has a pair of complex conjugate eigenvalues.
Thus, the matrix of the linear approximation of \eqref{s6sec} at $F$ always has four negative real eigenvalues and a pair of complex conjugate eigenvalues with negative real parts. Consequently, a Hopf bifurcation is not possible in the system. We could expect to observe strong damped oscillations near the steady state if the absolute values of the real parts of the complex eigenvalues were much less than their imaginary parts. However, our numerical experiments show that the situation is just the opposite: the real parts of the complex eigenvalues are much larger than their imaginary parts. So we can observe only oscillations which quickly decay to the steady state (see Fig. \ref{fig:7}).
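For the parameter set of Fig.~\ref{fig:7} this eigenvalue structure can be confirmed numerically; a sketch:

```python
import numpy as np
import sympy as sp

s_, b_, g_ = 1.0, 10.0, 0.2   # the parameter set of Fig. 7
s, b, g = sp.symbols('s b g')
x = sp.symbols('x1:7')

# right-hand sides of system (s6sec)
rhs = [s - g*x[0] + b*x[3]/(1 + x[3] + x[4]),
       s - g*x[1] + b*x[3]/(1 + x[3] + x[4]),
       s - g*x[2] + b*x[5]/(1 + x[0] + x[5]),
       s - g*x[3] + b*x[5]/(1 + x[0] + x[5]),
       s - g*x[4] + b*x[1]/(1 + x[1] + x[2]),
       s - g*x[5] + b*x[1]/(1 + x[1] + x[2])]

f = (np.sqrt((b_ - g_ + 2*s_)**2 + 8*g_*s_) + b_ - g_ + 2*s_) / (4*g_)
subs = {s: s_, b: b_, g: g_, **{xi: f for xi in x}}
J = np.array(sp.Matrix(rhs).jacobian(list(x)).subs(subs).tolist(), dtype=float)
lam = np.linalg.eigvals(J)

real = [l for l in lam if abs(l.imag) < 1e-9]
cplx = [l for l in lam if abs(l.imag) >= 1e-9]

# four negative real eigenvalues and one complex conjugate pair, all in the
# left half-plane; for the pair |Re| exceeds |Im|, so only quickly decaying
# oscillations are observed near F
assert len(real) == 4 and len(cplx) == 2
assert all(l.real < 0 for l in lam)
assert all(abs(l.real) > abs(l.imag) for l in cplx)
```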
\begin{figure}
\caption{Damping oscillation in the 6D model. $s=1, b=10, g=0.2,$ initial concentrations: $(25, 23, 25, 30.5, 21, 30)$}
\label{fig:7}
\end{figure}
\section{Excluding Hopf Bifurcations by Fully Algorithmic Methods}
We also looked for Hopf bifurcations in the 3D and 6D models using the software package \textbf{\textsc{QeHopf}}, which uses the method of the semi-algebraic characterization of Hopf bifurcation described in \cite{elkahouiweber} (the package is available on request from the authors). To detect Hopf bifurcation in the models we first generate, from the symbolic description of the respective ordinary differential equation, a first-order formula in the language of ordered fields, where the domain is the real numbers. Specifically, for a parametrized vector field $f(u,x)$ and the autonomous ordinary differential system associated with it this semi-algebraic description
can be expressed by the following first-order formula: \begin{eqnarray} \lefteqn{\exists x (f_{1}(u,x)=0 \,\land\, f_{2}(u,x)=0 \,\land\, \cdots \,\land\, f_{n}(u,x)=0} \nonumber \\ & & \,\land\, a_{n}>0 \,\land\, \Delta_{n-1}(u,x)=0 \,\land\, \Delta_{n-2}(u,x)>0 \,\land\, \cdots \,\land\, \Delta_{1}(u,x)>0). \end{eqnarray} In this formula $a_n$ is $(-1)^{n}$ times the determinant of the Jacobian matrix $Df(u,x)$, and $\Delta_{i}(u,x)$ is the $i^{\rm th}$ Hurwitz determinant of the characteristic polynomial of the same matrix $Df(u,x)$. Constraints on the parameters are added, and for the rational systems we are considering one uses the common numerators (adding the condition of non-vanishing denominators). \textbf{\textsc{QeHopf}}\ is implemented in \textbf{\textsc{Maple}}, and the input for the 3D model is as follows:
\begin{footnotesize} \begin{verbatim} PP:=diff(x(t),t)= s-g*x(t)+b/(1+z(t)) ; QQ:=diff(y(t),t)= s-g*y(t) +b/(1+x(t)); RR:=diff(z(t),t)= s-g*z(t)+b/(1+y(t)); fcns:={x(t), y(t) ,z(t)}; params:=[s, g, b]; paramcondlist:=[s>0, g>0, b>0]; funccondlist:=[x(t)>0, y(t)>0, z(t)>0];
DEHopfexistence({PP,QQ,RR}, fcns, params, funccondlist, paramcondlist); \end{verbatim} \end{footnotesize}
For the 3D model the generated first-order formula is as follows \begin{footnotesize} \begin{verbatim} informula :=
ex (vv3, ex (vv2, ex (vv1, ( ( ( 0 < vv1 and 0 < vv2 ) and 0 < vv3 ) and ( ( ( ( ( ( ( s > 0 and b > 0 and g > 0 and -g*vv1*vv3-g*vv1+s*vv3+b+s = 0 ) and 1+vv3 <> 0 ) and -g*vv1*vv2-g*vv2+s*vv1+b+s = 0 ) and 1+vv1 <> 0 ) and -g*vv2*vv3-g*vv3+s*vv2+b+s = 0 ) and 1+vv2 <> 0 ) and ( ( ( 0 < g^3*vv1^2*vv2^2*vv3^2+2*g^3*vv1^2*vv2^2*vv3+2*g^3*vv1^2*vv2*vv3^2 +2*g^3*vv1*vv2^2*vv3^2+g^3*vv1^2*vv2^2 +4*g^3*vv1^2*vv2*vv3+g^3*vv1^2*vv3^2+4*g^3*vv1*vv2^2*vv3 +4*g^3*vv1*vv2*vv3^2+g^3*vv2^2*vv3^2 +2*g^3*vv1^2*vv2+2*g^3*vv1^2*vv3+2*g^3*vv1*vv2^2+8*g^3*vv1*vv2*vv3 +2*g^3*vv1*vv3^2 +2*g^3*vv2^2*vv3+2*g^3*vv2*vv3^2+g^3*vv1^2+4*g^3*vv1*vv2+4*g^3*vv1*vv3 +g^3*vv2^2 +4*g^3*vv2*vv3+g^3*vv3^2+2*g^3*vv1 +2*g^3*vv2+2*g^3*vv3+b^3+g^3 and 0 < (1+vv2)^2*(1+vv3)^2*(1+vv1)^2 ) and 8*g^3*vv1^2*vv2^2*vv3^2+16*g^3*vv1^2*vv2^2*vv3+16*g^3*vv1^2*vv2*vv3^2 +16*g^3*vv1*vv2^2*vv3^2+8*g^3*vv1^2*vv2^2+32*g^3*vv1^2*vv2*vv3 +8*g^3*vv1^2*vv3^2+32*g^3*vv1*vv2^2*vv3+32*g^3*vv1*vv2*vv3^2 +8*g^3*vv2^2*vv3^2+16*g^3*vv1^2*vv2+16*g^3*vv1^2*vv3+16*g^3*vv1*vv2^2 +64*g^3*vv1*vv2*vv3+16*g^3*vv1*vv3^2+16*g^3*vv2^2*vv3+16*g^3*vv2*vv3^2 +8*g^3*vv1^2+32*g^3*vv1*vv2+32*g^3*vv1*vv3+8*g^3*vv2^2 +32*g^3*vv2*vv3+8*g^3*vv3^2 +16*g^3*vv1+16*g^3*vv2+16*g^3*vv3-b^3+8*g^3 = 0 ) and (1+vv2)^2*(1+vv3)^2*(1+vv1)^2 <> 0 ) ) ) ) ) ) ; \end{verbatim} \end{footnotesize}
The system variables became quantified variables and have been renamed to \verb!vv1!, \verb!vv2!, and \verb!vv3!, and the existential quantification is expressed using the syntax of the package \textbf{\textsc{Redlog}}\ \cite{dolzmannsturm,sturmredlog}, which had been originally driven by the efficient implementation of quantifier elimination based on virtual substitution methods. Applying quantifier elimination to the formula yields in principle a quantifier-free semi-algebraic description of the parameters for which Hopf bifurcation fixed points exist.
If one suspects that there is no Hopf bifurcation fixed point, or one just wants to assert that there is one, then one can apply quantifier elimination to the existential closure of the generated formula. If all variables and parameters are known to be positive, the technique of positive quantifier elimination can be used \cite{sturmweberabdelrahmanelkahoui}. \textbf{\textsc{QeHopf}}\ performs the quantifier elimination with \textbf{\textsc{Redlog}}, which can use \textbf{\textsc{QEPCAD B}}\ \cite{brown} for formula simplification and as a \textit{fallback method}. However, for the 3D model \textbf{\textsc{Redlog}}\ already reduces this formula to the equivalent formula \verb!false!, i.e.\ for no parameters obeying the positivity condition does a Hopf bifurcation fixed point exist (for positive values of the variables). The needed computation time was less than 20\,ms.
For the 6D models the fully algorithmic method was not successful, as already the generation of the first-order formula in \textbf{\textsc{Maple}}\ failed.
\section{Discussion}
Synthetic biology is one of the most rapidly developing fields of biology. Synthetic genetic circuits are of high interest due to their possible applications in biosensing, bioremediation, diagnostics, therapeutics, etc. Genetic oscillators are some of the most studied circuits due to their complexity and the possibility of many different topologies. Building synthetic genetic oscillators with controllable periods and amplitudes would be of great interest to the synthetic biology field as they could for example potentially be used for treatment of diseases related to the circadian cycle.
The experimental validation of complex systems, such as oscillators, can be technically demanding and time consuming. To this day, there have been only a few experimental implementations of synthetic oscillators (\cite{elowitzleibler,tiggesmarquezlagostellingfussenegger}). Hence, mathematical modeling of such systems is highly desirable to reduce the experimental workload. Here, we focus on mathematical modeling of 3-cycle genetic repressilators, which have been extensively studied before. However, our study is focused on models based on non-cooperative transcriptional repressors, meaning that all Hill coefficients are always equal to 1. Different studies have already demonstrated that cooperative binding is necessary to obtain oscillations in repressilator systems (\cite{elowitzleibler,bratsunvolfsontsimringhasty, mullerhofbauerendlerflammwidderschuster, wangjingchen}). Our 3D model confirms that oscillations in such a system are indeed absent. However, a theoretical study by \cite{tsaichoimapomereningtangferrell} has shown that the range of parameters in which the system produces oscillations can be expanded by including positive interactions, facilitated by transcriptional activators. We additionally model two repressilator topologies, involving 3 transcriptional activators, driving transcription of either the next or the previous repressor in the cycle. (Let us mention that Allwright's results cannot be applied to our 6D models.)
What do the general results of formal reaction kinetics offer for the treatment of our models? The differential equations of each of the models can be considered as induced kinetic differential equations of a reversible reaction; therefore the existence of the positive stationary state follows from general results \cite{boroswrexistence, tothnagypapp}, see the details in Appendix \ref{subsec:frk}.
To summarize our mathematical results, we have shown that for all positive values of parameters $b, g, s$ system \eqref{s1} has a single positive stationary point which is a globally asymptotically stable attractor. Furthermore, \eqref{s6} and \eqref{s6sec} have a single stationary state (point $F$ defined by \eqref{F}) in the domain $x_i>0 , \ (i=1,\dots, 6)$, which is a locally asymptotically stable attractor.
Comparing the 3D and 6D models we see that the properties of solutions in the domains where all phase variables are positive are similar. For all three systems there is a unique singular point in these domains, which is a strong attractor. In the 3D system a small overshoot is possible near the steady state, whereas in the first 6D model no oscillations appear near the steady state.
In both 6D models the steady state is an attractor: in both cases all eigenvalues of the steady state have negative real parts, however two eigenvalues are always complex conjugate, so it is possible to observe damping oscillations near the steady state, see Fig. \ref{fig:7}. Thus, the 6D models demonstrate richer dynamics than the 3D models, including the possibility of damped oscillations.
We can also note that these models, like many others arising in the study of biochemical phenomena, exhibit rather simple dynamics. This is somewhat surprising because the models are given by systems of differential equations depending on few parameters, and there are systems which look simpler but exhibit rather complicated, even chaotic, dynamics. It can be a challenging problem to understand the reasons for such simple dynamics. One possible explanation may originate in the fact that the models' stationary states are so closely related to the stationary states of one-linkage-class reversible reactions, as described in Appendix \ref{subsec:frk}.
From the biochemical point of view, the probable reason for the absence of oscillations in the first 6D model is the strength of the activator feedback, which forms a negative feedback loop despite the positive interaction. Nevertheless, different combinations of activators and repressors could result in topologies that produce regular oscillations. Due to the stochasticity of biological systems, stochastic modeling and algorithms could be used to further analyze these topologies.
As to the computational methods: they are based on recent mathematical and algorithmic developments and can be applied to many similar problems frequently arising in biochemical studies. Note that the theory makes it possible to work with simpler polynomials than the original ones, and also that it is not the same to have a polynomial in six variables as to have a polynomial in three variables with three parameters.
\section{Appendix}\label{sec:appendix}
\subsection{On the nonlinear term}\label{subsec:nonlinear}
The term $\frac{k_1}{k_2+z^n}$ in \eqref{eq:goodwin} is (from the point of view of calculations) similar to the one obtained when the Michaelis--Menten kinetics is approximated by the Tikhonov method, or to the Holling-type kinetics which is often used in population dynamics \cite{kisstoth}. Therefore the methods used above may have applications in reaction kinetics and population biology as well. The main difference between this term and the reaction rates usually used is that although this rate is always positive, it is not zero if $z$ or $x$ is zero, violating a requirement quite often assumed \cite[p. 613]{volperthudjaev}.
\subsection{Solving systems of polynomial equations}\label{subsec:A}
We give a short summary on the topics of solving polynomial systems. The interested reader may consult \cite{coxlittleshea,romanovskishafer} for more details.
Let $k[x_1, \ldots, x_n]$ denote the ring of polynomials in $n$ indeterminates with coefficients in the field $k$, which is typically the set $\mathbb{R}$ of real numbers or $\mathbb{C}$ of complex numbers.
The problem of finding solutions to a system of polynomials \begin{eqnarray}\label{e:poly.sys} f_1(x_1,\dots,x_n)&=&0,\nonumber\\ \qquad\quad\ \vdots&&\\ f_m(x_1,\dots,x_n)&=&0\nonumber \end{eqnarray} is a challenging mathematical problem. Such systems often have infinitely many solutions, and it is simply impossible to find them all numerically. Even if system (\ref{e:poly.sys}) has a finite number of solutions, it is still very difficult and often impossible to find all of them
numerically without applying methods of computational algebra.
In fact, no regular methods for solving system (\ref{e:poly.sys}) were known until the mid-sixties of the last century
when Bruno Buchberger \cite{buchberger} invented the theory of Gr\"{o}bner bases, which is now the cornerstone of modern computational algebra. We shall briefly recall the notion of a Gr\"{o}bner basis. Let $I=\langle f_1,f_2, \dots, f_m\rangle$ denote the ideal generated by the polynomials $f_1(x_1,\dots,x_n)$, $\ldots$, $f_m(x_1,\dots,x_n)$, that is, the set of all sums $ \{h_1 f_1+h_2 f_2+\dots + h_m f_m\}, $ where the $h_k$ are arbitrary polynomials.
A Gr\"{o}bner basis of a given ideal $I$ depends on a term ordering of monomials of $k[x_1,\dots, x_n]$. The two most commonly used term orders are lexicographic order (lex) and degree reverse lexicographic order (degrev), defined as follows. Let ${\boldsymbol{\alpha}} = (\alpha_1, \dots, \alpha_n)$ and ${\boldsymbol{\beta}} = (\beta_1, \dots, \beta_n)$ be elements of $\mathbb{N}_0^n$ ($\mathbb{N}_0=\mathbb{N}\cup \{0\}$).
We say that
${\boldsymbol{\alpha}} >_{\rm lex} {\boldsymbol{\beta}}$ with respect to lexicographic order if and only if, reading from left to right,
the first nonzero entry in the $n$-tuple
${\boldsymbol{\alpha}} - {\boldsymbol{\beta}} \in \mathbb{Z}^n$ is positive; we say that
${\boldsymbol{\alpha}} >_{\rm degrev} {\boldsymbol{\beta}}$ with respect to degree reverse lexicographic order if and only if
$|{\boldsymbol{\alpha}}| = \sum_{j=1}^n \alpha_j > |{\boldsymbol{\beta}}| = \sum_{j=1}^n \beta_j$,
or $|{\boldsymbol{\alpha}}| = |{\boldsymbol{\beta}}|$ and, reading from right to left, the first nonzero entry in the $n$-tuple
${\boldsymbol{\alpha}} - {\boldsymbol{\beta}} \in \mathbb{Z}^n$ is negative. For $\gamma=(\gamma_1,\dots,\gamma_n)\in \mathbb{N}_0^n$ let ${\bf x}^{\gamma}$ denote the monomial $x_1^{\gamma_1}x_2^{\gamma_2}\cdots x_n^{\gamma_n}$. Fixing a term order on $k[x_1,\dots,x_n]$, any $f \in k[x_1,\dots,x_n]$ may be written in the \emph{standard form} with respect to that order, that is, \begin{equation}\label{standard} f = a_1 {\bf x}^{\alpha_1} + a_2 {\bf x}^{\alpha_2} + \dots + a_s {\bf x}^{\alpha_s}, \end{equation} where $\alpha_i \ne \alpha_j$ for $i \ne j$ and $1 \le i,j \le s$, and where, with respect to the specified term order, $\alpha_1 > \alpha_2 > \cdots > \alpha_s$.
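The two comparisons translate directly into code on exponent vectors. A minimal Python sketch of the definitions above; the classical pair $x^2yz^2$ versus $xy^3z$, on which the two orders disagree, serves as a check:

```python
def lex_greater(a, b):
    """alpha > beta in lex iff the first nonzero entry of
    alpha - beta, read left to right, is positive."""
    for ai, bi in zip(a, b):
        if ai != bi:
            return ai > bi
    return False

def degrev_greater(a, b):
    """alpha > beta in degrevlex iff |alpha| > |beta|, or the total
    degrees are equal and the first nonzero entry of alpha - beta,
    read right to left, is negative."""
    if sum(a) != sum(b):
        return sum(a) > sum(b)
    for ai, bi in zip(reversed(a), reversed(b)):
        if ai != bi:
            return ai < bi
    return False

# x^2*y*z^2 vs x*y^3*z: lex prefers the former, degrevlex the latter
print(lex_greater((2, 1, 2), (1, 3, 1)))     # True
print(degrev_greater((1, 3, 1), (2, 1, 2)))  # True
```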
The \emph{leading term}\index{term!leading} $LT(f)$ of $f$ is the term
$LT(f) = a_1 {\bf x}^{\alpha_1}$.
Let $f$ and $g$ be polynomials in $k[x_1,\dots,x_n]$ with $LT(f) = a {\bf x}^{\boldsymbol{\alpha}}$ and $LT(g) = b {\bf x}^{\boldsymbol{\beta}}$. The \emph{least common multiple}
of ${\bf x}^{\boldsymbol{\alpha}}$ and ${\bf x}^{\boldsymbol{\beta}}$, denoted $LCM({\bf x}^{\boldsymbol{\alpha}},{\bf x}^{\boldsymbol{\beta}})$, is the monomial ${\bf x}^\gamma = x_1^{\gamma_1} \cdots x_n^{\gamma_n}$ such that $\gamma_j = \max(\alpha_j, \beta_j)$, $1 \le j \le n$, and the \emph{$S$-polynomial} of $f$ and $g$ is the polynomial \[ S(f,g)=\frac{{\bf x}^\gamma}{LT(f)}f -\frac{{\bf x}^\gamma}{LT(g)} g. \]
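For instance, take $f=x^3-2xy$ and $g=x^2y-2y^2+x$ under the lex order with $x>y$, so that $LT(f)=x^3$, $LT(g)=x^2y$ and $LCM(x^3,x^2y)=x^3y$. The $S$-polynomial can be computed with sympy, and the cancellation of the leading terms is visible in the result (a small illustration, not part of the original text):

```python
from sympy import symbols, expand

x, y = symbols('x y')
f = x**3 - 2*x*y          # LT(f) = x**3   (lex, x > y)
g = x**2*y - 2*y**2 + x   # LT(g) = x**2*y
# S(f,g) = (x**3*y/x**3)*f - (x**3*y/(x**2*y))*g = y*f - x*g
S = expand(y*f - x*g)
print(S)  # -x**2: the leading terms cancel
```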
The following algorithm, due to Buchberger \cite{buchberger}, produces a Gr\"{o}bner basis for the ideal $I=\langle f_1,\dots,f_s \rangle \subset k[x_1,\dots,x_n]$. \begin{itemize} \item[Step 1.] $G := \{ f_1,\dots,f_s \}$. \item[Step 2.] For each pair $g_i, g_j \in G$, $i \ne j$, compute the
$S$-polynomial $S(g_i, g_j)$ and the remainder $r_{ij}$ of the division of $S(g_i, g_j)$ by $G$. \item[Step 3.] If all $r_{ij}$ are equal to zero, output $G$; else add all nonzero $r_{ij}$ to $G$ and return to Step 2. \end{itemize} Nowadays, all major computer algebra systems (\textbf{\textsc{Mathematica}}, \textbf{\textsc{Maple}}, \textbf{\textsc{REDUCE}}, \textbf{\textsc{Singular}}, \textbf{\textsc{Macaulay}}\ and many others) have routines to compute Gr\"{o}bner bases.
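For example, sympy's `groebner` routine implements (a refined version of) this algorithm. For the ideal $\langle x^2+y,\, xy-1\rangle$ and the lex order with $x>y$ it returns the reduced basis $\{x+y^2,\ y^3+1\}$ (a small sketch, not part of the original text):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# reduced lex Groebner basis of the ideal <x**2 + y, x*y - 1>
G = groebner([x**2 + y, x*y - 1], x, y, order='lex')
print(list(G.exprs))  # [x + y**2, y**3 + 1]
```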
A Gr\"{o}bner basis $G = \{ g_1, \dots, g_m \}$ is called \emph{reduced}\index{Gr\"obner basis!reduced} if for all $i$, $1 \le i \le m$, the coefficient of the leading term is 1 and
no term of $g_i$ is divisible by any $LT(g_j)$ for $j \ne i$.
It is well known (see e.g. \cite{coxlittleshea}) that system (\ref{e:poly.sys})
has a solution over $\mathbb{C}$ if and only if the reduced Gr\"obner basis $G$ of $\langle f_1,\dots,f_m \rangle$ with respect to any term order on $\mathbb{C}[x_1, \dots, x_n]$ is different from $\{ 1 \}$. Gr\"obner basis theory makes it possible to find all solutions of system (\ref{e:poly.sys}) when the system has only finitely many of them. In that case a Gr\"obner basis with respect to the lexicographic order is always in a ``triangular'' form (like the Gauss row-echelon form in the case of linear systems), which means that one has an equation in a single variable; having solved it, one can substitute the roots into an equation in two variables, solve it, and so on.
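The triangular shape can be seen on a small example. For the system $x^2+y^2-1 = x-y = 0$, a lex basis contains a univariate polynomial in $y$, whose roots are then back-substituted. A sympy sketch (the assertions use ideal membership rather than the exact printed form of the basis):

```python
from sympy import symbols, groebner, solve, sqrt

x, y = symbols('x y')
polys = [x**2 + y**2 - 1, x - y]
G = groebner(polys, x, y, order='lex')

# the lex basis is triangular: some element involves y only
uni = [g for g in G.exprs if g.free_symbols == {y}][0]
ys = solve(uni, y)   # roots of the univariate equation
# back-substitute: here the other basis element gives x = y
print(uni, ys)
```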
For a field $k$ an \emph{affine variety} is a subset of $k^n$ that is the solution set of a system of equations of the form \eqref{e:poly.sys}, where the $f_i$ are polynomials with coefficients in $k$. It is denoted by ${\bf V}(I)$, where $I$ is the ideal generated by $f_1, \ldots, f_m$, $I:=\langle f_1,f_2,\dots,f_m\rangle$. A variety is \emph{irreducible} if it is not the union of finitely many proper subvarieties. Every affine variety $V$ can be decomposed into finitely many irreducible components, that is, $V$ is expressible as \begin{equation}\label{gevd} V = V_1 \cup \dots \cup V_s, \end{equation} where each $V_j$ is irreducible and $V_j \not\subset V_k$ if $j \ne k$; this decomposition is unique up to the ordering of the components $V_j$. Thus to solve \eqref{e:poly.sys} we have
to find the decomposition \eqref{gevd} for $V = {\bf V}(I)$.
The \emph{radical} of an ideal $I$ is the set of polynomials $\sqrt{I}:=\{f:f^p\in I\text{ for some }p\in\mathbb{N}\}$.
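Membership in $\sqrt{I}$ can be tested effectively via the Rabinowitsch trick: $f\in\sqrt{I}$ if and only if the ideal $I+\langle 1-tf\rangle$ in the extended ring $k[x_1,\dots,x_n,t]$ contains $1$, i.e. its reduced Gr\"obner basis is $\{1\}$. A small sympy sketch with $I=\langle x^2\rangle$ (an illustration, not part of the original text):

```python
from sympy import symbols, groebner

x, t = symbols('x t')
# x lies in the radical of <x**2>: the extended ideal is the whole ring
G1 = groebner([x**2, 1 - t*x], t, x, order='lex')
print(list(G1.exprs))  # [1]

# but 1 + x does not: the basis is different from {1}
G2 = groebner([x**2, 1 - t*(1 + x)], t, x, order='lex')
print(list(G2.exprs))
```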
An ideal $I \subset k[x_1,\dots,x_n]$ is called a \emph{primary ideal} if for any pair $f, g \in k[x_1,\dots,x_n]$, $f g \in I$ only if either $f \in I$ or $g^p \in I$ for some $p \in \mathbb{N}$. An ideal $I$ is primary if and only if $\sqrt{I}$ is prime; $\sqrt{I}$ is called the \emph{associated prime ideal of $I$}. A \emph{primary decomposition} of an ideal $I \subset k[x_1,\dots,x_n]$ is a representation of $I$ as a finite intersection of primary ideals $Q_j$, $I = \cap_{j = 1}^s Q_j$\,. The decomposition is called a \emph{minimal} primary decomposition if the associated prime ideals $\sqrt{Q_j}$ are all distinct and $\cap_{i \ne j} Q_i \not \subset Q_j$ for any $j$. A minimal primary decomposition of a polynomial ideal always exists, but it is not necessarily unique.
By the Lasker--Noether Decomposition Theorem, every ideal $I$ in $k[x_1,\dots,x_n]$ has a minimal primary decomposition. All such decompositions have the same number of primary ideals and the same collection of associated prime ideals.
The minimal associated primes of a polynomial ideal $I=\langle f_1,f_2,\dots,f_m\rangle$ can be computed using the algorithm proposed in \cite{giannitragerzacharias}, and the varieties of the minimal associated primes then give the irreducible decomposition of the variety $V(I)$ (that is, the ``solution'' of the system $f_1=f_2=\dots=f_m=0$).
\subsection{Solving Eq. \eqref{semiout}}\label{subsec:B}
Input is system \eqref{semiin}, and the output is its solution \eqref{semiout}. \begin{verbatim} In[20]:= Reduce[{f1 == 0 && f3 == 0 && f5 == 0 && x1 > 0 &&
x3 > 0 && x5 > 0 && s > 0 && g > 0 && b > 0},
{x1, x3, x5, s, b, g}] // FullSimplify
Out[20]= {x1 > 0 && x1 == x3 && x3 == x5 && s > 0 && b > 0 &&
g == s/x1 + b/(1 + x1 + x5)} \end{verbatim}
\subsection{Minimal associate primes}\label{subsec:C}
Minimal associate primes of ideal \eqref{I} defining the ideals $J_1, J_2, J_3$ of the decomposition \eqref{star} are:
\begin{verbatim}
[1]:
_[1]=x3-x5
_[2]=x1-x5
_[3]=2*s*x5+s+b*x5-2*g*x5^2-g*x5 [2]:
_[1]=x1^3*x3+x1^3*x5+x1^3-2*x1^2*x3*x5+x1^2*x5+x1^2+x1*x3^3
-2*x1*x3^2*x5+x1*x3^2-2*x1*x3*x5^2-6*x1*x3*x5-x1*x3+x1*x5^3
-x1*x5+x3^3*x5+x3^3+x3^2+x3*x5^3+x3*x5^2-x3*x5+x5^3+x5^2
_[2]=b*x3^3+b*x3^2+b*x3*x5+b*x3-b*x5^3-2*b*x5^2-b*x5
-g*x1^2*x3^2-2*g*x1^2*x3*x5-2*g*x1^2*x3-g*x1^2*x5^2
-2*g*x1^2*x5-g*x1^2-g*x1*x3^3+g*x1*x3^2*x5-g*x1*x3^2
+g*x1*x3*x5^2-g*x1*x3-g*x1*x5^3-3*g*x1*x5^2-3*g*x1*x5
-g*x1+g*x3^3*x5+2*g*x3^2*x5^2+4*g*x3^2*x5+g*x3^2+g*x3*x5^3
+4*g*x3*x5^2+4*g*x3*x5+g*x3
_[3]=b*x1*x5+b*x1-b*x3^2-b*x3+g*x1^2*x3+g*x1^2*x5+g*x1^2
+g*x1*x3^2+2*g*x1*x3-g*x1*x5^2+g*x1-g*x3^2*x5-g*x3*x5^2
-2*g*x3*x5-g*x5^2-g*x5
_[4]=b*x1*x3+b*x3-b*x5^2-b*x5-g*x1^2*x3-g*x1^2*x5-g*x1^2
+g*x1*x3^2-g*x1*x5^2-2*g*x1*x5-g*x1+g*x3^2*x5+g*x3^2
+g*x3*x5^2+2*g*x3*x5+g*x3
_[5]=b*x1^2+b*x1-b*x3*x5-b*x5+g*x1^2*x3-g*x1^2*x5+g*x1*x3^2
+2*g*x1*x3-g*x1*x5^2-2*g*x1*x5+g*x3^2*x5+g*x3^2-g*x3*x5^2
+g*x3-g*x5^2-g*x5
_[6]=b^2*x3^2+b^2*x3*x5+b^2*x3+b^2*x5^2+2*b^2*x5+b^2
+b*g*x3^2*x5+2*b*g*x3^2-b*g*x3*x5^2+b*g*x3*x5+2*b*g*x3
+b*g*x5^2+2*b*g*x5+b*g+2*g^2*x1*x3^3+2*g^2*x1*x3^2*x5
+3*g^2*x1*x3^2+2*g^2*x1*x3*x5^2+4*g^2*x1*x3*x5+2*g^2*x1*x3
+2*g^2*x1*x5^3+5*g^2*x1*x5^2+4*g^2*x1*x5+g^2*x1
+2*g^2*x3^3*x5+2*g^2*x3^3+4*g^2*x3^2*x5^2+8*g^2*x3^2*x5
+4*g^2*x3^2+2*g^2*x3*x5^3+8*g^2*x3*x5^2+9*g^2*x3*x5
+3*g^2*x3+2*g^2*x5^3+5*g^2*x5^2+4*g^2*x5+g^2
_[7]=2*s*x5+s-b*x1+b*x3+b*x5-2*g*x1*x3-g*x1-g*x3+g*x5
_[8]=2*s*x3+s+b*x1+b*x3-b*x5-2*g*x1*x5-g*x1+g*x3-g*x5
_[9]=2*s*x1+s+b*x1-b*x3+b*x5+g*x1-2*g*x3*x5-g*x3-g*x5
_[10]=s*b+b^2*x1+b^2*x3+b^2*x5+2*b^2+b*g*x1+b*g*x3+b*g*x5
+2*b*g+2*g^2*x1^2*x3+2*g^2*x1^2*x5+2*g^2*x1^2+2*g^2*x1*x3^2
+4*g^2*x1*x3*x5+6*g^2*x1*x3+2*g^2*x1*x5^2+6*g^2*x1*x5
+4*g^2*x1+2*g^2*x3^2*x5+2*g^2*x3^2+2*g^2*x3*x5^2
+6*g^2*x3*x5+4*g^2*x3+2*g^2*x5^2+4*g^2*x5+2*g^2
_[11]=s^2+s*g-b^2*x1-b^2*x3-b^2*x5-b^2-b*g*x1-b*g*x3
-b*g*x5-b*g-2*g^2*x1^2*x3-2*g^2*x1^2*x5-2*g^2*x1^2
-2*g^2*x1*x3^2-4*g^2*x1*x3*x5-5*g^2*x1*x3-2*g^2*x1*x5^2
-5*g^2*x1*x5-3*g^2*x1-2*g^2*x3^2*x5-2*g^2*x3^2
-2*g^2*x3*x5^2-5*g^2*x3*x5-3*g^2*x3-2*g^2*x5^2-3*g^2*x5-g^2 [3]:
_[1]=g
_[2]=b
_[3]=s \end{verbatim}
\subsection{Checking the conditions of Allwright's theorem in the 3D case}\label{subsec:D}
Here we strongly rely on the paper \cite{allwright}: we use the definitions and notations of that paper.
His equations (5) specialize into our Eq. \eqref{s1} with the following cast: $n=3$, and for $j=1,2,3$: $T_j=0$, $h_j(x)=s+\frac{b}{1+x}$, $k_j(x)=-gx$. The quantities and functions defined in this way fulfil conditions (6)--(8) of his paper. As the inverse of $k$ is $y\mapsto-y/g$, the function $\Phi$ in (9) can be calculated as \begin{equation*} \frac{b^2 g+u \left(-b g^2+2 b g s-g^2 s+2 g s^2-s^3\right)-b g^2+3 b g s-b s^2-g^2 s+2 g s^2-s^3}{g u \left(-b g+g^2-2 g s+s^2\right)+g \left(-2 b g+b s+g^2-2 g s+s^2\right)}. \end{equation*} The derivative of $\Phi$ is negative for nonnegative arguments $u$, in accordance with the fact that the function is decreasing; thus we have Case I in the notation of the paper. Further---lengthy---calculations show that the equation $\Phi(\Phi(u))=u$ has one positive (and one negative) real root: \begin{equation} u_1=\frac{-g+s+\sqrt{(g+s)^2+4bg}}{2g}>0,\quad u_2=\frac{-g+s-\sqrt{(g+s)^2+4bg}}{2g}<0, \end{equation} therefore case (i) of Theorem 1 of the paper applies, stating the \emph{global} asymptotic stability of the unique stationary point.
\subsection{Realizations with reversible reactions}\label{subsec:frk}
Consider the equation \eqref{eq:st6D1} for the stationary points of the first 6D model: \begin{eqnarray}
s + b x_1 - g x_1 + s x_1 - g x_1^2 + s x_5 - g x_1 x_5 &=& 0 \nonumber\\
s + s x_1 + b x_3 - g x_3 + s x_3 - g x_1 x_3 - g x_3^2 &=& 0 \label{eq:st6D1}\\
s + s x_3 + b x_5 - g x_5 + s x_5 - g x_3 x_5 - g x_5^2 &=& 0.\nonumber \end{eqnarray} Let us note that the mass action type induced kinetic differential equation of the reaction in Fig. \ref{fig:sixd1} has right-hand sides exactly equal to the left-hand sides of the above equations if the reaction rate coefficients are chosen appropriately. Therefore, based on the results by Orlov and Rozonoer \cite{orlovrozonoer2,tothnagypapp} (or using the recent generalization by Boros \cite{boroswrexistence}), one can conclude that the reaction has a positive stationary point, and thus the original (first) 6D model has one as well. \begin{figure}
\caption{Feinberg--Horn--Jackson graph of a reaction leading to the stationary point of the first 6D model}
\label{fig:sixd1}
\end{figure}
The same argument can be applied in the case of the other two models.
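Substituting equal coordinates $x_1=x_3=x_5=x$ into \eqref{eq:st6D1} reduces all three left-hand sides to the same expression, which vanishes exactly when $g = s/x + b/(1+2x)$, in agreement with the Reduce output of Appendix \ref{subsec:B}. A sympy check (a sketch, not part of the original derivation):

```python
from sympy import symbols, simplify

s, b, x = symbols('s b x', positive=True)
g = s/x + b/(1 + 2*x)   # the stationary relation with x1 = x3 = x5 = x

# left-hand side of each equation of the system in the symmetric case
lhs = s + b*x - g*x + s*x - g*x**2 + s*x - g*x**2
print(simplify(lhs))  # 0
```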
\end{document}
\begin{document}
\title[Some new theorems in plane geometry]{Some new theorems in plane geometry}
\subjclass[2010]{51M04, 51N20}
\keywords{Plane geometry, Analytic geometry}
\author[Alexander Skutin]{Alexander Skutin}
\maketitle
\section{Introduction}
In this article we present some ideas and a number of new theorems in plane geometry.
\section{Deformation of equilateral triangle}
\subsection{Deformation principle for equilateral triangle} If some points associated with a triangle lie on a circle (or a line), or coincide, in the case of an equilateral triangle, then in the general case of an arbitrary triangle they are connected by some natural relations. Thus, triangle geometry can be seen as a deformation of the geometry of the equilateral triangle.
This principle partially explains why there exist so many relations between the Kimberling centers $X_i$ (see \cite{Kim} for the definition of Kimberling centers).
\subsection{First example of application} Consider an equilateral triangle $ABC$ with center $O$ and let $P$ be an arbitrary point on the circle $(ABC)$. Then we can imagine that $O$ is the first Fermat point of the triangle $ABC$ and that $P$ is the second Fermat point of the triangle $ABC$. Consider the circle $\omega$ with center at $P$ passing through the point $O$. Let $X$, $Y$ be the intersection points of the circles $(ABC)$ and $\omega$. Then it is easy to see that the line $XY$ passes through the midpoint of the segment $OP$. Thus, we can guess that this fact holds in general for an arbitrary triangle $ABC$ and its first and second Fermat points $F_1$, $F_2$. Thus, we obtain the following theorem.
\begin{theorem}
Consider triangle $ABC$ with the first Fermat point $F_1$ and the second Fermat point $F_2$. Consider a circle $\omega$ with center at $F_2$ and radius $F_2F_1$. Then the radical line of circles $\omega$, $(ABC)$ goes through the midpoint of the segment $F_1F_2$.
\end{theorem}
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \definecolor{ffwwqq}{rgb}{1.0,0.4,0.0} \definecolor{ffqqtt}{rgb}{1.0,0.0,0.2} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw(-1.0326794919243112,1.1857219549304678) circle (1.951751350283466cm); \draw(-1.9456925538575303,2.9107554276702468) circle (1.9517513502834662cm); \draw [dash pattern=on 3pt off 3pt] (-2.98310883266206,1.2575461856791745)-- (0.004736786880218679,2.83893119692154); \draw (-1.9456925538575303,2.9107554276702468)-- (-1.0326794919243112,1.1857219549304678); \draw (-2.74,0.24)-- (-0.9980384757729328,3.1371658647914034); \draw (-0.9980384757729328,3.1371658647914034)-- (0.64,0.18); \draw (-2.74,0.24)-- (0.64,0.18); \draw (4.06,0.54)-- (5.78,3.38); \draw (5.78,3.38)-- (7.58,0.28); \draw (4.06,0.54)-- (7.58,0.28); \draw(5.891049406357717,1.3718996553044809) circle (2.01116855708781cm); \draw(4.6638432799308625,3.0499539049055175) circle (1.9325894814553557cm); \draw [dash pattern=on 3pt off 3pt] (3.8818627124472997,1.2826371546001398)-- (6.585034591306983,3.2595393264519172); \draw (4.6638432799308625,3.0499539049055175)-- (5.720051721346142,1.4315207638351217); \begin{scriptsize} \draw [fill=ffqqtt] (-2.74,0.24) circle (1.5pt); \draw [fill=ffqqtt] (0.64,0.18) circle (1.5pt); \draw [fill=ffqqtt] (-0.9980384757729328,3.1371658647914034) circle (1.5pt); \draw [fill=ffwwqq] (-1.9456925538575303,2.9107554276702468) circle (1.5pt); \draw[color=ffwwqq] (-1.8,3.16) node {$P$}; \draw [fill=ffwwqq] (-1.0326794919243112,1.1857219549304678) circle (1.5pt); \draw[color=ffwwqq] (-0.9199999999999999,1.1199999999999999) node {$O$}; \draw [fill=ffffff] (-2.98310883266206,1.2575461856791745) circle (1.5pt); \draw [fill=ffffff] (0.004736786880218679,2.83893119692154) circle (1.5pt); \draw [fill=ffqqtt] (4.06,0.54) circle (1.5pt); \draw [fill=ffqqtt] (5.78,3.38) circle (1.5pt); \draw [fill=ffqqtt] (7.58,0.28) circle (1.5pt); \draw [fill=ffwwqq] (5.720051721346142,1.4315207638351217) circle (1.5pt); \draw[color=ffwwqq] (5.96,1.44) node {$F_1$}; \draw [fill=ffwwqq] (4.6638432799308625,3.0499539049055175) circle (1.5pt); \draw[color=ffwwqq] (4.84,3.3400000000000003) node {$F_2$}; \draw 
[fill=ffffff] (3.8818627124472997,1.2826371546001398) circle (1.5pt); \draw [fill=ffffff] (6.585034591306983,3.2595393264519172) circle (1.5pt); \draw [fill=ffffff] (5.1919475006385,2.240737334370321) circle (1.5pt); \draw [fill=ffffff] (-1.4891860228909208,2.0482386913003574) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
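The theorem above admits a quick numerical check. The Python sketch below (an illustration for one sample triangle, not a proof) builds the Fermat points by the classical construction, erecting equilateral triangles externally (for $F_1$) or internally (for $F_2$) on the sides, and compares the powers of the midpoint of $F_1F_2$ with respect to the two circles; equal powers mean the midpoint lies on their radical line:

```python
import math

def apex(P, Q, R, external=True):
    # apex of the equilateral triangle erected on PQ; "external"
    # means on the opposite side of line PQ from the point R
    mx, my = (P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    h = math.sqrt(3) / 2
    c1 = (mx - h * dy, my + h * dx)
    c2 = (mx + h * dy, my - h * dx)
    far, near = (c1, c2) if math.dist(c1, R) > math.dist(c2, R) else (c2, c1)
    return far if external else near

def line_intersection(P1, P2, Q1, Q2):
    # intersection of lines P1P2 and Q1Q2 via Cramer's rule
    a1, b1 = P2[1] - P1[1], P1[0] - P2[0]
    c1 = a1 * P1[0] + b1 * P1[1]
    a2, b2 = Q2[1] - Q1[1], Q1[0] - Q2[0]
    c2 = a2 * Q1[0] + b2 * Q1[1]
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def fermat_point(A, B, C, second=False):
    ext = not second
    return line_intersection(A, apex(B, C, A, ext), B, apex(C, A, B, ext))

def circumcircle(A, B, C):
    # circumcenter from the two perpendicular-bisector equations
    O = line_intersection(
        (0, 0), (0, 0),  # placeholder, replaced below
    ) if False else None
    a1, b1 = 2 * (B[0] - A[0]), 2 * (B[1] - A[1])
    c1 = B[0]**2 + B[1]**2 - A[0]**2 - A[1]**2
    a2, b2 = 2 * (C[0] - A[0]), 2 * (C[1] - A[1])
    c2 = C[0]**2 + C[1]**2 - A[0]**2 - A[1]**2
    det = a1 * b2 - a2 * b1
    O = ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
    return O, math.dist(O, A)

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
F1 = fermat_point(A, B, C)
F2 = fermat_point(A, B, C, second=True)
M = ((F1[0] + F2[0]) / 2, (F1[1] + F2[1]) / 2)
O, R = circumcircle(A, B, C)
# powers of M w.r.t. omega (center F2, radius |F2F1|) and w.r.t. (ABC)
pow_omega = math.dist(M, F2)**2 - math.dist(F1, F2)**2
pow_circ = math.dist(M, O)**2 - R**2
print(pow_omega, pow_circ)  # equal, so M is on the radical line
```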
\subsection{Fermat points}
\begin{theorem}
Consider triangle $ABC$ with the first Fermat point $F_1$ and the second Fermat point $F_2$. Consider a point $A'$ on $BC$ such that $F_2A'$ is parallel to $AF_1$. Similarly define $B'$, $C'$. Then $A'$, $B'$, $C'$ lie on a line which passes through the midpoint of $F_1F_2$.
\end{theorem}
\begin{theorem}
Given triangle $ABC$ with the first Fermat point $F_1$ and the second Fermat point $F_2$. Let $F_A$ be the second Fermat point of $BCF_1$, similarly define $F_B$, $F_C$.
\begin{enumerate}
\item Reflect $F_A$ wrt line $AF_1$ and get the point $F_A'$, similarly define the points $F_B'$, $F_C'$. Then $F_2$ lies on the circles $(F_AF_BF_C)$, $(F_A'F_B'F_C')$.
\item Consider the isogonal conjugates $F_A^A$, $F_A^B$, $F_A^C$ of $F_A$ wrt $BCF_1$, $ACF_1$, $ABF_1$ respectively. Then $\angle F_A^BF_A^AF_A^C = \pi/3$.
\item Triangles $ABC$ and $F_A^AF_B^BF_C^C$ are perspective.
\end{enumerate}
\end{theorem}
\begin{theorem}
Let $F_1$, $F_2$ be the first and the second Fermat points for $ABC$. Consider an alphabet with the letters $a$, $b$, $c$. Consider the set $\mathcal{W}$ of all words in this alphabet, and let the function $|\cdot|$ denote the length of a word in $\mathcal{W}$. Inductively define $F_A^{(\omega)}$, $F_B^{(\omega)}$, $F_C^{(\omega)}$:
\begin{enumerate}[(i)]
\item Let $F_A^{(\emptyset)}= F_1$, similarly for $F_B^{(\emptyset)}$ and $F_C^{(\emptyset)}$.
\item Let $F_A^{(\omega a)}$ be the second Fermat point of $BCF_A^{(\omega)}$, same for $F_B^{(\omega a)}$, $F_C^{(\omega a)}$.
\item By definition, let $F_A^{(\omega b)}$ be a reflection of the point $F_A^{(\omega)} $ wrt side $BC$. Similarly define the points $F_B^{(\omega b)}$, $F_C^{(\omega b)}$.
\item By definition, let $F_A^{(\omega c)}$ be a reflection of the point $F_A^{(\omega)}$ wrt midpoint of $BC$. Similarly define the points $F_B^{(\omega c)}$, $F_C^{(\omega c)}$.
\end{enumerate}
Then we have that :
\begin{enumerate}
\item If $|\omega|$ is odd, then the points $F_2$, $F_A^{(\omega)}$, $F_B^{(\omega)}$, $F_C^{(\omega)}$ lie on the same circle.
\item If $|\omega|$ is even, then the points $F_1$, $F_A^{(\omega)}$, $F_B^{(\omega)}$, $F_C^{(\omega)}$ lie on the same circle.
\end{enumerate}
\end{theorem}
\begin{theorem}
Given a triangle $ABC$ with the first Fermat point $F$. Let $F_A$ be the second Fermat point of $FBC$. Similarly define $F_B$, $F_C$. Let $F_B^{A}$ be the first Fermat point of $FF_BA$ and $F_C^{A}$ be the first Fermat point of $FF_CA$. Let the two tangents from $F_B^{A}$, $F_C^{A}$ to the circle $(F_B^{A}F_C^{A}A)$ meet at $X_A$. Similarly define $X_B$, $X_C$. Then $F$, $X_A$, $X_B$, $X_C$ are collinear.
\end{theorem}
\subsection{Miquel points}
\begin{theorem}
Consider any triangle $ABC$ and any point $X$. Let $P= AX\cap BC$, $Q = BX\cap AC$, $R = CX\cap AB$. Let $M_Q$ be the Miquel point of the lines $XA$, $XC$, $BA$, $BC$; similarly define the points $M_P$, $M_R$. Let $l_Q$ be the Simson line of the point $M_Q$ wrt the triangle $XPC$ (it is the same for the triangles $XRA$, $APB$, $RBC$). Similarly define the lines $l_P$, $l_R$. Let $P'Q'R'$ be the midpoint triangle of $PQR$. Let $P'Q'\cap l_R = R^*$; similarly define the points $P^*$, $Q^*$. Then $P'P^*$, $Q'Q^*$, $R'R^*$ are concurrent.
\end{theorem}
\subsection{Steiner lines}
\begin{theorem}
Consider any triangle $ABC$ and any point $P$. Let $\mathcal{L}_{A}$ be the Steiner line for the lines $AB$, $AC$, $PB$, $PC$; similarly define the lines $\mathcal{L}_{B}$, $\mathcal{L}_{C}$. Let the lines $\mathcal{L}_{A}$, $\mathcal{L}_{B}$, $\mathcal{L}_{C}$ form a triangle $\triangle$. Then the circumcircle of $\triangle$, the pedal circle of $P$ wrt $ABC$ and the nine-point circle of $ABC$ pass through the same point.
\end{theorem}
\subsection{Isogonal conjugations}
\begin{theorem}
Let the incircle of $ABC$ touch the sides $BC$, $CA$, $AB$ at $A_1$, $B_1$, $C_1$ respectively. Let the $A$, $B$, $C$~-- excircles of $ABC$ touch the sides $BC$, $CA$, $AB$ at $A_2$, $B_2$, $C_2$ respectively. Let $A^{1}$ be the isogonal conjugate of the point $A$ wrt $A_1B_1C_1$. Similarly define the points $B^{1}$ and $C^{1}$. Let $A^{2}$ be the isogonal conjugate of the point $A$ wrt $A_2B_2C_2$. Similarly define the points $B^{2}$ and $C^{2}$. Then we have that:
\begin{enumerate}
\item Lines $A_2A^{1}$, $B_2B^{1}$, $C_2C^{1}$ are concurrent at $X_2^{1}$.
\item Lines $A_1A^{2}$, $B_1B^{2}$, $C_1C^{2}$ are concurrent at $X_1^{2}$.
\item Points $A^{1}$, $B^{1}$, $C^{1}$, $A^{2}$, $B^{2}$, $C^{2}$, $X_2^{1}$, $X_1^{2}$ lie on the same conic.
\end{enumerate}
\end{theorem}
\subsection{Deformation principle for circles} If in the particular case of an equilateral triangle some two circles coincide, then in the general case the radical line of these two circles has some natural relations with respect to the base triangle.
\begin{theorem}
Consider a triangle $ABC$ and its centroid $G$. Let the circumcenters of the triangles $ABG$, $CBG$, $AGC$ form a triangle with circumcircle $\omega$. Then the circumcenter of the pedal triangle of $G$ wrt $ABC$ lies on the radical line of $(ABC)$ and $\omega$.
\end{theorem}
\begin{theorem}
Consider triangle $ABC$ with the first Fermat point $F$ and let $O_A$, $O_B$, $O_C$ be the circumcenters of $FBC$, $FAC$, $FAB$ respectively. Let $X_A$ be a reflection of $O_A$ wrt $BC$, similarly define the points $X_B$, $X_C$. Let $l$ be the radical line of $(O_AO_BO_C)$ and $(ABC)$. Then $F$ is the Miquel point of the lines $X_AX_B$, $X_BX_C$, $X_AX_C$, $l$.
\end{theorem} \begin{commentary} In all theorems from this section consider the case of an equilateral triangle.
\end{commentary}
\section{Nice fact about triangle centers}
For any $i, j\in\mathbb{N}$, if some fact is true for the Kimberling center $X_i$, then we can try to transport the same construction from the triangle center $X_i$ to the center $X_j$ and look at the nice properties of the resulting configuration.
\begin{theorem}
Given a triangle $ABC$. Let the $A$~-- excircle be tangent to $BC$ at $A_1$. Similarly define $B_1$, $C_1$. Let $A_2B_2C_2$ be the midpoint triangle of $ABC$. Consider the intersection $N$ of the perpendiculars to $AB$, $BC$, $CA$ from $C_1$, $A_1$, $B_1$ respectively. Let the circle $(A_1B_1C_1)$ meet the sides $AB$, $BC$ a second time at $C_3$, $A_3$ respectively. Let $X= A_3C_3\cap B_2C_2$. Then $AX\perp CN$.
\end{theorem}
\begin{commentary}
Consider the case when $N$ is the incenter of $ABC$.
\end{commentary}
\begin{theorem}
Let the incircle of $ABC$ touch the sides $AB$, $BC$, $CA$ at $C'$, $A'$, $B'$ respectively. Reflect $B$, $C$ wrt the line $AA'$ and get the points $B_1$, $C_1$ respectively. Let $A^*= CB_1\cap BC_1$; similarly define the points $B^*$, $C^*$. Then the points $A^*$, $B^*$, $C^*$, $I$, $G$ lie on the same circle, where $I$ is the incenter and $G$ is the Gergonne point of $ABC$.
\end{theorem}
\begin{commentary}
Consider the case when $AA'$, $BB'$, $CC'$ pass through the first Fermat point.
\end{commentary}
\begin{theorem}
Let the incircle of $ABC$ touch the sides $AB$, $BC$, $CA$ at $C'$, $A'$, $B'$ respectively. Reflect the point $A'$ wrt the lines $BB'$, $CC'$ and get the points $A_B$, $A_C$ respectively. Let $A^*= A_BC'\cap A_CB'$; similarly define the points $B^*$, $C^*$. Then the incenter of $ABC$ coincides with the circumcenter of $A^*B^*C^*$.
\end{theorem}
\section{Construction of midpoint analog}
\begin{definition}
For any pairs of points $A$, $B$ and $C$, $D$ denote $\mathcal{M}(AB, CD)$ as the Miquel point of the complete quadrilateral formed by the four lines $AC$, $AD$, $BC$, $BD$.
\end{definition}
\begin{definition}
For any point $X$ and a segment $YZ$, denote by $\mathcal{M}(X, YZ)$ the point $P$ such that the circles $(PXY)$ and $(PXZ)$ are tangent to the segments $XZ$, $XY$ at $X$.
\end{definition}
For any two segments $AB$ and $CD$, the point $\mathcal{M}(AB, CD)$ can be seen as a midpoint between the two segments $AB$, $CD$. Also we can consider a segment $AB$ and a point $C$ and regard the point $\mathcal{M}(C, AB)$ as a midpoint between the point $C$ and the segment $AB$.
\begin{commentary}
In the case when $A=B$ and $C=D$, both points $\mathcal{M}(C, AB)$ and $\mathcal{M}(AB, CD)$ are the midpoint of $AC$.
\end{commentary}
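The Miquel point of the first definition above can be computed numerically: the circumcircles of the four triangles cut out by the lines $AC$, $AD$, $BC$, $BD$ pass through a single point. A small Python sketch (the coordinates are chosen arbitrarily for illustration):

```python
import itertools
import math

def line(P, Q):
    # line through P and Q as (a, b, c) with a*x + b*y = c
    a, b = Q[1] - P[1], P[0] - Q[0]
    return a, b, a * P[0] + b * P[1]

def meet(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def circumcircle(P, Q, R):
    # circumcenter as the meet of two perpendicular bisectors
    l1 = (2 * (Q[0] - P[0]), 2 * (Q[1] - P[1]),
          Q[0]**2 + Q[1]**2 - P[0]**2 - P[1]**2)
    l2 = (2 * (R[0] - P[0]), 2 * (R[1] - P[1]),
          R[0]**2 + R[1]**2 - P[0]**2 - P[1]**2)
    O = meet(l1, l2)
    return O, math.dist(O, P)

def circle_meet(O1, r1, O2, r2):
    # the two intersection points of two properly intersecting circles
    dx, dy = O2[0] - O1[0], O2[1] - O1[1]
    d = math.hypot(dx, dy)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    px, py = O1[0] + a * dx / d, O1[1] + a * dy / d
    return [(px - h * dy / d, py + h * dx / d),
            (px + h * dy / d, py - h * dx / d)]

A, B, C, D = (0.0, 0.0), (4.0, 0.0), (1.0, 2.0), (3.0, 1.0)
lines = [line(A, C), line(A, D), line(B, C), line(B, D)]
# circumcircles of the four triangles formed by triples of the lines
circles = []
for trip in itertools.combinations(lines, 3):
    pts = [meet(u, v) for u, v in itertools.combinations(trip, 2)]
    circles.append(circumcircle(*pts))
# of the two meets of the first two circles, the Miquel point is the
# one lying on all four circles
cand = circle_meet(circles[0][0], circles[0][1], circles[1][0], circles[1][1])
miquel = min(cand, key=lambda P: sum(abs(math.dist(P, O) - r) for O, r in circles))
residual = sum(abs(math.dist(miquel, O) - r) for O, r in circles)
print(miquel, residual)  # residual is ~0
```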
\begin{theorem}
Let segments $P_AQ_A$, $P_BQ_B$, $P_CQ_C$, $P_DQ_D$ be given. Let $\omega_D$ be the circumcircle of $\mathcal{M}(P_AQ_A, P_BQ_B)\mathcal{M}(P_AQ_A, P_CQ_C)\mathcal{M}(P_CQ_C, P_BQ_B)$. Similarly define the circles $\omega_A$, $\omega_B$, $\omega_C$. Then the circles $\omega_A$, $\omega_B$, $\omega_C$, $\omega_D$ intersect at the same point.
\end{theorem}
\begin{theorem}[\textbf{Three nine-point circles intersect}]
Let a circle $\omega$ and points $P_A$, $Q_A$, $P_B$, $Q_B$, $P_C$, $Q_C$ on it be given. Let $A'= \mathcal{M}(P_BQ_B, P_CQ_C)$; similarly define the points $B'$, $C'$. Consider the points $A_1= P_BP_C\cap Q_BQ_C$, $B_1 = P_AP_C\cap Q_AQ_C$, $C_1 = P_AP_B\cap Q_AQ_B$. Let the lines $A_1A'$, $B_1B'$, $C_1C'$ form a triangle with nine-point circle $\omega_1$. Then the circles $\omega_1$, $(A'B'C')$, $(A_1B_1C_1)$ pass through the same point.
\end{theorem}
\begin{theorem}
Consider a triangle $ABC$. Let $P$, $Q$ be points such that $PP'\parallel QQ'$, where $P$, $P'$ and $Q$, $Q'$ are pairs of isogonally conjugate points wrt the triangle $ABC$. Let $P_A$ be the second intersection point of the line $AP'$ with the circle $(BCP')$. Similarly define the points $P_B$, $P_C$. Let $Q_A$ be the second intersection point of the line $AQ'$ with the circle $(BCQ')$. Similarly define the points $Q_B$, $Q_C$. Let $Q_AP_B\cap Q_BP_A = R_C$, and similarly $Q_AP_C\cap Q_CP_A = R_B$ and $Q_BP_C\cap Q_CP_B = R_A$. Then we have that:
\begin{enumerate}
\item Circles $(\mathcal{M}(Q_AP_A , Q_BP_B)\mathcal{M}(Q_CP_C , Q_BP_B)\mathcal{M}(Q_AP_A , Q_CP_C))$, $$(\mathcal{M}(Q_AP_A , Q_BP_B)\mathcal{M}(Q_AP_A , QP)\mathcal{M}(QP , Q_BP_B))$$ and $(R_AR_BR_C)$ pass through the same point.
\item Lines $AR_A$, $BR_B$, $CR_C$ intersect at the same point which lies on the circle $$(\mathcal{M}(Q_AP_A , QP)\mathcal{M}(Q_BP_B , QP)\mathcal{M}(QP , Q_CP_C))$$
\end{enumerate} \end{theorem} \begin{commentary}
Consider the case when $P = Q$ is the incenter of $ABC$.
\end{commentary}
\begin{theorem}
Let $P$ and $Q$ be two isogonally conjugate points wrt $ABC$. Let $O_A^P$, $O_B^P$, $O_C^P$ be the circumcenters of the triangles $BCP$, $CAP$, $ABP$ respectively. Similarly let $O_A^Q$, $O_B^Q$, $O_C^Q$ be the circumcenters of the triangles $BCQ$, $CAQ$, $ABQ$ respectively. Let $M_A = \mathcal{M}(O_B^PO_B^Q , O_C^PO_C^Q)$; similarly define the points $M_B$, $M_C$. Let $O$ be the circumcenter of $ABC$. Then we have that:
\begin{enumerate}
\item Circles $(OM_AA)$, $(OM_BB)$, $(OM_CC)$ are coaxial.
\item Let $A^{\triangle}=\mathcal{M}(PQ , O_A^PO_A^Q)$, $B^{\triangle}=\mathcal{M}(PQ , O_B^PO_B^Q)$ and $C^{\triangle}=\mathcal{M}(PQ , O_C^PO_C^Q)$. Then $A^{\triangle}$ lies on $(OM_AA)$.
\item Points $A^{\triangle}$, $B^{\triangle}$, $C^{\triangle}$, $M_A$, $M_B$, $M_C$ lie on the same circle.
\end{enumerate}
\end{theorem}
\section{Deformation of segment into a conic}
\subsection{Deformation principle for conics} In some statements we can replace some segment by a conic.
\begin{theorem}
Consider a circle $\omega$ and two conics $\mathcal{K}_1$, $\mathcal{K}_2$ which are tangent to $\omega$ at four points $P_1$, $Q_1$, $P_2$, $Q_2$, where $P_1$, $Q_1$ lie on $\mathcal{K}_1$ and $P_2$, $Q_2$ lie on $\mathcal{K}_2$. Consider the two external tangents to $\mathcal{K}_1$, $\mathcal{K}_2$, which intersect at the point $C$. Let the circles $(CP_1Q_1)$ and $(CP_2Q_2)$ intersect a second time at $E$. Then $\angle CEO =\pi/2$, where $O$ is the center of $\omega$.
\end{theorem}
\begin{theorem}
Consider a circle $\omega$ and two conics $\mathcal{K}_1$, $\mathcal{K}_2$ which are tangent to $\omega$ at four points. Let $\mathcal{K}_1$ have foci $F_1$, $F_2$ and $\mathcal{K}_2$ have foci $F_3$, $F_4$. Let $l_1$, $l_2$ be the two external tangents to $\mathcal{K}_1$, $\mathcal{K}_2$. Let $Y= F_1F_4\cap F_2F_3$ and let the circles $(F_1YF_3)$, $(F_2YF_4)$ meet a second time at $X$. Consider the point $Z= l_1\cap l_2$ (see the picture below). Then $X$, $Y$, $Z$ are collinear.
\end{theorem}
\definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \definecolor{ffxfqq}{rgb}{1.0,0.4980392156862745,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw(0.21764574025017722,-3.4143179245217703) circle (3.5851591753888847cm); \draw [rotate around={63.110919658012214:(-1.6184878332753352,-2.483234159309076)}] (-1.6184878332753352,-2.483234159309076) ellipse (2.826149470384081cm and 1.015214678769174cm); \draw [rotate around={-65.73472634378953:(2.896376658748347,-2.206778542255469)}] (2.896376658748347,-2.206778542255469) ellipse (1.6280316690209338cm and 0.6468347049734543cm); \draw (-1.051262947548442,-1.0823766399042454) node[anchor=north west] {$\mathcal{K}_1$}; \draw (2.71531228882795,-1.7992776667674257) node[anchor=north west] {$\mathcal{K}_2$}; \draw (0.496332919965725,0.6814274738067541) node[anchor=north west] {$\omega$}; \draw (-2.8113406888643215,-4.835586609491031)-- (2.2823918935476546,-0.8447525924738806); \draw (3.5103614239490377,-3.5688044920370543)-- (-0.4256349776863491,-0.1308817091271236); \draw (9.079635740061052,-2.509921792687575)-- (-0.35390814946863464,0.04390349832076328); \draw (-2.4796096589712375,-4.998127766278186)-- (9.079635740061052,-2.509921792687575); \draw [dash pattern=on 3pt off 3pt] (0.3977068654526339,-1.5235055057251057)-- (9.079635740061052,-2.509921792687575); \draw [shift={(0.4701783844458631,-4.804372710432985)}] plot[domain=2.6429194168184713:3.6029264988446057,variable=\t]({1.0*3.281667523682577*cos(\t r)+-0.0*3.281667523682577*sin(\t r)},{0.0*3.281667523682577*cos(\t r)+1.0*3.281667523682577*sin(\t r)}); \draw [shift={(0.47017838444584836,-4.804372710432978)}] plot[domain=1.5928818703158307:2.642919416818471,variable=\t]({1.0*3.2816675236825605*cos(\t r)+-0.0*3.2816675236825605*sin(\t r)},{0.0*3.2816675236825605*cos(\t r)+1.0*3.2816675236825605*sin(\t r)}); \draw [shift={(0.4701783844458578,-4.804372710432995)}] plot[domain=0.6415567997601213:1.5928818703158334,variable=\t]({1.0*3.281667523682578*cos(\t r)+-0.0*3.281667523682578*sin(\t r)},{0.0*3.281667523682578*cos(\t r)+1.0*3.281667523682578*sin(\t r)}); \draw [shift={(0.4701783844458389,-4.804372710432986)}] 
plot[domain=-0.45416920282476436:0.6415567997601157,variable=\t]({1.0*3.281667523682587*cos(\t r)+-0.0*3.281667523682587*sin(\t r)},{0.0*3.281667523682587*cos(\t r)+1.0*3.281667523682587*sin(\t r)}); \draw [shift={(0.9963786156781685,-0.22986258405152316)}] plot[domain=2.723428314916651:4.9167687559035045,variable=\t]({1.0*1.425454269106613*cos(\t r)+-0.0*1.425454269106613*sin(\t r)},{0.0*1.425454269106613*cos(\t r)+1.0*1.425454269106613*sin(\t r)}); \draw [shift={(0.996378615678169,-0.2298625840515243)}] plot[domain=-1.3664165512760817:0.06765800131192408,variable=\t]({1.0*1.4254542691066117*cos(\t r)+-0.0*1.4254542691066117*sin(\t r)},{0.0*1.4254542691066117*cos(\t r)+1.0*1.4254542691066117*sin(\t r)}); \begin{scriptsize} \draw [fill=ffxfqq] (-0.4256349776863491,-0.1308817091271236) circle (1.5pt); \draw[color=ffxfqq] (-0.6074670737759971,-0.2289230364956972) node {$F_1$}; \draw [fill=ffxfqq] (-2.8113406888643215,-4.835586609491031) circle (1.5pt); \draw[color=ffxfqq] (-2.79230829850188,-4.496191053538438) node {$F_2$}; \draw [fill=ffxfqq] (3.5103614239490377,-3.5688044920370543) circle (1.5pt); \draw[color=ffxfqq] (3.5460071294789364,-3.324114771524032) node {$F_4$}; \draw [fill=ffxfqq] (2.2823918935476546,-0.8447525924738806) circle (1.5pt); \draw[color=ffxfqq] (2.590139093661363,-1.025479733010342) node {$F_3$}; \draw [fill=ffqqqq] (1.2856886476409826,-1.6256489497679758) circle (1.5pt); \draw[color=ffqqqq] (1.3384071419954924,-1.3668611743737613) node {$Y$}; \draw [fill=ffqqqq] (0.3977068654526339,-1.5235055057251057) circle (1.5pt); \draw[color=ffqqqq] (0.4508153944506024,-1.2644467419647356) node {$X$}; \draw [fill=ffqqqq] (9.079635740061052,-2.509921792687575) circle (1.5pt); \draw[color=ffqqqq] (9.178800911975353,-2.2544529219186513) node {$Z$}; \end{scriptsize} \end{tikzpicture}
\begin{definition} For any two conics $\mathcal{C}_1$ and $\mathcal{C}_2$, consider their four common tangents $l_1$, $l_2$, $l_3$, $l_4$. Define $\mathcal{M}(\mathcal{C}_1 , \mathcal{C}_2)$ to be the Miquel point of the lines $l_1$, $l_2$, $l_3$, $l_4$.
\end{definition}
\begin{theorem}
Let conics $\mathcal{C}_A$, $\mathcal{C}_B$, $\mathcal{C}_C$, $\mathcal{C}_D$ be given. Let $\omega_D$ be the circumcircle of the triangle $$\mathcal{M}(\mathcal{C}_A, \mathcal{C}_B)\mathcal{M}(\mathcal{C}_A, \mathcal{C}_C)\mathcal{M}(\mathcal{C}_C, \mathcal{C}_B).$$ Similarly define $\omega_A$, $\omega_B$, $\omega_C$. Then $\omega_A$, $\omega_B$, $\omega_C$, $\omega_D$ intersect at the same point.
\end{theorem}
\begin{theorem}
Consider any triangle $ABC$. Let $P$, $P'$ and $Q$, $Q'$ be two pairs of isogonally conjugate points wrt $ABC$. Consider a conic $\mathcal{C}_P$ with foci $P$, $P'$ and a conic $\mathcal{C}_Q$ with foci $Q$, $Q'$. Then the points $A$, $B$, $C$, $\mathcal{M}(\mathcal{C}_P, \mathcal{C}_Q)$ lie on the same circle.
\end{theorem}
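The notation $\mathcal{M}$ above extends the classical Miquel point of four lines: the circumcircles of the four triangles cut out by four lines in general position share a point. That classical fact can be checked numerically; the sketch below is ours (helper names are not from the text), and it finds the common point as the second intersection of two of the circumcircles.

```python
# Numerical check of the four-line Miquel point: the circumcircles of the
# four triangles formed by four generic lines pass through one point.
import math

def meet(l1, l2):
    # Intersection of lines given as (a, b, c) with a*x + b*y + c = 0.
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1*b2 - a2*b1
    return ((b1*c2 - b2*c1) / d, (a2*c1 - a1*c2) / d)

def circumcircle(A, B, C):
    # Circumcenter from the two perpendicular-bisector equations, plus radius.
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2*((bx - ax)*(cy - ay) - (by - ay)*(cx - ax))
    ux = ((bx*bx + by*by - ax*ax - ay*ay)*(cy - ay)
          - (cx*cx + cy*cy - ax*ax - ay*ay)*(by - ay)) / d
    uy = ((cx*cx + cy*cy - ax*ax - ay*ay)*(bx - ax)
          - (bx*bx + by*by - ax*ax - ay*ay)*(cx - ax)) / d
    return (ux, uy, math.hypot(ux - ax, uy - ay))

def miquel(lines):
    # Second common point of the circumcircles of triangles (l1,l2,l3) and
    # (l1,l2,l4); both pass through V = l1 ∩ l2, and the second intersection
    # lies on the radical axis, which is perpendicular to the center line.
    l1, l2, l3, l4 = lines
    V = meet(l1, l2)
    tri = lambda p, q, r: circumcircle(meet(p, q), meet(p, r), meet(q, r))
    o1, o2 = tri(l1, l2, l3), tri(l1, l2, l4)
    dx, dy = o2[0] - o1[0], o2[1] - o1[1]
    n = math.hypot(dx, dy)
    ex, ey = -dy / n, dx / n            # unit direction of the radical axis
    t = -2*(ex*(V[0] - o1[0]) + ey*(V[1] - o1[1]))
    return (V[0] + t*ex, V[1] + t*ey)
```

The returned point can then be verified to lie on the circumcircles of the remaining two triangles as well.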
\section{On incircles in orthocenter construction}
The main idea of this section is to construct nice facts involving incircles in the orthocenter configuration.
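Many statements in this section can be probed numerically. A minimal sketch of the two basic helpers such checks need, the incenter and the orthocenter in Cartesian coordinates (the formulas are standard; the function names are ours):

```python
# Incenter and orthocenter of a triangle given by Cartesian coordinates.
import math

def incenter(A, B, C):
    # Weighted average of the vertices by the opposite side lengths.
    a = math.dist(B, C)
    b = math.dist(A, C)
    c = math.dist(A, B)
    s = a + b + c
    return ((a*A[0] + b*B[0] + c*C[0]) / s,
            (a*A[1] + b*B[1] + c*C[1]) / s)

def orthocenter(A, B, C):
    # Intersection of two altitudes: (X - A).(C - B) = 0, (X - B).(C - A) = 0.
    ax, ay = A; bx, by = B; cx, cy = C
    r1 = (cx - bx, cy - by, (cx - bx)*ax + (cy - by)*ay)  # altitude from A
    r2 = (cx - ax, cy - ay, (cx - ax)*bx + (cy - ay)*by)  # altitude from B
    det = r1[0]*r2[1] - r1[1]*r2[0]
    x = (r1[2]*r2[1] - r1[1]*r2[2]) / det
    y = (r1[0]*r2[2] - r1[2]*r2[0]) / det
    return (x, y)
```

For the 3-4-5 right triangle with the right angle at the origin, the incenter is $(1,1)$ (inradius $1$) and the orthocenter is the right-angle vertex itself.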
\begin{theorem}
Consider triangle $ABC$ with orthocenter $H$ and altitudes $AA'$, $BB'$, $CC'$. Let $C^*=A'B'\cap CC'$. Let $I_1$, $I_2$, $I_3$, $I_4$, $I_5$, $I_6$ be the incenters of the triangles $AB'C'$, $BA'C'$, $HBA'$, $HAB'$, $HA'C^*$, $HC^*C'$ respectively. Then we have that:
\begin{enumerate}
\item Lines $I_1I_4$, $I_2I_3$, $CC'$ are concurrent.
\item Lines $I_6I_4$, $I_5I_3$, $CC'$ are concurrent.
\end{enumerate}
\end{theorem}
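Concurrency claims like the two above can be checked numerically: lines $a_ix+b_iy+c_i=0$, $i=1,2,3$, are concurrent (or mutually parallel) exactly when the $3\times 3$ determinant of their coefficient rows vanishes. A small sketch with our own helper names:

```python
# Concurrency test for three lines via the coefficient determinant.
def line_through(P, Q):
    # Coefficients (a, b, c) of the line a*x + b*y + c = 0 through P and Q.
    a = Q[1] - P[1]
    b = P[0] - Q[0]
    return (a, b, -(a*P[0] + b*P[1]))

def concurrent(l1, l2, l3, eps=1e-9):
    # Zero determinant also covers three mutually parallel lines.
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = l1, l2, l3
    det = (a1*(b2*c3 - b3*c2) - b1*(a2*c3 - a3*c2) + c1*(a2*b3 - a3*b2))
    return abs(det) < eps
```

As a sanity check, the three medians of any triangle pass the test, while the three side lines fail it.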
\begin{theorem}
Let a triangle $ABC$ with orthocenter $H$ and altitudes $AH_A$, $BH_B$, $CH_C$ be given. Let the second tangent line through $H_A$ to the $A$~-- excircle of $ABC$ meet the second tangent line through $H_B$ to the $B$~-- excircle of $ABC$ at $C_1$. Let the second tangent line through $H_A$ to the $C$~-- excircle of $ABC$ meet the second tangent line through $H_B$ to the $C$~-- excircle of $ABC$ at $C_2$. Then $H_C$, $C_1$, $C_2$ are collinear.
\end{theorem}
\begin{theorem}
Let a triangle $ABC$ with altitudes $AH_A$, $BH_B$, $CH_C$ be given. Let $l_A$ be the line through the incenters of $AH_AB$ and $AH_AC$. Similarly define $l_B$, $l_C$. Then the triangle formed by $l_A$, $l_B$, $l_C$ is perspective to $ABC$.
\end{theorem}
\begin{theorem}
Let $I$ be the incenter of $ABC$. Let $H_a$ be the orthocenter of $BCI$, similarly define $H_b$, $H_c$. Let $I_a$ be the incenter of $BH_aC$, similarly define $I_b$ and $I_c$. Then
\begin{enumerate}
\item Point $I$ is the orthocenter of $I_aI_bI_c$.
\item Let $H_a^{(2)}$ be the orthocenter of $BI_aC$, similarly define $H_b^{(2)}$, $H_c^{(2)}$. Then the triangles $H_a^{(2)}H_b^{(2)}H_c^{(2)}$ and $ABC$ are perspective.
\item Let $I_a^{(2)}$ be the incenter of $BH_a^{(2)}C$, similarly define $I_b^{(2)}$, $I_c^{(2)}$. Then the triangles $ABC$ and $I_a^{(2)}I_b^{(2)}I_c^{(2)}$ are perspective.
\end{enumerate}
\end{theorem}
\begin{theorem}[\textbf{Four nine-point circles intersect}]
Let a triangle $ABC$ with orthocenter $H$ be given. Let $\omega$ be the incircle of $ABC$ and $\omega_A$ be the incircle of $BCH$. Similarly define the circles $\omega_B$, $\omega_C$. Let $l_A$ be the second external tangent line of the circles $\omega_A$, $\omega$. Similarly define the lines $l_B$, $l_C$. Let $A^*=l_B\cap l_C$, $B^*=l_A\cap l_C$, $C^*=l_B\cap l_A$. Consider the point $A^{!}=l_A\cap BC$. Similarly define the points $B^{!}$, $C^{!}$. Let $\pi$ be the pedal circle of $H$ wrt $A^*B^*C^*$. Then the nine-point circles of the triangles $ABC$, $A^*B^*C^*$, the circle $\pi$ and the circle $(A^{!}B^{!}C^{!})$ meet at the same point.
\end{theorem}
\begin{theorem}
Let a triangle $ABC$ with orthocenter $H$ be given. Let $AA'$, $BB'$, $CC'$ be the altitudes of $ABC$. Let $I_1$, $I_2$ be the incenters of the triangles $C'B'A$, $C'A'B$ respectively. Let $I_1'$, $I_2'$ be the $C'$~-- excenters of the triangles $AC'B'$ and $C'BA'$ respectively. Then the points $I_1$, $I_2$, $I_1'$, $I_2'$, $A$, $B$ lie on the same conic.
\end{theorem}
\begin{theorem}
Let $AA'$, $BB'$, $CC'$ be the altitudes of $ABC$. Let $H$ be the orthocenter of $ABC$ and $A_1B_1C_1$ be the midpoint triangle of $ABC$. The positions of the points $A'$, $B'$, $C'$ wrt the points $A_1$, $B_1$, $C_1$ are as in the picture below. Let $\omega_1$ be the circle through the incenters of the triangles $C'C_1H$, $B'B_1H$ and the $H$~-- excenter of $HA'A_1$. Let $\omega_2$ be the circle through the $H$~-- excenters of the triangles $C'C_1H$, $B'B_1H$ and the incenter of $HA'A_1$. Then
\begin{enumerate}
\item Point $H$ lies on the radical line of $\omega_1$ and $(A'B'C')$.
\item Circles $\omega_1$ and $\omega_2$ are symmetric wrt Euler line of $ABC$.
\item Circles $\omega_1$, $\omega_2$, $(A'B'C')$ are coaxial.
\end{enumerate}
\end{theorem}
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw (-2.706386413568172,-0.46186748741931893)-- (6.635377852205561,2.0580639526255093); \draw (-2.706386413568172,-0.46186748741931893)-- (6.681921932919673,9.516054964849946); \draw (6.681921932919673,9.516054964849946)-- (9.393826605659127,-0.5373829030991867); \draw (-2.706386413568172,-0.46186748741931893)-- (9.393826605659127,-0.5373829030991867); \draw(5.586838173417778,4.277909989640748) circle (3.9549167901084714cm); \draw(4.415531815190315,1.0095242738377244) circle (3.954916790108472cm); \draw(9.541726491538917,4.292916917068917) circle (1.400798963294004cm); \draw(2.96381314603573,4.688366445270191) circle (0.6003375308040846cm); \draw(7.8414551554829375,2.9854838086258595) circle (0.5813039128268213cm); \draw(5.7819299894785186,0.327807965643392) circle (0.8426332137884553cm); \draw(1.8264950531208803,5.503139125075294) circle (0.7863004077464303cm); \draw (9.393826605659127,-0.5373829030991867)-- (2.365316103278257,6.07579976378683); \draw (1.457599631257715,4.80874406536461)-- (6.635377852205561,2.0580639526255093); \draw [dash pattern=on 3pt off 3pt] (8.346414575285467,1.4448718856017475)-- (1.6559554133226289,3.842562377876725); \draw(5.0011849943040465,2.6437171317392365) circle (3.5535603095571036cm); \draw (6.635377852205561,2.0580639526255093)-- (6.681921932919673,9.516054964849946); \draw (3.3437200960454776,-0.4996251952592528)-- (6.635377852205561,2.0580639526255093); \draw (6.635377852205561,2.0580639526255093)-- (6.619288163058765,-0.5200674704303726); \draw (6.635377852205561,2.0580639526255093)-- (8.0378742692894,4.489336030875379); \draw (6.635377852205561,2.0580639526255093)-- (8.55409306465493,2.5756354359970137); \draw(4.171153256948127,-2.9378350886922906) circle (2.432998641022355cm); \draw (6.604104519134404,-2.9530187326166484)-- (6.619288163058765,-0.5200674704303726); \draw (2.6783419399926784,-1.016638388927697)-- (3.3437200960454776,-0.4996251952592528); \draw (8.0378742692894,4.489336030875379)-- 
(8.32833993889853,4.9928674934934465); \draw (8.55409306465493,2.5756354359970137)-- (9.906550518632875,2.940459463090973); \begin{scriptsize} \draw [fill=ffqqqq] (-2.706386413568172,-0.46186748741931893) circle (1.5pt); \draw[color=ffqqqq] (-2.9955205499884947,-0.23880705792142481) node {$B$}; \draw [fill=ffqqqq] (6.681921932919673,9.516054964849946) circle (1.5pt); \draw[color=ffqqqq] (6.862567525933959,9.877346150878514) node {$A$}; \draw [fill=ffqqqq] (9.393826605659127,-0.5373829030991867) circle (1.5pt); \draw[color=ffqqqq] (9.572251421148247,-0.1871940313459149) node {$C$}; \draw [fill=ffqqqq] (2.937976741989781,5.536978713629455) circle (1.5pt); \draw[color=ffqqqq] (3.1980426390727326,5.903143104564253) node {$C'$}; \draw [fill=ffqqqq] (6.619288163058765,-0.5200674704303726) circle (1.5pt); \draw[color=ffqqqq] (6.8883740392217145,-0.16138751805815993) node {$A'$}; \draw [fill=ffqqqq] (8.55409306465493,2.5756354359970137) circle (1.5pt); \draw[color=ffqqqq] (8.823862535803347,2.9353940764724338) node {$B'$}; \draw [fill=ffqqqq] (6.635377852205561,2.0580639526255093) circle (1.5pt); \draw[color=ffqqqq] (6.8109544993584485,2.419263810717335) node {$H$}; \draw [fill=ffqqqq] (1.9877677596757506,4.527093738715314) circle (1.5pt); \draw[color=ffqqqq] (2.217395134138038,4.974108626205075) node {$C_1$}; \draw [fill=ffqqqq] (8.0378742692894,4.489336030875379) circle (1.5pt); \draw[color=ffqqqq] (8.28192575676049,4.922495599629565) node {$B_1$}; \draw [fill=ffqqqq] (3.3437200960454776,-0.4996251952592528) circle (1.5pt); \draw[color=ffqqqq] (3.585140338389059,-0.05816146490714019) node {$A_1$}; \draw [fill=ffffff] (1.8264950531208803,5.503139125075294) circle (1.5pt); \draw [fill=ffffff] (2.96381314603573,4.688366445270191) circle (1.5pt); \draw [fill=ffffff] (9.541726491538917,4.292916917068917) circle (1.5pt); \draw [fill=ffffff] (7.8414551554829375,2.9854838086258595) circle (1.5pt); \draw [fill=ffffff] (4.171153256948127,-2.9378350886922906) circle (1.5pt); \draw 
[fill=ffffff] (5.7819299894785186,0.327807965643392) circle (1.5pt); \draw [fill=ffffff] (8.346414575285467,1.4448718856017475) circle (1.5pt); \draw [fill=ffffff] (1.6559554133226289,3.842562377876725) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\begin{commentary}
Compare the previous theorem with the ideas of Section 2.
\end{commentary}
\subsection{Analog for cyclic quadrilateral}
\begin{theorem}
Let a convex cyclic quadrilateral $ABCD$ with $AC\perp BD$ be given. Let $AC\cap BD = H$. Let $X$ be the foot of the perpendicular from $H$ on $AB$. Let $Y$ be the foot of the perpendicular from $H$ on $CD$. Let $M$ and $N$ be the midpoints of the segments $AB$ and $CD$ respectively. Let $I_1$, $I_2$, $I_3$, $I_4$ be the incenters of the triangles $HXM$, $ABD$, $HYN$, $CAD$ respectively. Let $W$ be the midpoint of the smaller arc $DA$ of the circle $(ABCD)$. Then $I_1I_2$, $I_3I_4$ and $HW$ are concurrent.
\end{theorem}
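The configuration above (cyclic $ABCD$ with perpendicular diagonals, the feet $X$, $Y$ and the midpoints $M$, $N$) is the classical Brahmagupta configuration: the perpendicular from $H$ to one side bisects the opposite side, so $H$, $X$ and $N$ are collinear. A numerical sanity check of that underlying fact; the concrete points and helper name are ours:

```python
# Brahmagupta's theorem check: in a cyclic quadrilateral with perpendicular
# diagonals meeting at H, the perpendicular from H to side AB bisects CD.
import math

def chord(P, d):
    # Intersections of the line through P with unit direction d and the
    # unit circle: solve |P + t*d|^2 = 1 for t.
    pd = P[0]*d[0] + P[1]*d[1]
    disc = math.sqrt(pd*pd - (P[0]**2 + P[1]**2) + 1.0)
    t1, t2 = -pd + disc, -pd - disc
    return ((P[0] + t1*d[0], P[1] + t1*d[1]),
            (P[0] + t2*d[0], P[1] + t2*d[1]))

H = (0.3, 0.1)                       # interior intersection of the diagonals
s = math.sqrt(0.5)
A, C = chord(H, (s, s))              # diagonal AC
B, D = chord(H, (s, -s))             # diagonal BD, perpendicular to AC
N = ((C[0] + D[0]) / 2, (C[1] + D[1]) / 2)   # midpoint of side CD
# HN must be perpendicular to AB, i.e. the dot product below vanishes.
dot = (N[0] - H[0])*(B[0] - A[0]) + (N[1] - H[1])*(B[1] - A[1])
```

The endpoints of two perpendicular chords through an interior point alternate around the circle, so $ABCD$ as built is a convex cyclic quadrilateral.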
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \definecolor{ffwwqq}{rgb}{1.0,0.4,0.0} \definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw[color=ffqqqq] (3.0831846647205716,1.165670286792168) -- (2.8356040013414914,0.7878741880026369) -- (3.2134001001310226,0.5402935246235565) -- (3.460980763510103,0.9180896234130876) -- cycle; \draw (0.9452013275164236,2.566752337703729)-- (8.867626732649128,-2.6250411817596513); \draw (0.9452013275164236,2.566752337703729)-- (6.1253610446155005,4.983804862054891); \draw (6.1253610446155005,4.983804862054891)-- (8.867626732649128,-2.6250411817596513); \draw (8.867626732649128,-2.6250411817596513)-- (1.268565629766738,-2.4274296637177923); \draw (1.268565629766738,-2.4274296637177923)-- (0.9452013275164236,2.566752337703729); \draw (1.268565629766738,-2.4274296637177923)-- (6.1253610446155005,4.983804862054891); \draw (5.068096181207933,-2.526235422738722)-- (2.379460063395984,3.2359746952089066); \draw (3.535281186065962,3.77527859987931)-- (3.3725586273793966,-2.482143429613157); \draw [dash pattern=on 3pt off 3pt](3.0274224026585843,3.000310698555249)-- (7.059552659775981,1.517492373625005); \draw [dash pattern=on 3pt off 3pt](4.03753284020386,-1.850877587024035)-- (7.059552659775981,1.517492373625005); \draw [dash pattern=on 3pt off 3pt] (3.460980763510103,0.9180896234130876)-- (9.61625495604132,1.9433541966011665); \draw [shift={(5.1423966037637925,0.3309535537275)}] plot[domain=-0.6707689984678078:1.3625970256191742,variable=\t]({1.0*4.755548800021174*cos(\t r)+-0.0*4.755548800021174*sin(\t r)},{0.0*4.755548800021174*cos(\t r)+1.0*4.755548800021174*sin(\t r)}); \draw(3.0274224026585843,3.000310698555249) circle (0.4875411328398017cm); \draw(4.998844439377637,2.2753190599398403) circle (1.9781186919636629cm); \draw(4.03753284020386,-1.8508775870240348) circle (0.648339134921404cm); \draw(5.515083436985773,-0.20398665225712628) circle (2.333083816703565cm); \draw (8.124052052367897,3.7543417276475717)-- (8.407440703294913,4.079714451937661); \draw (9.622449882061154,-0.4031979477648091)-- (10.048252352032058,-0.47297468997066905); \begin{scriptsize} \draw 
[fill=ffqqqq] (8.867626732649128,-2.6250411817596513) circle (1.5pt); \draw[color=ffqqqq] (9.00665933241711,-2.334123758096226) node {$A$}; \draw [fill=ffqqqq] (1.268565629766738,-2.4274296637177923) circle (1.5pt); \draw[color=ffqqqq] (1.4263560678311609,-2.1211938911134745) node {$B$}; \draw [fill=ffqqqq] (0.9452013275164236,2.566752337703729) circle (1.5pt); \draw[color=ffqqqq] (1.0856682806587585,2.8613649962829104) node {$C$}; \draw [fill=ffqqqq] (6.1253610446155005,4.983804862054891) circle (1.5pt); \draw[color=ffqqqq] (6.302450021736167,4.458338998653547) node {$D$}; \draw [fill=ffqqqq] (3.460980763510103,0.9180896234130876) circle (1.5pt); \draw[color=ffqqqq] (3.260980763510103,0.9180896234130876) node {$H$}; \draw [fill=ffqqqq] (3.535281186065962,3.77527859987931) circle (1.5pt); \draw[color=ffqqqq] (3.6834126578483253,4.075065238084593) node {$N$}; \draw [fill=ffqqqq] (5.068096181207933,-2.526235422738722) circle (1.5pt); \draw[color=ffqqqq] (5.216507700124136,-2.22765882460485) node {$M$}; \draw [fill=ffqqqq] (2.379460063395984,3.2359746952089066) circle (1.5pt); \draw[color=ffqqqq] (2.5335913761414677,3.542740570627715) node {$Y$}; \draw [fill=ffqqqq] (3.3725586273793966,-2.482143429613157) circle (1.5pt); \draw[color=ffqqqq] (3.513068764262124,-2.1850728512082997) node {$X$}; \draw [fill=ffwwqq] (5.515083436985773,-0.20398665225712628) circle (1.5pt); \draw[color=ffwwqq] (5.706246394184463,0.15715568560196627) node {$I_2$}; \draw [fill=ffwwqq] (4.9988444393776374,2.2753190599398394) circle (1.5pt); \draw[color=ffwwqq] (5.19521471342586,2.6271421426018837) node {$I_4$}; \draw [fill=ffwwqq] (3.0274224026585843,3.000310698555249) circle (1.5pt); \draw[color=ffwwqq] (3.2149669504862723,3.3723966770415137) node {$I_3$}; \draw [fill=ffwwqq] (4.03753284020386,-1.850877587024035) circle (1.5pt); \draw[color=ffwwqq] (4.237030312003479,-1.4824042901652201) node {$I_1$}; \draw [fill=ffffff] (7.059552659775981,1.517492373625005) circle (1.5pt); \draw[color=ffwwqq] 
(10.01625495604132,1.9433541966011665) node {$W$}; \draw [fill=ffwwqq] (9.61625495604132,1.9433541966011665) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\subsection{Tangent circles}
\begin{theorem}
Let a triangle $ABC$ with orthocenter $H$ be given. Let $AA'$, $BB'$, $CC'$ be the altitudes of $ABC$. Let the circle $\omega_1$ be internally tangent to the incircles of the triangles $HA'C$, $HB'A$, $HC'B$. Similarly, let the circle $\omega_2$ be internally tangent to the incircles of the triangles $HA'B$, $HB'C$, $HC'A$. Then $H$ lies on the radical line of $\omega_1$ and $\omega_2$.
\end{theorem}
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw (3.12,-8.08)-- (12.271911605876896,1.0327592590004755); \draw (12.271911605876896,1.0327592590004755)-- (16.3539933983036,-7.70510230749119); \draw (3.12,-8.08)-- (16.3539933983036,-7.70510230749119); \draw (3.12,-8.08)-- (14.126888121929667,-2.9378931753610886); \draw (12.522515885086076,-7.813641886994741)-- (12.271911605876896,1.0327592590004755); \draw (16.3539933983036,-7.70510230749119)-- (9.952812224389636,-1.2764188964812164); \draw(11.459389371920745,-5.131212370658879) circle (3.8045888002400825cm); \draw(11.34944485498918,-1.2794127333598935) circle (0.9875706728924905cm); \draw(10.86262420142478,-6.245203585959058) circle (1.6148125441740344cm); \draw(13.748838932014372,-3.978872224369569) circle (0.7831215009403726cm); \draw(10.709259725571584,-4.041991493802813) circle (3.823355997556188cm); \draw(13.126780658661646,-2.5746871947250307) circle (0.752374083303709cm); \draw(9.948461692996794,-3.305953019387166) circle (1.4351006381370994cm); \draw(13.645952532605483,-6.624699263554329) circle (1.156653462110199cm); \draw [dash pattern=on 3pt off 3pt] (14.20827277116278,-2.5009013631383077)-- (8.021770830265389,-6.7614499916760185); \begin{scriptsize} \draw [fill=ffqqqq] (3.12,-8.08) circle (1.5pt); \draw[color=ffqqqq] (3.2692663555520602,-7.77129822844946) node {$C$}; \draw [fill=ffqqqq] (12.271911605876896,1.0327592590004755) circle (1.5pt); \draw[color=ffqqqq] (12.426368754779528,1.3416735568057363) node {$A$}; \draw [fill=ffqqqq] (16.3539933983036,-7.70510230749119) circle (1.5pt); \draw[color=ffqqqq] (16.508450547206234,-7.396188009685929) node {$B$}; \draw [fill=ffqqqq] (14.126888121929667,-2.9378931753610886) circle (1.5pt); \draw[color=ffqqqq] (14.390181076541564,-2.961061305481826) node {$C'$}; \draw [fill=ffqqqq] (12.522515885086076,-7.813641886994741) circle (1.5pt); \draw[color=ffqqqq] (12.735283052584794,-7.50651454461638) node {$A'$}; \draw [fill=ffqqqq] (9.952812224389636,-1.2764188964812164) circle (1.5pt); \draw[color=ffqqqq] 
(9.977119679323508,-0.9531183697476303) node {$B'$}; \draw [fill=ffqqqq] (12.407153139987782,-3.7413047199551643) circle (1.5pt); \draw[color=ffqqqq] (12.55876059669607,-3.3582368312314475) node {$H$}; \draw [fill=ffffff] (8.021770830265389,-6.7614499916760185) circle (1.5pt); \draw [fill=ffffff] (14.20827277116278,-2.5009013631383077) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\begin{theorem}
Let a triangle $ABC$ with orthocenter $H$ be given. Let $W_A$, $W_B$, $W_C$ be the reflections of $H$ wrt the lines $BC$, $AC$, $AB$ respectively. Let $I_{AB}$ be the $H$~-- excenter of the triangle $AHW_B$ (and let $(I_{AB})$ be the excircle itself), $I_{CB}$ be the $H$~-- excenter of the triangle $CHW_B$, $I_{CA}$ be the $H$~-- excenter of the triangle $CHW_A$, $I_{BA}$ be the $H$~-- excenter of the triangle $BHW_A$, $I_{BC}$ be the $H$~-- excenter of the triangle $BHW_C$, $I_{AC}$ be the $H$~-- excenter of the triangle $AHW_C$. Suppose that the circle $\omega_1$ is externally tangent to $(I_{AB})$, $(I_{BC})$, $(I_{CA})$. Suppose that the circle $\omega_2$ is externally tangent to $(I_{AC})$, $(I_{BA})$, $(I_{CB})$. Then we have that:
\begin{enumerate}
\item Point $H$ lies on the radical line of $(I_{AB}I_{BC}I_{CA})$ and $(I_{BA}I_{CB}I_{AC})$.
\item Point $H$ lies on the radical line of $\omega_1$ and $\omega_2$.
\end{enumerate}
\end{theorem}
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \definecolor{qqccqq}{rgb}{0.0,0.6,1.0} \definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \definecolor{ffwwqq}{rgb}{1.0,0.4,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw (3.56,-3.7)-- (1.5,0.994); \draw (-0.8,-3.16)-- (1.5,0.994); \draw (-0.8,-3.16)-- (3.56,-3.7); \draw (-0.8,-3.16)-- (0.8289351111548515,-4.424227621046015); \draw (0.8289351111548515,-4.424227621046015)-- (3.56,-3.7); \draw (3.56,-3.7)-- (4.226211552294074,-0.9542062637993629); \draw (4.226211552294074,-0.9542062637993629)-- (1.5,0.994); \draw (-1.0998017317305564,-1.119946080168445)-- (1.5,0.994); \draw (-1.0998017317305564,-1.119946080168445)-- (-0.8,-3.16); \draw(6.147273394120497,-2.564554070752403) circle (2.246598122667491cm); \draw(4.568310890128029,2.3405531388290886) circle (2.87953453265855cm); \draw(-1.4314371892359248,2.6170875145023174) circle (3.1087093248602167cm); \draw(-2.603914035274589,-2.1612031099827744) circle (1.6395236096113004cm); \draw(-1.0534445552856933,-5.2063301130474455) circle (1.7719747766025078cm); \draw(3.2127112761492187,-6.504034881461859) circle (2.6213383240084633cm); \draw(2.180627282238915,-1.8196689901693406) circle (4.796715665326703cm); \draw(1.5469580307517703,-1.1598499897652341) circle (4.809999479998761cm); \draw [dash pattern=on 3pt off 3pt] (-1.5362312018491084,-4.851738280811207)-- (5.360441159849584,1.771607228355934); \draw (5.059042097668666,-4.5299944209528)-- (-2.9372671571109747,-0.10257234921635994); \draw (0.6112497297168732,-6.181835515619321)-- (1.7106109632199855,2.6944885178502527); \draw (-1.9450511335722407,-3.6625149840559903)-- (5.725487564265295,-0.29623681670504753); \begin{scriptsize} \draw [fill=ffwwqq] (3.56,-3.7) circle (1.5pt); \draw[color=ffwwqq] (3.404447354290962,-3.172906333497974) node {$A$}; \draw [fill=ffwwqq] (-0.8,-3.16) circle (1.5pt); \draw[color=ffwwqq] (-0.4211921941437536,-2.91615200138826) node {$B$}; \draw [fill=ffwwqq] (1.5,0.994) circle (1.5pt); \draw[color=ffwwqq] (1.6841933291558888,1.294619045211045) node {$C$}; \draw [fill=ffqqqq] (0.8289351111548515,-4.424227621046015) circle (1.5pt); \draw[color=ffqqqq] (1.0679829320925789,-4.045871062671001) node {$W_C$}; \draw 
[fill=ffqqqq] (4.226211552294074,-0.9542062637993629) circle (1.5pt); \draw[color=ffqqqq] (4.457140115940784,-0.5796875791898652) node {$W_B$}; \draw [fill=ffqqqq] (-1.0998017317305564,-1.119946080168445) circle (1.5pt); \draw[color=ffqqqq] (-0.8576745587302648,-0.7337401784556936) node {$W_A$}; \draw [fill=ffqqqq] (1.0881417959804562,-2.3313736472689097) circle (1.5pt); \draw[color=ffqqqq] (1.273386397780349,-2.0175118390042623) node {$H$}; \draw [fill=qqccqq] (6.147273394120497,-2.564554070752403) circle (1.5pt); \draw[color=qqccqq] (6.43414847318557,-2.1715644382700905) node {$I_{AB}$}; \draw [fill=qqccqq] (4.568310890128029,2.3405531388290886) circle (1.5pt); \draw[color=qqccqq] (4.842271614105353,2.7324433050254417) node {$I_{CB}$}; \draw [fill=qqccqq] (-1.4314371892359248,2.6170875145023174) circle (1.5pt); \draw[color=qqccqq] (-1.1401043240509485,3.014873070346127) node {$I_{CA}$}; \draw [fill=qqccqq] (-2.603914035274589,-2.1612031099827744) circle (1.5pt); \draw[color=qqccqq] (-2.3211742517556258,-1.7864329401055197) node {$I_{BA}$}; \draw [fill=qqccqq] (-1.0534445552856933,-5.2063301130474455) circle (1.5pt); \draw[color=qqccqq] (-0.7806482590973511,-4.816134059000142) node {$I_{BC}$}; \draw [fill=qqccqq] (3.2127112761492187,-6.504034881461859) circle (1.5pt); \draw[color=qqccqq] (3.5071490871348474,-6.125581152759683) node {$I_{AC}$}; \draw [fill=ffffff] (-1.5362312018491084,-4.851738280811207) circle (1.5pt); \draw [fill=ffffff] (5.360441159849584,1.771607228355934) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \definecolor{ffwwqq}{rgb}{1.0,0.4,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw (6.106000061798027,-3.439493936676183)-- (3.0453227052096987,2.118545925594674); \draw (-0.8,-3.16)-- (3.0453227052096987,2.118545925594674); \draw (-0.8,-3.16)-- (6.106000061798027,-3.439493936676183); \draw (-0.8,-3.16)-- (2.7370337380672076,-5.49894893478352); \draw (2.7370337380672076,-5.49894893478352)-- (6.106000061798027,-3.439493936676183); \draw (6.106000061798027,-3.439493936676183)-- (5.847498958402081,0.5006159804713985); \draw (5.847498958402081,0.5006159804713985)-- (3.0453227052096987,2.118545925594674); \draw (0.008081054900604007,1.0027268457660514)-- (3.0453227052096987,2.118545925594674); \draw (0.008081054900604007,1.0027268457660514)-- (-0.8,-3.16); \draw(2.6918916166304383,-1.5196094505620066) circle (3.29685131973788cm); \draw(2.8517515902461157,-1.577415352074929) circle (3.2975434398112577cm); \draw [dash pattern=on 3pt off 3pt] (1.6383539545324817,-4.64359520901203)-- (3.88004006687793,1.5557005963490176); \draw (7.879350896625797,-4.7313470957822235)-- (-1.966142473235625,2.440912032540985); \draw (2.6542800502119532,-7.543705183304687)-- (3.118026662935404,3.9149839539745197); \draw (-2.7218734542577208,-4.21832896298651)-- (7.839809524974673,1.5977329478745435); \draw [shift={(-4.227421199005969,-0.6631872900926004)}] plot[domain=-1.1985815754994675:0.9412099352943185,variable=\t]({1.0*3.84041847724904*cos(\t r)+-0.0*3.84041847724904*sin(\t r)},{0.0*3.84041847724904*cos(\t r)+1.0*3.84041847724904*sin(\t r)}); \draw [shift={(0.42998305236338813,4.023772306246498)}] plot[domain=-2.6531941154819414:0.33953478108870094,variable=\t]({1.0*2.6902441074978842*cos(\t r)+-0.0*2.6902441074978842*sin(\t r)},{0.0*2.6902441074978842*cos(\t r)+1.0*2.6902441074978842*sin(\t r)}); \draw [shift={(5.8797036714335045,3.679370579292234)}] plot[domain=2.8235999407123487:5.688960915183469,variable=\t]({1.0*2.768945836053883*cos(\t r)+-0.0*2.768945836053883*sin(\t r)},{0.0*2.768945836053883*cos(\t r)+1.0*2.768945836053883*sin(\t r)}); \draw 
[shift={(-0.9714750171416897,-7.396966615663481)}] plot[domain=-0.04044910050696249:2.2730897292136176,variable=\t]({1.0*3.628723193586693*cos(\t r)+-0.0*3.628723193586693*sin(\t r)},{0.0*3.628723193586693*cos(\t r)+1.0*3.628723193586693*sin(\t r)}); \draw [shift={(5.946327076362271,-7.38484481047482)}] plot[domain=0.9412099352943183:3.101143553082832,variable=\t]({1.0*3.282930278209549*cos(\t r)+-0.0*3.282930278209549*sin(\t r)},{0.0*3.282930278209549*cos(\t r)+1.0*3.282930278209549*sin(\t r)}); \draw [shift={(9.564821909251618,-1.5348048787878967)}] plot[domain=2.2282777822463427:4.637675546441167,variable=\t]({1.0*3.576095770722571*cos(\t r)+-0.0*3.576095770722571*sin(\t r)},{0.0*3.576095770722571*cos(\t r)+1.0*3.576095770722571*sin(\t r)}); \begin{scriptsize} \draw [fill=ffwwqq] (6.106000061798027,-3.439493936676183) circle (1.5pt); \draw[color=ffwwqq] (5.985306587698288,-2.9934169365848127) node {$A$}; \draw [fill=ffwwqq] (-0.8,-3.16) circle (1.5pt); \draw[color=ffwwqq] (-0.480739175830376,-2.9717913654693318) node {$B$}; \draw [fill=ffwwqq] (3.0453227052096987,2.118545925594674) circle (1.5pt); \draw[color=ffwwqq] (3.195607913801306,2.3697247000543755) node {$C$}; \draw [fill=ffqqqq] (2.7370337380672076,-5.49894893478352) circle (1.5pt); \draw[color=ffqqqq] (2.93610106041554,-5.177599619248352) node {$W_C$}; \draw [fill=ffqqqq] (5.847498958402081,0.5006159804713985) circle (1.5pt); \draw[color=ffqqqq] (6.136685585506652,0.7261812952778502) node {$W_B$}; \draw [fill=ffqqqq] (0.008081054900604007,1.0027268457660514) circle (1.5pt); \draw[color=ffqqqq] (-0.3726113202529736,1.2019438598184233) node {$W_A$}; \draw [fill=ffqqqq] (2.914476396640085,-1.1145282004988823) circle (1.5pt); \draw[color=ffqqqq] (3.1307312004548646,-1.0687411073070392) node {$H$}; \draw [fill=ffffff] (1.6383539545324817,-4.64359520901203) circle (1.5pt); \draw [fill=ffffff] (3.88004006687793,1.5557005963490176) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\subsection{Conjugations} Note that isogonal conjugation can be considered as a conjugation associated with the incenter.
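Isogonal conjugation, used throughout this subsection, has a simple closed form in barycentric coordinates: $(x:y:z)\mapsto(a^2/x:b^2/y:c^2/z)$. A minimal numeric sketch assuming that standard formula (the helper names are ours):

```python
# Isogonal conjugation wrt a triangle with side lengths a, b, c, written in
# barycentric coordinates: (x : y : z) -> (a^2/x : b^2/y : c^2/z).
def isogonal(p, a, b, c):
    x, y, z = p
    return (a*a / x, b*b / y, c*c / z)

def normalize(p):
    # Scale so the coordinates sum to 1, making comparisons well defined.
    s = sum(p)
    return tuple(t / s for t in p)
```

Two classical sanity checks: the incenter $(a:b:c)$ is a fixed point, and the centroid $(1:1:1)$ maps to the symmedian point $(a^2:b^2:c^2)$.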
\begin{theorem}
Let $I$ be the incenter of $ABC$. Let $A'B'C'$ be the cevian triangle of $I$ wrt $ABC$. Let $H_{IAB'}$ be the orthocenter of $IAB'$. Similarly define the points $H_{IA'B}$, $H_{IAC'}$, $H_{IA'C}$, $H_{IBC'}$, $H_{IB'C}$. Let $i_{IAB'}$ be the isogonal conjugate of $H_{IAB'}$ wrt $ABC$; similarly define the points $i_{IA'B}$, $i_{IAC'}$, $i_{IA'C}$, $i_{IBC'}$, $i_{IB'C}$. Then the lines $i_{IAB'}i_{IA'B}$, $i_{IAC'}i_{IA'C}$, $i_{IBC'}i_{IB'C}$ are concurrent.
\end{theorem}
\begin{theorem}
Let a triangle $ABC$ with orthocenter $H$ be given. Let $A'$ be the reflection of $H$ wrt $BC$; similarly define the points $B', C'$. Let $A'B'\cap CA = C_B$, $A'B'\cap CB = C_A$. Similarly define the points $A_B$, $A_C$, $B_A$, $B_C$. Let $i_{CC_BB'}(H)$ be the isogonal conjugate of $H$ wrt the triangle $CC_BB'$. Similarly define the points $i_{AA_BB'}(H)$, $i_{BB_AA'}(H)$, $i_{BB_CC'}(H)$, $i_{CC_AA'}(H)$, $i_{AA_CC'}(H)$. Then we have that:
\begin{enumerate}
\item Lines $i_{AA_BB'}(H)i_{BB_AA'}(H)$, $i_{BB_CC'}(H)i_{CC_BB'}(H)$, $i_{CC_AA'}(H)i_{AA_CC'}(H)$ are concurrent.
\item Let $i_{CC_AC_B}(H)$ be the isogonal conjugate of $H$ wrt $CC_AC_B$; similarly define the points $i_{BB_CB_A}(H)$, $i_{AA_BA_C}(H)$. Then the point $H$ is the orthocenter of the triangle $i_{CC_AC_B}(H)i_{BB_CB_A}(H)i_{AA_BA_C}(H)$.
\end{enumerate}
\end{theorem}
\section{Combination of different facts}
Here we combine several well-known constructions from geometry.
\begin{theorem}[\textbf{Pappus and Ceva}]
Consider triangles $A_1B_1C_1$ and $A_2B_2C_2$ and arbitrary points $P$, $Q$. Let $A_1^*B_1^*C_1^*$ be the cevian triangle of $P$ wrt $A_1B_1C_1$ and let $A_2^*B_2^*C_2^*$ be the cevian triangle of $Q$ wrt $A_2B_2C_2$. Let $R=A_1C_2^*\cap C_1^*A_2$, $S=A_1B_2\cap A_2B_1$ and $T = B_1C_2^*\cap C_1^*B_2$. Then, by Pappus's theorem, the points $R$, $S$, $T$ lie on the same line $l_C$. Similarly define the lines $l_A$, $l_B$. Then the triangle formed by the lines $l_A$, $l_B$, $l_C$ is perspective to both triangles $A_1B_1C_1$ and $A_2B_2C_2$.
\end{theorem}
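The Pappus step in the statement above is easy to check numerically: for $A$, $B$, $C$ on one line and $A'$, $B'$, $C'$ on another, the three cross-intersections are collinear. A minimal sketch with our own concrete points and helper names:

```python
# Numerical check of Pappus's theorem: A, B, C on y = 0; A', B', C' on y = 1.
def line_through(P, Q):
    # (a, b, c) with a*x + b*y + c = 0 through P and Q.
    a = Q[1] - P[1]
    b = P[0] - Q[0]
    return (a, b, -(a*P[0] + b*P[1]))

def meet(l1, l2):
    # Intersection of two non-parallel lines a*x + b*y + c = 0.
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1*b2 - a2*b1
    return ((b1*c2 - b2*c1) / d, (a2*c1 - a1*c2) / d)

def collinear(P, Q, R, eps=1e-9):
    # Cross product of PQ and PR vanishes exactly for collinear points.
    return abs((Q[0]-P[0])*(R[1]-P[1]) - (Q[1]-P[1])*(R[0]-P[0])) < eps
```

With $X = AB'\cap A'B$, $Y = AC'\cap A'C$, $Z = BC'\cap B'C$, the test confirms $X$, $Y$, $Z$ are collinear.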
\begin{theorem}[\textbf{Two types of tangency}]
Let given triangle $ABC$ and let circle $\omega_A$ is tangent to the lines $AC$, $AB$ and externally tangent to the circle $(ABC)$, similarly define the circles $\omega_B$ and $\omega_C$. Let the circle $\Omega_A$ goes through the points $B$, $C$ and internally tangent to the incircle of $ABC$, similarly define the circles $\Omega_B$ and $\Omega_C$. Let $h_A$ be the external homothety center of the circles $\omega_B$ and $\omega_C$, let $H_A$ be the external homothety center of the circles $\Omega_B$ and $\Omega_C$. Let the line $\mathcal{L}_A$ goes through $h_A$ and $H_A$. Similarly define the lines $\mathcal{L}_B$ and $\mathcal{L}_C$. Let the lines $\mathcal{L}_A$, $\mathcal{L}_B$, $\mathcal{L}_C$ form a triangle $A^{\mathcal{H}}B^{\mathcal{H}}C^{\mathcal{H}}$. Let $\mathcal{H}_A$ be the external homothety center of the circles $\omega_A$ and $\Omega_A$, similarly define $\mathcal{H}_B$, $\mathcal{H}_C$. Then the triangles $A^{\mathcal{H}}B^{\mathcal{H}}C^{\mathcal{H}}$ and $\mathcal{H}_A\mathcal{H}_B\mathcal{H}_C$ are perspective at the incenter of $ABC$.
\end{theorem}
\begin{theorem}[\textbf{Conic and Poncelet point}]
Consider a conic $\mathcal{C}$ and any point $X$. Let points $A_1$, $B_1$, $C_1$, $D_1$, $A_2$, $B_2$, $C_2$, $D_2$ on $\mathcal{C}$ be given such that the lines $A_1A_2$, $B_1B_2$, $C_1C_2$, $D_1D_2$ pass through the point $X$. Consider the circle $\Omega_{ABC}$ which passes through the Poncelet points of the quadrilaterals $A_1A_2B_1B_2$, $A_1A_2C_1C_2$, $C_1C_2B_1B_2$. Similarly define the circles $\Omega_{BCD}$, $\Omega_{ACD}$, $\Omega_{ABD}$. Then all the circles $\Omega_{ABC}$, $\Omega_{BCD}$, $\Omega_{ACD}$, $\Omega_{ABD}$ pass through the same point.
\end{theorem}
\begin{theorem}[\textbf{Conic and Miquel point}]
Consider a conic $\mathcal{C}$ and any point $X$. Let points $A_1$, $B_1$, $C_1$, $D_1$, $A_2$, $B_2$, $C_2$, $D_2$ on $\mathcal{C}$ be given such that the lines $A_1A_2$, $B_1B_2$, $C_1C_2$, $D_1D_2$ pass through the point $X$. Consider the circumcircle $\Omega_{ABC}$ of the triangle $\mathcal{M}(A_1A_2, B_1B_2)\mathcal{M}(A_1A_2, C_1C_2)\mathcal{M}(C_1C_2, B_1B_2)$ (see the definition of $\mathcal{M}$ in Section 4). Similarly define the circle $\Omega_{BCD}$. Then we get that $\Omega_{BCD} = \Omega_{ABC}$.
\end{theorem}
\subsection{Combinations with radical lines}
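The radical lines appearing throughout this subsection are elementary to compute: for circles $(x-a_i)^2+(y-b_i)^2=r_i^2$, equating the two power-of-a-point expressions gives a linear equation. A minimal sketch (the helper name is ours):

```python
# Radical line of two circles, each given as (center_x, center_y, radius).
def radical_line(c1, c2):
    # Returns (A, B, C) with A*x + B*y + C = 0, obtained by subtracting the
    # two expanded circle equations (the quadratic terms cancel).
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    A = 2*(a2 - a1)
    B = 2*(b2 - b1)
    C = (a1*a1 + b1*b1 - r1*r1) - (a2*a2 + b2*b2 - r2*r2)
    return (A, B, C)
```

For two unit circles centered at $(0,0)$ and $(2,0)$ the radical line is $x=1$, as symmetry demands.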
\begin{theorem}[\textbf{Gauss line theorem and radical lines}]
Consider any four lines $l_1$, $l_2$, $l_3$, $l_4$ and let $l$ be the Gauss line for these four lines. Consider any four points $P_1\in l_1$, $P_2\in l_2$, $P_3\in l_3$, $P_4\in l_4$. Let $X_{ij}= l_i\cap l_j$, for any index $i\not= j$. Let $r_1$ be the radical line of circles $(P_1P_2X_{12})$ and $(P_3P_4X_{34})$, let $r_2$ be the radical line of circles $(P_1P_3X_{13})$ and $(P_2P_4X_{24})$, finally let $r_3$ be the radical line of circles $(P_1P_4X_{14})$ and $(P_2P_3X_{23})$. Consider the case when $l_1$, $l_2$, $l_3$ intersect at the same point $P$. Then we get that $P$ lies on $l$.
\end{theorem}
\begin{theorem}[\textbf{Pascal's theorem and radical lines}]
Consider points $A$, $B$, $C$, $D$, $E$, $F$ which lie on the same circle. Let $X = AE\cap BF$, $Y = DB\cap CE$. Let $l_1$ be the radical line of the circles $(EFX)$ and $(BCY)$, let $l_2$ be the radical line of the circles $(DYE)$ and $(BXA)$, and let $l_3$ be the radical line of the circles $(AXF)$ and $(CDY)$. Consider the Pascal line $\mathcal{L}$ of the hexagon $AECFBD$ (i.e.\ $\mathcal{L} = XY$). Let the line $\mathcal{W}$ pass through the center of $(ABCDEF)$ and be perpendicular to $\mathcal{L}$. Then we have that:
\begin{enumerate}
\item Lines $\mathcal{W}$, $BE$, $l_3$ are concurrent.
\item Lines $\mathcal{W}$, $l_1$, $l_2$ are concurrent.
\end{enumerate}
\end{theorem}
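The theorems above all build on radical lines. For reference, the radical line of two circles is obtained by subtracting their equations; a minimal numeric sketch (with arbitrary sample circles) verifying the defining equal-power property:

```python
def radical_line(c1, c2):
    # subtracting the two circle equations (x-a)^2 + (y-b)^2 = r^2
    # leaves a linear equation A*x + B*y = C: the radical line
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    A, B = 2 * (a2 - a1), 2 * (b2 - b1)
    C = (a2**2 + b2**2 - r2**2) - (a1**2 + b1**2 - r1**2)
    return A, B, C

def power(pt, c):
    # power of the point pt with respect to the circle c = (a, b, r)
    (x, y), (a, b, r) = pt, c
    return (x - a)**2 + (y - b)**2 - r**2

c1, c2 = (0.0, 0.0, 2.0), (5.0, 1.0, 1.0)
A, B, C = radical_line(c1, c2)
p = (0.0, C / B)  # a point on the radical line (taking x = 0)
print(abs(power(p, c1) - power(p, c2)) < 1e-9)  # equal powers
```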
\begin{theorem}[\textbf{Morley's theorem and radical lines}]
Consider any triangle $ABC$ and its Morley triangle $A'B'C'$. Let $l_A$ be the radical line of $(AB'C')$ and $(A'BC)$. Similarly define the lines $l_B$, $l_C$. Then the lines $l_A$, $l_B$, $l_C$ form an equilateral triangle which is perspective to $ABC$.
\end{theorem}
\begin{theorem}[\textbf{Napoleon's theorem and radical lines}]
Consider any triangle $ABC$ and its Napoleon triangle $A'B'C'$. Let $l_A$ be the radical line of $(AB'C')$ and $(A'BC)$. Similarly define the lines $l_B$, $l_C$. Then the lines $l_A$, $l_B$, $l_C$ are concurrent.
\end{theorem}
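As a quick numeric illustration of the Napoleon triangle appearing above (a minimal sketch with an arbitrary test triangle; the centers are erected with one consistent rotation direction, which gives the outer or inner Napoleon triangle depending on the orientation of $ABC$, and both are equilateral):

```python
import math

def erected_center(p, q):
    # centroid of the equilateral triangle erected on the directed segment pq:
    # the midpoint shifted by |pq| / (2*sqrt(3)) along the perpendicular of pq
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    dx, dy = q[0] - p[0], q[1] - p[1]
    k = 1 / (2 * math.sqrt(3))
    return (mx - dy * k, my + dx * k)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
NA, NB, NC = erected_center(B, C), erected_center(C, A), erected_center(A, B)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = sorted([dist(NA, NB), dist(NB, NC), dist(NC, NA)])
print(sides[-1] - sides[0] < 1e-12)  # the Napoleon triangle is equilateral
```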
\subsection{On IMO 2011 Problem 6}
Here we present some interesting facts related to IMO 2011 Problem 6 \cite[Problem G8]{IMO}.
\begin{definition}
For any triangle $ABC$ and any point $P$ on $(ABC)$, let $\omega$ denote the circumcircle of the triangle formed by the reflections of the tangent line to $(ABC)$ at $P$ in the sides of $ABC$. Define $\otimes(ABC, P)$ to be the tangency point of $\omega$ and $(ABC)$. \end{definition}
\begin{theorem}
For any four lines $l_1$, $l_2$, $l_3$, $l_4$ consider triangles $\triangle_{ijk}$, which are formed by the lines $l_i$, $l_j$, $l_k$. Let $M$ be the Miquel point of the lines $l_1$, $l_2$, $l_3$, $l_4$. Then the points $\otimes(\triangle_{123}, M)$, $\otimes(\triangle_{124}, M)$, $\otimes(\triangle_{234}, M)$, $\otimes(\triangle_{134}, M)$ lie on the circle $\boxtimes(l_1, l_2, l_3, l_4)$.
\end{theorem}
\begin{theorem}
Let a parabola $\mathcal{K}$ be given, together with lines $l_1$, $l_2$, $l_3$, $l_4$, $l_5$ tangent to $\mathcal{K}$. Then the circles $\boxtimes(l_1, l_2, l_3, l_4)$, $\boxtimes(l_1, l_2, l_3, l_5)$, $\boxtimes(l_1, l_2, l_5, l_4)$ pass through a common point.
\end{theorem}
\section{Blow-ups}
\begin{theorem}[\textbf{Blow-up of Desargues's theorem}]
Consider circles $\omega_A$, $\omega_B$, $\omega_C$, $\omega_{A'}$, $\omega_{B'}$, $\omega_{C'}$ with centers at the points $A$, $B$, $C$, $A'$, $B'$, $C'$ respectively. Let $H_{AB}$ be the external homothety center of the circles $\omega_A$, $\omega_B$. Similarly define the points $H_{BC}$, $H_{CA}$, $H_{A'B'}$, $H_{B'C'}$, $H_{C'A'}$, $H_{AB'}$, $H_{AA'}$, $H_{BB'}$, $H_{CC'}$. Let $\triangle^H$ be the triangle formed by the lines $H_{AB}H_{A'B'}$, $H_{BC}H_{B'C'}$, $H_{CA}H_{C'A'}$. Then:
\begin{enumerate}
\item Triangle $\triangle^H$ is perspective to $H_{AA'}H_{BB'}H_{CC'}$, to $ABC$ and to $A'B'C'$.
\item All these perspective centers lie on the same line.
\end{enumerate}
\end{theorem}
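The external homothety centers in the theorem above are easy to compute: for circles with centers $O_1$, $O_2$ and distinct radii $r_1$, $r_2$, the external center is $(r_2 O_1 - r_1 O_2)/(r_2 - r_1)$. A minimal sketch (with arbitrary sample circles) illustrating Monge's classical fact that the three pairwise external centers of three circles are collinear:

```python
def external_center(c1, c2):
    # external homothety center of two circles with distinct radii:
    # H = (r2*O1 - r1*O2) / (r2 - r1)
    (p1, r1), (p2, r2) = c1, c2
    return ((r2 * p1[0] - r1 * p2[0]) / (r2 - r1),
            (r2 * p1[1] - r1 * p2[1]) / (r2 - r1))

c1, c2, c3 = ((0.0, 0.0), 1.0), ((5.0, 1.0), 2.0), ((2.0, 4.0), 0.5)
H12 = external_center(c1, c2)
H13 = external_center(c1, c3)
H23 = external_center(c2, c3)

# twice the signed area spanned by the three centers: zero iff collinear
cross = ((H13[0] - H12[0]) * (H23[1] - H12[1])
         - (H23[0] - H12[0]) * (H13[1] - H12[1]))
print(abs(cross) < 1e-9)  # Monge's theorem
```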
\begin{theorem}
Consider points $A$, $B$, $C$, $D$ and let $P = AB\cap CD$ and $Q= BC\cap AD$. Consider segments $A_1A_2$, $B_1B_2$, $C_1C_2$ and $D_1D_2$. Suppose that the segments $A_1A_2$, $B_1B_2$, $C_1C_2$, $D_1D_2$ are parallel to each other and that the points $A$, $B$, $C$, $D$ are the midpoints of the segments $A_1A_2$, $B_1B_2$, $C_1C_2$, $D_1D_2$ respectively. Suppose also that $P = A_1B_1\cap C_1D_1 = A_2B_2\cap C_2D_2$ and $Q= B_1C_1\cap A_1D_1 = B_2C_2\cap A_2D_2$. Let $\Omega_{PBC}$ be the circle through the points $\mathcal{M}(P, B_1B_2)$, $\mathcal{M}(P, C_1C_2)$, $\mathcal{M}(B_1B_2, C_1C_2)$, and let $\mathcal{L}_{PBC}$ be the line through the center of $\Omega_{PBC}$ perpendicular to $DA$ (see the definition of $\mathcal{M}$ in Section 4). Similarly define the lines $\mathcal{L}_{PDA}$, $\mathcal{L}_{QAB}$, $\mathcal{L}_{QCD}$. Consider the case when $\mathcal{L}_{PBC}$, $\mathcal{L}_{PDA}$, $\mathcal{L}_{QAB}$ intersect at the same point $W$. Then $\mathcal{L}_{QCD}$ also passes through $W$.
\end{theorem}
\begin{theorem}[\textbf{Blow-up of Miquel point}]
Consider any four points $A$, $B$, $C$, $D$. Let $P= AB\cap CD$, $Q= BC\cap DA$. Consider circles $\omega_A$, $\omega_B$, $\omega_C$, $\omega_D$ with centers at the points $A$, $B$, $C$, $D$ and radii $r_A$, $r_B$, $r_C$, $r_D$ respectively. Suppose that $\frac{|PA|}{|PB|} = \frac{r_A}{r_B}$, $\frac{|PC|}{|PD|} = \frac{r_C}{r_D}$, $\frac{|QB|}{|QC|} = \frac{r_B}{r_C}$, $\frac{|QD|}{|QA|} = \frac{r_D}{r_A}$. Let $\Omega_{PBC}$ be the circle through $P$ internally tangent to the circles $\omega_B$, $\omega_C$, and let $\Omega_{QAB}$ be the circle through $Q$ internally tangent to the circles $\omega_A$, $\omega_B$. Similarly define the circles $\Omega_{QCD}$ and $\Omega_{PDA}$. Then there exists a circle $\omega_M$ which is tangent to $\Omega_{PBC}$, $\Omega_{QAB}$, $\Omega_{QCD}$ and $\Omega_{PDA}$.
\end{theorem}
\begin{theorem}[\textbf{Blow-up of Pascal's theorem}]
Consider two circles $\Omega_1$ and $\Omega_2$. Let circles $\omega_A$, $\omega_B$, $\omega_C$, $\omega_D$, $\omega_E$, $\omega_F$ be tangent to both $\Omega_1$ and $\Omega_2$. Consider the two external tangents $\mathcal{L}_{AB}^{(1)}$, $\mathcal{L}_{AB}^{(2)}$ to the circles $\omega_A$, $\omega_B$, and similarly the two external tangents $\mathcal{L}_{CD}^{(1)}$, $\mathcal{L}_{CD}^{(2)}$ to the circles $\omega_C$, $\omega_D$. Let the lines $\mathcal{L}_{AB}^{(1)}$, $\mathcal{L}_{AB}^{(2)}$, $\mathcal{L}_{CD}^{(1)}$, $\mathcal{L}_{CD}^{(2)}$ form a convex quadrilateral $\Box_{AB, CD}$ and let $P_{AB, CD}$ be the intersection of the diagonals of $\Box_{AB, CD}$. Similarly define $\Box_{BC, DE}$, $\Box_{CD, EF}$ and the points $P_{BC, DE}$, $P_{CD, EF}$. Then $P_{AB, CD}$, $P_{BC, DE}$, $P_{CD, EF}$ are collinear.
\end{theorem}
\begin{theorem}[\textbf{Blow-up of Brianchon's theorem}]
Consider two circles $\Omega_1$ and $\Omega_2$. Consider two circles $\omega_A$, $\omega_A'$ which are tangent to both $\Omega_1$ and $\Omega_2$ and also tangent to each other. Similarly define the pairs of circles $\omega_B$, $\omega_B'$; $\omega_C$, $\omega_C'$; $\omega_D$, $\omega_D'$; $\omega_E$, $\omega_E'$; $\omega_F$, $\omega_F'$. Consider the two external tangents $\mathcal{L}_{A}^{(1)}$, $\mathcal{L}_{A}^{(2)}$ to the circles $\omega_A$, $\omega_A'$, and similarly the two external tangents $\mathcal{L}_{B}^{(1)}$, $\mathcal{L}_{B}^{(2)}$ to the circles $\omega_B$, $\omega_B'$. Let the lines $\mathcal{L}_{A}^{(1)}$, $\mathcal{L}_{A}^{(2)}$, $\mathcal{L}_{B}^{(1)}$, $\mathcal{L}_{B}^{(2)}$ form a convex quadrilateral $\Box_{A,B}$ and let $P_{A, B}$ be the intersection of the diagonals of $\Box_{A, B}$. Similarly define $\Box_{B, C}$, $\Box_{C, D}$, $\Box_{D, E}$, $\Box_{E, F}$, $\Box_{F, A}$ and $P_{B, C}$, $P_{C, D}$, $P_{D, E}$, $P_{E, F}$, $P_{F, A}$. Then the lines $P_{A, B}P_{D, E}$, $P_{B, C}P_{E, F}$, $P_{C, D}P_{F, A}$ are concurrent.
\end{theorem}
\section{Generalizations of Feuerbach's theorem}
\subsection{The Feuerbach point for a set of coaxial circles} Consider any triangle $ABC$ and suppose three circles $\omega_A$, $\omega_B$, $\omega_C$ are given such that $\omega_A$ is tangent to $BC$, $\omega_B$ is tangent to $AC$, $\omega_C$ is tangent to $AB$, and the circles $\omega_A$, $\omega_B$, $\omega_C$, $(ABC)$ are coaxial. Then the set of circles $\omega_A$, $\omega_B$, $\omega_C$ can be seen as a generalized incircle of $ABC$.
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw [line width=2.0pt] (-7.383273107066006,-3.8712030420752663) circle (1.193043622741855cm); \draw (-13.011318820725373,-5.050325953521931)-- (-6.112203231338463,-0.6237341581992785); \draw (-6.112203231338463,-0.6237341581992785)-- (0.7869123580484471,-5.084464193588687); \draw (-13.011318820725373,-5.050325953521931)-- (0.7869123580484471,-5.084464193588687); \draw [shift={(-6.770342547431304,-5.972058435324329)},line width=2.0pt] plot[domain=2.138381011587314:2.9682900320739085,variable=\t]({1.0*4.848646581051996*cos(\t r)+-0.0*4.848646581051996*sin(\t r)},{0.0*4.848646581051996*cos(\t r)+1.0*4.848646581051996*sin(\t r)}); \draw [shift={(-6.770342547431294,-5.972058435324332)},line width=2.0pt] plot[domain=1.3169878454064106:2.138381011587315,variable=\t]({1.0*4.8486465810520025*cos(\t r)+-0.0*4.8486465810520025*sin(\t r)},{0.0*4.8486465810520025*cos(\t r)+1.0*4.8486465810520025*sin(\t r)}); \draw [shift={(-6.770342547431295,-5.972058435324336)},line width=2.0pt] plot[domain=0.17337808514020336:1.3169878454064106,variable=\t]({1.0*4.848646581052009*cos(\t r)+-0.0*4.848646581052009*sin(\t r)},{0.0*4.848646581052009*cos(\t r)+1.0*4.848646581052009*sin(\t r)}); \draw [shift={(-7.1911135633897265,-4.529841133047875)},line width=2.0pt] plot[domain=1.5772316122459031:3.378141643753832,variable=\t]({1.0*2.704953898322903*cos(\t r)+-0.0*2.704953898322903*sin(\t r)},{0.0*2.704953898322903*cos(\t r)+1.0*2.704953898322903*sin(\t r)}); \draw [shift={(-7.191113563389729,-4.529841133047876)},line width=2.0pt] plot[domain=-0.23587762648541233:1.5772316122459022,variable=\t]({1.0*2.704953898322901*cos(\t r)+-0.0*2.704953898322901*sin(\t r)},{0.0*2.704953898322901*cos(\t r)+1.0*2.704953898322901*sin(\t r)}); \draw [shift={(-6.1199568179709125,-8.20129361694651)}] plot[domain=2.009004644801402:2.7351969514000345,variable=\t]({1.0*7.577563425597659*cos(\t r)+-0.0*7.577563425597659*sin(\t r)},{0.0*7.577563425597659*cos(\t r)+1.0*7.577563425597659*sin(\t r)}); \draw 
[shift={(-6.119956817970907,-8.20129361694654)}] plot[domain=1.2436690536293287:2.009004644801401,variable=\t]({1.0*7.577563425597687*cos(\t r)+-0.0*7.577563425597687*sin(\t r)},{0.0*7.577563425597687*cos(\t r)+1.0*7.577563425597687*sin(\t r)}); \draw [shift={(-6.119956817970911,-8.201293616946517)}] plot[domain=0.40422690663936983:1.2436690536293273,variable=\t]({1.0*7.577563425597666*cos(\t r)+-0.0*7.577563425597666*sin(\t r)},{0.0*7.577563425597666*cos(\t r)+1.0*7.577563425597666*sin(\t r)}); \begin{scriptsize} \draw [fill=ffffff] (-13.011318820725373,-5.050325953521931) circle (2pt); \draw [fill=ffffff] (-6.112203231338463,-0.6237341581992785) circle (2pt); \draw [fill=ffffff] (0.7869123580484471,-5.084464193588687) circle (2pt); \end{scriptsize} \end{tikzpicture}
\begin{theorem}
Consider any triangle $ABC$ and let three circles $\omega_A$, $\omega_B$, $\omega_C$ (with centers $O_A$, $O_B$, $O_C$ respectively) be given such that $\omega_A$ is tangent to $BC$ at $A'$, $\omega_B$ is tangent to $AC$ at $B'$, $\omega_C$ is tangent to $AB$ at $C'$, and the circles $\omega_A$, $\omega_B$, $\omega_C$, $(ABC)$ are coaxial. Then: \begin{enumerate}
\item\textbf{(Gergonne point)} Lines $AA'$, $BB'$, $CC'$ are concurrent.
\item\textbf{(Feuerbach point)} The pedal circles of $O_A$, $O_B$, $O_C$ wrt $ABC$, the nine-point circle of $ABC$ and the circle $(A'B'C')$ pass through a common point.
\end{enumerate}
\end{theorem}
\subsection{The Feuerbach point for a set of three conics} We can view a set of three confocal conics (the same situation as in \cite[Problem 11.18]{Ak}) as a generalized circle. Likewise, we can view the construction with three conics and their three common tangents (see the picture below) as a generalized incircle construction.
\definecolor{ffwwqq}{rgb}{0.0,0.0,0.0} \definecolor{ffqqqq}{rgb}{0.0,0.0,0.0} \definecolor{qqqqcc}{rgb}{0.0,0.0,0.0} \definecolor{ffffff}{rgb}{1.0,1.0,1.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw (-0.587235394223108,-1.0126292477674714)-- (5.332800372863854,3.2375557879493955); \draw (5.332800372863854,3.2375557879493955)-- (12.232241855893378,-1.0809455009203504); \draw (-0.587235394223108,-1.0126292477674714)-- (12.232241855893378,-1.0809455009203504); \draw [color=ffqqqq][rotate around={1.3476726290698011:(5.4111686153144705,1.4668181334445674)},line width=2.0pt] (5.4111686153144705,1.4668181334445674) ellipse (2.3419699480147997cm and 0.8198164529236314cm); \draw [color=ffwwqq][rotate around={-44.758575499605854:(4.293644300695104,0.3485878799609866)},line width=2.0pt] (4.293644300695104,0.3485878799609866) ellipse (1.7588691556845897cm and 0.8937928584691871cm); \draw [color=qqqqcc][rotate around={45.01809112536193:(6.486829668874906,0.40018402053548174)},line width=2.0pt] (6.486829668874906,0.40018402053548174) ellipse (1.8274206741832264cm and 0.9166061459868521cm); \begin{scriptsize} \draw [fill=ffffff] (5.369305354255545,-0.7180462329481028) circle (2pt); \draw [fill=ffffff] (3.2179832471346654,1.415221992870073) circle (2pt); \draw [fill=ffffff] (7.6043539834942635,1.5184142740190605) circle (2pt); \end{scriptsize} \end{tikzpicture}
\begin{theorem}
Consider points $A$, $B$, $C$. Let a set of conics $\mathcal{K}_{AB}$, $\mathcal{K}_{BC}$, $\mathcal{K}_{CA}$ be given, where the conic $\mathcal{K}_{AB}$ has foci $A$ and $B$, the conic $\mathcal{K}_{BC}$ has foci $B$ and $C$, and the conic $\mathcal{K}_{CA}$ has foci $C$ and $A$. Consider the common tangent line $l_{B}$ to the conics $\mathcal{K}_{AB}$, $\mathcal{K}_{BC}$; similarly define the lines $l_A$, $l_C$. Let the lines $l_A$, $l_B$, $l_C$ form a triangle $A'B'C'$. Let $K_A$ be the foot of the perpendicular from $A$ to $l_A$. Similarly define $K_B$ and $K_C$. Then
\begin{enumerate}
\item\textbf{(Gergonne point)} Triangles $A'B'C'$ and $K_AK_BK_C$ are perspective.
\item\textbf{(Another incenter)} Lines $A'A$, $B'B$, $C'C$ are concurrent at $I$.
\item\textbf{(Feuerbach point)} The pedal circle of the point $I$ wrt $A'B'C'$, the circle $(K_AK_BK_C)$, the nine-point circle of $A'B'C'$ and the cevian circle of $I$ wrt $ABC$ pass through a common point.
\end{enumerate}
\end{theorem}
\definecolor{qqccqq}{rgb}{1.0,0.4,0.0} \definecolor{ffwwqq}{rgb}{1.0,0.4,0.0} \definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \definecolor{wwqqcc}{rgb}{0.0,0.6,1.0} \definecolor{ffqqtt}{rgb}{1.0,0.0,0.2} \definecolor{qqqqcc}{rgb}{0.0,0.0,0.0} \definecolor{ffffff}{rgb}{1.0,1.0,1.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw [rotate around={50.37099469553565:(-1.2919157327661936,1.8485485389519756)},line width=2.0pt] (-1.2919157327661936,1.8485485389519756) ellipse (3.7030695692984836cm and 3.398096009400589cm); \draw [rotate around={-37.945748266635086:(1.134329192017107,1.8219838134981436)},line width=2.0pt] (1.134329192017107,1.8219838134981436) ellipse (3.2197614268874837cm and 2.6092628907875492cm); \draw [rotate around={-0.6273009872255816:(0.1957088926483848,0.6885555274679991)},line width=2.0pt] (0.1957088926483848,0.6885555274679991) ellipse (2.7467075349448997cm and 1.2872576132413915cm); \draw [color=ffqqqq] (-2.0551159617903325,2.262782244673629)-- (3.4773406929018877,-0.7264960514806054); \draw [color=ffqqqq] (-3.5977508857855787,-2.244370274460297)-- (2.3757419973622342,2.3594409050073386); \draw [color=ffqqqq] (0.1305957478720172,8.648827241497216)-- (0.5758569739244453,-1.3489738380152265); \draw [color=ffqqqq] (0.18605234050955832,1.1312143720513117) circle (2.510633627111844cm);
\draw [color=qqccqq] (0.4452414350365527,2.227251302272776) circle (3.5241403855928812cm); \draw [color=wwqqcc](-0.7369159484922225,6.114198038678601)-- (3.4773406929018877,-0.7264960514806054); \draw [color=wwqqcc](-3.5977508857855787,-2.244370274460297)-- (0.6990223055370557,7.0564792036407855); \draw [color=wwqqcc](-0.7473547696385957,-1.6328527072511434)-- (0.1305957478720172,8.648827241497216); \draw (0.1305957478720172,8.648827241497216)-- (-3.5977508857855787,-2.244370274460297); \draw (-3.5977508857855787,-2.244370274460297)-- (3.4773406929018877,-0.7264960514806054); \draw (0.1305957478720172,8.648827241497216)-- (3.4773406929018877,-0.7264960514806054); \draw [color=qqccqq] (-0.16509420707808867,5.186001694703164)-- (-0.961329361237006,5.4585241246195855); \draw [color=qqccqq] (-0.16509420707808867,5.186001694703164)-- (1.1935955502823705,5.671018364018009); \draw [color=qqccqq] (-0.16509420707808867,5.186001694703164)-- (1.1611115325210521,-1.2234146422797065); \draw [shift={(2.835242085620546,2.2358523018691048)},color=wwqqcc] plot[domain=1.9879320949142947:3.9653690328884137,variable=\t]({1.0*5.27274868307838*cos(\t r)+-0.0*5.27274868307838*sin(\t r)},{0.0*5.27274868307838*cos(\t r)+1.0*5.27274868307838*sin(\t r)}); \draw [color=qqccqq](1.6022114669638943,5.671233488609455) node[anchor=north west] {$\textbf{Pedal circle}$};
\draw [color=wwqqcc](-3.395756671945957,5.968563037949698) node[anchor=north west] {$\textbf{Cevian circle}$}; \draw [shift={(2.835242085620548,2.2358523018691128)},color=wwqqcc] plot[domain=0.8240818913943492:1.9879320949142956,variable=\t]({1.0*5.272748683078374*cos(\t r)+-0.0*5.272748683078374*sin(\t r)},{0.0*5.272748683078374*cos(\t r)+1.0*5.272748683078374*sin(\t r)}); \draw [shift={(2.8352420856205702,2.2358523018691345)},color=wwqqcc] plot[domain=3.965369032888414:4.102560690430101,variable=\t]({1.0*5.272748683078419*cos(\t r)+-0.0*5.272748683078419*sin(\t r)},{0.0*5.272748683078419*cos(\t r)+1.0*5.272748683078419*sin(\t r)}); \draw (-0.29503422882624764,3.476658243479091) node[anchor=north west] {$A$}; \draw (1.953504765518719,0.8573264992912372) node[anchor=north west] {$B$}; \draw (-2.107328624804891,1.168814598600063) node[anchor=north west] {$C$}; \begin{scriptsize} \draw [fill=ffffff] (-2.2305360321349164,0.7151202529218306) circle (2pt); \draw [fill=ffffff] (-0.35329543339747155,2.9819768249821204) circle (2pt); \draw [fill=ffffff] (2.621953817431686,0.6619908020141675) circle (2pt); \draw [fill=ffqqqq] (0.5758569739244453,-1.3489738380152265) circle (1.5pt); \draw[color=ffqqqq] (0.5544787692887413,-1.422200045650625) node {$K_A$}; \draw [fill=ffqqqq] (2.3757419973622342,2.3594409050073386) circle (1.5pt); \draw[color=ffqqqq] (2.508358664953216,2.5988281454269453) node {$K_C$}; \draw [fill=ffqqqq] (-2.0551159617903325,2.262782244673629) circle (1.5pt); \draw[color=ffqqqq] (-2.2772312244278883,2.5280353955840305) node {$K_B$}; \draw [fill=ffqqqq] (0.1305957478720172,8.648827241497216) circle (1.5pt); \draw[color=ffqqtt] (0.27130776991707833,8.842748681572047) node {$A'$}; \draw [fill=ffqqtt] (-3.5977508857855787,-2.244370274460297) circle (1.5pt); \draw[color=ffqqtt] (-3.735561871191953,-2.0876518941740256) node {$C'$}; \draw [fill=ffqqtt] (3.4773406929018877,-0.7264960514806054) circle (1.5pt); \draw[color=ffqqtt] 
(3.6127255625027015,-0.5302113976298962) node {$B'$}; \draw [fill=wwqqcc] (-0.7369159484922225,6.114198038678601) circle (1.5pt); \draw[color=wwqqcc] (-0.9180104274439062,6.435795186912937) node {$L_B$}; \draw [fill=wwqqcc] (0.6990223055370557,7.0564792036407855) circle (1.5pt); \draw[color=wwqqcc] (0.8234912186918211,7.384418034807998) node {$L_C$}; \draw [fill=wwqqcc] (-0.7473547696385957,-1.6328527072511434) circle (1.5pt); \draw[color=wwqqcc] (-0.7197907278837421,-1.7761637948651998) node {$L_A$}; \draw [fill=ffwwqq] (-0.16509420707808867,5.186001694703164) circle (1.5pt); \draw[color=ffwwqq] (-0.40830262857491284,5.0482572899918035) node {$I$}; \draw [fill=qqccqq] (-0.961329361237006,5.4585241246195855) circle (1.5pt); \draw [fill=qqccqq] (1.1935955502823705,5.671018364018009) circle (1.5pt); \draw [fill=qqccqq] (1.1611115325210521,-1.2234146422797065) circle (1.5pt); \draw [fill=ffffff] (-1.5673882298449362,-0.6656516151106914) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\section{Isogonal conjugacy for isogonal lines}
Consider any configuration with a triangle $ABC$ and points $P$, $Q$ such that $\angle BAP = \angle QAC$. As we will see next, it is natural to consider the isogonal conjugates of $P$, $Q$ wrt $ABC$.
\begin{theorem}
Let the incircle of $ABC$ be tangent to the sides of $ABC$ at $A_1$, $B_1$, $C_1$, and let the $A$-excircle of $ABC$ be tangent to the sides of $ABC$ at $A_2$, $B_2$, $C_2$. Let $A_1A_1^*$ be the $A_1$-altitude of $A_1B_1C_1$, and similarly let $A_2A_2^*$ be the $A_2$-altitude of $A_2B_2C_2$. Consider the isogonal conjugates $B'$, $C'$ of $B$, $C$ wrt $AA_1^*A_2^*$ respectively. Then the midpoint of $BC$ lies on $B'C'$.
\end{theorem}
\begin{theorem}
Consider any four points $A$, $B$, $C$, $D$. Let $P= AB\cap CD$, $Q= BC\cap DA$. Consider a point $N$ such that $\angle DNQ = \angle PNB$. Consider the isogonal conjugates $C_1$, $C_2$ of $C$ wrt the triangles $NBD$ and $NPQ$ respectively, the isogonal conjugates $Q_1$, $Q_2$ of $Q$ wrt the triangles $NAC$ and $NBD$ respectively, and the isogonal conjugates $D_1$, $D_2$ of $D$ wrt the triangles $NAC$ and $NPQ$ respectively. Then all the points $C_1$, $C_2$, $Q_1$, $Q_2$, $D_1$, $D_2$ lie on the same conic.
\end{theorem}
\begin{theorem}
Consider any triangle $ABC$ and any two points $P$, $Q$ on its circumcircle. Consider a point $A'$ on $BC$ such that $BC$ is the external angle bisector of $\angle PA'Q$. Similarly define $B'$ on $AC$ and $C'$ on $AB$. Consider the isogonal conjugates $B^A$, $C^A$ of $B$, $C$ wrt $PA'Q$ respectively. Similarly define the points $A^B$, $A^C$, $B^C$, $C^B$. Then all the points $B^A$, $C^A$, $A^B$, $A^C$, $B^C$, $C^B$ lie on the same conic.
\end{theorem}
\begin{commentary}
Note that the previous three theorems are related to \cite[Problem 4.5.7]{Ak}, \cite[Problem 4.12.3]{Ak} and \cite[Problem 3.13]{Ak}.
\end{commentary}
\section{Fun with some lines}
In this section we present some theorems involving several famous lines.
\begin{theorem}[\textbf{Gauss lines}]
Consider any four lines $l_1$, $l_2$, $l_3$, $l_4$. Let $l$ be the Gauss line of the complete quadrilateral formed by these four lines. Let $g_1$ be the Gauss line of the complete quadrilateral formed by four lines $l$, $l_2$, $l_3$, $l_4$. Similarly define the lines $g_2$, $g_3$, $g_4$. Then the lines $l$, $g_1$, $g_2$, $g_3$, $g_4$ are concurrent.
\end{theorem}
\begin{theorem}[\textbf{Steiner lines}]
Consider any four lines $l_1$, $l_2$, $l_3$, $l_4$. Let $l$ be the Steiner line of the complete quadrilateral formed by these four lines. Let $g_1$ be the Steiner line of the complete quadrilateral formed by the four lines $l$, $l_2$, $l_3$, $l_4$. Similarly define the lines $g_2$, $g_3$, $g_4$. Then the Steiner line of the complete quadrilateral formed by $g_1$, $g_2$, $g_3$, $g_4$ is parallel to $l$.
\end{theorem}
\begin{theorem}[\textbf{Euler lines}]
Let a triangle $ABC$ be given, and let the Euler line of $ABC$ cut its sides at $A_1$, $B_1$, $C_1$ respectively, so that we get three new triangles $AB_1C_1$, $A_1BC_1$, $A_1B_1C$. Let the three Euler lines of $AB_1C_1$, $A_1BC_1$, $A_1B_1C$ form a triangle $\triangle$. Then:
\begin{enumerate}
\item The Euler lines of $ABC$ and $\triangle$ coincide.
\item Let the Euler line of $\triangle$ and the three side lines of $\triangle$ form the triangles $\triangle$, $\triangle_1$, $\triangle_2$, $\triangle_3$. Then the Euler lines of $\triangle_1$, $\triangle_2$, $\triangle_3$ form the triangle $ABC$.
\end{enumerate}
\end{theorem}
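The Euler line itself can be checked numerically: the circumcenter $O$, centroid $G$ and orthocenter $H$ of any triangle are collinear. A minimal sketch with an arbitrary sample triangle:

```python
def solve2(a, b, e, c, d, f):
    # solve the linear system a*x + b*y = e, c*x + d*y = f
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

A, B, C = (0.0, 0.0), (6.0, 0.0), (2.0, 4.0)

# circumcenter: equidistance from A,B and from A,C gives two linear equations
O = solve2(2 * (B[0] - A[0]), 2 * (B[1] - A[1]),
           B[0]**2 + B[1]**2 - A[0]**2 - A[1]**2,
           2 * (C[0] - A[0]), 2 * (C[1] - A[1]),
           C[0]**2 + C[1]**2 - A[0]**2 - A[1]**2)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
# orthocenter: intersection of the altitudes from A and from B
H = solve2(C[0] - B[0], C[1] - B[1],
           (C[0] - B[0]) * A[0] + (C[1] - B[1]) * A[1],
           C[0] - A[0], C[1] - A[1],
           (C[0] - A[0]) * B[0] + (C[1] - A[1]) * B[1])

cross = (G[0] - O[0]) * (H[1] - O[1]) - (H[0] - O[0]) * (G[1] - O[1])
print(abs(cross) < 1e-9)  # O, G, H lie on the Euler line
```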
\begin{theorem}[\textbf{Simson lines}]
Let a triangle $ABC$ be given and let $P$ be any point on its circumcircle. Let the Simson line of $P$ wrt $ABC$ intersect $(ABC)$ at $Q$, $R$. Let the Simson lines of $Q$, $R$ wrt $ABC$ intersect at $N$. Then the midpoint of $PN$ lies on $QR$.
\end{theorem}
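A numeric sketch of the underlying Simson line fact (the feet of the perpendiculars from a point of the circumcircle to the three side lines are collinear), with arbitrary sample points on the unit circle:

```python
import math

def foot(p, a, b):
    # foot of the perpendicular from p onto the line ab
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    return (a[0] + t * dx, a[1] + t * dy)

# a triangle inscribed in the unit circle, and a point P on the same circle
A, B, C, P = [(math.cos(t), math.sin(t)) for t in (0.3, 2.1, 4.0, 5.2)]
F1, F2, F3 = foot(P, B, C), foot(P, C, A), foot(P, A, B)

# twice the signed area of the triangle of feet: zero iff collinear
cross = ((F2[0] - F1[0]) * (F3[1] - F1[1])
         - (F3[0] - F1[0]) * (F2[1] - F1[1]))
print(abs(cross) < 1e-12)  # the three feet lie on the Simson line of P
```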
\section{On Three Pascal lines}
Consider a configuration that includes three Pascal lines $l_1$, $l_2$, $l_3$ of suitably chosen hexagons. In many such configurations these three lines turn out to be concurrent.
\begin{theorem}
Let a triangle $ABC$ with orthocenter $H$ be given, and let $AA'$, $BB'$, $CC'$ be the altitudes of $ABC$. Let $I_1$, $I_2$ be the incenters of the triangles $C'B'A$, $C'A'B$ respectively, and let $I_1'$, $I_2'$ be the $C'$-excenters of $AC'B'$ and $C'BA'$ respectively. From Theorem 6.6 we know that the points $I_1$, $I_2$, $I_1'$, $I_2'$, $A$, $B$ lie on the same conic, so let $l_C$ be the Pascal line of the hexagon $AI_1I_1'BI_2I_2'$ in this order. Similarly define the lines $l_A$, $l_B$. Then $l_A$, $l_B$, $l_C$ are concurrent.
\end{theorem}
\begin{theorem}
Consider a triangle $ABC$ and its first Fermat point $F$. Let $O_A$ be the circumcenter of $BFC$; similarly define the points $O_B$, $O_C$. Let $X_A$ be the reflection of $O_A$ in the side $BC$; similarly define the points $X_B$, $X_C$. It is well known that the circles $(O_ABO_C)$, $(O_AO_BC)$ and $(AO_BO_C)$ intersect at the second Fermat point $F_2$. Let the lines $F_2X_A$, $F_2X_B$ intersect $(X_AX_BC)$ a second time at $T_{CA}$, $T_{CB}$ respectively. Similarly define the points $T_{AB}$, $T_{AC}$, $T_{BA}$, $T_{BC}$. Let $l_C$ be the Pascal line of the hexagon $T_{CA}CT_{CB}O_AO_B$ in this order. Similarly define $l_A$, $l_B$. Then $l_A$, $l_B$, $l_C$ are concurrent.
\end{theorem}
\begin{theorem} Consider any points $A'$, $B'$, $C'$ on the sides $BC$, $CA$, $AB$ of $ABC$ respectively. Let the circles $(A'B'C)$, $(AB'C')$ and $(A'BC')$ intersect at $X$. Let the line through $X$ perpendicular to $AB$ intersect $(AB'C')$ at $P_{AB}$, and let the line through $X$ perpendicular to $AC$ intersect $(AB'C')$ at $P_{AC}$. Similarly define the points $P_{BA}$, $P_{BC}$ on $(A'BC')$, and $P_{CA}$, $P_{CB}$ on $(A'B'C)$. Let $l_A$ be the Pascal line of the hexagon $P_{AB}XP_{AC}B'AC'$ in this order. Similarly define $l_B$, $l_C$. Then:
\begin{enumerate}
\item Lines $AB$, $l_A$, $XP_{AC}$ are concurrent.
\item Let $A''=l_B\cap l_C$, $B'' = l_A\cap l_C$, $C'' = l_A\cap l_B$. Then $XP_{BC}$, $BB''$, $CC''$ are concurrent.
\end{enumerate}
\end{theorem}
\subsection{New Pascal line} Consider any conic $\mathcal{C}$ and let conics $\mathcal{C}_1$, $\mathcal{C}_2$, $\mathcal{C}_3$ be tangent to it at six points. Let the common internal tangents to the conics $\mathcal{C}_1$, $\mathcal{C}_2$ meet at $X_{12}$, the common internal tangents to the conics $\mathcal{C}_1$, $\mathcal{C}_3$ meet at $X_{13}$, and the common internal tangents to the conics $\mathcal{C}_2$, $\mathcal{C}_3$ meet at $X_{23}$. Then $X_{12}$, $X_{23}$, $X_{13}$ are collinear. See the picture below for more details.
\definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw [rotate around={0.3005284740075708:(-4.4782212554020795,0.028220915665245875)}] (-4.4782212554020795,0.028220915665245875) ellipse (7.0117514894687645cm and 0.9761705316932237cm); \draw(-10.185053638503375,-0.0017129009385687926) circle (0.556042684842982cm); \draw(1.1111860447961652,0.05753880724089326) circle (0.5791388509422064cm); \draw [rotate around={19.990518003323153:(-4.4782212554020715,0.028220915665248227)}] (-4.4782212554020715,0.028220915665248227) ellipse (2.042931611061124cm and 0.7123716021331129cm); \draw (-6.157168010203404,-0.8737789436285963)-- (-10.13250141524023,0.570927320190138); \draw (-10.132079293368546,0.5518005913314532)-- (1.0560111488900652,-0.5189657803665344); \draw (1.0499665189245055,0.6334328759051819)-- (-10.12627567576413,-0.5546402184619754); \draw (1.04761934831359,-0.5381655820455087)-- (-2.788389514350013,0.9261663033436611); \draw (-4.802315373508265,-0.815100518573936)-- (1.0929852475032855,0.6497565204647491); \draw [dash pattern=on 3pt off 3pt] (-8.255650450335173,-0.11115341845106604)-- (-0.8194503426661027,0.1745568799268181); \draw (-6.997637111239369,1.219462078415699) node[anchor=north west] {$\mathcal{C}$}; \draw (-10.417316367534905,0.4543812956512786) node[anchor=north west] {$\mathcal{C}_1$}; \draw (-3.6938791856657147,0.8369216870334889) node[anchor=north west] {$\mathcal{C}_2$}; \draw (0.865693156061668,0.539390271513992) node[anchor=north west] {$\mathcal{C}_3$}; \draw (-10.055477174092994,-0.5424470382355465)-- (-3.995719911646171,0.909656439414368); \begin{scriptsize} \draw [fill=ffqqqq] (-8.255650450335173,-0.11115341845106604) circle (1.5pt); \draw[color=ffqqqq] (-8.214811083819137,0.17810212409746012) node {$X_{12}$}; \draw [fill=ffqqqq] (-0.8194503426661027,0.1745568799268181) circle (1.5pt); \draw[color=ffqqqq] (-0.8344863611926102,0.6456514913423838) node {$X_{23}$}; \draw [fill=ffqqqq] (-4.651849261824445,0.027310191720221075) circle (1.5pt); \draw[color=ffqqqq] 
(-4.6792104968017165,0.3906245637542436) node {$X_{13}$}; \end{scriptsize} \end{tikzpicture}
\begin{definition}
We call this line $\text{Pasc}_{\mathcal{C}}(\mathcal{C}_1, \mathcal{C}_2, \mathcal{C}_3)$; here the order of the conics is important.
\end{definition}
\begin{theorem}
Consider three conics $\mathcal{C}_A$, $\mathcal{C}_B$, $\mathcal{C}_C$, and suppose there exists a conic $\mathcal{C}$ which is tangent to them at six points. Let the two outer tangents to the conics $\mathcal{C}_A$, $\mathcal{C}_B$ intersect the two outer tangents to the conics $\mathcal{C}_A$, $\mathcal{C}_C$ at the points $P_A^{(1)}$, $P_A^{(2)}$, $P_A^{(3)}$, $P_A^{(4)}$. Similarly define $P_B^{(1)}$, $P_B^{(2)}$, $P_B^{(3)}$, $P_B^{(4)}$, $P_C^{(1)}$, $P_C^{(2)}$, $P_C^{(3)}$, $P_C^{(4)}$ (see the picture below for more details). Define the conic $\mathcal{C}_{AB}$ as the union of the external tangents to the conics $\mathcal{C}_A$, $\mathcal{C}_B$; similarly define the conics $\mathcal{C}_{BC}$, $\mathcal{C}_{CA}$. Note that a segment $XY$ may be regarded as a degenerate conic. Let the lines $\text{Pasc}_{\mathcal{C}_{AB}}(\mathcal{C}_A, P_{A}^{(1)}P_{A}^{(4)}, \mathcal{C}_B)$ and $\text{Pasc}_{\mathcal{C}_{BC}}(\mathcal{C}_C, P_{C}^{(1)}P_{C}^{(2)}, \mathcal{C}_B)$ meet at $W_B$. Similarly define $W_A$, $W_C$. Then $P_{A}^{(1)}W_A$, $P_{B}^{(1)}W_B$, $P_{C}^{(1)}W_C$ are concurrent.
\end{theorem}
\definecolor{ffqqqq}{rgb}{1.0,0.0,0.0} \definecolor{qqccqq}{rgb}{0.0,0.6,1.0} \definecolor{ffffff}{rgb}{1.0,1.0,1.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw(-5.927380554700882,-0.7956155059193679) circle (3.956404344821457cm); \draw [rotate around={-51.47741441083869:(-3.6237416740074337,1.038264699620901)}] (-3.6237416740074337,1.038264699620901) ellipse (2.102833721950116cm and 1.0042143224650857cm); \draw [rotate around={64.70652146073907:(-8.589562353057218,0.4624213669167215)}] (-8.589562353057218,0.4624213669167215) ellipse (2.1028337219500925cm and 1.0042143224650744cm); \draw [rotate around={8.207378488465768:(-5.507039478345662,-3.709921955221418)}] (-5.507039478345662,-3.709921955221418) ellipse (2.1028337219501028cm and 1.0042143224650792cm); \draw [dash pattern=on 3pt off 3pt] (-5.123694065825692,-1.0104834813962966)-- (-7.417872926140182,0.09780563653314696); \draw [dash pattern=on 3pt off 3pt] (-6.535834784234573,-1.1742372433810018)-- (-4.622566169429861,0.36892907894155635); \draw [dash pattern=on 3pt off 3pt] (-5.613063098868942,-2.465201944792031)-- (-6.000277484387625,0.17601443650678822); \draw [color=qqccqq] (-9.191971609609855,-1.4822464890823759)-- (-4.899286421260857,-2.6612423513314907); \draw [color=ffqqqq] (-7.552245361125632,-3.7368203494185392)-- (-7.390424527599698,0.8811087174626273); \draw [color=ffqqqq] (-7.493646738091285,2.1973617180772216)-- (-4.181240525504717,0.0878731515502491); \draw [color=qqccqq] (-5.094482956019033,2.459711407703674)-- (-7.9447301764669085,-0.4377855453745729); \draw [color=qqccqq] (-3.5266826674854332,-3.1721154024139064)-- (-4.88436522045534,1.2148603145272876); \draw [color=ffqqqq] (-2.5922903333563,-0.7169384551179991)-- (-6.385380304968094,-2.912178291137984); \draw (-12.055430640288748,1.9353262152297415)-- (-0.711648563780299,3.250766642622808); \draw (-0.711648563780299,3.250766642622808)-- (-5.01380147230369,-7.5958583469496315); \draw (-5.01380147230369,-7.5958583469496315)-- (-12.055430640288748,1.9353262152297415); \draw (-7.661447520642244,-4.012141775112518)-- (-4.976900122983826,2.756162259698645); \draw 
(-7.675309504667428,2.4432510307643285)-- (-3.3526314360490708,-3.407702135330326); \draw (-9.503815201447011,-1.5184082969308847)-- (-2.2705832250310456,-0.6796328604568388); \draw (-9.348544202588934,0.6115577673658549) node[anchor=north west] {$\mathcal{C}_A$}; \draw (-3.914482363908655,1.829537145001087) node[anchor=north west] {$\mathcal{C}_B$}; \draw (-6.401545149009974,-3.492095828051312) node[anchor=north west] {$\mathcal{C}_C$}; \draw (-12.346647285998746,2.7851825028379613) node[anchor=north west] {$P_A^{(3)}$}; \draw (-8.053908780207426,3.347326830977299) node[anchor=north west] {$P_A^{(4)}$}; \draw (-10.336555445985349,-1.6932339780054304) node[anchor=north west] {$P_A^{(2)}$}; \draw (-5.277256492731295,3.6658752835895907) node[anchor=north west] {$P_B^{(2)}$}; \draw (-0.7971365442268638,4.09685260182975) node[anchor=north west] {$P_B^{(3)}$}; \draw (-2.1088066432186556,-0.30661130192839703) node[anchor=north west] {$P_B^{(4)}$}; \draw (-6.912585447318464,0.6490340559084774) node[anchor=north west] {$P_C^{(1)}$}; \draw (-7.048862860200728,-0.40030202328495335) node[anchor=north west] {$P_B^{(1)}$}; \draw (-5.58388067171639,-0.9249700628816687) node[anchor=north west] {$P_A^{(1)}$}; \draw (-3.1990259462767683,-3.079856654082464) node[anchor=north west] {$P_C^{(2)}$}; \draw (-4.885458930694786,-7.4458442692979885) node[anchor=north west] {$P_C^{(3)}$}; \draw (-8.53087972529535,-3.998025723376716) node[anchor=north west] {$P_C^{(4)}$}; \draw (-5.464159775438935,-2.1846707115681614) node[anchor=north west] {$W_C$}; \draw (-8.260440798003908,0.39112849060047655) node[anchor=north west] {$W_A$}; \draw (-4.464342979860318,0.662265248723491) node[anchor=north west] {$W_B$}; \begin{scriptsize} \draw [fill=ffffff] (-4.622566169429861,0.36892907894155635) circle (1.5pt); \draw [fill=ffffff] (-7.417872926140182,0.09780563653314696) circle (1.5pt); \draw [fill=ffffff] (-5.613063098868942,-2.465201944792031) circle (1.5pt); \draw [fill=ffffff] 
(-9.191971609609855,-1.4822464890823759) circle (1.5pt); \draw [fill=ffffff] (-7.552245361125632,-3.7368203494185392) circle (1.5pt); \draw [fill=ffffff] (-7.493646738091285,2.1973617180772216) circle (1.5pt); \draw [fill=ffffff] (-5.094482956019033,2.459711407703674) circle (1.5pt); \draw [fill=ffffff] (-3.5266826674854332,-3.1721154024139064) circle (1.5pt); \draw [fill=ffffff] (-2.5922903333563,-0.7169384551179991) circle (1.5pt); \draw [fill=ffffff] (-12.055430640288748,1.9353262152297415) circle (1.5pt); \draw [fill=ffffff] (-0.711648563780299,3.250766642622808) circle (1.5pt); \draw [fill=ffffff] (-0.711648563780299,3.250766642622808) circle (1.5pt); \draw [fill=ffffff] (-5.01380147230369,-7.5958583469496315) circle (1.5pt); \draw [fill=ffffff] (-5.01380147230369,-7.5958583469496315) circle (1.5pt); \draw [fill=ffffff] (-12.055430640288748,1.9353262152297415) circle (1.5pt); \draw [fill=ffffff] (-7.661447520642244,-4.012141775112518) circle (1.5pt); \draw [fill=ffffff] (-4.976900122983826,2.756162259698645) circle (1.5pt); \draw [fill=ffffff] (-7.675309504667428,2.4432510307643285) circle (1.5pt); \draw [fill=ffffff] (-3.3526314360490708,-3.407702135330326) circle (1.5pt); \draw [fill=ffffff] (-9.503815201447011,-1.5184082969308847) circle (1.5pt); \draw [fill=ffffff] (-2.2705832250310456,-0.6796328604568388) circle (1.5pt); \draw [fill=ffffff] (-6.535834784234573,-1.1742372433810018) circle (1.5pt); \draw [fill=ffffff] (-6.000277484387625,0.17601443650678822) circle (1.5pt); \draw [fill=ffffff] (-5.123694065825692,-1.0104834813962966) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\section{Facts related to the set of confocal conics}
\begin{theorem}
Consider any set of points $A_1$, $A_2$, $\ldots$, $A_n$ and a set of conics $\mathcal{K}_1$, $\mathcal{K}_2$, $\ldots$, $\mathcal{K}_n$, where the conic $\mathcal{K}_i$ has foci $A_{i}$ and $A_{i+1}$ for every $1\leq i\leq n$ (with $A_{n+1} = A_1$). For each $i$, let $l_i$ be the line passing through the two intersection points of the conics $\mathcal{K}_i$ and $\mathcal{K}_{i + 1}$. If the lines $l_1$, $l_2$, $\ldots$, $l_{n-1}$ pass through a common point $P$, then the line $l_n$ also passes through $P$.
\end{theorem}
\begin{commentary}
The previous theorem can be seen as an $n$-conic analog of \cite[Problem 11.18]{Ak}.
\end{commentary}
Observing Theorem 13.1, one can view $P$ as an analog of the incenter for the set of conics $\mathcal{K}_1$, $\mathcal{K}_2$, $\ldots$, $\mathcal{K}_n$. Applying this analogy to \cite[Problem 5.5.10]{Ak}, we obtain the following theorem.
\begin{theorem}
Consider a cyclic quadrilateral $ABCD$. Let $\mathcal{K}_{AB}$ be an ellipse with foci $A$ and $B$, and let $\mathcal{K}_{CD}$ be an ellipse with foci $C$ and $D$. Let the segments $AC$, $AD$, $BC$, $BD$ intersect the ellipse $\mathcal{K}_{AB}$ at $T_{AC}$, $T_{AD}$, $T_{BC}$, $T_{BD}$ respectively, and let the segments $CA$, $CB$, $DA$, $DB$ intersect the ellipse $\mathcal{K}_{CD}$ at $T_{CA}$, $T_{CB}$, $T_{DA}$, $T_{DB}$ respectively. Let $I_C$ be the intersection point of the tangent lines to $\mathcal{K}_{AB}$ at $T_{AC}$ and $T_{BC}$, and let $I_D$ be the intersection point of the tangent lines to $\mathcal{K}_{AB}$ at $T_{AD}$ and $T_{BD}$. Similarly define the points $I_A$, $I_B$ using the ellipse $\mathcal{K}_{CD}$. Then the points $I_A$, $I_B$, $I_C$, $I_D$ lie on the same circle.
\end{theorem}
We can view a configuration of three confocal conics (the same situation as in \cite[Problem 11.18]{Ak}) as an analog of a configuration of three circles $\omega_A$, $\omega_B$, $\omega_C$ with centers at $A$, $B$ and $C$; thus \cite[Problem 11.18]{Ak} can be seen as an analog of the radical center theorem. The next two pictures describe conic analogs of \cite[Theorem 6.3.7]{Ak} and \cite[Theorem 10.11]{Ak}.
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw [rotate around={-66.1707356271919:(-2.1245186707387655,2.1147289690092292)}] (-2.1245186707387655,2.1147289690092292) ellipse (1.9599950257357077cm and 0.8424823523000249cm); \draw [rotate around={63.8345250658968:(-3.5233589298036194,2.341649347394238)}] (-3.5233589298036194,2.341649347394238) ellipse (2.3343746602986277cm and 1.7447701756644716cm); \draw [rotate around={-9.214275880875809:(-2.808381780666485,0.7228190094083604)}] (-2.808381780666485,0.7228190094083604) ellipse (2.1246968422389774cm and 1.5830634045777392cm); \draw [dash pattern=on 3pt off 3pt] (-6.309756829868154,-0.2867912419496193)-- (-3.5477470997927134,1.3375859844858176); \draw [dash pattern=on 3pt off 3pt] (-3.5477470997927134,1.3375859844858176)-- (0.25212733808678844,-0.15820389265697415); \draw [dash pattern=on 3pt off 3pt] (-3.5477470997927134,1.3375859844858176)-- (-1.9992752530123383,6.575977132862175); \draw (-6.309756829868154,-0.2867912419496193)-- (-1.4095415216016314,0.495898631023351); \draw (-1.4095415216016314,0.495898631023351)-- (-1.9992752530123383,6.575977132862175); \draw (-2.839495819875898,3.7335593069951054)-- (0.25212733808678844,-0.15820389265697415); \draw (-4.207222039731341,0.9497393877933701)-- (0.25212733808678844,-0.15820389265697415); \draw (-6.309756829868154,-0.2867912419496193)-- (-2.839495819875898,3.7335593069951054); \draw (-4.207222039731341,0.9497393877933701)-- (-1.9992752530123383,6.575977132862175); \draw [rotate around={68.19859051364821:(3.6404447770974184,2.0328440008127457)}] (3.6404447770974184,2.0328440008127457) ellipse (1.2617404435303345cm and 1.0526824170408495cm); \draw [rotate around={-66.06525220653162:(4.225093515501749,1.9435238524373069)}] (4.225093515501749,1.9435238524373069) ellipse (1.3904850022530333cm and 1.1342427325008433cm); \draw [rotate around={-8.686240940738992:(3.9667570132968324,1.2976825969250183)}] (3.9667570132968324,1.2976825969250183) ellipse (1.1610084026014185cm and 0.9990737083356319cm); \draw 
(1.3817593984566303,0.34031303529374074)-- (5.898312579549596,0.2617144881301778); \draw [dash pattern=on 3pt off 3pt] (5.898312579549596,0.2617144881301778)-- (3.956357578684052,6.968743752474139); \draw (3.8595816544703654,4.4439674717053705)-- (7.484444689637288,-0.6823441563935639); \draw (2.173398509998168,0.742081063360782)-- (7.484444689637288,-0.6823441563935639); \draw (2.173398509998168,0.742081063360782)-- (3.956357578684052,6.968743752474139); \draw (1.3817593984566303,0.34031303529374074)-- (3.8595816544703654,4.4439674717053705); \draw (3.720758942899589,0.8222372505401745)-- (3.956357578684052,6.968743752474139); \draw (3.057074428122954,1.952806561070234)-- (7.484444689637288,-0.6823441563935639); \draw (4.7158641976714035,2.0324183027317453)-- (1.3817593984566303,0.34031303529374074); \begin{scriptsize} \draw [fill=ffffff] (-1.4095415216016314,0.495898631023351) circle (1.5pt); \draw [fill=ffffff] (-2.839495819875898,3.7335593069951054) circle (1.5pt); \draw [fill=ffffff] (-4.207222039731341,0.9497393877933701) circle (1.5pt); \draw [fill=ffffff] (-1.9992752530123383,6.575977132862175) circle (1.5pt); \draw [fill=ffffff] (-6.309756829868154,-0.2867912419496193) circle (1.5pt); \draw [fill=ffffff] (0.25212733808678844,-0.15820389265697415) circle (1.5pt); \draw [fill=ffffff] (-3.5477470997927134,1.3375859844858176) circle (1.5pt); \draw [fill=ffffff] (3.3821082748925018,1.3870027453004565) circle (1.5pt); \draw [fill=ffffff] (3.898781279302334,2.678685256325034) circle (1.5pt); \draw [fill=ffffff] (4.551405751701162,1.2083624485495799) circle (1.5pt); \draw [fill=ffffff] (4.7158641976714035,2.0324183027317453) circle (1.5pt); \draw [fill=ffffff] (2.8478783090062776,1.0843890858773324) circle (1.5pt); \draw [fill=ffffff] (3.057074428122954,1.952806561070234) circle (1.5pt); \draw [fill=ffffff] (4.9958929673849495,0.7988305520655441) circle (1.5pt); \draw [fill=ffffff] (3.720758942899589,0.8222372505401745) circle (1.5pt); \draw [fill=ffffff] 
(3.8144695012973817,3.2670415448181815) circle (1.5pt); \draw [fill=ffffff] (3.956357578684052,6.968743752474139) circle (1.5pt); \draw [fill=ffffff] (2.173398509998168,0.742081063360782) circle (1.5pt); \draw [fill=ffffff] (7.484444689637288,-0.6823441563935639) circle (1.5pt); \draw [fill=ffffff] (3.8595816544703654,4.4439674717053705) circle (1.5pt); \draw [fill=ffffff] (1.3817593984566303,0.34031303529374074) circle (1.5pt); \draw [fill=ffffff] (5.898312579549596,0.2617144881301778) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\begin{definition}
Consider any two conics $\mathcal{K}_1$ and $\mathcal{K}_2$ which share the same focus $F$, and let them intersect at the points $A$, $B$. Let $X$ be the intersection point of the tangents to the conic $\mathcal{K}_1$ at $A$, $B$, and similarly let $Y$ be the intersection point of the tangents to the conic $\mathcal{K}_2$ at $A$, $B$. We then define $\mathcal{L}_F(\mathcal{K}_1 , \mathcal{K}_2) = XY$.
\end{definition}
\begin{theorem}
Consider points $A$, $B$, $C$, and let conics $\mathcal{K}_{AB}$, $\mathcal{K}_{BC}$, $\mathcal{K}_{CA}$ be given, where the conic $\mathcal{K}_{AB}$ has foci $A$ and $B$, and similarly for $\mathcal{K}_{BC}$ and $\mathcal{K}_{CA}$. Let $X_C$ be the intersection point of the lines $\mathcal{L}_{A}(\mathcal{K}_{AB} , \mathcal{K}_{CA})$ and $\mathcal{L}_{B}(\mathcal{K}_{AB} , \mathcal{K}_{BC})$. Similarly define the points $X_A$ and $X_B$. Then the lines $AX_A$, $BX_B$ and $CX_C$ are concurrent.
\end{theorem}
\begin{theorem}
Consider a cyclic quadrilateral $ABCD$ and let $P= AC\cap BD$. Let the conic $\mathcal{K}_{AB}$ have foci $A$, $B$ and pass through the incenter of $ABP$. Likewise, let the conic $\mathcal{K}_{BC}$ have foci $B$, $C$ and pass through the incenter of $BCP$. Similarly define the conics $\mathcal{K}_{CD}$ and $\mathcal{K}_{DA}$. Let $P_{AB}$ be the intersection point of the lines $\mathcal{L}_A(\mathcal{K}_{AB} , \mathcal{K}_{DA})$ and $\mathcal{L}_B(\mathcal{K}_{AB} , \mathcal{K}_{BC})$. Similarly define the points $P_{BC}$, $P_{CD}$, $P_{DA}$. Then:
\begin{enumerate}
\item Lines $P_{AB}P_{CD}$, $P_{BC}P_{DA}$ intersect at $P$.
\item Define point $A^*$ as the second intersection point of the line $\mathcal{L}_A(\mathcal{K}_{AB} , \mathcal{K}_{DA})$ with $(ABCD)$. Similarly define the points $B^*$, $C^*$, $D^*$. Then the lines $A^*C^*$ and $B^*D^*$ intersect at $P$.
\end{enumerate}
\end{theorem}
\definecolor{ffffff}{rgb}{1.0,1.0,1.0} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw(-1.9449837908720675,9.527425330318346) circle (3.9774050150527cm); \draw (-0.7384668608292344,5.737429611153694)-- (-2.0215535384729044,13.50409324800002); \draw (-5.653797867363089,10.964244316756238)-- (1.8559109979214456,10.699154590125698); \draw (1.8559109979214456,10.699154590125698)-- (-2.0215535384729044,13.50409324800002); \draw (-2.0215535384729044,13.50409324800002)-- (-5.653797867363089,10.964244316756238); \draw (-5.653797867363089,10.964244316756238)-- (-0.7384668608292344,5.737429611153694); \draw (-0.7384668608292344,5.737429611153694)-- (1.8559109979214456,10.699154590125698); \draw [rotate around={62.395912549730156:(0.5587220685461054,8.218292100639694)}] (0.5587220685461054,8.218292100639694) ellipse (3.0756734017851968cm and 1.273730373437171cm); \draw [rotate around={-35.881791064018394:(-0.08282127027571821,12.101623919062845)}] (-0.08282127027571821,12.101623919062845) ellipse (2.537916787245108cm and 0.8458242090544579cm); \draw [rotate around={34.9631919592365:(-3.837675702918005,12.234168782378124)}] (-3.837675702918005,12.234168782378124) ellipse (2.4346702009011882cm and 1.0082713536457595cm); \draw [rotate around={-46.759104427014925:(-3.1961323640961674,8.350836963954976)}] (-3.1961323640961674,8.350836963954976) ellipse (3.805014656231591cm and 1.2681162472396246cm); \draw [dash pattern=on 3pt off 3pt] (-3.459445247075839,5.849632901232724)-- (-0.6449055271618617,13.2863543216661); \draw (-5.377995114614327,14.034968227723823)-- (0.542311933802695,13.098576870094348); \draw (0.542311933802695,13.098576870094348)-- (4.694942127371137,5.513375510649789); \draw (4.694942127371137,5.513375510649789)-- (-6.10338374815116,5.95865934557596); \draw (-6.10338374815116,5.95865934557596)-- (-5.377995114614327,14.034968227723823); \draw [dash pattern=on 3pt off 3pt] (-5.850473881610498,8.774499168659519)-- (1.0900296731508101,12.098114717810779); \draw [dash pattern=on 3pt off 3pt] (4.694942127371137,5.513375510649789)-- 
(-5.377995114614327,14.034968227723823); \draw [dash pattern=on 3pt off 3pt] (-6.10338374815116,5.95865934557596)-- (0.542311933802695,13.098576870094348); \draw(-2.72633644771811,11.829772883141644) circle (0.968263716042755cm); \draw(-0.8637345068813359,11.633041766565189) circle (0.8373632344715588cm); \draw(-0.12776239631046318,9.545226149946167) circle (1.2231894719227405cm); \draw(-2.649354297811856,9.601975740657382) circle (1.2554309880318006cm); \begin{scriptsize} \draw [fill=ffffff] (-0.7384668608292344,5.737429611153694) circle (1.5pt); \draw [fill=ffffff] (-5.653797867363089,10.964244316756238) circle (1.5pt); \draw [fill=ffffff] (-2.0215535384729044,13.50409324800002) circle (1.5pt); \draw [fill=ffffff] (1.8559109979214456,10.699154590125698) circle (1.5pt); \draw [fill=ffffff] (-1.5781919282523986,10.820377053239929) circle (1.5pt); \draw [fill=ffffff] (-0.12776239631046318,9.545226149946167) circle (1.5pt); \draw [fill=ffffff] (-0.8637345068813359,11.633041766565189) circle (1.5pt); \draw [fill=ffffff] (-2.72633644771811,11.829772883141644) circle (1.5pt); \draw [fill=ffffff] (-2.649354297811856,9.601975740657382) circle (1.5pt); \draw [fill=ffffff] (-3.459445247075839,5.849632901232724) circle (1.5pt); \draw [fill=ffffff] (-0.6449055271618617,13.2863543216661) circle (1.5pt); \draw [fill=ffffff] (-5.377995114614327,14.034968227723823) circle (1.5pt); \draw [fill=ffffff] (0.542311933802695,13.098576870094348) circle (1.5pt); \draw [fill=ffffff] (4.694942127371137,5.513375510649789) circle (1.5pt); \draw [fill=ffffff] (-6.10338374815116,5.95865934557596) circle (1.5pt); \draw [fill=ffffff] (-5.850473881610498,8.774499168659519) circle (1.5pt); \draw [fill=ffffff] (1.0900296731508101,12.098114717810779) circle (1.5pt); \end{scriptsize} \end{tikzpicture}
\end{document}
\begin{document}
\title{Multiplicative component models for replicated point processes} \author{Daniel Gervini \\
Department of Mathematical Sciences\\ University of Wisconsin--Milwaukee} \maketitle
\begin{abstract} We propose a multiplicative semiparametric model for the intensity function of replicated point processes. Two examples of applications are given: a temporal one, about the dynamics of Internet auctions, and a spatial one, about the spatial distribution of street robberies in Chicago.
\emph{Key words:} Doubly-stochastic process; functional data analysis; latent-variable model; Poisson process; spline smoothing. \end{abstract}
\section{Introduction}
Point processes in time and space have a broad range of applications, in diverse areas such as neuroscience, ecology, finance, astronomy, seismology, and many others. Examples are given in classic textbooks like Cox and Isham (1980), Diggle (2013), M\o ller and Waagepetersen (2004), Streit (2010), and Snyder and Miller (1991), and in the papers cited below. However, the point-process literature has mostly focused on single-realization cases, such as the distribution of trees in a single forest (Jalilian et al., 2013) or the distribution of cells in a single tissue sample (Diggle et al., 2006). Situations where several replications of a process are available are increasingly common, but this area is still relatively unexplored in the literature. We can cite Diggle et al.~(1991), Baddeley et al.~(1993), Diggle et al.~(2000), Bell and Grunwald (2004), Landau et al.~(2004), Wager et al.~(2004), and Pawlas (2011). However, these papers propose estimators for summary statistics of the processes rather than the intensity functions, which would be more informative.
When several replications of a process are available, it is possible to estimate the intensity functions by \textquotedblleft borrowing strength\textquotedblright\ across replications. Along these lines Wu et al.~(2013) propose estimators for the mean and principal components of independent and identically distributed realizations of a temporal doubly stochastic process based on kernel estimators of covariance functions. Gervini (2016) proposes an additive independent component model that has the advantages, over Wu et al., of treating the temporal and spatial cases in a unified way and of being easy to extend beyond the i.i.d.~case, for instance, to regression and multivariate settings. In fact, Gervini and Baur (2017) is an extension of this method to marked point processes.
In this paper we propose an alternative to the additive model of Gervini (2016), namely an additive model for the log-intensity functions. This simplifies the numerical and theoretical aspects of the procedure by eliminating the nonnegativity constraints, but the interpretability is somewhat hampered by the fact that the additive model for the log-intensities translates into a multiplicative model for the intensities. At the end of this brief paper we present two examples of application, one temporal and one spatial, to illustrate these issues.
\section{The model\label{sec:Model}}
A point process $X$ is a random countable set in a space $\mathscr{S}$, where $\mathscr{S}$ is usually $\mathbb{R}$ for temporal processes and $ \mathbb{R}^{2}$ or $\mathbb{R}^{3}$ for spatial processes (M\o ller and Waagepetersen, 2004, ch.~2; Streit, 2010, ch.~2). A process is locally finite if $\#(X\cap B)<\infty $ with probability one for any bounded $ B\subseteq \mathscr{S}$. In that case we can define the count function $ N(B)=\#(X\cap B)$ for any bounded $B\subseteq \mathscr{S}$, which essentially characterizes the process and is equivalent to $X$ in this case.
Let $X$ be locally finite and define $X_{B}=X\cap B$. Given a locally integrable function $\lambda :\mathscr{S}\rightarrow \lbrack 0,\infty )$, i.e.~a function $\lambda $ such that $\int_{B}\lambda <\infty $ for any bounded $B\subseteq \mathscr{S}$, we say that $X$ is a Poisson process with intensity function $\lambda $, denoted by $X\sim \mathscr{P}(\lambda )$, if \emph{(i)} $N(B)$ follows a Poisson distribution with rate $\int_{B}\lambda $ and \emph{(ii)} conditionally on $N(B)=m$, the $m$ points in $X_{B}$ are independent and identically distributed with density $\tilde{\lambda} =\lambda /\int_{B}\lambda $.
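The two defining properties translate directly into a simulation recipe for a temporal Poisson process on a bounded window: draw the count from a Poisson distribution with rate $\int_{B}\lambda$, then draw that many i.i.d.~points from the normalized density $\tilde{\lambda}$. A minimal sketch under an illustrative intensity (the particular function, window and rejection sampler are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_poisson_process(lam, a=0.0, b=7.0, n_grid=1000):
    """Simulate X ~ P(lam) on B = [a, b] via the two-step construction:
    (i) N(B) ~ Poisson(int_B lam), (ii) N(B) i.i.d. points with density
    lam / int_B lam, drawn here by rejection sampling."""
    grid = np.linspace(a, b, n_grid)
    vals = lam(grid)
    # trapezoid-rule approximation of int_B lambda
    total = np.sum((vals[1:] + vals[:-1]) / 2) * (grid[1] - grid[0])
    lam_max = vals.max()
    m = rng.poisson(total)              # step (i): the count
    points = []
    while len(points) < m:              # step (ii): i.i.d. draws from lam/total
        t = rng.uniform(a, b)
        if rng.uniform(0.0, lam_max) < lam(t):
            points.append(t)
    return np.sort(points)

x = simulate_poisson_process(lambda t: 2.0 + 1.5 * np.sin(t))
```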
For $X\sim \mathscr{P}(\lambda )$, then, the density function of $X_{B}$ at $ x_{B}=\{t_{1},\ldots ,t_{m}\}$ is \begin{eqnarray}
f(x_{B}) &=&f(m)f(t_{1},\ldots ,t_{m}|m) \label{eq:Pois_lik} \\ &=&\exp \left\{ -\int_{B}\lambda (t)dt\right\} \frac{\{\int_{B}\lambda (t)dt\}^{m}}{m!}\times \prod_{j=1}^{m}\tilde{\lambda}(t_{j}) \nonumber \\ &=&\exp \left\{ -\int_{B}\lambda (t)dt\right\} \frac{1}{m!} \prod_{j=1}^{m}\lambda (t_{j}). \nonumber \end{eqnarray} What we mean by\ density of $X_{B}$, whose realizations are sets, not vectors, is the following: if $\mathscr{N}$ is the family of locally finite subsets of $\mathscr{S}$, i.e.~$\mathscr{N}=\{A\subseteq \mathscr{S} :\#(A\cap B)<\infty $ for all bounded $B\subseteq \mathscr{S}\}$, then for any $F\subseteq \mathscr{N}$, \begin{eqnarray*} P\left( X_{B}\in F\right) &=&\sum_{m=0}^{\infty }\int_{B}\cdots \int_{B} \mathbb{I}(\{t_{1},\ldots ,t_{m}\}\in F)f(\{t_{1},\ldots ,t_{m}\})dt_{1}\cdots dt_{m} \\ &=&\sum_{m=0}^{\infty }\frac{\exp \left\{ -\int_{B}\lambda (t)dt\right\} }{m! }\int_{B}\cdots \int_{B}\mathbb{I}(\{t_{1},\ldots ,t_{m}\}\in F)\{\prod_{j=1}^{m}\lambda (t_{j})\}dt_{1}\cdots dt_{m}, \end{eqnarray*} and, more generally, for any function $h:\mathscr{N}\rightarrow \lbrack 0,\infty )$ \begin{equation} E\{h(X_{B})\}=\sum_{m=0}^{\infty }\int_{B}\cdots \int_{B}h(\{t_{1},\ldots ,t_{m}\})f(\{t_{1},\ldots ,t_{m}\})dt_{1}\cdots dt_{m}. \label{eq:Eh} \end{equation} A function $h$ on $\mathscr{N}$ is a function well defined on $\mathscr{S} ^{m}$ for any integer $m$ and invariant under permutation of the coordinates; for example, $h(\{t_{1},\ldots ,t_{m}\})=\sum_{j=1}^{m}t_{j}/m$.
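The density above is straightforward to evaluate numerically. The following sketch computes $\log f(x_B) = -\int_B\lambda + \sum_{j}\log\lambda(t_j) - \log m!$ for a temporal process, with the integral approximated by the trapezoid rule, and checks it against the closed form for a constant intensity (all numerical choices here are illustrative):

```python
import math
import numpy as np

def poisson_log_density(points, lam, a=0.0, b=7.0, n_grid=2000):
    """log f(x_B) = -int_B lam(t) dt + sum_j log lam(t_j) - log m!"""
    grid = np.linspace(a, b, n_grid)
    vals = lam(grid)
    integral = np.sum((vals[1:] + vals[:-1]) / 2) * (grid[1] - grid[0])
    pts = np.asarray(points, dtype=float)
    return -integral + np.sum(np.log(lam(pts))) - math.lgamma(len(pts) + 1)

# sanity check: for constant intensity c on [0, T],
# f(x_B) = exp(-cT) c^m / m!  exactly
c, T = 2.0, 7.0
pts = [1.0, 3.5, 6.0]
val = poisson_log_density(pts, lambda t: c * np.ones_like(np.asarray(t, dtype=float)), b=T)
exact = -c * T + len(pts) * math.log(c) - math.lgamma(len(pts) + 1)
# val and exact agree up to quadrature error
```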
Single realizations of point processes are often modeled as Poisson processes with fixed $\lambda $s, but for replicated point processes a single intensity function $\lambda $ rarely provides an adequate fit for all replications. It is more reasonable to assume that the $\lambda $s are subject-specific and treat them as latent random effects. Such processes are called doubly stochastic or Cox processes (M\o ller and Waagepetersen, 2004, ch.~5; Streit, 2010, ch.~8). A doubly stochastic process is a pair $
(X,\Lambda )$ where $X|\Lambda =\lambda \sim \mathscr{P}(\lambda )$ and $ \Lambda $ is a random function that takes values on the space $\mathscr{F}$ of non-negative locally integrable functions on $\mathscr{S}$. The $n$ replications of the process are then i.i.d.~realizations $(X_{1},\Lambda _{1}),\ldots ,(X_{n},\Lambda _{n})$ of $(X,\Lambda )$, where $X$ is observable but $\Lambda $ is not. In this paper we will assume that all $ X_{i}$s are observed on a common region $B$ of $\mathscr{S}$; the method can be extended to $X_{i}$s observed on non-conformal regions $B_{i}$ at the expense of higher computational complexity.
The latent intensity process $\Lambda $ characterizes the distribution of $X$ . Gervini (2016) proposes an additive model for $\Lambda $, but here we will explore the alternative approach of assuming an additive model for $\log \Lambda $, which is not constrained to be nonnegative. Let us assume, then, that \begin{equation} \log \Lambda (t)=\mu (t)+\sum_{k=1}^{p}U_{k}\phi _{k}(t) \label{eq:Log_lambda_model} \end{equation} where $\mu \in L^{2}(B)$ and $\phi _{1},\ldots ,\phi _{p}$ are orthonormal functions in $L^{2}(B)$. The $U_{k}$s are assumed independent $N(0,\sigma _{k}^{2})$ random variables. Model (\ref{eq:Log_lambda_model}), minus the Gaussianity assumption, is a truncated version of the Karhunen--Lo\`{e}ve expansion (Ash and Gardner, 1975, ch.~1) that any process in $L^{2}(B)$ must follow, so it requires little justification. The Gaussianity assumption on the $U_{k}$s is added in order to derive maximum likelihood estimators; see next section. Model (\ref{eq:Log_lambda_model}) translates into a multiplicative model for $\Lambda (t)$: \begin{equation} \Lambda (t)=\lambda _{0}(t)\prod_{k=1}^{p}\xi _{k}(t)^{U_{k}}, \label{eq:Lambda_model} \end{equation} where $\lambda _{0}=\exp \mu $ is the baseline intensity function and $\xi _{k}=\exp \phi _{k}$ is a multiplicative component.
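Model (\ref{eq:Log_lambda_model}) is easy to simulate once $\mu$, the $\phi_k$s and the $\sigma_k$s are fixed, working on the log scale and exponentiating at the end. A sketch with illustrative (not estimated) choices; the functions $\sqrt{2}\cos(2\pi kt)$ are orthonormal on $[0,1]$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_intensity(grid, mu, phis, sigmas):
    """One draw of Lambda(t) = lambda_0(t) * prod_k xi_k(t)^{U_k},
    computed as log Lambda = mu + sum_k U_k phi_k with U_k ~ N(0, sigma_k^2)."""
    u = rng.normal(0.0, sigmas)                        # latent scores U_k
    log_lam = mu(grid) + sum(uk * phi(grid) for uk, phi in zip(u, phis))
    return np.exp(log_lam), u

grid = np.linspace(0.0, 1.0, 200)
mu = lambda t: np.log(50.0) - (t - 0.5) ** 2           # baseline lambda_0 = exp(mu)
phis = [lambda t: np.sqrt(2) * np.cos(2 * np.pi * t),  # orthonormal on [0, 1]
        lambda t: np.sqrt(2) * np.cos(4 * np.pi * t)]
lam, u = sample_intensity(grid, mu, phis, np.array([0.5, 0.2]))
```

By construction the draw is strictly positive, which is the point of modeling $\log\Lambda$ rather than $\Lambda$ itself.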
The mean and components of model (\ref{eq:Log_lambda_model}) are functional parameters that need to be estimated. We will follow a semiparametric approach, modeling $\mu $ and the $\phi _{k}$s in terms of basis functions $ \beta _{1},\ldots ,\beta _{q}$, which can be, for example, B-splines for temporal processes or radial Gaussian kernels for spatial processes. Simplicial bases are another possibility for spatial processes, particularly if the domain $B$ is irregular. In any case, we will have $\mu (t)=\mathbf{c} _{0}^{T}\mathbf{\beta }(t)$ and $\phi _{k}(t)=\mathbf{c}_{k}^{T}\mathbf{ \beta }(t)$, where $\mathbf{\beta }$ is the vector of the $\beta _{k}$s. From (\ref{eq:Log_lambda_model}) we can express \[ \log \Lambda (t)=(\mathbf{c}_{0}+\mathbf{CU})^{T}\mathbf{\beta }(t) \] where $\mathbf{C}=[\mathbf{c}_{1},\ldots ,\mathbf{c}_{p}]$ and $\mathbf{U} =(U_{1},\ldots ,U_{p})^{T}$. The parameters $\mathbf{c}_{0}$ and the $\mathbf{c} _{k}$s, along with the variances $\sigma _{k}^{2}$ of the $U_{k}$s, are estimated by penalized maximum likelihood, as explained next.
\section{Estimation\label{sec:Estimation}}
Let us collect the parameters $\mathbf{c}_{0}$, $\mathbf{c}_{k}$s and $ \sigma _{k}^{2}$s into a single vector $\mathbf{\theta }$. From now on we will omit the subindex $B$ in $x_{B}$, since $B$ is fixed. Then the marginal density of $X_{B}$ at $x$ is \begin{eqnarray} f(x;\mathbf{\theta }) &=&\int \int f(x,\mathbf{u})~d\mathbf{u} \label{eq:marg_XB} \\ &=&\int \int f(x\mid \mathbf{u})f(\mathbf{u})~d\mathbf{u} \nonumber \end{eqnarray} where, for $x=\{t_{1},\ldots ,t_{m}\}$, \begin{eqnarray*} \log f(x\mid \mathbf{u}) &=&-\int_{B}\lambda _{\mathbf{u}}(t)dt+ \sum_{j=1}^{m}\log \lambda _{\mathbf{u}}(t_{j})-\log m! \\ &=&-\int_{B}\exp \{(\mathbf{c}_{0}+\mathbf{Cu})^{T}\mathbf{\beta }(t)\}dt \\ &&+(\mathbf{c}_{0}+\mathbf{Cu})^{T}\sum_{j=1}^{m}\mathbf{\beta }(t_{j})-\log m! \end{eqnarray*} and \[ \log f(\mathbf{u})=\sum_{k=1}^{p}\left( -\frac{1}{2}\log 2\pi \sigma _{k}^{2}-\frac{u_{k}^{2}}{2\sigma _{k}^{2}}\right) . \] There is no closed form for $f(x;\mathbf{\theta })$ but it can be easily computed by Monte Carlo integration, as explained in the Technical Supplement.
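A plain Monte Carlo average $(1/S)\sum_{s} f(x\mid \mathbf{u}_s)$ with $\mathbf{u}_s\sim N(\mathbf{0},\mathrm{diag}(\sigma_k^2))$ can be sketched as follows; the actual scheme is in the Technical Supplement, and the toy basis and parameter values below are illustrative only:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def marginal_log_f(points, c0, C, sigmas, beta, a=0.0, b=1.0, n_mc=2000, n_grid=400):
    """log f(x; theta), approximating f(x; theta) = E_U[f(x | U)] by a plain
    Monte Carlo average over draws u_s ~ N(0, diag(sigmas^2)).
    beta(t) must return the q x len(t) matrix of basis function values."""
    grid = np.linspace(a, b, n_grid)
    Bgrid = beta(grid)                              # q x n_grid
    Bpts = beta(np.asarray(points, dtype=float))    # q x m
    w = np.full(n_grid, grid[1] - grid[0])          # trapezoid weights
    w[0] /= 2.0; w[-1] /= 2.0
    logs = np.empty(n_mc)
    for s in range(n_mc):
        u = rng.normal(0.0, sigmas)
        coef = c0 + C @ u                           # coefficients of log lambda_u
        integral = np.exp(coef @ Bgrid) @ w         # int_B lambda_u
        logs[s] = -integral + np.sum(coef @ Bpts) - math.lgamma(len(points) + 1)
    mx = logs.max()                                 # log-mean-exp for stability
    return mx + math.log(np.mean(np.exp(logs - mx)))

beta = lambda t: np.vstack([np.ones_like(t), np.atleast_1d(t)])   # toy basis, q = 2
val = marginal_log_f([0.2, 0.5, 0.9],
                     c0=np.array([math.log(10.0), 0.0]),
                     C=np.eye(2), sigmas=np.array([0.3, 0.3]), beta=beta)
```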
The model parameters are estimated by penalized maximum likelihood. Since the dimension $q$ of the functional basis $\mathbf{\beta }$ may be large, a roughness penalty is necessary to obtain smooth $\mu $ and $\phi _{k}$s. We use penalties of the form $P(g)=\int_{B}\left\Vert \mathrm{H}g(t)\right\Vert _{F}^{2}\ dt$, where $\mathrm{H}$ denotes the Hessian and $\left\Vert \cdot \right\Vert _{F}$ the Frobenius matrix norm. Then for a temporal process $ P(g)=\int (g^{\prime \prime })^{2}$ and for a spatial process $P(g)=\int \{( \frac{\partial ^{2}g}{\partial t_{1}^{2}})^{2}+2(\frac{\partial ^{2}g}{ \partial t_{1}\partial t_{2}})^{2}+(\frac{\partial ^{2}g}{\partial t_{2}^{2}} )^{2}\}$, both of which are quadratic in the basis coefficients when evaluated at $\mu $ and the $\phi _{k}$s.
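In the temporal case, writing $g=\mathbf{c}^{T}\mathbf{\beta }$ gives $P(g)=\mathbf{c}^{T}\mathbf{\Omega c}$ with $\Omega _{jk}=\int_{B}\beta _{j}^{\prime \prime }\beta _{k}^{\prime \prime }$. This matrix can be approximated by finite differences and quadrature for any basis; a sketch (the monomial basis at the end is only a check, since $\int_{0}^{1}(2)^{2}dt=4$):

```python
import numpy as np

def penalty_matrix(basis, a=0.0, b=1.0, n_grid=1000):
    """Omega_{jk} ~= int_a^b beta_j''(t) beta_k''(t) dt, with second
    derivatives by central finite differences and trapezoid quadrature,
    so that P(g) = c^T Omega c for g = c^T beta (temporal case)."""
    grid = np.linspace(a, b, n_grid)
    h = grid[1] - grid[0]
    vals = basis(grid)                                        # q x n_grid
    d2 = (vals[:, 2:] - 2 * vals[:, 1:-1] + vals[:, :-2]) / h ** 2
    w = np.full(d2.shape[1], h)                               # trapezoid weights
    w[0] /= 2.0; w[-1] /= 2.0
    return (d2 * w) @ d2.T

# check: for the basis 1, t, t^2 only the t^2 column bends,
# so Omega is ~0 except Omega[2, 2] ~ int_0^1 (2)^2 dt = 4
basis = lambda t: np.vstack([np.ones_like(t), t, t ** 2])
Omega = penalty_matrix(basis)
```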
Then the penalized maximum likelihood estimator $\mathbf{\hat{\theta}}$ based on $n$ independent realizations $x_{1},\ldots ,x_{n}$ is \[ \mathbf{\hat{\theta}}=\operatorname*{argmax}_{\mathbf{\theta }}\rho _{n}(\mathbf{ \theta }) \] where \[ \rho _{n}(\mathbf{\theta })=\frac{1}{n}\sum_{i=1}^{n}\log f(x_{i};\mathbf{ \theta })-\nu _{1}P(\mu )-\nu _{2}\sum_{k=1}^{p}P(\phi _{k}) \] and $\nu _{1}$ and $\nu _{2}$ are smoothing parameters. We use two different parameters for $\mu $ and the $\phi _{k}$s because the latter have unit norm but $\mu $ does not, so it may be necessary to use $\nu _{1}$ and $\nu _{2}$ of different magnitudes to attain the same degree of smoothness. As mentioned before, $P(\mu )=\mathbf{c}_{0}^{T}\mathbf{\Omega c}_{0}$ and $ P(\phi _{k})=\mathbf{c}_{k}^{T}\mathbf{\Omega c}_{k}$ for a matrix $\mathbf{ \Omega }$ that depends on $\mathbf{\beta }$ and is derived in the Technical Supplement.
The smoothing parameters and the number of components $p$ can be chosen by cross-validation, by maximizing \begin{equation} \operatorname{CV}(\nu _{1},\nu _{2},p)=\sum_{i=1}^{n}\log f(x_{i};\mathbf{\hat{\theta }}_{(-i)}), \label{eq:cv_crit} \end{equation} where $\mathbf{\hat{\theta}}_{(-i)}$ is the estimator for the reduced sample obtained after deleting $x_{i}$.
\section{\label{sec:Example}Applications}
\subsection{Internet auction data}
In this section we analyze eBay auction data for Palm M515 Personal Digital Assistants (PDA) on week-long auctions that took place between March and May of 2003. The data was downloaded from the companion website of Jank and Shmueli (2010). There were 194 auctioned items in this sample; a subsample of 20 bid price trajectories is shown in Figure \ref{fig:data}. The dots are the actual bids; the solid lines are for better visualization only. Individual trajectories are hard to follow in Figure \ref{fig:data}, but some general trends are visible. For example, bidding activity seems to concentrate at the beginning and at the end of the auctions, in patterns that have been called \textquotedblleft early bidding\textquotedblright\ and \textquotedblleft bid sniping\textquotedblright , respectively. In this paper we are interested in the bidding times as a temporal point process, not in the bidding prices (the relationship between the two is explored in Gervini and Baur (2017) via additive models).
\begin{figure}
\centering
\includegraphics[width=3.78in]{eBay_data.eps}
\caption{Internet Auction Data. Price trajectories of Palm Digital Assistants auctioned at eBay (first 20 trajectories in a sample of 194).}
\label{fig:data}
\end{figure}
For these data we fitted a model (\ref{eq:Lambda_model}) with $p=2$ components, using cubic B-splines with 10 equally spaced knots as basis $ \mathbf{\beta }$. We found the smoothing parameters $\nu _{1}$ and $\nu _{2}$ by cross-validation, obtaining $\nu _{1}=10^{-4.5}$ and $\nu _{2}=10^{-2}$. We did not attempt to find an optimal $p$ by cross-validation, since for illustrative purposes $p=2$ suffices. The resulting baseline intensity function $\lambda _{0}$ and components $\xi _{1}$ and $\xi _{2}$ are shown in Figure \ref{fig:ebay_estim}. We see in Figure \ref{fig:ebay_estim}(a) that, as mentioned above, bidding generally intensifies towards the end of the auction period. The component $\xi _{1}$, shown in Figure \ref {fig:ebay_estim}(b), is greater than one everywhere, so it is a size component: items with component scores $u_{i1}>0$ will tend to have intensity functions $\lambda _{i}$ that are overall larger than the baseline $\lambda _{0}$, so they are items that attracted lots of bidders; whereas items with $u_{i1}<0$ will tend to have $\lambda _{i}$s overall smaller than the baseline and therefore are items that attracted few bidders. This interpretation is in fact corroborated by the correlation between $ \{u_{i1}\} $ and the number of bids per item, $\{m_{i}\}$, which is $.88$.
The second component, $\xi _{2}$, is a contrast or shape component, because $ \xi _{2}(t)>1$ for $t<1$ or $t>4$, and $\xi _{2}(t)<1$ for $1<t<4$, roughly. So, for an item $i$ with $u_{i2}>0$, the intensity $\lambda _{i}$ will tend to be below the baseline for $t\in (1,4)$ and above the baseline for $ t\notin (1,4)$. In particular, items subject to strong \textquotedblleft bid sniping\textquotedblright\ will tend to have positive $u_{i2}$s while items that show more \textquotedblleft early bidding\textquotedblright\ will tend to have negative $u_{i2}$s.
\begin{figure}
\centering
\includegraphics[width=6.14in]{eBay_lmb0_xis.eps}
\caption{Internet Auction Data. (a) Baseline intensity function $\lambda _{0}$. (b) Multiplicative components $\xi _{1}$ (solid line) and $\xi _{2}$ (dashed line).}
\label{fig:ebay_estim}
\end{figure}
\subsection{Street theft in Chicago}
As a second example, this time of a spatial process, we analyzed the spatial distribution of street robberies in Chicago during the year 2014. The data was downloaded from the City of Chicago Data Portal, a very extensive data repository that provides, among other things, detailed information about every crime reported in the city. The information provided includes type, date, time, and coordinates (latitude and longitude) of the incident. Here we focus on crimes recorded with primary type \textquotedblleft theft\textquotedblright\ and location \textquotedblleft street\textquotedblright . There were 16,278 reported incidents of this type between January 1, 2014 and December 31, 2014. Their locations cover most of the city, as shown in Figure \ref{fig:Maps}(a); a kernel-density estimator of these data is shown in Figure \ref{fig:Maps}(b).
\begin{figure}
\centering
\includegraphics[width=5.94in]{Maps_color.eps}
\caption{Chicago Street Theft. (a) Location of reported incidents in the year 2014. (b) Kernel density estimator of the data in (a).}
\label{fig:Maps}
\end{figure}
We grouped the data by day and considered them as $n=365$ replications of a spatial point process, for which we fitted a multiplicative model (\ref{eq:Lambda_model}). For illustrative purposes, we fitted a model with $p=3$ components (we did not attempt to find an optimal $p$). As basis $\mathbf{\beta }$ we used renormalized Gaussian radial kernels $\beta _{k}(\mathbf{t})=\exp \{-\left\Vert \mathbf{t}-\mathbf{\tau }_{k}\right\Vert ^{2}/2\delta _{k}^{2}\}/\sum_{j=1}^{q}\exp \{-\left\Vert \mathbf{t}-\mathbf{\tau }_{j}\right\Vert ^{2}/2\delta _{j}^{2}\}$, where the $\mathbf{\tau }_{k}$s were initially 100 uniformly spaced points in $[-87.84,-87.53]\times \lbrack 41.65,42.03]$, the smallest rectangle that includes the domain $B$ (the city of Chicago); the $\mathbf{\tau }_{k}$s outside $B$ were eliminated, leaving $q=40$ basis functions. The parameter $\delta _{k}$ was taken as half the distance between $\mathbf{\tau }_{k}$ and the closest $\mathbf{\tau }_{j}$. The optimal smoothing parameters, obtained by cross-validation, were $\nu _{1}=10^{-6.5}$ and $\nu _{2}=10^{-6}$.
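In case it helps the reader, the renormalized radial basis and the half-nearest-neighbor bandwidth rule described above can be sketched as follows (illustrative code with made-up centers, not the code used for the analysis):

```python
import numpy as np

# Renormalized Gaussian radial kernels:
# beta_k(t) = exp(-||t - tau_k||^2 / (2 delta_k^2)) / sum_j exp(-||t - tau_j||^2 / (2 delta_j^2))

def radial_basis(t, tau, delta):
    d2 = np.sum((tau - t) ** 2, axis=1)        # squared distances ||t - tau_k||^2
    w = np.exp(-d2 / (2.0 * delta ** 2))       # unnormalized Gaussian kernels
    return w / w.sum()                         # renormalize so the basis sums to 1

def half_nearest_bandwidths(tau):
    # delta_k = half the distance from tau_k to the closest other center tau_j
    diff = tau[:, None, :] - tau[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)             # exclude the center itself
    return 0.5 * dist.min(axis=1)

# Hypothetical centers (corners of the unit square), not the q = 40 centers of the paper
tau = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
delta = half_nearest_bandwidths(tau)
b = radial_basis(np.array([0.3, 0.4]), tau, delta)
```

By construction the basis values are nonnegative and sum to one at every location, which is what makes the $\beta_k$ a partition of unity over the domain.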
\FRAME{ftbpFU}{6.2137in}{2.546in}{0pt}{\Qcb{Chicago Street Theft. (a) Lateral view and (b) top view of baseline intensity function $\protect \lambda _{0}$. }}{\Qlb{fig:Crime_lmb0}}{crime_lmb0.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 6.2137in;height 2.546in;depth 0pt;original-width 9.7222in;original-height 3.9574in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_lmb0.eps';file-properties "XNPEU";}}
The baseline intensity $\lambda _{0}$ is shown in Figure \ref{fig:Crime_lmb0} and essentially coincides with the kernel smoother of the aggregated data (Figure \ref{fig:Maps}(b)), as is to be expected. The mode of $\lambda _{0}$ occurs at Pulaski and Wicker Park, which are generally safe and affluent neighborhoods, but this is precisely what attracts street thieves; the poorer, crime-riddled neighborhoods of the West and South sides of the city are less populated and have less foot traffic, so street theft is actually rarer there.
The multiplicative components $\xi _{1}$, $\xi _{2}$ and $\xi _{3}$ are shown in Figures \ref{fig:Crime_xi1}, \ref{fig:Crime_xi2} and \ref {fig:Crime_xi3}, respectively. The corresponding components of the log-intensity, $\phi _{1}$, $\phi _{2}$ and $\phi _{3}$, are shown in Figure \ref{fig:Crime_phis}. The latter are sometimes easier to interpret due to their scale. For instance, we clearly see that $\phi _{1}$ is nonnegative everywhere, whereas it is not easy to determine from Figure \ref {fig:Crime_xi1} if $\xi _{1}$ is greater than one everywhere or not. It also helps interpretation to plot the baseline intensity $\lambda _{0}$ versus $ \lambda _{+}=\exp (\mu +2\sigma _{k}\phi _{k})$ and $\lambda _{-}=\exp (\mu -2\sigma _{k}\phi _{k})$, since this shows the overall effect on $\lambda $ of moving in the direction of the components. For the first component this is shown in Figure \ref{fig:Crime_plusmin_1}. This plot confirms that $\xi _{1}$ is a size component: $\lambda $ will be greater than $\lambda _{0}$ everywhere for positive scores and smaller than $\lambda _{0}$ everywhere for negative scores, and the difference in amplitude will be more noticeable in the South-eastern part of the city, but not only in this part, as Figure \ref{fig:Crime_xi1} may seem to indicate. To further corroborate this interpretation, Figure \ref{fig:Crime_days_pc1} shows the incidents in the days with highest and lowest scores on the first component, which is in line with what has been said.
A similar analysis reveals that the second and third components are contrasts. For the second component, we see in Figure \ref{fig:Crime_plusmin_2} that positive scores correspond to $\lambda $s that are above the baseline in the North-west part of the city and below the baseline in the South side, and the other way around for negative scores. The individual plots of the two extreme days (Figure \ref{fig:Crime_days_pc2}) confirm this. For the third component, Figure \ref{fig:Crime_plusmin_3} shows that positive scores correspond to $\lambda $s that are above the baseline in the narrow strip of affluent North-east neighborhoods by the lake and below the baseline everywhere else, and the other way around for negative scores. This is confirmed by the individual plots of the two extreme days (Figure \ref{fig:Crime_days_pc3}).
\FRAME{ftbpFU}{5.9456in}{2.5452in}{0pt}{\Qcb{Chicago Street Theft. (a) Lateral view and (b) top view of first multiplicative component, $\protect \xi _{1}$. }}{\Qlb{fig:Crime_xi1}}{crime_xi1.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 5.9456in;height 2.5452in;depth 0pt;original-width 10.2774in;original-height 4.3734in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_xi1.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{5.9456in}{2.5452in}{0pt}{\Qcb{Chicago Street Theft. (a) Lateral view and (b) top view of second multiplicative component, $\protect \xi _{2}$. }}{\Qlb{fig:Crime_xi2}}{crime_xi2.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 5.9456in;height 2.5452in;depth 0pt;original-width 10.2774in;original-height 4.3734in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_xi2.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{5.9456in}{2.5452in}{0pt}{\Qcb{Chicago Street Theft. (a) Lateral view and (b) top view of third multiplicative component, $\protect \xi _{3}$. }}{\Qlb{fig:Crime_xi3}}{crime_xi3.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 5.9456in;height 2.5452in;depth 0pt;original-width 10.2774in;original-height 4.3734in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_xi3.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{3.3027in}{2.5443in}{0pt}{\Qcb{ Chicago Street Theft. Log-intensity components $\protect\phi _{1}$ (blue), $\protect\phi _{2}$ (green) and $\protect\phi _{3}$ (red).}}{\Qlb{fig:Crime_phis}}{crime_phis.eps }{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 3.3027in;height 2.5443in;depth 0pt;original-width 7.2938in;original-height 5.604in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_phis.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{3.0173in}{2.5443in}{0pt}{\Qcb{Chicago Street Theft. Baseline intensity function $\protect\lambda _{0}$ (blue) versus $\protect\lambda _{-} $ (green) and $\protect\lambda _{+}$ (red) for the first component.}}{\Qlb{ fig:Crime_plusmin_1}}{crime_plusmin_1.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 3.0173in;height 2.5443in;depth 0pt;original-width 6.6245in;original-height 5.5763in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_plusmin_1.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{5.4379in}{2.5443in}{0pt}{\Qcb{Chicago Street Theft. Days with highest [(a)] and lowest [(b)] scores on the first component.}}{\Qlb{ fig:Crime_days_pc1}}{crime_days_pc1.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 5.4379in;height 2.5443in;depth 0pt;original-width 9.5259in;original-height 4.4313in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_days_pc1.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{3.2759in}{2.5452in}{0pt}{\Qcb{Chicago Street Theft. Baseline intensity function $\protect\lambda _{0}$ (blue) versus $\protect\lambda _{-} $ (green) and $\protect\lambda _{+}$ (red) for the second component.}}{\Qlb{ fig:Crime_plusmin_2}}{crime_plusmin_2.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 3.2759in;height 2.5452in;depth 0pt;original-width 6.8753in;original-height 5.3281in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_plusmin_2.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{5.1612in}{2.5443in}{0pt}{\Qcb{Chicago Street Theft. Days with highest [(a)] and lowest [(b)] scores on the second component.}}{\Qlb{ fig:Crime_days_pc2}}{crime_days_pc2.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 5.1612in;height 2.5443in;depth 0pt;original-width 9.2353in;original-height 4.529in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_days_pc2.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{3.1946in}{2.5443in}{0pt}{\Qcb{Chicago Street Theft. Baseline intensity function $\protect\lambda _{0}$ (blue) versus $\protect\lambda _{-} $ (green) and $\protect\lambda _{+}$ (red) for the third component.}}{\Qlb{ fig:Crime_plusmin_3}}{crime_plusmin_3.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 3.1946in;height 2.5443in;depth 0pt;original-width 6.8597in;original-height 5.4518in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_plusmin_3.eps';file-properties "XNPEU";}}
\FRAME{ftbpFU}{5.4665in}{2.5452in}{0pt}{\Qcb{Chicago Street Theft. Days with highest [(a)] and lowest [(b)] scores on the third component.}}{\Qlb{ fig:Crime_days_pc3}}{crime_days_pc3.eps}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width 5.4665in;height 2.5452in;depth 0pt;original-width 9.5415in;original-height 4.4157in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename 'Crime_days_pc3.eps';file-properties "XNPEU";}}
\end{document}
\begin{document}
\begin{center} {\LARGE On the Stability of Symmetric Periodic Orbits of \\ the Elliptic Sitnikov Problem}\\ \vskip 0.3cm
Xiuli Cen\footnote{This author is supported by the National Natural Science Foundation of China (Grant No. 11801582).}\\ School of Mathematics (Zhuhai), Sun Yat-sen University, \\ Zhuhai, Guangdong 519082, China \\ E-mail: {\tt cenxiuli2010@163.com} \\ \vskip 0.2cm
Xuhua Cheng\footnote{This author is supported by the National Natural Science Foundation of China (Grant No. 11601257).} \\ Department of Applied Mathematics, Hebei University of Technology, \\ Tianjin 300130, China\\ E-mail: {\tt chengxuhua88@163.com} \vskip 0.2cm
Zaitang Huang\footnote{This author is supported by the Guangxi Natural Science Foundation (Grant No. 2018JJA110052).}\\ School of Mathematics and Statistics, Nanning Normal University,\\ Nanning 530023, China\\ E-mail: {\tt zaitanghuang@163.com}\\ \vskip 0.2cm
Meirong Zhang\footnote{Corresponding author. This author is supported by the National Natural Science Foundation of China (Grant No. 11790273).}\\ Department of Mathematical Sciences, Tsinghua University, \\ Beijing 100084, China\\ E-mail: {\tt zhangmr@tsinghua.edu.cn}
\end{center} \vskip 0.2cm
\begin{abstract} Motivated by recent works on the stability of symmetric periodic orbits of the elliptic Sitnikov problem, we study, for time-periodic Newtonian equations with symmetries, symmetric periodic solutions that emanate from nonconstant periodic solutions of autonomous equations. Using the theory of Hill's equations, we first deduce a criterion for the linearized stability and instability of periodic solutions which are odd in time. Such a criterion is complementary to that for periodic solutions which are even in time, obtained recently by the present authors. Applying these criteria to the elliptic Sitnikov problem, we prove in an analytical way that the odd $(2p,p)$-periodic solutions of the elliptic Sitnikov problem are hyperbolic, and therefore Lyapunov unstable, when the eccentricity is small, while the corresponding even $(2p,p)$-periodic solutions are elliptic and linearized stable. These are the first analytical results on the stability of nonconstant periodic orbits of the elliptic Sitnikov problem.
\end{abstract}
{\bf Mathematics Subject Classification (2010):} 34D20; 34C25; 34C23
{\bf Keywords:} Elliptic Sitnikov problem; periodic solution; symmetric solution; linearized stability/instability; Hill's equation; hyperbolic periodic solution; elliptic periodic solution.
\section{Introduction} \setcounter{section}{1} \setcounter{equation}{0} \label{main-result}
The elliptic Sitnikov problem, denoted by $(S_e)$, is the simplest model among the restricted $3$-body problems \cite{S60}. Assuming that the two primaries, with equal masses, move in a circular or an elliptic orbit of the $2$-body problem with eccentricity $e\in [0, 1)$, the Sitnikov problem describes the motion of the infinitesimal mass moving on the straight line orthogonal to the plane of motion of the primaries; its governing equation was given in \cite{BLO94, LO08} and will be stated as Eq. \x{se} in \S \ref{Sitnikov} of this paper. When $e=0$, $(S_0)$ is called the circular Sitnikov problem, whose equation, stated as Eq. \x{s0}, is an autonomous scalar Newtonian or Lagrangian equation. For $e\in(0,1)$, the equation for $(S_e)$ is a nonlinear scalar Newtonian equation which is $2\pi$-periodic in time.
Motions of problem $(S_e)$ have a long history of rigorous study, covering the following topics.
$\bullet$\ {\bf Oscillation and expressions of motions:} The motions of the circular Sitnikov problem can be expressed using various elliptic functions in an implicit way \cite{BLO94, F03, LS90, S60}. It is also found that the elliptic Sitnikov problem admits oscillatory motions. See the bibliography of \cite{LO08} for some historic references on this topic.
$\bullet$\ {\bf Existence and construction of periodic orbits:} Due to the symmetries of the elliptic Sitnikov problem, many interesting periodic orbits have been obtained in \cite{BLO94, LO08, LS80, O16, OR10}, mainly by using the bifurcation method and global continuation.
$\bullet$\ {\bf Stability and linearized stability of motions:} This is a central topic in dynamical systems \cite{O17, SM71}. For example, $(S_e)$ has the origin as an equilibrium, which can be considered as a $2\pi$-periodic solution. In case the equilibrium is elliptic, its Lyapunov stability can be studied using the third-order approximation developed by Ortega \cite{O96} and extended in \cite{LLYZ03}. See \cite[\S 6]{LO08} for details. As for nonconstant, even (in time) periodic solutions of $(S_e)$ that emanate from the corresponding solutions of $(S_0)$, the stability and linearized stability are studied in the very recent papers \cite{GNR18, GNRR18, M18, ZCC18}. Most of these works are based on the theory of Hill's equations. Though some analytical formulas have been derived, many of the results are numerical, due to the difficulties caused by nonconstant periodic solutions.
In this paper we continue the study of the stability and linearized stability of nonconstant, symmetric (in time) periodic solutions of $(S_e)$. Our aim is to provide some analytical results. In order to make such an analytical approach applicable to more general problems, we consider the following second-order nonlinear scalar Newtonian equation
\begin{equation} \label{xe}
\ddot x+ F(x,t,e)=0.
\end{equation}
Here $F(x,t,e)$ is a smooth function of $(x,t,e)\in {\mathbb R}^3$ fulfilling the following symmetries
\begin{equation} \label{Sy1}
\left\{\begin{array}{l} F(-x,t,e) \equiv -F(x,t,e), \\
F(x,-t,e) \equiv F(x,t,e),\\
F(x,t+2\pi,e) \equiv F(x,t,e),\\
F(x,t,0)\equiv f(x), \\
x f(x)> 0\mbox{ for }x\ne 0.
\end{array}\right.
\end{equation}
These symmetries are verified by the Sitnikov problem $(S_e)$. In particular, when $e=0$, the starting equation
\begin{equation} \label{x}
\ddot x+f(x)=0
\end{equation}
is autonomous and has the unique equilibrium $x=0$. Obviously, $f(x)$ is also odd in $x$.
Let $m,\ p\in \N$ be integers. We say that $x(t)$ is an $({m,p})$-periodic solution of Eq. \x{xe} if $x(t)$ is a ${2m\pi}$-periodic solution of \x{xe} and has precisely $2p$ zeros in each interval $[t_0,t_0+{2m\pi})$, $t_0\in {\mathbb R}$.
Because of the autonomy and the complete integrability, all $({m,p})$-periodic solutions of Eq. \x{x} are understood. In particular, with a suitable choice of $({m,p})$, Eq. \x{x} admits $({m,p})$-periodic solutions $\vp_{m,p}(t)$ and $\phi_{m,p}(t)$, which are respectively even and odd in time $t$. These are the symmetric $({m,p})$-periodic solutions of Eq. \x{x} we are interested in. Due to the autonomy of Eq. \x{x}, both $\vp_{m,p}(t)$ and $\phi_{m,p}(t)$ have the minimal period ${2m\pi}/p$.
From bifurcation theory, it is known that, under some non-degeneracy conditions, Eq. \x{xe} admits families of $({m,p})$-periodic solutions $\vp_{m,p}(t,e)$ and $\phi_{m,p}(t,e)$, $0 \le e \ll 1$, such that
\[
\left\{ \begin{array}{l}
\vp_{m,p}(t,0)\equiv\vp_{m,p}(t), \mbox{ and } \phi_{m,p}(t,0)\equiv\phi_{m,p}(t),\\
\vp_{m,p}(t,e) \mbox{ is even in } t, \ \vp_{m,p}(0,e)>0, \mbox{ and } \vp_{m,p}(t+m\pi,e) \equiv - \vp_{m,p}(t,e), \\
\phi_{m,p}(t,e) \mbox{ is odd in } t, \ \dot\phi_{m,p}(0,e)>0, \mbox{ and } \phi_{m,p}(t+m\pi,e) \equiv - \phi_{m,p}(t,e).
\end{array}
\right.
\] They are called the even and the odd $({m,p})$-periodic solutions of Eq. \x{xe}, respectively.
Generally speaking, when $e>0$, $\vp_{m,p}(t,e)$ and $\phi_{m,p}(t,e)$ have the minimal period ${2m\pi}$, not ${2m\pi}/p$. For more details, see Theorem \ref{M1}. For the elliptic Sitnikov problem $(S_e)$, such symmetric periodic solutions have been studied extensively in \cite{BLO94, LO08, O16}. Moreover, some interesting global continuations of these solutions have also been obtained. See, for example, \cite[Theorem 3.1]{LO08} and \cite[Theorem 1]{O16}.
Since the linearization equations of \x{xe} are Hill's equations with parameter $e$ \cite{MW66}, the linearized stability/instability of the periodic solutions $\vp_{m,p}(t,e)$ and $\phi_{m,p}(t,e)$ is related to the traces $\tau_{m,p}(e)$ of the corresponding Poincar\'e matrices. For $e=0$, one has $\tau_{m,p}(0)=2$ because Eq. \x{x} is autonomous and $\vp_{m,p}(t)$ and $\phi_{m,p}(t)$ are parabolic. Hence the signs of $\tau'_{m,p}(0)=\frac{\,{\rm d} \tau_{m,p}(e)}{\,{\rm d} e}|_{e=0}$, if they are nonzero, can yield the linearized stability or instability.
As for even $({m,p})$-periodic solutions $\vp_{m,p}(t,e)$, a formula of $\tau'_{m,p}(0)$ has been obtained in \cite{ZCC18} and will be restated as \x{tau-e1} of this paper.
One of the main results of this paper is to derive the corresponding formula of $\tau'_{m,p}(0)$ for the odd $({m,p})$-periodic solutions $\phi_{m,p}(t,e)$. See formula \x{dtau0} in \S \ref{criteria}. Note that formulas \x{dtau0} and \x{tau-e1} for $\tau'_{m,p}(0)$ involve the nonconstant periodic solutions $\vp_{m,p}(t)$ and $\phi_{m,p}(t)$ of the autonomous equation \x{x}, which are not known explicitly.
By applying these formulas to the elliptic Sitnikov problem $(S_e)$, we can obtain the following analytical results on the stability or instability for some families of symmetric periodic solutions.
\begin{Theorem} \label{M5-7} For those frequencies $({m,p})=(2p,p)$ where $p\in \N$ is arbitrary, we have the following results.
{\rm (i)} For the odd $(2p,p)$-periodic solutions $\phi_{2p,p}(t,e)$, one has
\(
\tau'_{2p, p}(0)>0.
\) Consequently, for $e>0$ small, $\phi_{2p,p}(t,e)$ is hyperbolic and Lyapunov unstable.
{\rm (ii)} For the even $(2p,p)$-periodic solutions $\vp_{2p,p}(t,e)$, one has
\(
\tau'_{2p, p}(0)<0.
\) Consequently, for $e>0$ small, $\vp_{2p,p}(t,e)$ is elliptic and linearized stable.
\end{Theorem}
It seems to us that these are the first analytical results on the stability or instability for the nonconstant symmetric periodic solutions of the elliptic Sitnikov problem $(S_e)$.
The organization of the paper is as follows. In \S \ref{Hill}, we will introduce some notions for Hill's equations. The linearization equations of the autonomous equation \x{x} along symmetric periodic solutions will be discussed, with emphasis on the relation between the fundamental solutions of the linearization equations and the solutions of Eq. \x{x} themselves. See Lemma \ref{psi12}. Moreover, a relation between the Poincar\'e matrices and the period function of the periodic solutions of Eq. \x{x} will be found in Lemma \ref{hbn}. These results may be of independent interest. In \S \ref{criteria}, we will first give the bifurcation result on odd $({m,p})$-periodic solutions $\phi_{m,p}(t,e)$ of Eq. \x{xe}. See Theorem \ref{M1}. Then we will derive the formula of $\tau'_{m,p}(0)$ in Theorem \ref{M2}. Finally, in \S \ref{Sitnikov}, we will use the formulas of $\tau'_{m,p}(0)$ to analyze the elliptic Sitnikov problem $(S_e)$. The results of Theorem \ref{M5-7} will be proved in \S \ref{odd} and \S \ref{even}.
Note from Theorem \ref{M5-7} that we have only obtained analytical results for families of symmetric periodic solutions with the very specific frequencies $({m,p})=(2p,p)$, because we are dealing with nonconstant periodic solutions. In fact, it is found numerically and analytically in \cite{GNRR18, ZCC18} that the stability/instability depends on the frequencies in a delicate way. As for the elliptic Sitnikov problem, we will prove in Theorems \ref{M4} and \ref{M6} that $\tau'_{m,p}(0)=0$ for both odd and even $({m,p})$-periodic solutions whenever the frequencies $({m,p})$ satisfy $m/(2p)\not\in \N$.
The remaining frequencies are $({m,p})=(2np, p)$, $n\ge 2$. For odd $(2np,p)$-periodic solutions, numerical simulation shows that $\tau'_{2np,p}(0)$ is always positive, so $\phi_{2np,p}(t,e)$ leads to instability. For even $(2np,p)$-periodic solutions, we will prove in Lemma \ref{rels} that the signs of $\tau'_{2np,p}(0)$ differ from those of the odd ones by a factor $(-1)^n$. Hence some even solutions are linearized stable, while the others are unstable. These observations will be stated as a conjecture at the end of the paper.
\iffalse Let us mention some important ones, among a lot of existence results.
$\bullet\bullet$\ In \cite[Theorem 3.1]{LO08}, Llibre and Ortega gave the following result. Let integers $p,\ m$ be in condition \x{mp}. Then there exists $e_{m,p}\in(0,1]$ and a family of solutions of problem $(S_e)$, $\vp_{m,p}(t,e)$, $e\in [0,e_{m,p})$ such that $\vp_{m,p}(t,e)$ is even, ${2m\pi}$-periodic in $t$, $\vp_{m,p}(0,e)>0$, and has precisely $2p$ zeros in one period. Moreover, a sharp estimate on the possible maximal eccentricity $e_{m,p}$ is also obtained there.
These are called the {\it even} periodic orbits of the $({m,p})$-type, because they have the shape like $\cos(p t/m)$. The original proof in \cite{LO08} is to use global continuation theory. However, if no estimate on $e_{m,p}$ is considered, these solutions can also be obtained from the Implicit Function Theorem (IFT) with the starting periodic solutions $\vp_{m,p}(t) :=\vp_{m,p}(t,0)$ being nonconstant ${2m\pi}$-periodic solutions of the circular Sitnikov problem $(S_0)$ of the $({m,p})$-type.
$\bullet\bullet$\ With the same choice of $({m,p})$ as in \x{mp}, an appropriate translation of $\vp_{m,p}(t)$ in $t$ can lead to an odd, ${2m\pi}$-periodic solution $\phi_{m,p}(t)$ of problem $(S_0)$ of the $({m,p})$-type. By the IFT again, one has then a smooth family of solutions of problem $(S_e)$, $\phi_{m,p}(t,e)$, $e\in [0,e_{m,p}^*)$ such that $\phi_{m,p}(t,e)$ is odd, ${2m\pi}$-periodic in $t$, $\dot\phi_{m,p}(0,e)>0$, and has precisely $2p$ zeros in one period. For details, see Theorem \ref{M1}. These are then called {\it odd} periodic solutions of the $({m,p})$-type.
$\bullet\bullet$\ In \cite[Theorem 1]{O16}, Ortega proved a very interesting result, i.e. for any $m\in \N$, the above family $\phi_{1,m}(t,e)$ of odd periodic orbits of the $(1,m)$-type is {\it uniquely, globally} defined. That is, for any $e\in[0,1)$, such a ${2m\pi}$-periodic solution $\phi_{1,m}(t,e)$ is existent and unique. Such a uniqueness is obtained from some property on solutions of the linearized equations satisfying the Dirichlet boundary conditions.
\fi
\section{Periodic Solutions and Linearization of Autonomous Equations} \setcounter{equation}{0} \label{Hill}
\subsection{Periodic solutions of autonomous equations} We consider the autonomous equation \x{x} with the symmetries as before. By introducing
\begin{equation} \label{Ex}
E(x):=\int_0^x f(u) \,{\rm d}u, \qq x\in {\mathbb R},
\end{equation}
an even function such that $E(0)=0$ and $E(x)>0$ for $x\ne 0$, we know that solutions $x(t)$ of \x{x} satisfy
\begin{equation} \label{ener}
C_h: \qq \frac{1}{2} \dot x^2(t) + E(x(t)) \equiv h,
\end{equation}
where $h\in [0, +\infty)$. For $h=0$, \x{ener} corresponds to the equilibrium $x(t)\equiv 0$. For
\[
0< h < E_{\max}:=\sup_{x\in {\mathbb R}} E(x),
\] $C_h$ consists of a nonconstant periodic orbit in the phase plane, whose minimal period is denoted by $T=T(h)>0$. We will not write down $T$ explicitly and refer to \cite{L91} for details.
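To see that the period function $T(h)$ is in general nonconstant, consider the model case $f(x)=x^{3}$ (an illustration of ours, not used in the sequel). Then $E(x)=x^{4}/4$, and if $x(t)$ is a solution of Eq. \x{x} of energy $h$ and minimal period $T(h)$, a direct computation shows that $x_{\lambda}(t):=\lambda x(\lambda t)$ is again a solution, of energy $\lambda^{4}h$ and minimal period $T(h)/\lambda$. Hence
\[
T(\lambda^{4}h)=\frac{T(h)}{\lambda}\quad \mbox{ for all } \lambda>0, \qq \mbox{i.e. } T(h)=T(1)\,h^{-1/4},
\]
a strictly decreasing period function.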
Because of the symmetries of $f(x)$, we are interested in the following two classes of periodic solutions of Eq. \x{x}.
{\bf Odd periodic solutions:} For
\begin{equation} \label{v}
\eta\in \left(0,\eta_{\max}\right),\qq \eta_{\max}:=\sqrt{2 E_{\max}},
\end{equation}
let $x=S(t)=S(t,\eta)$ be the solution of \x{x} satisfying the initial value conditions
\begin{equation} \label{ini}
\left(x(0), \dot x(0)\right)=(0,\eta).
\end{equation}
Then $S(t)$ is a periodic solution of \x{x} of the minimal period
\begin{equation} \label{Tv}
T=T(h),\qq \mbox{where } h=\eta^2/2,
\end{equation}
with the following symmetries
\begin{equation} \label{Os1}
S(-t) \equiv -S(t) \quad \mbox{ and } \quad S(t+T/2)\equiv - S(t).
\end{equation}
Moreover, $S(t)>0$ is strictly increasing on $(0,T/4)$.
{\bf Even periodic solutions:} For
\[
\xi\in \left(0,+\infty\right),
\] let $x=C(t)=C(t,\xi)$ be the solution of \x{x} satisfying the initial value conditions
\begin{equation} \label{ini2}
\left(x(0), \dot x(0)\right)=(\xi,0).
\end{equation}
Then $C(t)$ is a periodic solution of \x{x} of the minimal period
\[
T=T(h),\qq \mbox{where } h=E(\xi),
\] with the following symmetries
\begin{equation} \label{Os2}
C(-t)\equiv C(t) \quad \mbox{ and } \quad C(t+T/2)\equiv - C(t).
\end{equation}
Moreover, $C(t)>0$ is strictly decreasing on $(0,T/4)$.
From \x{Os1} and \x{Os2}, one sees that
\begin{equation} \label{Os12}
S(T/2-t) \equiv S(t) \quad \mbox{ and } \quad C(T/2-t) \equiv -C(t).
\end{equation}
The solutions $S(t)$ and $C(t)$ are also called $T/2$-anti-periodic. Like sine and cosine, these solutions are related in the following way.
\begin{Lemma} \label{SC} Suppose that $\eta$ and $\xi$ satisfy
\begin{equation} \label{sc1}
\eta^2/2=E(\xi)=:h.
\end{equation}
By setting $T=T(h)$, the odd and the even periodic solutions $S(t)=S(t,\eta)$ and $C(t)=C(t,\xi)$ are related via
\begin{equation} \label{sc2}
S(t+T/4) \equiv C(t) \quad \mbox{ and } \quad C(t+T/4) \equiv -S(t).
\end{equation}
\end{Lemma}
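As a quick sanity check of Lemma \ref{SC} (recorded only for orientation), consider the linear case $f(x)=x$, where everything is explicit:
\[
E(x)=\frac{x^{2}}{2},\qq S(t,\eta)=\eta \sin t,\qq C(t,\xi)=\xi \cos t,\qq T(h)\equiv 2\pi.
\]
Condition \x{sc1} forces $\eta=\xi$, and with $T/4=\pi/2$ one has $S(t+\pi/2)=\eta\cos t=C(t)$ and $C(t+\pi/2)=-\xi\sin t=-S(t)$, in agreement with \x{sc2}.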
\subsection{Traces of Hill's equations} We need some general results for Hill's equations \cite{MW66}. Let $q: {\mathbb R}\to {\mathbb R}$ be a $T$-periodic, locally Lebesgue integrable function and consider the Hill's equation
\begin{equation} \label{he}
\ddot y + q(t) y=0,\qq t\in {\mathbb R}.
\end{equation}
As usual, we use $y= \psi_i(t) =\psi_i(t,q)$, $i=1,2$, to denote the fundamental solutions of Eq. \x{he}, i.e. the solutions of \x{he} satisfying the initial conditions $(\psi_1(0),\dot\psi_1(0)) = (1,0)$ and $(\psi_2(0),\dot\psi_2(0)) = (0,1)$, respectively. The $T$-periodic Poincar\'e matrix of Eq. \x{he} is
\[
P=P_T=\matt{a}{b}{c}{d}:= \matt{\psi_1(T)}{\psi_2(T)}{\dot\psi_1(T)}{\dot\psi_2(T)}.
\] The Liouville law for Eq. \x{he} asserts that
\begin{equation} \label{Ll}
\det P_T = a d - bc =+1.
\end{equation}
The trace of the $T$-Poincar\'e matrix $P_T$ is
\[
\tau=\tau_T:= {\rm tr}(P_T)= a+d= \psi_1(T)+\dot\psi_2(T).
\]
Because of \x{Ll}, we know that (i) in case $|\tau|<2$, Eq. \x{he} is elliptic and is stable; (ii) in case $|\tau|>2$, Eq. \x{he} is hyperbolic and is unstable; and (iii) the case $|\tau|=2$ corresponds to the parabolicity of Eq. \x{he}, which can be either stable or unstable.
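For the constant potential $q\equiv\omega^{2}$ with $\omega>0$ (the standard textbook case, recorded here only for orientation), one has $\psi_1(t)=\cos\omega t$ and $\psi_2(t)=\sin\omega t/\omega$, so that
\[
P_T=\matt{\cos\omega T}{\sin\omega T/\omega}{-\omega\sin\omega T}{\cos\omega T},\qq \tau_T=2\cos\omega T.
\]
In particular $|\tau_T|\le 2$ always: Eq. \x{he} is elliptic when $\omega T\notin\pi{\mathbb Z}$ and parabolic when $\omega T\in\pi{\mathbb Z}$, but never hyperbolic.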
Considered as functionals of the potential $q$, all of the above objects are Fr\'echet differentiable in $q\in L^1({\mathbb R}/T{\mathbb Z})$, the Lebesgue space endowed with the $L^1$ norm $\|\d\|_{L^1}$.
\begin{Lemma} \label{tau'0} {\rm (\cite[Lemma 2.2]{ZCC18})} The Fr\'echet derivative of the trace $\tau: L^1({\mathbb R}/T{\mathbb Z})\to {\mathbb R}$ at $q$ is
\begin{equation} \label{dtau}
\frac{\pa \tau}{\pa q}(h)=\int_0^T K(s) h(s) \,{\rm d}s\qquad \forall h\in L^1({\mathbb R}/T{\mathbb Z}).
\ee Here, by using the fundamental solutions $\psi_i(s)=\psi_i(s,q)$,
\begin{equation} \label{ker0}
K(s):=-\psi_2(T) \psi^2_1(s) +\left(\psi_1(T)-\dot \psi_2(T)\y) \psi_1(s)\psi_2(s) +\dot \psi_1(T) \psi^2_2(s).
\ee
\end{Lemma}
\subsection{Linearization of autonomous equations} We consider a nonconstant $T$-periodic solution $x=\phi(t)$ of the autonomous equation \x{x}. Here $T$ is not necessarily the minimal period of $\phi(t)$. Then the linearization equation of \x{x} along the solution $\phi(t)$ is Hill's equation \x{he}, where
\begin{equation} \label{qt}
q(t):= f'(\phi(t))
\ee is a $T$-periodic potential.
In the sequel, we consider
\begin{equation} \label{pss}
\phi(t):=S(t,\eta)\quad \mbox{ and } \quad q(t):= f'(S(t,\eta)).
\ee Here $S(t,\eta)$ is an odd periodic solution of \x{x} of the minimal period $T$ as in \x{Tv}. Then one has the following important observations.
\begin{Lemma} \label{psi12} Using the solutions $S(t,\eta)$ of initial value problems, the fundamental solutions $\psi_i(t)=\psi_i(t,q)$ of Eq. \x{he} are given by
\begin{eqnarray}\label{psi10}
\psi_1(t)\hh & = & \hh
\frac{1}{\eta}\pas{\frac{\pa S}{\pa t}}{(t,\eta)} \quad \mbox{ and } \quad
\dot\psi_1(t)
=-\frac{f\left(S(t,\eta)\y)}{\eta},\\
\label{psi20}
\psi_2(t)\hh & = & \hh \pas{\frac{\pa S}{\pa \eta}}{(t,\eta)}\quad \mbox{ and } \quad \dot\psi_2(t)= \pas{\frac{\pa^2 S}{\pa t\pa \eta}}{(t,\eta)}.
\eea
\end{Lemma}
\noindent{\bf Proof} \quad Recall that $S(t,\eta)$ satisfies
\begin{eqnarray} \label{phit}
\EM\ddot S(t,\eta) +f\left(S(t,\eta)\y) = 0,\\
\label{ini-v}
\EM \left(S(0,\eta),\dot S(0,\eta)\y)=(0,\eta).
\eea Differentiating \x{phit} with respect to $t$, we know that $y(t):=\pas{\frac{\pa S}{\pa t}}{(t,\eta)}= \dot S(t,\eta)$ satisfies Eq. \x{he} and the initial values
$$
(y(0),\dot y(0))=\left(\dot S(0,\eta),\ddot S(0,\eta)\y)=\left(\dot S(0,\eta),-f\left(S(0,\eta)\y)\y)=\left(\eta,0\y) = \eta(1,0).
$$
$$ Hence we have
\[
\psi_1(t)\equiv{\dot S(t,\eta)}/\eta\quad \mbox{ and } \quad \dot\psi_1(t)\equiv{\ddot S(t,\eta)}/\eta=-f\left(S(t,\eta)\y)/\eta,
\] the equalities in \x{psi10}.
On the other hand, by differentiating \x{phit} and \x{ini-v} with respect to $\eta$, we know that the variational equation for $y(t):= \pas{\frac{\pa S}{\pa \eta}}{(t,\eta)}$ is just Eq. \x{he} and the initial values are $(y(0),\dot y(0))=(0,1)$. Thus $\psi_2(t) \equiv \pas{\frac{\pa S}{\pa \eta}}{(t,\eta)}$. As a consequence,
\[
\dot\psi_2(t) \equiv \frac{\pa}{\pa t}\left(\pas{\frac{\pa S}{\pa \eta}}{(t,\eta)}\y)
=\pas{\frac{\pa^2 S}{\pa t\pa \eta}}{(t,\eta)}.
\] Thus we have the equalities in \x{psi20}.
$\Box$
Since $f'(x)$ is even in $x$, it follows from \x{Os1} and \x{pss} that the minimal period of $q(t)$ is actually $T/2$. Because of this, we consider the Poincar\'e matrices of Eq. \x{he} with different periods
\[
\hat P:=P_{T/2}\quad \mbox{ and } \quad \hat{P}_n:=P_{n T/2},\quad n\in \N.
\] Using the fundamental solutions $\psi_i(t)$, these are
\[
\hat P=\matt{\psi_1(T/2)}{\psi_2(T/2)}{\dot\psi_1(T/2)}{\dot\psi_2(T/2)}\quad \mbox{ and } \quad \hat{P}_n=\matt{\psi_1(n T/2)}{\psi_2(n T/2)}{\dot\psi_1(n T/2)}{\dot\psi_2(n T/2)}.
\]
\begin{Lemma} \label{PM} By letting
\begin{equation} \label{B1}
\hat{b} :=\psi_2(T/2)\quad \mbox{ and } \quad \hat b_n:=\psi_2(n T/2),
\ee one has
\begin{equation} \label{Ptp}
\hat P =\matt{-1}{\hat{b}}{0}{-1}\quad \mbox{ and } \quad \hat P_n =\matt{(-1)^n}{\hat b_n}{0}{(-1)^n},
\ee and the constants $\hat{b}, \ \hat{b}_n$ are related via
\begin{equation} \label{B2}
\hat b_n=(-1)^{n+1}n\hat{b}.
\ee
\end{Lemma}
\noindent{\bf Proof} \quad From \x{Os1} and their derivatives, one has
\[
\left(S(T/2),\dot S(T/2)\y)=\left(-S(0), -\dot S(0)\y)=\left(0, -\eta\y).
\] By \x{psi10}, we have
$$
\left(\psi_1(T/2),\dot\psi_1(T/2)\y)=\left(\dot S(T/2),-f(S(T/2))\y)/\eta=\left(-1,0\y),
$$ i.e. the first column of $\hat P$ is $(-1,0)^\top$. Moreover, it follows from \x{Ll} that $\dot \psi_2(T/2)=-1$. This gives the first result of \x{Ptp}.
For general $n\in \N$, one has then
\[
\hat{P}_n = \hat P^n= \matt{-1}{\hat{b}} {0} {-1}^n = \matt{(-1)^n}{{(-1)^{n+1}n \hat{b}}} {0} {(-1)^n}.
\] Hence we have all equalities of the lemma.
$\Box$
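The matrix power identity used in the last step of the proof can be confirmed by direct computation; the short check below (illustrative only, with an arbitrary sample value of $\hat b$) verifies $\matt{-1}{\hat{b}}{0}{-1}^n = \matt{(-1)^n}{(-1)^{n+1}n \hat{b}}{0}{(-1)^n}$ for the first few $n$.

```python
# plain 2x2 matrix arithmetic; no external libraries needed
def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def matpow(A, n):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = matmul(R, A)
    return R

b = 0.37  # arbitrary sample value of b-hat
ok = True
for n in range(1, 8):
    Pn = matpow([[-1.0, b], [0.0, -1.0]], n)
    exp = [[(-1)**n, (-1)**(n + 1) * n * b], [0.0, (-1)**n]]
    ok = ok and all(abs(Pn[i][j] - exp[i][j]) < 1e-12
                    for i in range(2) for j in range(2))
```

The check also makes \x{B2} transparent: the off-diagonal entries accumulate linearly in $n$ while the signs alternate.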
Using the period function $T(h)$ of orbit $C_h$ of Eq. \x{x}, we have the following relation.
\begin{Lemma} \label{hbn} Suppose that $T(h)$ is differentiable in $h$. Then
\begin{equation} \label{B3}
\hat{b}_n= (-1)^{n+1}n \frac{\eta^2}{2} \left.\frac{\,{\rm d} T(h)}{\,{\rm d} h}\y|_{\eta^2/2}= (-1)^{n+1}n h T'(h),
\ee where $h=\eta^2/2$ and $'=\frac{\,{\rm d}}{\,{\rm d} h}$.
\end{Lemma}
\noindent{\bf Proof} \quad Since we are considering odd periodic solutions $S(t,\eta)$, we know from the second equality of \x{Os1} that
\[
S{(T(h)/2,\eta)} \equiv 0
\] for all $\eta$ as in \x{v}, where $h=\eta^2/2$ is as in \x{Tv}. Differentiating it with respect to $\eta$, we obtain
\[
\pas{\frac{\pa S}{\pa t}}{(T(h)/2,\eta)} T'(h) \frac{\eta}{2} + \pas{\frac{\pa S}{\pa \eta}}{(T(h)/2,\eta)}=0.
\] By \x{psi10} and \x{psi20}, we have
\[
\pas{\frac{\pa S}{\pa t}}{(T(h)/2,\eta)} =\eta \psi_1(T/2)=-\eta\quad \mbox{ and } \quad \pas{\frac{\pa S}{\pa \eta}}{(T(h)/2,\eta)}= \psi_2(T/2) = \hat b.
\] See the proof of Lemma \ref{PM}. Thus $\hat b=(\eta^2/2) T'(\eta^2/2)$. Combining with \x{B2}, we obtain result \x{B3} for general $n$.
$\Box$
\begin{Remark}\label{nd20} {\rm (i) From Lemmas \ref{PM} and \ref{hbn}, we have the following equivalence relations
\begin{equation} \label{nd11} \hat{b}\ne 0 \Longleftrightarrow \hat{b}_n\ne 0 \Longleftrightarrow T'(h)\ne 0.
\ee One can notice that the former two conditions mean that $\phi(t)=S(t,\eta)$ is parabolic-unstable, while the last means that $\phi(t)$ is Lyapunov unstable because the periodic orbits inside a neighborhood of $C_h$ will have different periods.
(ii) For even periodic solutions $x=C(t)=C(t,\xi)$ of Eq. \x{x}, results analogous to those in Lemmas \ref{psi12}--\ref{hbn} have been deduced in \cite{ZCC18} in a similar way. }
\end{Remark}
\section{A Stability Criterion for Odd Periodic Solutions} \setcounter{equation}{0} \label{criteria}
\subsection{Bifurcations of odd periodic solutions} For $\eta>0$, we use $x=X(t,\eta,e)$ to denote the solution of problem \x{xe}-\x{ini}. In particular, when $e=0$, one has
\begin{equation}\label{xs}
X(t,\eta,0)\equiv S(t,\eta),
\ee the solution of problem \x{x}-\x{ini}.
Let $m\in \N$ and $p\in \N$. Suppose that there exists $h_{m,p}$ such that $C_{h_{m,p}}$ of \x{ener} is a periodic orbit of Eq. \x{x} of the minimal period ${2m\pi}/p$, i.e.
\begin{equation} \label{Tm}
T(h_{{m,p}})= {2m\pi}/p.
\ee Due to the autonomy and the symmetries of Eq. \x{x}, $C_{h_{m,p}}$ can be presented using either odd or even periodic solutions of Eq. \x{x}. In fact, by defining
\begin{equation} \label{vpm}
\phi_{m,p}(t):= S(t,\eta_\pp), \qq \mbox{where }\eta_\pp:= \sqrt{2 h_{m,p}},
\ee $\phi_{m,p}(t)$ is then an odd periodic solution of Eq. \x{x} of the minimal period ${2m\pi}/p$. More symmetries of $\phi_{m,p}(t)$ can be found from \S 2.1. In particular, $\phi_{m,p}(t)$ is an $({m,p})$-periodic solution of \x{x} and satisfies
\begin{equation} \label{ps0}
\phi_{m,p}(t+m\pi/p)\equiv -\phi_{m,p}(t).
\ee This implies that $\phi_{m,p}(m \pi)=S(m\pi, \eta_\pp)=0$, i.e.
\begin{equation} \label{veq} X(m\pi,\eta_\pp,0) =0.
\ee See \x{xs}. As for the dependence of these solutions on $({m,p})$, one has $\phi_{mn,pn}(t) \equiv \phi_{m,p}(t)$ for any $n\in \N$.
A bifurcation result for odd $({m,p})$-periodic solutions of \x{xe} emanating from $\phi_{m,p}(t)$ is as follows.
\begin{Theorem} \label{M1} Let $m, \ p$ and $h_{m,p}, \ \eta_\pp$ be as above. Assume that
\begin{equation} \label{nd90} T'(h_{m,p})\ne 0.
\ee Then there exist $e_{m,p}>0$ and a smooth function $E_{m,p}(e)$ of $e\in [0,e_{m,p})$ such that
\begin{equation} \label{Vs}
E_{m,p}(0)=\eta_\pp\quad \mbox{ and } \quad X(m\pi,E_{m,p}(e),e)=0 \mbox{ for } e\in [0,e_{m,p}).
\ee Hence, for any $e\in [0,e_{m,p})$,
\begin{equation} \label{psie}
\phi_{m,p}(t,e):= X(t,E_{m,p}(e),e)
\ee is an odd $({m,p})$-periodic solution of the non-autonomous equation \x{xe}, with the following symmetry
\begin{equation} \label{p01}
\phi_{m,p}(t+m\pi,e) \equiv -\phi_{m,p}(t,e).
\ee
\end{Theorem}
\noindent{\bf Proof} \quad Let $\eta=\eta_\pp$ be in Lemmas \ref{psi12}--\ref{hbn}. Then $T={2m\pi}/p$ and $m\pi= p \d T/2$. Thus
\begin{eqnarray}\label{hbp}
\pas{\frac{\pa X}{\pa \eta}}{(m\pi,\eta_\pp,0)}\hh & = & \hh \pas{\frac{\pa S}{\pa \eta}}{(m\pi,\eta_\pp)}=\psi_2(m\pi)\qq \mbox{(by \x{psi20})}\nonumber\\
\hh & = & \hh \hat b_p \qq \mbox{(by \x{B1})} \nonumber\\
\hh & = & \hh (-1)^{p+1}p h_{m,p} T'(h_{m,p})\qq \mbox{(by \x{B3})}\nonumber\\
\hh & \ne & \hh 0\qq \mbox{(by \x{nd90})}.
\eea Combining with \x{veq}, the existence of the function $E_{m,p}(e)$ as in \x{Vs} follows immediately from the Implicit Function Theorem (IFT).
Since $F(x,t,e)$ is odd in $x$, the solution $\phi_{m,p}(t,e)$ of \x{psie} is obviously odd in $t$. Moreover, $\phi_{m,p}(t,e)$ satisfies \x{p01} and is $({m,p})$-periodic.
$\Box$
\begin{Remark} \label{M11} {\rm (i) As seen from \x{nd11} of Remark \ref{nd20}, the non-degeneracy condition \x{nd90} is equivalent to the instability of the $({m,p})$-periodic solution $\phi_{m,p}(t)$ of Eq. \x{x}, in the linearized sense and/or in the Lyapunov sense.
(ii) Note that $\phi_{m,p}(t,0) \equiv \phi_{m,p}(t)$ is ${2m\pi}/p$-periodic. See \x{vpm}. Generally speaking, if $e>0$, the minimal period of $\phi_{m,p}(t,e)$ is ${2m\pi}$, not ${2m\pi}/p$.}
\end{Remark}
\subsection{A stability criterion for odd periodic solutions} We consider the family $\phi_{m,p}(t,e)$ of odd $({m,p})$-periodic solutions of Eq. \x{xe} as in Theorem \ref{M1}.
For $e\in[0,e_{m,p})$, the linearization equation of Eq. \x{xe} along $x=\phi_{m,p}(t,e)$ is Hill's equation
\begin{equation} \label{He}
\ddot y + q(t,e)y=0, \qq q(t,e):=\pas{\frac{\pa F}{\pa x}}{(\phi_{m,p}(t,e),t,e)}.
\ee Here the period is understood as $T={2m\pi}$. The corresponding trace is
\begin{equation} \label{Tre} \tau_{m,p}(e):={\psi_1({2m\pi},e)}+{\dot\psi_2({2m\pi},e)}.
\ee Here $\psi_i(t,e)$ are fundamental solutions of Eq. \x{He}. When $e=0$, we have
\[
\phi_{m,p}(t,0)=\phi_{m,p}(t):= S(t,\eta_\pp)\quad \mbox{ and } \quad q(t,0)=q(t)= f'(S(t,\eta_\pp)).
\] See \x{pss}.
\begin{Theorem}\label{M2} Let $\phi_{m,p}(t)$ be the odd $({m,p})$-periodic solution of Eq. \x{x} verifying condition \x{nd90}. Denote
\begin{equation} \label{F23}
F_{23}(t):=\pas{\frac{\pa^2 F}{\pa t\pa e }}{(\phi_{m,p}(t),t,0)}.
\ee Then the derivative of the trace \x{Tre} at $e=0$ is
\begin{equation} \label{dtau0}
\tau'_{m,p}(0):= \pas{\frac{\,{\rm d}\tau_{m,p}(e)}{\,{\rm d} e}}{e=0} =- p T'(h_{m,p}) \int_0^\T F_{23}(t)\dot\phi_{m,p}(t)\,{\rm d}t.
\ee Here $h_{m,p}=\eta_\pp^2/2$ and $'=\frac{\,{\rm d}}{\,{\rm d} h}$.
\end{Theorem}
\noindent{\bf Proof} \quad In order to apply Lemma \ref{tau'0}, we need to consider the ${2m\pi}$-periodic Poincar\'e matrix $P$ of the linearization equation
\[
\ddot y+ q(t) y=0, \qq \mbox{where } q(t):=f'(\phi_{m,p}(t)).
\] Arguing as in the proof of \x{hbp}, by letting $T={2m\pi}/p$ in Lemmas \ref{psi12}--\ref{hbn} and noticing that ${2m\pi}=2p \d T/2$, we have
\[
\matt {\psi_1(2m\pi)} {\psi_2(2m\pi)} {\dot\psi_1(2m\pi)} {\dot\psi_2(2m\pi)} = \hat P_{2p}=\matt{1}{\hat b_{2p}}{0}{1},
\] where
\begin{equation} \label{bmp}
\hat b_{2p}= \psi_2(2m\pi)= -2 p h_{m,p} T'(h_{m,p})=: b_{m,p}.
\ee See \x{B3} with $n=2p$. Thus the kernel of \x{ker0} is
\[
K(t) = -b_{m,p} \psi_1^2(t)= -\frac{b_{m,p}}{\eta_\pp^2} \dot\phi^2_{m,p}(t)\equiv p T'(h_{m,p})\dot\phi^2_{m,p}(t).
\]
Denote
\begin{equation} \label{phi}
\Phi(t):=\pas{\frac{\pa \phi_{m,p}(t,e)}{\pa e}}{(t,0)} \quad \mbox{ and } \quad F_{13}(t)
:=\pas{\frac{\pa^2 F}{\pa e\pa x}}{(\phi_{m,p}(t),t,0)}.
\ee From \x{He}, we have
\begin{eqnarray*}
h(t)\AND{:=}\pas{\frac{\pa q}{\pa e}}{(t,0)} = \left.\frac{\pa }{\pa e}\left(\pas{\frac{\pa F}{\pa x}}{(\phi_{m,p}(t,e),t,e)}\y)\y|_{(t,0)} \\
\hh & = & \hh\pas{\frac{\pa^2 F}{\pa x^2}}{(\phi_{m,p}(t),t,0)}\Phi(t) +F_{13}(t)\\
\AND{=:} f''(\phi(t)) \Phi(t)+F_{13}(t).
\eeaa Here, for simplicity, $\phi(t):=\phi_{m,p}(t)$. From \x{dtau}, we obtain
\begin{eqnarray} \label{tau0}
\tau'_{m,p}(0)\hh & = & \hh \int_0^\T K(t) h(t) \,{\rm d}t
= p T'(h_{m,p})\int_0^\T\left(\Phi f''(\phi) + F_{13}\y)\dot\phi^2\,{\rm d}t.
\eea
Since $\phi_{m,p}(t,e)$ is ${2m\pi}$-periodic for any $e$, we know from the defining equality \x{phi} that $\Phi(t)$ is necessarily ${2m\pi}$-periodic. Moreover, $\Phi(t)$ satisfies the variational equation
\begin{equation} \label{phieq}
\ddot{\Phi} + q(t)\Phi + F_{3}(t) =0,
\ee where
\begin{eqnarray}\label{df3}
F_{3}(t)\AND{:=}\pas{\frac{\pa F}{\pa e}} {(\phi(t),t,0)},\nonumber\\
\dot F_{3}(t)\hh & = & \hh \frac{\,{\rm d}}{\,{\rm d}t}\left(\pas{\frac{\pa F}{\pa e}} {(\phi(t),t,0)}\y)=
\pas{\frac{\pa^2 F}{\pa x \pa e}}{(\phi(t),t,0)}\dot \phi(t)
+ \pas{\frac{\pa^2 F}{\pa t \pa e}}{(\phi(t),t,0)}\nonumber\\
\hh & = & \hh F_{13} (t) \dot \phi(t)+F_{23}(t).
\eea
Recall that we have Eq. \x{phit} for $\phi(t)$ and Eq. \x{phieq} for $\Phi(t)$. From these we can obtain the following equality
\begin{equation} \label{eqss}
\frac{\,{\rm d}}{\,{\rm d}t}\left(\dot \Phi \ddot \phi- \ddot \Phi \dot \phi\y) =
\left(\Phi f''(\phi) + F_{13}\y)\dot\phi^2 +F_{23} \dot \phi.
\ee In fact, by using Eq. \x{phit} and Eq. \x{phieq}, one has
\[
\dot \Phi \ddot \phi- \ddot \Phi \dot \phi= -\dot \Phi f(\phi) + \Phi q \dot \phi + F_3 \dot \phi.
\] Thus the left-hand side of \x{eqss} is
\begin{eqnarray*}
\EM -\frac{\,{\rm d}}{\,{\rm d}t}\left(\dot \Phi f(\phi)\y) + \frac{\,{\rm d}}{\,{\rm d}t}\left(\Phi q \dot \phi\y) + \frac{\,{\rm d}}{\,{\rm d}t}\left(F_3 \dot \phi\y)\\
\hh & = & \hh -\ddot \Phi f(\phi) - \dot \Phi f'(\phi)\dot\phi + \dot\Phi q \dot \phi+\Phi \dot q \dot \phi +\Phi q \ddot \phi+F_3 \ddot \phi +\dot F_3 \dot \phi\\
\hh & = & \hh \left(\ddot \Phi+q \Phi +F_3\y) \ddot \phi +\left(-f'(\phi)+q \y) \dot \Phi \dot \phi +\Phi \dot q \dot \phi+\dot F_3 \dot \phi \qq \mbox{(by \x{phit})}\\
\hh & = & \hh \Phi \dot q \dot \phi+\dot F_3 \dot \phi\qq \mbox{(by \x{phieq} and \x{qt})}\\
\hh & = & \hh \Phi f''(\phi)\dot\phi^2 + F_{13}\dot\phi^2 +F_{23} \dot \phi\qq \mbox{(by \x{qt} and \x{df3})}.
\eeaa
Finally, as $\Phi(t)$ and $\phi(t)$ are ${2m\pi}$-periodic, by integrating \x{eqss} over $[0,{2m\pi}]$, we obtain
\[
\int_0^\T \left(\Phi f''(\phi) + F_{13}\y)\dot\phi^2\,{\rm d}t +\int_0^\T F_{23} \dot \phi\,{\rm d}t =0.
\] Combining with \x{tau0}, we obtain the desired formula \x{dtau0}.
$\Box$
Since $\tau_{m,p}(0)=2$, the role of formula \x{dtau0} is as follows.
\begin{Corollary} \label{M21} {\rm (i)} If $\tau'_{m,p}(0)<0$, then $\phi_{m,p}(t,e)$ is elliptic and is linearized stable for $0<e\ll 1$.
{\rm (ii)} If $\tau'_{m,p}(0)>0$, then $\phi_{m,p}(t,e)$ is hyperbolic and is Lyapunov unstable for $0<e\ll 1$.
\end{Corollary}
\subsection{A stability criterion for even periodic solutions, revisited} The bifurcations and linearized stability of even $({m,p})$-periodic solutions of Eq. \x{xe} were studied in \cite{ZCC18}. In the present notations, we restate the results of \cite{ZCC18} as follows. For $\xi>0$, we use $x=\ul{X}(t,\xi,e)$ to denote the solution of problem \x{xe}-\x{ini2}. Let $m,\ p\in \N$ and let the energy $h_{m,p}$ be as in \x{Tm}. By taking $\xi_{m,p}>0$ such that
\[
E(\xi_{m,p})=h_{m,p},
\] we know that
\[
\vp_{m,p}(t):= C(t,\xi_{m,p})= \ul{X}(t,\xi_{m,p},0)
\] is an even $({m,p})$-periodic solution of \x{x} of the minimal period $T(h_{m,p})={2m\pi}/p$. From Lemmas 2.5 and 2.6 of \cite{ZCC18}, under the same non-degeneracy condition \x{nd90}, i.e. $T'(h_{m,p})\ne0$, one has from the IFT a smooth function $\Xi_{m,p}(e)$ of $e\in [0,\underline{e}_{m,p})$ such that $\Xi_{m,p}(0)=\xi_{m,p}$ and
\[
\dot \ul{X}(m\pi,\Xi_{m,p}(e),e)\equiv 0.
\] Thus
\[
\vp_{m,p}(t,e) := \ul{X}(t,\Xi_{m,p}(e),e)
\] defines a family of even $({m,p})$-periodic solutions of Eq. \x{xe} which emanate from $\vp_{m,p}(t)$. Moreover, $\vp_{m,p}(t,e)$ is also $m\pi$-anti-periodic as in \x{p01}.
Let $\underline{\tau}_{m,p}(e)$ be the trace of the ${2m\pi}$-periodic Poincar\'e matrix of the linearization equation of \x{xe} along the solution $\vp_{m,p}(t,e)$. One has $\underline{\tau}_{m,p}(0)=2$ and the following formula.
\begin{Theorem} \label{M3} {\rm (\cite[Theorem 3.1]{ZCC18})} With the notations above,
\begin{equation} \label{tau-e1}
\underline{\tau}'_{m,p}(0)= \left.\frac{\,{\rm d}\underline{\tau}_{m,p}(e)}{\,{\rm d} e}\y|_{e=0} = - p T'(h_{m,p}) \int_0^\T\underline{F}_{23}(t) \dot\vp_{m,p}(t)\,{\rm d}t,
\ee where
\begin{equation}
\label{F23e}
\underline{F}_{23}(t):=\pas{\frac{\pa^2 F}{\pa t\pa e }}{(\vp_{m,p}(t),t,0)}.
\ee
\end{Theorem}
\begin{Remark} \label{M31} {\rm For the case $m=1$, result \x{tau-e1} is proved in \cite{ZCC18}. See Formula (3.2) there. However, the coefficient there is expressed using $\dot {\underline{\psi}}_1(2\pi)$ and $f(\xi_{1,p})$, where $\underline{\psi}_1(t)$ is the first fundamental solution of the corresponding linearization equation. For general $m$, formula \x{tau-e1} can be deduced by a scaling of time. Moreover, arguing as in the deduction of \x{bmp}, the coefficient can be written in the present way. One can notice that the forms of formulas \x{dtau0} and \x{tau-e1} are the same.}
\end{Remark}
\section{Stability Results for the Elliptic Sitnikov Problem} \setcounter{equation}{0} \label{Sitnikov}
\subsection{Equations for the motions of the Sitnikov problems} After choosing the masses and the gravitational constant in an appropriate way, the governing equation for the motion of the infinitesimal mass in the elliptic Sitnikov problem $(S_e)$ is \cite{BLO94, LO08}
\begin{equation} \label{se} \ddot x +F(x,t,e) =0,\qq F(x,t,e):= \frac{x}{\left(x^2 + r^2(t,e) \y)^{3/2}}.
\ee Here $e\in [0, 1)$ is the eccentricity, and
\begin{equation}\label{r}
r(t,e) = r_0(1-e \cos u(t,e)), \qq r_0:=1/2,
\ee where, after some translation of time, $u=u(t,e)$ is the solution of the Kepler's equation
\begin{equation}} \def\ee{\end{equation} \label{Ke} u - e\sin u =t.
\ee Note that the Kepler solution $u(t,e)$ is smooth in $(t,e)$ and satisfies
\[
u(-t,e) \equiv - u(t,e)\quad \mbox{ and } \quad u(t+2\pi,e) \equiv u(t,e) +2\pi.
\] Consequently, $F(x,t,e)$ fulfills all requirements in \x{Sy1}. Moreover, when $e\in(0,1)$, the minimal period of $F(x,t,e)$ in $t$ is $2\pi$.
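Kepler's equation \x{Ke} has no closed-form solution, but $u(t,e)$ is easily computed by Newton iteration. The sketch below (a standard numerical scheme, not the paper's construction) also evaluates the radius $r(t,e)$ of \x{r}; the oddness and quasi-periodicity of $u(t,e)$ noted above can be verified on sample values.

```python
import math

def kepler_u(t, e, tol=1e-12):
    # Newton iteration for u - e*sin(u) = t; the starting guesses below are
    # the usual practical choices for moderate and high eccentricity
    u = t if e < 0.8 else math.pi
    for _ in range(60):
        du = (u - e*math.sin(u) - t) / (1.0 - e*math.cos(u))
        u -= du
        if abs(du) < tol:
            break
    return u

def radius(t, e, r0=0.5):
    # r(t, e) = r0 * (1 - e cos u(t, e)), with r0 = 1/2 as in the text
    return r0 * (1.0 - e*math.cos(kepler_u(t, e)))
```

On sample values one can check $u(-t,e)=-u(t,e)$ and $u(t+2\pi,e)=u(t,e)+2\pi$ numerically, consistent with the symmetries stated above.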
In particular, the circular Sitnikov problem $(S_0)$ is described by the autonomous equation
\begin{equation} \label{s0}
\ddot x + f(x)=0,\qq f(x):= \frac{x}{\left(x^2 + r_0^2\y)^{3/2}}.
\ee For Eq. \x{s0}, the energy $E(x)$ in \x{Ex} is
\[
E(x)=\int_0^x f(u) \,{\rm d}u = 2- \frac{1}{ \sqrt{x^2+r_0^2}}.
\] Solutions $x(t)$ of Eq. \x{s0} are on energy levels
\begin{equation} \label{H}
H(x,\dot x):=\frac{1}{2} \dot x^2 - \frac{1}{ \sqrt{x^2+r_0^2}}= h.
\ee Here the energy $h$ differs from that in \x{ener} by the constant $2$ and takes values in
\(
h\in[-2,+\infty).
\) For $h=-2$, \x{H} corresponds to the origin, which is the equilibrium of \x{s0}. For $h\in(-2,0)$, \x{H} corresponds to periodic orbits of \x{s0} whose minimal period is denoted by $T(h)$. It is not difficult to verify that
\[
\lim_{h\to -2+} T(h) = 2\pi/\sqrt{8} \quad \mbox{ and } \quad \lim_{h\to0-} T(h) =+\infty.
\] Moreover, it is proved in \cite[Theorem C]{BLO94} that
\begin{equation} \label{Th1}
T'(h)=\frac{\,{\rm d} T(h)}{\,{\rm d} h} >0 \qquad \forall h\in (-2,0).
\ee Hence the origin is surrounded by a family of periodic orbits, whose minimal periods take values in $(2\pi/\sqrt{8},+\infty)$. For more facts on the dynamics of Eq. \x{s0}, see \cite{BLO94, LO08}.
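The monotonicity \x{Th1} can be illustrated numerically (this is no substitute for the proof in \cite{BLO94}): shooting from $(x,\dot x)=(0,\eta)$ and using the fact that, by oddness, the first return of $x$ to $0$ occurs at $t=T/2$, one can estimate $T(h)$ for several energies and observe that it increases, approaching $2\pi/\sqrt{8}$ as $\eta\to 0+$. The sketch below is our own illustration.

```python
import math

R0 = 0.5  # r0 = 1/2 in the circular Sitnikov equation

def f(x):
    return x / (x*x + R0*R0)**1.5

def period(eta, dt=1e-3, tmax=50.0):
    # minimal period T(h), h = eta^2/2 - 2, via the first return of x to 0;
    # fixed-step RK4 for x' = v, v' = -f(x), then linear zero refinement
    x, v, t = 0.0, eta, 0.0
    while t < tmax:
        k1x, k1v = v, -f(x)
        k2x, k2v = v + dt/2*k1v, -f(x + dt/2*k1x)
        k3x, k3v = v + dt/2*k2v, -f(x + dt/2*k2x)
        k4x, k4v = v + dt*k3v, -f(x + dt*k3x)
        xn = x + dt/6*(k1x + 2*k2x + 2*k3x + k4x)
        vn = v + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        if x > 0 and xn <= 0:                 # the odd solution returns at T/2
            return 2 * (t + dt * x / (x - xn))
        x, v, t = xn, vn, t + dt
    raise RuntimeError("no return to x = 0 within tmax")

# increasing initial velocities = increasing energies h = eta^2/2 - 2
periods = [period(eta) for eta in (0.1, 0.5, 1.0)]
```

The computed periods increase with the energy and the smallest one stays close to the limit value $2\pi/\sqrt{8}\approx 2.2214$.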
To bifurcate the families $\phi_{m,p}(t,e)$ and $\vp_{m,p}(t,e)$ of $({m,p})$-periodic solutions of Eq. \x{se} which are respectively odd and even in $t$, the integers $m, \ p$ are required to satisfy ${2m\pi}/p\in (2\pi/\sqrt{8},+\infty)$, i.e.
\begin{equation} \label{mp1}
1\le p \le \nu_m := [\sqrt{8} m],\qq m\in \N,
\ee because the non-degeneracy conditions \x{nd90} are ensured by \x{Th1}. Condition \x{mp1} is also used in \cite[\S 3]{LO08}. As before, we write $\phi_{m,p}(t,0)$ and $\vp_{m,p}(t,0)$ as $\phi_{m,p}(t)$ and $\vp_{m,p}(t)$ respectively. For these $({m,p})$-periodic solutions, it is convenient to call
\[
\varrho:={p}/{m}
\] the rotation number. Condition \x{mp1} for $({m,p})$ is now equivalent to
\begin{equation} \label{mp}
\varrho\in (0,\sqrt{8}) \cap {\mathbb Q}.
\ee
\subsection{Analytical results for stability of odd periodic orbits}\label{odd} From the defining equalities \x{se}--\x{s0}, a direct computation can yield
\begin{equation} \label{f23}
\pas{\frac{\pa^2 F}{\pa t\pa e}}{(x,t,0)} =\frac{-3 x}{4\left(x^2+r^2_0\y)^{5/2}}\sin t.
\ee See also \cite[Formula (4.21)]{ZCC18}.
We first study the families $\phi_{m,p}(t,e)$ of odd $({m,p})$-periodic solutions of Eq. \x{se} for $m, \ p$ as in \x{mp1}. By \x{F23}, \x{dtau0} and \x{f23}, we have
\[
F_{23}(t)=\pas{\frac{\pa^2 F}{\pa t\pa e}}{(\phi_{m,p}(t),t,0)} = \frac{- 3 \phi_{m,p}(t)}{4\left(\phi_{m,p}^2(t)+r^2_0\y)^{5/2}}\sin t,
\] and
\begin{eqnarray*}
\tau'_{m,p}(0)\hh & = & \hh - p T'(h_{m,p})\int_0^\T F_{23}(t)\dot \phi_{m,p}(t)\,{\rm d}t\\
\hh & = & \hh - \frac{1}{4} p T'(h_{m,p})\int_0^\T \frac{-3 \phi_{m,p}(t)\dot \phi_{m,p}(t)}{\left(\phi^2_{m,p}(t)+r^2_0\y)^{5/2}}\sin t\,{\rm d}t.
\eeaa Define
\begin{equation} \label{G}
G_{m,p}(t):={1}/{\left(\phi^2_{m,p}(t)+r^2_0\y)^{3/2}}.
\ee One has
\[
\dot G_{m,p}(t)= -{3 \phi_{m,p}(t)\dot \phi_{m,p}(t)}/{\left(\phi^2_{m,p}(t)+r^2_0\y)^{5/2}}.
\] Integrating by parts, we know that $\tau'_{m,p}(0)$ can be written as
\begin{equation} \label{t11}
\tau'_{m,p}(0) =\frac{1}{4} p T'(h_{m,p})\int_0^\T G_{m,p}(t)\cos t\,{\rm d}t.
\ee Such an observation was also used in \cite{ZCC18} for the study of even periodic solutions.
\begin{Theorem} \label{M4} One has $\tau'_{m,p}(0)=0$ if $({m,p})$ satisfies \x{mp1} and
\begin{equation} \label{pm0}
\varrho=\frac{p}{m}\ne \frac{1}{2}, \ \frac{1}{4}, \ \frac{1}{6},\ \cdots
\ee In particular, $\tau'_{m,p}(0)=0$ if $m$ is odd and $1\le p\le \nu_m$, or $m$ is even and $m/2+1 \le p \le \nu_m$.
\end{Theorem}
\noindent{\bf Proof} \quad Let us notice from \x{ps0} and \x{G} that the minimal period of $G_{m,p}(t)$ is $m\pi/p$. Moreover, $G_{m,p}(t)$ is even in $t$. Hence one has the $m\pi/p$-periodic Fourier expansion
\[ G_{m,p}(t) \equiv \sum_{n=0}^\infty a_n \cos\left(n \frac{2p t}{m}\y)= \sum_{n=0}^\infty a_n \cos\left(2n p \frac{t}{m}\y).
\] Let us write $\cos t$ as $\cos \left( m \frac{t}{m}\y)$. By using the orthogonality of $\{\cos \left(n \frac{t}{m}\y): n \in {\mathbb Z}^+\}$ in the space $L^2[0,{2m\pi}]$, we know from \x{t11} that $\tau'_{m,p}(0)=0$ if $({m,p})$ satisfies $m\ne 2 n p$ for all $n\in \N$, i.e. if $\varrho$ satisfies \x{pm0}.
$\Box$
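The orthogonality step in the proof can be double-checked numerically. The sketch below (illustrative only; the function name is ours) evaluates $\int_0^{2m\pi}\cos(2npt/m)\cos t\,{\rm d}t$ by the trapezoidal rule, which for such periodic trigonometric integrands is exact up to rounding, and confirms that the integral vanishes unless $m=2np$, in which case it equals $m\pi$.

```python
import math

def inner(m, n, p, N=8192):
    # trapezoidal rule on [0, 2*m*pi]; for periodic integrands the endpoint
    # terms coincide, so a plain Riemann sum over one full period suffices
    L = 2 * m * math.pi
    s = 0.0
    for k in range(N):
        t = L * k / N
        s += math.cos(2*n*p*t/m) * math.cos(t)
    return s * L / N
```

For example, with $m=5$, $p=2$ no $n$ satisfies $m=2np$, so every Fourier mode of $G_{5,2}$ is orthogonal to $\cos t$, while $m=4$, $p=1$, $n=2$ gives the resonant value $4\pi$.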
\iffalse As a corollary, one has the following results on the odd ${2m\pi}$-periodic solutions.
\begin{Corollary}\label{M41} One has $\tau'_{m,p}(0)=0$ if
$\bullet$\ $m$ is odd and $1\le p\le \nu_m$, or
$\bullet$\ $m$ is even and $m/2+1 \le p \le \nu_m$.
\end{Corollary} \fi
\begin{Remark} \label{M42} {\rm From Theorem \ref{M4}, the signs of $\tau'_{m,p}(0)$ depend on the frequencies $({m,p})$ in a delicate way. For example, we have no information on the stability of odd $({m,p})$-periodic orbits $\phi_{m,p}(t,e)$ for any odd number $m$. This phenomenon was also observed for the families $\vp_{m,p}(t,e)$ of even periodic solutions of Eq. \x{xe} and Eq. \x{se}. See \cite{ZCC18} and \cite{GNRR18}.}
\end{Remark}
In contrast to case \x{pm0}, we have $m/(2p)=n \in \N$, i.e. $m=2p n$, or equivalently,
\begin{equation} \label{pm09}
\varrho=\frac{1}{2n},\qq n\in \N.
\ee In this case,
\begin{equation} \label{phint}
\phi_{2pn,p}(t)\equiv\phi_{2n,1}(t)=:\phi_n(t),
\ee which are the odd periodic solutions used by Ortega \cite{O16}. Note that $\phi_n(t)$ has the minimal period $T={2m\pi}/p=4n\pi$. More symmetries on $\phi_n(t)$ include
\begin{equation} \label{sys}
\left\{ \begin{array}{l} \phi_n(-t) \equiv -\phi_n(t), \\
\phi_n(t+2n\pi)\equiv -\phi_n(t),\\
\phi_n(2n\pi-t) \equiv \phi_n(t), \\
\phi_n(t)> 0 \quad \mbox{for } t\in(0,2n\pi),\\
\phi_n(t) \mbox{ is strictly increasing on $[0,n\pi]$.}
\end{array}\y.
\ee Here the third equality of \x{sys} is deduced from \x{Os12}. Passing to the function
\begin{equation} \label{Gnt}
G_n(t):=1/\left(\phi_n^2(t)+r^2_0\y)^{3/2},
\ee one has
\begin{equation} \label{sysn}
\left\{ \begin{array}{l} \mbox{$G_n(t)>0$ is even and has the minimal period $2n\pi$,} \\
G_n(2n\pi-t) \equiv G_n(t), \\
\mbox{$G_n(t)$ is strictly decreasing on $[0,n\pi]$.}
\end{array}\y.
\ee For the solution $\phi_n(t)$ as in \x{phint}, we can use the symmetries in \x{sysn} to obtain
\begin{eqnarray*}
\int_0^{{2m\pi}} G_n(t) \cos t\,{\rm d}t\hh & = & \hh\int_0^{2p\d 2n\pi} G_n(t) \cos t\,{\rm d}t\\
\hh & = & \hh 2p \int_0^{2n\pi}G_n(t) \cos t\,{\rm d}t\nonumber\\
\hh & = & \hh 2 p \left(\int_0^{n\pi} G_n(t) \cos t\,{\rm d}t +\int_{n\pi}^{2n\pi} G_n(t) \cos t\,{\rm d}t\y)\\
\hh & = & \hh 4 p \int_0^{n\pi} G_n(t) \cos t\,{\rm d}t,
\eeaa because both $G_n(t)$ and $\cos t$ are symmetric with respect to $t=n\pi$. Combining with \x{Th1} and \x{t11}, we have the following results.
\begin{Lemma}\label{same} For any $p, \ n\in \N$, we have
\begin{equation} \label{t12}
\tau'_{2pn, p}(0) = p^2 T'(h_{2n,1})A_n,
\ee where
\begin{equation}} \def\ee{\end{equation}\label{An}
A_n:=\int_0^{n\pi} G_n(t) \cos t\,{\rm d}t= \frac{\footnotesize 1}{\footnotesize 2} \int_0^{2n\pi} G_n(t) \cos t\,{\rm d}t.
\ee In particular, $\tau'_{2pn, p}(0)$ and $A_n$ have the same sign for any $p\in \N$.
\end{Lemma}
\iffalse
\begin{Theorem} \label{M5} For any $p\in \N$ and $m=2p$, i.e. $n=1$ in \x{pm09}, one has
\[
\tau'_{2p, p}(0)>0.
\] Consequently, for $e>0$ small, $\phi_{2p,p}(t,e)$ is hyperbolic and therefore is Lyapunov unstable.
\end{Theorem} \fi
Now we can complete the proof of Theorem \ref{M5-7} (i) for odd $(2p,p)$-periodic solutions $\phi_{2p,p}(t,e)$. The frequencies $({m,p})=(2p,p)$ correspond to the rotation number $\varrho=\frac{1}{2}$. See \x{pm09}. Due to Lemma \ref{same}, we need only prove that $A_1>0$. Taking $n=1$ in \x{An}, one has
\begin{eqnarray} \label{A1}
A_1\hh & = & \hh \int_0^{\pi/2} G_1(t) \cos t\,{\rm d}t +\int_{\pi/2}^{\pi} G_1(t) \cos t\,{\rm d}t \nonumber\\
\hh & = & \hh \int_0^{\pi/2} G_1(t) \cos t\,{\rm d}t +\int_{\pi/2}^0 G_1(\pi-t) \cos (\pi-t)\,{\rm d}(\pi-t)\nonumber\\
\hh & = & \hh \int_0^{\pi/2} \left(G_1(t)-G_1(\pi-t) \y) \cos t \,{\rm d}t.
\eea From the last property of \x{sysn}, $G_1(t)$ is strictly decreasing on $[0,\pi]$. Hence \x{A1} implies that $A_1 >0$.
$\Box$
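The final sign argument only uses that $G_1(t)>0$ is strictly decreasing on $[0,\pi]$, so that both factors in the last integrand of \x{A1} are positive on $(0,\pi/2)$. The sketch below checks this mechanism with a stand-in decreasing function (an assumption for the demonstration; it is not the actual $G_1$ of the paper).

```python
import math

def folded_integral(G, N=20000):
    # midpoint rule for the last integral in the proof:
    # int_0^{pi/2} (G(t) - G(pi - t)) cos(t) dt
    h = (math.pi / 2) / N
    s = 0.0
    for k in range(N):
        t = h * (k + 0.5)
        s += (G(t) - G(math.pi - t)) * math.cos(t)
    return s * h

# any positive, strictly decreasing sample G gives a positive value
val = folded_integral(lambda t: 1.0 / (1.0 + t))
```

The positivity is structural: on $(0,\pi/2)$ both $G(t)-G(\pi-t)$ and $\cos t$ are positive, which is exactly why $A_1>0$ above.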
\subsection{Analytical results for stability of even periodic orbits} \label{even}
Let $m, \ p$ be as in \x{mp1}. We are now studying the family $\vp_{m,p}(t,e)$ of even $({m,p})$-periodic solutions of Eq. \x{se}. By \x{tau-e1}, \x{F23e} and \x{f23}, we have
\begin{eqnarray} \label{t22}
\underline{F}_{23}(t)\hh & = & \hh \pas{\frac{\pa^2 F}{\pa t\pa e}}{(\vp_{m,p}(t),t,0)} = \frac{-3\vp_{m,p}(t)}{4\left(\vp_{m,p}^2(t)+r^2_0\y)^{5/2}}\sin t,\nonumber \\
\underline{\tau}'_{m,p}(0)\hh & = & \hh -p T'(h_{m,p})\int_0^\T \underline{F}_{23}(t)\dot \vp_{m,p}(t)\,{\rm d}t\nonumber\\
\hh & = & \hh -\frac{1}{4} p T'(h_{m,p})\int_0^\T \frac{-3 \vp_{m,p}(t)\dot \vp_{m,p}(t)}{\left(\vp^2_{m,p}(t)+r^2_0\y)^{5/2}}\sin t\,{\rm d}t\nonumber\\
\hh & = & \hh \frac{1}{4} p T'(h_{m,p})\int_0^\T \underline{G}_{m,p}(t)\cos t\,{\rm d}t,
\eea where
\begin{equation} \label{Ge}
\underline{G}_{m,p}(t):={1}/{\left(\vp^2_{m,p}(t)+r^2_0\right)^{3/2}}.
\end{equation} Note that $\underline{G}_{m,p}(t)$ is even in $t$ and has the minimal period $m\pi/p$. A proof similar to that of Theorem \ref{M4} yields the following result.
\begin{Theorem} \label{M6} One has $\underline{\tau}'_{m,p}(0)=0$ if $({m,p})$ satisfies \x{mp1} and \x{pm0}.
\end{Theorem}
For the cases as in \x{pm09}, we have the following relation.
\begin{Lemma} \label{rels} For any $p, \ n\in \N$, there holds
\begin{equation} \label{rels1}
\underline{\tau}'_{2pn,p}(0) = (-1)^n \tau'_{2pn,p}(0).
\end{equation}
\end{Lemma}
\noindent{\bf Proof} \quad We go back to formulas \x{t11} and \x{t22}, where $m=2pn$. Note that $\phi_{m,p}(t)$ and $\vp_{m,p}(t)$ have the same energy $h_{2pn,p}=h_{2n,1}$ and the same minimal period $T={2m\pi}/p=4n\pi$. Hence \x{sc1} is verified and the factors in \x{t11} and \x{t22} are the same. By \x{sc2}, one has
\[
\vp_{m,p}(t)\equiv \phi_{m,p}(t+n\pi).
\] By \x{Gnt} and \x{Ge}, we obtain the relation
\[
\underline{G}_{m,p}(t)\equiv G_{m,p}(t+n\pi).
\] Hence
\begin{eqnarray*}
\int_0^\T \underline{G}_{m,p}(t) \cos t \,{\rm d}t & = & \int_0^{4pn \pi}G_{m,p}(t+n\pi)\cos t\,{\rm d}t\\
& = & \int_{n\pi}^{n\pi+4pn \pi}G_{m,p}(t)\cos (t-n\pi)\,{\rm d}t\\
& = & (-1)^n \int_{n\pi}^{n\pi+4pn \pi}G_{m,p}(t)\cos t\,{\rm d}t\\
& = & (-1)^n \int_{0}^{4pn \pi}G_{m,p}(t)\cos t\,{\rm d}t,
\end{eqnarray*} because $G_{m,p}(t)$ and $\cos t$ are $2n\pi$-periodic. Thus we have relation \x{rels1}.
$\Box$
The stability result of Theorem \ref{M5-7} (ii) for even $(2p,p)$-periodic solutions $\vp_{2p,p}(t,e)$ follows immediately from Theorem \ref{M5-7} (i) and Lemma \ref{rels}. Hence the proof of Theorem \ref{M5-7} is complete.
\subsection{The numerical result and a conjecture} \label{s44} For conservative systems like Hamiltonian systems, the stability of periodic orbits is an important and difficult problem \cite{SM71}. For the $N$-body problems and the related systems, one can refer to \cite{C10, C08, HLS14} for some different approaches to the stability of periodic orbits.
Going back to the Sitnikov problem, we know from Lemmas \ref{same} and \ref{rels} that, for any $n\ge 2$ and any $p\in \N$, the linearized stability/instability of $\phi_{2pn,p}(t)$ and $\vp_{2pn,p}(t)$ is determined by the sign of $A_n$. By \x{Gnt} and \x{An}, $A_n$ involves only the odd $(2n,1)$-periodic solution $\phi_n(t):=\phi_{2n,1}(t)$ of Eq. \x{s0}, so it is easy to evaluate numerically. For $1\le n \le 10$, the numerical results are listed in Table \ref{tab1}.
\begin{table} \centering
\caption{Numerical results for $\eta_n:=\eta_{2n,1}$, $h_n:=h_{2n,1}$ and $A_n$.} \label{tab1}
\begin{tabular}{rrrrrrr}
\toprule[1pt]
$n$ & & $\eta_n$ & & $h_n$ & & $A_n$ \\
\midrule
1 & & $1.7192$ & & $-0.5221$ & & $2.3179$ \\
2 & & $1.8319$ & & $-0.3221$ & & $2.2194$ \\
3 & & $1.8735$ & & $-0.2449$ & & $2.1843$ \\
4 & & $1.8965$ & & $-0.2017$ & & $2.1615$ \\
5 & & $1.9112$ & & $-0.1736$ & & $2.1479$ \\
6 & & $1.9216$ & & $-0.1537$ & & $2.1380$ \\
7 & & $1.9294$ & & $-0.1387$ & & $2.1293$ \\
8 & & $1.9355$ & & $-0.1269$ & & $2.1227$ \\
9 & & $1.9404$ & & $-0.1174$ & & $2.1174$ \\
10 & & $1.9445$ & & $-0.1095$ & & $2.1131$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
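The entry for $n=1$ can be cross-checked independently. The following Python sketch (illustrative only, not part of the analysis) integrates the circular Sitnikov equation with SciPy and evaluates $A_1$ from \x{An}. It assumes the standard normalization $r_0=1/2$, which is consistent with the tabulated values via $h_1=\eta_1^2/2-1/r_0$.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

R0 = 0.5       # primaries at mutual distance 1, so r0 = 1/2 (assumed normalization)
ETA1 = 1.7192  # initial velocity eta_1 for n = 1, taken from Table 1

# Circular Sitnikov equation: z'' = -z / (z^2 + r0^2)^(3/2)
def rhs(t, y):
    z, v = y
    return [v, -z / (z**2 + R0**2) ** 1.5]

# Odd solution phi_1 with phi_1(0) = 0 and phi_1'(0) = eta_1 > 0
sol = solve_ivp(rhs, (0.0, np.pi), [0.0, ETA1],
                dense_output=True, rtol=1e-10, atol=1e-12)

# A_1 = int_0^pi cos(t) / (phi_1(t)^2 + r0^2)^(3/2) dt
A1, _ = quad(lambda t: np.cos(t) / (sol.sol(t)[0]**2 + R0**2) ** 1.5,
             0.0, np.pi, limit=200)
print(A1)  # close to the tabulated value 2.3179
```

Up to the four-digit accuracy of $\eta_1$, the result reproduces the entry for $n=1$ in Table \ref{tab1}.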
Note that the positiveness of $A_1$ in Table \ref{tab1} has already been proved analytically. Remarkably, the numerical values of $A_n$ for $n\ge 2$ are all positive as well. Hence we have the following interesting problem.
\noindent {\bf Conjecture} One has $A_n>0$ for all $n\ge 2$.
We end the paper with two remarks.
1. Once the conjecture is proved, we could conclude that (i) odd $(2np,p)$-periodic solutions $\phi_{2np,p}(t,e)$ are hyperbolic and Lyapunov unstable for $e>0$ small, (ii) even $(4np,p)$-periodic solutions $\vp_{4np,p}(t,e)$ are also hyperbolic and Lyapunov unstable for $e>0$ small, and (iii) even $((4n-2)p,p)$-periodic solutions $\vp_{(4n-2)p,p}(t,e)$ are elliptic and linearized stable for $e>0$ small.
2. For the case $n=2$, arguing as in \x{A1}, we have from \x{An}
\[
A_2
=\int_0^{\pi/2} \left(G_2(t)-G_2(\pi-t)+G_2(2\pi-t) -G_2(\pi+t) \right) \cos t \,{\rm d}t.
\] The sign of $A_2$ is related to a certain kind of `convexity' of $G_2(t)$ on the interval $[0,2\pi]$. This is also true for the general case $n\ge 3$.
\fbox{\small Ver. 1, 2019-04-26}
\end{document}
\begin{document}
\title{How to find all connections in the Pantelides algorithm for delay differential-algebraic equations\thanks{Submitted February 3, 2022.}}
\begin{abstract}
The Pantelides algorithm for delay differential-algebraic equations (DDAEs) is a method to structurally analyse such systems with the goal of detecting which equations have to be differentiated or shifted to construct a solution. In this process, one has to detect implicit connections between equations in the shifting graph, making it necessary to check all possible connections. The problem of finding these efficiently has remained unsolved so far. In this paper, it is explored in further detail and a reformulation is introduced. Additionally, an algorithmic approach for its solution is presented. \end{abstract}
\begin{keywords}
delay differential-algebraic equation, Pantelides algorithm, structural analysis, enumeration algorithm, spanning tree \end{keywords}
\begin{AMS}
05C30, 34A09, 34K32, 65L80 \end{AMS}
\section{Introduction}
Delay differential-algebraic equations (DDAEs) are a class of differential equations for some function $x(t)$ on a time interval $[0,T)\subseteq\mathbb{R}$, $T>0$, that, in their simplest form, not only depend on the time derivative $\dot x(t)$ but also on a previous time state $\Delta_{-\tau} x(t):=x(t-\tau)$ with a delay $\tau>0$. Additionally, the system may possess algebraic constraints such that it can only be formulated in implicit form. Here, the DDAE is assumed to be a system of $n\in\mathbb{N}$ equations~${F=(F_1,...,F_n)}$ and variables~${x=(x_1,...,x_n)}$. Thus, consider a DDAE of the form \begin{equation}
\label{eq:ddae}
F(t,x(t),\dot x(t), \Delta_{-\tau} x(t))=0, \end{equation} where \begin{equation*}
x:[-\tau,T) \to \mathbb{R}^n \quad \text{and} \quad F:[0,T)\times \mathbb{D}_{x}\times \mathbb{D}_{\dot x}\times \mathbb{D}_{\Delta x} \to \mathbb{R}^n, \end{equation*} with $\mathbb{D}_{x},\mathbb{D}_{\dot x},\mathbb{D}_{\Delta x}\subseteq \mathbb{R}^n$ being open. Here, $\dot{x}$ denotes the derivative of $x$ with respect to $t$ from the right. To obtain an initial value problem, \cref{eq:ddae} has to be equipped with an initial condition \begin{equation}
\label{eq:init}
x(t)=\phi(t) \quad \text{for} \quad t\in[-\tau,0]. \end{equation}
Equations of this form arise in many applications, such as multibody control systems, electric circuits or fluid dynamics (see \cite{ph,unger20b}). They combine features of delay differential equations (DDEs) and differential-algebraic equations (DAEs), which makes them particularly difficult to solve. Solutions may depend on derivatives of $F$ and on evaluations of $F$ at future time points (see \cite{campbell95,phi12}). Therefore, the interplay of the differentiation operator~$\tfrac{\mathrm{d}}{\mathrm{d}t}$ and the shift operator $\Delta_{-\tau}$ has to be treated carefully (cf. \cite{phi14}) and solutions have to be constructed by differentiating and shifting equations (cf. \cite{campbell95,phi12,phi14,phi16,unger18,trenn19}). Even for linear DDAEs, general existence and uniqueness results can only be obtained using a distributional solution concept (see \cite{trenn19,unger20b}) or imposing further restrictions on the DDAE (see \cite{phi12,phi16,unger18}) or its initial function in \cref{eq:init} (see \cite{phi18,unger18}). For nonlinear DDAEs like \cref{eq:ddae}, solutions can be established for certain classes (cf. \cite{ascher95,unger20}).
In most cases, the construction of a solution for a DDAE involves the method of steps. This means that the equation is successively integrated over the time intervals~${\left[i\tau,(i+1)\tau\right)}$, $i\in\mathbb{N}$. By substituting the delayed variables with the already computed solution from the previous interval, the problem reduces to solving a DAE in each step (cf. \cite{campbell80,bellen03,phi16}). However, this method does not always succeed, and a reformulation of \cref{eq:ddae} is required such that the DAE that has to be solved in each interval is regular and has a small index. Here, the index is, roughly speaking, a measure of how often parts of the DAE have to be differentiated to reformulate the DAE as an ordinary differential equation. This reformulation can be done by a compress-and-shift algorithm (see \cite{campbell95,trenn19}) or by a combined shift and derivative array (see \cite{phi16}). In both cases, certain equations have to be differentiated and shifted.
Determining which equations one has to differentiate or shift is therefore a central aspect of the solution process of a DDAE. In \cite{ahrens20}, the Pantelides algorithm for delay differential-algebraic equations is presented as a tool to exploit the structure of (\ref{eq:ddae}), i.e., the information which variable appears in which equation, to determine the number of differentiations and shifts necessary to solve the DDAE.
It is based on the Pantelides algorithm for DAEs (see \cite{pantelides88}) and will be simply referred to as Pantelides algorithm from now on. The approach consists of defining different bipartite graphs, where each equation and certain equivalence classes of variables are represented by the nodes. Edges exist between equation nodes and variable nodes if and only if one of the variables of the equivalence class appears in that equation. In other words, the graphs represent the structure of the DDAE. Then, matchings between equation nodes and variable nodes of highest shift and differentiation order are constructed in these graphs. This is achieved by following a specific pattern of shifting and differentiating equations and the variables belonging to it. At the end of the process, each equation can be resolved for a variable of highest shift and differentiation order. For more details on the whole procedure and the algorithm, see the original paper \cite{ahrens20}.
In this work, the focus is put on a specific subproblem that appears in the Pantelides algorithm. During the first part of the algorithm, called the shifting step, one has to shift equations that are connected to each other in a certain way through edges in a specific graph, called the shifting graph. Generally, such a connection is not unique and all possible connections have to be found to allow for correct shifting. The identification of all of these connections, however, may be computationally very expensive, and no efficient algorithmic solution is known so far.
The contribution of this paper is a deeper exploration of this problem. First, it is explained in further detail and all important preliminaries are given in \cref{sec:2}. Then, a new solution approach is proposed based on the reformulation to a known enumeration problem from graph theory. The equivalence of both problems is proven (\cref{sec:3}). Additionally, an algorithm from \cite{gabow78} for the solution of the enumeration problem is presented (\cref{sec:4}). This algorithm is applied to the original problem, yielding a method for finding all connections in the Pantelides algorithm, and two detailed examples of its usage are given (\cref{sec:5}). Finally, a numerical demonstration of the advantageous properties of the new method is shown (\cref{sec:6}) and the paper is concluded with a summary and some final remarks (\cref{sec:7}). \bigbreak \textbf{Notation:}
The natural numbers, the non-negative integers, and the reals are denoted by $\mathbb{N}$, $\mathbb{N}_0$ and $\mathbb{R}$, respectively. For a differentiable function $f: \mathbb{I}\to\mathbb{R}^n$, the notation~${\dot f:=\tfrac{\mathrm{d}}{\mathrm{d}t}f}$ is used to denote the derivative with respect to the (time) variable $t$ and $\ddot f:=\tfrac{\mathrm{d}}{\mathrm{d}t}\dot f$ for the second derivative. For higher derivatives of order $q\in\mathbb{N}_0$, the abbreviation $f^{(q)}$ is used. Similarly, the shift operator~$\Delta_\tau$ is defined as ${\Delta_\tau f(t) = f(t+\tau)}$. The union of two sets $A$ and $B$ is denoted by $A \cup B$. A disjoint union of sets is written as~${A\dot\cup B}$. The cardinality of the set $A$ is denoted by $|A|$.
\section{Problem description} \label{sec:2}
This section describes the overall problem of the paper in detail and introduces the most important definitions to give all preliminaries needed to understand the solution approach. Since this paper can be seen as an extension of \cite{ahrens20}, most information is based on that work and all derivations can be found there.
The Pantelides algorithm translates the structural information of the DDAE into graphs. First, the \textit{shifting graph} $G^S$ is constructed by combining all variables of the same index $k$ (for each $k=1,...,n$) and shift order~${p\in\mathbb{N}_0\cup\{-1\}}$ (but possibly different differentiation order) into the same equivalence class, i.e., \begin{equation*}
[\Delta_{p\tau}x_k]:=\left\{\Delta_{p\tau}x_k,\Delta_{p\tau}\dot x_k,\Delta_{p\tau} \ddot x_k,...\right\}. \end{equation*} Then, one can define the set of equation nodes, variable nodes, and edges as \begin{align*}
V^S_E&:=\left\{F_1,...,F_n\right\},\\
V^S_V&:=\Big\{[\Delta_{p\tau}x_k]\;\Big|\; \exists q\in\mathbb{N}_0 \text{ s.t. }\Delta_{p\tau}x_k^{(q)}\\
&\qquad\qquad\qquad\;\;\text{ appears in the DDAE}\Big\},\\
E^S&:=\left\{\{F_i,v_k\}\in V^S_E\times V^S_V \;\Big| \; \exists \tilde x \in v_k\text{ that appears in }F_i \right\}, \end{align*} respectively, which yields the shifting graph defined as~$G^S:=(V^S_E \dot \cup V^S_V,E^S)$.
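For concreteness, this construction can be sketched in a few lines of Python. The tuple encoding of occurrences used here is a hypothetical choice made only for this illustration; the example data corresponds to the three-equation DDAE \cref{eq:example1} discussed below.

```python
def shifting_graph(occurrences, n):
    """Build the bipartite shifting graph G^S from structural information.

    `occurrences` is a set of tuples (i, k, p, q) meaning that the q-th
    derivative of the p-fold shifted variable x_k appears in equation F_i.
    Equivalence classes [Delta_{p tau} x_k] collapse the derivative order q,
    so variable nodes are the pairs (k, p) and edges simply forget q.
    """
    eq_nodes = [f"F{i}" for i in range(1, n + 1)]
    var_nodes = {(k, p) for (_, k, p, _) in occurrences}
    edges = {(f"F{i}", (k, p)) for (i, k, p, _) in occurrences}
    return eq_nodes, var_nodes, edges

# Structural data of the example below:
# F1: x1' = f1,  F2: x1' = x2 + f2,  F3: 0 = x1 + x2 + Delta_{-tau} x3 + f3
occ = {(1, 1, 0, 1),                # x1' in F1
       (2, 1, 0, 1), (2, 2, 0, 0),  # x1', x2 in F2
       (3, 1, 0, 0), (3, 2, 0, 0),  # x1, x2 in F3
       (3, 3, -1, 0)}               # Delta_{-tau} x3 in F3
VE, VV, ES = shifting_graph(occ, 3)
print(sorted(VV))  # [(1, 0), (2, 0), (3, -1)]
```

Note how $x_1$ and $\dot x_1$ produce a single variable node $(1,0)$, while the delayed variable keeps its shift order $p=-1$.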
In the bipartite shifting graph, one successively assigns to each equation node $F_i\in V^S_E$ an equivalence class $v_k\in V^S_V$ of highest shift, i.e., if $v_k=[\Delta_{p\tau}x_k]$ is of highest shift and occurs in $F_i$, then $[\Delta_{(p+\ell)\tau}x_k]$, for $\ell>0$, does not occur in any equation. By definition, a variable node $[\Delta_{p\tau}x_k]$ with negative shift $p=-1$ is never of highest shift and cannot be matched to an equation node. In this way, a matching $\mathcal{M}$ is constructed, consisting of all assigned pairs $\{F_i,v_k\}$. If a particular $F_j$ cannot be matched to a variable node that is not in $\mathcal{M}$ yet, the node $F_j$ is called \textit{exposed with respect to $\mathcal{M}$}. The corresponding equation is shifted, together with all other equations that $F_j$ is connected to via alternating paths with respect to $\mathcal{M}$ in~$G^S$. An \textit{alternating path with respect to $\mathcal{M}$} is a sequence of edges \begin{equation*}
\left(\left\{F_{i_1},v_{k_1}\right\},\left\{v_{k_1},F_{i_2}\right\},\left\{F_{i_2},v_{k_2}\right\},...,\left\{v_{k_{N-1}},F_{i_N}\right\}\right) \end{equation*} in $G^S$, where all $i_\ell$, $\ell=1,...,N$, and all $k_m$, $m=1,...,N-1$, are distinct, respectively, and that has alternating non-matching and matching edges while starting with a non-matching edge.
However, simply shifting all these equations may not be sufficient, since the connection may be given only implicitly through the equivalence classes of the variable nodes. To see this, define $G:=(V_E\dot\cup V_V,E)$ as the \textit{graph of the DDAE} with \begin{align*}
V_E&:=\left\{ F_1,...,F_n\right\},\\
V_V&:=\Big\{\Delta_{p\tau}x_k^{(q)}\;\Big|\; p\in\mathbb{N}_0\cup\{-1\},\ q\in\mathbb{N}_0,\\
&\qquad\qquad\qquad\;\;\Delta_{p\tau}x_k^{(q)}\text{ appears in the DDAE}\Big\},\\
E&:=\left\{\left.\{F_i,v_k\}\in V_E\times V_V \;\right| \; v_k\text{ appears in }F_i \right\}. \end{align*} In other words, the graph of the DDAE contains all variables explicitly as distinct nodes without using equivalence classes. An implicit connection in the shifting graph means that the involved equations contain variables with the same shift but a different differentiation order. Thus, they belong to the same variable node in the shifting graph but not to the same node in the graph of the DDAE. There is an alternating path connecting the exposed equation $F_j$ and the equation that has to be shifted in $G^S$ but not in $G$. In this case, an explicit connection has to be established by differentiating the involved equations that do not depend on the highest derivative in the equivalence class. To ensure that all implicit connections are resolved, all possible connections have to be identified and checked.
Theoretically, this could be done by simply checking all combinations of edges of $E^S$ that yield alternating paths. In practice, however, this approach is not feasible, because the number of such combinations grows rapidly with the number of nodes and edges of the graph~$G^S$. By reformulating the problem, it can be solved much more efficiently.
\begin{figure}
\caption{Visualization of the problem of finding all connections for $F_3$ with respect to $\mathcal{M}$ in the shifting step of the DDAE \cref{eq:example1}.}
\label{fig:example1.1}
\label{fig:example1.2}
\label{fig:example1.3}
\label{fig:example1.4}
\label{fig:example1}
\end{figure}
\begin{example} \label{ex:con} For an illustration of the problem, consider the DDAE from \cite[Example 3.11, p.17]{ahrens20}: \begin{equation}\label{eq:example1}
\begin{aligned}
\dot x_1 &= f_1,\\
\dot x_1 &= x_2+ f_2,\\
0 &= x_1 +x_2+ \Delta_{-\tau} x_3 + f_3.
\end{aligned} \end{equation} After assigning the equivalence classes $[x_1]=\{x_1,\dot x_1\}$ to~$F_1$ and~$[x_2]=\{x_2\}$ to~$F_2$ in the shifting step, this yields the matching \begin{equation*}
\mathcal{M}=\left\{\{F_1,\{x_1,\dot x_1\}\},\{F_2,\{x_2\}\}\right\} \end{equation*} and the shifting graph $G^S$ in \Cref{fig:example1.1} (matching edges are colored in blue). Equation $F_3$ is exposed and cannot be matched directly to any equivalence class, but it is connected via alternating paths to the other equation nodes.
Therefore, all possible connections for~$F_3$ with respect to $\mathcal{M}$ have to be found. These are \begin{equation*}
\mathcal{C}_1=\left\{(F_3,\{x_2\},F_2),(F_2,\{x_1,\dot x_1\},F_1)\right\} \end{equation*} (pictured red in \Cref{fig:example1.2}) and \begin{equation*}
\mathcal{C}_2=\left\{(F_3,\{x_1,\dot x_1\},F_1),(F_3,\{x_2\},F_2)\right\} \end{equation*} (pictured red in \Cref{fig:example1.3}). Additionally, $G$, the graph of the DDAE, is visualized in \Cref{fig:example1.4}. One can see that connection $\mathcal{C}_1$ does also exist in $G$ via the path \begin{equation*}
\left(\{F_3,x_2\},\{x_2,F_2\},\{F_2,\dot x_1\},\{\dot x_1,F_1\}\right) \end{equation*} and hence is an explicit connection. However, $\mathcal{C}_2$ is implicit, as the alternating path~${(F_3,\{x_1,\dot x_1\},F_1)}$ does not connect $F_3$ and $F_1$ in $G$. One has to differentiate $F_3$ to establish an explicit connection. This shows that checking one connection is not enough; all connections have to be identified to resolve possible implicit connections in the shifting graph. \end{example}
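The connections in \Cref{ex:con} can also be enumerated mechanically. The following Python sketch is a naive illustration, not the method proposed in this paper: it chooses one non-matching predecessor for every matched equation and keeps exactly those choices whose alternating paths hang together below the exposed node.

```python
# Shifting graph of the example: equation nodes F1..F3 and the equivalence
# classes v1 = [x1] = {x1, x1'} and v2 = [x2] as variable nodes.
EDGES = {("F1", "v1"), ("F2", "v1"), ("F2", "v2"),
         ("F3", "v1"), ("F3", "v2")}
MATCHING = {("F1", "v1"), ("F2", "v2")}   # blue edges in the figure
EXPOSED = "F3"

def connections(edges, matching, exposed):
    """Enumerate all connections for the exposed node: pick, for every matched
    equation F_l, one alternating path (F_i, v_k, F_l) with {F_i, v_k} a
    non-matching edge, and keep the choices whose paths are connected to the
    exposed node (each matching edge is then used exactly once)."""
    matched = {F: v for (F, v) in matching}   # F_l -> its matched v_k
    targets = sorted(matched)                 # equations needing a predecessor
    preds = [[Fi for (Fi, vk) in edges - matching if vk == matched[Fl]]
             for Fl in targets]
    result = []

    def rec(i, chosen):
        if i == len(targets):
            # breadth-first search from the exposed node over the chosen paths
            reached, frontier = set(), {exposed}
            while frontier:
                reached |= frontier
                frontier = {Fl for Fl, Fi in zip(targets, chosen)
                            if Fi in frontier} - reached
            if set(targets) <= reached:
                result.append([(Fi, matched[Fl], Fl)
                               for Fl, Fi in zip(targets, chosen)])
            return
        for Fi in preds[i]:
            rec(i + 1, chosen + [Fi])

    rec(0, [])
    return result

cons = connections(EDGES, MATCHING, EXPOSED)
print(len(cons))  # 2: exactly the connections C_1 and C_2 of the example
```

The connectedness test already encodes the key observation of the next sections: each matched equation has exactly one predecessor, so reachability from the exposed node rules out cycles.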
\section{Reformulation of problem} \label{sec:3}
First, a connection has to be technically defined. To simplify the notation, let $G=(V_E \dot\cup V_V,E)$ be a bipartite graph with equation nodes $V_E$ and variable nodes $V_V$, where $E$ contains only edges between $V_E$ and $V_V$, not between nodes of one set. Further, let a matching \begin{equation*}
\mathcal{M}=\left\{\left\{F_{i_1},v_{k_1}\right\},..., \left\{F_{i_M},v_{k_M}\right\}\right\}\subseteq E \end{equation*} be given in $G$ with $M<n$ and $F_j\in V_E$ an exposed equation node with respect to $\mathcal{M}$. Then, \begin{align*}
C_{F_j}:=\big\{ \left.F_k\in V_E \;\right|\; \exists\text{ alternating path }\text{between } F_j \text{ and } F_k \text{ in } G \big\} \end{align*} denotes the equation nodes that are connected to $F_j$ via an alternating path. This set is automatically generated by the algorithm Augmentpath (see \cite[Algorithm 1, p.10]{ahrens20}, \cite[Algorithm 3.2, p.217]{pantelides88}). \begin{definitionn} \label{def:con} Let $G=(V_E \dot\cup V_V,E)$ be a bipartite graph and $\mathcal{M}$ a matching in $G$. Further, let $F_j\in V_E$ be exposed with respect to $\mathcal{M}$ and $C_{F_j}$ as defined above. A \textit{connection for $F_j$ with respect to $\mathcal{M}$} is defined as a set of connected alternating paths $(F_i,v_k,F_\ell) \in V_E \times V_V \times V_E$, with~$\{F_i,v_k\} \in E \setminus \mathcal{M}$ and $\{v_k,F_\ell\}\in\mathcal{M}$. Additionally, it has to hold that for all~$F_\ell\in C_{F_j}$ the corresponding matching edge~$\{v_k,F_\ell\} \in \mathcal{M}$ occurs exactly once and there is at least one alternating path starting in $F_j$. \end{definitionn}
Note that the definition of a connection for $F_j$ with respect to $\mathcal{M}$ has been changed in comparison to the definition from \cite[p.18]{ahrens20}. The previous definition allows sets of alternating paths that contain cycles and not necessarily the exposed node $F_j$. In the forthcoming \Cref{cor:cycle-free}, it is shown that a connection in the sense of \Cref{def:con}, however, is cycle-free. Also, a connection for $F_j$ with respect to $\mathcal{M}$ will simply be referred to as a connection when it is clear which node is exposed and which matching the connection is based on.
With the exact definition of a connection, one can further define the connection graph by interpreting the alternating paths from this definition as directed edges between the equation nodes.
\begin{definitionn} \label{def:congraph} Let $G=(V_E \dot\cup V_V,E)$ be a bipartite graph and $\mathcal{M}$ a matching in $G$. Further, let $F_j\in V_E$ be exposed with respect to $\mathcal{M}$ and $C_{F_j}$ as defined above. Define the set of nodes $V_H:=C_{F_j}\dot\cup \{F_j\}$ and directed edges \begin{align*}
E_H:=\big\{&\left. (F_i,F_\ell)\in V_H \times V_H \;\right| \; (F_i,v_k,F_\ell) \textnormal{ is an alternating}\\
&\text{path with }\{F_i,v_k\}\in E\setminus \mathcal{M},\{v_k,F_\ell\}\in\mathcal{M}\big\}. \end{align*} Then, the directed graph $H:=(V_H,E_H)$ is called \textit{connection graph for $F_j$ with respect to $\mathcal{M}$}. \end{definitionn}
\begin{remark} \label{rem:var} Denoting the alternating path $(F_i,v_k,F_\ell)$ as~$(F_i,F_\ell)$, it might seem as if information is lost about which variable node $v_k$ connects the equation nodes~$F_i$ and $F_\ell$. However, since each $F_\ell$ is uniquely matched to one~$v_k$ in $\mathcal{M}$, the variable node can easily be reconstructed from the directed edge $(F_i,F_\ell)$ using $\mathcal{M}$. A second approach to not lose information is to define edge weights~$w_{i\ell}=k$ for the edges $(F_i,F_\ell)$, i.e., if $v_k$ is to be reconstructed from $(F_i,F_\ell)$, it holds that $v_k=v_{w_{i\ell}}$. \end{remark}
\noindent Similar to before, the connection graph for $F_j$ with respect to $\mathcal{M}$ will be referred to simply as connection graph when it is clear which node is exposed and which matching the connection graph is based on.
\begin{example} \label{ex:congraph} Consider again the DDAE \cref{eq:example1} from \Cref{ex:con} and the shifting graph from \Cref{fig:example1.1}. Based on the matching \begin{equation*}
\mathcal{M}=\left\{\{F_1,\{x_1,\dot x_1\}\},\{F_2,\{x_2\}\}\right\} \end{equation*} and $C_{F_3}=\{F_1,F_2\}$, the connection graph for $F_3$ with respect to $\mathcal{M}$ can be defined according to \Cref{def:congraph} as~$H=(V_H,E_H)$ with \begin{align*}
V_H&=\{F_1,F_2,F_3\},\\
E_H&=\left\{(F_2,F_1),(F_3,F_1),(F_3,F_2)\right\}. \end{align*} A picture of $H$ can be seen in \Cref{fig:example2}. \end{example}
\begin{figure}
\caption{The connection graph for $F_3$ with respect to~$\mathcal{M}$ of the DDAE \cref{eq:example1}.}
\label{fig:example2}
\end{figure}
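Deriving the connection graph from $G^S$ and $\mathcal{M}$ is purely mechanical, as the following Python sketch illustrates for \Cref{ex:congraph}. (The restriction of the node set to $C_{F_j}\dot\cup\{F_j\}$ is omitted here, since in this example every equation node is already connected to $F_3$ via an alternating path.)

```python
def connection_graph(edges, matching, exposed):
    """Directed connection graph H: one arc (F_i, F_l) for every alternating
    path (F_i, v_k, F_l) with {F_i, v_k} non-matching and {v_k, F_l} matching.
    The variable node v_k is recoverable from F_l via the matching (Remark)."""
    matched = {F: v for (F, v) in matching}
    nodes = sorted(matched) + [exposed]
    arcs = {(Fi, Fl) for (Fi, vk) in edges - matching
            for Fl in matched if matched[Fl] == vk}
    return nodes, arcs

# Shifting graph and matching of the running example (v1 = [x1], v2 = [x2])
nodes, arcs = connection_graph(
    {("F1", "v1"), ("F2", "v1"), ("F2", "v2"), ("F3", "v1"), ("F3", "v2")},
    {("F1", "v1"), ("F2", "v2")},
    "F3")
print(sorted(arcs))  # [('F2', 'F1'), ('F3', 'F1'), ('F3', 'F2')]
```

The computed arcs coincide with the set $E_H$ of \Cref{ex:congraph}.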
The connection graph makes it possible to reformulate the problem of finding all connections in the sense of \Cref{def:con} by transferring it from the shifting graph to the connection graph. It translates to finding all arborescences (defined below) with root $F_j$ in $H$. To prove this, some definitions and lemmas from graph theory are needed (see \cite[p.71-73]{algmath} for reference and proofs).
\begin{definitionn} \label{def:arb} \cite[Definition 6.15, p.71, Definition 6.20, 6.22, p.73]{algmath} \begin{enumerate}
\item An undirected graph is called \textit{forest} if it contains no cycles.
\item An undirected graph is called \textit{tree} if it is a forest and connected.
\item Given a directed graph $G$, one can replace every directed edge by an undirected edge to get an undirected graph. The arising graph is called the \textit{underlying undirected graph} of $G$.
\item A directed graph is called \textit{branching} if its underlying undirected graph is a forest and every node has at most one edge ending in it.
\item A directed graph is called \textit{arborescence} if it is a connected branching. \end{enumerate}
\end{definitionn}
\begin{lemma} \label{lem:tree} \textnormal{\cite[Theorem 6.18, p.72]{algmath}} Let $G$ be an undirected graph with $n$ vertices. Then, $G$ is a tree if and only if $G$ has $n-1$ edges and is connected. \end{lemma}
\noindent The underlying undirected graph of an arborescence has to be a connected forest, i.e., a tree. According to \Cref{lem:tree}, an arborescence with $n$ nodes thus has~$n-1$ edges and, according to \Cref{def:arb}, every node has at most one edge ending in it. Therefore, there is exactly one node $r$ with no incoming edge. Let $\delta^-(v)$ be the set of incoming edges of a vertex~$v$ of $G$. Then, this condition can be formulated as ${\delta^-(r)=\emptyset}$. In this case,~$r$ is called \textit{root} of the arborescence. For any edge $(u,v)$ of an arborescence, $v$ is called a \textit{child} of $u$ and $u$ the \textit{predecessor} of~$v$. Vertices with no children are called \textit{leaves} (see \cite[p.73]{algmath}).
\begin{lemma} \label{lem:arb} \textnormal{\cite[Theorem 6.23, p.73]{algmath}} Let $G$ be a directed graph and $r$ a vertex of $G$. Then, the following statements are equivalent: \begin{enumerate}
\item $G$ is an arborescence with root $r$.
\item $G$ is a branching and $\delta^-(r)=\emptyset$.
\item $\delta^-(r)=\emptyset$ and there exists a uniquely determined directed path from $r$ to every vertex in $G$. \end{enumerate} \end{lemma}
\begin{theorem} \label{thm:con} Let $G=(V_E \dot\cup V_V,E)$ be a bipartite graph,~$\mathcal{M}$ a matching in $G$, $F_j\in V_E$ exposed with respect to $\mathcal{M}$, and~$H=(V_H,E_H)$ the connection graph for $F_j$ with respect to $\mathcal{M}$. Then, the following statements are equivalent: \begin{enumerate}
\item The set
\begin{equation*}
\mathcal{C}:=\left\{\left(F_{i_1},v_{k_1},F_{\ell_1}\right),...,\left(F_{i_M},v_{k_M},F_{\ell_M}\right)\right\}
\end{equation*}
is a connection for $F_j$ with respect to $\mathcal{M}$.
\item The directed subgraph
\begin{equation*}
\mathcal{H}:= \left(V_H,\left\{\left(F_{i_1},F_{\ell_1}\right),...,\left(F_{i_M},F_{\ell_M}\right)\right\}\right)\subseteq H
\end{equation*}
is an arborescence with root $F_j$. \end{enumerate} \end{theorem}
\begin{proof} i) $\Rightarrow$ ii): Let $F_j\in V_E$ be exposed, denote a connection for~$F_j$ as~$\mathcal{C}=\{(F_{i_1},v_{k_1},F_{\ell_1}),...,(F_{i_M},v_{k_M},F_{\ell_M})\}$, and let $H$ be the connection graph for $F_j$ with respect to $\mathcal{M}$. Additionally, denote \begin{equation*}
\mathcal{H}=(V_H,E_\mathcal{H}),\;\;\;\; E_\mathcal{H}:=\left\{\left(F_{i_1},F_{\ell_1}\right),...,\left(F_{i_M},F_{\ell_M}\right)\right\}\subseteq E_H, \end{equation*} as a subgraph of $H$, and \begin{equation*}
\mathcal{H}_u:=(V_H,E_u),\quad E_u:=\left\{\left\{F_{i_1},F_{\ell_1}\right\},...,\left\{F_{i_M},F_{\ell_M}\right\}\right\}, \end{equation*}
as the underlying undirected graph of $\mathcal{H}$. According to \Cref{def:congraph}, it holds that ${|V_H|=|C_{F_j}|+1}$ and therefore, \begin{equation*}
M=|\mathcal{C}|=|C_{F_j}|=|V_H|-1. \end{equation*}
It follows that $\mathcal{H}_u$ has $|V_H|-1$ edges. According to \Cref{def:con}, the alternating paths $(F_i,v_k,F_\ell)\in\mathcal{C}$ are connected and each $F_\ell\in C_{F_j}$ occurs exactly once. The variable nodes~$v_k$ are uniquely determined by the equation nodes $F_\ell$ via the matching~$\mathcal{M}$. This implies that each~$v_{k_m}$, for $m=1,...,M$, also occurs only once in $\mathcal{C}$ and the alternating paths must be connected via the equation nodes $F_i$ and $F_\ell$. Thus, the edges in $E_u$ are connected and thereby, $\mathcal{H}_u$ is connected as well.
In summary, $\mathcal{H}_u$ has $|V_H|-1$ edges and is connected. Therefore, according to \Cref{lem:tree}, the underlying undirected graph of $\mathcal{H}$ is a tree (and also a forest). Additionally, $V_H=C_{F_j}\dot\cup \{F_j\}$ and it was already mentioned that each $F_\ell\in C_{F_j}$ occurs exactly once as second equation node in the alternating paths $(F_i,v_k,F_\ell)\in\mathcal{C}$. The node~$F_j$ itself has no alternating path leading to it because it is exposed. Therefore, every node of $\mathcal{H}$ has at most one edge ending in it. According to \Cref{def:arb}, the graph $\mathcal{H}$ is a branching.
Finally, one can choose the exposed node $F_j$ as the root of~$\mathcal{H}$ because $\delta^-(F_j)=\emptyset$. By \Cref{lem:arb}, it follows that~$\mathcal{H}$ is an arborescence with root $F_j$.
ii) $\Rightarrow$ i): Let $\mathcal{H}= (V_H,\{(F_{i_1},F_{\ell_1}),...,(F_{i_M},F_{\ell_M})\})\subseteq H$ be an arborescence with root $F_j$, and denote \begin{equation*}
\mathcal{C}=\left\{\left(F_{i_1},v_{k_1},F_{\ell_1}\right),...,\left(F_{i_M},v_{k_M},F_{\ell_M}\right)\right\}. \end{equation*} Since $\mathcal{H}$ is a subgraph of the connection graph for $F_j$ with respect to $\mathcal{M}$, it follows that all $(F_i,v_k,F_\ell)\in\mathcal{C}$ are alternating paths with $\{F_i,v_k\}\in E\setminus \mathcal{M}$, $\{v_k,F_\ell\}\in\mathcal{M}$, where $v_k$ is uniquely determined by the matching $\mathcal{M}$ (see \Cref{def:congraph} and \Cref{rem:var}). From \Cref{lem:arb}, it is known that the root $F_j$ of the arborescence is a vertex of~$\mathcal{H}$ with $\delta^-(F_j)=\emptyset$ and, therefore, is also included as the starting node in an alternating path from $\mathcal{C}$.
Additionally, there exists a uniquely determined directed path from the root $F_j$ to every vertex in $\mathcal{H}$. Thus, the same property applies to the alternating paths in $\mathcal{C}$, which yields that they are connected and each $F_\ell\in C_{F_j}$ occurs exactly once as final node of an alternating path. Hence, $\mathcal{C}$ fulfills all properties of a connection for~$F_j$ with respect to $\mathcal{M}$. \end{proof}
\begin{corollary} \label{cor:cycle-free} All connections for $F_j$ with respect to $\mathcal{M}$ are cycle-free. \end{corollary}
\begin{proof} The proof follows directly from \Cref{thm:con}, using the equivalence of a connection for $F_j$ with respect to~$\mathcal{M}$ to an arborescence in the connection graph for $F_j$ with respect to~$\mathcal{M}$. Arborescences are by \Cref{def:arb} cycle-free. \end{proof}
In the literature (e.g., \cite{gabow78}), an arborescence is also referred to as a \textit{spanning tree} of a directed graph. A problem in which all possible solutions have to be computed and explicitly returned as output is called an \textit{enumeration problem}; methods for solving such problems are called \textit{enumeration algorithms}.
With \Cref{thm:con}, a reformulation of the initial problem (finding all connections) has been derived by showing that it is equivalent to the problem of enumerating all arborescences/spanning trees in the corresponding connection graph. Each spanning tree can then be interpreted as a connection. There are efficient algorithms to solve this enumeration problem, one of which is discussed in the next section.
\section{Enumeration of spanning trees} \label{sec:4}
One enumeration algorithm for finding all spanning trees of a directed graph was published in \cite{gabow78}. It turns out to be an effective method for the purpose of this work and is therefore implemented in the (overall) Pantelides algorithm to solve the subproblem of finding all connections. In this section, a short summary is given of how this algorithm works. For further details of the implementation as well as theoretical results and their proofs, see \cite{gabow78}.
First, the important concept of so-called bridges has to be introduced. \begin{definitionn} \cite[p.280]{gabow78} Let $G=(V,E)$ be a directed graph and $r\in V$ a vertex. \begin{enumerate}
\item $G$ is called \textit{rooted at $r$} if there exists a spanning tree with root $r$ in $G$.
\item An edge $e\in E$ is called a \textit{bridge for $r$} if $G$ is rooted at $r$ but $G\setminus\{e\}$ is not rooted at $r$.
\item Equivalently, an edge $e\in E$ is a \textit{bridge for $r$} if it is part of every spanning tree rooted at $r$ in $G$. \end{enumerate} \end{definitionn}
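Statement 2 of this definition can be tested directly: delete $e$ and check whether every vertex is still reachable from $r$. The following Python sketch illustrates this naive check (helper names are hypothetical); it is not the optimized nondescendant-based test described below:

```python
def reachable(vertices, edges, start):
    """Return the set of vertices reachable from `start` via directed edges."""
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_bridge(vertices, edges, root, e):
    """e is a bridge for `root` iff G is rooted at `root` (all vertices
    reachable from it) but G minus e is not."""
    if len(reachable(vertices, edges, root)) != len(vertices):
        return False  # G is not even rooted at `root`
    rest = [f for f in edges if f != e]
    return len(reachable(vertices, rest, root)) != len(vertices)
```

Here, "rooted at $r$" is equivalent to every vertex being reachable from $r$, since a breadth- or depth-first search tree from $r$ is then a spanning tree with root $r$.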
Assume a directed graph $G=(V,E)$ and a root vertex $r$ are given and all spanning trees of $G$ rooted at $r$ have to be computed. This goal is accomplished by finding all spanning trees containing different subtrees $T\subseteq G$, also rooted at $r$.
Given a subtree $T$, the approach consists of successively adding edges to $T$ in the following way: A new edge $e_i:=(u,v)\in E$, directed from a vertex $u\in T$ to a vertex $v\notin T$, is added to $T$, and all spanning trees containing $T \cup \{e_i\}$ are computed. When this is done, the edge~$e_i$ is deleted from $G$ and $T$ and another edge~$e_j\in E\setminus\{e_i\}$ (directed from $T$ to a vertex not in $T$), is added to $T$. Again, all spanning trees containing $T \cup \{e_j\}$ are computed, then $e_j$ is deleted from $G$ and $T$. The same process continues with the next edge and is repeated until an edge is processed that is a bridge for $r$ in the modified graph $G\setminus\{e_i,e_j,...\}$. Each spanning tree containing~$T$ has now been found exactly once.
A key point in this approach is to discover efficiently if an edge $e$ is a bridge. Assume all spanning trees containing $T\cup \{e\}$ have been computed and let $L$ be the last found spanning tree. It has to be checked if $e:=(u,v)$ is a bridge.
There are several ways to do this. The idea pursued in this algorithm is to consider the descendants and nondescendants of $v$ in $L$. \textit{Descendants of $v$ in $L$} are vertices that can be reached by following a directed path starting in $v$ and using only edges of the spanning tree~$L$. Conversely, \textit{nondescendants of $v$ in $L$} are vertices for which no such path can be constructed using edges from $L$.
Clearly, if there is an edge in $G\setminus \{e\}$ that goes from a nondescendant of $v$ (in $L$) to $v$, then $e$ cannot be a bridge: one could delete $e$ and replace it with that edge to construct another spanning tree, so $G\setminus \{e\}$ is still rooted at $r$. On the other hand, if no edge from a nondescendant of $v$ in $L$ to $v$ can be found, $e$ must be a bridge, because deleting $e$ leads to a graph $G\setminus\{e\}$ in which no path to vertex $v$ exists anymore.
For this to hold true, the way edges are added plays an important role. Here, the algorithm adds edges depth-first. The depth of a vertex in a tree is the length of the path from the root to that vertex. Thus, adding an edge depth-first means that it is added at the vertex that has the greatest depth in $T\cup\{e\}$. In particular, this ensures that the last computed spanning tree containing $T\cup\{e\}$ (namely the tree $L$) has the fewest descendants of $v$ amongst all spanning trees containing~${T\cup\{e\}}$. This fact can be used to prove that the bridge test works correctly (see \cite[Lemma 2, p.284]{gabow78} for more details).
Thus, it is important for the implementation to grow~$T$ depth-first. To do so, a stack $F$ is used, which stores edges directed from vertices in $T$ to vertices not in $T$. Note that removing an element from the top of a stack is referred to as \textit{popping}, whereas adding an element to the top of a stack is referred to as \textit{pushing}. An edge $e:=(u,v)$ is popped from the top of $F$ when it is added to $T$, and then the edges for $T\cup\{e\}$ are pushed onto the top of $F$. Also, some edges might be removed from the inner part of~$F$ while growing $T$. This is necessary for all edges in $F$ that are directed to $v$, the newest leaf of $T$. To preserve the depth-first property, these edges have to be restored at the exact same place in $F$ after all spanning trees containing~${T\cup\{e\}}$ have been found.
A second stack $FF$ is used to store already processed edges since they are temporarily deleted from $G$ but have to be restored later.
The full algorithm is stated in \Cref{alg:grow}. Note that the pseudocode uses MATLAB notation for indexing, i.e., array indexing begins at 1 and the index ``end'' of an array points to its last element or to the top of a stack. The symbol ``$\triangleright$'' indicates a comment in the code. \Cref{alg:enum} illustrates how to initialize the method.
\begin{algorithm} \caption{GROW} \label{alg:grow} \textbf{Input:} directed graph $G=(V,E)$, directed subgraph ${T=(V_T,E_T) \subseteq G}$, stack of edges $F$, set of spanning trees~$S$\\ \textbf{Output:} set of spanning trees $S$, last computed spanning tree $L$\\ \begin{algorithmic}[1]
\IF{$|V_T| = |V|$} \STATE $L$ $\leftarrow$ $T$
$\triangleright$ store spanning tree in $L$ and $S$ \STATE $S$ $\leftarrow$ $S \;\dot\cup\; T$ \ELSE \STATE $FF$ $\leftarrow$ $[\emptyset]$ \STATE $b$ $\leftarrow$ $0$ \WHILE{$b=0$} \STATE $e$ $\leftarrow$ $F$(end)
$\triangleright$ pop an edge $e$ from $F$, add it to $T$ \STATE $v$ $\leftarrow$ $e(2)$ \STATE $F$ $\leftarrow$ pop($\{e\}$) \STATE $T$ $\leftarrow$ $(V_T\;\dot\cup\; \{v\},E_T\;\dot\cup\; \{e\})$
\STATE $F$ $\leftarrow$ $F\setminus\{(u,w)\in E\;|\;u\in T,w=v\}$
$\triangleright$ update $F$
\STATE $F$ $\leftarrow$ push($\{(u,w)\in E\;|\;u=v,w\notin T\}$) \STATE $(S,L)$ $\leftarrow$ GROW$(G,T,F,S)$
$\triangleright$ recurse
\STATE $F$ $\leftarrow$ pop($\{(u,w)\in E\;|\;u=v,w\notin T\}$)
$\triangleright$ restore $F$
\STATE $F$ $\leftarrow$ $F\;\dot\cup\;\{(u,w)\in E\;|\;u\in T,w=v\}$ $\triangleright$ restore in same place as before \STATE $T$ $\leftarrow$ $(V_T\setminus \{v\},E_T\setminus \{e\})$ $\triangleright$ delete $e$ from $T$ and $G$, add it to $FF$ \STATE $G$ $\leftarrow$ $(V,E\setminus \{e\})$ \STATE $FF$ $\leftarrow$ push($\{e\}$)
\IF{$\{(u,w)\in E\;|\; w=v,\; u \textnormal{ is a nondescendant}$ $\textnormal{of }v\textnormal{ in }L\}\neq\emptyset$} \STATE $b$ $\leftarrow$ 0
$\triangleright$ bridge test \ELSE \STATE $b$ $\leftarrow$ 1 \ENDIF \ENDWHILE \WHILE{$FF($end$)\neq\emptyset$} \STATE $e$ $\leftarrow$ $FF$(end)
$\triangleright$ reconstruct $G$ \STATE $F$ $\leftarrow$ push($\{e\}$) \STATE $FF$ $\leftarrow$ pop($\{e\}$) \STATE $G$ $\leftarrow$ $(V,E\;\dot\cup\; \{e\})$ \ENDWHILE \ENDIF \end{algorithmic} \end{algorithm}
\begin{algorithm} \caption{Enumeration of spanning trees of a directed graph} \label{alg:enum} \textbf{Input:} directed graph $G=(V,E)$, root node $r \in V$\\ \textbf{Output:} set of all spanning trees $S$\\ \begin{algorithmic}[1] \STATE $T$ $\leftarrow$ $(\{r\},\emptyset)$
\STATE $F$ $\leftarrow$ push($\{(u,v)\in E\;|\; u=r\}$) \STATE $S$ $\leftarrow$ $\emptyset$ \STATE $S$ $\leftarrow$ GROW$(G,T,F,S)$ \end{algorithmic} \end{algorithm}
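The core branching strategy of \Cref{alg:grow} can be sketched compactly in Python: for every frontier edge $e$, all spanning trees containing $T\cup\{e\}$ are found, and $e$ is deleted before the next frontier edge is tried, which is what guarantees that each spanning tree is generated exactly once. The sketch below is a simplified illustration under that strategy; it omits the depth-first stack discipline and the bridge test, so it does not achieve the output-sensitive complexity of Gabow's original method, and the names are hypothetical:

```python
def spanning_trees(vertices, edges, root):
    """Enumerate all spanning arborescences of (vertices, edges) rooted
    at `root`, each exactly once, via branch-and-delete: find all trees
    containing T + e, then delete e before trying the next frontier edge."""
    trees = []

    def grow(tree_v, tree_e, avail):
        if len(tree_v) == len(vertices):
            trees.append(frozenset(tree_e))  # complete spanning tree found
            return
        # edges leaving the current tree (the stack F in the pseudocode)
        frontier = [e for e in avail if e[0] in tree_v and e[1] not in tree_v]
        remaining = list(avail)
        for e in frontier:
            grow(tree_v | {e[1]}, tree_e + [e], remaining)
            remaining.remove(e)  # processed edges are deleted from G

    grow({root}, [], list(edges))
    return trees
```

On the connection graph of the first example below (edges $(F_2,F_1)$, $(F_3,F_1)$, $(F_3,F_2)$ with root $F_3$), this sketch produces exactly the two spanning trees derived there by hand.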
To conclude this section, the complexity of the algorithm is stated. For a directed graph~${G=(V,E)}$ that has~$N$ spanning trees, the algorithm runs in $O(|E|N)$ time and requires $O(|E|)$ space (see \cite[Lemma 4, p.285]{gabow78}). Next, it is shown how to use this method in the Pantelides algorithm.
\section{An algorithm that finds all connections} \label{sec:5}
The enumeration algorithm from the previous section has to be applied to the initial problem of finding all connections. This would replace the computation stated in line 1 of Algorithm 3 from \cite[p.20]{ahrens20}. Thus, transferring the notation, the shifting graph~${G^S=(V^S_E\dot\cup V^S_V,E^S)}$, the exposed equation $F_j\in V^S_E$ and the matching $\mathcal{M}$ are given. It has been shown that finding all connections for $F_j$ with respect to~$\mathcal{M}$ is equivalent to enumerating the spanning trees with root $F_j$ in the connection graph for~$F_j$ with respect to~$\mathcal{M}$.
Therefore, the connection graph $H$ is constructed according to \Cref{def:congraph} and used as input to \Cref{alg:enum}, together with $F_j$ as the root $r$. All spanning trees in $H$ are returned. Given a spanning tree, one can reconstruct the corresponding connection by taking its edges $(F_i,F_\ell)$ and inserting into each directed edge the variable node that was assigned to the equation node $F_\ell$ by $\mathcal{M}$. That yields a set of alternating paths $(F_i,v_k,F_\ell)$ as desired. The method is summarized in \Cref{alg:con}.
\begin{algorithm} \caption{Find all connections for $F_j$ with respect to~$\mathcal{M}$} \label{alg:con} \textbf{Input:} shifting graph $G^S=(V^S_E\dot\cup V^S_V,E^S)$, exposed node $F_j\in V^S_E$, matching $\mathcal{M}$ stored in \code{assign}, \code{colorE}, \code{colorV}\\ \textbf{Output:} set of all connections $P$\\ \begin{algorithmic}[1]
\STATE $C_{F_j}\leftarrow \{ F_k\in V^S_E \;|\; \exists\text{ alternating path between $F_j$}$ $\text{and $F_k$ in $G^S$} \}$ \STATE $V_H \leftarrow C_{F_j}\dot\cup \{F_j\}$
\STATE ${E_H \leftarrow \{(F_i,F_\ell)\in V_H \times V_H \;|\; (F_i,v_k,F_\ell) \text{ is an alterna-}}$ ${\text{ting path with } (v_k,F_\ell)\in\mathcal{M},(F_i,v_k)\in E^S\setminus \mathcal{M} \}}$ \STATE $H\;\leftarrow (V_H, E_H) $
$\triangleright$ construct connection graph \STATE $P$ $\leftarrow$ Algorithm2($H,F_j$)
$\triangleright$ enumeration algorithm \FORALL{$T=(V_T,E_T)\in P$} \STATE $T\leftarrow E_T$
$\triangleright$ replace spanning trees by connections \FORALL{$e=(F_i,F_\ell)\in T$} \STATE $e\leftarrow (F_i,v_k,F_\ell)$ such that $(v_k,F_\ell)\in\mathcal{M}$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm}
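The final conversion step of \Cref{alg:con} amounts to a lookup in the matching: each directed edge $(F_i,F_\ell)$ is expanded into the alternating path $(F_i,v_k,F_\ell)$ with $(v_k,F_\ell)\in\mathcal{M}$. A minimal Python sketch, where `matching` is a hypothetical dictionary mapping each matched equation node to its variable node:

```python
def trees_to_connections(trees, matching):
    """Turn spanning trees (iterables of directed edges (F_i, F_l)) into
    connections by inserting the variable node that the matching assigns
    to the target equation node F_l."""
    return [{(fi, matching[fl], fl) for (fi, fl) in tree} for tree in trees]
```

This works because, as noted in the proof of \Cref{thm:con}, the variable node of each alternating path is uniquely determined by its final equation node via $\mathcal{M}$.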
\noindent To illustrate the new method, a simple and a slightly more complex example are given in the following.
\begin{example} Consider again the DDAE \cref{eq:example1} from \Cref{ex:con} with the shifting graph \Cref{fig:example1.1}. In \Cref{ex:congraph}, it has been shown how to construct the connection graph for $F_3$ with respect to the matching \begin{equation*}
\mathcal{M}=\left\{\{F_1,\{x_1,\dot x_1\}\},\{F_2,\{x_2\}\}\right\}. \end{equation*} It is given as $H=(V_H,E_H)$ with \begin{align*}
V_H&=\{F_1,F_2,F_3\},\\
E_H&=\left\{(F_2,F_1),(F_3,F_1),(F_3,F_2)\right\}, \end{align*} and is visualized in \Cref{fig:example2}.
Hence, one defines $G:=H$ and $r:=F_3$ as the input to \Cref{alg:enum} and enumerates all spanning trees of $G$ rooted at $r$. To initialize the process, set \begin{align*}
T &= (\{F_3\},\emptyset),\\
F &= \left[(F_3,F_2),(F_3,F_1)\right], \end{align*} and execute \Cref{alg:grow}.
The recursion process of \Cref{alg:grow} can be visualized by the tree structure in \Cref{fig:example3}. Note that the nodes of the computation tree will be called \textit{bisections} and the edges \textit{arrows}, so as not to confuse them with the nodes and edges of $T$ or $G$. In general, the computation tree is read as follows: each bisection represents the current subgraph~${T\subseteq G}$, indicated by its edges $E_T$. As described in \cref{sec:4}, one then adds an edge $e\in G$ from the stack $F$ to~$T$ and computes all spanning trees containing $T\cup\{e\}$. Adding an edge $e$ is represented by an arrow pointing away from a bisection, i.e., if \begin{equation*}
E_T=\left\{(F_{i_1},F_{\ell_1}),...,(F_{i_m},F_{\ell_m})\right\} \end{equation*} and $e=(F_{i_{m+1}},F_{\ell_{m+1}})$ is added, then this is visualized in the computation tree by an arrow pointing from \begin{align*}
&\left\{(F_{i_1},F_{\ell_1}),...,(F_{i_m},F_{\ell_m})\right\}\quad\text{to} \\
&\left\{(F_{i_1},F_{\ell_1}),...,(F_{i_m},F_{\ell_m}),(F_{i_{m+1}},F_{\ell_{m+1}})\right\}. \end{align*} Thus, the computation of all spanning trees containing~$T$, or $T\cup\{e\}$, is represented by the subtree (of the computation tree) rooted at the bisection representing~$T$, or $T\cup\{e\}$, respectively. The arrows pointing away from~$T$ are, from left to right, all edges from the stack~$F$ that are added to $T$. Also, remember that after the computation of all spanning trees containing $T\cup\{e\}$, it has to be checked whether $e$ is a bridge. If it is not, $e$ is deleted from~$T$ and $G$. This is depicted by the red arrows pointing to the bisection that represents the addition of the next edge, together with the corresponding label indicating which edge is deleted. If it is a bridge, then all spanning trees containing $T$ have been found and that iteration comes to an end. This is similarly depicted by a red arrow pointing to ``END''. Finally, each leaf in the lowest level of the computation tree is a complete and unique spanning tree.
\begin{figure}
\caption{Computation tree of \Cref{alg:grow} for the construction of connections for $F_3$ with respect to $\mathcal{M}$ in the shifting step of the DDAE \cref{eq:example1}.}
\label{fig:example3.3}
\label{fig:example3}
\end{figure}
In the case of the present example and as stated above, one starts with $T$ containing no edge ($E_T=\emptyset$) and pops the last element from $F$ to add it to $T$, yielding \begin{equation}\label{it1} \begin{aligned}
e&=(F_3,F_1),\\
T&=\left(\{F_3,F_1\},\{(F_3,F_1)\}\right),\text{ and}\\
F&= \left[(F_3,F_2)\right]. \end{aligned} \end{equation} As all spanning trees containing $T$ shall be computed, one pops the next edge from $F$, here $(F_3,F_2)$. This results in \begin{align*}
e&=(F_3,F_2),\\
T&=\left(\{F_3,F_1,F_2\},\{(F_3,F_1),(F_3,F_2)\}\right),\text{ and}\\
F&= [\emptyset]. \end{align*} The tree $T$ is now a complete spanning tree as it has~$n-1$ (here, $n=3$) edges. Thus, one sets $L=T$ and tests whether~${e=(F_3,F_2)}$ is a bridge. The nondescendants of~$F_2$ in $L$ are $F_3$ and $F_1$, and there is no edge in $G$ that goes from a nondescendant of $F_2$ to $F_2$ besides $e$ itself. Consequently, $e$ is categorized as a bridge and indeed, all spanning trees containing the subtree with $E_T=\{(F_3,F_1)\}$ have been computed. The iteration ends and the algorithm returns to the setting of (\ref{it1}). Doing the bridge test here reveals that $e=(F_3,F_1)$ is not a bridge, since~$L$ remains unchanged and there exists the edge $(F_2,F_1)$ in~$G$ where $F_2$ is a nondescendant of $F_1$ in $L$. Therefore, the edge $(F_3,F_1)$ is deleted from $G$ and $T$ and the next iteration begins, meaning that the next edge from $F$ is added to $T$: \begin{equation}\label{it2}
\begin{aligned}
e&=(F_3,F_2),\\
T&=\left(\{F_3,F_2\},\{(F_3,F_2)\}\right),\text{ and}\\
F&= \left[(F_2,F_1)\right].
\end{aligned} \end{equation} Again, all spanning trees containing $T$ have to be computed and the next edge is popped from $F$, resulting in \begin{align*}
e&=(F_2,F_1),\\
T&=\left(\{F_3,F_2,F_1\},\{(F_3,F_2),(F_2,F_1)\}\right),\text{ and}\\
F&= [\emptyset]. \end{align*} The tree $T$ is a new and distinct spanning tree. One sets~$L=T$ and a test reveals that $e$ is a bridge: $F_3$ is the only nondescendant of $F_1$ in $L$ that does not belong to $e$ itself, and the edge $(F_3,F_1)$ was just deleted from $G$, so it does not exist anymore in the current graph (although it will be restored later). All spanning trees containing the subtree with $E_T=\{(F_3,F_2)\}$ have been found. The current iteration is terminated and the algorithm returns to the iteration with the setting (\ref{it2}). Here, one checks whether $e=(F_3,F_2)$ is a bridge and again, it is. The edge $e$ is the only edge leading to $F_2$ in $G$. Hence, this iteration ends as well, which means that all spanning trees containing the subtree with $E_T=\emptyset$ have been computed successfully, or in other words, all spanning trees existing in $G$. The original graph $G$ is restored (i.e., the edge that was deleted, $(F_3,F_1)$, is added once again to $G$), the whole algorithm terminates and returns \begin{align*}
S=\big\{&\big(\{F_3,F_1,F_2\},\{(F_3,F_1),(F_3,F_2)\}\big),\\
&\big(\{F_3,F_2,F_1\},\{(F_3,F_2),(F_2,F_1)\}\big)\big\}. \end{align*} One can easily check by hand that these two spanning trees are the only ones existing in $G$.
Finally, the result, still in tree structure, has to be converted back to a set of connections. Inserting the variable nodes stored in the matching $\mathcal{M}$ gives \begin{align*}
P=\big\{&\big\{(F_3,\{x_1,\dot x_1\},F_1),(F_3,\{x_2\},F_2)\big\},\\
&\big\{(F_3,\{x_2\},F_2),(F_2,\{x_1,\dot x_1\},F_1)\big\}\big\}. \end{align*} By comparing $P$ to \Cref{fig:example1.2} and \Cref{fig:example1.3}, it can be seen that the algorithm successfully determined all desired connections for $F_3$ with respect to $\mathcal{M}$. \end{example}
\begin{example} Consider the DDAE \begin{equation}\label{eq:example2} \begin{aligned}
\dot x_1 &= x_2+x_3,\\
\dot x_2 &= x_3+ \Delta_{-\tau} x_2,\\
\dot x_3 &= x_2+ \Delta_{-\tau} x_3,\\
0 &= x_1 +x_2+x_3+\Delta_{-\tau} x_4. \end{aligned} \end{equation} The shifting graph, after assigning $\{x_1,\dot x_1\}$ to~$F_1$,~$\{x_2,\dot x_2\}$ to $F_2$ and $\{x_3,\dot x_3\}$ to $F_3$, is shown in \Cref{fig:example4.1}. The equation $F_4$ is exposed and cannot be matched directly to any equivalence class, but it is connected via alternating paths to all other equation nodes. Therefore, it holds that ${C_{F_4}=\{F_1,F_2,F_3\}}$ and all possible connections for~$F_4$ with respect to \begin{equation*}
\mathcal{M}=\left\{\{F_1,\{x_1,\dot x_1\}\},\{F_2,\{x_2,\dot x_2\}\},\{F_3,\{x_3,\dot x_3\}\}\right\} \end{equation*} have to be found. The connection graph $H=(V_H,E_H)$ for $F_4$ with respect to $\mathcal{M}$ is given in \Cref{fig:example4.2} with \begin{align*}
V_H=\{&F_1,F_2,F_3,F_4\},\\
E_H=\big\{&(F_1,F_2),(F_1,F_3),(F_2,F_3),(F_3,F_2),(F_4,F_1),(F_4,F_2),\\
&(F_4,F_3)\big\}. \end{align*} After defining $G:=H$ and $r:=F_4$ as the input to \Cref{alg:enum} and initializing \begin{align*}
T &= (\{F_4\},\emptyset), \text{ and}\\
F &= \left[(F_4,F_3),(F_4,F_2),(F_4,F_1)\right], \end{align*} \Cref{alg:grow} is executed to enumerate all spanning trees of $G$ rooted at $r$.
The computation tree that represents the recursion structure of \Cref{alg:grow} can be seen in \Cref{fig:example5}. Similarly to the last example, one can follow the different paths in the tree to retrace the construction of subtrees~$T$ by addition and deletion of edges $e$.
Note that even though $F$ is initialized with all three edges outgoing from $F_4$, the algorithm already terminates after the first iteration where all spanning trees containing the subtree with $E_T=\{(F_4,F_1)\}$ are com\-put\-ed. This is due to the fact that after deleting $(F_4,F_1)$ from~$G$, there is no edge leading to $F_1$ anymore, and hence it is not possible to construct another spanning tree. The following eight spanning trees are returned: \begin{align*}
S=\big\{&\left(\{F_4,F_1,F_2,F_3\},\{(F_4,F_1),(F_1,F_2),(F_2,F_3)\}\right),\\
&\left(\{F_4,F_1,F_2,F_3\},\{(F_4,F_1),(F_1,F_2),(F_1,F_3)\}\right),\\
&\left(\{F_4,F_1,F_2,F_3\},\{(F_4,F_1),(F_1,F_2),(F_4,F_3)\}\right),\\
&\left(\{F_4,F_1,F_3,F_2\},\{(F_4,F_1),(F_1,F_3),(F_3,F_2)\}\right),\\
&\left(\{F_4,F_1,F_3,F_2\},\{(F_4,F_1),(F_1,F_3),(F_4,F_2)\}\right),\\
&\left(\{F_4,F_1,F_2,F_3\},\{(F_4,F_1),(F_4,F_2),(F_2,F_3)\}\right),\\
&\left(\{F_4,F_1,F_2,F_3\},\{(F_4,F_1),(F_4,F_2),(F_4,F_3)\}\right),\\
&\left(\{F_4,F_1,F_3,F_2\},\{(F_4,F_1),(F_4,F_3),(F_3,F_2)\}\right)\big\}. \end{align*} After converting them into connections with respect to the matching $\mathcal{M}$, one finally obtains \begin{align*}
P=\big\{&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_1,\{x_2,\dot x_2\},F_2),(F_2,\{x_3,\dot x_3\},F_3)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_1,\{x_2,\dot x_2\},F_2),(F_1,\{x_3,\dot x_3\},F_3)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_1,\{x_2,\dot x_2\},F_2),(F_4,\{x_3,\dot x_3\},F_3)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_1,\{x_3,\dot x_3\},F_3),(F_3,\{x_2,\dot x_2\},F_2)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_1,\{x_3,\dot x_3\},F_3),(F_4,\{x_2,\dot x_2\},F_2)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_4,\{x_2,\dot x_2\},F_2),(F_2,\{x_3,\dot x_3\},F_3)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_4,\{x_2,\dot x_2\},F_2),(F_4,\{x_3,\dot x_3\},F_3)\right\},\\
&\left\{(F_4,\{x_1,\dot x_1\},F_1),(F_4,\{x_3,\dot x_3\},F_3),(F_3,\{x_2,\dot x_2\},F_2)\right\}\big\}\!. \end{align*} Indeed, all possible connections for $F_4$ with respect to~$\mathcal{M}$ have been found. \end{example}
\begin{figure}
\caption{Visualization of the construction of connections for $F_4$ with respect to $\mathcal{M}$ in the shifting step of the DDAE \cref{eq:example2}.}
\label{fig:example4.1}
\label{fig:example4.2}
\label{fig:example4}
\end{figure}
\begin{sidewaysfigure}[p!]
\centering
\begin{tikzpicture}[every node/.style = {align=center}]
\tikzset{edge from parent/.append style={->,>=stealth}}
\tikzstyle{level 1}=[level distance=25mm]
\tikzstyle{level 2}=[sibling distance=15mm, level distance=30mm]
\tikzstyle{level 3}=[sibling distance=3mm, level distance=40mm]
\Tree [.$(\;\emptyset\;)$
[.\node(6){$(F_4,F_1)$};
[.\node(1){$(F_4,F_1)$\\$(F_1,F_2)$};
[.\node(8){$(F_4,F_1)$\\$(F_1,F_2)$\\$(F_2,F_3)$}; ]
[.\node(9){$(F_4,F_1)$\\$(F_1,F_2)$\\$(F_1,F_3)$}; ]
[.\node(10){$(F_4,F_1)$\\$(F_1,F_2)$\\$(F_4,F_3)$}; ] ]
[.\node(2){$(F_4,F_1)$\\$(F_1,F_3)$};
[.\node(11){$(F_4,F_1)$\\$(F_1,F_3)$\\$(F_3,F_2)$}; ]
[.\node(12){$(F_4,F_1)$\\$(F_1,F_3)$\\$(F_4,F_2)$}; ] ]
[.\node(3){$(F_4,F_1)$\\$(F_4,F_2)$};
[.\node(13){$(F_4,F_1)$\\$(F_4,F_2)$\\$(F_2,F_3)$}; ]
[.\node(14){$(F_4,F_1)$\\$(F_4,F_2)$\\$(F_4,F_3)$}; ] ]
[.\node(4){$(F_4,F_1)$\\$(F_4,F_3)$};
[.\node(15){$(F_4,F_1)$\\$(F_4,F_3)$\\$(F_3,F_2)$}; ] ] ] ]
\draw[->,>=stealth,red] (1) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_1,F_2)$}}(2) ;
\draw[->,>=stealth,red] (2) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_1,F_3)$}}(3) ;
\draw[->,>=stealth,red] (3) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_4,F_2)$}}(4) ;
\node(5)[below right=-6mm and 2mm of 4]{\footnotesize{END}};
\draw[->,>=stealth,red] (4) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\footnotesize{$(F_4,F_3)$}\\\small{bridge}}(5) ;
\node(7)[right=of 6]{\footnotesize{END}};
\draw[->,>=stealth,red] (6) to [out=45,in=135,looseness=1] node[pos=.5,above=0mm]{\footnotesize{$(F_4,F_1)$}\\\small{bridge}}(7) ;
\draw[->,>=stealth,red] (8) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_2,F_3)$}}(9) ;
\draw[->,>=stealth,red] (9) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_1,F_3)$}}(10) ;
\draw[->,>=stealth,red] (11) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_3,F_2)$}}(12) ;
\draw[->,>=stealth,red] (13) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\small{delete}\\\footnotesize{$(F_2,F_3)$}}(14) ;
\node(16)[below right=-6mm and 2mm of 10]{\footnotesize{END}};
\draw[->,>=stealth,red] (10) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\footnotesize{$(F_4,F_3)$}\\\small{bridge}}(16) ;
\node(17)[below right=-6mm and 2mm of 12]{\footnotesize{END}};
\draw[->,>=stealth,red] (12) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\footnotesize{$(F_4,F_2)$}\\\small{bridge}}(17) ;
\node(18)[below right=-6mm and 2mm of 14]{\footnotesize{END}};
\draw[->,>=stealth,red] (14) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\footnotesize{$(F_4,F_3)$}\\\small{bridge}}(18) ;
\node(19)[below right=-6mm and 2mm of 15]{\footnotesize{END}};
\draw[->,>=stealth,red] (15) to [out=315,in=225,looseness=1] node[pos=.5,below=0mm]{\footnotesize{$(F_3,F_2)$}\\\small{bridge}}(19) ;
\end{tikzpicture}
\caption{Computation tree of \Cref{alg:grow} for the DDAE \cref{eq:example2}.}
\label{fig:example5} \end{sidewaysfigure}
\section{Numerical demonstration} \label{sec:6}
The algorithm presented in this paper has been implemented to demonstrate its effectiveness empirically. In addition, a naive depth-first method is used to compute connections in order to assess the efficiency of the new algorithm in terms of computational complexity. All computations are performed using MATLAB R2021a on a laptop with an Intel Core i5-6267U CPU @ 2.90GHz (4 CPUs), $\sim$2.8GHz.
For simplicity, a shifting graph $G^S=(V_E^S\dot\cup V_V^S,E^S)$ is assumed to be given where only the variable nodes $v_k\in V_V^S$ of highest shift exist for $k=1,...,n-1$ (i.e., all other variable nodes have already been deleted) and each $F_i\in V_E^S$ is matched to $v_i$, for $i=1,...,n-1$. Thus, $F_n$ is exposed with respect to the matching \begin{equation*}
\mathcal{M}=\left\{\{F_1,v_1\},...,\{F_{n-1},v_{n-1}\}\right\}. \end{equation*} Three different scenarios are tested. To illustrate the edge structures $E^S$ of the corresponding shifting graphs, let $A\in\mathbb{R}^{n\times (n-1)}$ be a matrix with entries \begin{equation*}
a_{ij}=
\begin{cases}
1,\quad\text{if } \{F_i,v_j\}\in E^S,\\
0,\quad\text{else}.
\end{cases} \end{equation*}
First, a shifting graph is constructed such that \begin{equation}
\label{eq:scenario1}
A=
\begin{bmatrix}
1 & 1 & & \\
1 & \ddots & \ddots & \\
& \ddots & \ddots & 1\\
& & 1 & 1\\
1 & \cdots & \cdots & 1
\end{bmatrix}
, \end{equation} i.e., each equation node $F_i$, for $i=1,...,n-1$, is connected to at most three variable nodes and $F_n$ is connected to each $v_k$, for $k=1,...,n-1$. The computation times for shifting graphs of this structure for different~${n\in\mathbb{N}}$ can be seen in \Cref{tab:scenario1}. In all tables, ``DFS'' is the abbreviation for ``depth-first search'' and $N$ denotes the number of possible connections. Some computations have been stopped after 10 minutes of computing time, which is indicated by ``$>600$''. In these cases, computations for even higher $n$ have not been executed. This is marked as ``-'' in the tables.
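The banded incidence pattern of \cref{eq:scenario1} is easy to generate programmatically. The following Python sketch (a hypothetical helper; the actual tests were run in MATLAB) builds the $n\times(n-1)$ pattern matrix $A$, with a tridiagonal band in the first $n-1$ rows and an all-ones last row:

```python
def scenario_matrix(n):
    """Incidence pattern of the first test scenario: each equation node
    F_i (rows 0..n-2) is connected to at most three variable nodes
    (tridiagonal band), and F_n (last row) to every variable node."""
    A = [[0] * (n - 1) for _ in range(n)]
    for i in range(n - 1):
        for j in (i - 1, i, i + 1):
            if 0 <= j < n - 1:
                A[i][j] = 1
    A[n - 1] = [1] * (n - 1)  # F_n is connected to each v_k
    return A
```

For example, `scenario_matrix(5)` reproduces the $5\times 4$ pattern displayed in \cref{eq:scenario1}.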
\begin{table}[htb]
\centering
\caption{Computation times in [s] for scenario \cref{eq:scenario1}.}
\label{tab:scenario1}
\begin{tabular}{|c| c c c c c c|} \hline
$n$ & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
DFS & 0.01 & 0.11 & 8.3 & $>$600 & - & - \\
Alg. \ref{alg:con} & 0.02 & 0.04 & 0.06 & 0.16 & 0.43 & 1.32\\
$N$ & 21 & 55 & 144 & 377 & 987 & 2584\\
\hline
\end{tabular} \end{table}
In a second test, a scenario is created such that \begin{equation}
\label{eq:scenario2}
A=
\begin{bmatrix}
1 & \cdots & 1 \\
& \ddots & \vdots\\
& & 1 \\
1 & \cdots & 1
\end{bmatrix}
, \end{equation} i.e., each equation node $F_i$, for $i=1,...,n-1$, is connected to $n-i$ variable nodes and $F_n$ is again connected to each $v_k$, for $k=1,...,n-1$. The computation times for shifting graphs of this structure can be seen in \Cref{tab:scenario2}. \begin{table}[htb]
\centering
\caption{Computation times in [s] for scenario \cref{eq:scenario2}.}
\label{tab:scenario2}
\begin{tabular}{|c|c c c c c c|} \hline
$n$ & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
DFS & 0.01 & 0.22 & 31 & $>$600 & - & - \\
Alg. \ref{alg:con} & 0.04 & 0.09 & 0.39 & 2.2 & 14 & 126\\
$N$ & 24 & 120 & 720 & 5040 & 40320 & 362880\\
\hline
\end{tabular} \end{table}
For the third scenario, a complete graph is assumed, where each equation node is connected to all variable nodes, i.e., \begin{equation}
\label{eq:scenario3}
A=
\begin{bmatrix}
1 & \cdots & 1 \\
\vdots & & \vdots\\
1 & \cdots & 1
\end{bmatrix}
. \end{equation} The computation times are listed in \Cref{tab:scenario3}. \begin{table}[htb]
\centering
\caption{Computation times in [s] for scenario \cref{eq:scenario3}.}
\label{tab:scenario3}
\begin{tabular}{|c | c c c c c|} \hline
$n$ & 5 & 6 & 7 & 8 & 9 \\
\hline
DFS & 0.03 & 0.41 & 318 & $>$600 & - \\
Alg. \ref{alg:con} & 0.05 & 0.48 & 6.3 & 88 & 2462 \\
$N$ & 125 & 1296 & 16807 & 262144 & 4782969\\
\hline
\end{tabular} \end{table}
The results clearly show the advantage of \Cref{alg:con}, which is by far the faster method. For all scenarios, the depth-first search algorithm is only competitive for very small system sizes~$n$, before its computation time explodes. The reason is simple: naively testing all possible combinations of edges generates an enormous number of possibilities. Worse, the majority of connections computed by the depth-first search algorithm are duplicates, i.e., they consist of the same alternating paths in a different order. All of these have to be identified and deleted after the algorithm terminates. \Cref{alg:con}, however, does not have this problem, as only unique spanning trees (and thus, connections) are computed. Therefore, it scales well with the number of possible connections $N$ and has a large advantage in terms of computational complexity. Nevertheless, one can also see that the problem itself is very demanding: $N$ increases rapidly with the system size $n$, and already for relatively small $n$ one can no longer compute all connections in a reasonable time. In the case of dense graphs, there are simply too many.
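A side observation on the $N$ row of \Cref{tab:scenario3}: the values $125, 1296, 16807, \dots$ coincide with Cayley's formula $n^{n-2}$ for the number of spanning trees of the complete graph on $n$ vertices. The following Python sketch (illustrative only; the implementation accompanying this paper is in MATLAB) checks this coincidence via Kirchhoff's matrix-tree theorem, computing the number of spanning trees as a cofactor of the graph Laplacian:

```python
from fractions import Fraction

def count_spanning_trees(adj):
    # Kirchhoff's matrix-tree theorem: the number of spanning trees of an
    # undirected graph equals any cofactor of its Laplacian L = D - A.
    n = len(adj)
    L = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
          for j in range(1, n)] for i in range(1, n)]
    m, det = n - 1, Fraction(1)
    for c in range(m):  # exact Gaussian elimination over the rationals
        piv = next((r for r in range(c, m) if L[r][c] != 0), None)
        if piv is None:
            return 0            # disconnected graph: no spanning tree
        if piv != c:
            L[c], L[piv] = L[piv], L[c]
            det = -det
        det *= L[c][c]
        for r in range(c + 1, m):
            f = L[r][c] / L[c][c]
            for j in range(c, m):
                L[r][j] -= f * L[c][j]
    return int(det)

# Cayley's formula: the complete graph K_n has n^(n-2) spanning trees,
# reproducing the N row 125, 1296, 16807, 262144, 4782969 of scenario 3.
for n in range(5, 10):
    K = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
    assert count_spanning_trees(K) == n ** (n - 2)
```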
\section{Conclusion} \label{sec:7}
In this work, the problem of finding all connections in the shifting step of the Pantelides algorithm for DDAEs from \cite{ahrens20} has been discussed. A new method has been developed, based on reformulating the problem as the problem of enumerating all spanning trees (or arborescences) in a directed graph. This directed graph is constructed from the alternating paths of the shifting graph and is called the connection graph. The equivalence of the solutions to these two problems has been proven in \Cref{thm:con}. This made it possible to exploit the fact that efficient methods to solve the enumeration problem already exist. By introducing and implementing the method from \cite{gabow78}, \Cref{alg:con} has been obtained to compute all connections in the shifting graph. Its effectiveness for the problem at hand has been shown with theoretical examples, and its efficiency has been demonstrated by an implementation and numerical tests.
In summary, the lack of a satisfactory solution to the problem of finding all connections in the shifting step of the Pantelides algorithm for DDAEs has been overcome by this work for small problems. The new method now provides an efficient algorithm for its solution and will hopefully help to solve many DDAEs in the future.
\appendix \section{Code} The MATLAB source code of the implementation used to compute the presented results is available as supplementary material and can be obtained at
\begin{center}
\href{https://github.com/DanielCollin96/pantelides_ddae_connections}{https://github.com/DanielCollin96/pantelides\_ddae\_connections}.
\end{center}
\section*{Acknowledgments} The author thanks his supervisors Ines Ahrens (Technische Universität Berlin), Benjamin Unger (Universität Stuttgart) and Volker Mehr\-mann (Technische Universität Berlin) for their help, valuable tips and the encouragement to publish this paper. His work is supported by the DFG Collaborative Research Center 910 \textit{Control of self-organizing nonlinear systems: Theoretical methods and concepts of application}, project number 163436311.
\end{document}
\begin{document}
\begin{abstract} An interplay between the sum of certain series related to Harmonic numbers and certain finite trigonometric sums is investigated. This allows us to express the sum of these series in terms of the considered trigonometric sums, and permits us to find sharp inequalities bounding these trigonometric sums. In particular, this answers positively an open problem of H. Chen (2010). \end{abstract}
\maketitle
\section{Introduction}\label{sec1}
\goodbreak
Many identities that evaluate trigonometric sums in closed form can be found in the literature. For example, in a solution to a problem in SIAM Review \cite[p.157]{klam}, M. Fisher shows that \[ \sum_{k=1}^{p-1}\sec^2\left(\frac{k\pi}{2p}\right) =\frac{2}{3}\left(p^2-1\right),\quad \sum_{k=1}^{p-1}\sec^4\left(\frac{k\pi}{2p}\right) =\frac{4}{45}\left(2p^4+5p^2-7\right). \]
General results giving closed forms for the power sums of secants $\sum_{k=1}^{p-1}\sec^{2n}(\frac{k\pi}{2p})$ and $\sum_{k=1}^{p}\sec^{2n}(\frac{k\pi}{2p+1})$, for many values of the positive integer $n$, can be found in \cite{chen} and \cite{grab}. Also, in \cite{kou2} the author proves that \[\sum_{k=1}^{p}\sec\left(\frac{2k\pi}{2p+1}\right) =\begin{cases} \phantom{-}p&\text{if $p$ is even,}\\ -p-1& \text{if $p$ is odd.} \end{cases} \]
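These closed forms are easy to confirm numerically. A small Python sketch (illustrative, not part of the cited works) checking Fisher's two identities and the secant sum above for small $p$:

```python
import math

def sec(x):
    return 1.0 / math.cos(x)

# Fisher's identities for the even power sums of secants
for p in range(2, 40):
    s2 = sum(sec(k * math.pi / (2 * p)) ** 2 for k in range(1, p))
    s4 = sum(sec(k * math.pi / (2 * p)) ** 4 for k in range(1, p))
    assert math.isclose(s2, 2 * (p * p - 1) / 3, rel_tol=1e-9)
    assert math.isclose(s4, 4 * (2 * p ** 4 + 5 * p * p - 7) / 45, rel_tol=1e-9)

# the secant sum of [kou2]: p when p is even, -p-1 when p is odd
for p in range(1, 40):
    s = sum(sec(2 * k * math.pi / (2 * p + 1)) for k in range(1, p + 1))
    assert math.isclose(s, p if p % 2 == 0 else -p - 1, rel_tol=1e-6)
```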
However, while there are many cases where closed forms for finite trigonometric sums can be obtained it seems that there are no such formul\ae\, for the sums we are interested in.
In this paper we study the trigonometric sums $I_p$ and $J_p$ defined for positive integers $p$ by the formul\ae: \begin{align}
I_p&=\sum_{k=1}^{p-1}\frac{1}{\sin(k\pi /p)}=\sum_{k=1}^{p-1}\csc\left(\frac{k\pi}{p}\right)\label{E:I}\\
J_p&=\sum_{k=1}^{p-1}k\cot\left(\frac{k\pi}{p}\right)\label{E:J} \end{align} with empty sums interpreted as $0$.
\goodbreak To the author's knowledge there is no known closed form for $I_p$, and the same can be said about the sum $J_p$. Therefore, we will look for asymptotic expansions for these sums and will give some tight inequalities that bound $I_p$ and $J_p$. This investigation complements the work of H. Chen in \cite[Chapter 7]{chen2}, where it was asked, as an open problem, whether the inequality \[ I_p\le \frac{2p}{\pi}(\ln p+\gamma -\ln(\pi/2)) \] holds true for $p\ge3$ (here $\gamma$ is the so-called Euler--Mascheroni constant).
\goodbreak
In fact, it will be proved that
for every positive integer $p$ and every nonnegative integer $n$, we have
\begin{align*}
I_p&<\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+
\sum_{k=1}^{2n}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1},\\
\noalign{\noindent\text{and}}
I_p&>\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+
\sum_{k=1}^{2n+1}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1}.
\end{align*} Here the $b_{2k}$'s are Bernoulli numbers (see Corollary \ref{cor94}). The corresponding inequalities for $J_p$ are also proved (see Corollary \ref{cor97}).
Harmonic numbers play an important role in this investigation. Recall that the $n$th harmonic number $H_n$ is defined by $H_n=\sum_{k=1}^n1/k$ (with the convention $H_0=0$). In this work, a link between our trigonometric sums $I_p$ and $J_p$ and the sum of several series related to harmonic numbers is uncovered. Indeed, the well-known fact that $H_n=\ln n+\gamma+\frac{1}{2n}+\mathcal{O}\left(\frac{1}{n^2}\right)$ proves the convergence of the numerical series, \begin{align*} C_p&=\sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right), \\ D_p&=\sum_{n=1}^\infty(-1)^{n-1}\left(H_{pn}-\ln(pn)-\gamma\right),\\ E_p&=
\sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn}), \end{align*} for every positive integer $p$.
An interplay between the considered trigonometric sums and the sum of these series will allow us to prove sharp inequalities for $I_p$ and $J_p$, and to find the expression of the sums $C_p$, $D_p$ and $E_p$ in terms of $I_p$ and $J_p$.
The main tool will be the following formulation \cite[Corollary 8.2]{kou3} of the Euler-Maclaurin summation formula:
\begin{theorem}\label{cor61} Consider a positive integer $m$, and a function $f$ that has a continuous $(2m-1)^{\text{st}}$ derivative on $[0,1]$. If
$f^{(2m-1)}$ is {\normalfont \text{decreasing}}, then \[ \int_0^1f(t)\,dt=\frac{f(1)+f(0)}{2} -\sum_{k=1}^{m-1}\frac{b_{2k}}{(2k)!}\,\delta f^{(2k-1)}+(-1)^{m+1}R_m \] with \[ R_m=\int_0^{1/2}\frac{\abs{B_{2m-1}(t)}}{(2m-1)!}\,\left(f^{(2m-1)}(t)-f^{(2m-1)}(1-t)\right)\,dt \] and \[ 0\leq R_m\leq\frac{6}{(2\pi)^{2m}}\left(f^{(2m-1)}(0)-f^{(2m-1)}(1)\right). \] where the $b_{2k}$'s are Bernoulli numbers, $B_{2m-1}$ is the Bernoulli polynomial of degree $2m-1$, and the notation $\delta g$ for a function $g:[0,1]\to\mathbb{C}$ means $g(1)-g(0)$. \end{theorem} For more information on the Euler-Maclaurin formula, Bernoulli polynomials and Bernoulli numbers the reader may refer to \cite{abr, grad, hen,kou3,olv} and the references therein. This paper is organized as follows. In section 2 we find the asymptotic expansions of $C_p$ and $D_p$ for large $p$. In section 3, the trigonometric sums $I_p$ and $J_p$ are studied.
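To illustrate how Theorem~\ref{cor61} will be used, here is a quick numerical sanity check in Python with $m=2$ and the (arbitrarily chosen) function $f(t)=\ln(1+t)$, whose third derivative $2/(1+t)^3$ is decreasing on $[0,1]$; recall $b_2=1/6$:

```python
import math

# f(t) = ln(1+t): f'(t) = 1/(1+t), f'''(t) = 2/(1+t)^3 is decreasing on [0,1],
# so the theorem applies with m = 2 and b_2 = 1/6.
integral = 2 * math.log(2) - 1                  # exact: integral of ln(1+t) over [0,1]
trapezoid = (math.log(2) + math.log(1)) / 2     # (f(1) + f(0)) / 2
delta_f1 = 1 / 2 - 1                            # delta f' = f'(1) - f'(0)
rhs_main = trapezoid - (1 / 6) / math.factorial(2) * delta_f1
# the theorem gives: integral = rhs_main + (-1)^(m+1) R_2 = rhs_main - R_2
R2 = rhs_main - integral
upper = 6 / (2 * math.pi) ** 4 * (2 / (1 + 0) ** 3 - 2 / (1 + 1) ** 3)
assert 0 <= R2 <= upper                          # R_2 is small and nonnegative
```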
\goodbreak
\section{The sum of certain series related to harmonic numbers}\label{sec8}
\nobreak In the next lemma, the asymptotic expansion of $(H_n)_{n\in\mathbb{N}}$ is presented. It can be found implicitly in \cite[Chapter 9]{knuth2}; we present a proof for the convenience of the reader.
\goodbreak \begin{lemma}\label{pr81}
For every positive integer $n$ and nonnegative integer $m$, we have
\[
H_n=\ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{m-1}\frac{b_{2k}}{2k}\cdot\frac{1}{n^{2k}}+(-1)^m R_{n,m},
\]
with
\[
R_{n,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}\,
\sum_{j=n}^\infty\left(\frac{1}{(j+t)^{2m}}-\frac{1}{(j+1-t)^{2m}}
\right)\,dt
\]
Moreover, $ 0<R_{n,m}<\dfrac{\abs{b_{2m}}}{2m\cdot n^{2m}}$. \end{lemma} \begin{proof}
Note that for $j\geq1$ we have
\[
\frac{1}{j}-\ln\left(1+\frac{1}{j}\right)=\int_0^1\left(\frac1j-\frac{1}{j+t}\right)\,dt=\int_0^1\frac{t}{j(j+t)}\,dt
\]
Adding these equalities as $j$ varies from $1$ to $n-1$ we conclude that
\[
H_n-\ln n-\frac{1}{n}=\int_0^1\left(\sum_{j=1}^{n-1}\frac{t}{j(j+t)}\right)\,dt.
\]
Thus, letting $n$ tend to $\infty$, and using the Monotone Convergence Theorem, we conclude
\[
\gamma=\int_0^1\left(\sum_{j=1}^{\infty}\frac{t}{j(j+t)}\right)\,dt.
\]
It follows that
\begin{equation*}
\gamma+\ln n-H_n+\frac{1}{n}=\int_0^1\left(\sum_{j=n}^\infty\frac{t}{j(j+t)}\right)\,dt.
\end{equation*}
So, let us consider the function $f_n:[0,1]{\,\longrightarrow\,}\mathbb{R}$ defined by
\[
f_n(t)=\sum_{j=n}^\infty\frac{t}{j(j+t)}
\]
Note that $f_n(0)=0$, $f_n(1)=1/n$, and that $f_n$ is infinitely differentiable with
\[
\frac{f_n^{(k)}(t)}{k!}=(-1)^{k+1}\sum_{j=n}^\infty\frac{1}{(j+t)^{k+1}},\quad\text{for $k\geq1$.}
\]
In particular,
\[
\frac{f_n^{(2k-1)}(t)}{(2k-1)!}=\sum_{j=n}^\infty\frac{1}{(j+t)^{2k}},\quad\text{for $k\geq1$.}
\]
So, $f_n^{(2m-1)}$ is decreasing on the interval $[0,1]$, and
\[
\frac{\delta f_n^{(2k-1)}}{(2k-1)!} =\sum_{j=n}^\infty\frac{1}{(j+1)^{2k}}
-\sum_{j=n}^\infty\frac{1}{j^{2k}}=-\frac{1}{n^{2k}}
\]
Applying Theorem \ref{cor61} to $f_n$, and using the above data, we get
\[
\gamma+\ln n-H_n+\frac{1}{2n}=
\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\,n^{2k}}+(-1)^{m+1}R_{n,m}
\]
with
\[
R_{n,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}\,
\sum_{j=n}^\infty\left(\frac{1}{(j+t)^{2m}}-\frac{1}{(j+1-t)^{2m}}
\right)\,dt
\]
and
\[
0< R_{n,m}<\frac{6\cdot(2m-1)!}{(2\pi)^{2m}n^{2m}}.
\]
The important estimate is the lower bound, \textit{i.e.} $R_{n,m}>0$. In fact, considering separately the cases $m$ odd and $m$ even, we obtain, for every nonnegative integer $m'$:
\begin{align*}
H_n&<\ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{2m'}\frac{b_{2k}}{2k}\cdot\frac{1}{n^{2k}},\\
\noalign{\noindent\text{and}}
H_n&>\ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{2m'+1}\frac{b_{2k}}{2k}\cdot\frac{1}{n^{2k}}.\\
\end{align*}
This yields the following more precise estimate for the error term:
\begin{equation*}
0<(-1)^{m}\left(H_n-\ln n-\gamma-\frac{1}{2n}+
\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\cdot n^{2k}} \right)<\frac{\abs{b_{2m}}}{2m\cdot n^{2m}}
\end{equation*}
which is valid for every positive integer $m$. \end{proof}
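As a hedged numerical check of the lemma in Python (the Euler--Mascheroni constant and the Bernoulli numbers $b_2,\dots,b_8$ are hard-coded for this illustration):

```python
import math
from fractions import Fraction

EULER_GAMMA = 0.5772156649015329
B = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42), 8: Fraction(-1, 30)}

def signed_remainder(n, m):
    """(-1)^m (H_n - ln n - gamma - 1/(2n) + sum_{k=1}^{m-1} b_{2k}/(2k n^{2k}))."""
    H = sum(1.0 / k for k in range(1, n + 1))
    s = H - math.log(n) - EULER_GAMMA - 1.0 / (2 * n)
    for k in range(1, m):
        s += float(B[2 * k]) / (2 * k * n ** (2 * k))
    return (-1) ** m * s

# the remainder is strictly positive and below |b_{2m}| / (2m n^{2m})
for n in (2, 5, 10):
    for m in (1, 2, 3, 4):
        r = signed_remainder(n, m)
        assert 0 < r < abs(float(B[2 * m])) / (2 * m * n ** (2 * m))
```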
\goodbreak
Now, consider the two sequences $(c_n)_{n\geq1}$ and $(d_n)_{n\geq1}$ defined by \[ c_n=H_n-\ln n-\gamma-\frac{1}{2n}\qquad\text{and}\qquad d_n=H_n-\ln n-\gamma. \] For a positive integer $p$, we know according to Lemma~\ref{pr81} that $c_{pn}=\mathcal{O}\left(\frac{1}{n^2}\right)$; it follows that the series $\sum_{n=1}^\infty c_{pn}$ is convergent. Similarly, since $d_{pn}=c_{pn}+\frac{1}{2pn}$ and the series $\sum_{n=1}^\infty(-1)^{n-1}/n$ is convergent, we conclude that $\sum_{n=1}^\infty(-1)^{n-1} d_{pn}$ is also convergent. In what follows we aim to find asymptotic expansions (for large $p$) of the following sums: \begin{align} C_p&=\sum_{n=1}^\infty c_{pn}=\sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right)\label{E:Cp}\\ D_p&=\sum_{n=1}^\infty (-1)^{n-1}d_{pn} =\sum_{n=1}^\infty(-1)^{n-1}\left(H_{pn}-\ln(pn)-\gamma\right)\label{E:Dp}\\
E_p &=\sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn}).\label{E:Ep} \end{align}
\begin{proposition}\label{pr82}
If $p$ and $m$ are positive integers and $C_p$ is defined by \eqref{E:Cp}, then
\[
C_p=-\sum_{k=1}^{m-1}\frac{b_{2k}\zeta(2k)}{2k\cdot p^{2k}}
+(-1)^m\frac{\zeta(2m)}{2m\cdot p^{2m}}\varepsilon_{p,m},\quad\text{with $0<\varepsilon_{p,m}<\abs{b_{2m}}$},
\]
where $\zeta$ is the well-known Riemann zeta function. \end{proposition} \begin{proof} Indeed, we conclude from Lemma \ref{pr81} that
\[
H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}=-\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\cdot p^{2k}}\cdot\frac{1}{n^{2k}}
+\frac{(-1)^m}{2m\cdot p^{2m}}\cdot\frac{r_{pn,m}}{n^{2m}}.
\]
with $0<r_{pn,m}\leq\abs{b_{2m}}$. It follows that
\[C_p=-\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\,p^{2k}}\cdot\left(\sum_{n=1}^\infty
\frac{1}{n^{2k}}\right)+\frac{(-1)^m}{2m\cdot p^{2m}}\cdot\tilde{r}_{p,m}.
\]
where $\tilde{r}_{p,m}=\sum_{n=1}^\infty\frac{r_{pn,m}}{n^{2m}}$.
\goodbreak
Hence,
\[
0<\tilde{r}_{p,m}=\sum_{n=1}^\infty
\frac{r_{pn,m}}{n^{2m}}< \abs{b_{2m}} \,
\sum_{n=1}^\infty
\frac{1}{n^{2m}}=\abs{b_{2m}}\zeta(2m)
\]
and the desired conclusion follows with $\varepsilon_{p,m}=\tilde{r}_{p,m}/\zeta(2m)$. \end{proof}
\goodbreak For example, taking $m=3$, we obtain \[ \sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right) =-\frac{\pi^2}{72p^2}+\frac{\pi^4}{10800p^4}+\mathcal{O}\left(\frac{1}{p^6}\right). \]
\goodbreak In the next proposition we have the analogous result corresponding to $D_p$.
\goodbreak
\begin{proposition}\label{pr83}
If $p$ and $m$ are positive integers and $D_p$ is defined by \eqref{E:Dp}, then
\[
D_p=\frac{\ln 2}{2p}
-\sum_{k=1}^{m-1} \frac{b_{2k}\eta(2k)}{2k \cdot p^{2k}}
+(-1)^m\frac{\eta(2m)}{2m\cdot p^{2m}}\varepsilon'_{p,m},\quad\text{with $0<\varepsilon'_{p,m}<\abs{b_{2m}}$,}
\]
where $\eta$ is the Dirichlet eta function \cite{wis}. \end{proposition} \begin{proof}
Indeed, let us define $a_{n,m}$ by the formula
\[
a_{n,m}=H_n-\ln n-\gamma-\frac{1}{2n}+\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\cdot n^{2k}}
\]
with empty sum equal to 0. We have shown in the proof of Lemma \ref{pr81} that
\[
(-1)^ma_{n,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}g_{n,m}(t)\,dt
\]
where $g_{n,m}$ is the positive decreasing function on $[0,1/2]$ defined by
\[
g_{n,m}(t)=\sum_{j=n}^\infty\left(\frac{1}{(j+t)^{2m}}-\frac{1}{(j+1-t)^{2m}}\right).
\]
Now, for every $t\in[0,1/2]$ the sequence $(g_{np,m}(t))_{n\geq1}$ is positive and decreasing to $0$. So, using the alternating series criterion
\cite[Theorem~7.8, and Corollary~7.9]{aman} we see that, for every $N\geq1$ and $t\in[0,1/2]$,
\[
\abs{\sum_{n=N}^\infty(-1)^{n-1}g_{np,m}(t)}\leq g_{Np,m}(t)\leq g_{Np,m}(0)=\frac{1}{(Np)^{2m}}.
\]
This proves the uniform convergence on $[0,1/2]$ of the series
\[G_{p,m}(t)=\sum_{n=1}^\infty(-1)^{n-1}g_{np,m}(t).
\]
Consequently
\[
(-1)^m\sum_{n=1}^\infty(-1)^{n-1}a_{pn,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}G_{p,m}(t)\,dt.
\]
Now using the properties of alternating series, we see that for $t\in(0,1/2)$ we have
\[
0<G_{p,m}(t)<g_{p,m}(t)<g_{p,m}(0)=\sum_{j=p}^\infty\left(\frac{1}{j^{2m}}-\frac{1}{(j+1)^{2m}}\right)=\frac{1}{p^{2m}}
\]
Thus,
\[
\sum_{n=1}^\infty(-1)^{n-1}a_{pn,m}=\frac{(-1)^m}{p^{2m}}\rho_{p,m}
\]
with $0<\rho_{p,m}<\int_0^{1/2}\abs{B_{2m-1}(t)}\,dt$.
\goodbreak
On the other hand we have
\begin{align*}
\sum_{n=1}^\infty(-1)^{n-1}a_{pn,m}&=D_p
-\frac{1}{2p} \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}+\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\,p^{2k}}\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^{2k}}\\
&=D_p-\frac{\ln 2}{2p} +\sum_{k=1}^{m-1}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}}.
\end{align*}
Thus
\[
D_p=\frac{\ln 2}{2p}-\sum_{k=1}^{m-1}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}}
+\frac{(-1)^m}{p^{2m}}\rho_{p,m}
\]
Now, the important estimate for $\rho_{p,m}$ is the lower bound, \textit{i.e.} $\rho_{p,m}>0$. In fact, considering separately the cases $m$ odd and $m$ even, we obtain, for every nonnegative integer $m'$:
\begin{align*}
D_p&<\frac{\ln 2}{2p}-\sum_{k=1}^{2m'}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}},\\
\noalign{\text{and}}
D_p&>\frac{\ln 2}{2p}-\sum_{k=1}^{2m'+1}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}}.\\
\end{align*}
This yields the following more precise estimate for the error term:
\[
0<(-1)^{m}\left(D_p-\frac{\ln 2}{2p}+\sum_{k=1}^{m-1}\frac{b_{2k}\eta(2k)}{2k\,p^{2k}}
\right)<\frac{\abs{b_{2m}}\eta(2m)}{2m\cdot p^{2m}},
\]
and the desired conclusion follows. \end{proof}
\goodbreak
The case of $E_p$, which is the sum of another alternating series \eqref{E:Ep}, is discussed in the next lemma, where it is shown that $E_p$ can be easily expressed in terms of $D_p$.
\goodbreak \begin{lemma}\label{lm84}
For a positive integer $p$, we have
\begin{equation*}
E_p =\ln p+\gamma-\ln\left(\frac{\pi}{2}\right)+2D_p,
\end{equation*}
where $D_p$ is the sum defined by \eqref{E:Dp}. \end{lemma} \begin{proof}
Indeed
\begin{align*}
2D_p&=d_p+\sum_{n=2}^\infty(-1)^{n-1}d_{pn}+\sum_{n=1}^\infty(-1)^{n-1}d_{pn}\\
&=d_p+\sum_{n=1}^\infty(-1)^{n}d_{p(n+1)}+\sum_{n=1}^\infty(-1)^{n-1}d_{pn}\\
&=d_p+\sum_{n=1}^\infty(-1)^{n-1}(d_{pn}-d_{p(n+1)})\\
&=d_p+\sum_{n=1}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn})+\sum_{n=1}^\infty(-1)^{n-1}\ln\left(\frac{n+1}{n}\right)\\
&=-\ln p-\gamma+\sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn})+\sum_{n=1}^\infty(-1)^{n-1}\ln\left(\frac{n+1}{n}\right)\\
\end{align*}
Using Wallis's formula for $\pi$ \cite[Formula~0.262]{grad}, we have
\begin{align*}
\sum_{n=1}^\infty(-1)^{n-1}\ln\left(\frac{n+1}{n}\right)&=
\sum_{n=1}^\infty\ln\left(\frac{2n}{2n-1}\cdot\frac{2n}{2n+1}\right)\\
&=-\ln\prod_{n=1}^\infty\left(1-\frac{1}{4n^2}\right)=\ln\left(\frac{\pi}{2}\right)\\
\end{align*}
and the desired formula follows. \end{proof}
\section{Inequalities for trigonometric sums}\label{sec2}
As we mentioned in the introduction, we are interested in the sum of cosecants $I_p$ defined by \eqref{E:I} and the sum of cotangents $J_p$ defined by \eqref{E:J}. Many other trigonometric sums can be expressed in terms of $I_p$ and $J_p$. The next lemma lists some of these identities.
\goodbreak \begin{lemma}\label{lm91} For a positive integer $p$ let
\begin{alignat*}{2}
K_p&=\sum_{k=1}^{p-1}\tan\left(\frac{k\pi}{2p}\right),
&\qquad \widetilde{K}_p&=\sum_{k=1}^{p-1}\cot\left(\frac{k\pi}{2p}\right),\\
L_p&=\sum_{k=1}^{p-1}\frac{k}{\sin(k\pi/p)},
&\qquad M_p&=\sum_{k=1}^{p}(2k-1)\cot\left(\frac{(2k-1)\pi}{2p}\right)
\end{alignat*}
Then,
\begin{enumerate}
\item[$(i)$]\label{lm911} $K_p=\widetilde{K}_p=I_p$;
\item[$(ii)$]\label{lm912} $L_p=(p/2)\,I_p$;
\item[$(iii)$]\label{lm913} $M_p=J_{2p}-2J_p=-p\,I_p$.
\end{enumerate}
\end{lemma} \begin{proof}
First, note that the change of summation variable $k\leftarrow p-k$ proves that $K_p=\widetilde{K}_p$. So,
using the trigonometric identity $\tan\theta+\cot\theta=2\csc(2\theta)$ we obtain $(i)$ as follows:
\begin{equation*}
2K_p=K_p+\widetilde{K}_p=\sum_{k=1}^{p-1}\left(\tan\left(\frac{k\pi}{2p}\right)+\cot\left(\frac{k\pi}{2p}\right)\right)
=2\sum_{k=1}^{p-1}\csc\left(\frac{k\pi}{p}\right)=2I_p
\end{equation*}
Similarly, $(ii)$ follows from the change of summation variable $k\leftarrow p-k$ in $L_p$:
\[
L_p=\sum_{k=1}^{p-1}\frac{p-k}{\sin(k\pi/p)}=pI_p-L_p
\]
Also,
\begin{align*}
M_p&=\sum_{\substack{1\leq k<2p\\ k \text{ odd}
}} k\cot\left(\frac{k\pi}{2p}\right)=\sum_{k=1}^{2p-1} k\cot\left(\frac{k\pi}{2p}\right)- \sum_{\substack{1\leq k<2p\\ k \text{ even}
}} k\cot\left(\frac{k\pi}{2p}\right)\\
&=\sum_{k=1}^{2p-1} k\cot\left(\frac{k\pi}{2p}\right)-2 \sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{p}\right)=J_{2p}-2J_p.
\end{align*}
But
\begin{align*}
J_{2p}&=\sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{2p}\right)+\sum_{k=p+1}^{2p-1} k\cot\left(\frac{k\pi}{2p}\right)\\
&=\sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{2p}\right)-\sum_{k=1}^{p-1} (2p-k)\cot\left(\frac{k\pi}{2p}\right)\\
&=2\sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{2p}\right)-2p\widetilde{K}_p
\end{align*}
Thus, using $(i)$ and the trigonometric identity $\cot(\theta/2)-\cot\theta=\csc\theta$ we obtain
\begin{align*}
M_p&=J_{2p}-2J_p=2\sum_{k=1}^{p-1} k\left(\cot\left(\frac{k\pi}{2p}\right)-\cot\left(\frac{k\pi}{p}\right)\right)
-2pI_p\\
&=2\sum_{k=1}^{p-1}k\csc\left(\frac{k\pi}{p}\right)-2pI_p=2L_p-2pI_p=-pI_p
\end{align*}
This concludes the proof of $(iii)$. \end{proof}
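The three identities are easily validated numerically; a Python sketch (illustrative only):

```python
import math

def csc(x): return 1.0 / math.sin(x)
def cot(x): return 1.0 / math.tan(x)

def I(p): return sum(csc(k * math.pi / p) for k in range(1, p))
def J(p): return sum(k * cot(k * math.pi / p) for k in range(1, p))

for p in range(2, 40):
    K  = sum(math.tan(k * math.pi / (2 * p)) for k in range(1, p))
    Kt = sum(cot(k * math.pi / (2 * p)) for k in range(1, p))
    L  = sum(k * csc(k * math.pi / p) for k in range(1, p))
    M  = sum((2 * k - 1) * cot((2 * k - 1) * math.pi / (2 * p))
             for k in range(1, p + 1))
    assert math.isclose(K, I(p)) and math.isclose(Kt, I(p))    # (i)
    assert math.isclose(L, p * I(p) / 2)                       # (ii)
    assert math.isclose(M, J(2 * p) - 2 * J(p), abs_tol=1e-8)  # (iii)
    assert math.isclose(M, -p * I(p))                          # (iii)
```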
\begin{proposition}\label{pr92}
For $p\geq2$, let $I_p$ be the sum of cosecants defined by \eqref{E:I}. Then
\begin{align*}
I_p&=-\frac{2\ln 2}{\pi}+\frac{2p}{\pi}E_p,\\
&=-\frac{2\ln 2}{\pi}+\frac{2p}{\pi}\left(\ln p+\gamma-\ln(\pi/2)\right)+\frac{4p}{\pi}D_p,
\end{align*}
where $D_p$ and $E_p$ are defined by formul\ae~ \eqref{E:Dp} and \eqref{E:Ep} respectively. \end{proposition} \begin{proof}
Indeed, our starting point will be the ``simple fractions'' expansion \cite[Chapter 5, \S 2]{ahl} of the cosecant function:
\[
\frac{\pi}{\sin(\pi\alpha)}=\sum_{n\in\mathbb{Z}}\frac{(-1)^n}{\alpha-n}=\frac{1}{\alpha}+\sum_{n=1}^\infty(-1)^n\left(\frac{1}{\alpha-n}+
\frac{1}{\alpha+n}\right)
\]
which is valid for $\alpha\in\mathbb{C}\setminus\mathbb{Z}$. Using this formula with $\alpha=k/p$ for $k=1,2,\ldots,p-1$ and adding, we conclude that
\begin{align*}
\frac{\pi}{p} I_p&=\sum_{k=1}^{p-1}\frac{1}{k}+\sum_{n=1}^\infty(-1)^n\sum_{k=1}^{p-1}\left(\frac{1}{k-np}+
\frac{1}{k+n p}\right)\\
&=\sum_{k=1}^{p-1}\frac{1}{k}+ \sum_{n=1}^\infty(-1)^n\left(-\sum_{j=p(n-1)+1}^{pn-1}\frac{1}{j}+
\sum_{j=pn+1}^{p(n+1)-1}\frac{1}{j}\right),
\end{align*}
and this result can be expressed in terms of the Harmonic numbers as follows
\begin{align*}
\frac{\pi}{p} I_p&=H_{p-1}+ \sum_{n=1}^\infty(-1)^n\left(- H_{pn-1}+H_{p(n-1)}+H_{p(n+1)-1}-H_{pn}
\right)\\
&=H_{p-1}+ \sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right)
+\frac{1}{p}\sum_{n=1}^\infty(-1)^n\left(\frac{1}{n}-\frac{1}{n+1}
\right)\\
&=H_{p-1}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right)
+\frac{1}{p}\left(\sum_{n=1}^\infty\frac{(-1)^n}{n}+\sum_{n=2}^\infty\frac{(-1)^n}{n}\right)\\
&=H_{p}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right)
-\frac{2}{p}\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n}\\
&=H_{p}-\frac{2\ln 2}{p}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right).
\end{align*}
Thus
\begin{align*}
\frac{\pi}{p} I_p+\frac{2\ln 2}{p}&=
H_{p}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-H_{pn} \right)
+\sum_{n=1}^\infty(-1)^n\left(H_{p(n-1)}-H_{pn}\right)\\
&=\sum_{n=0}^\infty(-1)^n\left(H_{p(n+1)}-H_{pn}\right)
+\sum_{n=1}^\infty(-1)^n\left(H_{p(n-1)}-H_{pn}\right)\\
&=E_p+E_p=2E_p,
\end{align*}
and the desired formula follows according to Lemma~\ref{lm84}. \end{proof}
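This identity can be verified numerically; in the Python sketch below, the alternating series $E_p$ is accelerated by averaging two consecutive partial sums (a standard device, not taken from this paper):

```python
import math

def I(p):
    return sum(1.0 / math.sin(k * math.pi / p) for k in range(1, p))

def E(p, N=4000):
    # E_p = sum_{n>=0} (-1)^n (H_{p(n+1)} - H_{pn}); the terms decrease
    # to 0, so the average of two consecutive partial sums converges fast
    H, k, S, prev_S, prev_H = 0.0, 0, 0.0, 0.0, 0.0
    for n in range(N + 1):
        while k < p * (n + 1):
            k += 1
            H += 1.0 / k
        prev_S, S = S, S + (-1) ** n * (H - prev_H)
        prev_H = H
    return (S + prev_S) / 2

# I_p = -2 ln(2)/pi + (2p/pi) E_p
for p in (1, 2, 3, 5, 10):
    assert abs(I(p) - (-2 * math.log(2) / math.pi + 2 * p / math.pi * E(p))) < 1e-6
```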
Combining Proposition~\ref{pr92} and Proposition~\ref{pr83}, we obtain:
\begin{proposition}\label{pr93}
For $p\geq2$ and $m\geq 1$, we have
\[
\pi I_p= 2p\ln p+2(\gamma-\ln( \pi/2))p -\sum_{k=1}^{m-1} \frac{2b_{2k}\eta(2k)}{ k \cdot p^{2k-1}}
+(-1)^m\frac{2\eta(2m)}{ m\cdot p^{2m-1}}\varepsilon'_{p,m}
\]
with $\displaystyle 0<\varepsilon'_{p,m}<\abs{b_{2m}}$. \end{proposition}
Using the well-known result (\cite{wis}, \cite[Formula 9.542]{grad}):
\[
\eta(2k)=(1-2^{1-2k})\zeta(2k)=\frac{(2^{2k-1}-1)\pi^{2k}(-1)^{k-1}b_{2k}}{(2k)!},
\]
and considering separately the cases $m$ even and $m$ odd we obtain the following corollary.
\begin{corollary}\label{cor94}
For every positive integer $p$ and every nonnegative integer $n$, the sum of cosecants $I_p$ defined by \eqref{E:I} satisfies the following
inequalities:
\begin{align*}
I_p&<\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+
\sum_{k=1}^{2n}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1},\\
\noalign{\text{and}}
I_p&>\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+
\sum_{k=1}^{2n+1}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1}.
\end{align*} \end{corollary}
\goodbreak \qquad As an example, for $n=0$ we obtain the following inequality, valid for every $p\geq1$: \[ \frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))-\frac{\pi}{36p} <I_p<\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2)). \] This answers positively the open problem proposed in \cite[Section 7.4]{chen2}.
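This double inequality is easy to probe numerically; a Python sketch over $1\le p\le 300$ (the Euler--Mascheroni constant is hard-coded):

```python
import math

EULER_GAMMA = 0.5772156649015329

for p in range(1, 301):
    I_p = sum(1.0 / math.sin(k * math.pi / p) for k in range(1, p))
    upper = 2 * p / math.pi * (math.log(p) + EULER_GAMMA - math.log(math.pi / 2))
    assert upper - math.pi / (36 * p) < I_p < upper
```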
\goodbreak \begin{remark}
The asymptotic expansion of $I_p$ was proposed as an exercise in \cite[Exercise~13, p.~460]{hen}, where it is attributed to P. Waldvogel; however, the result there is less precise than Corollary~\ref{cor94}, since here we have inequalities valid in the whole range of $p$. \end{remark}
\goodbreak
Now we turn our attention to the other trigonometric sum $J_p$.
The first step toward an analogue of Proposition~\ref{pr92} for the trigonometric sum $J_p$ is the next lemma, where an asymptotic expansion for $J_p$ is proved. It contains a harmonic number as an undesired term, which will be removed later.
\begin{lemma}\label{pr72} For every positive integer $p$, there is a real number $\theta_{p}\in(0,1)$ such that \[ \pi J_p=-p^2H_p+\ln(2\pi) p^2-\frac{p}{2}-\theta_p. \] \end{lemma} \begin{proof} Indeed, let $\varphi$ be the function defined by \[\varphi(x)=\pi x\cot(\pi x)+\frac{1}{1-x}.\] According to the partial fraction expansion formula for the cotangent function \cite[Chapter 5, \S 2]{ahl} we know that \[ \varphi(x)=2+\frac{x}{x+1}+\sum_{n=2}^\infty\left(\frac{x}{x-n}+\frac{x}{x+n}\right). \] Thus, $\varphi$ is defined and analytic on the interval $(-1,2)$. Let us show that $\varphi$ is concave on this interval. Indeed, it is straightforward to check that, for $-1<x<2$ we have \begin{align*} \varphi^{\prime\prime}(x)&=-\frac{2}{(1+x)^3}-2 \sum_{n=2}^\infty\left(\frac{n}{(n-x)^3}+\frac{n}{(n+x)^3}\right)<0. \end{align*} So, we can use Theorem \ref{cor61} with $m=1$ applied to the function $x\mapsto\varphi\left(\frac{x+k}{p}\right)$ for $1\le k<p$ to get \[ 0< p\int_{k/p}^{(k+1)/p}\varphi(x)dx-\frac{1}{2}\left(\varphi\left(\frac{k+1}{p}\right)+\varphi\left(\frac{k}{p}\right)\right)\le\frac{3}{2p\pi^2} \left(\varphi'\left(\frac{k}{p}\right)-\varphi'\left(\frac{k+1}{p}\right)\right) \] Adding these inequalities and noting that $\varphi(0)=2$, $\varphi'(0)=1$, $\varphi(1)=1$ and $\varphi'(1)=-\pi^2/3$, we get \[ 0< p\int_0^1\varphi(x)dx-\frac{\pi}{p}J_p-pH_p-\frac{1}{2}\le\frac{3+\pi^2}{2\pi^2p}<\frac{1}{p} \] Also, for $x\in[0,1)$, we have \[ \int_0^x\varphi(t)\,dt=-\ln(1-x)+x\ln \sin(\pi x)-\int_0^x \ln\sin(\pi t)\,dt \] and, letting $x$ tend to $1$ we obtain \[ \int_0^1\varphi(t)\,dt=\ln(\pi)-\int_0^1\ln\sin(\pi t)\,dt=\ln(2\pi) \] where we used the fact that $\int_0^1\ln\sin(\pi t)\,dt=-\ln2$ (see \cite[4.224, Formula~3]{grad}). So, we have proved that \[ 0< p\ln(2\pi)-\frac{\pi}{p}J_p-pH_p-\frac{1}{2}<\frac{1}{p} \] which is equivalent to the desired conclusion. \end{proof}
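A quick Python check of the lemma, confirming $\theta_p\in(0,1)$ for small $p$:

```python
import math

for p in range(1, 201):
    H_p = sum(1.0 / k for k in range(1, p + 1))
    J_p = sum(k / math.tan(k * math.pi / p) for k in range(1, p))
    theta = -p * p * H_p + math.log(2 * math.pi) * p * p - p / 2 - math.pi * J_p
    assert 0 < theta < 1
```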
\goodbreak The next proposition gives an analogous result to Proposition~\ref{pr92} for the trigonometric sum $J_p$.
\goodbreak \begin{proposition}\label{pr95}
For a positive integer $p$,
let $J_p$ be the sum of cotangents defined by \eqref{E:J}. Then
\[
\pi J_p= -p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p+2p^2C_p
\]
where $C_p$ is given by \eqref{E:Cp}. \end{proposition} \begin{proof}
Recall that $c_{n}=H_n-\ln n-\gamma-\frac{1}{2n}$ satisfies
$c_n=\mathcal{O}(1/n^2)$. Thus, both series
\[
C_p=\sum_{n=1}^\infty c_{pn}\quad\text{ and }\quad \widetilde{C}_p=\sum_{n=1}^\infty(-1)^{n-1} c_{pn}
\] are convergent. Further, we note that $\widetilde{C}_p=D_p-\frac{\ln 2}{2p}$ where $D_p$ is defined by \eqref{E:Dp}.
According to Proposition~\ref{pr92} we have
\begin{equation}\label{E:pr991}
\widetilde{C}_p =\frac{\ln(\pi/2)-\gamma-\ln p}{2}+\frac{\pi}{4p}I_p.
\end{equation}
Now, noting that
\begin{align*}
C_p&=\sum_{\substack{n\geq1\\
n\,\text{odd}}}c_{pn}
+\sum_{\substack{n\geq1\\
n\,\text{even}}}c_{pn}
=\sum_{\substack{n\geq1\\
n\,\text{odd}}}c_{pn}+\sum_{n=1}^\infty c_{2pn}\\
\widetilde{C}_p&=\sum_{\substack{n\geq1\\
n\,\text{odd}}}c_{pn}
-\sum_{\substack{n\geq1\\
n\,\text{even}}}c_{pn}
=\sum_{\substack{n\geq1\\
n\,\text{odd}}}c_{pn}-\sum_{n=1}^\infty c_{2pn}
\end{align*}
we conclude that $C_p-\widetilde{C}_p=2C_{2p}$, or equivalently
\begin{equation}\label{E:pr992}
C_p-2C_{2p}=\widetilde{C}_p
\end{equation}
On the other hand, for a positive integer $p$ let us define $F_p$ by
\begin{equation}\label{E:Fp}
F_p=\frac{\ln p+\gamma-\ln(2\pi)}{2}+\frac{1}{2p}+\frac{\pi}{2p^2}J_p.
\end{equation}
It is easy to check, using Lemma~\ref{lm91} $(iii)$, that
\begin{align}\label{E:pr994}
F_p-2F_{2p}&=\frac{\ln(\pi/2)-\ln p-\gamma}{2} -\frac{\pi}{4p^2}(J_{2p}-2J_p)\notag\\
&=\frac{\ln(\pi/2)-\ln p-\gamma}{2} +\frac{\pi}{4p}I_p
\end{align}
We conclude from \eqref{E:pr992} and \eqref{E:pr994} that $C_p-2C_{2p}=F_p-2F_{2p}$, or equivalently
\[C_p-F_p=2(C_{2p}-F_{2p}).\]
Hence,
\begin{equation}\label{E:pr995}
\forall\,m\geq1,\qquad C_p-F_p=2^m(C_{2^mp}-F_{2^mp})
\end{equation}
Now, using Lemma~\ref{pr81} to replace $H_p$ in Lemma~\ref{pr72}, we obtain
\begin{align*}
\frac{\pi}{p^2}J_p&=\ln(2\pi)-H_p-\frac{1}{2p}+\mathcal{O}\left(\frac{1}{p^2}\right)\\
&=\ln(2\pi)-\ln p-\gamma-\frac{1}{p} +\mathcal{O}\left(\frac{1}{p^2}\right)
\end{align*}
Thus $F_p=\mathcal{O}\left(\frac{1}{p^2}\right)$. Similarly, from the fact that $c_n=\mathcal{O}\left(\frac{1}{n^2}\right)$
we conclude also that $C_p=\mathcal{O}\left(\frac{1}{p^2}\right)$. Consequently, there exists a constant $\kappa$ such that, for large values of $p$
we have $\abs{C_p-F_p}\leq \kappa/p^2$. So, from \eqref{E:pr995}, we see that for large values of $m$ we have
\[
\abs{C_p-F_p}\leq\frac{\kappa}{2^mp^2}
\]
and letting $m$ tend to $+\infty$ we obtain $C_p=F_p$, which is equivalent to the announced result. \end{proof}
\goodbreak \qquad Combining Proposition~\ref{pr95} and Proposition~\ref{pr82}, we obtain:
\begin{proposition}\label{pr96}
For $p\geq2$ and $m\geq 1$, we have
\[
\pi J_p= -p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p -\sum_{k=1}^{m-1}\frac{b_{2k}\zeta(2k)}{ k\cdot p^{2k-2}}
+(-1)^m\frac{\zeta(2m)}{m\cdot p^{2m-2}}\varepsilon_{p,m},
\]
with $0<\varepsilon_{p,m}<\abs{b_{2m}}$, where $\zeta$ is the well-known Riemann zeta function. \end{proposition} \qquad Using the values of the $\zeta(2k)$'s (see \cite[Formula 9.542]{grad}), and considering separately the cases $m$ even and $m$ odd, we obtain the next corollary.
\begin{corollary}\label{cor97}
For every positive integer $p$ and every nonnegative integer $n$, the sum of cotangents $J_p$ defined by \eqref{E:J} satisfies the following
inequalities:
\begin{align*}
J_p&< \frac{1}{\pi}\left(-p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p\right)
+2\pi\sum_{k=1}^{2n}(-1)^k\frac{b_{2k}^2}{ k\cdot(2k)!} \left(\frac{2\pi}{ p}\right)^{2k-2},\\
\noalign{\text{and}}
J_p&> \frac{1}{\pi}\left(-p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p\right)
+2\pi\sum_{k=1}^{2n+1}(-1)^k\frac{b_{2k}^2}{ k\cdot(2k)!} \left(\frac{2\pi}{ p}\right)^{2k-2}.
\end{align*} \end{corollary}
\goodbreak
As an example, for $n=0$ we obtain the following double inequality, which is valid for $p\geq1$ : \[ 0 < \frac{1}{\pi}\left(-p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p\right)-J_p<\frac{\pi}{36} \]
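This double inequality, too, can be confirmed numerically; a Python sketch (the Euler--Mascheroni constant is hard-coded):

```python
import math

EULER_GAMMA = 0.5772156649015329

for p in range(1, 201):
    J_p = sum(k / math.tan(k * math.pi / p) for k in range(1, p))
    base = (-p * p * math.log(p)
            + (math.log(2 * math.pi) - EULER_GAMMA) * p * p - p) / math.pi
    assert 0 < base - J_p < math.pi / 36
```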
\goodbreak
\begin{remark} \label{rm98} Note that we have proved the following results, for a positive integer $p$:
\begin{align*}
\sum_{n=1}^\infty(-1)^{n-1}(H_{pn}-\ln(pn)-\gamma)&=\frac{\ln(\pi/2)-\gamma-\ln p}{2}+\frac{\ln 2}{2p}
+\frac{\pi}{4p}\sum_{k=1}^{p-1}\csc\left(\frac{k \pi}{p}\right).\\
\sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn})&=\frac{\ln 2}{p}
+\frac{\pi}{2p}\sum_{k=1}^{p-1}\csc\left(\frac{k \pi}{p}\right).\\
\sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right)&=
\frac{\ln p+\gamma-\ln(2\pi)}{2}+\frac{1}{2p}+\frac{\pi}{2p^2}\sum_{k=1}^{p-1}k\cot\left(\frac{k \pi}{p}\right).
\end{align*}
These results are to be compared with those in \cite{kou}, see also \cite{kou1}. \end{remark}
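As a sanity check, the second identity of the remark can be verified numerically. The alternating series converges slowly, so the sketch below averages the last two partial sums, which is legitimate here since the terms $H_{p(n+1)}-H_{pn}$ decrease convexly in $n$; the script is an illustration only.

```python
import math

def lhs(p, N=20000):
    # partial sums of sum_{n>=0} (-1)^n (H_{p(n+1)} - H_{pn})
    s, prev = 0.0, 0.0
    for n in range(N):
        prev = s
        block = sum(1.0 / (p * n + j) for j in range(1, p + 1))  # H_{p(n+1)} - H_{pn}
        s += block if n % 2 == 0 else -block
    return (s + prev) / 2  # average of the last two partial sums

def rhs(p):
    return math.log(2) / p + (math.pi / (2 * p)) * sum(
        1.0 / math.sin(k * math.pi / p) for k in range(1, p))
```

For $p=1$ the right-hand side reduces to $\ln 2$ (the cosecant sum is empty), and `lhs(1)` agrees to roughly six digits.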
\end{document} |
\begin{document}
\title{Active-Learning a Convex Body in Low Dimensions
\thanks{A preliminary version appeared in ICALP
2020~\cite{hjr-alcbl-20}.}}
\maketitle
\begin{abstract}
Consider a set $\Mh{P} \subseteq \mathbb{R}^d$ of $n$ points, and a convex
body $\Mh{C}$ provided via a separation oracle. The task at hand is
to decide for each point of $\Mh{P}$ if it is in $\Mh{C}$ using the
fewest number of oracle queries. We show
that one can solve this problem in two and three dimensions using
$O( \indexX{\Mh{P}} \log n)$ queries, where $\indexX{\Mh{P}}$ is the
size of the largest subset of points of $\Mh{P}$ in convex position.
In 2D, we provide an algorithm that efficiently generates these
adaptive queries.
Furthermore, we show that in two dimensions one can
solve this problem using $O( \priceY{\Mh{P}}{\Mh{C}} \log^2 n )$
oracle queries, where $\priceY{\Mh{P}}{\Mh{C}}$ is a lower bound
on the minimum number of queries that any algorithm for this
specific instance requires.
Finally, we consider other variations on the problem, such
as using the fewest number of queries to decide if $\Mh{C}$
contains all points of $\Mh{P}$.
As an application of the above, we show that the discrete
geometric median of a point set $P$ in $\mathbb{R}^2$ can be computed
in $O(n\log^2 n\,(\log n \log\log n + \indexX{\Mh{P}}))$ expected
time. \end{abstract}
\SubmitVer{\keywords{Approximation algorithms, computational geometry, separation oracles, active learning}}
\section{Introduction}
\subsection{Background} \myparagraph{Active learning.} Active learning is a subfield of machine learning in which the learning algorithm may, at any time, query an oracle for the label of a particular data point. One model for active learning is the membership query synthesis model \cite{a-qcl-87}. Here, the learner wants to minimize the number of oracle queries, as such queries are expensive---they usually correspond to either consulting a specialist, or performing an expensive computation. In this setting, the learning algorithm is allowed to query the oracle for the label of any data point in the instance space. See \cite{s-alls-09} for a more in-depth survey of the various active learning models.
\begin{figure}
\caption{The shaded region shows the symmetric difference between
the hypothesis and true classifier. (I) Learning
halfspaces. (II) Learning arbitrary convex regions.}
\end{figure}
\myparagraph{PAC learning.} A classical approach for learning is using random sampling, where one gets labels for the samples (i.e., in the above setting, the oracle is asked for the labels of all items in the random sample). PAC learning studies the size of the sample needed. For example, consider the problem of learning a halfplane for $n$ points $\Mh{P} \subset \mathbb{R}^2$, given a parameter $\varepsilon \in (0,1)$. The first stage is to take a labeled random sample $\Mh{R} \subseteq \Mh{P}$. The algorithm computes any halfplane that classifies the sample correctly (i.e., the hypothesis). The misclassified points lie in the symmetric difference between the learned halfplane, and the (unknown) true halfplane, see \figref{active:learning}. In this case, the error region is a double wedge, and it is well known that its VC-dimension \cite{vc-ucrfe-71} is a constant (at most eight). As such, by the $\varepsilon$-net Theorem \cite{hw-ensrq-87}, a sample of size $O(\varepsilon^{-1} \log\varepsilon^{-1})$ is an $\varepsilon$-net for double wedges, which implies that this random sampling algorithm has at most $\varepsilon n$ error.
A classical example of a hypothesis class that cannot be learned is the set of convex regions (even in the plane). Indeed, given a set of points $\Mh{P}$ in the plane, any sample $\Mh{R} \subseteq \Mh{P}$ cannot distinguish between the true region being $\CHX{\Mh{R}}$ or $\CHX{\Mh{P}}$. Intuitively, this is because the hypothesis space in this case grows exponentially in the size of the sample (instead of polynomially). See \figref{convex:example}.
We stress that the above argument does not necessarily imply these types of hypothesis classes are unlearnable in practice. In general, there are other ways for learning algorithms to handle hypothesis classes with high (or even infinite) VC-dimension (for example, using regularization or assuming there is a large margin around the decision boundary).
\myparagraph{Weak $\varepsilon$-nets.} Because $\varepsilon$-nets for convex ranges do not exist, an interesting direction to overcome this problem is to define weak $\varepsilon$-nets \cite{hw-ensrq-87}. A set of points $\Mh{R}$ in the plane, not necessarily a subset of $\Mh{P}$, is a \emph{weak $\varepsilon$-net} for $\Mh{P}$ if for any convex body $\Mh{C}$ containing at least $\varepsilon n$ points of $\Mh{P}$, it also contains a point of $\Mh{R}$. \Matousek and Wagner \cite{mw-ncwe-03} gave a weak $\varepsilon$-net construction of size $O(\varepsilon^{-d}(\log \varepsilon^{-1})^{O(d^2 \log d)})$, which is doubly exponential in the dimension. The state of the art is the recent result of Rubin \cite{r-ibwen-18}, that shows a weak $\varepsilon$-net construction in the plane of size (roughly) $O(1/\varepsilon^{3/2})$. However, these weak $\varepsilon$-nets cannot be used for learning such concepts. Indeed, the analysis above required an $\varepsilon$-net for the symmetric difference of two convex bodies of finite complexity, see \figref{active:learning}.
\myparagraph{PAC learning with additional parameters.} If one assumes the input instance obeys some additional structural properties, then random sampling can be used. For example, suppose that the point set $\Mh{P}$ has at most $k$ points in convex position. For an arbitrary convex body $\Mh{C}$, the convex hull $\CHX{\Mh{P} \cap \Mh{C}}$ has complexity at most $k$. Let $\Mh{R} \subseteq \Mh{P}$ be a random sample, and $\Mh{C}'$ be the learned classifier for $\Mh{R}$. The region of error is the symmetric difference between $\Mh{C}$ and $\Mh{C}'$. In particular, since $k$-vertex polytopes in $\mathbb{R}^d$ have VC-dimension bounded by $O(d^2 k\log k)$ \cite{k-vcdkvdp-20}, this implies that the error region also has VC-dimension at most $O(d^2 k\log k)$. Hence if $\Mh{R}$ is a random sample of size $O(d^2 k\log k\varepsilon^{-1} \log \varepsilon^{-1})$, the $\varepsilon$-net Theorem \cite{hw-ensrq-87} implies that this sampling algorithm has error at most $\varepsilon n$. However, even for a set of $n$ points chosen uniformly at random from the unit square $[0,1]^2$, the expected number of points in convex position is $O(n^{1/3})$ \cite{ab-lcc-09}. Since we want $\cardin{\Mh{R}} < n$, this random sampling technique is only useful when $\varepsilon$ is larger than $\log^2 n /n^{2/3}$ (ignoring constants).
\noindent To summarize the above discussions, random sampling on its own does not seem powerful enough to learn arbitrary convex bodies, even if one allows some error to be made. In this paper we focus on developing algorithms for learning convex bodies in low dimensions, where the algorithms are deterministic and do not make any errors.
\begin{figure}
\caption{(I) A set of points $\Mh{P}$. (II) The unknown
convex body $\Mh{C}$. (III) Classifying all points of $\Mh{P}$ as
either inside or outside $\Mh{C}$.}
\end{figure}
\subsection{Problem and motivation}
\myparagraph{The problem.} In this paper, we consider a variation on the active learning problem, in the membership query synthesis model. Suppose that the learner is trying to learn an unknown convex body $\Mh{C}$ in $\mathbb{R}^d$. Specifically, the learner is provided with a set $\Mh{P}$ of $n$ unlabelled points in $\mathbb{R}^d$, and the task is to label each point as either inside or outside $\Mh{C}$, see \figref{classify}. For a query $\Mh{q} \in \mathbb{R}^d$, the oracle either reports that $\Mh{q} \in \Mh{C}$, or returns a hyperplane separating $\Mh{q}$ and $\Mh{C}$ (as a proof that $\Mh{q} \not\in \Mh{C}$). Note that if the query is outside the body, the oracle answer is significantly more informative than just the label of the point. The problem is to minimize the overall number of queries performed.
\myparagraph{Hard and easy instances.} Note that in the worst case, an algorithm may have to query the oracle for all input points---such a scenario happens when the input points are in convex position, and any possible subset of the points can be the points in the (appropriate) convex body. As such, the purpose here is to develop algorithms that are \emph{instance sensitive}---if the given instance is easy, they work well. If the given instance is hard, they might deteriorate to the naive algorithm that queries all points.
Natural inputs where one can hope to do better are those in which relatively few points are in convex position, such as grid points or random point sets. However, there are natural instances of the problem that are easy, despite the input having many points in convex position. For example, consider when the convex body is a triangle, with the input point set being $n/2$ points spread uniformly on a tiny circle centered at the origin, while the remaining $n/2$ points are outside the convex body, spread uniformly on a circle of radius $10$ centered at the origin, see \figref{easy}. Clearly, such a point set can be fully classified using a constant number of oracle queries. See \figref{easy:not:easy} for some related examples.
\myparagraph{Discretely optimizing convex functions.}
As an application of this particular query model, we explore the connection between active-learning a convex body and minimizing a convex function. Concretely, suppose we are given a set of $n$ points $\Mh{P}$ in the plane and a convex function $\Mh{f} : \mathbb{R}^2 \to \mathbb{R}$ equipped with an oracle that can evaluate $\Mh{f}$ or the derivative of $\Mh{f}$ at a given point. Our goal is to compute the point of $\Mh{P}$ attaining $\min_{\Mh{p} \in \Mh{P}} \Mh{f}(\Mh{p})$, using the fewest oracle queries (i.e., evaluations of $\Mh{f}$ or its derivative). We discuss the result in full in \secref{applications}.
We show that there is a natural connection between the studied query model and this problem. Namely, the level sets of a convex function are convex bodies, and the gradient of $\Mh{f}$ can be used to construct separating lines for the level set. Thus, developing algorithms for active-learning a convex body in the membership query synthesis model, in conjunction with the two aforementioned facts, leads to alternative methods for minimizing a convex function over a discrete collection of points. Importantly, the running time of such algorithms depends not only on how quickly we can evaluate $\Mh{f}$, but also on the structure of the point set $\Mh{P}$, as we aim to develop instance sensitive algorithms.
\subsection{Additional motivation \& previous work} \seclab{motiv:prev:work}
\myparagraph{Separation oracles.} The use of separation oracles is a common tool in optimization (e.g., solving exponentially large linear programs) and operations research. It is natural to ask what other problems can be solved efficiently when given access to this specific type of oracle. For example, B{\'{a}}r{\'{a}}ny and F{\"{u}}redi \cite{bf-cvd-87} study the problem of computing the volume of a convex body in $\mathbb{R}^d$ given access to a separation oracle.
\myparagraph{Other types of oracles.} Various models of computation utilizing oracles have been previously studied within the computational geometry community. Examples of other models include nearest-neighbor oracles (i.e., black-box access to nearest neighbor queries over a point set $\Mh{P}$) \cite{hkmr-sevps-16}, proximity probes (which, given a convex polygon $\Mh{C}$ and a query $\Mh{q}$, return the distance from $\Mh{q}$ to $\Mh{C}$) \cite{pavg-eppam-13}, and linear queries. Recently, Ezra and Sharir \cite{es-nqbplha-19} gave an improved algorithm for the problem of point location in an arrangement of hyperplanes. Here, a \emph{linear query} consists of a point $x$ and a hyperplane $h$, and outputs either that $x$ lies on $h$, or else the side of $h$ that contains $x$. Alternatively, their problem can be interpreted as querying whether or not a given point lies in a halfspace $h^+$. Here, we study a more general problem, as the convex body can be the intersection of many halfspaces.
Furthermore, other types of active learning models (in addition to the membership query model) have also been studied within the learning community, see, for example, \cite{a-qcl-87}.
\myparagraph{Active learning.} As discussed, the problem at hand can be interpreted as active-learning a convex body in relation to a set of points $\Mh{P}$ that need to be classified (as either inside or outside the body), where the queries are via a separation oracle. We are unaware of any work directly on this problem in the computational geometry community, while there is some work in the learning community that studies related active-learning classification problems \cite{cal-igwal-94,gg-oalumi-07,s-alls-09,klmz-accq-17}.
For example, Kane et~al.\xspace~ \cite{klmz-accq-17} study the problem of actively learning halfspaces with access to \emph{comparison queries}. Given a halfspace $h^+$ to learn, the model has two types of queries: \begin{compactenumi*}
\item label queries (given $x \in \mathbb{R}^d$, is $x \in h^+$?), and
\item comparison queries (given $x_1, x_2 \in \mathbb{R}^d$, is $x_1$
closer to the boundary of $h^+$ than $x_2$?). \end{compactenumi*} For example, they show that in the plane, one can classify all points using $O(\log n)$ comparison/label queries in expectation.
\subsection{Our results}
\begin{compactenumA}
\item We develop a greedy algorithm, for points in the plane,
that solves the problem using $O( \indexX{\Mh{P}} \log n)$ oracle
queries, where $\indexX{\Mh{P}}$ is the size of the largest subset of
points of $\Mh{P}$ in convex position. See \thmref{greedy-method}.
It is known that for a random set of $n$ points in the unit square,
$\Ex{\indexX{\Mh{P}}} = \Theta( n^{1/3})$ \cite{ab-lcc-09}, which
readily implies that classifying these points can be solved using
$O( n^{1/3} \log n)$ oracle queries. A similar bound holds for the
$\sqrt{n} \times \sqrt{n}$ grid. An animation of this algorithm is
on YouTube \cite{j-agca-18}. We also show that this algorithm
can be implemented efficiently, using dynamic segment trees,
see \lemref{impl-greedy}.
We remark that Kane et~al.\xspace~ \cite{klmz-accq-17} develop a framework
and randomized algorithm for learning a concept $\Mh{C}$, where the
expected number of queries depends near-linearly on a parameter they
define as the \emph{inference dimension} \cite[Definition
III.1]{klmz-accq-17} of the concept class. For our problem, one can
show that the inference dimension is $O(\indexX{\Mh{P}})$. As a
corollary of their framework, one can obtain a randomized algorithm
that solves our problem where the expected number of queries is
$O(\indexX{\Mh{P}} \log n)$, which matches the performance of our
deterministic algorithm. See \secref{inference} for details.
\item The above algorithm naturally extends to three dimensions,
also using $O(\indexX{\Mh{P}} \log n)$ oracle queries. While the
proof idea is similar to that of the algorithm in 2D, we believe
the analysis in three dimensions is also technically interesting.
See \thmref{greedy-method-3d}.
\item For a given point set $\Mh{P}$ and convex body $\Mh{C}$, we
define the separation price $\priceY{\Mh{P}}{\Mh{C}}$ of an instance
$(\Mh{P}, \Mh{C})$, and show that any algorithm classifying the points
of $\Mh{P}$ in relation to $\Mh{C}$ must make at least
$\priceY{\Mh{P}}{\Mh{C}}$ oracle queries (\lemref{lower:bound}).
As an aside, we show that when $\Mh{P}$ is a set of $n$ points
chosen uniformly at random from the unit square and $\Mh{C}$
is a (fixed) smooth convex body,
$\Ex{\priceY{\Mh{P}}{\Mh{C}}} = O(n^{1/3})$, and this bound is
tight when $\Mh{C}$ is a disk (our result also
generalizes to higher dimensions, see \lemref{ex:sep:pr}).
For randomly chosen points, the separation price is related
to the expected size of the convex hull of $\Mh{P} \cap \Mh{C}$,
which is also known to be $\Theta(n^{1/3})$ \cite{b-rpcba-07}.
We believe this result may be of independent interest, see
\apndref{ex:sep:pr}.
\item In \secref{improved:2d} we present an improved algorithm
for the 2D problem, and show that the number of queries made is
$O(\priceY{\Mh{P}}{\Mh{C}} \log^2 n)$. This result is an
$O(\log^2 n)$-approximation to the optimal solution, see
\thmref{improved:alg:2d}.
\item We consider the extreme scenarios of the problem: Verifying
that all points are either inside or outside of $\Mh{C}$. For each
problem we present an $O(\log n)$-approximation algorithm to the
optimal strategy. The results are presented in
\secref{emptiness:2d}, see \lemref{greedy-method-empty} and
\lemref{reverse:emptiness}.
\item In \secref{applications}, as an application of the above
results, we consider the problem of minimizing a \emph{convex} function
$\Mh{f} : \mathbb{R}^2\to \mathbb{R}$ over a point set $\Mh{P}$. Specifically, the
goal is to compute $\arg\min_{\Mh{p} \in \Mh{P}} \Mh{f}(\Mh{p})$. If $\Mh{f}$
and its derivative can be efficiently evaluated at a given query
point, then $\Mh{f}$ can be minimized over $\Mh{P}$ using
$O(\indexX{\Mh{P}} \log^2 n)$ queries to $\Mh{f}$ (or its derivative) in
expectation. We refer the reader to \lemref{discrete:min}.
Given a set of $n$ points $\Mh{P}$ in $\mathbb{R}^d$, the discrete geometric
median of $\Mh{P}$ is a point $\Mh{p} \in \Mh{P}$ minimizing the function
$\sum_{\Mh{q} \in \Mh{P}} \normX{\Mh{p} - \Mh{q}}$. As a corollary of
\lemref{discrete:min}, we obtain an algorithm for computing the
discrete geometric median for $n$ points in the plane. The algorithm
runs in $O(n\log^2 n \cdot (\log n \log\log n + \indexX{\Mh{P}}))$
expected time. See \lemref{discrete:med}. In particular, if $\Mh{P}$
is a set of $n$ points chosen uniformly at random from the unit
square, it is known that $\Ex{\indexX{\Mh{P}}} = \Theta(n^{1/3})$
\cite{ab-lcc-09} and hence the discrete geometric median can be
computed in $O(n^{4/3} \log^2 n)$ expected time.
While there has been ample work on approximating the geometric
median (recently, Cohen et~al.\xspace~ \cite{clmps-gmnlt-16} gave a
$(1+\varepsilon)$-approximation algorithm to the geometric median in
$O(dn \log^3 (1/\varepsilon))$ time), we are unaware
of any \emph{exact} sub-quadratic algorithm for the discrete
case even in the plane. \end{compactenumA}
Throughout this paper, the assumed model of computation is the unit-cost real RAM.
\section{The greedy algorithm in two and three dimensions} \seclab{greedy-2d-3d}
\subsection{Preliminaries}
For a set of points $\Mh{P} \subseteq \mathbb{R}^2$, let $\CHX{\Mh{P}}$ denote the convex hull of $\Mh{P}$. Given a convex body $\Mh{C} \subseteq \mathbb{R}^d$, two points $\Mh{p}, \Mh{x} \in \mathbb{R}^d \setminus \intX{\Mh{C}}$ are \emphi{mutually visible}, if the segment $\Mh{p} \Mh{x}$ does not intersect $\intX{\Mh{C}}$, where $\intX{\Mh{C}}$ is the interior of $\Mh{C}$. We also use the notation $\Mh{P} \cap \Mh{C} = \Set{\Mh{p} \in \Mh{P}}{\Mh{p} \in \Mh{C}}$.
For a point set $\Mh{P} \subseteq \mathbb{R}^d$, a \emphi{centerpoint} of $\Mh{P}$ is a point $\Mh{\mathcalb{c}} \in \mathbb{R}^d$, such that for any closed halfspace $h^+$ containing $\Mh{\mathcalb{c}}$, we have $\cardin{h^+ \cap \Mh{P}} \geq \cardin{\Mh{P}}/(d+1)$. A centerpoint always exists, and it can be computed exactly in $O(n^{d-1} + n \log n)$ time \cite{c-oramt-04}.
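The centerpoint guarantee can be checked directly, if crudely, by sampling closed halfplanes whose boundary passes through the candidate point. The helper below is our own illustrative sketch (it computes an upper bound on the true Tukey depth that is exact for generic inputs), not the exact algorithm of \cite{c-oramt-04}.

```python
import math

def halfplane_count(c, P, theta):
    # number of points of P in the closed halfplane {x : <x - c, u(theta)> >= 0}
    ux, uy = math.cos(theta), math.sin(theta)
    return sum(1 for (x, y) in P if (x - c[0]) * ux + (y - c[1]) * uy >= -1e-12)

def approx_tukey_depth(c, P, m=720):
    # minimum over m sampled directions; the worst halfplane can be assumed
    # to have its boundary pass through c
    return min(halfplane_count(c, P, 2 * math.pi * k / m) for k in range(m))

def is_centerpoint(c, P, d=2, m=720):
    return approx_tukey_depth(c, P, m) >= len(P) / (d + 1)
```

For the seven points consisting of a regular hexagon and its center, every closed halfplane through the center picks up at least three hexagon vertices plus the center, so the center has depth $4 \geq 7/3$ and is a centerpoint, while a faraway point has depth $0$.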
Let $\Mh{C}$ be a convex body in $\mathbb{R}^d$ and $q \in \mathbb{R}^d$ be a point such that $q$ lies outside $\Mh{C}$. A hyperplane $h$ \emphi{separates} $q$ from $\Mh{C}$ if $q$ lies in the \emph{closed} halfspace $h^+$ bounded by $h$, and $\Mh{C}$ is contained in the \emph{open} halfspace $h^-$ bounded by $h$. This definition allows the separating hyperplane to contain the point $q$, and will simplify the descriptions of the algorithms.
\subsection{The greedy algorithm in 2D}
Given a set of points $\Mh{P}$ in $\mathbb{R}^2$ and a convex body $\Mh{C}$ specified via a separation oracle, recall that the problem is to classify, for all the points of $\Mh{P}$, whether or not they are in $\Mh{C}$, using the fewest oracle queries possible. We define some operations the algorithm will use before stating the algorithm in full. Finally, we analyze the number of queries the algorithm performs.
\subsubsection{Operations} \seclab{operations}
Initially, the algorithm copies $\Mh{P}$ into a set $\Mh{U}$ of unclassified points. The algorithm is going to maintain an inner approximation $\Mh{B} \subseteq \Mh{C}$. There are two types of updates (\figref{operations} illustrates the two operations): \begin{compactenumA}
\item \AlgorithmI{expand}\xspace{}$(\Mh{p})$: Given a point
$\Mh{p} \in \Mh{C} \setminus \Mh{B}$, the algorithm is going to:
\begin{compactenumi}
\item Update the inner approximation:
$\Mh{B} \leftarrow \CHX{\Mh{B} \cup \brc{\Mh{p}}}$.
\item Remove (and mark) newly covered points:
$\Mh{U} \leftarrow \Mh{U} \setminus \Mh{B}$.
\end{compactenumi}
\item \AlgorithmI{remove}\xspace{}$(\Mh{\mathcalb{l}})$: Given a closed halfplane $\Mh{\mathcalb{l}}^+$
such that $\intX{\Mh{C}} \cap \Mh{\mathcalb{l}}^+ = \emptyset$, the algorithm
marks all the points of $\Mh{U}_\Mh{\mathcalb{l}} = \Mh{U} \cap \intX{\Mh{\mathcalb{l}}^+}$
as being outside $\Mh{C}$, and sets
$\Mh{U} \leftarrow \Mh{U} \setminus \Mh{U}_\Mh{\mathcalb{l}}$. \end{compactenumA}
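A minimal sketch of these two bookkeeping operations in code, assuming the inner approximation $\Mh{B}$ is stored as the vertex list of its convex hull in counterclockwise order; the hull and membership helpers are standard textbook routines, and the halfplane parameterization of \AlgorithmI{remove}\xspace{} is our own.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns the hull vertices in CCW order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lo, up = build(pts), build(reversed(pts))
    return lo[:-1] + up[:-1]

def inside(q, hull):
    # closed containment test for a CCW convex polygon (degenerate hulls: vertices only)
    if len(hull) < 3:
        return q in hull
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], q) >= 0 for i in range(n))

def expand(B, U, p):
    """expand(p): grow the inner approximation by p and discard covered points."""
    B = convex_hull(B + [p])
    return B, {u for u in U if not inside(u, B)}

def remove(U, a, b, c):
    """remove(l): given a halfplane a*x + b*y + c > 0 avoiding the interior of C,
    discard the unclassified points lying strictly inside it."""
    return {u for u in U if a * u[0] + b * u[1] + c <= 0}
```

For example, starting from $\Mh{B}$ with vertices $(0,0)$ and $(2,0)$, expanding by $(0,2)$ covers the point $(1, 0.5)$ but not $(3,3)$.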
\subsubsection{The algorithm} \seclab{round}
The algorithm repeatedly performs rounds, as described next, until the set of unclassified points is empty.
At every round, if the inner approximation $\Mh{B}$ is empty, then the algorithm sets $\Mh{U}^+ = \Mh{U}$. Otherwise, the algorithm picks a line $\Mh{\mathcalb{l}}$ that is tangent to $\Mh{B}$ and has the largest number of points of $\Mh{U}$ on the opposite side of $\Mh{\mathcalb{l}}$ from $\Mh{B}$. Let $\Mh{\mathcalb{l}}^-$ and $\Mh{\mathcalb{l}}^+$ be the two closed halfplanes bounded by $\Mh{\mathcalb{l}}$, where $\Mh{B} \subseteq \Mh{\mathcalb{l}}^-$. The algorithm computes the point set $\Mh{U}^+ = \Mh{U} \cap \Mh{\mathcalb{l}}^+$. We have two cases:
\begin{compactenumA}[label=\Alph*.]
\item Suppose $\cardin{\Mh{U}^+}$ is of constant size. The
algorithm queries the oracle for the status of each of these
points. For every point $\Mh{p} \in \Mh{U}^+$ such that $\Mh{p} \in
\Mh{C}$, the algorithm performs \AlgorithmI{expand}\xspace{}$(\Mh{p})$. Otherwise, the
oracle returned a separating line $\Mh{\mathcalb{h}}$, and the algorithm calls
\AlgorithmI{remove}\xspace{}$(\Mh{\mathcalb{h}})$.
\item Otherwise, $\cardin{\Mh{U}^+}$ is not of constant size.
The algorithm computes a centerpoint
$\Mh{\mathcalb{c}} \in \mathbb{R}^2$ for $\Mh{U}^+$, and asks the oracle for the
status of $\Mh{\mathcalb{c}}$. There are two possibilities:
\begin{compactenumA}[label*=\Roman*.]
\item If $\Mh{\mathcalb{c}} \in \Mh{C}$, then the algorithm performs
\AlgorithmI{expand}\xspace{}$(\Mh{\mathcalb{c}})$.
\item If $\Mh{\mathcalb{c}} \notin \Mh{C}$, then the oracle returned a
separating line $\Mh{\mathcalb{h}}$, and the algorithm performs
\AlgorithmI{remove}\xspace{}$(\Mh{\mathcalb{h}})$.
\end{compactenumA} \end{compactenumA}
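To make the control flow concrete, here is a toy end-to-end run of a simplified variant of the above: the body is a disk (so the separation oracle is easy to implement exactly, returning the tangent line at the nearest boundary point), a coordinate-wise median serves as a stand-in for a true centerpoint, and the tangent-line selection of $\Mh{U}^+$ is omitted, with a direct point query as a fallback whenever a round makes no progress. Only correctness of the final labels is retained, not the query bound; all names here are our own.

```python
import math

R = 1.25  # the convex body C is the disk of radius R about the origin

def oracle(q):
    # membership query; for an outside point, also return the tangent
    # (separating) line: a*x + b*y + c <= 0 on C, > 0 strictly beyond it
    d = math.hypot(q[0], q[1])
    if d <= R:
        return 'in', None
    return 'out', (q[0] / d, q[1] / d, -R)

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):
    # Andrew's monotone chain, CCW
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lo, up = build(pts), build(reversed(pts))
    return lo[:-1] + up[:-1]

def covered(q, H):
    if len(H) < 3:
        return q in H
    return all(cross(H[i], H[(i + 1) % len(H)], q) >= 0 for i in range(len(H)))

def classify(P):
    U, B, label, queries = set(P), [], {}, 0
    while U:
        xs = sorted(p[0] for p in U)
        ys = sorted(p[1] for p in U)
        q = (xs[len(xs) // 2], ys[len(ys) // 2])  # centerpoint stand-in
        before = len(U)
        ans, line = oracle(q)
        queries += 1
        if ans == 'in':                      # expand: cover points inside hull(B + q)
            B = hull(B + [q])
            for u in [u for u in U if covered(u, B)]:
                label[u] = 'in'
                U.discard(u)
        else:                                # remove: discard points beyond the line
            a, b, c = line
            for u in [u for u in U if a * u[0] + b * u[1] + c > 0]:
                label[u] = 'out'
                U.discard(u)
        if len(U) == before:                 # no progress: query one point directly
            u = U.pop()
            ans, _ = oracle(u)
            queries += 1
            label[u] = ans
            if ans == 'in':
                B = hull(B + [u])
    return label, queries
```

On a $9\times 9$ half-integer grid every point is classified correctly; each round classifies at least one point using at most two queries, so this toy variant never exceeds two queries per point.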
\subsubsection{Analysis}
Let $\Mh{B}_i$ be the inner approximation at the start of the $i$th\xspace iteration, and let $\Mh{z}$ be the first index where $\Mh{B}_\Mh{z}$ is not an empty set. Similarly, let $\Mh{U}_i$ be the set of unclassified points at the start of the $i$th\xspace iteration, where initially $\Mh{U}_1 = \Mh{U}$.
\begin{lemma}
\lemlab{iterations:empty:inapx}
The number of (initial) iterations in which the inner
approximation is empty is $\Mh{z} = O(\log n)$. \end{lemma} \begin{proof}
As soon as the oracle returns a point that is in $\Mh{C}$, the
inner approximation is no longer empty. As such, we need to bound
the initial number of iterations where the oracle returns that the
query point is outside $\Mh{C}$. Let $f_i = \cardin{\Mh{U}_i}$, and
note that $\Mh{U}_1 = \Mh{P}$ and $f_1 = \cardin{\Mh{P}} = n$. Let
$\Mh{\mathcalb{c}}_i$ be the centerpoint of $\Mh{U}_i$, which is the query point
in the $i$th\xspace iteration ($\Mh{\mathcalb{c}}_i$ is outside $\Mh{C}$). As such,
the line separating $\Mh{\mathcalb{c}}_i$ from $\Mh{C}$, returned by the
oracle, has at least $f_i/3$ points of $\Mh{U}_i$ on the same side
as $\Mh{\mathcalb{c}}_i$, by the centerpoint property. All of these points get
labeled in this iteration, and it follows that
$f_{i+1} \leq (2/3) f_i$, which readily implies the claim, since
$f_{\Mh{z}} < 1$, for $\Mh{z} = \ceil{\log_{3/2} n} +1$. \end{proof}
\begin{definition}[Visibility graph]
\deflab{visi:graph}
Consider the graph $\Mh{G}_i$ over $\Mh{U}_i$, where two points
$\Mh{p}, \Mh{r} \in \Mh{U}_i$ are connected $\iff$ the segment
$\Mh{p} \Mh{r}$ does not intersect the interior of $\Mh{B}_i$. \end{definition}
\begin{figure}
\caption{Four points and a convex body with their associated circular
intervals.}
\end{figure}
\myparagraph{The visibility graph as an interval graph.} For a point $\Mh{p} \in \Mh{U}_i$, let $\IY{i}{\Mh{p}}$ be the set of all directions $v$ (i.e., vectors of length $1$) such that there is a line perpendicular to $v$ that separates $\Mh{p}$ from $\Mh{B}_i$. Formally, a line $\Mh{\mathcalb{l}}$ separates $\Mh{p}$ from $\Mh{B}_i$, if the interior of $\Mh{B}_i$ is on one side of $\Mh{\mathcalb{l}}$ and $\Mh{p}$ is on the (closed) other side of $\Mh{\mathcalb{l}}$ (if $\Mh{p} \in \Mh{\mathcalb{l}}$, the line is still considered to separate the two). Clearly, $\IY{i}{\Mh{p}}$ is a circular interval on the unit circle. See \figref{intervals}. The resulting set of intervals is $\Mh{\mathcal{V}}_i = \Set{\IY{i}{\Mh{p}}}{\Mh{p} \in \Mh{U}_i}$. It is easy to verify that the intersection graph of $\Mh{\mathcal{V}}_i$ is $\Mh{G}_i$. Throughout the execution of the algorithm, the inner approximation $\Mh{B}_i$ grows monotonically; this in turn implies that the visibility intervals shrink over time, that is, $\IY{i}{\Mh{p}} \subseteq \IY{i-1}{\Mh{p}}$, for all $\Mh{p} \in \Mh{P}$ and $i$. Intuitively, in each round, either many edges of $\Mh{G}_i$ are removed (because intervals have shrunk and no longer intersect), or many vertices are removed (i.e., the associated points are classified).
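The interval $\IY{i}{\Mh{p}}$ can be probed numerically: a unit vector $v$ admits a separating line perpendicular to it exactly when $\langle v, \Mh{p}\rangle \geq \max_{b \in \Mh{B}_i} \langle v, b\rangle$, i.e., when the projection of $\Mh{p}$ onto $v$ is at least the support function of $\Mh{B}_i$ in direction $v$. The direction-sampling sketch below, our own illustration, exhibits the shrinking property.

```python
import math

def sep_directions(p, B, m=360):
    # indices k such that v_k = (cos t_k, sin t_k), t_k = 2*pi*k/m, admits a
    # line perpendicular to v_k separating p from the convex hull of B
    out = set()
    for k in range(m):
        t = 2 * math.pi * k / m
        vx, vy = math.cos(t), math.sin(t)
        support = max(vx * bx + vy * by for (bx, by) in B)  # support function h_B(v)
        if vx * p[0] + vy * p[1] >= support:
            out.add(k)
    return out
```

Growing the inner approximation can only shrink the interval: with $p=(2,0)$, enlarging $B=\{(0,0)\}$ by the vertex $(0,1)$ strictly removes sampled separating directions while introducing none.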
\begin{definition}
Given a set $\Mh{\mathcal{V}}$ of objects (e.g., intervals) in a domain
$\Mh{{D}}$ (e.g., unit circle), the \emphi{depth} of a point
$\Mh{p} \in \Mh{{D}}$ is the number of objects in $\Mh{\mathcal{V}}$ that contain
$\Mh{p}$. Let $\depthX{\Mh{\mathcal{V}}}$ be the maximum depth of any point
in $\Mh{{D}}$. \end{definition}
When it is clear, we use $\depthX{\Mh{G}}$ to denote $\depthX{\Mh{\mathcal{V}}}$, where $\Mh{G} = (\Mh{\mathcal{V}}, \Mh{E})$ is the intersection graph of the intervals $\Mh{\mathcal{V}}$ as defined above. Throughout, we commonly refer to $\Mh{G}$ as the \emphi{intersection graph}.
First, we bound the number of edges in this visibility graph $\Mh{G}$ and then argue that in each iteration, either many edges of $\Mh{G}$ are discarded or vertices are removed (as they are classified).
\begin{lemma}
\lemlab{int-graphs}
Let $\Mh{\mathcal{V}}$ be a set of $n$ intervals on the unit circle, and
let $\Mh{G} = (\Mh{\mathcal{V}}, \Mh{E})$ be the associated intersection
graph. Then $\cardin{\Mh{E}} =O(\Mh{\alpha} \Mh{\omega}^2)$, where
$\Mh{\omega}= \depthX{\Mh{\mathcal{V}}}$ and $\Mh{\alpha} = \Mh{\alpha}(\Mh{G})$ is the
size of the largest independent set in $\Mh{G}$. Furthermore,
the upper bound on $\cardin{\Mh{E}}$ is tight. \end{lemma} \begin{proof}
Let $J$ be the largest independent set of intervals in
$\Mh{G}$. The intervals of $J$ divide the circle into
$2\cardin{J}$ (atomic) circular arcs. Consider such an arc
$\Mh{\gamma}$, and let $K(\Mh{\gamma})$ be the set of all intervals of $\Mh{\mathcal{V}}$
that are fully contained in $\Mh{\gamma}$. All the intervals of $K(\Mh{\gamma})$
are pairwise intersecting, as otherwise one could increase the
size of the independent set. As such, all the intervals of
$K(\Mh{\gamma})$ must contain a common intersection point. It follows
that $\cardin{K(\Mh{\gamma})} \leq \Mh{\omega}$.
Let $K'(\Mh{\gamma})$ be the set of all intervals intersecting
$\Mh{\gamma}$. This set might contain up to $2\Mh{\omega}$ additional
intervals (that are not contained in $\Mh{\gamma}$), as each such
additional interval must contain at least one of the endpoints of
$\Mh{\gamma}$. Namely, $\cardin{K'(\Mh{\gamma})} \leq 3 \Mh{\omega}$. In particular,
any two intervals intersecting inside $\Mh{\gamma}$ both belong to
$K'(\Mh{\gamma})$. As such, the total number of edges contributed by
$K'(\Mh{\gamma})$ to $\Mh{G}$ is at most
$\binom{3\Mh{\omega}}{2} = O(\Mh{\omega}^2)$. Since there are at most
$2 \Mh{\alpha}$ arcs under consideration, the total number of
edges in $\Mh{G}$ is bounded by $O(\Mh{\alpha} \Mh{\omega}^2)$, which
implies the claim.
The lower bound is easy to see by taking an independent set of
intervals of size $\Mh{\alpha}$, and replicating every interval
$\Mh{\omega}$ times. \end{proof}
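The tightness construction in the last paragraph is easy to check in code: take $\Mh{\alpha}$ pairwise-disjoint arcs and replicate each one $\Mh{\omega}$ times; the resulting intersection graph has exactly $\Mh{\alpha}\binom{\Mh{\omega}}{2}$ edges and depth $\Mh{\omega}$. The closed-arc representation (start, length) below is our own.

```python
import math
from itertools import combinations

TWO_PI = 2 * math.pi

def contains(arc, x):
    # closed circular arc given as (start, length), angles taken mod 2*pi
    s, length = arc
    return ((x - s) % TWO_PI) <= length

def intersects(a, b):
    # two closed arcs intersect iff one contains the other's start point
    return contains(a, b[0]) or contains(b, a[0])

def num_edges(arcs):
    return sum(1 for a, b in combinations(arcs, 2) if intersects(a, b))

def depth(arcs):
    # for closed arcs, the maximum depth is attained at some arc's start point
    return max(sum(contains(a, s) for a in arcs) for (s, _) in arcs)

alpha, omega = 4, 3
base = [(i * TWO_PI / alpha, 0.1) for i in range(alpha)]   # pairwise-disjoint arcs
arcs = [a for a in base for _ in range(omega)]             # replicate each omega times
```

Here `num_edges(arcs)` equals $4\binom{3}{2}=12$ and `depth(arcs)` equals $3$, matching the $\Omega(\Mh{\alpha}\Mh{\omega}^2)$ lower bound.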
\begin{lemma}
\lemlab{segments:intersect}
Let $\Mh{P}$ be a set of $n$ points in the plane lying above the
$x$-axis, $\Mh{\mathcalb{c}}$ be a centerpoint of $\Mh{P}$, and
$\Mh{S} = \binom{\Mh{P}}{2}$ be the set of all segments induced by
$\Mh{P}$. Next, consider any point $\Mh{r}$ on the $x$-axis. Then,
the segment $\Mh{\mathcalb{c}} \Mh{r}$ intersects at least $n^2/36$ segments of
$\Mh{S}$. \end{lemma} \begin{proof}
If the segment $\Mh{\mathcalb{c}} \Mh{r}$ intersects the segment
$\Mh{p}_1 \Mh{p}_2$, for $\Mh{p}_1, \Mh{p}_2 \in \Mh{P}$, then we consider
$\Mh{p}_1$ and $\Mh{p}_2$ to no longer be mutually visible.
It suffices to lower bound the number of pairs of points
that lose mutual visibility of each other.
Consider a line $\Mh{\mathcalb{l}}$ passing through the point $\Mh{\mathcalb{c}}$, see
\figref{center:point}. Let $\Mh{\mathcalb{l}}^+$ be the closed halfspace
bounded by $\Mh{\mathcalb{l}}$ containing $\Mh{r}$. Note that
$\cardin{\Mh{P} \cap \Mh{\mathcalb{l}}^+} \geq n/3$, since $\Mh{\mathcalb{c}}$ is a
centerpoint of $\Mh{P}$, and $\Mh{\mathcalb{c}} \in \Mh{\mathcalb{l}}$. Rotate $\Mh{\mathcalb{l}}$ around
$\Mh{\mathcalb{c}}$ until there are $\geq n/6$ points on each side of
$\Mh{r}\Mh{\mathcalb{c}}$ in the halfspace $\Mh{\mathcalb{l}}^+$. To see why this rotation
of $\Mh{\mathcalb{l}}$ exists, observe that the two halfspaces bounded by the
line spanning $\Mh{r}\Mh{\mathcalb{c}}$, have zero points on one side, and at
least $n/3$ points on the other side --- a continuous rotation of
$\Mh{\mathcalb{l}}$ between these two extremes, implies the desired property.
Observe that points in $\Mh{\mathcalb{l}}^+$ and on opposite sides of the
segment $\Mh{\mathcalb{c}}\Mh{r}$ cannot see each other, as the segment
connecting them must intersect $\Mh{\mathcalb{c}}\Mh{r}$. Consequently, the
number of induced segments that $\Mh{\mathcalb{c}}\Mh{r}$ intersects is at
least $n^2/36$. \end{proof}
For a graph $\Mh{G}$, we let $\EdgesX{\Mh{G}}$ denote the set of edges in $\Mh{G}$, and let $\cardin{\EdgesX{\Mh{G}}}$ denote the number of edges in $\Mh{G}$.
\begin{lemma}
\lemlab{depth:reduce}
Let $\Mh{G}_i$ be the intersection graph, in the beginning of the
$i$th\xspace iteration, and let
$\nEdgesX{i} = \cardin{\EdgesX{\Mh{G}_{i}}}$. After the $i$th\xspace
iteration of the greedy algorithm, we have
$\nEdgesX{i+1} \leq \nEdgesX{i} - \Mh{\omega}^2/36$, where
$\Mh{\omega} = \depthX{\Mh{G}_i}$. \end{lemma} \begin{proof}
Recall that in the algorithm $\Mh{U}^+ = \Mh{U}_i \cap \Mh{\mathcalb{l}}^+$ is
the current set of unclassified points and $\Mh{\mathcalb{l}}$ is the line
tangent to $\Mh{B}_i$, where $\Mh{\mathcalb{l}}^+$ is the closed halfspace
that avoids the interior of $\Mh{B}_i$ and contains the largest
number of unlabeled points of $\Mh{U}_i$. We have that
$\Mh{\omega} = \cardin{\Mh{U}^+}$.
If a \AlgorithmI{remove}\xspace{} operation was performed in the $i$th\xspace iteration,
then the number of points of $\Mh{U}^+$ that are discarded is at
least $\Mh{\omega}/3$. In this case, the oracle returned a separating
line $\Mh{\mathcalb{h}}$ between a centerpoint $\Mh{\mathcalb{c}}$ of $\Mh{U}^+$ and the
inner approximation. For the halfspace $\Mh{\mathcalb{h}}^+$ containing
$\Mh{\mathcalb{c}}$, we have
$t_i = \cardin{\Mh{U}^+ \cap \Mh{\mathcalb{h}}^+} \geq \cardin{\Mh{U}^+}/3 \geq
\Mh{\omega}/3$. Furthermore, all the points of $\Mh{U}^+$ are pairwise
mutually visible (in relation to the inner approximation
$\Mh{B}_i$). Namely,
$$
\nEdgesX{i+1}
=
\cardin{\EdgesX{\Mh{G}_{i} - (\Mh{U}^+ \cap \Mh{\mathcalb{h}}^+)}}
\leq
\nEdgesX{i} - \binom{t_i}{2}
\leq
\nEdgesX{i} - \Mh{\omega}^2 /36.
$$
If an \AlgorithmI{expand}\xspace{} operation was performed, the centerpoint $\Mh{\mathcalb{c}}$
of $\Mh{U}^+$ is added to the current inner approximation
$\Mh{B}_i$. Let $\Mh{r}$ be a point in $\Mh{\mathcalb{l}} \cap \Mh{B}_i$, and
let $\Mh{\mathcalb{c}}_i$ be the centerpoint of $\Mh{U}_i$ computed by the
algorithm. By \lemref{segments:intersect} applied to
$\Mh{r}, \Mh{\mathcalb{c}}$ and $\Mh{U}^+$, we have that at least $\Mh{\omega}^2/36$
pairs of points of $\Mh{U}^+$ are no longer mutually visible to each
other in relation to $\Mh{B}_{i+1}$. We conclude that at least
$\Mh{\omega}^2/36$ edges of $\Mh{G}_i$ are no longer present in
$\Mh{G}_{i+1}$. \end{proof}
\begin{definition}
\deflab{index}
A subset of points $X \subseteq \Mh{P} \subseteq \mathbb{R}^2$ is in
\emphi{convex position} if all the points of $X$ are vertices of
$\CHX{X}$ (note that a point in the middle of an edge is not
considered to be a vertex). The \emphi{index} of $\Mh{P}$, denoted
by $\indexX{\Mh{P}}$, is the cardinality of the largest subset of
$\Mh{P}$ of points that are in convex position. \end{definition}
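For small inputs, the index can be computed by brute force; the exponential-time sketch below (our own helper names, for intuition only, assuming distinct points) tests convex position by checking that no point lies in the convex hull of the others:

```python
import itertools

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(p, a, b):
    """True if p lies on the closed segment ab."""
    return (orient(a, b, p) == 0
            and min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def in_triangle(p, a, b, c):
    """True if p lies in the closed triangle abc."""
    if orient(a, b, c) == 0:    # degenerate triangle: check its three segments
        return on_segment(p, a, b) or on_segment(p, b, c) or on_segment(p, a, c)
    d1, d2, d3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def in_hull(p, pts):
    """True if p lies in the convex hull of pts (brute force over triangles)."""
    if len(pts) == 1:
        return p == pts[0]
    if len(pts) == 2:
        return on_segment(p, *pts)
    return any(in_triangle(p, *t) for t in itertools.combinations(pts, 3))

def in_convex_position(pts):
    """Every point must be a hull vertex: not in the hull of the others."""
    return all(not in_hull(p, [q for q in pts if q != p]) for p in pts)

def index_of(pts):
    """Cardinality of a largest subset of pts in convex position
    (assumes at least two distinct points)."""
    return max(k for k in range(2, len(pts) + 1)
               if any(in_convex_position(list(s))
                      for s in itertools.combinations(pts, k)))
```

Note that a point in the interior of a hull edge is correctly rejected, matching the definition: for collinear points the index is $2$.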
\begin{theorem}
\thmlab{greedy-method}
Let $\Mh{C}$ be a convex body provided via a separation oracle, and
let $\Mh{P}$ be a set of $n$ points in the plane. The greedy
classification algorithm performs
$O\bigl((\indexX{\Mh{P}}+1) \log n\bigr)$ oracle queries. The
algorithm correctly identifies all points in $\Mh{P} \cap \Mh{C}$
and $\Mh{P} \setminus \Mh{C}$. \end{theorem} \begin{proof}
By \lemref{iterations:empty:inapx}, the number of iterations (and
also queries) in which the inner approximation is empty is
$O(\log n)$, and let $\Mh{z} = O(\log n)$ be the first iteration
such that the inner approximation is not empty. It suffices to
bound the number of queries made by the algorithm after the inner
approximation becomes non-empty.
For $i \geq \Mh{z}$, let $\Mh{G}_i = (\Mh{U}_i, \Mh{E}_i)$ denote
the visibility graph of the remaining unclassified points
$\Mh{U}_i$ in the beginning of the $i$th\xspace iteration. Any
independent set in $\Mh{G}_i$ corresponds to a set of points
$X \subseteq \Mh{P}$ that do not see each other due to the presence
of the inner approximation $\Mh{B}_i$. That is, $X$ is in
convex position, and furthermore $\cardin{X} \leq \indexX{\Mh{P}}$.
For $0 \leq t \leq n$, let $\startX{t}$ be the first iteration
$i$, such that $\depthX{ \Mh{G}_i} \leq t$. Since the depth of
$\Mh{G}_i$ is a monotone decreasing function, this quantity is
well defined. An \emphi{epoch} is a range of iterations between
$\startX{t}$ and $\startX{t/2}$, for any parameter $t$. We claim that an epoch
lasts $O( \indexX{ \Mh{P}})$ iterations (and every iteration issues
only one oracle query). Since there are only $O( \log n)$
(non-overlapping) epochs until the depth becomes zero and the
algorithm terminates, this implies the claim.
So consider such an epoch starting at $i = \startX{t}$. We have
$m = \nEdgesX{i} = \cardin{\EdgesX{\Mh{G}_i}} = O( \indexX{\Mh{P}}
t^2)$, by \lemref{int-graphs}, since $\indexX{\Mh{P}}$ is an
upper bound on the size of the largest independent set in
$\Mh{G}_i$. By \lemref{depth:reduce}, as long as the maximum depth
of $\Mh{G}_i$ is at least $t/2$, the number of edges removed from the
graph at each iteration, during this epoch, is at least
$\Omega(t^2)$. As such, the algorithm performs at most
$O(m/t^2) = O( \indexX{\Mh{P}} )$ iterations in this epoch, till
the maximum depth drops to $t/2$. \end{proof}
\subsubsection{Implementing the greedy algorithm}
Using dynamic segment trees \cite{mn-dfc-90}, we show that the greedy classification algorithm can be implemented efficiently.
\begin{lemma}
\lemlab{impl-greedy}
Let $\Mh{C}$ be a convex body provided via a separation oracle, and
let $\Mh{P}$ be a set of $n$ points in the plane. If an oracle query
costs time $T$, then the greedy algorithm can be implemented in
$O\bigl(n\log^2 n\log\log n + T\cdot\indexX{\Mh{P}} \log n\bigr)$
expected time. \end{lemma} \begin{proof}
The algorithm follows the proof of \thmref{greedy-method}. We
focus on efficiently implementing the algorithm once the inner
approximation is no longer empty. Let $\Mh{U} \subseteq \Mh{P}$ be
the subset of unclassified points. By binary searching on the
vertices of the inner approximation $\Mh{B}$, we can compute the
collection of visibility intervals $\Mh{\mathcal{V}}$ for all points in
$\Mh{U}$ in $O(\cardin{\Mh{U}}\log m) = O(n\log n)$ time, where $m$
is the number of vertices of $\Mh{B}$ (recall
that $\Mh{\mathcal{V}}$ is a collection of circular intervals on the unit
circle). We store these intervals in a dynamic segment tree
$\mathcal{T}$ with the modification that each node $v$ in $\mathcal{T}$ stores
the maximum depth over all intervals contained in the subtree
rooted at $v$. Note that $\mathcal{T}$ can be made fully dynamic to
support updates in $O(\log n \log \log n)$ time \cite{mn-dfc-90}.
An iteration of the greedy algorithm proceeds as follows. Start by
collecting all points $\Mh{U}^+ \subseteq \Mh{U}$ realizing the
maximum depth using $\mathcal{T}$. When $t = \cardin{\Mh{U}^+}$, this
step can be done in $O(\log n + t)$ time by traversing $\mathcal{T}$.
We compute the centerpoint of $\Mh{U}^+$ in $O(t \log t)$ expected
time \cite{c-oramt-04} and query the oracle using this
centerpoint. Either points of $\Mh{U}$ are classified (and we
delete their associated intervals from $\mathcal{T}$) or we improve the
inner approximation. The inner approximation (which is the convex
hull of query points inside the convex body $\Mh{C}$) can be
maintained in an online fashion with insert time $O(\log n)$
\cite[Chapter 3]{ps-cg-85}. When the inner approximation expands,
the points of $\Mh{U}^+$ have their intervals shrink. As such, we
recompute $\IX{\Mh{p}}$ for each $\Mh{p} \in \Mh{U}^+$ and reinsert
$\IX{\Mh{p}}$ into $\mathcal{T}$.
As defined in the proof of \thmref{greedy-method}, an epoch is the
subset of iterations in which the maximum depth is in the range
$[t/2, t]$, for some integer $t$. During such an epoch, we make
two claims:
\begin{compactenumi}
\item there are $\sigma = O(n)$ updates to $\mathcal{T}$, and
\item the greedy algorithm performs $O(n/t)$
centerpoint calculations on sets of size $O(t)$.
\end{compactenumi}
Both of these claims imply that a single epoch of the greedy
algorithm can be implemented in expected time
$O(\sigma \log n \log\log n + n\log n + T\cdot \indexX{\Mh{P}})$.
As there are $O(\log n)$ epochs, the algorithm can be
implemented in expected time
$O(n \log^2 n \log\log n + T\cdot \indexX{\Mh{P}}\log n)$.
We now prove the first claim. Recall that we have a collection of
intervals $\Mh{\mathcal{V}}$ lying on the circle of directions. Partition
the circle into $k$ atomic arcs, where each arc contains $t/10$
endpoints of intervals in $\Mh{\mathcal{V}}$. Note that $k = 20n/t =
O(n/t)$. For each circular arc $\Mh{\gamma}$, let
$\Mh{\mathcal{V}}_\Mh{\gamma} \subseteq \Mh{\mathcal{V}}$ be the set of intervals
intersecting $\Mh{\gamma}$. As the maximum depth is bounded by $t$, we
have that $\cardin{\Mh{\mathcal{V}}_\Mh{\gamma}} \leq t + t/10 = 1.1t$. In
particular, if $\Mh{G}[\Mh{\mathcal{V}}_\Mh{\gamma}]$ is the induced subgraph of
the intersection graph $\Mh{G}$, then $\Mh{G}[\Mh{\mathcal{V}}_\Mh{\gamma}]$ has
at most $\binom{\cardin{\Mh{\mathcal{V}}_\Mh{\gamma}}}{2} = O(t^2)$ edges.
In each iteration, the greedy algorithm chooses a point in an
arc $\Mh{\gamma}$ (we say that $\Mh{\gamma}$ is \emph{hit}) and edges
are only deleted from $\Mh{G}[\Mh{\mathcal{V}}_\Mh{\gamma}]$.
The key observation is that an arc $\Mh{\gamma}$ can only be hit
$O(1)$ times before all points of $\Mh{\gamma}$ have depth below
$t/2$, implying that it will not be hit again until the next
epoch. Indeed, each time $\Mh{\gamma}$ is hit, the number of edges
in the induced subgraph $\Mh{G}[\Mh{\mathcal{V}}_\Mh{\gamma}]$ drops
by a constant factor (\lemref{depth:reduce}). Additionally,
when $\Mh{G}[\Mh{\mathcal{V}}_\Mh{\gamma}]$ has fewer than
$\binom{t/2}{2}$ edges, then any point on $\Mh{\gamma}$ has
depth less than $t/2$. These two facts imply that an arc
is hit $O(1)$ times.
When an arc is hit, we must reinsert
$\cardin{\Mh{\mathcal{V}}_\Mh{\gamma}} = O(t)$ intervals into $\mathcal{T}$. In
particular, over a single epoch, the total number of hits
over all arcs is bounded by $O(k)$. As such,
$\sigma = O(kt) = O(n)$.
For the second claim, each time an arc is hit, a single
centerpoint calculation is performed. Since each arc
has depth at most $t$ and is hit a constant number
of times, there are $O(k) = O(n/t)$ such
centerpoint calculations in a single epoch, each costing
expected time $O(t\log t)$. \end{proof}
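For intuition, the maximum-depth query that the segment tree answers can be emulated by a brute-force sweep over interval endpoints. A minimal sketch (a linear scan, not the dynamic $O(\log n \log\log n)$ structure of the lemma; the conventions for wrapping arcs are our own):

```python
def max_depth(intervals):
    """Maximum depth over a set of closed circular arcs on the unit circle.

    Each arc is a pair (a, b) of angles in [0, 2*pi); an arc with a > b
    wraps around angle 0.  Brute-force sweep; the algorithm in the text
    uses a dynamic segment tree to answer such queries quickly."""
    events = []
    base = 0                     # number of arcs covering angle 0
    for a, b in intervals:
        if a <= b:
            events.append((a, +1))
            events.append((b, -1))
        else:                    # wrapping arc: covers [a, 2*pi) and [0, b]
            base += 1
            events.append((b, -1))
            events.append((a, +1))
    events.sort(key=lambda e: (e[0], -e[1]))   # at ties, opens before closes
    depth = best = base
    for _, delta in events:
        depth += delta
        best = max(best, depth)
    return best
```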
In \secref{applications} we present an application of the greedy classification algorithm. Namely, we present an efficient algorithm for computing the discrete geometric median of a point set (\lemref{discrete:med}).
\subsubsection{The inference dimension and an alternative
algorithm} \seclab{inference}
Kane et~al.\xspace~ \cite{klmz-accq-17} define the notion of \emph{inference
dimension}, which in our context is the minimum number of queries needed to classify all points.
\begin{figure}
\caption{The minimal external set must be convex.}
\end{figure}
\begin{lemma}
Let $\Mh{C}$ be a convex body provided via a
separation oracle, and let $\Mh{P}$ be a set of $n$ points in
the plane. There is a set of $2\indexX{\Mh{P}}$ oracle queries whose
answers can be used to classify all points of $\Mh{P}$ correctly. \end{lemma} \begin{proof}
We put the at most $\indexX{\Mh{P}}$ vertices of
$\CHX{ \Mh{P} \cap \Mh{C}}$ into a query set. Querying these points is
enough to correctly label all the points inside the body
$\Mh{C}$. As for the points of $\Mh{P}$ outside $\Mh{C}$, let
$\Mh{T} \subseteq \Mh{P} \setminus \Mh{C}$ be the minimum size subset
such that querying these points correctly labels all
points outside $\Mh{C}$. Each point $p \in \Mh{T}$ is associated with a
halfspace $h^+_p$ that contains $\Mh{C}$. Let $H$ be this set
of halfspaces. Observe that for any point $p \in \Mh{T}$, there is
a point $\mathrm{witness}(p)\in \Mh{P} \setminus \Mh{C}$ for which
$h^+_p$ does not contain $\mathrm{witness}(p)$ (as
otherwise, $p$ can be removed from $\Mh{T}$). Let
$\Mh{U} = \Set{\mathrm{witness}(p)}{p \in \Mh{T}}$. The points of
$\Mh{U}$ lie in the faces of the arrangement $\ArrX{H}$ that are
adjacent to the face $\cap_{p \in \Mh{T}} h^+_p$, see
\figref{outer:conex}.
Since each point of $\Mh{U}$ is separable by a line from the
remaining points of $\Mh{U}$, it follows that $\Mh{U}$ is in convex
position. As such, $\cardin{\Mh{T}} = \cardin{\Mh{U}} \leq \indexX{\Mh{P}}$,
which implies the result. \end{proof}
The above lemma implies that the inference dimension of $\Mh{P}$ is at most $2\indexX{\Mh{P}}$. Plugging this into the algorithm of Kane et~al.\xspace~\cite{klmz-accq-17} results in an algorithm that labels all points correctly and performs, in expectation, the same number of queries as \thmref{greedy-method}. The advantage of \thmref{greedy-method} is that it does not require knowing the value of $\indexX{\Mh{P}}$ in advance. However, one could perform an exponential search for a tight upper bound on $\indexX{\Mh{P}}$, and still use the algorithm of Kane et~al.\xspace~\cite{klmz-accq-17}. We leave the question of experimentally comparing the two algorithms as an open problem for future research.
\paragraph{Sketch of the algorithm of \cite{klmz-accq-17}.} The algorithm of Kane et~al.\xspace~ \cite{klmz-accq-17} specialized for our case works as follows. Start by randomly picking a sample of size $O( \indexX{\Mh{P}})$ and query the oracle with each of these points. Next, stream the unlabeled points through the computed regions, leaving only the points that are yet to be labeled. The algorithm repeats this process $O( \log n )$ times, in each iteration working on the remaining unlabeled points. By proving that in expectation at least half of the points are being labeled at each round, it follows that $O(\log n)$ iterations suffice.
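The streaming step --- labeling points against the regions determined by earlier answers --- can be sketched as follows. This is a simplified stand-in with our own conventions: the inner approximation is a counterclockwise convex polygon, and each outside answer contributes a halfplane $ax + by \leq c$ known to contain the body:

```python
def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_convex_polygon(p, poly):
    """True if p is inside (or on) the convex polygon poly, given in CCW order."""
    n = len(poly)
    return all(orient(poly[i], poly[(i + 1) % n], p) >= 0 for i in range(n))

def classify_batch(points, inner_poly, outside_halfplanes):
    """Label a point 'in' if it lies in the inner approximation, 'out' if some
    separating halfplane a*x + b*y <= c (containing the convex body) excludes
    it; points with neither certificate remain unlabeled."""
    labels = {}
    for p in points:
        if inner_poly and in_convex_polygon(p, inner_poly):
            labels[p] = 'in'
        elif any(a * p[0] + b * p[1] > c for (a, b, c) in outside_halfplanes):
            labels[p] = 'out'
    return labels
```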
\subsection{The greedy algorithm in 3D} \seclab{greedy:3d}
Consider the 3D variant of the 2D problem: Given a set of points $\Mh{P}$ in $\mathbb{R}^3$ and a convex body $\Mh{C}$ specified via a separation oracle, the task at hand is to classify, for all the points of $\Mh{P}$, whether or not they are in $\Mh{C}$, using the fewest oracle queries possible.
The greedy algorithm naturally extends: at each iteration $i$ a plane $\Mh{\mathcalb{e}}_i$ is chosen that is tangent to the current inner approximation $\Mh{B}_i$, such that its closed halfspace (which avoids the interior of $\Mh{B}_i$) contains the largest number of unclassified points from the set $\Mh{U}_i$. If the queried centerpoint is outside, the oracle returns a separating plane, and points can be discarded by the \AlgorithmI{remove}\xspace{} operation. Similarly, if the centerpoint is reported inside, then the algorithm calls \AlgorithmI{expand}\xspace{} and updates the 3D inner approximation $\Mh{B}_i$.
\subsubsection{Analysis}
Following the analysis of the greedy algorithm in 2D, we (conceptually) maintain the following set of objects: For a point $\Mh{p} \in \Mh{U}_i$, let $\Mh{\mathcalb{d}}_i(\Mh{p})$ be the set of all unit-length directions $v \in \mathbb{R}^3$ such that a plane perpendicular to $v$ separates $\Mh{p}$ from $\Mh{B}_i$. Let $\Mh{\mathcal{P}}_i = \Set{\Mh{\mathcalb{d}}_i(\Mh{p})}{\Mh{p} \in \Mh{U}_i}$. A set of objects forms a collection of \emphi{pseudo-disks} if the boundaries of every pair of them intersect at most twice. The following claim shows that $\Mh{\mathcal{P}}_i$ is a collection of pseudo-disks on $\Mh{\mathbb{S}}$, where $\Mh{\mathbb{S}}$ is the sphere of radius one centered at the origin.
\begin{lemma}
The set
$\Mh{\mathcal{P}}_i = \Set{\Mh{\mathcalb{d}}_i(\Mh{p}) \subseteq \Mh{\mathbb{S}}}{\Mh{p} \in
\Mh{U}_i}$ is a collection of pseudo-disks. \end{lemma} \begin{proof}
Fix two points $\Mh{p}, \Mh{r} \in \Mh{U}_i$ such that the boundaries
of $\Mh{\mathcalb{d}}_i(\Mh{p})$ and $\Mh{\mathcalb{d}}_i(\Mh{r})$ intersect on
$\Mh{\mathbb{S}}$. Let $\Mh{\mathcalb{l}}$ be the line in $\mathbb{R}^3$ passing through
$\Mh{p}$ and $\Mh{r}$. Consider any plane $\Mh{\mathcalb{e}}$ such that $\Mh{\mathcalb{l}}$
lies on $\Mh{\mathcalb{e}}$. Since $\Mh{\mathcalb{l}}$ is fixed, $\Mh{\mathcalb{e}}$ has one degree
of freedom. Conceptually rotate $\Mh{\mathcalb{e}}$ until it becomes tangent
to $\Mh{B}_i$ at some point $\Mh{u}'$. The direction of the normal to this
tangent plane, is a point in
$X = \partial\Mh{\mathcalb{d}}_i(\Mh{p}) \cap \partial\Mh{\mathcalb{d}}_i(\Mh{r})$. Note
that this works also in the other direction --- any point in $X$
corresponds to a tangent plane passing through $\Mh{\mathcalb{l}}$. The
family of planes passing through $\Mh{\mathcalb{l}}$ contains only two
planes tangent to $\Mh{B}_i$. It follows that $\cardin{X}=2$. As such, any
two regions in $\Mh{\mathcal{P}}_i$ intersect as pseudo-disks. \end{proof}
We need the following two classical results, which follow from the Clarkson-Shor technique \cite{cs-arscg-89}.
\begin{lemma}
\lemlab{num:vertices:depth}
Let $\Mh{\mathcal{P}}$ be a collection of $n$ pseudo-disks, and let
$\vDY{\Mh{k}}{\mathcal{A}}$ be the set of all vertices of depth at most
$\Mh{k}$ in the arrangement $\mathcal{A} = \ArrX{\Mh{\mathcal{P}}}$. Then
$\cardin{\vDY{\Mh{k}}{\mathcal{A}}} = O(n\Mh{k})$. \end{lemma} \begin{proof}
Let $\Mh{S} \subseteq \Mh{\mathcal{P}}$ be a random sample where each
pseudo-disk is independently placed into $\Mh{S}$ with probability
$1/\Mh{k}$. For each $\Mh{p} \in \vDY{\Mh{k}}{\mathcal{A}}$, let
$\mathcal{E}_\Mh{p}$ be the event that $\Mh{p}$ is a vertex in the union
$\unionX{\Mh{S}}$ of this random subset of pseudo-disks. The
probability that $\Mh{p}$ is part of the union is at least the
probability that both pseudo-disks defining $\Mh{p}$ in $\mathcal{A}$ are
sampled into $\Mh{S}$ and the at most $\Mh{k}-2$ remaining objects
containing $\Mh{p}$ are not in $\Mh{S}$. Thus,
\begin{align*}
\Prob{\mathcal{E}_\Mh{p}}
\geq \frac{1}{\Mh{k}^2} \pth{1 - \frac{1}{\Mh{k}}}^{\Mh{k}}
\geq \frac{1}{e^2 \Mh{k}^2},
\end{align*}
since $1 - 1/x \geq e^{-2/x}$ for $x \geq 2$. If
$\cardin{\unionX{\Mh{S}}}$ denotes the number of vertices on the
boundary of the union, then linearity of expectation implies
$\Ex{\cardin{\unionX{\Mh{S}}}} \geq
\cardin{\vDY{\Mh{k}}{\mathcal{A}}}/(e^2 \Mh{k}^2)$. On the other hand,
it is well known that the union complexity of a collection of $n$
pseudo-disks is $O(n)$ \cite{klps-ujrcf-86}. Therefore,
$\Ex{\cardin{\unionX{\Mh{S}}}} \leq \Ex{c \cardin{\Mh{S}}} \leq
cn/\Mh{k}$, for some appropriate constant $c$. Putting both
bounds on $\Ex{\cardin{\unionX{\Mh{S}}}}$ together, it follows that
$cn/\Mh{k} \geq \cardin{\vDY{\Mh{k}}{\mathcal{A}}}/(e^2 \Mh{k}^2) \iff
\cardin{\vDY{\Mh{k}}{\mathcal{A}}} = O(n\Mh{k})$. \end{proof}
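The elementary inequality used in the derivation above can be sanity-checked numerically; a trivial verification, included only because the same bound is reused in the tuple-counting lemma below:

```python
import math

def sampling_bound_holds(k_max=10_000):
    """Checks that (1 - 1/k)**k >= exp(-2) for k >= 2, which follows from
    the inequality 1 - 1/x >= exp(-2/x) for x >= 2 used in the proof."""
    return all((1 - 1 / k) ** k >= math.exp(-2) for k in range(2, k_max))
```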
\begin{lemma}
\lemlab{num:edges:depth}
Let $\Mh{\mathcal{P}}$ be a collection of $n$ pseudo-disks. For two integers
$0 < t \leq \Mh{k}$, a subset $X \subseteq \Mh{\mathcal{P}}$ is a
\emphi{$(t,\Mh{k})$-tuple} if
\begin{compactenumi*}
\item $\cardin{X} \leq t$,
\item $\exists \Mh{p} \in \cap_{\Mh{\mathcalb{d}} \in X} \Mh{\mathcalb{d}}$, and
\item $\depthY{\Mh{p}}{\Mh{\mathcal{P}}} \leq \Mh{k}$.
\end{compactenumi*}
Let $\tuplesZ{t}{\Mh{k}}{n}$ be the set of all $(t,\Mh{k})$-tuples
of $\Mh{\mathcal{P}}$. Then
$\cardin{\tuplesZ{t}{\Mh{k}}{n}} = O(n t \Mh{k}^{t-1})$. \end{lemma} \begin{proof}
Let $\Mh{R} \subseteq \Mh{\mathcal{P}}$ be a random sample, where each
pseudo-disk is independently placed into $\Mh{R}$ with
probability $1/k$. Consider a specific $(t,k)$-tuple $X$, with a
witness point $\Mh{p}$ of depth $\leq k$. Without loss of
generality, by moving $\Mh{p}$, one can assume $\Mh{p}$ is a vertex of
$\ArrX{\Mh{\mathcal{P}}}$.
Let $\mathcal{E}_{X}$ be the event that $\Mh{p}$ is of depth exactly $t$
in $\ArrX{\Mh{R}}$, and $X \subseteq \Mh{R}$. For $\mathcal{E}_{X}$
to occur, all the objects of $X$ need to be sampled into
$\Mh{R}$, and each of the at most $k-t$ pseudo-disks containing
$\Mh{p}$ in its interior are not in $\Mh{R}$. Therefore
\begin{equation*}
\Prob{\mathcal{E}_{X}}
\geq
\frac{\pth{1-1/\Mh{k}}^{\depthY{\Mh{p}}{\Mh{\mathcal{P}}} - |X|}}{k^{|X|}}
\geq
\frac{\pth{1-1/\Mh{k}}^k}{k^{t}}
\geq
\frac{1}{e^2 \Mh{k}^t}.
\end{equation*}
Note that a vertex of depth $\leq \Mh{k}$ in $\ArrX{\Mh{R}}$
corresponds to at most one such event happening. We thus have,
by linearity of expectations, that
\begin{equation*}
\frac{\cardin{\tuplesZ{t}{\Mh{k}}{n}}}{e^2 k^t}
\leq
\Ex{\bigl.\cardin{\vDY{t}{\ArrX{\Mh{R}}}}}
=
O(tn/k),
\end{equation*}
by \lemref{num:vertices:depth}. \end{proof}
\begin{lemma}
\lemlab{num:edges:pdisks}
Let $\Mh{G}_i = (\Mh{\mathcal{P}}_i, E_i)$ be the intersection graph of the
pseudo-disks of $\Mh{\mathcal{P}}_i$ (in the $i$th\xspace iteration). If
$\ArrX{\Mh{\mathcal{P}}_i}$ has maximum depth $\Mh{k}$, then
$\cardin{E_i} = O(n\Mh{k})$. Furthermore,
$\Mh{\alpha}(\Mh{G}_i) = \Omega(n/\Mh{k})$, where $\Mh{\alpha}(\Mh{G}_i)$
denotes the size of the largest independent set in $\Mh{G}_i$. \end{lemma} \begin{proof}
The first claim follows from \lemref{num:edges:depth}.
Indeed, $\cardin{E_i} \leq \cardin{\tuplesZ{2}{\Mh{k}}{n}} = O(n\Mh{k})$ ---
since every intersecting pair of pseudo-disks induces a
corresponding $(2,\Mh{k})$-tuple.
For the second part, Tur\'an\xspace's Theorem states that any graph has an
independent set of size at least $n/\pth{{d}_\mathrm{avg}(\Mh{G}_i) + 1}$,
where ${d}_\mathrm{avg}(\Mh{G}_i) = 2\cardin{E_i}/n \leq c \Mh{k}$ is the
average degree of $\Mh{G}_i$ and $c$ is some constant. It follows
that $\Mh{\alpha}(\Mh{G}_i) \geq n/(c\Mh{k} + 1) = \Omega(n/\Mh{k})$. \end{proof}
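The Tur\'an\xspace-type bound invoked above is constructive: repeatedly picking a minimum-degree vertex and deleting its neighborhood yields an independent set of the guaranteed size. A small sketch (generic graph code, not tied to the pseudo-disk setting):

```python
def greedy_independent_set(n, edges):
    """Min-degree greedy: returns an independent set of size at least
    n / (d_avg + 1), where d_avg = 2|edges|/n (Turan-type bound)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, indep = set(range(n)), []
    while alive:
        # pick a vertex of minimum residual degree, remove it and its neighbors
        v = min(alive, key=lambda u: len(adj[u] & alive))
        indep.append(v)
        alive.difference_update(adj[v] | {v})
    return indep
```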
The challenge in analyzing the greedy algorithm in 3D is that mutual visibility between pairs of points is not necessarily lost as the inner approximation grows. As an alternative, consider the \emph{hypergraph} $\Mh{H}_i = (\Mh{\mathcal{P}}_i, \Mh{\mathcal{E}}_i)$, where a triple of pseudo-disks $\Mh{\mathcalb{d}}_1, \Mh{\mathcalb{d}}_2, \Mh{\mathcalb{d}}_3 \in \Mh{\mathcal{P}}_i$ forms a hyperedge $\brc{\Mh{\mathcalb{d}}_1,\Mh{\mathcalb{d}}_2,\Mh{\mathcalb{d}}_3} \in \Mh{\mathcal{E}}_i$ $\iff$ $\Mh{\mathcalb{d}}_1 \cap \Mh{\mathcalb{d}}_2 \cap \Mh{\mathcalb{d}}_3 \neq \varnothing$ (this is equivalent to the condition that the corresponding triple of points spans a triangle that does not intersect $\Mh{B}_i$).
As in the analysis of the algorithm in 2D, we first bound the number of edges in $\Mh{H}_i$ and then argue that enough progress is made in each iteration.
\begin{lemma}
\lemlab{num:triples:pdisks}
Let $\Mh{H}_i = (\Mh{\mathcal{P}}_i, \Mh{\mathcal{E}}_i)$ be the hypergraph in
iteration $i$, and let $\Mh{G}_i$ be the corresponding
intersection graph of $\Mh{\mathcal{P}}_i$. If $\ArrX{\Mh{\mathcal{P}}_i}$ has
maximum depth $\Mh{k}$, then
$\cardin{\Mh{\mathcal{E}}_i} = O(\Mh{\alpha}(\Mh{G}_i) \Mh{k}^3)$. \end{lemma} \begin{proof}
\lemref{num:edges:pdisks} implies that $\Mh{G}_i$ has an
independent set of size $\Omega(f_i/k)$, where
$f_i = \cardin{\Mh{\mathcal{P}}_i}$. \lemref{num:edges:depth} implies that
$\cardin{\Mh{\mathcal{E}}_i} \leq \cardin{\tuplesZ{3}{\Mh{k}}{f_i}} =
O(f_i\Mh{k}^2) = O(\Mh{\alpha}(\Mh{G}_i) \Mh{k}^3)$. \end{proof}
The following is a consequence of the Colorful Carath\'eodory\xspace Theorem \cite{b-gct-82}, see Theorem 9.1.1 in \cite{m-ldg-02}.
\begin{theorem}
\thmlab{cpnt:many:simplices}
Let $\Mh{P}$ be a set of $n$ points in $\mathbb{R}^d$ and $\Mh{\mathcalb{c}}$ be the
centerpoint of $\Mh{P}$. Let $\Mh{S} = \binom{\Mh{P}}{d+1}$ be the
set of all simplices induced by $d+1$ points of $\Mh{P}$. Then for
sufficiently large $n$, the number of simplices in $\Mh{S}$ that
contain $\Mh{\mathcalb{c}}$ in their interior is at least $c_d n^{d+1}$, where
$c_d$ is a constant depending only on $d$. \end{theorem}
Next, we argue that in each iteration of the greedy algorithm, a constant fraction of the edges in $\Mh{H}_i$ are removed. The following is the higher dimensional version of \lemref{segments:intersect}.
\begin{lemma}
\lemlab{cpnt:many:simplex:faces}
Let $\Mh{P}$ be a set of $n$ points in $\mathbb{R}^3$ lying above the
$xy$-plane, $\Mh{\mathcalb{c}}$ be the centerpoint of $\Mh{P}$ and
$T = \binom{\Mh{P}}{3}$ be the set of all triangles induced by
$\Mh{P}$. Next, consider any point $\Mh{r}$ on the $xy$-plane. Then
the segment $\Mh{\mathcalb{c}}\Mh{r}$ intersects at least $\Omega(n^3)$
triangles of $T$. \end{lemma} \begin{proof}
Let $\Mh{S} = \binom{\Mh{P}}{4}$ be the set of all simplices
induced by $\Mh{P}$. \thmref{cpnt:many:simplices} implies that the
centerpoint $\Mh{\mathcalb{c}}$ is contained in at least $n^4/c_1$ simplices of
$\Mh{S}$, for some constant $c_1 > 1$. Let $K$ be a
simplex that contains $\Mh{\mathcalb{c}}$ and observe the segment $\Mh{\mathcalb{c}}\Mh{r}$
must intersect at least one of the triangular faces $\tau$ of
$K$. As $K \in \Mh{S}$, charge this simplex
$K$ to the triangular face $\tau$. Applying this counting
to all the simplices containing $\Mh{\mathcalb{c}}$, implies that at least
$n^4/c_1$ charges are made. On the other hand, a triangle
$\tau$ can be charged at most $n-3$ times (because a charging simplex
is formed from $\tau$ and one additional point of
$\Mh{P}$). It follows that $\Mh{\mathcalb{c}}\Mh{r}$ intersects at least
$(n^4/c_1)/ (n-3) = \Omega(n^3)$ triangles of $T$. \end{proof}
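The charging argument can again be checked by brute force on small inputs. The sketch below (our own helper names; exact sign tests, assuming general position) counts the triangles properly crossed by the segment $\Mh{\mathcalb{c}}\Mh{r}$:

```python
import itertools

def orient3d(a, b, c, d):
    """Sign of det(b-a, c-a, d-a): which side of the plane abc the point d is on."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return (det > 0) - (det < 0)

def segment_crosses_triangle(p, q, a, b, c):
    """True if the open segment pq properly crosses the triangle abc."""
    s1, s2 = orient3d(a, b, c, p), orient3d(a, b, c, q)
    if s1 == 0 or s2 == 0 or s1 == s2:   # endpoints not strictly on opposite sides
        return False
    t1 = orient3d(p, q, a, b)
    t2 = orient3d(p, q, b, c)
    t3 = orient3d(p, q, c, a)
    return t1 == t2 == t3 and t1 != 0    # segment pierces the triangle's interior

def crossed_triangles(points, c, r):
    """Number of triangles induced by points that the segment cr crosses."""
    return sum(1 for t in itertools.combinations(points, 3)
               if segment_crosses_triangle(c, r, *t))
```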
\begin{lemma}
\lemlab{depth:reduce:3d}
In each iteration of the greedy algorithm, the number of edges in
the hypergraph $\Mh{H}_i = (\Mh{\mathcal{P}}_i, \Mh{\mathcal{E}}_i)$ decreases by at
least $\Omega(\Mh{k}^3)$, where $\Mh{k}$ is the maximum depth of
any point in $\ArrX{\Mh{\mathcal{P}}_i}$. \end{lemma} \begin{proof}
Recall that $\Mh{U}^+ = \Mh{U}_i \cap \Mh{\mathcalb{e}}^+$ is the current set
of unclassified points and $\Mh{\mathcalb{e}}$ is the plane tangent to
$\Mh{B}_i$, where $\Mh{\mathcalb{e}}^+$ is the closed halfspace that avoids
the interior of $\Mh{B}_i$ and contains the largest number of
unlabeled points. Note that $\cardin{\Mh{U}^+} \geq \Mh{k}$.
In a \AlgorithmI{remove}\xspace{} operation, arguing as in \lemref{depth:reduce}
shows that the number of points of $\Mh{U}^+$ that are discarded
is at least $t_i \geq \Mh{k}/4$. Since all of the discarded points are in
a halfspace avoiding $\Mh{B}_i$, it follows that all the triples
they induce are in $\Mh{H}_i$. Namely, at least
$\binom{t_i}{3} = \Omega(k^3)$ hyperedges get discarded.
In an \AlgorithmI{expand}\xspace{} operation, the centerpoint $\Mh{\mathcalb{c}}$ of $\Mh{U}^+$
is added to the current inner approximation $\Mh{B}_i$. Since all
of the points of $\Mh{U}^+$ lie above the plane $\Mh{\mathcalb{e}}$, applying
\lemref{cpnt:many:simplex:faces} on $\Mh{U}^+$ with the centerpoint
$\Mh{\mathcalb{c}}$ and a point lying on the plane $\Mh{\mathcalb{e}}$ inside the
(updated) inner approximation, we deduce that at least
$\Omega(\Mh{k}^3)$ hyperedges are removed. \end{proof}
\begin{theorem}
\thmlab{greedy-method-3d}
Let $\Mh{C} \subseteq \mathbb{R}^3$ be a convex body provided via a
separation oracle, and let $\Mh{P}$ be a set of $n$ points in
$\mathbb{R}^3$. The greedy classification algorithm performs
${O\bigl((\indexX{\Mh{P}}+1) \log n\bigr)}$ oracle queries. The
algorithm correctly identifies all points in $\Mh{P} \cap \Mh{C}$ and
$\Mh{P} \setminus \Mh{C}$. \end{theorem} \begin{proof}
The proof is essentially the same as that of \thmref{greedy-method}.
Arguing as in \lemref{iterations:empty:inapx} implies that there
are at most $O(\log n)$ iterations (and thus also oracle queries)
in which the inner approximation is empty.
Now consider the hypergraph $\Mh{H}_1 = (\Mh{\mathcal{P}}_1, \Mh{\mathcal{E}}_1)$ at
the start of the algorithm execution. As the algorithm
progresses, both vertices and hyperedges are removed from the
hypergraph. Let $\Mh{H}_i = (\Mh{\mathcal{P}}_i, \Mh{\mathcal{E}}_i)$ denote the
hypergraph in the $i$th\xspace iteration of the algorithm. Recall that
$\Mh{\mathcal{P}}_i$ is a set of pseudo-disks associated with each of the
points yet to be classified. Observe that any independent set of
pseudo-disks in the corresponding {intersection graph} $\Mh{G}_i$
corresponds to an independent set of points with respect to the
inner approximation $\Mh{B}_i$, and as such is a subset of points
in convex position. Therefore, the size of any such independent
set is bounded by $\indexX{\Mh{P}}$.
Let $\Mh{k}_i$ denote the maximum depth of any vertex in the
arrangement $\ArrX{\Mh{\mathcal{P}}_i}$. \lemref{num:triples:pdisks}
implies that
$\cardin{\Mh{\mathcal{E}}_i} = O\pth{\indexX{\Mh{P}} \Mh{k}_i^3}$.
\lemref{depth:reduce:3d} implies that the number of hyperedges in
the $i$th\xspace iteration decreases by at least
$\Omega(\Mh{k}_i^3)$. Namely, after $O( \indexX{\Mh{P}})$
iterations, the maximum depth is halved. It follows that after
$O( \indexX{\Mh{P}} \log n)$ iterations, the maximum depth is zero,
which implies that all the points are classified. Since the
algorithm performs one query per iteration, the claim follows. \end{proof}
\section{An instance-optimal approximation in two dimensions} \seclab{improved:2d}
Before discussing the improved algorithm, we present a lower bound on the number of oracle queries performed by any algorithm that classifies all the given points. We then present the improved algorithm, which matches the lower bound up to a factor of $O(\log^2n)$.
\subsection{A lower bound} \seclab{lower:bound}
Given a set $\Mh{P}$ of points in the plane, and a convex body $\Mh{C}$, the \emphi{outer fence} of $\Mh{P}$ is a closed convex polygon $\Mh{F_{\mathrm{out}}}$ with minimum number of vertices, such that $\Mh{C} \subseteq \Mh{F_{\mathrm{out}}}$ and $\Mh{C} \cap \Mh{P} = \Mh{F_{\mathrm{out}}} \cap \Mh{P}$. Similarly, the \emphi{inner
fence} is a closed convex polygon $\Mh{F_{\mathrm{in}}}$ with minimum number of vertices, such that $\Mh{F_{\mathrm{in}}} \subseteq \Mh{C}$ and $\Mh{C} \cap \Mh{P} = \Mh{F_{\mathrm{in}}} \cap \Mh{P}$. Intuitively, the outer fence separates $\Mh{P} \setminus \Mh{C}$ from $\partial \Mh{C}$, while the inner fence separates $\Mh{P} \cap \Mh{C}$ from $\partial \Mh{C}$. The \emphi{separation price} of $\Mh{P}$ and $\Mh{C}$ is \begin{equation*}
\priceY{\Mh{P}}{\Mh{C}} = \nVX{ \Mh{F_{\mathrm{in}}}} + \nVX{ \Mh{F_{\mathrm{out}}}}, \end{equation*} where $\nVX{F}$ denotes the number of vertices of a polygon $F$. See \figref{easy:not:easy} for an example.
\begin{figure}
\caption{The separation price, for the same point set, is
different depending on how ``tight'' the body is in relation to
the inner and outer point set.}
\end{figure}
\begin{lemma}
\lemlab{lower:bound}
Let $\Mh{C}$ be a convex body provided via a separation oracle,
and let $\Mh{P}$ be a point set in the plane. Any algorithm that
classifies the points of $\Mh{P}$ in relation to $\Mh{C}$, must perform
at least $\priceY{\Mh{P}}{\Mh{C}}$ separation oracle queries. \end{lemma} \begin{proof}
Consider the set $Q$ of queries performed by the optimal algorithm
(for this input), and split it into the points inside and outside
$\Mh{C}$. The set of inside queries, $\Mh{Q_{\mathrm{in}}} = Q \cap \Mh{C}$,
satisfies $\CHX{\Mh{Q_{\mathrm{in}}}} \subseteq \Mh{C}$ and
$\CHX{\Mh{Q_{\mathrm{in}}}} \cap \Mh{P} = \Mh{C} \cap \Mh{P}$ --- otherwise, there
would be a point of $\Mh{C} \cap \Mh{P}$ that is not
classified. Namely, the vertices of $\CHX{\Mh{Q_{\mathrm{in}}}}$ are vertices of a
fence that separates the points of $\Mh{P}$ inside $\Mh{C}$ from the
boundary of $\Mh{C}$. As such, we have that
$\cardin{\Mh{Q_{\mathrm{in}}}} \geq \nVX{\CHX{\Mh{Q_{\mathrm{in}}}}} \geq \nVX{\Mh{F_{\mathrm{in}}}}$.
Similarly, each query in $\Mh{Q_{\mathrm{out}}} = Q \setminus \Mh{Q_{\mathrm{in}}}$ gives rise to
a separating halfplane. The intersection of the corresponding
halfplanes is a convex polygon $H$ that contains $\Mh{C}$, and
furthermore contains no point of $\Mh{P} \setminus \Mh{C}$. Namely,
the boundary of $H$ behaves like an outer fence. As such, we have
$\cardin{\Mh{Q_{\mathrm{out}}}} \geq \nVX{H} \geq \nVX{\Mh{F_{\mathrm{out}}}}$.
Combining, we have that
$\cardin{Q} = \cardin{\Mh{Q_{\mathrm{in}}}} + \cardin{\Mh{Q_{\mathrm{out}}}} \geq \nVX{\Mh{F_{\mathrm{in}}}} +
\nVX{\Mh{F_{\mathrm{out}}}} = \priceY{\Mh{P}}{\Mh{C}}$, as claimed. \end{proof}
\begin{remarks} \begin{compactenumi} \item Naturally the separation price, and thus the proof of the lower bound, generalizes to higher dimensions. See \defref{lower:bound:high} and \lemref{lower:bound:high}.
\item The lower bound is meaningful only for $d \geq 2$. In 1D, the problem can be solved using $O(\log n)$ queries via binary search, while the above would predict that any algorithm needs $\Omega(1)$ queries. However, it is not hard to argue a stronger lower bound of $\Omega(\log n)$.
\item In \apndref{ex:sep:pr}, we show that when $\Mh{P}$ is a set of $n$ points chosen uniformly at random from a square and $\Mh{C}$ is a smooth convex body, $\Ex{\priceY{\Mh{P}}{\Mh{C}}} = O(n^{1/3})$. Thus, when the points are randomly chosen, one can think of $\priceY{\Mh{P}}{\Mh{C}}$ as growing sublinearly in $n$.
\end{compactenumi} \end{remarks}
\subsection{Useful operations} \seclab{useful}
We start by presenting some basic operations that the new algorithm will use.
\subsubsection{A directional climb} \seclab{dir:climb}
Given a direction $v$, a \emphi{directional climb} is a sequence of iterations, where in each iteration, the algorithm finds the extreme line $\ell$ perpendicular to $v$ that is tangent to the inner approximation $\Mh{B}$. The algorithm then performs an iteration with $\ell$, as described in \secref{round}, which we now recall. Specifically, the algorithm computes the centerpoint $\Mh{q}$ of all points in the halfspace bounded by $\ell$ that avoids $\Mh{B}$. Depending on whether $\Mh{q} \in \Mh{C}$, we either perform $\AlgorithmI{expand}\xspace{}(\Mh{q})$ or $\AlgorithmI{remove}\xspace{}(\ell)$ (see \secref{operations}). We then classify points accordingly and recompute $\ell$ with the updated inner approximation $\Mh{B}$. See \figref{directional:climb} for an illustration. The directional climb ends when the outer halfspace induced by this line contains no unclassified point.
\begin{lemma}
\lemlab{dir:climb}
A directional climb requires $O( \log n)$ oracle queries. \end{lemma} \begin{proof}
Consider the tangent line to $\Mh{B}$ in the direction of $v$, and the set of points in the halfplane beyond it. At each iteration, we claim that this set shrinks to at most a $2/3$ fraction of its size. Indeed, if the query point (i.e., the centerpoint) is outside $\Mh{C}$, then at least a third of these points get classified as being outside. Alternatively, the query point is inside $\Mh{C}$, and then the tangent halfplane moves in the direction of $v$. But then the new halfplane contains at most a $2/3$ fraction of the previous point set --- again, by the centerpoint property. \end{proof}
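The geometric decay in the proof already pins down the iteration count. The following is a small numeric sketch (the shrink factor is a parameter; $2/3$ is the centerpoint guarantee) counting how many iterations a climb can take in the worst case.

```python
def climb_iterations(n, shrink=2/3):
    """Count the iterations of a directional climb in the worst case:
    each iteration keeps at most a `shrink` fraction of the points lying
    in the extreme tangent halfplane (the centerpoint property)."""
    steps, count = 0, n
    while count >= 1:
        count *= shrink
        steps += 1
    return steps
```

The count is roughly $\log_{3/2} n$, matching the $O(\log n)$ bound of the lemma.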
\subsubsection{Line cleaning}
\begin{figure}
\caption{Unclassified points and their pockets.}
\end{figure}
A \emphi{pocket} is a connected region of $\CHX{\Mh{U} \cup \Mh{B}} \setminus \Mh{B}$, see \figref{pockets}. For the set $\Mh{P}$ of input points, consider the set of all lines \begin{equation}
\LinesX{\Mh{P}}
=
\Set{\mathrm{line}(\Mh{p}, \Mh{r})}{\Mh{p},\Mh{r} \in \Mh{P}}
\eqlab{lines:x} \end{equation} they span.
Let $\Mh{\mathcalb{l}}$ be a line that splits a pocket $\Mh{\Upsilon}$ into two regions, and furthermore, it intersects $\Mh{B}$. Let $\Mh{I} = \Mh{\mathcalb{l}} \cap \Mh{\Upsilon}$, and consider all the intersection points of interest along $\Mh{I}$ in this pocket. That is, \begin{equation*}
\IPSetZ{\Mh{\Upsilon}}{\Mh{\mathcalb{l}}}{\Mh{P}}
=
\Mh{I} \cap \LinesX{\Mh{P}}
=
\Set{\bigl. (\Mh{\Upsilon} \cap \Mh{\mathcalb{l}}) \cap \Mh{\mathcalb{h}} }{
\Mh{\mathcalb{h}} \in \LinesX{\Mh{P}}}. \end{equation*} In words, we take all the pairs of points of $\Mh{P}$ (each such pair induces a line) and we compute the intersection points of these lines with the interval $\Mh{I}$ of interest. Ordering the points of this set along $\Mh{\mathcalb{l}}$, a prefix of them lies in $\Mh{C}$, while the corresponding suffix lies outside $\Mh{C}$. One can easily compute this prefix/suffix split by a binary search, using the separation oracle for $\Mh{C}$ --- see the lemma below for details. Each answer received from the oracle is used to update the point set, using \AlgorithmI{expand}\xspace{} or \AlgorithmI{remove}\xspace{} operations, as described in \secref{operations}. We refer to this operation along $\Mh{\mathcalb{l}}$ as \emphi{cleaning} the line $\Mh{\mathcalb{l}}$. See \figref{clean}.
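The prefix/suffix structure makes the search along the line elementary. Below is a minimal sketch of this step, assuming the candidate intersection points are already sorted along $\Mh{\mathcalb{l}}$ starting from the endpoint touching the inner approximation, and that `inside` is a hypothetical membership query standing in for the separation oracle (the full procedure of the lemma below also exploits the separating halfplane the oracle returns).

```python
def clean_line(candidates, inside):
    """Classify the candidate points along a splitting line.

    candidates: points sorted along the line, starting from the endpoint
    on the inner approximation B.  A prefix of them lies in C and the
    rest lies outside, so a binary search with O(log |candidates|)
    membership queries finds the split.  Returns (in C, outside C)."""
    lo, hi = 0, len(candidates)
    while lo < hi:
        mid = (lo + hi) // 2
        if inside(candidates[mid]):
            lo = mid + 1      # candidates[: mid + 1] all lie in C
        else:
            hi = mid          # candidates[mid :] all lie outside C
    return candidates[:lo], candidates[lo:]
```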
\begin{lemma}
Given a pocket $\Mh{\Upsilon}$, and a splitting line $\Mh{\mathcalb{l}}$, one can
clean the line $\Mh{\mathcalb{l}}$ --- that is, classify all the points of
$\Mh{\Xi} = \IPSetZ{\Mh{\Upsilon}}{\Mh{\mathcalb{l}}}{\Mh{P}}$ using
$O\pth{ \log n \bigr.}$ oracle queries. By the end of this
process, $\Mh{\Upsilon}$ is replaced by two pockets, $\Mh{\Upsilon}_1$ and $\Mh{\Upsilon}_2$, that do not intersect $\Mh{\mathcalb{l}}$. Either of the pockets $\Mh{\Upsilon}_1$ and $\Mh{\Upsilon}_2$ may be empty. \end{lemma}
\begin{proof}
First, we describe the line cleaning procedure in more detail.
The algorithm maintains, in the beginning of the $i$th\xspace iteration,
an interval $\Mh{J}_i$ on the line $\Mh{\mathcalb{l}}$ containing all the
points of $\Mh{\Xi}$ that are not classified yet. Initially,
$\Mh{J}_1 = \Mh{\Upsilon} \cap \Mh{\mathcalb{l}}$. One endpoint of $\Mh{J}_i$, say $\Mh{p}_i$, lies on $\partial \Mh{B}_i$, and the other, say $\Mh{p}_i'$, lies outside $\Mh{C}$, where $\Mh{B}_i$ is the inner approximation in the beginning of the $i$th\xspace iteration.
In the $i$th\xspace iteration, the algorithm computes the set
$\Mh{\Xi}_i = \Mh{J}_i \cap \Mh{\Xi}$. If this set is empty, then
the algorithm is done. Otherwise, it picks the median point
$\Mh{u}_i$, in the order along $\Mh{\mathcalb{l}}$ in $\Mh{\Xi}_i$, and queries
the oracle with $\Mh{u}_i$. There are two possibilities:
\begin{compactenumA}
\item If $\Mh{u}_i \in \Mh{C}$ then the algorithm sets
$\Mh{\Xi}_{i+1} = \Mh{\Xi}_i \setminus [\Mh{p}_i, \Mh{u}_i)$, and
$\Mh{J}_{i+1} = \Mh{J}_i \setminus [\Mh{p}_i,\Mh{u}_i)$.
\item If $\Mh{u}_i \notin \Mh{C}$, then the oracle provided a
closed halfspace $h^+$ that contains $\Mh{C}$. Let $h^-$ be the
complement open halfspace that contains $\Mh{u}_i$. The
algorithm sets $\Mh{\Xi}_{i+1} = \Mh{\Xi}_{i} \setminus h^-$ and
$\Mh{J}_{i+1} = \Mh{J}_i \cap h^+$.
\end{compactenumA}
This resolves the status of at least half the points in
$\Mh{\Xi}_i$, and shrinks the active interval. The algorithm repeats
this till $\Mh{\Xi}_i$ becomes empty. Since
$\cardin{\Mh{\Xi}} = O(n^2)$, this readily implies that the
algorithm performs $O( \log n)$ iterations.
We now argue that the pocket is split --- that is, $\Mh{\Upsilon}_1$ and
$\Mh{\Upsilon}_2$ do not intersect $\Mh{\mathcalb{l}}$. Assume that it is false, and
let $\Mh{B}'$ be the inner approximation after this procedure is
done. Let $L$ (resp.~$R$) be the points of
$\Mh{U}_\Mh{\Upsilon} = \Mh{U} \cap \Mh{\Upsilon}$
that are unclassified on one side (resp.~other side) of
$\Mh{\mathcalb{l}}$. If the pocket is not split, then there are two points
$\Mh{p} \in L$ and $\Mh{r} \in R$, such that
$\Mh{p}\Mh{r} \cap \Mh{B}' = \emptyset$, and
$\partial \CHX{\Mh{B}' \cup L \cup R}$ intersects $\Mh{\mathcalb{l}}$ at the
point $\Mh{u} = \Mh{p} \Mh{r} \cap \Mh{\mathcalb{l}}$. However, by construction, $\Mh{u} \in \Mh{\Xi}$, and hence $\Mh{u}$ was classified by the cleaning procedure as either inside or outside $\Mh{C}$. If $\Mh{u}$ is outside, then the halfplane $h^-$ that classified it as such must have classified either $\Mh{p}$ or $\Mh{r}$ as being outside $\Mh{C}$, which is a contradiction. The other option is that $\Mh{u}$ is classified as being inside, but then it is in $\Mh{B}'$, which is again a contradiction, as it implies that $\Mh{B}'$ intersects the segment $\Mh{p} \Mh{r}$. \end{proof}
\subsubsection{Vertical pocket splitting} \seclab{pocket:split}
Consider a pocket $\Mh{\Upsilon}$ such that all of its points lie vertically above $\Mh{B}$, and the bottom of $\Mh{\Upsilon}$ is part of a segment of $\partial \Mh{B}$, see \figref{v:pocket}. Such a pocket can be viewed as being defined by an interval on the $x$-axis corresponding to its two vertical walls. Let $\Mh{U}_\Mh{\Upsilon}$ be the set of unclassified points in this pocket. In each iteration, the algorithm computes the centerpoint $\Mh{q}$ of $\Mh{U}_\Mh{\Upsilon}$, and queries the separation oracle for the label of $\Mh{q}$. As long as the query point is outside $\Mh{C}$, the algorithm performs a \AlgorithmI{remove}\xspace{} operation using the returned separating line.
When the oracle returns that the query point $\Mh{q}$ is inside $\Mh{C}$, the algorithm computes the vertical line $\Mh{\mathcalb{l}}_\Mh{q}$ through $\Mh{q}$. The algorithm now performs line cleaning on this vertical line. This operation splits $\Mh{\Upsilon}$ into two sub-pockets. Crucially, since $\Mh{q}$ was a centerpoint for $\Mh{U}_\Mh{\Upsilon}$, the number of points in each of the two sub-pockets is at most $2\cardin{\Mh{U}_\Mh{\Upsilon}}/3$. See \figref{v:pocket}.
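A simplified stand-in for the splitting step: cutting the pocket at the median $x$-coordinate of its unclassified points leaves at most half of them on each side (the planar centerpoint used by the actual algorithm guarantees a $2/3$ fraction, and additionally serves as the query point itself).

```python
def split_at_median(points):
    """Cut a vertical pocket at the median x-coordinate of its
    unclassified points, so each side keeps at most half of them."""
    xs = sorted(p[0] for p in points)
    cut = xs[len(xs) // 2]          # stand-in for the centerpoint's x
    left = [p for p in points if p[0] < cut]
    right = [p for p in points if p[0] > cut]
    return cut, left, right
```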
\begin{figure}
\caption{Vertical pocket splitting. In this example, the
centerpoint $\Mh{q}$ lies inside $\Mh{C}$. Thus we construct the
vertical line $\ell_\Mh{q}$ through $\Mh{q}$ (left). Next, we
perform a line cleaning operation on $\ell_\Mh{q}$. This splits
the original pocket $\Mh{\Upsilon}$ into two new pockets $\Mh{\Upsilon}_1$,
$\Mh{\Upsilon}_2$, while classifying some points in the process
(right). Observe that the unclassified points in $\Mh{\Upsilon}_1$
and $\Mh{\Upsilon}_2$ are no longer mutually visible to each other
after the line cleaning operation.}
\end{figure}
\subsection{The algorithm} \seclab{improved:alg}
The algorithm starts in the same way as the greedy algorithm of \secref{round}, which we restate for convenience. Recall that $\Mh{U}$ is the set of unclassified points (initially $\Mh{U} = \Mh{P}$). At all times, the algorithm maintains the inner approximation $\Mh{B} \subseteq \Mh{C}$. At the beginning, $\Mh{B}$ is uninitialized. The algorithm computes the centerpoint $\Mh{q}$ of $\Mh{U}$ and queries the oracle for the label of $\Mh{q}$. While $\Mh{q}$ is outside, we classify the appropriate set of points as outside (according to the separating hyperplane returned from the oracle), update $\Mh{U}$, and repeat. As soon as the computed centerpoint $\Mh{q}$ lies in $\Mh{C}$, we set $\Mh{B} = \Mh{q}$ and continue to the next stage.
Next, the algorithm performs two directional climbs (\secref{dir:climb}) in the positive and negative directions of the $x$-axis. This uses $O( \log n)$ oracle queries by \lemref{dir:climb} and results in a computed segment $\Mh{v} \Mh{v}' \subseteq \Mh{C}$, where $\Mh{v}, \Mh{v}'$ are vertices of the inner approximation $\Mh{B}$, such that all unclassified points lie in the strip induced by the vertical line through $\Mh{v}$ and the vertical line through $\Mh{v}'$, see also \figref{v:pocket}.
The algorithm now handles all points of $\Mh{U}$ lying above $\Mh{v} \Mh{v}'$ (the points below the line are handled in a similar fashion). Let $\Mh{B}^+$ be the set of vertices of $\Mh{B}$ in the top chain. Note that $\Mh{B}^+$ consists of at most $O(\log n)$ vertices. For each vertex $v$ of $\Mh{B}^+$, the algorithm performs line cleaning on the vertical line going through $v$. This results in $O(\log n)$ vertical pockets, where all vertical lines passing originally through $\Mh{B}^+$ are now clean.
The algorithm repeatedly picks a vertical pocket. If the pocket contains fewer than three points, the algorithm queries the oracle for the classification of these points, and continues to the next pocket. Otherwise, the algorithm performs a vertical pocket splitting operation, as described in \secref{pocket:split}. The algorithm stops when there are no longer any pockets (i.e., all the points above the segment $\Mh{v} \Mh{v}'$ are classified). The algorithm then runs the symmetric procedure below this segment $\Mh{v} \Mh{v}'$.
\subsection{Analysis}
\begin{figure}
\caption{Constructing the polygon $\Mh{\pi}$ from an inner
fence $\Mh{\sigma}$.}
\end{figure}
\begin{lemma}
\lemlab{separate:but}
Given a point set $\Mh{P}$, and a convex polygon $\Mh{\sigma}$ that is
an inner fence for $\Mh{P} \cap \Mh{C}$; that is,
$\Mh{P} \cap \Mh{C} \subseteq \Mh{\sigma} \subseteq \Mh{C}$. Then, there
is a convex polygon $\Mh{\pi}$, such that
\begin{compactenumA}
\item
$\Mh{P} \cap \Mh{C} \subseteq \Mh{\pi} \subseteq \Mh{\sigma}$.
\item $\nVX{\Mh{\pi}} \leq 2\nVX{\Mh{\sigma}}$ (where $\nVX{Q}$
denotes the number of vertices of the polygon $Q$).
\item Every edge of $\Mh{\pi}$ lies on a line of
$\LinesX{\Mh{P}}$, see \Eqref{lines:x}.
\end{compactenumA} \end{lemma} \begin{proof}
Any edge $\Mh{e}$ of $\Mh{\sigma}$ that does not contain any point of
$\Mh{P}$ on it can be moved parallel to itself into the polygon
until it passes through a point of $\Mh{P}$. Next, split the edges
that contain only a single point of $\Mh{P}$, by adding this point
as a vertex.
Consider a vertex $v$ of the polygon that is not in $\Mh{P}$ ---
and consider the two adjacent vertices $u,w$, which must be in
$\Mh{P}$. If $\triangle uvw \setminus uw$ contains no point of $\Mh{P}$,
then we delete $v$ from the polygon and replace it by the edge
$uw$. Otherwise, move $v$ towards $u$, until the edge $vw$ hits a
point of $\Mh{P}$. Next, move $v$ towards $w$, till the edge $vu$
hits a point of $\Mh{P}$. See \figref{inner:fence}.
Repeating this process until every edge contains two points of
$\Mh{P}$ ensures that properties (A) and (C) are met.
Additionally, the number of edges of the new polygon $\Mh{\pi}$
is at most twice the number of edges of $\Mh{\sigma}$,
implying property (B). \end{proof}
Consider the inner and outer fences $\Mh{F_{\mathrm{in}}}$ and $\Mh{F_{\mathrm{out}}}$ of $\Mh{P}$ in relation to $\Mh{C}$. Applying \lemref{separate:but} to $\Mh{F_{\mathrm{in}}}$ results in a convex polygon $\Mh{\pi}$ that separates $\Mh{P} \cap \Mh{C}$ from $\partial \Mh{C}$ and has at most $2 \nVX{\Mh{F_{\mathrm{in}}}}$ vertices. Let $\Mh{{V}}$ be the set of all vertices of the polygons $\Mh{F_{\mathrm{in}}}, \Mh{F_{\mathrm{out}}}$ and $\Mh{\pi}$.
The following two lemmas state that if a vertical pocket $\Mh{\Upsilon}$ contains no vertex of $\Mh{{V}}$, then all points in $\Mh{\Upsilon}$ can be classified using $O(\log n)$ oracle queries. Finally, we analyze the scenario where $\Mh{\Upsilon}$ contains at least one vertex of $\Mh{{V}}$.
\begin{lemma}
\lemlab{no:vertex:outside}
Let $\Mh{\Upsilon}$ be a vertical pocket created during the algorithm with
current inner approximation $\Mh{B}$. Suppose that
$\Mh{{V}} \cap \Mh{\Upsilon} = \varnothing$, then all points in
$\Mh{P} \cap \Mh{\Upsilon}$ are outside $\Mh{C}$. \end{lemma} \begin{proof}
Assume without loss of generality that $\Mh{\Upsilon}$ lies above $\Mh{B}$.
Let $\Mh{U} = \Mh{P} \cap \Mh{\Upsilon}$ be the set of unclassified points in
the pocket. Note that $\Mh{\Upsilon}$ is bounded by two vertical lines that
were previously cleaned.
By assumption, $\Mh{\Upsilon}$ does not contain any vertex of $\Mh{\pi}$.
It follows that there is a single edge of $\Mh{\pi}$ that
intersects the two vertical lines bounding $\Mh{\Upsilon}$. Let
$\Mh{u}_L, \Mh{u}_R$ be these two intersection points, one lying on
each line. By definition, we have $\Mh{u}_L, \Mh{u}_R \in \Mh{C}$.
Furthermore, $\Mh{u}_L, \Mh{u}_R$ lie on lines of $\LinesX{\Mh{P}}$ by
construction of $\Mh{\pi}$. Since both vertical lines bounding
$\Mh{\Upsilon}$ were cleaned, it must be that the segment
$\Mh{u}_L \Mh{u}_R \subseteq \Mh{B}$. Since all points of $\Mh{U}$ are
above $\Mh{B}$, this implies that $\Mh{U}$ lies above
$\Mh{u}_L \Mh{u}_R$ and thus above $\Mh{\pi}$. Namely, all points of
$\Mh{U}$ are outside $\Mh{C}$. \end{proof}
\begin{lemma}
\lemlab{no:vertex:classify} Let $\Mh{\Upsilon}$ be a vertical pocket with
$\Mh{{V}} \cap \Mh{\Upsilon} = \varnothing$. Then during the vertical pocket
splitting operation of \secref{pocket:split} applied to $\Mh{\Upsilon}$,
all oracle queries are outside $\Mh{C}$. In particular, all points of
$\Mh{P} \cap \Mh{\Upsilon}$ are classified after $O(\log n)$ oracle queries. \end{lemma} \begin{proof}
Let $\Mh{U} = \Mh{P} \cap \Mh{\Upsilon}$. By \lemref{no:vertex:outside}, all
points of $\Mh{U}$ lie outside $\Mh{C}$. Assume that the first
statement of the lemma is false, and consider the first query point
$\Mh{q} \in \Mh{C}$, computed as the centerpoint of the current set of
unclassified points $\Mh{U}' \subseteq \Mh{U}$. Now, $\Mh{q}$ is inside a
triangle induced by three points of $\Mh{U}'$. Namely, there are (at
least) two points outside $\Mh{C}$ in this pocket that are not
mutually visible to each other with respect to $\Mh{C}$. But this implies
that $\Mh{F_{\mathrm{out}}}$ must have a vertex somewhere inside the vertical pocket
$\Mh{\Upsilon}$, which is a contradiction.
Hence, all oracle queries made by the algorithm are outside $\Mh{C}$.
Each such query results in a constant-factor reduction in the size of
$\Mh{U}$, since the query point is a centerpoint of the unclassified
points. It follows that after $O(\log\cardin{\Mh{U}}) = O(\log n)$
queries, all points in $\Mh{\Upsilon}$ are classified. \end{proof}
\begin{theorem}
\thmlab{improved:alg:2d}
Let $\Mh{C}$ be a convex body provided via a separation oracle, and
let $\Mh{P}$ be a set of $n$ points in the plane. The improved
classification algorithm performs
\begin{math}
O\pth{ \bigl[ 1 +\priceY{\Mh{P}}{\Mh{C}}\bigr] \log^2 n}
\end{math}
oracle queries. The algorithm correctly identifies all points
in $\Mh{P} \cap \Mh{C}$ and $\Mh{P} \setminus \Mh{C}$. \end{theorem} \begin{proof}
The initial stage involves two directional climbs and $O( \log n)$
line cleaning operations, and thus requires $O( \log^2 n)$
queries.
A vertical pocket that contains a vertex of $\Mh{{V}}$ is charged
arbitrarily to any such vertex. Since the number of points in a
pocket shrinks by at least a third during a split operation, a
vertex of $\Mh{{V}}$ is charged at most $O(\log n)$ times. Each time a
vertex gets charged, it has to pay for the $O(\log n)$ oracle
queries that were issued in the process of creating this pocket,
and later on for the price of splitting it. Thus, we only have to
account for queries performed in vertical pockets that do not
contain a vertex of $\Mh{{V}}$. By \lemref{no:vertex:classify}, such a
pocket has all the points inside it classified after $O(\log n)$
oracle queries. The above implies that there are at most
$O([1+\priceY{\Mh{P}}{\Mh{C}}] \log n)$ vertical pockets with no
vertex of $\Mh{{V}}$ throughout the algorithm execution.
Since handling such a pocket requires $O( \log n)$ queries, the
bound follows. \end{proof}
\section{On emptiness variants in two dimensions} \seclab{emptiness:2d}
Here, we present two instance-optimal approximation algorithms for solving the following two variants: \begin{compactenumA}
\item Emptiness: Find a point $\Mh{p} \in \Mh{P} \cap \Mh{C}$, or
using as few queries as possible, verify that
$\Mh{P} \cap \Mh{C} = \varnothing$.
\item Reverse emptiness: Find a point
$\Mh{p} \in \Mh{P} \setminus (\Mh{P} \cap \Mh{C})$, or using as few
queries as possible, verify that $\Mh{P} \cap \Mh{C} = \Mh{P}$. \end{compactenumA}
For both variants we present an $O( \log n )$ approximation (the algorithm for emptiness is randomized), improving over the general approximation algorithm of \secref{improved:2d}, which provides an $O( \log^2 n)$ approximation.
\subsection{Emptiness: Are all the points outside?}
Here we consider the problem of verifying that all the given points are outside the convex body.
\myparagraph{Algorithm.} The algorithm is a slight modification of the algorithm of \secref{round}. Recall the two operations \AlgorithmI{expand}\xspace{} and \AlgorithmI{remove}\xspace{} that the algorithm will need (\secref{operations}).
Initially, let $\Mh{U} = \Mh{P}$ be the set of unclassified points. At every round, if the inner approximation $\Mh{B}$ is empty, then the algorithm sets $\Mh{U}^+ = \Mh{U}$. Otherwise, the algorithm picks a line $\Mh{\mathcalb{l}}$ that is tangent to $\Mh{B}$ with the largest number of points of $\Mh{U}$ on the opposite side of $\Mh{\mathcalb{l}}$ from $\Mh{B}$. Let $\Mh{\mathcalb{l}}^-$ and $\Mh{\mathcalb{l}}^+$ be the two closed halfplanes bounded by $\Mh{\mathcalb{l}}$, where $\Mh{B} \subseteq \Mh{\mathcalb{l}}^-$. The algorithm computes the point set $\Mh{U}^+ = \Mh{U} \cap \Mh{\mathcalb{l}}^+$. We have two cases:
\begin{compactenumA}[label=\Alph*.]
\item Suppose $\cardin{\Mh{U}^+}$ is of constant size. The
algorithm queries the oracle for the status of each of these
points. If there exists a point $\Mh{p} \in \Mh{U}^+$ which lies in $\Mh{C}$,
then we return $\Mh{p}$ as the witness. Otherwise, for each
$\Mh{p} \in \Mh{U}^+$ we receive a separating line $\Mh{\mathcalb{l}}_\Mh{p}$ from the
oracle, and the algorithm executes $\AlgorithmI{remove}\xspace{}(\Mh{\mathcalb{l}}_\Mh{p})$.
\item Otherwise, $\cardin{\Mh{U}^+}$ is not of constant size.
The algorithm chooses a point $\Mh{q} \in \Mh{U}^+$ at random
and queries the oracle using $\Mh{q}$. If $\Mh{q} \in \Mh{C}$,
we return $\Mh{q}$ as the witness. Otherwise, we perform a
\AlgorithmI{remove}\xspace{} operation on the separating line returned.
Next, we compute the centerpoint $\Mh{q}$ of $\Mh{U}^+$ and
query the oracle for the label of $\Mh{q}$. Depending on the
label of $\Mh{q}$, the algorithm either executes
$\AlgorithmI{expand}\xspace{}(\Mh{q})$ or $\AlgorithmI{remove}\xspace{}(\ell)$, where $\ell$
is the separating line in the instance that $\Mh{q} \not\in \Mh{C}$. \end{compactenumA}
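To make the sample-and-remove loop concrete, here is a stripped-down sketch for the special case where $\Mh{C}$ is a disk of radius $r$ centered at the origin. The tangent-line selection and the centerpoint step are omitted, so this illustrates only the mechanics of the \AlgorithmI{remove}\xspace{} operation, not the query bound proved below.

```python
import random

def emptiness_disk(points, r, seed=0):
    """Verify that no point lies in C, or find a witness, for C = the
    disk of radius r centered at the origin.  An outside query q yields
    the separating halfplane {x : <x, q/|q|> <= r} containing C; the
    remove step discards every point beyond it (including q itself)."""
    rng = random.Random(seed)
    unclassified = list(points)
    while unclassified:
        q = rng.choice(unclassified)
        if q[0] * q[0] + q[1] * q[1] <= r * r:
            return q                        # witness: a point inside C
        norm = (q[0] * q[0] + q[1] * q[1]) ** 0.5
        u = (q[0] / norm, q[1] / norm)      # unit outward normal at q
        unclassified = [p for p in unclassified
                        if p[0] * u[0] + p[1] * u[1] <= r]
    return None                             # verified: no point of P in C
```

Note that a point inside the disk is never discarded, so a witness, if one exists, is eventually sampled and returned.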
\myparagraph{Analysis.} Let $\Mh{G}_i$ be the visibility graph (see \defref{visi:graph}) over the points outside $\Mh{C}$ in the beginning of the $i$th\xspace iteration. We need the following technical lemma.
\begin{lemma}
\lemlab{indep:fin}
Suppose $\Mh{P} \cap \Mh{C} = \varnothing$. Then at any iteration
$i$, the largest independent set in the visibility graph $\Mh{G}_i$
is at most $\cardin{\Mh{F_{\mathrm{out}}}}$. \end{lemma} \begin{proof}
For the body $\Mh{C}$ and point set $\Mh{P}$, define the set
$\Mh{R} \subseteq \Mh{P}$ to be the maximum set of points such that
no two points in $\Mh{R}$ are visible with respect to $\Mh{C}$.
Observe that $\Mh{R}$ corresponds to the maximum independent set
in the visibility graph for $\Mh{P}$ with respect to the body
$\Mh{C}$. We claim $\cardin{\Mh{R}} \leq \cardin{\Mh{F_{\mathrm{out}}}}$. Suppose
that $\cardin{\Mh{R}} > \cardin{\Mh{F_{\mathrm{out}}}}$. Given the polygon
$\Mh{F_{\mathrm{out}}}$, for each edge $e$ of $\Mh{F_{\mathrm{out}}}$ consider the line $\Mh{\mathcalb{l}}_e$
through $e$ and let $\Mh{\mathcalb{h}}_e^+$ be the halfspace bounded by
$\Mh{\mathcalb{l}}_e$ that does not contain $\Mh{C}$ in its interior. Then
$\Set{\Mh{\mathcalb{h}}_e^+}{e \in \Mh{F_{\mathrm{out}}}}$ covers the space
$\mathbb{R}^2 \setminus \intX{\Mh{C}}$. By the hypothesis, one halfspace
$\Mh{\mathcalb{h}}_e^+$ must contain at least two points of $\Mh{R}$, by the
pigeonhole principle. But then these two points are mutually visible
with respect to $\Mh{C}$, contradicting the definition of $\Mh{R}$.
We know that the size of the largest independent set (with respect
to the current inner approximation $\Mh{B}_i$) is monotone
increasing over the iterations. Hence each independent set can be
of size at most $\cardin{R} \leq \cardin{\Mh{F_{\mathrm{out}}}}$. \end{proof}
\begin{lemma}
\lemlab{greedy-method-empty}
Let $\Mh{C}$ be a convex body provided via a separation oracle, and
let $\Mh{P}$ be a set of $n$ points in the plane. The randomized greedy
classification algorithm for emptiness performs
${O\bigl((\cardin{\Mh{F_{\mathrm{out}}}}+1) \log n\bigr)}$ oracle queries
with high probability. The algorithm always correctly
verifies that $\Mh{P} \cap \Mh{C} = \varnothing$ or
finds a witness point of $\Mh{P}$ inside $\Mh{C}$. \end{lemma} \begin{proof}
Suppose $\Mh{P} \cap \Mh{C} = \varnothing$. Then \lemref{indep:fin}
along with the proof of \thmref{greedy-method} implies the result,
by replacing the quantity $\indexX{\Mh{P}}$ with $\cardin{\Mh{F_{\mathrm{out}}}}$.
If $\Mh{P} \cap \Mh{C} \neq \varnothing$, let $\Mh{U}^+$ be the set of
points considered in the current iteration,
$\USetin{+} = \Mh{U}^+ \cap \Mh{C}$, and
$\USetout{+} = \Mh{U}^+ \setminus \USetin{+}$. Observe that
$\USetin{+}$ remains the same throughout the algorithm execution,
while $\USetout{+}$ shrinks. If
$\cardin{\USetout{+}} > \cardin{\Mh{U}^+}/2$, then by
\lemref{depth:reduce} the number of edges removed from $\Mh{G}_i$
is $\Omega\pth{\cardin{\USetout{+}}^2}$ (though the hidden
constants will be smaller). Thus, after at most
$O\bigl((\cardin{\Mh{F_{\mathrm{out}}}}+1)\log n \bigr)$ iterations, we must
encounter an iteration in which there is a set of points
$\Mh{U}^+$ with $\cardin{\USetout{+}} < \cardin{\Mh{U}^+}/2$. Now
the probability that our randomly sampled point lies in
$\USetin{+}$ is at least 1/2. In particular, after an additional
$O(\log n)$ iterations, the probability that we fail to find a
witness point is at most $1/n^{\Omega(1)}$, thus implying the bound
on the number of queries. \end{proof}
\subsection{Reverse emptiness: Are all the points inside?}
Here we consider the problem of verifying that all the given points are inside the convex body.
\subsubsection{Algorithm}
\myparagraph{Initialization.} Let $\mathcal{D} = \CHX{\Mh{P}}$. Define $\Mh{v}, \Mh{v}' \in \Mh{P}$ to be the leftmost and rightmost vertices of $\mathcal{D}$. For the sake of exposition, by a rotation of the space, we assume without loss of generality that the segment $\Mh{v} \Mh{v}'$ is parallel to the $x$-axis. Let $\Mh{v}_1$ and $\Mh{v}_2$ be the vertices adjacent to $\Mh{v}$ on $\mathcal{D}$. Similarly define $\Mh{v}'_1$ and $\Mh{v}'_2$ for $\Mh{v}'$. The algorithm asks the oracle for the status of $\Mh{v}$, $\Mh{v}_1$, $\Mh{v}_2$, $\Mh{v}'$, $\Mh{v}'_1$, and $\Mh{v}'_2$. If any of them is outside, the algorithm halts and reports the witness found. Otherwise, every point lies either above or below the horizontal segment $\Mh{v}\Mh{v}'$. We now describe how to handle the points above $\Mh{v}\Mh{v}'$ (the below case is handled similarly).
Let $\Mh{\ch^+}$ be the polygonal chain that is the portion of $\mathcal{D}$ contained inside the region bounded by the segment $\Mh{v} \Mh{v}'$ and the two vertical lines passing through $\Mh{v}$ and $\Mh{v}'$. Label the edges along $\Mh{\ch^+}$ by $\Mh{f}_1, \ldots, \Mh{f}_k$ clockwise from $\Mh{v}$ to $\Mh{v}'$. For $1 \leq i < j \leq k$, let $\ChRangeY{i}{j}$ be the polygonal chain consisting of the consecutive edges $\Mh{f}_i, \ldots, \Mh{f}_j$. The algorithm now invokes the following recursive procedure.
\myparagraph{Recursive procedure.} A recursive call is described by two indices $(i,j)$, the goal is to verify that all the points of $\Mh{P}$ lying below $\ChRangeY{i}{j}$ are inside $\Mh{C}$.
For a given recursive instance $(i,j)$, the algorithm proceeds as follows. Begin by computing the lines $\Mh{\mathcalb{l}}_i$ and $\Mh{\mathcalb{l}}_j$ through the edges $\Mh{f}_i$ and $\Mh{f}_j$, respectively. Let $\Mh{q} = \Mh{\mathcalb{l}}_i \cap \Mh{\mathcalb{l}}_j$ be the point of intersection. The algorithm asks the oracle for the status of $\Mh{q}$. If $\Mh{q}$ is inside, then all points below $\ChRangeY{i}{j}$ must also be in $\Mh{C}$. The algorithm classifies the appropriate points and returns. Otherwise, $\Mh{q}$ is outside, and the algorithm generates two recursive calls. Let $\ell = \floor{(i + j)/2}$, and let $\Mh{f}_\ell = (x,y)$ be the middle edge of the chain $\ChRangeY{i}{j}$, with endpoints $x$ and $y$. The algorithm queries the oracle with $x$ and $y$. If either $x$ or $y$ is outside, the algorithm returns the appropriate witness found. Otherwise, $x$ and $y$ are both inside, and the algorithm recurses on the instances $(i, \ell)$ and $(\ell, j)$.
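The recursion can be sketched as follows, under simplifying assumptions: the chain vertices `vs` are in strictly convex position (so supporting lines of distinct edges intersect), the chain endpoints are already known to lie in $\Mh{C}$ (as the initialization ensures), and `inside` is a hypothetical membership query standing in for the full separation oracle.

```python
def line_through(p, q):
    # supporting line a*x + b*y = c of the segment pq
    a, b = q[1] - p[1], p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1          # nonzero for non-parallel edge lines
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def verify_below_chain(vs, inside, i=None, j=None):
    """True if every point below the chain vs is certified inside C,
    False once a chain vertex is found outside C."""
    if i is None:
        i, j = 0, len(vs) - 2                 # first/last edge indices
    if i == j:
        return True                           # single edge, endpoints in C
    q = intersect(line_through(vs[i], vs[i + 1]),
                  line_through(vs[j], vs[j + 1]))
    if inside(q):
        return True        # wedge apex in C, so the region below is too
    m = (i + j) // 2                          # middle edge (vs[m], vs[m+1])
    if not (inside(vs[m]) and inside(vs[m + 1])):
        return False                          # witness outside C found
    return (verify_below_chain(vs, inside, i, m)
            and verify_below_chain(vs, inside, m, j))
```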
\subsubsection{Analysis}
The analysis will use the polygon $\Mh{\pi}$, as defined in \lemref{separate:but}, applied to $\Mh{F_{\mathrm{in}}}$. Specifically, it is an inner fence where $\cardin{\Mh{\pi}} = O(\cardin{\Mh{F_{\mathrm{in}}}})$ and every edge of $\Mh{\pi}$ lies on a line of $\LinesX{\Mh{P}}$, see \Eqref{lines:x}. Note that $\mathcal{D} \subseteq \Mh{\pi}$ and every edge of $\mathcal{D}$ lies on a line of $\LinesX{\Mh{P}}$. For each edge $\Mh{e}$ of $\Mh{\pi}$, let $\Mh{\mathcalb{l}}_\Mh{e} \in \LinesX{\Mh{P}}$ be the line containing $\Mh{e}$. We can match every edge $\Mh{e}$ of $\Mh{\pi}$ with the edge $\Mh{f}(\Mh{e})$ of $\mathcal{D}$ that lies on $\Mh{\mathcalb{l}}_\Mh{e}$. If an edge $\Mh{f}$ of $\mathcal{D}$ is matched to some edge of $\Mh{\pi}$, we say that $\Mh{f}$ is \emphi{active}. A recursive call $(i,j)$ is \emphi{alive} if the query $\Mh{q} = \Mh{\mathcalb{l}}_i \cap \Mh{\mathcalb{l}}_j$ generated is outside $\Mh{C}$.
\begin{lemma}
\lemlab{num:recursive:calls}
The number of alive recursive calls at the same recursive depth
is at most $\cardin{\Mh{\pi}} = O(\cardin{\Mh{F_{\mathrm{in}}}})$. \end{lemma} \begin{proof}
Fix an alive recursive call $(i,j)$ with edges
$\Mh{f}_i, \ldots, \Mh{f}_j$ of $\mathcal{D}$. Suppose that none of these
edges are active. Because $\Mh{\pi}$ is an inner fence for $\Mh{P}$
and $\Mh{C}$, there must be a vertex $\Mh{v}$ of $\Mh{\pi}$ lying
on or above the chain $\ChRangeY{i}{j}$. Let $\Mh{e}_1$ and
$\Mh{e}_2$ be the edges adjacent to $\Mh{v}$ in $\Mh{\pi}$. For
$\ell = 1, 2$, consider $\Mh{f}(\Mh{e}_\ell)$, the edge of $\mathcal{D}$
matched to $\Mh{e}_\ell$. Since there are no active edges in
$\ChRangeY{i}{j}$, we have
$\Mh{f}(\Mh{e}_\ell) \not\in \{\Mh{f}_i, \ldots, \Mh{f}_j\}$ for
$\ell = 1, 2$. This readily implies that all vertices of the
polygonal chain $\ChRangeY{i}{j}$ are contained in the wedge
formed by $\Mh{v}$ and the two edges $\Mh{e}_1$ and $\Mh{e}_2$. See
\figref{edges}.
In particular, the query $\Mh{q}$ generated is inside $\Mh{\pi}$,
and thus inside $\Mh{C}$, contradicting that the recursive call is alive.
It follows that each alive recursive call must contain at least
one active edge. The number of active edges is bounded by
$\cardin{\Mh{\pi}}$, implying the result. \end{proof}
\begin{lemma}
\lemlab{reverse:emptiness}
Let $\Mh{C}$ be a convex body provided via a separation oracle, and
let $\Mh{P}$ be a set of $n$ points in the plane. The
classification algorithm for reverse emptiness performs
$O\bigl(\cardin{\Mh{F_{\mathrm{in}}}} \log n\bigr)$ oracle queries.
The algorithm correctly verifies that $\Mh{P} \cap \Mh{C} = \Mh{P}$
or finds a witness point of $\Mh{P}$ outside $\Mh{C}$. \end{lemma} \begin{proof}
Suppose all points of $\Mh{P}$ are inside $\Mh{C}$. By
\lemref{num:recursive:calls}, there are at most $O(\cardin{\Mh{F_{\mathrm{in}}}})$
alive recursive calls at each level of the recursion tree. Since
the depth of the recursion tree is $O(\log n)$, the number of
total alive recursive calls throughout the algorithm is
$O(\cardin{\Mh{F_{\mathrm{in}}}} \log n)$. At each alive recursive call of the
above algorithm, $O(1)$ queries are made. This implies the result.
Otherwise not all points of $\Mh{P}$ are inside $\Mh{C}$. At least
one such point outside of $\Mh{C}$ must be a vertex on the convex
hull $\mathcal{D}$. Hence after at most $O(\cardin{\Mh{F_{\mathrm{in}}}} \log n)$ oracle
queries, this vertex will be queried and found to be outside
$\Mh{C}$. \end{proof}
\section{Application: Minimizing a convex function} \seclab{applications}
Suppose we are given a set of $n$ points $\Mh{P}$ in the plane and a convex function $\Mh{f} : \mathbb{R}^2 \to \mathbb{R}$. Our goal is to compute a point of $\Mh{P}$ realizing $\min_{\Mh{p} \in \Mh{P}} \Mh{f}(\Mh{p})$. Given a point $\Mh{p} \in \mathbb{R}^2$, assuming that we can evaluate $\Mh{f}$ and the derivative of $\Mh{f}$ at $\Mh{p}$ efficiently, we show that the point in $\Mh{P}$ minimizing $\Mh{f}$ can be computed using $O(\indexX{\Mh{P}} \log^2 n)$ evaluations of $\Mh{f}$ or its derivative.
\begin{definition}
Let $\Mh{f} : \mathbb{R}^d \to \mathbb{R}$ be a convex function. For a number
$c \in \mathbb{R}$, define the \emphi{level set of $\Mh{f}$} as
$\LvSetY{\Mh{f}}{c} = \Set{\Mh{p} \in \mathbb{R}^d}{\Mh{f}(\Mh{p}) \leq c}$. Since
$\Mh{f}$ is convex, $\LvSetY{\Mh{f}}{c}$ is a convex set
for all $c \in \mathbb{R}$. \end{definition} \begin{definition}
Let $\Mh{f} : \mathbb{R}^d \to \mathbb{R}$ be a convex (and possibly
non-differentiable) function. For a point $\Mh{p} \in \mathbb{R}^d$, a
vector $v \in \mathbb{R}^d$ is a \emphi{subgradient} of $\Mh{f}$ at $\Mh{p}$
if for all $\Mh{q} \in \mathbb{R}^d$,
$\Mh{f}(\Mh{q}) \geq \Mh{f}(\Mh{p}) + \DotProdY{v}{\Mh{q} - \Mh{p}}$. The
\emphi{subdifferential} of $\Mh{f}$ at $\Mh{p} \in \mathbb{R}^d$, denoted by
$\partial \Mh{f}(\Mh{p})$, is the set of all subgradients $v \in \mathbb{R}^d$
of $\Mh{f}$ at $\Mh{p}$. \end{definition}
It is well known that when the domain of $\Mh{f}$ is $\mathbb{R}^d$ and $\Mh{f}$ is a convex function, then $\partial \Mh{f}(\Mh{p})$ is non-empty for all $\Mh{p} \in \mathbb{R}^d$ (for example, see \cite[Chapter 3]{f-ina-13}).
Let $\alpha = \min_{p \in \Mh{P}} \Mh{f}(p)$. We have that $\LvSetY{\Mh{f}}{\alpha} \cap \Mh{P} = \Set{p \in \Mh{P}}{\Mh{f}(p) = \alpha}$ and $\LvSetY{\Mh{f}}{\alpha'} \cap \Mh{P} = \varnothing$ for all $\alpha' < \alpha$. Hence, the problem is reduced to determining the smallest value $r$ such that $\LvSetY{\Mh{f}}{r} \cap \Mh{P}$ is non-empty.
\begin{lemma}
\lemlab{decision}
Let $\Mh{P}$ be a collection of $n$ points in the plane.
For a given value $r$, let $\Mh{C}_r = \LvSetY{\Mh{f}}{r}$.
The set $\Mh{C}_r \cap \Mh{P}$ can be computed using
$O(\indexX{\Mh{P}} \log n)$ evaluations of $\Mh{f}$ or
its derivative. If $T$ is the time needed to evaluate $\Mh{f}$
or its derivative, the algorithm can be implemented in
$O(n\log^2 n\log\log n + T\cdot \indexX{\Mh{P}} \log n)$ expected
time. \end{lemma} \begin{proof} The lemma follows by applying \thmref{greedy-method}. Indeed, let $\Mh{C}_r = \LvSetY{\Mh{f}}{r}$ be the convex body of interest. It remains to design a separation oracle for $\Mh{C}_r$.
Given a query point $\Mh{q} \in \mathbb{R}^2$, first compute $c = \Mh{f}(\Mh{q})$. If $c \leq r$, then report that $\Mh{q} \in \Mh{C}_r$. Otherwise, $c > r$. In this case, compute a subgradient vector $v \in \partial \Mh{f}(\Mh{q})$. Using the vector $v$, we can obtain a line $\Mh{\mathcalb{l}}$ tangent to the boundary of $\LvSetY{\Mh{f}}{c}$ at $\Mh{q}$. As $\LvSetY{\Mh{f}}{r} \subseteq \LvSetY{\Mh{f}}{c}$, $\Mh{\mathcalb{l}}$ is a separating line for $\Mh{q}$ and $\Mh{C}_r$, as desired. As such, the number of separation oracle queries needed to determine $\Mh{C}_r \cap \Mh{P}$ is bounded by $O(\indexX{\Mh{P}} \log n)$ by \thmref{greedy-method}.
The implementation details of \thmref{greedy-method} are given in \lemref{impl-greedy}. \end{proof}
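The oracle described in the proof can be sketched in a few lines of Python. This is an illustration only: the callables `f` and `subgrad` are assumed to be supplied by the user, and the returned subgradient determines the separating line through the query point.

```python
def separation_oracle(f, subgrad, q, r):
    """Separation oracle for C_r = {x : f(x) <= r}, following the proof above.

    Returns ('inside', None) if q lies in C_r; otherwise ('outside', v),
    where v is a subgradient of f at q, so that C_r is contained in the
    open halfspace {x : <v, x - q> < 0} bounded by the tangent line at q.
    """
    if f(q) <= r:
        return ('inside', None)
    return ('outside', subgrad(q))

# toy example with f(x, y) = x^2 + y^2 and level r = 1 (the unit disk)
f = lambda p: p[0] ** 2 + p[1] ** 2
subgrad = lambda p: (2 * p[0], 2 * p[1])

status, v = separation_oracle(f, subgrad, (2.0, 0.0), 1.0)
assert status == 'outside'
# the origin (a point of C_1) lies strictly on the inner side of the line
assert v[0] * (0.0 - 2.0) + v[1] * (0.0 - 0.0) < 0
```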
\myparagraph{The algorithm.} Let $\alpha = \min_{p \in \Mh{P}} \Mh{f}(p)$. For a given number $r$, set $\Mh{P}_r = \LvSetY{\Mh{f}}{r} \cap \Mh{P}$. We develop a randomized algorithm to compute $\alpha$.
Set $\Mh{P}_0 = \Mh{P}$. In the $i$th\xspace iteration, the algorithm chooses a random point $p_i \in \Mh{P}_{i-1}$ and computes $r_i = \Mh{f}(p_i)$. Next, we determine $\Mh{P}_{r_i}$ using \lemref{decision}. In doing so, we modify the separation oracle of \lemref{decision} to store the collection of queried points $S_i \subseteq \Mh{P}$ that satisfy $\Mh{f}(s) = r_i$ for all $s \in S_i$. We set $\Mh{P}_i = \Mh{P}_{r_i} \setminus S_i$. Observe that all points $p \in \Mh{P}_i$ have $\Mh{f}(p) < r_i$. The algorithm continues in this fashion until we reach an iteration $j$ in which $\cardin{\Mh{P}_j} \leq 1$. If $\Mh{P}_j = \{q\}$ for some $q \in \Mh{P}$, output $q$ as the desired point minimizing $\Mh{f}$. Otherwise $\Mh{P}_j = \varnothing$, implying that $\Mh{P}_{r_j} = S_j$, and the algorithm outputs any point in the set $S_j$.
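The control flow of this iteration can be sketched in Python. In the sketch below, a naive $O(n)$ scan stands in for the decision procedure of \lemref{decision}, so only the random-pivot structure of the algorithm is illustrated, not its query complexity.

```python
import random

def discrete_min(P, f, rng=random):
    """Random-pivot search for a point of P minimizing f.

    The call to the decision procedure is replaced by a naive filter;
    this sketch illustrates the control flow of the random process only.
    """
    cur = list(P)
    while True:
        pivot = rng.choice(cur)
        r = f(pivot)
        S = [p for p in cur if f(p) == r]    # points found on the level set
        nxt = [p for p in cur if f(p) < r]   # P_r minus S
        if len(nxt) <= 1:
            # either a unique strict improver remains, or r is the minimum
            return nxt[0] if nxt else pivot
        cur = nxt

random.seed(1)
P = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
f = lambda p: (p[0] - 0.4) ** 2 + (p[1] + 0.6) ** 2
assert discrete_min(P, f) == min(P, key=f)
```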
\myparagraph{Analysis.} We analyze the running time of the algorithm. To do so, we argue that the algorithm invokes the algorithm in \lemref{decision} only a logarithmic number of times.
\begin{lemma}
\lemlab{jumps}
In expectation, the above algorithm terminates after $O(\log n)$
iterations. \end{lemma} \begin{proof}
Let $V = \Set{\Mh{f}(p)}{p \in \Mh{P}}$ and $N = \cardin{V}$. For a
number $r$, define $V_r = \Set{i \in V}{i \leq r}$. Notice that we
can reinterpret the algorithm described above as the following
random process. Initially set $r_0 = \max_{i \in V} i$. In the
$i$th\xspace iteration, choose a random number $r_i \in
V_{r_{i-1}}$. This process continues until we reach an iteration
$j$ in which $\cardin{V_{r_j}} \leq 1$.
We can assume without loss of generality that
$V = \{1, 2, \ldots, N\}$. For an integer $i \leq N$,
let $T(i)$ be the expected number of iterations needed for the
random process to terminate on the set $\{1, \ldots, i\}$. We have
that $T(i) = 1 + \frac{1}{i-1} \sum_{j=1}^{i-1} T(i-j)$, with
$T(1) = 0$. This recurrence solves to $T(i) = O(\log i)$.
As such, the algorithm repeats this random process
$O(\log N) = O(\log n)$ times in expectation. \end{proof}
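The recurrence can also be evaluated exactly. The following Python sketch (a sanity check, not part of the paper) confirms that it coincides with the harmonic numbers, $T(i) = H_{i-1} = O(\log i)$:

```python
from fractions import Fraction

# Exact evaluation of T(i) = 1 + (1/(i-1)) * sum_{k=1}^{i-1} T(k), T(1) = 0.
N = 200
T = [None, Fraction(0)]          # T[1] = 0
acc = Fraction(0)                # running sum T(1) + ... + T(i-1)
for i in range(2, N + 1):
    acc += T[i - 1]
    T.append(1 + acc / (i - 1))

H = [Fraction(0)]                # harmonic numbers H_0, H_1, ...
for k in range(1, N):
    H.append(H[-1] + Fraction(1, k))

# the recurrence has the exact closed form T(i) = H_{i-1} = O(log i)
assert all(T[i] == H[i - 1] for i in range(1, N + 1))
```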
\begin{lemma}
\lemlab{discrete:min} Let $\Mh{P}$ be a set of $n$ points in $\mathbb{R}^2$
and let $\Mh{f} : \mathbb{R}^2 \to \mathbb{R}$ be a convex function. The point in
$\Mh{P}$ minimizing $\Mh{f}$ can be computed using
$O(\indexX{\Mh{P}} \log^2 n)$ evaluations of $\Mh{f}$ or its
derivative. The bound on the number of evaluations holds in
expectation. If $T$ is the time needed to evaluate $\Mh{f}$ or its
derivative, the algorithm can be implemented in
$O(n\log^3 n\log\log n + T\cdot \indexX{\Mh{P}} \log^2 n)$ expected
time. \end{lemma} \begin{proof}
The result follows by combining \lemref{decision} and
\lemref{jumps}. \end{proof}
\subsection{The discrete geometric median}
Let $\Mh{P}$ be a set of $n$ points in $\mathbb{R}^d$. For $x \in \mathbb{R}^d$, define the function $\Mh{f}(x) = \sum_{q \in \Mh{P}} \normX{x - q}$. The \emphi{discrete geometric median} of $\Mh{P}$ is a point of $\Mh{P}$ attaining $\min_{p \in \Mh{P}} \Mh{f}(p)$.
Note that $\Mh{f}$ is convex, as it is the sum of convex functions. Furthermore, given a point $\Mh{p}$, we can compute $\Mh{f}(\Mh{p})$ and the derivative of $\Mh{f}$ at $\Mh{p}$ in $O(n)$ time. As such, by \lemref{discrete:min}, we obtain the following.
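A minimal Python sketch of the $O(n)$ evaluation of $\Mh{f}$ together with one of its subgradients (the function name is illustrative; at a data point, where $\Mh{f}$ is not differentiable, the term for $q = x$ contributes the zero vector, which is a valid choice):

```python
import math

def eval_f_and_subgrad(x, P):
    """Evaluate f(x) = sum_{q in P} ||x - q|| and one subgradient, in O(n)."""
    total, gx, gy = 0.0, 0.0, 0.0
    for q in P:
        d = math.hypot(x[0] - q[0], x[1] - q[1])
        total += d
        if d > 0:  # each smooth term contributes its gradient (x - q)/||x - q||
            gx += (x[0] - q[0]) / d
            gy += (x[1] - q[1]) / d
        # for q == x, the zero vector is a valid subgradient contribution
    return total, (gx, gy)

# brute-force discrete geometric median on a toy instance
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (5.0, 5.0)]
median = min(P, key=lambda p: eval_f_and_subgrad(p, P)[0])
assert median == (1.0, 0.0)
```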
\begin{lemma}
\lemlab{discrete:med}
Let $\Mh{P}$ be a set of points in $\mathbb{R}^2$. Then
the discrete geometric median of $\Mh{P}$ can be computed in
$O(n\log^2 n \cdot (\log n \log\log n + \indexX{\Mh{P}}))$
expected time. \end{lemma}
\begin{remark}
For a set of $n$ points $\Mh{P}$ chosen uniformly at random from the unit
square, it is known that in expectation $\indexX{\Mh{P}} = \Theta(n^{1/3})$
\cite{ab-lcc-09}.
As such, the discrete geometric median for such a random set $\Mh{P}$
can be computed in $O(n^{4/3} \log^2 n)$ expected time. \end{remark}
\section{Conclusion and open problems}
In this paper we presented various algorithms for classifying points with oracle access to an unknown convex body. As far as the authors are aware, this exact problem has not been studied within the computational geometry community previously. However, since the problem is closely related to active learning and has further applications (such as discretely minimizing a convex function), we believe that this is an interesting problem to study. We now pose some open problems.
\begin{compactenumA}
\item Develop a more natural instance-optimal algorithm in 2D that
improves upon the $O(\log^2 n)$ approximation. Alternatively, develop
algorithms in which the number of queries is parameterized by different
functions of the input instance.
\item An algorithm in 3D that is instance-optimal up to some
additional factors (see the beginning of \apndref{ex:sep:pr} for
the definition of the separation price in higher dimensions).
\item No results beyond three dimensions are known. The greedy
algorithm (\thmref{greedy-method} and \thmref{greedy-method-3d})
easily extends to $\mathbb{R}^d$. However, the analysis in higher dimensions
will most likely reveal that the algorithm makes (ignoring logarithmic
factors) on the order of ${\indexX{\Mh{P}}}^{O(d)}$ queries, which is only
interesting when $\indexX{\Mh{P}}$ is much smaller than $n$.
\item In \lemref{discrete:med} we gave a randomized algorithm for computing
the discrete geometric median in expected time $\widetilde{O}(n \cdot \indexX{\Mh{P}})$
(where $\widetilde{O}$ hides logarithmic factors in $n$). The bottleneck of the
algorithm was in the computation of the function and the gradient, which
naively requires $O(n)$ time. Is it possible to speed up the gradient
computation by introducing additional randomization or optimization
techniques? Improving the running time further is an open problem. \end{compactenumA}
\BibTexMode{
\SubmitVer{
}
\RegVer{
}
}
\BibLatexMode{\printbibliography}
\appendix
\section{Expected separation price for random points} \apndlab{ex:sep:pr}
We first extend the notion of separation price (see \secref{lower:bound}) to higher dimensions. For a closed convex $d$-dimensional polytope $F$, we let $f_k(F)$ denote the number of $k$-dimensional faces of $F$.
\begin{definition}[Separation price in higher dimensions]
\deflab{lower:bound:high}
Let $\Mh{P}$ be a set of points and $\Mh{C}$ be a convex body in
$\mathbb{R}^d$. The inner fence $\Mh{F_{\mathrm{in}}}$ is a closed convex $d$-dimensional
polytope with the minimum number of vertices, such that
$\Mh{F_{\mathrm{in}}} \subseteq \Mh{C}$ and $\Mh{C} \cap \Mh{P} = \Mh{F_{\mathrm{in}}} \cap \Mh{P}$.
Similarly, the outer fence $\Mh{F_{\mathrm{out}}}$ is a closed convex
$d$-dimensional polytope with the minimum number of facets, such
that $\Mh{C} \subseteq \Mh{F_{\mathrm{out}}}$ and $\Mh{C} \cap \Mh{P} = \Mh{F_{\mathrm{out}}} \cap \Mh{P}$.
The separation price is defined as
$\priceY{\Mh{P}}{\Mh{C}} = f_0(\Mh{F_{\mathrm{in}}}) + f_{d-1}(\Mh{F_{\mathrm{out}}})$. \end{definition}
By extending the argument of \lemref{lower:bound} to use \defref{lower:bound:high}, one can prove the following.
\begin{lemma}
\lemlab{lower:bound:high}
Given a point set $\Mh{P}$ and a convex body $\Mh{C}$ in $\mathbb{R}^d$, any
algorithm that classifies the points of $\Mh{P}$ in relation to
$\Mh{C}$, must perform at least $\priceY{\Mh{P}}{\Mh{C}}$ separation
oracle queries. \end{lemma}
Informally, for any fixed convex body $\Mh{C}$ and a set of $n$ points $\Mh{P}$ chosen uniformly at random from the unit cube, the separation price is sublinear (approaching linear as the dimension increases).
\begin{lemma}
\lemlab{ex:sep:pr}
Let $\Mh{P}$ be a set of $n$ points chosen uniformly at random from
the unit cube $[0,1]^d$, and let $\Mh{C}$ be a convex body in
$\mathbb{R}^d$, with $\VolX{\Mh{C}} \geq c$ for some constant $c \leq 1$.
Then $\Ex{\priceY{\Mh{P}}{\Mh{C}}} = O(n^{1 - 2/(d+1)})$, where $O$
hides constants that depend on $d$ and $\Mh{C}$. \end{lemma} \begin{proof}
It is known that for convex bodies $\Mh{C}$, the expected number of
vertices of the convex hull of $\Mh{P} \cap \Mh{C}$ is
$O(n^{1 - 2/(d+1)})$. Indeed, since $\VolX{\Mh{C}} \geq c$, the
expected number of points of $\Mh{P}$ that fall inside $\Mh{C}$ is
$m = \Theta(n)$ (and these bounds hold with high probability by
applying any Chernoff-like bound). It is known that for $m$ points
chosen uniformly at random from $\Mh{C}$, the expected size of the
convex hull of points inside $\Mh{C}$ is
$O(m^{1 - 2/(d+1)}) = O(n^{1 - 2/(d+1)})$ \cite{b-rpcba-07}. This
readily implies that $\Ex{f_0(\Mh{F_{\mathrm{in}}})} = O(n^{1 - 2/(d+1)})$.
To bound $\Ex{f_{d-1}(\Mh{F_{\mathrm{out}}})}$, we apply a result of Dudley
\cite{d-mescs-74} which states the following. Given a convex body
$\Mh{C}$ and a parameter $\varepsilon > 0$, there exists a
convex body $D$, which is a polytope
formed by the intersection of $O(\varepsilon^{-(d-1)/2})$ halfspaces,
such that $\Mh{C} \subseteq D \subseteq (1+\varepsilon)\Mh{C}$, where
$(1+\varepsilon)\Mh{C} = \Set{\Mh{p} \in \mathbb{R}^d}{\exists \Mh{q} \in \Mh{C} : \|
\Mh{p} - \Mh{q} \| \leq \varepsilon}$.
We claim that the number of points of $\Mh{P}$ that fall inside $D
\setminus \Mh{C}$, plus the number of halfspaces defining $D$, is
an upper bound on the size of the outer fence. Indeed, for each
point $\Mh{p}$ that falls inside $D \setminus \Mh{C}$, let
$\Mh{q}$ be its nearest neighbor in $\Mh{C}$ (naturally $\Mh{q}$ lies
on $\BX{\Mh{C}}$). Let $\Mh{\mathcalb{h}}_\Mh{p}$ be the hyperplane that is
perpendicular to the segment $\Mh{p}\Mh{q}$ and passing through the
midpoint of $\Mh{p}\Mh{q}$. Next, let $\Mh{\mathcalb{h}}^+_\Mh{p}$ be the halfspace
bounded by $\Mh{\mathcalb{h}}_\Mh{p}$ such that $\Mh{C} \subseteq \Mh{\mathcalb{h}}^+_\Mh{p}$.
If $H$ is the collection of $O(\varepsilon^{-(d-1)/2})$ halfspaces defining
$D$, then it is easy to see that the polytope defined by
\begin{equation*}
\Bigl({\bigcap_{\Mh{p} \in \Mh{P} \cap (D \setminus \Mh{C})} \Mh{\mathcalb{h}}^+_\Mh{p}}\Bigr)
\hspace{0.6pt}
\bigcap
\,
\Bigl({\bigcap_{\Mh{\mathcalb{h}}^+ \in H} \Mh{\mathcalb{h}}^+}\Bigr)
\end{equation*}
separates the boundary of $\Mh{C}$ from $\Mh{P} \setminus \Mh{C}$ (i.e., it
is an outer fence). See \figref{demonstration}.
We now bound the size of this outer fence. Since
$\VolX{D} - \VolX{\Mh{C}} \leq \VolX{(1+\varepsilon)\Mh{C}} - \VolX{\Mh{C}}
\leq O(\varepsilon)$, we have that
$\Ex{\cardin{\Mh{P} \cap (D \setminus \Mh{C})}} = O(\varepsilon n)$.
Combining both inequalities,
\begin{align*}
\Ex{f_{d-1}(\Mh{F_{\mathrm{out}}})}
\leq \Ex{\cardin{\Mh{P} \cap (D \setminus \Mh{C})}} + O(\varepsilon^{-(d-1)/2})
= O\pth{\varepsilon n + \frac{1}{\varepsilon^{(d-1)/2}}}.
\end{align*}
Choose $\varepsilon = 1/n^{2/(d+1)}$ to balance both terms, so that
$\Ex{f_{d-1}(\Mh{F_{\mathrm{out}}})} = O(n^{1 - 2/(d+1)})$. \end{proof}
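As a sanity check on this choice of $\varepsilon$, the exponents of the two balanced terms indeed agree; a short Python verification (not part of the paper):

```python
# With eps = n^{-2/(d+1)}, the exponents of both terms agree:
#   eps * n        = n^{1 - 2/(d+1)}
#   eps^{-(d-1)/2} = n^{(d-1)/(d+1)} = n^{1 - 2/(d+1)}
for d in range(2, 12):
    lhs = 1.0 - 2.0 / (d + 1)                 # exponent of eps * n
    rhs = (2.0 / (d + 1)) * ((d - 1) / 2.0)   # exponent of eps^{-(d-1)/2}
    assert abs(lhs - rhs) < 1e-12
```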
The next lemma shows that the bound of \lemref{ex:sep:pr} is tight in the worst case.
\begin{lemma}
Let $\Mh{P}$ be a set of $n$ points chosen uniformly at random from
the hypercube $[-2,2]^d$, and let $\Mh{C}$ be a unit radius ball
centered at the origin. Then
$\Ex{\priceY{\Mh{P}}{\Mh{C}}} \geq \Ex{f_0(\Mh{F_{\mathrm{in}}})} = \Omega(n^{1 - 2/(d+1)})$,
where $\Omega$ hides constants depending on $d$. \end{lemma}
\begin{proof}
For a parameter $\delta$ to be chosen, let $Q \subseteq \BX{\Mh{C}}$
be a maximal set of points such that:
\begin{compactenumi}
\item for any $\Mh{p} \in \BX{\Mh{C}}$, there is a point
$\Mh{q} \in Q$ such that $\| \Mh{p} - \Mh{q} \| \leq \delta$, and
\item for any two points
$\Mh{p}, \Mh{q} \in Q$, $\| \Mh{p} - \Mh{q} \| \geq \delta$.
\end{compactenumi}
Note that $\cardin{Q} = \Omega(1/\delta^{d-1})$. For each $\Mh{p}
\in Q$, we let $\gamma_\Mh{p}$ be the spherical cap that is
``centered'' at $\Mh{p}$ (in the sense that the center of the base of
$\gamma_\Mh{p}$, $\Mh{p}$, and the origin are collinear) and has base
radius $2\delta$. Let $\Gamma = \Set{\gamma_\Mh{p}}{\Mh{p} \in Q}$.
By construction, the caps of $\Gamma$ cover the surface of
$\Mh{C}$.
By setting $\delta = 1/n^{1/(d+1)}$, we claim that for
each cap $\gamma \in \Gamma$, in expectation $\Omega(1)$ points of
$\Mh{P}$ fall inside $\gamma$. This implies that there must be a
vertex of the inner fence inside $\gamma$, and this holds for
all caps in $\Gamma$. As such, the size of the inner fence is at
least
$\cardin{Q} = \Omega(1/\delta^{d-1}) = \Omega(n^{1 - 2/(d+1)})$.
To prove the claim, for all $\gamma \in \Gamma$, we show that
$\VolX{\gamma} = \Omega(1/n)$, and hence
$\Ex{\cardin{\Mh{P} \cap \gamma}} = \Omega(1)$. By construction, the
cap has a polar angle of $\theta = \Omega(\delta)$, see
\figref{what:what}. Indeed, we have that
$\theta \geq \sin(\theta) = 2\delta$ for $\theta \in [0,\pi/2]$
(which holds when $n$ is sufficiently large). Let $t$ denote the
distance from the origin to the center of the base of $\gamma$.
Then the height $h$ of the spherical cap is
$h = 1 - t = 1 - \cos(\theta) \geq \theta^2/6 = \Omega(\delta^2)$
(using the inequality $\cos(x) \leq 1 - x^2/6$ for $x \in [0,\pi/2]$). Since the
$(d-1)$-dimensional volume of the base of $\gamma$ is $\Omega(\delta^{d-1})$, we have that
$\VolX{\gamma} = \Omega(h \delta^{d-1}) = \Omega(\delta^{d+1}) =
\Omega(1/n)$, as required. \end{proof}
\end{document}
\begin{document}
\title[On parameterized differential Galois extensions]{On parameterized differential Galois extensions}
\author{Omar Le\'on S\'anchez} \address{Omar Le\'on S\'anchez\\ McMaster University\\ Department of Mathematics and Statistics\\ 1280 Main Street West\\ Hamilton, Ontario \ L8S 4L8\\ Canada} \email{oleonsan@math.mcmaster.ca}
\author{Joel Nagloo} \address{Joel Nagloo\\ Graduate Center\\ Mathematics\\ 365 Fifth Avenue, New York\\ NY 10016-4309} \thanks{Joel Nagloo was supported by NSF grant CCF-0952591.} \email{jnagloo@gc.cuny.edu}
\date{\today}
\pagestyle{plain} \subjclass[2010]{03C60, 12H05} \keywords{parameterized strongly normal extensions, model theory}
\begin{abstract} We prove some existence results on parameterized strongly normal extensions for logarithmic equations. We generalize a result in [Wibmer, {\em Existence of $\partial$-parameterized Picard-Vessiot extensions over fields with algebraically closed constants}, J. Algebra, 361, 2012]. We also consider an extension of the results in [Kamensky and Pillay, {\em Interpretations and differential Galois extensions}, Preprint 2014] from the ODE case to the parameterized PDE case. More precisely, we show that if $\DD$ and $\D$ are two distinguished sets of derivations and $(K^{\DD},\D)$ is existentially closed in $(K,\D)$, where $K$ is a $\DD\cup\D$-field of characteristic zero, then every (parameterized) logarithmic equation over $K$ has a parameterized strongly normal extension.
\end{abstract}
\maketitle
\section{Introduction}
Let $\Pi=\DD\cup\D=\{D_1,\dots,D_r\}\cup \{\delta_{1},\dots,\delta_{m-r}\}$ be a set of commuting derivations, with $m\geq r>0$, and $K$ be a $\Pi$-field of characteristic zero. Consider the (parameterized) system of homogeneous linear differential equations \[D_1Y=A_1Y, \; \dots,\; D_r Y=A_rY, \quad \text{ with $Y$ ranging in GL$_n$},\tag{$\star$}\] where the $A_i$'s are $n\times n$ matrices with entries from the differential field $K$ satisfying the usual integrability condition $$D_i A_j -D_jA_i=[A_i,A_j], \quad \text{ for } i,j=1,\dots,r.$$ Recall that a parameterized Picard-Vessiot (PPV) extension of $K$ for $(\star)$ is a $\Pi$-field extension $L$ of $K$ such that \begin{enumerate} \item $L$ is generated over $K$ by the entries (and all $\Pi$-derivatives) of a matrix solution $Z\in GL_n(L)$ of $(\star)$; in other words, $L=K\gen{Z}_\Pi=K\gen{Z}_\D$, and \item $L^\DD=K^\DD$, that is the field of $\DD$-constants of $L$ is the same as the field of $\DD$-constants of $K$. \end{enumerate}
As an example, take $K=(\mathbb{C}(x,t),\{\frac{\partial}{\partial x},\frac{\partial}{\partial t}\})$, where we think of $\DD$ as $\{\frac{\partial}{\partial x}\}$ and the parametric derivations $\D$ as $\{\frac{\partial}{\partial t}\}$. Let $(\star)$ be \begin{equation}\label{useno} \frac{\partial y}{\partial x}=\frac{t}{x}y. \end{equation} Clearly, $y=x^t$ is a solution, and since $\frac{\partial}{\partial t}(x^t)= x^t\cdot \log x$, one is interested in the field $L=K(x^t,\log x)$. It turns out (see \cite[Example 3.1]{CaSi}) that $L$ is indeed a PPV extension of $K$ for equation (\ref{useno}).
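For readers who want a quick numerical sanity check of this example, the following Python sketch (illustrative only) uses central finite differences to confirm that $y = x^t$ satisfies $\frac{\partial y}{\partial x} = \frac{t}{x}y$ and that $\frac{\partial}{\partial t}(x^t) = x^t \log x$:

```python
import math

def y(x, t):
    return x ** t                 # the solution y = x^t of the example

h = 1e-6
for x, t in [(2.0, 0.5), (3.0, 1.7), (0.5, 2.3)]:
    # dy/dx = (t/x) * y   (the parameterized linear equation)
    dydx = (y(x + h, t) - y(x - h, t)) / (2 * h)
    assert abs(dydx - (t / x) * y(x, t)) < 1e-4
    # d/dt (x^t) = x^t * log(x), which brings log(x) into the PPV extension
    dydt = (y(x, t + h) - y(x, t - h)) / (2 * h)
    assert abs(dydt - y(x, t) * math.log(x)) < 1e-4
```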
Parameterized Picard-Vessiot extensions were introduced in \cite{CaSi} by Cassidy and Singer as a fundamental tool for studying parametric equations such as equation $(\star)$. In particular, they capture valuable information about the algebraic relations that exist among a set of solutions as well as their $\D$-algebraic relations (where $\D$ is the set of parametric derivations). PPV extensions have attracted much attention in recent years and we direct the reader to \cite{Mitschi} for some applications of the parameterized Picard-Vessiot theory. It should be noted that PPV extensions do not always exist; for example, consider the differential field $(\mathbb{R}\gen{\alpha},\frac{d}{dx})$, where $\alpha$ is a non-constant solution of the equation $(\frac{dy}{dx})^2+4y^2+1=0$ in a differential closure of $(\mathbb R(x),\frac{d}{dx})$. Take $(\star)$ to be the linear differential equation \[\frac{d^2y}{dx^2}+y=0.\] Seidenberg (see \cite[\S3]{Seidenberg}) proved that there are no Picard-Vessiot extensions of $K=\mathbb{R}\gen{\alpha}$ for this equation.
It has been known for quite some time that to get a general existence result, one needs to impose additional assumptions on $K^\DD$. In \cite{CaSi}, Cassidy and Singer showed that if $(K^\DD,\D)$ is $\D$-closed, then the existence of a PPV extension of $K$ is guaranteed. This was later improved in \cite{GilletAl} by Gillet et al., who showed that the assumption can be weakened to $(K^\DD,\D)$ being existentially closed in $(K,\D)$. Examples of the latter occur for instance when $K$ is formally real and $(K^\DD,\D)$ is a real closed ordered differential field; more precisely, $(K^\DD, \D)$ is a model of the theory $RCF\cup UC_{m-r}$ introduced by Tressl in \cite[\S8]{Tressl}. A similar observation can be made for the $p$-adic case, now using the theory $pC_d\cup UC_{m-r}$. Another important result is due to Wibmer \cite{Wibmer}, who showed that in the case of a single parameter (i.e., $\D=\{\delta\}$), a PPV extension exists whenever $K^\DD$ is algebraically closed.
The most recent result can be found in \cite{MoshePillay} and concerns the nonlinear case with no parametric derivations (i.e, $\D=\emptyset$). In that paper, using model theoretic techniques, Kamensky and Pillay give a new proof of the nonparametric part of the above mentioned result of Gillet et al. \cite{GilletAl}. Moreover, their results are presented in the general context of logarithmic equations and strongly normal extensions (a generalization of Picard-Vessiot extensions to the nonlinear setting).
The aim of the current paper is to extend the work of Kamensky and Pillay to the parametric case. We work in the more general context of parameterized logarithmic equations (not necessarily linear) and parameterized strongly normal extensions. In the process we obtain a new proof of the result of Gillet et al. about the existence of PPV extensions when $(K^\DD,\D)$ is existentially closed in $(K,\D)$. We also discuss ways in which one can extend the main result of Wibmer \cite{Wibmer} when there is more than one parameter (i.e., $|\D|>1$).
The paper is organized as follows. In Section 2, we give a quick review of the notions of parameterized D-varieties and D-groups. We then, in Section 3, explain what we mean by parameterized logarithmic equations and parameterized strongly normal (PSN) extensions and study their basic properties, such as their Galois group of $\Pi$-automorphisms and the Galois correspondence. Sections 4 and 5 are where our main results are proved. We first develop a quantifier elimination result for ``parameterized D-groups'' and then use the latter to prove our main result about the existence of PSN extensions.
\section{Preliminaries on parameterized D-varieties and D-groups}
In this section we recall the notions of parameterized prolongations of differential algebraic varieties and parameterized D-varieties with respect to a fixed partition of the distinguished derivations. We refer the reader to \cite[\S3]{Omar1} for a more detailed discussion (there the terminology \emph{relative} is used instead of parameterized).
Throughout, $(\mathcal{U},\Pi)$ will denote a sufficiently large saturated model of $DCF_{0,m}$; that is, $(\mathcal U,\Pi)$ will play the role of our universal differential field (of characteristic zero) with $m$ commuting derivations for differential algebraic geometry. We also fix a ground (small) $\Pi$-subfield $K<\mathcal{U}$. We consider a partition $\mathcal{D}\cup\Delta$ of $\Pi$, where $\mathcal{D}=\{D_1,\ldots,D_r\}$, $r>0$, and $\Delta=\{\delta_1,\ldots,\delta_{m-r}\}$. For any $\Pi$-subfield $F\leq \mathcal U$ we let $$F^{\mathcal{D}}:=\{a\in F: Da=0 \text{ for all } D\in \mathcal D\}$$ be the $\mathcal{D}$-constants of $F$. Recall that, by quantifier elimination of $DCF_{0,m}$, any $F$-definable set is a finite Boolean combination of $\Pi$-algebraic varieties ($\subseteq\mathcal U^n$ for some $n$) over $F$.
An important fact about the $\mathcal{D}$-constants that we will use several times in this paper is the following: \begin{fct}\label{ConstFact} \ \begin{enumerate} \item $(\mathcal{U}^{\mathcal{D}},\D)$ is a differentially closed $\Delta$-field. \item Suppose $F$ is a $\Pi$-subfield of $\mathcal U$. The $F$-definable subsets of Cartesian powers of $\mathcal{U}^{\mathcal{D}}$ in the structure $(\mathcal U,\Pi)$ are precisely the $F^{\mathcal{D}}$-definable sets in the structure $(\mathcal{U}^{\mathcal{D}},\Delta)$. \end{enumerate} \end{fct}
Now, let $x=(x_1,\dots,x_n)$ and $u=(u_1,\dots,u_n)$ be $n$-tuples of $\Delta$-indeterminates. We denote by $K\{x\}_\D$ the $\D$-ring of $\D$-polynomials over $K$ and by $\Theta_{\Delta}$ the set of $\Delta$-derivatives; that is, $$\Theta_{\Delta}=\{\delta_1^{e_1}\cdots \delta_{m-r}^{e_{m-r}}:\, e_i\geq0\}.$$ For each $D\in\DD$ and $f\in K\{x\}_{\Delta}$ we have a $\Delta$-polynomial $d_{D/\Delta}f\in K\{x,u\}_{\Delta}$ given by \[d_{D/\Delta}f(x,u)=\sum_{\theta\in\Theta_{\Delta}, j\leq n}\frac{\partial f}{\partial (\theta x_j)}(x)\theta u_j+f^{D}(x),\] where the $\Delta$-polynomial $f^{D}$ is obtained by applying $D$ to the coefficients of $f$.
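For example, take $n=1$, $\Delta=\{\delta\}$, and $f=c\,x\,\delta x\in K\{x\}_\Delta$ with $c\in K$. For $D\in\DD$, the only nonzero partial derivatives of $f$ are $\frac{\partial f}{\partial x}=c\,\delta x$ and $\frac{\partial f}{\partial(\delta x)}=c\,x$, so \[d_{D/\Delta}f(x,u)=c\,(\delta x)\,u+c\,x\,\delta u+(Dc)\,x\,\delta x.\]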
\begin{defn} The parameterized prolongation, $\tau_{\mathcal{D}/\Delta}V\subseteq \mathcal U^{n(r+1)}$, of an affine $\D$-algebraic variety $V\subseteq \mathcal U^n$ over $K$ is defined as the fibered product \[\tau_{\mathcal{D}/\Delta}V=\tau_{D_1/\Delta}V\times_V\cdots\times_V\tau_{D_r/\Delta}V\] where $\tau_{D_i/\Delta}V\subseteq U^{2n}$, $i=1,\dots,r$, is the affine $\Delta$-algebraic variety defined by \[f(x)=0 \text{ and } d_{D_i/\Delta}f(x,y)=0,\] for all $f\in I_\Delta(V/K):=\{f\in K\{x\}_\Delta: f(V)=0\}$. We equip $\ta V$ with its canonical projection $\pi:\ta V\to V$ to the $x$-coordinate. \end{defn}
More generally, for an arbitrary $\D$-algebraic variety (i.e., not necessarily affine), the \emph{parameterized prolongation} $\ta V$ is defined by piecing together the prolongations of a (finite) affine cover of $V$ (see \cite[\S2.3]{Omar2} for details). For the basic properties of $\tau_{\mathcal{D}/\Delta}$ we refer the reader to \cite[\S3]{Omar1}. For instance, $\ta$ is a (covariant) functor from the category of $\D$-algebraic varieties to itself, it preserves differential fields of definition, and has the characteristic property that if $v\in V$ then $\nabla_\DD v:=(v,D_1v,\dots,D_rv)\in \ta V$. Moreover, for each $D\in\DD$, $\pi:\tau_{D/\D} V\to V$ is a torsor under the $\Delta$-tangent bundle $\rho:T_\Delta V\to V$ (where the fibrewise action is translation), and if $V$ is defined over $K^{D}$ then $\tau_{D/\D} V=T_\Delta V$.
\begin{rem} When $\mathcal D=\{D\}$ and $\Delta=\emptyset$, the parameterized prolongation $\ta V$ is nothing more than the usual prolongation $\tau V$ used in ordinary differential algebraic geometry \cite{PiPi}. In this case, whenever $V$ is defined over the constants we recover the tangent bundle of $V$. \end{rem}
By a \emph{parameterized D-variety} defined over $K$ we mean a pair $(V,s)$ where $V$ is a $\Delta$-algebraic variety and $s$ is a $\Delta$-section of $\tau_{\mathcal{D}/\Delta}V\rightarrow V$ (both defined over $K$) satisfying the following integrability condition: for each $v\in V$, \[d_{D_i/\Delta}s_j(v,s_i(v))= d_{D_j/\Delta}s_i(v,s_j(v)), \: \text{ for } i,j=1,\ldots,r,\] where $(\operatorname{Id},s_1,\ldots,s_r)$ are local coordinates for $s$ in an affine chart containing the point $v$.
By a \emph{parameterized $D$-subvariety} of $(V,s)$ we mean a $\Delta$-subvariety $W$ of $V$ such that $s(W)\subset \ta W$. A D-morphism of parameterized D-varieties $(V,s)$ and $(V',s')$ is a $\Delta$-morphism $f:V\to V'$ such that the following diagram commutes $$\xymatrix{ \ta V \ar[rr]^{\ta f}&&\ta V'\\ V \ar[u]^{s}\ar[rr]^{f}&&V'\ar[u]_{s'} }$$ This yields a category of parameterized D-varieties with D-morphisms. Since $\ta$ commutes with products, this category has products and thus, given a $\Delta$-algebraic group $G$, the parameterized prolongation $\ta G$ also has the structure of a $\Delta$-algebraic group (see \cite[\S4]{Omar1}). Hence, we can talk about group objects in this category. These are called \emph{parameterized D-groups} and they are precisely those parameterized D-varieties where the underlying $\Delta$-variety is a $\Delta$-algebraic group and the section is a group homomorphism. A \emph{parameterized D-subgroup} is defined in the natural way.
The set of \emph{sharp points} of $(V,s)$, denoted by $(V,s)^\sharp$ or simply $V^\sharp$ when $s$ is understood, is the $\Pi$-algebraic subvariety of $V$ given by \[ V^{\sharp}=\{v\in V:s(v)=\nabla_\DD v\}.\] For an arbitrary subset $A\subseteq V$, we let $A^\sharp:=A\cap V^\sharp$.
\begin{lem} \label{onsubvar}\ \begin{enumerate} \item Every irreducible $\Delta$-component of a parameterized D-variety is a parameterized D-subvariety. \item A $\Delta$-subvariety $W$ of a parameterized D-variety is a parameterized D-subvariety if and only if $W^\sharp$ is $\Delta$-dense in $W$. Moreover, the $\sharp$-points functor establishes a 1:1 correspondence between parameterized D-subvarieties of $V$ and $\Pi$-algebraic subvarieties of $V^\sharp$. \item Intersections and finite unions of parameterized D-subvarieties are again parameterized D-subvarieties.
\end{enumerate} \end{lem} \begin{proof} Fix a parameterized D-variety $(V,s)$. \begin{enumerate} \item If $W$ is a $\Delta$-component of $V$, then $\ta W$ and $\ta V$ agree on a nonempty $\Delta$-open subset $Q$ of $W$ (see \cite[Lemma 2.3.9]{Omar2}). Then, $s(Q)\subset \ta W$, but since $Q$ is $\Delta$-dense in $W$, we have $s(W)\subset \ta W$. \item This is \cite[Proposition 3.10]{Omar1}. \item By Noetherianity of the $\Delta$-topology (and induction) it suffices to show that the intersection of two parameterized D-subvarieties $W_1$ and $W_2$ is again a parameterized D-subvariety. Moreover, it suffices to consider the affine case. By (2) we must show that $(W_1\cap W_2)^\sharp$ is $\Delta$-dense in $W_1\cap W_2$, but this follows from the following equalities of ideals: $$I_\Delta(W_1^\sharp\cap W_2^\sharp)=\sqrt{I_\D(W_1^\sharp)+I_\D(W_2^\sharp)}=I_\Delta(W_1\cap W_2),$$ where the last equality uses $I_\Delta(W_i^\sharp)=I_\Delta(W_i)$, for $i=1,2$, which holds by (2). To see that the union $W_1\cup W_2$ is a parameterized D-subvariety, we use (2) and $$\operatorname{\Delta-Clo}((W_1\cup W_2)^\sharp)=\operatorname{\Delta-Clo}(W_1^\sharp)\cup\operatorname{\Delta-Clo}(W_2^\sharp)=W_1\cup W_2,$$ where $\operatorname{\Delta-Clo}$ denotes closure in the $\Delta$-topology. \end{enumerate} \end{proof}
The following lemma and proposition will be used in Section \ref{QED}.
\begin{lem}\label{usil} Let $(V,s)$ be a parameterized D-variety defined over $K$. Suppose $A$ is a (finite) Boolean combination of parameterized D-subvarieties (which are not necessarily defined over $K$). If $A$ is definable over $K$, then $A$ is a Boolean combination of parameterized D-subvarieties each defined over $K$. \end{lem} \begin{proof} This follows by induction on the rank (Morley rank, say) of $A$. Let $\bar A=\D$-$\operatorname{Clo}(A)$ be the closure of $A$ in the $\D$-topology. We note that $\bar A$ is also defined over $K$. Then, by Lemma \ref{onsubvar}, $\bar A$ is a parameterized D-subvariety of $V$ defined over $K$. Thus, the set $\bar A\setminus A$ is definable over $K$ and is a Boolean combination of parameterized D-subvarieties. By induction, $\bar A\setminus A$ is a Boolean combination of parameterized D-subvarieties each defined over $K$. The result now follows since $A=\bar A\setminus (\bar A\setminus A)$. \end{proof}
The following is the parameterized PDE analog of \cite[Lemma 2.5(ii)]{KoPi}.
\begin{prop}\label{uselater} Let $(V,s)$ be a parameterized D-variety. If $A$ is a (finite) Boolean combination of parameterized D-subvarieties of $V$, then $A^\sharp$ is $\D$-dense in $A$. \end{prop} \begin{proof} By Lemma \ref{onsubvar} (3), we may assume that $A$ is of the form $W\cap Q$, where $W$ is a parameterized D-subvariety of $V$ and $Q$ is a $\D$-open set. Moreover, by (1) of the same lemma, we may assume that $W$ is irreducible. Now, by (2) of Lemma \ref{onsubvar}, $W^\sharp$ is $\Delta$-dense in $W$; since $Q$ is $\Delta$-open, it follows that $A^\sharp=W^\sharp\cap Q$ is $\Delta$-dense in $A=W\cap Q$. \end{proof}
We finish this section with the following proposition which establishes the connection between parameterized D-groups and definable groups in $(\mathcal U,\Pi)$ of $\Pi$-type bounded by $\Delta$. Recall that given a $K$-definable set $X$ of $(\mathcal U,\Pi)$ we say that $\D$ \emph{bounds} the $\Pi$-type of $X$ if for each $a\in X$ the $\Pi$-field generated by $a$ over $K$ is also finitely generated as a $\Delta$-field, i.e., there exists a (finite) tuple $\alpha$ such that $K\langle a\rangle_\Pi=K\langle\alpha\rangle_\Delta$.
\begin{prop} The $\sharp$-points functor is an equivalence between the category of parameterized D-groups over $K$ and the category of $\Pi$-algebraic groups over $K$ with $\Pi$-type bounded by $\Delta$. \end{prop} \begin{proof} By \cite[Theorem 4.6]{Omar1}, every $\Pi$-algebraic group of $\Pi$-type bounded by $\Delta$ is isomorphic to $G^\sharp$ for some parameterized D-group $(G,s)$. On the other hand, given two parameterized D-groups $(G,s)$ and $(H,t)$, any $\Pi$-isomorphism between $G^\sharp$ and $H^\sharp$ extends to a \emph{generically} defined $\Delta$-morphism of the $\Delta$-algebraic groups $G$ and $H$. It is well known that every such generically defined $\Delta$-morphism extends to an actual $\Delta$-isomorphism. The result follows. \end{proof}
\section{On parameterized strongly normal extensions}
We use the notation of the previous section; in particular, we have a partition $\mathcal D\cup\Delta$ of the set of derivations $\Pi$ where $\mathcal{D}=\{D_1,\ldots,D_r\}$, $r>0$, and $\Delta=\{\delta_1,\ldots,\delta_{m-r}\}$. Throughout this section we fix a $\Delta$-algebraic group $G$ defined over $K^{\mathcal D}$ and denote its identity element by $e$. Note that in this case $$\ta G=(T_\Delta G)^r:=T_\Delta G\times_G\cdots \times_G T_\Delta G,$$ and thus the fibre of $\ta G$ over the identity $e$ of $G$ is equal to the $r$-fold product of $\mathfrak L_\Delta G$ (the $\Delta$-Lie algebra of $G$). Furthermore, we equip $G$ with the canonical \emph{zero section} $s_0$ of $\ta G$ and, henceforth, we will work with the parameterized D-group $(G,s_0)$. We note that the parameterized D-subgroups of $(G,s_0)$ defined over $K$ are precisely the $\Delta$-algebraic subgroups of $G$ that are defined over $K^\DD$.
A point $a\in \ta G_e$ is said to be \emph{integrable} if $(G,as_0)$ is a parameterized D-variety. Here $as_0:G\to \ta G$ denotes the $\D$-section given by $(as_0)(y)=a\cdot s_0(y)$ where the product occurs in $\ta G$. Since $\nabla_{\DD} e$ is the identity of $\ta G$, the point $\nabla_\DD e$ is integrable.
\begin{rem} If $G$ is linear (i.e., a $\D$-algebraic subgroup of some $\operatorname{GL}_n$), then a point $(\operatorname{Id},A_1,\dots,A_r)\in \ta G_e\subseteq \mathfrak{gl}_n^r$ is integrable if and only if $$D_i A_j -D_jA_i=[A_i,A_j], \quad \text{ for } i,j=1,\dots,r.$$ These are the usual integrability conditions for homogeneous linear differential algebraic equations, see \cite{CaSi} for instance. \end{rem}
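As a sanity check (an illustrative aside, not part of the original argument), the necessity of these conditions follows from the commutativity of the derivations: if the system $D_iY=A_iY$ has an invertible solution $Y$, then

```latex
% Necessity of the integrability conditions, assuming an invertible
% solution Y of the linear system D_i Y = A_i Y:
\begin{align*}
D_iD_j Y &= D_i(A_j Y) = (D_iA_j)Y + A_jA_i Y,\\
D_jD_i Y &= D_j(A_i Y) = (D_jA_i)Y + A_iA_j Y.
\end{align*}
% Since D_i and D_j commute, subtracting the two lines and multiplying
% on the right by Y^{-1} yields
\[
D_iA_j - D_jA_i \;=\; A_iA_j - A_jA_i \;=\; [A_i,A_j].
\]
```

Sufficiency is the deeper direction and is the content of Fact \ref{intpts} below.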
We have the following:
\begin{fct}\label{intpts}\cite[Lemma 5.3]{Omar1} A point $a\in \ta G_e$ is integrable if and only if the system of $\Pi$-algebraic equations $$\nabla_\DD y=a\cdot s_0(y)$$ has a solution in $G$. \end{fct}
We denote the set of integrable points of $(G,s_0)$ by $\operatorname{Int}(G,s_0)$. The {\em parameterized logarithmic derivative} on $(G,s_0)$ is defined by \begin{eqnarray*} \ell_{0}: G & \rightarrow & \quad\; \ta G_e\\
g & \mapsto & \nabla_{\mathcal D}(g)\cdot s_0(g)^{-1} \end{eqnarray*} where the product and inverse occur in the $\Delta$-algebraic group $\ta G$. By Fact~\ref{intpts}, $\operatorname{Int}(G,s_0)$ is the image of $\ell_0$. Moreover, $\operatorname{Ker}(\ell_0)=(G,s_0)^\#=G(\mathcal U^\DD)$, and $\ell_0$ is a crossed homomorphism with respect to the adjoint action of $G$ on $\ta G_e$ given by $a^g:=\ta C_g(a)$ where $C_g$ denotes conjugation by $g$ in $G$ (see \cite[Lemma 5.4]{Omar1}).
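For orientation, in the linear case $G\leq\operatorname{GL}_n$ (with the identifications of Remark \ref{remonex} below, where $\ta G_e$ sits inside $\mathfrak{gl}_n^r$), the map $\ell_0$ and its crossed-homomorphism property can be verified directly by the product rule; the following sketch assumes these standard identifications.

```latex
% In GL_n with the zero section s_0, the parameterized logarithmic
% derivative reduces, componentwise, to the usual logarithmic derivative:
\[
\ell_0(g)=\bigl(\operatorname{Id},\, D_1g\cdot g^{-1},\,\dots,\, D_rg\cdot g^{-1}\bigr).
\]
% The product rule gives, for each i,
\[
D_i(gh)\,(gh)^{-1}
  =(D_ig\cdot h+g\,D_ih)\,h^{-1}g^{-1}
  =D_ig\cdot g^{-1}+g\,(D_ih\cdot h^{-1})\,g^{-1},
\]
% so, componentwise, \ell_0(gh) = \ell_0(g) + Ad_g(\ell_0(h)); this is the
% crossed-homomorphism identity, with the adjoint action realized by
% conjugation. In particular \ell_0(g) = 0 iff D_i g = 0 for all i, i.e.,
% iff g is a point of G(U^{\mathcal D}).
```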
For the remainder of this section we fix an integrable $K$-point $a$ of $(G,s_0)$ and a \emph{parameterized logarithmic equation} \[\ell_{0}(y)=a, \quad \text{ where } y \text{ ranges in } G \tag{$\star$}.\]
\begin{defn} A {\em parameterized strongly normal} (PSN) extension of $K$ for $(\star)$ is a $\Pi$-field $L$ such that \begin{enumerate} \item $L=K\gen{\alpha}_{\Delta}$, for $\alpha$ a solution of $(\star)$, and \item $L^{\mathcal{D}}=K^{\mathcal{D}}$. \end{enumerate} \end{defn}
\begin{rmk}\label{remonex} \ \begin{enumerate} \item As explained in Example 5.6 of \cite{Omar1}, if $G=\operatorname{GL}_n$, then the parameterized logarithmic equation $(\star)$ reduces to the homogeneous linear differential system $$D_1 Y=A_1 Y, \; \dots \; , D_r Y=A_r Y,$$ where the integrable $K$-point has the form $a=(\operatorname{Id},A_1,\dots,A_r)\in \mathfrak{gl}_n^r(K)$. Moreover, in this case, a parameterized strongly normal extension for $(\star)$ is precisely a parameterized Picard-Vessiot extension for the above linear system. \item Suppose $L=K\gen {\alpha}_\D$ is a PSN extension of $K$ for $(\star)$. If $\sigma:L\to \mathcal U$ is a $\Pi$-isomorphism over $K$, then $\sigma(\alpha)=\alpha\cdot c$ for some $c\in G(\mathcal U^\DD)$, which implies that $\sigma(L)\subset L\gen{\mathcal U^\DD}_\D$. Thus, PSN extensions for $(\star)$ are examples of the \emph{$\D$-strongly normal extensions} studied by Landesman in \cite{Land}. \end{enumerate} \end{rmk}
As a consequence of the definition, PSN extensions are contained in a differential closure of $(K,\Pi)$:
\begin{lem}\label{ongalgro} Suppose $L=K\gen{\alpha}_\D$ is a PSN extension of $K$ for $(\star)$. Then, $L$ is contained in a $\Pi$-closure of $K$. Consequently, every $\Pi$-isomorphism $\sigma:L\to \mathcal U$ over $K$ extends uniquely to a $\Pi$-automorphism of $L\gen{\mathcal U^\DD}_\D$ over $K\gen{\mathcal U^\DD}_\D$. \end{lem} \begin{proof} Let $\bar L$ be a $\Pi$-closure of $L$, and let $\bar K$ be a $\Pi$-closure of $K$. Since $(\bar L^\DD,\D)$ is a $\D$-closure of $(L^\DD,\D)$, $\bar L^\DD$ embeds in $\bar K$. Thus, we may assume $\bar L^\DD< \bar K$. Now let $\beta$ be a solution of $(\star)$ in $\bar K$. Then, $\alpha=\beta\cdot c$ for some $c\in G(\bar L^\DD)\subset G(\bar K)$. Thus, $\alpha$ is a tuple from $\bar K$, and consequently $L<\bar K$.
For the ``consequently'' clause, suppose $\sigma:L\to \mathcal U$ is a $\Pi$-isomorphism over $K$. Since $L$ is contained in a $\Pi$-closure of $K$, there is a formula $\phi$ (in the language of $\Pi$-rings) isolating the type $tp(\alpha/K)$. A standard argument shows that $\phi$ also isolates the type $tp(\alpha/K\gen{\mathcal U^\DD}_\D)$. This implies that $$tp(\alpha/K\gen{\mathcal U^\DD}_\D)=tp(\sigma(\alpha)/K\gen{\mathcal U^\DD}_\D).$$ It then follows (from the fact that $L\gen{\mathcal U^\DD}_\D=\sigma(L)\gen{\mathcal U^\DD}_\D$, see Remark \ref{remonex} (2)) that $\sigma$ extends uniquely to a $\Pi$-automorphism of $L\gen{\mathcal U^\DD}_\D$ over $K\gen{\mathcal U^\DD}_\D$. \end{proof}
Let $L$ be a PSN extension of $K$ for $(\star)$. The Galois group of $\Pi$-automorphisms of $L$ is defined as $$\operatorname{Gal}(L/K):=Aut_\Pi(L\langle \mathcal U^\DD\rangle_\Pi/K\langle \mathcal U^\DD\rangle_\Pi).$$ That is, $\operatorname{Gal}(L/K)$ is the group of $\Pi$-automorphisms of $L\langle \mathcal U^\DD\rangle_\Pi$ fixing $K\langle \mathcal U^\DD\rangle_\Pi$ pointwise. We also have the group $$\operatorname{gal}(L/K):=Aut_\Pi(L/K).$$ By Lemma \ref{ongalgro}, $\operatorname{gal}(L/K)$ can be identified with a subgroup of $\operatorname{Gal}(L/K)$.
We have the following important facts about the Galois group:
\begin{thm} Suppose $L=K\gen \alpha_\Delta$ is a PSN extension for $(\star)$. Then \begin{enumerate} \item [(i)] There is a $\Delta$-algebraic subgroup $H$ of $G$ defined over $K^\DD$ such that the $\Pi$-algebraic group of $\DD$-constant points of $H$, $H(\mathcal U^\DD)$, is isomorphic to $\operatorname{Gal}(L/K)$. Moreover, the isomorphism $$\mu:\operatorname{Gal}(L/K)\to H(\mathcal U^\DD)$$ is given by $\mu(\sigma)=\alpha^{-1}\cdot \sigma(\alpha)$. \item [(ii)] There is a natural Galois correspondence between the intermediate $\Pi$-fields (of $K$ and $L$) and the $\Delta$-algebraic subgroups of $H$ defined over $K^\DD$. \item [(iii)] $\mu(\operatorname{gal}(L/K))=H(K^\DD).$ \end{enumerate} \end{thm} \begin{proof} These are standard arguments; however, we provide details for the sake of completeness.
\noindent (i) Let $Z$ be the set of realisations (from $\mathcal U$) of the type of $\alpha$ over $K$; i.e., $Z=tp(\alpha/K)^{\mathcal U}$. Note that, by Lemma \ref{ongalgro}, $Z$ is a $K$-definable set. Set $V=\{g\in G(\mathcal U^\DD): Z\cdot g=Z\}$. Then $V$ is a $\Pi$-algebraic subgroup of $G(\mathcal U^\DD)$ defined over $K$; by Fact \ref{ConstFact}, $V$ is actually defined over $K^\DD$. The desired $\D$-algebraic group is $H:=\D$-$\operatorname{Clo}(V)\leq G$, which is defined over $K^\DD$. Clearly, by the correspondence of Lemma \ref{onsubvar} (2), $V=H(\mathcal U^\DD)$, and furthermore for every $\sigma\in \operatorname{Gal}(L/K)$ we have that $\alpha^{-1}\cdot \sigma(\alpha)\in V$. Moreover, we have $$\mu(\sigma_1\circ\sigma_2)=\alpha^{-1}\sigma_1(\sigma_2(\alpha))=\alpha^{-1}\sigma_1(\alpha)\sigma_1(\alpha^{-1}\sigma_2(\alpha))=\alpha^{-1}\sigma_1(\alpha)\alpha^{-1}\sigma_2(\alpha)=\mu(\sigma_1)\mu(\sigma_2),$$ where the third equality uses that $\sigma_1(\alpha^{-1}\sigma_2(\alpha))=\alpha^{-1}\sigma_2(\alpha)$ which follows from the fact that $\alpha^{-1}\sigma_2(\alpha)\in G(\mathcal U^\DD)$. Thus, $\mu$ is a group homomorphism. It can easily be checked that $\mu$ is a bijection, and so a group isomorphism.
\noindent (ii) By the correspondence of Lemma \ref{onsubvar} (2), it suffices to show the Galois correspondence between intermediate $\Pi$-subfields and $\Pi$-algebraic subgroups of $V=H(\mathcal U^\DD)$ defined over $K^\DD$. The correspondence is given as follows: If $K\leq F\leq L$ is an intermediate $\Pi$-field, then $L/F$ is a PSN extension for $(\star)$, and so $V_F:=\mu(\operatorname{Gal}(L/F))$ is a $\Pi$-algebraic subgroup of $V\leq G(\mathcal U^\DD)$ defined over $F$. Since $G(L^\DD)=G(K^\DD)$, Fact \ref{ConstFact} implies that $V_F$ is actually defined over $K^\DD$.
Now we prove that the correspondence $F\mapsto V_F$ is 1:1 and onto. Let $F_1\neq F_2$ be intermediate $\Pi$-fields. Let $b\in F_2\setminus F_1$. Then there is $\sigma\in \operatorname{Aut}_\Pi(\mathcal U/F_1)$ such that $\sigma(b)\neq b$. Setting $\sigma'$ to be the unique extension of $\sigma |_{L}$ to an element of $\operatorname{Gal}(L/F_1)$ (see Lemma \ref{ongalgro}), we see that $\sigma' \in \operatorname{Gal}(L/F_1)\setminus \operatorname{Gal}(L/F_2)$; in particular, $V_{F_1}\neq V_{F_2}$. For surjectivity, let $W$ be a $\Pi$-algebraic subgroup of $V$ defined over $K^\DD$. Let $b$ be a tuple from $L$ that generates the minimal $\Pi$-field of definition of $\alpha\cdot W$. If we let $F=K\langle b\rangle_\Pi$, it can be checked that $V_F=W$.
\noindent (iii) Let $\sigma\in \operatorname{Gal}(L/K)$. If $\mu(\sigma)\in H(K^\DD)$, then $\sigma(\alpha)=\alpha\cdot c$ for some $c \in H(K^\DD)$. In particular, $\sigma(\alpha)$ is a tuple from $L$, and so $\sigma\in \operatorname{gal}(L/K)$. On the other hand, if $\sigma\in\operatorname{gal}(L/K)$, then $\alpha^{-1}\sigma(\alpha)\in H(L^\DD)=H(K^\DD)$. \end{proof}
Let us now discuss the issue of \emph{existence} of PSN extensions (they do not generally exist as we pointed out in the introduction). In \cite[\S5]{Omar1}, it was shown that these extensions exist if we make the additional assumption that $(K^\DD,\Delta)$ is $\D$-closed. To see this, let $\bar K$ be a $\Pi$-closure of $K$. Recall that any tuple of $\DD$-constants from $\bar K$ is $\Delta$-constrained (or equivalently $\Delta$-isolated) over $K^\DD$ and so, by the assumption on $(K^\DD,\Delta)$, this tuple must be in $K^\DD$. Now, since $a$ is a $K$-point in the image of $\ell_0$, we can find a solution $\alpha$ of $(\star)$ in $\bar K$. Setting $L:=K\gen{\alpha}_\Delta$ and using $\bar K^\DD=K^\DD$, we have that $L^\DD=K^\DD$ and so $L$ is a PSN extension of $K$ for $(\star)$. Moreover, the assumption that $(K^\DD,\Delta)$ is $\Delta$-closed implies that, up to $\Pi$-isomorphism over $K$, this is the only PSN extension of $K$ for $(\star)$.
There are weaker assumptions on the field of $\DD$-constants $K^\DD$ that imply the existence of PSN extensions. For instance, we have the following result of Wibmer:
\begin{thm}\label{wib}\cite[Theorem 8]{Wibmer} When $G=\operatorname{GL}_n$ and $\D=\{\delta\}$, PSN extensions exist if $K^\DD$ is algebraically closed. \end{thm}
It should be noted that Wibmer works in the $\delta$-parameterized Picard-Vessiot context with systems of linear \emph{difference-differential} equations. Thus, Theorem \ref{wib} is a special case (when the set of automorphisms is empty) of the main result of \cite{Wibmer}.
Our next goal is to extend Theorem \ref{wib} to an arbitrary set of parametric derivations $\Delta$, and $G$ not necessarily linear. To do so, we will need the following result, see \cite[Chapter 0, \S7]{Kolchin2}.
\begin{fct}\label{extKol} Suppose $(F,\bar \partial\cup\{\delta\})$ is a differential field with $\bar\partial =\{\partial_1,\dots,\partial_s\}$ (in particular we require that all the derivations commute). If $(E,\bar\partial)$ is a differential field extension of $(F,\bar\partial)$ which is $\bar\partial$-closed, then there exists an extension $\delta':E\to E$ of $\delta$ such that $(E,\bar\partial\cup\{\delta'\})$ is a differential field (i.e., $\delta'$ commutes with $\bar\partial$). \end{fct}
\begin{rem} It is not known (at least to the authors) if the above result holds if we replace $\delta$ by a finite family of derivations commuting with each other and with $\bar\partial$ on $F$. \end{rem}
If $\Delta\neq \emptyset$, we set $$\Delta^*=\Delta\setminus\{\delta_1\}.$$ The following theorem gives sufficient conditions for the existence of parameterized strongly normal extensions.
\begin{thm}\label{genwib} Suppose $G$ is a $\Delta^*$-algebraic group defined over $K^\DD$. If $(K^\DD,\Delta^*)$ is $\Delta^*$-closed, then there exists a parameterized strongly normal extension of $K$ for $(\star)$. \end{thm} \begin{proof} Let $\bar K$ be a $\DD\cup\Delta^*$-closure of $K$. Any tuple of $\DD$-constants from $\bar K$ is $\Delta^*$-constrained (or equivalently $\Delta^*$-isolated) over $K^\DD$ and so, by the assumption on $(K^\DD,\Delta^*)$, this tuple must be in $K^\DD$. In particular, $\bar K^\DD=K^\DD$. Now, by Fact~\ref{extKol}, there is an extension $\delta_1':\bar K\to \bar K$ of $\delta_1:K\to K$. This yields a differential field extension $(\bar K, \DD\cup\Delta')$ of $(K,\DD\cup\Delta)$ where $\Delta'=\{\delta_1'\}\cup\Delta^*$.
Since $G$ is a $\Delta^*$-algebraic group, we can find a $\bar K$-point $\alpha\in G$ such that $\ell_0(\alpha)=a$. Let $L$ be the differential subfield of $(\bar K, \DD\cup\Delta')$ generated by $\alpha$ over $K$; in other words, $L=K\langle \alpha\rangle_{\Delta'}$. This is the desired PSN extension. Indeed, it is $\Delta'$-generated by a solution of $(\star)$ and $L^\DD$ is contained in $\bar K^\DD=K^\DD$. \end{proof}
\begin{rem}\label{remonwib}\ \begin{enumerate} \item When $G$ is a $\Delta^*$-algebraic group and $\Delta=\{\delta\}$, the above theorem shows that in order to guarantee the existence of a parameterized strongly normal extension, it suffices to assume that $K^\DD$ is algebraically closed. \item When $G=\operatorname{GL}_n$ and $\Delta=\{\delta\}$, the above theorem shows that if $K^\DD$ is algebraically closed, then parameterized Picard-Vessiot extensions of $K$ exist. This is Theorem \ref{wib} above and, as we already mentioned, is (a special case of) the main result of \cite{Wibmer}. \end{enumerate} \end{rem}
The remainder of the paper is devoted to proving the existence of PSN extensions under the assumption that $(K^\DD,\D)$ is existentially closed in $(K,\D)$. We achieve this by extending some of the results of Kamensky and Pillay from \cite{MoshePillay}.
\section{A quantifier elimination result for parameterized D-groups}\label{QED}
The goal of this section is to prove a quantifier elimination result for certain theories of parameterized D-groups (cf. \cite[\S2]{MoshePillay}). This result will be used in the next section. We continue with the notation of the previous sections, and again fix a $\Delta$-algebraic group $G$ defined over $K^{\mathcal{D}}$ and an integrable $K$-point $a$ of $(G,s_0)$. Recall that the latter means that $(G,s)$ is a parameterized D-variety, where $s:=as_0$.
We fix the parameterized logarithmic equation \[\ell_{0}(y)=a, \quad \text{ where } y \text{ ranges in } G \tag{$\star$},\] and let $\mathcal{Y}=\{y\in G: \ell_{0}(y)=a\}$ be its solution set. Note that $$\mathcal Y=(G,s)^\#=b\cdot G(\mathcal U^\DD)=b \cdot \operatorname{Ker}(\ell_0)$$ for any $b\in \mathcal Y$.
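As an illustrative aside (not part of the original argument), in the linear case $G\leq\operatorname{GL}_n$ this coset description of $\mathcal Y$ can be verified by a direct computation with logarithmic derivatives:

```latex
% Two solutions of the linear system D_i y = A_i y differ by a point of
% G(U^{\mathcal D}): if D_i b . b^{-1} = A_i = D_i b' . b'^{-1}, then,
% using D_i(b^{-1}) = -b^{-1}(D_i b)b^{-1},
\[
D_i(b^{-1}b')
  = -b^{-1}(D_ib)\,b^{-1}b' + b^{-1}D_ib'
  = b^{-1}\bigl(D_ib'\cdot b'^{-1} - D_ib\cdot b^{-1}\bigr)b'
  = 0,
\]
% so c := b^{-1} b' has all its entries in U^{\mathcal D}, and b' = b . c
% with c in Ker(l_0) = G(U^{\mathcal D}).
```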
For each $n,m\geq 0$, the product $\mathcal U^n\times G^m$ has a natural structure of parameterized D-variety induced from the zero section on $\mathcal U$ and the section $s=as_0$ on $G$. By a parameterized D-subvariety of $\mathcal U^n\times G^m$ we mean one with respect to this parameterized D-structure. In particular, the set of sharp points of $\mathcal U^n\times G^m$ is $(\mathcal U^\DD)^n\times \mathcal Y^m$, and, by Lemma \ref{onsubvar} (2), a $\D$-subvariety $W$ of $\mathcal U^n\times G^m$ is a parameterized D-subvariety iff $W^\#:=W\cap \left((\mathcal U^\DD)^n\times \mathcal Y^m\right)$ is $\D$-dense in $W$.
The two-sorted language $\mathcal L_{D,K}$ has symbols $R_W$ for each parameterized D-subvariety $W$ of $\mathcal U^n\times G^m$ defined over $K$. We denote by $(\mathcal U, G)_{D,K}$ the $\mathcal L_{D,K}$-structure with sorts $\mathcal U$ and $G$ where the interpretation of $R_W$ is the tautological one, namely $W$. On the other hand, $(\mathcal U^\DD, \mathcal Y)_{D,K}$ denotes the $\mathcal L_{D,K}$-structure with sorts $\mathcal U^\DD$ and $\mathcal Y$ where the interpretation of $R_W$ is $W^\#$.
\begin{rem}\label{propi} \ \begin{enumerate} \item The parameterized D-subvarieties of $\mathcal U$ (with its zero section) and of $(G,s_0)$ are precisely those $\D$-subvarieties of $\mathcal U$ and $G$, respectively, that are defined over $\mathcal U^\DD$. A similar observation holds for parameterized D-subvarieties defined over $K$. \item By quantifier elimination of $DCF_{0,m}$ and Lemma \ref{onsubvar} (2), every $\emptyset$-definable set of the structure $(\mathcal U^\DD,\mathcal Y)_{D,K}$ is a (finite) Boolean combination of $W^\#$'s where the $W$'s are parameterized D-subvarieties of some $\mathcal U^n\times G^m$ defined over $K$. In particular, the structure $(\mathcal U^\DD,\mathcal Y)_{D,K}$ has quantifier elimination. \end{enumerate} \end{rem}
Here is the main result of this section (cf. \cite[Corollary 2.5]{MoshePillay}).
\begin{thm}\label{QEthm} The structure $(\mathcal{U},G)_{D,K}$ has quantifier elimination. \end{thm} \begin{proof} This is an extension of the arguments of \cite[Lemma 2.4 and Corollary 2.5]{MoshePillay}. Let $A\subseteq \mathcal U^n\times G^m$ be any constructible set of the structure $(\mathcal U, G)_{D,K}$ and $\pi:\mathcal U^n\times G^m\to \mathcal U^{n'}\times G^{m'} $ a coordinate-projection map. We must show that $\pi(A)$ is again constructible; i.e., that $\pi(A)$ is a (finite) Boolean combination of parameterized D-subvarieties of $\mathcal U^{n'}\times G^{m'}$ which are defined over $K$.
We first check that $\pi(A)$ is a Boolean combination of parameterized D-subvarieties (defined over $\mathcal U$). Fix $b\in \mathcal Y$. Let $\lambda:G\to G$ be left-translation by $b^{-1}$. Since $\mathcal Y=b\cdot G(\mathcal U^\DD)$, the map $\lambda$ induces a bijection between $\mathcal Y$ and $G(\mathcal U^\DD)$. By Lemma \ref{onsubvar} (2) and Remark \ref{propi} (1), a $\D$-subvariety $B$ of $\mathcal U^n\times G^m$ is a parameterized D-subvariety iff the $\D$-subvariety $B':=(\operatorname{Id}^n, \lambda^m)(B)$ is defined over $\mathcal U^\DD$. Consequently, $A':=(\operatorname{Id}^n,\lambda^m)(A)$ is a Boolean combination of $\D$-subvarieties of $\mathcal U^n\times G^m$ defined over $\mathcal U^\DD$. By quantifier elimination of $DCF_{0,m-r}$ (applied in the structure $(\mathcal U, \D)$), $\pi(A')$ is a Boolean combination of $\D$-subvarieties of $\mathcal U^{n'}\times G^{m'}$ defined over $\mathcal U^{\DD}$. Applying the inverse of $(\operatorname{Id}^{n'},\lambda^{m'})$, we obtain that $\pi(A)$ is a Boolean combination of parameterized D-subvarieties of $\mathcal U^{n'}\times G^{m'}$ defined over $\mathcal U$.
Finally, since $\pi(A)$ is definable over $K$, Lemma \ref{usil} implies that $\pi(A)$ is a Boolean combination of parameterized D-subvarieties all defined over $K$. \end{proof}
We end this section with an easy consequence of the above theorem (and Proposition \ref{uselater}).
\begin{cor}\label{Interpretation} $(\mathcal{U}^{\mathcal{D}},\mathcal{Y})_{D,K}$ is an elementary substructure of $(\mathcal{U},G)_{D,K}$. Consequently, the map assigning to each symbol $R_W \in \mathcal L_{D,K}$ the formula (over $K$) defining $W$ in $(\mathcal{U},\Delta)$ is an interpretation of $Th(\mathcal{U}^{\mathcal{D}},\mathcal{Y})_{D,K}$ in $DCF_{0,m-r,K}$. \end{cor} \begin{proof} Let $A$ be a nonempty definable set of $(\mathcal U,G)_{D,K}$. By the usual Tarski-Vaught test, it suffices to show that $A$ has a point in $(\mathcal U^\DD, \mathcal Y)_{D,K}$. By Theorem \ref{QEthm}, $A$ is a (finite) Boolean combination of parameterized D-subvarieties of some $\mathcal U^n\times G^m$. But, by Proposition \ref{uselater}, $A^\#$ is dense in $A$; in particular, $A^\#\subset (\mathcal U^\DD)^n \times \mathcal Y^m $ is nonempty. The result follows. \end{proof}
\begin{rem} It should be noted that in the ODE case algebraic D-groups (not necessarily defined over the constants) have quantifier elimination \cite{KoPi}. This result has an immediate extension to the PDE case when the parametric set of derivations $\D$ is empty (these are the algebraic D-groups studied by Buium \cite{Bu}). However, when $\D\neq \emptyset$, the situation is quite different. For instance, the arguments from \cite[\S2]{KoPi} rely heavily on the properties of jet spaces and Grassmannians for algebraic varieties, and at the moment it is unclear how such properties will translate for $\Delta$-algebraic varieties. We leave the question of quantifier elimination of parameterized D-groups for future work. \end{rem}
\section{Existence of parameterized strongly normal extensions} In this section we prove our main result concerning the existence of parameterized strongly normal extensions. Our assumptions are similar to those in the previous section: $K$ denotes our ground $\Pi$-field and $G$ is a $\Delta$-algebraic group defined over $K^{\mathcal{D}}$. We fix an integrable $K$-point $a$ of $(G,s_0)$ and a parameterized logarithmic equation \[\ell_{0}(y)=a, \quad \text{ where } y \text{ ranges in } G. \tag{$\star$}\] We let $\mathcal{Y}=\{y\in G: \ell_{0}(y)=a\}$ be the solution set of $(\star)$.
We start with some key model theoretic results and, as we shall see, most follow by adapting the proof of similar results found in \cite[\S3]{MoshePillay} and using the results of previous sections (primarily Corollary \ref{Interpretation}).
\begin{lem}\label{onty} \begin{enumerate} \item $\mathcal{Y}$ is contained in any $\Pi$-closure of $K\gen {\mathcal{U}^{\mathcal{D}}}_\D$. In particular, for all $\alpha\in\mathcal{Y}$, $tp(\alpha/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$ is isolated. \item For each complete type $p(x)$ over $K$ containing ``$x\in\mathcal{Y}$", there is a formula $\phi_p(x,y)$ and a definable partial function $f_p(x)$, both defined over $K$ in the language of $\Pi$-rings, such that for each realisation $\alpha$ of $p$, $f_p(\alpha)$ is a tuple from $\mathcal{U}^{\mathcal{D}}$ and $\phi_p(x,f_p(\alpha))$ isolates $tp(\alpha/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$. \end{enumerate} \end{lem} \begin{proof} \
\noindent (1) This is clear.
\noindent (2) For any $\alpha\in\mathcal{Y}$, by elimination of imaginaries in $DCF_{0,m}$, we may assume that the formula $\phi(x,c)$ that isolates $tp(\alpha/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$ is such that $c$ (a tuple from $\mathcal{U}^{\mathcal{D}}$) is a canonical parameter for $\phi(x,c)$ over $K$. So we have a $K$-definable partial function $f_p$, depending only on $p=tp(\alpha/K)$, so that $c=f_p(\alpha)$. As the formula $\phi(x,y)$ also depends only on $p$, we write it as $\phi_p(x,y)$. \end{proof}
One of the main outcomes of the above lemma is the following:
\begin{lem}\label{Const} Let $p(x)$ be a complete type over $K$ containing ``$x\in\mathcal{Y}$". Suppose $c=f_p(\alpha)$ for some $\alpha$ realizing $p$, where $f_p$ is as given in Lemma~\ref{onty}. Then $(K\gen{\alpha}_{\Delta})^{\mathcal{D}}=K^{\mathcal{D}}\gen{c}_{\Delta}$. \end{lem} \begin{proof} Since $c$ is a tuple from $(K\gen{\alpha}_{\Delta})^{\mathcal{D}}$ all we really have to show is that $(K\gen{\alpha}_{\Delta})^{\mathcal{D}}\subseteq K^{\mathcal{D}}\gen{c}_{\Delta}$. So let $\beta\in (K\gen{\alpha}_{\Delta})^{\mathcal{D}}$, say $\beta=g(\alpha)$ for some $K$-definable partial function $g$. But $``\beta=g(x)"\in tp(\alpha/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$ and so we have $\mathcal U \models \forall x(\phi_p(x,c)\rightarrow \beta=g(x))$. Hence $\beta$ is a tuple from $K\gen{c}_\D$. Using Fact \ref{ConstFact}, we see that $\beta$ is in fact a tuple from $K^{\mathcal{D}}\gen{c}_\D$. The result follows. \end{proof}
Note that Lemma \ref{Const} indicates that one way to prove that a PSN extension of $K$ for $(\star)$ exists is to find an $\alpha\in\mathcal{Y}$ such that $f_p(\alpha)\in K^{\mathcal{D}}$, where $p=tp(\alpha/K)$. This is precisely what we aim to do under the assumption that the $\D$-field $(K^{\mathcal{D}},\D)$ is existentially closed in $(K,\D)$. First we need to prove that we can find a single $f$ and $\phi$ to do the job of the $f_p$'s and $\phi_p$'s:
\begin{prop}\label{mainPSN} There is a formula $\phi(x,y)$ over $K$ and $K$-definable function $f:\mathcal{Y}\rightarrow (\mathcal{U}^{\mathcal{D}})^m$ for some $m$ such that for all $\alpha\in\mathcal{Y}$ the formula $\phi(x,f(\alpha))$ isolates $tp(\alpha/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$. \end{prop} \begin{proof} We prove this by means of two claims. We follow closely the strategy of \cite[\S3]{MoshePillay}.
\noindent {\bf Claim 1.} Let $\alpha\in\mathcal{Y}$ and $\sigma\in Aut(\mathcal{Y}/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$. Then $\sigma(\alpha)\cdot\alpha^{-1}$ does not depend on $\alpha$.\\ {\it Proof of claim.} If $\beta$ is another element of $\mathcal{Y}$, then $\alpha=\beta\cdot c$ for some $c\in G(\mathcal{U}^{\mathcal{D}})$. So from $\sigma(\alpha)=\sigma(\beta)\cdot c$ and $c=\beta^{-1}\cdot\alpha$, we have that $\sigma(\alpha)\cdot\alpha^{-1}=\sigma(\beta)\cdot\beta^{-1}$.
\noindent {\bf Claim 2:} The map $\rho:Aut(\mathcal{Y}/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)\rightarrow G$ taking $\sigma$ to $\sigma(\alpha)\cdot\alpha^{-1}$ (for any $\alpha\in \mathcal{Y}$) is an isomorphism between $Aut(\mathcal{Y}/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$ and a $K$-definable subgroup $H^+$ of $G$.\\ {\it Proof of claim.} For any $\alpha\in \mathcal{Y}$, letting $p=tp(\alpha/K)$, $H^+$ is definable over $K\gen{f_p(\alpha)}_{\Delta}$ as $$\{x\cdot\alpha^{-1}:\; \mathcal U\models\phi_p(x,f_p(\alpha))\}.$$ Since $H^+$ does not depend on the choice of $\alpha$ it is defined over $K$.
To finish the proof, let $Z$ be the set $\mathcal{Y}/H^+$ of right cosets of $H^+$ in $\mathcal{Y}$. By elimination of imaginaries we have a $K$-definable $f:\mathcal{Y}\rightarrow \mathcal{U}^m$ (for some $m$) such that $Z$ can be considered as the definable set $f(\mathcal{Y})$. Moreover, as $Z$ is fixed pointwise by $Aut(\mathcal{U}/K\gen {\mathcal{U}^{\mathcal{D}}}_\D)$, we have that our $K$-definable function is in fact a map from $\mathcal{Y}$ to $(\mathcal{U}^{\mathcal{D}})^m$. \end{proof}
Recall that a $\D$-subfield $F$ of $K$ is said to be \emph{existentially closed in} $(K,\D)$ if every quantifier-free formula in the language of $\D$-rings over $F$ with a solution in $K$ has a solution in $F$. It is easy to see that $(F,\D)$ is existentially closed in $(K,\D)$ if and only if any $\D$-algebraic variety over $F$ with a $K$-point has an $F$-point.
We are now ready to prove:
\begin{thm} Suppose $(K^{\mathcal{D}},\D)$ is existentially closed in $(K,\D)$. Then there is a parameterized strongly normal extension $L$ of $K$ for the equation $(\star)$. \end{thm} \begin{proof}
The function $f$ given in Proposition \ref{mainPSN} and its image $Im(f)$ are $\emptyset$-definable in the structure $(\mathcal{U}^{\mathcal{D}},\mathcal{Y})_{D,K}$ introduced in Section \ref{QED}. Moreover, Fact \ref{ConstFact} tells us that $Im(f)$ is definable over $K^{\mathcal{D}}$, by a formula $\chi(y)$ say.
We use the interpretation result (Corollary \ref{Interpretation}) in two ways. First we take $F$ to be the interpretation of $f$ in $(\mathcal{U},G)$. So $F$ is a function from $G$ to $\mathcal{U}^m$ definable over $K$ in the language of $\D$-rings; i.e., $K$-definable in the structure $(\mathcal{U},\D)$. We also take $\Gamma$ to be the interpretation of $\chi$ in $(\mathcal{U},G)$. We have that $\Gamma(y)$ defines $Im(F)$ and is also definable over $K^{\mathcal{D}}$ in $(\mathcal{U},\D)$.
Now, as the identity element $e$ of $G$ is a $K$-point, we have that $F(e)\in K^m$ and so $Im(F)$ has a point in $K$. In other words, $\Gamma(y)$ is realized in $K$. By the assumption that $(K^{\mathcal{D}},\Delta)$ is existentially closed in $(K,\Delta)$, $\Gamma(y)$ is realized in $K^{\mathcal{D}}$, say by $d$. So ``there is an $x$ such that $F(x)=d$" is true in the structure $(\mathcal{U},G)_{D,K}$. Because $(\mathcal{U}^{\mathcal{D}},\mathcal{Y})_{D,K}$ is an elementary substructure of $(\mathcal{U},G)_{D,K}$ and $d\in K^{\mathcal{D}}$, one can find such an $x$ in $\mathcal{Y}$.
So there is an $\alpha\in\mathcal{Y}$ such that $f(\alpha)\in (K^{\mathcal{D}})^m$. By Lemma \ref{Const}, $(K\gen{\alpha}_{\Delta})^{\mathcal{D}}=K^{\mathcal{D}}\gen{f(\alpha)}_{\Delta}=K^{\mathcal{D}}$. Hence $L=K\gen{\alpha}_{\Delta}$ is a parameterized strongly normal extension of $K$ for $(\star)$.
\end{proof}
\end{document} |
\begin{document}
\title{ Nested Artin approximation }
\author{ Dorin Popescu } \thanks{The support from the project ID-PCE-2011-1023, granted by the Romanian National Authority for Scientific Research, CNCS - UEFISCDI, is gratefully acknowledged. }
\address{Dorin Popescu, Simion Stoilow Institute of Mathematics of the Romanian Academy, Research unit 5, University of Bucharest, P.O.Box 1-764, Bucharest 014700, Romania} \email{dorin.popescu@imar.ro}
\maketitle
\begin{abstract} A short proof of the linear nested Artin approximation property of the algebraic power series rings is given here.
\noindent
{\it Key words } : Henselian rings, Algebraic power series rings, Nested Artin approximation property\\
{\it 2010 Mathematics Subject Classification: Primary 13B40, Secondary 14B25,13J15, 14B12.} \end{abstract}
\vskip 0.5 cm
\section*{Introduction}
The solution of an old problem (see for instance \cite{K}), the so-called nested Artin approximation property, is given in the following theorem. \begin{Theorem}[\cite{P}, {\cite[Theorem 3.6]{P1}}] \label{nes}
Let $k$ be a field, let $A=k\langle x\rangle$, $x=(x_1,\ldots,x_n)$, be the ring of algebraic power series over $k$, let $f=(f_1,\ldots,f_s)\in A[Y]^s$, $Y=(Y_1,\ldots,Y_m)$, and let $0<r_1\leq \ldots \leq r_m\leq n$ and $c$ be non-negative integers. Suppose that $f$ has a solution ${\hat y}=({\hat y}_1,\ldots,{\hat y}_m)$ in $ k[[x]] $ such that ${\hat y}_i\in k[[x_1,\ldots,x_{r_i}]]$ for all $1\leq i\leq m$ (that is, a so-called nested formal solution). Then there exists a solution $y=(y_1,\ldots,y_m)$ of $f$ in $A$ such that $y_i\in k\langle x_1,\ldots,x_{r_i}\rangle$ for all $1\leq i\leq m$ and $y\equiv{\hat y} \ \ \mbox{mod}\ \ (x)^ck[[x]]$. \end{Theorem} The proof relies on an idea of Kurke from 1972 and the Artin approximation property of rings of type $k[[u]]\langle x\rangle$ (see \cite{P}, \cite{S}, \cite{P1}). When $f$ is linear, interesting relations with other problems and a description of many results on this topic are nicely explained in \cite{Rond1}. Also a proof of the above theorem in the linear case
can be found in \cite{Rond1} using just the classical Artin approximation property of $A$ (see \cite{A}). Unfortunately, we had some difficulties in reading \cite{Rond1}, but finally we noticed a shorter proof using mainly the same ideas. This proof is the content of the present note.
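To fix ideas, here is a small (deliberately trivial) illustration of the nesting condition; the equation and solution below are chosen for this note's exposition and are not taken from the cited sources.

```latex
% Take n = m = 2, r_1 = 1, r_2 = 2 and the single linear equation
%   f(Y_1, Y_2) = Y_2 - x_2 Y_1.
% A nested formal solution must satisfy y_1 in k[[x_1]] and
% y_2 in k[[x_1, x_2]]; for instance
\[
\hat y_1=\sum_{j\geq 0} x_1^{\,j}=\frac{1}{1-x_1}\in k[[x_1]],\qquad
\hat y_2=\frac{x_2}{1-x_1}\in k[[x_1,x_2]].
\]
% Here both components are already algebraic over k[x], so the nested
% algebraic solution promised by Theorem 1 is y = (y_1, y_2) = \hat y itself.
```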
We owe thanks to G. Rond who noticed a gap in a previous version of this note.
\section{Linear nested Artin approximation property}
We start by recalling \cite[Lemma 9.2]{Rond}, with a simplified proof.
\begin{Lemma}\label{am} Let $(A,{\frk m})$ be a complete normal local domain, $u=(u_1,\dots,u_n)$, $v=(v_1,\ldots,v_m)$, let $B=A[[u]]\langle v\rangle$ be the algebraic closure of $A[[u]][v]$ in $A[[u,v]]$, and let $f\in B$. Then there exist $s\in \bf N$, an element $g$ of the algebraic closure $A\langle v, Z\rangle$ of $A[v,Z]$ in $A[[v,Z]]$, $Z=(Z_1,\ldots,Z_s)$,
and ${\hat z}\in A[[u]]^s$ such that $f=g(\hat z)$.
\end{Lemma}
\begin{proof} Replacing $f$ by $f-f(u=0)$ we may assume that $f\in (u)$. Note that $B$ is the Henselization of $C=A[[u]][v]_{({\frk m},u,v)}$ by \cite{Ra}, and so there exists some \'etale neighborhood of $C$ containing $f$. Using, for example, \cite[Theorem 2.5]{S}, there exists a monic polynomial $F$ in $X$ over $A[[u]][v]$ such that $F(f)=0$ and $F'(f)\not \in ({\frk m},u,v)$; let us say $F=\sum_{i,j}F_{ij}v^iX^j$ for some $F_{ij}\in A[[u]]$.
Set ${\hat z}_{ij}=F_{ij}-F_{ij}(u=0)\in (u)A[[u]]$, ${\hat z}=({\hat z}_{ij})$ and $G=\sum_{ij}(F_{ij}(u=0)+Z_{ij})v^iX^j$ for some new variables $Z=(Z_{ij})$. We have $G({\hat z})=F$. Set $G'=\partial G/\partial X$. As
$$G(Z=0)=G({\hat z}(u=0))\equiv F(f)=0\ \mbox{ modulo}\ (u),$$
$$G'(Z=0)=G'({\hat z}(u=0))\equiv F'(f)\not \equiv 0 \ \mbox{modulo}\ ({\frk m},u,v)$$
we get $G(X=0)\equiv 0$, $G'(X=0)\not \equiv 0$ modulo $({\frk m},v,Z)$. By the Implicit Function Theorem there exists $g\in ({\frk m},v,Z) A\langle v,Z\rangle$ such that $G(g)=0$. It follows that $G(g({\hat z}))=0$. But $F=G({\hat z})=0$ has just a solution $X=f$ in $({\frk m},u,v)B$ by the Implicit Function Theorem and so $f=g({\hat z})$.
\ \end{proof} \begin{Lemma} \label{la} Let $(A,{\frk m})$ be a Noetherian local ring, let $f\in A[U]$, $U=(U_1,\ldots,U_s)$, be a linear system of polynomial equations, $c\in \bf N$, and ${\hat u}$ a solution of $f$ in the completion $\hat A$ of $A$. Then there exists a solution $u\in A^s$ of $f$ such that $u\equiv {\hat u}\ \mbox{modulo}\ {\frk m}^c\hat A$. \end{Lemma} \begin{proof} Let $B=A[U]/(f)$ and $h:B\to {\hat A}$ be the map given by $U\mapsto {\hat u}$. By \cite[Lemma 4.2]{P} (or \cite[Proposition 36]{P2}) $h$ factors through a polynomial algebra $A[Z]$, $Z=(Z_1,\ldots,Z_s)$, let us say $h$ is the composite map $B\xrightarrow{t} A[Z]\xrightarrow{g} {\hat A}$. Choose $z\in A^s$ such that $z\equiv g(Z)\ \mbox{modulo} \ {\frk m}^c\hat A$. Then $u=t(\mbox{cls}\ U)(z)$ is a solution of $f$ in $A$ such that $u\equiv {\hat u}\ \mbox{modulo}\ {\frk m}^c\hat A$.
\ \end{proof}
\begin{Proposition} \label{p} Let $k\langle x,y\rangle$, $x=(x_1,\ldots,x_n)$, $y=(y_1,\ldots,y_m)$ be the ring of algebraic power series in $x,y$ over a field $k$ and $M\subset k\langle x,y\rangle^p$ a finitely generated $k\langle x,y\rangle$-submodule. Then $$ k[[x]] (M\cap k\langle x\rangle^p)=(k[[x,y]]M)\cap k[[x]]^p,$$ equivalently, $M\cap k\langle x\rangle^p$ is dense in $(k[[x,y]]M)\cap k[[x]]^p$; that is, for all ${\hat v}\in (k[[x,y]]M)\cap k[[x]]^p$ and $c\in \bf N$ there exists $v_c\in M\cap k\langle x\rangle^p$ such that $v_c\equiv {\hat v}\ \mbox{modulo}\ (x)^ck[[x]]^p$.
Moreover, if $c\in \bf N$ and ${\hat v}=\sum_{i=1}^t {\hat u}_i a_i$ for some $a_i\in M$, ${\hat u}_i\in k[[x,y]]$ then there exist $ u_{ic}\in k\langle x,y\rangle$ such that
$ u_{ic}\equiv {\hat u}_i\ \mbox{modulo} \ (x)^ck[[x,y]] $, $v_c=\sum_{i=1}^t u_{ic}a_i\in (M\cap k\langle x\rangle^p) $ and $\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology. \end{Proposition} \begin{proof} Let ${\hat v}\in (k[[x,y]]M)\cap k[[x]]^p$, let us say ${\hat v}=\sum_{i=1}^t {\hat u}_ia_i$ for some $a_i\in M$ and ${\hat u}_i\in k[[x,y]]$. By flatness of $k[[x]]\langle y\rangle\subset k[[x,y]]$ we see that there exist ${\tilde u}_i\in k[[x]]\langle y\rangle$ such that ${\hat v}=\sum_{i=1}^t {\tilde u}_ia_i$. Moreover, by Lemma \ref{la} we may choose ${\tilde u}_i$ such that ${\tilde u}_i\equiv {\hat u}_i\ \mbox{modulo} \ (x)^ck[[x,y]]$. Then, using Lemma \ref{am}, there exist $g_i\in k\langle y,Z\rangle$, $i\in [t]$, for some variables $Z=(Z_1,\ldots,Z_s)$, and ${\hat z}\in k[[x]]^s$ such that ${\tilde u}_i=g_i({\hat z})$. Note that $a_i=\sum_{r\in {\bf N}^m} a_{ir}y^r$, $g_i=\sum_{r\in {\bf N}^m} g_{ir}y^r$ with $a_{ir}\in k\langle x\rangle^p$, $g_{ir}\in k\langle Z\rangle$.
Clearly, ${\hat z}, {\hat v}$ is a solution in $k[[x]]$ of the system of polynomial equations $ V=\sum_{i=1}^t a_ig_i(Z) $, $V=(V_1,\ldots, V_p)$ if and only if it is a solution of the infinite system of polynomial equations
($*$) $V=\sum_{i=1}^t a_{i0}g_{i0}(Z)$, \ \ \ $\sum_{i=1}^t \sum_{r+r'=e}a_{ir}g_{ir'}(Z)=0$, $e\in {\bf N}^m$, $e\not =0.$
Since $k\langle x,Z,V\rangle$ is Noetherian we see that it is enough to consider in ($*$) only a finite set of equations, let us say with $e\leq \omega$ for some $\omega$ high enough. Applying the Artin approximation property of $k\langle x\rangle$ (see \cite{A}) we can find for any $c\in \bf N$ a solution $v_c\in k\langle x\rangle^p$, $z_c\in k\langle x\rangle^s$ of ($*$) such that $v_c\equiv {\hat v}\ \mbox{modulo}\ (x)^ck[[x]]^p$, $z_c\equiv {\hat z}\ \mbox{modulo}\ (x)^ck[[x]]^s$. Then $v_c=\sum_{i=1}^t a_ig_i(z_c)\in M\cap k\langle x\rangle^p $, and $u_{ic}=g_i(z_c)\in k\langle x,y\rangle$ satisfies $u_{ic}\equiv {\tilde u}_i\equiv {\hat u}_i\ \mbox{modulo}\ (x)^ck[[x,y]] $. Clearly $\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology and belongs to $k[[x]](M\cap k\langle x\rangle^p)$.
\ \end{proof}
\begin{Remark}{\em When $p=1$ then the above $M$ is an ideal and we get the so-called (see \cite{Rond1}) strong elimination property of the algebraic power series.} \end{Remark} The following proposition is partially contained in \cite[Lemma 4.2]{Rond1}. \begin{Proposition} \label{p1} Let $M\subset k\langle x\rangle^p$ be a finitely generated $k\langle x\rangle$-submodule and $1\leq r_1< \ldots < r_e\leq n$, $p_1,\ldots,p_e$ be some positive integers such that $p=p_1+\ldots +p_e$. Then $$T=M\cap (k\langle x_1,\ldots,x_{r_1}\rangle^{p_1}\times \ldots \times k\langle x_1,\ldots,x_{r_e}\rangle^{p_e})$$ is dense in $${\hat T}=(k[[x]]M)\cap (k[[x_1,\ldots,x_{r_1}]]^{p_1}\times \ldots \times k[[ x_1,\ldots,x_{r_e}]]^{p_e}).$$
Moreover, if $c\in \bf N$ and ${\hat v}=\sum_{i=1}^t {\hat u}_i a_i\in {\hat T}$ for some $a_i\in M$, ${\hat u}_i\in k[[x]]$ then there exist $u_{ic}\in k\langle x\rangle$ such that $ u_{ic}\equiv {\hat u}_i\ \mbox{modulo} \ (x)^ck[[x]]$, $v_c=\sum_{i=1}^t u_{ic}a_i\in T$ and
$\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology. \end{Proposition} \begin{proof} Apply induction on $e$, the case $e=1$ being done in Proposition \ref{p}. Assume that $e>1$. We may reduce to the case $r_e=n$ by replacing $M$ with $M\cap k\langle x_1,\ldots,x_{r_e}\rangle^p$ if $r_e<n$. Let $$q:\Pi_{i=1}^{e}k[[x_1,\ldots,x_{r_i}]]^{p_i}\to \Pi_{i=1}^{e-1}k[[x_1,\ldots,x_{r_i}]]^{p_i},$$ $$q':\Pi_{i=1}^{e}k[[x_1,\ldots,x_{r_i}]]^{p_i}\to k[[x_1,\ldots,x_{r_e}]]^{p_e}$$ be the canonical projections, let ${\hat v}=({\hat v}_1,\ldots, {\hat v}_p)\in {\hat T},$ and let $M_1=q(M)$. Assume that ${\hat v}=\sum_{i=1}^t {\hat u}_ia_i$ for some ${\hat u}_i\in k[[x]]$, $a_i\in M$. By the induction hypothesis applied to $M_1$ and $q(\hat v)$, given $c\in \bf N$ there exist $u_{ic}\in k\langle x\rangle$ with $u_{ic}\equiv {\hat u}_i\ \mbox{modulo}\ (x)^ck[[x]]$
such that $v'_c=\sum_{i=1}^t u_{ic}q(a_i)\in q(T)$ and $q({\hat v})$ is the limit of $(v'_c)_c$ in the $(x)$-adic topology.
Now, let $v''_c=\sum_{i=1}^t u_{ic}q'(a_i)\in k\langle x_1,\ldots,x_n\rangle^{p_e}$. We have $v''_c\equiv q'({\hat v})\ \mbox{modulo}\ (x)^ck[[x]]^{p_e}$. Then $v_c=(v'_c,v''_c)=\sum_{i=1}^t u_{ic}a_i\in T$, $v_c\equiv {\hat v}\ \mbox{modulo}\ (x)^ck[[x]]^{p}$, and $\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology.
\ \end{proof}
\begin{Corollary} \label{lnes} Theorem \ref{nes} holds when $f$ is linear. \end{Corollary} \begin{proof} If $f$ is homogeneous then it is enough to apply Proposition \ref{p1} for the module $M$ of the solutions of $f$ in $A=k\langle x\rangle$. Suppose that $f$ is not homogeneous, let us say $f$ has the form $g+a_0$ for some system of linear homogeneous polynomials $g\in A[Y]^s$ and $a_0\in A^s$. The proof in this case follows \cite[page 7]{Rond1} and we give it here only for the sake of completeness. Replace $f$ by the homogeneous system of linear polynomials ${\bar f}=g+a_0Y_0$ from $A[Y_0,Y]^s$. A nested formal solution $\hat y$ of $f$ in $k[[x]]$ with ${\hat y}_i\in k[[x_1,\ldots,x_{r_i}]]$, $1\leq i\leq m$, induces a nested formal solution $({\hat y}_0,{\hat y})$, ${\hat y}_0=1$, of $\bar f$ with $r_0=r_1$. As above, for all $c\in \bf N$ we get a nested algebraic solution $(y_0,y)$ of $\bar f$ with $y_i\in k\langle x_1,\ldots,x_{r_i}\rangle$ and $y_i\equiv {\hat y}_i\ \mbox{modulo}\ (x)^ck[[x]]$ for all $0\leq i\leq m$. It follows that $y_0$ is invertible and clearly $y_0^{-1}y$ is the desired nested algebraic solution of $f$.
\ \end{proof} \vskip 0.5 cm
\end{document}
\begin{document}
\title{$\overline{M}_{0,n}$ is not a Mori Dream Space}
\author{Ana-Maria Castravet} \author{Jenia Tevelev}
\address{Ana-Maria Castravet: \sf Department of Mathematics, The Ohio State University, 100 Math Tower, 231 West 18th Avenue, Columbus, OH 43210-1174} \email{noni@alum.mit.edu}
\address{\vskip -.5cm Jenia Tevelev: \sf Department of Mathematics, University of Massachusetts at Amherst, Lederle Graduate Research Tower, Amherst, MA 01003-9305} \email{tevelev@math.umass.edu}
\subjclass[2000]{14E30, 14H10, 14J60, 14M25, 14N20}
\begin{abstract} Building on the work of Goto, Nishida and Watanabe on symbolic Rees algebras of monomial primes, we prove that the moduli space of stable rational curves with $n$ punctures is not a Mori Dream Space for $n>133$. This answers a question of Hu and Keel. \end{abstract}
\maketitle
\section{Introduction}
We work over an algebraically closed field $k$. It~was argued that $\overline{M}_{0,n}$ should be a Mori Dream Space (MDS for short) because it is ``similar to a toric variety'' and toric varieties are basic examples of MDS. We suggest an adjustment to this principle: $\overline{M}_{0,n}$~is similar to the blow-up of a toric variety at the identity element of the torus. Specifically, we prove the following. For any toric variety $X$, we denote by $\Bl_eX$ the blow-up of $X$ at the identity element of the torus. Let $\overline{LM}_n$ be the Losev--Manin space \cite{LM}. It is a smooth projective toric variety of dimension $n-3$.
\begin{theorem}\label{asdazxvsfvsfvsdaqwf} There exists a small $\mathbb Q$-factorial projective modification $\widetilde{LM}_{n+1}$ of $\Bl_e\overline{LM}_{n+1}$ and surjective morphisms $$\widetilde{LM}_{n+1}\to\overline{M}_{0,n}\to\Bl_e\overline{LM}_n.$$ In particular, \begin{itemize} \item If $\overline{M}_{0,n}$ is a MDS then $\Bl_e\overline{LM}_n$ is a MDS. \item If $\Bl_e\overline{LM}_{n+1}$ is a MDS then $\overline{M}_{0,n}$ is a MDS. \end{itemize} \end{theorem}
Next we invoke a beautiful theorem of Goto, Nishida, and Watanabe:
\begin{theorem}[\cite{GNW}]\label{GNW theorem} If $(a, b, c) = (7m-3, 5m^2-2m, 8m-3)$, with $m\ge 4$ and $3\nmid m$, then $\Bl_e\mathbb P(a,b,c)$ is not a MDS when $\ch k=0$. \end{theorem} We show that \begin{theorem}\label{asdasdaqwf} Let $n=a+b+c+8$, where $a,b,c$ are positive coprime integers. If $\Bl_e\overline{LM}_n$ is a MDS then $\Bl_e\mathbb P(a,b,c)$ is a MDS. \end{theorem}
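As a quick numerical sanity check (our script, not part of the paper): for the smallest admissible value $m=4$, the GNW triple is $(a,b,c)=(25,72,29)$, its entries are pairwise coprime, and $n=a+b+c+8=134$.

```python
from math import gcd

def gnw_triple(m):
    """The Goto-Nishida-Watanabe triple, valid for m >= 4 with 3 not dividing m."""
    return (7 * m - 3, 5 * m**2 - 2 * m, 8 * m - 3)

a, b, c = gnw_triple(4)              # smallest admissible m
assert (a, b, c) == (25, 72, 29)
# the entries are pairwise coprime, so the triple is in particular coprime
assert gcd(a, b) == gcd(a, c) == gcd(b, c) == 1
# n = a + b + c + 8 recovers the bound n >= 134
assert a + b + c + 8 == 134
```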
It immediately follows from these results, answering the question of Hu--Keel \cite[Question 3.2]{HK}, that: \begin{corollary}\label{main} Assume $\ch k=0$. Then $\overline{M}_{0,n}$ is not a Mori Dream Space for $n\ge 134$. \end{corollary}
Understanding the birational geometry of the moduli spaces $\overline{M}_{g,n}$ of stable, $n$-pointed genus $g$ curves is a problem that has received a lot of attention from many authors. Interest in the effective cone originated in the work of Harris and Mumford \cite{HM}, who showed that $\overline{M}_{g,n}$ is a variety of general type for large $g$. Mumford also raised the question of describing the ample divisors, i.e., the nef cone. A long-standing conjecture of Fulton and Faber provides a conjectural description, which was reduced to the case of genus $0$ by Gibney, Keel, and Morrison \cite{GKM}. This prompted Hu and Keel \cite{HK} to ask whether $\overline{M}_{0,n}$ is a Mori Dream Space.
In positive genus, this is known to be typically false. For example, Keel proved in \cite{Keel} that, in characteristic zero, $\overline{M}_{g, n}$ is not a MDS for $g\geq3$, $n\geq1$, by proving that it has a nef divisor that is not semiample. Recently, Chen and Coskun proved in \cite{CC} that $\overline{M}_{1,n}$ is not a MDS for $n\geq3$ as it has infinitely many extremal effective divisors. For genus zero, the only previously settled cases were for $n\leq6$ ($\overline{M}_{0,5}$ is a del Pezzo surface, hence, a MDS by \cite{BP}; $\overline{M}_{0,6}$ is a log-Fano threefold, hence, a MDS by \cite{HK}; for a direct proof that $\overline{M}_{0,6}$ is a MDS, see \cite{C}. Note more generally that in characteristic zero, log-Fano varieties are MDS by \cite{BCHM}; however, $\overline{M}_{0,n}$ is not log-Fano for $n\geq7$). Since \cite{HK}, the question whether $\overline{M}_{0,n}$ is a MDS has been raised by several authors; see for example \cite{C}, \cite{AGS}, \cite{GM},
\cite{Kiem}, \cite{McK_survey}, \cite{Fed}, \cite{Hausen}, \cite{GianGib}, \cite{Milena}, \cite{GM_nef}, \cite{BGM}, \cite{CT1}, \cite{GianJenMoon}, \cite{Larsen}. One of the results in \cite{Milena} is that $\overline{M}_{0,n}$ is a MDS if and only if the projectivization of the pull-back of the cotangent bundle of $\mathbb P^{n-3}$ to $\overline{LM}_n$ is a MDS. In particular, Cor. \ref{main} adds to the examples in \cite{Milena} of toric vector bundles whose projectivization is not a MDS.
The original motivation for Hu and Keel's question came from Keel and M\textsuperscript{c}Kernan's result \cite{KM} that any extremal ray of the Mori cone of $\overline{M}_{0,n}$ that (1) can be contracted by a map of relative Picard number $1$, and (2) is such that the exceptional locus of the map in (1) has dimension at least $2$, is generated by a one-dimensional stratum (i.e., the Fulton-Faber conjecture is satisfied for such rays). As in a MDS any extremal ray of the Mori cone can be contracted by a map of relative Picard number $1$, a positive answer to the Hu-Keel question ``would nearly answer Fulton's question for $\overline{M}_{0,n}$'' \cite{HK}. Implicit in this statement is the expectation that condition (2) should be satisfied. It was a long-held belief that the exceptional locus of any map $\overline{M}_{0,n}\to X$ has all components of dimension at least $2$. We gave counterexamples to this statement in \cite{CT2}.
\begin{remarks}
(1) By the Kapranov description, $\overline{M}_{0,n}$ is the iterated blow-up of $\mathbb P^{n-3}$ along proper transforms of linear subspaces spanned by $n-1$ points in linearly general position. The Losev-Manin space $\overline{LM}_n$ is the iterated blow-up of $\mathbb P^{n-3}$ along proper transforms of linear subspaces spanned by $n-2$ points in linearly general position. We denote by $X_n$ the intermediate toric variety obtained by blowing-up only linear subspaces of codimension $\geq3$. By Cor. \ref{codim 3}, $\Bl_e X_{n+1}$ is a small modification of a certain $\mathbb P^1$-bundle over $\overline{M}_{0,n}$. In particular, $\Bl_eX_{n+1}$ is not a Mori Dream Space if $\ch k=0$ and $n\ge 134$.
(2) Thm. \ref{GNW theorem} is stated slightly differently in \cite{GNW}. In Section \ref{GNW section} we translate into a geometric proof the arguments in \cite{GNW}. They are based on reduction to positive characteristic and a version of Max Noether's ``AF+BG'' theorem that holds for weighted projective planes.
(3) Several arguments in this paper involve elementary transformations of vector bundles; for example, the second part of Thm. \ref{asdazxvsfvsfvsdaqwf} follows by performing elementary transformations of rank $2$ bundles on $\overline{M}_{0,n}$. We give a general criterion for being able to iterate elementary transformations (Prop.~\ref{mvcvcvcnb}), which might be of independent interest. \end{remarks}
{\bf Acknowledgements.} We are grateful to Aaron Bertram, Tommaso de Fernex, Jose Gonzalez, Sean Keel and James M\textsuperscript{c}Kernan for useful discussions. We thank the referee for several useful comments. The first author was partially supported by NSF grants DMS-1160626 and DMS-1302731. The second author was partially supported by NSF grants DMS-1001344 and DMS-1303415.
\section{Preliminaries}
We briefly recall some basic properties of MDS from \cite{HK}.
Let $X$ be a normal projective variety. A \emph{small $\mathbb Q$-factorial modification} (SQM for short) of $X$ is a small (i.e., isomorphic in codimension one) birational map $X\dashrightarrow Y$ to another normal $\mathbb Q$-factorial projective variety $Y$.
\begin{definition}\label{MDS def} A normal projective variety $X$ is called a \emph{Mori Dream Space (MDS)} if the following conditions hold: \begin{itemize} \item[(1) ] $X$ is $\mathbb Q$-factorial and $\Pic(X)_{\mathbb Q}\cong \NN^1(X)_{\mathbb Q}$; \item[(2) ] $\Nef(X)$ is generated by finitely many semi-ample line bundles; \item[(3) ] There is a finite collection of SQMs $f_i: X\dashrightarrow X_i$ such that each $X_i$ satisfies (1), (2) and $\Mov(X)$ is the union of $f_i^*(\Nef(X_i))$. \end{itemize} \end{definition}
\begin{say}\label{MDS obs} In what follows, we will often make use of the following facts: \begin{itemize} \item If $X$ is a MDS, any normal projective variety $Y$ which is an SQM of $X$, is also a MDS. This follows from the fact that the $f_i$ of Def. \ref{MDS def} are the only SQMs of $X$ \cite[Prop. 1.11]{HK}.
\item (\cite[Thm. 1.1]{Okawa}) Let $X\to Y$ be a surjective morphism of projective normal $\mathbb Q$-factorial varieties. If $X$ is a MDS then $Y$ is a MDS. Note that we use this only for maps with connected fibers, in which case the statement follows from \cite{HK}. \end{itemize} \end{say}
\begin{definition}\label{Cox} For a semigroup $\Gamma$ of Weil divisors on $X$, consider the $\Gamma$-graded ring: $$R(X,\Gamma):=\bigoplus_{D\in\Gamma}\HH^0(X, {\mathcal{O}}(D)),$$ where $\mathcal O(D)$ is the divisorial sheaf associated to the Weil divisor $D$. Suppose that the divisor class group $\Cl(X)$ is finitely generated. If $\Gamma$ is a group of Weil divisors such that $\Gamma_{\mathbb Q}\cong\Cl(X)_{\mathbb Q}$, the ring $R(X, \Gamma)$ is called a \emph{Cox ring} of $X$ and is denoted $\Cox(X)$. \end{definition}
The definition of $\Cox(X)$ depends on the choice of $\Gamma$, but finite generation of $\Cox(X)$ does not. Def. \ref{Cox} differs from \cite[Def. 2.6]{HK} in that $\Cl(X)$ replaces $\Pic(X)$. However, for us $X$ will always be $\mathbb Q$-factorial; hence, finite generation of $\Cox(X)$ is not affected. The following is an algebraic characterization of MDS:
\begin{theorem}\cite[Prop. 2.9]{HK} Let $X$ be a $\mathbb Q$-factorial projective variety with $\Pic(X)_{\mathbb Q}\cong\NN^1(X)_{\mathbb Q}$. Then $X$ is a MDS if and only if $\Cox(X)$ is a finitely generated $k$-algebra. \end{theorem}
\section{Proof of Theorem \ref{asdasdaqwf}}\label{main section}
\begin{proposition}\label{toric} Let $\pi:\,N\to N'$ be a surjective map of lattices (finitely generated free ${\mathbb{Z}}$-modules) with kernel of rank $1$ spanned by a primitive vector $v_0\in N$. Let $\Gamma$ be a finite set of rays in $N_{\mathbb R}$ spanned by elements of $N$, such that the rays $\pm{R_0}$ spanned by $\pm{v_0}$ are not in~$\Gamma$. Let $\mathcal F'\subset N'_\mathbb R$ be a complete simplicial fan with rays given by $\pi(\Gamma)$. Suppose that the corresponding toric variety $X'$ is projective (notice that it is also $\mathbb Q$-factorial because $\mathcal F'$ is simplicial). Then
(A) There exists a complete simplicial fan $\mathcal F\subset N_\mathbb R$ with rays given by $\Gamma\cup\{\pm R_0\}$ and such that \begin{itemize} \item the corresponding toric variety $X$ is projective; \item the rational map $p:\,X\dashrightarrow X'$ induced by $\pi$ is regular; \item each cone of $\mathcal F$ maps onto a cone of $\mathcal F'$. \end{itemize}
(B) There exists an SQM $Z$ of $\Bl_eX$ such that the rational map $Z\dashrightarrow\Bl_eX'$ induced by $p$ is regular. In particular, if $\Bl_eX$ is a MDS then $\Bl_eX'$ is a MDS. \end{proposition}
\begin{proof}
We first prove (A). We argue by induction on $|\Gamma|-|\pi(\Gamma)|$. Suppose that this number is zero, and in particular we have a bijection between $\Gamma$ and $\pi(\Gamma)$. Then we define $\mathcal F$ as follows: for any subset $J\subset\Gamma$ (possibly empty) such that the rays spanned by the vectors in $\pi(J)$ form a cone, $\mathcal F$ contains the cone spanned by the rays in $J$, the cone spanned by the rays in $J\cup\{R_0\}$, and the cone spanned by the rays in $J\cup\{-R_0\}$. It follows from the fact that $\mathcal F'$ is a complete simplicial fan that $\mathcal F$ is also a complete simplicial fan in $N_\mathbb R$ with rays in $\Gamma\cup\{\pm R_0\}$. Moreover, the rational map $p:\,X\dashrightarrow X'$ induced by $\pi$ is regular and in fact each cone of $\mathcal F$ maps onto a cone of $\mathcal F'$.
Next we show that $X$ is projective. It follows from the description of the map of fans that all fibers of $p$ are $\mathbb P^1$'s (only set-theoretically because the fibers are not necessarily reduced), and moreover~$D_{0}$, the torus invariant $\mathbb Q$-Cartier divisor corresponding to the ray~$R_0$, is a section of $p$ and therefore is $p$-ample. It follows that $p$ is projective and therefore that $X$ is projective because $X'$ is projective. For a purely toric proof of projectivity, let $A$ be an ample Cartier divisor on $X'$.
Let $D=D_0+mp^*(A)$. We argue that the $\mathbb Q$-Cartier divisor $D$ is ample for large $m>0$ by using the Toric Kleiman Criterion \cite[Thm. 6.3.13]{CLS}, i.e., we prove that $D\cdot C>0$ for every torus invariant curve $C$ in $X$. Torus invariant curves have the form $V(\tau)$, for $\tau$ a cone in $\mathcal F$ of dimension $n-1$ ($n=\dim X$). There are two cases: (1) $\tau$ is spanned by rays $R_1,\ldots, R_{n-1}$ in $\Gamma$; and (2) $\tau$ is spanned by $R_0$ and rays $R_1,\ldots, R_{n-2}$ in $\Gamma$. In Case (1), $p(C)$ is a point in $X'$; hence, $D\cdot C=D_0\cdot C$. Note that $\tau=\sigma\cap\sigma'$, where $\sigma$ is the cone spanned by $\tau$ and $R_0$ and $\sigma'$ is the cone spanned by $\tau$ and $-R_0$. Then by \cite[Lemma 6.4.2]{CLS} $D_0\cdot C=\frac{\mult(\tau)}{\mult(\sigma)}>0$, where $\mult(\sigma)$ denotes the multiplicity of a simplicial cone $\sigma$. In Case (2), $p(C)$ is the torus invariant curve $V(\overline{\tau})$ in $X'$, where $\overline{\tau}=\langle\pi(R_1),\ldots, \pi(R_{n-2})\rangle$. Let $M\geq0$ be an integer such that $D_0\cdot C'> -M$, for all the torus invariant curves $C'$ in $X$. By the projection formula, $D\cdot C=D_0\cdot C+ mA\cdot p_*(C)>0$ if $m\geq M$.
Now we do the inductive step. Let $R'\in\pi(\Gamma)$ and let $Z\subset\Gamma$ be the set of all rays $R\in\Gamma$ such that $\pi(R)=R'$. Without loss of generality
we can suppose that $|Z|>1$. Choose $R\in Z$. Let $\tilde\Gamma=\Gamma\setminus\{R\}$. Since the rays of $\mathcal F'$ are given by $\pi(\tilde\Gamma)=\pi(\Gamma)$, by the inductive assumption, the theorem is true for $\tilde\Gamma$. Let $\mathcal G\subset N_\mathbb R$ be the corresponding fan and $\tilde{X}$ be the corresponding toric variety. Let $\pi_{\mathbb R}: N_{\mathbb R}\to N'_{\mathbb R}$ be the map induced by $\pi$. Then $\pi_{\mathbb R}^{-1}(R')\subset N_{\mathbb R}$ is a $2$-dimensional half-plane, which is the union of the cones in $\mathcal G$ spanned by pairs of rays: $$\{R_0=U_0,U_1\}, \{U_1,U_2\},\ldots, \{U_{k-1},U_k\}, \{U_k,U_{k+1}=-R_0\},$$ where $\{U_1,\ldots,U_k\}=Z\setminus\{R\}$ (see Figure \ref{fanpic}). \begin{figure}
\caption{\small The rays $U_1,\ldots, U_k$ of $\tilde{\Gamma}$ that map to $R'$.}
\label{fanpic}
\end{figure}
Choose an index $i$ such that $R$ belongs to the relative interior of the angle spanned by $U_i$ and $U_{i+1}$. Then the fan $\mathcal F$ is obtained as a star subdivision on $\mathcal G$ centered at $R$. By \cite[Prop. 11.1.6]{CLS} the map $X\to \tilde{X}$ is projective. All properties in (A) are clearly satisfied.
Now we prove (B). Notice that the map $p:\,X\to X'$ over the open torus $T'\subset X'$ is a trivial $\mathbb P^1$-bundle $\pr_1:\,T'\times\mathbb P^1\to T'$ (recall that the map $p:\,X\to X'$ is not globally a $\mathbb P^1$-bundle). To construct $Z$ and the morphism $f: Z\to\Bl_e X'$ that factors through $\Bl_e X$, we first construct a small modification $Z'$ of $\Bl_e(T'\times\mathbb P^1)$ and a morphism $$f': Z'\to \Bl_e T'$$ resolving the induced rational map $\Bl_e(T'\times\mathbb P^1)\dashrightarrow \Bl_e T'$.
We then obtain $Z$ and $f$ by gluing $f'$ to $$p: X\setminus p^{-1}\{e\}\to X'\setminus\{e\}$$ along the $\mathbb P^1$-bundle $\pr_1:\,(T'\setminus\{e\})\times\mathbb P^1\to (T'\setminus\{e\})$.
To construct $Z'$, we do a linear change of variables to identify $$T'\simeq\mathbb A^k\setminus\bigcup_i\{x_i=-1\},\quad e\mapsto 0$$ and $$\mathbb P^1\simeq\mathbb P^1,\quad 1\mapsto 0.$$
Thus we identify $p_{|p^{-1}(T')}$ with the restriction of the toric projection map $\pr_1:\,\mathbb A^k\times\mathbb P^1\to \mathbb A^k$ (for a different choice of the toric structure) to the open set $T'\subset\mathbb A^k$. Blow-ups of $X$ and $X'$ at the identity elements of their tori now correspond to blow-ups in torus fixed points: $$Y:=\Bl_0(\mathbb A^k\times\mathbb P^1),\quad Y':=\Bl_{0}\mathbb A^k.$$
The fans are as follows: the fan of $Y'$ is the star subdivision of the positive octant $\langle e_1,\ldots,e_k\rangle$ in the vector $e_0:=e_1+\ldots+e_k$. Its top-dimensional cones are spanned by $e_0$ and $\{e_i\}_{i\in I}$, where $I\subset\{1,\ldots,k\}$ is a subset of cardinality $k-1$. The fan of $Y$ contains an octant $\tau=\langle e_1,\ldots,e_k,-e_{k+1}\rangle$ and the star subdivision of the positive octant $\langle e_1,\ldots,e_k,e_{k+1}\rangle$ in the vector $f_0:=e_1+\ldots+e_{k+1}$. In particular, the fan of $Y$ contains the cone $\tau'=\langle e_1,\ldots,e_k,f_0\rangle$. We construct a small modification $Z'$ of $Y$ as follows: We remove the cones $\tau$ and $\tau'$ from the fan of $Y$ and instead add $k$ top-dimensional cones spanned by $f_0$, $-e_{k+1}$, and $\{e_i\}_{i\in I}$, where $I\subset\{1,\ldots,k\}$ is a subset of cardinality $k-1$. To see this geometrically, consider the trivial bundle $\mathbb P:=Y'\times\mathbb P^1\to Y'$ with its sections $s_0=Y'\times\{0\}$ and $s_{\infty}=Y'\times\{\infty\}$. If $E$ denotes the exceptional divisor of $Y'\to\mathbb A^k$, let $Z=s_0(E)$. Let $\tilde{\mathbb P}$ be the blow-up of $\mathbb P$ along $Z$. Let $D=E\times\mathbb P^1\subset\mathbb P$ and let $\tilde D$ be its proper transform in $\tilde{\mathbb P}$. There are two ways to blow down $\tilde D\cong\mathbb P^{k-1}\times\mathbb P^1$: $$\alpha: \tilde{\mathbb P}\to Z',\quad \alpha(\tilde D)=\tilde{s}_{\infty}(E)\cong\mathbb P^{k-1},$$ $$\beta: \tilde{\mathbb P}\to Y,\quad \beta(\tilde{D})=\tilde{F}\cong\mathbb P^1,\quad F=\{0\}\times\mathbb P^1,$$ where $\tilde{s}_{\infty}$ is the proper transform of the section $s_{\infty}$ under the rational map $\mathbb P\dashrightarrow Z'$ and $\tilde{F}$ is the proper transform of $F$ in $Y$. Notice that the rational map $Z'\dashrightarrow Y'$ is regular, and one can check that it is the $\mathbb P^1$-bundle $\mathbb P_{Y'}(\mathcal O\oplus\mathcal O(-E))$.
Note that over $Y'\setminus E\cong(\mathbb A^k\setminus\{0\})\times\mathbb P^1$ all above birational maps are isomorphisms.
\begin{remark} Note that $Z'$ is the elementary transformation of the trivial $\mathbb P^1$-bundle over $Y'$ given by the data $(E,Z)$ (see Section \ref{iff section}). Alternatively, one can construct $Z'$ and $f'$ by doing this elementary transformation. Then it is not hard to argue that the new $\mathbb P^1$-bundle is a small modification of $\Bl_e(T'\times\mathbb P^1)$. \end{remark}
To construct $Z$ and the morphism $f: Z\to\Bl_e X'$, we glue $Z'\to Y'$ (with preimages of hyperplanes $\{x_i=-1\}$ removed) to $$p: X\setminus p^{-1}\{e\}\to X'\setminus\{e\}$$ along the $\mathbb P^1$-bundle $\pr_1:\,(T'\setminus\{e\})\times\mathbb P^1\to (T'\setminus\{e\})$. Clearly, $Z$ is $\mathbb Q$-factorial, since $Z'$ and $X$ are $\mathbb Q$-factorial.
It remains to show that $Z$ is projective and it would suffice to show that the morphism $f$ is projective. This morphism is clearly projective in both charts of $Z$, but since projectivity is not local on the base, we have to give a global argument. It is enough to construct an $f$-ample divisor on $Z$. Let $A$ be an irreducible very ample divisor on $X$ and let $\tilde A$ be its proper transform in $Z$. We claim that $\tilde A$ is $f$-ample. Indeed, it is obviously $f$-ample in the second chart of $Z$. But the first chart is a $\mathbb P^1$-bundle and $\tilde A$ surjects onto the base, and so it is $f$-ample. \end{proof}
\begin{proof}[Proof of Thm. \ref{asdasdaqwf}.] The toric data of $\overline{LM}_n$ is as follows, see \cite{LM}. Fix general vectors $e_1,\ldots,e_{n-2}\in\mathbb R^{n-3}$ such that $e_1+\ldots+e_{n-2}=0$. The lattice $N$ is generated by $e_1,\ldots,e_{n-2}$. The rays of the fan of $\overline{LM}_n$ are spanned by the primitive lattice vectors $\sum_{i\in I}e_i$, for each subset
$I$ of $S:=\{1,\ldots, n-2\}$ with $1\le |I|\le n-3$. Notice that rays of this fan come in opposite pairs. We are not going to need cones of higher dimension of this fan. The main idea is to choose a sequence of projections from these rays to get a sequence of (generically) $\mathbb P^1$-bundles $$X_1\to X_2\to X_3\to X_4\to\ldots,$$ where $X_1$ is an SQM of $\overline{LM}_n$ which is different from the standard tower of forgetful maps $$\overline{LM}_n\to\overline{LM}_{n-1}\to\overline{LM}_{n-2}\to\ldots$$ Specifically, we partition $$S=S_1\coprod S_2\coprod S_3$$ into subsets of size $a+2, b+2, c+2$ (so $n=a+b+c+8$). We also fix some indices $n_i\in S_i$, for $i=1,2,3$. Let $N''\subset N$ be a sublattice spanned by the following vectors: \begin{equation}\label{axfafgfgfg} e_{n_i}+e_r\quad\hbox{\rm for}\quad r\in S_i\setminus\{n_i\},\ i=1,2,3. \tag{\arabic{section}.\arabic{et}}\addtocounter{et}{1}\end{equation} Let $N'=N/N''$ be the quotient group and let $\pi$ be the projection map. Then we have the following: \begin{enumerate} \item $N'$ is a lattice; \item $N'$ is spanned by the vectors $\pi(e_{n_i})$, for $i=1,2,3$; \item $a\pi(e_{n_1})+b\pi(e_{n_2})+c\pi(e_{n_3})=0$. \end{enumerate} It follows at once that the toric surface with lattice $N'$ and rays spanned by $\pi(e_{n_i})$ for $i=1,2,3$, is a weighted projective plane $\mathbb P(a,b,c)$.
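Property (3) can be checked numerically in a small case. The following sketch (our script, with an arbitrary concrete choice of the partition and of the vectors $e_i$) verifies that $a\,e_{n_1}+b\,e_{n_2}+c\,e_{n_3}$ equals the sum of the generators of $N''$, hence maps to $0$ in $N'=N/N''$.

```python
# Hypothetical check of property (3): a*pi(e_{n_1}) + b*pi(e_{n_2}) + c*pi(e_{n_3}) = 0
# in N' = N/N''.  We verify the equivalent statement in N: the vector
# a*e_{n_1} + b*e_{n_2} + c*e_{n_3} is the sum of the generators e_{n_i} + e_r of N''.
a, b, c = 1, 2, 3                  # any positive values work for this identity
n = a + b + c + 8                  # n = 14
dim = n - 3                        # ambient rank n - 3

# e_1, ..., e_{n-3}: standard basis; e_{n-2} = -(e_1 + ... + e_{n-3}),
# so that e_1 + ... + e_{n-2} = 0, as required.
e = [[1 if j == i else 0 for j in range(dim)] for i in range(dim)]
e.append([-1] * dim)

# partition S = {0, ..., n-3} into blocks of sizes a+2, b+2, c+2
S1 = list(range(0, a + 2))
S2 = list(range(a + 2, a + b + 4))
S3 = list(range(a + b + 4, n - 2))
blocks = [(S1[0], S1), (S2[0], S2), (S3[0], S3)]   # (n_i, S_i) with n_i the first index

def vec_add(u, v):
    return [x + y for x, y in zip(u, v)]

# sum of all generators e_{n_i} + e_r, r in S_i \ {n_i}
total = [0] * dim
for n_i, S_i in blocks:
    for r in S_i:
        if r != n_i:
            total = vec_add(total, vec_add(e[n_i], e[r]))

target = [a * x + b * y + c * z for x, y, z in zip(e[S1[0]], e[S2[0]], e[S3[0]])]
assert total == target   # a e_{n_1} + b e_{n_2} + c e_{n_3} lies in N''
```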
To finish the proof of the theorem, we apply Prop. \ref{toric} inductively to the sequence of lattices $N_j$, $j=1,\ldots,n-4$, obtained by taking the quotient of $N$ by the sublattice spanned by the first $j-1$ vectors of the sequence \eqref{axfafgfgfg} (arranged in any order) and the sets of rays $\Gamma_j$ obtained by projecting the rays of the fan of $\overline{LM}_n$. More precisely, we do a backwards induction, by starting with the canonical simplicial structure on the fan of the complete (hence, projective) toric surface $X_{n-4}$ with data $N'=N_{n-4}$, $\Gamma_{n-4}$. It remains to notice that we have a regular map $X_{n-4}\to\mathbb P(a,b,c)$ obtained by dropping all vectors in $\Gamma_{n-4}$ except for $\pi(e_{n_i})$ for $i=1,2,3$. Clearly, the map is an isomorphism on the open torus; hence, there is a birational morphism $\Bl_e X_{n-4}\to\Bl_e\mathbb P(a,b,c)$. The result of applying induction is a sequence of toric morphisms $$X_1\to X_2\to\ldots\to X_{n-4},$$ such that the rational map $\Bl_e X_i\dashrightarrow\Bl_e X_{i+1}$ factors through a projective $\mathbb Q$-factorial small modification $Z_i$ of $\Bl_e X_i$, followed by a surjective regular map $Z_i\to\Bl_e X_{i+1}$. The first toric variety in the sequence $X_1$ is a small modification of $\overline{LM}_n$ (having the same rays) which is an isomorphism on the open torus. Hence, $\Bl_e X_1$ is a small modification of $\Bl_e\overline{LM}_n$. The result now follows from Thm. \ref{GNW theorem} and \ref{MDS obs}. \end{proof}
\section{Proof of Theorem \ref{GNW theorem}}\label{GNW section}
The results in \cite{GNW} are stated in a slightly different form than Thm. \ref{GNW theorem}. We first explain how our formulation is equivalent to \cite[Cor. 1.2]{GNW}. For the reader's convenience, we also translate the arguments in \cite{GNW} into a geometric proof of Thm. \ref{GNW theorem}.
Let $a, b, c>0$ be pairwise coprime integers. Let $\mathbb P:=\mathbb P(a,b,c)$ be the weighted projective space $\Proj k[x,y,z]$, with $\deg(x)=a$, $\deg(y)=b$, $\deg(z)=c$. Then $\mathbb P$ is a toric variety which is smooth outside the three torus invariant points. Consider the torus invariant divisors: $$D_1=V_+(x), \quad D_2=V_+(y), \quad D_3=V_+(z).$$
Let $m_i$ ($i=1,2,3$) be integers such that $m_1a+m_2b+m_3c=1$ and let $H=\sum m_i D_i$. Then $\Cl(\mathbb P)=\mathbb Z\{H\}$, $H$ is $\mathbb Q$-Cartier and $H^2=1/(abc)$.
Let $\mathfrak{p}:=\mathfrak{p}(a,b,c)$ be the kernel of the $k$-algebra homomorphism: $$\phi: k[x, y, z]\rightarrow k[t], \quad\phi(x)=t^a,\quad\phi(y)=t^b,\quad\phi(z)=t^c.$$
The identity of the open torus in $\mathbb P$ is the point $e=V_+(\mathfrak{p})$. Let $X=\Bl_e\mathbb P$ denote the blow-up of $\mathbb P$ at $e$; let $E$ denote the exceptional divisor. As $e\notin D_i$, we can pull-back to $X$ the Weil divisors $D_i$ and let $A=\sum m_i\pi^{-1}(D_i)$. Then $\Cl(X)=\mathbb Z\{A, E\}$. A Cox ring of $X$ is: $$\Cox(X)=\oplus_{d,l\in\mathbb Z}\HH^0(X, {\mathcal{O}}(dA-lE)). $$ Note that since $a,b,c$ are pairwise coprime, $\mathcal O(dH)\cong\mathcal O(d)$.
It was observed by Cutkosky \cite{Cutkosky} that finite generation of $\Cox(X)$ is equivalent to the finite generation of the symbolic Rees algebra $R_s(\mathfrak{p})$ (here we follow the exposition in \cite{Kurano-Matsuoka}). Recall that for a prime ideal $\mathfrak{p}$ in a ring $R$, the $l$-th \emph{symbolic power} of $\mathfrak{p}$ is the ideal: $$\mathfrak{p}^{(l)}=\mathfrak{p}^lR_{\mathfrak{p}}\cap R.$$ The subring of the polynomial ring $R[T]$ given by $$R_s(\mathfrak{p}):=\bigoplus_{l\geq0}\mathfrak{p}^{(l)}T^l,$$ is called the \emph{symbolic Rees algebra} of $\mathfrak{p}$.
In our situation, for the prime ideal $\mathfrak{p}$ in $S=k[x,y,z]$ defined above, we identify the symbolic Rees algebra $R_s(\mathfrak{p})$ with a subalgebra of $\Cox(X)$. Using the identification $\HH^0(\mathbb P,{\mathcal{O}}(d))=S_d$, we have: $$\HH^0(X, {\mathcal{O}}(dA-lE))\cong\HH^0(\mathbb P, {\mathcal{O}}(d)\otimes{\mathcal{I}}^l_e)=S_d\cap\mathfrak{p}^{(l)},$$ where ${\mathcal{I}}_e$ denotes the ideal sheaf of the point $e$. It follows that $R_s(\mathfrak{p})$ is isomorphic to the subalgebra of $\Cox(X)$ given by $$\bigoplus_{d,l\geq0}\HH^0(X,{\mathcal{O}}(dA-lE)).$$ Moreover, $\Cox(X)$ is isomorphic to the extended symbolic Rees ring: $$R_s(\mathfrak{p})[T^{-1}]=\ldots\oplus ST^{-2}\oplus ST^{-1}\oplus S\oplus \mathfrak{p} T\oplus\mathfrak{p}^{(2)}T^2\oplus\ldots$$ Clearly, $R_s(\mathfrak{p})$ is a finitely generated $k$-algebra if and only if $\Cox(X)$~is.
Assume now that $$(a,b,c)=(7m-3, 5m^2-2m, 8m-3), \quad m\geq4,\quad m\not\equiv0 \mod 3.$$
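As a quick sanity check (not needed for the argument), one can confirm numerically that this family satisfies the standing assumption that $a,b,c$ are pairwise coprime; the range of $m$ below is an arbitrary sample:

```python
from math import gcd

def triple(m):
    # The family (a, b, c) = (7m - 3, 5m^2 - 2m, 8m - 3) from the text.
    return 7 * m - 3, 5 * m * m - 2 * m, 8 * m - 3

# Check pairwise coprimality for m >= 4 with m not divisible by 3.
for m in range(4, 200):
    if m % 3 == 0:
        continue
    a, b, c = triple(m)
    assert gcd(a, b) == gcd(a, c) == gcd(b, c) == 1, (m, a, b, c)
print("pairwise coprime for all tested m")
```

For instance, $m=4$ gives $(a,b,c)=(25,72,29)$.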
By \cite[Cor. 1.2]{GNW}, the symbolic Rees algebra $R_s(\hat{\mathfrak{p}})$ of the extended ideal $\hat{\mathfrak{p}}$ in the formal power series ring $\hat{S}=k[[x,y,z]]$ is not Noetherian if $\ch k=0$
(and it is Noetherian if $\ch k>0$). Since
$R_s(\mathfrak{p})\otimes_S\hat{S}\cong R_s(\hat{\mathfrak{p}})$ \cite[Lemma 2.3]{GN_ams}, it follows that $R_s(\mathfrak{p})$ is not
finitely generated.
Indeed, otherwise $R_s(\hat{\mathfrak{p}})$ would be a finitely generated $\hat S$-algebra and hence
Noetherian by Hilbert's basis theorem.
We now give a geometric proof of Thm. \ref{GNW theorem}. First note the following characterization of $X$ being a MDS in the presence of a negative curve:
\begin{lemma}\cite{Huneke,Cutkosky}\label{MDS surface} Assume $X=\Bl_e\mathbb P$ contains an irreducible curve $C\neq E$ with $C^2<0$. Then $X$ is a MDS if and only if there exists an effective divisor $D$ such that $D\cdot C=0$ and $D$ does not contain $C$ as a fixed component. \end{lemma}
\begin{proof} Since $C^2<0$, it follows that $C$ generates an extremal ray of the Mori cone $\NE(X)$ and hence, $\NE(X)=\mathbb R_{\geq0}\{C, E\}$. The nef cone is generated by $H$ and the ray $R$ in $\NE(X)$ defined by $R\cdot C=0$, $R\cdot E>0$. Then $X$ is a MDS if and only if $R$ is generated by a semiample divisor. This proves the ``only if'' implication. Conversely, if there is an effective divisor $D$ as in the lemma, we may replace $D$ by a divisor with no fixed components; then $D$ is semiample by Zariski's theorem (\cite[2.1.32]{Laz}). \end{proof}
\begin{remark} As observed by Cutkosky \cite{Cutkosky}, if $\ch k>0$ and $X=\Bl_e\mathbb P$ contains a negative curve, then $X$ is always a MDS due to Artin's contractibility criterion \cite{Artin}. \end{remark}
Let now $(a,b,c)=(7m-3, 5m^2-2m, 8m-3)$, $m\geq4$, $m\not\equiv0 \mod 3$.
Let $C$ be the proper transform on $X$ of the curve $y^3=x^mz^m$ in $\mathbb P$. The class of $C$ in $\Cl(X)$ is $$C=3(5m^2-2m)H-E.$$ Note that $C$ is an irreducible curve with $C^2<0$. If $D\in\NE(X)$ is such that $D\cdot C=0$, then the class of $D$ equals $$D_d:=d(7m-3)(8m-3)H-3dE,$$ for some positive integer $d$.
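For the reader's convenience, both assertions are direct intersection-number computations: writing $H$ also for its pull-back to $X$, we have $H^2=1/(abc)$, $H\cdot E=0$ and $E^2=-1$, so
$$C^2=(3bH-E)^2=9b^2H^2+E^2=\frac{9b}{ac}-1<0,$$
since $9b=45m^2-18m<56m^2-45m+9=ac$ for $m\geq4$, and
$$D_d\cdot C=\big(d\,ac\,H-3dE\big)\cdot(3bH-E)=3abcd\,H^2+3dE^2=3d-3d=0.$$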
Consider the set ${\mathcal{I}}$ of effective Weil divisors $D$ on $X$ such that $D\cdot C=0$ and $D$ does not contain $C$ as a fixed component. A crucial fact is the following:
\begin{proposition}\cite{GNW}\label{MaxNoether} The set
$$I=\{d\in\mathbb Z_{\ge0}\quad |\quad \exists D\in{\mathcal{I}},\ [D]=D_d\}$$ equals $\mathbb Z_{\ge0}d_0$ for some non-negative integer $d_0$. \end{proposition}
We will prove Prop.~\ref{MaxNoether} using a version of Max Noether's ``AF+BG'' theorem \cite[p. 61]{Fulton} that holds for weighted projective planes. Note that ${\mathcal{I}}$ and $I$ depend on the field $k$. We will write ${\mathcal{I}}_k$ whenever we need to specify the field $k$.
\begin{definition} Let $f, g\in S$ and $\mathfrak{p}$ be a prime ideal in $S$ which is a minimal prime of the ideal $(f,g)$. We say that \emph{$h\in S$ satisfies Noether's condition at the prime ideal $\mathfrak{p}$ (with respect to $f,g$)} if $h\in(f,g)S_{\mathfrak{p}}$. \end{definition}
\begin{proposition}[AF+BG theorem]\label{MaxNoether WPP} Let $f, g, h\in S$. Assume that the minimal primes $\mathfrak{p}_1,\ldots, \mathfrak{p}_s$ of the ideal $(f,g)$ all have height $2$. If $h$ satisfies Noether's condition at $\mathfrak{p}_i$ for all $i=1,\ldots, s$, then $h\in(f,g)$. \end{proposition}
\begin{proof} As $h\in(f,g)S_{\mathfrak{p}_i}$, there exist $u_i\in S\setminus\mathfrak{p}_i$ such that $u_ih\in(f,g)$.
For each $i$ we can find elements $y_i\in \cap_{j\neq i}\mathfrak{p}_j\setminus\mathfrak{p}_i$. Then $u:=\sum u_i y_i\notin\mathfrak{p}_i$ for any $i$ and $uh\in (f,g)$. Since $S$ is Cohen-Macaulay, by the Unmixedness Theorem \cite[Cor. 18.14]{Eisenbud}, all the associated primes of $(f,g)$ are minimal. Hence, the zero divisors of $S/(f,g)$ consist of elements of the $\mathfrak{p}_i$'s. It follows that $u$ is not a zero divisor in $S/(f,g)$, hence $h\in (f,g)$.
\begin{corollary}\label{MN} If $F=V_+(f)$, $G=V_+(g)$ are curves in $\mathbb P$ with no common components and $h\in S$ satisfies Noether's condition at each point of $F\cap G$, then $h=Af+Bg$, for some $A,B\in S$. \end{corollary}
\begin{lemma}\label{N condition} Assume $F=V_+(f)$ and $G=V_+(g)$ are curves in $\mathbb P$ with no common components, $F\cap G$ does not contain any of the torus invariant points, and $F$ is smooth along $F\cap G$. Let $h\in S$ and let $G'=V_+(h)$. Assume that for all $p\in F\cap G$ we have: $$\mult_p(G',F)\geq\mult_p(G,F).$$ Then $h$ satisfies Noether's condition at each point of $F\cap G$. \end{lemma}
\begin{remark} Note that this lemma includes the ``classical'' case when $F$ and $G$ intersect transversally (and away from torus fixed points) and $G'$ passes through all points in $F\cap G$. \end{remark}
\begin{proof} Let $p\in F\cap G$ with the corresponding homogeneous prime ideal~$\mathfrak{p}$. By assumption, at least two of $x,y,z$ are not in $\mathfrak{p}$. Say $x,y\notin\mathfrak{p}$. Since $a,b$ are coprime, let $m_1,m_2$ be integers such that $m_1a+m_2b=1$. Let $r=x^{m_1}y^{m_2}$. Note that $r$ is a unit in $S_{xy}$. For $f\in S_d$, denote $f_1=f/r^d\in S_{(xy)}$. Consider the functions $f_1, g_1, h_1$ corresponding to $f,g,h$. Denote by $t$ a generator of the maximal ideal of $\mathcal O_{C,p}=\mathcal O_{\mathbb P,p}/(f_1)$. If $\overline{g}_1$, $\overline{h}_1$ denote the images of $g_1$, $h_1$ in $\mathcal O_{C,p}$, we have $\overline{g}_1=ut^n$, $\overline{h}_1=vt^m$, for units $u,v\in\mathcal O_{C,p}$ and with $n=\mult_p(G,F)$, $m=\mult_p(G',F)$. As $m\geq n$, it follows that $\overline{h}_1\in(\overline{g}_1)$, i.e., $h_1\in(f_1,g_1)\subseteq\mathcal O_{\mathbb P,p}=S_{(\mathfrak{p})}$. Since $x,y\notin\mathfrak{p}$, it follows that $h\in(f,g)S_{\mathfrak{p}}$. \end{proof}
\begin{proof}[Proof of Prop. \ref{MaxNoether}] Assume ${\mathcal{I}}\neq\emptyset$ and let $d_0$ be the smallest positive integer in $I$. Let $g\in S$ be such that the proper transform $D$ in $X$ of $G:=V_+(g)\subset\mathbb P$ has class $D_{d_0}$ and such that $D$ does not contain $C$. Let $d\in I$, $d>0$. Let $h\in S$ be such that the proper transform $D'$ of $G':=V_+(h)$ has class $D_{d}$
and such that $D'$ does not contain $C$. Recall that $C$ is the proper transform in $X$ of $F:=V_+(f)$, where $f=y^3-x^mz^m$. Since $D\cdot C=0$, $D$ and $C$ are disjoint in $X$; hence, $G$ and $F$ intersect only at $e$ in $\mathbb P$ and we have: $$\mult_e(G,F)=\mult_e(G)=3d_0.$$
Similarly, $\mult_e(G',F)=3d$. Since $d\geq d_0$, by Lemma \ref{N condition}, $h$ satisfies Noether's condition (with respect to $f,g$). By Cor. \ref{MN}, $h=Af+Bg$ for some $A,B\in S$. If $D_1$ denotes the proper transform in $X$ of $V_+(B)$, note that $[D']=[D]+[D_1]$. It follows that $D_1\in{\mathcal{I}}$ and so $d-d_0\in I$. The statement now follows by induction. \end{proof}
\begin{lemma}\cite{GNW}\label{D_p}
Assume $\ch k=p\geq3$. Then there exists $D\in{\mathcal{I}}_k$ with class $D_p$. \end{lemma}
\begin{proof} We recall from \cite[p. 390]{GNW} the construction of a polynomial $h\in\mathfrak{p}^{(3p)}$ of degree $p(7m-3)(8m-3)$ such that $f\nmid h$. The ideal $\mathfrak{p}$ contains polynomials $u$, $v$ and $f$, where $$u=z^{3m-1}-x^{2m-1}y^2,\quad v=x^{3m-1} -yz^{2m-1}, \quad f=y^3-x^mz^m$$ (in fact $u,v,f$ generate $\mathfrak{p}$ by the Hilbert--Burch theorem but we don't need this). Let $$d_2=x^{m-1}y^5z^{m-1}-3x^{2m-1}y^2z^{2m-1}+x^{5m-2}y+z^{5m-2},$$ $$d_3=-x^{3m-2}y^7+2x^{m-1}y^5z^{3m-1}+x^{4m-2}y^4z^m-5x^{2m-1}y^2z^{4m-1}+$$ $$+3x^{5m-2}yz^{2m}-x^{8m-3}z+z^{7m-2},$$ $$d'_3=y^8z^{2m-2}-4x^my^5z^{3m-2}+x^{4m-1}y^4z^{m-1}+6x^{2m}y^2z^{4m-2}-$$ $$-4x^{5m-1}yz^{2m-1}+x^{8m-2}-xz^{7m-3}.$$
A direct computation shows: $$x^md_2-yv^2+z^{m-1}uf=0,$$ $$x^{m-1}v^2f+ud_2-z^{m-1}d_3=0,$$ $$xd_3+yvf^2+zd'_3=0.$$
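These identities, and the fact that $u,v,f\in\mathfrak{p}=\ker\phi$, can be checked mechanically. The following SymPy snippet does so for the sample value $m=4$ (any admissible $m$ works the same way):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
m = 4  # sample value; the identities are polynomial identities in every m

u = z**(3*m - 1) - x**(2*m - 1) * y**2
v = x**(3*m - 1) - y * z**(2*m - 1)
f = y**3 - x**m * z**m

d2 = (x**(m-1)*y**5*z**(m-1) - 3*x**(2*m-1)*y**2*z**(2*m-1)
      + x**(5*m-2)*y + z**(5*m-2))
d3 = (-x**(3*m-2)*y**7 + 2*x**(m-1)*y**5*z**(3*m-1) + x**(4*m-2)*y**4*z**m
      - 5*x**(2*m-1)*y**2*z**(4*m-1) + 3*x**(5*m-2)*y*z**(2*m)
      - x**(8*m-3)*z + z**(7*m-2))
d3p = (y**8*z**(2*m-2) - 4*x**m*y**5*z**(3*m-2) + x**(4*m-1)*y**4*z**(m-1)
       + 6*x**(2*m)*y**2*z**(4*m-2) - 4*x**(5*m-1)*y*z**(2*m-1)
       + x**(8*m-2) - x*z**(7*m-3))

a, b, c = 7*m - 3, 5*m*m - 2*m, 8*m - 3
# u, v, f lie in p = ker(phi), where phi(x)=t^a, phi(y)=t^b, phi(z)=t^c:
for poly in (u, v, f):
    assert sp.expand(poly.subs({x: t**a, y: t**b, z: t**c})) == 0

# The three syzygies used in the proof:
assert sp.expand(x**m * d2 - y * v**2 + z**(m-1) * u * f) == 0
assert sp.expand(x**(m-1) * v**2 * f + u * d2 - z**(m-1) * d3) == 0
assert sp.expand(x * d3 + y * v * f**2 + z * d3p) == 0
print("all identities verified for m =", m)
```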
It follows that $d_2\in\mathfrak{p}^{(2)}$ and $d_3, d'_3\in\mathfrak{p}^{(3)}$. Note also that $f\nmid d_3$. Since $\ch k =p$, we get from the third equation that $$x^pd_3^p+y^pv^pf^{2p}+z^p{d'_3}^p=0.$$
Write $p=2q+1$ for some integer $q>0$. Since $$x^pd_3^p+y^pv^pf^{2p}\equiv0 \mod (z^p),\quad x^mu+y^2v+z^{2m-1}f=0,$$
it follows that $$x^pd_3^p+y^pv^pf^{2p}=x^pd_3^p+(-1)^qyv^{p-q}f^{2p}\big(x^mu+z^{2m-1}f\big)^q=$$ $$=x^pd_3^p+(-1)^q\sum_{i=0}^q{q \choose{i}}x^{m(q-i)}yz^{(2m-1)i}u^{q-i}v^{p-q}f^{2p+i}$$ $$\equiv0 \mod (z^p).$$
Notice that either $m(q-i)\geq p$ or $(2m-1)i\geq p$ for each $0\leq i\leq q$ (use $m\geq4$). Then $$x^pd_3^p+(-1)^q\sum_{(2m-1)i<p}{q \choose{i}}x^{m(q-i)}yz^{(2m-1)i}u^{q-i}v^{p-q}f^{2p+i} \equiv0 \mod (z^p),$$ and therefore, $$z^ph=d_3^p+(-1)^q\sum_{(2m-1)i<p}{q \choose{i}}x^{m(q-i)-p}yz^{(2m-1)i}u^{q-i}v^{p-q}f^{2p+i},$$ for some $h\in\mathfrak{p}^{(3p)}$. If $f\mid h$, then $f\mid d_3$, which is a contradiction. \end{proof}
\begin{proof}[Proof of Thm. \ref{GNW theorem}] Assume that $X$ is a MDS in characteristic $0$. By Lemma \ref{MDS surface}, there exists an integer $d>0$ and a monic polynomial $g\in S$ such that the proper transform $D$ in $X$ of $V_+(g)$ has class $D_d$ and $D$ does not contain $C$ as a fixed component. Since a multiple of $D$ is base-point free and $D$ is big, after possibly replacing $d$ by a multiple, we may assume by Bertini's theorem that $D$ is smooth and connected.
Let $R$ be the $\mathbb Z$-algebra generated by the coefficients of $g$. Let $\mathbb P_R:=\Proj R[x,y,z]$ and $e_R$ be the section of $\mathbb P_R\rightarrow\Spec(R)$ corresponding to $\mathfrak{p} R[x,y,z]$. Let ${\mathcal{X}}_R$ be the blow-up of $\mathbb P_R$ along $e_R$, with exceptional divisor $\mathcal E$. Let $\mathcal D$ be the proper transform of $V_+(g)\subset\mathbb P_R$ in $\mathcal X_R$. Since the geometric generic fiber of
$\rho:\mathcal D\rightarrow\Spec(R)$ is smooth and connected, after possibly replacing $R$ with a localization, we may assume that $\rho$ is smooth and all its geometric fibers $\mathcal D_s$ are connected. Since $\rho$ is flat, $\deg{\mathcal{O}}(\mathcal E)_{|\mathcal D_s}$ does not depend on $s$. It follows that all $\mathcal D_s$ have class $D_d$ and do not contain the curve $C$, i.e., for each $s\in\Spec(R)$, we obtain a divisor in $\mathcal I_{\overline{k(s)}}$. For each prime $p$ in the image of the dominant map $\Spec R\rightarrow\Spec\mathbb Z$, pick some $s_p\in\Spec(R)$. By Prop. \ref{MaxNoether}, there are integers $d_p$ such that $I_{\overline{k(s_p)}}=\mathbb Z_{\ge0}d_p$. Hence, $d_p\mid d$ for all sufficiently large primes $p$. Since, by Lemma \ref{D_p}, $d_p\mid p$ for all primes $p\geq3$, we must have $d_p=1$ for all sufficiently large $p$.
But one can see directly that $D_1$ is not effective in characteristic $0$ (and hence, in characteristic $p$, for $p$ large). To see this, note that we have the following:
\begin{claim}\label{monomials} The only monomials in $S$ of degree $(7m-3)(8m-3)$ are $$x^{m-1}y^5 z^{3m-2}, x^{4m-2}y^4 z^{m-1}, x^{2m-1}y^2 z^{4m-2}, x^{5m-2}y z^{2m-1}, x^{8m-3}, z^{7m-3}.$$ \end{claim}
\begin{proof} To simplify notation, we let $$a=7m-3,\quad b=5m^2-2m,\quad c=8m-3.$$
Consider monomials $x^{\alpha}y^{\beta}z^{\gamma}$ of degree $ac$, i.e., with $a\alpha+b\beta+c\gamma=ac.$ Since $3b=(a+c)m$, it follows that $$a(3\alpha+m\beta)+c(3\gamma+m\beta)=3ac.$$
In particular, $a | 3\gamma+m\beta$ and $c | 3\alpha+m\beta$.
Moreover, note that $0\leq\alpha\leq c$, $0\leq\gamma\leq a$ and $$0\leq\beta\leq \frac{ac}{b}=\frac{(7m-3)(8m-3)}{5m^2-2m}<12.$$
If $\beta=0$ then $a | 3\gamma$, $c | 3\alpha$. Since $a$, $c$ are not divisible by $3$, it follows that
$a | \gamma$, $c | \alpha$ and therefore the only solutions are $\alpha=c, \gamma=0$ and $\alpha=0, \gamma=a$.
Assume that $\beta>0$. Note that for a fixed $\beta>0$, there is at most one choice of $\alpha, \gamma$. Indeed, if $a\alpha_1+c\gamma_1=a\alpha_2+c\gamma_2$, it follows from $a(\alpha_1-\alpha_2)=c(\gamma_2-\gamma_1)$ and $(a, c)=1$ that the only possibility is $\alpha_1=\alpha_2$ and $\gamma_1=\gamma_2$. Moreover,
as $c | 3\alpha+m\beta$, there is $u\in\mathbb Z_{>0}$ with $$cu=3\alpha+m\beta.$$
Since $\alpha<c$, it follows that $cu<3c+m\beta\leq 3c+11m$ and hence, $$u<3+\frac{11m}{c}=3+\frac{11m}{8m-3}<3+2=5.$$
Considering divisibility by $3$ in $cu=3\alpha+m\beta$, we must have $2u\equiv\beta$ modulo $3$. Hence, the only possibilities are: $u=1, 4$, $\beta=2, 5, 8, 11$;
$u=2$, $\beta=1, 4, 7, 10$; $u=3$, $\beta=3, 6, 9$. For fixed $u$ and $\beta$, one computes $\alpha$ from $cu=3\alpha+m\beta$. One can directly see that the only possibilities are $u=1$, $\beta=2, 5$ and $u=2$, $\beta=1, 4$. \end{proof}
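Since all the numerology in this proof is explicit, Claim \ref{monomials} is easy to confirm by brute force for any fixed $m$; the snippet below does this for the sample value $m=4$, where $(a,b,c)=(25,72,29)$ and $ac=725$:

```python
m = 4  # sample value with m >= 4 and m not divisible by 3
a, b, c = 7*m - 3, 5*m*m - 2*m, 8*m - 3

# All exponent vectors (alpha, beta, gamma) with a*alpha + b*beta + c*gamma = a*c.
solutions = {
    (al, be, ga)
    for al in range(a * c // a + 1)
    for be in range(a * c // b + 1)
    for ga in range(a * c // c + 1)
    if a * al + b * be + c * ga == a * c
}

# The six exponent vectors predicted by Claim (monomials).
expected = {
    (m - 1, 5, 3*m - 2), (4*m - 2, 4, m - 1), (2*m - 1, 2, 4*m - 2),
    (5*m - 2, 1, 2*m - 1), (8*m - 3, 0, 0), (0, 0, 7*m - 3),
}
assert solutions == expected
print(len(solutions), "monomials of degree", a * c)
```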
No linear combination of the monomials in Claim \ref{monomials} has all six derivatives of order $2$ vanishing at $e=(1,1,1)$. A direct computation shows that the determinant of the corresponding $6\times 6$ matrix is (up to a sign):
$$4(7m-3)^2(8m-3)^2(7m-4)(8m-4)(51m^2-43m+9).$$ \end{proof}
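The determinant can be recomputed independently: for a monomial $x^{\alpha}y^{\beta}z^{\gamma}$, the six second-order partials at $e=(1,1,1)$ are $\alpha(\alpha-1)$, $\beta(\beta-1)$, $\gamma(\gamma-1)$, $\alpha\beta$, $\alpha\gamma$, $\beta\gamma$. The following sketch checks the displayed formula at the sample value $m=4$ (the permutation expansion is naive but exact over $\mathbb Z$):

```python
from itertools import permutations

m = 4  # sample value

# Exponent vectors of the six monomials from Claim (monomials).
mons = [(m - 1, 5, 3*m - 2), (4*m - 2, 4, m - 1), (2*m - 1, 2, 4*m - 2),
        (5*m - 2, 1, 2*m - 1), (8*m - 3, 0, 0), (0, 0, 7*m - 3)]

def row(e):
    al, be, ga = e
    # xx, yy, zz, xy, xz, yz second partials of x^al y^be z^ga at (1,1,1)
    return [al*(al-1), be*(be-1), ga*(ga-1), al*be, al*ga, be*ga]

M = [row(e) for e in mons]

def sign(p):  # parity of a permutation, by cycle sort
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

det = sum(sign(p) * M[0][p[0]] * M[1][p[1]] * M[2][p[2]]
          * M[3][p[3]] * M[4][p[4]] * M[5][p[5]]
          for p in permutations(range(6)))

formula = 4 * (7*m - 3)**2 * (8*m - 3)**2 * (7*m - 4) * (8*m - 4) * (51*m*m - 43*m + 9)
assert abs(det) == formula
print("det =", det)
```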
\section{Proof of Theorem \ref{asdazxvsfvsfvsdaqwf}}\label{iff section}
We recall the elementary transformations of Maruyama \cite{Mar} in the generality that we need. Let $X$ be a scheme of finite type over $k$, let $i:\,D\hookrightarrow X$ be an effective Cartier divisor, let $\mathcal F$ be a locally free sheaf of rank $2$ on~$X$, and let $\mathcal F|_D\to \mathcal L$ be a surjection onto an invertible sheaf on~$D$. Then we have a commutative diagram: $$ \begin{CD} &&0 && 0 \\ @. @AAA @AAA \\
0 @>>> i_*\mathcal L' @>>> i_*(\mathcal F|_D) @>>> i_*\mathcal L @>>>0\\
@. @A{\pi'}AA @AAA @|\\ 0 @>>> \mathcal F' @>>> \mathcal F @>\pi >> i_*\mathcal L @>>>0\\ @. @AAA @AAA \\ && \mathcal F(-D) @= \mathcal F(-D) \\ @. @AAA @AAA \\ &&0 && 0 \\ \end{CD}$$
The sheaf $\mathcal F'$ is called an elementary transformation of~$\mathcal F$. It is a locally free sheaf of rank $2$.
Geometrically, consider $\mathbb P^1$-bundles $\mathbb P(\mathcal F)$ and $\mathbb P(\mathcal F')$, where say $\mathbb P(\mathcal F)=\bProj_{\mathcal O_X}Sym(\mathcal F)$. Quotient maps $\pi$ and $\pi'$ give sections $s:\,D\to\mathbb P(\mathcal F|_D)$ and $s':\,D\to\mathbb P(\mathcal F'|_D)$. Let $Z=s(D)$ and $Z'=s'(D)$ be their images. Note that they are local complete intersections of codimension $2$. We have a canonical isomorphism $$\Bl_Z\mathbb P(\mathcal F)\simeq\Bl_{Z'}\mathbb P(\mathcal F').$$
More concretely, $\mathbb P(\mathcal F')$ is obtained from $\Bl_Z\mathbb P(\mathcal F)$ by blowing down the proper transform of the Cartier divisor $\mathbb P(\mathcal F|_D)$. Note that elementary transformations are functorial, i.e., for a map $g: Y \to X$, $\mathbb P(g^*\mathcal F')$ is the elementary transformation of $\mathbb P(g^*\mathcal F)$ along the data $(g^{-1}(D), g^*s)$.
\begin{lemma}\label{aux} Let $p: Y\to X$ be a $\mathbb P^1$-bundle and let $p': Y'\to X$ be an elementary transformation given by the data $(D, Z)$. Let $t: X\to Y$ be a global section and let $T'$ denote the proper transform of $T=t(X)$ in $Y'$. If $T$ and $Z$ agree over $D$, or if they are disjoint, then $T'$ is a section of $p'$.
Let now $t_1, t_2$ be two global sections and let $T'_1$, $T'_2$ denote the proper transforms of $T_1=t_1(X)$, $T_2=t_2(X)$. Assume $T_1$, $Z$ agree over $D$.
\begin{itemize} \item[(a) ] If $T_2$, $Z$ are disjoint, then $T'_1$, $T'_2$ are disjoint over $D$. \item[(b) ] Assume $T_1$, $T_2$, $Z$ agree over $D$ and for some point $x\in D$ (with $X$, $D$ non-singular at $x$), we have at $z=s(x)$ that $$T_{z, T_1}\cap T_{z, T_2}=T_{z, Z}\subseteq T_{z, Y}.$$ Then $T'_1$, $T'_2$ are disjoint over $x$. \end{itemize} \end{lemma}
The condition on tangent spaces in (b) is equivalent to the differentials ${dt_1}_{|x}$, ${dt_2}_{|x}$ not having the same image. Alternatively, there exists a curve $C$ in $X$ smooth at $x$, such that in the ruled surface $S:=p^{-1}(C)\to C$, the sections $T_1\cap S$ and $T_2\cap S$ are not tangent at $z$.
\begin{proof}[Proof of Lemma \ref{aux}] If $T$ and $Z$ agree along $D$, the proper transform $\tilde{T}$ in the blow-up $\tilde{Y}$ of $Y$ along $Z$ is isomorphic to $T$, as it is the blow-up of $T$ along $Z$ (a Cartier divisor in $T$). As $Y'$ is the blow-down of $\tilde{Y}$ along the proper transform of $p^{-1}(D)$, which is disjoint from $\tilde{T}$, it follows that $T'$ is isomorphic to $\tilde{T}$, hence $T'$ is a section of $p'$.
Assume that $T$ and $Z$ are disjoint. Set $Y=\mathbb P(\mathcal F)$, $Y'=\mathbb P(\mathcal F')$, for $\mathcal F'$ the elementary transformation of $\mathcal F$ along $\mathcal F_{|D}\to\mathcal L$ (corresponding to $Z$). The global section $T$ corresponds to a quotient $\mathcal F\to\mathcal M$. Since $T$ and $Z$ are disjoint, the induced map $\mathcal F_{|D}\to\mathcal M_{|D}\oplus\mathcal L$ is an isomorphism (hence, the first exact sequence in the commutative diagram relating $\mathcal F$ and $\mathcal F'$ is split). The induced map
$\mathcal F'\to i_*\mathcal M_{|D}$ factors through $\mathcal F'\to\mathcal F\to\mathcal M$. It follows that $\mathcal F'\to\mathcal M$ is surjective (it is enough to check this on $D$) and $T'=\mathbb P(\mathcal M)$, i.e., $T'$ is a section of $p'$.
We now prove the second part of the lemma. As proved above, $T'_1$ and $T'_2$ are sections of $p'$. Assume we are in situation (a). We prove that $T'_1$, $T'_2$ are disjoint above any point $x\in D$. Consider a general curve $C$ in $X$ through $x$. By functoriality, the ruled surface $S=p^{-1}(C)\to C$ undergoes an elementary transformation given by data $(x, z)$, where $z=s(x)$. As the section $T_1$ passes through $z$, while $T_2$ does not, it follows immediately that $T'_1$, $T'_2$ are disjoint over $x$. Assume now that we are in situation (b). As before, we reduce to the ruled surface case. We may choose $C$ a curve through $x$ that is transverse to $D$ at $x$ and let $S=p^{-1}(C)$. It follows that $\dim(T_{z,Z}\cap T_{z,S})=0$ and sections $T_1\cap S$, $T_2\cap S$ are transverse at $z$; hence, $T'_1$, $T'_2$ are disjoint above $x$. \end{proof}
\begin{definition}\label{compatible} Let $X$ be a non-singular variety and let $D_1,\ldots,D_N$ be irreducible divisors in $X$ with simple normal crossings. Assume that the intersections $D_{ij}:=D_i\cap D_j$ and $D_{ijk}:=D_i \cap D_j\cap D_k$ are irreducible or empty. We denote the interiors of these intersections by $D^0_{ij}$ and $D_{ijk}^0$, respectively. Let $p:\,Y\to X$ be a $\mathbb P^1$-bundle.
A {\em compatible sequence of sections starting at $M$ (with respect to the ordered set $D_1,\ldots,D_N$)} is a sequence $Z_M\ldots, Z_N$, where $Z_i$ is the image of a section $s_i:\,D_i\to p^{-1}(D_i)$ ($i=M,\ldots, N$) such that the following conditions are satisfied: \begin{enumerate} \item For any $j>i\geq M$, if $D_{ij}\ne\emptyset$ then either \begin{enumerate} \item $Z_i$ and $Z_j$ agree over $D_{ij}$, or \item $Z_i$ and $Z_j$ are disjoint over $D_{ij}^0$, in which case the locus in $D_{ij}$ where $Z_i$ and $Z_j$ agree is either empty or it is a union of subsets $D_{ijk}$ for some indices $k$ such that $$M\le k< i.$$ Moreover, for such an index $k$, $Z_k$ agrees with $Z_i$ over $D_{ik}$, $Z_k$ agrees with $Z_j$ over $D_{jk}$, and, for any $z\in s_k(D_{ijk}^0)$, \begin{equation}\label{svsdasdvsdv} T_{z, s_i(D_{ij})}\cap T_{z, s_j(D_{ij})}=T_{z, s_k(D_{ijk})}. \tag{\arabic{section}.\arabic{et}}\addtocounter{et}{1}\end{equation} \end{enumerate} \item If $i, j, k\geq M$ are such that $D_{ijk}\ne\emptyset$, then there exists a subset $\{a,b\}$ of $\{i,j,k\}$ such that $Z_a$ and $Z_b$ agree over $D_{ab}$. \end{enumerate} \end{definition}
\begin{remarks} (a) Def. \ref{compatible} gives sufficient conditions to iterate elementary transformations along a sequence of data (see Prop. \ref{mvcvcvcnb}; the role of $M$ is to help formulate the inductive step). Note that a compatible sequence of sections starting at $M$, with respect to $D_1,\ldots, D_N$, is the same as a compatible sequence of sections starting at $1$, with respect to $D_M,\ldots, D_N$, appropriately reindexed (i.e., we ignore $D_1,\ldots, D_{M-1}$).
(b) In general, when making an elementary transformation along $(D, Z)$, the proper transform of a section may not be a section. By Lemma \ref{aux}, this holds, however, when the section either agrees with $Z$, or is disjoint from it. Condition (1) in Def. \ref{compatible} guarantees that in a compatible sequence $Z_M,\ldots, Z_N$, any $Z_i$ for $i>M$ either agrees with $Z_M$, or is disjoint from it. Hence, after the elementary transformation given by $(D_M, Z_M)$, the proper transform of $Z_i$ is still a section.
(c) If $Z_i$ and $Z_j$ are disjoint over $D_{ij}^0$, then $Z_i$ and $Z_j$ give distinct sections of the $\mathbb P^1$-bundle $p^{-1}(D_{ij})\rightarrow D_{ij}$ and, hence, their intersection has pure codimension $4$ in $Y$, i.e., the locus $G$ where $Z_i$ and $Z_j$ agree, has pure codimension $3$ in $X$. Moreover, as $G\subseteq D_{ij}\setminus D_{ij}^0=\cup_k D_{ijk}$, it follows that $G$ is a union of subsets $D_{ijk}$. Hence, condition (1)(b) simply states that one cannot have $k<M$ or $k\geq i$.
(d) Condition (2) in Def. \ref{compatible} guarantees that in a compatible sequence $Z_M,\ldots, Z_N$, if $i, j>M$, then either $Z_i$ and $Z_j$ agree over $D_{ij}$ (hence, after the elementary transformation given by $(D_M, Z_M)$, the proper transforms $Z'_i$ and $Z'_j$ still agree over $D_{ij}$) or, if not, then $Z_i'$ and $Z_j'$ become disjoint over $D_{Mij}^0$ (see the proof of Prop. \ref{mvcvcvcnb}). \end{remarks}
\begin{proposition}\label{mvcvcvcnb} Given a compatible sequence of sections $Z_M,\ldots, Z_N$ starting at $M$, let $p':\,Y'\to X$ be an elementary transformation given by the data $(D_M,Z_M)$. Let $Z_{M+1}',\ldots,Z_N'\subset Y'$ be the proper transforms of $Z_{M+1},\ldots,Z_N$. Then $Z_{M+1}',\ldots,Z_N'$ are sections of $p'$ which form a compatible sequence of sections starting at $M+1$.
In~particular, given a compatible sequence of sections $Z_1,\ldots, Z_N$ starting at $1$, we can iterate elementary transformations (along the data $(D_i, Z_i)$), to get a sequence of $\mathbb P^1$-bundles $Y_0=Y$, $Y_1=Y'$, $\ldots$, $Y_N$ over $X$. \end{proposition}
\begin{proof}[Proof of Prop. \ref{mvcvcvcnb}] We first show that each $Z_i'$ is a section for each $i>M$. By Lemma \ref{aux}, it suffices to show that $Z_M$ and $Z_i$ are either disjoint or agree over $D_{Mi}$. Suppose they do not agree over $D_{Mi}$ and are not disjoint. Then we are in situation (b) of condition (1) in Def. \ref{compatible}. Since there are no indices $k$ such that $M\leq k< M$, it follows that the locus where $Z_i$ and $Z_M$ agree is empty; hence, we have a contradiction.
Next we show that $Z_{M+1}',\ldots, Z_N'$ form a compatible sequence of sections starting at $M+1$. Notice that condition (2) is obvious because the elementary transformation is an isomorphism outside of $D_M$ (if $Z_a$ and $Z_b$ agree over $D_{ab}$, then $Z'_a$ and $Z'_b$ agree over $D_{ab}$ as well). So we only need to check condition (1). Take $M<i<j$ such that $D_{ij}\ne\emptyset$. As before, if $Z_i$ and $Z_j$ agree over $D_{ij}$, then $Z_i'$ and $Z_j'$ agree over $D_{ij}$ as well. If $Z_i$ and $Z_j$ do not agree, then let
$$\mathcal K:=\{k\in\{1,\ldots, N\}\,|\,\hbox{\rm $Z_i$ and $Z_j$ agree over $D_{ijk}$}\},$$
$$\mathcal K':=\{k\in\{1,\ldots, N\}\,|\,\hbox{\rm $Z_i'$ and $Z_j'$ agree over $D_{ijk}$}\}.$$
It is clear that $\mathcal K'\setminus\{M\}=\mathcal K\setminus\{M\}$ and \eqref{svsdasdvsdv} is satisfied for these indices $k$ (because the elementary transformation is an isomorphism over $D^0_{ijk}$). So we only need to check that $M\not\in \mathcal K'$, i.e., that $Z'_i$ and $Z'_j$ do not agree over $D_{Mij}$. We can assume that $D_{Mij}\ne\emptyset$, as otherwise there is nothing to prove. Consider two cases. Firstly, suppose $M\not\in \mathcal K$. By condition (2) of Def. \ref{compatible}, we may assume without loss of generality that $Z_M$ and $Z_i$ agree over $D_{Mi}$. Then $Z_M$ and $Z_j$ do not agree over $D_{Mj}$ and therefore, must be disjoint as proved above. It follows by Lemma \ref{aux}(a) (applied to $Z_i$ and $Z_j$ over $D_{ij}$) that $Z'_i$ and $Z'_j$ are disjoint over $D_{Mij}$. Secondly, suppose $M\in \mathcal K$. Then by Lemma \ref{aux}(b) applied to $Z_i$ and $Z_j$ over $D_{ij}$, we have that $Z'_i$, $Z'_j$ are disjoint over $D_{Mij}^0$ and hence, $M\not\in \mathcal K'$. \end{proof}
Before we give the proof of Theorem \ref{asdazxvsfvsfvsdaqwf}, we recall some basic properties of birational contractions. Recall that a birational map $f: Y\dashrightarrow X$ between smooth, projective varieties is called a \emph{birational contraction} if the inverse map $f^{-1}$ does not contract any divisor. Equivalently, given a common resolution $(p,q): W\rightarrow Y\times X$, any $p$-exceptional divisor is $q$-exceptional \cite[Def. 1.0]{HK}. For such a $W$, we have $\rho(W)=\rho(X)+r$, where $r$ is the number of $q$-exceptional divisors. Note that if $f$ does not contract a divisor $D$ in $Y$, then $f$ is a local isomorphism at the generic point of $D$. Hence, a birational contraction $f: Y\dashrightarrow X$ is a small modification if and only if $f$ does not contract any divisor, or, equivalently, $\rho(X)=\rho(Y)$.
\begin{lemma}\label{preimage} Let $f: Y\rightarrow X$ be a proper birational morphism of smooth varieties. Assume that $T\subset X$ is a smooth, irreducible closed subvariety with smooth, irreducible scheme-theoretic preimage $Z\subset Y$. Consider the blow-ups $$\pi_1: \tilde{X}=\Bl_T(X)\rightarrow X,\quad \pi_2: \tilde{Y}=\Bl_Z(Y)\rightarrow Y$$ with exceptional divisors $E_T$ and $E_Z$. Then there is an induced birational proper morphism $\tilde{f}:\tilde{Y}\rightarrow\tilde{X}$, such that $\tilde{f}(E_Z)=E_T$. \end{lemma}
\begin{proof} By the universal property of blow-ups, there is a morphism $\tilde{f}$ such that $\pi_1\circ\tilde{f}=f\circ\pi_2$ and we have $\tilde{f}^{-1}(E_T)=E_Z$. It follows that $\tilde{f}$ is proper and $\tilde{f}(E_Z)=E_T$. \end{proof}
\begin{lemma}\label{contract} Assume that $f: Y\dashrightarrow X$ is a birational map between normal, projective varieties and $\pi:\tilde{Y}\rightarrow Y$ is the blow-up of a closed subvariety $Z\subseteq Y$, with exceptional divisor $E$. Assume that $$f\circ\pi:\tilde{Y}\dashrightarrow X$$ contracts all the components of $E$. Then if $f\circ\pi$ is a birational contraction, then $f$ is a birational contraction. \end{lemma}
\begin{proof} If $(p,q): W\rightarrow\tilde{Y}\times X$ is a common resolution and any $p$-exceptional divisor is $q$-exceptional, then clearly any $\pi\circ p$-exceptional divisor (i.e., a $p$-exceptional divisor or the proper transform of a component of $E$) is $q$-exceptional. \end{proof}
\begin{proof}[Proof of Thm. \ref{asdazxvsfvsfvsdaqwf}.] Choose general points $q_1,\ldots,q_n\in\mathbb P^{n-2}$ and let $\pi:\,\Bl_{q_n}\mathbb P^{n-2}\to\mathbb P^{n-3}$
be a resolution of the linear projection away from $q_n$. Then $\pi$ is a $\mathbb P^1$-bundle. Let $p_i=\pi(q_i)$ for $i=1,\ldots,n-1$. For any subset $I$ of $\{1,\ldots,n-1\}$ such that $1\le |I|\le n-4$, let $L_I\subset\mathbb P^{n-3}$ be the linear subspace spanned by $p_i$ for $i\in I$. Notice that we have sections $t_I:\,L_I\to \pi^{-1}(L_I)$ that send $L_I$ to the proper transform of the linear subspace in $\mathbb P^{n-2}$ spanned by $q_i$, for $i\in I$. Let $\Psi:\,\overline{M}_{0,n}\to\mathbb P^{n-3}$ be the Kapranov map such that $\Psi(\delta_{I\cup\{n\}})=L_I$ for any subset $I$ as above \cite{Kapranov}. Let $\pi_0:\,Y\to \overline{M}_{0,n}$ be the pull-back of $\pi$ and let $s_I:\,\delta_{I\cup\{n\}}\to \pi_0^{-1}(\delta_{I\cup\{n\}})$
be the pull-back of $t_I$ for each subset $I$ as above. We order the boundary divisors $\delta_{I\cup\{n\}}$ according to $|I|$ (in increasing order)
and arbitrarily for fixed $|I|$. This gives an order, which we denote by $\prec$, on the subsets $I$.
\begin{claim}\label{check compatible} The sections $s_I$ form a compatible sequence of sections. \end{claim}
Assuming Claim \ref{check compatible}, we prove that the last $\mathbb P^1$-bundle $Y_N$ produced by Prop.~\ref{mvcvcvcnb} is an SQM of the blow-up of $\mathbb P^{n-2}$ along the points $q_1,\ldots, q_n$ and the proper transforms of the linear subspaces spanned by $\{q_i\}_{i\in I}$ for all subsets $I\subset\{1,\ldots, n-1\}$ with $\leq n-4$ elements. Moreover, we prove that the required small modification $\widetilde{LM}_{n+1}$ is the blow-up of $Y_N$ in the proper transforms of the linear subspaces spanned by $\{q_i\}_{i\in I}$ for all subsets $I$ with $n-3$ elements.
Consider the successive blow-ups $$X_0=\Bl_{q_n}\mathbb P^{n-2}, X_1,\ldots, X_N$$ of $X_0$ along the (proper transforms of the) linear subspaces $t_I(L_I)$ in $\mathbb P^{n-2}$ spanned by $q_i$, for $i\in I$, with the subsets $I$ ordered as above ($|I|\leq n-4$). For each $\mathbb P^1$-bundle in the sequence $$Y_0=Y, Y_1,\ldots, Y_N,$$ consider the induced birational map $f_k: Y_k\dashrightarrow X_k$. For example, $f_0: Y_0\rightarrow X_0$ is the birational proper map $Y\rightarrow\Bl_{q_n}\mathbb P^{n-2}$. \begin{claim}\label{contractions} The map $f_k: Y_k\dashrightarrow X_k$ is a birational contraction for all $k$. \end{claim}
\begin{proof} We do an induction on $k$. Clearly, the statement holds for $k=0$ as $f_0$ is a birational morphism between smooth projective varieties.
For each $I\subset\{1,\ldots, n-1\}$ ($|I|\leq n-4$), we let $U_I\subseteq\mathbb P^{n-3}$ be the complement of all the subspaces $L_{I'}$ for all subsets $I'\prec I$ ($I'\neq I$). The order $\prec$ is such that $L_{I'}\subseteq L_I$ only if $I'\prec I$ (since $L_{I'}\subseteq L_I$ if and only if $I'\subseteq I$). In particular, $L_I\cap U_I\neq\emptyset$ and $U_I\subseteq U_{I'}$ if $I'\prec I$.
We introduce some notation: for an open set $U\subseteq\mathbb P^{n-3}$ and a map $f: W\rightarrow\mathbb P^{n-3}$ we denote $W_U=f^{-1}(U)$. We will use this for the $\mathbb P^1$-bundles $\pi_i:~Y_i\rightarrow\overline{M}_{0,n}$ (via the Kapranov map $\Psi: \overline{M}_{0,n}\rightarrow\mathbb P^{n-3}$) and the blow-ups $X_i$ of $X_0$ (via $\pi: X_0\rightarrow\mathbb P^{n-3}$).
Assume now that $k\geq1$ and $Y_k$ is the elementary transformation of $Y_{k-1}$ along $(\delta_{I\cup\{n\}}, s_I)$, for a fixed subset $I$ with $|I|\leq n-4$. If $$\tilde{Y}_k\rightarrow Y_{k-1}$$ is the blow-up along the proper transform of $s_I(\delta_{I\cup\{n\}})$, then $Y_k$ is the blow-down of $\tilde{Y}_k$ along the proper transform of $\pi_{k-1}^{-1}(\delta_{I\cup\{n\}})$. Recall that $$X_k\rightarrow X_{k-1}$$ is the blow-up along the proper transform of $t_I(L_I)$. By induction, the map $f_{k-1}$ is a birational contraction. To prove that $f_k$ is a birational contraction, using Lemma \ref{contract}, it is enough to prove that: \begin{itemize} \item[(1) ] $\tilde{Y}_k\dashrightarrow X_k$ is a birational contraction; \item[(2) ] $\pi_{k-1}^{-1}(\delta_{I\cup\{n\}})$ is contracted by $f_{k-1}$. \end{itemize}
Clearly, it is enough to check (1) and (2) over open sets that intersect the above divisors. Note that for $I'\prec I$, the elementary transformation with center $(\delta_{I'\cup\{n\}}, s_{I'})$ is an isomorphism away from $\delta_{I'\cup\{n\}}=\Psi^{-1}(L_{I'})$. Hence, for $0\leq i\leq k-1$, the bundles $Y_i$ are isomorphic over $U_I$, i.e., $(Y_i)_{U_I}\cong Y_{U_I}$. Similarly, the blow-ups $X_0, X_1,\ldots, X_{k-1}$ are also isomorphic over $U_I$, since at each step we blow up a subvariety whose image under $\pi$ lies in $L_{I'}$, for some $I'\prec I$. In particular, the induced birational morphism $$(f_{k-1})_{U_I}: (Y_{k-1})_{U_I}\rightarrow (X_{k-1})_{U_I}$$ is proper (being the same as the map $Y_0\rightarrow X_0$ over $U_I$). Moreover, as the section $s_I$ is (by definition) the pull-back of the section $t_I$, the same is true when we consider these sections restricted to $U_I$. If we let $$(t_I)_{U_I}:=t_I(L_I)\cap\pi^{-1}(U_I),\quad (Z_I)_{U_I}:=s_I(\Psi^{-1}(U_I)\cap\delta_{I\cup\{n\}}),$$ then the pull-back under $(f_{k-1})_{U_I}$ of $(t_I)_{U_I}$ is $(Z_I)_{U_I}$. Moreover, $(X_k)_{U_I}$ is the blow-up of $(X_{k-1})_{U_I}$ along $(t_I)_{U_I}$ and $(Y_k)_{U_I}$ is the elementary transformation of $(Y_{k-1})_{U_I}$ along $(Z_I)_{U_I}$: $(\tilde{Y}_k)_{U_I}$ is the blow-up of $(Y_{k-1})_{U_I}$ along $(Z_I)_{U_I}$, and $(Y_k)_{U_I}$ is the blow-down of $(\tilde{Y}_k)_{U_I}$ along the proper transform of $\pi_{k-1}^{-1}(\delta_{I\cup\{n\}}\cap\Psi^{-1}(U_I))$. We now check (1) and (2) over $U_I$ (which intersects $L_I$, over which all the blown-up or blown-down loci lie). Property (2) follows immediately, as $$\pi_{k-1}^{-1}(\delta_{I\cup\{n\}}\cap\Psi^{-1}(U_I))=\pi_0^{-1}(\delta_{I\cup\{n\}}\cap\Psi^{-1}(U_I))$$ is mapped by $f_0$ (hence, $f_{k-1}$) to $\pi^{-1}(L_I\cap U_I)$. 
We apply Lemma \ref{preimage} to the morphism $(f_{k-1})_{U_I}: (Y_{k-1})_{U_I}\rightarrow (X_{k-1})_{U_I}$ and the closed subschemes $(t_I)_{U_I}$, $(Z_I)_{U_I}$ (both sections of $\mathbb P^1$-bundles over a smooth base, with $(Z_I)_{U_I}$ the scheme-theoretic preimage of $(t_I)_{U_I}$). It follows by Lemma \ref{preimage} that the birational map $(\tilde{Y}_k)_{U_I}\dashrightarrow (X_k)_{U_I}$ is a birational contraction, as it is a local isomorphism at the generic points of the corresponding exceptional divisors. Hence, property (1) holds. \end{proof}
Since after each elementary transformation the Picard number $\rho(Y_i)$ stays constant, while $\rho(X_i)$ increases by one after each blow-up, it follows that $\rho(Y_N)=\rho(Y)=\rho(\overline{M}_{0,n})+1$ equals $\rho(X_N)$. Hence, using Claim \ref{contractions}, it follows that the induced birational map $f_N: Y_N\dashrightarrow X_N$ is a small modification. As in the proof of Claim \ref{contractions}, for all $I\subset\{1,\ldots, n-1\}$ such that $|I|=n-3$, the proper transform in $X_N$ of the subspace spanned by $\{q_i\}_{i\in I}$ does not lie in the indeterminacy locus of $f_N$. Moreover, blowing up successively these loci and their proper transforms in $Y_N$ leads to a sequence of small modifications $f_{N+1}, f_{N+2},\ldots$, the last of which gives the required small modification $\widetilde{LM}_{n+1}$.
\begin{proof}[Proof of Claim \ref{check compatible}]
Set $D_I:=\delta_{I\cup\{n\}}$. Suppose $I\ne J$, $|I|\le |J|$, $D_{IJ}\ne\emptyset$. Then either $I\subset J$, in which case $Z_I$ and $Z_J$ agree over $D_{IJ}$, or there exists a partition $A\sqcup B\sqcup C=\{1,\ldots,n-1\}$ such that $I=A\cup B$ and $J=A\cup C$. In this case, the set $\mathcal K$ from condition (1) of the compatible sequence is the set of all non-empty subsets of $A$. This shows condition (2) and all of condition (1), except \eqref{svsdasdvsdv}. If $A=\emptyset$ then there is nothing to check. Assume $A\neq\emptyset$. Let $\alpha\in D_{KIJ}^0$. It is enough to find a curve $C$ in $D_{IJ}$ passing through $\alpha$, such that in the ruled surface $S:=p^{-1}(C)$, $s_I$ and $s_J$ are not tangent above $\alpha$. As we have
$$\Psi(D_{IJ})=L_I\cap L_J\cong\mathbb P^{|A|}, \quad \Psi(D_{IJK})=L_K\subseteq
L_A\cong\mathbb P^{|A|-1},$$ we may choose $l$ to be any line in $L_I\cap L_J$ that passes through $\Psi(\alpha)$ and is not contained in $L_A$. Let $C$ be any curve in $D_{IJ}$ that maps to $l$ and is smooth at $\alpha$. We claim that $C$ has the desired property, i.e., that $s_I(C)$ and $s_J(C)$ are not tangent above $\alpha$. It suffices to check this after composing with the map $\Psi':\,Y\to \Bl_{q_n}\mathbb P^{n-2}$, the pull-back of the Kapranov map, and the blow-up map $\Bl_{q_n}\mathbb P^{n-2}\to \mathbb P^{n-2}$. Let $\Lambda$ be the plane in $\mathbb P^{n-2}$ which is the image of $p^{-1}(l)$. If $Z_I$ is the linear subspace in $\mathbb P^{n-2}$ spanned by the points $q_i$ for $i\in I$, then $Z_I\cap Z_J=Z_A$. Clearly, the linear subspaces $Z_I\cap\Lambda$ and $Z_J\cap\Lambda$ intersect only at a point (lying above $L_A\cap l=\Psi(\alpha)$). Equivalently, $Z_I\cap\Lambda$ and $Z_J\cap\Lambda$ are not tangent at their intersection point. This proves the claim. \end{proof}
\end{proof}
The proofs of Thm. \ref{asdazxvsfvsfvsdaqwf} and Cor. \ref{main} yield the following: \begin{corollary}\label{codim 3} Let $p_1,\ldots, p_{n-2}\in\mathbb P^{n-3}$ be points in linearly general position and let $X_n$ be the toric variety which is the blow-up of $\mathbb P^{n-3}$ along the proper transforms of linear subspaces of codimension $\geq 3$ spanned by the points $p_i$, in order of increasing dimension. Let $e$ denote the identity of the open torus of $X_n$. Then $\Bl_eX_{n+1}$ is an SQM of a $\mathbb P^1$-bundle over $\overline{M}_{0,n}$. If $\ch k=0$ and $n\geq 134$, then $\Bl_eX_{n+1}$ is not an MDS. \end{corollary}
\section*{References}
\begin{biblist}
\bib{Artin}{article}{
AUTHOR = {Artin, Michael},
TITLE = {Some numerical criteria for contractability of curves on
algebraic surfaces},
JOURNAL = {Amer. J. Math.},
FJOURNAL = {American Journal of Mathematics},
VOLUME = {84},
YEAR = {1962},
PAGES = {485--496}, }
\bib{AGS}{article}{
author={Alexeev, Valery},
author={Gibney, Angela},
author={Swinarski, David},
title={Conformal blocks divisors on $\bar{M}_{0,n}$ from $sl_2$},
eprint={arXiv:1011.6659v1},
date={2010}, }
\bib{BCHM}{article}{ AUTHOR = {Birkar, Caucher}, AUTHOR = {Cascini, Paolo}, AUTHOR = {Hacon, Christopher D.}, AUTHOR = {M\textsuperscript{c}Kernan, James},
TITLE = {Existence of minimal models for varieties of log general type},
JOURNAL = {J. Amer. Math. Soc.},
FJOURNAL = {Journal of the American Mathematical Society},
VOLUME = {23},
YEAR = {2010},
NUMBER = {2},
PAGES = {405--468},
URL = {http://dx.doi.org/10.1090/S0894-0347-09-00649-3}, }
\bib{BGM}{article}{
AUTHOR = {Belkale, Prakash},
AUTHOR = {Gibney, Angela},
AUTHOR = {Mukhopadhyay, Swarnava},
title={Quantum cohomology and conformal blocks on $\overline{M}_{0,n}$},
eprint={arXiv:1308.4906},
date={2013}, }
\bib{Hausen}{article}{
AUTHOR = {B\"aker, Hendrik},
AUTHOR = {Hausen, Juergen},
AUTHOR = {Keicher, Simon},
title={On Chow quotients of torus actions},
eprint={arXiv:1203.3759},
date={2012}, }
\bib{BP}{incollection}{ AUTHOR = {Batyrev, Victor V.}, AUTHOR= {Popov, Oleg N.},
TITLE = {The {C}ox ring of a del {P}ezzo surface},
BOOKTITLE = {Arithmetic of higher-dimensional algebraic varieties ({P}alo
{A}lto, {CA}, 2002)},
SERIES = {Progr. Math.},
VOLUME = {226},
PAGES = {85--103},
PUBLISHER = {Birkh\"auser Boston},
ADDRESS = {Boston, MA},
YEAR = {2004}, }
\bib{C}{article}{ AUTHOR = {Castravet, Ana-Maria},
TITLE = {The {C}ox ring of {$\overline M_{0,6}$}},
JOURNAL = {Trans. Amer. Math. Soc.},
FJOURNAL = {Transactions of the American Mathematical Society},
VOLUME = {361},
YEAR = {2009},
NUMBER = {7},
PAGES = {3851--3878},
URL = {http://dx.doi.org/10.1090/S0002-9947-09-04641-8}, }
\bib{CC}{article}{
author={Coskun, Izzet},
author={Chen, Dawei},
title={Extremal effective divisors on the moduli space of n-pointed genus one curves},
eprint={arXiv:1304.0350},
date={2013}, }
\bib{CLS}{book}{ AUTHOR = {Cox, David A.}, AUTHOR= {Little, John B.}, AUTHOR={Schenck, Henry K.},
TITLE = {Toric varieties},
SERIES = {Graduate Studies in Mathematics},
VOLUME = {124},
PUBLISHER = {American Mathematical Society},
ADDRESS = {Providence, RI},
YEAR = {2011}, }
\bib{CT1}{article}{ AUTHOR = {Castravet, Ana-Maria}, AUTHOR= {Tevelev, Jenia},
TITLE = {Hypertrees, projections, and moduli of stable rational curves},
JOURNAL = {J. Reine Angew. Math.},
FJOURNAL = {Journal f\"ur die Reine und Angewandte Mathematik. [Crelle's
Journal]},
VOLUME = {675},
YEAR = {2013},
PAGES = {121--180}, }
\bib{CT2}{incollection}{ AUTHOR = {Castravet, Ana-Maria}, AUTHOR= {Tevelev, Jenia},
TITLE = {Rigid curves on {$\overline M_{0,n}$} and arithmetic
breaks},
BOOKTITLE = {Compact moduli spaces and vector bundles},
SERIES = {Contemp. Math.},
VOLUME = {564},
PAGES = {19--67},
PUBLISHER = {Amer. Math. Soc.},
ADDRESS = {Providence, RI},
YEAR = {2012}, }
\bib{Cutkosky}{article}{ AUTHOR = {Cutkosky, Steven Dale},
TITLE = {Symbolic algebras of monomial primes},
JOURNAL = {J. Reine Angew. Math.},
FJOURNAL = {Journal f\"ur die Reine und Angewandte Mathematik},
VOLUME = {416},
YEAR = {1991},
PAGES = {71--89},
ISSN = {0075-4102}, }
\bib{Eisenbud}{book}{ AUTHOR = {Eisenbud, David},
TITLE = {Commutative algebra},
SERIES = {Graduate Texts in Mathematics},
VOLUME = {150},
NOTE = {With a view toward algebraic geometry},
PUBLISHER = {Springer-Verlag},
ADDRESS = {New York},
YEAR = {1995},
PAGES = {xvi+785}, }
\bib{Fed}{article}{
AUTHOR = {Fedorchuk, Maksym},
title={Cyclic covering morphisms on $\overline{M}_{0,n}$},
eprint={arXiv:1105.0655},
date={2011}, }
\bib{Fulton}{book}{
AUTHOR = {Fulton, William},
TITLE = {Algebraic curves},
SERIES = {Advanced Book Classics},
PUBLISHER = {Addison-Wesley Publishing Company Advanced Book Program},
ADDRESS = {Redwood City, CA},
YEAR = {1989}, }
\bib{GianGib}{article}{
AUTHOR = {Giansiracusa, Noah},
AUTHOR = {Gibney, Angela},
TITLE = {The cone of type {$A$}, level 1, conformal blocks divisors},
JOURNAL = {Adv. Math.},
FJOURNAL = {Advances in Mathematics},
VOLUME = {231},
YEAR = {2012},
NUMBER = {2},
PAGES = {798--814}, }
\bib{Milena}{article}{ AUTHOR ={Gonzalez, Jose}, AUTHOR ={Hering, Milena}, AUTHOR = {Payne, Sam}, AUTHOR = {S\"uss, Hendrik},
TITLE = {Cox rings and pseudoeffective cones of projectivized toric
vector bundles},
JOURNAL = {Algebra Number Theory},
FJOURNAL = {Algebra \& Number Theory},
VOLUME = {6},
YEAR = {2012},
NUMBER = {5},
PAGES = {995--1017}, }
\bib{GianJenMoon}{article}{
AUTHOR = {Giansiracusa, Noah}, AUTHOR = {Jensen, David}, AUTHOR={Moon, Han-Bom},
TITLE = {G{IT} compactifications of {$M_{0,n}$} and flips},
JOURNAL = {Adv. Math.},
FJOURNAL = {Advances in Mathematics},
VOLUME = {248},
YEAR = {2013},
PAGES = {242--278}, }
\bib{GKM}{article}{ AUTHOR = {Gibney, Angela}, AUTHOR = {Keel, Sean}, AUTHOR = {Morrison, Ian},
TITLE = {Towards the ample cone of {$\overline M_{g,n}$}},
JOURNAL = {J. Amer. Math. Soc.},
FJOURNAL = {Journal of the American Mathematical Society},
VOLUME = {15},
YEAR = {2002},
NUMBER = {2},
PAGES = {273--294}, }
\bib{GM}{article}{ AUTHOR = {Gibney, Angela}, AUTHOR = {Maclagan, Diane},
TITLE = {Equations for {C}how and {H}ilbert quotients},
JOURNAL = {Algebra Number Theory},
FJOURNAL = {Algebra \& Number Theory},
VOLUME = {4},
YEAR = {2010},
NUMBER = {7},
PAGES = {855--885},
ISSN = {1937-0652}, }
\bib{GM_nef}{article}{ AUTHOR = {Gibney, Angela}, AUTHOR = {Maclagan, Diane}, TITLE = {Lower and upper bounds for nef cones},
JOURNAL = {Int. Math. Res. Not. IMRN},
FJOURNAL = {International Mathematics Research Notices. IMRN},
YEAR = {2012},
NUMBER = {14},
PAGES = {3224--3255}, }
\bib{GN_ams}{book}{ AUTHOR = {Goto, Shiro}, AUTHOR = {Nishida, Koji},
TITLE = {The {C}ohen-{M}acaulay and {G}orenstein {R}ees algebras
associated to filtrations},
NOTE = {Mem. Amer. Math. Soc. {{\bf{110}}} (1994), no. 526},
PUBLISHER = {American Mathematical Society},
ADDRESS = {Providence, RI},
YEAR = {1994},
PAGES = {i--viii and 1--134}, }
\bib{GNW}{article}{ AUTHOR = {Goto, Shiro}, AUTHOR = {Nishida, Koji}, AUTHOR = {Watanabe, Keiichi},
TITLE = {Non-{C}ohen-{M}acaulay symbolic blow-ups for space monomial
curves and counterexamples to {C}owsik's question},
JOURNAL = {Proc. Amer. Math. Soc.},
FJOURNAL = {Proceedings of the American Mathematical Society},
VOLUME = {120},
YEAR = {1994},
NUMBER = {2},
PAGES = {383--392}, }
\bib{HK}{article}{
AUTHOR = {Hu, Yi},
AUTHOR = {Keel, Sean},
TITLE = {Mori dream spaces and {GIT}},
JOURNAL = {Michigan Math. J.},
FJOURNAL = {Michigan Mathematical Journal},
VOLUME = {48},
YEAR = {2000},
PAGES = {331--348},
URL = {http://dx.doi.org/10.1307/mmj/1030132722}, }
\bib{HM}{article}{
AUTHOR = {Harris, Joe}, AUTHOR={Mumford, David},
TITLE = {On the {K}odaira dimension of the moduli space of curves},
NOTE = {With an appendix by William Fulton},
JOURNAL = {Invent. Math.},
FJOURNAL = {Inventiones Mathematicae},
VOLUME = {67},
YEAR = {1982},
NUMBER = {1},
PAGES = {23--88}, }
\bib{Huneke}{article}{
AUTHOR = {Huneke, Craig},
TITLE = {Hilbert functions and symbolic powers},
JOURNAL = {Michigan Math. J.},
FJOURNAL = {The Michigan Mathematical Journal},
VOLUME = {34},
YEAR = {1987},
NUMBER = {2},
PAGES = {293--318}, }
\bib{Kapranov}{article}{ AUTHOR = {Kapranov, M. M.},
TITLE = {Veronese curves and {G}rothendieck-{K}nudsen moduli space
{$\overline M_{0,n}$}},
JOURNAL = {J. Algebraic Geom.},
FJOURNAL = {Journal of Algebraic Geometry},
VOLUME = {2},
YEAR = {1993},
NUMBER = {2},
PAGES = {239--262}, }
\bib{Keel}{article}{ AUTHOR = {Keel, Se{\'a}n},
TITLE = {Basepoint freeness for nef and big line bundles in positive
characteristic},
JOURNAL = {Ann. of Math. (2)},
FJOURNAL = {Annals of Mathematics. Second Series},
VOLUME = {149},
YEAR = {1999},
NUMBER = {1},
PAGES = {253--286},
ISSN = {0003-486X}, }
\bib{Kiem}{article}{
AUTHOR = {Kiem, Young-Hoon},
title={Curve counting and birational geometry of compactified moduli spaces of curves}, Journal = {Proceedings of the Waseda symposium on algebraic geometry}, year ={2010}, eprint={http://www.math.snu.ac.kr/~kiem/recentpapers.html}, }
\bib{Kurano-Matsuoka}{article}{ AUTHOR = {Kurano, Kazuhiko}, AUTHOR = {Matsuoka, Naoyuki},
TITLE = {On finite generation of symbolic {R}ees rings of space
monomial curves and existence of negative curves},
JOURNAL = {J. Algebra},
FJOURNAL = {Journal of Algebra},
VOLUME = {322},
YEAR = {2009},
NUMBER = {9},
PAGES = {3268--3290}, }
\bib{KM}{article}{
AUTHOR = {Keel, Sean},
AUTHOR = {M\textsuperscript{c}Kernan, James},
title={Contractible Extremal Rays on $\overline{M}_{0,n}$},
eprint={arXiv:alg-geom/9607009v1},
date={1996}, }
\bib{Larsen}{article}{ AUTHOR = {Larsen, Paul},
TITLE = {Permutohedral spaces and the {C}ox ring of the moduli space of
stable pointed rational curves},
JOURNAL = {Geom. Dedicata},
FJOURNAL = {Geometriae Dedicata},
VOLUME = {162},
YEAR = {2013},
PAGES = {305--323},
ISSN = {0046-5755}, }
\bib{Laz}{book}{
AUTHOR = {Lazarsfeld, Robert},
TITLE = {Positivity in algebraic geometry. {I}},
SERIES = {Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A
Series of Modern Surveys in Mathematics},
VOLUME = {48},
PUBLISHER = {Springer-Verlag},
ADDRESS = {Berlin},
YEAR = {2004}, }
\bib{LM}{article}{ AUTHOR = {Losev, A.}, AUTHOR= {Manin, Y.},
TITLE = {New moduli spaces of pointed curves and pencils of flat
connections},
JOURNAL = {Michigan Math. J.},
FJOURNAL = {The Michigan Mathematical Journal},
VOLUME = {48},
YEAR = {2000},
PAGES = {443--472}, }
\bib{Mar}{article}{
AUTHOR = {Maruyama, Masaki},
TITLE = {Elementary transformations in the theory of algebraic vector
bundles},
BOOKTITLE = {Algebraic geometry ({L}a {R}\'abida, 1981)},
SERIES = {Lecture Notes in Math.},
VOLUME = {961},
PAGES = {241--266},
PUBLISHER = {Springer},
ADDRESS = {Berlin},
YEAR = {1982}, }
\bib{McK_survey}{article}{ AUTHOR = {McKernan, James},
TITLE = {Mori dream spaces},
JOURNAL = {Jpn. J. Math.},
FJOURNAL = {Japanese Journal of Mathematics},
VOLUME = {5},
YEAR = {2010},
NUMBER = {1},
PAGES = {127--151},
URL = {http://dx.doi.org.proxy.lib.ohio-state.edu/10.1007/s11537-010-0944-7}, }
\bib{Okawa}{article}{
author={Okawa, Shinnosuke},
title={On images of Mori dream spaces},
eprint={arXiv:1104.1326},
date={2011}, }
\end{biblist}
\end{document}
\begin{document}
\title{Trace ideal and annihilator of Ext and Tor of regular fractional ideals, and some applications}
\author{Souvik Dey} \address{Souvik Dey\\ Department of Mathematics \\ University of Kansas\\405 Snow Hall, 1460 Jayhawk Blvd.\\ Lawrence, KS 66045, U.S.A.} \email{souvik@ku.edu}
\thanks{2020 {\em Mathematics Subject Classification.} 13B30, 13C60, 13D07} \thanks{{{\em Key words and phrases.} annihilator, $\operatorname{Ext}$, $\operatorname{Tor}$, Cohen--Macaulay ring, Gorenstein ring, trace ideal}} \begin{abstract} Given a commutative Noetherian ring $R$ with total ring of fractions $Q(R)$, and a finitely generated $R$-submodule $M$ of $Q(R)$ containing a non-zero-divisor of $R$, we prove an equality between the trace ideal of $M$ and certain annihilators of Ext and Tor of $M$. As a consequence, we answer, in the one-dimensional local analytically unramified case, a question raised by the present author and R. Takahashi. As another application, we give an alternative proof of a recent result of Ö. Esentepe that, for one-dimensional analytically unramified Gorenstein local rings, the cohomology annihilator of Iyengar and Takahashi coincides with the conductor ideal. \end{abstract} \maketitle
\section{Introduction}
Let $R$ be a commutative Noetherian ring with unity, with total ring of fractions $Q(R)$. Let $\operatorname{Mod} R$ and $\operatorname{mod} R$ denote the category of all $R$-modules and all finitely generated $R$-modules respectively.
Annihilators of Ext and Tor modules of certain subcategories, and their connections to generation of module categories and the singular locus of the ring, have recently received a lot of attention. Among many instances in the literature, we draw the reader's attention to \cite{iy}, \cite{dim}, \cite{bl}.
In this article, we prove a connection between annihilators of Ext and Tor and the trace ideal (see Definition \ref{trd}) of a finitely generated $R$-module $M$, where $M$ is an $R$-submodule of $Q(R)$ containing a non-zero-divisor of $R$. Namely, we prove the following main result (see Theorem \ref{traceann}), where $\operatorname{tr}_R(M)$ denotes the trace ideal.
\begin{thm}\label{mai} Let $M$ be a finitely generated $R$-submodule of $Q(R)$ containing a non-zero-divisor of $R$. Let $\Omega_R M$ be the first syzygy in some projective resolution of $M$. Then, it holds that
\begin{align*} \operatorname{tr}_R(M)=\bigcap_{i>0,\, N\in \operatorname{mod} R} \operatorname{ann}_R \operatorname{Tor}^R_i(M,N)=\bigcap_{i>0,\, N\in \operatorname{Mod} R} \operatorname{ann}_R \operatorname{Ext}^i_R(M,N)=\operatorname{ann}_R\operatorname{Ext}^1_R(M,\Omega_R M). \end{align*}
\end{thm}
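For instance, when the integral closure $\overline R$ of $R$ in $Q(R)$ is module-finite over $R$, then $\overline R$ is itself a finitely generated $R$-submodule of $Q(R)$ containing the non-zero-divisor $1$, and $\operatorname{tr}_R(\overline R)=(R:\overline R)$ is the conductor of $R$ in $\overline R$ (see \ref{trcon} below); so Theorem \ref{mai} computes the conductor as an annihilator, namely $$(R:\overline R)=\operatorname{tr}_R(\overline R)=\bigcap_{i>0,\, N\in \operatorname{Mod} R} \operatorname{ann}_R \operatorname{Ext}^i_R(\overline R,N).$$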
Using this main theorem, we give the following affirmative answer to \cite[Question 4.7]{dt} when $R$ is a one-dimensional local ring with reduced completion (see Proposition \ref{41}).
\begin{thm} Let $R$ be a one-dimensional local ring with reduced completion. Let $\c(R)$ denote the conductor of $R$ in $\overline R$. Then, $$\c(R)=\bigcap_{i>0, M,N\in \operatorname{CM}_0(R)} \operatorname{ann}_R \operatorname{Tor}^R_i(M,N)=\bigcap_{i>0, M,N\in \operatorname{CM}_0(R)} \operatorname{ann}_R \operatorname{Ext}^i_R(M,N)=\bigcap_{i>1, M,N\in \operatorname{CM}_0(R)} \operatorname{ann}_R \operatorname{Ext}^i_R(M,N)$$ \end{thm}
Here, $\operatorname{CM}_0(R)$ is the subcategory of all maximal Cohen--Macaulay modules whose localizations at all non-maximal prime ideals are free. We note here that \cite[Question 4.7]{dt} was motivated by the question of whether we have $(c)\implies (b)$ in \cite[Theorem 1.1(1)]{dim} (see also \cite[Theorem 1.1]{dt} in this regard).
Using the above two results, we also give a quick alternative proof of one of the main results of \cite{curve}, namely \cite[Theorems 4.3 and 5.10]{curve}: for one-dimensional local Gorenstein rings with reduced completion, the cohomology annihilator $\bigcup_{n\ge 1} \bigcap_{i\ge n,\, M,N\in \operatorname{mod} R} \operatorname{ann}_R\operatorname{Ext}^i_R(M,N)$ coincides with the conductor ideal; see Proposition \ref{cacon}.
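For example, for the cusp $R=k[[t^2,t^3]]\subseteq \overline R=k[[t]]$, which is a complete (hence analytically unramified) one-dimensional Gorenstein local domain, this says that the cohomology annihilator equals the conductor $$\c(R)=t^2k[[t]]=(t^2,t^3)R,$$ the maximal ideal of $R$.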
The organization of the paper is as follows: In Section 2, we state definitions and properties of fundamental notions used in this paper. In Section 3, we prove our main result, namely Theorem \ref{mai}. In Section 4, we give applications of our main result.
\section{Preliminaries}
In this section, we lay out the basic conventions, definitions, and preliminary discussions which will be used throughout the rest of the paper.
\begin{conv} Throughout, $R$ will denote a commutative Noetherian ring with unity, with total ring of fractions $Q(R)$. Let $\operatorname{Mod} R$ denote the category of all $R$-modules. All subcategories of $\operatorname{Mod} R$ are assumed to be strict (closed under isomorphism), full, and to contain the zero module. So in particular, our subcategories are determined by their collections of objects (modules) only. By $\operatorname{mod} R$, we will denote the subcategory of $\operatorname{Mod} R$ consisting of all finitely generated modules. We may sometimes abbreviate ``finitely generated module'' to ``finite module''. When $M$ is a finitely generated $R$-module, we denote by $\Omega^n_R M$ the $n$-th syzygy in some projective resolution of $M$. When $R$ is moreover local, we take $\Omega^n_R M$ to be the $n$-th syzygy in the minimal free resolution of $M$ by finitely generated free $R$-modules.
We also denote by $\operatorname{CM}(R)$ the subcategory of $\operatorname{mod} R$ consisting of maximal Cohen--Macaulay modules (recall that an $R$-module $M$ is called {\em maximal Cohen--Macaulay} if $\operatorname{depth}_{R_\mathfrak{p}}M_\mathfrak{p}=\dim R_\mathfrak{p}$ for all $\mathfrak{p}\in\operatorname{Supp}_RM$). We say that $R$ is Cohen--Macaulay if $R\in \operatorname{CM}(R)$. When $(R,\mathfrak{m})$ is local, by $\operatorname{CM}_0(R)$ we denote the subcategory of all modules in $\operatorname{CM}(R)$ whose localizations at all non-maximal prime ideals are free. \end{conv}
\begin{dfn} Let $\mathcal{X},\mathcal{Y}$ be subcategories of $\operatorname{Mod} R$, and let $n\ge0$ be an integer.
Adopting the notation of \cite[Definition 3.1]{dt}, we define the ideals $\operatorname{\mathbb{T}}_n(\mathcal{X},\mathcal{Y})$ and $\operatorname{\mathbb{E}}^n(\mathcal{X},\mathcal{Y})$ of $R$ by \begin{align*} \operatorname{\mathbb{T}}_n(\mathcal{X},\mathcal{Y})&=\bigcap_{i>n}\bigcap_{X\in\mathcal{X}}\bigcap_{Y\in\mathcal{Y}}\operatorname{ann}_R\operatorname{Tor}_i^R(X,Y),\\ \operatorname{\mathbb{E}}^n(\mathcal{X},\mathcal{Y})&=\bigcap_{i>n}\bigcap_{X\in\mathcal{X}}\bigcap_{Y\in\mathcal{Y}}\operatorname{ann}_R\operatorname{Ext}_R^i(X,Y). \end{align*}
We put $\operatorname{\mathbb{T}}_n(\mathcal{X})=\operatorname{\mathbb{T}}_n(\mathcal{X},\mathcal{X})$ and $\operatorname{\mathbb{E}}^n(\mathcal{X})=\operatorname{\mathbb{E}}^n(\mathcal{X},\mathcal{X})$. \end{dfn}
\begin{dfn} Following \cite[Definition 2.1]{iy}, we define, for any integer $n\ge 1$, the ideal $$\ca^n(R):=\bigcap_{X,Y\in \operatorname{mod} R} \bigcap_{ i\ge n} \operatorname{ann}_R \operatorname{Ext}^i_R(X,Y),$$ and we also put $\ca(R):=\bigcup_{n\ge 1} \ca^n(R)$. It is clear that $\ca^{n}(R)=\operatorname{\mathbb{E}}^{n-1}(\operatorname{mod} R,\operatorname{mod} R)$. Moreover, it is also clear that $\ca^n(R)\subseteq \ca^{n+1}(R)$ for all $n\ge 1$; hence this chain of ideals stabilizes since $R$ is Noetherian, and we get $\ca(R)=\ca^s(R)$ for all sufficiently large $s$. \end{dfn}
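For example, if $R$ is a regular local ring of dimension $d$, then every $X\in\operatorname{mod} R$ has projective dimension at most $d$, so $\operatorname{Ext}^i_R(X,Y)=0$ whenever $i>d$; hence $$\ca^{d+1}(R)=R, \quad\text{and thus}\quad \ca(R)=R.$$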
\begin{chunk}\label{conductor} Let $Q(R)$ be the total ring of fractions of $R$, and let $\overline R$ be the integral closure of $R$ in $Q(R)$. We call finitely generated $R$-submodules of $Q(R)$ fractional ideals, and those fractional ideals which contain a non-zero-divisor of $R$ are called regular fractional ideals.
For $R$-submodules (not necessarily finitely generated) $M,N$ of $Q(R)$, we denote by $(N:M)$ the $R$-submodule of $Q(R)$ defined as $(N:M) :=\{x\in Q(R): xM\subseteq N\}$. If $M,N$ are $R$-submodules of $Q(R)$ and $M$ contains a non-zero-divisor of $R$, then $(N:M)\cong \operatorname{Hom}_R(M,N)$, see \cite[Proposition 2.4(1)]{trace}. We will use this identification freely throughout the article without further reference. We also put $\c(R):=(R:\overline R)$, and we call it the conductor of $R$. We note that $\c(R)$ is an ideal of both $R$ and $\overline R$: indeed, since $1\in \overline R$, we have $\c(R)\subseteq R$, hence $\c(R)$ is an $R$-submodule of $R$, so an ideal of $R$; and similarly $\c(R)\overline R\cdot\overline R=\c(R)\overline R\subseteq R$, so $\c(R)\overline R\subseteq (R:\overline R)=\c(R)$, hence $\c(R)$ is an ideal of $\overline R$. Moreover, if $I$ is an ideal of both $R$ and $\overline R$, then $I\subseteq \c(R)$: indeed, if $I$ is an ideal of $R$ and $\overline R$, then $I\overline R\subseteq I\subseteq R$, so $I\subseteq (R:\overline R)=\c(R)$. It is clear that $\c(R)$ contains a non-zero-divisor if and only if $\overline R$ is module finite over $R$ (for example, when $R$ is a local ring whose completion is reduced, see \cite[Theorem 4.6(i)]{lw}). \end{chunk}
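For example, let $R=k[[t^2,t^3]]$, so that $Q(R)=k((t))$ and $\overline R=k[[t]]$. An element $x\in Q(R)$ satisfies $x\overline R\subseteq R$ if and only if $x\in t^2k[[t]]$: multiplying by $1$ forces $x\in R$, and multiplying by $t$ rules out a non-zero constant term of $x$. Hence $$\c(R)=(R:\overline R)=t^2k[[t]]=(t^2,t^3)R,$$ the maximal ideal of $R$; in particular, $\c(R)$ contains the non-zero-divisor $t^2$, and $\overline R$ is module finite over $R$.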
\begin{dfn}\label{trd} For an $R$-module $M$, the trace ideal of $M$ is $\operatorname{tr}_R(M)=\sum_{f\in \operatorname{Hom}_R(M,R)}\text{Im}(f)$, which is the ideal of $R$ generated by all homomorphic images of $M$ into $R$. \end{dfn}
\begin{chunk}\label{trcon} If $M$ is an $R$-submodule (not necessarily finitely generated) of $Q(R)$ containing a non-zero-divisor of $R$, then it follows that $\operatorname{tr}_R(M)=(R:M)M$, see \cite[Proposition 2.4(2)]{trace}. So in particular, $\operatorname{tr}_R(\overline R)=(R:\overline R)\overline R=(R:\overline R)=\c(R)$. Hence, $\operatorname{tr}_R(\c(R))=\c(R)$ (see \cite[Proposition 2.8(iv)]{lindo}; for this part of the result of \cite[Proposition 2.8]{lindo}, one does not need $M$ to be finitely presented). We will use the fact $\operatorname{tr}_R(\c(R))=\c(R)$ throughout the rest of the paper without further reference. \end{chunk}
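For example, for $R=k[[t^2,t^3]]\subseteq\overline R=k[[t]]$ and the maximal ideal $\mathfrak m=(t^2,t^3)R=t^2k[[t]]$, we have $(R:\mathfrak m)=t^{-2}(R:\overline R)=t^{-2}\cdot t^2k[[t]]=\overline R$ (using $(R:\overline R)=t^2k[[t]]$), and therefore $$\operatorname{tr}_R(\mathfrak m)=(R:\mathfrak m)\mathfrak m=\overline R\cdot t^2k[[t]]=t^2k[[t]]=\mathfrak m=\c(R).$$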
\begin{dfn} Given a subcategory $\mathcal{X}$ of $\operatorname{mod} R$ and an integer $n\ge 1$, by $\Omega^n \mathcal{X}$ we denote the collection of all modules $M$ for which there exists an exact sequence of the form $0\to M \to P_{n-1} \to \cdots \to P_0 \to N\to 0$ for some $N\in \mathcal{X}$ and some finitely generated projective $R$-modules $P_0,\ldots,P_{n-1}$. \end{dfn}
\begin{chunk} Note that if $M\in \operatorname{mod} R$, then $M^*\in \Omega^2 \operatorname{mod} R$. If $R$ is Cohen--Macaulay, then it is clear that $\Omega^n \operatorname{CM}(R) \subseteq \operatorname{CM}(R)$ for all $n\ge 1$. If $R$ is Cohen--Macaulay of dimension $d$, then $\Omega^n \operatorname{mod} R\subseteq \operatorname{CM}(R)$ for all $n\ge d$. \end{chunk}
\begin{dfn} For a finitely generated $R$-module $M$ we denote by $\Tr M$ the {\em (Auslander) transpose} of $M$. This is defined as follows. Take a projective presentation $P_1\xrightarrow{f}P_0\to M\to 0$ by finitely generated projective modules $P_1,P_0$. Dualizing this by $R$, we get an exact sequence $0\to M^\ast\to P_0^\ast\xrightarrow{f^\ast}P_1^*\to\Tr M\to0$, that is, $\Tr M$ is the cokernel of the map $f^\ast$. It is clear that $\Tr M$ is also finitely generated. The transpose of $M$ is uniquely determined up to projective summands; see \cite{AB} for basic properties. \end{dfn}
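For example, if $x\in R$ is a non-zero-divisor and $M=R/xR$, then dualizing the projective presentation $R\xrightarrow{x}R\to M\to 0$ gives the exact sequence $$0\to M^*\to R\xrightarrow{x}R\to \Tr M\to 0;$$ since $x$ is a non-zero-divisor, $M^*=0$, and hence $\Tr(R/xR)\cong R/xR$.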
\section{Main result}
In this section, we prove our main result. For this, we first need the following lemma, part of which was essentially shown in \cite[Lemma 2.14]{iy}.
\begin{lem}\label{lemm} Let $M$ be a finitely generated module over $R$. Let $\Omega_R M$ be the first syzygy in some projective resolution of $M$. Then, we have equalities $\operatorname{\mathbb{T}}_0(M,\operatorname{Mod} R)=\operatorname{ann}_R \operatorname{Tor}^R_1(M,\Tr M)=\operatorname{\mathbb{E}}^0(M,\operatorname{Mod} R)=\operatorname{ann}_R\operatorname{Ext}^1_R(M,\Omega_R M)=\{x\in R: M\xrightarrow{\cdot x} M \text{ factors through some finitely generated free } R\text{-module }\}$. \end{lem}
\begin{proof} Label the five sets in the statement as (1), (2), (3), (4) and (5), in order. Clearly, $(1)\subseteq (2)$ and $(3)\subseteq (4)$.
We first prove $(2)\subseteq (3)$: By \cite[Lemma (3.9)]{Y}, we have an isomorphism $\operatorname{Tor}^R_1(M,\Tr M)\cong \underline{\operatorname{Hom}}_R(M,M)$. So, if $x\in \operatorname{ann}_R\operatorname{Tor}^R_1(M,\Tr M)=\operatorname{ann}_R \underline{\operatorname{Hom}}_R(M,M)$, then since $\text{id}_M \in \operatorname{Hom}_R(M,M)$, the map $x\cdot \text{id}_M: M \to M$ factors through a projective $R$-module, i.e., there is a commutative diagram \begin{tikzcd} M \arrow[rd] \arrow[rr, "\cdot x"] & & M \\
& P \arrow[ru] & \end{tikzcd} Hence for every $R$-module $N$ and integer $i>0$, we get an induced commutative diagram by linearity of the functor $\operatorname{Ext}^i_R(-,N)$ \begin{tikzcd} {\operatorname{Ext}^i_R(M,N)} & & {\operatorname{Ext}^i_R(M,N)} \arrow[ld] \arrow[ll, "\cdot x"'] \\
& {\operatorname{Ext}^i_R(P,N)=0} \arrow[lu] & \end{tikzcd} Hence $x\cdot \operatorname{Ext}^i_R(M,N)=0$, i.e., $x\in \operatorname{ann}_R \operatorname{Ext}^i_R(M,N)$. Since $i>0$ and $N$ were arbitrary, we get $x\in \operatorname{\mathbb{E}}^0(M,\operatorname{Mod} R)$. This proves $(2)\subseteq (3)$.
Next we prove $(4)\subseteq (1)$: By hypothesis, we have an exact sequence $\sigma: 0\to \Omega_R M \to P \xrightarrow{f} M \to 0$ for some projective $R$-module $P$. Applying $\operatorname{Hom}_R(M,-)$, we get the exact sequence $$0\to \operatorname{Hom}_R(M,\Omega_R M)\to \operatorname{Hom}_R(M,P)\xrightarrow{\phi\mapsto f\circ \phi} \operatorname{Hom}_R(M,M)\xrightarrow{g} \operatorname{Ext}^1_R(M,\Omega_R M),$$ where $g(\text{id}_M)=\sigma$. Now if $x\in \operatorname{ann}_R\operatorname{Ext}^1_R(M,\Omega_R M)$, then $0=x\sigma=g(x\cdot \text{id}_M)$, so
$x\cdot \text{id}_M\in \ker g=\text{Im}(\operatorname{Hom}_R(M,P)\xrightarrow{\phi\mapsto f\circ \phi} \operatorname{Hom}_R(M,M))$. So, there exists $\phi\in \operatorname{Hom}_R(M,P)$ such that we have a commutative diagram \begin{tikzcd} M \arrow[rd, "\phi"] \arrow[rr, "\cdot x"] & & M \\
& P \arrow[ru, "f"] & \end{tikzcd} Hence, for every $R$-module $N$ and every integer $i>0$, applying the functor $\operatorname{Tor}^R_i(-,N)$ yields an induced commutative diagram \begin{tikzcd} {\operatorname{Tor}^R_i(M,N)} \arrow[rd, "\phi"] \arrow[rr, "\cdot x"] & & {\operatorname{Tor}^R_i(M,N)} \\
& {\operatorname{Tor}^R_i(P,N)=0} \arrow[ru, "f"] & \end{tikzcd}
Hence $x\cdot \operatorname{Tor}^R_i(M,N)=0$, i.e. $x\in \operatorname{ann}_R \operatorname{Tor}^R_i(M,N)$. Since $i>0$ and $N$ were arbitrary, we get $x\in \operatorname{\mathbb{T}}_0(M,\operatorname{Mod} R)$, proving $(4)\subseteq (1)$.
So we have now seen $(1)\subseteq (2)\subseteq (3)\subseteq (4)\subseteq (1)$, whence $(1)=(2)=(3)=(4)$. Clearly, $(5)\subseteq (1)$ as well, by the same argument as above.
To see $(3)\subseteq (5)$: first notice that, since $M$ is finitely generated, there exist a finitely generated free $R$-module $F$ and an $R$-module $X$ such that we have an exact sequence $0\to X \to F \xrightarrow{h} M \to 0$. If $x\in \operatorname{ann}_R \operatorname{Ext}^1_R(M,X)$, then by an argument similar to that in the proof of $(4)\subseteq (1)$, we see that $M\xrightarrow{\cdot x}M$ factors through $F$. This shows $(3)\subseteq (5)$.
This finally shows that all five sets are equal. \end{proof}
\begin{rem} Since, for a finitely generated $R$-module $M$, $\operatorname{\mathbb{T}}_0(M,\operatorname{Mod} R)\subseteq \operatorname{\mathbb{T}}_0(M,\operatorname{mod} R)\subseteq \operatorname{ann}_R\operatorname{Tor}^R_1(M,\Tr M)$, all the ideals in Lemma \ref{lemm} are also equal to $\operatorname{\mathbb{T}}_0(M,\operatorname{mod} R)$. \end{rem}
Now we prove the main result of our paper:
\begin{thm}\label{traceann} Let $M$ be a finitely generated module over $R$. Let $\Omega_R M$ be the first syzygy in some projective resolution of $M$. If $M$ is an $R$-submodule of $Q(R)$ and $M$ contains a non-zero-divisor of $R$, then $\operatorname{tr}_R(M)=\operatorname{\mathbb{T}}_0(M,\operatorname{Mod} R)=\operatorname{ann}_R \operatorname{Tor}^R_1(M,\Tr M)=\operatorname{\mathbb{E}}^0(M,\operatorname{Mod} R)=\operatorname{ann}_R\operatorname{Ext}^1_R(M,\Omega_R M)$.
\end{thm}
\begin{proof} By Lemma \ref{lemm}, it is enough to show that $$\operatorname{tr}_R(M)=\{x\in R: M\xrightarrow{\cdot x} M \text{ factors through some finitely generated free } R\text{-module}\}.$$ Denote the right-hand side (which is an ideal by Lemma \ref{lemm}) by $L$.
Since $\operatorname{tr}_R(M)=(R:M)M$ by \cite[Proposition 2.4(2)]{trace} and $L$ is an ideal, to show $\operatorname{tr}_R(M)\subseteq L$ it is enough to show, for every $x\in (R:M)$ and $m\in M$, that $M\xrightarrow{\cdot xm}M$ factors through a finitely generated free $R$-module. Indeed, since $xM\subseteq R$, the factorization is given by the commutative diagram \begin{tikzcd} M \arrow[rd, "\cdot x"'] \arrow[rr, "\cdot xm"] & & M \\
& R \arrow[ru, "\cdot m"'] & \end{tikzcd}
To show the reverse inclusion $L\subseteq \operatorname{tr}_R(M)$, let $r\in L$, i.e. suppose $r\in R$ and there is a commutative diagram $$\begin{tikzcd} M \arrow[rd, "g"'] \arrow[rr, "\cdot r"] & & M \\
& R^{\oplus n} \arrow[ru, "f"'] & \end{tikzcd}$$ for some integer $n\ge 0$ and some $R$-linear maps $f,g$. If $r=0$, then trivially $r\in \operatorname{tr}_R(M)$, so we may assume $r\neq 0$; then, since $M$ contains a non-zero-divisor of $R$, we have $rM\ne 0$, so $n> 0$. Let $\pi_i,j_i$ denote the $i$-th coordinate projection and inclusion maps $R^{\oplus n} \to R$ and $R\to R^{\oplus n}$, respectively. Writing $g_i=\pi_i\circ g:M\to R$ and $f_i=f\circ j_i:R\to M$, we see that $g=(g_1,\cdots,g_n)$ and $f(r_1,\cdots,r_n)=\sum_{i=1}^n f_i(r_i)=\sum_{i=1}^nr_if_i(1)$ for all $(r_1,...,r_n)\in R^{\oplus n}$. Since each $g_i\in \operatorname{Hom}_R(M,R)$, by \cite[Proposition 2.4(1)]{trace} there exists $q_i\in (R:M)\subseteq Q(R)$ such that $g_i(x)=q_ix$ for all $x\in M$. Now let $b\in M \cap R$ be a non-zero-divisor. Then, by the commutative diagram, we have
$$rb=f(g(b))=f(g_1(b),\cdots,g_n(b))=f(q_1b,\cdots,q_nb)=\sum_{i=1}^n f_i(q_ib)=\sum_{i=1}^n q_ibf_i(1)=\left(\sum_{i=1}^n q_if_i(1)\right)b,$$ where all these equalities take place in $Q(R)$. Since $b$ is a non-zero-divisor on $R$, it is a non-zero-divisor on $Q(R)$, so $r=\sum_{i=1}^n q_if_i(1)\in (R:M)M=\operatorname{tr}_R(M)$.
\end{proof}
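As an illustration of Theorem \ref{traceann} (a standard example, included here for concreteness), let $R=k[[t^2,t^3]]$, so that $Q(R)=k((t))$, and take $M=\mathfrak{m}=(t^2,t^3)=t^{2}k[[t]]$, an $R$-submodule of $Q(R)$ containing the non-zero-divisor $t^2$. One checks directly that $(R:\mathfrak{m})=k[[t]]=\overline R$, so $$\operatorname{tr}_R(\mathfrak{m})=(R:\mathfrak{m})\mathfrak{m}=k[[t]]\cdot t^{2}k[[t]]=t^{2}k[[t]]=\mathfrak{m},$$ and by Theorem \ref{traceann} this ideal also equals $\operatorname{ann}_R\operatorname{Ext}^1_R(\mathfrak{m},\Omega_R \mathfrak{m})$ and $\operatorname{ann}_R\operatorname{Tor}^R_1(\mathfrak{m},\Tr \mathfrak{m})$. Note that here $\operatorname{tr}_R(\mathfrak{m})$ coincides with the conductor $\c(R)=t^{2}k[[t]]$.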
\section{Some applications to annihilators of Ext and Tor over one-dimensional local Cohen--Macaulay rings}
In this section, let $(R,\mathfrak{m})$ be a local Cohen--Macaulay ring of dimension $1$ with $\mathfrak{m}$-adic completion $\widehat R$. Since $\c(R)=(R:\overline R)$ is an ideal of both $R$ and $\overline R$ (see \ref{conductor}), $\widehat{\c(R)}$ is an ideal of $\widehat R$, and $\widehat R\otimes_R \overline R=\overline{\widehat R}$ (see the discussion of \cite[Remark 4.8]{lw}). Hence, $\widehat{\c(R)}\subseteq \c(\widehat R)$ (see the discussion of \ref{conductor}).
As our first application of Theorem \ref{traceann}, we give an affirmative answer to \cite[Question 4.7]{dt}, concerning the equality of the annihilators of $\operatorname{Tor}$ and $\operatorname{Ext}$ of all modules in $\operatorname{CM}_0(R)$, when $R$ is a local ring of dimension $1$ whose completion is reduced (hence $R$ is Cohen--Macaulay). More precisely, we prove the following result:
\begin{prop}\label{41} Let $(R,\mathfrak{m})$ be a local ring of dimension one whose completion is reduced, so that $\operatorname{CM}(R)=\operatorname{CM}_0(R)$ holds. Let $\mathcal{X}$ be a subcategory of $\operatorname{Mod} R$ containing $\Omega^2\operatorname{CM}(R)$ and $\mathcal{Y}$ be a subcategory of $\operatorname{Mod} R$ containing $\operatorname{CM}(R)$. Then, it holds that $\operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\mathcal{X})=\operatorname{\mathbb{E}}^1(\operatorname{CM}(R),\mathcal{X})=\operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\mathcal{Y})=\c(R)$. In particular, it holds that $\operatorname{\mathbb{E}}^0(\operatorname{CM}(R))=\operatorname{\mathbb{E}}^1(\operatorname{CM}(R))=\operatorname{\mathbb{T}}_0(\operatorname{CM}(R))=\c(R)$. \end{prop}
\begin{proof} Since $\widehat R$ is reduced, $R$ is reduced, hence Cohen--Macaulay with an isolated singularity (as $\dim R=1$). So, $\operatorname{CM}(R)=\operatorname{CM}_0(R)$. Clearly, $\operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\mathcal{X})\subseteq \operatorname{\mathbb{E}}^1(\operatorname{CM}(R),\mathcal{X})$. Since $\widehat R$ is reduced, $\overline R$ is a finite $R$-module, hence $\c(R)=(R:\overline R)$ contains a non-zero-divisor of $R$. Moreover, $\c(R)\cong \operatorname{Hom}_R(\overline R,R)$ is the $R$-dual of the finite $R$-module $\overline R$, so $\c(R)$ is a second syzygy module over $R$. So, $\c(R)\cong F\oplus \Omega^2_R M$ for some finite free $R$-module $F$, where $\Omega^2_R M$ denotes the second syzygy in some minimal free resolution of some $R$-module $M$. Writing $N=\Omega_R M$, we see $N\in \operatorname{CM}(R)$ and $\c(R)\cong F\oplus \Omega_R N\in \Omega \operatorname{CM}(R)$. So, $\Omega_R \c(R)\in \Omega^2\operatorname{CM}(R)\subseteq \mathcal{X}$. Hence $\operatorname{\mathbb{E}}^1(\operatorname{CM}(R),\mathcal{X})\subseteq \operatorname{ann}_R \operatorname{Ext}^2_R(N,\Omega_R \c(R))$, and we also have the following inclusions and equalities $$ \operatorname{ann}_R \operatorname{Ext}^2_R(N,\Omega_R \c(R))=\operatorname{ann}_R \operatorname{Ext}^1_R(F\oplus \Omega_R N,\Omega_R \c(R))=\operatorname{ann}_R\operatorname{Ext}^1_R(\c(R),\Omega_R \c(R))\overset{(1)}=\operatorname{tr}_R(\c(R))\overset{(2)}=\c(R),$$ where equality (1) holds by Theorem \ref{traceann}, since $\c(R)$ is an ideal of $R$ containing a non-zero-divisor (as the completion of $R$ is reduced, see the last sentence of the discussion in \ref{conductor}); and equality (2) holds by the discussion in \ref{trcon}. This shows $\operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\mathcal{X})\subseteq \operatorname{\mathbb{E}}^1(\operatorname{CM}(R),\mathcal{X})\subseteq \c(R)$.
Similarly, $\Omega_R \Tr \c(R)\in \operatorname{CM}(R)\subseteq \mathcal{Y}$, so we get $$\operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\mathcal{Y})\subseteq \operatorname{ann}_R \operatorname{Tor}^R_1( N,\Omega_R \Tr \c(R))=\operatorname{ann}_R \operatorname{Tor}^R_2(N,\Tr \c(R))=\operatorname{ann}_R \operatorname{Tor}^R_1(F\oplus \Omega_R N,\Tr \c(R))$$ and $\operatorname{ann}_R \operatorname{Tor}^R_1(F\oplus \Omega_R N,\Tr \c(R))=\operatorname{ann}_R \operatorname{Tor}^R_1(\c(R),\Tr \c(R))\overset{(3)}=\operatorname{tr}_R(\c(R))=\c(R)$, where (3) again follows by Theorem \ref{traceann}. This shows $\operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\mathcal{Y})\subseteq \c(R)$.
So now it is enough to prove that $\c(R) \subseteq \operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\mathcal{X}) \cap \operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\mathcal{Y})$. In fact, we will observe that $\c(R) \subseteq \operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\operatorname{Mod} R) \cap \operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\operatorname{Mod} R)$. By Lemma \ref{lemm}, it is enough to observe that $\c(R)\subseteq \operatorname{\mathbb{E}}^0(M,\operatorname{Mod} R)$ for every $M\in \operatorname{CM}(R)$. Since $\widehat R$ is reduced and one-dimensional, if $M\in \operatorname{CM}(R)$ then $\widehat M\in \operatorname{CM}(\widehat R)$, so for every $i\ge 1$ and every $N\in \operatorname{mod} R$ we get, by \cite[Proposition 3.1]{wa}, that $0=\c(\widehat R)\operatorname{Ext}^i_{\widehat R}(\widehat M, \widehat N)=\c(\widehat R)\widehat{\operatorname{Ext}^i_R(M,N)}$. Since $\widehat{\c(R)}\subseteq \c(\widehat R)$ by the discussion preceding this result, we get $\widehat{\c(R)}\widehat{\operatorname{Ext}^i_R(M,N)}=0$. Hence, $\widehat{\c(R)\operatorname{Ext}^i_R(M,N)}=0$, so $\c(R)\operatorname{Ext}^i_R(M,N)=0$. Since this is true for any $N\in \operatorname{mod} R$, we may in particular take $N$ to be the first syzygy $\Omega_R M$ in a resolution of $M$ by finite free $R$-modules, so that $\Omega_R M\in \operatorname{mod} R$. Hence, $\c(R)\operatorname{Ext}^1_R(M,\Omega_R M)=0$, so $\c(R)\subseteq \operatorname{ann}_R \operatorname{Ext}^1_R(M,\Omega_R M)=\operatorname{\mathbb{E}}^0(M,\operatorname{Mod} R)=\operatorname{\mathbb{T}}_0(M,\operatorname{Mod} R)$ by Lemma \ref{lemm}.
To conclude, we have shown $\c(R)\subseteq \operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\operatorname{Mod} R)\subseteq \operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\mathcal{X})\subseteq \operatorname{\mathbb{E}}^1(\operatorname{CM}(R),\mathcal{X})\subseteq \c(R)$, and $\c(R)\subseteq \operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\operatorname{Mod} R)\subseteq \operatorname{\mathbb{T}}_0(\operatorname{CM}(R),\mathcal{Y})\subseteq \c(R)$. This gives all the equalities as claimed in the Proposition. The last part of the proposition follows by taking $\mathcal{X}=\mathcal{Y}=\operatorname{CM}(R)$. \end{proof}
As our second application of Theorem \ref{traceann}, we give an alternative proof of one of the main theorems of \cite{curve}, namely \cite[Theorems 4.3 and 5.10]{curve}:
\begin{prop}\label{cacon} Let $(R,\mathfrak{m})$ be a local Gorenstein ring of dimension one whose completion is reduced. Then, $\ca(R)=\ca^n(R)=\c(R)$ for every $n\ge 2$. \end{prop}
\begin{proof} Pick $s>1$ large enough so that $\ca(R)=\ca^{s+1}(R)$. Since $R$ is Gorenstein and $\c(R)$ is maximal Cohen--Macaulay (as it is an ideal), by \cite[Construction 12.10]{lw} we observe that $\c(R)\in \Omega^s_R \operatorname{mod} R$, hence $\c(R)\cong F \oplus \Omega^s_R M$ for some $M\in \operatorname{mod} R$ and some free $R$-module $F$. So then, $\ca(R)=\ca^{s+1}(R)\subseteq \operatorname{ann}_R \operatorname{Ext}^{s+1}_R(M,\Omega_R \c(R))=\operatorname{ann}_R \operatorname{Ext}^1_R(F\oplus \Omega^s_R M, \Omega_R \c(R))=\operatorname{ann}_R\operatorname{Ext}^1_R(\c(R),\Omega_R \c(R))=\operatorname{tr}_R(\c(R))=\c(R)$, where the last two equalities hold by Theorem \ref{traceann} and the discussions \ref{conductor} and \ref{trcon}. This shows $\ca(R)\subseteq \c(R)$. Now, we also have $\Omega \operatorname{mod} R \subseteq \operatorname{CM}(R)$, so $\operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\operatorname{mod} R)\subseteq \operatorname{\mathbb{E}}^1(\operatorname{mod} R,\operatorname{mod} R)=\ca^2(R)\subseteq \ca^n(R) \subseteq \ca(R)$ for every $n\ge 2$. Hence by Proposition \ref{41} we get $\c(R)\subseteq \operatorname{\mathbb{E}}^0(\operatorname{CM}(R),\operatorname{mod} R) \subseteq \ca^2(R)\subseteq \ca^n(R) \subseteq \ca(R)$ for every $n\ge 2$. Combined with $\ca(R) \subseteq \c(R)$, this gives the required claim. \end{proof}
\end{document}
\begin{document}
\title[]{Rigidity of Beltrami fields with a non-constant proportionality factor} \author[]{Ken Abe} \address[K. ABE]{Department of Mathematics, Graduate School of Science, Osaka City University, 3-3-138 Sugimoto, Sumiyoshi-ku Osaka, 558-8585, Japan} \email{kabe@osaka-cu.ac.jp}
\subjclass[2010]{35Q31, 35Q35} \keywords{Beltrami fields, Rigidity, Constrained equation} \date{\today}
\maketitle
\begin{abstract} We prove that bounded Beltrami fields must be symmetric if the proportionality factor depends on 2 variables in the cylindrical coordinates and admits a regular level set diffeomorphic to a cylinder or a torus. \end{abstract}
\section{Introduction} We consider 3d steady states of ideal incompressible flows
\begin{align*} (\nabla \times u)\times u+\nabla \pi=0,\quad \nabla \cdot u=0\quad \textrm{in}\ \mathbb{R}^{3},\tag{1.1} \end{align*}\\ where $u$ is the velocity of the fluid and $\pi$ is the Bernoulli pressure. Integral curves of the velocity and the vorticity $\nabla \times u$ are called stream lines and vortex lines, respectively. If the Bernoulli pressure $\pi\nequiv \textrm{const.}$ is regular, they lie on level sets of $\pi$, called Bernoulli surfaces. It is known \cite{AK98} that Bernoulli surfaces are diffeomorphic to nested cylinders or tori. The system (1.1) can be written as an elliptic system with constraints, e.g. \cite[p.34]{Grad85}. Indeed, by introducing the current potential $\eta$ such that $\nabla \times u=\nabla \pi\times \nabla \eta$, (1.1) is formally written in the equivalent form
\begin{equation*} \begin{aligned} &\nabla \times u=\nabla \pi\times \nabla \eta,\quad \nabla \cdot u=0,\\ &u\cdot \nabla \pi=0,\quad u\cdot \nabla \eta=1. \end{aligned} \end{equation*}\\ The first line is an elliptic system for given $\pi$ and $\eta$. The second line consists of constraints on them, called a degenerate hyperbolic system.
The constraints are removed by symmetry, e.g. translation or rotation. In the axisymmetric setting, (1.1) is reduced to the Grad-Shafranov equation \cite{Grad}, \cite{Shafranov}. Existence of solutions with compactly supported vorticity is well known in the study of vortex rings, e.g. \cite{Van13}. Moreover, compactly supported solutions are constructed in \cite{Gav}, \cite{CLV}, \cite{DEPS21}. Existence of smooth non-symmetric solutions to (1.1) with $\pi\nequiv \textrm{const.}$ is unknown.
The non-existence of such non-symmetric solutions is a conjecture of Grad \cite[p.144]{Grad67}; see Constantin et al. \cite[p.529]{CDG21b}. More precisely, the symmetries in this conjecture are of 4 types: translation, rotation, helical symmetry, and reflection in a plane. This problem is regarded as \textit{rigidity} of (1.1). For 2d flows, the rigidity result that bounded solutions with no stagnation points must be shear flows was proved by Hamel and Nadirashvili \cite{HN19}. See also rigidity in a strip \cite{HN17} and in a pipe for axisymmetric flows \cite{CDG21b}. The full 3d rigidity of (1.1) with $\pi\nequiv \textrm{const.}$ is unknown, cf. \cite{Shv}. Grad's conjecture is also studied from the viewpoint of the existence of non-symmetric solutions with piecewise constant pressure \cite{BL96}, \cite{ELP21} and of smooth non-axisymmetric solutions \cite{BKM20}, \cite{BKM20b}, \cite{CDG21}.\\
In this paper, we study rigidity of (1.1) with constant pressure $\pi\equiv \textrm{const.}$ Velocity and vorticity with such a pressure are collinear, and (1.1) reduces to the Beltrami equations
\begin{align*} \nabla \times u=f u,\quad \nabla \cdot u=0\quad \textrm{in}\ \mathbb{R}^{3}.\tag{1.2} \end{align*}\\ The function $f$ is called a proportionality factor. If $f\equiv \textrm{const.}$, $u$ is called a strong Beltrami field. Vortex lines of strong Beltrami fields can be chaotic and non-symmetric, e.g. ABC flows \cite{AK98}. Hence (1.2) with $f\equiv \textrm{const.}$ is \textit{not} rigid. It is known \cite{EP12}, \cite{EP15} that strong Beltrami fields describe knots and links of vortex lines and vortex tubes.
If $f\nequiv \textrm{const.}$, vortex lines are confined to a level set $f^{-1}(c)=\{x\in \mathbb{R}^{3}\ |\ f(x)=c \}$ for $c\in \mathbb{R}$ since $f$ is a first integral, i.e.
\begin{align*} u\cdot \nabla f=0. \end{align*}\\ It is known \cite{AK98} that a closed surface $f^{-1}(c)$ with no singular points $\{u=0\}$ is diffeomorphic to a torus. Existence of solutions to (1.2) is unknown unless $f\equiv \textrm{const}$ or under symmetry, cf. \cite{CK20}. Axisymmetric solutions with compactly supported vorticity exist \cite{Chandra}, \cite{Tu89}, \cite{A8}.
In contrast to (1.1) with $\pi\nequiv \textrm{const.}$, rigidity results are known for (1.2). The first rigidity results for (1.2) are Liouville theorems under decay conditions at space infinity \cite{Na14}, \cite{CC15}, e.g. $u=o(|x|^{-1})$ as $|x|\to\infty$. This decay rate is sharp, cf. \cite{EP12}, \cite{EP15}. Another type of Liouville theorem is based on a level set condition for $f \nequiv \textrm{const}$.
\begin{thm}[\cite{EP16}] Suppose that $f\in C^{2+\mu}(\mathbb{R}^{3})$ for some $0<\mu<1$. If $f$ admits a regular level set diffeomorphic to a sphere, then any solution to (1.2) is identically zero. \end{thm}
This Liouville theorem implies non-existence for (1.2) for a broad class of $f$, e.g. radial $f$ or $f$ having extrema. On the other hand, it implies a certain relation between existence and symmetry of $f$, since symmetric $f$ does not attain extrema. The relation between existence and symmetry is indeed Grad's conjecture for (1.1) with $\pi\nequiv\textrm{const.}$
For symmetric $f$ depending on 1 variable in the cylindrical coordinates $(r,\theta,z)$, i.e. $x_1=r\cos\theta$, $x_2=r\sin\theta$, $x_3=z$, any bounded solutions of (1.2) are symmetric, and even for such $f$, solutions may not exist \cite{GPS01}, \cite[Section 5]{CK20}. More precisely, the following 3 cases are known.\\
\noindent (i) For $f=f(z)$, level sets are planes. Any solutions of (1.2) are harmonic on them and singular points are isolated. In particular, bounded solutions are symmetric, i.e. $u=u(z)$.
\noindent (ii) For $f=f(r)$, level sets are cylinders. Any solutions of (1.2) are constant on them and axisymmetric.
\noindent (iii) For $f=f(\theta)$, level sets are half planes. Any solutions of (1.2) are trivial.\\
This rigidity follows by investigating \textit{compatibility} of a constrained evolution equation equivalent to (1.2), see Remarks 2.1. The constraint in (1.2) is understood as compatibility for the constrained evolution equation \cite{EP16} and studied via an exterior differential system by using Cartan's method \cite{CK20}.
For $f$ depending on 2 variables in $(r,\theta,z)$, a variety of surfaces appear as level sets of $f$ such as cylindrical surfaces for $f=f(r,\theta)$, surfaces of revolution for $f=f(r,z)$ and right conoids for $f=f(\theta,z)$. For $f=f(r,\theta)$ and $f=f(r,z)$ having extrema on $(x_1,x_2)$ or $(r,z)$-planes, their level sets admit a cylinder or a torus, cf. \cite{EP16}.
If only the 4 types of symmetry are admitted for (1.2) with $f\nequiv \textrm{const.}$, then $f=f(r,\theta)$ and $f=f(r,z)$ correspond to 2 of them. We establish a rigidity result for these $f$. Since translationally and rotationally symmetric solutions exist, the division into these 2 cases must appear in any rigidity result for (1.2).
\begin{thm} Suppose that $f\in C^{2+\mu}(\mathbb{R}^{3})$ for some $0<\mu<1$. \\ \noindent (i) If $f=f(r,\theta)$ admits a regular level set diffeomorphic to a cylinder, then any bounded solution to (1.2) is translationally symmetric.
\noindent (ii) If $f=f(r,z)$ admits a regular level set diffeomorphic to a torus, then any solution to (1.2) is rotationally symmetric. \end{thm}
\begin{rems} (i) Translationally symmetric solutions of (1.2) are of the form
\begin{align*} u=\partial_2\Psi e_{1}-\partial_1\Psi e_{2}+u^{3}(\Psi)e_3,\quad f=\dot{u}^{3}(\Psi), \end{align*}\\ for some $u^{3}(\cdot)$ and a stream function $\Psi(x_1,x_2)$ satisfying $-\Delta \Psi=\dot{u}^{3}(\Psi){u}^{3}(\Psi)$, where $e_1,e_2,e_3$ are the orthonormal basis vectors in the Cartesian coordinates.
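Indeed, one can check directly that this ansatz solves (1.2) with $f=\dot{u}^{3}(\Psi)$:
\begin{align*} \nabla\times u={}^{t}\left(\partial_2(u^{3}(\Psi)),\ -\partial_1(u^{3}(\Psi)),\ -\Delta\Psi\right)={}^{t}\left(\dot{u}^{3}(\Psi)\partial_2\Psi,\ -\dot{u}^{3}(\Psi)\partial_1\Psi,\ \dot{u}^{3}(\Psi)u^{3}(\Psi)\right)=\dot{u}^{3}(\Psi)u, \end{align*}\\ using $-\Delta \Psi=\dot{u}^{3}(\Psi){u}^{3}(\Psi)$ in the third component, while $\nabla\cdot u=\partial_1\partial_2\Psi-\partial_2\partial_1\Psi=0$.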
\noindent (ii) Rotationally symmetric (axisymmetric) solutions of (1.2) are of the form
\begin{align*} u=-r^{-1}\partial_z\Psi e_r+r^{-1}\Gamma(\Psi)e_{\theta}+r^{-1}\partial_r\Psi e_z,\quad f=\dot{\Gamma}(\Psi), \end{align*}\\ for some $\Gamma(\cdot)$ and $\Psi(r,z)$ satisfying $-(\Delta_{z,r}-r^{-1}\partial_r)\Psi=\dot{\Gamma}(\Psi)\Gamma(\Psi)$, where $e_r={}^{t}(\cos\theta,\sin\theta,0)$, $e_{\theta}={}^{t}(-\sin\theta,\cos\theta,0)$, $e_z={}^{t}(0,0,1)$ are the orthonormal basis vectors in the cylindrical coordinates. These elliptic problems appear as free boundary problems for translating vortex pairs and vortex rings, see Section 4. Under helical symmetry, (1.1) and (1.2) reduce to the helical Grad-Shafranov equation \cite[p.196]{Freidberg}. \end{rems}
The proof of Theorem 1.2 is based on the facts that (1.2) can be recast as a constrained evolution equation on a level set of $f$ \cite{EP16}, \cite{CK20} and that Beltrami fields are solutions to the elliptic equation $-\Delta u=\nabla f\times u+f^{2}u$, as explained below. Unfortunately, solutions of (1.1) with $\pi\nequiv \textrm{const.}$ possess neither of these properties, and their rigidity is out of reach. Rigidity of (1.1) with $\pi\nequiv \textrm{const.}$ is unknown even for $\pi$ depending on 1 variable in $(r,\theta,z)$. A crucial difference is the failure of a unique continuation property for (1.1) with $\pi\nequiv \textrm{const.}$, as compactly supported solutions exist \cite{Gav}, \cite{CLV}, \cite{DEPS21}. \\
We outline the proof of Theorem 1.2. We show that the symmetric-directional component of $u$ ($u^{3}$ or $\Gamma$) is constant on a level set of $f$. This property can be observed from the above form of symmetric solutions: since $f$, $u^{3}$ and $\Gamma$ are functions of $\Psi$, the components $u^{3}$ and $\Gamma$ are constant on a level set of $f$. Moreover, the other two components of $u$ are independent of the symmetric variable ($z$ or $\theta$).
To prove this, without assuming symmetry of $u$, we use differential forms and rewrite (1.2) as a constrained evolution equation \cite{EP16}, cf. \cite{CK20},
\begin{align*} \beta_t=-(c+t)\chi*_t\beta,\quad d \beta=0, \tag{1.3} \end{align*}\\
for $\chi=|\nabla f|^{-1}$ and a dual 1-form $\beta$ of $u$ on the surface $f^{-1}(c+t)$, where $d$ denotes the exterior derivative and $*_t$ denotes the Hodge star operator on the surface. By parametrizing the surface by $\xi={}^{t}(\xi_1,\xi_2)$ and $t\geq 0$, this 1-form is written as
\begin{align*} \beta=\beta_{1}(\xi,t)d\xi_1+\beta_{2}(\xi,t)d\xi_2. \end{align*}\\ The constrained equation (1.3) is equivalent to (1.2) for $f\nequiv \textrm{const.}$ and implies that the 1-form satisfies the elliptic equation on the surface
\begin{align*} d(\chi*_{t}\beta)=0,\quad d\beta=0. \tag{1.4} \end{align*}\\ If the surface $f^{-1}(c+t)$ is diffeomorphic to a sphere, $\beta$ is an exact form, i.e. $\beta= d\psi$. By the elliptic equation of the divergence form
\begin{align*} d(\chi*_{t}d \psi)=0, \end{align*}\\ and integration by parts, $\psi$ is constant on the surface. Thus $u$ vanishes in a neighborhood of the regular level set and in $\mathbb{R}^{3}$ by unique continuation.
If the surface is not diffeomorphic to a sphere, the problem (1.4) admits non-trivial solutions and does not imply non-existence for (1.2) with $f\nequiv \textrm{const.}$ A new observation of the present work is that if $f=f(r,\theta)$ or $f=f(r,z)$, each component of $\beta$ can be shown to be constant in the symmetric variable. For cylindrical surfaces and surfaces of revolution, we denote the symmetric variable by $\xi_2$, i.e. $\xi_2=z$ or $\xi_2=\theta$. Then the symmetric-directional component $\beta_2$ satisfies
\begin{align*} d(\chi*_{t}d \beta_{2})=0. \tag{1.5} \end{align*}\\ We show that $\beta_{2}=\beta_{2}(t)$ and $\beta_{1}=\beta_{1}(\xi_1,t)$ if the surface $f^{-1}(c+t)$ is diffeomorphic to a cylinder or a torus. If the surface is diffeomorphic to a torus, this follows from integration by parts. For a cylinder, we apply a Liouville theorem for bounded solutions to the elliptic equation (1.5). The property $\beta_{2}=\beta_{2}(t)$ and $\beta_{1}=\beta_{1}(\xi_1,t)$ implies local symmetry of $u$ and global symmetry follows from unique continuation.
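For instance, when $T=f^{-1}(c+t)$ is a closed torus, the integration by parts alluded to above is short: by Stokes' theorem and (1.5),
\begin{align*} 0=\int_{T}d\left(\beta_{2}\,\chi *_{t}d\beta_{2}\right)=\int_{T}\chi\, d\beta_{2}\wedge *_{t}d\beta_{2}+\int_{T}\beta_{2}\, d\left(\chi *_{t}d\beta_{2}\right)=\int_{T}\chi\, d\beta_{2}\wedge *_{t}d\beta_{2}, \end{align*}\\ and since $\chi=|\nabla f|^{-1}>0$ and $d\beta_{2}\wedge *_{t}d\beta_{2}$ is a non-negative multiple of the area form, we get $d\beta_{2}=0$, i.e. $\beta_{2}=\beta_{2}(t)$.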
It is possible to extend this approach to level sets diffeomorphic to other cylindrical surfaces and surfaces of revolution. On the other hand, rigidity of (1.2) for $f=f(\theta,z)$ is unknown except for $f=f(z)$ and $f=f(\theta)$. The equation (1.5) for $f=f(\theta,z)$ is written with the metric tensor of the surface in the same form as for $f=f(r,\theta)$ and $f=f(r,z)$, though the dependence on the parameter $\xi={}^{t}(\xi_1,\xi_2)$ is different. We also address the constrained equation (1.3) for $f=f(\theta,z)$.
\section{The elliptic equation for $\beta_{2}$}
We derive the equation (1.3) by parametrizing the surface $f^{-1}(c+t)$ by $\xi$. The elliptic equation (1.4) has an explicit form with $\xi$ and is written in a simpler form for $f$ depending on 2 variables in $(r,\theta,z)$. We show that $\beta_{2}$ satisfies the equation (1.5) for $f=f(r,\theta)$ and $f=f(r,z)$.
\subsection{The constrained equation}
We assume that a level set $f^{-1}(c)$ for $c\in \mathbb{R}$ is regular in the sense that $f^{-1}(c+t)$ is a smooth surface for $0\leq t\leq t_0$ with some $t_0 >0$ and $\nabla f(x)\neq 0$ for $x\in f^{-1}(c+t)$. We parametrize the surface $f^{-1}(c)$ by $x=\Phi_0(\xi)$ with $\xi={}^{t}(\xi_1,\xi_2)$ and define $\Phi(\xi, t)$ by the flow of $X=\nabla f /|\nabla f|^{2}$, i.e.
\begin{align*} &\partial_t \Phi=X(\Phi),\quad t>0, \\ &\Phi(\xi,0)=\Phi_0(\xi). \end{align*}\\ The flow $\Phi(\xi,t)$ parametrizes the surface $f^{-1}(c+t)$, i.e. $\Phi(\xi,t)\in f^{-1}(c+t)$. Since $f\in C^{2+\mu}$ for some $0<\mu<1$, $\Phi(\xi,t)$ is $C^{2+\mu}$. We may assume that $\Phi(\cdot,t)$ is defined for $0\leq t\leq t_0$. The equations (1.2) for the dual 1-form $\alpha=\sum_{i=1}^{3}u^{i}dx_i$ of $u=(u^{i})$ are
\begin{align*} d_{\mathbb{R}^{3}}\alpha=f*_{\mathbb{R}^{3}}\alpha,\quad d_{\mathbb{R}^{3}}*_{\mathbb{R}^{3}}\alpha=0, \tag{2.1} \end{align*}\\ where $d_{\mathbb{R}^{3}}$ and $*_{\mathbb{R}^{3}}$ are the exterior derivative and the Hodge star operator in $\mathbb{R}^{3}$, respectively. By the elliptic equation $-\Delta u=\nabla f\times u+f^{2}u$ and $f\in C^{2+\mu}$, $u$ and $\alpha$ are $C^{3+\mu}$. The pullback $\beta=\Phi^{*}\alpha$ by the map $\Phi: (\xi,t)\longmapsto x=\Phi(\xi,t)$ satisfies
\begin{align*} d_{\mathbb{R}^{3}}\beta=(c+t)*_{\mathbb{R}^{3}}\beta,\quad d_{\mathbb{R}^{3}}*_{\mathbb{R}^{3}}\beta=0. \tag{2.2} \end{align*}\\
With the matrices $F=(\partial_1\Phi,\partial_2\Phi,\partial_t\Phi)$ and $\tilde{F}=|F|F^{-1}$,
\begin{align*} \beta&=(u^{1},u^{2},u^{3})F\ {}^{t}(d\xi_1,d\xi_2,dt), \\ *_{\mathbb{R}^{3}} \beta&=(u^{1},u^{2},u^{3}){}^{t}\tilde{F}\ {}^{t}(d\xi_2\wedge dt,dt\wedge d\xi_1,d\xi_1\wedge d\xi_2), \end{align*}\\
where $|F|$ denotes the determinant of $F$. Since $u\cdot \partial_t\Phi=0$, the pullback $\beta$ is a 1-form on a surface and $C^{1+\mu}$,
\begin{align*} \beta =u(\Phi(\xi,t))\cdot \partial_{1}\Phi d \xi_1+u(\Phi(\xi,t))\cdot \partial_{2}\Phi d \xi_2 =:\beta_1(\xi,t)d\xi_1+\beta_2(\xi,t)d\xi_2. \end{align*}\\ We write the metric tensor by ${\mathcal{G}}=(\partial_{i}\Phi\cdot \partial_{j}\Phi)_{1\leq i,j\leq 2}$ and $\mathcal{G}^{-1}=(g^{ij})_{1\leq i,j\leq 2}$. Since
\begin{align*} {}^{t}FF=\left( \begin{matrix} {\mathcal{G}} & 0 \\ 0& \chi^{2} \end{matrix}
\right),\quad \chi=|\nabla f|^{-1}, \end{align*}\\ and $(u^{1},u^{2}, u^{3} )=(\beta_1,\beta_2,0)F^{-1}$, the Hodge dual in $\mathbb{R}^{3}$ is
\begin{align*} *_{\mathbb{R}^{3}}\ \beta
=\chi |{\mathcal{G}}|^{1/2} ( (\beta_1g^{11}+\beta_2g^{21} )d\xi_2\wedge dt+(\beta_1g^{12}+\beta_2g^{22} ) dt\wedge d\xi_1 ). \end{align*}\\ Then the equations (2.2) imply
\begin{align*} &\partial_1\beta_{2}-\partial_2\beta_{1}=0,\\
&\partial_t \beta_{1}=(c+t)\chi|{\mathcal{G}}|^{1/2} (\beta_1g^{12}+\beta_2g^{22}),\\
&\partial_t \beta_{2}=-(c+t)\chi|{\mathcal{G}}|^{1/2} (\beta_1g^{11}+\beta_2g^{21}),\\
&\partial_1(\chi |{\mathcal{G}}|^{1/2} (\beta_1g^{11}+\beta_2g^{21}) )+\partial_2(\chi |{\mathcal{G}}|^{1/2} (\beta_1g^{12}+\beta_2g^{22}) )=0. \end{align*}\\ The last equation follows from the first 3 equations. They can be written as
\begin{align*} v_t=Av,\quad \nabla^{\perp}\cdot v=0, \tag{2.3} \end{align*}\\ for $v={}^{t}(v^{1},v^{2})$, $v^{i}=\beta_{i}$ with the matrix
\begin{align*}
A=(c+t)\chi | {\mathcal{G}} |^{1/2} \left(
\begin{array}{cc}
g^{12} & g^{22} \\
-g^{11} & -g^{21}
\end{array}
\right), \end{align*}\\ where $\nabla={}^{t}(\partial_{\xi_1},\partial_{\xi_2})$ and $\nabla^{\perp}={}^{t}(\partial_{\xi_2}, -\partial_{\xi_1})$. Applying $\nabla^{\perp}\cdot$ to the evolution equation shows that $v$ satisfies the elliptic equation $\nabla^{\perp}\cdot(Av)=0$ and $\nabla^{\perp}\cdot v=0$. With the Hodge star operator on the surface,
\begin{align*}
*_{t}\ \beta=(\beta_1,\beta_2)|{\mathcal{G}}|^{1/2} {\mathcal{G}}^{-1}\ {}^{t}(d\xi_2,-d\xi_1), \end{align*}\\ (2.3) is written as (1.3), i.e.
\begin{align*} \beta_t=-(c+t)\chi*_{t} \beta,\quad d \beta=0. \end{align*}\\ The elliptic equation (1.4) follows by differentiating the first equation by $d$. The system (2.3) is overdetermined in the sense that the irrotational condition is in general \textit{not} compatible with the evolution equation. Regarding (1.2) as a constrained evolution equation originates from \cite{EP12}, in which the Cauchy--Kowalevski theorem is used to construct strong Beltrami fields for a given initial surface and tangential data.
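The claim above that the last of the four equations follows from the first 3 can be checked directly: substituting the second and third equations,
\begin{align*} &\partial_1\left(\chi |{\mathcal{G}}|^{1/2} (\beta_1g^{11}+\beta_2g^{21})\right)+\partial_2\left(\chi |{\mathcal{G}}|^{1/2} (\beta_1g^{12}+\beta_2g^{22})\right)\\ &\quad =\frac{1}{c+t}\left(-\partial_1\partial_t \beta_{2}+\partial_2\partial_t \beta_{1}\right)=-\frac{1}{c+t}\,\partial_t\left(\partial_1\beta_{2}-\partial_2\beta_{1}\right)=0, \end{align*}\\ by the first equation $\partial_1\beta_{2}-\partial_2\beta_{1}=0$.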
Clelland and Klotz \cite{CK20} derived an evolution equation similar to (2.3) by using a moving frame and studied (1.2) in terms of integral manifolds of an equivalent exterior differential system by using Cartan's method. Among other results, they showed that the associated integral manifolds are at most 3-dimensional if level sets of $f$ have no umbilic points.
\subsection{Symmetric $f$}
The matrix $A$ has a simpler form for $f$ depending on 2 variables in $(r,\theta,z)$. \\
\noindent (i) $f=f(r,\theta)$. We parametrize a curve in a plane by $(r(\xi_1,t), \theta(\xi_1,t))$, i.e. $x_1=r\cos\theta$, $x_2=r\sin\theta$. Then, the map
\begin{align*} \Phi(\xi,t)=re_{r}(\theta)+ze_z \tag{2.4} \end{align*}\\ for $(r,\theta,z)=(r(\xi_1,t), \theta(\xi_1,t),\xi_2)$ parametrizes a cylindrical surface. The matrix $A$ takes the form
\begin{align*} A=(c+t)\chi \left(
\begin{array}{cc}
0 &\nu\\
-1/\nu & 0
\end{array}
\right) \tag{2.5} \end{align*}\\ for
\begin{align*}
\chi=\sqrt{|\partial_tr|^{2}+r^{2}|\partial_t\theta|^{2}},\quad \nu=\sqrt{|\partial_1r|^{2}+r^{2}|\partial_1\theta|^{2}}. \end{align*}\\ (ii) $f=f(r,z)$. We parametrize a curve in the $(r,z)$-plane by $(r(\xi_1,t), z(\xi_1,t))$. Then the map (2.4) for $(r,\theta,z)=(r(\xi_1,t),\xi_2,z(\xi_1,t))$ parametrizes a surface of revolution. The matrix $A$ has the same form as (2.5) with the different coefficients
\begin{align*}
\chi=\sqrt{|\partial_tr|^{2}+|\partial_tz|^{2}},\quad \nu=\sqrt{{|\partial_1r|^2+|\partial_1z|^2 }}/r. \end{align*}\\ (iii) $f=f(\theta,z)$. We parametrize a curve in the $(\theta,z)$-plane by $(\theta(\xi_1,t), z(\xi_1,t))$. The map (2.4) for $(r,\theta,z)=(\xi_2,\theta(\xi_1,t), z(\xi_1,t))$ parametrizes a right conoid. The matrix $A$ has the same form as (2.5) with the different coefficients
\begin{align*}
\chi=\sqrt{ r^{2}|\partial_t\theta|^{2}+|\partial_tz|^{2}},\quad \nu=\sqrt{r^{2}|\partial_1\theta|^{2}+|\partial_1 z|^{2}}. \end{align*}\\ In all the cases (i)-(iii), the elliptic problem for $v$ is written as
\begin{align*} \nabla\cdot(Bv)=0,\quad \nabla^{\perp}\cdot v=0 \tag{2.6} \end{align*}\\ with the matrix
\begin{align*} B=\left(
\begin{array}{cc}
p &0\\
0 & q
\end{array}
\right),\quad p=\chi/\nu,\ q=\chi\nu. \end{align*}\\ In the cases (i) and (ii), $B$ is independent of $\xi_2$. Hence $\partial_2v=\nabla v^{2}$ and
\begin{align*} \nabla \cdot(B\nabla v^{2})=0.
\tag{2.7} \end{align*}\\ This is written as $d(\chi*_td \beta_{2})=0$ in terms of differential forms.
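The step from the $\xi_2$-independence of $B$ to (2.7) can be spelled out as follows: the irrotational condition $\partial_1v^{2}-\partial_2v^{1}=0$ gives
\begin{align*} \partial_2v={}^{t}(\partial_2v^{1},\partial_2v^{2})={}^{t}(\partial_1v^{2},\partial_2v^{2})=\nabla v^{2}, \end{align*}\\ and differentiating $\nabla\cdot(Bv)=0$ with respect to $\xi_2$, using $\partial_2B=0$, yields $\nabla\cdot(B\partial_2v)=\nabla\cdot(B\nabla v^{2})=0$.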
If a level set of $f=f(r,\theta)$ is diffeomorphic to a cylinder, the curve $x_1=r(\xi_1,t)\cos\theta(\xi_1,t)$, $x_2=r(\xi_1,t)\sin\theta(\xi_1,t)$ is diffeomorphic to a circle. We suppose $0\leq \xi_1\leq 2\pi$, $\xi_2\in \mathbb{R}$ and regard $v^{2}$ as a periodic solution to (2.7) in $\mathbb{R}^{2}$. Since the level set $f^{-1}(c)$ is regular, $\nabla f\neq 0$ and $\partial_1\Phi\neq 0$ for $0\leq t\leq t_0$ and $\xi_1\in \mathbb{R}$. Thus, $\chi=|\nabla f|^{-1}$ and $\nu=|\partial_1 \Phi|$ are bounded from above and below by positive constants. We take some $\lambda(t)$ and $\Lambda(t)$ such that
\begin{align*} 0<\lambda(t)\leq p(\xi_1,t),q(\xi_1,t)\leq \Lambda(t),\quad \xi_1\in \mathbb{R},\ 0\leq t\leq t_0. \tag{2.8} \end{align*}
\begin{rems} \noindent (i) For $f=f(z)$, (1.2) is written as a constrained evolution equation without using the cylindrical coordinate. Solutions for such $f$ are $u={}^{t}(v,0)$ and $v={}^{t}(v^1,v^2)$ satisfying
\begin{align*} \partial_3v=-fv^{\perp},\quad \nabla \cdot v=0,\quad \nabla^{\perp}\cdot v=0, \end{align*}\\ for $v^{\perp}={}^{t}(-v^{2},v^{1})$. This evolution equation is \textit{compatible} with the elliptic constraints. Thus, for a given harmonic vector field $v(x_1,x_2,0)$ on $\{x_3=0\}$, one can construct non-symmetric solutions to (1.2) for $f=f(z)$. If $u$ is bounded, $v$ must be constant in $\mathbb{R}^{2}$. Hence any bounded solution is symmetric, i.e. $u=u^{1}(x_3)e_1+u^{2}(x_3)e_2$.
(ii) For $f=f(r)$, every solution of (1.2) is axisymmetric due to the compatibility of (2.3). Indeed, we take $r=r(t)$ satisfying $df(r(t))/dt=1$ and $\theta=\xi_1$, $z=\xi_2$. Then, the function $\Phi$ in (2.4) parametrizes a cylinder and $v$ satisfies (2.6) for $B=B(t)$. If $p\equiv 1$, differentiating $\nabla \cdot (Bv)=0$ with respect to $t$ implies
\begin{align*} 0=\nabla \cdot ( \partial_t B v+B\partial_tv ) =\nabla \cdot ( \partial_t B v ) =\partial_tq \partial_2v^{2}. \end{align*}\\ By $\nabla \cdot (Bv)=0$, $\partial_i v^{i}=0$ for $i=1,2$. Differentiating each component of $\partial_t v=Av$ with respect to $\xi_1$ and $\xi_2$ shows that $v=v(t)$. If $p\not\equiv 1$, applying the same argument to $\nabla\cdot (p^{-1}Bv)=0$ yields $v=v(t)$. Thus, $u=u^{\theta}(r)e_{\theta}+u^{z}(r)e_{z}$.
\noindent (iii) For $f=f(\theta)$, no solutions to (1.2) exist due to the \textit{incompatibility} of (2.3). Indeed, we take $\theta=\theta(t)$ satisfying $df(\theta(t))/dt=1$ and $r=\xi_2$, $z=\xi_1$. Then, the function $\Phi$ in (2.4) parametrizes a half plane. The first equation of (2.6) is $\nabla \cdot (\xi_2 v)=0$. Differentiating this with respect to $t$ gives $\nabla \cdot (\xi_2^{2} v^{\perp})=0$, and $v=0$ follows. Thus, $u\equiv 0$. \end{rems}
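Remark (i) can be illustrated by an explicit bounded solution (our illustrative choice, not taken from the text): take $f\equiv 1$ and the constant harmonic field $v(x_1,x_2,0)={}^{t}(1,0)$, so that the evolution $\partial_3v=-fv^{\perp}$ simply rotates $v$ in $x_3$. A short sympy check:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def curl(w):
    # curl of a vector field in Cartesian coordinates
    return sp.Matrix([
        sp.diff(w[2], x2) - sp.diff(w[1], x3),
        sp.diff(w[0], x3) - sp.diff(w[2], x1),
        sp.diff(w[1], x1) - sp.diff(w[0], x2),
    ])

# Illustrative data (an assumption of this sketch): f = 1, v(x1, x2, 0) = (1, 0).
# Solving dv/dx3 = -f v^perp with v^perp = (-v^2, v^1) rotates v in x3:
v = sp.Matrix([sp.cos(x3), -sp.sin(x3)])
u = sp.Matrix([v[0], v[1], 0])

# v solves the evolution equation and u is a bounded symmetric Beltrami field:
assert sp.diff(v[0], x3) == v[1]      # first component of dv/dx3 = -v^perp
assert sp.diff(v[1], x3) == -v[0]     # second component of dv/dx3 = -v^perp
assert curl(u) == u                   # curl u = f u with f = 1
print('curl u = u verified')
```

As the remark predicts, this bounded solution depends on $x_3$ only, i.e. $u=u^{1}(x_3)e_1+u^{2}(x_3)e_2$.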
\section{The Liouville theorem}
\subsection{Local symmetry}
If a level set of $f=f(r,z)$ is diffeomorphic to a torus, $d(\chi*_td \beta_{2})=0$ and integration by parts yield $\beta_{1}=\beta_{1}(\xi_1,t)$ and $\beta_{2}=\beta_{2}(t)$ by the compactness of the level set. We assume the boundedness of $u$ and prove this property when the level set of $f=f(r,\theta)$ is diffeomorphic to a cylinder.
\begin{prop} $\beta_{1}=\beta_{1}(\xi_1,t)$, $\beta_{2}=\beta_{2}(t)$,\ $\xi_1\in \mathbb{R}$,\ $0\leq t\leq t_0$. \end{prop}
\begin{proof} Since the diagonal matrix $B$ satisfies the elliptic condition by (2.8) and $v^{2}\in C^{1+\mu}$ is a bounded weak solution to (2.7) for $\xi\in \mathbb{R}^{2}$, applying the Liouville theorem \cite[Corollary 3.12, Theorem 8.20]{GT} implies that $v^{2}$ is constant. By $\nabla^{\perp}\cdot v=0$, $v^{1}$ is independent of $\xi_2$. \end{proof}
\begin{lem} The solution $u$ is translationally or rotationally symmetric in some symmetric open set $U\subset \mathbb{R}^{3}$. \end{lem}
\begin{proof} In both cases (i) and (ii), $\partial_{1}\Phi$ and $\partial_{2}\Phi$ are orthogonal. Thus $u(\Phi)=u(\Phi(\xi,t))$ satisfies
\begin{align*} u(\Phi)
&=(u(\Phi)\cdot \partial_{1}\Phi) \frac{\partial_{1}\Phi}{|\partial_{1}\Phi|^{2}}+(u(\Phi)\cdot \partial_{2}\Phi) \frac{\partial_{2}\Phi}{|\partial_{2}\Phi|^{2}} \\
&=\beta_{1}(\xi_1,t)\frac{\partial_{1}\Phi}{|\partial_{1}\Phi|^{2}}
+\beta_{2}(t)\frac{\partial_{2}\Phi}{|\partial_{2}\Phi|^{2}}. \end{align*}\\
In the case (i), by differentiating $\Phi(\xi,t)=r(\xi_1,t)e_{r}(\theta(\xi_1,t))+\xi_2e_z$ by $\xi_1$ and $\xi_2$,
\begin{align*}
u(\Phi)=\frac{\beta_{1}(\xi_1,t)}{|\partial_1r|^{2}+r^{2}|\partial_1\theta|^{2}}(\partial_1r e_r+r\partial_1\theta e_{\theta})+\beta_{2}(t)e_{z}. \end{align*}\\ The right-hand side is independent of $\xi_2=z$. Thus $u$ is translationally symmetric on the level set $f^{-1}(c+t)$ for $0\leq t\leq t_0$. In particular, $u$ is translationally symmetric in $U=D\times \mathbb{R}$ for some open set $D$ in a plane.
In the case (ii), by differentiating $\Phi(\xi,t)=r(\xi_1,t)e_{r}(\xi_2)+z(\xi_1,t)e_z$ by $\xi_1$ and $\xi_2$,
\begin{align*} u(\Phi)
&=\frac{\beta_{1}(\xi_1,t)}{|\partial_1r|^{2}+|\partial_1z|^{2}}(\partial_1r e_r+\partial_1z e_{z})+\frac{\beta_{2}(t)}{r}e_{\theta}. \end{align*}\\ Each component in cylindrical coordinates is independent of $\xi_2=\theta$. Thus $u$ is rotationally symmetric on the level set $f^{-1}(c+t)$ for $0\leq t\leq t_0$. In particular, $u$ is rotationally symmetric in the region $U$ obtained by rotating some open set in the $(r,z)$-plane around the $z$-axis. \end{proof}
\subsection{Unique continuation}
The local symmetry implies the global symmetry by unique continuation. We use a classical unique continuation result under the boundedness of $|\Delta w|/|w|$, e.g. \cite{Wolff93}.
\begin{prop} Let $w\in C^{2}(\mathbb{R}^{3})$ satisfy
\begin{align*}
|\Delta w|\leq C_R|w|\quad \textrm{in}\ \{|x|<R\}, \end{align*}\\
for each $R>0$ with some $C_R>0$. Assume that $w$ vanishes in some open set in $\{|x|<R\}$. Then, $w\equiv 0$. \end{prop}
\begin{proof}[Proof of Theorem 1.2] For translationally symmetric $u$ in $U$, set $w(x)=u(x)-u(x+\tau e_{z})$ for $\tau\in \mathbb{R}$. Then, $w$ is a Beltrami field with $f$ and vanishes in $U$. Since $-\Delta w=\nabla f\times w+f^{2}w$ in $\mathbb{R}^{3}$, by unique continuation, $w\equiv 0$ in $\mathbb{R}^{3}$. Thus $u$ is translationally symmetric in $\mathbb{R}^{3}$, i.e. $u=u(x_1,x_2)$.
Similarly, for rotationally symmetric $u$ in $U$, set $w(x)=u(x)-{}^{t}R_{\tau}u(R_{\tau}x)$ with $R_{\tau}=(e_{r}(\tau), e_{\theta}(\tau), e_{z} )$ for $\tau\in [0,2\pi]$. Then, applying unique continuation to $w$ implies that $u$ is rotationally symmetric in $\mathbb{R}^{3}$, i.e. $u=u^{r}(r,z)e_r(\theta)+u^{\theta}(r,z)e_{\theta}(\theta)+u^{z}(r,z)e_z$. This completes the proof. \end{proof}
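The elliptic identity used in the proof is a direct consequence of the Beltrami equation: since $\nabla\times w=fw$ and $\nabla\cdot w=0$,
\begin{align*} -\Delta w=\nabla\times(\nabla\times w)-\nabla(\nabla\cdot w)=\nabla\times(fw)=\nabla f\times w+f\,\nabla\times w=\nabla f\times w+f^{2}w, \end{align*}\\ so $|\Delta w|\leq C_R|w|$ on $\{|x|<R\}$ with $C_R=\sup_{|x|<R}(|\nabla f|+f^{2})$, and the unique continuation result above applies.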
\begin{rems} (i) Under the translational symmetry, (1.2) is reduced to
\begin{align*} \partial_2u^{3}=fu^{1},\quad -\partial_1u^{3}=fu^{2},\quad \partial_1u^{2}-\partial_2u^{1}=fu^{3},\quad \partial_1u^{1}+\partial_2u^{2}=0. \end{align*}\\ With a stream function $\Psi$, ${}^{t}(u^{1},u^{2})=\nabla^{\perp}\Psi$ and ${}^{t}(u^{1},u^{2})\cdot \nabla u^{3}=0$. Hence $u^{3}=u^{3}(\Psi)$, $f=\dot{u}^{3}(\Psi)$ and $\Psi$ is a solution to $-\Delta \Psi=\dot{u}^{3}(\Psi){u}^{3}(\Psi)$.
\noindent (ii) Under the rotational symmetry, (1.2) is reduced to
\begin{align*} -\partial_zu^{\theta}=fu^{r},\quad \partial_zu^{r}-\partial_ru^{z}=fu^{\theta},\quad \partial_r u^{\theta}+u^{\theta}/r=fu^{z},\quad \partial_ru^{r}+u^{r}/r+\partial_z u^{z}=0. \end{align*}\\ With a stream function $\Psi$, $ru^{z}=\partial_r\Psi$, $ru^{r}=-\partial_z\Psi$ and ${}^{t}(ru^{z},ru^{r})\cdot \nabla_{z,r}\Gamma=0$ for $\Gamma=ru^{\theta}$. Hence $\Gamma=\Gamma(\Psi)$, $f=\dot{\Gamma}(\Psi)$ and $\Psi$ is a solution to $-(\Delta_{z,r}-r^{-1}\partial_r)\Psi=\dot{\Gamma}(\Psi)\Gamma(\Psi)$. \end{rems}
\section{Examples of symmetric solutions}
We review the existence of translationally and rotationally symmetric solutions to (1.2).
\subsection{Vortex pairs}
Translationally symmetric solutions can be constructed by the elliptic problem for given $u^{3}(\cdot)$,
\begin{align*} -\Delta \Psi=\dot{u}^{3}(\Psi)u^{3}(\Psi)\quad \textrm{in}\ \mathbb{R}^{2}. \end{align*}\\ The simplest solutions are rotationally symmetric solutions, i.e. $\Psi=\Psi(r)$. For such $\Psi$, level sets of $f$ are cylinders, i.e. $f=f(r)$. If $\dot{u}^{3}(\Psi)u^{3}(\Psi)$ is compactly supported, the Biot-Savart law implies the decay ${}^{t}(u^{1},u^{2})=O(r^{-1})$ as $r\to\infty$, cf. \cite{Na14}, \cite{CC15}. Besides rotationally symmetric solutions, there exist periodic solutions for $\dot{u}^{3}(t)u^{3}(t)=t$ or $e^{t}$. For such solutions, level sets of $f$ are deformed cylinders in $\mathbb{R}^{3}$ and ${}^{t}(u^{1},u^{2})$ is merely bounded, e.g. \cite[2.2.2]{MaB}.
Variational solutions also exist. A vortex pair is a pair of two translating vortices with opposite signs in $\mathbb{R}^{2}$. They are symmetric in the $x_2$-variable and constructed via the half-plane problem:
\begin{equation*} \begin{aligned} -\Delta \Psi&=\dot{u}^{3}(\Psi){u}^{3}(\Psi)\quad \textrm{in}\ \mathbb{R}^{2}_{+},\\ \Psi&=-\gamma\hspace{49pt} \textrm{on}\ \partial\mathbb{R}^{2}_{+}, \\ \partial_1\Psi\to0,\quad \partial_2\Psi&\to -W\hspace{42pt} \textrm{as}\ x_1^{2}+x_2^{2}\to\infty. \end{aligned} \end{equation*}\\ The constant $W>0$ is the translation speed of the vortex pair and $\gamma\geq 0$ is a flux constant measuring the distance from the vortices to the boundary $x_2=0$. A typical choice is $u^{3}(t)=t_{+}^{l}$ for $l>1$ and $t_{+}=\max\{t,0\}$. For such $u^{3}$, variational solutions exist and their vortex is compactly supported in $\mathbb{R}^{2}$ \cite{Yang91}. Level sets of $f$ are two symmetric deformed cylinders in $\mathbb{R}^{3}$ and the decay is ${}^{t}(u^{1},u^{2})=\textrm{const.}+O(r^{-1})$ as $r\to\infty$.
\subsection{Vortex rings}
Rotationally symmetric solutions can be constructed via the elliptic problem for given $\Gamma(\cdot)$:
\begin{equation*} \begin{aligned} -(\Delta_{z,r}-r^{-1}\partial_r)\Psi&=\dot{\Gamma}(\Psi)\Gamma(\Psi)\quad \textrm{in}\ \mathbb{R}^{2}_{+},\\ \Psi&=-\gamma\hspace{41pt} \textrm{on}\ \partial\mathbb{R}^{2}_{+}, \\ r^{-1}\partial_z\Psi\to0,\quad r^{-1}\partial_r\Psi&\to -W\hspace{34pt} \textrm{as}\ z^{2}+r^{2}\to\infty. \end{aligned} \tag{4.3} \end{equation*}\\
For the choice $\Gamma(s)=s_+^{l}$ and $l>1$, variational solutions exist and their vortex is compactly supported in $\mathbb{R}^{3}$ \cite{A8}. Level sets of $f$ are tori in $\mathbb{R}^{3}$ and the decay is $u=\textrm{const.}+O(|x|^{-3})$ as $|x|\to\infty$.
\end{document} |
\begin{document}
\maketitle
\begin{abstract} The main aim of this survey paper is to gather together some results concerning the Calabi type duality discovered by Lee~\cite{Lee} between certain families of spacelike graphs with constant mean curvature in Riemannian and Lorentzian homogeneous 3-manifolds with isometry group of dimension 4. The duality is conformal and swaps mean curvature and bundle curvature, and we will revisit it by giving a more general statement in terms of conformal immersions. This will show that many features in the theory of surfaces with mean curvature $\frac{1}{2}$ in $\mathbb{H}^2\times\mathbb{R}$ or minimal surfaces in the Heisenberg space have nice geometric interpretations in their dual Lorentzian counterparts. We will briefly discuss some applications such as gradient estimates for entire minimal graphs in Heisenberg space~\cite{MN} and the existence of complete spacelike surfaces~\cite{LeeMan}, and we will give a uniform treatment to the behavior of the duality with respect to ambient isometries. Finally, some open questions are posed in the last section. \end{abstract}
\section{Introduction}
The theory of constant mean curvature surfaces in homogeneous 3-manifolds has become increasingly relevant during the last decades. Although everything started in the ambient spaces with constant sectional curvature $\mathbb{R}^3$, $\mathbb{S}^3$ and $\mathbb{H}^3$, the product spaces $\mathbb{H}^2\times\mathbb{R}$ and $\mathbb{S}^2\times\mathbb{R}$ were considered next, and soon afterwards the Heisenberg space $\mathrm{Nil}_3$, the Berger spheres $\mathbb{S}^3_{\text{Berger}}$, and $\widetilde{\mathrm{Sl}}_2(\mathbb{R})$ (the universal cover of the special linear group equipped with some distinguished left-invariant metrics) came into scene. All these 3-manifolds, except for $\mathbb{H}^3$, lie in a 2-parameter family $\mathbb{E}(\kappa,\tau)$ with the property that they admit a Killing submersion with constant bundle curvature $\tau$ over the simply connected surface with constant curvature $\kappa$ (see definitions in Section~\ref{sec:ekt}). Abresch and Rosenberg's discovery of a geometric quadratic differential which is holomorphic on constant mean curvature surfaces~\cite{AR}, together with the Lawson type correspondence found by Daniel~\cite{Dan}, and the solution to the Bernstein problem in Heisenberg space given by Fern\'{a}ndez and Mira~\cite{FM} encouraged many geometers to work on this theory. We refer the reader to the introductory lecture notes written by Daniel, Hauswirth and Mira~\cite{DHM}.
It is worth pointing out that the spaces $\mathbb{E}(\kappa,\tau)$, $\kappa-4\tau^2\neq 0$, model all simply connected 3-manifolds with isometry group of dimension 4, but recently an increasing attention has been paid to the family of all simply connected homogeneous 3-manifolds, consisting essentially of Lie groups with arbitrary left-invariant metrics. A sample of this is the recent classification of spheres with constant mean curvature in such Lie groups by Meeks, Mira, P\'{e}rez and Ros~\cite{MMPR}. It would be interesting to know whether or not the ideas we will discuss below can be extended to the Lie-group setting.
The spaces $\mathbb{E}(\kappa,\tau)$ have Lorentzian counterparts $\mathbb{L}(\kappa,\tau)$ also admitting a Killing-submersion structure over $\mathbb{M}^2(\kappa)$ with constant bundle curvature $\tau$, but in such a way that the Killing direction is timelike whereas the horizontal distribution is spacelike. Calabi stated in~\cite{Calabi} an interesting duality between minimal graphs in Euclidean space $\mathbb{R}^3=\mathbb{E}(0,0)$ and maximal graphs in Lorentz-Minkowski space $\mathbb{L}^3=\mathbb{L}(0,0)$, based on a clever trick using Poincar\'e's Lemma. Although this duality is often called Calabi duality, it seems that similar ideas had already been considered by other authors; see for instance the work of Catalan~\cite{Catalan} more than a century earlier. Calabi's duality has a natural interpretation in terms of null curves and holomorphic Weierstrass data (e.g., see~\cite{LLS} where the duality has been used to study conical singularities of maximal surfaces in $\mathbb{L}^3$).
Later on, Calabi's work was generalized to a correspondence between minimal graphs in $\mathbb{S}^2\times\mathbb{R}=\mathbb{E}(1,0)$ and maximal graphs in $\mathbb{S}^2\times\mathbb{R}_1=\mathbb{L}(1,0)$ by Albujer and Al\'{i}as~\cite{AA}, and to a correspondence between constant mean curvature $H$ graphs in $\mathbb{E}(\kappa,\tau)$ and spacelike graphs with constant mean curvature $\tau$ in $\mathbb{L}(\kappa,H)$ by Lee~\cite{Lee}. In Theorem~\ref{thm:duality} we will rewrite Lee's result in the more general language of conformal immersions by allowing the surfaces to be multigraphs rather than graphs. In order to get to this result, we will employ the classification of Killing submersions in~\cite{Man}, though the same statement can also be achieved by means of analytic continuation. The rest of Section~\ref{sec:duality} will be devoted to review other well-known properties of the duality, as well as to recover its explicit expression in coordinates as a first-order \textsc{pde} system, where one can see that the mean curvature equations become the integrability conditions of the system. Although we will give some examples, more of them can be found in~\cite{KL,Lee}.
In Section 3, we will revisit some results in~\cite{LeeMan} showing that the space $\mathbb{L}(\kappa,\tau)$ does not admit complete spacelike surfaces if $\kappa+4\tau^2>0$, and we will provide three proofs of this result, each of a different nature. We will also consider \emph{dual-complete} surfaces rather than complete ones, which is nothing but assuming that the dual surface is complete in $\mathbb{E}(\kappa,H)$. We will use well-known results in the Riemannian setting to give a fairly complete classification of dual-complete spacelike surfaces in $\mathbb{L}(\kappa,\tau)$, see also~\cite{DHM,MR}. This will grant the opportunity to connect the duality with the theory of constant mean curvature surfaces in $\mathbb{E}(\kappa,\tau)$, and we will discuss the relation between the duality and Daniel correspondence~\cite{Dan}, see Remark~\ref{rmk:daniel}.
Critical mean curvature surfaces in $\mathbb{E}(\kappa,\tau)$-spaces are those whose dual surfaces live in constant sectional curvature spaces (i.e., in the Lorentz-Minkowski space $\mathbb{L}^3=\mathbb{L}(0,0)$ or in the anti-de Sitter space $\mathbb{H}_1^3(\kappa)=\mathbb{L}(\kappa,\frac{1}{2}\sqrt{-\kappa})$), that is to say those whose mean curvature $H$ satisfies the relation $4H^2+\kappa=0$. This gives an outstanding interpretation of why surfaces with critical mean curvature have features that the rest of constant mean curvature surfaces fail to enjoy: \begin{itemize}
\item The hyperbolic Gauss map for mean curvature $\frac{1}{2}$ surfaces in $\mathbb{H}^2\times\mathbb{R}$ discovered by Fern\'{a}ndez and Mira~\cite{FM}, the left-invariant Gauss map for minimal surfaces in $\mathrm{Nil}_3(\tau)$ given by Daniel~\cite{Dan2}, and the rest of harmonic Gauss maps for critical mean curvature surfaces, see~\cite{DFM}, correspond to the classical (hyperbolic) Gauss map of surfaces in $\mathbb{L}^3$ and $\mathbb{H}_1^3(\kappa)$, see also~\cite{Lee2}.
\item The Abresch-Rosenberg differential of surfaces with critical mean curvature, which is associated with the harmonic Gauss maps, corresponds to the classical Hopf differential in $\mathbb{L}^3$ and $\mathbb{H}_1^3(\kappa)$. \end{itemize}
In Section 4, we will discuss an application in the other direction by employing a Lorentzian property (an estimate for entire graphs in $\mathbb{L}^3$ proved by Cheng and Yau) to get some gradient estimates for entire minimal graphs in $\mathrm{Nil}_3(\tau)$. This result is part of~\cite{MN}, though we will also use it here to prove that entire graphs with positive mean curvature in $\mathbb{L}^3$ have infinite area.
Some other properties will be analyzed in Section 5. First, we will give a general new result for understanding the behavior of the duality with respect to isometries, generalizing~\cite{Lee}. Basically, we will prove that direct isometries preserving the unit Killing vector field behave well with respect to the duality, in the sense that they correspond to other isometries in the dual setting also preserving the Killing vector field. Such isometries form a 4-parameter subgroup of isometries of $\mathbb{E}(\kappa,\tau)$ or $\mathbb{L}(\kappa,\tau)$, which is generically the total dimension of the isometry group. Nonetheless, in the cases where the correspondence involves a space form, there are 2 extra dimensions in the isometry group. Again, this applies to the case of critical mean curvature, providing geometric 2-parameter deformations (not by ambient isometries) in the following families of surfaces: \begin{enumerate}
\item Surfaces in $\mathbb{E}(\kappa,\tau)$ with critical constant mean curvature $H$, i.e., such that $\kappa+4H^2=0$.
\item Surfaces in $\mathbb{L}(\kappa,\tau)$ with critical constant mean curvature $H$, i.e., such that $\kappa-4H^2=0$. \end{enumerate} Note that the change of sign in the conditions $\kappa+4H^2=0$ and $\kappa-4H^2=0$ is not a mistake, and comes from the fact that space forms satisfy $\kappa-4\tau^2=0$ in the Riemannian case, and $\kappa+4\tau^2=0$ in the Lorentzian case (see Section~\ref{sec:ekt} and the comments below Proposition~\ref{prop:transform}).
In the last section, we will pose three original open questions whose solutions could, in the opinion of the author, improve the understanding of the correspondence. We also refer to~\cite{DHM} for a list of open conjectures in $\mathbb{E}(\kappa,\tau)$-spaces, though some of them have been already solved since the document was published in 2009.
\section{A conformal duality}\label{sec:duality}
\subsection{The spaces $\mathbb{E}(\kappa,\tau)$ and $\mathbb{L}(\kappa,\tau)$}\label{sec:ekt}
Let $\tau\in C^\infty(M)$ be a smooth function on a simply connected orientable Riemannian surface $M$. From~\cite{Man}, we get that there is a unique Riemannian submersion $\pi:\mathbb{E}\to M$ such that \begin{enumerate}
\item $\mathbb{E}$ is a simply connected orientable Riemannian 3-manifold,
\item the fibers of $\pi$ are the integral curves of a unit Killing vector field $\xi\in\mathfrak{X}(\mathbb{E})$, and
\item $\overline\nabla_X\xi=(\tau\circ\pi) X\times\xi$, for all $X\in\mathfrak{X}(\mathbb{E})$, where $\overline\nabla$ denotes the Levi-Civita connection in $\mathbb{E}$ and $\times$ stands for the cross product. \end{enumerate} Such a Riemannian submersion is called a \emph{unit Killing submersion} with \emph{bundle curvature} $\tau$ over $M$ (this result also holds in the more general non-simply connected and non-unitary case, see~\cite{LerMan}). Similar arguments to those in~\cite{Man} lead to the existence and uniqueness of a Riemannian submersion $\pi:\mathbb{L}\to M$ fulfilling items 1--3 above, and such that $\mathbb{L}$ is Lorentzian and $\xi\in\mathfrak{X}(\mathbb{L})$ is timelike.
Let us assume that $\pi:\mathbb{E}\to M$ is a (Riemannian or Lorentzian) Killing submersion over $M$ with bundle curvature $\tau$, and whose fibers have infinite length. Given an open set $\Omega\subset M$ and a smooth section $F_0:\Omega\to\mathbb{E}$ (the existence of $F_0$ is guaranteed since the fibers have infinite length, see~\cite{LerMan,Man}), any section of $\pi$ over $\Omega$ can be parameterized in terms of a function $u:\Omega\to\mathbb{R}$ as \begin{equation}\label{eqn:grafo} F_u:\Omega\to\mathbb{E},\qquad F_u(p)=\phi_{u(p)}(F_0(p)), \end{equation} where $\{\phi_t:t\in\mathbb{R}\}$ is the 1-parameter group of isometries associated with $\xi$, the so-called vertical translations. Such a section $F_u$ is usually called a Killing (or vertical) graph over $\Omega$.
Let $d$ be the signed Killing distance to $F_0$ determined unambiguously by the relation $\phi_{d(p)}(F_0(\pi(p)))=p$ for all $p\in\mathbb{E}$, and consider the projection of its gradient $Z=\pi_*(\overline\nabla d)\in\mathfrak{X}(M)$. Given $u\in C^2(\Omega)$, we define the \emph{generalized gradient} of $u$ as the vector field \begin{equation}\label{eqn:G} Gu=\nabla u-\epsilon Z\in\mathfrak{X}(M), \end{equation} where we will take $\epsilon=1$ or $\epsilon=-1$ depending on whether $\mathbb{E}$ is Riemannian or Lorentzian, respectively. It is easy to check that $F_u$ defines a spacelike surface (i.e., the induced metric in $F_u$ is positive definite) if and only if \begin{equation}\label{eqn:spacelike}
1+\epsilon\|Gu\|^2>0, \end{equation} in which case the mean curvature of $F_u$ can be computed (as a function on $\Omega$) as \begin{equation}\label{eqn:H}
H=\frac{1}{2}\mathrm{div}\left(\frac{Gu}{\sqrt{1+\epsilon\|Gu\|^2}}\right), \end{equation} where the divergence, gradient and norm are computed in the geometry of $M$. Although there is no explicit dependence upon the bundle curvature in~\eqref{eqn:G} or~\eqref{eqn:H}, it is encoded in the vector field $Z$ in the sense that \begin{equation}\label{eqn:tau} \mathrm{div}(JZ)=-2\tau, \end{equation} where $J$ is the $\frac{\pi}{2}$-rotation in the tangent bundle $TM$, see~\cite[Lemma~3.2]{LerMan}. In the duality below, we will essentially exploit the fact that both $H$ and $\tau$ admit the divergence type expressions~\eqref{eqn:H} and~\eqref{eqn:tau}.
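To make these divergence expressions concrete, the following sympy sketch works in one flat model (an assumption of this sketch: $M=\mathbb{R}^2$ with the choice $Z=\tau(-y,x)$, which is one of many fields satisfying $\mathrm{div}(JZ)=-2\tau$; a different choice amounts to a different initial section $F_0$). It checks the bundle-curvature relation and that $u=\tau xy$ defines an entire minimal graph in this model of $\mathrm{Nil}_3(\tau)$:

```python
import sympy as sp

x, y, tau = sp.symbols('x y tau', real=True)

def J(w):  # rotation by pi/2 in the flat tangent plane
    return sp.Matrix([-w[1], w[0]])

def div(w):
    return sp.diff(w[0], x) + sp.diff(w[1], y)

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# Model choice (our assumption): M = R^2 flat, Z = tau*(-y, x).
Z = tau * sp.Matrix([-y, x])
assert sp.simplify(div(J(Z)) + 2 * tau) == 0   # bundle-curvature relation div(JZ) = -2*tau

# Mean curvature of the Killing graph F_u, Riemannian case (epsilon = 1):
def H(u):
    Gu = grad(u) - Z
    W = sp.sqrt(1 + Gu.dot(Gu))
    return sp.simplify(div(Gu / W) / 2)

# u = tau*x*y gives Gu = (2*tau*y, 0), and the graph is entire and minimal:
assert H(tau * x * y) == 0
print('H(tau*x*y) = 0 verified')
```

The same two divergence identities are exactly the structure exploited by the duality below.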
Killing submersions give a good framework for studying simply connected homogeneous Riemannian or Lorentzian 3-manifolds with isometry group of dimension 4. These 3-manifolds are classified in two families $\mathbb{E}(\kappa,\tau)$ and $\mathbb{L}(\kappa,\tau)$, where $\mathbb{E}(\kappa,\tau)$ (resp. $\mathbb{L}(\kappa,\tau)$) stands for the total space of the unique Riemannian (resp. Lorentzian) Killing submersion with constant bundle curvature $\tau$ over $\mathbb{M}^2(\kappa)$, the simply connected surface of constant curvature $\kappa$. For each particular choice of $(\kappa,\tau)$ we get the following spaces (we use an index 1 to distinguish the Lorentzian case): \begin{center}
\begin{tabular}{c||c|c|c}
&$\kappa<0$&$\kappa=0$&$\kappa>0$\\\hline\hline
$\tau=0$&$\mathbb{H}^2(\kappa)\times\mathbb{R}$&$\mathbb{R}^3$&$\mathbb{S}^2(\kappa)\times\mathbb{R}$\\
&$\mathbb{H}^2(\kappa)\times\mathbb{R}_1$&$\mathbb{L}^3$&$\mathbb{S}^2(\kappa)\times\mathbb{R}_1$\\\hline
$\tau\neq 0$&$\widetilde{\mathrm{Sl}}_2(\mathbb{R})(\kappa,\tau)$&$\mathrm{Nil}_3(\tau)$&$\mathbb{S}^3_{\text{Berger}}(\kappa,\tau)$\\
&$\widetilde{\mathrm{Sl}}^1_2(\mathbb{R})(\kappa,\tau)$&$\mathrm{Nil}^1_3(\tau)$&$\mathbb{S}^{3,1}_{\text{Berger}}(\kappa,\tau)$
\end{tabular}\end{center} There are some space forms hidden in this table, whose isometry group has dimension 6, namely the Riemannian $\mathbb{R}^3$ and $\mathbb{S}^3(\kappa)$ show up in the family $\mathbb{E}(\kappa,\tau)$ with $\kappa-4\tau^2=0$, whereas the Lorentz-Minkowski space $\mathbb{L}^3$ and the anti-de Sitter space $\mathbb{H}^3_1(\kappa)$ are $\mathbb{L}(\kappa,\tau)$-spaces with $\kappa+4\tau^2=0$. We remark that neither the hyperbolic space $\mathbb{H}^3(\kappa)$ nor the de Sitter space $\mathbb{S}^3_1(\kappa)$ admits a unit Killing vector field.
\subsection{An extended conformal duality}
In the previous section, a graph in a Riemannian or Lorentzian Killing submersion $\pi:\mathbb{E}\to M$ has been defined as a smooth section of $\pi$, i.e., a surface $\Sigma$ in $\mathbb{E}$ such that $\pi_{|\Sigma}:\Sigma\to M$ is a diffeomorphism onto its image, and in particular a graph is transversal to the vertical Killing vector field $\xi$. The graph is said to be entire if $\pi_{|\Sigma}$ is also surjective. In the Riemannian case, these conditions can be weakened to any of the following three equivalent conditions: \begin{enumerate}
\item[i.] $\pi_{|\Sigma}$ is a local diffeomorphism,
\item[ii.] $\Sigma$ is nowhere vertical, i.e., everywhere transversal to $\xi$,
\item[iii.] the angle function $\nu=\langle N,\xi\rangle$ has no zeroes, where $N$ is a unit normal to $\Sigma$. \end{enumerate} If any of these properties holds, the surface $\Sigma$ is usually called a vertical (spacelike) \emph{multigraph}. In the Lorentzian case, spacelike surfaces are automatically transversal to the Killing direction, so multigraphs turn out to be natural objects in the Lorentzian setting.
As pointed out in the introduction, our version of the duality assumes that the surfaces are multigraphs rather than graphs, which generalizes the strictly graphical duality given by Lee~\cite{Lee} in the case $\kappa\leq 0$, and by Albujer and Al\'{i}as~\cite{AA} in the case of $\mathbb{S}^2\times\mathbb{R}$. Although we will assume simple connectedness in the statement, it can be applied to any conformal immersion by considering the universal cover of the Riemann surface we are working with.
\begin{theorem}\label{thm:duality} Let $\Sigma$ be a simply connected Riemann surface, and let $\kappa,\tau,H\in\mathbb{R}$. There is a correspondence between \begin{enumerate}
\item[(a)] Conformal immersions $X:\Sigma\to\mathbb{E}(\kappa,\tau)$ with constant mean curvature $H$ and nowhere vertical.
\item[(b)] Conformal spacelike immersions $\widetilde X:\Sigma\to\mathbb{L}(\kappa,H)$ with constant mean curvature $\tau$. \end{enumerate} The corresponding immersions $X$ and $\widetilde X$ are determined up to a vertical translation and satisfy $\pi\circ X=\widetilde\pi\circ\widetilde X$. \end{theorem}
\begin{proof} Let $X:\Sigma\to\mathbb{E}(\kappa,\tau)$ be a conformal immersion with constant mean curvature $H$. Since $X$ is nowhere vertical, the map $\pi\circ X$ is a local diffeomorphism and we can consider the Riemannian surface $M$ defined as $\Sigma$ endowed with the pullback of the metric of $\mathbb{M}^2(\kappa)$ by $\pi\circ X$. As $M$ is simply connected, there are unique Killing submersions $\pi':\mathbb{E}\to M$ and $\widetilde\pi':\mathbb{L}\to M$ (where $\pi'$ is Riemannian and $\widetilde\pi'$ is Lorentzian) with constant bundle curvatures $\tau$ and $H$, respectively, and such that $\mathbb{E}$ and $\mathbb{L}$ are simply connected.
Note that $\mathbb{E}$ and $\mathbb{L}$ are locally isometric to $\mathbb{E}(\kappa,\tau)$ and $\mathbb{L}(\kappa,H)$, respectively. More explicitly, since $\pi\circ X:M\to\mathbb{M}^2(\kappa)$ is a local isometry, it lifts to local isometries $\Psi:\mathbb{E}\to\mathbb{E}(\kappa,\tau)$ and $\widetilde\Psi:\mathbb{L}\to\mathbb{L}(\kappa,H)$ such that the following diagram commutes: \[\begin{tikzcd} \mathbb{E}\arrow[rr,rightarrow, "\pi'"]&& M\arrow[rr,leftarrow,"\widetilde\pi'"]&& \mathbb{L}\arrow[d,rightarrow, "\widetilde\Psi"]\\ \mathbb{E}(\kappa,\tau)\arrow[rr, rightarrow, "\pi"]\arrow[u,leftarrow, "\Psi"]&& \mathbb{M}^2(\kappa)\arrow[u, leftarrow, "\pi\circ X"]\arrow[rr,leftarrow,"\widetilde\pi"]&& \mathbb{L}(\kappa,H) \end{tikzcd}\] Furthermore, the image of $X$ lies in the image of $\Psi$, so the immersion $\Psi^{-1}\circ X:\Sigma\to\mathbb{E}$ is well defined, and it is not only a conformal immersion in $\mathbb{E}$ with constant mean curvature $H$ but also an entire graph over $M$. Note that fibers of $\pi'$ have infinite length, since otherwise $\pi'$ would be the Hopf fibration, which does not admit global sections (this excludes the case where $\Sigma$ is a sphere and $\tau\neq 0$, see the comments below).
Let $F_0:M\to\mathbb{E}$ be a global section of $\pi'$, which exists by~\cite[Proposition~3.3]{LerMan}. Then there is a function $u$ such that $\Psi^{-1}\circ X$ can be parameterized as $F_u:M\to\mathbb{E}$ where $F_u(p)=\phi_{u(p)}(F_0(p))$ and $\{\phi_t:t\in\mathbb{R}\}$ stands for the 1-parameter group of isometries associated to the vertical Killing vector field in $\mathbb{E}$. The fact that $F_u$ has constant mean curvature $H$ can be expressed as \begin{equation}\label{thm:duality:eqn1}
\mathrm{div}\left(\frac{Gu}{\sqrt{1+\|Gu\|^2}}\right)=2H, \end{equation} where $Gu=\nabla u-Z$, and $Z=\pi'_*(\overline\nabla d)\in\mathfrak{X}(M)$ satisfies $\mathrm{div}(JZ)=-2\tau$.
Let $\widetilde\pi':\mathbb{L}\to M$ be the unique Lorentzian Killing submersion over $M$ with constant bundle curvature $H$ such that $\mathbb{L}$ is simply connected. Given an initial global section $\widetilde F_0:M\to\mathbb{L}$ and the signed distance $\widetilde d$, the vector field $\widetilde Z= \widetilde \pi'_*(\overline\nabla\widetilde d)\in\mathfrak{X}(M)$ satisfies $\mathrm{div}(J\widetilde Z)=-2H$. Hence we can rewrite~\eqref{thm:duality:eqn1} as \begin{equation}\label{thm:duality:eqn2}
\mathrm{div}\left(\frac{Gu}{\sqrt{1+\|Gu\|^2}}+J\widetilde Z\right)=0. \end{equation} The fact that $M$ is simply connected, together with Poincar\'e's Lemma, tells us that the term inside the divergence in~\eqref{thm:duality:eqn2} equals $-J\nabla v$ for some $v\in C^\infty(M)$. Setting $\widetilde G v=\nabla v+\widetilde Z$, we reach \begin{equation}\label{thm:duality:eqn3}
\frac{Gu}{\sqrt{1+\|Gu\|^2}}=-J\widetilde G v\quad\Longleftrightarrow\quad
\frac{\widetilde Gv}{\sqrt{1-\|\widetilde Gv\|^2}}=JGu. \end{equation}
Observe that, in order to check the equivalence in~\eqref{thm:duality:eqn3}, we can take squared norms in the first expression to obtain $\|\widetilde Gv\|<1$ (which implies that the entire graph $\widetilde F_v:M\to\mathbb{L}$ given by $\widetilde F_v(p)=\widetilde\phi_{v(p)}(\widetilde F_0(p))$ is spacelike, see~\eqref{eqn:spacelike}), and the identity $1+\|Gu\|^2=(1-\|\widetilde Gv\|^2)^{-1}$. From here it is easy to prove~\eqref{thm:duality:eqn3}.
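Explicitly, since $J$ is a pointwise isometry with $J^{2}=-\mathrm{id}$, taking squared norms in the first identity gives \begin{equation*} \|\widetilde Gv\|^2=\frac{\|Gu\|^2}{1+\|Gu\|^2}<1,\qquad 1-\|\widetilde Gv\|^2=\frac{1}{1+\|Gu\|^2}, \end{equation*} and applying $J$ to the first identity yields \begin{equation*} \widetilde Gv=\frac{JGu}{\sqrt{1+\|Gu\|^2}}=\sqrt{1-\|\widetilde Gv\|^2}\,JGu, \end{equation*} which is the second identity; the converse implication is analogous.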
Applying now the divergence operator, we get that \begin{equation}\label{thm:duality:eqn5}
\mathrm{div}\left(\frac{\widetilde Gv}{\sqrt{1-\|\widetilde Gv\|^2}}\right)=\mathrm{div}(JGu)=\mathrm{div}(J\nabla u)-\mathrm{div}(JZ)=2\tau, \end{equation} where we have used that $\mathrm{div}(J\nabla u)=0$ and $\mathrm{div}(JZ)=-2\tau$; i.e., $\widetilde F_v$ has constant mean curvature $\tau$ in $\mathbb{L}$.
The diffeomorphism $T:F_u(M)\to\widetilde F_v(M)$ determined by the relation $\widetilde\pi'\circ T=\pi'$ is conformal (the proof of this property will be postponed to Section~\ref{sec:coordinates} since it follows from the same argument as in~\cite{Lee,LeeMan} in local coordinates). Hence, it suffices to define $\widetilde X=\widetilde\Psi\circ T\circ\Psi^{-1}\circ X$, which is conformal as the composition of conformal maps, and trivially satisfies $\pi\circ X=\widetilde\pi\circ\widetilde X$.
The converse direction, starting from a constant mean curvature $\tau$ spacelike graph in $\mathbb{L}(\kappa,H)$, follows from a completely analogous argument and will be omitted. \end{proof}
\begin{remark}~ \begin{enumerate}
\item The conformal immersion $X$ is a graph if and only if $\widetilde X$ is a graph, in which case the domains of both graphs in $\mathbb{M}^2(\kappa)$ coincide. This case will be analyzed in detail in Section~\ref{sec:coordinates}.
\item The duality is determined by~\eqref{thm:duality:eqn3} only up to vertical translations, so it is one-to-one modulo vertical translations. The analytic reason is that, when Poincar\'e's Lemma is applied, the function we introduce is determined up to an additive constant.
\item In the case that $\Sigma$ is a sphere, since the angle functions of the immersions are positive continuous functions, they must be bounded between two positive constants. Therefore $\pi$ restricts to the surface as a covering map, which has to be one-to-one because $\Sigma$ is simply connected. We deduce that $\mathbb{M}^2(\kappa)$ must also be a sphere, i.e., $\kappa>0$. The bundle curvature must be zero, for otherwise the Killing submersion would be topologically the Hopf fibration, which does not admit global sections, and we would get a contradiction. Moreover, integrating the divergence equation for the mean curvature we get that $H=0$, so this case only leads to horizontal slices $\mathbb{S}^2(\kappa)\times\{t_0\}$, which are dual in $\mathbb{S}^2(\kappa)\times\mathbb{R}$ and $\mathbb{S}^2(\kappa)\times\mathbb{R}_1$.
\item In~\cite{LeeMan} a much more general graphical duality is obtained, where $H$ and $\tau$ are allowed to be arbitrary functions on an arbitrary simply connected non-compact base surface. Nonetheless, there is no direct extension of Theorem~\ref{thm:duality} to this general setting, due to the fact that the proof of Theorem~\ref{thm:duality} is implicitly assuming that the multigraph has the same mean curvature at points lying in the same vertical fiber, which is not a natural assumption in the prescribed mean curvature scenario. \end{enumerate} \end{remark}
\subsection{The duality in coordinates}\label{sec:coordinates}
We will now apply Theorem~\ref{thm:duality} to the case where the surfaces are graphs, introducing the standard models of $\mathbb{E}(\kappa,\tau)$ and $\mathbb{L}(\kappa,\tau)$ in order to recover the explicit equations defining the duality given by Lee~\cite{Lee}. Let $\kappa\in\mathbb{R}$, and consider the function \[\lambda_\kappa(x,y)=\left(1+\tfrac{\kappa}{4}(x^2+y^2)\right)^{-1},\] defined on the open set $\Omega_\kappa=\{(x,y)\in\mathbb{R}^2:1+\frac{\kappa}{4}(x^2+y^2)>0\}$, which is a disk for $\kappa<0$ and the whole plane otherwise. The conformal metric $\lambda_\kappa^2(\mathrm{d} x^2+\mathrm{d} y^2)$ has constant curvature $\kappa$, and it is (locally) isometric to $\mathbb{M}^2(\kappa)$. We get the well-known rotationally invariant models \begin{equation}\label{eqn:models} \begin{aligned} \mathbb{E}(\kappa,\tau)&=\left(\Omega_\kappa\times\mathbb{R},\lambda_\kappa^2(\mathrm{d} x^2+\mathrm{d} y^2)+(\mathrm{d} z+\tau\lambda_\kappa(y\,\mathrm{d} x-x\,\mathrm{d} y))^2\right),\\ \mathbb{L}(\kappa,\tau)&=\left(\Omega_\kappa\times\mathbb{R},\lambda_\kappa^2(\mathrm{d} x^2+\mathrm{d} y^2)-(\mathrm{d} z-\tau\lambda_\kappa(y\,\mathrm{d} x-x\,\mathrm{d} y))^2\right), \end{aligned} \end{equation} on which the Killing submersion takes the form $(x,y,z)\mapsto (x,y)$, and $\partial_z$ is the unit vertical Killing vector field. It is worth pointing out that this model omits a point in the base $\mathbb{M}^2(\kappa)$ when $\kappa>0$, and hence a whole fiber in $\mathbb{E}(\kappa,\tau)$ or $\mathbb{L}(\kappa,\tau)$. In the proof of Theorem~\ref{thm:duality} this issue did not appear since we were working from a coordinate-free point of view.
The global section $F_0(x,y)=(x,y,0)$ allows us to parameterize the graph $F_u$ of a function $u\in C^\infty(\Omega)$ on an open subset $\Omega\subset\Omega_\kappa$ as usual: \[F_u(x,y)=(x,y,u(x,y)).\] If $\Omega$ is open and simply connected, and we consider dual graphs $F_u$ in $\mathbb{E}(\kappa,\tau)$ and $F_v$ in $\mathbb{L}(\kappa,H)$, the generalized gradients $Gu$ of $F_u$ and $\widetilde Gv$ of $F_v$ can be expressed in coordinates as \begin{align*} Gu&=\alpha\frac{\partial_x}{\lambda_\kappa}+\beta\frac{\partial_y}{\lambda_\kappa},\quad\text{where } \begin{cases}\alpha=\frac{u_x}{\lambda_\kappa}+\tau y,\\\beta=\frac{u_y}{\lambda_\kappa}-\tau x,\end{cases}\\ \widetilde Gv&=\widetilde\alpha\frac{\partial_x}{\lambda_\kappa}+\widetilde\beta\frac{\partial_y}{\lambda_\kappa},\quad\text{where } \begin{cases}\widetilde\alpha=\frac{v_x}{\lambda_\kappa}-H y,\\\widetilde\beta=\frac{v_y}{\lambda_\kappa}+H x,\end{cases} \end{align*} where the coefficients of $\widetilde\alpha$ and $\widetilde\beta$ involve $H$ because the bundle curvature of $\mathbb{L}(\kappa,H)$ is $H$. In order to simplify the notation, in the sequel we shall also consider \begin{align*} \omega&=\sqrt{1+\alpha^2+\beta^2},&\widetilde\omega&=\sqrt{1-\widetilde\alpha^2-\widetilde\beta^2}. \end{align*} Then the first identity in~\eqref{thm:duality:eqn3} yields an explicit \textsc{pde} system which allows us to solve for $u$ in terms of $v$: \begin{align*}
\alpha=\frac{\widetilde\beta}{\widetilde\omega}\quad\Leftrightarrow\quad u_x&=\frac{v_y+Hx\lambda_\kappa}{\sqrt{1-(\frac{v_x}{\lambda_\kappa}-H y)^2-(\frac{v_y}{\lambda_\kappa}+H x)^2}}-\tau y\lambda_\kappa,\\
\beta=\frac{-\widetilde\alpha}{\widetilde\omega}\quad\Leftrightarrow\quad u_y&=\frac{-v_x+Hy\lambda_\kappa}{\sqrt{1-(\frac{v_x}{\lambda_\kappa}-H y)^2-(\frac{v_y}{\lambda_\kappa}+H x)^2}}+\tau x\lambda_\kappa.
\end{align*}
Likewise, the second identity in~\eqref{thm:duality:eqn3} gives the \textsc{pde} system: \begin{align*}
\widetilde\alpha=\frac{-\beta}{\omega}\quad\Leftrightarrow\quad v_x&=\frac{-u_y+\tau x\lambda_\kappa}{\sqrt{1+(\frac{u_x}{\lambda_\kappa}+\tau y)^2+(\frac{u_y}{\lambda_\kappa}-\tau x)^2}}+Hy\lambda_\kappa,\\
\widetilde\beta=\frac{\alpha}{\omega}\quad\Leftrightarrow\quad v_y&=\frac{u_x+\tau y\lambda_\kappa}{\sqrt{1+(\frac{u_x}{\lambda_\kappa}+\tau y)^2+(\frac{u_y}{\lambda_\kappa}-\tau x)^2}}-Hx\lambda_\kappa.
\end{align*} These four equations are the so-called \emph{twin relations}, and allow us to solve for $u$ (resp. $v$) when $v$ (resp. $u$) is known. The constant mean curvature equations for $F_u$ and $F_v$ can be regarded as the integrability conditions for the twin relations. Also from the twin relations we get the quite useful formula \begin{equation}\label{eqn:omega} \omega\cdot\widetilde\omega=1. \end{equation}
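For instance, \eqref{eqn:omega} follows by substituting the twin relations $\alpha=\widetilde\beta/\widetilde\omega$ and $\beta=-\widetilde\alpha/\widetilde\omega$: \[\omega^2=1+\alpha^2+\beta^2=1+\frac{\widetilde\alpha^2+\widetilde\beta^2}{\widetilde\omega^2}=\frac{\widetilde\omega^2+(1-\widetilde\omega^2)}{\widetilde\omega^2}=\frac{1}{\widetilde\omega^2},\] where we have used that $\widetilde\alpha^2+\widetilde\beta^2=1-\widetilde\omega^2$.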
On the other hand, we can define global references $\{e_1,e_2\}$ tangent to $F_u$ and $\{\widetilde e_1,\widetilde e_2\}$ tangent to $F_v$ (not necessarily orthogonal) as \begin{equation}\label{eqn:tangent-frame} \begin{aligned}
e_1&=\frac{\partial_x+u_x\partial_z}{\lambda_\kappa},&\widetilde e_1&=\frac{\partial_x+v_x\partial_z}{\lambda_\kappa},\\
e_2&=\frac{\partial_y+u_y\partial_z}{\lambda_\kappa},&\widetilde e_2&=\frac{\partial_y+v_y\partial_z}{\lambda_\kappa}. \end{aligned}\end{equation} Using~\eqref{eqn:models} it is easy to check that the first fundamental forms $I$ of $F_u$ and $\widetilde I$ of $F_v$ can be expressed in these references as the matrices \begin{equation}\label{eqn:1ff} \begin{aligned}
\mathrm{I}&\equiv\left(\begin{matrix}
1+\alpha^2&\alpha\beta\\\alpha\beta&1+\beta^2
\end{matrix}\right),&
\widetilde{\mathrm{I}}&\equiv\left(\begin{matrix}
1-\widetilde\alpha^2&-\widetilde\alpha\widetilde\beta\\-\widetilde\alpha\widetilde\beta&1-\widetilde\beta^2
\end{matrix}\right). \end{aligned}\end{equation} From the twin relations we infer that the two matrices in Equation~\eqref{eqn:1ff} satisfy $\widetilde{\mathrm{I}}=\omega^{-2}\mathrm{I}$. Since the global diffeomorphism $T:F_u(\Omega)\to F_v(\Omega)$ given in the proof of Theorem~\ref{thm:duality} reads $T(x,y,u(x,y))=(x,y,v(x,y))$ for all $(x,y)\in\Omega$, it follows that $T_*e_i=\widetilde e_i$ for $i\in\{1,2\}$, so $T$ is conformal (this was precisely the missing bit in the proof of Theorem~\ref{thm:duality}).
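The relation $\widetilde{\mathrm{I}}=\omega^{-2}\mathrm{I}$ can be checked entry by entry: the twin relations $\widetilde\alpha=-\beta/\omega$ and $\widetilde\beta=\alpha/\omega$, together with $\omega^2=1+\alpha^2+\beta^2$, give \[1-\widetilde\alpha^2=\frac{\omega^2-\beta^2}{\omega^2}=\frac{1+\alpha^2}{\omega^2},\qquad -\widetilde\alpha\widetilde\beta=\frac{\alpha\beta}{\omega^2},\qquad 1-\widetilde\beta^2=\frac{1+\beta^2}{\omega^2}.\]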
Let $N$ and $\widetilde N$ be upward pointing unit vectors to $F_u$ and $F_v$, respectively. Their horizontal parts satisfy the following identities (see~\cite[Equation~3.3]{LerMan}): \begin{equation}\label{eqn:N} \begin{aligned}
\pi_*N&=\frac{Gu}{\sqrt{1+|Gu|^2}},&
\widetilde\pi_*\widetilde N&=\frac{\widetilde Gv}{\sqrt{1-|\widetilde Gv|^2}}. \end{aligned} \end{equation} Since $N$ and $\widetilde N$ are unit vectors, their vertical components (also called \emph{angle functions} of the immersions $X$ and $\widetilde X$, respectively) are given by \begin{equation}\label{eqn:angle} \begin{aligned}
\nu=\langle N,\partial_z\rangle&=\frac{1}{\sqrt{1+|Gu|^2}}=\frac{1}{\omega},\\
\widetilde\nu=\langle \widetilde N,\partial_z\rangle&=\frac{1}{\sqrt{1-|\widetilde Gv|^2}}=\frac{1}{\widetilde\omega}. \end{aligned} \end{equation} From~\eqref{eqn:omega} and~\eqref{eqn:angle}, we deduce that dual conformal immersions have reciprocal angle functions. Note that the angle function $\nu$ (resp. $\widetilde\nu$) is the cosine (resp. hyperbolic cosine) of the angle between the upward-pointing normal and the Killing vector field, so it satisfies $0<\nu\leq 1$ (resp. $1\leq\widetilde\nu<\infty$).
As a conclusion to this section, we will give several examples, though we will not include the computations, which should be easy to deduce from the twin relations. Further examples can be found in~\cite{KL,Lee}.
\begin{example}\label{example1} Let us consider the function $v(x,y)=0$, which defines a zero mean curvature graph in $\mathbb{L}(\kappa,H)$ but fails to be spacelike over the whole base surface $\mathbb{M}^2(\kappa)$ when $\kappa+4 H^2>0$. The spacelike condition only holds in a disk centered at $(0,0)\in\mathbb{M}^2(\kappa)$. The dual surface $F_u$ is half of the rotationally invariant sphere with constant mean curvature $H$ in $\mathbb{E}(\kappa,\tau)$.
The value of $H\geq 0$ (which exists if and only if $\kappa\leq 0$) such that $\kappa+4 H^2=0$ is called the critical mean curvature in $\mathbb{E}(\kappa,\tau)$. Note that constant mean curvature spheres only exist when the mean curvature is supercritical (i.e., $\kappa+4 H^2>0$) and the duality provides a nice way to obtain explicit expressions for them. \end{example}
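To illustrate Example~\ref{example1} in the simplest case $\kappa=\tau=0$ (so that $\lambda_\kappa\equiv 1$ and any $H>0$ is supercritical), the twin relations for $v=0$ reduce to \[u_x=\frac{Hx}{\sqrt{1-H^2(x^2+y^2)}},\qquad u_y=\frac{Hy}{\sqrt{1-H^2(x^2+y^2)}},\] which can be integrated directly: up to an additive constant, $u(x,y)=-\sqrt{H^{-2}-x^2-y^2}$, i.e., $F_u$ is a hemisphere of the round sphere of radius $1/H$ in $\mathbb{R}^3$, defined on the disk of radius $1/H$ where $F_v$ is spacelike.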
\begin{example}\label{example2} Let us consider a constant mean curvature $H$ graph in $\mathbb{R}^3$ which is rotationally invariant about the $x$-axis, i.e., a graph of a function $u$ of the form $u(x,y)=\sqrt{r(x)^2-y^2}$ for some $r\in C^\infty(I)$ on an interval $I\subseteq\mathbb{R}$ (a piece of a Delaunay surface). The dual maximal surface $F_v$ in $\mathbb{L}(0,H)$ is of the form $v(x,y)=yf(x)$ for some function $f\in C^\infty(I)$ (in particular, $F_v$ is a surface ruled by Euclidean lines). Quite surprisingly, as in the above case of spheres, $v(x,y)$ still defines a graph with zero mean curvature outside the domain of $u$. \end{example}
\begin{example}\label{example3} Let us consider now a constant mean curvature $H$ spacelike graph in $\mathbb{L}^3$ which is invariant under hyperbolic rotations about the $x$-axis, i.e., a graph of a function $v$ of the form $v(x,y)=\sqrt{r(x)^2+y^2}$ for some $r\in C^\infty(I)$ on an interval $I\subseteq\mathbb{R}$. These examples were studied by Daniel in~\cite{Dan2}, and include the surfaces parameterized by \begin{align*}
(x,y)&\mapsto \left(x,y,\sqrt{H^{-2}+x^2+y^2}\right),\\
(x,y)&\mapsto \left(x,y,\sqrt{(2H)^{-2}+y^2}\right),\\
(x,y)&\mapsto \tfrac{1}{H}\left(x-\tfrac{1}{2}\coth(x),\tfrac{1}{2}\coth(x)\sinh(y),\tfrac{1}{2}\coth(x)\cosh(y)\right). \end{align*} The first one is a totally umbilical hyperboloid, the second one is a hyperbolic cylinder, and the last one is known as the semitrough. The dual minimal surfaces in $\mathrm{Nil}_3(H)$ are of the form $u(x,y)=yf(x)$ for some function $f\in C^\infty(I)$, including the plane $u(x,y)=0$ and the invariant surface $u(x,y)=Hxy$. \end{example}
\section{Existence of complete spacelike surfaces}
In this section, we will discuss how the conformal theories of constant mean curvature multigraphs in $\mathbb{E}(\kappa,\tau)$ and $\mathbb{L}(\kappa,\tau)$ agree via the duality. We will begin by studying the relation between being complete and being an entire graph, essentially following the ideas in~\cite{LeeMan}.
\begin{lemma}\label{lemma:nonexistence} Let $\pi:\mathbb{L}\to M$ be a Lorentzian Killing submersion and let $\Sigma$ be a complete spacelike surface immersed in $\mathbb{L}$. If $M$ is simply connected, then $\Sigma$ is an entire graph. \end{lemma}
\begin{proof}
The projection $\pi_{|\Sigma}:\Sigma\to M$ is a distance non-decreasing local diffeomorphism, and hence a covering map by~\cite[Lemma~8.1, Ch.~VIII]{KN}. If $M$ is simply connected, then $\pi_{|\Sigma}$ must be one-to-one, i.e., $\Sigma$ must be an entire graph. \end{proof}
\begin{remark}\label{rmk:completeness} The converse is not true, since there are non-complete entire spacelike graphs in $\mathbb{L}(\kappa,\tau)$ with any constant mean curvature $H\geq 0$ provided that $\kappa+4\tau^2<0$. Such graphs can be found among the surfaces invariant under a 1-parameter group of isometries, as shown in the case $(\kappa,\tau,H)=(-1,0,0)$ by Albujer~\cite{A} (the same technique works in the general case). \end{remark}
This suggests that the behavior of spacelike constant mean curvature surfaces in $\mathbb{L}(\kappa,\tau)$ strongly depends on the sign of $\kappa+4\tau^2$, in a similar fashion as the Riemannian theory depends on the sign of $\kappa+4H^2$. The next result shows that the theory of complete spacelike surfaces is not interesting when the bundle curvature is supercritical. Note that there is no assumption on the mean curvature of the surface. We will give three different proofs of this result.
\begin{theorem} If $\kappa+4\tau^2>0$ and $\kappa\leq 0$, then there are no complete spacelike surfaces or entire spacelike graphs in $\mathbb{L}(\kappa,\tau)$. \end{theorem}
\noindent{\it First proof.}\hskip \labelsep This proof relies on the duality, as well as on a classical trick due to Heinz~\cite{Heinz}. In order to apply Theorem~\ref{thm:duality}, we will assume the mean curvature is constant, though the same argument applies in the general case, see~\cite{LeeMan}. We will assume that $\Sigma\subset\mathbb{L}(\kappa,\tau)$ is an entire graph with constant mean curvature $H$, and reach a contradiction, which will prove the statement in view of Lemma~\ref{lemma:nonexistence}.
The dual graph $F_u$ in $\mathbb{E}(\kappa,H)$ has constant mean curvature $\tau$. Given a bounded domain $\Omega\subset\mathbb{M}^2(\kappa)$ with regular boundary, we can employ the mean curvature equation and the divergence theorem to estimate \begin{equation}\label{eqn:heinz}
2\tau\mathrm{Area}(\Omega)=\int_\Omega\mathrm{div}\left(\frac{Gu}{\sqrt{1+\|Gu\|^2}}\right)=\int_{\partial\Omega}\frac{\langle Gu,\eta\rangle}{\sqrt{1+\|Gu\|^2}}\leq\mathrm{Length}(\partial\Omega), \end{equation} where $\eta$ is a unit outer conormal vector field to $\Omega$ along its boundary, and we have used the Cauchy-Schwarz inequality. Dividing by $\mathrm{Area}(\Omega)$ in~\eqref{eqn:heinz} and taking the infimum over $\Omega$ we get that \begin{equation}\label{eqn:cheeger} 2\tau\leq\inf\left\{\frac{\mathrm{Length}(\partial\Omega)}{\mathrm{Area}(\Omega)}:\Omega\subset\mathbb{M}^2(\kappa) \text{ bounded and regular}\right\}. \end{equation} The \textsc{rhs} in~\eqref{eqn:cheeger} is the so-called Cheeger constant of $\mathbb{M}^2(\kappa)$, which is known to be zero if $\kappa\geq 0$, and $\sqrt{-\kappa}$ if $\kappa\leq 0$. Therefore, Equation~\eqref{eqn:cheeger} yields $\kappa+4\tau^2\leq 0$ if $\kappa\leq 0$, so we are done.
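The value of the Cheeger constant for $\kappa<0$ can be recovered from geodesic disks $D_R\subset\mathbb{M}^2(\kappa)$ of radius $R$: \[\frac{\mathrm{Length}(\partial D_R)}{\mathrm{Area}(D_R)}=\frac{\sqrt{-\kappa}\,\sinh(\sqrt{-\kappa}\,R)}{\cosh(\sqrt{-\kappa}\,R)-1}=\sqrt{-\kappa}\,\coth\Bigl(\tfrac{1}{2}\sqrt{-\kappa}\,R\Bigr)\longrightarrow\sqrt{-\kappa}\quad\text{as }R\to\infty,\] and the isoperimetric inequality in $\mathbb{H}^2(\kappa)$, namely $\mathrm{Length}(\partial\Omega)^2\geq 4\pi\mathrm{Area}(\Omega)-\kappa\,\mathrm{Area}(\Omega)^2$, shows that no domain does better than this limit.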
\penalty10000\raisebox{-.09em}{$\Box$}\par
\noindent{\it Second proof.}\hskip \labelsep Now we will give a direct argument using the divergence equation for $\tau$. If $\Sigma$ is an entire spacelike graph in $\mathbb{L}(\kappa,\tau)$, parameterized as $F_v$ for some $v\in C^\infty(\mathbb{M}^2(\kappa))$, then the vertical fibers have infinite length, the generalized gradient satisfies $\mathrm{div}(JGv)=\mathrm{div}(J\nabla v+JZ)=\mathrm{div}(JZ)=-2\tau$, and the spacelike condition reads $\|Gv\|<1$. If $\Omega\subset\mathbb{M}^2(\kappa)$ is a bounded domain with regular boundary, we can estimate \[2\tau\mathrm{Area}(\Omega)=\int_\Omega\mathrm{div}(-JGv)=\int_{\partial\Omega}\langle -JGv,\eta\rangle\leq\mathrm{Length}(\partial\Omega),\] where we have used the Cauchy-Schwarz inequality together with $\|JGv\|=\|Gv\|<1$, and $\eta$ denotes a unit outer conormal vector field to $\Omega$ along its boundary. Then we can conclude with a similar argument as in the first proof.
\penalty10000\raisebox{-.09em}{$\Box$}\par
\noindent{\it Third proof.}\hskip \labelsep Let us consider the circle in the $z=0$ plane given by \[\gamma:\mathbb{R}\to\mathbb{L}(\kappa,\tau),\qquad \gamma(t)=(r\cos(t),r\sin(t),0).\] From the assumption $\kappa+4\tau^2>0$, it is easy to check that $\gamma$ is a closed timelike curve for some values of $r\in(0,\frac{2}{\sqrt{-\kappa}})$ if $\kappa<0$, or $r\in(0,\infty)$ if $\kappa=0$. This means that $\mathbb{L}(\kappa,\tau)$ is not a causal spacetime, and in particular it is not distinguishing, so it does not admit complete spacelike surfaces (more information about these definitions and the fact that distinguishing spacetimes admit a Killing submersion structure and complete spacelike surfaces can be found in~\cite{JS}).
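Explicitly, using the model~\eqref{eqn:models} and writing $\lambda=\lambda_\kappa(r\cos(t),r\sin(t))$, we have $(y\,\mathrm{d} x-x\,\mathrm{d} y)(\gamma'(t))=-r^2$, so \[\langle\gamma'(t),\gamma'(t)\rangle=\lambda^2r^2-\tau^2\lambda^2r^4=\lambda^2r^2(1-\tau^2r^2),\] which is negative if and only if $r>1/|\tau|$. Such a radius exists in $\Omega_\kappa$ precisely when $1/|\tau|<2/\sqrt{-\kappa}$ (if $\kappa<0$, with no restriction if $\kappa=0$), which amounts to $\kappa+4\tau^2>0$.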
\penalty10000\raisebox{-.09em}{$\Box$}\par
In the rest of this section, we will replace completeness by a weaker condition in order to apply Riemannian results. However, we will require the mean curvature to be constant.
\begin{definition} A spacelike conformal immersion $\widetilde X:\Sigma\to\mathbb{L}(\kappa,\tau)$ with constant mean curvature $H$ is said to be dual-complete when the dual immersion $X:\Sigma\to\mathbb{E}(\kappa,H)$ is complete. \end{definition}
Denote by $g$ and $\widetilde g$ the Riemannian metrics on $\Sigma$ that make $X$ and $\widetilde X$ isometric, respectively. Then Equations~\eqref{eqn:1ff} and~\eqref{eqn:angle} yield the conformal relation $g=\widetilde\nu^2\widetilde g$, where $\widetilde\nu$ stands for the angle function of $\widetilde X$. Since $\widetilde\nu\geq 1$, we deduce that any complete spacelike immersion is dual-complete.
\begin{theorem}\label{thm:dual-complete} Let $\widetilde X:\Sigma\to\mathbb{L}(\kappa,\tau)$ be a dual-complete spacelike conformal immersion with constant mean curvature. Then either $\widetilde X(\Sigma)$ is a horizontal slice in $\mathbb{S}^2(\kappa)\times\mathbb{R}_1$ or $\kappa+4\tau^2\leq 0$. \begin{enumerate}
\item If $\kappa+4\tau^2<0$, then $\widetilde X(\Sigma)$ is a graph over a simply connected domain bounded by disjoint curves with constant curvature $\pm 2\tau$ in $\mathbb{H}^2(\kappa)$.
\item If $\kappa+4\tau^2=0$, then $\widetilde X(\Sigma)$ is complete, and hence an entire graph. \end{enumerate} In particular, $\Sigma$ is simply connected. \end{theorem}
\begin{proof} Let us consider the dual immersion $X:\Sigma\to\mathbb{E}(\kappa,H)$, where $H$ is the mean curvature of $\widetilde X$. Since $X$ defines a complete surface with constant mean curvature $\tau$, and it is stable because it is nowhere vertical (its angle function is a positive Jacobi function), the arguments in the proof of~\cite[Theorem~3.1]{MPR} show that either $X(\Sigma)$ is a horizontal slice in $\mathbb{S}^2\times\mathbb{R}$ or $\kappa+4\tau^2\leq 0$.
If $\kappa+4\tau^2=0$, then $X$ has critical mean curvature and hence $X(\Sigma)$ is an entire graph (this was proved by Hauswirth, Rosenberg and Spruck~\cite{HRS} for mean curvature $\frac{1}{2}$ surfaces in $\mathbb{H}^2\times\mathbb{R}$, by Daniel and Hauswirth~\cite{DH} for minimal ones in the Heisenberg space, and it extends to the remaining cases by means of the Daniel correspondence, see~\cite{DHM}). The proof that it is complete can be found in~\cite[Theorem~4.6.2]{DHM}. If $\kappa+4\tau^2<0$, then $X$ has subcritical constant mean curvature and it must be a graph over a domain $\Omega\subset\mathbb{H}^2(\kappa)$ bounded by a (possibly empty) family of disjoint curves with constant curvature $\pm 2\tau$ (the sign depends on whether the function defining the graph goes to $+\infty$ or $-\infty$), see~\cite{MR}. \end{proof}
Observe that this result is sharp, in the sense that there are many entire graphs with any constant mean curvature when $\kappa+4\tau^2=0$. If $\kappa=\tau=H=0$, then we only get affine planes in $\mathbb{L}^3$ due to the well-known Calabi-Bernstein theorem~\cite{Calabi}, but in the remaining cases the results of Wan and Au~\cite{Wan,WanAu} can be used to show that the moduli space of entire graphs with constant mean curvature $H$ (up to ambient isometries) is in one-to-one correspondence with the space of holomorphic quadratic differentials in the complex plane (parabolic case) or the unit disk (hyperbolic case), with the exception that the differential must be non-zero in the parabolic case. This correspondence is given by the Hopf differential, and was the key step in Fern\'{a}ndez and Mira's solution to the Bernstein problem in $\mathrm{Nil}_3(\tau)$~\cite{FM}.
In the case $\kappa+4\tau^2<0$, the family of dual-complete graphs is richer. On the one hand, there are plenty of entire graphs with constant mean curvature (for instance, Nelli and Rosenberg~\cite{NR} constructed entire minimal graphs in $\mathbb{H}^2\times\mathbb{R}$ with arbitrary continuous asymptotic values in the ideal boundary, whose duals are entire spacelike maximal graphs in $\mathbb{H}^2\times\mathbb{R}_1$), though completeness is not guaranteed, see Remark~\ref{rmk:completeness}. On the other hand, ideal Scherk graphs with constant mean curvature $0\leq H<\frac{1}{2}$ in $\mathbb{H}^2\times\mathbb{R}$ and $\widetilde{\mathrm{Sl}}_2(\mathbb{R})$ constructed as solutions to certain Jenkins-Serrin problems produce dual graphs exhibiting the features in item 1 of Theorem~\ref{thm:dual-complete}. These ideal Scherk graphs are conformally the plane, see~\cite{MN}.
\begin{remark}\label{rmk:daniel} It is also worth mentioning here the relation with Daniel's isometric correspondence for surfaces in $\mathbb{E}(\kappa,\tau)$-spaces~\cite{Dan}. Given $\kappa_1,\tau_1,H_1\in\mathbb{R}$, consider new parameters $\kappa_2,\tau_2,H_2\in\mathbb{R}$ such that \[\kappa_1-4\tau_1^2=\kappa_2-4\tau_2^2,\qquad \tau_1^2+H_1^2=\tau_2^2+H_2^2.\] Given a simply connected Riemannian surface $\Sigma$, there is a correspondence between isometric immersions of $\Sigma$ with constant mean curvature $H_1$ in $\mathbb{E}(\kappa_1,\tau_1)$ and isometric immersions of $\Sigma$ with constant mean curvature $H_2$ in $\mathbb{E}(\kappa_2,\tau_2)$.
Since Daniel correspondence preserves the angle function, it also preserves locally the graphical condition (not globally in general), so we can connect both correspondences and get that there is a correspondence between four families of conformal immersions of a simply-connected Riemann surface $\Sigma$: \begin{enumerate}
\item[(a)] Conformal immersions $X:\Sigma\to\mathbb{E}(\kappa_1,\tau_1)$ with constant mean curvature $H_1$ and nowhere vertical.
\item[(b)] Conformal immersions $X':\Sigma\to\mathbb{E}(\kappa_2,\tau_2)$ with constant mean curvature $H_2$ and nowhere vertical.
\item[(c)] Conformal spacelike immersions $\widetilde X:\Sigma\to\mathbb{L}(\kappa_1,H_1)$ with constant mean curvature $H_1$.
\item[(d)] Conformal spacelike immersions $\widetilde X':\Sigma\to\mathbb{L}(\kappa_2,H_2)$ with constant mean curvature $H_2$. \end{enumerate} If we call $g$ the pullback metric by $X$ in $\Sigma$, then $X'$ also induces the same metric on $\Sigma$, and both $\widetilde X$ and $\widetilde X'$ induce the metric $\nu^2 g$ on $\Sigma$, so the correspondence between the above items (c) and (d) is also isometric, giving a Lorentzian analogue of Daniel correspondence, see also Palmer's approach~\cite{Palmer}. \end{remark}
The moral of Remark~\ref{rmk:daniel} is that the conformal theory of constant mean curvature surfaces in the cases (a), (b), (c) and (d) is essentially the same, and yields a beautiful correspondence for holomorphic quadratic differentials and harmonic maps in quite different geometric contexts, e.g., the following harmonic maps are related via this 4-sided correspondence (up to conformal diffeomorphisms): \begin{itemize}
\item The classical Gauss map for surfaces with mean curvature $\frac{1}{2}$ in $\mathbb{L}^3$, which takes values in the hyperbolic plane in a natural way.
\item The classical hyperbolic Gauss map for maximal surfaces in the anti-de Sitter space $\mathbb{H}^3_1(-\frac{1}{2})$, see~\cite{LH,SLee} and the references therein.
\item The left-invariant Gauss map of minimal surfaces in $\mathrm{Nil}_3(\frac{1}{2})$, see~\cite{Dan2}.
\item The hyperbolic Gauss map of mean curvature $\frac{1}{2}$ surfaces in $\mathbb{H}^2\times\mathbb{R}$, see~\cite{FM}. \end{itemize} More information about these Gauss maps can be found in~\cite{DFM}. On the other hand, the holomorphic quadratic differentials associated with these harmonic maps also coincide, namely the Hopf differential of constant mean curvature in $\mathbb{L}^3$ or $\mathbb{H}_1^3(\kappa)$, and the Abresch-Rosenberg differential~\cite{AR} of minimal surfaces in $\mathrm{Nil}_3(\tau)$ or mean curvature $\frac{1}{2}\sqrt{-\kappa}$ surfaces in $\mathbb{H}^2(\kappa)\times\mathbb{R}$. In~\cite{DFM} the whole family of harmonic Gauss maps of critical mean curvature surfaces in $\mathbb{E}(\kappa,\tau)$ is studied, though this becomes transparent via the duality since all of them correspond to the classical Gauss map in $\mathbb{L}^3$ or the hyperbolic Gauss map in $\mathbb{H}_1^3(\kappa)$.
\section{Estimates for entire minimal graphs in Heisenberg space}
The main goal of this section is to illustrate how we can translate a Lorentzian property into the Riemannian setting, namely we will obtain gradient estimates for entire minimal graphs in $\mathrm{Nil}_3(\tau)$ following the arguments in~\cite{MN}.
Cheng and Yau~\cite{CY2} proved that the support function $\Phi(x,y,z)=x^2+y^2-z^2$ satisfies the following gradient estimate in a complete spacelike surface in $\mathbb{L}^3$: \begin{equation}\label{eqn:cy}
\|\widehat\nabla\Phi\|^2\leq C(1+\Phi)^2\leq C(1+r^2)^2, \end{equation} for some constant $C>0$, where $\widehat\nabla$ denotes the gradient in the surface and $r=(x^2+y^2)^{1/2}$ is the distance to the origin in the base surface $\mathbb{R}^2$. Since the surface is an entire graph by Lemma~\ref{lemma:nonexistence}, we can parameterize it as $F_v$ for some $v\in C^\infty(\mathbb{R}^2)$. The gradient in~\eqref{eqn:cy} can be worked out as the tangent part of the ambient gradient $\overline\nabla\Phi$, so we deduce that \begin{equation}\label{eqn:L3-estimate1}
\|\widehat\nabla\Phi\|^2=\|\overline\nabla\Phi\|^2+\langle\overline\nabla\Phi,N\rangle^2\geq \langle\overline\nabla\Phi,N\rangle^2=\frac{4(v-xv_x-yv_y)^2}{1-|\nabla v|^2}, \end{equation}
where $N=(1-|\nabla v|^2)^{-1/2}(v_x\partial_x+v_y\partial_y+\partial_z)$ is the upward-pointing unit normal to the surface, and $\nabla v$ is the usual gradient in $\mathbb{R}^2$.
\begin{lemma}\label{lemma:gradient-estimate-L3} Let $F_v$ be an entire spacelike graph in $\mathbb{L}^3$ with positive constant mean curvature. Then there is a constant $A>0$ such that
\[|\nabla v|^2\leq 1-\frac{A}{(1+r^2)^2}.\] \end{lemma}
\begin{proof} In view of~\eqref{eqn:cy} and~\eqref{eqn:L3-estimate1}, it will be enough to show that there is a constant $M>0$ such that $(v-xv_x-yv_y)^2\geq M$ whenever $x^2+y^2>1$. First, we can apply a translation in $\mathbb{L}^3$ so that the origin lies in the surface but no straight line through the origin is contained in the surface. If this were not possible, the surface would be ruled, and hence a hyperbolic cylinder, see~\cite{DVVW}, i.e., up to an isometry of $\mathbb{L}^3$ the surface would be given by $v(x,y)=\frac{1}{2H}\sqrt{1+4H^2x^2}$, and the statement follows. In order to estimate $w=v-xv_x-yv_y$, we will use the fact that the surface is the boundary of a convex set, as proved by Treibergs~\cite{Treibergs}. Hence, up to a mirror reflection with respect to $z=0$, we can also assume that $v$ is a convex function.
Observe that the intersection of the $z$-axis and the tangent line to the surface at $(x,y,v)$ in the direction of the tangent vector $(x,y,xv_x+yv_y)$ is precisely the point $(0,0,w)$. Since the surface is convex and does not contain a line through the origin, we get that \begin{equation}\label{eqn:L3-estimate2} w(x,y)\leq w\left(\frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}}\right)<0,\qquad\quad \text{if }x^2+y^2\geq 1. \end{equation} Since $w$ is continuous and the unit circle is compact, the existence of the desired constant $M$ follows from equation~\eqref{eqn:L3-estimate2}. \end{proof}
\begin{theorem}\label{last-area-estimate} Let $F_u$ be an entire minimal graph in $\mathrm{Nil}_3(\tau)$. \begin{itemize}
\item[(a)] There exists a constant $B>0$ such that $|Gu|\leq B(1+r^2)$;
\item[(b)] There exists a constant $C>0$ such that $|u|\leq C(1+r^2)^{3/2}$. \end{itemize} \end{theorem}
\begin{proof}
The dual graph $F_v$ is an entire spacelike graph in $\mathbb{L}^3$ with constant mean curvature $\tau$. Since these surfaces have reciprocal angle functions, from Lemma~\ref{lemma:gradient-estimate-L3} we get that there exists $A>0$ such that $1+|Gu|^2=(1-|\nabla v|^2)^{-1}\leq A^{-1}(1+r^2)^2$, whence $|Gu|\leq A^{-1/2}(1+r^2)$ and item (a) follows by taking $B=A^{-1/2}$.
Applying the triangle inequality to the expression $\nabla u=Gu+Z$, where $Z=-\tau y\partial_x+\tau x\partial_y$, we get that $|\nabla u|\leq |Gu|+|Z|\leq B(1+r^2)+|\tau| r$. Hence $|\nabla u|$ grows at most quadratically in $r$, from which it is easy to see that there exists a constant $C>0$ satisfying item (b). \end{proof}
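The last step in the proof can be made explicit: integrating $|\nabla u|$ along the radial segment from the origin to $(x,y)$, with $r^2=x^2+y^2$, we get \[|u(x,y)|\leq|u(0,0)|+\int_0^r\bigl(B(1+s^2)+|\tau| s\bigr)\,\mathrm{d} s=|u(0,0)|+B\Bigl(r+\frac{r^3}{3}\Bigr)+\frac{|\tau| r^2}{2},\] and the right-hand side is bounded by $C(1+r^2)^{3/2}$ for a suitable constant $C>0$.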
Let $F_u$ in $\mathbb{E}(\kappa,\tau)$ and $F_v$ in $\mathbb{L}(\kappa,\tau)$ be dual graphs over a domain $\Omega\subset\mathbb{M}^2(\kappa)$. It follows from~\eqref{eqn:1ff} that the absolute value of the determinant of the Jacobian of $\pi:\mathbb{E}(\kappa,\tau)\to\mathbb{M}^2(\kappa)$ or $\widetilde\pi:\mathbb{L}(\kappa,\tau)\to\mathbb{M}^2(\kappa)$ restricted to $F_u$ or $F_v$ coincides with the angle function $\nu$ or $\widetilde\nu$. Using the change of variables formula, if $f$ is a positive measurable function on $\Omega$, we get that \begin{equation}\label{eqn:integration} \int_{F_u} (f\circ\pi)\nu=\int_{\Omega}f=\int_{F_v}(f\circ\widetilde\pi)\widetilde\nu. \end{equation} In particular, the area of $\Omega$ is equal to the integral of $\nu$ or $\widetilde\nu$ over the surface. In the particular case of minimal graphs in $\mathrm{Nil}_3(\tau)$, this implies that $\nu$ is not integrable since $\mathbb{R}^2$ has infinite area. The estimates given by Theorem~\ref{last-area-estimate} show that $\nu$ is not square integrable either.
\begin{corollary}~ \begin{enumerate}
\item[(a)] The angle function of an entire minimal graph in $\mathrm{Nil}_3(\tau)$ is not square-integrable.
\item[(b)] An entire graph with positive constant mean curvature in $\mathbb{L}^3$ has infinite area. \end{enumerate} \end{corollary}
\begin{proof}
Plugging $f\circ\pi=\nu$ in Equation~\eqref{eqn:integration}, it will be enough to show that $\nu$ is not integrable in $\Omega$. By means of Theorem~\ref{last-area-estimate}, we get that $\nu^{-1}=(1+|Gu|^2)^{1/2}\leq 1+|Gu|\leq B'(1+r^2)$ for some constant $B'>0$. Integrating in polar coordinates in a disk $D_R\subset\mathbb{R}^2$ of radius $R$ centered at the origin, we get that
\[\int_{D_R}\nu=\int_{D_R}\frac{1}{\sqrt{1+|Gu|^2}}\geq\int_0^R\frac{2\pi r\,\mathrm{d} r}{B'(1+r^2)}.\] Since the last integral diverges, we are done. \end{proof}
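The divergence invoked at the end of the proof can be checked numerically: taking the (hypothetical) normalization $B'=1$, the lower bound $\int_0^R 2\pi r\,\mathrm{d}r/(B'(1+r^2))$ has antiderivative $(\pi/B')\log(1+R^2)$, which is unbounded in $R$. A minimal sketch using only the standard library:

```python
import math

def lower_bound(R, B=1.0, n=100000):
    # Trapezoid rule for the integral of 2*pi*r / (B*(1 + r^2)) over [0, R].
    h = R / n
    f = lambda r: 2.0 * math.pi * r / (B * (1.0 + r * r))
    return h * (0.5 * (f(0.0) + f(R)) + sum(f(i * h) for i in range(1, n)))

def closed_form(R, B=1.0):
    # Antiderivative evaluated at R: (pi/B) * log(1 + R^2), unbounded as R grows.
    return (math.pi / B) * math.log(1.0 + R * R)

err = abs(lower_bound(100.0) - closed_form(100.0))
```

The logarithmic growth of the closed form is what makes the integral diverge as $R\to\infty$.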
\section{Other results}
\subsection{Behaviour with respect to isometries}
Lee showed that translating and rotating a surface in $\mathrm{Nil}_3(\tau)$ corresponds to translating and rotating the dual surface in $\mathbb{L}^3$, respectively~\cite{Lee}. Here we will present this result from a more abstract point of view that applies to all $\mathbb{E}(\kappa,\tau)$-spaces (and also works in the Killing submersion setting). Problems related to the congruence of dual surfaces have already been treated in the literature, see~\cite{AL,KL}.
Let $\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)$ be the group of direct isometries of $\mathbb{E}(\kappa,\tau)$ preserving the unit Killing vector field $\xi$ and the orientation. It is worth pointing out that, if $\kappa-4\tau^2\neq 0$, then any isometry $T$ of $\mathbb{E}(\kappa,\tau)$ satisfies $T_*\xi=\pm\xi$, whereas any isometry is direct if $\tau\neq 0$. The group $\mathrm{Iso}_+(\mathbb{L}(\kappa,\tau),\xi)$ is defined likewise. We are not considering the case $T_*\xi=-\xi$ due to the fact that such isometries change the sign of the mean curvature of the graphs.
For any $T\in\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)$, there is a direct isometry $h\in\mathrm{Iso}_+(\mathbb{M}^2(\kappa))$ such that the following diagram is commutative \[\begin{tikzcd}
\mathbb{E}(\kappa,\tau)
\arrow[rr, rightarrow, "T"]&&
\mathbb{E}(\kappa,\tau)
\arrow[d, rightarrow,"\pi"]\\
\mathbb{M}^2(\kappa)
\arrow[u, leftarrow, "\pi"]
\arrow[rr, rightarrow, "h"]&&
\mathbb{M}^2(\kappa) \end{tikzcd}\] This association is a group morphism $\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)\to\mathrm{Iso}_+(\mathbb{M}^2(\kappa))$ whose kernel is the subgroup of vertical translations. Conversely, given $h\in\mathrm{Iso}_+(\mathbb{M}^2(\kappa))$, there is $T\in\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)$ making the diagram commutative, and such a $T$ is unique up to vertical translations. The proof of these results can be found in~\cite{Man}, and similar arguments work in the Lorentzian setting. Therefore we can define a bijective map \[R:\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)\to\mathrm{Iso}_+(\mathbb{L}(\kappa,\tau),\xi),\] such that, for each $T\in\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)$, the image $R(T)$ is the only isometry in $\mathrm{Iso}_+(\mathbb{L}(\kappa,\tau),\xi)$ that projects to the same isometry in $\mathrm{Iso}_+(\mathbb{M}^2(\kappa))$ as $T$, and such that $R(T)(0,0,0)=T(0,0,0)$.
\begin{proposition}\label{prop:transform} Let $X:\Sigma\to\mathbb{E}(\kappa,\tau)$ and $\widetilde X:\Sigma\to\mathbb{L}(\kappa,H)$ be dual conformal immersions, and let $T\in\mathrm{Iso}_+(\mathbb{E}(\kappa,\tau),\xi)$. Then $T\circ X$ and $R(T)\circ\widetilde X$ are dual conformal immersions. \end{proposition}
\begin{proof} Let $h\in\mathrm{Iso}_+(\mathbb{M}^2(\kappa))$ be the isometry to which both $T$ and $R(T)$ project in the base surface. Since the computation can be localized, let us assume that $X$ and $\widetilde X$ are given as graphs $F_u$ and $F_v$ over a simply connected domain $\Omega\subset\mathbb{M}^2(\kappa)$. Let $F_{\overline u}$ and $F_{\overline v}$ be the graphs over $h(\Omega)$ associated to $T\circ F_u$ and $R(T)\circ F_v$, respectively. From Equation~\eqref{eqn:N}, where $N$ is a unit normal to $F_u$, and the fact that $T_*N$ is a unit normal to $F_{\overline u}$, the condition $\pi\circ T=h\circ\pi$ allows us to work out \begin{equation}\label{eqn:transform1}
\frac{G\overline{u}}{\sqrt{1+\|G\overline u\|^2}}=\pi_*(T_*N)=h_*(\pi_*N)=\frac{h_*G{u}}{\sqrt{1+\|Gu\|^2}}. \end{equation}
Taking squared norms in~\eqref{eqn:transform1}, we easily reach $\|Gu\|=\|G\overline u\|$ via the isometry $h$. We deduce that $G\overline u=h_*Gu$, and likewise we can check that $\widetilde G\overline v=h_*\widetilde Gv$. Therefore it suffices to apply $h_*$ to Equation~\eqref{thm:duality:eqn3} to realize that $F_{\overline u}$ and $F_{\overline v}$ also satisfy the same twin relations, so they are dual graphs. \end{proof}
Nonetheless, not all isometries preserve the Killing vector field, and there are cases where this situation becomes especially interesting. When the surface has critical constant mean curvature, the dual surface lies in a space form, whose isometry group has dimension $6$ (two extra dimensions of isometries not preserving the Killing direction). This leads to non-trivial $2$-parameter deformations of critical mean curvature surfaces. More precisely: \begin{enumerate}
\item There is a 2-parameter deformation of constant mean curvature $H$ surfaces in $\mathbb{E}(\kappa,\tau)$ with $\kappa+4H^2=0$. It corresponds to hyperbolic and parabolic rotations in $\mathbb{L}^3$ or in the anti-de Sitter space $\mathbb{H}^3_1(\kappa)$.
\item There is a 2-parameter deformation of spacelike constant mean curvature $H$ surfaces in $\mathbb{L}(\kappa,\tau)$ with $\kappa-4H^2=0$. It corresponds to rotations with respect to non-vertical axes in $\mathbb{R}^3$ or in the round sphere $\mathbb{S}^3(\kappa)$.
\end{enumerate} Proposition~\ref{prop:transform} shows that the above items 1 and 2 reflect all possible non-trivial actions of isometries.
\subsection{Hessian equations}
Here we will present a classical way of constructing solutions to the Hessian-one equation $f_{xx}f_{yy}-f_{xy}^2=1$ in the Euclidean plane by means of solutions to the minimal surface equation in $\mathbb{R}^3$ (or, equivalently, solutions to the maximal surface equation in $\mathbb{L}^3$). The technique we will explain below was pointed out by Calabi~\cite{Calabi}, though the same ideas had already been used before. For instance, the same arguments were applied by Osserman~\cite{Osserman} to classify all entire minimal 2-dimensional graphs of the form $(x, y, f(x,y), g(x,y))$ in $\mathbb{R}^4$, see~\cite{Lee3}. Also Nitsche~\cite{Nitsche}, following Heinz's idea, solved the classical Bernstein problem in $\mathbb{R}^3$ by means of entire solutions to the Hessian-one equation.
\begin{lemma}\label{lemma:identities} Let $F_u$ be a graph in $\mathbb{E}(\kappa,\tau)$ with mean curvature $H$, not necessarily constant. Then: \begin{align*}
\frac{1}{\lambda_\kappa^2}\left[\left(\frac{\lambda_\kappa^2(1+\alpha^2)}{\omega}\right)_y-\left(\frac{\lambda_\kappa^2\alpha\beta}{\omega}\right)_x\right]&=\frac{2\tau\alpha}{\omega}-2H\beta+\frac{(\lambda_\kappa)_y}{\lambda_\kappa}\frac{1+\omega^2}{\omega},\\
\frac{1}{\lambda_\kappa^2}\left[\left(\frac{\lambda_\kappa^2(1+\beta^2)}{\omega}\right)_x-\left(\frac{\lambda_\kappa^2\alpha\beta}{\omega}\right)_y\right]&=-\frac{2\tau\beta}{\omega}-2H\alpha+\frac{(\lambda_\kappa)_x}{\lambda_\kappa}\frac{1+\omega^2}{\omega}. \end{align*} \end{lemma}
\begin{proof} It is a long but straightforward computation to check that the following formulas hold true (it suffices to do the derivatives in the \textsc{lhs} in terms of $u$ and gather the resulting terms, taking into account that the second-order terms must be collected into $H$): \begin{equation}\label{lema:Hessian:eqn1} \begin{aligned}
\left(\frac{1+\alpha^2}{\omega}\right)_y-\left(\frac{\alpha\beta}{\omega}\right)_x&=\frac{2\tau\alpha}{\omega}-2H\beta-\kappa\,\frac{\lambda_\kappa}{\omega}\left(x\alpha\beta-\tfrac{y}{2}(\alpha^2-\beta^2)\right),\\
\left(\frac{1+\beta^2}{\omega}\right)_x-\left(\frac{\alpha\beta}{\omega}\right)_y&=-\frac{2\tau\beta}{\omega}-2H\alpha-\kappa\,\frac{\lambda_\kappa}{\omega}\left(\tfrac{x}{2}(\alpha^2-\beta^2)+y\alpha\beta\right). \end{aligned} \end{equation} Since the derivatives of $\lambda_\kappa$ satisfy $(\lambda_\kappa)_x=-\frac{\kappa}{2}x\lambda_\kappa^2$ and $(\lambda_\kappa)_y=-\frac{\kappa}{2}y\lambda_\kappa^2$, we can use them to eliminate $x$ and $y$ in~\eqref{lema:Hessian:eqn1} and reach the following two identities: \begin{align*}
\left(\frac{1+\alpha^2}{\omega}\right)_y-\left(\frac{\alpha\beta}{\omega}\right)_x&=\frac{2\tau\alpha}{\omega}-2H\beta+2\frac{(\lambda_\kappa)_x}{\lambda_\kappa}\frac{\alpha\beta}{\omega}-\frac{(\lambda_\kappa)_y}{\lambda_\kappa}\frac{\alpha^2-\beta^2}{\omega},\\
\left(\frac{1+\beta^2}{\omega}\right)_x-\left(\frac{\alpha\beta}{\omega}\right)_y&=-\frac{2\tau\beta}{\omega}-2H\alpha +\frac{(\lambda_\kappa)_x}{\lambda_\kappa}\frac{\alpha^2-\beta^2}{\omega}+2\frac{(\lambda_\kappa)_y}{\lambda_\kappa}\frac{\alpha\beta}{\omega}. \end{align*} The identities in the statement follow from grouping the different derivatives. \end{proof}
The formulas in the statement may seem artificial, but their \textsc{lhs} are related to $\mathrm{div}(Je_1)$ and $\mathrm{div}(Je_2)$, where $\{e_1,e_2\}$ is the frame given by Equation~\eqref{eqn:tangent-frame}, the divergence is computed in $\mathbb{M}^2(\kappa)$, and $J$ is a $\frac{\pi}{2}$-rotation in the tangent bundle.
If $\kappa=\tau=H=0$, then $\lambda_\kappa\equiv 1$ and the aforementioned formulas have been obtained in the literature by means of different techniques, see~\cite{Osserman,Lee4}. From Lemma~\ref{lemma:identities} we get that \begin{equation}\label{eqn:Hessian1} \begin{aligned}
\left(\frac{1+\alpha^2}{\omega}\right)_y-\left(\frac{\alpha\beta}{\omega}\right)_x&=0,\\
\left(\frac{1+\beta^2}{\omega}\right)_x-\left(\frac{\alpha\beta}{\omega}\right)_y&=0. \end{aligned} \end{equation} If $u$ is defined on a simply-connected domain $\Omega\subseteq\mathbb{R}^2$, then Poincar\'{e}'s Lemma guarantees the existence of $g,h\in C^\infty(\Omega)$ such that~\eqref{eqn:Hessian1} gives \begin{equation}\label{eqn:Hessian2} \frac{1+\alpha^2}{\omega}=g_x,\quad \frac{\alpha\beta}{\omega}=g_y,\quad\frac{1+\beta^2}{\omega}=h_y,\quad\frac{\alpha\beta}{\omega}=h_x. \end{equation} Now the second and fourth equations in~\eqref{eqn:Hessian2} imply that $g_y=h_x$, so again Poincar\'{e}'s Lemma yields the existence of $f\in C^\infty(\Omega)$ such that $g=f_x$ and $h=f_y$. Thus, the identities in~\eqref{eqn:Hessian2} can be rewritten as \begin{align*} f_{xx}&=\frac{1+\alpha^2}{\omega},& f_{xy}&=\frac{\alpha\beta}{\omega},& f_{yy}&=\frac{1+\beta^2}{\omega}, \end{align*} and hence the function $f$ satisfies \[f_{xx}f_{yy}-f_{xy}^2=\frac{1+\alpha^2}{\omega}\cdot\frac{1+\beta^2}{\omega}-\frac{\alpha^2\beta^2}{\omega^2}=1.\]
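The closing identity is purely algebraic: once the second derivatives of $f$ are expressed through $\alpha$, $\beta$ and $\omega$, the determinant equals $\bigl((1+\alpha^2)(1+\beta^2)-\alpha^2\beta^2\bigr)/\omega^2=1$ whenever $\omega^2=1+\alpha^2+\beta^2$. A quick numerical check of this algebraic step (taking $\omega^2=1+\alpha^2+\beta^2$, as the final computation requires):

```python
import math
import random

random.seed(0)

def hessian_det(alpha, beta):
    # Build f_xx, f_xy, f_yy from alpha, beta with omega = sqrt(1 + alpha^2 + beta^2)
    # and return the determinant f_xx * f_yy - f_xy^2.
    omega = math.sqrt(1.0 + alpha * alpha + beta * beta)
    f_xx = (1.0 + alpha * alpha) / omega
    f_yy = (1.0 + beta * beta) / omega
    f_xy = alpha * beta / omega
    return f_xx * f_yy - f_xy * f_xy

# The determinant should equal 1 for any alpha, beta, up to rounding error.
max_dev = max(
    abs(hessian_det(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)) - 1.0)
    for _ in range(1000)
)
```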
\begin{remark} There are further relations that connect these equations with others that will not be explored here. As a sample, any solution of $f_{xx} f_{yy} - f_{xy}^2 =1$ satisfies the property that the gradient map $(x,y)\mapsto (x,y,f_x,f_y)$ is a parametrization of a minimal surface in $\mathbb{R}^4$, or equivalently, a special Lagrangian surface in $\mathbb{C}^2$ or a holomorphic curve in $\mathbb{C}^2$ for some complex structure, see~\cite{Lee3}. \end{remark}
\section{Open questions}
In this last section, we will pose three questions that arose in the above discussions, and that could be of interest in a further development of twin correspondences.
\begin{question}\label{q1} Let $M$ be a Riemannian surface, and let $H\in C^\infty(M)$ be an arbitrary function (with $\int_MH=0$ if $M$ is compact). If $\pi:\mathbb{L}\to M$ is a Lorentzian Killing submersion over $M$ whose fibers have infinite length, is there an entire graph in $\mathbb{L}$ with prescribed mean curvature $H$? \end{question}
This question is equivalent (if $M$ is simply connected) to that of finding entire minimal graphs in a Riemannian Killing submersion over $M$ with prescribed bundle curvature, see~\cite{LeeMan}, and the author conjectures that the solution to Question~\ref{q1} is affirmative. This is the case provided that the base surface is a sphere (even if the Killing vector field is non-unitary), see~\cite{LerMan}. Nonetheless, this question remains unsolved in the case of the Lorentz-Minkowski space $\mathbb{L}^3$. The condition of fibers with infinite length ensures that the submersion admits entire sections.
\begin{question}\label{q2} Calabi associated solutions of the minimal surface equation in $\mathbb{R}^3$ with solutions of the Hessian-one equation. Is it possible to generalize this idea to associate solutions of the constant mean curvature equation in $\mathbb{E}(\kappa,\tau)$ with solutions to another natural non-linear \textsc{pde}? \end{question}
\begin{question}\label{q3} Let $\Sigma$ be a constant mean curvature surface in $\mathbb{R}^3$ which is a graph over a plane with zero boundary values and intersecting the plane orthogonally, and let us consider the dual maximal surface in $\mathrm{Nil}_3^1(\tau)$. Is it always possible to extend this surface as a zero mean curvature surface (not spacelike) beyond the boundary of the domain? \end{question}
In the cases we have been able to work out (see Examples~\ref{example1} and~\ref{example2}), it seems that the dual of such a surface extends beyond the lightlike boundary, so this property could be understood as a dual Schwarz reflection principle.
\end{document} |
\begin{document}
\title{Analysis of Transition State Theory Rates upon Spatial Coarse-Graining
\thanks{AB acknowledges support from the Department of Defense (DoD) through
the National Defense Science \& Engineering Graduate Fellowship (NDSEG) Program.
ML was supported in part by the NSF PIRE Grant OISE-0967140, NSF Grant
1310835, AFOSR Award FA9550-12-1-0187, and ARO MURI Award W911NF-14-1-0247.
Work at Los Alamos National Laboratory (LANL) was supported by the
United States Department of Energy, Office of Basic Energy Sciences,
Materials Sciences and Engineering Division. LANL is operated by
Los Alamos National Security, LLC, for the National Nuclear Security
Administration of the U.S. DOE under Contract No. DE-AC52-06NA25396.
During a visit to LANL, AB also received partial support from the
Center for Nonlinear Studies (CNLS) through the Laboratory Directed
Research and Development Program, which paid for mathematical development
in this work.
}}
\maketitle
\begin{abstract} Spatial multiscale methods have established themselves as useful tools for extending the length scales accessible by conventional statics (i.e., zero temperature molecular dynamics). Recently, extensions of these methods, such as the finite-temperature quasicontinuum (hot-QC) or Coarse-Grained Molecular Dynamics (CGMD) methods, have allowed for multiscale molecular dynamics simulations at finite temperature. Here, we assess the quality of the long-time dynamics these methods generate by considering canonical transition rates. Specifically, we analyze the transition state theory (TST) rates in CGMD and compare them to the corresponding TST rate of the fully atomistic system. The ability of such an approach to reliably reproduce the TST rate is verified through a relative error analysis, which is then used to highlight the major contributions to the error and guide the choice of degrees of freedom. Finally, our analytical results are compared with numerical simulations for the case of a 1-D chain. \end{abstract}
\section{Introduction}
Molecular dynamics (MD) --- the direct integration of atomistic equations of motion --- provides a powerful tool for the study of chemical and material processes. Such an approach accurately captures the physics at the atomic scale and, in principle, enables the accurate modeling of a wide range of atomistic systems. However, despite the high speed of modern computers, MD simulations still struggle to access the wildly disparate length and time scales required in many applications. To partially overcome this difficulty, multiscale methods that bridge the length-scales from the nano- to the meso-scales have been proposed. While such methods are certainly promising, relatively little is known of the effect of spatial coarse-graining on the quality of the dynamics. Improving our understanding of these issues is necessary in order to expand upon the range of problems that can be modeled using such an approach.
We concern ourselves here with spatial multiscale processes for which the critical atomistic behaviors are localized yet strongly coupled to the environment through long-range elastic effects. Probably the most well known numerical method to treat such systems is the quasicontinuum (QC) method. Specifically, the QC method aims to solve molecular statics (i.e., molecular dynamics at zero temperature) problems in such cases \cite{EBTadmor:1996,BlancLeBrisLegoll2005,acta.atc,bqce12,bqcf.cmame}. In the QC method, the localized region of interest is treated atomistically in order to preserve a high degree of accuracy, while the behavior of the remainder of the system is approximated using continuum mechanics. This coupling between the length scales is meant to allow for an elastic coupling of the two regions, ensuring proper boundary conditions for the atomistic region. The number of degrees of freedom necessary to describe the system is significantly reduced through the use of the Cauchy-Born approximation and a coarsening of the continuum region via the finite element method (FEM). This greatly reduces its computational cost compared to a fully atomistic solution.
Recently, finite temperature versions of the quasicontinuum method, so-called hot-QC methods \cite{LMDupuy:2005,EBTadmor:2013}, have been developed in order to extend the QC approach to finite-temperature molecular dynamics. Hot-QC was designed to simulate systems held at a constant temperature, which permits an analysis from a thermodynamic perspective. Mathematical approaches to finite temperature equilibrium and dynamics have been given in \cite{parisfinitetemp10,PPMAK:13,KPS,Blanc201384,0951-7715-23-9-006}. Hot-QC aims at preserving any thermodynamic quantity that depends only on a (small) subset of all degrees of freedom. It has recently been pointed out that this property implies that transition state theory (TST) rates between metastable states of the system should be well reproduced insofar as the system's constituents that are essential to the transitions are approximately localized within the fully-resolved atomistic region. This property has been exploited in an extension of these methods --- the hyper-QC method \cite{hyperqc} --- which seeks to efficiently and accurately simulate state-to-state dynamics of spatially coarse-grained rare-event systems through the use of accelerated molecular dynamics \cite{PUSAV2009}.
In this paper, we seek to better understand the error in transition rates introduced by coarse-graining the periphery of the system. In order to isolate this error, we consider the coarse-graining of an atomistic system according to the coarse-grained molecular dynamics (CGMD) formalism described in \cite{PhysRevB.72.144104}. However, we note that our choice of coarse variables differs from that of conventional CGMD, as will be discussed below. CGMD and hot-QC share the same formal basis, but CGMD provides a closed-form expression for the coarse Hamiltonian, which enables a mathematical analysis. Further, it naturally handles the interface between the region to be treated with atomistic detail and the remainder of the system, in contrast to QC methods where so-called ghost forces pose additional challenges \cite{acta.atc}.
In order to obtain closed-form results, we will consider transition rates computed within the purview of harmonic transition state theory (HTST) \cite{GHVineyard:1957}. HTST is often the method of choice to approximate transition rates in hard materials. Our choice for the dividing surface between the two metastable regions and how the dividing surface is affected by the coarse-graining will also be discussed. The error analysis for the TST rate will serve as confirmation of the validity of the approach and provide intuition for the types of error made in the coarsening process for spatial multiscale methods.
The paper is organized as follows: First, we define a coarse-grained energy in terms of the atomistic energy to be used in the thermodynamic calculations. Second, we discuss and analyze the HTST rates in atomistic and coarse-grained systems and derive the relative error in rates due to coarse-graining in terms of eigenvalues of the respective Hamiltonians. We then discuss how these eigenvalues are affected by coarse-graining. Following that, we provide numerical results exhibiting the approximations to the HTST rate made by various coarse-graining schemes for a 1D system and illustrate the major sources of error in these computations. We specifically investigate the impact of the choice of degrees of freedom. Finally, we conclude with general remarks.
\section{The Coarse-Grained Energy}
Consider a system of $N$ particles in $d$ dimensions held at a fixed temperature $T$.
Let $\mathbf{q} \in \mathbb{R}^{dN}$ and $\mathbf{p} \in \mathbb{R}^{dN}$ denote the position and momentum vectors of the particles respectively. When necessary, we will denote the position and momentum vectors of individual particles by $\mathbf{q}_{i}$ and $\mathbf{p}_{i}$ for $1 \leq i \leq N$. For this paper, we will make use of mass-weighted coordinates for the position and momentum vectors; that is, we will consider $\mathbf{\tilde{q}}_{i} = \sqrt{m_{i}}\,\mathbf{q}_{i}$, where $m_{i}$ is the mass of the $i$-th particle, together with the conjugate momenta $\mathbf{\tilde{p}}_{i} = \mathbf{p}_{i}/\sqrt{m_{i}}$. However, we will dispense with the tilde notation and still use $\mathbf{q}$ and $\mathbf{p}$ to denote the mass-weighted coordinates for position and momentum respectively. The total energy, or Hamiltonian, of the system will be given by $\mathcal{H}(\mathbf{q},\mathbf{p})$. We assume that the Hamiltonian is separable; that is, we assume that the Hamiltonian may be written as a sum of the kinetic and potential energies of the system:
\begin{equation*}
\mathcal{H}(\mathbf{q},\mathbf{p}) = \mathcal{V}(\mathbf{q}) + \mathcal{K}(\mathbf{p}), \end{equation*}
where $\mathcal{V}(\mathbf{q})$ denotes the potential energy and $\mathcal{K}(\mathbf{p})$ denotes the kinetic energy.
The total kinetic energy $\mathcal{K}(\mathbf{p})$ is given by
\begin{equation*}
\mathcal{K}(\mathbf{p}) =
\sum_{i=1}^{N}\frac{1}{2}\|\mathbf{p}_{i}\|^{2}, \end{equation*}
as usual.
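The effect of the mass-weighting can be illustrated with a small numerical sketch (a hypothetical 1-D toy, not from the paper): rescaling the momenta by $1/\sqrt{m_{i}}$ turns the ordinary kinetic energy $\sum_{i}\|\mathbf{p}_{i}\|^{2}/(2m_{i})$ into the unit-mass form used above.

```python
import math
import random

random.seed(1)

# Hypothetical toy data: five 1-D particles with random masses and momenta.
N = 5
m = [random.uniform(0.5, 4.0) for _ in range(N)]
p = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Kinetic energy in ordinary coordinates: sum_i p_i^2 / (2 m_i).
K = sum(pi * pi / (2.0 * mi) for pi, mi in zip(p, m))

# Mass-weighted momenta p_i / sqrt(m_i) give the unit-mass form sum_i p_i^2 / 2.
p_mw = [pi / math.sqrt(mi) for pi, mi in zip(p, m)]
K_mw = sum(pi * pi / 2.0 for pi in p_mw)
```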
In order to coarse-grain the system, we will partition the particles into representative atoms and constrained atoms. The representative atoms are the subset of atoms which will be fully resolved in the coarsened system while the constrained atoms are those atoms which will have their degrees of freedom removed in the coarse-graining procedure. We will denote the partitioning of the position and momentum vectors for the entire system into representative and constrained components in the following manner:
\begin{equation}\label{Eq:Order}
\mathbf{q}
=
(\mathbf{q}^{r}, \mathbf{q}^{c}),
\quad
\mathbf{p} = (\mathbf{p}^{r}, \mathbf{p}^{c}), \end{equation}
where the superscripts $r$ and $c$ indicate the representative and constrained components, respectively. These superscripts will be used throughout the paper to signal that a given quantity pertains to the representative atoms or the constrained atoms. For example, we will let $N^{r}$ and $N^{c}$ denote the number of representative and constrained atoms, respectively. Of course, we must then have $N = N^{r} + N^{c}$. Throughout the paper, we will often simply refer to the representative atoms as repatoms. Figure \ref{fig:SampleMesh} shows a sample partitioning of particles in a 2D system.
\begin{figure}
\caption{An example of a partitioning of particles in a 2D system. The
filled in circles represent the representative atoms or repatoms
while the empty circles represent the constrained atoms.}
\label{fig:SampleMesh}
\end{figure}
Note that this is only one of the possible ways to coarsen the variables. For other choices, the following derivations remain valid, as the same block structure of the resolved basis can be recovered after a simple change of variables.
As we are interested in transitions from one metastable region to another, it will be useful to consider system properties restricted to a given metastable region. Assume that our system initially resides in a metastable region which we will label as $A$ and that we are interested in transitions to an adjacent metastable region which we will label $B$. Let $\Omega_{A}$ denote the set of positions for realizable configurations within the metastable region $A$ for the system. In addition, let $\Omega_{A}^{r}$ denote the set of the positions of the repatoms in these realizable configurations within $A$; that is, let
\begin{equation}\label{Eq:CoarseDomain} \Omega_{A}^{r} := \left\{\mathbf{q}^{r} \in \mathbb{R}^{dN^{r}} :
(\mathbf{q}^{r}, \mathbf{q}^{c}) \in \Omega_{A} \; \text{for some} \;
\mathbf{q}^{c} \in \mathbb{R}^{dN^{c}} \right\}. \end{equation}
We also define $\Omega^{c}_{A}(\mathbf{q}^{r})$ to be the set of constrained atom positions $\mathbf{q}^{c} \in \mathbb{R}^{dN^{c}}$ such that $(\mathbf{q}^{r},\mathbf{q}^{c}) \in \Omega_{A}$.
With these newly-defined sets, we define a potential of mean force that we will take as the potential energy for the coarsened system, as is done in the CGMD, hot-QC, and hyper-QC methods \cite{PhysRevB.72.144104,LMDupuy:2005,EBTadmor:2013,hyperqc}: {
\begin{equation*}
\mathcal{V}^{\textup{\text{cg}}}(\mathbf{q}^{r}, \beta)
:=
-\frac{1}{\beta}\log
\left(
\int_{\Omega^{c}_{A}(\mathbf{q}^{r})}
e^{-\beta\mathcal{V}(\mathbf{q}^{r},\mathbf{q}^{c})}
d\mathbf{q}^{c}
\right), \end{equation*}
where $\beta := (k_{B}T)^{-1}$, $k_{B}$ is Boltzmann's constant, and $T$ is the temperature of the system. It is important to note that the coarse-grained energy is dependent upon the temperature of the system. As mentioned earlier, note that we deviate here from the traditional CGMD method in the choice of the coarse variables: in the original formulation, the coarse variables are defined in terms of finite element shape functions, in contrast to the degrees of freedom of repatoms \cite{PhysRevB.72.144104}.
This definition of the coarse potential is motivated by the fact that it preserves thermodynamic properties that are a function of only repatom degrees of freedom. Further, this choice also implies the following equivalence of partition functions for the original (fully atomistic) and coarse-grained systems:
\begin{equation}\label{Eq:PartitionFunctionIdentity}
Z_{\mathcal{V}}
:=
\int_{\Omega_{A}}e^{-\beta\mathcal{V}(\mathbf{q})}d\mathbf{q}
=
\int_{\Omega^{r}_{A}}e^{-\beta\mathcal{V}^{\textup{\text{cg}}}(\mathbf{q}^{r},\beta)}d\mathbf{ q}^{r}
=:
Z^{\textup{\text{cg}}}_{\mathcal{V}}, \end{equation}
where $Z_{\mathcal{V}}$ and $Z^{\textup{\text{cg}}}_{\mathcal{V}}$ are the elements of the total partition functions pertaining to the potential energy for the original and coarsened systems, respectively.
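The identity~\eqref{Eq:PartitionFunctionIdentity} can be verified explicitly on a toy model. The sketch below (hypothetical couplings $a$, $b$, $c$, not from the paper; one representative and one constrained coordinate coupled through a positive-definite quadratic potential) evaluates the potential of mean force by quadrature, compares it with the closed-form Gaussian result, and checks that the two configurational partition functions agree.

```python
import math

# Hypothetical toy couplings; positive definite since a * c > b^2.
beta, a, b, c = 2.0, 3.0, 0.8, 1.5

def V(qr, qc):
    return 0.5 * (a * qr * qr + c * qc * qc) + b * qr * qc

def quad(f, lo, hi, n=4000):
    # Composite trapezoid rule.
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

def V_cg(qr):
    # Potential of mean force: -(1/beta) log of the integral over the constrained coordinate.
    return -math.log(quad(lambda qc: math.exp(-beta * V(qr, qc)), -12.0, 12.0)) / beta

def V_cg_exact(qr):
    # Closed-form Gaussian integral over qc.
    return 0.5 * (a - b * b / c) * qr * qr - math.log(2.0 * math.pi / (beta * c)) / (2.0 * beta)

dev = max(abs(V_cg(qr) - V_cg_exact(qr)) for qr in (-1.0, 0.0, 0.5, 2.0))

# Equality of the configurational partition functions Z_V and Z_V^cg.
Z_full = quad(lambda qr: quad(lambda qc: math.exp(-beta * V(qr, qc)), -12.0, 12.0, 400),
              -12.0, 12.0, 400)
Z_cg = quad(lambda qr: math.exp(-beta * V_cg_exact(qr)), -12.0, 12.0)
```

For this quadratic model, integrating out $\mathbf{q}^{c}$ softens the effective spring constant from $a$ to $a-b^{2}/c$ and shifts the energy by a configuration-independent constant, which cancels in no ratio-free quantity but leaves the partition function unchanged.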
We similarly define the coarse-grained kinetic energy to be an effective kinetic energy, which may be computed analytically:
\begin{equation*}
\mathcal{K}^{\textup{\text{cg}}}(\mathbf{p}^{r},\beta)
=
-\frac{1}{\beta}\log
\left(
\int
e^{-\beta\mathcal{K}(\mathbf{p}^{r},\mathbf{p}^{c})}
d\mathbf{p}^{c}
\right)
=
\sum_{i=1}^{N^{r}}
\frac{1}{2}\|\mathbf{p}_{i}^{r}\|^{2}
-
\frac{d}{2\beta}\sum_{i=1}^{N^{c}}\log
\left(\frac{2\pi}{\beta}\right). \end{equation*}
Again, this choice gives consistent thermodynamics for quantities involving only repatom degrees of freedom, and it also yields equal kinetic energy partition functions for the original and coarsened systems.
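The closed form above follows from a Gaussian momentum integral: each constrained momentum component contributes the constant $-\frac{1}{2\beta}\log(2\pi/\beta)$ to the coarse kinetic energy. A one-component numerical check (with a hypothetical value of $\beta$):

```python
import math

beta = 1.7  # hypothetical inverse temperature

def quad(f, lo, hi, n=4000):
    # Composite trapezoid rule.
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

# Contribution of one constrained momentum component to the coarse kinetic energy.
numeric = -math.log(quad(lambda p: math.exp(-beta * p * p / 2.0), -15.0, 15.0)) / beta
closed = -math.log(2.0 * math.pi / beta) / (2.0 * beta)
```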
With the kinetic energy thus defined, the total energy or Hamiltonian of the coarse-grained system is defined to be the sum of the two coarse-grained energies: $\mathcal{H}^{\textup{\text{cg}}}(\mathbf{q}^{r},\mathbf{p}^{r},\beta) := \mathcal{V}^{\textup{\text{cg}}}(\mathbf{q}^{r},\beta) + \mathcal{K}^{\textup{\text{cg}}}(\mathbf{p}^{r},\beta)$. This Hamiltonian can then be used to carry out molecular dynamics simulations.
\section{Transition State Theory (TST) Rate}
We are interested in estimating the rate at which a system residing in the metastable region $A$ crosses over into the metastable region $B$. True transition rates are in general difficult to compute directly. A common approximation to the true transition rate is given by transition state theory. In TST, one assumes that once the system crosses the (hyper-)surface between states $A$ and $B$ --- the so-called dividing surface --- it will thermalize in state $B$; i.e., it assumes that the trajectory will not rapidly cross back to $A$ or escape to another state $C$ before losing its memory in $B$. This assumption is almost never exactly realized, but nevertheless, it is often an excellent approximation. With these assumptions, we may define the TST rate for the fully atomistic system to be the equilibrium flux across the dividing surface $\Gamma_{AB}$. In the canonical ensemble (NVT), the rate becomes:
\begin{equation*} R^{\text{TST}}_{A \to B}
= \frac{\frac{1}{2}
\int\int_{\Gamma_{AB}}|\mathbf{p}\cdot\mathbf{n}|
e^{-\beta\mathcal{H}(\mathbf{q},\mathbf{p})}
dSd\mathbf{p}
}
{\int\int_{\Omega_{A}}
e^{-\beta\mathcal{H}(\mathbf{q},\mathbf{p})}
d\mathbf{q}d\mathbf{p}
}
=
\frac{1}{2}\sqrt{\frac{2}{\pi\beta}}
\frac{\int_{\Gamma_{AB}}
e^{-\beta\mathcal{V}_{s}(\mathbf{q})}
dS}
{\int_{\Omega_{A}}
e^{-\beta\mathcal{V}(\mathbf{q})}
d\mathbf{q}
}, \end{equation*}
where $\mathbf{n}$ is the vector normal to the dividing surface and $dS$ indicates that the integration with respect to position is taken over the surface $\Gamma_{AB}$. As we will be treating the potential energy at the dividing surface separately from the potential energy in state $A$, for clarity, we denote the potential energy at the dividing surface using $\mathcal{V}_{s}$. This notation will be used throughout the rest of the paper. The remaining variables are as they have been defined in previous sections. Note that in the above equation we are able to integrate the momentum portion of the integral as our dividing surface is taken to be a hyperplane and the momentum integral is carried out over $\mathbb{R}^{dN}$ \cite{TSTVoter}. Let us define the partition function
\begin{equation*}
Z^{\neq}_{\mathcal{V}}
:=
\int_{\Gamma_{AB}}e^{-\beta\mathcal{V}_{s}(\mathbf{q})}dS
\quad \text{and recall that} \quad
Z_{\mathcal{V}}
=
\int_{\Omega_{A}}e^{-\beta\mathcal{V}(\mathbf{q})}d\mathbf{q} \end{equation*}
so that we may write the TST rate in the following way:
\begin{equation}\label{Eq:TSTRate}
R^{\text{TST}}_{A \to B}
=
\frac{1}{2}\sqrt{\frac{2}{\pi\beta}}
\frac{Z^{\neq}_{\mathcal{V}}}{Z_{\mathcal{V}}}. \end{equation}
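The prefactor in~\eqref{Eq:TSTRate} is exactly the reduced momentum integral mentioned above: for a hyperplane dividing surface, $\frac{1}{2}\int_{\mathbb{R}}|p_{n}|e^{-\beta p_{n}^{2}/2}\,\mathrm{d}p_{n}\,\big/\int_{\mathbb{R}}e^{-\beta p_{n}^{2}/2}\,\mathrm{d}p_{n}=\frac{1}{2}\sqrt{2/(\pi\beta)}$. A sketch verifying this for a hypothetical value of $\beta$:

```python
import math

beta = 0.9  # hypothetical inverse temperature

def quad(f, lo, hi, n=40000):
    # Composite trapezoid rule.
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

# One-sided momentum flux through the dividing surface over the in-plane normalization.
num = 0.5 * quad(lambda p: abs(p) * math.exp(-beta * p * p / 2.0), -20.0, 20.0)
den = quad(lambda p: math.exp(-beta * p * p / 2.0), -20.0, 20.0)
ratio = num / den
prefactor = 0.5 * math.sqrt(2.0 / (math.pi * beta))
```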
Analogously, we define the TST rate in the coarse-grained system as:
\begin{equation*}
R^{\text{cg}}_{A \to B}
=
\frac{
\frac{1}{2}
\int\int_{\Gamma^{\text{cg}}_{AB}}
|\mathbf{p}\cdot\mathbf{n}|
e^{-\beta\mathcal{H}^{\text{cg}}(\mathbf{q}^{r},\mathbf{p}^{r},\beta)}
dS^{r}d\mathbf{p}^{r}
}
{
\int\int_{\Omega^{r}_{A}}
e^{-\beta\mathcal{H}^{\text{cg}}(\mathbf{q}^{r},\mathbf{p}^{r},\beta)}
d\mathbf{q}^{r}d\mathbf{p}^{r}
}
=
\frac{1}{2}\sqrt{\frac{2}{\pi\beta}}
\frac{
\int_{\Gamma^{\text{cg}}_{AB}}
e^{-\beta\mathcal{V}^{\text{cg}}_{s}(\mathbf{q}^{r},\beta)}
dS^{r}
}
{
\int_{\Omega^{r}_{A}}
e^{-\beta\mathcal{V}^{\text{cg}}(\mathbf{q}^{r},\beta)}
d\mathbf{q}^{r}
}, \end{equation*}
where $\mathbf{n}$ is the vector normal to the coarse-grained dividing surface, and $dS^{r}$ indicates that the integral for the position of the atoms is taken over the corresponding coarse-grained dividing surface. The superscript $r$ serves as a reminder that this integral involves integrating over only the repatom positions. Again, as we will be treating the potential energy at the dividing surface separately from the potential energy in $\Omega^{r}_{A}$, for clarity, we denote the potential energy at the dividing surface using $\mathcal{V}^{\text{cg}}_{s}$. Let us define
\begin{equation*}
Z^{\textup{\text{cg}},\neq}_{\mathcal{V}}
:=
\int_{\Gamma^{\text{cg}}_{AB}}
e^{-\beta\mathcal{V}^{\text{cg}}_{s}(\mathbf{q}^{r},\beta)}
dS^{r}
\quad \text{and recall that} \quad
Z^{\text{cg}}_{\mathcal{V}}
=
\int_{\Omega^{r}_{A}}
e^{-\beta\mathcal{V}^{\text{cg}}(\mathbf{q}^{r},\beta)}
d\mathbf{q}^{r} \end{equation*}
so that we may write the coarse-grained TST rate as
\begin{equation}\label{Eq:CGTSTRate}
R^{\text{cg}}_{A \to B}
=
\frac{1}{2}\sqrt{\frac{2}{\pi\beta}}
\frac{Z^{\text{cg},\neq}_{\mathcal{V}}}{Z^{\text{cg}}_{\mathcal{V}}}. \end{equation}
While formally simple, the TST approximation to the transition rate usually does not allow for closed-form results because the partition function integrals cannot be carried out for general potentials. This difficulty is compounded by the need to integrate along a potentially complex dividing surface. These two challenges can be addressed through the so-called harmonic approximation to TST (HTST) \cite{GHVineyard:1957}. HTST introduces two additional assumptions: i) the kinetic bottleneck for the transition corresponds to crossing an energy barrier (culminating at a first-order saddle point) that stands between $A$ and $B$, and ii) for the purpose of the calculation of the partition functions, the potential can locally be expanded to second order. These properties will be used to give an explicit definition of a dividing surface and to analytically compute the partition functions entering into the rate expression using this surface. The HTST assumptions are particularly appropriate when states correspond to basins of attraction of a single minimum on the potential energy surface (which is often the case for solid-state kinetics) and when the temperature is sufficiently low.
Consider the saddle point $\mathbf{q}_{\text{s}}$ connecting the states $A$ and $B$. The potential energy around $\mathbf{q}_{\text{s}}$ is then approximated as:
\begin{equation}\label{Eq:Saddle}
\mathcal{V}_{s}(\mathbf{q})
:=
\mathcal{V}(\mathbf{q}_{\text{s}})
+
\frac{1}{2}{\mathbf{u}}\cdot{\mathbf{D}}^{\textup{\text{at}}}{\mathbf{u}},
\qquad
{\mathbf{u}}
:=
\mathbf{q} - \mathbf{q}_{s}, \end{equation}
where ${\mathbf{u}}$ is the vector displacement of all of the atoms from their positions at the saddle point, and ${\mathbf{D}}^{\textup{\text{at}}}$ is the Hessian matrix of $\mathcal{V}$ evaluated at the saddle point. Explicitly,
\begin{equation*}
{\mathbf{D}}^{\textup{\text{at}}}_{ij} :=
\frac{\partial^{2}\mathcal{V}}
{\partial \mathbf{q}_{i}\partial\mathbf{q}_{j}}(\mathbf{q}_{\text{s}}),
\quad\quad 1 \leq i,j \leq N. \end{equation*}
Note that the above approximation for the potential energy at the saddle point will be used in conjunction with the other HTST assumption in the computation of \eqref{Eq:TSTRate}, specifically the computations involving the dividing surface. A corresponding expansion could be carried out around the potential energy minimum $\mathbf{q}_{\text{m}}$ in state $A$ in terms of a different Hessian matrix $\mathbf{D}^{\textup{\text{at}}}_{\mathrm{m}}$, but this is not necessary with our formulation of the problem.
At a first-order saddle point, the Hessian has one negative eigenvalue while the rest are positive (assuming the absence of free translations or rotations). This offers a natural definition of the dividing surface as the hyperplane that passes through $\mathbf{q}_{s}$ and whose normal vector is the unstable eigenmode of the system's dynamical matrix at this point. For the remainder of the paper, $\Gamma_{AB}$ will be used to denote the dividing surface defined by these conditions. This choice conveniently allows for the explicit calculation of the saddle point partition function as will be shown below.
We will similarly provide an explicit definition for the dividing surface $\Gamma^{\textup{\text{cg}}}_{AB}$ in the coarsened phase space. This requires that we first determine the appropriate saddle point in the coarse-grained phase space and its corresponding dynamical matrix. As we are projecting the fully atomistic system into a repatom subspace of this system, we might expect $\mathbf{q}^{r}_{s}$, the repatom components of the saddle point, to be the transition state in the coarse-grained space. We are interested, then, in verifying whether this is actually the case and computing the associated coarse-grained dynamical matrix.
To begin the derivation of the coarse-grained saddle point, consider the harmonic approximation of \eqref{Eq:Saddle}. By ordering the position variable $\mathbf{q} = (\mathbf{q}^{r}, \mathbf{q}^{c})$ following \eqref{Eq:Order}, the Hessian matrix has the block-form structure
\begin{equation}\label{Eq:AtDynamicalMatrix} \begin{split} \mathbf{D}^{\textup{\text{at}}} :=
\begin{pmatrix}
\mathbf{R} & \mathbf{B} \\
\mathbf{B}^{\textup{\text{T}}} & \mathbf{C}
\end{pmatrix}, \end{split}\end{equation}
where
\begin{equation*}\begin{split}
\mathbf{R}_{ij} = \frac{\partial^{2}\mathcal{V}_{s}}
{\partial\mathbf{q}^{r}_{i}\partial \mathbf{q}^{r}_{j}}(\mathbf{q}_{\text{s}}),
\;\; 1 \leq i,j \leq N^{r};
\quad
\mathbf{C}_{k\ell} = \frac{\partial^{2}\mathcal{V}_{s}}
{\partial\mathbf{q}^{c}_{k}\partial\mathbf{q}^{c}_{\ell}}
(\mathbf{q}_{\text{s}}), \;\; 1 \leq k,\ell \leq N^{c};\\
\mathbf{B}_{mn} = \frac{\partial^{2}\mathcal{V}_{s}}
{\partial\mathbf{q}^{r}_{m}\partial \mathbf{q}^{c}_{n}}(\mathbf{q}_{\text{s}}),
\quad 1 \leq m \leq N^{r}, 1 \leq n \leq N^{c}. \end{split}\end{equation*}
Now, we may define a coarse-grained energy near the saddle point with the domain given by the subspace in \eqref{Eq:CoarseDomain}:
\begin{equation}\label{Eq:CGFreeEnergy}
\mathcal{V}^{\textup{\text{cg}}}_{s}(\mathbf{q}^{r},\beta) :=
-\frac{1}{\beta}\log\left(
\int_{\mathbb{R}^{dN^{c}}}
e^{-\beta\mathcal{V}_{s}(\mathbf{q}^{r},\mathbf{q}^{c})}d\mathbf{q}^{c}
\right). \end{equation}
The integration in the above definition is taken over all of $\mathbb{R}^{dN^{c}}$ rather than over $\Omega_{A}^{c}(\mathbf{q}^{r})$ so as to allow for a closed-form expression; this is consistent with the HTST assumption that the potential energy is treated to second order. We may compute the integral directly using \eqref{Eq:Saddle}:
\begin{equation}\label{Eq:CGDividingSurfaceEnergy} \begin{split}
\mathcal{V}^{\textup{\text{cg}}}_{s}(\mathbf{q}^{r},\beta)
&=
\mathcal{V}(\mathbf{q}_{s}) - \frac{1}{\beta}
\log
\left(
\int
e^{-\frac{\beta}{2}\mathbf{u}\cdot\mathbf{D}^{\textup{\text{at}}}\mathbf{u}}
d\mathbf{q}^{c}
\right)
\\ &=
\mathcal{V}(\mathbf{q}_{s}) -\frac{1}{\beta}
\log
\left(
\int
e^{-\frac{\beta}{2}
\left(
(\mathbf{u}^{c} - \mathbf{u}^{c}_{\text{min}})
\cdot \mathbf{C}
(\mathbf{u}^{c} - \mathbf{u}^{c}_{\text{min}})
+
\mathbf{u}^{r} \cdot \mathbf{D}^{\textup{\text{cg}}} \mathbf{u}^{r}
\right)
}
d\mathbf{q}^{c}\right)
\\ &=
\mathcal{V}(\mathbf{q}_{s})
+ \frac{1}{2\beta}
\log
\left(
\frac{\det\mathbf{C}}{(2\pi/\beta)^{dN^{c}}}
\right)
+ \frac{1}{2}
\mathbf{u}^{r} \cdot \mathbf{D}^{\textup{\text{cg}}} \mathbf{u}^{r}, \end{split} \end{equation}
where
\begin{equation}\label{Eq:RelaxedConstrainedAtoms}
\mathbf{u}^{c}_{\text{min}} := -\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{r},
\quad
\mathbf{D}^{\textup{\text{cg}}} := \mathbf{R} - \mathbf{B}\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}, \end{equation}
and we have assumed that the matrix $\mathbf{C}$ is invertible and positive-definite. From this, we can see that $\mathbf{q}^{r}_{s}$ (equivalently, $\mathbf{u}^{r} = \mathbf{0}$) is a saddle point of the coarse-grained system with its corresponding dynamical matrix being
\begin{equation}\label{Eq:CGDynMatrix}
\mathbf{D}^{\textup{\text{cg}}} = \mathbf{R} - \mathbf{B}\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}} \end{equation}
provided that $\mathbf{D}^{\textup{\text{cg}}}$ has both positive and negative eigenvalues. In order for the dividing surface $\Gamma^{\textup{\text{cg}}}_{AB}$ to be well-defined, recall that the matrix $\mathbf{D}^{\textup{\text{cg}}}$ must have exactly one negative eigenvalue and that the rest of its eigenvalues must be positive. We will shortly elaborate on these requirements for $\mathbf{C}$ and $\mathbf{D}^{\textup{\text{cg}}}$ and on the circumstances under which they can be guaranteed to be met. Before that, observe that the relation
\begin{equation}\label{Eq:QuadraticPortionofAtomisticEnergy}
\mathbf{u}\cdot\mathbf{D}^{\textup{\text{at}}}\mathbf{u} =
(\mathbf{u}^{c}- \mathbf{u}^{c}_{\text{min}})\cdot
\mathbf{C}(\mathbf{u}^{c} - \mathbf{u}^{c}_{\text{min}})
+
\mathbf{u}^{r}\cdot\mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{r} \end{equation}
arises from multiplying the displacement vector and dynamical matrix in their partitioned forms from \eqref{Eq:AtDynamicalMatrix} and then completing the square for the constrained components. Writing this portion of the atomistic energy in this form makes it clear that for a given $\mathbf{u}^{r}$, $\mathbf{u}^{c}_{\text{min}}$ gives the energy-minimizing displacements for the constrained atoms, thus motivating the choice of notation for this vector. This quantity will play an important role in the discussion on the error in the coarse-grained approximation of the TST rate. An equivalent derivation can be carried out around the energy minimum. However, as this quantity will not be used in the following, the derivation is omitted.
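The completing-the-square identity \eqref{Eq:QuadraticPortionofAtomisticEnergy} is easy to confirm numerically. The sketch below uses a hypothetical symmetric matrix (shifted diagonally so that the $\mathbf{C}$ block is safely invertible; it is not derived from any particular potential):

```python
import numpy as np

rng = np.random.default_rng(4)
Nr, Nc = 3, 5                      # hypothetical repatom / constrained counts
n = Nr + Nc

# Hypothetical symmetric "Hessian"; the diagonal shift keeps C invertible.
M = rng.standard_normal((n, n))
D_at = (M + M.T) / 2 + 6.0 * np.eye(n)

# Block partition (repatom coordinates first) and Schur complement
R, B, C = D_at[:Nr, :Nr], D_at[:Nr, Nr:], D_at[Nr:, Nr:]
D_cg = R - B @ np.linalg.inv(C) @ B.T

# Energy-minimizing constrained displacement for a given repatom displacement
u_r = rng.standard_normal(Nr)
u_c = rng.standard_normal(Nc)
u = np.concatenate([u_r, u_c])
u_c_min = -np.linalg.inv(C) @ B.T @ u_r

lhs = u @ D_at @ u
rhs = (u_c - u_c_min) @ C @ (u_c - u_c_min) + u_r @ D_cg @ u_r
assert np.isclose(lhs, rhs)
```

Note that the identity holds for any symmetric matrix with an invertible $\mathbf{C}$ block; the saddle-point sign structure is not needed for this step.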
Note that, in order to facilitate the formal analysis, the method we consider here does not exactly correspond to either the CGMD or hot-QC methods. As mentioned earlier, degrees of freedom in CGMD are usually defined in terms of finite element shape functions and not in terms of repatoms. Further, the coarse-grained Hamiltonian is computed only once from
\eqref{Eq:CGFreeEnergy} using a harmonic approximation around a specific value of $\mathbf{q}^{r}$ (usually corresponding to the energy minimum). In the case of hot-QC, additional approximations intervene in the calculation of the coarse-grained Hamiltonian, namely, the harmonic approximation is replaced by a local-harmonic approximation and the integral over the constrained atoms is further approximated using the finite element method and a Cauchy-Born approximation based on a set of nodes (which are distinct from repatoms) placed in the periphery. In the ``static'' variant of hot-QC, the displacement of the nodes is chosen to minimize an approximation of the (free-)energy of the constrained atoms with respect to the instantaneous $\mathbf{q}^{r}$ while in the ``dynamic'' variant, the nodes are allowed to move dynamically in order to reduce the computational cost inherent to the minimization.
Our discussion pertains to a hypothetical method that combines the best of CGMD and hot-QC, i.e., where coarse-graining is carried out {\em exactly} at the harmonic level with respect to the instantaneous $\mathbf{q}^{r}$. At sufficiently low temperature, the error of such a method is therefore dominated by the coarse-graining error and provides a lower bound on the error in an actual CGMD or hot-QC model. A complete analysis of the rate errors in hot-QC would have to consider all of the additional approximations, which is beyond the scope of the current paper. However, in such an analysis, the error contributed solely by the coarse-graining process would exactly correspond to what will be derived below.
Returning to the properties of the matrices $\mathbf{C}$ and $\mathbf{D}^{\textup{\text{cg}}}$, we first note that whether $\mathbf{C}$ is invertible and $\mathbf{D}^{\textup{\text{cg}}}$ has a negative eigenvalue is entirely dependent upon a sensible choice of the repatom region for the problem. Provided that the essential transition behavior is contained within the chosen repatom region, we expect that the matrices $\mathbf{C}$ and $\mathbf{D}^{\textup{\text{cg}}}$ will satisfy these conditions. Assuming this to be the case, it can be shown that $\mathbf{C}$ is also positive definite and that the eigenvalues of $\mathbf{D}^{\textup{\text{cg}}}$ are such that $\Gamma^{\textup{\text{cg}}}_{AB}$ is well defined. We now state without proof a version of Cauchy's Interlacing Theorem to be used in the major theorem of this section proving the preceding statements.
\begin{theorem} [Cauchy's Interlacing Theorem] Let $\mathbf{S}$ be a symmetric $n \times n$ matrix. Define the orthogonal projection matrix $\mathbf{P}$ in block form to be
$\mathbf{P} :=
\begin{pmatrix}
\mathbf{I}_{m} & \mathbf{0} \\
\mathbf{0} & \mathbf{0}
\end{pmatrix}, $
where $\mathbf{I}_{m}$ is an $m \times m$ identity matrix with $m < n$ and the remainder of the blocks are zero matrices of the appropriate dimensions. Let $\mathbf{T}$ denote the upper left $m \times m$ matrix block of $\mathbf{P}^{\textup{\text{T}}}\mathbf{S}\mathbf{P}$:
\begin{equation*}
\mathbf{P}^{\textup{\text{T}}}\mathbf{S}\mathbf{P} =
\begin{pmatrix}
\mathbf{T} & \mathbf{0} \\
\mathbf{0} & \mathbf{0}
\end{pmatrix}, \end{equation*}
where the remainder of the blocks are zero matrices of the appropriate dimensions. If the eigenvalues of $\mathbf{S}$ are $\sigma_{1} \leq \sigma_{2} \leq \cdots \leq \sigma_{n}$ and the eigenvalues of $\mathbf{T}$ are $\tau_{1} \leq \tau_{2} \leq \cdots \leq \tau_{m}$, then
\begin{equation}\label{Eq:InterlacingEigenvalues}
\sigma_{j} \leq \tau_{j} \leq \sigma_{n-m+j}, \;\; \text{for} \;\; 1 \leq j \leq m. \end{equation}
\end{theorem}
\begin{proof} See \cite{KatoPerturbation}; the result may also be proved using Courant's Min-Max Theorem. \end{proof}
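The interlacing inequalities \eqref{Eq:InterlacingEigenvalues} can be spot-checked numerically on an arbitrary symmetric matrix and its leading principal block; the dimensions and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 7, 4
M = rng.standard_normal((n, n))
S = (M + M.T) / 2                  # arbitrary symmetric matrix
T = S[:m, :m]                      # leading principal m x m block

sigma = np.sort(np.linalg.eigvalsh(S))
tau = np.sort(np.linalg.eigvalsh(T))

# sigma_j <= tau_j <= sigma_{n-m+j} (1-indexed), allowing for roundoff
for j in range(m):
    assert sigma[j] - 1e-12 <= tau[j] <= sigma[j + n - m] + 1e-12
```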
Now, we prove our claim.
\begin{theorem}\label{Thm:WellBehaved} Let $\mathbf{D}^{\textup{\text{at}}}$ and $\mathbf{D}^{\textup{\text{cg}}}$ be the fully atomistic dynamical matrix and the coarse-grained dynamical matrix as defined in \eqref{Eq:AtDynamicalMatrix} and \eqref{Eq:CGDynMatrix}, respectively. Recall that we assume that $\mathbf{D}^{\textup{\text{at}}}$ has only one negative eigenvalue while the rest are positive. For ease of reference,
\begin{equation}\label{Eq:WellBehavedThmRefEq}
\mathbf{D}^{\textup{\text{at}}} =
\begin{pmatrix}
\mathbf{R} & \mathbf{B} \\
\mathbf{B}^{\textup{\text{T}}} & \mathbf{C}
\end{pmatrix}
\;\; \text{and} \;\;
\mathbf{D}^{\textup{\text{cg}}} = \mathbf{R} - \mathbf{B}\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}. \end{equation}
We also require that the matrix $\mathbf{C}$ be non-singular so that the above definitions make sense. If $\mathbf{D}^{\textup{\text{cg}}}$ has a negative eigenvalue, then the remaining eigenvalues of $\mathbf{D}^{\textup{\text{cg}}}$ are positive and the matrix $\mathbf{C}$ is positive definite. In addition, the negative eigenvalue of $\mathbf{D}^{\textup{\text{cg}}}$ is greater than or equal to that of $\mathbf{D}^{\textup{\text{at}}}$ in absolute value. In symbols,
\begin{equation*}
|\lambda^{\textup{\text{cg}}}| \geq |\lambda^{\textup{\text{at}}}|. \end{equation*}
\end{theorem}
\begin{proof} As the matrix $\mathbf{D}^{\textup{\text{at}}}$ possesses no zero eigenvalue, it is invertible. Since the matrix $\mathbf{C}$ is also invertible, we may use a standard block-matrix determinant identity and \eqref{Eq:WellBehavedThmRefEq} to show the following:
\begin{equation}\label{Eq:BlockDeterminantIdentity}
\det\mathbf{D}^{\textup{\text{at}}} = \det\mathbf{C}\det\mathbf{D}^{\textup{\text{cg}}}. \end{equation}
Thus, the determinant of $\mathbf{D}^{\textup{\text{cg}}}$ is non-zero, so $\mathbf{D}^{\textup{\text{cg}}}$ is non-singular. We may then compute the inverse of $\mathbf{D}^{\textup{\text{at}}}$ in block form which is
\begin{equation*}
(\mathbf{D}^{\textup{\text{at}}})^{-1} =
\begin{pmatrix}
(\mathbf{D}^{\textup{\text{cg}}})^{-1} & -(\mathbf{D}^{\textup{\text{cg}}})^{-1}\mathbf{B}\mathbf{C}^{-1} \\
-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}(\mathbf{D}^{\textup{\text{cg}}})^{-1} & \mathbf{C}^{-1} + \mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}(\mathbf{D}^{\textup{\text{cg}}})^{-1}\mathbf{B}\mathbf{C}^{-1}
\end{pmatrix}. \end{equation*}
Note that, as the eigenvalues of $(\mathbf{D}^{\textup{\text{at}}})^{-1}$ are the multiplicative inverses of the eigenvalues of $\mathbf{D}^{\textup{\text{at}}},$ $(\mathbf{D}^{\textup{\text{at}}})^{-1}$ has one negative eigenvalue with the rest being positive. We now apply Cauchy's Interlacing Theorem to the matrices $(\mathbf{D}^{\textup{\text{at}}})^{-1}$ and $(\mathbf{D}^{\textup{\text{cg}}})^{-1}$ to place bounds on the eigenvalues of $(\mathbf{D}^{\textup{\text{cg}}})^{-1}$. Let $\lambda^{\textup{\text{at}}}$ denote the single negative eigenvalue of $\mathbf{D}^{\textup{\text{at}}}$. It is important to note that $(\lambda^{\textup{\text{at}}})^{-1}$ is less than all of the other eigenvalues of $(\mathbf{D}^{\textup{\text{at}}})^{-1}$ due to its sign. Since we are given that $\mathbf{D}^{\textup{\text{cg}}}$ possesses a negative eigenvalue and thus that $(\mathbf{D}^{\textup{\text{cg}}})^{-1}$ possesses a negative eigenvalue, Cauchy's Interlacing Theorem immediately implies that the remaining eigenvalues of $(\mathbf{D}^{\textup{\text{cg}}})^{-1}$ are positive by \eqref{Eq:InterlacingEigenvalues}. Let $(\lambda^{\textup{\text{cg}}})^{-1}$ denote the single negative eigenvalue of $(\mathbf{D}^{\textup{\text{cg}}})^{-1}$. Cauchy's Interlacing Theorem implies that $(\lambda^{\textup{\text{at}}})^{-1} \leq (\lambda^{\textup{\text{cg}}})^{-1}$. Keeping in mind that these eigenvalues are negative, this inequality implies the result
\begin{equation}\label{Eq:CurvatureComparison}
|\lambda^{\textup{\text{cg}}}| \geq |\lambda^{\textup{\text{at}}}|. \end{equation}
Inverting the eigenvalues of $(\mathbf{D}^{\textup{\text{cg}}})^{-1}$ to arrive at the eigenvalues of $\mathbf{D}^{\textup{\text{cg}}}$ does not change their sign, so we have finished the proof that the spectrum of $\mathbf{D}^{\textup{\text{cg}}}$ has the properties as claimed in the statement of the theorem.
In order to finish the proof, observe that \eqref{Eq:BlockDeterminantIdentity} implies that the determinant of $\mathbf{C}$ is positive, as the determinants of $\mathbf{D}^{\textup{\text{at}}}$ and $\mathbf{D}^{\textup{\text{cg}}}$ are both negative. Cauchy's Interlacing Theorem applied to $\mathbf{D}^{\textup{\text{at}}}$ and $\mathbf{C}$ implies that $\mathbf{C}$ may have at most one negative eigenvalue according to \eqref{Eq:InterlacingEigenvalues}. As having just one negative eigenvalue would force the determinant of $\mathbf{C}$ to be negative and contradict our determinant identity, we must have that all of the eigenvalues of $\mathbf{C}$ are in fact positive. \end{proof}
This theorem proves that $\mathbf{q}^{r}_{s}$ is indeed a saddle point, assuming the repatom region to be appropriately selected. In particular, it shows that no additional transition pathways are introduced in the coarsened system and that if the coarsened system has a transition pathway, it uniquely corresponds to a transition pathway in the fully atomistic system. Most interestingly, this result implies that coarse-graining the system {\em never} decreases the absolute curvature of the actual transition pathway at the barrier in the potential energy surface due to the inequality in the negative eigenvalues of the dynamical matrices. Equivalently, this implies that the magnitude of the imaginary eigenmode frequency for the coarsened system is never less than the magnitude of the imaginary eigenmode frequency in the fully atomistic system. This will have implications for the TST rate in the coarsened system that will be made clear over the course of the analysis of the TST rate approximation in the next section.
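Theorem~\ref{Thm:WellBehaved} can also be illustrated with a small numerical example. The matrix below is hypothetical: a diagonal saddle spectrum with its single negative entry in the repatom block, plus a weak random symmetric coupling chosen small enough that the hypotheses of the theorem demonstrably hold:

```python
import numpy as np

rng = np.random.default_rng(2)
Nr, Nc = 3, 5
n = Nr + Nc

# Hypothetical atomistic dynamical matrix: one negative mode, weak coupling
D0 = np.diag([-1.1, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
M = rng.uniform(-0.05, 0.05, (n, n))
D_at = D0 + (M + M.T) / 2

R, B, C = D_at[:Nr, :Nr], D_at[:Nr, Nr:], D_at[Nr:, Nr:]
D_cg = R - B @ np.linalg.inv(C) @ B.T

w_at = np.linalg.eigvalsh(D_at)    # eigenvalues in ascending order
w_cg = np.linalg.eigvalsh(D_cg)

assert np.sum(w_at < 0) == 1 and np.sum(w_cg < 0) == 1
assert np.all(np.linalg.eigvalsh(C) > 0)          # C is positive definite
assert abs(w_cg[0]) >= abs(w_at[0])               # |lambda_cg| >= |lambda_at|
assert np.isclose(np.linalg.det(D_at),
                  np.linalg.det(C) * np.linalg.det(D_cg))
```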
Finally, we may now provide our explicit definition of the dividing surface $\Gamma^{\textup{\text{cg}}}_{AB}$ in the coarse-grained system assuming that the repatom region was appropriately chosen. We define this surface to be the hyperplane that passes through the saddle point $\mathbf{q}^{r}_{s}$ and has as its normal vector the unstable eigenmode of $\mathbf{D}^{\textup{\text{cg}}}$. The dynamical matrix $\mathbf{D}^{\textup{\text{cg}}}$ takes the form of a repatom Hessian with a correction due to the interaction between the representative and constrained atoms. This correction can easily be understood from a physical point of view after considering the implications of \eqref{Eq:RelaxedConstrainedAtoms} and \eqref{Eq:QuadraticPortionofAtomisticEnergy}. Given a displacement of the repatoms $\mathbf{u}^{r}$, the matrix $-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}$ in the definition of $\mathbf{D}^{\textup{\text{cg}}}$ applied to $\mathbf{u}^{r}$ will give the displacement of the constrained atoms that will yield a minimized energy for the fully atomistic system in the context of HTST. In other words, $-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}$ finds the relaxed constrained atom configuration for the problem. The application of the $\mathbf{B}$ matrix is necessary for determining the force such a configuration would then exert on the repatoms. Thus, the correction to the repatom Hessian is due to the constrained atoms in their relaxed state.
\section{TST Rate Error Analysis}
With this dividing surface now defined, we may analyze the error in the TST rate made by coarse-graining the system. An absolute error analysis is not useful here: most classical transition rates vanish in the zero-temperature limit, so absolute convergence is essentially guaranteed by the exponentiation of the energy in the Gibbs measure. In order to conduct a more meaningful analysis, the relative error in the TST rate will be examined instead. Using the definitions from \eqref{Eq:TSTRate} and \eqref{Eq:CGTSTRate}, the relative error is seen to be
\begin{equation}\label{Eq:RelativeError}
\left|\frac{R^{\text{TST}}_{A \to B} - R^{\text{cg}}_{A \to B}}
{R^{\text{TST}}_{A \to B}}\right|
=
\left|1 -
\frac{Z_{\mathcal{V}}}{Z^{\text{cg}}_{\mathcal{V}}}
\frac{Z^{\text{cg},\neq}_{\mathcal{V}}}{Z^{\neq}_{\mathcal{V}}}
\right|. \end{equation}
This relative error can be computed by calculating the two ratios $Z_{\mathcal{V}}/Z^{\text{cg}}_{\mathcal{V}}$ and $Z^{\text{cg},\neq}_{\mathcal{V}}/Z^{\neq}_{\mathcal{V}}$ separately. The first ratio $Z_{\mathcal{V}}/Z^{\text{cg}}_{\mathcal{V}}$ is trivial: as was shown in \eqref{Eq:PartitionFunctionIdentity}, this ratio is simply $Z_{\mathcal{V}}/Z^{\text{cg}}_{\mathcal{V}} = 1$, by construction.
Turning our attention to the second ratio, let us compute the dividing surface partition function for the coarsened system first. By definition,
\begin{equation*}
Z^{\textup{\text{cg}},\neq}_{\mathcal{V}}
=
\int_{\Gamma_{AB}^{\textup{\text{cg}}}}
e^{-\beta\mathcal{V}^{\textup{\text{cg}}}_{s}(\mathbf{q}^{r},\beta)}
dS^{r} . \end{equation*}
Using the result for $\mathcal{V}^{\textup{\text{cg}}}_{s}$ from \eqref{Eq:CGDividingSurfaceEnergy}, we have that
\begin{equation}\label{Eq:IntermediateCGDivZ}
Z^{\textup{\text{cg}},\neq}_{\mathcal{V}}
=
e^{-\beta\mathcal{V}(\mathbf{q}_{s})}
\sqrt{\frac{(2\pi/\beta)^{dN^{c}}}{\text{det}\,\mathbf{C}}}
\int_{\Gamma_{AB}^{\textup{\text{cg}}}}
e^{-\frac{\beta}{2} \mathbf{u}^{r} \cdot \mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{r}}
d\mathbf{u}^{r}. \end{equation}
Now, $\mathbf{D}^{\textup{\text{cg}}}$ is a real, symmetric matrix.
Let $\lambda^{\textup{\text{cg}}}$ denote the single negative eigenvalue of $\mathbf{D}^{\textup{\text{cg}}}$, and let $\mathbf{v}^{\textup{\text{cg}}}$ denote its corresponding normalized eigenvector. Let $\lambda^{\textup{\text{cg}}}_{i}$ for $2 \leq i \leq dN^{r}$ denote the remaining positive eigenvalues of the matrix, with $\mathbf{v}^{\textup{\text{cg}}}_{i}$ being their associated eigenvectors, which we may choose to be mutually orthonormal. Now, for any $\mathbf{q}^{r} \in \Gamma^{\textup{\text{cg}}}_{AB}$, the displacement $\mathbf{u}^{r} = \mathbf{q}^{r} - \mathbf{q}^{r}_{s} \in \mathbb{R}^{dN^{r}}$ must be orthogonal to $\mathbf{v}^{\textup{\text{cg}}}$ as the normal vector to the dividing surface is parallel to this unstable eigenmode. Thus, the displacement $\mathbf{u}^{r}$ may be written as $\mathbf{u}^{r} = \sum_{i=2}^{dN^{r}}\alpha_{i}\mathbf{v}^{\textup{\text{cg}}}_{i}$ for some real constants $\alpha_{i}$. Hence,
\begin{equation*}
\mathbf{u}^{r} \cdot \mathbf{D}^{\textup{\text{cg}}} \mathbf{u}^{r}
=
\left(\sum_{i=2}^{dN^{r}}\alpha_{i}\mathbf{v}^{\textup{\text{cg}}}_{i}\right)\cdot
\left(\sum_{i=2}^{dN^{r}}\alpha_{i}\mathbf{D}^{\textup{\text{cg}}}\mathbf{v}^{\textup{\text{cg}}}_{i}\right)
=
\sum_{i=2}^{dN^{r}}\alpha^{2}_{i}\lambda^{\textup{\text{cg}}}_{i}. \end{equation*}
All of the eigenvalues in the sum are positive. Therefore,
\begin{equation*} \begin{split}
\int_{\Gamma^{\textup{\text{cg}}}_{AB}}
e^{-\frac{\beta}{2}\mathbf{u}^{r}\cdot\mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{r}}
d\mathbf{u}^{r}
&=
\int_{\mathbb{R}^{dN^{r} - 1}}
e^{-\frac{\beta}{2}\sum_{i=2}^{dN^{r}}\lambda^{\textup{\text{cg}}}_{i}\alpha_{i}^{2}}
d\alpha_{2} \cdots d\alpha_{dN^{r}}
\\ &=
\left(\frac{2\pi}{\beta}\right)^{\frac{dN^{r} - 1}{2}}
\frac{1}{\sqrt{\prod_{i=2}^{dN^{r}}\lambda^{\textup{\text{cg}}}_{i}}}
=
\left(\frac{2\pi}{\beta}\right)^{\frac{dN^{r} - 1}{2}}
\sqrt{\frac{|\lambda^{\textup{\text{cg}}}|}{|\text{det}\,\mathbf{D}^{\textup{\text{cg}}}|}}. \end{split} \end{equation*}
Substituting this result into \eqref{Eq:IntermediateCGDivZ}, we see that
\begin{equation*}
Z^{\textup{\text{cg}},\neq}_{\mathcal{V}}
=
e^{-\beta\mathcal{V}(\mathbf{q}_{s})}
\left(\frac{2\pi}{\beta}\right)^{\frac{dN - 1}{2}}
\sqrt{\frac{|\lambda^{\textup{\text{cg}}}|}
{\text{det}\,\mathbf{C}\,|\text{det}\,\mathbf{D}^{\textup{\text{cg}}}|}}. \end{equation*}
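The hyperplane Gaussian integral used in this computation can be verified by direct quadrature in a small example. The matrix below is a hypothetical $3 \times 3$ symmetric matrix with a single unstable mode, and we take $\beta = 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
beta, n = 1.0, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = Q @ np.diag([-0.8, 1.2, 2.5]) @ Q.T   # one unstable, two stable modes

w, V = np.linalg.eigh(D)                  # ascending: w[0] < 0 is unstable
lam, stable = w[0], V[:, 1:]              # unstable eigenvalue, stable modes

# Riemann sum over the dividing hyperplane spanned by the stable eigenvectors
a = np.linspace(-10.0, 10.0, 801)
h = a[1] - a[0]
A1, A2 = np.meshgrid(a, a, indexing="ij")
u = A1[..., None] * stable[:, 0] + A2[..., None] * stable[:, 1]
integrand = np.exp(-0.5 * beta * np.einsum("...i,ij,...j->...", u, D, u))
numeric = integrand.sum() * h * h

closed = (2 * np.pi / beta) ** ((n - 1) / 2) \
    * np.sqrt(abs(lam) / abs(np.linalg.det(D)))
assert abs(numeric - closed) < 1e-6
```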
A similar computation for $Z^{\neq}_{\mathcal{V}}$ yields
\begin{equation*}
Z^{\neq}_{\mathcal{V}}
=
e^{-\beta\mathcal{V}(\mathbf{q}_{s})}
\left(\frac{2\pi}{\beta}\right)^{\frac{dN-1}{2}}
\sqrt{\frac{|\lambda^{\textup{\text{at}}}|}
{|\text{det}\,\mathbf{D}^{\textup{\text{at}}}|}}, \end{equation*}
where $\lambda^{\textup{\text{at}}}$ is the single negative eigenvalue of $\mathbf{D}^{\textup{\text{at}}}$. With this result and the block-matrix identity from \eqref{Eq:BlockDeterminantIdentity}, the desired ratio is
\begin{equation*}
\frac{Z^{\textup{\text{cg}},\neq}_{\mathcal{V}}}{Z^{\neq}_{\mathcal{V}}}
= \sqrt{\frac{\lambda^{\textup{\text{cg}}}}{\lambda^{\textup{\text{at}}}}}. \end{equation*}
Since $Z_{\mathcal{V}}/Z^{\text{cg}}_{\mathcal{V}} = 1$ and since we proved in Theorem~\ref{Thm:WellBehaved} that $\lambda^{\textup{\text{cg}}} /\lambda^{\textup{\text{at}}}\ge 1,$ the relative error for the TST rate approximation made by the coarsened system can be shown to satisfy
\begin{equation}\label{Eq:RelativeErrorResult} \frac{R^{\text{cg}}_{A \to B}-R^{\text{TST}}_{A \to B}}
{R^{\text{TST}}_{A \to B}}
= \frac{Z^{\text{cg},\neq}_{\mathcal{V}}}
{Z^{\neq}_{\mathcal{V}}}
-1
= \sqrt{\frac{\lambda^{\textup{\text{cg}}}}{\lambda^{\textup{\text{at}}}}} - 1\ge 0. \end{equation}
Thus, the relative error in the TST rate computation is entirely dependent upon the imaginary eigenfrequencies of the two dynamical matrices. In addition, we have that
\begin{equation*}
R^{\textup{\text{cg}}}_{A \to B} \geq R^{\text{TST}}_{A \to B}. \end{equation*}
To better understand this error, we will further investigate the eigenvalue $\lambda^{\textup{\text{cg}}}$.
\begin{remark} The relative error in the TST rate found above has no dependence on temperature. This is a consequence of the harmonic approximation of the potential energy used in the beginning of the analysis. If higher-order terms in the potential energy approximation are included, a temperature dependence in the relative error will result. This dependence on the inverse temperature $\beta$ will be $\mathcal{O}(\beta^{-2})$, so that this additional error term goes to zero in the zero-temperature limit. \end{remark}
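The chain of identities leading to \eqref{Eq:RelativeErrorResult} can also be exercised end-to-end numerically. The matrix below is a hypothetical atomistic dynamical matrix (a diagonal saddle spectrum weakly coupled by a small random symmetric perturbation, so that both $\mathbf{D}^{\textup{\text{at}}}$ and its Schur complement have exactly one negative eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(7)
Nr, Nc = 3, 5
n = Nr + Nc

# Hypothetical atomistic dynamical matrix with exactly one negative eigenvalue
D0 = np.diag([-1.1, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
M = rng.uniform(-0.05, 0.05, (n, n))
D_at = D0 + (M + M.T) / 2

R, B, C = D_at[:Nr, :Nr], D_at[:Nr, Nr:], D_at[Nr:, Nr:]
D_cg = R - B @ np.linalg.inv(C) @ B.T

lam_at = np.linalg.eigvalsh(D_at)[0]   # single negative eigenvalue
lam_cg = np.linalg.eigvalsh(D_cg)[0]

# Ratio of the two closed-form saddle partition functions; the
# temperature-dependent prefactors cancel in this ratio.
ratio_Z = np.sqrt(abs(lam_cg) / (np.linalg.det(C) * abs(np.linalg.det(D_cg)))) \
    / np.sqrt(abs(lam_at) / abs(np.linalg.det(D_at)))

rel_err = np.sqrt(lam_cg / lam_at) - 1.0
assert np.isclose(ratio_Z, 1.0 + rel_err)   # ratio equals sqrt(lam_cg/lam_at)
assert rel_err >= 0.0                       # coarse-graining never underestimates
```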
\section{Coarse-Grained Eigenvalue Analysis}
In the previous section, it was shown that the relative error in the TST rate is entirely dependent upon the negative eigenvalues of the dynamical matrices for the fully atomistic and coarse-grained systems at their respective transition states. To better understand this error, it is important to understand how the negative eigenvalue for the fully atomistic system is affected by the coarsening process. This analysis will provide greater insight into how the coarsened system relates to the original system as well as for which situations the coarse-grained approximation of the TST rate will be most accurate. This insight will be useful in that it will be suggestive of optimal approaches to coarse-graining a given problem.
In this section, we will again let $\mathbf{D}^{\textup{\text{at}}}$ and $\mathbf{D}^{\textup{\text{cg}}}$ represent the dynamical matrices as defined previously for the fully atomistic and coarse-grained systems. We will let $\lambda^{\textup{\text{at}}}$ denote the single negative eigenvalue of $\mathbf{D}^{\textup{\text{at}}}$ while $\mathbf{u}^{\textup{\text{at}}}$ will denote a normalized eigenvector corresponding to $\lambda^{\textup{\text{at}}}$. As before, we will also let $\lambda^{\textup{\text{cg}}}$ denote the sole negative eigenvalue of $\mathbf{D}^{\textup{\text{cg}}}$, and we will let $\mathbf{v}^{\textup{\text{cg}}}$ denote a normalized eigenvector associated with this eigenvalue. The sign of $\mathbf{v}^{\textup{\text{cg}}}$ will be chosen so that $\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}} \geq 0$. Here, $\mathbf{u}^{\textup{\text{at}},r} \in \mathbb{R}^{dN^{r}}$ denotes the vector consisting of only the repatom elements of the fully atomistic unstable eigenmode; note that this vector does not have the same dimension as $\mathbf{u}^{\textup{\text{at}}}$. Such a convention will be used throughout the remainder of the paper when we wish to consider only the repatom or constrained portion of a given variable. To be clear, a superscript $c$ indicates that the vector under consideration is an element of $\mathbb{R}^{dN^{c}}$.
To begin the analysis, let us determine the conditions necessary for $\lambda^{\textup{\text{at}}} = \lambda^{\textup{\text{cg}}}$, which would imply no error in the coarse-grained approximation of the TST rate.
\begin{theorem}\label{Thm:NecessaryConditions} If $\lambda^{\textup{\text{at}}} = \lambda^{\textup{\text{cg}}}$, then
$\mathbf{u}^{\textup{\text{at}},r}/\|\mathbf{u}^{\textup{\text{at}},r}\| = \mathbf{v}^{\textup{\text{cg}}}$. \em In addition, we have that
\begin{equation}\label{Eq:ConstrainedForceDifference}
\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\textup{\text{min}}} - \mathbf{u}^{\textup{\text{at}},c})
=
\mathbf{0}, \end{equation}
where $\mathbf{u}^{\textup{\text{at}},c}_{\textup{\text{min}}} := -\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r}$. \end{theorem}
\begin{proof} Suppose that $\lambda^{\textup{\text{at}}} = \lambda^{\textup{\text{cg}}}$ and that $\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}$ is as defined in the statement of the theorem. From the definition of $\mathbf{D}^{\textup{\text{at}}}$ in \eqref{Eq:AtDynamicalMatrix}, we see that $\mathbf{D}^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}}} = \lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}}}$ implies that \begin{equation*} \mathbf{R}\mathbf{u}^{\textup{\text{at}},r} + \mathbf{B}\mathbf{u}^{\textup{\text{at}},c} = \lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}},r}. \end{equation*}
Using this fact, we have from the definition of $\mathbf{D}^{\textup{\text{cg}}}$ in \eqref{Eq:CGDynMatrix} that
\begin{equation}\label{Eq:CGIdealEigenvector}
\mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{\textup{\text{at}},r}
=
\mathbf{R}\mathbf{u}^{\textup{\text{at}},r} + \mathbf{B}\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}
=
\lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}},r}
+ \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c}). \end{equation}
It will be shown later in the proof that $\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} -
\mathbf{u}^{\textup{\text{at}},c}) \leq 0$ and that $\|\mathbf{u}^{\textup{\text{at}},r}\| \neq 0$. For now, let us assume that these two statements are true. As $\lambda^{\textup{\text{cg}}}$ is the absolute minimum of the quadratic form $\mathbf{v}\cdot\mathbf{D}^{\textup{\text{cg}}}\mathbf{v}$ subject to the constraint
$\|\mathbf{v}\| = 1$, we may use \eqref{Eq:CGIdealEigenvector} and our recent assumptions to show that
\begin{equation}\label{Eq:DirectEigenvalueInequality}
\lambda^{\textup{\text{cg}}}
\leq
\frac{\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{\textup{\text{at}},r}}
{\|\mathbf{u}^{\textup{\text{at}},r}\|^{2}}
= \lambda^{\textup{\text{at}}} +
\frac{\mathbf{u}^{\textup{\text{at}},r}\cdot\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} -
\mathbf{u}^{\textup{\text{at}},c})}
{\|\mathbf{u}^{\textup{\text{at}},r}\|^{2}}
\leq
\lambda^{\textup{\text{at}}}. \end{equation}
Since $\lambda^{\textup{\text{at}}} = \lambda^{\textup{\text{cg}}}$, both inequalities in the above result must hold with equality. Equality in the first implies that $\mathbf{u}^{\textup{\text{at}},r}/\|\mathbf{u}^{\textup{\text{at}},r}\|$ is a normalized eigenvector of
$\mathbf{D}^{\textup{\text{cg}}}$ associated with $\lambda^{\textup{\text{cg}}}$. The choice $\mathbf{v}^{\textup{\text{cg}}} := \mathbf{u}^{\textup{\text{at}},r}/\|\mathbf{u}^{\textup{\text{at}},r}\|$ satisfies $\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}} \geq 0$. Making the appropriate substitutions into \eqref{Eq:CGIdealEigenvector}, we now have that
\begin{equation*}
\mathbf{D}^{\textup{\text{cg}}}\mathbf{v}^{\textup{\text{cg}}} = \lambda^{\textup{\text{cg}}}\mathbf{v}^{\textup{\text{cg}}} +
\frac{\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})}
{\|\mathbf{u}^{\textup{\text{at}},r}\|} \end{equation*}
from which we immediately see that $\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c}) = \mathbf{0}$.
It remains to prove the two claims made earlier. Observe that
\begin{equation}\label{ba}
\mathbf{u}^{\textup{\text{at}},r}\cdot\mathbf{B}\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}
=
-\mathbf{u}^{\textup{\text{at}},r} \cdot
\mathbf{B}\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r}
=
-\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r} \cdot
\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r}. \end{equation}
Now, we may use the definition of $\mathbf{D}^{\textup{\text{at}}}$ and the equation $\mathbf{D}^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}}} = \lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}}}$ to show that $\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r} + \mathbf{C}\mathbf{u}^{\textup{\text{at}},c} = \lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}},c}$. If $\mathbf{u}^{\textup{\text{at}},r}$ were a zero vector, we would arrive at the contradictory conclusion that $\mathbf{C}$ has a negative eigenvalue. As this is not the case, $\mathbf{u}^{\textup{\text{at}},r}$ is not a zero vector so that
$\|\mathbf{u}^{\textup{\text{at}},r}\| \neq 0$. Rearranging the equation arising from the definition of $\mathbf{D}^{\textup{\text{at}}}$, we have that
\begin{equation}\label{aa} \mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r} + (\mathbf{C} - \lambda^{\textup{\text{at}}}\mathbf{I})\mathbf{u}^{\textup{\text{at}},c} = \mathbf{0}, \end{equation}
where $\mathbf{I}$ is the identity matrix of the appropriate dimension. Since $\lambda^{\textup{\text{at}}} < 0$, we have $\mathbf{C} - \lambda^{\textup{\text{at}}}\mathbf{I} = \mathbf{C} + |\lambda^{\textup{\text{at}}}|\mathbf{I}$, so its eigenvalues are the eigenvalues of $\mathbf{C}$ shifted up by $|\lambda^{\textup{\text{at}}}|$, with the same eigenvectors, as we are only adding a scalar multiple of the identity to $\mathbf{C}$. Since $\mathbf{C}$ is positive definite, the shifted eigenvalues remain positive, and $(\mathbf{C} - \lambda^{\textup{\text{at}}}\mathbf{I})$ is therefore invertible. Rearranging \eqref{aa}, we obtain
\begin{equation}\label{bb}
\mathbf{B}\mathbf{u}^{\textup{\text{at}},c}
=
-\mathbf{B}(\mathbf{C} -
\lambda^{\textup{\text{at}}}\mathbf{I})^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r}. \end{equation}
Combining \eqref{ba} with \eqref{bb}, we can obtain
\begin{equation*}
\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} -
\mathbf{u}^{\textup{\text{at}},c})
=
-\mathbf{B}^{\textup{\text{T}}}\mathbf{u}^{\textup{\text{at}},r} \cdot
(\mathbf{C}^{-1} - (\mathbf{C} - \lambda^{\textup{\text{at}}}\mathbf{I})^{-1})\mathbf{B}^{\textup{\text{T}}}
\mathbf{u}^{\textup{\text{at}},r}. \end{equation*}
From the earlier comment regarding the eigenvalues and eigenvectors of $(\mathbf{C} - \lambda^{\textup{\text{at}}}\mathbf{I})$, we see that the matrix $(\mathbf{C}^{-1} - (\mathbf{C} - \lambda^{\textup{\text{at}}}\mathbf{I})^{-1})$ is in fact positive definite. Note that this statement follows from the assumption that $\lambda^{\textup{\text{at}}}$ is strictly negative, but allowing $\lambda^{\textup{\text{at}}} = 0$ does not change the following result as the aforementioned matrix would still be positive semi-definite. In either case,
\begin{equation*}
\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} -
\mathbf{u}^{\textup{\text{at}},c})
\leq 0. \end{equation*}
The claims are proven, so the proof is complete. \end{proof}
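The key inequality $\mathbf{u}^{\textup{\text{at}},r}\cdot\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c}) \leq 0$ is pure linear algebra, so it can be spot-checked numerically. The following Python sketch (not part of the original development; it assumes \texttt{numpy}, illustrative dimensions, random blocks, and a sample negative $\lambda$) solves \eqref{aa} for the constrained component and confirms the sign.

```python
import numpy as np

rng = np.random.default_rng(0)
nr, nc = 4, 6                       # illustrative repatom / constrained dimensions
B = rng.standard_normal((nr, nc))   # coupling block
M = rng.standard_normal((nc, nc))
C = M @ M.T + nc * np.eye(nc)       # positive definite by construction
lam = -0.7                          # a sample strictly negative eigenvalue
u_r = rng.standard_normal(nr)       # repatom part of the eigenvector

# Constrained part from B^T u_r + (C - lam I) u_c = 0, cf. (aa):
u_c = np.linalg.solve(C - lam * np.eye(nc), -B.T @ u_r)
# Energy-minimizing constrained configuration:
u_c_min = np.linalg.solve(C, -B.T @ u_r)

# The inequality proven above: u_r . B(u_c_min - u_c) <= 0
val = u_r @ B @ (u_c_min - u_c)
assert val <= 1e-12
```

The sign follows because $\mathbf{C}^{-1} - (\mathbf{C} - \lambda\mathbf{I})^{-1}$ is positive definite for $\lambda < 0$, exactly as in the proof.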
The converse of the above theorem is true as well; that is, if \eqref{Eq:ConstrainedForceDifference} holds and $\mathbf{u}^{\textup{\text{at}},r}$ is an eigenvector of $\mathbf{D}^{\textup{\text{cg}}}$, then $\lambda^{\textup{\text{at}}} = \lambda^{\textup{\text{cg}}}$. A simple proof of this statement is to substitute the assumed form of $\mathbf{u}^{\textup{\text{at}},r}$ and \eqref{Eq:ConstrainedForceDifference} into \eqref{Eq:CGIdealEigenvector}. In fact, \eqref{Eq:ConstrainedForceDifference} alone is sufficient to prove that the eigenvalues must be identical. The theorem therefore shows that, in order for no error to be made in the coarse-grained approximation of the TST rate, the constrained atoms in the unstable eigenmode must interact with the repatom region as if they were in their relaxed configuration. Note that the result does not necessarily imply that $\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} = \mathbf{u}^{\textup{\text{at}},c}$, as the kernel of $\mathbf{B}$ may be non-trivial. Since $\mathbf{B}$ depends on the range of interactions among the constituents of the system, it is not difficult to construct an example in which this kernel is non-trivial.
One particular case of interest for an exact coarse-grained approximation of the TST rate occurs when the only non-zero components in the unstable eigenmode are those that lie in the chosen repatom region, or equivalently, when $\mathbf{u}^{\textup{\text{at}},c} = \mathbf{0}$. In such a case, the behavior of interest is extremely localized, so we should not be surprised that we do not lose any information by coarse-graining the components of the system that have no involvement in the transition. This implies that the coarse-graining method works well when the unstable eigenmode is localized; that is, we should expect the coarse-graining scheme to be accurate when the repatom region contains the atoms which have the largest contribution to the norm of the unstable eigenmode $\mathbf{u}^{\textup{\text{at}}}$. Accurately approximating the TST rate in such an instance was the primary motivation for the development of this method. It is also interesting to note that \eqref{Eq:DirectEigenvalueInequality} provides another proof that $\lambda^{\textup{\text{cg}}} \leq \lambda^{\textup{\text{at}}}$.
The presence of $\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}$ in the above condition for no error to be made in the coarse-graining approximation can be explained through the following theorem:
\begin{theorem}\label{Thm:Embedding} Let $\mathbf{v} \in \mathbb{R}^{dN^{r}}$ and let
\begin{equation*}
\mathbf{v}_{\textup{\text{min}}} :=
\begin{bmatrix}
\mathbf{v} \\
-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{v}
\end{bmatrix}. \end{equation*}
Then,
\begin{equation*}
\mathbf{D}^{\textup{\text{at}}}\mathbf{v}_{\textup{\text{min}}} =
\begin{bmatrix}
\mathbf{D}^{\textup{\text{cg}}}\mathbf{v} \\
\mathbf{0}
\end{bmatrix}. \end{equation*}
Thus, $\mathbf{v}\cdot\mathbf{D}^{\textup{\text{cg}}}\mathbf{v} = \mathbf{v}_{\textup{\text{min}}}\cdot\mathbf{D}^{\textup{\text{at}}}\mathbf{v}_{\textup{\text{min}}}$. In particular, $\lambda^{\textup{\text{cg}}} = \mathbf{v}^{\textup{\text{cg}}}\cdot\mathbf{D}^{\textup{\text{cg}}}\mathbf{v}^{\textup{\text{cg}}} = \mathbf{v}^{\textup{\text{cg}}}_{\textup{\text{min}}} \cdot \mathbf{D}^{\textup{\text{at}}}\mathbf{v}^{\textup{\text{cg}}}_{\textup{\text{min}}}$, where
\begin{equation*}
\mathbf{v}^{\textup{\text{cg}}}_{\textup{\text{min}}} :=
\begin{bmatrix}
\mathbf{v}^{\textup{\text{cg}}} \\
-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{v}^{\textup{\text{cg}}}
\end{bmatrix}. \end{equation*}
\end{theorem}
\begin{proof} Let the vectors be as defined in the problem statement. Then,
\begin{equation*}
\mathbf{D}^{\textup{\text{at}}}\mathbf{v}_{\text{min}}
=
\begin{bmatrix}
\mathbf{R} & \mathbf{B} \\
\mathbf{B}^{\textup{\text{T}}} & \mathbf{C}
\end{bmatrix}
\begin{bmatrix}
\mathbf{v} \\
-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}\mathbf{v}
\end{bmatrix}
=
\begin{bmatrix}
(\mathbf{R} - \mathbf{B}\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}})\mathbf{v} \\
\mathbf{0}
\end{bmatrix}
=
\begin{bmatrix}
\mathbf{D}^{\textup{\text{cg}}}\mathbf{v} \\
\mathbf{0}
\end{bmatrix}. \end{equation*}
The remaining results are immediate consequences of the above equation. \end{proof}
Recall from \eqref{Eq:RelaxedConstrainedAtoms} that the matrix $-\mathbf{C}^{-1}\mathbf{B}^{\textup{\text{T}}}$ yields the relaxed constrained atom configuration for a given repatom configuration; that is, the matrix finds the displacement vector for the constrained atoms that minimizes the total energy of the configuration. Thus, using the notation in the above theorem, it has been shown that $\mathbf{v} \mapsto \mathbf{v}_{\text{min}}$ is a linear, one-to-one mapping of the coarse-grained phase space into a subspace of the fully atomistic phase space that preserves energy differences between configurations as well as forces. We see then that the coarse-graining method removes the constrained degrees of freedom while still taking into account their presence by always considering the constrained atoms to be in their energy-minimizing state. The vector $\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})$ tells how much the constrained system's actual behavior through the transition state deviates from this energy-minimizing assumption. This difference could also be interpreted as a measure of how well the coarsened system captures the boundary conditions for the repatom region due to the long-range effect of the constrained atoms. Note that this result was a consequence of the form of the dynamical matrix as was discussed at the end of the derivation of the dividing surface for the coarse-grained system. Also, recall that the range of the function $\mathbf{v} \mapsto \mathbf{v}_{\text{min}}$ which came about from the use of this method is extremely similar to the subspace of the fully atomistic phase space utilized in the static version of the hot-QC methods.
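Theorem \ref{Thm:Embedding} is elementary to verify numerically. The following Python sketch (illustrative dimensions and random blocks rather than the chain itself; \texttt{numpy} assumed) assembles a symmetric block matrix, forms the Schur complement as in \eqref{Eq:CGDynMatrix}, and checks both the block identity and the preservation of the quadratic form under $\mathbf{v} \mapsto \mathbf{v}_{\text{min}}$.

```python
import numpy as np

rng = np.random.default_rng(1)
nr, nc = 3, 5                        # illustrative dimensions
# Assemble a symmetric block "dynamical matrix" [[R, B], [B^T, C]] with C invertible.
R0 = rng.standard_normal((nr, nr)); R = R0 + R0.T
B = rng.standard_normal((nr, nc))
M = rng.standard_normal((nc, nc)); C = M @ M.T + nc * np.eye(nc)
D_at = np.block([[R, B], [B.T, C]])
D_cg = R - B @ np.linalg.inv(C) @ B.T        # Schur complement, cf. (Eq:CGDynMatrix)

v = rng.standard_normal(nr)
v_min = np.concatenate([v, -np.linalg.solve(C, B.T @ v)])   # the embedding

top, bottom = (D_at @ v_min)[:nr], (D_at @ v_min)[nr:]
assert np.allclose(top, D_cg @ v)            # first block is D^cg v
assert np.allclose(bottom, 0.0)              # constrained forces vanish
assert np.isclose(v @ D_cg @ v, v_min @ D_at @ v_min)       # quadratic form preserved
```

Because the constrained block of $\mathbf{D}^{\textup{\text{at}}}\mathbf{v}_{\text{min}}$ vanishes, the energy identity follows directly from the dot product of $\mathbf{v}_{\text{min}}$ with this vector.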
We can use elements of the first theorem in this section to derive an expression for the difference between $\lambda^{\textup{\text{at}}}$ and $\lambda^{\textup{\text{cg}}}$:
\begin{theorem}\label{Thm:ErrorBound} In general,
\begin{equation}\label{est}
\lambda^{\textup{\text{cg}}} - \lambda^{\textup{\text{at}}}
=
\frac{\mathbf{v}^{\textup{\text{cg}}} \cdot
\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\textup{min}} - \mathbf{u}^{\textup{\text{at}},c})}
{\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{u}^{\textup{\text{at}},r}}. \end{equation}
\end{theorem}
\begin{proof} Recall from \eqref{Eq:CGIdealEigenvector} that
\begin{equation*}
\mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{\textup{\text{at}},r}
=
\lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}},r}
+ \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c}). \end{equation*}
Taking the dot product of both sides of the equation with $\mathbf{v}^{\textup{\text{cg}}}$, we have that
\begin{equation*}
\lambda^{\textup{\text{cg}}}\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{u}^{\textup{\text{at}},r}
=
\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{\textup{\text{at}},r}
=
\lambda^{\textup{\text{at}}}\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{u}^{\textup{\text{at}},r}
+ \mathbf{v}^{\textup{\text{cg}}} \cdot
\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c}). \end{equation*}
The result \eqref{est} follows if $\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{u}^{\textup{\text{at}},r} \neq 0.$ To show this, note that if $\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{u}^{\textup{\text{at}},r} = 0$, then $\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{D}^{\textup{\text{cg}}}\mathbf{u}^{\textup{\text{at}},r} \geq 0$ as $\mathbf{v}^{\textup{\text{cg}}}$ is the only eigenvector of the symmetric matrix $\mathbf{D}^{\textup{\text{cg}}}$ with a negative eigenvalue. However, by \eqref{Eq:QuadraticPortionofAtomisticEnergy} we have that
\begin{equation*}
\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{D}^{\textup{\text{cg}}} \mathbf{u}^{\textup{\text{at}},r}
\leq
\mathbf{u}^{\textup{\text{at}}} \cdot \mathbf{D}^{\textup{\text{at}}} \mathbf{u}^{\textup{\text{at}}}
=
\lambda^{\textup{\text{at}}} < 0. \end{equation*}
Thus, we have a contradiction, so the claim is proven. \end{proof}
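The identity \eqref{est} is a purely algebraic consequence of the Schur-complement structure, so it holds for any symmetric block matrix with invertible $\mathbf{C}$ and $\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{u}^{\textup{\text{at}},r} \neq 0$. The following Python sketch (random blocks and illustrative dimensions, not the chain; \texttt{numpy} assumed) checks it on the smallest eigenpairs.

```python
import numpy as np

rng = np.random.default_rng(2)
nr, nc = 4, 6                        # illustrative dimensions
R0 = rng.standard_normal((nr, nr)); R = R0 + R0.T
B = rng.standard_normal((nr, nc))
M = rng.standard_normal((nc, nc)); C = M @ M.T + nc * np.eye(nc)
D_at = np.block([[R, B], [B.T, C]])
D_cg = R - B @ np.linalg.inv(C) @ B.T

# Smallest eigenpair of each matrix (eigh returns ascending eigenvalues).
w_at, V_at = np.linalg.eigh(D_at)
lam_at, u_at = w_at[0], V_at[:, 0]
u_r, u_c = u_at[:nr], u_at[nr:]
w_cg, V_cg = np.linalg.eigh(D_cg)
lam_cg, v_cg = w_cg[0], V_cg[:, 0]

u_c_min = -np.linalg.solve(C, B.T @ u_r)     # relaxed constrained configuration
rhs = v_cg @ B @ (u_c_min - u_c) / (v_cg @ u_r)
assert np.isclose(lam_cg - lam_at, rhs)      # the identity (est)
```

The eigenvector sign ambiguity of \texttt{eigh} is harmless here, since $\mathbf{v}^{\textup{\text{cg}}}$ appears in both the numerator and the denominator.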
The above error \eqref{est} can be broken down into three components: namely, the error in the approximation of $\lambda^{\textup{\text{at}}}$ by $\lambda^{\textup{\text{cg}}}$ is affected by the rotation of the dividing surface in the coarse-grained phase space, how much of the essential components of the transition are captured in the repatom region, and how well the unstable eigenmode is approximated by the coarsened system. We have already mentioned that the $\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})$ quantity is a measure of this third error. The dot product of this quantity with $\mathbf{v}^{\textup{\text{cg}}}$ picks out the portion of the resulting force that impacts the transition. The remaining errors are reflected in the $\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}}$ term. Recall that $\mathbf{u}^{\textup{\text{at}},r}$ is the repatom component of the normal vector to the dividing surface in the fully atomistic phase space while $\mathbf{v}^{\textup{\text{cg}}}$ is the normal vector to the coarse-grained dividing surface. The geometric definition of the dot product states that
\begin{equation*}
\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}}
=
\|\mathbf{u}^{\textup{\text{at}},r}\| \cos(\theta), \end{equation*}
where $\theta$ is the angle between $\mathbf{u}^{\textup{\text{at}},r}$ and $\mathbf{v}^{\textup{\text{cg}}}$. The angle $\theta$ represents how much the dividing surface is rotated as it is projected into the coarse-grained phase space. As the mismatch between the direction of the vectors $\mathbf{u}^{\textup{\text{at}},r}$ and
$\mathbf{v}^{\textup{\text{cg}}}$ increases, the error in the coarse-grained approximation of the TST rate will increase. Physically, this increase is caused by the coarse-grained dividing surface passing through a lower-energy region of the phase space as a result of the rotation. The remaining portion of the error term, $\|\mathbf{u}^{\textup{\text{at}},r}\|^{-1}$, characterizes how much of the essential components of the transition are captured within the repatom region as has been mentioned earlier. Note that the magnitude of an individual component of the unstable eigenmode determines the relative importance of that component to the transition. With these three errors in mind, we could also write the error \eqref{est} found in the theorem as
\begin{equation*}
\lambda^{\textup{\text{cg}}} - \lambda^{\textup{\text{at}}}
=
\frac{\mathbf{v}^{\textup{\text{cg}}} \cdot
\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})}
{\|\mathbf{u}^{\textup{\text{at}},r}\|\cos(\theta)}. \end{equation*}
Above, it was shown that by including all of the atoms that contribute significantly to the localized transition, the coarse-grained approximation of the TST rate would be accurate. The error derived in Theorem \ref{Thm:ErrorBound} suggests that the error in the coarse-grained approximation can be further reduced by choosing the repatom region in such a way so as to minimize the error due to $\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})$. Additional refinement of the repatom region to minimize this error contribution would be similar to what is already done in the quasicontinuum methods when choosing a mesh for the continuum region and will be demonstrated in the next section on numerical results. More interestingly, this error formulation seems to imply the possibility that the coarse-graining problem may be approached with the primary goal of minimizing $\mathbf{v}^{\textup{\text{cg}}} \cdot \mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})$. In such an approach, it may not be as necessary to fully capture the localized region of interest in the repatom region provided this boundary condition norm can be made sufficiently small. This perspective on the error suggests that there might be problems for which this strategy is well-suited. The tradeoff between these two terms will be further discussed below.
The relative error between the two transition rates depends on the ratio between $\lambda^{\textup{\text{cg}}}$ and $\lambda^{\textup{\text{at}}}$. We can, of course, use the above theorem to write this ratio as
\begin{equation*}
\frac{\lambda^{\textup{\text{cg}}}}{\lambda^{\textup{\text{at}}}}
=
1 - \frac{1}{|\lambda^{\textup{\text{at}}}|}
\frac{\mathbf{v}^{\textup{\text{cg}}} \cdot
\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})}
{\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}}}. \end{equation*}
\section{Numerical Results}
\begin{figure}
\caption{The unstable eigenmode for the fully atomistic 1D chain for two different tensile strains determined by the scalar $s$.}
\label{fig:FractureUnstableEigenmode:Ex1}
\label{fig:FractureUnstableEigenmode:Ex2}
\label{fig:FractureUnstableEigenmode}
\end{figure}
\begin{figure}
\caption{Illustration of the localized and delocalized
coarse-graining schemes for a core region containing 6
atoms. Circles represent atoms in the system: filled
circles represent repatoms while the empty circles represent
constrained atoms. The core region is the collection of the 6
contiguous repatoms in the center of the chain. The weakened bond
is represented by the set of two lines connecting the two central
atoms in the figure.}
\label{fig:MeshSchemes}
\end{figure}
\begin{figure}
\caption{Comparison of the fully atomistic unstable eigenmode and the coarse-grained unstable eigenmodes computed for the localized and delocalized coarse-graining schemes for the two different tensile strains. The repatoms for each of the coarse-graining schemes are indicated by the markers. The localized and delocalized coarse-graining schemes contain the same number of degrees of freedom in both graphs. }
\label{fig:FractureCGUnstableEigenmode:Ex2}
\label{fig:FractureCGUnstableEigenmode:Ex1}
\label{fig:FractureCGUnstableEigenmode}
\end{figure}
\begin{figure}
\caption{Relative error of the HTST rate approximation for the localized and delocalized coarse-graining schemes as the number of repatoms varies, for the two tensile strains.}
\label{fig:FractureEigenvalueComparison:Ex2}
\label{fig:FractureEigenvalueComparison:Ex1}
\label{fig:FractureEigenvalueComparison}
\end{figure}
\begin{figure}
\caption{The denominator of the error term from Theorem \ref{Thm:ErrorBound} for the localized and delocalized approaches to coarse graining for two different tensile strains that measure the rotation of the dividing surface and the relevant portions of the transition captured in the repatom region. Ideally, this term should be equal to 1.}
\label{fig:FractureDotProductComparison:Ex2}
\label{fig:FractureDotProductComparison:Ex1}
\label{fig:FractureDotProductComparison}
\end{figure}
\begin{figure}
\caption{Long-range elastic contribution to the error from Theorem \ref{Thm:ErrorBound} for the localized and delocalized approaches to coarse-graining for two different tensile strains.}
\label{fig:FractureLongRangeComparison:Ex2}
\label{fig:FractureLongRangeComparison:Ex1}
\label{fig:FractureLongRangeComparison}
\end{figure}
In this section, we seek to verify through numerical experiments that the coarse-graining method described in this paper is indeed able to accurately reproduce the fully atomistic TST rate and to compare the error between qualitatively different approaches to coarsening a system with a localized region of interest. For the first coarse-graining scheme, the repatom region will be chosen with the intent of maximizing the magnitude of the projection of the unstable eigenmode onto the repatom space. As the region of interest is localized, this approach implies that the repatoms should be concentrated in this region as well. In the second coarse-graining scheme, the repatom region will consist of a core region in the localized region of interest with additional repatoms placed throughout the remainder of the system so as to better capture the long-range effect that the constrained region has on the repatoms. We will refer to these coarse-graining schemes as the localized repatom mesh and delocalized repatom mesh schemes, respectively.
The system that will be considered in the numerical experiments is a 1-D chain of atoms with fixed endpoints. Only nearest-neighbor interactions are considered. All of the atoms in the chain interact through the same potential except for those atoms which form the central bond, whose interaction potential is made weaker. We are interested in the rate at which this weakened bond breaks, causing the fracture of the chain. The process is expected to primarily involve the atoms nearest to the central bond, resulting in a localized transition. The localized nature of this transition will be demonstrated in the numerical experiments. Note that this system is closely related to that studied in \cite{hyperqc}.
The 1-D chain consists of 202 atoms. The energy contribution due to the central bond in this chain will take the form of a Lennard-Jones potential:
\begin{equation*}
\mathcal{V}_{c}(r)
=
4\varepsilon\left(\left(\frac{\sigma}{r}\right)^{12} -
\left(\frac{\sigma}{r}\right)^{6}\right), \end{equation*}
where $\varepsilon = 1$, $\sigma = \frac{1}{2^{1/6}}$, and $r$ is the length of the bond. The remaining bonds in the chain are treated as harmonic springs; i.e., the potential energy contribution of a single bond is of the form:
\begin{equation*}
\mathcal{V}(r)
=
\frac{1}{2}(r - 1)^{2}. \end{equation*}
Note that the equilibrium bond length for this potential is 1. Letting $\mathbf{q} = (q_{i})_{i=0}^{201}$ denote the position of the atoms in the chain ($q_{i}$ indicating the position of the $(i + 1)$-th atom), the total energy of the chain is
\begin{equation*}
\mathcal{V}_{\text{chain}}(\mathbf{q})
=
\mathcal{V}_{c}(q_{101} - q_{100})
+
\sum_{i=1, i \neq 101}^{201}\mathcal{V}(q_{i} - q_{i-1})
. \end{equation*}
We set the boundary conditions for the endpoints of the chain so that $q_{0} = 0$ and $q_{201} = 201s$, where $s = 1.02$ or $s = 1.035$.
The purpose of the scalar $s$ is to impose a tensile strain on the chain and make fracture energetically favorable. Changing the tensile strain affects the degree of locality of the transition region.
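For concreteness, the chain energy above can be written out directly. The following Python sketch (with \texttt{numpy}; not part of the original computations) encodes $\mathcal{V}_{c}$, $\mathcal{V}$, and $\mathcal{V}_{\text{chain}}$ with $\varepsilon = 1$ and $\sigma = 2^{-1/6}$, so that the weakened bond has its minimum at unit length.

```python
import numpy as np

EPS, SIGMA = 1.0, 2.0 ** (-1.0 / 6.0)   # epsilon = 1, sigma = 2^(-1/6)

def V_c(r):
    """Lennard-Jones energy of the weakened central bond."""
    return 4.0 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

def V(r):
    """Harmonic energy of an ordinary bond (unit equilibrium length)."""
    return 0.5 * (r - 1.0) ** 2

def chain_energy(q):
    """Total energy of the 202-atom chain; q holds positions q_0, ..., q_201."""
    bonds = np.diff(q)                   # bond i joins atoms i and i+1
    e = V_c(bonds[100])                  # the weakened central bond
    e += V(bonds[:100]).sum() + V(bonds[101:]).sum()
    return float(e)
```

With these parameters $\mathcal{V}_{c}(1) = -1$ is the depth of the weakened bond, while an unstretched harmonic bond contributes zero energy.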
The accuracy of the approximation of the TST rate will be discussed in terms of
\eqref{Eq:RelativeErrorResult}, so we need only compute the negative eigenvalues of the dynamical matrices discussed in the previous sections. Note that we use the same notation for the dynamical matrices, eigenmodes, eigenvalues, etc., in this section as we have before. To determine the negative eigenvalue belonging to $\mathbf{D}^{\textup{\text{at}}}$, we first compute the saddle point or transition state, $\mathbf{q}_{s}$, of the system just described. In our numerical simulations, we found the saddle point by slowly stretching the weakened bond at the center of the chain while simultaneously relaxing the remaining atoms until all of the forces in the chain were found to be zero within a given tolerance. We then numerically compute $\mathbf{D}^{\textup{\text{at}}}$ and diagonalize it to determine both $\lambda^{\textup{\text{at}}}$ and $\mathbf{u}^{\textup{\text{at}}}$. The unstable eigenmode $\mathbf{u}^{\textup{\text{at}}}$ is shown in Figure \ref{fig:FractureUnstableEigenmode} for the two different tensile strains. It is clear from the picture that this transition is fairly well localized in both cases, as the atoms closest to the central bond are the largest contributors to the norm of $\mathbf{u}^{\textup{\text{at}}}$. Physically, this unstable eigenmode simply shows that as the unbroken chain crosses over to the broken state, the central atoms in the two regions move away from one another as indicated by the displacements in the eigenvector. When the strain is greater, the fracture of the chain is more localized.
For this problem, it is possible to determine the saddle point analytically. Before beginning the derivation, recall that the length of the chain under consideration is $201s$; for clarity, it will be denoted by $L$ in this part of the analysis. Due to the symmetry of the system about the central bond, we expect the saddle point to display a similar symmetry; that is, we expect that $q_{i} - q_{0} = q_{201} - q_{201 - i}$ for $0 \leq i \leq 100$. As a consequence, the central bond length can be written solely in terms of $q_{100}$: explicitly, $q_{101} - q_{100} = L - 2q_{100}$. Because of the strict convexity of the spring potential and the symmetry of the atomic positions, it can also be shown that every bond governed by the spring potential (there are 200 such bonds) has the same length. These bonds partition the length of the chain minus the central bond, so each has length $\frac{L - (L - 2q_{100})}{200} = \frac{q_{100}}{100}$. With this result and the formula for the central bond length, computing the saddle point reduces to computing $q_{100}$. To find this value, consider the balance of forces on the atom with index 100. This atom belongs to the central bond and interacts via the Lennard-Jones potential with the atom with index 101, while it interacts through the spring potential with the atom with index 99. At the saddle point, these forces must cancel, which gives the force balance equation
\begin{equation*}
(q_{100} - q_{99} - 1) + 4\varepsilon\left(12\frac{\sigma^{12}}{(q_{101} - q_{100})^{13}} - 6\frac{\sigma^{6}}{(q_{101} - q_{100})^{7}}\right) = 0, \end{equation*}
Substituting our results for the spring bond length and the central bond length, this equation becomes
\begin{equation*}
\left(\frac{q_{100}}{100} - 1\right) + 4\varepsilon\left(12\frac{\sigma^{12}}{(L - 2q_{100})^{13}} - 6\frac{\sigma^{6}}{(L - 2q_{100})^{7}}\right) = 0. \end{equation*}
This non-linear equation can be converted into a polynomial equation of degree 14. The roots of the resulting polynomial that lie in $(0, \frac{1}{2}L)$ give the possible values of $q_{100}$ at a symmetric critical point; note that the unbroken energy minimum is itself such a critical point, so the saddle point must be identified among these roots. As this single position determines the location of every atom, the appropriate root yields the transition state of the problem.
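Rather than forming the degree-14 polynomial, one can also locate the saddle-point root directly from the force balance. The following Python sketch is a minimal illustration (the spring tension on the atom with index 100 is $q_{100}/100 - 1$, since the harmonic bonds have unit equilibrium length; the bracket $[100.5, 101.5]$ is an assumption chosen by inspecting the sign of the net force so as to isolate the saddle root from the unbroken-minimum root).

```python
EPS, SIGMA = 1.0, 2.0 ** (-1.0 / 6.0)   # epsilon = 1, sigma = 2^(-1/6)

def force_balance(q100, s=1.02):
    """Net force on the atom with index 100 at a symmetric critical point."""
    L = 201.0 * s
    r = L - 2.0 * q100                   # central (weakened) bond length
    spring = q100 / 100.0 - 1.0          # harmonic tension, unit equilibrium length
    lj = 4.0 * EPS * (12.0 * SIGMA**12 / r**13 - 6.0 * SIGMA**6 / r**7)
    return spring + lj

def bisect(f, a, b, tol=1e-12):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Bracket chosen to isolate the saddle root (the stretched central bond),
# not the root corresponding to the unbroken energy minimum.
q100 = bisect(force_balance, 100.5, 101.5)
r_saddle = 201.0 * 1.02 - 2.0 * q100     # central bond length at the saddle
```

At the resulting saddle, the central bond is stretched well beyond its unit equilibrium length while each spring bond carries the matching tension.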
Note that with the bond lengths between the atoms in the chain known, we can derive an analytical expression for the unstable eigenvector $\mathbf{u}^{\textup{\text{at}}}$ in terms of the negative eigenvalue $\lambda^{\textup{\text{at}}}$. To see this, let $\mathbf{u}^{\textup{\text{at}}}_{100}$ be the displacement of the atom with index 100, the leftmost atom that interacts via the Lennard-Jones potential, and recall that the endpoints of the chain are fixed, so we may take $\mathbf{u}^{\textup{\text{at}}}_{0} = 0$. Applying $\mathbf{D}^{\textup{\text{at}}}$ to the unstable eigenmode and assuming that $\mathbf{u}^{\textup{\text{at}}}_{100}$ is known, we see that the displacements $\mathbf{u}^{\textup{\text{at}}}_{i}$ for $0 < i < 100$ satisfy a second-order difference equation with the two boundary conditions given by $\mathbf{u}^{\textup{\text{at}}}_{0}$ and $\mathbf{u}^{\textup{\text{at}}}_{100}$. Specifically, we have that
\begin{equation*}
-\mathbf{u}^{\textup{\text{at}}}_{i-1} + 2\mathbf{u}^{\textup{\text{at}}}_{i} - \mathbf{u}^{\textup{\text{at}}}_{i+1} = \lambda^{\textup{\text{at}}}\mathbf{u}^{\textup{\text{at}}}_{i} \quad \text{for} \; 0 < i < 100 \end{equation*}
with $\mathbf{u}^{\textup{\text{at}}}_{100}$ taken to be some constant to be determined from normalization and $\mathbf{u}^{\textup{\text{at}}}_{0} = 0$. Solving this difference equation yields the solution
\begin{equation*}
\mathbf{u}^{\textup{\text{at}}}_{i} = \alpha r_{+}^{i} + \beta r_{-}^{i} \quad \text{for} \;
0 \leq i \leq 100, \end{equation*}
where
\begin{equation*}
\alpha
=
\frac{-\mathbf{u}^{\textup{\text{at}}}_{100}}{r_{-}^{100} - r_{+}^{100}},
\quad
\beta
=
\frac{\mathbf{u}^{\textup{\text{at}}}_{100}}{r_{-}^{100} - r_{+}^{100}}, \end{equation*}
and
\begin{equation*}
r_{\pm}
=
1 - \frac{\lambda^{\textup{\text{at}}}}{2} \pm
\sqrt{\frac{\lambda^{\textup{\text{at}}}}{2}\left(\frac{\lambda^{\textup{\text{at}}}}{2} - 2\right)}. \end{equation*}
Note that $r_{+} > 1$ while $r_{-} < 1$. Due to the symmetry of the problem, we can get a similar equation for the displacements on the right-hand side of the chain. The displacements of the two central atoms that interact via the Lennard-Jones potential are determined from the normalization of the eigenmode and the fact that $\mathbf{u}^{\textup{\text{at}}}_{100} = -\mathbf{u}^{\textup{\text{at}}}_{101}$, which is due to the symmetry intrinsic to the problem.
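The closed-form solution of the difference equation is easy to check numerically. The following Python sketch (with \texttt{numpy}; the value $\lambda^{\textup{\text{at}}} = -0.05$ and the normalization $\mathbf{u}^{\textup{\text{at}}}_{100} = 1$ are illustrative choices, not values from the experiments) verifies the reciprocal roots, the boundary conditions, and the recurrence on the left half-chain.

```python
import numpy as np

lam = -0.05                        # an illustrative negative eigenvalue
disc = np.sqrt((lam / 2) * (lam / 2 - 2))
rp = 1 - lam / 2 + disc            # r_plus  > 1
rm = 1 - lam / 2 - disc            # r_minus < 1

u100 = 1.0                         # stand-in for the normalization constant
alpha = -u100 / (rm**100 - rp**100)
beta = u100 / (rm**100 - rp**100)

i = np.arange(101)
u = alpha * rp**i + beta * rm**i   # candidate eigenmode on the left half-chain

assert np.isclose(rp * rm, 1.0)    # reciprocal roots of the characteristic equation
assert np.isclose(u[0], 0.0)       # fixed left endpoint
assert np.isclose(u[100], u100)    # prescribed value at the central atom
# Second-order difference equation: -u_{i-1} + 2 u_i - u_{i+1} = lam * u_i
assert np.allclose(-u[:-2] + 2 * u[1:-1] - u[2:], lam * u[1:-1])
```

The exponential growth of $r_{+}^{i}$ is what makes the mode decay rapidly away from the central bond once normalized, consistent with the localization seen in Figure \ref{fig:FractureUnstableEigenmode}.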
As stated previously, the localized repatom coarse-graining scheme is intended to maximize $\|\mathbf{u}^{\textup{\text{at}},r}\|$. This was accomplished in the numerical experiments by constraining a contiguous block of atoms at one end of the chain and the mirror image of this block at the other end, leaving a contiguous repatom region at the center of the chain. The total number of repatoms was then varied. For the delocalized coarse-graining scheme, the selection of the repatom region began with a contiguous region of repatoms centered on the central bond. Additional repatoms were placed in the periphery, with the spacing between them increasing geometrically away from the core region. Specifically, starting with a single constrained atom between the core region and the first peripheral repatom, the spacing was doubled at each step: there would be two, then four, eight, etc., constrained atoms between each pair of repatoms until the end of the chain was reached. In symbols, we may write the set of indices for the $N$ repatoms in the localized coarse-graining scheme in the following way:
\begin{equation*}
\text{Localized Indices}(N) =
\left\{100 - \ell : 0 \leq \ell \leq \frac{1}{2}N - 1 \right\}
\bigcup \left\{101 + \ell : 0 \leq \ell \leq \frac{1}{2}N - 1 \right\}. \end{equation*}
Of course, here, $N$ must be an even integer with $2 \leq N \leq 200$. In symbols, we may write the set of indices for the repatoms in the delocalized coarse-graining scheme with $N$ repatoms in the core in the following way:
\begin{align*}
\text{De}&\text{-localized Indices}(N) = \text{Localized Indices}(N)
\\ \bigcup &
\left\{100 - \left(\frac{1}{2}N - 1\right) - 2^{\ell} - (\ell - 1) :
\ell \geq 1 \; \text{and}
\; 100 - \left(\frac{1}{2}N - 1\right) - 2^{\ell} - (\ell - 1) > 0
\right\}
\\ \bigcup &
\left\{101 + \left(\frac{1}{2}N - 1\right) + 2^{\ell} + (\ell - 1) :
\ell \geq 1 \; \text{and}
\; 101 + \left(\frac{1}{2}N - 1\right) + 2^{\ell} + (\ell - 1) < 201
\right\}. \end{align*}
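These index formulas are easy to mistype; as a check, the following short Python sketch (atoms numbered $1$ through $200$, function names ours, not from any implementation) generates both sets directly from the expressions above:

```python
def localized_indices(N):
    """Repatom indices for the localized scheme: a contiguous block of N
    atoms (N even, 2 <= N <= 200) centred on the 100-101 bond."""
    assert N % 2 == 0 and 2 <= N <= 200
    left = {100 - l for l in range(N // 2)}
    right = {101 + l for l in range(N // 2)}
    return left | right

def delocalized_indices(N):
    """Localized core of N repatoms plus peripheral repatoms whose
    spacing doubles moving away from the core (chain of 200 atoms)."""
    idx = set(localized_indices(N))
    l = 1
    while 100 - (N // 2 - 1) - 2**l - (l - 1) > 0:   # left periphery
        idx.add(100 - (N // 2 - 1) - 2**l - (l - 1))
        l += 1
    l = 1
    while 101 + (N // 2 - 1) + 2**l + (l - 1) < 201:  # right periphery
        idx.add(101 + (N // 2 - 1) + 2**l + (l - 1))
        l += 1
    return idx
```

By construction the delocalized set is symmetric about the central bond (atom $i$ maps to atom $201-i$).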
The repatom configuration generated by this method is symmetric about the central bond for the delocalized coarse-graining scheme. The number of repatoms in the core region was then also varied. An illustration of these two coarse-graining schemes is provided in Figure \ref{fig:MeshSchemes}.
Once a repatom set was defined for the experiment, $\mathbf{D}^{\textup{\text{at}}}$ and \eqref{Eq:CGDynMatrix} were used to directly compute $\mathbf{D}^{\textup{\text{cg}}}$. A diagonalization of this matrix then yielded $\lambda^{\textup{\text{cg}}}$ and $\mathbf{v}^{\textup{\text{cg}}}$. A comparison of the unstable eigenmodes for the two coarse-graining schemes and the fully atomistic unstable eigenmode for a given resolution is shown in Figure \ref{fig:FractureCGUnstableEigenmode}. The markers in the graph denote which atoms were included in the repatom region for the experiments and provide another illustration of the difference between the repatom regions used in the localized and delocalized repatom mesh methods.
The relative error of the HTST approximation, or $\sqrt{\frac{\lambda^{\textup{\text{cg}}}}{\lambda^{\textup{\text{at}}}}} - 1$, is shown in Figure \ref{fig:FractureEigenvalueComparison} for varying numbers of repatoms and for the two tensile strains. The results show that the relative rate error decreases extremely rapidly, i.e., roughly exponentially in this case, with increasing numbers of repatoms. Achieving relative rate errors of less than 1\% requires only about 40 to 50 degrees of freedom for both cases for $s=1.035$. Further, the localized coarse-graining scheme is seen to outperform the delocalized coarse-graining scheme for all meshes we investigated, although the difference is smaller for the more delocalized transition.
To understand the cause of the difference in the accuracy of the two methods, we look to Theorem \ref{Thm:ErrorBound} for the principal components of the error. In Figure \ref{fig:FractureDotProductComparison}, we see a combination of the error due to not fully resolving some of the more essential atoms in the transition and of the rotation of the dividing surface in the coarse-grained phase space, reflected in the term ${\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}}}$, while the error due to the long-range elastic contributions, given by $\mathbf{v}^{\textup{\text{cg}}} \cdot {\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})}$, is shown in Figure \ref{fig:FractureLongRangeComparison}. In both cases, the localized coarse-graining scheme is seen to be preferable. That the localized coarse-graining scheme had a lower error contribution from the ${\mathbf{u}^{\textup{\text{at}},r} \cdot \mathbf{v}^{\textup{\text{cg}}}}$ term was expected given that the aim of this coarse-graining scheme is the maximization of
$\|\mathbf{u}^{\textup{\text{at}},r}\|$ by concentrating the repatoms in the region where the components of $\mathbf{u}^{\textup{\text{at}}}$ are the largest in terms of absolute value. In contrast, the delocalized coarse-graining scheme distributes some repatoms away from the fracture point, where the components of the unstable eigenmode are smaller in norm, hence leading to suboptimal performance in this case. The better performance of the localized method over the delocalized method in the case of the long-range error is much more surprising and deserves extra attention as the delocalized method was meant to reduce this error specifically.
To that end, let us consider the $\mathbf{v}^{\textup{\text{cg}}} \cdot {\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}} - \mathbf{u}^{\textup{\text{at}},c})}$ term in more detail by first considering the relaxed constrained configuration, or $\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}$, for the two coarse-graining schemes. The relaxed constrained configuration is especially easy to determine for the present potential in both cases as the potential is strictly convex in the periphery. For a line of constrained atoms between two repatoms in this 1-D system, the relaxed configuration is simply given by a linear interpolation of the displacement between the two repatoms. Therefore, we can compute $\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}$ through a simple linear interpolation between the nodes in the two coarse-graining schemes in the periphery. This allows for an easy comparison of $\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}$ and $\mathbf{u}^{\textup{\text{at}},c}$.
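Since the periphery potential is strictly convex with nearest-neighbour coupling, the relaxed constrained displacements reduce to linear interpolation between adjacent repatoms, which is a one-liner; a minimal sketch with hypothetical repatom data (function name ours):

```python
import numpy as np

def relaxed_constrained(repatom_idx, repatom_u, chain_idx):
    """Relaxed displacements of constrained atoms: linear interpolation of
    the repatom displacements, valid for a strictly convex nearest-neighbour
    periphery in a 1-D chain."""
    repatom_idx = np.asarray(repatom_idx, dtype=float)
    order = np.argsort(repatom_idx)        # np.interp needs increasing abscissae
    return np.interp(np.asarray(chain_idx, dtype=float),
                     repatom_idx[order], np.asarray(repatom_u)[order])
```

For example, constrained atoms 1, 2, 3 between repatoms at sites 0 and 4 carrying displacements 0 and 1 relax to 0.25, 0.5, 0.75.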
The linear interpolation between the repatom components of the coarse-grained unstable eigenmode is reported in Figure \ref{fig:FractureCGUnstableEigenmode}. It is quite evident from this viewpoint that the approximation of the constrained region by the localized method is worse than the approximation due to the delocalized repatom coarse-graining scheme. The additional nodes in the periphery for the delocalized coarse-graining scheme help better capture the long-range behavior of the system. This difference is, however, mitigated by the fact that we are here only considering nearest-neighbor interactions. Therefore, the only difference in the constrained region approximation that truly matters is the difference for the constrained atoms that directly interact with the repatom region. This is reflected in the error derived in Theorem \ref{Thm:ErrorBound} through the kernel of
$\mathbf{B}$. For systems with longer-range interactions, it is conceivable that this long-range effect error may become considerably more important. With all this in mind, the delocalized method should still perform better than the localized method in terms of the $\|\mathbf{B}(\mathbf{u}^{\textup{\text{at}},c}_{\text{min}}
- \mathbf{u}^{\textup{\text{at}},c})\|$ norm. This is, in fact, the case. The localized method, however, performs better in the long-range error due to the contribution of the dot product with $\mathbf{v}^{\text{cg}}$. As mentioned above, in the localized case, the atoms with the largest values of $\mathbf{v}^{\textup{\text{cg}}}$ do not contribute to the error as they are not coupled to the constrained atoms through $\mathbf{B}$. They are thus effectively shielded by the core region and only a small number of repatoms eventually contribute to the error. In contrast, in the delocalized case, a larger number of repatoms with significant values of $\mathbf{v}^{\textup{\text{cg}}}$ contribute, tilting the balance in favor of the localized coarse-graining scheme.
Another key point to keep in mind when considering the long-range error is the mesh used in the periphery. The mesh used in the example above is not the optimal mesh for this system and was chosen instead as a realistic coarse-graining scheme to use without knowing the unstable eigenmode exactly. For this problem, many of the degrees of freedom in the coarse-graining scheme do not contribute much to reducing the long-range error, and this penalizes the delocalized coarse-graining scheme in a degree-of-freedom comparison against the localized coarse-graining scheme. A more nearly optimal choice of repatoms in the periphery for the delocalized coarse-graining scheme would make the comparison more favorable. Theorem \ref{Thm:ErrorBound} could be used as a starting point for a derivation of an optimal mesh as is done in \cite{acta.atc} for quasicontinuum methods, but the dot products in the error present challenges to the derivation. Using the Cauchy-Schwarz inequality could help to alleviate this issue; however, the resulting bound is not a good approximation to the actual result and the resulting mesh is suboptimal. We can still consider better, if not optimal, meshes to see whether the delocalized coarse-graining scheme will better handle the long-range error contribution. For the simple delocalized coarse-graining scheme with a single repatom in the periphery on either side of the chain, with only a single constrained atom between these peripheral atoms and the core, the delocalized method does indeed become superior to the localized mesh in the long-range error for small core regions. However, the localized coarse-graining scheme still ultimately had a lower relative error even in this case; indeed, it had a lower relative error than the delocalized scheme for every mesh examined in the numerical experiments.
Note that in actual implementations, the delocalized coarse-graining scheme might possess other practical advantages. For example, in 1D with nearest-neighbor interactions, $\mathbf{C}$ would exhibit a block structure that could potentially be exploited.
Overall, for the original system considered here, the localized method is superior in all aspects of the error. Consequently, choosing repatoms so as to increase $\|\mathbf{u}^{\textup{\text{at}},r}\|$ is the optimal strategy. In higher dimensions, this may change as the boundary region between the localized repatom region and the periphery becomes more significant.
\section{Conclusions and Future Considerations}
In this paper, we have demonstrated that the CGMD approach to atomistic coarse-graining can produce an accurate approximation to the TST rate of the fully atomistic system. Over the course of the analysis, we verified that the coarsened system is well-behaved in that no spurious behaviors are introduced through the coarsening process, and we described the projection of the dividing surface into the coarse-grained phase space. The error analysis was extended to show under which circumstances the coarse-grained approximation of the TST rate would be most accurate and highlighted the significant contributions to the error in this estimation. Our numerical results demonstrated the accuracy of two different approaches to the coarse-graining approximation in the context of a 1-D chain undergoing fracture.
The success of these approaches demonstrated that the number of degrees of freedom taken into account to approximate the TST rate could be significantly reduced while still maintaining a highly accurate approximation. While the localized method proved to be superior in the numerical experiments performed here, the accuracy of the approximations made by the two methods was comparable. This is important to note because the implementation of the delocalized method may be more efficient in certain situations.
It is interesting to note that the analysis and error calculations for this method are independent of the basis that is chosen for the problem. While it is certainly natural to consider a basis consisting solely of individual atoms, it may be possible in certain situations to choose a basis for the problem that further decreases the relevant number of degrees of freedom, such as the continuous, piecewise linear basis functions used in \cite{PhysRevB.72.144104} and \cite{EBTadmor:2013,hyperqc}. Theoretically, it is possible to initially choose an ideal basis consisting of the eigenvectors of the dynamical matrix at the saddle point, as described earlier in the paper. In such a situation, only a single basis element would contribute in any way to the TST rate, allowing for a coarsening of the system down to a single element while still maintaining a perfect approximation of the TST rate. While computing such an ideal basis is usually not practical, especially if the point of the simulation is to discover the appropriate escape transition \cite{PUSAV2009}, there may be alternative choices for a basis that are relatively easy to work with, apply to certain classes of problems, and still make the behavior of interest increasingly localized. For future consideration, it would also be interesting to investigate the effect that longer-range interactions have on the constrained region's contribution to the overall error, as discussed in the numerical results section.
\end{document}
\begin{document}
\title {Transmission through a non-overlapping well adjacent to a finite barrier}
\author{Zafar Ahmed} \email{zahmed@barc.gov.in} \affiliation{Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400 085, India}
\begin{abstract} We point out that a non-overlapping well (at negative energies) adjacent to a finite barrier (at positive energies) is a simple potential which is generally missed out while discussing one-dimensional potentials in the textbooks of quantum mechanics. We show that these systems present interesting situations wherein the transmitivity $T_b(E)$ of a finite barrier can be changed both quantitatively and qualitatively by varying the depth or width of the well or by changing the distance between the well and the barrier. Using a delta (thin) well near a delta (thin) barrier we show that the well induces energy oscillations riding over $T_b(E)$ in the transmitivity $T(E)$ at energies both below and above the barrier. More generally, we show that a thick well separated from a thick barrier also gives rise to energy oscillations in $T(E)$. A well joining a barrier discontinuously (a finite jump) reduces $T(E)$ (as compared to $T_b(E)$) over all energies. When the well and barrier are joined continuously, $T(E)$ increases and then decreases at energies below the barrier; at energies above the barrier the changes are inappreciable. In these two cases, if we separate the well and the barrier by a distance, $T(E)$ again acquires oscillations. Paradoxically, it turns out that a distant well induces more energy oscillations in $T(E)$ than one near the barrier. \end{abstract}
\pacs{PACS No.: 03.65.Ge, 03.65.Nk} \maketitle \section{Introduction} In the textbooks of quantum mechanics the solution of the Schr{\"o}dinger equation and the consequent results are illustrated through simple one-dimensional potentials. For discrete bound states the square well\cite{1,2,3,4} and double wells\cite{2,3} are studied. Square well, square barrier and semi-infinite step potentials are used for studying continuous energy (scattering) states.\cite{2,3,4} A well with two side barriers is studied for understanding resonances and meta-stable states.\cite{2,3} An overlapping well adjacent to a finite barrier is a well known model for discussing discrete complex energy Gamow-Siegert meta-stable states \cite{5} in alpha decay.
Students may wonder what happens if a non-overlapping well (at negative energies) is adjacent to a finite barrier (at positive energies) (see Figs.~1). Perhaps for want of an application this system has gone undiscussed; however, interesting questions do arise for this kind of potential. One may wonder whether the well (at negative energies) can change (increase/decrease) the transmitivity of the barrier (at positive energies) quantitatively and significantly. One may also like to know whether there can be qualitative changes in the transmitivity of the barrier $(T_b(E))$ due to the presence of the well in some class of cases.
In this article we would like to show that a well near a barrier can change the transmitivity of the barrier both quantitatively and qualitatively. In fact a scattering potential well (vanishing at $x \rightarrow \pm \infty$) can give rise to a non-overlapping well adjacent to a finite barrier (NWAFB) as \begin{equation} V(x)=-v_w f(x+d) + v_b f(x), \end{equation} where $f(x)=e^{-x^2}, {\rm sech}^2x, e^{-x^4},....$ see Figs.~1(a). However, in this case a change in the depth of the well or its distance from the barrier would also change the height of the barrier. Consequently, the effect of the well on the transmission property of the original barrier cannot be exhibited explicitly. We therefore consider wells of zero range or finite range. Else, if they are scattering wells of infinite range on one side, they ought to be joined to the barrier continuously or discontinuously. In the following we discuss the various possibilities for NWAFB.
\section{Various models of non-overlapping well adjacent to a finite barrier} We construct various models of NWAFB using three parameters $v_w,v_b>0$ and $d$. Here $v_w$ is the depth of the well, $v_b$ is the height of the barrier and $d$ denotes the separation between the well and the barrier. In these models a change in $d$ does not change the depth of the well or the height of the barrier.
First let us consider both the well and the barrier of zero range. Using the zero range Dirac delta potentials we construct a simple solvable model of NWAFB as \begin{equation} V^{\delta}(x)= -v_w \delta(x+d)+ v_b \delta(x). \end{equation} Using finite range well, we construct a more general model of NWAFB (see Figs.~1(b)) \begin{eqnarray} &&V^F(x)= -v_w V_w(x+d+w_w/2), \quad -d-w_w \le x \le -d \nonumber \\ &&V^F(x)=0, \quad -d \le x \le 0, \nonumber \\ &&V^F(x)= v_b V_b(x), \quad x \ge 0, \end{eqnarray}
where $V_w(x)$ may be chosen as constant (square or rectangular well), $(1-4x^2/w_w^2)$ (parabolic well), $(1-2|x|/w_w)$ (triangular well),
$e^{-x^2/w_w^2}$ (Gaussian well) or $e^{-|x|/w_w}$ (exponential well). It may be mentioned that in some cases $v_b$ may not represent the effective barrier height ($v_m=$maximum of $V_b(x)$). For instance in this article we shall be choosing $V_b(x)=v_b x e^{-x^2}$ where for $v_b=11.5$ we get $v_m \approx 5$.
Using asymptotically converging profiles $f(x)$ and $g(x)$, we construct two-parameter $(v_w,v_b)$ models of NWAFB wherein a well of infinite range is juxtaposed to a barrier of infinite range continuously as (see solid curve in Figs.~1(c)) \begin{eqnarray} &&V^{C}(x)= v_b g(x) , x>0 \nonumber \\ &&V^{C}(x)=v_w g(x), x \le 0, \end{eqnarray} and discontinuously as (see dashed curve in Figs.~1(c)) \begin{eqnarray} &&V^{D}(x)=v_b f(x), x>0 \nonumber \\ &&V^{D}(x)=-v_w f(x) , x \le 0. \end{eqnarray}
Here the functions $f(x)$ may be chosen as rectangular profile or as $e^{-x^2}$, $e^{-x^4}$, ${\rm sech}^2 x$..., and $g(x)$ may be taken as $xe^{-x^2}$, $xe^{-x^4}$, $\tanh x~{\rm sech} x$,... . It may be mentioned that the finite range potential like $V(|x|\ge w)=0, V(x<0)=v_w \sin (2\pi x/w), V(x>0)=v_b \sin (2\pi x/w)$ would rather be a NWAFB of type (3) with $d=0$ than of the type (4).
Next we have to solve the Schr{\"o}dinger equation \begin{equation} {d^2 \psi(x) \over dx^2}+{2m \over \hbar^2}(E-V(x))\psi(x)=0 \end{equation} for finding the transmitivity, $T(E)$, of the various potential models discussed above. When the potentials are real and Hermitian, time reversal symmetry ensures that the transmitivity and reflectivity are independent of the direction of incidence of the particle, whether from the left or the right. Due to this symmetry, in transmission through a NWAFB it does not matter whether the incident particle sees the well or the barrier first.
\section{Delta potential model of NWAFB: (2)} The zero range delta potential model of NWAFB is exactly solvable. We solve the Schr{\"o}dinger equation (6) for this potential, $V^{\delta}(x)$ given in Eq.~(2), using just plane waves $e^{\pm ikx}$ as usual. Taking the particle to be incident from the left, we can write \begin{eqnarray} &&\psi(x)=A e^{ikx}+ B e^{-ikx}, \quad -\infty < x \le -d \nonumber \\ &&\psi(x)=C e^{ikx}+ D e^{-ikx}, \quad -d < x < 0 \nonumber \\ &&\psi(x)=F e^{ikx}, \quad x \ge 0. \end{eqnarray} The wavefunction (7) has to be continuous at $x=-d$ and $x=0$. However, due to the point singularities of the delta functions in Eq.~(2) at $x=-d,0$, there occurs a mismatch in the first derivative of the wavefunction (see Problems no. 20 and 21 in Ref.\cite{4}), and we get \begin{eqnarray} &&A e^{-ikd} + B e^{ikd} = C e^{-ikd} + D e^{ikd}, \nonumber \\ &&ik[C e^{-ikd} - D e^{ikd}]-ik[A e^{-ikd} - B e^{ikd}] = -{2m \over \hbar^2} v_w [C e^{-ikd} + D e^{ikd}], \nonumber \\ &&C+D=F,\nonumber \\ &&ik[F-(C-D)]={2m \over \hbar^2} v_b F. \end{eqnarray} By eliminating $C,D$ and $F$ from Eq.~(8), we get \begin{eqnarray} && {B \over A}= \lambda^{-1}\,{\lambda u_b(2ik-u_w)-u_w(2ik-u_b) \over (2ik+u_w)(2ik-u_b)+\lambda u_w u_b}, \nonumber \\ && {F \over A}=-{4k^2 \over (2ik+u_w)(2ik-u_b)+\lambda u_w u_b}, \quad \lambda=e^{2ikd}, u_w={2m v_w \over \hbar^2}, u_b={2m v_b \over \hbar^2}. \end{eqnarray}
These ratios give us the reflectivity $R(E)=|{B \over A}|^2$ and the transmitivity $T(E)=|{F \over A}|^2$. When $v_w=v_b$ the numerator of $B/A$ in Eq.~(9) becomes proportional to $\sin kd$, which gives rise to reflectivity zeros when $kd=n\pi$; these are the positions of transmission resonances with $T(E)=1.$ When either of $v_w$ and $v_b$ is zero, from Eq.~(9) we get (see Problem no. 21 in \cite{4}) \begin{equation} T_b(E)= {E \over {m v^2 \over 2 \hbar^2}+E}=T_w(E), \quad v=v_w, v_b. \end{equation} This is a particular feature of the delta potential well or barrier: their transmission co-efficients are identical. For all our calculations we choose $2m=\hbar^2=1$, so that energies and lengths are in arbitrary units. In Figs.~2(a), both $T(E)$ and $T_b(E)$ are plotted as a function of energy, $E$, when $v_w=1, v_b=5, d=3$. See the interesting energy-oscillations in the solid curve that represents the transmitivity of the total potential $V^{\delta}(x)$: a perturbed barrier. When compared with the transmitivity of the Dirac delta barrier (see the dotted curve), these energy oscillations in $T(E)$ can be seen to be riding around $T_b(E)$ even at large energies ($E \gg v_b$). We find that smaller values of $v_w$ (less than 1) create only small excursions (ripples) around the smooth variation of $T_b(E)$.
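The delta model also lends itself to a direct numerical check. In the units $2m=\hbar^2=1$ adopted above, $u_w=v_w$, $u_b=v_b$ and $k=\sqrt{E}$; the Python sketch below evaluates $T(E)=|F/A|^2$, with $F/A$ re-derived from the matching conditions at the two delta functions (the expression used here was checked against unitarity, $R+T=1$):

```python
import cmath
import math

def delta_T(E, v_w, v_b, d):
    """Transmitivity of V(x) = -v_w delta(x+d) + v_b delta(x) in units
    2m = hbar^2 = 1, so u_w = v_w, u_b = v_b and k = sqrt(E)."""
    k = math.sqrt(E)
    lam = cmath.exp(2j * k * d)            # lambda = exp(2 i k d)
    den = (2j * k + v_w) * (2j * k - v_b) + lam * v_w * v_b
    return abs(4 * k ** 2 / den) ** 2      # T = |F/A|^2

# v_w = 0 reduces to the single delta barrier: T_b = E / (E + v_b^2/4)
assert abs(delta_T(2.0, 0.0, 5.0, 3.0) - 2.0 / (2.0 + 25.0 / 4.0)) < 1e-12
# equal well and barrier with k d = pi: reflectivity zero, so T = 1
assert abs(delta_T(math.pi ** 2, 5.0, 5.0, 1.0) - 1.0) < 1e-9
```

Scanning $E$ with this function reproduces the oscillations of $T(E)$ about $T_b(E)$ discussed in the text.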
The depth of the well $v_w$ governs the amplitude of these oscillations. In Figs.~2(c), note that the frequency of these energy-oscillations remains the same but their amplitude is larger as $v_w$ is increased and made equal to 5 $(=v_b)$. Compare Figs.~2(a) with Figs.~2(c) and Figs.~2(b) with Figs.~2(d) to appreciate the effect of the increase in the depth of the well, which results in an increase of the amplitude of oscillations.
We find that the frequency of these oscillations is governed by the value of $d$: the larger the value of $d$, the higher the frequency of oscillations. Compare Figs.~2(a) with Figs.~2(b) and Figs.~2(c) with Figs.~2(d) to appreciate the effect of the increase in $d$.
This simple and exactly solvable model of NWAFB suggests that a well near a barrier does not simply increase or decrease the transmitivity of the barrier. Most interestingly, it does both, and hence produces energy oscillations in $T(E)$. The increase in the frequency of these oscillations with increasing $d$ (the perturbation moving away) is paradoxical.
The question arising here is whether energy oscillations in $T(E)$ are the essence of NWAFB of some type or a particular feature of the extremely thin delta potentials making up $V^{\delta}(x)$ (2). We therefore need to study the other models given in Eqs.~(1,3-5). As these models of NWAFB are not solvable analytically, in the following we discuss a numerical procedure to find $T(E)$.
\section{A numerical method for the calculation of the transmitivity of a one dimensional potential} When the potential vanishes asymptotically, one can calculate its transmission co-efficient by solving the Schr{\" o}dinger equation numerically for scattering solutions. We propose to solve Eq.~(6) using the Runge-Kutta method\cite{6} of step-by-step integration (see Appendix). This method consists of solving two coupled first-order linear differential equations \begin{equation} {dy(x)\over dx}=f[x,y(x),z(x)], \quad {dz(x)\over dx}=g[x,y(x),z(x)], \quad y(0)=c_1, \quad z(0)=c_2. \end{equation} In this setting, we introduce $y(x)=\psi(x)$ and $z(x)= {d \psi(x) \over dx}$ and split the Schr{\"o}dinger equation into two first-order coupled linear differential equations as \begin{eqnarray} {dy(x) \over dx}= z(x)\nonumber \\ {dz(x) \over dx}= -{2m \over \hbar^2}[E-V(x)]y(x). \end{eqnarray} The Schr{\" o}dinger equation, being a second order differential equation, has two linearly independent solutions $\psi_1(x)$ and $\psi_2(x)$. We start the numerical integration from $x=0$ using the two sets of initial values (see Problem no. 22 in Ref.\cite{4} and Ref.\cite{7}) \begin{equation} \psi_1(0)=1,\psi_1^\prime(0)=0; \quad \psi_2(0)=0, \psi_2^\prime(0)=1, \end{equation} such that the Wronskian function $W[\psi_1(x),\psi_2(x)]=\psi_1(x) \psi_2^\prime(x)- \psi_1^\prime(x) \psi_2(x)=1$, which is known to be a constant of motion. Here the prime denotes the first derivative with respect to $x$. On the right, the RK-integration is carried up to (say) $x=w_b$ in the case of a finite range barrier $V_b$ in $V^F(x)$ (3). For the infinite range cases $V^{C}(x)$ (4) and $V^{D}(x)$ (5), the RK-integration is to be carried up to (say) $x=D$ such that $V(D)$ is very small. Similarly, on the other side, the RK-integration is to be carried up to $x=-d-w_w$ in the case of $V^F(x)$. In the case of $V^{C}(x)$ (4) and $V^{D}(x)$ (5) we integrate up to (say) $x=-D$.
Let us denote the end values $\psi_1(-d-w_w), \psi_2(-d-w_w), \psi_1^\prime(-d-w_w), \psi_2^\prime(-d-w_w)$ as $\psi_1, \psi_2,\psi_1^\prime,\psi_2^\prime$, respectively. The end values $\psi_1(w_b), \psi_2(w_b), \psi_1^\prime(w_b), \psi_2^\prime(w_b)$ are denoted as $\phi_1, \phi_2, \phi_1^\prime, \phi_2^\prime$, respectively.
RK-integration is a step-by-step method wherein the calculated value of the function, $\psi(x)$, and its slope (momentum) $\psi^\prime(x)$ at one step serve as initial values for the next step. This suits quantal calculations wherein the wavefunction and its derivative must match everywhere in the domain of the potential. Importantly, it then does not matter whether the potential is continuous or has a finite jump discontinuity at one or more points in its domain. We finally write the solution of Eq.~(6) as \begin{eqnarray} &&\psi(x)=A e^{ikx}+ B e^{-ikx},\quad -\infty < x \le -d-w_w \nonumber \\ &&\psi(x)=C_1 \psi_1(x)+ C_2 \psi_2(x),\quad -d-w_w < x \le w_b \nonumber \\ &&\psi(x)=F e^{ikx},\quad x > w_b. \end{eqnarray} In the case of $V^{C}(x)$ (4) and $V^{D}(x)$ (5), the distances $-d-w_w$ and $w_b$ will be replaced by $-D$ and $D$, respectively. Next, by matching $\psi(x)$ and ${d\psi(x) \over dx}$ at these points we get \begin{eqnarray} &&A e^{-ik(d+w_w)}+ B e^{ik(d+w_w)}= C_1 \psi_1 + C_2 \psi_2 \nonumber \\ &&ik(A e^{-ik(d+w_w)}- B e^{ik(d+w_w)})= C_1 \psi_1^\prime + C_2 \psi_2^\prime \nonumber \\ &&C_1 \phi_1 + C_2\phi_2= F e^{ikw_b} \nonumber \\ &&C_1 \phi_1^\prime + C_2 \phi_2^\prime = ik F e^{ikw_b}. \end{eqnarray} Solving Eqs.~(15), we get \begin{eqnarray} &&{B \over A}=-e^{-2ik(d+w_w)} {(\phi_1^\prime-ik \phi_1)(\psi_2^\prime-ik \psi_2)-(\phi_2^\prime-ik\phi_2)(\psi_1^\prime-ik \psi_1) \over (\phi_1^\prime-ik \phi_1)(\psi_2^\prime+ik\psi_2)-(\phi_2^\prime-ik\phi_2)(\psi_1^\prime+ik\psi_1)},\\ \nonumber &&{F\over A}=-{2ik e^{-ik(d+w_w+w_b)} \over (\phi_1^\prime -ik \phi_1) (\psi_2^\prime + ik \psi_2)-(\phi_2^\prime-ik\phi_2)(\psi_1^\prime + ik \psi_1)}. \end{eqnarray} Here we have used the constancy of the Wronskian $[\phi_1\phi_2^\prime-\phi_1^\prime \phi_2]=W[\phi_1,\phi_2]=1$. The transmitivity (transmission probability) of the total NWAFB is then given by $T(E)=|F/A|^2$ with $F/A$ as in the above equation.
This may be denoted fully as \begin{equation} T(E)=T(v_w, v_b, w_w, w_b, d, E), \quad T_b(E)=T(v_w=0, v_b, w_w, w_b, d, E), \end{equation} where $T_b(E)$ denotes the transmitivity of the (unperturbed) barrier and $v_w,w_w$ and $d$ may be taken to act as perturbation parameters.
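The recipe of Eqs.~(11)-(16) condenses into a short program. Below is a minimal Python sketch, assuming units $2m=\hbar^2=1$ (so Eq.~(6) reads $\psi''+(E-V(x))\psi=0$ and $k=\sqrt{E}$); for brevity the two independent solutions are launched from the left end of the support instead of from $x=0$: any pair with unit Wronskian serves in Eq.~(16), and $T(E)$ depends only on the modulus:

```python
import math

def transmitivity(E, V, x_left, x_right, n=4000):
    """T(E) for a potential V(x) vanishing outside [x_left, x_right],
    units 2m = hbar^2 = 1.  Two independent solutions are integrated with
    classical RK4 and matched to plane waves at the two ends."""
    k = math.sqrt(E)
    h = (x_right - x_left) / n

    def rhs(x, y, z):                      # y = psi, z = dpsi/dx
        return z, -(E - V(x)) * y

    ends = []
    for y, z in ((1.0, 0.0), (0.0, 1.0)):  # initial data at the left end
        x = x_left
        for _ in range(n):                 # one classical RK4 step per pass
            k1y, k1z = rhs(x, y, z)
            k2y, k2z = rhs(x + h / 2, y + h * k1y / 2, z + h * k1z / 2)
            k3y, k3z = rhs(x + h / 2, y + h * k2y / 2, z + h * k2z / 2)
            k4y, k4z = rhs(x + h, y + h * k3y, z + h * k3z)
            y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
            z += h * (k1z + 2 * k2z + 2 * k3z + k4z) / 6
            x += h
        ends.append((y, z))
    (f1, f1p), (f2, f2p) = ends            # phi_i and phi_i' at the right end
    # left-end values are the initial data: psi_1=1, psi_1'=0, psi_2=0, psi_2'=1
    den = (f1p - 1j * k * f1) * 1.0 - (f2p - 1j * k * f2) * (1j * k)
    return abs(2j * k / den) ** 2
```

As a check, for a rectangular barrier of height $v_b=5$ and width $w_b=1$ at $E=2$ the result agrees to high accuracy with the textbook formula $T=[1+v_b^2\sinh^2(\kappa w_b)/(4E(v_b-E))]^{-1}$, $\kappa=\sqrt{v_b-E}$.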
\section{Results and discussions} Using Eq.~(16), we calculate the transmitivity of the various analytically intractable models given in section III. Let us discuss the NWAFB represented by $V^F(x)$ in Eq.~(3). Figs.~3 presents $T(E)$ and $T_b(E)$ when $V_w(x)$ is a rectangular well in $V^F(x)$ (see the dotted well in Figs.~1(b)). The form of the barrier is fixed as $V_b(x)=v_b xe^{-x^2}$ with $v_b=11.5$, which gives an effective height $v_m$ of about 5 units. In Figs.~3(a), we see only marginal excursions in $T(E)$ when the well is shallow, wide and distant. When the well is deeper but juxtaposed to the barrier ($d=0$), the frequency of oscillations decreases (see Figs.~3(b)). When the well is away from the barrier, $T(E)$ is more oscillatory; compare Figs.~3(b) with Figs.~3(c). When the depth of the well is increased to 10 units ($v_w > v_m$), the amplitude of the oscillations increases (see Figs.~3(d)). In NWAFB the essence is that the oscillations in $T(E)$ are seen riding around $T_b(E)$. In other words, the well induces oscillations in the transmitivity of the adjacent barrier. We would like to remark that the piecewise constant potential mentioned in Ref.\cite{8} (see Eq.~(22) there) can now be seen as a NWAFB of the type (3), wherein both the well and the barrier are square (rectangular) and $T(E)$ is oscillatory (see Fig.~5 there).
Next we study a parabolic well in $V^F(x)$ (3). In Figs.~4(a), this time we find that the well depth has to be comparable to the barrier height of 5 units to change $T(E)$ appreciably when compared to $T_b(E)$. The effect of an increase in the depth of the well can be seen to enhance the amplitude of the oscillations in $T(E)$ by comparing Figs.~4(a) with Figs.~4(c). $T(E)$ in Figs.~4(b) is less oscillatory as compared to that in Figs.~4(c) because the well and barrier are juxtaposed to each other with $d=0$. So in this model too the energy oscillations occurring in $T(E)$ are due to an increase in the width or depth of the well or its distance from the barrier. However, these oscillations are less prominent than those of the rectangular potential model seen in Figs.~3. The general feature of the NWAFB of the type $V^F(x)$ (3), that the transmitivity is more oscillatory when a thinner well lies away from the barrier, is well demonstrated when one compares Figs.~4(c) and Figs.~4(d).
The oscillations in the transmitivity of the rectangular and parabolic models of NWAFB (3) may be attributed \cite{7} to their finite range (finite support) and also to the distance $d$ over which the potential, being zero, allows the interference of plane waves. Further, the prominence of the oscillations in $T(E)$ of the rectangular model lies in the fact that rectangular potential wells or barriers are the most localized profiles between two points among all profiles of finite support\cite{9}.
Fig.~5 displays the qualitatively similar oscillatory transmitivity when quite thin wells ($w_w=0.4$) are used in the NWAFB of the type given by $V^F(x)$ in Eq.~(3). The depths of the wells and their distances from the barrier are fixed as $v_w=10$ and $d=5$, respectively. The wells taken here are rectangular, parabolic, Gaussian, and triangular (see the line below Eq.~(3)). From Fig.~5 we conclude that quite thin wells, despite being away from the barrier, can induce prominent oscillations in $T(E)$ provided they are sufficiently deep; if not so deep, the amplitude of the oscillations will be small.
Now we study two more modifications of NWAFB which are made up of scattering potentials of infinite range. These are $V^{C}(x)$ (4) and $V^{D}(x)$ (5). In the case of $V^{C}(x)$ (see the solid curve in Figs.~1(c)), when the well and the barrier are juxtaposed continuously at $x=0$, we find (see Figs.~6(a)) that if the well is strong it reduces the transmitivity and then increases it only marginally at energies below the barrier. At energies above the barrier height the changes are inappreciable. In the discontinuous case (see the dashed curve in Figs.~1(c)), we find that the hidden well reduces $T(E)$ over all (below and above the barrier) energies (see Figs.~6(b)). This is the characteristic feature of the potential being discontinuous at a point ($x=0$), as the well and the barrier are juxtaposed there in a discontinuous way, as in the case of a simple potential step\cite{2,3,4}. Also, the well reduces the transmitivity of the barrier appreciably only if it is strong (e.g., $v_w > v_b$). We have confirmed the absence of energy oscillations in these two models by varying $v_w$ and $v_b$ over wide ranges. Moreover, in this regard the exact analytic expression \cite{8} for $T(E)$ of the Scarf II potential ($V(x)=V_0 \tanh x ~{\rm sech} x$) readily testifies to the non-oscillatory behaviour of NWAFB of the type (4) as a function of energy \begin{equation} T(E) =\frac {\sinh^2 2\pi k}{[(\cosh 2\pi k +\cos 2 \pi p) (\cosh 2\pi k+ \cosh 2\pi q)]}, \end{equation} with $k=\sqrt{E/\Delta}$, $p=\mbox{Re}\,(\sqrt{1/4+iV_0/\Delta})$, $q=\mbox{Im}\,(\sqrt{1/4+iV_0/\Delta})$, and $\Delta= \hbar^2/(2ma^2)$.
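Eq.~(18) is simple to evaluate numerically; the Python sketch below transcribes it directly, taking $p$ and $q$ as the real and imaginary parts of $\sqrt{1/4+iV_0/\Delta}$ as above, and exhibits the smooth, monotonic rise of $T(E)$ towards unity with no oscillations:

```python
import cmath
import math

def scarf2_T(E, V0, Delta=1.0):
    """Transmitivity of the Scarf II potential V(x) = V0 tanh(x) sech(x),
    transcribed from Eq. (18); Delta = hbar^2 / (2 m a^2)."""
    k = math.sqrt(E / Delta)
    s = cmath.sqrt(0.25 + 1j * V0 / Delta)
    p, q = s.real, s.imag
    return math.sinh(2 * math.pi * k) ** 2 / (
        (math.cosh(2 * math.pi * k) + math.cos(2 * math.pi * p))
        * (math.cosh(2 * math.pi * k) + math.cosh(2 * math.pi * q)))
```

A scan of $E$ for, say, $V_0=5$ and $\Delta=1$ shows $0<T(E)\le 1$ increasing steadily with energy, in contrast with the oscillatory $T(E)$ of the separated well-barrier models.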
However, in the above models $V^{C}(x)$ (4) and $V^{D}(x)$ (5), if the well and the barrier are separated by a distance $d$, the transmittivity again acquires oscillations. We would like to emphasize that it is the separation between the well and the barrier that plays the crucial role in causing energy-excursions (oscillations) in $T(E)$ with respect to $T_b(E)$.
Figs.~6(c,d) demonstrate that in the case of the single-piece NWAFB (1) with $v_b=5$ and $d=8$, a very deep well $(v_w=2000)$ is required to obtain even small excursions in $T(E)$ with respect to $T_b(E)$. Appreciable energy oscillations can be seen in $T(E)$ only if the well is much deeper ($v_w=5000$). This feature is surprising in view of the fact that the NWAFB of the types (Eqs.~(2,3)) in Figs.~2-5 have displayed pronounced energy oscillations even when $v_w$ is twice $v_b$ or even less than $v_b$.
In all the results presented in Figs.~2-6 (see the dotted curves), the general trend of $T(E)$ in a NWAFB is determined by the barrier, irrespective of the strength of the well. Broadly, three types of NWAFB (Eqs.~(1-3), see Fig.~1) entailing a single well and a single barrier are possible. However, one has choices of the profiles for the well and the barrier in them. Apart from the results for the various profiles presented here in Figs.~2-6, we have also studied several other profiles and explored various parametric regimes in all three types of NWAFB to confirm the findings presented here.
\section{Conclusions} Since transmission through a barrier is a phenomenon of the positive-energy continuum, we conclude that the well (at negative energies) essentially causes energy-excursions (ripples or oscillations) in the transmittivity of the barrier. However strong the well may be, the trend of the transmittivity as a function of energy is determined only by the barrier. Ordinarily, the finite support (range) of the well may also be invoked\cite{7} to explain energy oscillations in the transmittivity. In this regard, the energy-oscillations in the transmittivity of the one-piece smooth potential (1) of infinite range found here are unexpected; however, they require the well depth to be extremely large (see Fig.~6(d)). The separation between the well and the barrier is a {\it sufficient}, if not {\it necessary}, condition for the appearance of oscillations in the transmittivity. When the well and the barrier are separated, the potential in the intermediate region is zero. This gives scope for destructive and constructive interference of plane waves, and hence the frequency of energy-oscillations in the transmittivity increases. However, if one views the well as a perturbation of the barrier, then the enhanced oscillations in $T(E)$ despite the well being distant are paradoxical. The infinite-range well and barrier, if joined at a point with no separation ($d=0$) between them, do not display energy-oscillations in the transmittivity until they are separated.
The energy-oscillations in the transmittivity at energies below the barrier are a novelty, because usually the transmittivity is found\cite{7,8,9,10} to be oscillatory at energies above the barrier.
The transmittivity of various potential systems which converge asymptotically $(x \rightarrow \pm \infty)$ to zero or to a constant value, and which are either continuous or entail finite jump discontinuities, can be found using Eq.~(16) presented here. In this article we have presented the first and hopefully an exhaustive study of transmission through a non-overlapping well adjacent to a finite barrier. We hope that this investigation will be found pedagogically valuable.
\appendix \numberwithin{equation}{section} \section{The Runge-Kutta method}
The Runge-Kutta\cite{6} solutions of the coupled first order equations \begin{equation} y^\prime=f(x,y,z), \qquad z^\prime=g(x,y,z), \end{equation} are obtained as $y_1,y_2,y_3,\ldots,y_n$ and $z_1,z_2,z_3,\ldots,z_n$ starting with the initial values $y_0,z_0$ using the following equations. \begin{eqnarray} &&y_{n+1}=y_n+{h \over 6} [k_1+2k_2+2k_3+k_4], \quad z_{n+1}=z_n+{h \over 6}[m_1+2m_2+2m_3+m_4],~ n \ge 0,~h={D \over n}\nonumber \\ &&k_1=f(x_n,y_n,z_n), \quad m_1=g(x_n,y_n,z_n)\nonumber \\ &&k_2=f(x_n+h/2,y_n+h k_1/2,z_n+h m_1/2), \quad m_2=g(x_n+h/2,y_n+h k_1/2,z_n+h m_1/2) \nonumber \\ &&k_3=f(x_n+h/2,y_n+h k_2/2,z_n+h m_2/2), \quad m_3=g(x_n+h/2,y_n+h k_2/2,z_n+h m_2/2) \nonumber \\ &&k_4=f(x_n+h,y_n+h k_3, z_n+h m_3), \quad m_4=g(x_n+h,y_n+h k_3, z_n+h m_3). \end{eqnarray} When we solve these equations with $y_0=1,z_0=0$, we obtain $\psi_1(x)$ and $\psi_1^\prime(x)$; with the starting values $y_0=0,z_0=1$ we obtain $\psi_2(x)$ and $\psi_2^\prime(x)$.
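For illustration, the scheme above can be implemented in a few lines; the Python sketch below applies it to the zero-potential test case $\psi''=-k^2\psi$ with $y=\psi$, $z=\psi'$, for which the solution with $y_0=1,z_0=0$ is $\psi(x)=\cos kx$. The step size and the value of $k$ are arbitrary choices for the test.

```python
import math

def rk4_step(f, g, x, y, z, h):
    """One fourth-order Runge-Kutta step for the coupled system
    y' = f(x, y, z), z' = g(x, y, z)."""
    k1, m1 = f(x, y, z), g(x, y, z)
    k2 = f(x + h/2, y + h*k1/2, z + h*m1/2)
    m2 = g(x + h/2, y + h*k1/2, z + h*m1/2)
    k3 = f(x + h/2, y + h*k2/2, z + h*m2/2)
    m3 = g(x + h/2, y + h*k2/2, z + h*m2/2)
    k4 = f(x + h, y + h*k3, z + h*m3)
    m4 = g(x + h, y + h*k3, z + h*m3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4), z + h/6*(m1 + 2*m2 + 2*m3 + m4)

# Test case: zero potential, psi'' = -k^2 psi, with y = psi and z = psi'.
# Starting from y0 = 1, z0 = 0 the exact solution is psi(x) = cos(k x).
kk = 2.0
f = lambda x, y, z: z
g = lambda x, y, z: -kk**2 * y
x, y, z, h = 0.0, 1.0, 0.0, 0.001
for _ in range(1000):              # integrate up to x = 1
    y, z = rk4_step(f, g, x, y, z, h)
    x += h
assert abs(y - math.cos(kk * x)) < 1e-8
```

The same routine, with $g$ replaced by $g(x,y,z)=2m(V(x)-E)y/\hbar^2$, produces the wave functions $\psi_{1,2}(x)$ used in the text.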
\begin{figure}
\caption{The schematic depiction of various NWAFB. (a): single piece smooth potential (1), (b) $V^F(x)$ (3): parabolic well (dashed line), rectangular well (dotted line) and very thin rectangular well near a barrier, (c) $V^C(x)$ (4): a smooth well continuously juxtaposed to a barrier (solid line) and $V^D(x)$ (5): a smooth well discontinuously juxtaposed to a barrier.}
\end{figure}
\begin{figure}
\caption{The solid line represents the transmittivity, $T(E)$, of the delta potential model $(V^{\delta})$ of NWAFB (2). The dotted curve represents the transmittivity, $T_b(E)$, of the barrier alone. We fix the barrier height $V_b=5$ and take (a): $v_w=1,d=3$, (b): $v_w=1, d=1$, (c): $v_w=5,d=3$, (d): $v_w=5, d=1$.}
\end{figure}
\begin{figure}
\caption{The same as in Fig.~2 for the NWAFB of the type $V^F(x)$ (3). Here the barrier $V_b(x)=v_b x e^{-x^2}$, $v_b=11.5$, is perturbed by a rectangular (square) well. The effective height of the barrier $v_m$ is approximately 5 units. We have taken (a): $v_w=1, d=5, w_w=5$, (b): $v_w=10, d=0, w_w=5$, (c): $v_w=10, d=5, w_w=5$, (d): $v_w=10, d=5, w_w=1$.}
\end{figure}
\begin{figure}
\caption{The same as in Fig.~2 for the NWAFB of the type $V^F(x)$ (3). Here, in general, the energy-oscillations in $T(E)$ are present but less prominent than those in Fig.~2. The same barrier ($V_b$) is now perturbed by a parabolic well of finite range. We take (a): $v_w=5, d=5, w_w=5$, (b): $v_w=10, d=0, w_w=5$, (c): $v_w=10, d=5, w_w=5$, (d): $v_w=10, d=5, w_w=1$.}
\end{figure}
\begin{figure}
\caption{$T(E)$ (solid lines) and $T_b(E)$ (dotted curve) for various NWAFB of the type $V^F(x)$ (3) when the wells are quite thin ($w_w=0.4$). We have $v_w=10, d=5$. The wells used in Eq.~(3) are rectangular, parabolic, Gaussian, and triangular (see the text below Eq.~(3)). Thin wells away from the barrier give rise to qualitatively similar transmittivity which is oscillatory. This is an essential feature of the NWAFB of the types in Eqs.~(2,3).}
\end{figure}
\begin{figure}
\caption{Transmittivity, $T(E)$, for (a): the continuous (4) and (b): the discontinuous (5) models; the dotted line $(v_w=5)$, thin solid line $(v_w=10)$ and thick solid line $(v_w=15)$. Figs.~(c,d) represent the transmittivities for the single-piece smooth NWAFB (1). For a fixed distance ($d=8$) between the well and the barrier and $v_b=5$, Fig.~(c) shows only small excursions in $T(E)$, and only when the well is very deep ($v_w=2000$). In Fig.~(d), significant oscillations in $T(E)$ require the even higher value $v_w=5000$.}
\end{figure}
\end{document} |
\begin{document}
\title{Recurrence and Transience\\ of Frogs with Drift on $\mathbb{Z}^d$} \author{Christian D\"obler, Nina Gantert, Thomas H\"ofelsauer, \\Serguei Popov and Felizitas Weidner} \maketitle
\begin{abstract} We study the frog model on $\mathbb{Z}^d$ with drift in dimension $d \geq 2$ and establish the existence of transient and recurrent regimes depending on the transition probabilities. We focus on a model in which the particles perform nearest neighbour random walks which are balanced in all but one direction. This gives a model with two parameters. We present conditions on the parameters for recurrence and transience, revealing interesting differences between dimension $d=2$ and dimension $d \geq 3$. Our proofs make use of (refined) couplings with branching random walks for the transience, and with percolation for the recurrence.
\noindent \textbf{Keywords:} frog model, interacting random walks, recurrence, transience, branching random walk, percolation.
\noindent \textbf{AMS 2000 subject classification:} primary 60J10, 60K35; secondary 60J80 \end{abstract}
\section{Introduction and main results} The frog model is a model of interacting random walks or, more generally, Markov chains on a graph $G=(V,E)$ in discrete time $\mathbb{N}_0$. It can be described as follows: There is one distinguished vertex $x_0\in V$, called the origin, and at time $0$ there is exactly one active particle (awake frog) at $x_0$. At every other vertex $x$, there is a (possibly zero) number $\eta_x$ of sleeping frogs. The frog at $x_0$ now starts walking randomly on the graph and each time it visits a site with sleeping frogs, they immediately become active and start performing random walks and waking up sleeping frogs themselves, independently of each other and of all other frogs. The transition mechanism of the individual frogs is the same for all frogs. The frog model is called recurrent if the probability that the origin $x_0$ is visited infinitely often equals $1$, otherwise the model is called transient. The frog model with $V=\mathbb{Z}^d$, $E$ the set of nearest-neighbour edges on $\mathbb{Z}^d$, $x_0:=0$, $\eta_x=1$ for each $x\in\mathbb{Z}^d\setminus\{0\}$ and the underlying random walk being simple random walk (SRW) on $\mathbb{Z}^d$ was studied by Telcs and Wormald \cite{TW99} who, however, called it egg model. The name frog model was only later suggested by Durrett. In \cite{TW99}, it is in particular shown that the frog model is recurrent for each dimension $d$. See also \cite{P01}. Note that the frog model on $\mathbb{Z}^d$ with SRW is trivially recurrent for $d=1,2$, due to P\'{o}lya's theorem. Thus, in \cite{GS09} Gantert and Schmidt considered the frog model on $\mathbb{Z}$ with the underlying random walk having a drift to the right. They considered both fixed and i.i.d.~random initial configurations $(\eta_x)_{x\in\mathbb{Z}\setminus\{0\}}$ of sleeping frogs and derived a criterion separating transience from recurrence. 
In the case of an i.i.d.~initial configuration of sleeping frogs they also proved a zero-one law, which says that the probability of infinitely many returns to $0$ equals $1$ if $\mathbb{E}[\log^+(\eta)]=\infty$, and equals $0$ otherwise. Remarkably, this result only depends on the distribution of $\eta$ and does, in particular, not depend on the value of the drift. The recurrence part of the latter result was generalised to any dimension~$d$ by D\"obler and Pfeifroth in \cite{DP14}. They proved that the frog model on $\mathbb{Z}^d$ with underlying (irreducible) random walk which has an arbitrary drift to the right is recurrent provided that $\mathbb{E}[\log^+(\eta)^{\frac{d+1}{2}}]=\infty$. Another sufficient recurrence condition involving the tail behaviour of $\eta$ is derived in \cite{KZ17}. Kosygina and Zerner proved in \cite{KZ17} a zero-one law under the general condition that the frog trajectories are given by a transitive Markov chain. Recurrence and transience for the frog model on the $d$-ary tree have recently been investigated in \cite{HJJ14} and \cite{HJJ16} by Hoffman, Johnson and Junge. Other publications on the frog model include \cite{AMP02}, \cite{FMS04}, \cite{GNR17}, \cite{HW16}, \cite{JJ16}, \cite{JJ16sto} and \cite{R17}, and references therein (the list is not exhaustive).
In the present article we study recurrence and transience of the frog model on $\mathbb{Z}^d$ for $d \geq 2$ when the underlying transition mechanism is not symmetric. We assume that at each vertex in $\mathbb{Z}^d\setminus\{0\}$ there is exactly one sleeping frog at time $0$. Given this assumption, and using the zero-one law proved in \cite{KZ17}, one can now classify the transition laws of the particles into a recurrent and a transient class. Our proofs show that both regimes exist. In order to give more quantitative statements, we focus on a model in which the particles perform nearest neighbour random walks which are balanced in all but one direction. More precisely, set $\mathcal{E}_d=\{\pm e_j \colon 1\leq j\leq d\}$ where $e_j$ denotes the $j$-th standard basis vector in $\mathbb{R}^d$, $j=1,\dotsc,d$. The particles move according to the following transition probabilities, which depend on two parameters $w \in [0,1]$ and $\alpha \in [0,1]$: \begin{equation}\label{transition_function}
\pi_{w,\alpha}(e) =
\begin{cases}
\frac{w(1+\alpha)}{2} & \text{for $e=e_1$} \\
\frac{w(1-\alpha)}{2} & \text{for $e=-e_1$} \\
\frac{1-w}{2(d-1)} & \text{for $e \in\{\pm e_2,\dotsc, \pm e_d\}$}
\end{cases} \end{equation} The parameter $w$ is the weight of the drift axis $e_1$, i.e.~the random walk chooses to go in direction $\pm e_1$ with probability $w$. The parameter $\alpha$ describes the strength of the drift, i.e.~if the random walk has chosen to move in drift direction, it takes a step in direction $e_1$ with probability $\frac{1+\alpha}{2}$ and in direction $-e_1$ with probability $\frac{1-\alpha}{2}$. All other directions are balanced and equally probable. Sometimes we need to consider the corresponding one-dimensional model where we have to demand $w=1$, i.e.~the transition probabilities are defined by $\pi_{\alpha}(e_1)=1-\pi_{\alpha}(-e_1)=\frac{1 + \alpha}{2}$. We denote the frog model on $\mathbb{Z}^d$ with parameters $w$ and $\alpha$ by $\operatorname{FM}(d,\pi_{w,\alpha})$.
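For concreteness, the transition function \eqref{transition_function} can be written down and sanity-checked in a few lines of Python. Exact rational arithmetic is used only to make the checks exact; the parameter values are arbitrary illustrations.

```python
from fractions import Fraction

def pi_wa(e, w, alpha, d):
    """Transition probability pi_{w,alpha}(e) for a nearest-neighbour step e,
    encoded as (axis, sign) with axis in {1,...,d} and sign in {+1,-1}."""
    axis, sign = e
    if axis == 1:                       # the drift axis e_1
        return w * (1 + sign * alpha) / 2
    return (1 - w) / (2 * (d - 1))      # balanced in all other directions

# Illustrative parameter values.
d, w, alpha = 3, Fraction(1, 2), Fraction(1, 4)
steps = [(j, s) for j in range(1, d + 1) for s in (1, -1)]
probs = {e: pi_wa(e, w, alpha, d) for e in steps}

assert sum(probs.values()) == 1                      # a proper distribution
assert probs[(1, 1)] + probs[(1, -1)] == w           # weight of the drift axis
assert probs[(1, 1)] - probs[(1, -1)] == w * alpha   # net drift in direction e_1
```

The three assertions verify exactly the interpretation of $w$ and $\alpha$ given above.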
First, let us discuss the extreme cases. For $w=1$ the frog model is one-dimensional and thus transient for any $\alpha \in (0,1]$ and recurrent for $\alpha=0$ by \cite{GS09}. For $\alpha =1$ one easily checks that it is transient for any $w \in (0,1]$. If $w=0$, then $\operatorname{FM}(d,\pi_{0, \alpha})$ is equivalent to the symmetric frog model in $d-1$ dimensions and hence recurrent. If $\alpha =0$, we are back in the balanced case and the model is recurrent. This follows from Theorem~\ref{thm_d=2_arbitrary_weight_i} and Theorem~\ref{thm_d>2_arbitrary_weight} below.
In dimension $d=2$ the frog model is recurrent whenever $\alpha$ or $w$ are sufficiently small, i.e.~if the underlying transition mechanism is almost balanced. It is transient for $\alpha$ or $w$ close to $1$.
\begin{thm}\label{thm_d=2_arbitrary_weight}
Let $d =2$ and $w \in (0,1)$.
\begin{enumerate}
\item\label{thm_d=2_arbitrary_weight_i} There exists $\alpha_r = \alpha_r(w) > 0$ such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $0 \leq \alpha \leq \alpha_r$.
\item\label{thm_d=2_arbitrary_weight_ii} There exists $\alpha_t = \alpha_t(w) < 1$ such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is transient for all $\alpha_t \leq \alpha \leq 1$.
\end{enumerate} \end{thm}
\begin{thm}\label{thm_d=2_arbitrary_drift}
Let $d=2$ and $\alpha \in (0,1)$.
\begin{enumerate}
\item\label{thm_d=2_arbitrary_drift_i} There exists $w_r = w_r(\alpha) > 0$ such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $0 \leq w \leq w_r$.
\item\label{thm_d=2_arbitrary_drift_ii} There exists $w_t = w_t(\alpha) < 1$ such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is transient for all $w_t \leq w \leq 1$.
\end{enumerate} \end{thm}
In dimension $d \geq 3$ the frog model is also recurrent if the transition probabilities are almost balanced. Further, for any fixed drift parameter $\alpha \in (0,1]$ it is transient if the weight $w$ is close to $1$. However, in contrast to $d=2$, for fixed $w \in [0,1)$ there is not always a transient regime. This follows from Theorem~\ref{thm_d>2_arbitrary_drift_i} below.
\begin{thm}\label{thm_d>2_arbitrary_weight}
Let $d \geq 3$ and $w \in (0,1)$.
There exists $\alpha_r = \alpha_r(d,w) > 0$ such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $0 \leq \alpha \leq \alpha_r$. \end{thm}
\begin{thm}\label{thm_d>2_arbitrary_drift}
Let $d\geq 3$ and $\alpha \in (0,1)$.
\begin{enumerate}
\item\label{thm_d>2_arbitrary_drift_i} There exists $w_r > 0$, independent of $d$ and $\alpha$, such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $0 \leq w \leq w_r$.
\item\label{thm_d>2_arbitrary_drift_ii} There exists $w_t = w_t(\alpha) < 1$, independent of $d$, such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is transient for all $w_t \leq w \leq 1$.
\end{enumerate} \end{thm}
The results are graphically summarised in Figure~\ref{phase_diagram}. Note that the above theorems only make statements about the existence of recurrent, respectively transient regimes. We do not describe their shapes, as might be suggested by the curves depicted in Figure~\ref{phase_diagram}. For a discussion about their shape we refer the reader to Conjecture~\ref{con_critical_curve} at the end of this paper.
\begin{figure}
\caption{Phase diagram for the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$: the recurrent regime is marked by \ref{recurrent}, the transient one by \ref{transient}.}
\label{recurrent}
\label{transient}
\label{phase_diagram}
\end{figure}
These results show that, in contrast to $d=1$, recurrence and transience do depend on the drift in every dimension $d \geq 2$. This disproves the last conjecture in \cite{GS09} that some condition on the moments of $\eta$ would separate transience from recurrence as in the one-dimensional case.
The paper is organised as follows. In Section~\ref{preliminaries} we introduce notation used throughout the article, and collect some basic facts and results about random walks, percolation and the frog model, which are needed in the proofs. The proofs of the main results are presented in Section~\ref{proofs}. Further questions and conjectures are discussed in Section~\ref{open_problems}.
\section{Preliminaries}\label{preliminaries}
\subsection*{Notation} We refer to the frog model on $\mathbb{Z}^d$ with transition probabilities $\pi$ as $\operatorname{FM}(d, \pi)$. For $w, \alpha \in [0,1]$ and every vertex $x \in \mathbb{Z}^d$ let $(S_n^x)_{n \in \mathbb{N}_0}$ be a discrete time random walk on the lattice~$\mathbb{Z}^d$ starting at $x$ which moves according to the transition function $\pi_{w,\alpha}$ given by \eqref{transition_function}. Then $(S_n^x)_{n \in \mathbb{N}_0}$ describes the trajectory of the frog initially at vertex $x$. It starts to follow this trajectory once it is activated. We assume that the set $\{(S_n^x)_{n \in \mathbb{N}_0} \colon x \in \mathbb{Z}^d\}$ of random walks is independent, i.e.~active particles do not interact. Notice that this set of trajectories entirely determines the behaviour of the frog model. A formal definition of the frog model can be found in \cite{AMP02}. Note that $\pi_{1/d, 0}$ corresponds to a simple random walk on $\mathbb{Z}^d$. We write $\pi_{\text{sym}}$ in this case.
We refer to the frog that is initially at vertex $x\in \mathbb{Z}^d$ as ``frog~$x$''. We write $x \to y$ if frog $x$ (potentially) ever visits $y$, i.e.~$y \in \{S_n^x \colon n \in \mathbb{N}_0\}$. For $x,y \in \mathbb{Z}^d$ and $A \subseteq \mathbb{Z}^d$ we say that there exists a frog path from $x$ to $y$ in $A$ and write $x \fp{A} y$ if there exist $n\in \mathbb{N}$ and $z_1, \ldots, z_n \in A$ such that $x \to z_1$, $z_i \to z_{i+1}$ for all $1\leq i < n$ and $z_n \to y$, or if $x \to y$ directly. Note that $x,y$ are not necessarily in $A$. Also the trajectories of the frogs $z_i$, $1 \leq i \leq n$, do not need to be in $A$. For $x \in \mathbb{Z}^d$ we call the set \begin{equation}\label{def_frog_cluster} F\hspace{-0.04em}C_x=\bigl\{y \in \mathbb{Z}^d \colon x \fp{\mathbb{Z}^d} y\bigr\} \end{equation} the frog cluster of $x$. Note that, if frog $x$ ever becomes active, then every frog $y \in F\hspace{-0.04em}C_x$ is also activated. Observe that, as we only deal with recurrence and transience, the exact activation times are not important, but we are only interested in whether or not a frog is activated.
Further, we often use $(d-1)$-dimensional hyperplanes $H_n$ in $\mathbb{Z}^d$ defined by \begin{equation}\label{definition_hyperplane} H_n := \{x \in \mathbb{Z}^d \colon x_1=n\} \end{equation} for $n \in \mathbb{Z}$.
\subsection*{Some facts about random walks}
We need to deal with hitting probabilities of random walks on $\mathbb{Z}^d$. For $x,y \in \mathbb{Z}^d$ recall that $\{x\to y\}$ denotes the event that the random walk started at $x$ ever visits the vertex $y$. Analogously, for $A\subseteq \mathbb{Z}^d$ we write $\{x \to A\}$ for the event that the random walk started at $x$ ever visits a vertex in $A$.
\begin{lemma}\label{lemma_hitting_probability_SRW} For $d \geq 3$ and $w \in (0,1)$ consider a random walk on $\mathbb{Z}^d$ with transition function~$\pi_{w,0}$. There exists a constant $c=c(d,w) >0$ such that for all $x\in \mathbb{Z}^d$ \begin{equation*} \mathbb{P}(0\to x) \geq c \lVert x \rVert_2^{-(d-2)}, \end{equation*} where $\lVert x \rVert_2 = \bigl(\sum_{i=1}^{d} x_i^2\bigr)^{1/2} $ is the Euclidean norm. \end{lemma}
A proof of the lemma for the simple random walk, i.e.~with transition function $\pi_{\text{sym}}$, can e.g.~be found in \cite[Theorem~2.4]{AMP02} and \cite[Lemma~2.4]{AMP02pt}. The proof can immediately be generalised to our set-up using \cite[Theorem~2.1.3]{LL10}.
\begin{lemma}\label{lemma_hitting_probability_RW_drift} For $d \geq 1$ and $\alpha,w \in (0,1)$ consider a random walk on $\mathbb{Z}^d$ with transition function~$\pi_{w,\alpha}$. Then for each $\gamma > 0$ there is a constant $c = c(d, \gamma, w, \alpha) >0$ such that for all $n \in \mathbb{N}$ and $x \in \mathbb{Z}^d$ with $x_1=-n$ and $\lvert x_i \rvert \leq \gamma\sqrt{n}$, $2 \leq i \leq d$, it holds that \begin{equation*}
\mathbb{P}(x \to 0) \geq c n^{-(d-1)/2}. \end{equation*} \end{lemma}
For a proof see e.g.~\cite[Lemma 3.1]{DP14}.
\begin{lemma}\label{lemma_hitting_probability_hyperplane}
For $d \geq 1$ and $\alpha,w \in (0,1]$ consider a random walk on $\mathbb{Z}^d$ with transition function~$\pi_{w,\alpha}$. Then for every $n \in \mathbb{N}$ and $H_{-n}$ as defined in \eqref{definition_hyperplane}
\begin{equation*}
\mathbb{P}(0\to H_{-n}) = \Bigl(\frac{1-\alpha}{1+\alpha}\Bigr)^n.
\end{equation*} \end{lemma}
\begin{proof}
As $\mathbb{P}(0\to H_{-n}) = \mathbb{P}(0\to H_{-1})^n$ for $n \in \mathbb{N}$, it suffices to prove the lemma for $n =1$. By the Markov property
\begin{equation*}
\mathbb{P}(0\to H_{-1}) = \frac{1-\alpha}{2} + \frac{1+\alpha}{2} \mathbb{P}(0\to H_{-2}).
\end{equation*}
The result follows after a straightforward calculation. \end{proof}
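As an informal sanity check (not part of the proof), the formula of the lemma can be compared with a Monte Carlo simulation of the $e_1$-coordinate of the walk, which is the only coordinate relevant for hitting $H_{-n}$. The truncation at a far upper barrier is our simplification; from there a later visit to $-n$ is astronomically unlikely when $\alpha>0$.

```python
import random

def hits_level(n, alpha, up_bar=60, rng=random):
    """Follow only the e_1-coordinate of the walk (lateral steps do not
    change it): return True if it reaches -n before the far barrier
    +up_bar, from which a later visit to -n is negligibly likely."""
    x = 0
    while True:
        x += 1 if rng.random() < (1 + alpha) / 2 else -1
        if x == -n:
            return True
        if x == up_bar:
            return False

rng = random.Random(1)
n, alpha, trials = 2, 0.5, 20_000
est = sum(hits_level(n, alpha, rng=rng) for _ in range(trials)) / trials
exact = ((1 - alpha) / (1 + alpha)) ** n   # the formula of the lemma
assert abs(est - exact) < 0.02
```

For $n=2$ and $\alpha=1/2$ the lemma gives $(1/3)^2 \approx 0.111$, which the simulation reproduces within Monte Carlo error.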
\subsection*{Some facts about percolation}
To prove recurrence we make use of the theory of independent site percolation on $\mathbb{Z}^d$ and therefore give a brief introduction here. Let $p \in [0,1]$. Every site in $\mathbb{Z}^d$ is independently of the other sites declared open with probability $p$ and closed with probability $1-p$. An open cluster is a connected component of the subgraph induced by all open sites. It is well known that for $d \geq 2$ there is a critical parameter $p_c= p_c(d) \in (0,1)$ such that for all $p > p_c$ (supercritical phase) there is a unique infinite open cluster~$C$ almost surely, and for $p<p_c$ (subcritical phase) there is no infinite open cluster almost surely. Furthermore, denoting the open cluster containing the site~$x \in \mathbb{Z}^d$ by $C_x$, it holds that $\mathbb{P}(\lvert C_x \rvert=\infty)>0$ for $p>p_c$, and $\mathbb{P}(\lvert C_x \rvert=\infty)=0$ for $p<p_c$ and all $x \in \mathbb{Z}^d$. The following lemma states that the critical probability $p_c$ is small for $d$ large.
\begin{lemma}\label{lemma_pc_high_d} For independent site percolation on $\mathbb{Z}^d$, \begin{equation*} \lim_{d \to \infty} p_c(d) = 0. \end{equation*} \end{lemma}
Indeed, $p_c(d) = O\bigl(d^{-1}\bigr)$ holds. A proof of this result can e.g.~be found in \cite[Chapter~1, Theorem~7]{BR06}. Further, in the recurrence proofs we use the fact that an infinite open cluster is ``dense'' in $\mathbb{Z}^d$. The following weak version of denseness suffices.
\begin{lemma}\label{percolation_density} Consider supercritical independent site percolation on $\mathbb{Z}^d$. There are constants $a,b>0$ such that \begin{equation*} \mathbb{P}\bigl(\lvert A \cap C_x \rvert \geq a \lvert A \rvert \bigr) > b \end{equation*} for all $A \subseteq \mathbb{Z}^d$ and $x \in \mathbb{Z}^d$. \end{lemma}
\begin{proof}
For $a>0$, $A \subseteq \mathbb{Z}^d$ and $x \in \mathbb{Z}^d$ the FKG-inequality yields \begin{align*} \mathbb{P}\bigl(\lvert A \cap C_x \rvert \geq a \lvert A \rvert \bigr) & \geq \mathbb{P}\bigl( x \in C, \ \lvert A \cap C \rvert \geq a \lvert A \rvert \bigr)\\ & \geq \mathbb{P}( x \in C) \cdot \mathbb{P}\bigl(\lvert A \cap C \rvert \geq a \lvert A \rvert \bigr). \end{align*} Note that $\gamma:=\mathbb{P}( x \in C) \in (0,1)$ (and $\gamma$ does not depend on $x$) since the percolation is supercritical. By the Markov inequality \begin{align*} \mathbb{P}\bigl(\lvert A \cap C \rvert \geq a \lvert A \rvert \bigr) & = 1 - \mathbb{P}\bigl(\lvert A \cap C^c \rvert \geq (1-a) \lvert A \rvert \bigr)\\ & \geq 1- \frac{\mathbb{E} \bigl[\lvert A \cap C^c \rvert \bigr] }{(1-a) \lvert A \rvert}\\ & = 1- \frac{1}{(1-a) \lvert A \rvert} \sum_{y \in A} \mathbb{P}(y \in C^c) \\ & = 1- \frac{1-\gamma}{1-a}>0, \end{align*} for $a$ small enough, which finishes the proof. \end{proof}
\subsection*{Some results about frogs}
As mentioned in the introduction, the frog model presented in this paper satisfies a zero-one law, which is shown in \cite[Theorem~1]{KZ17} in a more general set-up. See also Appendix~A in \cite{KZ17} for a comment on the slightly different definition of recurrence used there.
\begin{thm}[\cite{KZ17}]\label{lemma_zero_one_law} For any $d \geq 1$ and any nearest neighbour transition function $\pi$, we have for $\operatorname{FM}(d,\pi)$ that the probability that the origin is visited infinitely many times by active frogs is either $0$ or $1$. \end{thm}
Due to this zero-one law, to show recurrence, we only need to prove that the origin is visited infinitely often with positive probability.
In the symmetric frog model the set of vertices visited by active frogs, rescaled by time, converges to a convex set. This shape theorem is proven by Alves et al.~in \cite[Theorem 1.1]{AMP02} and we use it in one of the proofs concerning recurrence.
\begin{thm}[\cite{AMP02}]\label{lemma_shape_theorem} Consider $\operatorname{FM}(d,\pi_{\text{sym}})$ and let $\xi_n$ be the set of all sites visited by active frogs by time~$n$ and $\overline{\xi}_n := \{x + (-\frac12, \frac12]^d \colon x \in \xi_n\}$. Then there is a non-empty convex symmetric set $\mathcal{A}=\mathcal{A}(d) \subseteq \mathbb{R}^d$, $\mathcal{A} \neq \{0\}$, such that, for any $0 < \varepsilon < 1$
\begin{equation*}
(1- \varepsilon) \mathcal{A} \subseteq \frac{\overline{\xi}_n}{n} \subseteq (1+ \varepsilon) \mathcal{A}
\end{equation*} for all $n$ large enough almost surely. \end{thm}
\begin{remark} \label{remark_shape} The proof of Theorem~\ref{lemma_shape_theorem} goes through for the ``lazy'' version of the frog model, where in each step a frog decides to stay where it is with probability $q \in (0,1)$, independently of all other frogs. \end{remark}
Further, we need a result on the frog model with death. For $s \in [0,1]$ it is defined just as the usual frog model, but every active frog dies at every step with probability $1-s$ independently of everything else. The parameter $s$ is called the survival probability. We denote this frog model on $\mathbb{Z}^d$ by $\operatorname{FM}^*(d,\pi,s)$ if the underlying random walk has transition function $\pi$. Further, we denote frog clusters in the frog model with death by $F\hspace{-0.04em}C^*$, analogous to the notation introduced in \eqref{def_frog_cluster} for the frog model without death. In this paper we only use the frog model with death in the symmetric case, i.e. $\pi= \pi_{\text{sym}}$. We say that the frog model with death survives if at any time there is at least one active frog. The frog model with death is intensively studied in \cite{AMP02pt} and also in \cite{FMS04} and \cite{LMP05}. We need the following lemma in the proofs concerning transience.
\begin{lemma}\label{lemma_1d_fm}
For $\operatorname{FM}(1,\pi_{1,\alpha})$ with $\alpha > 0$ and $\operatorname{FM}^*(1,\pi_{\text{sym}},s)$ with $s < 1$ there is $c>0$ such that $\mathbb{P}(0 \fp{\mathbb{Z}} -n) \leq \mathrm{e}^{-cn}$ for all $n \in \mathbb{N}$. \end{lemma}
\begin{proof}
Let $p$ be the probability that a frog starting from $0$ ever hits the vertex $-1$. In both models we have $p <1$. Obviously, as $s <1$, this is true for $\operatorname{FM}^*(1,\pi_{\text{sym}},s)$. For $\operatorname{FM}(1,\pi_{1,\alpha})$ it follows from Lemma~\ref{lemma_hitting_probability_hyperplane}.
For $n \in \mathbb{N}$ define $Y_n = \lvert\{m > -n \colon m \to -n \}\rvert$ if $-n \in F\hspace{-0.04em}C_0$, respectively $-n \in F\hspace{-0.04em}C_0^*$. Otherwise set $Y_n =0$. If $-n$ is visited by active frogs, then $Y_n$ counts the number of frogs to the right of $-n$ that potentially ever reach $-n$. The process $(Y_n)_{n \in \mathbb{N}}$ is a Markov chain on $\mathbb{N}_0$ with
\begin{equation*}
Y_{n+1} =
\begin{cases}
0 & \text{if $Y_n = 0$,} \\
\operatorname{Binomial}(Y_n+1,p) & \text{if $Y_n > 0$}.
\end{cases}
\end{equation*}
Note that $\mathbb{P}(0 \fp{\mathbb{Z}} -n) = \mathbb{P}(Y_n >0)$ by definition.
A straightforward calculation shows that there is $k_0 \in \mathbb{N}$ such that $\mathbb{P}(Y_{n+1} < Y_n \mid Y_n = k) > \frac23$ for all $k \geq k_0$. Hence, we can dominate the Markov chain $(Y_n)_{n \in \mathbb{N}}$ by the Markov chain $(\widetilde{Y}_n)_{n \in \mathbb{N}}$ on $\{0, k_0, k_0+1, \ldots\}$ with transition probabilities
\begin{align*}
\mathbb{P}(\widetilde{Y}_{n+1} = l \mid \widetilde{Y}_n =k) =
\begin{cases}
\frac{1}{3} & \text{if $l=k+1$, $k > k_0$}, \\
\frac{2}{3} & \text{if $l=k-1$, $k > k_0$}, \\
(1-p)^{k_0+1} & \text{if $l=0$, $k = k_0$}, \\
1-(1-p)^{k_0+1} & \text{if $l=k+1$, $k = k_0$}, \\
1 & \text{if $l=k=0$}
\end{cases}
\end{align*}
for all $n \in \mathbb{N}$ and starting point $\widetilde{Y}_1 = \max\{Y_1, k_0\}$. Obviously, we have $\mathbb{P}(Y_n >0) \leq \mathbb{P}(\widetilde{Y}_n >0)$ for all $n \in \mathbb{N}$.
Let $T_k= \min\{n \in \mathbb{N} \colon \widetilde{Y}_n = k\}$ and $T_{k,l}=T_l - T_k$. Note that $\mathbb{P}(\widetilde{Y}_n >0) = \mathbb{P}(T_0 >n)$. For $t >0$, we apply the Markov inequality and use the strong Markov property to get
\begin{align}\label{proof_lemma_1d_fm_1}
\mathbb{P}(T_0 > n) &= \mathbb{P}\biggl(\sum_{k=k_0}^{\widetilde{Y}_1-1} T_{k+1,k} + T_{k_0,0} > n\biggr) \nonumber\\
&\leq \mathrm{e}^{-tn}\mathbb{E}\biggl[\exp\biggl(t \sum_{k=k_0}^{\widetilde{Y}_1-1} T_{k+1,k} + tT_{k_0,0}\biggr)\biggr] \nonumber\\
&= \mathrm{e}^{-tn} \sum_{l=k_0}^{\infty} \prod_{k=k_0}^{l-1} \mathbb{E}\bigl[\exp(tT_{k+1,k})\bigr] \mathbb{E}\bigl[\exp( tT_{k_0,0})\bigr]\mathbb{P}(\widetilde{Y}_1 = l) \nonumber\\
&= \mathrm{e}^{-tn} \sum_{l=0}^{\infty} \mathbb{E}\bigl[\exp(tT_{k_0+1,k_0})\bigr]^l \mathbb{E}\bigl[\exp( tT_{k_0,0})\bigr]\mathbb{P}(\widetilde{Y}_1 = l+k_0).
\end{align} $\widetilde{Y}_1$ can only be equal to $l+k_0$ if at least one frog to the right of $l-1$ reaches $-1$. Thus, \begin{equation}\label{proof_lemma_1d_fm_2}
\mathbb{P}(\widetilde{Y}_1 = l+k_0) \leq \sum_{i=l}^{\infty} p^{i+1} = p^l \frac{p}{1-p}. \end{equation} Now, we choose $t>0$ small enough such that $\mathbb{E}\bigl[\exp(tT_{k_0+1,k_0})\bigr] < p^{-1}$. Then \eqref{proof_lemma_1d_fm_2} shows that the sum in \eqref{proof_lemma_1d_fm_1} is finite, which yields the claim. \end{proof}
\subsection*{A lemma on Bernoulli random variables}
We will repeatedly use the following simple lemma. Note that the random variables in this lemma do not have to be independent.
\begin{lemma}\label{lemma_sum_rv} For $i \in \mathbb{N}$ let $X_i$ be a Bernoulli($p_i$) random variable with $\inf_{i\in \mathbb{N}}p_i =:p >0$. Then for every $a >0$ and $n \in \mathbb{N}$ \begin{equation*} \mathbb{P} \left(\frac{1}{n} \sum_{i=1}^n X_i \geq a \right) \geq p-a. \end{equation*} \end{lemma}
\begin{proof} Since $\mathbb{E}[X_i]\geq p$ and $\frac{1}{n} \sum_{i=1}^n X_i \leq 1$, we have \begin{align*} p \leq \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n X_i \right] \leq \mathbb{P} \left(\frac{1}{n}\sum_{i=1}^n X_i \geq a \right) + a , \end{align*} which yields the claim. \end{proof}
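The inequality in Lemma~\ref{lemma_sum_rv} makes no independence assumption. The following Python sketch (illustrative only, not part of the paper) checks the bound by an exact computation for a strongly dependent coupling of our choosing, namely the comonotone coupling $X_i = \mathds{1}\{U \leq p_i\}$ with one common uniform random variable $U$:

```python
import math

def prob_mean_at_least(ps, a):
    """Exact P( (1/n) * sum_i X_i >= a ) for the dependent coupling
    X_i = 1{U <= p_i} with one common uniform U on [0,1].
    Each X_i is Bernoulli(p_i); as a function of u the empirical mean
    equals #{i : p_i >= u} / n, a decreasing step function of u."""
    n = len(ps)
    k = math.ceil(a * n)  # need at least k of the events {U <= p_i}
    if k <= 0:
        return 1.0
    if k > n:
        return 0.0
    # mean >= a  iff  U <= (k-th largest p_i)
    return sorted(ps, reverse=True)[k - 1]

ps = [0.3, 0.5, 0.4, 0.9, 0.35]
p = min(ps)  # plays the role of inf_i p_i in the lemma
for a in [0.1, 0.2, 0.5, 0.8]:
    assert prob_mean_at_least(ps, a) >= p - a  # the bound of the lemma
```

Even under full dependence the exact probability never drops below $p - a$, as the lemma asserts.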
\section{Proofs}\label{proofs} In this section we prove the main results of the paper. To show recurrence we always compare the frog model with independent site percolation. To show transience we couple the frog model with branching random walks.
\subsection*{Recurrence for $d \geq 2$ and arbitrary weight} In this section we prove Theorem \ref{thm_d=2_arbitrary_weight_i} and Theorem \ref{thm_d>2_arbitrary_weight}. Throughout this section assume that $w <1$ is fixed. To illustrate the basic idea of the proof we first sketch it for $d=2$. We call a site $x$ in $\mathbb{Z}^2$ open if the trajectory $(S_n^x)_{n \in \mathbb{N}_0}$ of frog $x$ includes the four neighbouring vertices $x \pm e_1, x \pm e_2$ of $x$, i.e.~if $x \to x \pm e_1$ and $x \to x \pm e_2$. Note that for this definition it does not matter whether frog $x$ is ever activated or not. All sites are open independently of each other due to the independence of the trajectories of the frogs. Furthermore, the probability of a site to be open is the same for all sites. Consider the percolation cluster $C_0$ that consists of all sites that can be reached from $0$ by open paths, i.e.~paths containing only open sites. Note that all frogs in $C_0$ are activated as frog $0$ is active in the beginning. In this sense the frog model dominates the percolation. As we are in $d=2$, the probability of a site $x$ being open equals $1$ for $\alpha=0$ and by continuity is close to $1$ if $\alpha$ is close to $0$. Thus, if $\alpha$ is close enough to $0$ the percolation is supercritical. Hence, with positive probability the cluster $C_0$ containing the origin is infinite. By Lemma~\ref{percolation_density} this infinite cluster contains many sites close to the negative $e_1$-axis. This shows that many frogs that are initially close to this axis get activated. Each of them travels in the direction of the $e_1$-axis and has a decent chance of visiting $0$ on its way. Hence, this will happen infinitely many times. This argument shows that the origin is visited by infinitely many frogs with positive probability. Using the zero-one law stated in Theorem~\ref{lemma_zero_one_law} yields the claim.
In higher dimensions the probability of a frog to visit all its neighbours is not close to $1$ however small the drift may be. We can still make the reasoning work by using a renormalization type argument. To make this argument precise let $K$ be a non-negative integer that will be chosen later. We tessellate $\mathbb{Z}^d$ for $d \geq 2$ with cubes $(Q_x)_{x \in \mathbb{Z}^d}$ of size $(2K+1)^d$. For every $x \in \mathbb{Z}^d$ we define \begin{align}\label{def_box} \begin{split}
q_x &= q_x(K) = (2K+1)x,\\
Q_x &= Q_x(K) = \{y \in \mathbb{Z}^d \colon \lVert y-q_x \rVert_{\infty} \leq K\}, \end{split} \end{align} where $\lVert x \rVert_{\infty} = \max_{1 \leq i \leq d}{\lvert x_i \rvert}$ is the supremum norm. We call a site $x \in \mathbb{Z}^d$ open if for every $e \in \mathcal{E}_d$ there exists a frog path from $q_x$ to $q_{x + e}$ in $Q_x$. Otherwise, $x$ is said to be closed. The probability of a site $x$ to be open does not depend on $x$, but only on the drift parameter $\alpha$ and the cube size $K$. We denote it by $p(K, \alpha)$. This defines an independent site percolation on $\mathbb{Z}^d$, which, as mentioned before, is dominated by the frog model in the following sense: For any $x \in C_0$ the frog at $q_x$ will be activated in the frog model, i.e.~$q_x \in F\hspace{-0.04em}C_0$ with $F\hspace{-0.04em}C_0$ as defined in \eqref{def_frog_cluster}.
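The tessellation \eqref{def_box} can be made concrete in a few lines of code. The following Python sketch (illustrative only; the helper names are ours, not the paper's) computes the centres $q_x$ and cubes $Q_x$ and verifies that cubes of side length $2K+1$ partition the lattice:

```python
import itertools

def q(x, K):
    """Centre q_x = (2K+1)x of the cube Q_x."""
    return tuple((2 * K + 1) * xi for xi in x)

def cube(x, K):
    """Q_x = {y : ||y - q_x||_inf <= K}; it contains (2K+1)^d sites."""
    return set(itertools.product(*[range(c - K, c + K + 1) for c in q(x, K)]))

def cube_index(y, K):
    """The unique x with y in Q_x (floor division handles negatives)."""
    side = 2 * K + 1
    return tuple((yi + K) // side for yi in y)

d, K = 2, 3
Q0 = cube((0,) * d, K)
assert len(Q0) == (2 * K + 1) ** d
assert all(cube_index(y, K) == (0,) * d for y in Q0)
# distinct cubes are pairwise disjoint, i.e. the cubes tessellate Z^d:
cells = [cube(x, K) for x in itertools.product((-1, 0, 1), repeat=d)]
assert len(set().union(*cells)) == sum(len(c) for c in cells)
```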
In the next two lemmas we show that the probability $p(K,\alpha)$ of a site to be open is close to $1$ if the drift parameter $\alpha$ is small and the cube size $K$ is large. We first show this claim for the symmetric case $\alpha=0$.
\begin{lemma} \label{lemma_recurrence_cube_size} For every $w<1$ in the frog model $\operatorname{FM}(d, \pi_{w,0})$ we have
\begin{equation*}
\lim_{K \to \infty} p(K, 0) =1.
\end{equation*} \end{lemma}
\begin{proof} For $d=2$ we obviously have $p(K, 0) = 1$ for all $K \in \mathbb{N}_0$ as the balanced nearest-neighbour random walk on $\mathbb{Z}^2$ is recurrent. Therefore, we can assume $d \geq 3$. The proof of the lemma relies on the shape theorem (Theorem~\ref{lemma_shape_theorem}) for the frog model. This theorem assumes equal weights in all directions. As in our model the $e_1$-direction has a different weight, we need a workaround. We couple our model with a modified frog model on $\mathbb{Z}^{d-1}$ in which the frogs in every step stay where they are with probability $w$ and move according to a simple random walk otherwise. A direct coupling shows that, up to any fixed time, in the modified frog model on $\mathbb{Z}^{d-1}$ there are at most as many frogs activated as in the frog model $\operatorname{FM}(d,\pi_{w,0})$. Note that Theorem~\ref{lemma_shape_theorem} holds true for the modified frog model on $\mathbb{Z}^{d-1}$, see Remark~\ref{remark_shape}. Let $\xi_K$, respectively $\xi_K^{\text{mod}}$, be the set of all sites visited by active frogs by time~$K$ in the frog model $\operatorname{FM}(d,\pi_{w,0})$, respectively the modified frog model on $\mathbb{Z}^{d-1}$. Further, let $\overline{\xi_K^{\text{mod}}} := \{x + (-\frac12, \frac12]^{d-1} \colon x \in \xi_K^{\text{mod}}\}$. By Theorem~\ref{lemma_shape_theorem} there exists a non-trivial convex symmetric set $\mathcal{A}=\mathcal{A}(d) \subseteq \mathbb{R}^{d-1}$ and an almost surely finite random variable~$\mathcal{K}$ such that
\begin{equation*}
\mathcal A \subseteq \frac{\overline{\xi_K^{\text{mod}}}}{K}
\end{equation*} for all $K \geq \mathcal{K}$. This implies that there is a constant $c_1 = c_1(d) > 0$ such that $\lvert \xi_K^{\text{mod}}\rvert \geq c_1 K^{d-1}$ for all $K \geq \mathcal{K}$. By the coupling the same statement holds true for $\xi_K$. As $\xi_K \subseteq Q_0(K)$ and any vertex in $\xi_K$ can be reached by a frog path from $0$ in $Q_0$, this implies \begin{equation*}
\Bigl\lvert\Bigl\{y \in Q_0\colon 0 \fp{Q_0} y\Bigr\} \Bigr\rvert \geq \lvert \xi_K\rvert \geq c_1 K^{d-1} \end{equation*} for all $K \geq \mathcal{K}$. Thus we have at least $c_1 K^{d-1}$ vertices in the box $Q_0$ that can be reached by frog paths from $0$. Each frog in $Q_0$ has a chance to reach the centre $q_{e}$ of a neighbouring box. More precisely, by Lemma~\ref{lemma_hitting_probability_SRW} there is a constant $c_2 =c_2(d)>0$ such that \begin{equation} \label{proof_lemma_recurrence_cube_size_0}
\mathbb{P} \bigl( y \to q_{e} \bigr) \geq \frac{c_2}{K^{d-2}} \end{equation} for any vertex $y \in Q_0$ and $e \in \mathcal{E}_d$. Hence, for any $e \in \mathcal{E}_d$ \begin{align} \label{proof_lemma_recurrence_cube_size_1}
\mathbb{P}\bigl( (0 \fp{Q_0} q_{e})^c \mid K \geq \mathcal{K} \bigr)
&= \mathbb{P} \Bigl( \bigl\{y \not\to q_{e} \text{ for all } y \in Q_0 \text{ with } 0 \fp{Q_0} y\bigr\} \bigm\vert K \geq \mathcal{K} \Bigr) \nonumber\\
&\leq \Bigl(1-\frac{c_2}{K^{d-2}}\Bigr)^{c_1K^{d-1}} \nonumber\\
& \leq \mathrm{e}^{-c_1c_2K}, \end{align} where for the first inequality we used the fact that a frog, once it has left $Q_0$ for the last time, moves independently of all frogs in $Q_0$, together with the uniformity of the bound in \eqref{proof_lemma_recurrence_cube_size_0}. Therefore, \begin{align} \label{proof_lemma_recurrence_cube_size_2}
p(K,0) &\geq \mathbb{P}\Bigl(\bigcap_{e \in \mathcal{E}_d} \{0 \fp{Q_0} q_{e} \} \Bigm\vert K \geq \mathcal{K} \Bigr) \mathbb{P}(K \geq \mathcal{K}) \nonumber\\
&\geq \biggl[ 1- 2d \mathrm{e}^{-c_1 c_2 K} \biggr] \mathbb{P}(K \geq \mathcal{K}). \end{align} Since $\mathcal{K}$ is almost surely finite, we have $\lim_{K \to \infty}\mathbb{P}(K \geq \mathcal{K}) =1$. Thus, the right hand side of \eqref{proof_lemma_recurrence_cube_size_2} tends to $1$ in the limit $K\to \infty$. \end{proof}
\begin{lemma}\label{lemma_recurrence_small_drift} For fixed $w <1$, in the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ we have for all $K \in \mathbb{N}_0$
\begin{equation*}
\liminf_{\alpha \to 0} p(K, \alpha) \geq p(K,0).
\end{equation*} \end{lemma}
\begin{proof} Let $L(a,b,c,K)$ be the number of possible realizations such that all $q_{e}$, $e \in \mathcal{E}_d$, are visited by frogs in $Q_0$ for the first time after, in total over all frogs, exactly $a$ steps in the $e_1$-direction, $b$ steps in the $-e_1$-direction and $c$ steps in all other directions. Note that $L(a,b,c,K)$ is independent of $\alpha$. We have \begin{equation*} p(K, \alpha) = \sum_{a,b,c=1}^\infty L(a,b,c,K) \biggl(\frac{w(1+\alpha)}{2}\biggr)^a \biggl(\frac{w(1-\alpha)}{2}\biggr)^b \biggl(\frac{1-w}{2(d-1)}\biggr)^c.
\end{equation*} The claim now follows from Fatou's Lemma. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm_d=2_arbitrary_weight_i} and Theorem \ref{thm_d>2_arbitrary_weight}] By Lemma \ref{lemma_recurrence_cube_size} and Lemma \ref{lemma_recurrence_small_drift} we can assume that $K$ is big enough and $\alpha >0$ small enough such that $p(K, \alpha)> p_c(d)$, i.e.~the percolation with parameter $p(K, \alpha)$ on $\mathbb{Z}^d$ constructed at the beginning of this section is supercritical.
Consider boxes $B_n = \{-n\} \times [-\sqrt{n},\sqrt{n}]^{d-1}$ for $n \in \mathbb{N}$. By Lemma~\ref{percolation_density} there are constants $a,b > 0$ and $N \in \mathbb{N}$ such that for all $n \geq N$ \begin{equation*}
\mathbb{P}(\lvert B_n \cap C_0\rvert \geq a n^{(d-1)/2})>b. \end{equation*} After rescaling, the boxes $B_n$ correspond to the boxes \begin{equation*} F\hspace{-0.1em}B_n = \{y \in \mathbb{Z}^d \colon \lvert y_1 + (2K+1)n \rvert \leq K,\, \lvert y_i \rvert \leq (2K+1)\sqrt{n} +K,\, 2 \leq i \leq d\}. \end{equation*} Recall that $F\hspace{-0.04em}C_0$ consists of all vertices reachable by frog paths from $0$ as defined in \eqref{def_frog_cluster}, and note that $x \in B_n \cap C_0$ implies $q_x \in F\hspace{-0.1em}B_n \cap F\hspace{-0.04em}C_0$. This shows \begin{equation}\label{proof_thm_recreg_1}
\mathbb{P}(\lvert F\hspace{-0.1em}B_n \cap F\hspace{-0.04em}C_0 \lvert \geq a n^{(d-1)/2})>b \end{equation} for $n$ large enough. Analogously to \eqref{proof_lemma_recurrence_cube_size_1}, by Lemma~\ref{lemma_hitting_probability_RW_drift} and \eqref{proof_thm_recreg_1} the probability that at least one frog in $F\hspace{-0.1em}B_n$ is activated and reaches $0$ is at least \begin{equation*}
\Bigl(1-(1-cn^{-(d-1)/2})^{an^{(d-1)/2}}\Bigr)b \geq \bigl(1 - \mathrm{e}^{-ac}\bigr)b, \end{equation*} where $c=c(K,d,w)>0$ is a constant. Altogether we get by Lemma~\ref{lemma_sum_rv} \begin{align*}
\mathbb{P}(\text{$0$ visited infinitely often}) &= \lim_{n \to \infty} \mathbb{P}(\text{$0$ is visited at least $\varepsilon n$ times}) \\
&\geq \liminf_{n \to \infty} \mathbb{P}\biggl( \sum_{i=1}^n \mathds{1}_{\{\exists x \in F\hspace{-0.1em}B_i \cap F\hspace{-0.04em}C_{0} \colon x \to 0 \}} \geq \varepsilon n \biggr) \\
&\geq \bigl(1 - \mathrm{e}^{-ac}\bigr)b - \varepsilon > 0 \end{align*} for $\varepsilon$ sufficiently small. The claim now follows from Theorem~\ref{lemma_zero_one_law}. \end{proof}
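The uniform lower bound above rests on the elementary inequality $(1-c/m)^{am} \leq \mathrm{e}^{-ac}$, applied with $m = n^{(d-1)/2}$. A small Python sketch (illustrative only; the parameter values are arbitrary) confirms that the chance that at least one of roughly $a\,n^{(d-1)/2}$ frogs reaches $0$ is bounded below uniformly in $n$:

```python
import math

def hitting_lower_bound(a, c, m):
    """1 - (1 - c/m)^(a*m): the probability that at least one of a*m
    independent frogs succeeds when each succeeds with probability
    at least c/m.  Since 1 - x <= exp(-x), this is >= 1 - exp(-a*c)."""
    return 1 - (1 - c / m) ** (a * m)

a, c = 0.5, 0.2
for m in [10, 100, 10_000]:
    assert hitting_lower_bound(a, c, m) >= 1 - math.exp(-a * c)
```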
\subsection*{Recurrence for $d = 2$ and arbitrary drift}
In this section we prove Theorem~\ref{thm_d=2_arbitrary_drift_i}. Throughout the section let $\alpha < 1$ be fixed. We couple the frog model with independent site percolation on $\mathbb{Z}^2$. Let $K$ be an integer that will be chosen later. We tessellate $\mathbb{Z}^2$ with segments $(Q_x)_{x \in \mathbb{Z}^2}$ of size $2K+1$. For every $x = (x_1, x_2) \in \mathbb{Z}^2$ we define \begin{align*}
q_x &= q_x(K) = \bigl( x_1, (2K+1)x_2\bigr), \\
Q_x &= Q_x(K) = \{y \in \mathbb{Z}^2 \colon y_1 = x_1, \lvert y_2-(2K+1)x_2 \rvert \leq K\}. \end{align*} We call the site $x \in \mathbb{Z}^2$ open if there are frog paths from $q_x$ to $q_{x+e}$ in $Q_x$ for all $e \in \mathcal{E}_2$. As before, we denote the probability of a site to be open by $p(K,w)$. Note that this probability does not depend on $x$.
\begin{lemma}\label{lemma_d_2_arbitrary_drift_percolation_parameter_bound}
For $\alpha <1$, in the frog model $\operatorname{FM}(2,\pi_{w,\alpha})$ we have
\begin{equation*}
\lim_{K \to \infty} \liminf_{w \to 0} p(K,w) =1.
\end{equation*} \end{lemma}
\begin{proof} We claim that there is a constant $c=c(\alpha)>0$ such that for any $K \in \mathbb{N}_0$ and $x \in Q_0$ \begin{equation}\label{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_1}
\liminf_{w \to 0} \mathbb{P}\Bigl(\bigcap_{ e \in \mathcal{E}_2} \{x \to q_{e}\} \Bigr) \geq c. \end{equation} We can estimate the probability in \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_1} by \begin{equation*} \mathbb{P}\Bigl(\bigcap_{e \in \mathcal{E}_2} \{x \to q_{e}\} \Bigr) \geq \mathbb{P}\bigl(x \to q_{-e_2} \bigr) \mathbb{P}\bigl(q_{-e_{2}} \to q_{-e_1} \bigr) \mathbb{P}\bigl(q_{-e_{1}} \to q_{e_2} \bigr) \mathbb{P}\bigl(q_{e_{2}} \to q_{e_1} \bigr). \end{equation*} The probability of moving in $\pm e_2$-direction for $\lceil w^{-1} \rceil$ steps is $(1-w)^{\lceil w^{-1} \rceil}$. Conditioning on moving in this way, we just deal with a simple random walk on $\mathbb{Z}$. There exists a constant $c_1>0$ such that this random walk hits $-K$ within $\lceil w^{-1} \rceil$ steps with probability at least $c_1$ for all $w$ close to $0$. Therefore, \begin{equation} \label{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_2} \mathbb{P}\bigl(x \to q_{-e_2} \bigr) \geq c_1 (1-w)^{\lceil w^{-1} \rceil}
\geq \frac{c_1}{4}. \end{equation} The probability of moving exactly once in $-e_1$-direction and otherwise in $\pm e_2$-direction within $\lceil w^{-1} \rceil+1$ steps is \begin{equation*} \bigl(\lceil w^{-1} \rceil +1\bigr) \frac{(1-\alpha)w}{2} (1-w)^{\lceil w^{-1} \rceil} \geq \frac{1-\alpha}{8} \end{equation*} for $w$ close to $0$. Therefore, analogously to \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_2} there exists a constant $c_2>0$ such that \begin{equation*} \mathbb{P}\bigl(q_{-e_2} \to q_{-e_1} \bigr) \geq \frac{c_2(1-\alpha)}{8} \end{equation*} for $w$ sufficiently close to $0$. The two remaining probabilities $\mathbb{P}\bigl(q_{-e_{1}} \to q_{e_2} \bigr)$ and $\mathbb{P}\bigl(q_{e_{2}} \to q_{e_1} \bigr)$ can be estimated analogously, which implies \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_1}.
If frog $0$ activates all frogs in $Q_0$ and any of these $2K$ frogs manages to visit the centres of all neighbouring segments, then $0$ is open. By independence of the trajectories of the individual particles in $Q_0$ this implies \begin{equation}\label{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_3}
p(K,w) \geq \mathbb{P}\Bigl( \bigcap_{x \in Q_0} \{0 \to x\} \Bigr) \biggl(1- \Bigl(1-\mathbb{P}\Bigl(\bigcap_{e \in \mathcal{E}_2} \{x \to q_{e}\}\Bigr)\Bigr)^{2K} \biggr). \end{equation} As in the proof of Lemma~\ref{lemma_recurrence_small_drift} one can show that for $w \to 0$ the first factor in \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_3} converges to $1$. Therefore, taking limits in \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_3} and using \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_1} yields the claim. \end{proof}
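The key numerical fact in the proof above is that $(1-w)^{\lceil w^{-1}\rceil}$ stays bounded away from $0$ as $w \to 0$ (it tends to $\mathrm{e}^{-1} \approx 0.37$), which justifies the constant $\tfrac{c_1}{4}$ in \eqref{proof_lemma_d_2_arbitrary_drift_percolation_parameter_bound_2}. A quick Python check (illustrative only, not part of the proof):

```python
import math

def stay_prob(w):
    """Probability of moving only in the +-e_2-direction for
    ceil(1/w) consecutive steps: (1 - w)^ceil(1/w) -> 1/e as w -> 0."""
    return (1 - w) ** math.ceil(1 / w)

# for w sufficiently small the probability exceeds 1/4:
for w in [0.2, 0.1, 0.01, 0.001]:
    assert stay_prob(w) >= 0.25
```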
\begin{proof}[Proof of Theorem~\ref{thm_d=2_arbitrary_drift_i}] By Lemma~\ref{lemma_d_2_arbitrary_drift_percolation_parameter_bound} we can choose $K$ big and $w$ small enough such that $p(K,w) > p_c(2)$, where $p_c(2)$ is the critical parameter for independent site percolation on $\mathbb{Z}^2$. As in the proof of Theorem~\ref{thm_d=2_arbitrary_weight_i} and Theorem~\ref{thm_d>2_arbitrary_weight} the coupling with supercritical percolation now yields recurrence of the frog model. As we rescaled the lattice $\mathbb{Z}^2$ slightly differently this time, the box $B_n$ defined in the proof of Theorem~\ref{thm_d=2_arbitrary_weight_i} and Theorem~\ref{thm_d>2_arbitrary_weight} now corresponds to the box \begin{equation*} F\hspace{-0.1em}B_n = \{y \in \mathbb{Z}^2 \colon y_1 =-n,\, \lvert y_2\rvert \leq (2K+1)\sqrt{n} +K\}. \end{equation*} Since only asymptotics in $n$ matter for the proof, it otherwise works unchanged. \end{proof}
\subsection*{Recurrence for arbitrary drift and $d \geq 3$}
The proof of Theorem \ref{thm_d>2_arbitrary_drift_i} again relies on the idea of comparing the frog model with percolation. But instead of looking at the whole space $\mathbb{Z}^d$ as in the previous proofs, we consider a sequence of $(d-1)$-dimensional hyperplanes $(H_{-n})_{n \in \mathbb{N}_0}$ with $H_{-n}$ as defined in \eqref{definition_hyperplane}. We compare the frogs in each hyperplane with supercritical percolation, ignoring the frogs once they have left their hyperplane and all the frogs from other hyperplanes. Within a hyperplane we now deal with a frog model without drift, but allow the frogs to die in each step with probability $w$ by leaving their hyperplane, i.e.~we are interested in $\operatorname{FM}^*(d-1,\pi_{\text{sym}},1-w)$. Hence, the argument does not depend on the value of the drift parameter $\alpha<1$.
We start with one active particle in the hyperplane $H_0$. With positive probability this particle initiates an infinite frog cluster in $H_0 $ if $w$ and therefore the probability to leave the hyperplane is sufficiently small. Every frog eventually leaves $H_0$ and has for every $n \in \mathbb{N}$ a positive chance of activating a frog in the hyperplane $H_{-n}$, which might start an infinite cluster there. This is the only time where we need $\alpha <1$ in the proof of Theorem~\ref{thm_d>2_arbitrary_drift_i}. Using the denseness of such clusters we can then proceed as before.
We split the proof of Theorem~\ref{thm_d>2_arbitrary_drift_i} into two parts:
\begin{prop}\label{prop_d>2_arbitrary_drift_large_d}
There exist $d_0 \in \mathbb{N}$ and $w_r > 0$, independent of $d$ and $\alpha$, such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $0 \leq w \leq w_r$, $0 \leq \alpha < 1$ and $d \geq d_0$. \end{prop}
\begin{prop}\label{prop_d>2_arbitrary_drift_small_d}
For every $d\geq 3$ there is $w_r = w_r(d) > 0$, independent of $\alpha$, such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $0 \leq w \leq w_r$ and all $0 \leq \alpha < 1$. \end{prop}
We first prove Proposition~\ref{prop_d>2_arbitrary_drift_large_d}. As indicated above we need to study the frog model with death and no drift in $\mathbb{Z}^{d-1}$. To increase the readability of the paper let us first work in dimension $d$ instead of $d-1$ and with a general survival parameter $s$, i.e.~we investigate $\operatorname{FM}^*(d, \pi_{\text{sym}}, s)$ for $d \geq 2$.
We tessellate $\mathbb{Z}^{d}$ with cubes $(Q'_x)_{x \in \mathbb{Z}^{d}}$ of size $3^{d}$. More precisely, for $x \in \mathbb{Z}^{d}$ we define \begin{align*}
Q'_x &= \{y \in \mathbb{Z}^{d} \colon \lVert y-3x \rVert_{\infty} \leq 1\}. \intertext{Further, for technical reasons, for $a \in (\frac23, 1)$ we define}
W_x &= \{y \in Q'_x \colon \lVert y-3x \rVert_1 \leq ad\}, \end{align*} where $\lVert z \rVert_1 = \sum_{i=1}^{d} \lvert z_i \rvert$ is the graph distance from $z \in \mathbb{Z}^d$ to $0$. Informally, $W_x$ is the set of all vertices in $Q'_x$ which are ``sufficiently close'' to the centre of the cube. Consider the box $Q'_x$ for some $x \in \mathbb{Z}^{d}$ and let $o \in W_x$. If there are frog paths in $Q'_x$ from $o$ to vertices close to the centres of all neighbouring boxes, i.e.~if the event \begin{equation*} \bigcap_{e \in \mathcal{E}_d} \bigcup_{y \in W_{x + e}} \{o \fp{Q'_x} y\} \end{equation*} occurs, we call the vertex $o$ good. Note that this event only depends on the trajectories of all the frogs originating in the cube $Q'_x$ and the choice of $o$. If $o$ is good and activated, then the neighbouring cubes are visited as well. We show that the probability of a vertex being good is bounded from below uniformly in $d$ and this bound does not depend on the choice of $o$.
\begin{lemma}\label{lemma_recurrence_high_d_percolation_parameter_bound} Consider the frog model $\operatorname{FM}^*(d,\pi_{\text{sym}},s)$. There are constants $\beta > 0$ and $d_0 \in \mathbb{N}$ such that for all $d \geq d_0$, $s > \frac34$, $\frac23 < a < 2- \frac{1}{s}$, $x \in \mathbb{Z}^d$ and $o \in W_x$ \begin{equation*} \mathbb{P}(\text{$o$ is good}) > \beta. \end{equation*} \end{lemma}
To show this we first need to prove that many frogs in the cube are activated. In the proof of Theorem \ref{thm_d=2_arbitrary_weight_i} and Theorem \ref{thm_d>2_arbitrary_weight} this is done by means of Lemma~\ref{lemma_recurrence_cube_size} using the shape theorem. Here, we use a lemma that is analogous to Lemma~2.5 in \cite{AMP02pt}.
\begin{lemma}\label{lemma_recurrence_high_d_K_d}
Consider the frog model $\operatorname{FM}^*(d,\pi_{\text{sym}},s)$. There exist constants $\gamma >0$, $\mu > 1$ and $d_0 \in \mathbb{N}$ such that for all $d \geq d_0$, $s > \frac34$, $\frac23 < a < 2- \frac{1}{s}$ and $o \in W_0$ we have
\begin{equation*}
\mathbb{P}\Bigl( \bigl\lvert \bigl\{y \in W_0 \colon o \fp{Q'_0} y \bigr\}\bigr\rvert \geq \mu^{\sqrt{d}} \Bigr) \geq \gamma.
\end{equation*} \end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma_recurrence_high_d_K_d}] The proof consists of two parts. In the first part we show that with positive probability there are exponentially many vertices in $Q'_0$ reached from $o$ by frog paths in $Q'_0$, and in the second part we prove that many of these vertices are indeed in $W_0$. For the first part we closely follow the proof of Lemma~2.5 in \cite{AMP02pt} and reproduce the details for the convenience of the reader.
We examine the frog model with initially one active frog at $o$ and one sleeping frog at every other vertex in $Q'_0$ for $\sqrt{d}$ steps in time. Consider the sets $\mathcal{S}_0=\{o\}$ and $\mathcal{S}_k = \{x \in Q'_0 \colon \lVert x-o \rVert_1=k, \lVert x-o \rVert_{\infty}=1\}$ for $k \geq 1$ and let $\xi_k$ denote the set of active frogs which are in $\mathcal{S}_k$ at time $k$. We will show that, conditioned on an event to be defined later, the process $(\xi_k)_{k \in \mathbb{N}_0}$ dominates a process $(\tilde{\xi}_k)_{k \in \mathbb{N}_0}$, which in turn dominates a supercritical branching process. The process $(\tilde{\xi}_k)_{k \in \mathbb{N}_0}$ is defined as follows. Initially, there is one particle at $o$. Assume that the process has been constructed up to time $k \in \mathbb{N}_0$. In the next step each particle in $\tilde{\xi}_k$ survives with probability $s$. If it survives, it chooses one of the neighbouring vertices uniformly at random. If that vertex belongs to $\mathcal{S}_{k+1}$ and no other particle in $\tilde{\xi}_k$ intends to jump to this vertex, the particle moves there, activates the sleeping particle, and both particles enter $\tilde{\xi}_{k+1}$. Otherwise, the particle is deleted. In particular, if two or more particles attempt to jump to the same vertex, all of them will be deleted. Obviously, $\tilde{\xi}_k \subseteq \xi_k$ for all $k \in \mathbb{N}_0$.
First, we show that for $d$ large it is unlikely that two particles in $\tilde{\xi}_k$ attempt to jump to the same vertex. To make this argument precise we need to introduce some notation. For $x \in \mathcal{S}_k$ and $y \in \mathcal{S}_{k+1}$ with $\lVert x-y \rVert_1=1$ define \begin{align*} \mathcal{D}_x &= \{z \in \mathcal{S}_{k+1} \colon \lVert x-z\rVert_1 = 1\},\\ \mathcal{A}_y &= \{z \in \mathcal{S}_k \colon \lVert z-y \rVert_1 =1 \},\\ \mathcal{E}_x &= \{z \in \mathcal{S}_k \colon \mathcal{D}_x \cap \mathcal{D}_z \neq \emptyset \}. \end{align*} $\mathcal{D}_x$ denotes the set of possible descendants of $x$, $\mathcal{A}_y$ the set of ancestors of $y$ and $\mathcal{E}_x$ the set of enemies of $x$. Note that $\mathcal{E}_x = \bigcup_{y \in \mathcal{D}_x} (\mathcal{A}_y \setminus \{x\})$ is a disjoint union. Let $n_x=\sum_{i=1}^d \mathds{1}_{\{o_i=0,\, x_i\neq0\}}$. Then one can check that \begin{align}\label{proof_recurrence_high_d_K_d_0}
\lvert \mathcal{D}_x\rvert &= 2(d-\lVert o \rVert_1-n_x) + \lVert o \rVert_1 - (k-n_x) = 2d -\lVert o \rVert_1 - k - n_x,\\
\lvert \mathcal{A}_y\rvert &= k+1. \nonumber \end{align} For $x \in \mathcal{S}_k$ let $\chi(x)$ denote the number of particles of $\tilde{\xi}_k$ in $x$. Note that $\chi(x) \in \{0,2\}$ for any $x \in \mathcal{S}_k$ with $k \in \mathbb{N}$.
Let $\zeta_{xy}^k$ denote the indicator function of the event that there is $z \in \mathcal{E}_x$ with $\chi(z)\geq 1$ such that one of the particles at $z$ intends to jump to $y$ at time $k+1$. If $\zeta_{xy}^k=1$, then a particle on $x$ cannot move to $y$ at time $k+1$.
Further, we introduce the event $U_x= \{\chi(z)=2 \text{ for all } z \in \mathcal{E}_x\}$. This event describes the worst case for $x$, when it is most likely that particles at $x$ will not be able to jump. For $k \leq \sqrt{d}$ we have \begin{equation*}
\mathbb{P}(\zeta_{xy}^k=1) \leq \mathbb{P}(\zeta_{xy}^k=1 \mid U_x) \leq \sum_{z \in \mathcal{A}_y \setminus \{x\}} \frac{2s}{2d} = \frac{ks}{d} \leq \frac{1}{\sqrt{d}}. \end{equation*} Given $\sigma > 0$ we choose $d$ large such that $\mathbb{P}(\zeta_{xy}^k=1) < \sigma$ for all $k \leq \sqrt{d}$. Now, we consider the set of all descendants $y$ of $x$ such that there is a particle at some vertex $z \in \mathcal{E}_x$ that tries to jump to $y$ at time $k+1$. This set contains $\sum_{y \in \mathcal{D}_x} \zeta_{xy}^k$ elements. Let $\zeta_x^k$ denote the indicator function of the event $\bigl\{\sum_{y \in \mathcal{D}_x} \zeta_{xy}^k > 2\sigma d\bigr\}$. If $\zeta_{x}^k=1$, then more than $2\sigma d$ of the $2d$ neighbours of $x$ are blocked to a particle at $x$.
The random variables $\{\zeta_{xy}^k \colon y \in \mathcal{D}_x\}$ are independent with respect to $\mathbb{P}(\cdot \mid U_x)$ as $\mathcal{E}_x = \bigcup_{y \in \mathcal{D}_x} (\mathcal{A}_y \setminus \{x\})$ is a disjoint union. Using $2d-ad-2k \leq \lvert\mathcal{D}_x\rvert \leq 2d$ and a standard large deviation estimate we get for $k \leq \sqrt{d}$ \begin{align*}
\mathbb{P}(\zeta_x^k=1) &\leq \mathbb{P}\biggl(\sum_{y \in \mathcal{D}_x} \zeta_{xy}^k > 2\sigma d \Bigm\vert U_x\biggr) \\
&\leq \mathbb{P}\biggl(\frac{1}{\lvert \mathcal{D}_x\rvert} \sum_{y \in \mathcal{D}_x} \zeta_{xy}^k > \sigma \Bigm\vert U_x \biggr)\\
&\leq \mathrm{e}^{-c_1\lvert\mathcal{D}_x\rvert} \\
&\leq \mathrm{e}^{-c_2 d} \end{align*} with constants $c_1, c_2 > 0$. Next, let us consider the bad event \begin{equation*}
B = \bigcup_{k=1}^{\sqrt{d}} \bigcup_{x \in \tilde{\xi}_k} \{\zeta_x^k=1\}. \end{equation*} Then with $\lvert\tilde{\xi}_k\rvert \leq 2^k \leq 2^{\sqrt{d}}$ we get \begin{equation*}
\mathbb{P}(B) \leq \sqrt{d} \cdot 2^{\sqrt{d}} \cdot \mathrm{e}^{-c_2 d}. \end{equation*} In particular $\mathbb{P}(B)$ can be made arbitrarily small for $d$ large. Conditioned on $B^c$, in each step for every particle there are at least \begin{equation*} \lvert\mathcal{D}_x\rvert-2 \sigma d -1 \geq (2-a-2\sigma)d - 3 \sqrt{d} \end{equation*} available vertices in $\mathcal{S}_{k+1}$, i.e.~vertices a particle at $x$ can jump to in the next step. Thus, conditioned on $B^c$, the process $\tilde{\xi}_k$ dominates a branching process with mean offspring at least \begin{equation*}
\frac{\bigl((2-a-2\sigma)d - 3 \sqrt{d}\bigr) \cdot 2 \cdot s}{2d}. \end{equation*} For $\sigma$ small and $d$ large the mean offspring is bigger than $1$ as we assumed $a < 2-\frac{1}{s}$. Since a supercritical branching process grows exponentially with positive probability, there are constants $c_3 >1$, $q \in (0,1)$ that do not depend on $d$ such that \begin{equation}\label{proof_recurrence_high_d_K_d} \mathbb{P}\bigl( \lvert\tilde{\xi}_{\sqrt{d}}\rvert \geq c_3^{\sqrt{d}}\bigr) \geq q. \end{equation} For the second part of the proof condition on the event $\bigr\{\lvert\tilde{\xi}_{\sqrt{d}}\rvert \geq c_3^{\sqrt{d}}\bigl\}$ and choose $0 < \varepsilon <a-\frac23$. If $\lVert o \rVert_1 \leq (a-\varepsilon)d$, all particles of $\tilde{\xi}_{\sqrt{d}}$ are in $W_0$ for $d$ large. This immediately implies the claim of the lemma. Otherwise, let $n=\lvert\tilde{\xi}_{\sqrt{d}}\rvert$, enumerate the particles in $\tilde{\xi}_{\sqrt{d}}$ and let $\tilde{S}^i$, $1 \leq i \leq n$, denote the position of the $i$-th particle. Further, we define for $1 \leq i \leq n$ \begin{equation*}
X_i =
\begin{cases}
1 & \text{if $\lVert \tilde{S}^i \rVert_1 \leq \lVert o \rVert_1 $}, \\
0 & \text{otherwise.}
\end{cases} \end{equation*} It suffices to show that $\mathbb{P}(X_1=1)>0$. Then Lemma~\ref{lemma_sum_rv} applied to the random variables $X_1, \ldots, X_n$ implies that with positive probability a positive proportion of the particles in $\tilde{\xi}_{\sqrt{d}}$ indeed have $L_1$-norm at most $\lVert o \rVert_1$, and are thus in $W_0$. Together with \eqref{proof_recurrence_high_d_K_d} this finishes the proof.
For the proof of the claim let $\tilde{S}^1_k$ denote the position of the ancestor of $\tilde{S}^1$ in $\mathcal{S}_k$, where $0 \leq k \leq \sqrt{d}$. Note that $\tilde{S}^1_0 = o$ and $\tilde{S}^1_{\sqrt{d}} = \tilde{S}^1$.
We are interested in the process $(\lVert \tilde{S}_k^1 \rVert_1)_{1 \leq k \leq \sqrt{d}}$. By the construction of the process $(\tilde{\xi}_k)_{k \in \mathbb{N}_0}$ it either increases or decreases by $1$ in every step. The positions $\tilde{S}_k^1$ and $\tilde{S}_{k+1}^1$ differ in exactly one coordinate. If this coordinate is changed from $0$ to $\pm 1$, then $\lVert \tilde{S}_{k+1}^1\rVert_1 = \lVert \tilde{S}_k^1 \rVert_1 +1$. If it is changed from $\pm 1$ to $0$, then we have $\lVert \tilde{S}_{k+1}^1\rVert_1 = \lVert \tilde{S}_k^1 \rVert_1 -1$. There are at least $(a-\varepsilon)d-\sqrt{d}$ many $\pm 1$-coordinates in $\tilde{S}_k^1$ that can be changed to $0$. As we also know that $\tilde{S}_{k+1}^1 \in \mathcal{D}_{\tilde{S}_k^1}$, we have for all $k \leq \sqrt{d}$ by \eqref{proof_recurrence_high_d_K_d_0} and the choice of $\varepsilon$ \begin{equation*}
\mathbb{P}\bigl(\lVert \tilde{S}_{k+1}^1\rVert_1 = \lVert \tilde{S}_k^1 \rVert_1 -1\bigr)
\geq \frac{(a-\varepsilon)d-\sqrt{d}}{\lvert\mathcal{D}_{\tilde{S}_k^1}\rvert}
\geq \frac{(a-\varepsilon)d - \sqrt{d}}{2d - (a-\varepsilon)d}
> \frac12 \end{equation*} for $d$ large. Hence, $\lVert \tilde{S}_k^1 \rVert_1$ dominates a random walk with drift on $\mathbb{Z}$ started in $\lVert o \rVert_1$. Therefore, \begin{equation*} \mathbb{P}(X_1 = 1) = \mathbb{P}\bigl(\lVert \tilde{S}_{\sqrt{d}}^1 \rVert_1 \leq \lVert o \rVert_1\bigr) \geq \frac12, \end{equation*} which finishes the proof. \end{proof}
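The final step uses that a walk whose steps decrease the norm with probability $q > \frac12$ ends at or below its starting height with probability at least $\frac12$: the walk ends no higher iff the number of down-steps, which is $\operatorname{Bin}(n,q)$-distributed, is at least $n/2$, and for $q \geq \frac12$ this binomial event has probability at least $\frac12$. The following Python sketch (illustrative only; exact binomial computation, not part of the proof) checks this:

```python
from math import comb

def prob_ends_no_higher(n, q):
    """For a walk of n steps of size +-1 in which each step is -1 with
    probability q, the walk ends at or below its start iff the number
    of down-steps D ~ Bin(n, q) satisfies D >= n/2.  Exact P(D >= n/2)."""
    return sum(comb(n, k) * q**k * (1 - q) ** (n - k)
               for k in range((n + 1) // 2, n + 1))

# for q >= 1/2 the probability is at least 1/2:
for n in [2, 5, 10, 25]:
    for q in [0.5, 0.55, 0.7]:
        assert prob_ends_no_higher(n, q) >= 0.5
```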
\begin{proof}[Proof of Lemma~\ref{lemma_recurrence_high_d_percolation_parameter_bound}] By Lemma~\ref{lemma_recurrence_high_d_K_d}, with probability at least $\gamma$ there are frog paths in $Q'_x$ from $o$ to at least $\mu^{\sqrt{d}}$ vertices in $W_x$ for $d$ large. We divide the frogs on these vertices into $2d$ groups of size at least $\mu^{\sqrt{d}}/2d$ and assign each group the task of visiting one of the neighbouring boxes $W_{x+e}$, $e \in \mathcal{E}_d$. Notice that this job is done if at least one of the frogs in the group visits at least one vertex in the neighbouring box. If all groups succeed, $o$ is good. Any frog in any group is just three steps away from its respective neighbouring box $W_{x+e}$, $e \in \mathcal{E}_d$, and thus has probability at least $(\frac{s}{2d})^3$ of achieving its group's goal. Hence, \begin{equation*} \mathbb{P}(\text{$o$ is good}) \geq \Bigl(1- \Bigl(1-\Bigl(\frac{s}{2d}\Bigr)^3\Bigr)^{\mu^{\sqrt d}/{2d}} \Bigr)^{2d} \gamma
\geq \frac{\gamma}{2} \end{equation*} for $d$ large. \end{proof}
In the other recurrence proofs we couple the frog model with percolation by calling a cube open if its centre is good. Here, the choice of a ``starting'' vertex, like the centre, is not independent of the other cubes. Therefore, we cannot directly couple the frog model with independent percolation. However, the following lemma allows us to compare the distributions of a frog cluster and a percolation cluster.
\begin{lemma}\label{lemma_recurrence_high_d_fc=c} Consider the frog model $\operatorname{FM}^*(d,\pi_{\text{sym}},s)$. Let $\beta >0$ and assume that $\mathbb{P}(\text{$o$ is good}) > \beta$ for all $o \in W_x$, $x \in \mathbb{Z}^d$. Further, consider independent site percolation on $\mathbb{Z}^d$ with parameter $\beta$. Then for all sets $A \subseteq \mathbb{Z}^d$, $v \in \mathbb{Z}^d$ and for all $k \geq 0$ \begin{equation*}
\mathbb{P}(\lvert A \cap C_v\rvert \geq k) \leq \mathbb{P}\Bigl(\Bigl\lvert \bigcup_{x \in A}Q'_x\cap F\hspace{-0.04em}C_{3v}^*\Bigr\rvert \geq k\Bigr). \end{equation*} \end{lemma}
\begin{proof} For technical reasons we introduce a family of independent Bernoulli random variables $(X_o)_{o \in \mathbb{Z}^d}$ which are also independent of the choice of all the trajectories of the frogs and satisfy $\mathbb{P}(X_o=1) = \mathbb{P}(\text{$o$ is good})^{-1}\beta$. Their purpose will become clear shortly. Further, we fix an ordering of all vertices in $\mathbb{Z}^d$.
Now we are ready to describe a process that explores a subset of the frog cluster $F\hspace{-0.04em}C_{3v}^*$. Its distribution can be related to the cluster $C_v$ in independent site percolation with parameter $\beta$. The process is a random sequence $(R_t, D_t, U_t)_{t\in \mathbb{N}_0}$ of tripartitions of $\mathbb{Z}^d$. As the letters indicate, $R_t$ will contain all sites reached by time $t$, $D_t$ all those declared dead by time $t$, and $U_t$ the unexplored sites. We construct the process in such a way that for all $t \in \mathbb{N}_0$, $x \in R_t$ and $e \in \mathcal{E}_d$ there is $y \in W_{x +e}$ such that there is a frog path from $3v$ to $y$ in $\bigcup_{z \in R_t}Q'_z$. We start with $R_0 = D_0 = \emptyset$ and $U_0 = \mathbb{Z}^d$. If $3v$ is good and $X_{3v}=1$, set $U_1 = \mathbb{Z}^d \setminus \{v\}$, $R_1=\{v\}$, and $D_1=\emptyset$. Otherwise, stop the algorithm. If the process is stopped at time $t$, let $U_s = U_{t-1}$, $R_s = R_{t-1}$ and $D_s = D_{t-1}$ for all $s \geq t$. Assume we have constructed the process up to time $t$. Consider the set of all sites in $U_t$ that have a neighbour in $R_t$. If it is empty, stop the process. Otherwise, pick the site $x$ in this set with the smallest number in our ordering. By the choice of $x$ there is $y \in W_x$ such that there is a frog path from $3v$ to $y$ in $\bigcup_{z \in R_t} Q'_z$. Choose any vertex $y$ with this property. If $y$ is good and $X_y = 1$, set \begin{equation*} R_{t+1} = R_t \cup \{x\},\ D_{t+1} = D_t, \ U_{t+1}=U_t \setminus \{x\}. \end{equation*} Otherwise, update the sets as follows: \begin{equation*} R_{t+1} = R_t,\ D_{t+1} = D_t \cup \{x\}, \ U_{t+1}=U_t \setminus \{x\}. \end{equation*} In every step $t$ the algorithm picks an unexplored site $x$ and declares it to be reached or dead, i.e.~added to $R_{t+1}$ or $D_{t+1}$. The probability that $x$ is added to $R_{t+1}$ equals $\beta$. 
This event is (stochastically) independent of everything that happened before time $t$ in the algorithm. Note that every unexplored neighbour of a reached site will eventually be explored due to the fixed ordering of all sites.
In the same way we can explore independent site percolation on $\mathbb{Z}^d$ with parameter $\beta$. Construct a sequence $(R_t', D_t', U_t')_{t\in \mathbb{N}_0}$ of tripartitions of $\mathbb{Z}^d$ as above, but whenever the algorithm evaluates whether a site $x$ is declared reached or dead we toss a coin independently of everything else. Note that $\bigcup_{t \in \mathbb{N}_0} R_t' = C_v$, where $C_v$ is the cluster containing $v$. This exploration process is well known for percolation, see e.g.~\cite[Proof of Theorem 4, Chapter 1]{BR06}.
By construction, $\bigcup_{t\in \mathbb{N}_0} R_t$ equals the percolation cluster $C_v$ in distribution. The claim follows since for every $x \in \bigcup_{t\in \mathbb{N}_0} R_t$ there is a $y \in W_x$ such that there is a frog path from $3v$ to $y$, i.e.~$y \in F\hspace{-0.04em}C_{3v}^*$. \end{proof}
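The percolation-side exploration is a plain breadth-first search in which each newly examined site is opened by an independent coin with success probability $\beta$. As a purely illustrative sketch (ours, not part of the proof; truncated to a finite box on $\mathbb{Z}^2$, with all names our own):

```python
import random

def explore_cluster(v, beta, n_max, rng):
    """Explore the open cluster of v in independent site percolation on Z^2,
    truncated to the box [-n_max, n_max]^2.  Mirrors the (R_t, D_t, U_t)
    process: every site is examined at most once and declared reached
    (open) with probability beta, independently of everything else."""
    # Examine the root first; if it is closed, the cluster is empty.
    if not rng.random() < beta:
        return set()
    status = {v: True}     # site -> reached (True) or dead (False)
    reached = {v}
    frontier = [v]
    while frontier:
        x = frontier.pop()
        for e in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + e[0], x[1] + e[1])
            if y in status or max(abs(y[0]), abs(y[1])) > n_max:
                continue   # already explored, or outside the finite box
            status[y] = rng.random() < beta   # independent coin toss
            if status[y]:
                reached.add(y)
                frontier.append(y)
    return reached
```

At $\beta=1$ the cluster fills the whole box, at $\beta=0$ it is empty; for intermediate $\beta$ the returned set is one sample of the (truncated) cluster $C_v$.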
Now we can show Proposition~\ref{prop_d>2_arbitrary_drift_large_d}. Note that we are again working with the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ (without death).
\begin{proof}[Proof of Proposition~\ref{prop_d>2_arbitrary_drift_large_d}] Throughout this proof we assume that $d$ is so large that Lemma~\ref{lemma_recurrence_high_d_percolation_parameter_bound} is applicable for $d-1$ and $p_c(d-1) < \beta$, where $\beta$ is the constant introduced in the statement of Lemma~\ref{lemma_recurrence_high_d_percolation_parameter_bound}. This is possible because of Lemma~\ref{lemma_pc_high_d}. These assumptions in particular imply that we can use Lemma~\ref{lemma_recurrence_high_d_fc=c} and that the percolation introduced there is supercritical.
Consider the sequence of hyperplanes $(H_{-n})_{n \in \mathbb{N}_0}$ defined in \eqref{definition_hyperplane} and let $A$ denote the event that in every hyperplane $H_{-n}$ there is an activated frog at some vertex $v_n$. For technical reasons we want $v_n$ to be of the form $v_n = (-n, 3w_n)$ for some $w_n \in \mathbb{Z}^{d-1}$. We first show that $A$ occurs with positive probability. To see this consider the first hyperplane $H_0$ and couple the frogs in this hyperplane with $\operatorname{FM}^*(d-1, \pi_{\text{sym}}, 1-w)$ in the following way: Whenever a frog takes a step in $\pm e_1$-direction, i.e.~leaves its hyperplane, it dies instead. By \cite[Theorem 1.8]{AMP02pt} (or Lemma~\ref{lemma_recurrence_high_d_fc=c}) this process survives with positive probability if $w$ is sufficiently small (independent of the dimension $d$). This means that infinitely many frogs are activated in $H_0$. Obviously, this implies the claim.
From now on we condition on the event $A$. Note that $F\hspace{-0.04em}C_{v_n} \subseteq F\hspace{-0.04em}C_0$ for $n\in\mathbb{N}$. Analogously to the proofs in the last sections we introduce boxes \begin{equation*} F\hspace{-0.1em}B_n' = \{-n\} \times [-(3\sqrt{n}+1), 3\sqrt{n}+1]^{d-1} \end{equation*} for $n \in \mathbb{N}$. We claim that analogously to Lemma~\ref{percolation_density} there are constants $a, b>0$ and $N \in \mathbb{N}$ such that for $n \geq N$ \begin{equation} \label{proof_thm_high_d_1}
\mathbb{P}\bigl(\lvert F\hspace{-0.1em}B_n' \cap F\hspace{-0.04em}C_0\rvert \geq a n^{(d-1)/2}\bigr) \geq b. \end{equation} To prove this claim, let $a,b>0$ and $N \in \mathbb{N}$ be the constants provided by Lemma \ref{percolation_density} for percolation with parameter $\beta$. For $n \geq N$ couple the frog model with $\operatorname{FM}^*(d-1, \pi_{\text{sym}}, 1-w)$ in the hyperplane $H_{-n}$ as above. Let $B_n' = [-\sqrt{n}, \sqrt{n}]^{d-1}$ and note that $B_n'$ corresponds to $F\hspace{-0.1em}B_n'$ restricted to $H_{-n}$ after rescaling. Then by Lemma~\ref{lemma_recurrence_high_d_fc=c} and Lemma~\ref{percolation_density} \begin{align*}
\mathbb{P} \bigl(\lvert F\hspace{-0.1em}B_n' \cap F\hspace{-0.04em}C_{v_n}\rvert \geq a n^{(d-1)/2} \mid A \bigr)
&\geq \mathbb{P} \bigl(\lvert F\hspace{-0.1em}B_n' \cap (\{-n\} \times F\hspace{-0.04em}C_{3w_n}^*)\rvert \geq a n^{(d-1)/2} \mid A \bigr) \\
&\geq \mathbb{P}\bigl(\lvert B_n' \cap C_{w_n}\rvert \geq a n^{(d-1)/2} \mid A \bigr) \\
&\geq b. \end{align*} Here, $C_{w_n}$ is the open cluster containing $w_n$ in a percolation model with parameter $\beta$ in $\mathbb{Z}^{d-1}$, independently of the frogs. As $F\hspace{-0.04em}C_{v_n} \subseteq F\hspace{-0.04em}C_0$, this implies inequality~\eqref{proof_thm_high_d_1}.
By Lemma~\ref{lemma_hitting_probability_RW_drift} and \eqref{proof_thm_high_d_1}, the probability that there is at least one activated frog in $F\hspace{-0.1em}B_n'$ that reaches $0$ is at least \begin{equation*}
\Bigl(1-(1-c'n^{-(d-1)/2})^{an^{(d-1)/2}}\Bigr)b \geq \bigl(1 - \mathrm{e}^{-ac'}\bigr)b, \end{equation*} where $c'>0$ is a constant. Altogether we get by Lemma~\ref{lemma_sum_rv} \begin{align*}
\mathbb{P}(\text{$0$ visited infinitely often}) &= \lim_{n \to \infty} \mathbb{P}(\text{$0$ is visited at least $\varepsilon n$ times}) \\
&\geq \lim_{n \to \infty} \mathbb{P}\biggl( \sum_{i=1}^n \mathds{1}_{\{\exists x \in F\hspace{-0.1em}B_n' \cap F\hspace{-0.04em}C_{0} \colon x \to 0 \}} \geq \varepsilon n \biggr)\\
&\geq \Bigl(\bigl(1 - \mathrm{e}^{-ac'}\bigr)b - \varepsilon \Bigr) > 0 \end{align*} for $\varepsilon$ sufficiently small. The claim now follows from Theorem~\ref{lemma_zero_one_law}. \end{proof}
To prove Proposition~\ref{prop_d>2_arbitrary_drift_small_d} we again first study the frog model with death $\operatorname{FM}^*(d, \pi_{\text{sym}},s)$ in the hyperplanes and couple it with percolation. This time we use cubes of size $(2K+1)^{d}$ for some $K \in \mathbb{N}_0$. By choosing $K$ large we increase the number of frogs in the cubes. In the proof of the previous proposition this was done by increasing the dimension $d$. For $x \in \mathbb{Z}^d$ and $K \in \mathbb{N}_0$ we define \begin{align*}
q_x &= q_x(K) = (2K+1)x, \\
Q_x &= Q_x(K) = \{y \in \mathbb{Z}^{d} \colon \lVert y-q_x \rVert_{\infty} \leq K\}. \end{align*}
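The tiling of $\mathbb{Z}^d$ by the cubes $Q_x$ is easy to make concrete. The following sketch (ours, for illustration only) computes the centre $q_x$, recovers the unique cube containing a given vertex, and tests membership:

```python
def box_center(x, K):
    """q_x = (2K+1) x, the centre of the cube Q_x."""
    return tuple((2 * K + 1) * xi for xi in x)

def box_of(y, K):
    """The unique x with y in Q_x.  Coordinatewise, floor((y_i + K)/(2K+1))
    is the integer nearest to y_i / (2K+1)."""
    L = 2 * K + 1
    return tuple((yi + K) // L for yi in y)

def in_box(y, x, K):
    """Membership test: ||y - q_x||_inf <= K."""
    q = box_center(x, K)
    return max(abs(yi - qi) for yi, qi in zip(y, q)) <= K
```

Since every vertex lies in exactly one cube, `box_of` inverts the tiling, which is what makes the coupling with site percolation on the coarse lattice possible.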
Note that this definition coincides with \eqref{def_box}. In analogy to Lemma~\ref{lemma_recurrence_high_d_fc=c} the frog cluster dominates a percolation cluster.
\begin{lemma}\label{lemma_frog_model_with_death_percolation_arbitrary_d} For $d \geq 2$ consider the frog model $\operatorname{FM}^*(d,\pi_{\text{sym}},s)$ and supercritical site percolation on $\mathbb{Z}^d$. There are constants $s_r(d) < 1$ and $K \in \mathbb{N}_0$ such that for any $s \geq s_r(d)$, $A \subseteq \mathbb{Z}^d$, $v \in \mathbb{Z}^d$ and for all $k \geq 0$ \begin{equation*}
\mathbb{P}(\lvert A \cap C_v \rvert \geq k) \leq \mathbb{P}\Bigl(\Bigl\lvert \bigcup_{x \in A}Q_x\cap F\hspace{-0.04em}C_{q_v}^*\Bigr\rvert \geq k\Bigr). \end{equation*} \end{lemma}
\begin{proof} We couple the frog model with percolation as follows: A site $x \in \mathbb{Z}^{d}$ is called open if for every $e \in \mathcal{E}_{d}$ there exists a frog path from $q_x$ to $q_{x + e}$ in $Q_x$. Note that $x \in C_v$ now implies $q_x \in F\hspace{-0.04em}C_{q_v}^*$ for any $v \in \mathbb{Z}^d$. We denote the probability of a site $x$ to be open by $p(K,s)$. By Lemma~\ref{lemma_recurrence_cube_size} $p(K,1)$ is close to $1$ for $K$ large. As in the proof of Lemma~\ref{lemma_recurrence_small_drift} one can show that $\lim_{s \to 1} p(K,s)= p(K,1)$. Thus, we can choose $K \in \mathbb{N}$ and $s_r >0$ such that $p(K,s) > p_c(d)$ for all $s > s_r$, i.e. the percolation is supercritical. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop_d>2_arbitrary_drift_small_d}]
Using Lemma~\ref{lemma_frog_model_with_death_percolation_arbitrary_d} instead of Lemma~\ref{lemma_recurrence_high_d_fc=c} and boxes $Q_x$ instead of $Q_x'$, the proof is analogous to the proof of Proposition~\ref{prop_d>2_arbitrary_drift_large_d}. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm_d>2_arbitrary_drift_i}]
Theorem \ref{thm_d>2_arbitrary_drift_i} follows from Proposition~\ref{prop_d>2_arbitrary_drift_large_d} and Proposition~\ref{prop_d>2_arbitrary_drift_small_d}. \end{proof}
\subsection*{Transience for $d\geq 2$ and arbitrary drift}
\begin{proof}[Proof of Theorem~\ref{thm_d=2_arbitrary_drift_ii} and Theorem~\ref{thm_d>2_arbitrary_drift_ii}] Let the parameters $\alpha>0$ and $d\geq 2$ be fixed throughout the proof. For $x \in \mathbb{Z}^d$ we define \begin{equation}
L_x = \{y \in \mathbb{Z}^d \colon y_i = x_i \text{ for all $2 \leq i \leq d$}\}. \end{equation} $L_x$ consists of all vertices which agree in all coordinates with $x$ except the $e_1$-coordinate. The key observation used in the proof is that all particles mainly move along these lines if the weight $w$ is large.
We dominate the frog model by a branching random walk on $\mathbb{Z}^d$. At time $n=0$ the branching random walk starts with one particle at the origin. At every step in time every particle produces offspring as follows: For every particle located at $x \in \mathbb{Z}^d$ consider an independent copy of the frog model. At any vertex $z \in \mathbb{Z}^d \setminus L_x$ the particle produces $|\{y \in L_x \colon x \fp{L_x} y, y \to z\}|$ many children. Notice that this number might be $0$ or infinite. The particle does not produce any offspring at a vertex in $L_x$. Further, note that the particles reproduce independently of each other as we use independent copies of the frog model to generate the offspring.
One can couple this branching random walk with the original frog model. To explain the coupling, let us briefly describe how to go from the original frog model to the branching random walk. Recall that the frog model is entirely determined by a set of trajectories $(S_n^x)_{n \in \mathbb{N}_0, x \in \mathbb{Z}^d}$ of random walks. We use this set of trajectories to produce the particles in the first generation of the branching random walk, i.e.~the children of the particle initially at $0$, as explained above. Now, assume that the first $n$ generations of the branching random walk have been created. Enumerate the particles in the $n$-th generation. When generating the offspring of the $i$-th particle in this generation, delete all trajectories of the frog model used for generating the offspring of a particle~$j$ with $j < i$ or a particle in an earlier generation, and replace them by independent trajectories. Otherwise, use the original trajectories.
One can check that the branching random walk dominates the frog model in the following sense: For every frog in $\mathbb{Z}^d \setminus L_0$ that is activated and visits $0$ there is a particle at $0$ in the branching random walk. Thus, the number of visits to the origin by particles in the branching random walk is at least as big as the number of visits to $0$ by frogs in the frog model, not counting those visits to $0$ made by frogs initially in $L_0$. Note that, if the frog model was recurrent, then almost surely there would be infinitely many frogs in $\mathbb{Z}^d \setminus L_0$ activated that return to $0$. In particular, also in the branching random walk infinitely many particles would return to $0$. Therefore, to prove transience of the frog model it suffices to show that in the branching random walk only finitely many particles return to $0$ almost surely.
Let $D_n$ denote the set of descendants in the $n$-th generation of the branching random walk. Further, for $i \in D_n$ let $X_n^i$ be the $e_1$-coordinate of the location of particle $i$. Define for $\theta >0$ and $n \in \mathbb{N}_0$ \begin{equation}
\mu = \mathbb{E} \Bigl[ \sum_{i \in D_1} \mathrm{e}^{-\theta X_1^i} \Bigr] \qquad \text{and} \qquad M_n = \frac{1}{\mu^n} \sum_{i \in D_n} \mathrm{e}^{-\theta X_n^i}. \end{equation} We claim that $\mu <1$ for $w$ close to $1$ and $\theta$ small, which, in particular, implies that $(M_n)_{n \in \mathbb{N}_0}$ is well-defined. We prove this claim at the end of the proof. We next show that $(M_n)_{n \in \mathbb{N}_0}$ is a martingale with respect to the filtration $(\mathcal{F}_n)_{n \in \mathbb{N}_0}$ with $\mathcal{F}_n=\sigma \bigl(D_1, \ldots, D_{n}, (X^i_1)_{i \in D_1}, \ldots, (X^i_{n})_{i \in D_{n}} \bigr)$.
Obviously, $M_n$ is $\mathcal{F}_n$-measurable. For a particle $i \in D_n$ denote its descendants in generation $n+1$ by $D_{n+1}^i$. Since particles branch independently, we get \begin{align*}
\mathbb{E}[ M_{n+1} | \mathcal{F}_n ] &= \mathbb{E} \Bigl[\frac{1}{\mu^{n+1}} \sum_{i \in D_{n+1}} \mathrm{e}^{-\theta X_{n+1}^i} \bigm\vert \mathcal{F}_n \Bigr] \\
&= \frac{1}{\mu^n} \sum_{i \in D_{n}} \mathrm{e}^{-\theta X_{n}^i} \cdot \frac{1}{\mu} \mathbb{E} \Bigl[ \sum_{j \in D_{n+1}^i} \mathrm{e}^{-\theta \left( X_{n+1}^j - X_{n}^i \right)} \Bigr]. \end{align*} Note that the expectation on the right hand side is independent of $i$ and $n$ and therefore, by the definition of $\mu$, we conclude \begin{align*}
\mathbb{E}[ M_{n+1} | \mathcal{F}_n ] = M_n. \end{align*} This calculation also yields $\mathbb{E}[\lvert M_n\rvert]= \mathbb{E}[M_n]=\mathbb{E}[M_0]=1$, and therefore $M_n \in \mathcal{L}^1$. This in particular implies that $M_n$ is finite almost surely for every $n \in \mathbb{N}_0$. Thus, $X_n^i=0$ can only occur for finitely many $i \in D_n$ almost surely for every $n \in \mathbb{N}_0$, i.e.~in every generation only finitely many particles can be at $0$. By the martingale convergence theorem, there exists an almost surely finite random variable $M_\infty$, such that $\lim_{n \to \infty} M_n = M_\infty$ almost surely. Combining this with $\mu <1$, we get $\lim_{n \to \infty}\sum_{i \in D_{n}} \mathrm{e}^{-\theta X_{n}^i} = 0$ almost surely. Hence, $X_n^i=0$ for some $i \in D_n$ occurs only for finitely many times $n$. Overall, this shows that the branching random walk is transient.
It remains to show $\mu < 1$. Note that the particles in $D_1$ are at vertices in the set $\{y \in \mathbb{Z}^d\setminus L_0 \colon 0 \fp{L_0} y\}$. Therefore, for the calculation of $\mu$ we first need to consider all sites in $L_0$ that are reached from $0$ by frog paths in $L_0$. The idea is to control the number of frogs activated on the negative $e_1$-axis using Lemma~\ref{lemma_1d_fm} and estimating the number of frogs activated on the positive $e_1$-axis by assuming the worst case scenario that all of them will be activated. Then, for every $k \in \mathbb{Z}$ we have to estimate the number of vertices with $e_1$-coordinate $k$ visited by each of these active frogs on the $e_1$-axis. Due to the definition of $\mu$, the sites visited by frogs on the positive $e_1$-axis do not contribute much to $\mu$. Recall that $H_k$ denotes the hyperplane that consists of all vertices with $e_1$-coordinate equal to $k\in \mathbb{Z}$, see \eqref{definition_hyperplane}. For $k,i \in \mathbb{Z}$ define \begin{equation*} N_{k,i} = \lvert\{x \in H_k \setminus L_0 \colon (i,0, \ldots, 0) \to x\}\rvert. \end{equation*} As $N_{k,i}$ equals $N_{k-i,0}$ in distribution for all $i,k \in \mathbb{Z}$, we get \begin{align} \label{proof_transience_arbirtrary_drift_1}
\mu &= \mathbb{E} \Bigl[ \sum_{i \in D_1} \mathrm{e}^{-\theta X_1^i} \Bigr] \nonumber \\
&= \sum_{i=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \mathbb{P}\bigl(0 \fp{L_0} (i,0, \ldots, 0)\bigr) \mathbb{E}[N_{k,i}] \mathrm{e}^{-\theta k} \nonumber\\
&= \sum_{k=-\infty}^{\infty} \mathbb{E}[N_{k,0}] \mathrm{e}^{-\theta k} \sum_{i=-\infty}^{\infty} \mathrm{e}^{-\theta i} \mathbb{P}\bigl(0 \fp{L_0} (i,0, \ldots, 0)\bigr). \end{align} Note that $\mathbb{P}\bigl(0 \fp{L_0} (i,0, \ldots, 0)\bigr)$ is at most the probability of the event $\{0 \fp{\mathbb{Z}} i\}$ in the frog model $\operatorname{FM}(1,1,\alpha)$. Hence, by Lemma~\ref{lemma_1d_fm}, there is a constant $c_1 >0$ such that $\mathbb{P}\bigl(0 \fp{L_0} (i,0, \ldots, 0)\bigr) \leq \mathrm{e}^{c_1i}$ for all $i \leq 0$. Thus, \eqref{proof_transience_arbirtrary_drift_1} implies that for $\theta<c_1$ there is a constant $c_2=c_2(\theta)< \infty$ such that \begin{equation}\label{proof_transience_arbirtrary_drift_2}
\mu \leq c_2 \sum_{k=-\infty}^{\infty} \mathbb{E}[N_{k,0}] \mathrm{e}^{-\theta k}. \end{equation} Next, we estimate $\mathbb{E}[N_{k,0}]$, the expected number of vertices in $H_k \setminus L_0$ visited by a single particle starting at $0$. Recall that the trajectory of frog $0$ is denoted by $(S_n^0)_{n\in \mathbb{N}_0}$. We define $T_k = \min\{n \in \mathbb{N}_0 \colon S_n^0 \in H_k\}$, the entrance time of the hyperplane $H_k$, and $T_k' = \max\{n \in \mathbb{N}_0 \colon S_n^0 \in H_k\}$, the last time frog $0$ is in the hyperplane $H_k$. Obviously, $N_{k,0}=0$ on the event $\{T_k = \infty\}$. Hence, assume we are on $\{T_k < \infty\}$. The particle can only visit a vertex in $H_k \setminus L_0$ at time $T_k$ if the random walk took at least one step in non-$e_1$-direction up to time $T_k$. This happens with probability $\mathbb{E}[1-w^{T_k}]$. Furthermore, the number of vertices visited in $H_k$ after time $T_k$ can be estimated by the number of steps in non-$e_1$-direction taken between times $T_k$ and $T_k'$. This number is binomially distributed and, thus, its expectation equals $(1-w)\mathbb{E}[T_k'-T_k]$. Overall, this implies \begin{align*}
\mathbb{E}[N_{k,0}] \leq \mathbb{P}(T_k < \infty) \Bigl(\mathbb{E}\bigl[1- w^{T_k} \mid T_k < \infty \bigr] + (1-w) \mathbb{E}\bigl[T_k' - T_k \mid T_k <\infty\bigr] \Bigr). \end{align*} For $k < 0$ the probability $\mathbb{P}(T_k < \infty)$ decays exponentially in $k$ by Lemma~\ref{lemma_hitting_probability_hyperplane}. Therefore, we can choose $\theta$ small such that $\mathbb{P}(T_k < \infty) \mathrm{e}^{-\theta k} \leq \mathrm{e}^{-\theta \lvert k \rvert}$ for all $k \in \mathbb{Z}$. Thus, \eqref{proof_transience_arbirtrary_drift_2} implies \begin{equation}\label{proof_transience_arbirtrary_drift_3}
\mu \leq c_2 \sum_{k=-\infty}^{\infty} \mathrm{e}^{-\theta \lvert k \rvert}\Bigl( \mathbb{E}\bigl[1- w^{T_k} \mid T_k < \infty \bigr] + (1-w) \mathbb{E}\bigl[T_k' - T_k \mid T_k <\infty\bigr] \Bigr). \end{equation} Note that the sum in \eqref{proof_transience_arbirtrary_drift_3} is finite as $\mathbb{E}\bigl[T_k' - T_k \mid T_k <\infty\bigr]$ is independent of $k$. By monotone convergence $\lim_{w \to 1} \mu =0$ and the right hand side of \eqref{proof_transience_arbirtrary_drift_3} is continuous in $w$. Therefore, we can choose $w$ close to $1$ such that $\mu < 1$, as claimed. \end{proof}
\subsection*{Transience for $d=2$ and arbitrary weight}
\begin{proof}[Proof of Theorem~\ref{thm_d=2_arbitrary_weight_ii}] Let $w >0$ be fixed throughout the proof. As in the proof of Theorem~\ref{thm_d=2_arbitrary_drift_ii} and Theorem~\ref{thm_d>2_arbitrary_drift_ii} we dominate the frog model by a branching random walk. This time we use a one-dimensional branching random walk on $\mathbb{Z}$. For the construction of the process, let $\xi$ be the number of activated frogs in an independent one-dimensional frog model $\operatorname{FM}^*(1,\pi_{\text{sym}}, 1-w)$ with two active frogs at $0$ initially. At time $n=0$, the branching random walk starts with one particle in the origin. At every time $n \in \mathbb{N}$, the process repeats the following two steps. First, every particle produces offspring independently of all other particles with the number of offspring being distributed as $\xi$. Then, each particle jumps to the right with probability $\frac{1+\alpha}{2}$ and to the left with probability $\frac{1-\alpha}{2}$.
As an intermediate step to understand the relation between the frog model and this branching random walk on $\mathbb{Z}$, we first couple the frog model with a branching random walk on $\mathbb{Z}^2$ with initially one particle at $0$. Partition the lattice $\mathbb{Z}^2$ into hyperplanes $(H_n)_{n \in \mathbb{Z}}$ as defined in \eqref{definition_hyperplane}. Let the frog model $\operatorname{FM}(2,\pi_{w,\alpha})$ with initially two active frogs at $0 \in H_0$ evolve and stop every frog when it first enters $H_1$ or $H_{-1}$. Every frog leaves its hyperplane in every step with probability $w$. Thus, the number of stopped frogs is distributed according to $\xi$. A stopped frog is in $H_1$ with probability $\frac{1+\alpha}{2}$ and in $H_{-1}$ with probability $\frac{1-\alpha}{2}$. The stopped particles form the offspring of the particle at $0$ in the branching random walk. We repeat this procedure to generate the offspring of an arbitrary particle in the branching random walk. Introduce an ordering of all particles in the branching random walk and let the particles branch one after another. Before generating the offspring of the $i$-th particle, refill every vertex which is no longer occupied by a sleeping frog with an extra independent sleeping frog. Unstop frog $i$ and let it continue its work as usual, ignoring all other stopped frogs. Note that there is a sleeping frog at the starting vertex of frog $i$ that is immediately activated. This explains our definition of $\xi$. Again stop every frog once it enters one of the neighbouring hyperplanes. These newly stopped frogs form the offspring of the $i$-th particle. This procedure creates a branching random walk with independent identically distributed offspring. Every vertex visited in the frog model is obviously also visited by the branching random walk.
Now, project all particles in the intermediate two-dimensional branching random walk onto the first coordinate. This creates a branching random walk on $\mathbb{Z}$ distributed as the one described above. The construction shows that transience of this one-dimensional branching random walk implies transience of the frog model.
To prove that the one-dimensional branching random walk is transient for $\alpha$ close to $1$, we proceed as in the proof of Theorem~\ref{thm_d=2_arbitrary_drift_ii} and Theorem~\ref{thm_d>2_arbitrary_drift_ii}. The proof only differs in the calculation of the parameter $\mu$ defined by \begin{equation*}
\mu = \mathbb{E} \Bigl[ \sum_{i \in D_1} \mathrm{e}^{-\theta X_1^i} \Bigr] \end{equation*} for $\theta >0$ with $D_1$ denoting the set of descendants in the first generation of the branching random walk and $X_1^i$ the $e_1$-coordinate of the location of particle $i \in D_1$. Here, we immediately get \begin{equation*}
\mu = \frac12 \bigl((1-\alpha) \mathrm{e}^{\theta} + (1+\alpha) \mathrm{e}^{-\theta}\bigr) \mathbb{E}[\xi]. \end{equation*} Lemma~\ref{lemma_1d_fm} implies $\mathbb{E}[\xi] < \infty$. Thus, we can choose $\theta = \log\bigl(2\mathbb{E}[\xi]\bigr)$. Then $\lim_{\alpha \to 1} \mu = \frac12$ and by continuity $\mu < 1$ for $\alpha$ close to $1$, as required. \end{proof}
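The closing computation is simple enough to check numerically. A minimal sketch (ours; `E_xi` stands for the finite but otherwise unknown value of $\mathbb{E}[\xi]$):

```python
import math

def mu(alpha, E_xi, theta):
    """mu = ((1-alpha) e^theta + (1+alpha) e^{-theta}) E[xi] / 2,
    the mean of the one-step additive functional of the one-dimensional
    branching random walk."""
    return 0.5 * ((1 - alpha) * math.exp(theta)
                  + (1 + alpha) * math.exp(-theta)) * E_xi
```

With the choice $\theta = \log(2\,\mathbb{E}[\xi])$ one gets $\mu = (1-\alpha)\,\mathbb{E}[\xi]^2 + (1+\alpha)/4$, so indeed $\mu \to \frac12$ as $\alpha \to 1$ and $\mu < 1$ for $\alpha$ close enough to $1$.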
\section{Open Problems}\label{open_problems} We believe that there is a monotone curve separating the transient from the recurrent regime in the phase diagram shown in Figure~\ref{phase_diagram}.
\begin{con}\label{con_critical_curve} For every dimension $d$ there exists a decreasing function $f_d \colon [0,1] \to [0,1]$ such that the frog model $\operatorname{FM}(d,\pi_{w,\alpha})$ is recurrent for all $w,\alpha \in [0,1]$ such that $w<f_d(\alpha)$ and transient for all $w,\alpha \in [0,1]$ such that $w>f_d(\alpha)$. \end{con}
Intuitively, the frog model approximates a binary branching random walk from below as $d \to \infty$, since each frog activates a new frog in every step if there are ``infinitely'' many directions to choose from. This leads to the following conjecture.
\begin{con}\label{con_high_d}
The sequence of functions $(f_d)_{d \in \mathbb{N}}$ is increasing in $d$. \end{con}
In the proof of Theorem~\ref{thm_d>2_arbitrary_drift_i} we use Lemma~\ref{lemma_recurrence_high_d_percolation_parameter_bound} to show that in the frog model with death a frog cluster is dense with positive probability if the survival probability is larger than $\frac34$ and $d$ is large. Indeed, we believe that every infinite frog cluster is dense. Hence, $\operatorname{FM}(d,\pi_{w,\alpha})$ would be recurrent for all $\alpha<1$ if $\operatorname{FM}^*(d-1,\pi_{\text{sym}},1-w)$ has a positive survival probability. Further, we believe that the critical survival probability is decreasing in $d$. See also the discussion in \cite[Chapter~1.2]{AMP02pt}. This would imply that $f_d(1^{-})$ is increasing in $d$.
The comparison with a binary branching random walk raises another question. Let \begin{equation*}
g \colon [0,1] \to [0,1],\ g(\alpha) = \min\bigl\{1, (2(1-\sqrt{1-\alpha^2}))^{-1}\bigr\}. \end{equation*} A binary branching random walk on $\mathbb{Z}^d$ with transition probabilities as in \eqref{transition_function} is recurrent iff $w < g(\alpha)$, see \cite[Section~4]{GM06}.
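The threshold function $g$ can be evaluated directly; the following small sketch (ours) confirms the boundary values relevant to the comparison:

```python
import math

def g(alpha):
    """Recurrence threshold for the binary branching random walk:
    g(alpha) = min(1, 1 / (2 (1 - sqrt(1 - alpha^2))))."""
    if alpha == 0.0:
        return 1.0  # the unconstrained expression diverges; the min gives 1
    return min(1.0, 1.0 / (2.0 * (1.0 - math.sqrt(1.0 - alpha ** 2))))
```

For small drift $\alpha$ the branching random walk is recurrent for every weight $w$ ($g(\alpha)=1$), while at maximal drift $\alpha=1$ recurrence requires $w < \frac12$.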
\begin{question} Does the sequence of functions $(f_d)_{d \in \mathbb{N}}$ converge pointwise to $g$ as $d \to \infty$? \end{question}
\paragraph{Acknowledgements} We thank Noam Berger for useful discussions and comments. We are grateful to the referee for pointing out several glitches in the first version.
\Addresses
\end{document}
\begin{document}
\title{Characterization of Strict Positive Definiteness on products of complex spheres} \author{Mario H. Castro$^a$, Eugenio Massa$^b$, and Ana Paula Peron$^b$. \\\scriptsize{$^a$Departamento de Matem\'{a}tica, UFU - Uberlândia (MG), Brazil. }\\ \scriptsize{ $^b$Departamento de Matem\'{a}tica, ICMC-USP - S\~{a}o Carlos, Caixa Postal 668, 13560-970 S\~{a}o Carlos (SP), Brazil.}\\ \scriptsize{
e-mails: mariocastro@ufu.br,\,\,\,\, eug.massa@gmail.com,\,\,\,\, apperon@icmc.usp.br. } }
\maketitle
\begin{abstract} In this paper we consider Positive Definite functions on products $\Omega_{2q}\times\Omega_{2p}$ of complex spheres, and we obtain a condition, in terms of the coefficients in their disc polynomial expansions, which is necessary and sufficient for the function to be Strictly Positive Definite. The result also covers the more delicate cases in which $p$ and/or $q$ may be $1$ or $\infty$.
The condition we obtain states that a suitable set in ${\mathbb{Z}}^2$, containing the indexes of the strictly positive coefficients in the expansion, must intersect every product of arithmetic progressions.
\\ \noindent{\bf MSC:} 42A82; 42C10.
\\ \noindent{\bf Keywords:} Strictly Positive Definite Functions, Product of Complex Spheres, Generalized Zernike Polynomial. \end{abstract}
\section{Introduction} The main purpose of this paper is to obtain a characterization of Strictly Positive Definite functions on products of complex spheres, in terms of the coefficients in their disc polynomial expansions: these results are contained in Theorems \ref{th_main}, \ref{th_main_1p} and \ref{th_main_11}.
Positive Definiteness and Strict Positive Definiteness are important in many applications: for example, Strict Positive Definiteness is required in certain interpolation problems in order to guarantee the uniqueness of their solutions. From a theoretical point of view, the problem of characterizing both Positive Definiteness and Strict Positive Definiteness has been considered in many recent papers, in different contexts. More details on the applications and the literature related to this problem will be given in Section \tref{sec_liter}.
\par
Let $\varOmega$ be a nonempty set. A kernel $K: \varOmega \times \varOmega \to {\mathbb{C}}$ is called {\em Positive Definite} (PD in the following) on $\varOmega$ when \begin{equation}\label{eq-quad-form-geral} \sum_{\mu,\nu=1}^L c_\mu\overline{c_\nu} K(x_\mu,x_\nu) \geq 0, \end{equation} for any $L\ge1$, $c=(c_1,\ldots,c_L) \in {\mathbb{C}}^L$ and any subset $X:=\{x_1,\ldots,x_L\} $ of distinct points in $\varOmega$. Moreover, $K$ is {\em Strictly Positive Definite} (SPD in the following) when it is Positive Definite and the inequality above is strict for $c\neq0$.
If $S^q$ is the $q$-dimensional unit sphere in the Euclidean space ${\mathbb{R}}^{q+1}$, we say that a continuous function $f:[-1,1] \to {\mathbb{R}}$ is PD (resp. SPD) on $S^q$, when the associated kernel $K(v,v'):=f(v\cdot_{\mathbb{R}} v')$ is PD (resp. SPD) on $S^q$ (here ``~$\cdot_{\mathbb{R}}$~" is the usual inner product in ${\mathbb{R}}^{q+1}$).
In \cite{scho-42} it was proved that
a continuous function $f$ is PD on $S^q$, $q\geq1$, if, and only if, it admits an expansion in the form
\begin{equation}\label{eq-scho}\begin{array}{ccc}
&\displaystyle f(t)=\sum_{m\in{\mathbb{Z}}_+} a_mP_m^{(q-1)/2}(t),\quad t\in[-1,1],&\\&\mbox{where $\sum a_mP_m^{(q-1)/2}(1)<\infty$ and $a_m\geq0$ for all $m\in{\mathbb{Z}}_+$. }\end{array}
\end{equation}
In \pref{eq-scho}, $P_m^{(q-1)/2}$ are the Gegenbauer polynomials of degree $m$ associated to $(q-1)/2$ (see \cite[page 80]{szego}) and ${\mathbb{Z}}_+={\mathbb{N}}\cup\pg{0}$.
In
\cite{debao-sun-valdir} it was proved that the function $f$ in \pref{eq-scho} is also SPD on $S^q$, $q\geq2$, if, and only if, the set
$
\{m\in{\mathbb{Z}}_+: a_{m}>0\}
$
contains infinitely many odd and infinitely many even integers. This condition is equivalent to requiring that
\begin{equation}\label{eq_inters_Sd}
\{m\in{\mathbb{Z}}_+: a_{m}>0\}\cap (2{\mathbb{N}}+x)\neq \emptyset \qquad\mbox{for every $x\in{\mathbb{N}}$}.
\end{equation}
The complex case is defined in a similar way: if $\Omega_{2q}$ is the unit sphere in ${\mathbb{C}}^q$, $q\geq2,$ and $\mathbb{D}$ is the unit closed disc in ${\mathbb{C}}$,
then a continuous function $f:\mathbb{D} \to {\mathbb{C}}$ is said to be PD (resp. SPD) on $\Omega_{2q}$ if the associated kernel $K(z,z'):=f(z\cdot z')$ is PD (resp. SPD) on $\Omega_{2q}$, where ``~$\cdot$~" is the usual inner product in ${\mathbb{C}}^q$.
As proved in \cite{P-valdir-pd-esfcompl}, a continuous function $f:\mathbb{D} \to {\mathbb{C}}$ is PD on $\Omega_{2q}$, $q\geq2$ if, and only if, it has the representation in series of the form
\begin{equation}\label{eq-pd-esfq}\begin{array}{ccc}
&\displaystyle f(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}R_{m,n}^{q-2}(\xi),\quad \xi\in{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,n}<\infty$ and $a_{m,n}\geq0$ for all $m,n\in{\mathbb{Z}}_+$.}&
\end{array}
\end{equation}
The functions $R_{m,n}^{q-2}$ in \pref{eq-pd-esfq} are the {\em disc polynomials}, or {\em generalized Zernike polynomials} (see Equation \pref{eq-def-pol-disc}).
The condition for $f$ to be SPD was obtained in \cite{meneg-jean-traira,P-valdir-complexapproach}:
$f$ as in \pref{eq-pd-esfq} is SPD on $\Omega_{2q}$ if, and only if,
the set
$
\{m-n\in{\mathbb{Z}}: a_{m,n}>0\}
$
intersects every full arithmetic progression in ${\mathbb{Z}}$, that is,
\begin{equation}\label{eq_inters_Oq}
\{m-n\in{\mathbb{Z}}: a_{m,n}>0\}\cap (N{\mathbb{Z}}+x) \neq \emptyset
\qquad\mbox{for every $N,x\in{\mathbb{N}}$.}
\end{equation}
The characterization of SPD functions on the spheres $S^1$, $\Omega_2$, $S^\infty$ and $\Omega_\infty$ were also considered in \cite{P-valdir-claudemir-spd-compl,men-spd-analysis,meneg-jean-traira}, obtaining similar results (see also in Section \ref{sec_S1}).
Products of real spheres were considered in \cite{P-jean-men-pdSMxSm, jean-men, P-jean-menS1xSm, P-jean-menS1xS1}:
a continuous PD function on $S^q\times S^p$, $q,p\geq1$ can be written as
\begin{equation}\label{eq-pd-esfSd}\begin{array}{c}
\displaystyle f(t,s)=\sum_{m,k\in{\mathbb{Z}}_+} a_{m,k}P_m^{(q-1)/2}(t)P_k^{(p-1)/2}(s),\quad t,s\in[-1,1],\\
\mbox{where $\sum a_{m,k}P_m^{(q-1)/2}(1)P_k^{(p-1)/2}(1)<\infty$ and $a_{m,k}\geq0$ for all $m,k\in{\mathbb{Z}}_+$,}
\end{array}
\end{equation}
and, for $q,p\geq2$, it is also SPD on $S^q\times S^p$ if, and only if, the following condition, obtained in \cite{jean-men}, holds true: in each intersection of the set
$
\{(m,k)\in{\mathbb{Z}}_+^2: a_{m,k}>0\}
$
with the four sets $(2{\mathbb{Z}}_++x)\times(2{\mathbb{Z}}_++y),\ x,y\in\pg{0,1}$, there exists a sequence $(m_i,k_i)$ such that $m_i,k_i\to \infty$.
In fact, this condition is equivalent to the following one:
\begin{equation}\label{eq_inters_SpSq}
\{(m,k)\in{\mathbb{Z}}_+^2: a_{m,k}>0\}\cap (2{\mathbb{N}}+x)\times(2{\mathbb{N}}+y)\neq \emptyset
\qquad\mbox{for every $x,y\in{\mathbb{N}}$.}
\end{equation}
Again, when considering $S^1$ in the place of $S^q$ and/or $S^p$, similar (though not identical) results are obtained: see \cite{ P-jean-menS1xSm, P-jean-menS1xS1} and Section \ref{sec_S1}.
\subsection{Main results}
The purpose of this paper is to consider the same kind of problems described above for the case of the products $\Omega_{2q}\times\Omega_{2p}$ of two complex spheres.
The characterization of Positive Definiteness in this setting was obtained
in \cite[Theorem 7.1]{P-berg-porcu_gelfand} for $ q,p\in{\mathbb{N}},\;q,p\geq2$: it was proved that a continuous function $f:\mathbb{D} \times \mathbb{D} \to {\mathbb{C}}$ is PD on $\Omega_{2q}\times\Omega_{2p}$ if, and only if, it admits an expansion in the form \begin{equation}\label{eq-pd-prod-esf-compl} \begin{array}{ccc} &\displaystyle f(\xi,\eta)=\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta),\quad (\xi,\eta)\in{\mathbb{D}}\times{\mathbb{D}},&\\ &\mbox{where $\sum a_{m,n,k,l}<\infty$ and $a_{m,n,k,l}\geq0$ for all $m,n,k,l\in{\mathbb{Z}}_+$.}&\end{array} \end{equation}
If $p$ and/or $q$ can take the values $1$ or $\infty$, a characterization of Positive Definiteness is also known (see in Section \ref{sec_PD_on_prod}), except for the case $p=q=\infty$, which we address in Theorem \ref{th_PDinfty}. In fact, if we define $ R^{\infty}_{m,n}(\xi):={\xi\vphantom{\overline\xi}}^{m}\overline\xi^{n},\ \xi\in\mathbb{D}$, then the characterization \pref{eq-pd-prod-esf-compl} holds for $ q,p\in{\mathbb{N}}\cup\pg{\infty},\;q,p\geq2$.
Our main results are contained in the following theorems, where we characterize SPD functions on the product of two complex spheres $\Omega_{2q}\times\Omega_{2p}$, $q,p\in{\mathbb{N}}\cup\pg{\infty}$, in terms of the coefficients in their expansions. \begin{theorem}\label{th_main}
Let $ q,p\in{\mathbb{N}}\cup\pg{\infty},\ q,p\geq2$. A continuous function $f:\mathbb{D} \times \mathbb{D} \to {\mathbb{C}}$, which is PD on $\Omega_{2q}\times\Omega_{2p}$, is also SPD on $\Omega_{2q}\times\Omega_{2p}$ if, and only if, considering its expansion as in \pref{eq-pd-prod-esf-compl}, the set $$J':=\pg{(m-n,k-l)\in{\mathbb{Z}}^2:\ a_{m,n,k,l}>0}$$ intersects every product of full arithmetic progressions in ${\mathbb{Z}}$,
that is,
\begin{equation}\label{eq_inters_Opq}
J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset\qquad \mbox{for every $N,M,x,y\in{\mathbb{N}}$.}
\end{equation}
\end{theorem}
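Condition \pref{eq_inters_Opq} is a purely arithmetic statement about the support $J'$. The following informal sketch (with hypothetical finite windows standing in for $J'$, which in the theorem is a subset of ${\mathbb{Z}}^2$) shows the congruence check involved:

```python
# Illustrative sketch of the intersection condition: J' meets the product
# (NZ + x) x (MZ + y) iff it contains a pair (a, b) with a = x (mod N) and
# b = y (mod M).  The finite windows below are hypothetical examples.
def meets(Jp, N, x, M, y):
    return any((a - x) % N == 0 and (b - y) % M == 0 for (a, b) in Jp)

# A support containing only even first coordinates misses (2Z + 1) x Z,
# so a function with this support cannot be SPD.
Jp_even = {(2 * s, t) for s in range(-10, 11) for t in range(-10, 11)}
assert not meets(Jp_even, 2, 1, 1, 0)

# A full window does meet this particular progression.
Jp_full = {(s, t) for s in range(-10, 11) for t in range(-10, 11)}
assert meets(Jp_full, 3, 1, 4, 2)
```

(Of course, for \pref{eq_inters_Opq} the check must succeed for every $N,M,x,y$, which no finite set can achieve.)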
It is worth noting the similarities between the characterizations of SPD in the various cases described here: each one reduces to a condition on the intersection between a set constructed with the indexes of the positive coefficients in the expansion of the function and certain arithmetic progressions, or products of them; compare the conditions (\ref{eq_inters_Sd}-\ref{eq_inters_Oq}-\ref{eq_inters_SpSq}-\ref{eq_inters_Opq}).
\par
When $p$ and/or $q$ can take the value $1$, we obtain the following characterizations.
\begin{theorem}\label{th_main_1p}
Let $2\leq p\in{\mathbb{N}}\cup\pg{\infty}$. A continuous function $f:\partial\mathbb{D} \times \mathbb{D} \to {\mathbb{C}}$, which is PD on $\Omega_2\times\Omega_{2p}$, is also SPD on $\Omega_2\times\Omega_{2p}$ if, and only if, considering its expansion as
\begin{equation}\label{eq-pd-prod-esf-complO2p}\begin{array}{rcl}
&\displaystyle f(\xi,\eta)=\sum_{m\in{\mathbb{Z}},\,k,l\in{\mathbb{Z}}_+} a_{m,k,l}\xi^mR_{k,l}^{p-2}(\eta),\quad (\xi,\eta)\in\partial{\mathbb{D}}\times{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,k,l}<\infty$ and $a_{m,k,l}\geq0$ for all $m\in{\mathbb{Z}},\,k,l\in{\mathbb{Z}}_+$,}&
\end{array}
\end{equation} the set $$\pg{(m,k-l)\in{\mathbb{Z}}^2:\ a_{m,k,l}>0}$$ intersects every product of full arithmetic progressions in ${\mathbb{Z}}$.
\end{theorem}
\begin{theorem}\label{th_main_11}
A continuous function $f:\partial \mathbb{D} \times \partial\mathbb{D} \to {\mathbb{C}}$, which is PD on $\Omega_2\times\Omega_2$, is also SPD on $\Omega_2\times\Omega_2$ if, and only if, considering its expansion as
\begin{equation}\label{eq-pd-prod-esf-complO2}\begin{array}{rcl}
&\displaystyle f(\xi,\eta)=\sum_{m,k\in{\mathbb{Z}}} a_{m,k}\xi^m\eta^k,\quad (\xi,\eta)\in\partial{\mathbb{D}}\times\partial{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,k}<\infty$ and $a_{m,k}\geq0$ for all $m,k\in{\mathbb{Z}}$,}&\end{array}
\end{equation} the set $$\pg{(m,k)\in{\mathbb{Z}}^2:\ a_{m,k}>0}$$ intersects every product of full arithmetic progressions in ${\mathbb{Z}}$.
\end{theorem}
We observe that Theorems \ref{th_main_1p} and \ref{th_main_11}
follow immediately from the same proof as Theorem \ref{th_main}, after rewriting the expansions \pref{eq-pd-prod-esf-complO2p} and \pref{eq-pd-prod-esf-complO2} so that they become formally identical to \pref{eq-pd-prod-esf-compl} (see Lemma \ref{lm_charDD}). This is a remarkable fact considering that, in the real case, when the product involves the sphere $S^1$ (see \cite{P-jean-menS1xS1,P-jean-menS1xSm}), quite different arguments were needed compared with the higher-dimensional case in \cite{jean-men}.
We remark however that Theorem \ref{th_main_11} is not new, as it is a particular case of the main result in \cite{men-gue-toro}.
\par
This paper is organized in the following way. In Section \ref{sec_liter} we discuss some further literature related to our problem.
In Section \ref{sec_teoria} we set our notation and discuss some known results that will be used later.
Theorem \ref{th_main} is proved in Section \ref{sec_proofmain}.
In Section \ref{sec_infty} we state and prove the mentioned characterization of PD functions on $\Omega_\infty\times\Omega_\infty$. Finally, Section \ref{sec_S1} is devoted to showing how one can deduce, from Theorem \ref{th_main_11}, the characterization of SPD functions on $S^1\times S^1$ proved in \cite{P-jean-menS1xS1}.
\subsection{Literature}\label{sec_liter} Since the first results on
Positive Definite functions on real spheres, obtained by Schoenberg in his seminal paper (\cite{scho-42}), such functions were found to be relevant and have been studied in several distinct areas. In fact, they are both used by researchers directly interested in applied sciences, such as geostatistics, numerical analysis, approximation theory (cf. \cite{cheney-approx-pd, CheneyLight-book, gneiting-2013, porcu-bev-gent}), and by theoretical researchers aiming at further generalizations that, along with their theoretical importance, could become useful in other practical problems.
One important motivation for characterizing Strictly Positive Definite functions comes from certain interpolation problems, where the interpolating function is generated by a Positive Definite kernel. Indeed, the uniqueness of the solution of the interpolation problem is guaranteed only if the generating kernel is also Strictly Positive Definite (cf. \cite{light-cheney,cheney-xu}): consider, for instance, the interpolation function
$$
F(x) = \sum_{j=1}^Lc_jK(x,x_j), \quad x\in \varOmega,
$$
where $X=\{x_1,\ldots,x_L\} \subset \varOmega$ is given and $K$ is a known Strictly Positive Definite kernel in $\varOmega$; then the matrix of the system obtained from the interpolation conditions $F(x_i)=\lambda_i$, $i=1,\ldots,L$, is the matrix $[K(x_i,x_j)]$, whose determinant is positive, thus giving a unique solution for the system. In particular, the case where $\varOmega$ is a real sphere is very important in applications where one needs to ensure uniqueness for interpolation problems with data given on the Earth's surface (which can be identified with the real sphere $S^2$).
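The linear system behind this uniqueness argument can be sketched numerically as follows (an informal illustration with an assumed Gaussian kernel on ${\mathbb{R}}$, not a kernel from this paper):

```python
import numpy as np

# Sketch of the interpolation scheme: with an SPD kernel the matrix
# [K(x_i, x_j)] is positive definite, hence invertible, so the coefficients
# c_j in F(x) = sum_j c_j K(x, x_j) are uniquely determined by F(x_i) = lambda_i.
K = lambda x, y: np.exp(-(x - y) ** 2)          # assumed SPD kernel on R
X = np.array([0.0, 0.5, 1.3, 2.0])              # interpolation nodes x_i
lam = np.array([1.0, -2.0, 0.5, 3.0])           # prescribed values lambda_i

A = np.array([[K(xi, xj) for xj in X] for xi in X])
c = np.linalg.solve(A, lam)                     # unique solution since A is PD

F = lambda x: sum(cj * K(x, xj) for cj, xj in zip(c, X))
assert all(abs(F(xi) - li) < 1e-8 for xi, li in zip(X, lam))
```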
Also, the case where $\varOmega$ is the product of a sphere with some other set turns out to be of particular interest for its application to geostatistical problems in space and time, whose natural domain is $S^2\times {\mathbb{R}}$ (see \cite{porcu-bev-gent} and references therein).
Immediate applications in the case of complex spheres are less obvious: we refer to \cite{P-massa-porcu-montee}, where parametric families of Positive Definite functions on complex spheres are provided. It is also worth noting that the Zernike polynomials are used in applications such as optics and optical engineering (cf. \cite{ramos-et-al-zernike-opt,torre} and references therein).
Motivated by these and other applications, several papers appeared dealing with the theoretical problem of characterizing Positive Definiteness and Strict Positive Definiteness:
along with those already mentioned in the introduction, we cite \cite{musin-multi-pd}, where a characterization of real-valued multivariate Positive Definite functions on $S^q$ is obtained, and \cite{ yaglom,hannan,men-rafaela}, where matrix-valued Positive Definite functions are investigated.
In \cite{porcu-berg}, the characterization in \cite{scho-42} is extended to the case of Positive Definite functions on the cartesian product of $S^q$ times a locally compact group $G$, which includes the mentioned case $S^q\times {\mathbb{R}}$ and also generalizes the result obtained in \cite{P-jean-men-pdSMxSm} about Positive Definite functions on products of real spheres. Also, the Positive Definite functions on Gelfand pairs and on products of them were characterized in \cite{P-berg-porcu_gelfand}, while those on the product of a locally compact group with $\Omega_\infty$ in \cite{P-berg-porcu-Omega-inf}.
Concerning the characterization of Strictly Positive Definite functions, we cite also the cases of compact two-point homogeneous spaces and products of them (\cite{barbosa-men, men-victor-prod-esf-esp_homg}) and the case of a torus (\cite{men-gue-toro}).
\section{Notation and known results}\label{sec_teoria}
We first give a brief introduction on the {disc polynomials} that appear in the Equations \pref{eq-pd-esfq} and \pref{eq-pd-prod-esf-compl}:
for $2\leq q\in{\mathbb{N}}$, the function $R_{m,n}^{q-2}$, defined in the disc $\mathbb{D}=\pg{\xi\in{\mathbb{C}}:|\xi|\leq 1}$, is called {\em disc polynomial} (or {\em generalized Zernike polynomial}) of degree $m$ in $\xi$ and $n$ in $\overline\xi$ associated to ${q-2}$, and can be written as (see \cite{koor-II}) \begin{equation}\label{eq-def-pol-disc}
R_{m,n}^{{q-2}}(\xi)=r^{|m-n|}\,e^{i(m-n)\phi}\,R_{\min\{m,n\}}^{({q-2},\,|m-n|)}(2r^2 -1), \quad \xi=re^{i\phi}\in\mathbb{D},\ m,n\in{\mathbb{Z}}_+,\end{equation} where $R_{k}^{(\alpha,\beta)}$ is the usual Jacobi polynomial of degree $k$ associated to the real numbers $\alpha,\beta>-1$ and normalized by $R_{k}^{(\alpha,\beta)}(1)=1$ (see \cite[page 58]{szego}).
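As an informal numerical companion to \pref{eq-def-pol-disc} (a sketch assuming SciPy's Jacobi polynomial routines, not code from this paper), the disc polynomial can be evaluated directly from the normalized Jacobi polynomial, using $P_k^{(\alpha,\beta)}(1)=\binom{k+\alpha}{k}$:

```python
import numpy as np
from scipy.special import eval_jacobi, binom

# Sketch of the definition above: disc polynomial R_{m,n}^{q-2} via the
# Jacobi polynomial normalized so that R_k^{(a,b)}(1) = 1.
def disc_poly(m, n, q, xi):
    a, b, k = q - 2, abs(m - n), min(m, n)
    r, phi = abs(xi), np.angle(xi)
    jac = eval_jacobi(k, a, b, 2 * r**2 - 1) / binom(k + a, k)
    return r**b * np.exp(1j * (m - n) * phi) * jac

# Sanity checks: R_{m,n}^{q-2}(1) = 1 and |R_{m,n}^{q-2}| <= 1 on the disc.
assert abs(disc_poly(3, 1, 4, 1.0 + 0j) - 1.0) < 1e-12
assert abs(disc_poly(3, 1, 4, 0.4 * np.exp(0.7j))) <= 1.0 + 1e-12
```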
For future use we also define \begin{eqnarray}\label{eq_Rm1inf} R^{\infty}_{m,n}(\xi)=R^{-1}_{m,n}(\xi)&:=&{\xi\vphantom{\overline\xi}}^{m}\overline\xi^{n},\qquad \xi\in\mathbb{D}\,. \end{eqnarray}
It is well known (see \cite{koor-II,koor-london})
that the disc polynomials, as well as those defined in \pref{eq_Rm1inf}, satisfy, for $q\in{\mathbb{N}}\cup\{\infty\}$, $\xi\in\mathbb{D}$, and $m,n\in{\mathbb{Z}}_+$, \begin{eqnarray} \label{eq_modulo-Rmn} &R_{m,n}^{q-2}(1) = 1, \quad
|R^{q-2}_{m,n}(\xi)| \leq 1, &\quad \\\label{eq_propridd_Rmn} &R_{m,n}^{q-2} (e^{i\phi}\xi) = e^{i(m-n)\phi}R_{m,n}^{q-2} (\xi),& \quad \phi\in{\mathbb{R}}, \\\label{eq_proprconj_Rmn} &R^{q-2}_{m,n}\pt{\,\overline\xi\,}=\overline{ R^{q-2}_{m,n}(\xi)}.& \end{eqnarray}
Observe that, by \pref{eq_modulo-Rmn}, the series in \pref{eq-pd-esfq} and \pref{eq-pd-prod-esf-compl} converge uniformly in their domain. Moreover, the characterization in \pref{eq-pd-prod-esf-compl} implies that the functions $(\xi,\eta)\mapsto R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta)$ are PD on $\Omega_{2q}\times\Omega_{2p}$ for all $m,n,k,l\in{\mathbb{Z}}_+$ (and, by \pref{eq-pd-esfq}, the functions $\xi\mapsto R_{m,n}^{q-2}(\xi)$ are PD on $\Omega_{2q}$).
Another important property is contained in the following lemma.
\begin{lemma}\label{lm_Rto0}
If $q\in{\mathbb{N}}\cup\pg{\infty}$ and $\xi\in\mathbb{D}'=\{\xi\in{\mathbb{C}}:|\xi|< 1\}$, then
\begin{equation}\label{eq_Rto0}
\lim_{\stackrel{m+n\to\infty}{m\neq n}}R_{m,n}^{q-2}(\xi)=0\,.
\end{equation}
If $q\in{\mathbb{N}}\cup\pg{\infty}$ and $\xi=e^{i\phi}\in\partial\mathbb{D}$ then
\begin{equation}\label{eq_Reitet}
R_{m,n}^{q-2}(e^{i\phi})=e^{i(m-n)\phi}\,.
\end{equation}
\end{lemma}
For the proof of \pref{eq_Rto0} when $q\geq2$ see \cite{meneg-jean-traira}.
It is worth noting that the limit is true even without the condition ${m\neq n}$, except for the special case $q=2$ and $\xi=0$.
On the other hand, \pref{eq_Reitet} follows from (\ref{eq_modulo-Rmn}-\ref{eq_propridd_Rmn}).\\
\subsection{Positive Definiteness on complex spheres}\label{sec_PD_on_sing} As we anticipated in the introduction, it is known by \cite{P-valdir-pd-esfcompl} that a continuous function $f:\mathbb{D}\to{\mathbb{C}}$ is PD on $\Omega_{2q}$, $2\leq q\in{\mathbb{N}}$, if, and only if, the coefficients $a_{m,n}$ in the series representation \pref{eq-pd-esfq} satisfy $\sum a_{m,n}<\infty$ and $a_{m,n}\geq0$ for all $m,n\in{\mathbb{Z}}_+$.
In the case of the complex sphere $\Omega_2$, when associating a continuous function $f$ to a kernel via the formula $K(z,z'):=f(z\cdot z')$, one has $z\cdot z'\in\partial\mathbb{D}$ for every $z,z'\in\Omega_2$, so it becomes natural to consider functions $f$ defined on $\partial\mathbb{D}$. The PD functions on $\Omega_2$ were also characterized in
\cite{P-valdir-pd-esfcompl}, namely, $f:\partial\mathbb{D}\to {\mathbb{C}}$ is PD on $\Omega_2$ if, and only if, \begin{equation}\label{eq-pd-prod1}\begin{array}{ccc} &\displaystyle f(\xi)=\sum_{m\in{\mathbb{Z}}} a_{m}\xi^m,\quad \xi\in{\partial \mathbb{D}},&\\ &\mbox{where $\sum a_{m}<\infty$ and $a_{m}\geq0$ for all $m\in{\mathbb{Z}}$.}&\end{array} \end{equation} In order to write this formula as \pref{eq-pd-esfq}, and then to be able to use the same expansion for all $q\in{\mathbb{N}}$, we use the polynomials $R_{m,n}^{-1}$ defined in \pref{eq_Rm1inf} and we rearrange the coefficients in \pref{eq-pd-prod1} so that \begin{equation}\label{eq-pd-prod1mn} f(\xi)
=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}R_{m,n}^{-1}(\xi),\quad \xi\in\partial\mathbb{D}, \end{equation} with the additional requirement that $a_{m,n}=0 $ if $mn>0$, that is, we set $$\begin{cases} a_{m,0}:=a_m,&m\geq 0,\\ a_{0,m}:=a_{-m},&m\geq 0. \end{cases}$$ In this way, $f$ is PD on $\Omega_2$ if, and only if, it satisfies the characterization \pref{eq-pd-esfq} with $a_{m,n}=0 $ for $mn>0$ and $\partial\mathbb{D}$ in the place of $\mathbb{D}$.
The complex sphere $\Omega_\infty$ is defined as the unit sphere of the complex Hilbert space $\ell^2({\mathbb{C}})$, that is, the set of sequences with unit norm.
In \cite{chris-ressel-pd}, it was proved that a continuous function $f:{\mathbb{D}}\to {\mathbb{C}}$ is PD on $\Omega_\infty$ if, and only if, it admits the series representation \begin{equation}\label{eq-pd-esfi}\begin{array}{ccc} &\displaystyle f(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}{\xi\vphantom{\overline\xi}}^m\overline{\xi}^n,\quad \xi\in{\mathbb{D}},&\\ &\mbox{where $\sum a_{m,n}<\infty$ and $a_{m,n}\geq0$ for all $m,n\in{\mathbb{Z}}_+$,}&\end{array} \end{equation}
which becomes analogous to the characterization \pref{eq-pd-esfq} if we use the definition of $R_{m,n}^\infty$ in \pref{eq_Rm1inf}. It is also worth noting that $f$ is PD on $\Omega_\infty$ if, and only if, $f$ is PD on $\Omega_{2q}$ for every $q\geq2$.
\subsection{Positive Definiteness on products of spheres} \label{sec_PD_on_prod}
From now on, in order to simplify the exposition, we will use the symbol $ \varXi$ to designate either $\partial \mathbb{D}$ or $\mathbb{D}$, depending if we are considering, respectively, the sphere $\Omega_2$ or a higher dimensional sphere.
When considering products of spheres $\Omega_{2q}\times\Omega_{2p}$, $ q,p\in{\mathbb{N}}\cup\pg\infty$, a continuous function $f: \varXi\times \varXi \to {\mathbb{C}}$ is said to be PD (resp. SPD) on $\Omega_{2q}\times\Omega_{2p}$ if the associated kernel
\begin{equation}\label{eq-Kfromfprod}
K:[\Omega_{2q}\times\Omega_{2p}]\times[\Omega_{2q}\times\Omega_{2p}]\ni (\,(z,w),(z',w')\,)\mapsto f(z\cdot z',w\cdot w')
\end{equation}
is PD (resp. SPD) on $\Omega_{2q}\times\Omega_{2p}$.
In this section we will justify the following claim: \begin{lemma}\label{lm_charDD}
A continuous function $f: \varXi \times \varXi \to {\mathbb{C}}$ is PD on $\Omega_{2q}\times\Omega_{2p}$, $ q,p\in{\mathbb{N}}\cup\pg\infty$, if and only if, it admits an expansion in the form \begin{equation}\label{eq-pd-prod-esf-compl_DD} \begin{array}{ccc} &\displaystyle f(\xi,\eta)=\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta),\quad (\xi,\eta)\in{ \varXi}\times{ \varXi},&\\ &\mbox{where $\sum a_{m,n,k,l}<\infty$ and $a_{m,n,k,l}\geq0$ for all $m,n,k,l\in{\mathbb{Z}}_+$,}&\end{array} \end{equation}
adding the requirement that $a_{m,n,k,l}=0$ if $q=1$ and $mn>0$ (resp. $p=1$ and $kl>0$). \end{lemma} Lemma \ref{lm_charDD} is a generalization of the characterization \pref{eq-pd-prod-esf-compl} to include the cases when $q,p$ can take the values $1$ or $\infty$, replacing $\mathbb{D}$ with $ \varXi$ and redefining the coefficients in the series, where $p$ or $q$ is $1$, as we did in Equation \pref{eq-pd-prod1mn}.
In order to justify the claim, we will use results from \cite{P-berg-porcu_gelfand} and \cite{P-berg-porcu-Omega-inf}, which are stated in a more general setting.
Let $U(p)$ be the locally compact group of the unitary $p\times p$ complex matrices. A continuous function $\widetilde \Phi:U(p)\to {\mathbb{C}}$ is called Positive Definite on $U(p)$
if the kernel $(A,B)\mapsto\widetilde\Phi(B^{-1}A)$ is
Positive Definite on $U(p)$ {(see \cite[page 87]{Berg})}.
The following remark will be useful to translate from this setting to the case of complex spheres in which we are interested
(see also \cite[Section 6]{P-berg-porcu_gelfand}). \begin{remark}\label{rem_U_Om} Let $\Phi: \varXi\to{\mathbb{C}}$ and $\widetilde \Phi:U(p)\to{\mathbb{C}}$ be related by $\widetilde\Phi(A)=\Phi(Ae_p\cdot e_p)$, where $e_p= (1,0,\ldots,0)\in \Omega_{2p}$.
Then $\widetilde\Phi(A)$ depends only on the upper-left element $[A]_{1,1}$ and it can be seen by the definition of Positive Definiteness that $\widetilde\Phi$ is PD on $U(p)$ if, and only if, $\Phi$ is PD on $\Omega_{2p}$.
Moreover, $\widetilde\Phi$ is continuous if, and only if, $\Phi$ is, since $M:U(p)\to \varXi:A\mapsto [A]_{1,1}$ is continuous and admits a continuous right inverse $$M^-: \varXi\to U(p):\xi\mapsto M^-(\xi)\ \text{ such that }\ [M^-(\xi)]_{1,1}=\xi\,.$$ \end{remark}
Now Lemma \ref{lm_charDD} is obtained as follows: \begin{enumerate} \item When $ q,p\in{\mathbb{N}},\;q,p\geq2$, the lemma is exactly the characterization \pref{eq-pd-prod-esf-compl}. \item When $q=1$ and $p\in{\mathbb{N}}$ (or vice-versa) we can use Corollary 3.5 in \cite{P-berg-porcu_gelfand}, observing that we can identify functions on $\Omega_2$ with periodic functions on ${\mathbb{R}}$, and we can take the locally compact group $L=U(p)$, obtaining a characterization for PD functions on $\Omega_2\times U(p)$. Then we can translate the characterization from $U(p)$ to $\Omega_{2p}$, using Remark \ref{rem_U_Om}.
\item When $q=\infty$ and $p\in{\mathbb{N}}$ (or vice-versa) we can use Theorem 1.3 in \cite{P-berg-porcu-Omega-inf}, taking the locally compact group $L=U(p)$ and proceeding as above.
\item When $q=p=\infty$ the claim is a consequence of Theorem \ref{th_PDinfty} in Section \ref{sec_infty}. \end{enumerate}
\section{Proof of the main results}\label{sec_proofmain} In the following we will need to consider matrices whose elements are described by many indexes: for this we will write
$$ \pq{b_{i,j,k,l,...}}_{i=1,..,I,\; j=1,..,J,\; ...}^{k=1,..,K,\;l=1,..,L,\; ...}\,, $$ where the indexes in the lower line are intended to be row indexes and those in the upper line are column indexes. Also, we will specify the indexes alone when their ranges are clear.
Let $q,p\in {\mathbb{N}}\cup\pg{\infty}$. From \pref{eq-quad-form-geral} and \pref{eq-Kfromfprod}, the definition of Positive Definiteness on $\Omega_{2q}\times\Omega_{2p}$, for a
continuous function $f: \varXi\times \varXi\to {\mathbb{C}}$, takes the form \begin{equation}\label{eq-quad-form} \sum_{\mu,\nu=1}^Lc_\mu\overline{c_\nu}f(z_\mu\cdot z_\nu, w_\mu\cdot w_\nu) \geq0 \end{equation} for all $L\geq1$, $(c_1,c_2,\ldots,c_L)\in{\mathbb{C}}^L$ and $X=\{(z_1,w_1),(z_2,w_2),\ldots,(z_L,w_L)\}\subset\Omega_{2q}\times\Omega_{2p}$.
As a consequence, if we define the matrix $A_X$ associated to the function $f$ and to the set $X$ by \begin{equation}\label{eq-def-AX} A_X:= [f(z_\mu\cdot z_\nu, w_\mu\cdot w_\nu)]^{\mu=1,\ldots, L}_{\nu=1,\ldots, L}\,, \end{equation} then: \begin{itemize} \item $f$ is PD if, and only if, for every choice of $L$, $X$, and $c^t = (c_1,c_2,\ldots,c_L)$, $$ \overline c^t A_X c \geq 0, $$ that is, $A_X$ is a Hermitian and positive semidefinite matrix (see \cite[page 430]{horn-joh-matrix}); \item $f$ is also SPD if, and only if, for every choice of $L$ and $X$, $$ \overline c^t A_X c = 0 \Longleftrightarrow c=0, $$
that is, $A_X$ is a positive definite matrix. \end{itemize}
Let now $f$ be a continuous function, PD on $\Omega_{2q}\times\Omega_{2p}$, which we can write uniquely as in Lemma \ref{lm_charDD}. If we define the set \begin{equation}\label{eq_defJ} J=\pg{(m,n,k,l)\in{\mathbb{Z}}_+^4:\ a_{m,n,k,l}>0}\,, \end{equation} then, for a finite set $X=\{(z_1,w_1),(z_2,w_2),\ldots,(z_L,w_L)\}\subseteq\Omega_{2q}\times\Omega_{2p}$, we can write \begin{equation}\label{eq_AxsumBx} A_X=\sum_{(m,n,k,l)\in J} a_{m,n,k,l} B_X^{m,n,k,l} \end{equation} where \begin{equation}\label{eq_defBX} B_X^{m,n,k,l}:= [R^{q-2}_{m,n}(z_\mu\cdot z_\nu)\,R^{p-2}_{k,l}( w_\mu\cdot w_\nu)]_{\nu=1,\ldots, L}^{\mu=1,\ldots, L} \end{equation} is the positive semidefinite matrix associated to $X$ and to the function $R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta)$.
With these definitions, the following lemma holds. \begin{lemma}\label{lm_Ax_sistBx} The matrix $A_X$ is a positive definite matrix if, and only if, the equivalence \begin{equation}\label{eq_sist_iff_B}
\overline c^t B_X^{m,n,k,l} c = 0\ \ \forall\ (m,n,k,l)\in J\quad \Longleftrightarrow \quad c=0 \end{equation} holds true. \end{lemma} Lemma \ref{lm_Ax_sistBx} is a consequence of the following one. \begin{lemma}
Let $A= \sum_jA_j$, where the $A_j$ are positive semidefinite matrices. Then $A$ is positive semidefinite, and $A$ is positive definite if, and only if,
$$\overline c^tA_jc= 0 \ \ \forall j \quad \Longleftrightarrow \quad c=0\,. $$ \end{lemma} \begin{proof} First, $\overline c^tAc=\sum_j\overline c^tA_jc\geq 0$, so $A$ is positive semidefinite too.
\\If $A$ is positive definite and $\overline c^tA_jc=0$ for every $j$, then of course $\overline c^tAc=0$ and so $c=0$.
\\Finally, if $\overline c^tAc=0$ then, being a sum of nonnegative terms, $\overline c^tA_jc=0$ for all $j$; if this system implies $c=0$, then $A$ is positive definite. \end{proof}
In the following proposition we prove one of the two implications of Theorem \ref{th_main}. \begin{prop}\label{th_spd->progr} Let $q,p\in{\mathbb{N}}\cup\pg{\infty}$, $f$ be a continuous function which is PD on $\Omega_{2q}\times\Omega_{2p}$ and consider \begin{equation}\label{eq_defJ'}
J'=\pg{(m-n,k-l)\in{\mathbb{Z}}^2:\ (m,n,k,l)\in J}\,. \end{equation}
If $f$ is SPD on $\Omega_{2q}\times\Omega_{2p}$ then \begin{equation}\label{eq_inters} J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset \ \text{ for every $N,M,x,y\in{\mathbb{N}}\,.$} \end{equation} \end{prop} \begin{proof} Assume $J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)= \emptyset$ for some $N,M,x,y\in{\mathbb{N}}$. Without loss of generality we may assume $M,N\geq2$. \\ Fix a point $(z,w)\in \Omega_{2q}\times\Omega_{2p}$ and take the set of points $$X=\pg{(e^{i2\pi \tau/N}z,e^{i2\pi \sigma/M}w)\in\Omega_{2q}\times\Omega_{2p}:\ \tau=1,..,N,\,\sigma=1,..,M}\,;$$
then, using the Equations (\ref{eq_modulo-Rmn}-\ref{eq_propridd_Rmn}), the matrix in \pref{eq_defBX} reads as $$B_X^{m,n,k,l}=\pq{e^{i2\pi (m-n)(\tau-\lambda)/N}e^{i2\pi (k-l)(\sigma-\zeta) /M}}^{\tau=1,..,N,\,\,\sigma=1,..,M}_{\lambda=1,..,N,\,\,\zeta=1,..,M}. $$ Observe that this matrix factors as the product $B_X^{m,n,k,l}=\overline b^tb$, where $b$ is the row vector $$b=\pq{e^{i2\pi (m-n)\tau/N}e^{i2\pi (k-l)\sigma /M}}^{\tau,\sigma}$$ (we omit the dependence on $X$ and $\pt{m,n,k,l}$ in the notation for $b$).
Then each equation of the system in \pref{eq_sist_iff_B} reads as $\overline c^tB_X^{m,n,k,l}c=\overline{ c}^t\overline b^tbc=0$ and is equivalent to $bc=0$.
At this point we take $c=\pq{e^{-i2\pi \tau x/N}e^{-i2\pi \sigma y /M}}_{\tau,\sigma}$, so that \begin{equation}\label{eq_bcsum} bc=\sum_{\tau,\sigma} e^{i2\pi (m-n-x)\tau/N}e^{i2\pi (k-l-y)\sigma /M}=\sum_{\tau} e^{i2\pi (m-n-x)\tau/N}\sum_{\sigma}e^{i2\pi (k-l-y)\sigma /M}\,. \end{equation} By our assumption, for every $\pt{m,n,k,l}\in J$, either $m-n-x$ is not a multiple of $N$ or $k-l-y$ is not a multiple of $M$. This implies that one of the two sums in \pref{eq_bcsum} is zero and then $bc=0$. \\Then $c$ is a nontrivial solution of the system in \pref{eq_sist_iff_B}. We have thus proved that $J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)= \emptyset$ implies that $f$ is not SPD. \end{proof}
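The vanishing of $bc$ in \pref{eq_bcsum} rests on the elementary fact that a full sum of $N$-th roots of unity is zero unless the exponent step is a multiple of $N$. An informal numerical check of this fact (not part of the proof):

```python
import numpy as np

# The geometric sum sum_{tau=1}^{N} exp(i 2 pi j tau / N) equals N when
# N divides j and 0 otherwise; this is what makes bc = 0 whenever
# m - n - x is not a multiple of N (or k - l - y not a multiple of M).
def roots_sum(j, N):
    return sum(np.exp(1j * 2 * np.pi * j * t / N) for t in range(1, N + 1))

assert abs(roots_sum(6, 3) - 3) < 1e-12   # 3 divides 6: the sum is N = 3
assert abs(roots_sum(5, 3)) < 1e-12       # 3 does not divide 5: the sum is 0
```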
The rest of this section is dedicated to proving the following proposition, which contains the remaining implication of Theorem \ref{th_main}.
\begin{prop}\label{th_progr->spd}
Let $q,p$, $f$ and $J'$ be as in Proposition \tref{th_spd->progr}. If condition \pref{eq_inters} holds true, then $f$ is SPD on $\Omega_{2q}\times\Omega_{2p}$.
\end{prop}
First of all, we prove the following consequence of condition \pref{eq_inters}. \begin{lemma}\label{lm_int_inf_2}
If $A\subset{\mathbb{Z}}^2$ satisfies \begin{equation}\label{eq_inters_lm}
I_{M,N,x,y}:= A\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset \ \text{ for every $N,M,x,y\in{\mathbb{N}}\,,$}
\end{equation} then, for every $N,M,x,y\in{\mathbb{N}}$, the set $$\pg{\min\pg{|\alpha|,|\beta|}: (\alpha,\beta)\in I_{M,N,x,y}}$$ is unbounded and $I_{M,N,x,y}$ is infinite. \end{lemma}
\begin{proof}
Suppose $\pg{\min\pg{|\alpha|,|\beta|}: (\alpha,\beta)\in I_{M,N,x,y}}\subseteq [0,C]$.\\
Let $(\widehat x,\widehat y) \in(N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)$ with $\widehat x,\widehat y>C$ and let $D$ be a common multiple of $M$ and $N$ such that $\widehat x-D,\widehat y-D<-C$. Then every element of $D{\mathbb{Z}}+\widehat x$ and of $D{\mathbb{Z}}+\widehat y$ has absolute value greater than $C$, so $(D{\mathbb{Z}}+\widehat x)\times (D{\mathbb{Z}}+\widehat y)\cap I_{M,N,x,y}=\emptyset$, while $$(D{\mathbb{Z}}+\widehat x) \times (D{\mathbb{Z}}+\widehat y)\subseteq(N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y) .$$ As a consequence $(D{\mathbb{Z}}+\widehat x)\times (D{\mathbb{Z}}+\widehat y)\cap A=\emptyset$, which contradicts \pref{eq_inters_lm}.
\end{proof}
The next step will be to prove that we can verify Strict Positive Definiteness only on certain special sets $X\subseteq\Omega_{2q}\times\Omega_{2p}$ (see Lemma \ref{lm_SDP_Xenh}).
In view of Lemma \ref{lm_Rto0}, when calculating $R_{m,n}^{q-2}(z_\mu\cdot z_\nu)$ and considering the limit for $m+n\to \infty$, the obtained behavior is quite different if $|z_\mu\cdot z_\nu|<1$ or $|z_\mu\cdot z_\nu|=1$. In particular, we will have to treat carefully the cases when $|z_\mu\cdot z_\nu|=1$. This happens either if $z_\mu=z_\nu$ (observe that the points in the set $X$ must be distinct but they can have one of the two components in common), or if $z_\mu= e^{i\theta}z_\nu$ with $\theta\in(0,2\pi)$. In this last case we say that the two points $z_\mu,z_\nu\in \Omega_{2q}$ are {\em antipodal}. Our strategy to deal with antipodal points is inspired by \cite{meneg-jean-traira}. We will say that a set of (distinct) points $Y=\pg{(z_\mu,w_\mu): \mu=1,\ldots, L}$ in $\Omega_{2q}\times\Omega_{2p}$ is {\em Antipodal Free} if the following property holds: \begin{itemize}
\item[(AF)]\quad if $\mu\neq\nu$ then $|z_\mu\cdot z_\nu|<1$ unless $z_\mu=z_\nu$ and $|w_\mu\cdot w_\nu|<1$ unless $w_\mu=w_\nu$. \end{itemize}
Of course, since the points in $Y$ are distinct, if $z_\mu=z_\nu$ then $|w_\mu\cdot w_\nu|<1$ (resp. if $w_\mu=w_\nu$ then $|z_\mu\cdot z_\nu|<1$).
\begin{remark}\label{rm_antip1}
Since two distinct points in $\Omega_2$ are always antipodal, if, for instance, $q=1$, then, in an antipodal free set $Y$ in $\Omega_2\times\Omega_{2p}$, all the $z_\mu$ are the same and then $|w_\mu\cdot w_\nu|<1$ for $\mu\neq\nu$. When $p=q=1$ then an antipodal free set $Y$ in $\Omega_2\times\Omega_2$ contains a unique point $(z,w)$. \end{remark}
Consider now an antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and two sets of angles $\Theta=\pg{\theta_\tau: \tau=1,\ldots,t}$ and $\Delta=\pg{\delta_\sigma: \sigma=1,\ldots,s}$ in $[0,2\pi)$.
We define the {\em enhanced set associated to} $Y,\,\Theta$ and $\Delta$ as the set
\begin{equation}\label{eq_defX} X=\pg{(e^{i\theta_\tau}z_\mu,e^{i\delta_\sigma}w_\mu):\, \mu=1,\ldots, L,\,\tau=1,\ldots,t,\,\sigma=1,\ldots,s}\,.
\end{equation}
Observe that, by construction, the points that appear in $X$ are all distinct (but now many antipodal pairs appear among them).
The following lemma provides a sort of inverse construction.
\begin{lemma}\label{lm_SsubEnh}
Given a finite set $S\subseteq \Omega_{2q}\times\Omega_{2p}$ one can always obtain an antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and
two sets $\Theta$ and $\Delta$ of angles in $[0,2\pi)$, such that $S$ is contained in the enhanced set $X$ associated to $Y,\,\Theta$ and $\Delta$.
\end{lemma}
\begin{proof}
For a finite set $X_1\subseteq \Omega_{2q}$ one can select a maximal subset $Y_1$ not containing antipodal points and then define the set $\Theta$ containing $0$ and all the distinct $\theta\in (0,2\pi)$ that are needed to produce the remaining points as $e^{i\theta}z_\mu$ with $z_\mu\in Y_1$.
For the set $S\subseteq \Omega_{2q}\times\Omega_{2p}$, this algorithm first produces a maximal subset $Y_1$ not containing antipodal points, along with a corresponding set of angles $\Theta$, from all the first coordinates $z$ in $S$; then a maximal subset $Y_2$ not containing antipodal points, along with a corresponding set of angles $\Delta$, from all the second coordinates $w$ in $S$.
Then $Y:=Y_1\times Y_2$ will be such that $S$ is contained in the enhanced set associated to $Y,\,\Theta$ and $\Delta$.
\end{proof}
The following two lemmas will make clear why it is useful to consider antipodal free sets.
\begin{lemma}\label{lm_PDlimit}
Let $Y=\pg{(z_\mu,w_\mu): \mu=1,\ldots, L}$ in $\Omega_{2q}\times\Omega_{2p}$ be antipodal free. Then the matrix
$$ [R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)]_\nu^\mu $$ is positive definite provided $n\neq m,\,k\neq l$ and $m+n,k+l$ are large enough.
\end{lemma}
\begin{proof}
Indeed, the diagonal elements of the matrix are all equal to $R_{m,n}^{q-2}(1)R_{k,l}^{p-2}(1)=1$; moreover,
condition (AF) implies that if $z_\mu\cdot z_\nu=1$ then $|w_\mu\cdot w_\nu|<1$ and if $w_\mu\cdot w_\nu=1$ then $|z_\mu\cdot z_\nu|<1$.
As a consequence, the non-diagonal elements converge to zero by \pref{eq_Rto0}, when $n\neq m,\,k\neq l$ and $\min\pg{m+n,k+l}\to \infty$. Then the matrix, which is Hermitian with real positive diagonal, becomes strictly diagonally dominant, thus positive definite (\cite[Theorem 6.1.10]{horn-joh-matrix}).
\end{proof}
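The final step of the proof above rests on a standard linear-algebra fact: a Hermitian matrix with real positive diagonal that is strictly diagonally dominant is positive definite. The following numerical sketch (with arbitrary sample entries, not the actual $R_{m,n}^{q-2}$ values) illustrates the mechanism:

```python
import numpy as np

# A Hermitian matrix with unit diagonal whose off-diagonal entries are small
# in modulus (as the products R_{m,n}^{q-2} R_{k,l}^{p-2} become for large
# degrees on an antipodal free set) is strictly diagonally dominant,
# hence positive definite.
rng = np.random.default_rng(0)
L = 5
off = 0.1 * np.exp(2j * np.pi * rng.random((L, L)))  # entries of modulus 0.1
A = off + off.conj().T                # Hermitian off-diagonal part
np.fill_diagonal(A, 1.0)              # unit diagonal, as R(1) R(1) = 1

# Strict diagonal dominance: |a_ii| > sum_{j != i} |a_ij| in every row.
row_off_sums = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
assert np.all(np.abs(np.diag(A)) > row_off_sums)

# Eigenvalues of a Hermitian matrix are real; by Gershgorin they are positive.
eigs = np.linalg.eigvalsh(A)
assert eigs.min() > 0
```

The assertion on the eigenvalues is exactly Gershgorin's circle theorem in action: every eigenvalue lies within distance `row_off_sums[i]` of some diagonal entry.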
\begin{lemma}\label{lm_SDP_Xenh}
Let $q,p\in{\mathbb{N}}\cup\pg{\infty}$ and $f$ be a continuous function which is PD on $\Omega_{2q}\times\Omega_{2p}$. Then the following assertions are equivalent:
\begin{itemize}
\item[(i)] $f$ is SPD on $\Omega_{2q}\times\Omega_{2p}$;
\item[(ii)] the matrix $A_X$ defined in \pref{eq-def-AX} is positive definite for every finite set $X$ being the enhanced set associated to some antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and two sets $\Theta$ and $\Delta$ of angles in $[0,2\pi)$.
\end{itemize}
\end{lemma}
\begin{proof}
First observe that $(i)$ is equivalent to:
\begin{equation*}
\text{(iii) $A_S$ is a positive definite matrix for every finite set $S\subseteq \Omega_{2q}\times\Omega_{2p}$.}
\end{equation*}
The implication $(iii)\Longrightarrow(ii)$ is trivial. In order to prove that $(ii)\Longrightarrow(iii)$ observe that, given $S$, one can obtain $X$ as described in Lemma \ref{lm_SsubEnh}: since $S\subseteq X$, then $A_S$ is a principal submatrix of the positive definite matrix $A_X$ and then it is a positive definite matrix itself.
\end{proof}
At this point we can prove Proposition \ref{th_progr->spd}.
\begin{proof}[Proof of Proposition \ref{th_progr->spd}] Let $X$ (finite) be the enhanced set associated to an antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and two sets $\Theta$ and $\Delta$ of angles in $[0,2\pi)$ and
consider the system
\begin{equation}\label{eq_sistB}
\overline c^tB_X^{m,n,k,l}c=0\ \text{for every $(m,n,k,l)\in J$}.
\end{equation}
In view of the Lemmas \ref{lm_Ax_sistBx} and \ref{lm_SDP_Xenh}, all we have to do is to prove that this system implies $c=0$.
Using the property in \pref{eq_propridd_Rmn}, with the notation introduced in \pref{eq_defX} for the elements of $X$, we have
$$B_X^{m,n,k,l}=\pq{e^{i (m-n)(\theta_\tau-\theta_\lambda)}e^{i(k-l)(\delta_\sigma-\delta_\zeta) }R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)}^{\tau,\sigma,\mu}_{\lambda,\zeta,\nu}\,.$$
It is convenient to write this matrix as a block matrix as follows:
$$ B_X^{m,n,k,l}=[R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)A^{m,n,k,l}]_\nu^\mu$$ where $$A^{m,n,k,l}=[e^{i (m-n)(\theta_\tau-\theta_\lambda)}e^{i(k-l)(\delta_\sigma-\delta_\zeta) }]^{\tau,\sigma}_{\lambda,\zeta}\,.$$
The vector $c$ will be correspondingly split as $$c=[c_\mu]_\mu\quad\text{ where }\quad c_\mu=[c_\mu^{\tau\sigma}]_{\tau,\sigma}\,.$$
We have then $$\overline c^tB_X^{m,n,k,l}c=\sum_{\mu,\nu} R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)\overline{c_\nu}^tA^{m,n,k,l}c_\mu.$$
As in the proof of Proposition \ref{th_spd->progr}, the matrix $A^{m,n,k,l}$ factors as $A^{m,n,k,l}=\overline{b}^tb$ where $$b=[{e^{i (m-n)\theta_\tau}e^{i (k-l)\delta_\sigma}}]^{\tau\sigma}\,,$$
then we may write
\begin{equation}\label{eq_cBc}
\overline c^tB_X^{m,n,k,l}c=\sum_{\mu,\nu} \overline{bc_\nu}^tbc_\mu R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)\,.
\end{equation}
Observe that since $Y$ is antipodal free we will be able to use Lemma \ref{lm_PDlimit} in order to discuss this quadratic form.
We suppose now for the sake of contradiction that $c\neq0$. Without loss of generality we assume that $c_1^{1,1}\neq0$ and we first aim to prove that
\begin{equation}\label{eq_bcneq0}
bc_1=\sum_{\tau,\sigma}{e^{i (m-n)\theta_\tau}e^{i (k-l)\delta_\sigma}}c_1^{\tau,\sigma}\neq0
\end{equation}
for certain $(m,n,k,l)\in J$.
Actually, by Theorem 2.4 and Lemmas 2.5 and 2.6 in \cite{P-jean-menS1xS1}, which use the theory of linear recurrence sequences, and in particular a generalization of the Skolem-Mahler-Lech Theorem due to Laurent \cite[Theorem 1]{Laurent89} (see also \cite{pinkus-spd-herm}),
we know that given the angles $\theta_\tau, \delta_\sigma$ and the vector $c_1$, with $c_1^{1,1}\neq0$, there exist $N,M,x,y\in{\mathbb{N}}$ such that the function defined in ${\mathbb{Z}}^2$
$$ L( \alpha,\beta):=\sum_{\tau,\sigma}{e^{i\, \alpha\,\theta_\tau}e^{i\, \beta\;\delta_\sigma}}c_1^{\tau,\sigma}$$
is not zero
for all $( \alpha,\beta)$ in the set $P:=(N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)$.
By Lemma \ref{lm_int_inf_2} applied to $J'$, there exists a sequence $S:=\pg{(\alpha_i,\beta_i)}\subseteq P\cap J'$ such that $|\alpha_i|,|\beta_i|\to\infty$.
As a consequence, \pref{eq_bcneq0} holds true for every $(m,n,k,l)\in J$ such that $(m-n,k-l)\in S$.
\\ Now we can select $(m-n,k-l)\in S$ with $|m-n|,|k-l|$ as large as we want (which implies that $m\neq n$, $k\neq l$ and that $m+n$ and $k+l$ are also large). For the corresponding $(m,n,k,l)\in J$, the left-hand side of \pref{eq_sistB} cannot vanish, in view of Equation \pref{eq_bcneq0} and Lemma \ref{lm_PDlimit}.
We have thus proved that a nontrivial solution of system \pref{eq_sistB} cannot exist. \end{proof} \begin{remark}
Observe that in the case $p=q=1$, in view of Remark \ref{rm_antip1}, the sum in Equation \pref{eq_cBc} has only one term which is $|bc_1|^2$, then the contradiction follows readily after proving \pref{eq_bcneq0}. \end{remark}
At this point, Theorem \ref{th_main} is a consequence of Propositions \ref{th_spd->progr} and \ref{th_progr->spd}. Theorems \ref{th_main_1p} and \ref{th_main_11} follow from the same two propositions after translating back from the expansion in Lemma \ref{lm_charDD} to the usual ones in Equations \pref{eq-pd-prod-esf-complO2p} and \pref{eq-pd-prod-esf-complO2} (see Sections \ref{sec_PD_on_sing} and \ref{sec_PD_on_prod}).
\section{Characterization of Positive Definiteness on $\Omega_\infty\times\Omega_\infty$}\label{sec_infty}
In this section we aim to prove the following:
\begin{theorem}\label{th_PDinfty} Let $f:\mathbb{D}\times \mathbb{D}\to{\mathbb{C}}$ be a continuous function. Then $f$ is PD on $\Omega_\infty\times\Omega_\infty$ if, and only if,
\begin{equation}\label{eq:expandcpinfty}
\begin{array}{c}
\begin{array}{rcll} f(\xi,\eta)&=&\displaystyle\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}R_{m,n}^\infty(\xi)R_{k,l}^\infty(\eta)&\\
&=& \displaystyle\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l} {\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l,\quad& (\xi,\eta)\in{\mathbb{D}}\times{\mathbb{D}},\end{array} \\\mbox{where $\sum a_{m,n,k,l}<\infty$ and $a_{m,n,k,l}\geq0$ for all $m,n,k,l\in{\mathbb{Z}}_+$.} \end{array}
\end{equation}
Moreover, the series in Equation \pref{eq:expandcpinfty} is uniformly convergent on ${\mathbb{D}}\times {\mathbb{D}}$.
\end{theorem}
In the proof we will use ideas from \cite{P-berg-porcu-Omega-inf}
and we will need the following lemma, whose proof is analogous to that of Lemma 4.1 in \cite{P-berg-porcu-Omega-inf} and will be omitted.
\begin{lemma}\label{thm:technical}
Let $q,p\in{\mathbb{N}}\cup\pg\infty,\;q,p\geq2$ and $f:\mathbb{D}\times \mathbb{D}\to {\mathbb{C}}$ be a continuous and PD function on $\Omega_{2q}\times\Omega_{2p}$. Given points $w_1,\ldots,w_L\in\Omega_{2p}$ and numbers $c_1,\ldots,c_L\in {\mathbb{C}}$, the function $F:\mathbb{D}\to {\mathbb{C}}$ defined by
\begin{equation}\label{eq:sum1}
F(\xi)=\sum_{j,k=1}^L f(\xi,w_j\cdot w_k)c_j\overline{c_k}
\end{equation} is continuous and PD on $\Omega_{2q}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{th_PDinfty}]
First observe that $f$ is PD on $\Omega_\infty\times\Omega_\infty$ if, and only if, $f$ is PD on $\Omega_{2q}\times\Omega_{2p}$ for every $q,p\geq2$.
It is also easy to see that the function $g(\xi)=\xi$, $\xi\in{\mathbb{D}}$,
is PD on $\Omega_{2q}$ for every $q\geq2$, as well as its conjugate.
By the Schur Product Theorem for Positive Definite kernels, cf. \cite[Theorem 3.1.12]{Berg}, one obtains that also $h(\xi)={\xi\vphantom{\overline\xi}}^{m}\overline{\xi}^{n}$ is PD on $\Omega_{2q}$ for $q\geq2$ and $m,n\in{\mathbb{Z}}_+$, and that ${\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l$ is PD on $\Omega_{2q}\times\Omega_{2p}$ for $q,p\ge 2$ and $m,n,k,l\in{\mathbb{Z}}_+$.
As a consequence, any function of the form \pref{eq:expandcpinfty} is continuous and PD on $\Omega_{2q}\times\Omega_{2p}$ for every $q,p\geq2$, and then on $\Omega_{\infty}\times\Omega_\infty$ too.
\par
Now let the continuous function $f:{\mathbb{D}}\times {\mathbb{D}}\to{\mathbb{C}}$ be PD on $\Omega_\infty\times\Omega_\infty$.
For $\eta\in\mathbb{D},\,c\in{\mathbb{C}}$, consider the special case of \pref{eq:sum1} with $L=2,q=\infty,p=2$, $w_1=(\eta,w), w_2=(1,0)\in\Omega_4$, $c_1=1, c_2=c$, that is,
\begin{equation}\label{eq:sum2}
F_{\eta,c}(\xi)=f(\xi,1)(1+|c|^2)+f(\xi,\eta)\overline{c}+f(\xi,\overline{\eta})c.
\end{equation} By Lemma~\ref{thm:technical},
$F_{\eta,c}$ is a continuous PD function on $\Omega_\infty$. Then, using a theorem due to Christensen and Ressel, see \cite{chris-ressel-pd}, it can be written as
$$
F_{\eta,c}(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}\pt{\eta,c} {\xi\vphantom{\overline\xi}}^m\overline{\xi}^n,
$$
where $a_{m,n}\pt{\eta,c}\ge 0$ are uniquely determined and satisfy $\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}\pt{\eta,c}<\infty.$
By using $c=1,-1,i$ and proceeding as in the end of the proof of \cite[Theorem 1.2]{P-berg-porcu-Omega-inf}, one obtains that
\begin{equation}\label{eq-cara-f}
f(\xi,\eta)=\frac{1-i}4F_{\eta,1}(\xi)-\frac{1+i}4F_{\eta,-1}(\xi)+\frac{i}2F_{\eta,i}(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} \varphi_{m,n}(\eta){\xi\vphantom{\overline\xi}}^m\overline{\xi}^n,
\end{equation}
where
$$
\varphi_{m,n}(\eta):=\frac{1-i}4a_{m,n}(\eta,1)-\frac{1+i}4a_{m,n}(\eta,-1)+\frac{i}2a_{m,n}(\eta,i), \quad \eta\in\mathbb{D}\,,
$$
and then
\begin{equation}\label{eq-serie-phi-finita}
\m{\sum_{m,n\in{\mathbb{Z}}_+}\varphi_{m,n}(\eta)}<\infty,\qquad \eta\in{\mathbb{D}}.
\end{equation}
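The linear combination in \pref{eq-cara-f} is a purely algebraic identity: for any values $f(\xi,1)$, $f(\xi,\eta)$, $f(\xi,\overline\eta)$, the combination of the three functions $F_{\eta,c}$ with $c=1,-1,i$ recovers $f(\xi,\eta)$. A quick check with arbitrary sample values (the three complex numbers below are placeholders, not values of any particular $f$):

```python
# Check (1-i)/4 * F_1 - (1+i)/4 * F_{-1} + (i/2) * F_i = f(xi, eta), where
# F_c = f(xi, 1)(1 + |c|^2) + f(xi, eta) conj(c) + f(xi, conj(eta)) c.
# Sample values standing in for f(xi,1), f(xi,eta), f(xi,conj(eta)):
f1, feta, fetabar = 0.7 + 0.2j, -1.3 + 0.9j, 0.4 - 2.1j

def F(c):
    # The special case (eq:sum2) of the quadratic form in Lemma thm:technical.
    return f1 * (1 + abs(c) ** 2) + feta * c.conjugate() + fetabar * c

combo = (1 - 1j) / 4 * F(1) - (1 + 1j) / 4 * F(-1) + 1j / 2 * F(1j)
assert abs(combo - feta) < 1e-12
```

Expanding by hand, the coefficient of $f(\xi,1)$ is $2\big(\tfrac{1-i}4-\tfrac{1+i}4+\tfrac i2\big)=0$, that of $f(\xi,\overline\eta)$ is $\tfrac{1-i}4+\tfrac{1+i}4-\tfrac12=0$, and that of $f(\xi,\eta)$ is $1$.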
Consider now $p\geq2$ and the function $\widetilde f_p:\mathbb{D}\times U(p):(\xi,A)\mapsto f(\xi,A e_p\cdot e_p)$, where $e_p=(1,0,\ldots,0)\in \Omega_{2p}$. By construction, $\widetilde f_p$ is continuous and PD on $\Omega_{\infty}\times U(p)$. By Theorem 1.3 in \cite{P-berg-porcu-Omega-inf}, we can expand $\widetilde f_p$ as $$\widetilde f_p(\xi, A )=\sum_{m,n\in{\mathbb{Z}}_+} \widetilde\varphi^{(p)}_{m,n}(A)R^{\infty}_{m,n}(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} \widetilde\varphi^{(p)}_{m,n}(A){\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\,,$$ where $\widetilde\varphi^{(p)}_{m,n}$ are continuous PD functions on $U(p)$.
By differentiation one has that $$ \widetilde \varphi^{(p)}_{m,n}(A) =\frac1{m!n!}\frac{\partial^{m+n}\widetilde f_p(0,A)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n}$$ and
\begin{equation}\label{eq_phi=der}
\varphi_{m,n}(\eta) =\frac1{m!n!}\frac{\partial^{m+n}f(0,\eta)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n},
\end{equation} but by construction
$$\widetilde \varphi^{(p)}_{m,n}(A)= \frac1{m!n!}\frac{\partial^{m+n}\widetilde f_p(0,A)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n}=\frac1{m!n!}\frac{\partial^{m+n}f(0,Ae_p\cdot e_p)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n}= \varphi_{m,n}(Ae_p\cdot e_p).
$$
By Remark \ref{rem_U_Om} we deduce that $\varphi_{m,n}$ is continuous and PD on $\Omega_{2p}$, for every $p\geq2$.
As a consequence, $\varphi_{m,n}$ is PD on $\Omega_\infty$ and thus we can again use the theorem by Christensen and Ressel, in order to conclude that for every $m,n$,
$$
\varphi_{m,n}(\eta) = \sum_{k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}\,\eta^k\overline{\eta}^l, \quad \eta\in{\mathbb{D}}\,,
$$ where ${a_{m,n,k,l}}\geq0$, for every $k,l\in{\mathbb{Z}}_+$, and
$ \sum_{k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}<\infty\,. $
Thus,
$$
f(\xi,\eta)=\sum_{m,n\in{\mathbb{Z}}_+}\sum_{k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}\,{\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l\,,
$$
and then $\displaystyle\sum_{m,n,k,l\in{\mathbb{Z}}_+}a_{m,n,k,l}<\infty.$
\end{proof}
\section{A connection with the cases $S^1$ and $S^1\times S^1$} \label{sec_S1}
In this section we aim to show
that one can deduce, from Theorem \ref{th_main_11}, the characterization of Strict Positive Definiteness on $S^1\times S^1$ proved in \cite{P-jean-menS1xS1},
namely, that
a continuous function $f:[-1,1] \times [-1,1] \to {\mathbb{C}}$
which is PD on $S^1\times S^1$, is also SPD on $S^1\times S^1$ if, and only if, considering its expansion as in \pref{eq-pd-esfSd},
the set
$
\{(m,k)\in{\mathbb{Z}}^2: a_{|m|,|k|}>0\}
$
intersects every product of full arithmetic progressions in ${\mathbb{Z}}$, that is,
\begin{equation}\label{eq_inters_S11}
\{(m,k)\in{\mathbb{Z}}^2: a_{|m|,|k|}>0\}\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset\qquad \mbox{for every $N,M,x,y\in{\mathbb{N}}$.}
\end{equation}
Actually, condition \pref{eq_inters_S11} has more similarities with the conditions we obtain here in the Theorems \ref{th_main}, \ref{th_main_1p} and \ref{th_main_11} for the complex spheres, where an intersection with every product of full arithmetic progressions in ${\mathbb{Z}}$ is required, rather than with the known conditions for real spheres in higher dimensions, where only progressions of step 2 are involved (see Equations \pref{eq_inters_Sd} and \pref{eq_inters_SpSq}).
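Condition \pref{eq_inters_S11} is finitely checkable only progression by progression; a small hypothetical helper (the function name and the sample supports below are ours, for illustration) makes the requirement concrete:

```python
# Hypothetical helper: does a finite support set meet the product progression
# (N*Z + x) x (M*Z + y)?  The SPD criterion requires this for EVERY choice of
# N, M, x, y, which a computation can only sample, never exhaust.
def meets_progression(support, N, x, M, y):
    return any((m - x) % N == 0 and (k - y) % M == 0 for m, k in support)

# Support {(m,k) : a_{|m|,|k|} > 0} when all coefficients are positive:
full = {(m, k) for m in range(-20, 21) for k in range(-20, 21)}
assert all(meets_progression(full, N, x, M, y)
           for N in range(1, 5) for M in range(1, 5)
           for x in range(1, 5) for y in range(1, 5))

# A support contained in the even integers misses the progression 2Z + 1,
# so the corresponding function cannot be SPD:
even = {(m, k) for m in range(-20, 21, 2) for k in range(-20, 21, 2)}
assert not meets_progression(even, 2, 1, 2, 1)
```

The second example mirrors the contrast with real spheres mentioned above: on $S^d$, $d\ge 2$, only progressions of step 2 matter, so an even-supported coefficient set can still be strictly positive definite there.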
The polynomials $P_m^0$ in \pref{eq-pd-esfSd} are also known as Tchebichef polynomials of the first kind (see \cite[page 29]{szego}) and can be written as $P_m^{0}(\cos\phi)=\cos(m\phi)$, $\phi\in[0,\pi]$. As a consequence, a way of writing \pref{eq-pd-esfSd} often used in the literature when $p=q=1$ is the following:
\begin{equation}\label{eq_expS1S1}
f(\cos(\phi),\cos(\psi))=\sum_{m,k\in{\mathbb{Z}}_+} a_{m,k}\cos(m\phi)\cos(k\psi),\quad \phi,\psi\in[0,\pi].
\end{equation}
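The identity $P_m^0(\cos\phi)=\cos(m\phi)$ underlying \pref{eq_expS1S1} can be verified numerically, assuming the standard normalization $P_m^0=T_m$, the Chebyshev polynomial of the first kind:

```python
import math

# Chebyshev polynomials of the first kind via the three-term recurrence
# T_0 = 1, T_1 = t, T_{m+1}(t) = 2 t T_m(t) - T_{m-1}(t).
def cheb_T(m, t):
    a, b = 1.0, t
    for _ in range(m):
        a, b = b, 2 * t * b - a
    return a

# T_m(cos(phi)) == cos(m * phi) on a grid of angles and degrees.
for m in range(8):
    for phi in [0.0, 0.3, 1.1, 2.5, math.pi]:
        assert abs(cheb_T(m, math.cos(phi)) - math.cos(m * phi)) < 1e-9
```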
Below, we will show that one can establish a correspondence between PD (and between SPD) functions on $S^1\times S^1$ and a subset of those on $\Omega_2\times\Omega_2$. We will do it first for the case of a single sphere.
\begin{lemma}\label{lm_bij_1} There exists a bijection between PD (resp. SPD) functions on $S^1$ and PD (resp. SPD) functions on $\Omega_2$ which are invariant under conjugation, that is, $f(e^{i\phi})=f(e^{-i\phi}),\ \phi\in[0,2\pi)$. \end{lemma} \begin{proof} Let $f:\partial\mathbb{D}\to {\mathbb{C}}$ be a PD function on $\Omega_2$ satisfying $f(e^{i\phi})=f(e^{-i\phi})$, then it is real valued and it only depends on the real part. \\ Consider the bijection $$A:\Omega_2\to S^1 :e^{i\phi}\mapsto (\cos(\phi),\sin(\phi))$$ and the surjective map
$$C:\partial\mathbb{D}\to[-1,1]:e^{i\phi}\mapsto \cos(\phi)\,,$$
which admits a right inverse $C^-:x\mapsto e^{i\arccos(x)}$. Then $C\circ C^-=id_{[-1,1]}$ and since $f$ only depends on the real part,
\begin{equation}\label{eq-fCC}
f(C^-\circ C(e^{i\phi}))=f(e^{i\phi}), \quad e^{i\phi}\in\partial \mathbb{D}.
\end{equation} Also observe that
\begin{equation}\label{prod-int-w-compl}
C(w\cdot w')=Aw\cdot_{\mathbb{R}} Aw', \quad w,w'\in\Omega_2.
\end{equation}
Therefore, the bijection in the claim is the following: $$B:f\mapsto \widehat f:=f\circ {C^-}\,,$$ whose inverse is given by $$B^{-1}: \widehat f\mapsto f:=\widehat f \circ C\,.$$
Actually, for kernels $K$ and $\widehat K$ associated, respectively, to $f$ and $\widehat f$, it holds, by (\ref{eq-fCC}-\ref{prod-int-w-compl}), $$\widehat K(Aw,Aw')=\widehat f(Aw\cdot_{\mathbb{R}} Aw')=f(C^-\circ C (w\cdot w'))=f(w\cdot w')=K(w,w'),$$
then the definition of PD (resp. SPD) in \pref{eq-quad-form-geral} becomes equivalent for the two kernels. \end{proof}
The case of a product of spheres is very similar. \begin{lemma}\label{lm_bij_11}
There exists a bijection between PD (resp. SPD) functions on $S^1\times S^1$ and PD (resp. SPD) functions on $\Omega_2\times\Omega_2$ that
are invariant under conjugation in both variables, that is, $f(e^{i\phi},e^{i\psi})=f(e^{-i\phi},e^{i\psi})=f(e^{i\phi},e^{-i\psi}),\ \phi,\psi\in[0,2\pi)$. \end{lemma} \begin{proof} Consider a function $f:\partial\mathbb{D}\times\partial\mathbb{D} \to {\mathbb{C}}$ which is PD on $\Omega_2\times\Omega_2$ and satisfies $f(e^{i\phi},e^{i\psi})=f(e^{-i\phi},e^{i\psi})=f(e^{i\phi},e^{-i\psi})$, then it is real valued and it only depends on the real part of both $e^{i\phi}$ and $e^{i\psi}$.
Thus the proof follows the same lines as that of Lemma \ref{lm_bij_1}, where now the bijection is defined as $$B:f\mapsto \widehat f(\vartriangle,\star):=f( {C^-(\vartriangle),C^-(\star))}\,.$$ \end{proof}
Now we need to establish the correspondence between the coefficients in the expansions of $f$ and $\widehat f$.
For the single sphere case, a continuous function $f$ which is PD on $\Omega_2$ can be written as in \pref{eq-pd-prod1}.
The condition that $f$ is invariant under conjugation, assumed in Lemma \ref{lm_bij_1},
is equivalent to $a_m=a_{-m},\ m\in{\mathbb{Z}}$, then we can rewrite $$f(e^{i\phi})=a_0+\sum_{m\in{\mathbb{N}}} {a_{m}} (e^{im\phi}+e^{-im\phi})= a_0+\sum_{m\in{\mathbb{N}}} {2a_{m}} \cos(m\phi)$$
and the function $\widehat f$ corresponding to $f$ in the bijection from Lemma \ref{lm_bij_1} can be written as
$$\widehat f(\cos(\phi)) =f(C^-(\cos(\phi)))=f(e^{i\phi})
= a_0+\sum_{m\in{\mathbb{N}}} {2a_{m}} \cos(m\phi)\,.$$
If we consider the Schoenberg coefficients $\widehat a_m,\,m\in{\mathbb{Z}}_+$, for $\widehat f$, that is, $$ \widehat f(\cos(\phi))=\widehat a_0+\sum_{m\in{\mathbb{N}}} {\widehat a_{m}} (\cos(m\phi))\,,$$ we obtain the relation \begin{equation}\label{eq_relcoefS1O2}
a_{m}>0\Longleftrightarrow \widehat a_{|m|}>0,\ \ m\in{\mathbb{Z}}\,. \end{equation} Now, by \cite{P-valdir-claudemir-spd-compl}, the condition for $f$ to be SPD on $\Omega_2$ is
\begin{equation}\label{eq_intersO2} \pg{m\in{\mathbb{Z}}:\ a_{m}>0} \cap (N{\mathbb{Z}}+x)\neq \emptyset\qquad \mbox{for every $N,x\in{\mathbb{N}}$,}
\end{equation} which then translates, via \pref{eq_relcoefS1O2}, to the known condition (see \cite{P-valdir-claudemir-spd-compl,barbosa-men}) \begin{equation}
\pg{m\in{\mathbb{Z}}:\ a_{|m|}>0} \cap (N{\mathbb{Z}}+x)\neq \emptyset\qquad \mbox{for every $N,x\in{\mathbb{N}}$.}
\end{equation}
Again, when considering the product of two spheres, the argument is similar.
A continuous function $f$, which is PD on $\Omega_2\times\Omega_2$, is written as in \pref{eq-pd-prod-esf-complO2}
and the condition that $f$ is invariant under conjugation in both variables, assumed in Lemma \ref{lm_bij_11},
is equivalent to
$$a_{m,k}=a_{-m,k}=a_{m,-k}=a_{-m,-k},\ m,k\in{\mathbb{Z}}.$$ Then, proceeding as above, one obtains
\begin{eqnarray} \nonumber f(e^{i\phi},e^{i\psi})&=& a_0+\sum_{m\in{\mathbb{N}}} {2a_{m,0}} \cos(m\phi)+\sum_{k\in{\mathbb{N}}} {2a_{0,k}} \cos(k\psi)+\sum_{m,k\in{\mathbb{N}}} {4a_{m,k}} \cos(m\phi)\cos(k\psi)\\
&=&\widehat f(\cos\phi,\cos\psi)\,. \end{eqnarray} If we denote by $\widehat a_{m,k},\,m,k\in{\mathbb{Z}}_+$, the coefficients in the expansion of $\widehat f$ as in \pref{eq_expS1S1}, we obtain the relation \begin{equation}\label{eq_relcoefS1O2_quad}
a_{m,k}>0\Longleftrightarrow \widehat a_{|m|,|k|}>0,\ \ m,k\in{\mathbb{Z}}\,. \end{equation}
Finally, by Theorem \ref{th_main_11}, the condition for $f$ to be SPD on $\Omega_2\times\Omega_2$ is
\begin{equation}\label{eq_intersO2O2}
\pg{(m,k)\in{\mathbb{Z}}^2:\ a_{m,k}>0} \cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset\qquad \mbox{for every $N,M,x,y\in{\mathbb{N}}$,}
\end{equation}
which then translates, via \pref{eq_relcoefS1O2_quad}, to the condition \pref{eq_inters_S11} that we were seeking.
\end{document}
\begin{document}
\title{Modular invariants for genus 3 hyperelliptic curves }
\author{Sorina Ionica} \address{Sorina Ionica, Laboratoire MIS, Universit\'e de Picardie Jules Verne, 33 Rue Saint Leu, Amiens 80039, France} \email{sorina.ionica@u-picardie.fr}
\author{P{\i}nar K{\i}l{\i}\c{c}er} \thanks{Most of K{\i}l{\i}\c{c}er's work was carried out during her stay in Universiteit Leiden and Carl von Ossietzky Universit\"at Oldenburg. } \address{P{\i}nar K{\i}l{\i}\c{c}er, Johann Bernoulli Instituut voor Wiskunde en Informatica, Rijksuniversiteit Groningen, Nijenborgh 9, 9747 AG Groningen, Netherlands} \email{p.kilicer@rug.nl}
\author{Kristin Lauter} \address{Kristin Lauter, Microsoft Research, One Redmond Way, Redmond, WA 98052, USA} \email{klauter@microsoft.com}
\author{Elisa Lorenzo Garc\'ia} \address{Elisa Lorenzo Garc\'ia, Laboratoire IRMAR, Office 602, Universit\'e de Rennes 1, Campus de Beaulieu, 35042, Rennes Cedex, France} \email{elisa.lorenzogarcia@univ-rennes1.fr }
\author{Maike Massierer} \thanks{Maike Massierer was supported by the Australian Research Council (DP150101689).} \address{Maike Massierer, School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia} \email{maike.massierer@gmail.com}
\author{Adelina M\^{a}nz\u{a}\c{t}eanu} \address{Adelina M\^{a}nz\u{a}\c{t}eanu, Institute of Science and Technology Austria. School of Mathematics, University of Bristol, Bristol, BS8 1JR, UK} \email{am15112@bristol.ac.uk}
\author{Christelle Vincent} \thanks{Vincent is supported by the National Science Foundation under Grant No. DMS-1802323.} \address{Christelle Vincent, Department of Mathematics and Statistics, University of Vermont, 16 Colchester Avenue, Burlington VT 05401} \email{christelle.vincent@uvm.edu}
\makeatletter \let\@wraptoccontribs\wraptoccontribs \makeatother
\maketitle \markleft{S. IONICA ET AL.} \begin{abstract} In this article we prove an analogue of a theorem of Lachaud, Ritzenthaler, and Zykin, which allows us to connect invariants of binary octics to Siegel modular forms of genus 3. We use this connection to show that certain modular functions, when restricted to the hyperelliptic locus, assume values whose denominators are products of powers of primes of bad reduction for the associated hyperelliptic curves. We illustrate our theorem with explicit computations. This work is motivated by the study of the values of these modular functions at CM points of the Siegel upper-half space, which, if their denominators are known, can be used to effectively compute models of (hyperelliptic, in our case) curves with CM.
\end{abstract}
\section{Introduction} \label{sec:intro}
In his beautiful paper, Igusa~\cite{igusa67} proved that there is a homomorphism from a subring (containing forms of even weight) of the graded ring of Siegel modular forms of genus $g$ and level $1$ to the graded ring of invariants of binary forms of degree $2g+2$. In this paper, we consider Siegel modular functions which map to invariants of hyperelliptic curves under this homomorphism, and are thus called {\it modular invariants}.
We are interested in the primes that divide the denominators of certain quotients of these modular invariants.\footnote{Here by denominator we mean the least common multiple of the (rational) denominators that appear in an algebraic number's monic minimal polynomial.} Our work is motivated by the following computational problem: To recognize the value of a modular invariant as an exact algebraic number from a floating point approximation, one must have a bound on its denominator. Furthermore, the running time of the algorithm is greatly improved when the bound is tight.
Igusa~\cite{igusa67} gave an explicit construction of the above-mentioned homomorphism for all modular forms of level 1 which can be written as polynomials in the theta-constants. Our first contribution is an analogue of a result of Lachaud, Ritzenthaler, and Zykin \cite[Corollary~3.3.2]{LRZ}, which connects Siegel modular forms to invariants of plane quartics. Using a similar approach, which first connects Siegel modular forms to Teichm\"uller modular forms, we obtain a construction which is equivalent to Igusa's for modular forms of even weight. We then compute the image of the discriminant of a hyperelliptic curve under this homomorphism, thus extending and rephrasing a result of Lockhart~\cite[Proposition 3.2]{Lockhart}. This allows us to prove our main theorem:
\begin{theorem}\label{prop:newinvariant} Let $Z$ be a period matrix in $\mathcal{H}_3$, the Siegel upper half-space of genus $3$, corresponding to a smooth genus 3 hyperelliptic curve $C$ defined over a number field $M$. Let $f$ be a Siegel modular form of weight $k$ such that the invariant $\Phi$ obtained in Corollary~\ref{cor:3.3.2} is integral. Then \begin{equation*} j(Z) = \frac{f^{\frac{140}{\gcd(k,140)}}}{\Sigma_{140}^{\frac{k}{\gcd(k,140)}}}(Z) \end{equation*} is an algebraic number lying in $M$. Moreover, if an odd prime $\mathfrak{p}$ of $\mathcal{O}_{M}$ divides the denominator of this number, then the curve $C$ has geometrically bad reduction modulo $\mathfrak{p}$. \end{theorem}
Here, $\Sigma_{140}$ is the Siegel modular form of genus $3$ defined by Igusa \cite{igusa67} in terms of the theta constants (see equation~(\ref{eq:Thetas})) as follows: \begin{equation}\label{eq:sigma140} \Sigma_{140}(Z) = \sum_{i = 1}^{36} \prod_{j \neq i} \vartheta[\xi_j](0,Z)^8, \end{equation} where the $\xi_i$, $i=1,\ldots,36$ are the even theta characteristics we define in Section~\ref{Sec:CM}.
To illustrate this theorem, in Section~\ref{Sec-NewInv} we compute values of several modular invariants whose expressions have a power of $\Sigma_{140}$ in the denominator. For our experiments, we used: genus 3 hyperelliptic CM curves defined over ${\mathbb Q}$, a complete list of which is given in \cite{KS2017}; genus 3 hyperelliptic curves already appearing in some experiments concerning the Chabauty-Coleman method \cite{Chabauty}; and some genus~3 hyperelliptic modular curves \cite{GalbraithThesis,Ogg}.
Note that Theorem~\ref{prop:newinvariant} is an analogue of a result of Goren and Lauter for curves of genus 2 with CM~\cite{GorenLauter07}. The case of CM hyperelliptic curves is interesting because the bound on the primes dividing the denominators of Igusa invariants proved in~\cite{GorenLauter07} is used to improve the algorithms to construct genus $2$ CM curves. We hope that apart from its theoretical interest, our result will allow a similar computation in the case of CM hyperelliptic curves of genus $3$.
\paragraph{Outline.} This paper is organized as follows. We begin in Section \ref{Sec:CM} with some background on theta functions, the Igusa construction and the Shioda invariants of hyperelliptic curves. Only the most basic facts are given, and references are provided for the reader who would like to delve further.
Then, in Section \ref{Sec:ModularInvs}, we give a correspondence that allows us to relate invariants of octics to Siegel modular forms of degree 3. Using this correspondence, we then show in Section \ref{Sec:MainTheorem} that the primes dividing the denominators of modular invariants that have powers of the Siegel modular form $\Sigma_{140}$ as their denominator are primes of bad reduction, which is our main theorem (Theorem \ref{prop:newinvariant} above).
Finally, in Section~\ref{Sec-NewInv} we present the list of hyperelliptic curves of genus $3$ for which we computed the values of several modular invariants having powers of $\Sigma_{140}$ as their denominator, when evaluated at a period matrix of their Jacobian. We compared the factorization of the denominators of these values against that of the denominators of the Shioda invariants of these curves and the odd primes of bad reduction of these curves.
\section{Hyperelliptic curves of genus $3$ with complex multiplication }\label{Sec:CM}
In this section we introduce notation and discuss theta functions and theta characteristics, which are crucial to the definition of the Siegel modular invariants we consider in this paper. We briefly recall Igusa's construction of a homomorphism between the graded ring of Siegel modular forms and the graded ring of invariants of a binary form. Finally, we define the Shioda invariants of genus 3 hyperelliptic curves.
\subsection{Theta functions and theta characteristics} \label{Sec: hyp}
In this work, by \emph{period matrix} we will mean a $g \times g$ complex symmetric matrix $Z$ with positive definite imaginary part, that is, a matrix in the Siegel upper half-space of genus $g$. (This is sometimes called a \emph{small} period matrix, but for simplicity, and since there is no risk of confusion here, we call them period matrices.)
In this case, the relationship between the abelian variety and the period matrix is that the complex points of the abelian variety are exactly the complex points of the torus ${\mathbb C}^g/(\mathbb{Z}^g+Z\mathbb{Z}^g)$.
We denote by $\mathcal{H}_g$ the Siegel upper half-space. We now turn our attention to the subject of theta functions. For $\omega \in {\mathbb C}^g$ and $Z \in \mathcal{H}_g$, we define the following important series: \begin{equation*} \vartheta(\omega, Z) = \sum_{n \in {\mathbb Z}^{g}}\exp(\pi i n^T Z n + 2 \pi i n^ T \omega), \end{equation*} where throughout this article an exponent of $T$ on a vector or a matrix denotes the transpose.
Given a period matrix $Z \in \mathcal{H}_g$, we obtain a set of coordinates on the torus ${\mathbb C}^g/(\mathbb{Z}^g+Z\mathbb{Z}^g)$ in the following way: A vector $x \in [0,1)^{2g}$ corresponds to the point $x_2 + Z x_1 \in {\mathbb C}^g/(\mathbb{Z}^g+Z\mathbb{Z}^g)$, where $x_1$ denotes the first $g$ entries and $x_2$ denotes the last $g$ entries of the vector $x$ of length $2g$.
For reasons beyond the scope of this short text, it is of interest to consider the value of this theta function as we translate $\omega$ by points that, under the natural quotient map ${\mathbb C}^g \to {\mathbb C}^g/(\mathbb{Z}^g+Z\mathbb{Z}^g)$, map to $2$-torsion points. These points are of the form $\xi_2 + Z \xi_1$ for $\xi \in (1/2){\mathbb Z}^{2g}$. This motivates the following definition: \begin{equation}\label{eq:Thetas} \vartheta[\xi](\omega, Z) = \exp(\pi i \xi_1^T Z \xi_1 + 2 \pi i \xi_1^T (\omega+\xi_2)) \vartheta( \omega+\xi_2 + Z \xi_1, Z), \end{equation} which is given in~\cite[page 123]{Mumford1}. In this context, $\xi$ is customarily called a \emph{characteristic} or \emph{theta characteristic}. The value $\vartheta[\xi](0,Z)$ is called a \emph{theta constant}.
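The theta series converges very rapidly, since the terms decay like $e^{-c n^T n}$. A minimal sketch for $g=1$ (our truncation bound $N=30$ is far more than needed) evaluates the constant $\vartheta(0, i)$, which classically equals $\pi^{1/4}/\Gamma(3/4)$:

```python
import cmath, math

# Truncated theta series for g = 1:
#   theta(w, Z) = sum_n exp(pi i n^2 Z + 2 pi i n w).
def theta_g1(w, Z, N=30):
    return sum(cmath.exp(cmath.pi * 1j * n * n * Z + 2 * cmath.pi * 1j * n * w)
               for n in range(-N, N + 1))

# Classical special value: theta(0, i) = pi^(1/4) / Gamma(3/4) ~ 1.08643.
val = theta_g1(0, 1j)
assert abs(val.imag) < 1e-12  # at Z = i every term is real
assert abs(val.real - math.pi ** 0.25 / math.gamma(0.75)) < 1e-10
```

For higher genus the same truncation works over lattice points $n\in{\mathbb Z}^g$ in a box, with the quadratic form $n^T Z n$ in the exponent.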
For $\xi \in (1/2){\mathbb Z}^{2g}$, let \begin{equation}\label{eq:estar} e_*(\xi) = \exp(4\pi i \xi_1^T \xi_2). \end{equation} We say that a characteristic $\xi \in (1/2){\mathbb Z}^{2g}$ is \emph{even} if $e_*(\xi) = 1$ and \emph{odd} if $e_*(\xi) = -1$. If $\xi$ is even we call $\vartheta[\xi](0,Z)$ an \emph{even theta constant} and if $\xi$ is odd we call $\vartheta[\xi](0,Z)$ an \emph{odd theta constant}.
We have the following fact about the series $\vartheta[\xi](\omega,Z)$ \cite[Chapter II, Proposition 3.14]{Mumford1}: For $\xi \in (1/2){\mathbb Z}^{2g}$, \begin{equation*} \vartheta[\xi](-\omega,Z) = e_*(\xi) \vartheta[\xi](\omega,Z). \end{equation*} From this we conclude that all odd theta constants vanish. Furthermore, we have that if $n \in \mathbb{Z}^{2g}$ is a vector with integer entries, \begin{equation*} \vartheta[\xi + n](\omega, Z) = \exp(2\pi i \xi_1^T n_2) \vartheta[\xi](\omega, Z). \end{equation*} In other words, if $\xi$ is modified by a vector with integer entries, the theta value at worst acquires a factor of~$-1$. Up to this sign, we note that there are in total $2^{g-1}(2^g+1)$ even theta constants and $2^{g-1}(2^g-1)$ odd ones.
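The counts of even and odd characteristics are easy to verify by brute force. The following sketch (our own illustration) enumerates representatives $\xi \in (1/2)\mathbb{Z}^{2g}/\mathbb{Z}^{2g}$ and classifies them by the sign $e_*(\xi) = \exp(4\pi i\, \xi_1^T \xi_2)$:

```python
from itertools import product

def count_characteristics(g):
    """Count even and odd half-integer characteristics modulo Z^{2g}.
    Since 4*xi1.xi2 is an integer, e_*(xi) = (-1)^(4*xi1.xi2)."""
    even = odd = 0
    for xi in product((0, 0.5), repeat=2 * g):
        xi1, xi2 = xi[:g], xi[g:]
        dot = sum(a * b for a, b in zip(xi1, xi2))
        if (-1) ** round(4 * dot) == 1:
            even += 1
        else:
            odd += 1
    return even, odd

for g in (1, 2, 3):
    print(g, count_characteristics(g))
# g = 3 gives 36 even and 28 odd, matching 2^{g-1}(2^g + 1) and 2^{g-1}(2^g - 1).
```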
We can now fully describe the modular form $\Sigma_{140}$ defined in the introduction (equation \eqref{eq:sigma140}). First, we note that when $g = 3$, there are $36$ even theta characteristics. For simplicity of notation, we give an arbitrary ordering to these even theta characteristics, and label them $\xi_1, \ldots, \xi_{36}$. Then we have \begin{equation*} \Sigma_{140}(Z) = \sum_{i = 1}^{36} \prod_{j \neq i} \vartheta[\xi_j](0,Z)^8, \end{equation*} the $35^{\text{th}}$ elementary symmetric polynomial in the eighth powers of the even theta constants.
We will also need another Siegel modular form introduced by Igusa \cite{igusa67} and given by \begin{equation}\label{eq:chi18} \chi_{18}(Z) = \prod_{i=1}^{36} \vartheta[\xi_i](0,Z). \end{equation} Igusa shows that $\Sigma_{140}$ and $\chi_{18}$ are Siegel modular forms of level 1 for the symplectic group $\Sp(6,\mathbb{Z})$.
The significance of these modular forms is the following: in \emph{loc.\ cit.}, Igusa shows that a period matrix $Z$ corresponds to the simple Jacobian of a hyperelliptic curve when $\chi_{18}(Z)=0$ and $\Sigma_{140}(Z)\neq 0$, and to a reducible Jacobian when $\chi_{18}(Z)=\Sigma_{140}(Z)=0$. Moreover, $\chi_{18}$ will appear later as a generator of the kernel of Siegel's homomorphism mentioned in the introduction.
\subsection{Igusa's construction}
Let $S(2,2g+2)$ be the graded ring of projective invariants of a binary form of degree $2g+2$. We denote by $\Sp(2g,\mathbb{Z})$ the group of $2g \times 2g$ symplectic matrices with integer entries and by $A(\Sp(2g,\mathbb{Z}))$ the graded ring of modular forms of degree $g$ and level 1. There exists a homomorphism \begin{eqnarray*} \rho \colon A(\Sp(2g,\mathbb{Z}))\rightarrow S(2,2g+2), \end{eqnarray*} which was first constructed by Igusa~\cite{igusa67}. Historically, Igusa only showed that the domain of $\rho$ equals $A(\Sp(2g,\mathbb{Z}))$ when $g$ is odd or $g=2,4$, and that for even $g>4$, a sufficient condition for the domain to be the full ring $A(\Sp(2g,\mathbb{Z}))$ is the existence of a modular form of odd weight that does not vanish on the hyperelliptic locus. Such a form was later exhibited by Salvati Manni in~\cite{Salvati}, from which it follows that the domain of $\rho$ is the full ring of Siegel modular forms.
The kernel of $\rho$ consists of the modular forms which vanish at all points of $\mathcal{H}_g$ associated with a hyperelliptic curve. In particular, Igusa shows that in genus 3, the kernel of $\rho$ is a principal ideal generated by the form $\chi_{18}$ defined in equation \eqref{eq:chi18}. Furthermore, Igusa shows that this homomorphism $\rho$ is unique, up to a constant. More precisely, any other map is of the form $\zeta_4^{k}\rho$ on the homogeneous part $A(\Sp(2g,\mathbb{Z}))_k$, where $\zeta_4$ is a fourth root of unity. In Section~\ref{Sec:ModularInvs} we display a similar map sending Siegel modular forms to invariants, by going first through the space of geometric Siegel modular forms and then through that of Teichm\"uller forms. As a consequence, our map coincides with the map $\rho$ constructed by Igusa, up to constants. The advantage of our construction is that it allows us to identify a modular form that is in the preimage of a power of the discriminant of the curve under this homomorphism.
\subsection{Shioda invariants}\label{Sec:Shiodas}
We lastly turn our attention to the (integral) invariants under study in this article. A polynomial in the coefficients of a binary form corresponding to a hyperelliptic curve that is invariant under the natural action of $\operatorname{SL}_2(\mathbb{C})$ is called an \emph{invariant of the hyperelliptic curve}; such an invariant is \emph{integral} if the polynomial has integer coefficients. Shioda gave a set of generators for the algebra of invariants of binary octics over the complex numbers~\cite{Shioda}, which are now called \emph{Shioda invariants}. In addition, over the complex numbers, Shioda invariants completely classify isomorphism classes of hyperelliptic curves of genus $3$. More specifically, the Shioda invariants are $9$ weighted projective invariants $(J_2,J_3,J_4,J_5,J_6,J_7,J_8,J_9,J_{10})$, where $J_i$ has degree $i$; the invariants $J_2, \ldots, J_7$ are algebraically independent, while $J_8,J_9,J_{10}$ depend algebraically on the previous ones.
In \cite{LerRit}, the authors showed that these invariants are also generators of the algebra of invariants and determine hyperelliptic curves of genus 3 up to isomorphism in characteristic $p>7$. Later, in his thesis \cite{basson}, Basson provided some extra invariants that together with the classical Shioda invariants classify hyperelliptic curves of genus~3 up to isomorphism in characteristics $3$ and $7$. The characteristic $5$ case is still an unpublished theorem of Basson.
\section{Invariants of hyperelliptic curves and Siegel modular forms}\label{Sec:ModularInvs}
The aim of this section is to establish an analogue for the hyperelliptic locus of Corollary~3.3.2 in an article of Lachaud, Ritzenthaler and Zykin \cite{LRZ}. Our result, while technically new, does not use any ideas that do not appear in the original paper. We begin by establishing the basic ingredients necessary, using the same notation as in \cite{LRZ} for clarity, and with the understanding that, when omitted, all details may be found in \emph{loc.\ cit.}
Roughly speaking, the main idea of the proof is to compare three different ``flavors'' of modular forms and invariants of curves (non-hyperelliptic curves in the original article, here replaced with hyperelliptic curves). The comparison goes as follows: to connect analytic Siegel modular forms to invariants of curves, the authors first connect analytic Siegel modular forms to geometric modular forms. Following this, geometric modular forms are connected to Teichm\"uller modular forms, via the Torelli map and a result of Ichikawa. Finally, Teichm\"uller forms are connected to invariants of curves.
\subsection{From analytic Siegel modular forms to geometric Siegel modular forms} Let $\mathbf{A}_{g}$ be the moduli stack of principally polarized abelian schemes of relative dimension $g$, and $\pi \colon \mathbf{V}_{g} \to \mathbf{A}_{g}$ be the universal abelian scheme with zero section $\epsilon \colon \mathbf{A}_{g} \to \mathbf{V}_{g}$. Then the relative canonical line bundle over $\mathbf{A}_g$ is given in terms of the rank $g$ bundle of relative regular differential forms of degree one on $\mathbf{V}_g$ over $\mathbf{A}_g$ by the expression \begin{equation*} \boldsymbol{\omega} = \bigwedge^g \epsilon^*\Omega^1_{\mathbf{V}_g/\mathbf{A}_g}. \end{equation*}
With this notation, a \emph{geometric Siegel modular form of genus $g$ and weight $h$}, for $h$ a positive integer, over a field $k$, is an element of the $k$-vector space \begin{equation*} \mathbf{S}_{g,h}(k) = \Gamma(\mathbf{A}_g \otimes k, \boldsymbol{\omega}^{\otimes h}). \end{equation*} If $f \in \mathbf{S}_{g,h}(k)$ and $A$ is a principally polarized abelian variety of dimension $g$ defined over $k$ equipped with a basis $\alpha$ of the 1-dimensional space $\boldsymbol{\omega}_{k}(A)=\bigwedge^g \Omega^1_k(A)$, we define \begin{equation*} f(A, \alpha) = \frac{f(A)}{\alpha^{\otimes h}}. \end{equation*} In this way $f(A, \alpha)$ is an algebraic or geometric modular form in the usual sense, i.e., \begin{enumerate} \item $f(A,\lambda \alpha) = \lambda^{-h} f(A,\alpha)$ for any $\lambda \in k^{\times}$, and \item $f(A,\alpha)$ depends only on the $\bar{k}$-isomorphism class of the pair $(A,\alpha)$. \end{enumerate} Conversely, such a rule defines a unique $f \in \mathbf{S}_{g,h}$.
We first compare these geometric Siegel modular forms to the usual analytic Siegel modular forms:
\begin{proposition}[Proposition 2.2.1 of \cite{LRZ}]\label{prop:2.2.1} Let $\mathbf{R}_{g,h}(\mathbb{C})$ denote the usual space of analytic Siegel modular forms of genus $g$ and weight $h$. Then there is an isomorphism \begin{equation*} \mathbf{S}_{g,h}(\mathbb{C}) \to \mathbf{R}_{g,h}(\mathbb{C}), \end{equation*} given by sending $f \in \mathbf{S}_{g,h}(\mathbb{C})$ to \begin{equation*} \tilde{f}(Z) = \frac{f(A_{Z})}{(2\pi i)^{gh}(dz_1 \wedge \ldots \wedge dz_g)^{\otimes h}}, \end{equation*} where $A_{Z} = \mathbb{C}^g/(\mathbb{Z}^g + Z \mathbb{Z}^g)$, $Z \in \mathcal{H}_g$ and each $z_i \in \mathbb{C}$. \end{proposition}
Furthermore, this isomorphism has the following pleasant property:
\begin{proposition}[Proposition 2.4.4 of \cite{LRZ}]\label{prop:2.4.4} Let $(A,a)$ be a principally polarized abelian variety of dimension $g$ defined over $\mathbb{C}$, let $\omega_1, \ldots, \omega_g$ be a basis of $\Omega^1_{\mathbb{C}}(A)$ and let $\omega = \omega_1 \wedge \ldots \wedge \omega_g \in \boldsymbol{\omega}_{\mathbb{C}}(A)$. If $\Omega = \left(\begin{smallmatrix} \Omega_1 & \Omega_2 \end{smallmatrix} \right)$ is a Riemann matrix obtained by integrating the forms $\omega_i$ against a basis of $H_1(A,\mathbb{Z})$ for the polarization $a$, then $Z = \Omega_2^{-1}\Omega_1$ is in $\mathcal{H}_g$ and \begin{equation*} f(A, \omega) = (2 \pi i)^{gh} \frac{\tilde{f}(Z)}{\det \Omega_2^h}. \end{equation*} \end{proposition}
\subsection{From geometric Siegel modular forms to Teichm\"uller modular forms} We now turn our attention to so-called Teichm\"uller modular forms, which were studied by Ichikawa \cite{ichikawa1,ichikawa2,ichikawa3,ichikawa4}. Let $\mathbf{M}_g$ be the moduli stack of curves of genus $g$, let $\pi \colon \mathbf{C}_g \to \mathbf{M}_g$ be the universal curve, and let \begin{equation*} \boldsymbol{\lambda} = \bigwedge^g \pi_* \Omega^1_{\mathbf{C}_g/\mathbf{M}_g} \end{equation*} be the invertible sheaf associated to the Hodge bundle.
With this notation, a \emph{Teichm\"uller modular form of genus $g$ and weight $h$}, for $h$ a positive integer, over a field $k$, is an element of the $k$-vector space \begin{equation*} \mathbf{T}_{g,h}(k) = \Gamma(\mathbf{M}_g \otimes k,\boldsymbol{\lambda}^{\otimes h}). \end{equation*} As before, if $f \in \mathbf{T}_{g,h}(k)$ and $C$ is a curve of genus $g$ defined over $k$ equipped with a basis $\lambda$ of $\boldsymbol{\lambda}_{k}(C)=\bigwedge^g \Omega^1_k(C)$, we define \begin{equation*} f(C, \lambda) = \frac{f(C)}{\lambda^{\otimes h}}. \end{equation*} Again, $f(C, \lambda)$ is an algebraic modular form in the usual sense. Ichikawa proves: \begin{proposition}[Proposition 2.3.1 of \cite{LRZ}]\label{prop:2.3.1} The Torelli map $\theta \colon \mathbf{M}_g \to \mathbf{A}_g$, associating to a curve $C$ its Jacobian $\Jac C$ with the canonical polarization $j$, satisfies $\theta^* \boldsymbol{\omega} = \boldsymbol{\lambda}$, and induces for any field a linear map \begin{equation*} \theta^* \colon \mathbf{S}_{g,h}(k) \to \mathbf{T}_{g,h}(k) \end{equation*} such that $(\theta^*f)(C) = \theta^*(f(\Jac C)).$ In other words, for a basis $\lambda$ of $\boldsymbol{\lambda}_{k}(C)$ and a basis $\alpha$ of $\boldsymbol{\omega}_k(\Jac C)$ whose pullback to $C$ equals $\lambda$, we have \begin{equation*} f(\Jac C, \alpha) = (\theta^* f)(C,\lambda). \end{equation*} \end{proposition}
\subsection{From Teichm\"uller modular forms to invariants of binary forms} We finally connect the Teichm\"uller modular forms to invariants of hyperelliptic curves. To this end, let $E$ be a vector space of dimension $2$ over a field $k$ of characteristic different from $2$, and put $G = \GL(E)$ and $\mathbf{X}_d = \Sym^d(E^*)$, the space of homogeneous polynomials of degree $d$ on $E$. We define the action of $G$ on $\mathbf{X}_d$, $u \cdot F$ for $F \in \mathbf{X}_d$, by \begin{equation*} (u \cdot F)(x,z) = F(u^{-1}(x,z)). \end{equation*} (By a slight abuse of notation we denote an element of $E$ by the pair $(x,z)$, effectively prescribing a basis. Our reason to do so will become clear later.)
We say that $\Phi$ is an \emph{invariant of degree $h$} if $\Phi$ is a regular function on $\mathbf{X}_d$, homogeneous of degree $h$ (by which we mean that $\Phi(\lambda F) = \lambda^h \Phi(F)$ for $\lambda \in k^{\times}$ and $F \in \mathbf{X}_d$) and \begin{equation*} u \cdot \Phi = \Phi \quad \text{for every} \quad u \in \SL(E), \end{equation*} where the action $u \cdot \Phi$ is given by \begin{equation*} (u \cdot \Phi)(F) = \Phi( u^{-1} \cdot F). \end{equation*} We denote the space of invariants of degree $h$ by $\operatorname{Inv}_h(\mathbf{X}_d)$. Note that in what follows we will define an open subset $\mathbf{X}_d^0$ of $\mathbf{X}_d$ and be interested in the invariants of degree $h$ that are regular on that open set. The definition of invariance is the same; all that changes is the set on which the function is required to be regular.
From now on we require $d\geq 6$ to be even, and put $g = \frac{d-2}{2}$. The \emph{universal hyperelliptic curve} over the affine space $\mathbf{X}_d = \Sym^d(E^*)$ is then the variety \begin{equation*} \mathbf{Y}_d = \left\{(F,(x,y,z)) \in \mathbf{X}_d \times \mathbb{P}\left(1,\frac{d}{2},1\right) : y^2 = F(x,z) \right\}, \end{equation*} where $\mathbb{P}(1,g+1,1)$ is the weighted projective plane with $x$ and $z$ having weight $1$ and $y$ having weight $g+1 = \frac{d}{2}$. The non-singular locus of $\mathbf{X}_d$ is the open set \begin{equation*} \mathbf{X}_d^0 = \{ F \in \mathbf{X}_d : \Disc(F) \neq 0\}. \end{equation*} We denote by $\mathbf{Y}_d^0$ the restriction of $\mathbf{Y}_d$ to the nonsingular locus. The projection gives a smooth surjective $k$-morphism \begin{equation*} \pi \colon \mathbf{Y}_d^0 \to \mathbf{X}_d^0 \end{equation*} and its fiber over $F$ is the nonsingular hyperelliptic curve $C_F : y^2 = F(x,z)$ of genus $g$. In this case we have an explicit $k$-basis for the space of holomorphic differentials of $C_F$, denoted $\Omega^1(C_F)$, given by \begin{equation}\label{eq:basiseta} \omega_1 = \frac{dx}{y}, \quad \omega_2 = \frac{xdx}{y}, \quad \ldots, \quad \omega_g = \frac{x^{g-1}dx}{y}. \end{equation}
Now let $u \in G$ act on $\mathbf{Y}_d$ by \begin{equation*} u \cdot (F, (x,y,z)) = (u \cdot F, u \cdot (x,y,z)), \end{equation*} where the action on $F$ is given by \begin{equation*} (u \cdot F) (x,z) = F(u^{-1}(x,z)) \end{equation*} and the action of $u$ on $(x,y,z)$ is given by replacing the vector $(x,z)$ by $u(x,z)$ and leaving $y$ invariant. Then the projection \begin{equation*} \pi \colon \mathbf{Y}_d^0 \to \mathbf{X}_d^0 \end{equation*} is $G$-equivariant.
Then as in \cite{LRZ}, the section \begin{equation*} \omega = \omega_1 \wedge \ldots \wedge \omega_g \end{equation*} is a basis of the one-dimensional space $\Gamma(\mathbf{X}^0_d, \boldsymbol{\alpha})$, where \begin{equation*} \boldsymbol{\alpha} = \bigwedge^g \pi_* \Omega^1_{\mathbf{Y}_d^0/\mathbf{X}_d^0}, \end{equation*} the Hodge bundle of the universal curve over $\mathbf{X}_d^0$. For every $F \in \mathbf{X}_d^0$, an element $u \in G$ induces an isomorphism \begin{equation*} \phi_u \colon C_F \to C_{u\cdot F}, \end{equation*} and this defines a linear automorphism $\phi^*_u$ of $\boldsymbol{\alpha}$.
For any $h \in \mathbb{Z}$, we define $\Gamma(\mathbf{X}_d^0, \boldsymbol{\alpha}^{\otimes h})^G$ to be the subspace of sections $s \in \Gamma(\mathbf{X}_d^0, \boldsymbol{\alpha}^{\otimes h})$ such that \begin{equation*} \phi_u^*(s) = s \end{equation*} for every $u \in G$. For such a section $s$, a basis $\alpha \in \Gamma(\mathbf{X}_d^0, \boldsymbol{\alpha})$ and $F \in \mathbf{X}_d^0$, we define \begin{equation*} s(F,\alpha) = \frac{s(F)}{\alpha^{\otimes h}}. \end{equation*} This gives us the space that will be related to the invariants of hyperelliptic curves.
In this setting we have the exact analogue of Proposition 3.2.1 of \cite{LRZ}: \begin{proposition}\label{prop:3.2.1} The section $\omega \in \Gamma(\mathbf{X}_d^0,\boldsymbol{\alpha})$ satisfies the following properties: \begin{enumerate} \item If $u \in G$, then \begin{equation*} \phi_u^* \omega = \det(u)^{w_0} \omega, \end{equation*} with \begin{equation*} w_0 = \frac{dg}{4}. \end{equation*} \item Let $h \geq 0$ be an integer. The linear map \begin{align*} \tau \colon \operatorname{Inv}_{\frac{gh}{2}}(\mathbf{X}_d^0) &\to \Gamma(\mathbf{X}^0_d, \boldsymbol{\alpha}^{\otimes h})^G \\ \Phi &\mapsto \Phi \cdot \omega^{\otimes h} \end{align*} is an isomorphism. \end{enumerate} \end{proposition}
\begin{proof} The proof of the first part goes exactly as in the original: For $u \in G$, we have that \begin{equation*} (\phi_u^* \omega) (F, \omega) = c(u,F) \omega(F,\omega), \end{equation*} and we can conclude, via the argument given in \cite{LRZ}, that $c(u,F)$ is independent of $F$ and defines a character $\chi$ of $G$, and that in fact \begin{equation*} c(u,F) = \chi(u) = \det u ^{w_0} \end{equation*} for some integer $w_0$. To compute $w_0$ we again follow the original and set $u = \lambda I_2$ with $\lambda \in k^{\times}$ to obtain \begin{equation*} \frac{\omega_i(\lambda^{-d}F)}{\omega_i(F)} = \frac{x^{i-1}dx}{\sqrt{\lambda^{-d}F(x,z)}} \div \frac{x^{i-1}dx}{\sqrt{F(x,z)}} = \lambda^{d/2}, \end{equation*} since $y = \sqrt{F(x,z)}$, for each $i = 1, \ldots, g$. Hence \begin{equation*} (\phi_u^* \omega) (F, \omega) = \lambda^{dg/2} = \det(u)^{w_0} \end{equation*} and since $\det(u) = \lambda^2$ we have \begin{equation*} {w_0} = \frac{dg}{4} = \frac{d(d-2)}{8}. \end{equation*}
The proof of the second part also goes exactly as in the original, with the replacement of a denominator of $4$ instead of $3$ in the quantity that is denoted $w$ in \cite{LRZ}. \end{proof}
\subsection{Final step} With this in hand, we immediately obtain the analogue of Proposition 3.3.1 of \cite{LRZ}. We begin by setting up the notation we will need. We continue to have $d \geq 6$ an even integer and $g = \frac{d-2}{2}$. Because the fibers of $\pi \colon \mathbf{Y}_d^0 \to \mathbf{X}_d^0$ are smooth hyperelliptic curves of genus $g$, by the universal property of $\mathbf{M}_g$, we get a morphism \begin{equation*} p \colon \mathbf{X}_d^0 \to \mathbf{M}_g^{hyp}, \end{equation*} where this time $\mathbf{M}_g^{hyp}$ is the hyperelliptic locus of the moduli stack $\mathbf{M}_g$ of curves of genus $g$. By construction we have $p^* \boldsymbol{\lambda} = \boldsymbol{\alpha}$, and therefore we obtain a morphism \begin{equation*} p^* \colon \Gamma(\mathbf{M}_g^{hyp}, \boldsymbol{\lambda}^{\otimes h}) \to \Gamma(\mathbf{X}_d^0, \boldsymbol{\alpha}^{\otimes h}). \end{equation*} As in \cite{LRZ}, by the universal property of $\mathbf{M}_g^{hyp}$, we have \begin{equation*} \phi_u^* \circ p^*(s) = p^*(s) \end{equation*} for $s \in \Gamma(\mathbf{M}_g^{hyp}, \boldsymbol{\lambda}^{\otimes h})$. From this we conclude that $p^*(s) \in \Gamma(\mathbf{X}_d^0,\boldsymbol{\alpha}^{\otimes h})^G$, and combining this with the second part of Proposition \ref{prop:3.2.1}, which establishes the isomorphism of $\Gamma(\mathbf{X}_d^0,\boldsymbol{\alpha}^{\otimes h})^G$ and $\operatorname{Inv}_{\frac{gh}{2}}(\mathbf{X}_d^0)$, we obtain:
\begin{proposition}\label{prop:3.3.1} For any even $h \geq 0$, the linear map given by $\sigma = \tau^{-1} \circ p^*$ is a homomorphism \begin{equation*} \sigma \colon \Gamma(\mathbf{M}_g^{hyp}, \boldsymbol{\lambda}^{\otimes h}) \to \operatorname{Inv}_{\frac{gh}{2}}(\mathbf{X}_d^0) \end{equation*} satisfying \begin{equation*} \sigma(f)(F) = f(C_F, (p^*)^{-1}\omega) \end{equation*} for any $F \in \mathbf{X}_d^0$ and any section $f \in \Gamma(\mathbf{M}_g^{hyp}, \boldsymbol{\lambda}^{\otimes h})$. \end{proposition}
This is the last ingredient necessary to show the analogue of Corollary 3.3.2 of \cite{LRZ}.
\begin{corollary}\label{cor:3.3.2} Let $f \in \mathbf{S}_{g,h}(\mathbb{C})$ be a geometric Siegel modular form, $\tilde{f} \in \mathbf{R}_{g,h}(\mathbb{C})$ be the corresponding analytic modular form, and $\Phi = \sigma(\theta^*f)$ the corresponding invariant. Let further $F \in \mathbf{X}_d^0$ give rise to the curve $C_F$ equipped with the basis of regular differentials given by the forms $\omega_1, \ldots, \omega_g$ given in equation \eqref{eq:basiseta}. Then if $\Omega = \left( \begin{smallmatrix} \Omega_1 & \Omega_2\end{smallmatrix}\right)$ is a Riemann matrix for the curve $C_F$ obtained by integrating the forms $\omega_i$ against a symplectic basis for the homology group $H_1(C_F,\mathbb{Z})$ and $Z = \Omega_2^{-1}\Omega_1 \in \mathcal{H}_g$, we have \begin{equation*} \Phi(F) = (2 \pi i)^{gh} \frac{\tilde{f}(Z)}{\det \Omega_2^h}. \end{equation*} \end{corollary}
The last two results display a connection between Siegel modular forms of even weight restricted to the hyperelliptic locus and invariants of binary forms of degree $2g+2$.
\section{Denominators of modular invariants and primes of bad reduction}\label{Sec:MainTheorem}
In this Section we prove our main theorem, Theorem \ref{prop:newinvariant}. The proof of this result has three main ingredients. In the previous Section, we have already adapted to the case of hyperelliptic curves a result of Lachaud, Ritzenthaler and Zykin \cite{LRZ} that connects invariants of curves to Siegel modular forms. In this Section, we now generalize a result of Lockhart \cite{Lockhart} to connect the discriminant of a hyperelliptic curve specifically to the Siegel modular form $\Sigma_{140}$ of equation \eqref{eq:sigma140}. Finally, we relate the divisibility of the value of $\Sigma_{140}$ by an odd prime $\mathfrak{p}$ to the bad reduction of the curve at $\mathfrak{p}$, using a result of K{\i}l{\i}\c{c}er, Lauter, Lorenzo Garc\'ia, Newton, Ozman, and Streng \cite{KLLNOS}.
\subsection{The modular discriminant}\label{Lockhart}
We first turn our attention to the work of Lockhart~\cite[Definition 3.1]{Lockhart}, in which the author relates the discriminant $\Delta$ of a hyperelliptic curve of genus $g$ given by $y^2=F(x,1)$ to a Siegel modular form similar to $\Sigma_{140}$. Here $\Delta$ is related to the discriminant $D$ of the binary form $F(x,z)$ by \begin{equation}\label{Disc} \Delta = 2^{4g} D \end{equation} (see \cite[Definition 1.6]{Lockhart}). From a computational perspective, the issue with the Siegel modular form proposed by Lockhart is that its value, as written, will be nonzero only for $Z$ a period matrix in a certain $\Gamma(2)$-equivalence class. Indeed, on page 740, the author chooses the traditional symplectic basis for $H_1(C,\mathbb{Z})$ which is given by Mumford \cite[Chapter III, Section 5]{Mumford}. If one acts on the symplectic basis by a matrix in $\Gamma(2)$, the value of the form given by Lockhart will change by a nonzero constant (the appearance of the principal congruence subgroup of level $2$ is related to the use of half-integral theta characteristics to define the form), but if one acts on the symplectic basis by a general element of $\Sp(6,\mathbb{Z})$, the value of the form might become zero.
As explained in \cite{BILV}, in general to allow for the period matrix to belong to a different $\Gamma(2)$-equivalence class, one must attach to the period matrix an element of a set defined by Poor \cite{poor}, which we call an $\eta$-map. Therefore in general one must either modify Lockhart's definition to vary with a map $\eta$ admitted by the period matrix or use the form $\Sigma_{140}$, which is nonzero for any hyperelliptic period matrix. We give here the connection between these two options. We begin by describing the maps $\eta$ that can be attached to a hyperelliptic period matrix. We refer the reader to \cite{poor} or \cite{BILV} for full details.
Throughout, let $C$ be a smooth hyperelliptic curve of genus $g$ defined over~$\mathbb{C}$ equipped with a period matrix $Z$ for its Jacobian, and for which the branch points of the degree $2$ morphism $\pi \colon C \to \mathbb{P}^1$ have been labeled with the symbols $\{1, 2, \ldots, 2g+1, \infty\}$. We note that this choice of period matrix yields an Abel--Jacobi map, \begin{equation*} AJ \colon \Jac(C) \to {\mathbb C}^g/(\mathbb{Z}^g+Z\mathbb{Z}^g). \end{equation*}
We begin by defining a certain combinatorial group we will need.
\begin{definition} Let $B = \{1, 2, \ldots, 2g+1, \infty\}$. For any two subsets $S_1, S_2 \subseteq B$, we define \begin{equation*} S_1 \circ S_2 = (S_1 \cup S_2) - (S_1 \cap S_2), \end{equation*} the symmetric difference of the two sets. For $S \subseteq B$ we also define $S^c = B - S$, the complement of $S$ in $B$. Then we have that the set \begin{equation*} \{S \subseteq B : \# S \equiv 0 \pmod{2} \} / \{S \sim S^c\} \end{equation*} is a commutative group under the operation $\circ$, of order $2^{2g}$, with identity $\emptyset \sim B$. We denote this group by $G_B$. \end{definition}
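The stated order $2^{2g}$ can be checked directly by enumeration. The sketch below (our own illustration, over an arbitrary label set of $2g+2$ symbols) builds the classes $\{S, S^c\}$ of even-cardinality subsets for small $g$:

```python
from itertools import combinations

def group_order(g):
    """Count classes {S, S^c} of even-cardinality subsets of a set B
    with 2g+2 elements; the group in question has this many elements."""
    B = frozenset(range(2 * g + 2))  # labels are irrelevant, use 0..2g+1
    classes = set()
    for k in range(0, 2 * g + 3, 2):  # even cardinalities 0, 2, ..., 2g+2
        for S in combinations(sorted(B), k):
            S = frozenset(S)
            classes.add(frozenset({S, B - S}))
    return len(classes)

print([group_order(g) for g in (1, 2, 3)])  # [2^2, 2^4, 2^6] = [4, 16, 64]
```

There are $2^{2g+1}$ subsets of even cardinality, and identifying each with its complement halves the count, giving $2^{2g}$.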
Given the labeling of the branch points of $C$, there is a group isomorphism (see \cite[Corollary 2.11]{Mumford} for details) between the $2$-torsion of the Jacobian of $C$ and the group $G_B$ in the following manner: To each set $S \subseteq B$ such that $\# S \equiv 0 \pmod{2}$, associate the divisor class of the divisor \begin{equation}\label{eq:2torsion} e_S = \sum_{i \in S} P_i - (\#S) P_{\infty}. \end{equation}
Then we can define a map, which we denote $\eta$, sending $S \subseteq B$ to the unique vector $\eta_S$ in $(1/2)\mathbb{Z}^{2g}/{\mathbb Z}^{2g}$ such that $AJ(e_S)=(\eta_S)_2 + Z (\eta_S)_1$. Since there are $(2g+2)!$ different ways to label the $2g+2$ branch points of a hyperelliptic curve $C$ of genus $g$, there are several ways to assign a map $\eta$ to a matrix $Z \in \mathcal{H}_g$. It suffices for our purposes to have one such map $\eta$.
Given a map $\eta$ attached to $Z$, one may further define a set $U_{\eta} \subseteq B$: \begin{equation*} U_{\eta} = \{i \in B - \{\infty\} : e_*(\eta(\{i\})) = -1 \} \cup \{\infty \}, \end{equation*} where for $\xi = \left( \begin{smallmatrix} \xi_1 & \xi_2 \end{smallmatrix} \right) \in (1/2){\mathbb Z}^{2g}$, we write \begin{equation*} e_*(\xi) = \exp(4\pi i \xi_1^T \xi_2), \end{equation*} as in equation \eqref{eq:estar}.
Then following Lockhart \cite[Definition 3.1]{Lockhart}, we define \begin{definition} Let $Z \in \mathcal{H}_g$ be a hyperelliptic period matrix. Then we write \begin{equation} \phi_{\eta}(Z) = \prod_{T \in \mathcal{I}} \vartheta[\eta_{T \circ U_{\eta}}](0,Z)^4 \end{equation} where $\mathcal{I}$ is the collection of subsets of $\{1,2,\ldots, 2g+1,\infty\}$ that have cardinality $g+1$. \end{definition}
\begin{remark} We note that in this work we write our hyperelliptic curves with a model of the form $y^2 = F(x,1)$, where $F$ is of degree $2g+2$. In other words we do not require one of the Weierstrass points of the curve to be at infinity. It is for this reason that we modify Lockhart's definition above, so that the analogue of his Proposition 3.2 holds for $F$ of degree $2g+2$ rather than $2g+1$.
The Siegel modular form that we define here is equal to the one given in his Definition 3.1 for the following reason: Because $T^c \circ U_{\eta} = (T \circ U_{\eta})^c$, it follows that $\eta_{T \circ U_{\eta}} \equiv \eta_{T^c \circ U_{\eta}} \pmod{\mathbb{Z}}$. Therefore $\vartheta[\eta_{T \circ U_{\eta}}](0,Z)$ differs from $\vartheta[\eta_{T^c \circ U_{\eta}}](0,Z)$ at worst by a sign. Since we are raising the theta function to the fourth power, the sign disappears, and the product above is equal to the product given by Lockhart, in which $T$ ranges only over the subsets of $\{1,2,\ldots, 2g+1\}$ of cardinality $g+1$, but each theta function is raised to the eighth power. \end{remark}
We now recall Thomae's formula, which is proven in~\cite{Fay,Mumford} for Mumford's period matrix, obtained using his so-called traditional choice of symplectic basis for the homology group $H_1(C,\mathbb{Z})$, and in~\cite{BILV} for any period matrix.
\begin{theorem}[Thomae's formula]
Let $C$ be a hyperelliptic curve defined over $\mathbb{C}$ and fix $y^2 = F(x,1) = \prod_{i = 1}^{2g+2} (x-a_i)$ a model for $C$. Let $\Omega = \left( \begin{smallmatrix} \Omega_1 & \Omega_2\end{smallmatrix}\right)$ be a Riemann matrix for the curve obtained by integrating the forms $\omega_i$ of equation \eqref{eq:basiseta} against a symplectic basis for the homology group $H_1(C,\mathbb{Z})$ and $Z = \Omega_2^{-1} \Omega_1 \in \mathcal{H}_g$ be the period matrix associated to this symplectic basis. Finally, let $\eta$ be an $\eta$-map attached to the period matrix $Z$. For any subset $S$ of $B$ of even cardinality, we have that \begin{equation*} \vartheta[\eta_{S \circ U_{\eta}}](0,Z)^4 = c\prod_{\substack{i<j \\ i,j \in S}} (a_i - a_j) \prod_{\substack{i<j \\ i,j \not \in S}} (a_i - a_j), \end{equation*} where $c$ is a constant depending on $Z$ and on the model for $C$. \end{theorem}
We now restrict our attention to the case of genus $g = 3$, which is the case of interest in this work. We note that since $Z$ is a hyperelliptic period matrix, by \cite{igusa67} exactly one of its even theta constants vanishes; every term in the sum defining $\Sigma_{140}$ except the one omitting this vanishing constant therefore vanishes, and we have \begin{equation*} \phi_{\eta}(Z) = \Sigma_{140}(Z). \end{equation*}
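This collapse of the $36$-term sum to a single product can be checked numerically with placeholder values (our own illustration; the random numbers below merely stand in for actual theta constants, with exactly one set to zero):

```python
import math
import random

random.seed(1)
# 36 stand-ins for the even theta constants; exactly one vanishes.
vals = [0.0] + [random.uniform(0.5, 2.0) for _ in range(35)]

# Sigma_140 pattern: 35th elementary symmetric polynomial in the 8th powers.
sigma = sum(
    math.prod(v ** 8 for j, v in enumerate(vals) if j != i)
    for i in range(36)
)
# phi_eta pattern: product of the 8th powers of the 35 nonvanishing values.
phi = math.prod(v ** 8 for v in vals[1:])

# Every term of sigma containing the vanishing value is zero, so sigma == phi.
assert math.isclose(sigma, phi)
```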
We then have the following Theorem, which is a generalization to our setting of Proposition 3.2 of \cite{Lockhart} for genus 3 hyperelliptic curves:
\begin{theorem}\label{LockhartGen} Let $C$ be a hyperelliptic curve defined over $\mathbb{C}$ and fix $y^2 = F(x,1) = \prod_{i = 1}^{2g+2} (x-a_i)$ a model for $C$. Let $\Omega = \left( \begin{smallmatrix} \Omega_1 & \Omega_2\end{smallmatrix}\right)$ be a Riemann matrix for the curve obtained by integrating the forms $\omega_i$ of equation \eqref{eq:basiseta} against a symplectic basis for the homology group $H_1(C,\mathbb{Z})$ and $Z = \Omega_2^{-1} \Omega_1 \in \mathcal{H}_g$ be the period matrix associated to this symplectic basis. Then \begin{equation} \Delta^{15} = 2^{180}\pi^{420} \det(\Omega_2)^{-140}\Sigma_{140}(Z), \end{equation} where we recall that $\Delta$ is the discriminant of the model that we have fixed for $C$. \end{theorem}
\begin{proof} We show how to modify Lockhart's proof. We first remind the reader that $\Delta = 2^{12}D$ by \cite[Definition 1.6]{Lockhart}, where again $D$ is the discriminant of the binary form $F(x,z)$. Then as Lockhart does, we use Thomae's formula: \begin{equation*} \vartheta[\eta_{T \circ U_{\eta}}](0,Z)^4 =c \prod_{\substack{i<j \\ i,j \in T}} (a_i - a_j) \prod_{\substack{i<j \\ i,j \not \in T}} (a_i - a_j), \end{equation*} if $T$ is a subset of $\{1,2,\ldots, 7,\infty\}$ of cardinality $4$. Taking the product over all such $T$, we get \begin{equation*} \phi_{\eta}(Z) = c^{70} \prod_{T} \left(\prod_{\substack{i<j \\ i,j \in T}} (a_i - a_j) \prod_{\substack{i<j \\ i,j \not \in T}} (a_i - a_j) \right), \end{equation*} since $\binom{8}{4}=70$.
We now count how many times each factor $(a_i-a_j)$ appears on the right-hand side: \begin{align*} \# \{ T : i, j \in T \text{ or } i, j \not \in T\} & = \#\{T: i,j \in T\} + \# \{T: i, j \not \in T\} \\ & = \binom{6}{2} + \binom{6}{4} = 2\binom{6}{4} = 30. \end{align*} Therefore, \begin{align*} \phi_{\eta}(Z) &= c^{70} \prod_{\substack{i<j \\ i,j \in B}} (a_i - a_j)^{30},\\ & = c^{70} D^{15},\\ & = 2^{-180}c^{70} \Delta^{15}. \end{align*}
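The counting in this step can be verified mechanically: there are $\binom{8}{4}=70$ subsets $T$, and each factor $(a_i-a_j)$ occurs $30$ times in the product. A brief Python sketch over an arbitrary $8$-element label set:

```python
from itertools import combinations
from math import comb

B = range(8)  # stand-ins for the 8 branch points {1, ..., 7, oo}
Ts = [set(T) for T in combinations(B, 4)]
assert len(Ts) == comb(8, 4) == 70

# For each pair {i, j}, count the subsets T contributing a factor (a_i - a_j):
# those with both indices inside T, or both outside.
counts = {(i, j): sum(1 for T in Ts if {i, j} <= T or not ({i, j} & T))
          for i, j in combinations(B, 2)}
print(sorted(set(counts.values())))  # every pair appears exactly 30 times
```

Since $D = \prod_{i<j}(a_i-a_j)^2$, an exponent of $30$ on every pair gives $D^{15}$, as in the proof.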
Since $\Sigma_{140}(Z)= \phi_{\eta}(Z)$ when $Z$ is hyperelliptic, we therefore get that \begin{eqnarray}\label{EqWithC} \Delta^{15} = 2^{180}c^{-70}\Sigma_{140}(Z). \end{eqnarray}
We now compute the value of the constant $c$. We denote by $\tilde{Z}$ the period matrix associated to Mumford's so-called traditional choice of a symplectic basis for the homology group $H_1(C,\mathbb{Z})$. Lockhart showed that: \begin{eqnarray}\label{EqLockhart} \Delta^{15} = 2^{180}\pi^{420} \det(\tilde{\Omega}_2)^{-140}\Sigma_{140}(\tilde{Z}), \end{eqnarray} where $\tilde{\Omega}_2$ is the right half of the Riemann matrix obtained by integrating the basis of forms $\omega_i$ of equation \eqref{eq:basiseta} against Mumford's traditional basis for homology.
Now consider again our arbitrary period matrix $Z$ and let $M = \left( \begin{smallmatrix} A & B \\ C & D \end{smallmatrix} \right) \in \Sp(6,\mathbb{Z})$ be such that $M \cdot \tilde{Z}=Z$. Since $\Sigma_{140}$ is a Siegel modular form of weight $140$ for $\Sp(6,{\mathbb Z})$, it follows that $\Sigma_{140}(Z)=\det(C\tilde{Z}+D)^{140}\Sigma_{140}(\tilde{Z})$ and combining equations \eqref{EqWithC} and \eqref{EqLockhart}, we obtain \begin{align*} c &=\pi^{-6} \det(C\tilde{Z}+D)^{2}\det(\tilde{\Omega}_2)^{2}\\ &= \pi^{-6} \det(\tilde{Z}C^T+D^T)^{2}\det(\tilde{\Omega}_2)^{2} \\ & =\pi^{-6}\det(\Omega_2)^{2}. \end{align*} To obtain the last equality we used the fact that $\Omega_2=\tilde{\Omega}_1C^T+\tilde{\Omega}_2D^T$. This concludes the proof. \end{proof}
Up to the factors of $2$ appearing in the formula, this Theorem therefore realizes Corollary \ref{cor:3.3.2}, as it connects explicitly an invariant of a hyperelliptic curve to a Siegel modular form. We furthermore note that the proof above suggests that the constant $c$ in Thomae's formula for a general period matrix $Z$ of the Jacobian of a hyperelliptic curve of genus $g$ is $\pi^{-2g} \det(\Omega_2)^{2}$. Finally, the proof of Theorem~\ref{LockhartGen} could be easily generalized to genus $g > 3$ if $\phi_{\eta}$ were shown to be a modular form for $\Sp(2g,\mathbb{Z})$. We believe this is true, but leave it as future work.
\begin{remark} Let $n_g = \binom{2g+1}{g}$. In \cite[p. 291]{Salvati}, for $Z$ a hyperelliptic period matrix, the author defines the set $M=(\xi_1, \ldots, \xi_{n_g})$ to be the unique, up to permutations, sequence of mutually distinct even characteristics satisfying: \begin{equation*} P(M)(Z)=\vartheta[\xi_1](0,Z)\vartheta[\xi_2](0,Z)\ldots \vartheta[\xi_{n_g}](0,Z)\neq 0. \end{equation*} We note that the eighth power of this form is exactly the form which we denote here by $\phi_{\eta}$, since $\theta[\eta_{T\circ U_{\eta}}](0,Z) \neq 0$ if and only if $T$ has cardinality $g+1$, by Mumford and Poor's Vanishing Criterion for hyperelliptic curves \cite[Proposition 1.4.17]{poor}, and in the product giving $\phi_{\eta}$, each characteristic appears twice.
The author then introduces the modular form \begin{equation*} F(Z)=\sum _{\sigma \in \Sp(2g,{\mathbb F}_2)}P(\sigma \circ M)^8(Z), \end{equation*} where $\Sp(2g,{\mathbb F}_2)$ is the group of $2g \times 2g$ symplectic matrices with entries in ${\mathbb F}_2$, and where the action of $\Sp(2g,{\mathbb F}_2)$ on the set $M$ is given in the following manner: For $\sigma = \left( \begin{smallmatrix} A & B \\ C & D \end{smallmatrix} \right) \in \Sp(2g,{\mathbb F}_2)$ and $m \in (1/2){\mathbb Z}^{2g} \pmod{{\mathbb Z}^{2g}}$ a characteristic, we write \begin{equation*} \sigma \circ m = \begin{pmatrix} D & - C \\ -B & A \end{pmatrix} m + \begin{pmatrix} \diag(C^TD) \\ \diag(A^TB) \end{pmatrix}. \end{equation*} Then $P(\sigma \circ M)$ is simply the form $P(M)$ but with each characteristic $\xi_i$ replaced with $\sigma \circ \xi_i$.
For $Z$ a hyperelliptic matrix, $P(\sigma \circ M)(Z)$ is nonzero exactly when $\sigma \circ M$ is a permutation of $M$, by definition of the set $M$. Therefore, up to a constant, $F$ is simply $\Sigma_{140}$ on the hyperelliptic locus, and therefore $F-\Sigma_{140}$ is of the form $a\chi_{18}$ for $a\in {\mathbb C}$.
The author then proves that $\rho(F)=D^{gn_g/(2g+1)}$, where as before $D$ is the discriminant of the binary form $F(x,z)$ such that the hyperelliptic curve is given by the equation $y^2 = F(x,1)$. We note that the power given here corrects an error in the manuscript \cite{Salvati}, and agrees with the result we obtain in this paper. \end{remark}
\subsection{Proof of Theorem \ref{prop:newinvariant}}
We are now in a position to prove Theorem \ref{prop:newinvariant}. For simplicity, we replace $f^{\frac{140}{\gcd(k,140)}}$ with $\tilde{h}$, a Siegel modular form of weight $\tilde{k} = \frac{140k}{\gcd(k,140)}$, and let $\ell = \frac{k}{\gcd(k,140)}$. Note that $\tilde{k} = 140 \ell$ and is divisible by $4$.
Using the notation of Section \ref{Sec:ModularInvs}, the analytic Siegel modular form $\tilde{h}$ corresponds to a geometric Siegel modular form $h$ by Proposition \ref{prop:2.2.1}. Let $\Phi = \sigma(\theta^* h)$ be the corresponding invariant of the hyperelliptic curve. Then by Corollary \ref{cor:3.3.2}, if the hyperelliptic curve $y^2 = F(x,1)$ has period matrix $Z$, we have \begin{equation*} \Phi(F) = (2 \pi i)^{3\tilde{k}} \det(\Omega_2)^{-\tilde{k}} h(Z). \end{equation*} Therefore we have
\begin{align*} j(Z) = \frac{h}{\Sigma_{140}^\ell}(Z) & = \frac{(2 \pi i)^{-3\tilde{k}} \det(\Omega_2)^{\tilde{k}} \Phi(F)}{\pi^{-420\ell} \det(\Omega_2)^{140\ell}\Disc(F)^{15\ell}}\\ & = 2^{-\frac{140k}{\gcd(k,140)}}\frac{\Phi(F)}{\Disc(F)^{15\ell}}. \end{align*}
We note that since $\Phi$ is assumed to be an integral invariant, it does not have a denominator when evaluated at $F \in \mathbb{Z}[x,z]$. We have thus obtained an invariant as in~\cite[Theorem 7.1]{KLLNOS} (we note that \emph{loc. cit.} assumes throughout that invariants of hyperelliptic curves are integral, see the discussion between Proposition 1.4 and Theorem 1.5), having negative valuation at the prime $\mathfrak{p}$. We conclude that $C$ has bad reduction at this prime.
\section{Computing modular invariants}\label{Sec-NewInv}
In this section, we consider certain modular functions having $\Sigma_{140}$ in the denominator. We then present a list of hyperelliptic curves of genus 3 for which we computed the primes of bad reduction. As an illustration of Theorem~\ref{prop:newinvariant}, we implemented the modular functions involving the form $\Sigma_{140}$ and evaluated them with high precision at period matrices corresponding to the curves in our list.
\subsection{Computation of the modular invariants}
For a given hyperelliptic curve model, we used the Molin-Neurohr Magma code~\cite{Molin} to compute a first period matrix and then applied the reduction algorithm given in~\cite{KLLRSS} and implemented by Sijsling in Magma to obtain a so-called reduced period matrix that is $\Sp(2g,{\mathbb Z})$-equivalent to the first matrix, but that provides faster convergence of the theta constants.
Once we obtained a reduced period matrix $Z$, using Labrande's Magma implementation for fast theta function evaluation~\cite{labrande}, we computed the $36$ even theta constants for these reduced period matrices, up to 30,000 bits of precision.\footnote{Apart from curves (1) and (6), we could recognize these values as algebraic numbers with 15,000 bits of precision; for curve (1), we needed 30,000 bits of precision. In fact, for CM field (6), the theta constants obtained using the Magma implementation~\cite{labrande} at high precision (i.e., $\geq 30,000$ bits) were not conclusive. We therefore ran an improved implementation of the naive method to get these values up to 30,000 bits of precision, and recognized the invariants as algebraic numbers after multiplying by the expected denominators.}
Finally, from these theta constants we computed the modular invariants that we define below.
To define our modular invariants, we consider the following Siegel modular forms. Let $h_4$ be the Eisenstein series of weight $4$ given by \begin{equation}\label{h4} h_4(Z)=\frac{1}{2^3}\sum_{\xi}\theta[\xi]^8(Z), \end{equation} where $\xi$ ranges over all even theta characteristics. We denote by $\alpha_{12}$ the modular form of weight 12 defined by Tsuyumine~\cite{Tsuyumine2}: \begin{equation}\label{alpha12} \alpha_{12}(Z)=\frac{1}{2^3\cdot 3^2}\sum_{(\xi_i)}(\theta[\xi_1](Z)\theta[\xi_2](Z)\theta[\xi_3](Z)\theta[\xi_4](Z)\theta[\xi_5](Z)\theta[\xi_6](Z))^4, \end{equation} where $(\xi_1,\xi_2,\xi_3,\xi_4,\xi_5,\xi_6)$ is a maximal azygetic system of even theta characteristics. By this we mean that $(\xi_i)$ is a sextuple of even theta characteristics such that the sum of any three among these six is odd. Notice that $\alpha_{12}$ is one of the 35 generators given by Tsuyumine~\cite{Tsuyumine1} of the graded ring $A(\Gamma_3)$ of modular forms of degree 3 and cannot be written as a polynomial in Eisenstein series.
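For context, recall that in genus $g$ there are $2^{g-1}(2^g+1)$ even theta characteristics, hence $36$ in genus $3$, which is the number of theta constants entering \eqref{h4}; likewise $n_3 = \binom{7}{3} = 35$ accounts for the exponents appearing below. A short Python enumeration (our illustration, not part of the text) verifies these counts:

```python
from itertools import product
from math import comb

g = 3
# A characteristic [a; b] with a, b in {0, 1/2}^g is encoded by bit
# vectors in {0,1}^g; it is even iff the mod-2 dot product a.b is 0.
even = [(a, b)
        for a in product((0, 1), repeat=g)
        for b in product((0, 1), repeat=g)
        if sum(x * y for x, y in zip(a, b)) % 2 == 0]
assert len(even) == 2 ** (g - 1) * (2 ** g + 1) == 36

# n_g = C(2g+1, g); for g = 3 this gives the exponent 35 used below.
assert comb(2 * g + 1, g) == 35
```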
In the computations below, we thus consider the following three modular functions:
\begin{equation}\label{classInvariants} j_1(Z) = \frac{h_4^{35}}{\Sigma_{140}}(Z), \hspace{1cm} j_2(Z) = \frac{\alpha_{12}^{35}}{\Sigma_{140}^3}(Z),\hspace{1cm} j_3(Z) = \frac{h_4^{5}\alpha_{12}^{10}}{\Sigma_{140}}(Z). \end{equation}
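Each ratio in \eqref{classInvariants} has total weight zero, so the $j_i$ are indeed modular functions; the weight bookkeeping can be checked at a glance (a trivial verification of our own):

```python
# Weights of the genus-3 Siegel modular forms used above.
w_h4, w_alpha12, w_Sigma140 = 4, 12, 140

assert 35 * w_h4 == 1 * w_Sigma140                   # j_1 = h_4^35 / Sigma_140
assert 35 * w_alpha12 == 3 * w_Sigma140              # j_2 = alpha_12^35 / Sigma_140^3
assert 5 * w_h4 + 10 * w_alpha12 == 1 * w_Sigma140   # j_3 = h_4^5 alpha_12^10 / Sigma_140
```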
\subsection{Invariants of hyperelliptic curves of genus 3}
We say that a genus 3 curve $C$ over a field $M$ has complex multiplication (CM) by an order $\mathcal{O}$ in a sextic CM field $K$ if there is an embedding $\mathcal{O} \hookrightarrow \End(\Jac(C)_{\overline{M}})$. The curves numbered (1)--(8) below form the conjecturally complete list of hyperelliptic CM curves of genus $3$ that are defined over ${\mathbb Q}$. As we mentioned in the introduction, they are taken from a list that can be found in \cite{KS2017}. We note more specifically that the curves (5), (6) and (8) were found by Balakrishnan, Ionica, K\i{}l\i{}\c{c}er, Lauter, Somoza, Streng, and Vincent, and (1), (2), (3), and (7) were computed by Weng~\cite{Weng}. Moreover, the hyperelliptic model of the curve with complex multiplication by the ring of integers in CM field (3) was proved to be correct by Tautz, Top, and Verberkmoes \cite[Proposition 4]{TTV91}, and the hyperelliptic model of the curve with complex multiplication by the ring of integers in CM field (4) was given by Shimura and Taniyama~\cite{Shimura} (see Example (II) on page 76). In these examples, $\mathcal{O}_K$ denotes the ring of integers of the CM field $K$.
The curves numbered (9)--(10) are non-CM hyperelliptic curves presented in \cite{Chabauty}, where they were used for experiments related to the Chabauty-Coleman method. The curves numbered (11)--(13) are non-CM modular hyperelliptic curves; a list of modular hyperelliptic curves, which contains $X_0(33)$, $X_0(39)$ and $X_0(41)$, is given by Ogg \cite{Ogg}, and Galbraith gives equations for these curves in his Ph.D. thesis \cite{GalbraithThesis}.
When we say that a prime is of bad reduction, we will mean that it is a prime of geometrically bad reduction of the curve. For each curve below, the odd primes of bad reduction are computed using the results in \cite[Section 3]{HypRed} if $p>7$ and in Proposition 4.5 and Corollary 4.6 in~\cite{BW} if $p=3,5,7$. We denote the discriminant of a curve $C$ by $\Delta$, as before.
\begin{itemize} \item[(1)](\cite[\S 6 - 3rd ex.]{Weng}) Let $K = {\mathbb Q}[x]/(x^6+13x^4+50x^2+49)$, which is of class number 1 and contains ${\mathbb Q}(i)$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = x^7+1786x^5+44441x^3+278179x \end{equation*} with $\Delta = -2^{18}\cdot 7^{24}\cdot 11^{12}\cdot 19^7$. The odd primes of bad reduction of $C$ are $7$ and $11$.
\mbox{}
\item[(2)] (\cite[\S 6 - 2nd ex.]{Weng}) Let $K = {\mathbb Q}[x]/(x^6 + 6x^4 + 9x^2 + 1)$, which is of class number $1$ and contains ${\mathbb Q}(i)$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = x^7+6x^5+9x^3+x \end{equation*} with $\Delta = - 2^{18} \cdot 3^8$. The only odd prime of bad reduction of $C$ is $3$. \mbox{}
\item[(3)](\cite[\S 6 - 1st ex.]{Weng}) Let $K = {\mathbb Q}[x]/(x^6 + 5x^4 + 6x^2 + 1) = {\mathbb Q}(\zeta_7 + \zeta_7^{-1}, i)$, which is of class number $1$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = x^7+7x^5+14x^3+7x \end{equation*} with $\Delta = - 2^{18} \cdot 7^7$. The curve $C$ has good reduction at each odd $p\neq 7$ and potentially good reduction at $p=7$.
\mbox{}
\item[(4)] Let $K = {\mathbb Q}[x]/(x^6 + 7x^4 + 14x^2 + 7) = {\mathbb Q}(\zeta_7)$, which is of class number 1 and contains ${\mathbb Q}(\sqrt{-7})$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = x^7-1 \end{equation*} with $\Delta = - 2^{12}\cdot 7^7$. The curve $C$ has good reduction at each odd $p\neq 7$ and potentially good reduction at $p=7$.
\mbox{}
\item[(5)] Let $K = {\mathbb Q}[x]/(x^6 + 42x^4 + 441x^2 + 847)$, which is of class number $12$ and contains ${\mathbb Q}(\sqrt{-7})$.
A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*}
C: y^2 + x^4y= - 7x^6 + 63x^4 - 140x^2 + 393x - 28 \end{equation*}
with $\Delta = - 3^8 \cdot 5^{24} \cdot 7^{7}$. The odd primes of bad reduction of $C$ are $3$ and $5$. \mbox{}
\item[(6)] Let $K = {\mathbb Q}[x]/(x^6 + 29x^4 + 180x^2 + 64)$, which is of class number $4$ and contains ${\mathbb Q}(i)$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = 1024x^7 - 12857x^5 + 731x^3 + 688x \end{equation*} with $\Delta = - 2^{60} \cdot 11^{24} \cdot 43^7$. The only odd prime of bad reduction of $C$ is $11$. \mbox{}
\item[(7)] (\cite[\S 6 - 4th ex.]{Weng}) Let $K = {\mathbb Q}[x]/(x^6 + 21x^4 + 116x^2 + 64)$, which is of class number $4$ and contains ${\mathbb Q}(i)$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = 64x^7 - 124x^5 + 31x^3 + 31x \end{equation*} with $\Delta = - 2^{44} \cdot 31^{7}$. The curve has potentially good reduction at 31.
\mbox{}
\item[(8)] Let $K = {\mathbb Q}[x]/(x^6 +42x^4 +441x^2 +784)$, which is of class number $4$ and contains ${\mathbb Q}(i)$. A model for the hyperelliptic curve with CM by $\mathcal{O}_K$ is \begin{equation*} C: y^2 = 16x^7 + 357x^5 - 819x^3 + 448x \end{equation*} with $\Delta = - 2^{48} \cdot 3^8 \cdot 7^7$. The only odd prime of bad reduction of $C$ is $3$.
\mbox{}
\item[(9)]~(\cite{Chabauty}) The hyperelliptic curve \begin{equation*} C : y^2 = 4x^7 + 9x^6 - 8x^5 - 36x^4 - 16x^3 + 32x^2 + 32x + 8 \end{equation*} is a non-CM curve with $\Delta =2^{37} \cdot 1063$. The only odd prime of bad reduction of $C$ is $1063$.
\mbox{}
\item[(10)]~(\cite{Chabauty}) The hyperelliptic curve \begin{equation*} C : y^2 = -4x^7 + 24x^6 - 56x^5 + 72x^4 - 56x^3 + 28x^2 - 8x + 1 \end{equation*} is a non-CM curve with $\Delta =- 2^{28} \cdot 3^4 \cdot 599$. The odd primes of bad reduction of $C$ are $3$ and $599$.
\mbox{}
\item[(11)]~(\cite{GalbraithThesis}\cite{Ogg}) The hyperelliptic curve \begin{equation*} C : y^2 = x^8 + 10x^6 - 8x^5 + 47x^4 - 40x^3 + 82x^2 - 44x + 33 \end{equation*} is the modular curve $X_0(33)$. It has $\Delta =2^{28} \cdot 3^{12} \cdot 11^6$. The odd primes of bad reduction of $C$ are $3$ and $11$. \mbox{}
\item[(12)] (\cite{GalbraithThesis}\cite{Ogg}) The hyperelliptic curve \begin{equation*} C : y^2 = x^8 - 6x^7 + 3x^6 + 12x^5 - 23x^4 + 12x^3 + 3x^2 - 6x + 1 \end{equation*} is the modular curve $X_0(39)$. It has $\Delta =2^{28} \cdot 3^8 \cdot 13^4$. The odd primes of bad reduction of $C$ are $3$ and $13$. \mbox{}
\item[(13)] (\cite{GalbraithThesis}\cite{Ogg}) The hyperelliptic curve \begin{equation*} C : y^2 = x^8 - 4x^7 - 8x^6 + 10x^5 + 20x^4 + 8x^3 - 15x^2 - 20x - 8 \end{equation*}
is the modular curve $X_0(41)$. It has $\Delta = - 2^{28} \cdot 41^6$. The only odd prime of bad reduction of $C$ is $41$. \end{itemize}
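As a numerical sanity check (our addition, independent of the sources cited above), the discriminant of curve (4) can be recomputed from the roots of $x^7 - 1$, the $7$th roots of unity: the polynomial discriminant is $\prod_{i<j}(r_i - r_j)^2 = -7^7$, and $\Delta = 2^{12}\cdot\mathrm{disc} = -2^{12}\cdot 7^7$, matching the value given in item (4):

```python
import cmath
from itertools import combinations

# Roots of f(x) = x^7 - 1: the 7th roots of unity.
n = 7
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# disc(f) = lc(f)^(2n-2) * prod_{i<j} (r_i - r_j)^2; here lc(f) = 1.
disc = 1
for r, s in combinations(roots, 2):
    disc *= (r - s) ** 2
disc = round(disc.real)  # the product is a real integer

assert disc == -7 ** 7
# Delta = 2^12 * disc for a degree-7 model, as recalled in the proof
# of the theorem above (Delta = 2^12 D).
assert 2 ** 12 * disc == -(2 ** 12) * 7 ** 7
```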
We recall that the discriminant $\Delta$ of a hyperelliptic curve $C$ of genus $3$ is an invariant of degree $14$ (Section $1.5$ of \cite{LerRit}). For our computations, we considered the following absolute\footnote{An \emph{absolute} invariant is a ratio of homogeneous invariants of the same degree.} invariants, derived using the Shioda invariants: \begin{align}\label{shioda} J_2^7/\Delta, J_3^{14}/\Delta^3, J_4^7/\Delta^2, J_5^{14}/\Delta^5, J_6^7/\Delta^3, J_7^2/\Delta, J_8^7/\Delta^4, J_9^{14}/\Delta^9, J_{10}^7/\Delta^5. \end{align}
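Each ratio in \eqref{shioda} is degree-homogeneous, using $\deg J_k = k$ and $\deg \Delta = 14$; a quick bookkeeping check (ours):

```python
# (exponent of J_k, k, exponent of Delta) for each invariant in the list.
ratios = [(7, 2, 1), (14, 3, 3), (7, 4, 2), (14, 5, 5), (7, 6, 3),
          (2, 7, 1), (7, 8, 4), (14, 9, 9), (7, 10, 5)]
deg_delta = 14
for a, k, b in ratios:
    # numerator degree a*k must equal denominator degree b*14
    assert a * k == b * deg_delta
```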
The numerical data in Table~\ref{tableh} shows the tight connection between the odd primes appearing in the denominators of these invariants, the odd primes of bad reduction for the hyperelliptic curve, and the odd primes dividing the denominators of $j_1,j_2$ and $j_3$. In the denominators of $j_1,j_2$ and $j_3$, we intentionally omitted the denominators of the formulae~\eqref{h4} and~\eqref{alpha12}, i.e. $2^3$ and $2^3\cdot 3^2$. Note that we do not have a proof for the fact that $h_4$ and $\alpha_{12}$ fulfill the condition in Theorem~\ref{prop:newinvariant}, i.e. that their corresponding curve invariants are integral. One can see that for all the curves we considered, a prime $\geq 3$ appears in the denominator of these modular invariants if and only if it is a prime of bad reduction for the curve. Our results are evidence that either the condition in Theorem~\ref{prop:newinvariant} is a reasonable one, or that the result in this theorem may be extended to a larger class of modular forms.
Note that the Shioda invariants $J_2, J_3, \ldots, J_{10}$ are not integral, and their denominators factor as products of powers of $2$, $3$, $5$ and $7$ (see~\cite{LerRit} for a set of formulae). This is the reason why these primes may appear in the denominators of the Shioda invariants, even when they are not primes of bad reduction. However, one can see that the primes $>7$ appearing in the denominators of the invariants in Eq.~\eqref{shioda} are exactly the primes of bad reduction, which confirms Theorem 7.1 in~\cite{KLLNOS}. In the table, all entries marked by ``$-$'' represent values equal to zero.
\begin{landscape} \begin{table}
\caption{Denominators of invariants} \centering \label{tableh}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|} \hline
& & Odd primes of & denominators & Odd primes in the deno- \\
\small Curve & Discriminant & bad reduction & of $j_1,j_2,j_3$ & minators of invariants in Eq.~\ref{shioda} \\ \hline\hline \multirow{3}{*}{(1)} & \multirow{3}{*}{$-2^{18}\cdot 7^{24}\cdot 11^{12}\cdot 19^7$} & \multirow{3}{*}{$7,11$} & $-7^{80}\cdot 11^{40}$ & $7^{31}\cdot11^{12}, - ,7^{76}\cdot11^{24}$ \\
& & & $7^{240}\cdot 11^{120}$ & $-,7^{114}\cdot11^{36},-$\\
& & & $7^{80}\cdot 11^{40}$ & $5^7\cdot7^{159}\cdot11^{41},-,5^7\cdot7^{197}\cdot11^{60}$\\ \hline \multirow{3}{*}{(2)} & \multirow{3}{*}{$-2^{12}\cdot 3^8$} & \multirow{3}{*}{$3$} & 1 & $3^8\cdot7^7,-,3^{23}\cdot7^{28}$\\ & & & $2^3\cdot 3^{12}$ & $-,3^{38}\cdot7^{42},-$\\ & & & $1$ & $3^{32}\cdot5^7\cdot7^{63},-,3^{47}\cdot5^7\cdot7^{77}$\\ \hline \multirow{3}{*}{(3)} & \multirow{3}{*}{$-2^{18}\cdot 7^7$} & \multirow{3}{*}{none} & $1$ & $1,-,7^{14}$ \\ & & & $2^3$ & $-,7^{21},-$\\ & & & $1$ & $5^7\cdot7^{35},-,5^7\cdot7^{42}$\\ \hline \multirow{3}{*}{(4)} & \multirow{3}{*}{$-2^{12}\cdot 7^7$} & \multirow{3}{*}{none} & $1$ & $-,-,-$ \\ & & & $2^3$ & $-,-,7^7$\\ & & & $1$ & $-,-,-$\\ \hline \multirow{3}{*}{(5)} & \multirow{3}{*}{$-3^8\cdot 5^{24}\cdot 7^{7}$} & \multirow{3}{*}{3,5} & $-$ & $3^8\cdot 5^{31},5^{100},3^{23}\cdot 5^{41}$ \\ & & & $2^3 \cdot 3^{12} \cdot 5^{240}$ & $3^{12}\cdot 5^{120},3^{38}\cdot 5^{72},3^6\cdot 5^{26}$\\ & & & $-$ & $3^{32}\cdot 5^{103},3^{72}\cdot 5^{216},3^{47}\cdot 5^{120}$\\ \hline \multirow{3}{*}{(6)} & \multirow{3}{*}{$-2^{60}\cdot11^{24}\cdot 43^{7}$} & \multirow{3}{*}{11} & $2^{125}\cdot 11^{80}$ & $7^7\cdot11^{24}, -, 7^{28}\cdot11^{48}$ \\ & & & $2^{413}\cdot 11^{240}$ & $-, 7^{42}\cdot11^{72}, -$\\ & & & $2^{135}\cdot 11^{80}$ & $ 5^7\cdot7^{77}\cdot11^{96}, -, 5^7\cdot7^{77}\cdot11^{120}$\\ \hline \multirow{3}{*}{(7)} & \multirow{3}{*}{$-2^{44}\cdot 31^{7}$} & \multirow{3}{*}{none} & $2^{25}$ & $7^7,-,7^{28}$ \\ & & & $2^{113}$ & $-,7^{42}, -$\\ & & & $2^{35}$ & $5^7\cdot7^{63},-,5^7\cdot7^{77}$\\ \hline \multirow{3}{*}{(8)} & \multirow{3}{*}{$-2^{48}\cdot 3^{8}\cdot 7^{7}$} & \multirow{3}{*}{3}
& $2^{85}$ & $ 3^{8}, -, 3^{23}\cdot7^{14}$ \\ & & & $2^{293}\cdot 3^{12}$ & $-, 3^{38}\cdot7^{21}, -$\\
& & & $2^{95}$ & $ 3^{32}\cdot5^7\cdot7^{35}, -, 3^{47}\cdot5^7\cdot7^{42}$\\ \hline \multirow{3}{*}{(9)} & \multirow{3}{*}{$2^{37} \cdot 1063$} & \multirow{3}{*}{1063}
& $ 2^{60} \cdot 1063^{15}$ & $ 5^7 \cdot 7^7 \cdot 1063, 5^{28} \cdot7^{42} \cdot1063^3, 3^7\cdot 7^{28}\cdot 1063^2, $ \\ & & & $1063^{10}$ & $5^{14}\cdot 7^{70}\cdot 1063^5, 3^{14}\cdot 7^{35}\cdot 1063^3,5^2\cdot 7^{14}\cdot 1063, $\\
& & & $1063^5$ & $3^7\cdot 5^{14}\cdot 7^{63}\cdot 1063^4, 5^{14}\cdot 7^{126}\cdot 1063^9, 3^{14}\cdot 5^{14}\cdot 7^{70}\cdot 1063^5$\\ \hline \multirow{3}{*}{(10)} & \multirow{3}{*}{$- 2^{28}\cdot 3^4 \cdot 599$} & \multirow{3}{*}{3,599}
& $ 599^{15}$ & $5^7 \cdot 7^7\cdot 599, 5^{28}\cdot 7^{42}\cdot 599^3, 3\cdot 7^{28}\cdot 599^2, $ \\ & & & $3^5\cdot 599^{10}$ & $3^6\cdot 5^{14}\cdot 7^{70}\cdot 599^5, 7^{42}\cdot 599^3,7^{14}\cdot 599, $\\
& & & $599^5$ & $3^2\cdot 5^7\cdot 7^{63}\cdot 599^4, 5^{14}\cdot 7^{126}\cdot 599^9, 5^{14}\cdot 7^{77}\cdot 599^5$\\ \hline \multirow{3}{*}{(11)} & \multirow{3}{*}{$2^{28}\cdot 3^{12}\cdot 11^6$} & \multirow{3}{*}{3,11}
& $ 3^{40}\cdot 11^{90}$ & $3^{12}\cdot 5^7\cdot 11^6, 3^8 \cdot5^{28}\cdot 7^{42}\cdot 11^{18},3^{31}\cdot 7^{21}\cdot 11^{12},$ \\ & & & $3^{50}\cdot 11^{60}$ & $3^{60} \cdot 5^{14}\cdot 7^{56}\cdot 11^{30}, 3^{50}\cdot 7^{42}\cdot 11^{18}, 3^{14}\cdot 7^{12}\cdot 11^6,$\\
& & & $3^{20}\cdot 11^{30}$ & $3^{41}\cdot 5^7\cdot 7^{49}\cdot 11^{24},3^{136}\cdot 5^{14}\cdot 7^{126}\cdot 11^{54}, 3^{60} \cdot 5^{14}\cdot 7^{70}\cdot 11^{30}$\\ \hline \multirow{3}{*}{(12)} & \multirow{3}{*}{$2^{28}\cdot3^8\cdot13^4$} & \multirow{3}{*}{3,13} & $2^{135} \cdot 3^{120} \cdot 13^{60}$& $3\cdot5^7\cdot 7^7\cdot 13^4, 3^{10}\cdot 5^{28}\cdot 7^{42}\cdot 13^{12}, 7^{28}\cdot 13^8$\\ & & & $3^{45}\cdot13^{40}$ & $5^{14} \cdot7^{70}\cdot 13^{20},7^{42}\cdot 13^{12}, 5^2\cdot 7^{14}\cdot 13^2$\\ & & & $3^{30} \cdot 13^{20}$ & $5^{14}\cdot 7^{63}\cdot 13^{16}, 5^{14}\cdot 7^{126}\cdot 13^{36}, 5^{14} \cdot7^{77}\cdot 13^{20}$\\ \hline \multirow{3}{*}{(13)} & \multirow{3}{*}{$ - 2^{28}\cdot 41^6$} & \multirow{3}{*}{41} & $ 2^{135}\cdot 41^{90}$ & $7^7 \cdot41^6, 7^{42}\cdot 41^{18},3^7 \cdot7^{28}\cdot 41^{12}$\\ & & & $41^{60}$ & $7^{70}\cdot 41^{30}, 3^{14}\cdot 7^{42}\cdot 41^{18}, 3^2 \cdot7^{14}\cdot 41^4$\\ & & & $41^{30}$ & $3^7 \cdot5^7\cdot 7^{63} \cdot41^{24},3^{28} \cdot7^{126}\cdot 41^{54}, 3^{14}\cdot 5^7 \cdot7^{77}\cdot 41^{30}$ \\ \hline \end{tabular}} \end{table} \end{landscape}
We note that because of its large weight, $\Sigma_{140}$ is expensive to compute, so the modular invariants computed here may not be the most convenient to use from a computational point of view. As suggested by Lockhart \cite[p. 741]{Lockhart}, it might be worth finding a Siegel modular form that corresponds to a lower power of the discriminant, especially if one is to pursue further the goal of finding modular expressions for the Shioda invariants. We note that Tsuyumine~\cite{Tsuyumine1} introduced the modular form $\chi_{28}$ of weight 28 such that $\rho(\chi_{28})=D^3$, where as earlier $D$ is the discriminant of the binary form $F(x,z)$ such that the hyperelliptic curve is given by $y^2 = F(x,1)$. The reason we chose to work with $\Sigma_{140}$ in the computations is that it was straightforward to implement.
Finally, we note that in the non-hyperelliptic curve case, one could show with similar reasoning as in Theorem~\ref{prop:newinvariant} that a modular function having a power of $\chi_{18}$ in the denominator, when evaluated at a plane quartic period matrix, has denominator divisible by the primes of bad reduction or of hyperelliptic reduction of the curve associated to the period matrix. In this direction, a relationship between $\chi_{18}$ and the discriminant of the non-hyperelliptic curve was shown by Lachaud, Ritzenthaler, and Zykin~\cite[Theorem 4.1.2, Klein's formula]{LRZ}.
\section{Conclusion} We have displayed a connection between the values of certain geometric modular forms of even weight restricted to the hyperelliptic locus and the primes of bad reduction of hyperelliptic curves. A complete description of the Shioda invariants of hyperelliptic curves in terms of modular forms deserves further investigation. However, our result, combined with the bounds obtained in~\cite{KLLNOS} on primes of bad reduction for hyperelliptic curves, yields a bound on the primes appearing in the denominators of modular invariants.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{ Distributed Resource Allocation Over Random Networks Based on Stochastic Approximation \tnoteref{label0}} \author{Peng Yi, Jinlong Lei, Yiguang Hong \corref{cor1}} \ead{yipeng@amss.ac.cn, leijinlong11@mails.ucas.ac.cn, yghong@iss.ac.cn} \address{The Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences} \cortext[cor1]{Correspondence author: Yiguang Hong }
\begin{abstract} In this paper, a stochastic approximation (SA) based distributed algorithm is proposed to solve a resource allocation (RA) problem with uncertainties. In this problem, a group of agents cooperatively solve a separable optimization problem with a linear network resource constraint and allocation feasibility constraints, where the global objective function is the sum of the agents' local objective functions. Each agent can only get noisy observations of its local gradient and its local resource, which cannot be shared with other agents or transmitted to a center. Moreover, there are communication uncertainties such as time-varying topologies (described by random graphs) and additive channel noises. We prove that, with the proposed algorithm, the agents collaboratively achieve the optimal allocation with probability one, by virtue of the ordinary differential equation (ODE) method for SA. Finally,
simulations related to the demand response management in power systems verify the effectiveness of the proposed algorithm. \end{abstract} \begin{keyword} Distributed optimization \sep Resource allocation \sep Stochastic approximation \sep Random graph \sep Demand response
\end{keyword} \tnotetext[label0]{This work was supported by Beijing Natural Science Foundation (4152057), NSFC (61333001), and Program 973 (2014CB845301/2).} \end{frontmatter}
\section{Introduction}
The resource allocation (RA) problem is to allocate network resources among a group of agents while optimizing
a certain performance index. It has drawn much research attention in many areas, such as media access control in communication networks \cite{RAC}, signal processing \cite{RAS}, and load demand management \cite{stevon2}. Hence, various RA models and algorithms have been proposed (see \cite{RAC}-\cite{RA10} and the references therein). However, most existing algorithms need a center to collect the data over the network or to coordinate the computation processes among all agents.
In fact, center-free distributed optimization algorithms have attracted more and more research attention in recent years \cite{Ned2}-\cite{peng}. In various network optimization problems, the optimal decisions are made based on data from the whole network, which, however, are collected and stored by the individual agents of the network. A distributed optimization algorithm keeps the data distributed through the network while seeking the optimal decision, and hence eliminates the ``one-to-all" communication burden and protects agents' privacy. Distributed optimization also endows each individual agent with autonomy and reactivity by allowing it to formulate its local objective function and constraints with its local data. From the network viewpoint, robustness to single-point failure and network scalability can be enhanced with a distributed design. Following the seminal work \cite{RA3} on RA in large-scale networks, along with the distributed optimization works in \cite{Ned2}-\cite{peng}, various center-free distributed algorithms for RA have been proposed recently in \cite{DRA4}-\cite{DRA8}.
Stochastic approximation (SA) has been adopted in distributed optimization algorithms to address various kinds of uncertainties or to improve computational efficiency. In \cite{Ned3}, an SA-based distributed algorithm was proposed for the case when each agent can only get noisy observations of its local gradient, which extended the traditional SA optimization methods (see \cite{nemi}) to distributed settings. In \cite{DSA2}, an SA algorithm was given for a distributed root-seeking problem under noisy observations, which also generalizes distributed optimization problems. In practice, noisy gradient observations also arise in zero-order distributed optimization algorithms as in \cite{Yuan}, and randomized data sampling has been considered to reduce the computational complexity of optimization with ``big data", resorting to SA for theoretical analysis (see \cite{DSA3}). Besides, SA algorithms were also adopted in distributed optimization to handle uncertainties in communication systems in \cite{asum,ned}, and \cite{zhang2}.
Nevertheless, the existing distributed works on RA in \cite{DRA4}-\cite{DRA8} have not considered the various stochastic uncertainties related to information sharing or data observations. Since the problem data is distributed throughout the network, each agent needs to share its local information with other agents through a communication network, which may involve various uncertainties. Firstly, the communication network may switch due to packet loss, media access control, or energy constraints. To describe the uncertainties of communication topologies, different from the deterministic switching graphs in \cite{Ned2,lou12} and \cite{peng}, we adopt random graph models as in \cite{asum,ned,boyd} and \cite{Zhang}. Secondly, the information shared through the network may not be accurate or may be corrupted by random noises due to quantization errors or channel fading (see \cite{peng}, \cite{zhang2} and \cite{Zhang}). On the other hand, noises can also be actively added to the shared information for privacy protection, as discussed in \cite{DCU1}. Moreover, agents may not get exact local gradient or resource information due to measurement or observation noises.
The main contributions of the paper are summarized as follows. (i) A novel center-free distributed algorithm is proposed to handle the RA problem, where each agent only utilizes noisy observations of its local gradient and resource information, together with noisy neighboring information shared through randomly switching networks. (ii) The estimates are shown to converge to the optimal allocation with probability one, based on the ODE method for SA algorithms. (iii) The proposed model and algorithm are applied to distributed multi-period demand response management in power systems, along with simulations showing their effectiveness.
The remainder of the paper is organized as follows. The RA problem is formulated and an SA-based distributed algorithm is proposed in Section 2. Then the convergence result for the distributed algorithm is established in Section 3, while simulation studies are shown in Section 4. Finally, the concluding remarks are given in Section 5.
\section{Problem Formulation and Proposed Algorithm}
First, we introduce notation and preliminaries from convex analysis. Denote $\mathbf{1}_m=(1,...,1)^T \in \mathbf{R}^m$ and $\mathbf{0}_m=(0,...,0)^T \in \mathbf{R}^m$. $col \{ x_1,\cdots, x_n\}= (x_1^T, \cdots, x_n^T)^T$ stacks the vectors $x_1, \cdots, x_n$. $I_n$ denotes the identity matrix in $\mathbf{R}^{n\times n}$. For a matrix $A=[a_{ij}]$, $a_{ij}$ or $A_{ij}$ stands for the entry in the $i$th row and $j$th column of $A$. $ \otimes $ denotes the Kronecker product. { Denote $ker\{A\}$ and $range\{A\}$ as the null space and range space of matrix $A$, respectively.
For a nonempty closed convex set $\Omega \subset \mathbf{R}^m$ and a point $x \in \mathbf{R}^m$, denote $P_{\Omega} (x)$ as the point in $\Omega$ that is closest to $x$, and call it the projection of $x$ on $\Omega$.
$P_{\Omega} (x)$ is uniquely defined for any $x \in \mathbf{R}^m$ and satisfies the nonexpansiveness property
\begin{equation}\label{pro}
\| P_{\Omega} (x)-P_{\Omega} (y) \| \leq \| x-y\| ~~ \forall x, y\in\mathbf{R}^m. \end{equation} For a convex set $\Omega \subset \mathbf{R}^m$ and a point $x \in \Omega$, define
the normal cone to $\Omega$ at $x$ as $N_{\Omega} (x) \triangleq \{ v \in \mathbf{R}^m: \langle v, y-x \rangle \leq 0 ~~\forall y \in \Omega\}$. }
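For intuition, the non-expansiveness property \eqref{pro} can be checked numerically for the simplest convex set, a box; the helper `project_box` below is purely illustrative and not part of the paper's notation.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection of x onto the box [lo, hi]^m (a simple closed convex set)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
px, py = project_box(x, -1.0, 1.0), project_box(y, -1.0, 1.0)

# Non-expansiveness, inequality (1): ||P(x) - P(y)|| <= ||x - y||
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```

For a general $\Omega_i$ described by smooth convex inequalities, the projection has no closed form and is computed by a local convex solver; the box case suffices to illustrate the property used in the analysis.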
In the following two subsections, we formulate the distributed RA problem with the data observation and communication network models, and propose an SA-based distributed algorithm. \subsection{Problem Formulation} Consider a group of agents $\mathcal{N}=\{1, \cdots, n\}$ that cooperatively decide the optimal network resource allocation (RA), formulated as follows: { \begin{equation}\label{problem} \begin{split} &\min_{x_i \in \mathbf{R}^m,i\in \mathcal{N}} \qquad \sum_{i\in \mathcal{N}} f_i(x_i), \\ &subject \; to \; \sum_{i\in \mathcal{N}} x_i = \sum_{i\in\mathcal{ N}} d_i, \quad x_i \in \Omega_i, i \in \mathcal{N} \end{split} \end{equation} } The local allocation variable $x_i \in \mathbf{R}^m$ is decided by agent $i$, which is also associated with a local objective function $f_i(x_i)$. $d_i$ is the local resource data, and can only be observed by agent $i$. The resource of the whole network is the sum of all local resources, i.e., $\sum_{i \in \mathcal{N}}d_i$. { $\Omega_i$ is the local allocation feasibility constraint of agent $i$, and cannot be known by other agents. Furthermore, $\Omega_i$ is determined by $p_i$ inequality constraints: $ \Omega_i=\{ x\in \mathbf{R}^m: q_{ij} (x) \leq 0, ~ \forall j=1,\cdots, p_i\},$ where $q_{ij}(\cdot),~j=1,\cdots, p_i$ are continuously differentiable convex functions on $\mathbf{R}^m$ . Therefore, RA problem \eqref{problem} is to find an allocation that minimizes the sum of local objective functions while satisfying the network resource constraint and the allocation feasibility constraints.} The following assumptions can also be found in \cite{RAC}-\cite{RA10}.
\begin{assum}\label{assp}
Problem \eqref{problem} has a finite optimal solution.
For any $i \in \mathcal{N}$, $f_i(x_i)$ is a { differentiable, strictly convex function}, and moreover, its gradient is globally Lipschitz continuous, i.e., there exists a constant $l_c>0$ such that
$\| \nabla f_i(x) -\nabla f_i(y) \| \leq l_c \| x-y\|, \forall x,y \in \mathbf{R}^m .$ \end{assum}
{ The following constraint qualification assumption can be found in \cite{sa}. \begin{assum}\label{assset} For any $i \in \mathcal{N}, $ the set $\Omega_i$ is closed convex set and has nonempty interior points, and $\{ \nabla q_{ij}(x), ~ j \in \mathcal{I}_i(x)\}$ is linearly independent, where $\mathcal{I}_i(x)=\{ j: q_{ij}(x)=0 \}$. \end{assum} }
The data observation model for agent $i$ at time $k$ is given as follows: agent $i$ can get a noisy observation of its gradient $\nabla f_i(x_i)$ at the testing point $x_i(k)$, corrupted with noise $\nu_i(k)$ (that is, $\nabla f_i(x_{i}(k)) + \nu_{i}(k)$), and a noisy observation of its local resource, corrupted with noise $\delta_i(k)$ (that is, $d_i+ \delta_i(k)$). Stochastic gradient models of this type arise in the following three cases:
(i) Stochastic optimization: Agent $i$'s local objective function takes the expectation form { $f_i(x_i)= E_{\phi_i}[g_i(x_i, \phi_i)] = \int_{\Phi_i} g_i(x_i, \phi_i)d\mathbb{P}(\phi_i)$,} where $\phi_i$ is a random vector supported on the set $\Phi_i\subset \mathbf{R}^d$ with probability distribution $\mathbb{P}$, and $g_i : \mathbf{R}^m \times \Phi_i \rightarrow \mathbf{R}$.
In practice it is preferable to use the noisy gradient $\nabla g_i(x_i,\phi_i)$ for a sampled $\phi_i$ rather than the exact gradient, which would require evaluating a multidimensional integral at each iteration.
In fact, the SA algorithm in \cite{nemi} and DSA algorithm in \cite{Ned3} considered this kind of gradient noise.
(ii) Zero-order optimization: When agent $i$ can only get the value of $f_i(x_i)$ given the testing point $x_i(k)$, the gradient estimation methods, such as the Kiefer-Wolfowitz method in \cite{gfree1} and the randomized coordinate estimation in \cite{Yuan}, can lead to noisy gradient observations.
(iii) Randomized data sample: If the local objective functions are constructed with ``big data", a noisy gradient based on randomly sampled data is an alternative to the exact gradient, which may reduce the overall iteration computational complexity (see \cite{DSA3}).
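As a concrete instance of case (i), the following minimal sketch uses a hypothetical quadratic integrand $g_i(x,\phi)=\frac{1}{2}\|x-\phi\|^2$ with Gaussian $\phi_i$ (so that $\nabla f_i(x)=x-\theta$ is known in closed form) to show that single-sample gradients are unbiased noisy observations of the exact gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([1.0, -2.0])   # hypothetical mean of the random vector phi_i

# f_i(x) = E_phi[ g_i(x, phi) ] with g_i(x, phi) = 0.5*||x - phi||^2, phi ~ N(theta, I).
# Exact gradient: nabla f_i(x) = x - theta.
def noisy_grad(x, num_samples):
    """Sampled gradients nabla g_i(x, phi) = x - phi: unbiased noisy observations."""
    phi = theta + rng.normal(size=(num_samples, 2))
    return x - phi

x = np.zeros(2)
exact = x - theta
avg = noisy_grad(x, 20000).mean(axis=0)
# Averaging many unbiased noisy gradients recovers the exact gradient
assert np.linalg.norm(avg - exact) < 0.05
```

The noise $\nu_i(k)=\theta-\phi$ here has zero mean and bounded variance, matching Assumption 5 (iii) below.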
Given the local data observations, it is important and practical to solve \eqref{problem} in a distributed way, where the agents need to share the local information with neighbors through switching networks and noisy channels.
Switching communication networks are commonly modeled by random graphs, e.g., \cite{asum}, \cite{ned}. Denote a realization of the random graph at time $k$ as $\mathcal{G}(k)=(\mathcal{N},\mathcal{E}(k))$, where
$\mathcal{E}(k) \subset \mathcal{N}\times \mathcal{N} $ is the edge set at time $k$. If agent $i$ can get information from agent $j$ at time $k$, then $(j,i) \in \mathcal{E}(k)$ and agent $j$ belongs to agent $i$'s neighbor set $\mathcal{N}_i(k)=\{j|(j,i) \in \mathcal{E}(k)\}$ at time $k$. Define adjacency matrix $A(k)=[a_{ij}(k)]$ of $\mathcal{G}(k)$ with $a_{ij}(k)=1$ if $j\in \mathcal{N}_i(k)$, and $a_{ij}(k)=0$ otherwise. Denote by $Deg(k)=diag\{ \sum_{j=1}^n a_{1j}(k),..., \sum_{j=1}^n a_{nj}(k)\} $ the degree matrix, and by $L(k)=Deg(k)-A(k) $ the Laplacian matrix of $\mathcal{G}(k)$.
The following assumption is given for the random graphs $\{\mathcal{G}(k)\}_{k \geq 1}$ (referring to \cite{asum}).
\begin{assum}\label{assg} $\{L(k)\}$ is an i.i.d. sequence with mean denoted by $\bar{L}= E[L(k)]$. Besides, $\bar{L}$ is symmetric with $s_2(\bar{L})>0$, where $s_2(\bar{L})$ denotes the second smallest eigenvalue of $\bar{L}$. \end{assum}
\begin{rem} Note that Assumption \ref{assg} does not require the communication graph to be connected or undirected at any time instance. Only the mean graph is required to be undirected and connected, which ensures that the local information can reach any other agents in the average sense. { The gossip model in \cite{boyd} and the broadcast model in \cite{ned} are also consistent with Assumption \ref{assg}.} \end{rem}
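The remark can be illustrated numerically with a gossip-like model on a hypothetical $6$-node cycle: at each time one edge is drawn uniformly at random, so every realization $\mathcal{G}(k)$ is disconnected, yet the mean Laplacian $\bar{L}$ satisfies $s_2(\bar{L})>0$.

```python
import numpy as np

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]   # base cycle; one edge activates per step

def edge_laplacian(i, j):
    """Laplacian of the graph containing the single edge (i, j)."""
    L = np.zeros((n, n))
    L[i, i] = L[j, j] = 1.0
    L[i, j] = L[j, i] = -1.0
    return L

# i.i.d. sequence L(k): at each k one edge of the cycle is drawn uniformly at random.
Lbar = sum(edge_laplacian(i, j) for i, j in edges) / len(edges)   # = E[L(k)]
s_single = np.sort(np.linalg.eigvalsh(edge_laplacian(0, 1)))
s_mean = np.sort(np.linalg.eigvalsh(Lbar))
assert abs(s_single[1]) < 1e-9    # one edge alone: disconnected, s_2 = 0
assert s_mean[1] > 0.1            # s_2(Lbar) = 1/6 > 0: Assumption 3 holds
```

Here $\bar{L}$ is $1/6$ of the cycle Laplacian, whose second smallest eigenvalue is $1/6>0$, so local information spreads in the average sense even though no single realization is connected.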
\subsection{SA-based Distributed Algorithm}
We now propose an SA-based distributed algorithm that accounts for both the noisy data observations and the communication noises.
Denote $x_i(k)$ as agent $i$'s estimate for its local optimal allocation at time $k$, and denote $\lambda_i(k),~z_i(k)$ as the auxiliary variables of agent $i$.
The agents share their auxiliary variables through the communication network at each iteration. If $(j,i)\in \mathcal{E}(k)$, then agent $i$ can get the noisy information of $\{\lambda_j(k),z_j(k)\}$, corrupted with noises $ \zeta_{ij}(k)$ and $ \epsilon_{ij}(k)$, from agent $j$. Namely, $ \lambda_j(k)+\zeta_{ij}(k)$ and $z_j(k) +\epsilon_{ij}(k) $ are the values received by agent $i$ from agent $j$ at time $k$; the noises cannot be separated from the transmitted values. Moreover, agent $i$ also has the local noisy gradient observation $\nabla f_i(x)+ \nu_{i}(k) $ and noisy resource observation $d_i+\delta_i(k)$.
The SA-based distributed recursive algorithm for agent $i$ is given as follows: \begin{equation}\label{dy1} \begin{array}{l} \hline {\bf SA-based \; Distributed \; Resource \; Allocation \;Algorithm } \\ \hline \displaystyle {x}_{i}( k+1) = P_{\Omega_i} \big( x_i(k) + \alpha_{k} \big( -\big(\nabla f_i(x_i(k))+\nu_i(k)\big) + \lambda_{i }(k) \big) \big),\\ \displaystyle {\lambda}_{i}( k+1) = \lambda_{i}(k) + \alpha_{k} \big( (d_{i } +\delta_{i}(k))- x_{i}(k) \\ \displaystyle \qquad \qquad \quad - \sum_{j=1}^n a_{ij} (k)(\lambda_{i}(k)-(\lambda_{j }(k)+\zeta_{ ij }(k))) \\ \displaystyle \qquad \qquad \quad - \sum_{j=1}^n a_{ij} (k)(z_{i}(k)-(z_{j}(k)+\epsilon_{ ij }(k)) ) \big), \\ \displaystyle{z}_i (k+1) = z_{i}(k) + \alpha_{k}\sum_{j=1}^n a_{ij} (k)\big(\lambda_{i}(k)-(\lambda_{j}(k)+ \zeta_{ ij }(k)) \big),\\ \hline \end{array} \end{equation}
where the step-size $\{ \alpha_k\}$ satisfies \begin{equation}\label{stepsize} \alpha_k >0, ~~ \sum_{k=1}^{\infty} \alpha_k =\infty, ~~\sum_{k=1}^{\infty} \alpha_k^2 < \infty . \end{equation}
Obviously, the algorithm \eqref{dy1} is a {\bf fully distributed} one since each agent only uses its local noisy observations and the noisy information received from its neighbors, and only performs local projection with its local set $\Omega_i$.
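As an illustration, the following minimal Python sketch runs algorithm \eqref{dy1} on a hypothetical toy instance: scalar allocations ($m=1$), quadratic costs $f_i(x_i)=\frac{1}{2}(x_i-c_i)^2$, $\Omega_i=[-10,10]$, a fixed complete graph (a trivial special case of Assumption \ref{assg}), and small zero-mean Gaussian noises. For this instance the optimum is known in closed form, $x_i^*=c_i+\lambda^*$ with $\lambda^*=(\sum_i d_i-\sum_i c_i)/n=1$; all names and numbers are illustrative, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
c = np.array([0.0, 1.0, 2.0])      # f_i(x) = 0.5*(x - c_i)^2: hypothetical local costs
d = np.array([2.0, 2.0, 2.0])      # local resources; here x_i* = c_i + 1, lambda* = 1
A = np.ones((n, n)) - np.eye(n)    # fixed complete graph
Lap = np.diag(A.sum(1)) - A

def noise():                        # zero-mean observation/communication noises
    return 0.05 * rng.normal(size=n)

x, lam, z = np.zeros(n), np.zeros(n), np.zeros(n)
for k in range(30000):
    a = 2.0 / (k + 10)              # step sizes satisfying condition (4)
    grad_obs = (x - c) + noise()    # noisy local gradient observation
    x_new = np.clip(x + a * (-grad_obs + lam), -10.0, 10.0)   # projection onto [-10, 10]
    lam_new = lam + a * (d + noise() - x - Lap @ lam - Lap @ z + noise())
    z_new = z + a * (Lap @ lam + noise())
    x, lam, z = x_new, lam_new, z_new

# Estimates approach the optimal allocation x* = c + 1
assert np.max(np.abs(x - (c + 1.0))) < 0.25
```

Each agent's update uses only its own projection, its own noisy observations, and aggregated noisy neighbor terms, matching the fully distributed structure noted above.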
{ Since the local objective functions $f_i(x_i)$ are convex and continuously differentiable, the KKT condition of (\ref{problem}) reads \begin{equation}\label{kkt} \begin{array}{lll} &&\mathbf{0}_{m} \in \nabla f_i(x_i^*) - \lambda^* + N_{\Omega_i}(x_i^*), \quad i=1,\cdots,n, \\ && \sum_{i\in \mathcal{N}} x^*_i = \sum_{i\in\mathcal{ N}} d_i, \quad x_i^* \in \Omega_i. \end{array} \end{equation} Algorithm \eqref{dy1} is developed by combining primal-dual dynamics for the KKT condition \eqref{kkt} with the ODE method for stochastic approximation. In some sense, $\lambda_i$ in \eqref{dy1} is a local ``copy'' of the Lagrangian multiplier $\lambda^*$ in \eqref{kkt}, and $z_i$ in \eqref{dy1} drives the $\lambda_i$ to reach consensus on the same $\lambda^*$. }
The communication noises $\epsilon_{ij}(k), ~\zeta_{ij}(k)$ can be used to model information sharing uncertainties due to quantization errors (see \cite{peng}) or communication channel fading (see \cite{zhang2} and \cite{Zhang}). Additionally, noises can be actively added to achieve differential privacy protection as done in \cite{DCU1}.
Define the $\sigma$-algebra at time $k$ as: \begin{equation}\label{algebra} \begin{split}
\mathcal{F}_{k} =\sigma \{ & \epsilon_{ij}(t), \zeta_{ij}(t),\delta_{i}(t), \nu_i(t), L(t), ~0 \leq t \leq k, \\&~i,j=1,\cdots, n, ~ X(0), \Lambda(0), Z(0)\} . \end{split} \end{equation} Define $\mathcal{F}_{k}'=\sigma \{ \mathcal{F}_{k}, L(k+1 )\}$. The following assumptions on $\epsilon_{ij}(k), \zeta_{ij}(k), \delta_i(k), \nu_i(k)$ have also been adopted in existing SA and distributed optimization works (see \cite{Ned3}, \cite{asum}, \cite{ned}, \cite{Zhang}).
\begin{assum}\label{assd}
For any $i \in \mathcal{N},$ $\{ \delta_{i }(k)\}$ is an i.i.d. sequence with zero mean and bounded second moments $\sigma_{i,\delta}^{2}=E [\| \delta_{i }(k)\|^2].$ \end{assum}
\begin{assum}\label{asscn}
(i) The communication noises have conditional zero mean, i.e., $E[\zeta_{ij}(k) |\mathcal{F}_{k-1}']=\mathbf{0}$ and
$E[\epsilon_{ij}(k) |\mathcal{F}_{k-1}']=\mathbf{0}$.
(ii) There is a uniform bound on the conditional variances of the communication noises, i.e., there exists a constant $\mu>0$ such that for any $i,j \in \mathcal{N}$ and any $k \geq 0$,
$E[\| \zeta_{ij}(k) \|^2 |\mathcal{F}_{k-1}'] \leq \mu^2$ and $ E[\|\epsilon_{ij}(k) \|^2 |\mathcal{F}_{k-1}'] \leq \mu^2.$
(iii) There exists a positive constant $c$ such that for any $i \in \mathcal{N} $ and any $k \geq 0$,
$$ E[ \nu_{i}(k) | \mathcal{F}_{ k-1} ]=0, ~~E[ \| \nu_{i}(k) \|^2 | \mathcal{F}_{k-1} ]
\leq c (1+\|x_{i }(k)\|^2).$$
(iv) For all $ i\in \mathcal{N} $, the sequences $\{L(k)\}$ and $\{ \delta_{i}(k)\}$ are mutually independent.
The sequences $\{L(k)\}$ and $\{ \delta_{i}(k)\}_{ i\in \mathcal{N} }$ are independent of $\mathcal{F}_{k-1}.$
\end{assum}
\section{Convergence Analysis}
In this section, we employ the ODE method for SA algorithms to give the convergence analysis of algorithm \eqref{dy1}. The analysis proceeds as follows. Theorem \ref{thm1} shows that the set of equilibrium points of the underlying ODE contains the optimal solution to problem \eqref{problem}, while Lemma \ref{lemode} shows the convergence of the underlying ODE. Then Lemma \ref{lemnoise} investigates properties of the extended noise sequence, and Lemma \ref{lembound} shows that the iteration sequence generated by \eqref{dy1} is bounded. Finally, Theorem \ref{thmcov} shows that the estimates generated by \eqref{dy1} converge to the optimal resource allocation with probability one.
Set $\zeta_{i}(k)=\sum_{j=1}^n a_{ij} (k) \zeta_{ ij} (k)$ and $\epsilon_{i}(k)=\sum_{j=1}^n a_{ij} (k)\epsilon_{ij}(k)$, and \begin{equation} \begin{array}{ll} \nonumber & X(k)=col\{x_1(k),\cdots,x_n(k)\}, \quad \Lambda(k)=col\{\lambda_1(k),\cdots,\lambda_n(k)\}, \\ & Z(k)=col\{z_1(k),\cdots,z_n(k)\}, \quad\; \delta(k) =col\{ \delta_{1}(k),\cdots, \delta_{n}(k)\}, \\ & \nu(k)=col\{ \nu_{1}(k),\cdots,\nu_{n}(k)\}, \quad D=col\{d_1,\cdots,d_n\}, \\ & \zeta(k)=col\{ \zeta_{1}(k),\cdots, \zeta_{n}(k)\}, \quad \epsilon(k)=col\{ \epsilon_{1}(k),\cdots,\epsilon_{n}(k)\},\\ & \nabla f(X(k))=col \{ \nabla f_1(x_1(k)),\cdots, \nabla f_n(x_n(k)) \}. \end{array} \end{equation} Then the recursive algorithm \eqref{dy1} can be rewritten in the compact form as follows: \begin{equation}\label{cra} \begin{array}{l} \displaystyle X(k+1) = P_{\Omega} \big( X(k) + \alpha_{k} \big ( - \nabla f (X(k)) +\Lambda(k) - \nu(k)\big)\big),\\ \displaystyle \Lambda(k+1) = \Lambda(k) + \alpha_{k} \big( -(L(k)\otimes I_m)( \Lambda(k)+ Z(k) ) \\ \displaystyle \qquad \qquad \quad + D- X(k) +\delta(k) + \zeta(k)+\epsilon(k) \big) ,\\ \displaystyle Z(k+1) = Z(k) + \alpha_{k} \big( (L(k)\otimes I_m) \Lambda(k) - \zeta(k) \big), \end{array} \end{equation} where $\Omega=\prod_{i=1}^n \Omega_i$ denotes the Cartesian product of $\Omega_i$.
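As a quick sanity check, the equivalence between the per-agent sums $\sum_{j} a_{ij}(\lambda_i-\lambda_j)$ in \eqref{dy1} and the Kronecker-product form $(L\otimes I_m)\Lambda$ in \eqref{cra} can be verified numerically; the undirected graph below is randomly generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 2
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                       # random undirected adjacency
Lap = np.diag(A.sum(1)) - A
lam = rng.normal(size=(n, m))                        # rows are the lambda_i

# Stacked form: (L ⊗ I_m) Λ, with Λ = col{lambda_1, ..., lambda_n}
stacked = np.kron(Lap, np.eye(m)) @ lam.reshape(n * m)
# Per-agent form: sum_j a_ij (lambda_i - lambda_j)
per_agent = np.stack([sum(A[i, j] * (lam[i] - lam[j]) for j in range(n))
                      for i in range(n)])
assert np.allclose(stacked, per_agent.reshape(n * m))
```

The row-major stacking of `lam` matches the $col\{\cdot\}$ convention, so block $i$ of $(L\otimes I_m)\Lambda$ equals $\sum_j a_{ij}(\lambda_i-\lambda_j)$.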
Denote $e_1(k)= \big( (\bar{L}-L(k) ) \otimes I_m \big)( \Lambda(k) + Z(k)) ,$ $e_2(k)= \zeta(k) + \delta(k)+\epsilon(k)$, and $e_3(k)=\big( (L(k) -\bar{L} ) \otimes I_m \big) \Lambda(k) -\zeta(k)$. We then have
\begin{equation} \begin{array}{l}
\displaystyle X(k+1) = P_{\Omega} \big( X(k) + \alpha_{k}( - \nabla f (X(k)) +\Lambda(k) - \nu(k)) \big),\\
\displaystyle \Lambda(k+1) = \Lambda (k) + \alpha_{k} \big( - ( \bar{L} \otimes I_m) ( \Lambda(k) +Z(k) ) \\
\displaystyle \qquad \qquad + D - X(k)+ e_1(k)+e_2(k) \big), \\
\displaystyle Z(k+1) = Z(k) + \alpha_{k}( ( \bar{L} \otimes I_m) \Lambda(k) + e_3(k)). \end{array} \end{equation}
By setting $S(k)= col\{ X(k),\Lambda(k), Z(k)\}$, we can regard the algorithm \eqref{cra} as an SA algorithm of the following form: \begin{equation}\label{compact} S(k+1) = P_{\Phi} \big( S(k) + \alpha_k (J(S(k)) + \xi(k))\big), \end{equation} where \begin{equation}\label{rootf} \begin{split}
& J(S) = \begin{pmatrix}
-\nabla f(X) + \Lambda \\
- ( \bar{L} \otimes I_m) ( \Lambda + Z) + D-X\\
( \bar{L} \otimes I_m) \Lambda \end{pmatrix}, \\& \xi(k)= \begin{pmatrix}
- \nu(k) \\
e_1(k)+e_2(k) \\
e_3(k) \end{pmatrix}, \quad \Phi=\Omega \times \mathbf{R}^{mn} \times \mathbf{R}^{mn}. \end{split} \end{equation}
{ The convergence proof of \eqref{dy1} relies on the ODE method for SA (referring to \cite{sa} and \cite{sa2}). Define the following continuous-time projected dynamics as the underlying ODE of \eqref{dy1} \begin{equation}\label{dy2}
\dot{S}=J(S)+z, S(0)=col\{X(0),\Lambda(0),Z(0)\}, \end{equation} with $z\in -N_{\Phi} (S)$ being the minimum force to keep the solution of \eqref{dy2} in $\Phi$, and $J(S)$ is defined by \eqref{rootf}.
\begin{thm}\label{thm1} Under Assumptions \ref{assp}, \ref{assset}, and \ref{assg}, \eqref{dy2} has at least one equilibrium point. Furthermore, if $S^*=col \{X^*,\Lambda^*,Z^*\}$ is an equilibrium point of \eqref{dy2}, then its component $X^*$ is an optimal solution to problem (\ref{problem}). \end{thm}
{\bf Proof}: Because problem \eqref{problem} is assumed to be solvable, there exist an optimal solution $X^*$ and a multiplier $\lambda^*\in \mathbf{R}^m$ satisfying \eqref{kkt}. Take $\Lambda^*=\mathbf{1}_n\otimes \lambda^*$; then $(\bar{L} \otimes I_m) \Lambda^*=\mathbf{0}$. By $(\mathbf{1}^T_{n} \otimes I_m) X^*= (\mathbf{1}^T_{n}\otimes I_m) D$ (that is, $\sum_{i\in \mathcal{N}} x^*_i = \sum_{i\in\mathcal{ N}} d_i$), we have $D-X^* \in ker\{\mathbf{1}^T_{n}\otimes I_m\} $. Notice that $ker\{\mathbf{1}^T_{n}\otimes I_m\}$ and $ range\{\mathbf{1}_{n}\otimes I_m\} $ form an orthogonal decomposition of $\mathbf{R}^{nm}$ by the fundamental theorem of linear algebra. Combined with $ker\{\bar{L} \otimes I_m\}=range\{\mathbf{1}_{n}\otimes I_m\}$, which holds by Assumption \ref{assg}, this gives $D-X^* \in ker\{\bar{L} \otimes I_m\}^{\perp}$.
Therefore, $D-X^* \in range\{\bar{L} \otimes I_m \}$; that is, there exists $Z^*$ such that $ -(\bar{L} \otimes I_m) Z^* + D-X^*=\mathbf{0}$. Hence, combined with \eqref{kkt}, $S^*=col\{X^*,\Lambda^*,Z^*\}$ is an equilibrium point of \eqref{dy2}.
On the other hand, when $S^*=col\{X^*,\Lambda^*,Z^*\}$ is an equilibrium point of \eqref{dy2}, it satisfies: \begin{equation} \begin{array}{lll}\label{eq2} && -\nabla f_i(x_i^*)+ \lambda_i^* \in N_{\Omega_i}(x_i^*), \quad x^*_i \in \Omega_i, \\ && (\bar{L} \otimes I_m) ( \Lambda^* + Z^*) -( D-X^*)= \mathbf{0},\\ && (\bar{L} \otimes I_m)\Lambda^* = \mathbf{0}. \end{array} \end{equation}
Since $\bar{L}$ is the weighted Laplacian of an undirected connected graph by Assumption \ref{assg}, it follows from $ ( \bar{L} \otimes I_m) \Lambda^*=\mathbf{0}_{mn}$ that $\Lambda^*= \mathbf{1}_n \otimes \lambda^* $ for some $ \lambda^*\in \mathbf{R}^m$. As a result, $\mathbf{0}_{m} \in \nabla f_i(x_i^*) - \lambda^* + N_{\Omega_i}(x_i^*)$. Furthermore, $(\bar{L} \otimes I_m) \Lambda^*+(\bar{L} \otimes I_m)Z^* - (D-X^{*})=\mathbf{0}_{mn}$
implies that $(\bar{L} \otimes I_m)Z^*=D-X^*$. Then by noticing $\mathbf{1}_n^T \bar{L}=\mathbf{0}_n^T$ we derive
$\sum_{i\in \mathcal{N}} d_i=\sum_{i\in \mathcal{N}} x_i^*$. Moreover, $x_i^*\in \Omega_i$ due to the viability of ODE \eqref{dy2}.
Thus, any equilibrium point $S^{*}$ of \eqref{dy2} satisfies the KKT condition \eqref{kkt}, and hence, $X^*$ is the optimal solution to problem \eqref{problem}.
$\blacksquare$
Lemma \ref{lemode} shows that \eqref{dy2} converges to an equilibrium point $S^*$. \begin{lem}\label{lemode} Under Assumptions \ref{assp}, \ref{assset} and \ref{assg}, the trajectories of \eqref{dy2} are bounded and converge to an equilibrium point for any finite initial point. \end{lem}
{\bf Proof}:
Take the Lyapunov function $V(S)=\frac{1}{2}\|S-S^* \|^2$, where $S^*$ is an equilibrium point of \eqref{dy2}. Take $n_{\Omega}(X^*)\in N_{\Omega}(X^*)$ such that $\nabla f(X^*)-\Lambda^*+n_{\Omega}(X^*)=\mathbf{0}$, then \begin{equation} \begin{array}{ll}\label{eq3} &\frac{dV}{dt}=(S -S^{*})^T (J(S) + z)\\ &\leq (X-X^*)^T(-\nabla f(X)+\Lambda + \nabla f(X^*)-\Lambda^*+n_{\Omega}(X^*) ) \\ & + (\Lambda-\Lambda^*)^T(-(\bar{L}\otimes I_m) (\Lambda+Z)+(D-X)\\ &+(\bar{L}\otimes I_m) (\Lambda^*+Z^*)-(D-X^*) ) + (Z-Z^*)^T (\bar{L}\otimes I_m) (\Lambda-\Lambda^*)\\ & \leq -(X-X^*)^T(\nabla f(X)-\nabla f(X^*)) + (X-X^*)^Tn_{\Omega}(X^*)\\
&-(\Lambda-\Lambda^*)^T (\bar{L}\otimes I_m) (\Lambda-\Lambda^*)\leq 0. \end{array} \end{equation}
Hence, any equilibrium point of \eqref{dy2} is Lyapunov stable, and given a finite initial point $S(0)$, the trajectories of \eqref{dy2} are bounded and belong to the compact forward invariant set $I_s=\{S \mid V(S)\leq V(S(0))\}$.
Denote $E$ as the set within $I_s$ such that $\dot{V}=0$. Then we can show that the maximal invariance set in $E$ can only be $\{ S| \dot{S}=0\}.$
With the strict convexity of $f_i$, $X=X^*$ must hold within the set $E$. Furthermore, $\Lambda-\Lambda^* \in ker\{\bar{L}\otimes I_m\}$ by \eqref{eq3} and Assumption \ref{assg}. Therefore, $\dot{Z}=(\bar{L}\otimes I_m)\Lambda=(\bar{L}\otimes I_m)\Lambda^*=\mathbf{0}$ and $Z=Z^*$ within the set $E$. Moreover, $\dot{\Lambda}=-(\bar{L}\otimes I_m)Z^*+D-X^*$ is then constant, and it must be $\mathbf{0}$; otherwise $\Lambda$ would go to infinity, which contradicts the boundedness of the trajectories. Therefore, all the trajectories of (\ref{dy2}) converge to points in the maximal invariance set $\{ S\mid \dot{S}=\mathbf{0}\}$. Recalling the Lyapunov stability of $S^*$ and the LaSalle invariance principle, the dynamics (\ref{dy2}) converges to an equilibrium point, which leads to the conclusion.
$\blacksquare$}
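Lemma \ref{lemode} can also be checked numerically by forward-Euler integration of the noiseless dynamics \eqref{dy2} on a hypothetical toy instance (quadratic costs, fixed complete graph), for which the equilibrium is available in closed form; all data below are illustrative.

```python
import numpy as np

n = 3
c = np.array([0.0, 1.0, 2.0]); d = np.array([2.0, 2.0, 2.0])   # hypothetical data
A = np.ones((n, n)) - np.eye(n); Lap = np.diag(A.sum(1)) - A   # fixed complete graph
# Known equilibrium of the ODE for this instance: x* = c + 1, lambda* = 1_n,
# and z* solving Lap z* = d - x*  (here z* = (d - x*)/n since Lap = n*I - 11^T).
x_s, lam_s = c + 1.0, np.ones(n)
z_s = (d - x_s) / n

def V(x, lam, z):
    """Lyapunov function V(S) = 0.5*||S - S*||^2 from the proof of Lemma 1."""
    return 0.5 * (np.sum((x - x_s)**2) + np.sum((lam - lam_s)**2)
                  + np.sum((z - z_s)**2))

x, lam, z = np.zeros(n), np.zeros(n), np.zeros(n)
V0 = V(x, lam, z)
h = 0.01
for _ in range(3000):                   # forward-Euler integration of the ODE
    dx = -(x - c) + lam                 # projection is inactive: trajectory stays interior
    dlam = d - x - Lap @ (lam + z)
    dz = Lap @ lam
    x, lam, z = x + h * dx, lam + h * dlam, z + h * dz

assert V(x, lam, z) < 1e-6 * V0        # trajectory has converged to the equilibrium
```

The trajectory spirals into the equilibrium (the linearization has complex eigenvalues with negative real parts), consistent with the LaSalle argument in the proof.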
\subsection{ Extended noise property}
By definition of $\mathcal{F}_k$ given in \eqref{algebra}, $S(k)$ is adapted to $\mathcal{F}_{k-1 }$ according to \eqref{dy1}. The extended noise sequence $\{\xi(k)\}$ is state-dependent, and its properties are shown in Lemma \ref{lemnoise}.
\begin{lem}\label{lemnoise} Suppose Assumptions \ref{assg}, \ref{assd} and \ref{asscn} hold. Then \begin{equation}\label{compon3}
E [\xi(k) | \mathcal{F}_{k-1}]=0,~~ E [ \| \xi(k) \|^2 |\mathcal{F}_{k-1}] \leq c_1 \| S(k)\|^2+c_2~~a.s. \end{equation} for some finite constants $c_1,c_2$. \end{lem}
{\bf Proof}: By Assumption \ref{asscn} (iii), \begin{equation}\label{conde0} \begin{array}{lll}
E[ \nu(k)| \mathcal{F}_{k-1}] & = & 0, \\
E[ \| \nu(k) \|^2| \mathcal{F}_{k-1}] & = & \sum_{i=1}^n E[ \| \nu_i(k) \|^2| \mathcal{F}_{k-1}]\\
& \leq & nc+ c\| X(k)\|^2. \end{array} \end{equation}
Since $a_{ij}(k)$ is adapted to $\mathcal{F}^{'}_{k-1}$, by Assumption \ref{asscn} (i) we obtain \begin{equation}
\begin{array}{lll}
E[ \zeta_i(k)| \mathcal{F}_{k-1}'] & = & \sum_{j=1}^n E [ a_{ij} (k) \zeta_{ ij} (k) |\mathcal{F}_{k-1}'] \\
& = & \sum_{j=1}^n a_{ij} (k) E [ \zeta_{ ij} (k) |\mathcal{F}_{k-1}'] =0, \nonumber \end{array} \end{equation}
and hence $ E[ \zeta (k)| \mathcal{F}_{k-1}'] =0$. By noting that $\mathcal{F}_k \subset\mathcal{F}_k'$ we derive
$$ E[ \zeta (k)| \mathcal{F}_{k-1} ] = E \big [ E[ \zeta (k)| \mathcal{F}_{k-1}'] \big | \mathcal{F}_{k-1} \big]=0. $$
Similarly, it follows that $ E[ \epsilon (k)| \mathcal{F}'_{k-1}] =0$ and $ E[ \epsilon (k)| \mathcal{F}_{k-1} ] =0$.
Since $L(k)$ is independent of $\delta_i(k)$ and $\mathcal{F}_{k-1}$ by Assumption \ref{asscn} (iv), we obtain
$ E[ \delta_i (k)| \mathcal{F}_{k-1}' ] =E[ \delta_i (k)| \mathcal{F}_{k-1},L(k) ]=E[ \delta_i (k)| \mathcal{F}_{k-1} ]. $ Then by Assumptions \ref{assd} and \ref{asscn} (iv), we have that, for any $i \in \mathcal{N}$,
$E [\delta_i(k)|\mathcal{F}_{k-1}' ]=E[ \delta_i (k)| \mathcal{F}_{k-1}]= E[ \delta_i (k)] =0.$
Thus, \begin{equation}\label{conditional1}
E [ e_2(k)| \mathcal{F}'_{k-1} ] = E[ \zeta (k)| \mathcal{F}'_{k-1} ] + E [\delta(k)|\mathcal{F}'_{k-1} ] + E[ \epsilon (k)| \mathcal{F}'_{k-1} ] =0, \end{equation}
which implies that $ E [ e_2(k)| \mathcal{F}_{k-1} ] =E[E[e_2(k)| \mathcal{F}'_{k-1}]\big | \mathcal{F}_{k-1}]=0.$
Note that $\Lambda(k)$ and $Z(k)$ are adapted to $\mathcal{F}_{k-1}$, while
$L(k)$ is independent of $\mathcal{F}_{k-1}$. Then, by Assumption \ref{assg} \begin{equation} \begin{array}{lll}
E[ e_1(k) | \mathcal{F}_{k-1} ] & = & E [ \big( (\bar{L}-L(k) ) \otimes I_m \big) ( \Lambda (k) + Z(k)) |\mathcal{F}_{k-1} ] \\
& = & E [ \big( (\bar{L}-L(k) ) \otimes I_m \big) |\mathcal{F}_{k-1} ] ( \Lambda (k) + Z(k)) \\
& = & E [ \big( (\bar{L}-L(k) ) \otimes I_m \big) ] ( \Lambda (k) + Z(k))=0, \\
E[ e_3(k) | \mathcal{F}_{k-1} ] & = & E [ \big( (\bar{L}-L(k) ) \otimes I_m \big) \Lambda(k) |\mathcal{F}_{k-1} ] \\
& = & E [ \big( (\bar{L}-L(k) ) \otimes I_m \big)] \Lambda(k)=0. \end{array} \end{equation}
Consequently, we conclude that $E [\xi(k) | \mathcal{F}_{k-1}]=0.$ \vskip 5mm
Since $e_1(k)$ is adapted to $ \mathcal{F}_{k-1}'$ and $\mathcal{F}_k \subset\mathcal{F}_k'$ , it follows from \eqref{conditional1} that \begin{equation}\label{conde01} \begin{split}
E[ e_1(k)^T e_2(k) | \mathcal{F}_{k-1} ] &= E \big [ E[ e_1(k)^T e_2(k) | \mathcal{F}_{k-1}'] \big | \mathcal{F}_{k-1} \big] \\
& = E \big [ e_1(k)^T E [e_2(k) | \mathcal{F}_{k-1}^{'} ] \big | \mathcal{F}_{k-1} \big] =0. \end{split} \end{equation}
Since $\Lambda(k)$ and $L(k)$ are adapted to $\mathcal{F}_{k-1}'$, by $ E[ \zeta (k)| \mathcal{F}_{k-1}'] =0$ and $\mathcal{F}_k \subset\mathcal{F}_k'$, we get \begin{equation}\label{conde1} \begin{split}
&E [ \big( (L(k) - \bar{L} ) \otimes I_m \big) \Lambda(k) \big)^T \zeta(k) |\mathcal{F}_{k-1} ] \\
&\quad = E \Big[ E \big [ \big( (L(k) - \bar{L} ) \otimes I_m \big) \Lambda(k) \big)^T \zeta(k) | \mathcal{F}_{k-1}^{'}\big] \Big|\mathcal{F}_{k-1} \Big ] \\
&\quad = E \Big[ \big( (L(k) - \bar{L} ) \otimes I_m \big) \Lambda(k) \big)^T E \big [ \zeta(k) | \mathcal{F}_{k-1}^{'}\big] \Big|\mathcal{F}_{k-1} \Big ] =0. \end{split} \end{equation}
By the conditional H\"older inequality
\begin{equation}\label{Holder}
E[\| X^TY\| \big | \mathcal{F}] \leq(E[\| X \|^2 \big | \mathcal{F}] )^{\frac{1}{2}} (E[\| Y \|^2 \big | \mathcal{F}] )^{\frac{1}{2}} , \end{equation} from Assumption \ref{asscn} (ii) we see that
\begin{equation}
\begin{array}{lll}
& E[ \zeta_{ij}(k) ^T \zeta_{ip}(k) \big | \mathcal{F}_{k-1}' ] \leq E[ \| \zeta_{ij}(k) ^T \zeta_{ip}(k) \| \big | \mathcal{F}_{k-1}' ] \\&\leq(E[\| \zeta_{ij}(k) \|^2 \big | \mathcal{F}_{k-1}'] )^{\frac{1}{2}}
(E[\| \zeta_{ip}(k) \|^2 \big | \mathcal{F}_{k-1}'] )^{\frac{1}{2}} \leq \mu^2. \nonumber
\end{array} \end{equation} Then, since $A(k)$ is adapted to $\mathcal{F}_{k-1}'$, we have \begin{equation} \begin{array}{lll}
E [\|\zeta_i(k) \|^2 | \mathcal{F}_{k-1}' ] & = & E[\sum_{j,p =1}^n a_{ij}(k) a_{ip}(k)\zeta_{ij}(k)^T \zeta_{ip}(k) | \mathcal{F}_{k-1}' ] \\
& = & \sum_{j,p =1}^n a_{ij}(k) a_{ip}(k) E[ \zeta_{ij}(k)^T \zeta_{ip}(k) | \mathcal{F}_{k-1}' ] \\
& \leq &\sum_{j,p =1}^n \mu^2=n^2\mu^2 . \nonumber \end{array} \end{equation} Similarly, $
E[\| \epsilon_i(k) \|^2 | \mathcal{F}_{k-1}' ] \leq n^2 \mu^2~ \forall i \in \mathcal{N}.$ From Assumption \ref{asscn} (iv), it is clear that $\delta_{i}(k)$ is independent of $\mathcal{F}_{k-1}'$, and hence, by Assumption \ref{assd}
$E [\| \delta(k) \|^2 | \mathcal{F}_{k-1}' ]=E [\| \delta(k) \|^2]=\sum_{i=1}^n E [\| \delta_i(k) \|^2 ] .$ In summary, \begin{equation}\label{sndc} \begin{array}{ll}
&E [\| \zeta(k) \|^2 | \mathcal{F}_{k-1}' ] \leq n^3 \mu^2, ~~
E [\| \epsilon(k) \|^2 | \mathcal{F}_{k-1}' ] \leq n^3\mu^2, \\
&E [\| \delta(k) \|^2 | \mathcal{F}_{k-1}' ] = \sum_{i=1}^n\sigma_{i,\delta}^2 \triangleq\sigma_{\delta}^2 . \end{array} \end{equation} Then \begin{equation}
\begin{array}{lll}
& E [ \| e_2(k)\|^2 |\mathcal{F}_{k-1}'] \\ & \leq 3\big( E [\| \epsilon(k) \|^2 |\mathcal{F}_{k-1}'] +E [\| \zeta(k) \|^2 |\mathcal{F}_{k-1}'] +E [\| \delta(k) \|^2 |\mathcal{F}_{k-1}'] \big) \\ & \leq 3(2n^3\mu^2+\sigma_{\delta }^2) \triangleq C_{1}, \nonumber
\end{array} \end{equation} and hence, by $\mathcal{F}_k \subset\mathcal{F}_k'$, we get \begin{equation}\label{conde2}
E [ \| e_2(k)\|^2 |\mathcal{F}_{k-1}]=E \big[E [ \| e_2(k)\|^2 |\mathcal{F}_{k-1}'] \big | \mathcal{F}_{k-1}\big] \leq C_{1}. \end{equation}
Because $ \Lambda (k) $ and $ Z(k)$ are adapted to $\mathcal{F}_{k-1}$, and $L(k)$ is independent of $\mathcal{F}_{k-1}$ by Assumption \ref{asscn} (iv), we have \begin{equation} \begin{array}{lll}
E [ \| e_1(k) \|^2 |\mathcal{F}_{k-1}] &=E [ \| \big( (\bar{L}-L(k) ) \otimes I_m \big) ( \Lambda (k) + Z(k)) \|^2 |\mathcal{F}_{k-1}] \\
& \leq C_{2 } \| \Lambda(k) + Z(k)\|^2, \nonumber \end{array} \end{equation}
where $C_2=E [\| L(k) - \bar{L} \|^2]$ is finite. It, along with \eqref{conde01} \eqref{conde2}, yields \begin{equation}\label{compon1} \begin{array}{lll}
&&E [ \| e_1(k) +e_2(k)\|^2 |\mathcal{F}_{k-1}] = E [ \| e_1(k) \|^2 |\mathcal{F}_{k-1}] \\
&& + E [ \| e_2(k)\|^2 |\mathcal{F}_{k-1}] + 2E [ e_1(k)^Te_2(k) |\mathcal{F}_{k-1}] \\
&&\leq C_{2 } \| \Lambda(k) + Z(k)\|^2 +C_1 . \end{array} \end{equation}
By \eqref{conde1} \eqref{sndc} and $\mathcal{F}_k \subset\mathcal{F}_k'$, we derive \begin{equation}\label{compon2} \begin{array}{lll}
E [ \| e_3(k) \|^2 |\mathcal{F}_{k-1}] & = & E [ \| \big( (L(k) -\bar{L} ) \otimes I_m \big) \Lambda(k) -\zeta(k) \|^2 |\mathcal{F}_{k-1}] \\
& = & E [ \| \big( (L(k) -\bar{L} ) \otimes I_m\big) \Lambda(k) \|^2 |\mathcal{F}_{k-1}] \\
& \; & - 2E [ \big( \big( (L(k) -\bar{L} ) \otimes I_m \big) \Lambda(k) \big)^T\zeta(k) |\mathcal{F}_{k-1}] \\
& \; & + E [ \| \zeta(k) \|^2 |\mathcal{F}_{k-1}] \leq C_2 ||\Lambda(k) ||^2+n^3\mu^2. \end{array} \end{equation}
In summary, from \eqref{conde0}, \eqref{compon1}, and \eqref{compon2}, we obtain
\begin{equation} \begin{array}{lll}
E [ \| \xi(k) \|^2 |\mathcal{F}_{k-1}] & = & E [ \| \nu(k) \|^2 |\mathcal{F}_{k-1}]+E [ \| e_3(k) \|^2 |\mathcal{F}_{k-1}]\\
& \; & +E [ \| e_1(k) +e_2(k)\|^2 |\mathcal{F}_{k-1}]\\
& \leq & nc+ c\| X(k)\|^2 + C_2 ||\Lambda(k) ||^2 + n^3\mu^2\\
& \; &+ C_2\|\Lambda(k) + Z(k)\|^2 +C_1 \\
& \leq & c_1 \| S(k)\|^2+c_2 \nonumber \end{array} \end{equation} for some positive constants $c_1, c_2$.
$\blacksquare$
\subsection{Stability} The following result establishes the boundedness of the iterates, which is needed before showing their convergence.
\begin{lem}\label{lembound} Under Assumptions \ref{assp}-\ref{asscn}, $\{ S(k)\}$ generated by the distributed algorithm \eqref{dy1} is bounded with probability one for any finite initial value $S(0)$. \end{lem}
{\bf Proof:}
Denote by $S^{*}$ an equilibrium point of \eqref{dy2}, i.e., $J(S^{*})\in N_{\Phi}(S^*)$. Then, by Assumption \ref{assp} and the KKT condition \eqref{kkt}, $S^{*}$ is finite. Take $v(S)= \| S- S^{*}\|^2$ as a Lyapunov function. { Then from \eqref{compact} and the non-expansive property of the projection operator \eqref{pro} we derive} \begin{equation} \begin{array}{ll}
v( S(k+1)) & = \| S(k+1)- S^{*}\|^2\\
& \leq \| S(k) -S^{*}+ \alpha_k (J(S(k)) + \xi(k))\|^2 \\
& \leq \| S(k )- S^{*}\|^2+ 2\alpha_k (S(k) -S^{*})^T (J(S(k)) + \xi(k)) \\ &+ \alpha_k^2\big(\| J(S(k))\|^2 +2 \xi(k)^TJ(S(k)) +\| \xi(k) \|^2\big) . \nonumber \end{array} \end{equation}
Since $S(k)$ is adapted to $\mathcal{F}_{k-1 }$, by recalling $E [\xi(k) | \mathcal{F}_{k -1}]=0$ from Lemma \ref{lemnoise} we obtain
\begin{equation}\label{inequality0} \begin{array}{lll}
E [ v( S(k+1)) | \mathcal{F}_{k-1}] & \leq & v(S(k))+ 2\alpha_k (S(k) -S^{*})^T J(S(k)) \\
& + & \alpha_k^2 ( \| J(S(k)) \|^2 + E [ \| \xi(k) \|^2 | \mathcal{F}_{k-1}]). \nonumber \end{array} \end{equation} Similar to the proof of Lemma \ref{lemode}, $(S(k) -S^{*})^T J(S(k)) \leq 0.$ Then by Lemma \ref{lemnoise}, we get \begin{equation}\label{ineq1}
E [ v( S(k+1)) | \mathcal{F}_{k-1}] \leq v(S(k)) + \alpha_k^2 ( \| J(S(k)) \|^2 + c_1 \| S(k)\|^2+c_2) . \end{equation}
{ From Assumption \ref{assp} and taking $n_{\Omega}(X^*)\in N_{\Omega}(X^*)$ such that $\nabla f(X^*)-\Lambda^*+n_{\Omega}(X^*)=\mathbf{0}$, we have \begin{equation} \label{inequality01} \begin{array}{ll}
\| J(S(k)) \|^2 & = \| -\nabla f(X(k))+\Lambda(k) +\nabla f(X^{*}) -\Lambda^{*} +n_{\Omega}(X^*)\|^2 \\
& +\| ( \bar{L} \otimes I_m) \big( (Z(k)-Z^{*})+ (\Lambda(k)-\Lambda^{*}) \big )\\
& + X(k)-X^{*} \|^2 + \| ( \bar{L} \otimes I_m) (\Lambda(k)-\Lambda^{*}) \|^2 \\
&\leq 3(\| \nabla f(X(k)) -\nabla f(X^{*}) \|^2 + \| \Lambda(k)- \Lambda^{*}\|^2\\
&+ \| n_{\Omega}(X^*)\|^2+ \| ( \bar{L} \otimes I_m) (\Lambda(k)-\Lambda^{*}) \|^2 +\| X(k)-X^{*} \|^2 \\
& + \| ( \bar{L} \otimes I_m) (Z(k)-Z^{*}) \|^2 )+ \| ( \bar{L} \otimes I_m) (\Lambda(k)-\Lambda^{*}) \|^2\\
&\leq (3l^2_c+3) \| X(k)-X^{*} \|^2+3c_4\| Z(k)-Z^{*}\|^2\\
&+(3+4c_4) \| (\Lambda(k)-\Lambda^{*}) \|^2 +c_n \\
&\leq (3+ 3l_c^2+4c_4 ) ||S(k)-S^{*}||^2+c_n = c_5 v( S(k))+c_n, \end{array} \end{equation}
where $c_4=\| \bar{L}\|$ and $c_n= \| \nabla f(X^*)-\Lambda^*\|^2$.}
Note that $$\| S(k)\|^2 \leq 2(\| S(k)-S^{*}\|^2 +\| S^{*}\|^2)=2(v( S(k)) +\| S^{*}\|^2).$$ Combining this with \eqref{ineq1} and \eqref{inequality01} yields \begin{equation}
\begin{array}{lll}
& E [ v( S(k+1)) | \mathcal{F}_{k-1}] \leq v(S(k)) \\
& + \alpha_k^2 \big( c_5v( S(k))+c_n +2c_1v(S(k)) + 2c_1\| S^{*}\|^2+c_2\big)\\
& \triangleq (1+ c_6 \alpha_k^2) v(S(k))+ c_7\alpha_k^2 , \end{array} \end{equation}
where $c_6=2c_1+c_5 , c_7=2c_1\| S^{*}\|^2+c_2+c_n$.
Since $\{\alpha_k\}$ satisfies \eqref{stepsize}, with probability one
$\lim\limits_{k \rightarrow \infty} v(S(k))$ exists and is finite by Lemma \ref{marg} in Appendix. Therefore, $\{ S(k)\}$ is bounded with probability one.
$\blacksquare$
\subsection{Convergence} The following result gives the main convergence result for the SA-based distributed algorithm \eqref{dy1}.
\begin{thm}\label{thmcov} Suppose Assumptions \ref{assp}-\ref{asscn} hold. Let the sequences $ \{ x_{i}(k)\},~ \{ \lambda_{i}(k) \},~ \{ z_{i}(k) \}$ be produced by \eqref{dy1} for any finite initial values $x_{i}(0),~\lambda_{i}(0),~ z_{i}(0)$. Then $$ \lim_{k \rightarrow \infty} x_{i }(k)=x_i^{*}~~ a.s.,$$
where $X^{*}=col\{x_1^{*}, \cdots, x_n^{*}\}$ is the optimal resource allocation to problem \eqref{problem}. \end{thm}
{\bf Proof:} { Note that $\theta_k$, $Y_k$, $g(\theta)$ and $\Phi$ for \eqref{constrained} correspond to $\theta_k=S(k) $, $Y_k=J(S(k)) + \xi(k)$, $g(\theta)=J(S)$ and $\Phi=\Omega \times \mathbf{R}^{mn} \times \mathbf{R}^{mn}$ for \eqref{compact}. Then we can apply Theorem \ref{CSA} in Appendix to prove the conclusion, and it suffices to check conditions C1-C4 given in Appendix.
Since $S(k)$ is adapted to $\mathcal{F}_{k-1}$, by \eqref{compon3} we derive
$$E[\|Y_k\|^2|\mathcal{F}_{k-1}] \leq \| J(S(k)) \|^2+c_1 \| S(k)\|^2+c_2~~a.s.$$ Then by Lemma \ref{lembound}, \eqref{inequality01} and Assumption \ref{assp} we conclude that C1 holds. From \eqref{rootf} and Lemma \ref{lemnoise} it is easily seen that C2 holds. By the definition of $J(S)$ in \eqref{rootf} and Assumption \ref{assp}, C3 holds. Since $\{ S(k)\}$ is bounded with probability one by Lemma \ref{lembound}, C4 holds as well.
As a result, C1-C4 hold. Since $\Phi=\Omega \times \mathbf{R}^{mn} \times \mathbf{R}^{mn}$, with Assumption \ref{assset} it is easily seen that $\Phi$ satisfies the same conditions as those imposed on $\Omega_i$.
Then, by Theorem \ref{CSA}, $S(k)$ converges with probability one to
the invariant set of \eqref{dy2}. Thus, by Lemma \ref{lemode}, $X(k)$ converges with probability one to the optimal solution $X^{*}$. }
$\blacksquare$
\section{Demand Response Management and Simulations}
In this section, we apply the RA optimization model \eqref{problem} and algorithm \eqref{dy1}
to distributed multi-period demand response management in power systems (see \cite{stevon2} and \cite{peng2}).
Suppose that a group of load aggregators (indexed by $\mathcal{N}=\{1,\cdots,n\}$) need to decide their load demands over the following $T$ periods, $P_i^d\in \mathbf{R}^T$, in order to meet the generation schedule $P_i^g \in \mathbf{R}^T$ and minimize their disutilities. $P_i^g$ is usually determined by other decision processes based on generator unit commitment or real-time generation prediction of renewables; it is fixed and assumed to be known or observable only to agent $i$. Aggregator $i$ formulates its local objective function $f_i(P^d_i)$ to capture the costs or disutilities due to the demand response $P_i^d$. { Moreover, $P_i^d\in \Omega_i$ specifies the local response constraints, which cover the lower and upper bounds in each period, the total demand over the following $T$ periods, ramping constraints, and other local specifications.} Hence, the multi-period demand response management problem is formulated as: \begin{equation}\label{problem2} \begin{array}{lll} \min_{P^d_i \in \mathbf{R}^T,i\in \mathcal{N}} & \; & \sum_{i\in \mathcal{N}} f_i(P^d_i)\\ \text{subject to} &\;& \sum_{i\in \mathcal{N}} P^d_i = \sum_{i\in\mathcal{ N}} P^g_i, \quad P_i^d \in \Omega_i. \end{array} \end{equation}
In many practical cases, $P_i^g$ can only be observed indirectly through local measurements of wind speed, solar radiation, or local frequency deviation, and hence suffers from various observation noises. In addition, $f_i(P_i^d)$ should take full account of users' demand requirements, (dis)utility, satisfaction levels, and payoffs, and hence is influenced by various external factors, such as temperature, electricity price, and renewable generation. Therefore, the gradient observation of $f_i(P_i^d)$ may also be noisy. The aggregators may share information through wireless communication networks with switching topologies and noisy channels. As a result, algorithm \eqref{dy1} can be applied to handle the above challenges for problem \eqref{problem2}. Compared with the previous works \cite{stevon2} and \cite{peng2}, the model proposed here considers multi-period demand response and local load response feasibility constraints, and the algorithm can handle various observation and communication uncertainties, which may be more practical in many cases.
In what follows, we give a numerical experiment to illustrate the algorithm performance.
\begin{exa} Consider the following three-period demand response management problem: \begin{equation}\label{exam_problem} \begin{array}{lll} &&\min_{P^d_i \in \mathbf{R}^{3}, i\in \mathcal{N} } \sum_{i\in \mathcal{N}} E_{\Psi_i,\theta_i}[ {P_i^d}^T (Q_i + \Psi_i) P_i^d + (c_i + \theta_i)^T P^d_i] \\ && s.t. \qquad \qquad \sum_{i\in \mathcal{N}} P^d_i =\sum_{i \in \mathcal{N}} P^g_i \\ && \quad \qquad \qquad R_i P_i^d \leq l_i, R_i \in \mathbf{R}^{12\times 3}, l_i\in \mathbf{R}^{12\times 1}, i\in \mathcal{N}, \end{array} \end{equation} { where $R_i P_i^d \leq l_i$ is the compact form of the following local load response feasibility constraints: $ [l_i]_{11} \leq \mathbf{1}^T P_i^d \leq [l_i]_{21}$, $[l_i]_{31}\leq [P_i^d]_{11}-[P^d_i]_{21}\leq [l_i]_{41} $,
$[l_i]_{51}\leq [P_i^d]_{21}-[P_i^d]_{31}\leq [l_i]_{61}$,
$ [l_i]_{71} \leq [P_i^d]_{11} \leq [l_i]_{81} $,
$ [l_i]_{91} \leq [P_i^d]_{21} \leq [l_i]_{10,1} $ and
$ [l_i]_{11,1} \leq [P_i^d]_{31} \leq [l_i]_{12,1} $. }
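The compact form $R_i P_i^d \leq l_i$ above can be assembled mechanically from the six two-sided constraints: each bound pair $a \leq c^{T} P \leq b$ contributes the rows $c^{T}P \leq b$ and $-c^{T}P \leq -a$. A minimal sketch of this assembly (assuming NumPy; the helper name and the sample bounds are illustrative, not from the paper):

```python
import numpy as np

def build_constraints(l):
    """Build (R, b) with R @ P <= b encoding, for T = 3 periods,
    the two-sided constraints described in the text:
      l[0] <= sum(P) <= l[1]        (total demand)
      l[2] <= P[0]-P[1] <= l[3]     (ramping, periods 1-2)
      l[4] <= P[1]-P[2] <= l[5]     (ramping, periods 2-3)
      l[6] <= P[0] <= l[7], l[8] <= P[1] <= l[9], l[10] <= P[2] <= l[11].
    Each pair a <= c.P <= b yields rows c.P <= b and -c.P <= -a,
    giving R of size 12 x 3."""
    rows = [np.ones(3),                 # total demand
            np.array([1., -1., 0.]),    # ramping, periods 1-2
            np.array([0., 1., -1.]),    # ramping, periods 2-3
            np.array([1., 0., 0.]),     # per-period bounds
            np.array([0., 1., 0.]),
            np.array([0., 0., 1.])]
    R, b = [], []
    for k, c in enumerate(rows):
        lo, hi = l[2 * k], l[2 * k + 1]
        R.append(c);  b.append(hi)      # upper bound:  c.P <= hi
        R.append(-c); b.append(-lo)     # lower bound: -c.P <= -lo
    return np.vstack(R), np.array(b)

# sample feasibility check with illustrative bounds
R, b = build_constraints([0., 10., -1., 1., -1., 1., 0., 5., 0., 5., 0., 5.])
P = np.array([2., 2., 2.])
assert R.shape == (12, 3) and np.all(R @ P <= b)
```

The row ordering follows the listing of the bounds in $l_i$ above, with lower bounds entering through a sign flip.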
The basic simulation settings are as follows. The number of agents is set to $10$. $Q_i $ and $c_i$ are randomly generated symmetric positive definite matrices and random vectors, respectively. The vectors $P^g_i$ and $l_i$ are also randomly generated so that Assumptions \ref{assp} and \ref{assset} hold. \begin{figure}
\caption{ The averaged trajectories of some agents' allocation variables
}
\label{fig_max1}
\end{figure}
Consider a graph set $\mathcal{G}_s$ containing $30$ graphs, each generated according to the random graph model $G(10,P)$, where $P$ is the probability of occurrence of any possible edge. The probability $P$ is drawn uniformly at random from $[0.05,0.1]$ for each graph in $\mathcal{G}_s$. The set $\mathcal{G}_s$ is selected so that its union graph is connected. At time $k$, a graph is drawn from $\mathcal{G}_s$ uniformly at random.
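The graph-set generation just described can be sketched as follows (assuming NumPy; the helper names are illustrative): draw $30$ Erd\H{o}s-R\'enyi graphs $G(10, P)$ with $P$ uniform on $[0.05, 0.1]$ and resample until the union graph is connected.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_graph(n, p):
    """Adjacency matrix of an Erdos-Renyi graph G(n, p): undirected,
    no self-loops, each edge present independently with probability p."""
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    return A + A.T

def is_connected(A):
    """Depth-first search from node 0; connected iff all nodes reached."""
    n = A.shape[0]
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in np.nonzero(A[v])[0]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

# resample the graph set until its union graph is connected
while True:
    graphs = [random_graph(10, rng.uniform(0.05, 0.1)) for _ in range(30)]
    union = (sum(graphs) > 0).astype(int)
    if is_connected(union):
        break
```

With these parameters the union graph is connected with high probability, so the resampling loop terminates quickly.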
For $i \in \mathcal{N}$, $[\Psi_i]_{ij},~[\theta_i]_j $ are i.i.d. random variables with the Gaussian distribution $N(0,0.5)$, i.e., zero mean and variance $0.5$. Let the generation observation noise $\delta_i$ and the communication noises $\zeta_{ij}$, $\epsilon_{ij}$ be i.i.d. random vectors with the Gaussian distribution $N(\mathbf{0},I_3)$, i.e., zero mean vector and covariance matrix $I_3$. Hence, Assumptions \ref{assd} and \ref{asscn} are satisfied. The stepsize $\alpha_k$ in \eqref{dy1} is set as $\alpha_k=\frac{1}{(k+1)^{0.6}}$.
{\bf Experiment 1}: Given a randomly generated graph set
$\mathcal{G}_s$ and a randomly generated setting for problem \eqref{exam_problem}, we apply algorithm \eqref{dy1} to generate $200$ independent sample paths with an iteration length of $8000$. Figure \ref{fig_max1} shows the averaged trajectories of some agents' allocation variables, and illustrates how the agents find the optimal allocation. Moreover, Figure \ref{fig_max2} shows the averaged trajectories of some algorithm performance indexes, including the distance to the optimal solution $\|P^d-{P^d}^* \|$, the function value $f(P^d)$, $\|\bar{L}\Lambda \|$, and $ \| \sum_{i\in \mathcal{N}} (P_i^d-P_i^g)\|$.
\begin{figure}
\caption{ The averaged trajectories of some performance indexes.
}
\label{fig_max2}
\end{figure}
{\bf Experiment 2}: Let us randomly generate a graph set $\mathcal{G}_s$ and a setting for problem \eqref{exam_problem} at each round of this simulation, and employ algorithm \eqref{dy1} to generate one sample path for this setting with an iteration length of $8000$. We repeat the procedure for $100$ rounds, and use Figure \ref{fig_max3} to show the histogram of some performance indexes at iteration time $8000$. It illustrates that algorithm \eqref{dy1} can almost surely find the optimal allocation for different problem settings with only one sample path.
\begin{figure}
\caption{ The histogram of some performance indexes at iteration time $k=8000$.
}
\label{fig_max3}
\end{figure}
\end{exa}
\section{Conclusions} In this paper, an SA-based distributed algorithm was proposed to solve a class of RA optimization problems under various uncertainties. Gradient and resource observation noises were taken into consideration, and the communication network was assumed to have randomly switching topologies and noisy communication channels. The algorithm was proved to converge to the optimal solution with probability one by resorting to the ODE method for SA algorithms, which demonstrates the potential of SA algorithms and ODE methods for distributed decision problems over network systems under noisy data observations.
{ \section*{Appendix} Here we recall the convergence result for constrained stochastic approximation.
Consider \begin{equation} \label{constrained} \theta_{k+1}=P_{\Phi} \{ \theta_k+\alpha_kY_k\}, \end{equation} where $\Phi \subset \mathbf{R}^m$ is a convex constraint set. The following conditions are used in the convergence analysis.
C1: $\sup_k E [ \| Y_k\|^2]<\infty. $
C2: There is a measurable function $g(\cdot)$ such that
$$ E_k[Y_k]=E[Y_k|\theta_0,Y_i, i<k]=g(\theta_k) .$$
C3: $g(\cdot)$ is continuous.
C4: $\theta_k$ is bounded with probability one.
\begin{thm} \label{CSA}\cite[Theorem 5.2.1 and Theorem 5.2.3] {sa} Let C1-C4 and \eqref{stepsize} hold for algorithm \eqref{constrained}. If $\Phi$ satisfies the same condition as that imposed on $\Omega_i$ in Assumption \ref{assset}, then with probability one $\theta_k$ converges to the invariant set of the following projected ODE in $\Phi$: $$\dot{\theta}=g(\theta)+z ,$$ where $z\in -N_{\Phi} (\theta)$ is the minimum force needed to keep the trajectories of the projected ODE in $\Phi.$
\end{thm} }
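As a toy illustration of the projected scheme \eqref{constrained} and Theorem \ref{CSA}, the following sketch (assuming NumPy; the one-dimensional problem and all constants are illustrative, not from the paper) runs the iteration with $g(\theta)=3-\theta$ and $\Phi=[0,2]$. Since the root $\theta^{*}=3$ lies outside $\Phi$, the iterates converge to the boundary point $\theta=2$, where the projection term $z$ balances the drift.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(theta, lo=0.0, hi=2.0):
    """Projection P_Phi onto the interval Phi = [lo, hi]."""
    return min(max(theta, lo), hi)

theta = 0.0
for k in range(20000):
    a_k = 1.0 / (k + 1) ** 0.6          # stepsize satisfying the usual SA conditions
    Y_k = (3.0 - theta) + rng.normal()  # noisy observation of g(theta) = 3 - theta
    theta = project(theta + a_k * Y_k)

# the iterates settle at the boundary point of Phi closest to the root
assert abs(theta - 2.0) < 0.1
```

The invariant set of the projected ODE $\dot\theta = g(\theta) + z$ here is the single point $\theta=2$, which is what the iterates approach.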
The following lemma gives convergence properties of nonnegative supermartingales. \begin{lem}[Robbins-Siegmund](\cite{supermds}) \label{marg} Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\mathcal{F}_0\subset \mathcal{F}_1 \subset\cdots $ be a sequence of sub-$\sigma$-algebras of $\mathcal{F}$. Let $\{d_k\}$ and $\{w_k\}$ be nonnegative $\mathcal{F}_{k}$-measurable random variables such that
$$ E[d_{k+1}| \mathcal{F}_{k}] \leq (1+\alpha_k) d_k + w_k ,$$ where $\alpha_k \geq 0$ are deterministic scalars with $\sum_{k=1}^{\infty}\alpha_k <\infty$. If $\sum_{k=1}^{\infty} w_k < \infty$, then $\{d_k\}$ converges with probability one to some finite random variable. \end{lem}
\end{document} |
\begin{document}
\title{\Large\bf $W$-entropy formulas on super Ricci flows and Langevin deformation on Wasserstein space over Riemannian manifolds} \author{Songzi Li\footnote{Supported by a Postdoctoral Fellowship at Beijing Normal University.}, \ \ \ Xiang-Dong Li\thanks{Research supported by NSFC No. 11371351 and Key Laboratory RCSDS, CAS, No. 2008DP173182.}\\
}
\maketitle
\thispagestyle{empty} \begin{minipage}{120mm} In this survey paper, we give an overview of our recent works on the study of the $W$-entropy for the heat equation associated with the Witten Laplacian on super-Ricci flows and the Langevin deformation on Wasserstein space over Riemannian manifolds. Inspired by Perelman's seminal work on the entropy formula for the Ricci flow, we prove the $W$-entropy formula for the heat equation associated with the Witten Laplacian on $n$-dimensional complete Riemannian manifolds with the $CD(K, m)$-condition, and the $W$-entropy formula for the heat equation associated with the time dependent Witten Laplacian on $n$-dimensional compact manifolds equipped with a $(K, m)$-super Ricci flow, where $K\in \mathbb{R}$ and $m\in [n, \infty]$. Furthermore, we prove an analogue of the $W$-entropy formula for the geodesic flow on the Wasserstein space over Riemannian manifolds. Our result recaptures an important result due to Lott and Villani on the displacement convexity of the Boltzmann-Shannon entropy on Riemannian manifolds with non-negative Ricci curvature. To better understand the similarity between the above two $W$-entropy formulas, we introduce the Langevin deformation of geometric flows on the cotangent bundle over the Wasserstein space and prove an extension of the $W$-entropy formula for the Langevin deformation. Finally, we discuss the $W$-entropy for the Ricci flow from the point of view of statistical mechanics and probability theory. \end{minipage}
\vskip1cm \noindent{\it MSC2010 Classification}: primary 58J35, 58J65; secondary 60J60, 60H30.
\noindent{\it Keywords}: $W$-entropy, Witten Laplacian, Langevin deformation, $(K, m)$-super Ricci flows.
\section{Introduction}
Entropy was introduced by R. Clausius \cite{Cl1} in 1865 in the study of thermodynamics. In 1872, L. Boltzmann \cite{Btz1} introduced the ${\rm H}$-entropy and formally derived the ${\rm H}$-theorem for the evolution equation of the probability distribution of ideal gas (now called the Boltzmann equation). The statistical interpretation of the {\rm H}-entropy was given by Boltzmann \cite{Btz2} in 1877. In 1948, C. Shannon \cite{Shan} introduced the Shannon entropy in the theory of communication and transformation of information. In 1958, J. Nash \cite{Na} used the Boltzmann entropy to study the continuity of solutions of parabolic and elliptic equations.
Entropy has since become an important tool in many areas of mathematics. For example, the Kolmogorov-Sinai entropy plays an important role in the study of dynamical systems and ergodic theory, the exponential decay of the Boltzmann entropy is closely related to logarithmic Sobolev inequalities, and the rate function in Sanov's theorem in the theory of large deviations is the relative Boltzmann entropy with respect to the reference measure. More recently, the displacement convexity of the Boltzmann-Shannon entropy or the R\'enyi entropy has been a key tool in Lott, Villani and Sturm's works \cite{LoV, Lo2, V1, V2, St1, St2, St3} to develop analysis and geometry on metric measure spaces.
In 1982, R. Hamilton \cite{H1} introduced the Ricci flow and initiated the program to prove the Poincar\'e conjecture and the Thurston geometrization conjecture using the Ricci flow (see also \cite{H2}). In a seminal paper \cite{P1}, Perelman gave a gradient flow reformulation for the Ricci flow and proved the $W$-entropy formula along the conjugate heat equation of the Ricci flow. More precisely, let $M$ be an $n$-dimensional compact manifold and define \begin{eqnarray*}
\mathcal{F}(g, f)=\int_M (R+|\nabla f|^2)e^{-f}dv, \end{eqnarray*} where $g\in \mathcal{M}=\{{\rm Riemannian\ metric \ on}\ M\}$, $f\in C^\infty(M)$, $R$ denotes the scalar curvature on $(M, g)$, and $dv$ denotes the volume measure on $(M, g)$. Under the constraint condition that the weighted volume measure \begin{eqnarray*} d\mu=e^{-f}dv\end{eqnarray*} is fixed, Perelman \cite{P1} proved that the gradient flow of $\mathcal{F}$ with respect to the standard $L^2$-metric on $\mathcal{M}\times C^\infty(M)$ is given by the following modified Ricci flow for $g$ together with the conjugate heat equation for $f$, i.e., \begin{eqnarray*} \partial_t g&=&-2(Ric+\nabla^2 f),\\ \partial_t f&=&-\Delta f-R. \end{eqnarray*} Moreover, Perelman \cite{P1} introduced the $W$-entropy as follows \begin{eqnarray}
W(g, f, \tau)=\int_M \left[\tau(R+|\nabla f|^2)+f-n\right]{e^{-f}\over
(4\pi\tau)^{n/2}}dv,\label{entropy-1}
\end{eqnarray} where $\tau>0$, and $f\in C^\infty(M)$ such that $$ \int_M (4\pi\tau)^{-n/2}e^{-f}dv=1,$$ and proved that if $(g(t), f(t), \tau(t))$ satisfies the evolution equations \begin{eqnarray} \partial_t g&=&-2Ric,\label{RF}\\
\partial_t f&=&-\Delta f+|\nabla f|^2-R+\frac{n} {2\tau},\label{r-c}\\ \partial_t \tau&=&-1,\nonumber \end{eqnarray} then the following remarkable $W$-entropy formula holds \begin{eqnarray}
{d\over dt}W(g, f, \tau)=2 \int_M \tau\left|Ric+\nabla^2 f-{g\over 2\tau}\right|^2{e^{-f}\over (4\pi \tau)^{n/2}}dv.\label{Entropy-P} \end{eqnarray} In particular, the $W$-entropy is monotonic increasing in $t$ and the monotonicity is strict except that $(M, g(\tau), f(\tau))$ is a shrinking Ricci soliton, i.e., \begin{eqnarray*} Ric+\nabla^2 f={g\over 2\tau}.\label{SRS} \end{eqnarray*} As an application, Perelman \cite{P1} proved the no local collapsing theorem, which ``removes the major stumbling block in Hamilton's approach to geometrization'' and plays an important role in the final resolution of the Poincar\'e conjecture and Thurston's geometrization conjecture.
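The simplest example of the rigidity case is the Gaussian shrinking soliton on Euclidean space: on $(\mathbb{R}^n, g_{\rm eucl})$ with $f(x, \tau)={|x|^2\over 4\tau}$ and $\tau(t)=\tau_0-t$, one has $R=0$, $Ric=0$ and $\nabla^2 f={g\over 2\tau}$, so that $Ric+\nabla^2 f-{g\over 2\tau}=0$ and the right hand side of $(\ref{Entropy-P})$ vanishes. Indeed, a direct computation gives \begin{eqnarray*} W(g, f, \tau)=\int_{\mathbb{R}^n}\left({|x|^2\over 2\tau}-n\right){e^{-{|x|^2\over 4\tau}}\over (4\pi\tau)^{n/2}}dx=0, \end{eqnarray*} since $\int_{\mathbb{R}^n}|x|^2 (4\pi\tau)^{-n/2}e^{-{|x|^2\over 4\tau}}dx=2n\tau$. Hence the $W$-entropy is constant in $t$ along this solution.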
It is natural and interesting to ask what the hidden idea was that led Perelman to introduce the mysterious $W$-entropy, why he called the quantity in $(\ref{entropy-1})$ the $W$-entropy, and whether there is an essential link between the $W$-entropy and the Boltzmann entropy in statistical mechanics and probability theory.
Inspired by Perelman \cite{P1} and related works \cite{N1, N2}, the second author of this paper proved in \cite{Li07} the $W$-entropy formula for the heat equation of the Witten Laplacian on compact
Riemannian manifolds with the $CD(0, m)$-condition and gave a probabilistic interpretation of the $W$-entropy for the Ricci flow. Later, the $W$-entropy formula and a rigidity theorem for the $W$-entropy were proved in \cite{Li12, Li13} for the fundamental solution to the heat equation of the Witten Laplacian on complete
Riemannian manifolds with the $CD(0, m)$-condition, and the $W$-entropy formula was proved in \cite{Li11} for the Fokker-Planck equation of the
Witten Laplacian on complete
Riemannian manifolds with the $CD(0, m)$-condition. The relationship between Perelman's $W$-entropy formula for the Ricci flow and the Boltzmann
${\rm H}$-theorem for the Boltzmann equation was discussed in \cite{Li12b} from the point of view of statistical mechanics. In \cite{LL15a, LL15b, LL17a, LL17b, SLi15}, we extended the $W$-entropy formula to the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$-condition and on compact Riemannian manifolds equipped with $(K, m)$-super Ricci flows. Moreover, we proved in \cite{LL16, SLi15} an analogue of the $W$-entropy formula for the geodesic flow on the Wasserstein space over Riemannian manifolds with the $CD(0, m)$-condition, which recaptures an important result due to Lott and Villani \cite{LoV, Lo2} on the displacement convexity of the Boltzmann-Shannon entropy on the Wasserstein space over Riemannian manifolds with non-negative Ricci curvature. To better understand the similarity between the $W$-entropy formula for the heat equation of the Witten Laplacian and the $W$-entropy formula for the geodesic flow on the Wasserstein space over Riemannian manifolds, we introduced in \cite{LL16, SLi15} the Langevin deformation of geometric flows on the Wasserstein space over Riemannian manifolds, which interpolates between the backward gradient flow of the Boltzmann-Shannon entropy and the geodesic flow on the Wasserstein space, and proved an extension of the $W$-entropy formula for the Langevin deformation. Rigidity models are also proposed for the Langevin deformation of flows. In particular, two rigidity theorems were proved in \cite{Li12, Li13, LL16, SLi15} for the gradient flow of the Boltzmann-Shannon entropy and the geodesic flow on the Wasserstein space over complete Riemannian manifolds with the $CD(0, m)$-condition.
The purpose of this survey paper is to give an overview of our works in \cite{Li07, Li12, Li12b, Li13, LL15a, LL15b, LL16, LL17a, LL17b, SLi15} and to make a discussion on the $W$-entropy for the Ricci flow from the point of view of statistical mechanics and probability theory.
In 2016, the second author of this paper was invited to give a Special Invited Talk at the 2016 Autumn Meeting of the Mathematical Society of Japan. This paper is an improved version of the abstract for that meeting. He would like to thank the committee members of the Mathematical Society of Japan, in particular Professor S. Aida and Professor K. Kuwae, for their interest and invitation. We would like to thank Professor Feng-Yu Wang for inviting us to submit this paper as a Special Invited Paper to SCIENCE CHINA Mathematics and the Mathematical Society of Japan for their permission.
\section{$W$-entropy formulas for Witten Laplacian on Riemannian manifolds}
Since Perelman's preprint \cite{P1} was posted on arXiv in 2002, many people have studied the $W$-like entropy for other geometric flows on Riemannian manifolds \cite{N1, N2, Ec, LNVV, KN}. Let $(M, g)$ be an $n$-dimensional complete Riemannian manifold with a fixed metric (and with bounded geometry condition), and let $$u={e^{-f}\over (4\pi t)^{n/2}}$$ be a positive solution to the linear heat equation \begin{eqnarray} \partial_t u=\Delta u \label{heat-1} \end{eqnarray} with $\int_M u(x, 0)dv(x)=1$. In \cite{N1, N2}, Ni introduced the $W$-entropy for the linear heat equation $(\ref{heat-1})$ by \begin{eqnarray}
W(f, t)=\int_M \left[t |\nabla f|^2+f-n\right]{e^{-f}\over
(4\pi t)^{n/2}}dv,\label{entropy-2} \end{eqnarray} and proved the following $W$-entropy formula \begin{eqnarray} {d\over dt}W(f, t)&=&-2\int_M t\left(
\left|\nabla^2 f-{g\over 2t}\right|^2+Ric(\nabla f, \nabla f)\right){e^{-f}\over (4\pi t)^{n/2}}dv. \label{NiW} \end{eqnarray} In particular, the $W$-entropy for the linear heat equation $(\ref{heat-1})$ is decreasing on complete Riemannian manifolds with non-negative Ricci curvature. In \cite{LX}, Li and Xu extended Ni's $W$-entropy formula $(\ref{NiW})$ to the heat equation $\partial_t u=\Delta u$ on complete Riemannian manifolds with fixed metric satisfying $Ric\geq -Kg$, where $K\geq 0$ is a constant.
Let $(M, g)$ be a complete Riemannian manifold, $\phi\in C^2(M)$. Let $d\mu=e^{-\phi}dv$, where $dv$ is the Riemannian volume measure on $(M, g)$. The Witten Laplacian, also called the weighted Laplacian, \begin{eqnarray*} L =\Delta -\nabla \phi\cdot\nabla \label{WL} \end{eqnarray*} is a self-adjoint and non-negative operator on $L^2(M, \mu)$. By It\^o's calculus, one can construct the symmetric diffusion process $X_t$ associated with the Witten Laplacian by solving the SDE \begin{eqnarray*} dX_t=\sqrt{2}dW_t-\nabla\phi(X_t)dt, \end{eqnarray*} where $W_t$ is the Brownian motion on $M$. Moreover, it is well known that the transition probability density of the diffusion process $X_t$ is the fundamental solution to the heat equation of $L$, i.e., the heat kernel of the Witten Laplacian $L$. In view of this, it is a fundamental problem to study the heat equation and related properties of the Witten Laplacian on Riemannian manifolds under various geometric conditions.
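As a quick numerical illustration of this construction (a sketch assuming NumPy; the choice $\phi(x)=x^2/2$ on $\mathbb{R}$ and all discretization parameters are illustrative), an Euler-Maruyama discretization of the SDE recovers the symmetric invariant measure $e^{-\phi}\,dx$ up to normalization, here the standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of dX_t = sqrt(2) dW_t - grad(phi)(X_t) dt
# on R with phi(x) = x^2/2 (an Ornstein-Uhlenbeck process). Its symmetric
# invariant measure is proportional to e^{-phi} dx, i.e. the standard Gaussian.

def grad_phi(x):
    return x

dt, n_steps, n_paths = 0.01, 5000, 2000
X = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    X = X + np.sqrt(2.0) * dW - grad_phi(X) * dt

# after long time the empirical moments match N(0, 1)
assert abs(X.mean()) < 0.1 and abs(X.var() - 1.0) < 0.1
```

The total simulated time ($50$ units) is far beyond the mixing time of this process, so the terminal samples are essentially drawn from the invariant measure.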
To develop the study of the $W$-entropy formula for the heat equation of the Witten Laplacian, we need to introduce some notations. Let $n={\rm dim}~ M$, and $m\geq n$ a constant. Following Bakry and Emery \cite{BE}, we introduce \begin{eqnarray*} Ric_{m, n}(L)=Ric+\nabla^2\phi-{\nabla\phi\otimes\nabla\phi\over m-n}, \end{eqnarray*} and call it the $m$-dimensional Bakry-Emery Ricci curvature associated with the Witten Laplacian $L$ on $(M, g, \phi)$. We make the convention that $m=n$ if and only if $L=\Delta$ and $\phi$ is a constant. In this case, $Ric_{m, n}(L)=Ric$. When $m=\infty$, we introduce \begin{eqnarray*} Ric(L)=Ric+\nabla^2\phi. \end{eqnarray*} Following Bakry and Emery \cite{BE}, we say that the $CD(K, m)$ condition holds for the Witten Laplacian $L=\Delta-\nabla\phi\cdot\nabla$ on $(M, g, \phi)$ if and only if $Ric_{m, n}(L)\geq K$, where $K\in \mathbb{R}$ and $m\in [n, \infty]$.
\subsection{The case of $CD(0, m)$-condition}
In \cite{Li07, Li12, Li13}, inspired by Perelman's work on the $W$-entropy formula for Ricci flow, the second author of this paper proved the $W$-entropy formula for the heat equation associated with the Witten-Laplacian on complete Riemannian manifolds with the $CD(0, m)$-condition, which extends the above mentioned result due to Ni \cite{N1, N2}. More precisely, we have
\begin{theorem}\label{Th-A} (\cite{Li07, Li12, Li13}) Let $(M, g)$ be a compact Riemannian manifold, or a complete Riemannian manifold with bounded geometry condition\footnote{Here we say that $(M, g)$ satisfies the bounded geometry condition if the Riemannian curvature tensor ${\rm Riem}$ and its covariant derivatives $\nabla^k {\rm Riem}$ are uniformly bounded on $M$, $k=1, 2, 3$.}, and $\phi\in C^4(M)$ with $\nabla\phi\in C_b^3(M)$. Let $m\in [n, \infty)$, and $u={e^{-f}\over (4\pi t)^{m/2}}$ be a positive solution of the heat equation $\partial_t u=Lu$ when $M$ is compact, or the fundamental solution associated with the Witten Laplacian, i.e., the heat kernel to the heat equation $\partial_t u=Lu$, when $M$ is complete non-compact. Let \begin{eqnarray*} H_m(u, t)=-\int_M u\log u d\mu-{m\over 2}(1+\log(4\pi t)). \end{eqnarray*} Define the $W$-entropy for the Witten-Laplacian by \begin{eqnarray*} W_m(u, t)={d\over dt}(tH_m(u)). \end{eqnarray*} Then \begin{eqnarray}
W_m(u, t)=\int_M\left[t|\nabla f|^2+f-m\right]{e^{-f}\over (4\pi t)^{m/2}}d\mu,\label{WW-0} \end{eqnarray} and \begin{eqnarray}
{d\over dt} W_m(u, t)&=&-2\int_M t \left(\left|\nabla^2 f-{g\over 2t}\right|^2+Ric_{m, n}(L)(\nabla f, \nabla f)\right)ud\mu\nonumber\\ & & \hskip2cm -{2\over m-n}\int_M t \left({\nabla\phi\cdot\nabla f}+{m-n\over 2t}\right)^2ud\mu.\label{W-1} \end{eqnarray} \end{theorem}
By direct calculation and integration by parts, we have \begin{eqnarray*} {d\over dt}H_m(u, t)=-\int_M \left(L\log u+{m\over 2t}\right)ud\mu.\label{Hentropy2} \end{eqnarray*} By \cite{Li05, Li12}, if $Ric_{m, n}(L)\geq 0$, the generalized Li-Yau Harnack inequality (\cite{LY}) holds \begin{eqnarray*} L\log u+{m\over 2t}\geq 0, \ \ \ \forall t>0.\label{LY-1} \end{eqnarray*} Therefore, $H_m(u, t)$ is non-increasing in time $t$ for the heat equation $\partial_t u=Lu$ on complete Riemannian manifolds with the $CD(0, m)$-condition, i.e., $Ric_{m, n}(L)\geq 0$.
As a corollary of Theorem \ref{Th-A}, if $(M, g, \phi)$ is complete Riemannian manifold with the bounded geometry condition and $Ric_{m, n}(L)\geq 0$, then the $W$-entropy for the heat equation $\partial_t u=Lu$ is decreasing in time $t$, i.e., \begin{eqnarray*} {d\over dt}W_m(u, t)\leq 0, \ \ \ \ \ \ \ \forall t\geq 0. \end{eqnarray*} Moreover, under the condition $Ric_{m, n}(L)\geq 0$, it was proved in \cite{Li12} that $W_m(u, t)$ attains its minimum at some point $t=t_0>0$, i.e., \begin{eqnarray*} {d\over dt}W_m(u, t)=0 \ \ \ \ {\rm at\ some}\ \ t=t_0>0, \end{eqnarray*} if and only if $(M, g)$ is isometric to Euclidean space $\mathbb{R}^n$, $m=n$, $\phi\equiv C$ for a constant $C\in \mathbb{R}$, and \begin{eqnarray*}
u(x, t)={e^{-{\|x\|^2\over 4t}}\over (4\pi t)^{n/2}}, \ \ \ \ \forall x\in \mathbb{R}^n, t>0. \end{eqnarray*} In other words, the Euclidean space $\mathbb{R}^n$ is the unique equilibrium state for the $W$-entropy of the Witten-Laplacian in the statistical ensemble of complete Riemannian manifolds $(M, g, \phi)$ with bounded geometry condition and with the $CD(0, m)$-condition.
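The equilibrium state above admits a quick Monte Carlo sanity check (a sketch assuming NumPy; the sample size, dimension and $t$ are illustrative): for the Gaussian heat kernel on $\mathbb{R}^n$ with $m=n$ and $\phi$ constant, $f=|x|^2/(4t)$, so $t|\nabla f|^2+f-n=|x|^2/(2t)-n$, whose expectation under $u\,dx$ vanishes since $|x|^2$ has expectation $2nt$ there.

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical check that the W-entropy vanishes for the Gaussian heat
# kernel u = (4 pi t)^{-n/2} exp(-|x|^2/(4t)) on R^n (the rigidity case
# m = n, phi constant): with f = |x|^2/(4t),
#   W(u, t) = E[ t |grad f|^2 + f - n ] = E[ |x|^2/(2t) ] - n = 0.
n, t = 3, 0.7
x = rng.normal(scale=np.sqrt(2 * t), size=(200000, n))  # samples from u dx
vals = (x ** 2).sum(axis=1) / (2 * t) - n
assert abs(vals.mean()) < 0.05
```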
In \cite{LL15a}, we gave a new proof of Theorem \ref{Th-A} by using the warped product approach. Let $m\in \mathbb{N}$, $m\geq n$. Let $\widetilde{M}=M\times N$, where $(N, g_N)$ is a compact Riemannian manifold with dimension $m-n$. Let $\phi\in C^2(M)$. We consider the following warped product metric on $\widetilde{M}$: \begin{eqnarray*} \widetilde{g}=g_M\bigoplus e^{-{2\phi\over m-n}}g_N.\label{WPM} \end{eqnarray*} Let $\nu_N$ be the normalized volume measure on $N$. Then the volume measure on $(\widetilde{M}, \widetilde{g})$ is $$dvol_{\widetilde{M}}=d\mu\otimes d\nu_N.$$ Let $\widetilde\nabla$ be the Levi-Civita connection on $(\widetilde{M}, \widetilde{g})$, $\widetilde{\nabla}^2$ and $\widetilde{\Delta}$ the Hessian and the Laplace-Beltrami operator on $(\widetilde{M}, \widetilde{g})$. By direct calculation, we have \begin{eqnarray}
\left|\widetilde{\nabla}^2 f-{\widetilde{g}\over 2t}\right|^2
=\left|\nabla^2 f-{g\over 2t}\right|^2+{1\over m-n}\left({\nabla\phi\cdot\nabla f}+{m-n\over 2t}\right)^2,\label{ccc} \end{eqnarray} and \begin{eqnarray*} \widetilde{\Delta}=L+e^{-{2\phi\over m-n}}\Delta_{N}. \end{eqnarray*}
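For completeness, here is a sketch of how the identity $(\ref{ccc})$ follows from the standard warped product formula for the Hessian. Writing $h=e^{-{\phi\over m-n}}$, for $f$ lifted from $M$ and vertical vector fields $V, W$ one has \begin{eqnarray*} \widetilde\nabla^2 f(V, W)=(\nabla \log h\cdot\nabla f)\,\widetilde g(V, W)=-{\nabla\phi\cdot\nabla f\over m-n}\,\widetilde g(V, W), \end{eqnarray*} while the horizontal block of $\widetilde\nabla^2 f$ is $\nabla^2 f$. Summing the squared norms of the two blocks of $\widetilde\nabla^2 f-{\widetilde g\over 2t}$, the vertical block (of dimension $m-n$) contributes \begin{eqnarray*} (m-n)\left({\nabla\phi\cdot\nabla f\over m-n}+{1\over 2t}\right)^2={1\over m-n}\left(\nabla\phi\cdot\nabla f+{m-n\over 2t}\right)^2, \end{eqnarray*} which gives $(\ref{ccc})$.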
\noindent {\bf A new proof of Theorem \ref{Th-A} (\cite{LL15a})}.\ \ To avoid technical issues, we only consider the case of compact manifolds. Let $u={e^{-f}\over (4\pi t)^{m/2}}$ be a positive solution to the heat equation $\partial_t u=Lu$. Then it satisfies the following heat equation on $(\widetilde{M}, \widetilde{g})$ \begin{eqnarray*} \partial_t u=\widetilde{\Delta} u. \end{eqnarray*} Since $f$ depends only on the variable in the $M$-direction, we have $\widetilde\nabla f=\nabla f$. Therefore the $W$-entropy $W_m(u, t)$ defined by $(\ref{WW-0})$ coincides with the $W$-entropy $\widetilde{W}_{m}(u, t)$ defined on $(\widetilde{M}, \widetilde{g})$ as follows \begin{eqnarray*}
\widetilde{W}_m(u, t)=\int_{\widetilde{M}}\left[t|\widetilde\nabla f|^2+f-m\right]{e^{-f}\over (4\pi t)^{m/2}}dvol_{\widetilde{M}}. \end{eqnarray*} Applying Ni's $W$-entropy formula $(\ref{NiW})$ to $(\widetilde{M}, \widetilde{g})$, we have \begin{eqnarray}
{d\over dt}\widetilde{W}_m(u, t)&=&-2\int_{\widetilde{M}} t \left(\left|\widetilde\nabla^2 f-{\widetilde g\over 2t}\right|^2 +\widetilde{Ric}(\widetilde\nabla f, \widetilde\nabla f)\right)ud\mu dv_N.\label{WP-W-1} \end{eqnarray} By $(\ref{ccc})$ and $\widetilde{Ric}(\widetilde\nabla f, \widetilde\nabla f)=Ric_{m, n}(L)(\nabla f, \nabla f)$, we derive $(\ref{W-1})$ from $(\ref{WP-W-1})$.
$\square$
\begin{remark}\label{rem1} {\rm One of the advantages of the above proof is that: when $m\in \mathbb{N}$ and $m>n$, the quantity ${1\over {m-n}}\left(\nabla \phi\cdot \nabla f+{m-n\over 2t}\right)^2$ appeared in the $W$-entropy formula in Theorem \ref{Th-A} has a natural geometric interpretation. It corresponds to the vertical component of the quantity $\left|\widetilde\nabla^2 f-{\widetilde{g}\over 2t}\right|^2$ on $\widetilde{M}=M\times N$ equipped with the warped product metric $(\ref{WPM})$. } \end{remark}
\subsection{The case of $CD(K, m)$-condition}
Theorem \ref{Th-A} can be viewed as the $W$-entropy formula for the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(0, m)$-condition. It is natural to ask whether we can extend Theorem \ref{Th-A} to the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$-condition for general $K\in \mathbb{R}$ and $m\in [n, \infty]$.
In \cite{LL15a}, we extended Theorem \ref{Th-A} to the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$-condition for $K\in \mathbb{R}$ and $m\in [n, \infty)$.
\begin{proposition} (\cite{LL15a})\label{prop-LYHarnack} Let $(M, g)$ be a complete Riemannian manifold with the bounded geometry condition, and let $\phi\in C^4(M)$ satisfy the conditions in Theorem \ref{Th-A}.
Let $u$ be a positive solution to the heat equation $\partial_t u=Lu$. Then, under the $CD(-K, m)$-condition, i.e., $Ric_{m, n}(L)\geq -K$, where $K\in \mathbb{R}$ and $m\in [n, \infty)$, the following Harnack inequality holds \begin{eqnarray*}
{|\nabla u|^2\over u^2}-\left(1+{2\over 3}Kt\right){\partial_t u\over u}\leq {m\over 2t}+{mK\over 2}\left(1+{Kt\over 3}\right), \ \ \ \forall t>0. \label{HIKm} \end{eqnarray*} \end{proposition}
\begin{theorem} (\cite{LL15a, LL15b}) \label{Th-B} Let $u=\frac{e^{-f}}{(4\pi t)^{m/2}}$ be the fundamental solution to the heat equation $\partial_t u=Lu$. Under the same assumption as in Theorem \ref{Th-A}, define \begin{eqnarray} H_{m,K}(u, t)=-\int_M u\log u d\mu-{m\over 2}(1+\log(4\pi t))-\frac m2Kt\Big(1+\frac16Kt\Big), \label{HKm} \end{eqnarray} and the $W$-entropy by the Boltzmann formula \begin{eqnarray} W_{m, K}(u, t)={d\over dt}(tH_{m,K}(u)). \label{WKm} \end{eqnarray} Then \begin{eqnarray*}
W_{m, K}(u, t)=\int_M\left(t|\nabla f|^2+f-m\Big(1+\frac12Kt\Big)^2\right)u d\mu,\label{WmK} \end{eqnarray*} and \begin{eqnarray}
& &\frac{d}{dt}W_{m, K}(u, t)+2t\int_M\left(\Big|\nabla^2 f-\left(\frac1{2t}+\frac K2\right) g\Big|^2\right)u d\mu\nonumber\\ & &\hskip2cm +{2t\over m-n}\int_M \left(\nabla \phi\cdot\nabla f+(m-n)\Big(\frac1{2t}+\frac K2\Big)\right)^2u\ d\mu\nonumber\\ & &\hskip3cm =2t\int_M ({\rm Ric}_{m,n}(L)+Kg)(\nabla f, \nabla f)u\ d\mu.\label{WmK20} \end{eqnarray} In particular, if the $CD(-K, m)$-condition holds, i.e., $Ric_{m, n}(L)\geq -K$, then \begin{eqnarray*} {d\over dt}H_{m, K}(u, t)\leq 0, \end{eqnarray*} and \begin{eqnarray*} \frac{d}{dt}W_{m, K}(u, t) \leq 0. \end{eqnarray*} Moreover, under the $CD(-K, m)$-condition, the left hand side in $(\ref{WmK20})$ equals to zero at some $t=t_0>0$ if and only if $(M, g, \phi)$ is a $(-K, m)$-Ricci soliton, i.e., \begin{eqnarray*} Ric_{m, n}(L)=-Kg. \end{eqnarray*} \end{theorem}
\subsection{The case of $CD(K, \infty)$-condition}
When $m=\infty$, the definition formulas $(\ref{HKm})$ and $(\ref{WKm})$ can no longer be used to introduce $H_{K, \infty}(u, t)$ and to define the $W$-entropy for the Witten Laplacian on Riemannian manifolds with the $CD(K, \infty)$-condition. Based on the reverse logarithmic Sobolev inequality due to Bakry and Ledoux \cite{BL}, we proved the following result.
\begin{theorem} (\cite{LL15b, LL17b, SLi15}) \label{Th-C} Let $M$ be a complete Riemannian manifold satisfying the bounded geometry condition, and $\phi\in C^4(M)$ with $\nabla\phi\in C_b^3(M)$. Suppose that $Ric+\nabla^2\phi\geq K$, where $K\in \mathbb{R}$ is a constant. Let $u(\cdot, t)=P_tf$ be a positive solution to the heat equation $\partial_t u=Lu$ with $u(\cdot, 0)=f$, where $f$ is a positive and measurable function on $M$. Let \begin{eqnarray*} H_{K}(f, t)=D_K(t)\int_M (f\log f-P_tf\log P_tf )d\mu, \end{eqnarray*} where $D_0(t)={1\over t}$ and $D_{K}(t)={2K\over 1-e^{-2Kt}}$ for $K\neq 0$. Then for all $t>0$ \begin{eqnarray*} {d\over dt}H_{K}(f, t)\leq 0, \end{eqnarray*} and for all $t>0$, we have \begin{eqnarray*} {d^2\over dt^2}H_K(f, t)+2K\coth(Kt) {d\over dt}H_K(f, t)
\leq - 2D_K(t)\int_M |\nabla^2\log P_tf|^2P_tfd\mu. \end{eqnarray*} \end{theorem}
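A routine Taylor expansion (included here as a consistency check, not taken from the cited papers) shows that the two normalizations in Theorem \ref{Th-C} are compatible, i.e., $D_K(t)\rightarrow D_0(t)={1\over t}$ as $K\rightarrow 0$. Indeed, \begin{eqnarray*} D_K(t)=\frac{2K}{1-e^{-2Kt}}=\frac{2K}{2Kt-2K^2t^2+O(K^3)}=\frac{1}{t}\cdot\frac{1}{1-Kt+O(K^2)}\longrightarrow \frac{1}{t}, \ \ \ K\rightarrow 0. \end{eqnarray*} Note also that $1-e^{-2Kt}$ has the same sign as $K$ for $t>0$, so $D_K(t)>0$ for every $K\in \mathbb{R}$.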
Theorem \ref{Th-C} suggests a new way to introduce the $W$-entropy for the Witten Laplacian on Riemannian manifolds with the $CD(K, \infty)$-condition, namely \begin{eqnarray*} W_K(f, t)=H_K(f, t)+{\sinh(2Kt)\over 2K}{d\over dt}H_K(f, t). \end{eqnarray*} With this definition, we proved in \cite{LL15b} that, for all $t>0$, \begin{eqnarray}
& &{d\over dt}W_{K}(f, t)+(e^{2Kt}+1)\int_M |\nabla^2
\log P_tf|^2 P_tf d\mu\nonumber\\ & & \hskip1cm =-(e^{2Kt}+1)\int_M (Ric(L)-Kg)(\nabla\log P_tf, \nabla\log P_tf)P_tfd\mu.\label{WW1a} \end{eqnarray} In particular, for all $t>0$, we have \begin{eqnarray*} {d\over dt}W_{K}(f, t)\leq 0. \end{eqnarray*} Moreover, under the $CD(K, \infty)$-condition, the left hand side in $(\ref{WW1a})$ equals zero at some $t=t_0>0$ if and only if $(M, g, \phi)$ is a $K$-Ricci soliton, i.e., \begin{eqnarray*} Ric+\nabla^2\phi=Kg. \end{eqnarray*}
\section{$W$-entropy formulas for Witten Laplacian on $(K, m)$-super Ricci flows}
In Section $2$, we extend the $W$-entropy formula to the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$-condition. It is an interesting question whether we can further extend the $W$-entropy formula to the heat equation associated with the time dependent Witten Laplacian on compact or complete Riemannian manifolds with time dependent metrics and potentials.
We now introduce the notion of the $(K, m)$-super Ricci flow on manifolds with time dependent metrics and potentials. By definition, we call $(M, g(t), \phi(t), t\in [0, T])$ a $(K, m)$-super Ricci flow if \begin{eqnarray*} {1\over 2}{\partial g\over \partial t}+Ric_{m, n}(L)\geq Kg, \ \ \ \ \forall ~ t\in [0, T], \end{eqnarray*} where $K\in \mathbb{R}$ and $m\in [n, \infty]$ are two constants. Note that a Riemannian manifold carries a stationary $(K, m)$-super Ricci flow (i.e., one with $(g(t), \phi(t))$ independent of time) if and only if the $CD(K, m)$-condition holds, i.e., \begin{eqnarray*} Ric_{m, n}(L)\geq Kg. \end{eqnarray*} In the case $m=n$, the notion of the $(K, n)$-super Ricci flow is indeed the $K$-super Ricci flow in geometric analysis \begin{eqnarray*} {1\over 2}{\partial g\over \partial t}+Ric\geq Kg, \ \ \ \ \forall ~ t\in [0, T], \end{eqnarray*} and in the case $m=\infty$, the $(K, \infty)$-super Ricci flow equation reads \begin{eqnarray*} {1\over 2}{\partial g\over \partial t}+Ric(L)\geq Kg, \ \ \ \ \forall ~ t\in [0, T]. \end{eqnarray*} In view of this, the Perelman Ricci flow is indeed the $(0, \infty)$-Ricci flow together with the conjugate heat equation \begin{eqnarray*} {\partial g\over \partial t}&=&-2Ric(L),\\
\ \ \ {\partial\phi\over \partial t}&=&{1\over 2}{\rm Tr} \left({\partial g\over \partial t}\right). \end{eqnarray*}
Let $(M, g(t), \phi(t), t\in [0, T])$ be a complete Riemannian manifold with a family of time dependent metrics $g(t)$ and potentials $\phi(t)$. Let $$L=\Delta_{g(t)}-\nabla_{g(t)}\phi(t)\cdot\nabla_{g(t)}$$ be the time dependent Witten Laplacian on $(M, g(t), \phi(t))$. Let $$d\mu(t)=e^{-\phi(t)}dvol_{g(t)}.$$ Suppose that \begin{eqnarray} {\partial \phi\over \partial t}={1\over 2}{\rm Tr}\left( {\partial g\over \partial t}\right).\label{conjugate} \end{eqnarray} Then $\mu(t)$ is independent of $t\in [0, T]$, i.e., \begin{eqnarray*} {\partial d\mu(t)\over \partial t}=0, \ \ \ t\in [0, T]. \end{eqnarray*}
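The invariance of $\mu(t)$ is a one-line computation, sketched here for completeness: by the first variation formula for the Riemannian volume, $\partial_t\, dvol_{g(t)}={1\over 2}{\rm Tr}\left({\partial g\over \partial t}\right) dvol_{g(t)}$, whence \begin{eqnarray*} {\partial \over \partial t}\,d\mu(t)={\partial \over \partial t}\left(e^{-\phi(t)}\,dvol_{g(t)}\right)=e^{-\phi(t)}\left(-{\partial \phi\over \partial t}+{1\over 2}{\rm Tr}\left({\partial g\over \partial t}\right)\right)dvol_{g(t)}=0, \end{eqnarray*} where the last equality is exactly the conjugate equation $(\ref{conjugate})$.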
We now state the main results of this section, which extend Theorems~\ref{Th-A}, \ref{Th-B} and \ref{Th-C} to the heat equation associated with the time dependent Witten Laplacian on compact manifolds with a $(K, m)$-super Ricci flow, where $K\in \mathbb{R}$ and $m\in [n, \infty]$.
\subsection{The case of $(0, m)$-super Ricci flow}
In \cite{LL15a}, we proved the $W$-entropy formula for the heat equation associated with the time dependent Witten Laplacian on compact manifolds equipped with a $(0, m)$-super Ricci flow, which can be regarded as the $m$-dimensional analogue of Perelman's $W$-entropy formula for the Ricci flow.
\begin{theorem}\label{Th-D} (\cite{LL15a}) Let $(M, g(t), \phi(t), t\in [0, T])$ be a compact manifold with a family of time dependent metrics and $C^2$-potentials. Suppose that $g(t)$ and $\phi(t)$ satisfy the conjugate equation $(\ref{conjugate})$. Let $u={e^{-f}\over (4\pi t)^{m/2}}$ be a positive solution of the heat equation \begin{eqnarray*} \partial_t u = Lu \end{eqnarray*} with initial data $u(0)$ satisfying $\int_M u(0)d\mu(0)=1$. Let \begin{eqnarray*} H_m(u, t)=-\int_M u\log u d\mu-{m\over 2}(1+\log(4\pi t)). \end{eqnarray*} Define \begin{eqnarray*} W_m(u, t)={d\over dt}(tH_m(u)). \end{eqnarray*} Then \begin{eqnarray*}
W_m(u, t)=\int_M \left[t|\nabla f|^2+f-m\right]ud\mu, \end{eqnarray*} and \begin{eqnarray}
& &{d\over dt}W_m(u, t)+2t\int_M \left|\nabla^2 f-{g\over 2t}\right|^2ud\mu+{2t\over m-n}\int_M \left(\nabla \phi\cdot \nabla f+{m-n\over 2t}\right)^2 ud\mu\nonumber\\ & &\hskip3cm =-2t\int_M \left({1\over 2}{\partial g\over \partial t}+Ric_{m, n}(L)\right)(\nabla f, \nabla f)ud\mu.\label{NW} \end{eqnarray} In particular, if $\{g(t), \phi(t), t\in (0, T]\}$ is a $(0, m)$-super Ricci flow and satisfies the conjugate equation $(\ref{conjugate})$, then $W_m(u, t)$ is decreasing in $t\in (0, T]$, i.e., \begin{eqnarray*} {d\over dt}W_m(u, t)\leq 0, \ \ \ \forall t\in (0, T]. \end{eqnarray*} Moreover, the left hand side in $(\ref{NW})$ is identically zero on $(0, T]$ if and only if $(M, g(t), \phi(t), t\in (0, T])$ is a $(0, m)$-Ricci flow in the sense that \begin{eqnarray*} {\partial g\over \partial t}&=&-2{\rm Ric}_{m,n}(L),\\ {\partial \phi\over \partial t}&=&{1\over 2} {\rm Tr}\left( {\partial g\over \partial t}\right). \end{eqnarray*}
\end{theorem}
\subsection{The case of $(K, m)$-super Ricci flow}
In general, we have the following result, which extends Theorem \ref{Th-B} to $(K, m)$-super Ricci flows for general $K\in \mathbb{R}$ and $m\in [n, \infty)$.
\begin{theorem}\label{Th-E} (\cite{LL15a, LL15b}) Under the same notation as in Theorem \ref{Th-D}, define \begin{eqnarray} H_{m,K}(u, t)=-\int_M u\log u d\mu-{m\over 2}(1+\log(4\pi t))-\frac m2Kt\Big(1+\frac16Kt\Big), \label{HmK} \end{eqnarray} and \begin{align} W_{m,K}(u, t)={d\over dt}(tH_{m,K}(u)). \label{WmK} \end{align} Then \begin{eqnarray*}
W_{m,K}(u, t)=\int_M\left[t|\nabla f |^2+f-m\Big(1+\frac12Kt\Big)^2\right]ud\mu,\label{WmK-0} \end{eqnarray*} and \begin{align}\label{WMK}
& {d\over dt}W_{m,K}(u, t)+2 t\int_M\Big|\nabla^2 f-\left(\frac1{2t}+\frac{K}{2}\right)g\Big|^2u d\mu\nonumber\\ &\ \ \ \ +\frac{2t}{m-n}\int_M\left(\nabla \phi\cdot\nabla f+(m-n)\Big(\frac1{2t}+\frac K2\Big)\right)^2u d\mu\nonumber\\ &\ \ \ \ \ =-2 t\int_M\left({1\over 2}{\partial g\over \partial t}+{\rm Ric}_{m,n}(L)+Kg\right)(\nabla f, \nabla f) ud\mu. \end{align} In particular, if $(M, g(t), \phi(t), t\in (0, T])$ is a $(-K, m)$-super Ricci flow and satisfies the conjugate equation $(\ref{conjugate})$, then $W_{m,K}(u, t)$ is decreasing in $t\in (0, T]$, i.e., \begin{eqnarray*} {d\over dt}W_{m,K}(u, t)\leq 0, \ \ \ \forall t\in (0, T]. \end{eqnarray*} Moreover, the left hand side in $(\ref{WMK})$ is identically zero on $(0, T]$ if and only if $(M, g(t), \phi(t), t\in (0, T])$ is a $(-K, m)$-Ricci flow in the sense that \begin{eqnarray*} {\partial g\over \partial t}&=&-2({\rm Ric}_{m,n}(L)+Kg),\\ {\partial \phi\over \partial t}&=&{1\over 2} {\rm Tr}\left( {\partial g\over \partial t}\right). \end{eqnarray*} \end{theorem}
\subsection{The case of $(K, \infty)$-super Ricci flow}
In \cite{LL15b, LL17b, SLi15}, we proved the equivalence between the $(K, \infty)$-super Ricci flow and two families of
logarithmic Sobolev inequalities for the time dependent Witten Laplacian on Riemannian manifolds with time dependent metrics and potentials. Based on this result, we have the following $W$-entropy formula for the time dependent Witten Laplacian on compact Riemannian manifolds with $(K, \infty)$-super Ricci flow, which can be viewed as the natural extension of the $W$-entropy formula for the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, \infty)$-condition.
\begin{theorem}\label{Th-F} (\cite{LL15b, LL17b, SLi15}) Let $(M, g(t), \phi(t), t\in [0, T])$ be a compact $(K, \infty)$-super Ricci flow satisfying the conjugate heat equation $(\ref{conjugate})$. Let $u(\cdot, t)=P_tf$ be a positive solution to the heat equation $\partial_t u=Lu$ with $u(\cdot, 0)=f$, where $f$ is a positive and measurable function on $M$. Define \begin{eqnarray*} H_{K}(f, t)=D_K(t)\int_M (f\log f-P_tf\log P_tf )d\mu, \end{eqnarray*} where $D_0(t)={1\over t}$ and $D_{K}(t)={2K\over 1-e^{-2Kt}}$ for $K\neq 0$. Then for all $t\in [0, T]$ \begin{eqnarray*} {d\over dt}H_{K}(f, t)\leq 0, \end{eqnarray*} and for all $t\in (0, T]$, we have \begin{eqnarray*} {d^2\over dt^2}H_K(f, t)+2K\coth(Kt) {d\over dt}H_K(f, t)
\leq - 2D_K(t)\int_M |\nabla^2\log P_tf|^2P_tfd\mu. \end{eqnarray*} Define the $W$-entropy by the revised Boltzmann entropy formula \begin{eqnarray*} W_K(f, t)=H_K(f, t)+{\sinh(2Kt)\over 2K}{d\over dt}H_K(f, t). \end{eqnarray*} Then for all $t\in (0, T]$, we have \begin{eqnarray}
& &{d\over dt}W_{K}(f, t)+(e^{2Kt}+1)\int_M |\nabla^2
\log P_tf|^2 P_tf d\mu\nonumber\\ & &\hskip0.5cm =-(e^{2Kt}+1)\int_M \left({1\over 2}{\partial g\over \partial t}+Ric(L)-Kg\right)(\nabla\log P_tf, \nabla\log P_tf)P_tfd\mu.\label{WW2} \end{eqnarray} In particular, for all $t\in (0, T]$, we have \begin{eqnarray*} {d\over dt}W_{K}(f, t)\leq 0. \end{eqnarray*} Moreover, the left hand side of $(\ref{WW2})$ is identically zero on $(0, T]$ if and only if $(M, g(t), \phi(t))$ is the $(K, \infty)$-Ricci flow satisfying the conjugate equation $(\ref{conjugate})$, i.e., for all $t\in (0, T]$, \begin{eqnarray*} {\partial g\over \partial t}&=&-2(Ric+\nabla^2\phi-Kg), \\
{\partial\phi\over \partial t}&=&-R-\Delta \phi+nK. \end{eqnarray*}
\end{theorem}
\section{$W$-entropy formula for geodesic flow on Wasserstein space}
Starting from Brenier's work \cite{Br, BB} on the Monge-Kantorovich optimal transport problem with quadratic cost function, Otto, Lott, McCann, Villani and Sturm \cite{Ot, OtV, LoV, Lo2, V1, V2, St1, St2, St3} among others have developed the optimal transport theory. In particular, they developed an infinite dimensional Riemannian geometry and the theory of the gradient flow on the Wasserstein space over Euclidean space, compact Riemannian manifolds and metric measure spaces. The displacement convexity of the Boltzmann-Shannon entropy or the Renyi entropy along geodesics on the Wasserstein space has been a key tool in \cite{LoV, Lo2, V1, V2, St1, St2, St3} to introduce the notions of the upper bound of the dimension and the lower bound of the Ricci curvature on metric measure spaces. In \cite{MT}, McCann and Topping proved the contraction property of the $L^2$-Wasserstein distance between solutions of the backward heat equation on closed manifolds equipped with the Ricci flow, which extends previous results for the Fokker-Planck equation on Euclidean space (due to Otto \cite{Ot}) and on complete Riemannian manifolds with suitable Bakry-Emery curvature condition (due to Sturm and von Renesse \cite{StR} ). See also \cite{T1, T2}. In \cite{Lo2}, Lott further proved two convexity results of the Boltzmann-Shannon type entropy along the geodesics on the Wasserstein space over closed manifolds equipped with the backward Ricci flow, which are closely related to Perelman's result on the monotonicity of the $W$-entropy for the Ricci flow. In \cite{LL13b}, we extended Lott's convexity results to the Wasserstein space on compact Riemannian manifolds equipped with the backward Perelman Ricci flow.
Let $(M, g)$ be a complete Riemannian manifold equipped with a weighted volume measure $d\mu=e^{-\phi}dv$, where $\phi\in C^2(M)$ and $dv$ denotes the volume measure on $(M, g)$. The Boltzmann-Shannon entropy of the probability measure $\rho d\mu$ with respect to the reference measure $\mu$ is defined by \begin{eqnarray*} {\rm Ent}(\rho):= \int_M \rho \log \rho d\mu. \end{eqnarray*}
Let $P_2(M, \mu)$ (resp. $P_2^\infty(M, \mu)$) be the Wasserstein space (resp. the smooth Wasserstein space) of all probability measures $\rho(x)d\mu(x)$ with density function (resp. with smooth density function) $\rho$ on $M$ such that $\int_M d^2(o, x)\rho(x)d\mu(x)<\infty$, where $d(o, \cdot)$ denotes the distance function from a fixed point $o\in M$. Following Otto \cite{Ot} and Lott \cite{Lo1, Lo2}, the tangent space $T_{\rho d\mu}P_2^\infty(M, \mu)$ is identified as follows \begin{eqnarray*}
T_{\rho d\mu}P_2^\infty(M, \mu)=\left\{s=\nabla_\mu^*(\rho \nabla f): f\in C^\infty(M), \ \ \int_M |\nabla f|^2\rho d\mu<\infty\right\}, \end{eqnarray*} where $\nabla_\mu^*$ denotes the $L^2$-adjoint of the Riemannian gradient $\nabla$ with respect to the weighted volume measure $d\mu$ on $(M, g)$. For $s_i=\nabla_\mu^*(\rho\nabla f_i)\in T_{\rho d\mu} P_2^\infty(M, \mu)$, we introduce Otto's infinite dimensional Riemannian metric on $P_2^\infty(M, \mu)$ as follows \begin{eqnarray*} \langle \langle s_1, s_2\rangle\rangle:=\int_M \nabla f_1\cdot \nabla f_2 \rho d\mu, \end{eqnarray*} provided that \begin{eqnarray*}
\|s_i\|^2:=\int_M |\nabla f_i|^2\rho d\mu<\infty, \ \ \ i=1, 2. \end{eqnarray*} Let $T_{\rho d\mu}P_2(M, \mu)$ be the completion of $T_{\rho d\mu}P_2^\infty (M, \mu)$ equipped with Otto's infinite dimensional Riemannian metric. Then $P_2(M, \mu)$ is an infinite dimensional Riemannian manifold.
By Benamou and Brenier \cite{BB}, for any given $\mu_i=\rho_i d\mu\in P_2(M, \mu)$, $i=0, 1$, the $L^2$-Wasserstein distance between $\mu_0$ and $\mu_1$ coincides with the geodesic distance between $\mu_0$ and $\mu_1$ in $P_2(M, \mu)$ equipped with Otto's infinite dimensional Riemannian metric, i.e., \begin{eqnarray*}
W_2^2(\mu_0, \mu_1)=\inf\left\{\int_0^1\int_M |\nabla f(x, t)|^2\rho(x, t)d\mu(x)dt: \partial_t \rho=\nabla_\mu^*(\rho \nabla f), \ \rho(0)=\rho_0, \ \rho(1)=\rho_1\right\}. \end{eqnarray*} By \cite{Mc}, given $\mu_0=\rho(\cdot, 0)\mu, ~ \mu_1=\rho(\cdot, 1)\mu\in P_2^\infty(M, \mu)$, it is known that there is a unique minimizing Wasserstein geodesic $\{\mu(t), t\in [0, 1]\}$ of the form $\mu(t) =(F_t)_*\mu_0$ joining $\mu_0$ and $\mu_1$ in $P_2(M, \mu)$, where $F_t \in {\rm Diff}(M)$ is given by $F_t(x) = \exp_x(-t \nabla f(x, 0))$ for an appropriate Lipschitz function $f(\cdot, t)$. See also \cite{Lo1, Lo2}.
If the Wasserstein geodesic in $P_2(M, \mu)$ belongs entirely to $P_2^\infty(M, \mu)$, then the geodesic flow $(\rho, f)\in T^*P_2^\infty(M, \mu)$ satisfies the transport equation and the Hamilton-Jacobi equation \begin{eqnarray} {\partial_t} \rho-\nabla_\mu^*(\rho \nabla f)&=&0,\label{TA}\\
{\partial_t}f+{1\over 2}|\nabla f|^2&=&0, \label{HJ} \end{eqnarray} with the boundary condition $\rho(0)=\rho_0$ and $\rho(1)=\rho_1$. When $\rho_0, f_0\in C^\infty(M)$, defining $f(\cdot, t)\in C^\infty(M)$ by the Hopf-Lax solution
\begin{eqnarray*} f(x, t)=\inf\limits_{y\in M}\left(f_0(y)+{d^2(x, y)\over 2t}\right),\label{HLS}
\end{eqnarray*}
and solving the transport equation $(\ref{TA})$ by the method of characteristics, it is known that $(\rho, f)$ satisfies $(\ref{TA})$ and $(\ref{HJ})$ with $\rho(0)=\rho_0$ and $f(0)=f_0$. See \cite{V1}, Sect.~5.4.7. See also \cite{Lo1, Lo2}.
In view of this, the transport equation $(\ref{TA})$
and the Hamilton-Jacobi equation $(\ref{HJ})$ describe the geodesic flow on the cotangent bundle $T^*P_2^\infty(M, \mu)$ over the Wasserstein space $P_2(M, \mu)$.
Note that the Hamilton-Jacobi equation $(\ref{HJ})$ is also called the eikonal equation in geometric optics.
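As a simple illustration of the Hopf-Lax formula (our example, not taken from the cited references), let $M=\mathbb{R}^n$ with the Lebesgue measure and $f_0(y)={\|y\|^2\over 2}$. The infimum is attained at $y={x\over 1+t}$, so that \begin{eqnarray*} f(x, t)=\inf_{y\in \mathbb{R}^n}\left({\|y\|^2\over 2}+{\|x-y\|^2\over 2t}\right)={\|x\|^2\over 2(1+t)}, \end{eqnarray*} and one checks directly that \begin{eqnarray*} \partial_t f+{1\over 2}|\nabla f|^2=-{\|x\|^2\over 2(1+t)^2}+{\|x\|^2\over 2(1+t)^2}=0, \end{eqnarray*} i.e., the Hamilton-Jacobi equation $(\ref{HJ})$ indeed holds.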
The main result of this section is the following $W$-entropy formula for the geodesic flow on the Wasserstein space $P_2^\infty(M, \mu)$.
\begin{theorem}\label{MT2} (\cite{LL16, SLi15}) Let $(M, g)$ be a compact Riemannian manifold, $\phi\in C^2(M)$, $d\mu=e^{-\phi}dv$. Let $\rho: M\times [0, T]\rightarrow\mathbb{R}^+ $ and $f: M\times [0,T]\rightarrow \mathbb{R}$ be smooth solutions to the transport equation $(\ref{TA})$ and the Hamilton-Jacobi equation $(\ref{HJ})$. For any $m\geq n$, define the $H_m$-entropy and $W_m$-entropy for the geodesic flow $(\rho, f)$ on $T^*P^\infty_2(M, \mu)$ as follows \begin{eqnarray*} H_m(\rho, t)=-{\rm Ent}(\rho(t))-{m\over 2}\left(1+\log(4\pi t^2)\right), \end{eqnarray*} and \begin{eqnarray*} W_m(\rho, t)={d\over dt}(tH_m(\rho, t)). \end{eqnarray*} Then for all $t>0$, we have \begin{eqnarray}
{d\over dt}W_m(\rho, t)&=&-t\int_M \left[\left|\nabla^2 f-{g\over t}\right|^2+Ric_{m, n}(L)(\nabla f, \nabla f) \right]\rho d\mu\nonumber\\
& &\ \ \ \ \ \ \ \ \ \ -{t \over m-n}\int_M \left|\nabla \phi\cdot
\nabla f+{m-n\over t}\right|^2 \rho d\mu.\label{Wgeo} \end{eqnarray} In particular, if $Ric_{m, n}(L)\geq 0$, then $W_m(\rho, t)$ is decreasing in time $t$ along the geodesic flow on $T^*P^\infty_2(M, \mu)$. \end{theorem}
As a corollary of Theorem \ref{MT2}, we can recapture the following beautiful result due to Lott and Villani \cite{LoV, Lo2}.
\begin{corollary}\label{Th-LV} (\cite{LoV, Lo2}) Let $(M, g, \phi)$ be a compact Riemannian manifold with $Ric_{m, n}(L)\geq 0$. Then $t{\rm Ent}(\rho(t))+mt\log t$ is convex in time $t$ along the geodesic on $P_2(M, \mu)$. \end{corollary}
\section{Comparison between Theorem \ref{Th-A} and Theorem \ref{MT2}}
In this section, we compare the $W$-entropy formula $(\ref{W-1})$ in Theorem \ref{Th-A} and the $W$-entropy formula $(\ref{Wgeo})$ in Theorem \ref{MT2}.
\begin{itemize}
\item The $W$-entropy formula $(\ref{W-1})$ for the heat equation of the Witten Laplacian in Theorem \ref{Th-A} and the $W$-entropy formula $(\ref{Wgeo})$ for the geodesic flow on the Wasserstein space in Theorem \ref{MT2} have similar expressions. Moreover, similarly to Corollary \ref{Th-LV}, from Theorem \ref{Th-A} we can derive the following
\begin{corollary} \label{Th-Li} Let $(M, g, \phi)$ be a compact Riemannian manifold with $Ric_{m, n}(L)\geq 0$. Then $t{\rm Ent}(u(t))+{m\over 2}t\log t$ is convex in time $t$ along the heat equation $\partial_t u=Lu$ on $M$. \end{corollary}
\item By \cite{Li12, Li13}, Theorem \ref{Th-A} and a rigidity theorem hold on complete Riemannian manifolds satisfying the bounded geometry condition and the $CD(0, m)$-condition: $W_m(u, t)$ achieves its minimum at some $t=t_0>0$ if and only if $M=\mathbb{R}^n$, $m=n$, and $u(x, t)=u_m(x, t)={1\over (4\pi t)^{m\over 2}}e^{-{\|x\|^2\over 4t}}$ is the heat kernel of the heat equation $\partial_t u=\Delta u$ on $\mathbb{R}^m$. Note that the Boltzmann-Shannon entropy of the Gaussian heat kernel measure $u_m(x, t)dx$ is given by $${\rm Ent}(u_m(t))=-{m\over 2}(1+\log(4\pi t)).$$ Thus the $H_m$-entropy for the heat equation of the Witten Laplacian is given by\footnote{Following Villani \cite{V1, V2}, we call $H_m(u(t))$ the {\it relative entropy} even though it is slightly different from the classical definition of the relative entropy in probability theory.} \begin{eqnarray*} H_m(u(t))={\rm Ent}(u_m(t))-{\rm Ent}(u(t)), \end{eqnarray*} and the $W_m$-entropy for the heat equation of the Witten Laplacian is given by the Boltzmann entropy formula \begin{eqnarray} W_m(u, t):={d\over dt}\left(t[{\rm Ent}(u_m(t))-{\rm Ent}(u(t))]\right).\label{Wm} \end{eqnarray} This gives a natural probabilistic interpretation of the $W$-entropy for the heat equation of the Witten Laplacian on Riemannian manifolds. See also Section $6$ for the probabilistic interpretation of the Perelman $W$-entropy for the Ricci flow.
\item On the other hand, when $m\in \mathbb{N}$, we can check that the following $(\rho_m, f_m)$ \begin{eqnarray*}
\rho_m(x, t)&=&{1\over (4\pi t^2)^{m/2}}e^{-{\|x\|^2\over 4t^2}},\\
f_m(x, t)&=&{\|x\|^2\over 2t}, \end{eqnarray*} where $t>0, x\in \mathbb{R}^m$, is a solution to the transport equation $(\ref{TA})$ and the Hamilton-Jacobi equation $(\ref{HJ})$ on $\mathbb{R}^m$ equipped with the standard Lebesgue measure, i.e., \begin{eqnarray*} {\partial_t} \rho+\nabla\cdot(\rho \nabla f)&=&0,\label{TAm}\\
{\partial_t}f+{1\over 2}|\nabla f|^2&=&0, \label{HJm} \end{eqnarray*} respectively. Moreover, the Boltzmann-Shannon entropy of the probability measure $\rho_m(x, t)dx$ (which equals $u_m(x, t^2)dx$) is given by \begin{eqnarray*} {\rm Ent}(\rho_m(t))=-{m\over 2}(1+\log(4\pi t^2)). \end{eqnarray*} Thus we can reformulate the $H_m$-entropy and the $W_m$-entropy for the geodesic flow on the Wasserstein space $P_2(M, \mu)$ as follows \begin{eqnarray} H_m(\rho(t))={\rm Ent}(\rho_m(t))-{\rm Ent}(\rho(t)), \label{NHW-2a} \end{eqnarray} and \begin{eqnarray} W_m(\rho, t):={d\over dt}\left(t[{\rm Ent}(\rho_m(t))-{\rm Ent}(\rho(t))]\right).\label{NHW-3a} \end{eqnarray}
\item The relative entropy $H_m(\rho(t))$ defined by $(\ref{NHW-2a})$ is the difference between the Boltzmann-Shannon entropy of the probability measure $\rho(t) d\mu$ on $(M,\mu)$ and the Boltzmann-Shannon entropy of the reference model $\rho_m(t)dx$ on $(\mathbb{R}^m, dx)$, and $W_m(\rho, t)$ defined by $(\ref{NHW-3a})$ is the time derivative of $tH_m(\rho(t))$. In \cite{LL16, SLi15}, similarly to the case of Theorem \ref{Th-A}, we extended the $W$-entropy formula $(\ref{Wgeo})$ in Theorem \ref{MT2} to complete Riemannian manifolds with bounded geometry condition. In view of this, we proved that the rigidity model for the $W$-entropy for the geodesic flow on the Wasserstein space $P_2^\infty(M, \mu)$ over complete Riemannian manifolds with the $CD(0, m)$-condition is $M=\mathbb{R}^n$, $m=n$, $\rho=\rho_m$ and $f=f_m$.
\end{itemize}
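The claim in the third item above can be checked by a direct computation, which we record for the reader's convenience. The Hamilton-Jacobi equation is immediate, since $\partial_t f_m=-{\|x\|^2\over 2t^2}$ and ${1\over 2}|\nabla f_m|^2={\|x\|^2\over 2t^2}$. For the transport equation, using $\nabla f_m={x\over t}$ and $\nabla \rho_m=-{x\over 2t^2}\rho_m$, we have \begin{eqnarray*} \partial_t \rho_m=\rho_m\left(-{m\over t}+{\|x\|^2\over 2t^3}\right), \qquad \nabla\cdot(\rho_m\nabla f_m)=\nabla\rho_m\cdot{x\over t}+{m\over t}\rho_m=\rho_m\left({m\over t}-{\|x\|^2\over 2t^3}\right), \end{eqnarray*} so that $\partial_t\rho_m+\nabla\cdot(\rho_m\nabla f_m)=0$. Note also that $\nabla^2 f_m={g\over t}$, so the pair $(\rho_m, f_m)$ annihilates the first term on the right hand side of $(\ref{Wgeo})$, consistent with its role as the rigidity model.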
\section{Langevin deformation of geometric flows on Wasserstein space}
A natural question is how to understand the similarity between the $W$-entropy formulas in Theorem \ref{Th-A} and Theorem \ref{MT2}: can we pass from one of them to the other? One possible approach is the vanishing viscosity limit from the heat equation to the Hamilton-Jacobi equation. However, it does not seem easy to use this approach to pass from the $W$-entropy formula for the heat equation of the Witten Laplacian to the $W$-entropy formula for the geodesic flow on the Wasserstein space.
Inspired by J.-M. Bismut's works (see \cite{Bis05, Bis10}) on the deformation of hypoelliptic Laplacians on the cotangent bundle over Riemannian manifolds, which interpolates between the usual Laplacian on the underlying Riemannian manifold $M$ and the Hamiltonian vector field generating the geodesic flow on the cotangent bundle over $M$, we introduced in \cite{LL16, SLi15} the Langevin deformation of geometric flows on the cotangent bundle of the Wasserstein space over compact Riemannian manifolds.
More precisely, for $c\in (0, \infty)$, let $(\rho, f)$ be a smooth solution to the following equations
\begin{eqnarray} \partial_t \rho-\nabla_\mu^*(\rho\nabla f)&=&0,\label{flow1}\\
c^2\left(\partial_t f+{1\over 2}|\nabla f|^2\right)&=&-f+V'(\rho), \label{flow2} \end{eqnarray} where $V\in C^\infty ((0, \infty), \mathbb{R})$. Eq.~$(\ref{flow1})$ is indeed the transport equation $(\ref{TA})$, while Eq.~$(\ref{flow2})$ can be viewed as the Langevin equation on $T^*P_2(M, \mu)$. When $c\rightarrow \infty$, Eq.~$(\ref{flow2})$ implies that $f$ should satisfy the Hamilton-Jacobi equation $(\ref{HJ})$. In this case, $(\rho, f)$ is indeed the geodesic flow on $T^*P_2(M, \mu)$. On the other hand, when $c=0$, Eq.~$(\ref{flow2})$ implies that $f=V'(\rho)$. In this case, $\rho$ is the backward gradient flow of $U(\rho)=\int_M V(\rho)d\mu$ on $P_2(M, \mu)$ equipped with Otto's infinite dimensional Riemannian metric \begin{eqnarray} \partial_t \rho=\nabla_\mu^*\left(\rho\nabla V'(\rho)\right). \label{gradflow} \end{eqnarray}
In \cite{LL16, SLi15}, we proved the existence and uniqueness of the Langevin deformation between the backward gradient flow of the Boltzmann-Shannon entropy ${\rm Ent}(\rho)=\int_M \rho \log \rho d\mu$ (respectively, the Renyi entropy $U(\rho)= {1\over m-1}\int_M \rho^{m}d\mu$ for $m>1$), which is the backward heat equation $\partial_t \rho=-L\rho$ (respectively, the backward porous medium equation $\partial_t \rho=-L\rho^m$) of the Witten Laplacian on $M$, and the geodesic flow on the cotangent bundle of the smooth Wasserstein space $P_2^\infty(M, \mu)$ over compact Riemannian manifolds. Moreover, we proved an extension of the $W$-entropy formula along the Langevin deformation of geometric flows. Rigidity models are also proposed for the Langevin deformation. Due to the length limitation of this paper, we refer the reader to \cite{LL16, SLi15} for the details of these results.
\section{The $W$-entropy, statistical mechanics and probability theory}
In \cite{P1}, Perelman gave a heuristic interpretation for the $W$-entropy using statistical mechanics. Recall that the partition function for the canonical ensemble at temperature $\beta^{-1}$ is given by $Z_\beta=\int_{\mathbb{R}} e^{-\beta E}d\omega(E)$, where $d\omega(E)$ denotes the ``density of states'' measure, whose physical meaning is the number of microstates with energy levels in the range $[E, E+dE]$. The average of the energy with respect to the Gibbs measure $dP(E)={e^{-\beta E} \over Z_\beta} d\omega(E)$ is $$\langle E\rangle=-{\partial\over \partial \beta}\log Z_\beta,$$ and the Boltzmann entropy $S$ satisfies the Boltzmann entropy formula \begin{eqnarray*} S=\log Z_\beta-\beta {\partial \over \partial \beta}\log Z_\beta. \end{eqnarray*} Equivalently, letting $\tau=\beta^{-1}$, then \begin{eqnarray} S= {\partial \over \partial \tau}(\tau \log Z_\beta).\label{S1} \end{eqnarray} The fluctuation of the energy is given by $$ \sigma:=\langle (E-\langle E\rangle)^2\rangle ={\partial^2 \over \partial \beta^2} \log Z_\beta,$$ and the derivative of the Boltzmann entropy with respect to $\beta$ satisfies \begin{eqnarray*} {\partial S\over \partial \beta}=-\beta \sigma.\label{S2} \end{eqnarray*}
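The equivalence between the Boltzmann entropy formula and $(\ref{S1})$ is a one-line change of variables, spelled out here for the reader's convenience: since $\tau=\beta^{-1}$, we have ${\partial\over \partial \tau}=-\beta^2{\partial\over \partial \beta}$, and hence \begin{eqnarray*} {\partial \over \partial \tau}\left(\tau \log Z_\beta\right)=\log Z_\beta+\tau{\partial \over \partial \tau}\log Z_\beta=\log Z_\beta-\beta{\partial \over \partial \beta}\log Z_\beta=S. \end{eqnarray*}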
Let $(M, g(\tau))$ be a family of closed Riemannian manifolds,
$dm(\tau)=(4\pi\tau)^{-n/2}e^{-f(\tau)}dv_{g(\tau)}$ a probability measure on $(M, g(\tau))$, where $g(\tau)$ satisfies the backward Ricci flow equation $\partial _\tau g=2Ric$, $f(\tau)$ satisfies the heat equation $\partial_\tau f=\Delta f-|\nabla f|^2+R-{n\over 2\tau}$ and $\tau=T-t$. Assume that there is a canonical ensemble with a ``density of state measure'' $d\omega(E)$ such that the partition function $Z_\beta=\int_{\mathbb{R}} e^{-\beta E}d\omega(E)$ is given by \begin{eqnarray} \log Z_\beta=\int_M \left(-f+{n\over 2}\right)dm,\label{logZ} \end{eqnarray} where $\beta={1\over \tau}$, and the backward time $\tau=T-t$ is regarded as the temperature. Then, using the above formulas in statistical mechanics, Perelman \cite{P1} formally derived that \begin{eqnarray*}
\langle E\rangle&=&-\tau^2\int_M \left(R+|\nabla f|^2-{n\over 2\tau}\right)dm,\\
S&=&-\int_M \left(\tau(R+|\nabla f|^2)+f-n\right)dm,\\
\sigma&=&2\tau^4\int_M\left|Ric+\nabla^2f-{g\over 2\tau}\right|^2dm. \end{eqnarray*} This yields \begin{eqnarray*} W(g, f, \tau)=-S,\label{W1} \end{eqnarray*} and \begin{eqnarray*}
{d\over dt}W(g, f, \tau)=2 \int_M \tau\left|Ric+\nabla^2 f-{g\over 2\tau}\right|^2dm.\label{W2} \end{eqnarray*} This gives an interpretation of the $W$-entropy $(\ref{entropy-1})$ for the backward Ricci flow by Boltzmann's entropy formula $(\ref{S1})$. However, the problem whether there is a canonical ensemble with a ``density of states'' measure $d\omega(E)$ such that the partition function $Z_\beta=\int_{\mathbb{R}} e^{-\beta E}d\omega(E)$ satisfies Perelman's requirement $(\ref{logZ})$ remains open. See \cite{Li12b} for further discussion on this issue.
In \cite{Li12, Li12b}, the second author of this paper gave a probabilistic interpretation of Perelman's $W$-entropy for the Ricci flow. We first observe that \begin{eqnarray*} \log Z_\beta={\rm Ent}(u(\tau))+{n\over 2}(1+\log (4\pi \tau)), \end{eqnarray*} where \begin{eqnarray*} {\rm Ent}(u(\tau))=\int_M u\log udv=-\int_M \left(f+{n\over 2}\log(4\pi \tau)\right){e^{-f}\over (4\pi\tau)^{n/2}}dv \end{eqnarray*} is the Boltzmann-Shannon entropy of the heat kernel measure $dm=u(\tau)dv_{g(\tau)}$ with respect to the volume measure $dv_{g(\tau)}$ on $(M, g(\tau))$, where $u={e^{-f}\over (4\pi\tau)^{n/2}}$. On the other hand, let
$$u_n(x, \tau)={e^{-{\|x\|^2\over 4\tau}}\over (4\pi \tau)^{n/2}}, \ \ \ \forall x\in \mathbb{R}^n, \tau>0$$ be the Gaussian heat kernel on $\mathbb{R}^n$. Then it is well-known that the Boltzmann-Shannon entropy of the Gaussian measure $d\gamma_n(\tau, x)=u_n(\tau, x)dx$ with respect to the Lebesgue measure is given by \begin{eqnarray*} {\rm Ent}(u_n)=-{n\over 2}(1+\log (4\pi \tau)). \end{eqnarray*} Hence $$\log Z_\beta={\rm Ent}(u(\tau))-{\rm Ent}(u_n(\tau))$$
is the difference between the Boltzmann-Shannon entropy of the heat kernel measure $dm=u(\tau)dv_{g(\tau)}$ on $(M, g(\tau))$ and the Boltzmann-Shannon entropy of the Gaussian measure $\gamma_n$ on $\mathbb{R}^n$. In view of this, we have the following probabilistic interpretation of the $W$-entropy for the Ricci flow \begin{eqnarray} W(g, f, \tau):={d\over d\tau}(\tau [{\rm Ent}(u_n(\tau))-{\rm Ent}(u(\tau))]). \label{WEnt} \end{eqnarray}
Similarly to $(\ref{WEnt})$, we can give a probabilistic interpretation of the $W$-entropy for the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(0, m)$-condition. See $(\ref{Wm})$ in Section $5$. By $(\ref{NHW-2a})$ and $(\ref{NHW-3a})$, we have a similar probabilistic interpretation of the $W$-entropy for the geodesic flow on the Wasserstein space over Riemannian manifolds with the $CD(0, m)$-condition. See \cite{Li12}.
It is natural and interesting to ask whether we can give a probabilistic interpretation of the $W$-entropy for the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$ and $CD(K, \infty)$-conditions. This question is closely related to whether there exist rigidity models of the $W$-entropy for the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$ and $CD(K, \infty)$-conditions.
In our recent paper \cite{LL17a}, we gave a probabilistic interpretation of the $W_{m, K}$-entropy for the heat equation of the Witten Laplacian on complete Riemannian manifolds with the $CD(K, m)$-condition. More precisely, let $m\in \mathbb{N}$, $M=\mathbb{R}^m$, $g_0$ the Euclidean metric, $\phi_K(x)=-{K\|x\|^2\over 2}$ and $d\mu_K(x)=e^{K\|x\|^2\over 2}dx$, where $K\in \mathbb{R}$. Then $\nabla\phi_K(x)=-Kx$, and $\nabla^2\phi_K=-K{\rm Id}_{\mathbb{R}^m}$. We consider the Ornstein-Uhlenbeck operator on $\mathbb{R}^m$ given by $$L=\Delta+Kx\cdot \nabla.$$ Note that $(\mathbb{R}^m, g_0, \phi_{K})$ is a complete gradient Ricci soliton, i.e., $Ric(L)=-Kg_0$. The Ornstein-Uhlenbeck diffusion process on $\mathbb{R}^m$ is the solution to the Langevin SDE $$dX_t=\sqrt{2} dW_t+KX_tdt, \ \ \ X_0=x.$$ It is well-known that the law of $X_t$ is Gaussian $N\left(e^{Kt}x, {e^{2Kt}-1\over K}{\rm Id}\right)$, and the heat kernel of $X_t$ with respect to the Lebesgue measure on $\mathbb{R}^m$ is given by \begin{eqnarray*}
u_{m, K}(x, y, t)=\left({K\over 2\pi (e^{2Kt}-1)}\right)^{m/2}\exp \left(-{K|y-e^{Kt}x|^2\over 2(e^{2Kt}-1)}\right). \end{eqnarray*} By direct calculation, the relative Boltzmann-Shannon entropy of the law of $X_t$ with respect to the Lebesgue measure on $\mathbb{R}^m$ is given by \begin{eqnarray*}
{\rm Ent}(u_{m, K}(x, y, t)| dy)=-{m\over 2}\left(1+\log(4\pi \sigma_K^2(t))\right), \end{eqnarray*} where $\sigma_K^2(t)={e^{2Kt}-1\over 2K}$. When $t\rightarrow 0$, we have \begin{eqnarray*}
{\rm Ent}(u_{m, K}(x, y, t) | dy)=-{m\over 2}\left(1+\log (4\pi t)+Kt+{K^2t^2\over 6}\right)+O(t^4). \end{eqnarray*}
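The vanishing of the cubic term in this expansion is what gives the $O(t^4)$ error, and it can be verified symbolically. The following sketch (assuming Python with \texttt{sympy}; it is a check of the displayed expansion, not part of the argument) expands $\log\big(\sigma_K^2(t)/t\big)$, the correction to $\log(4\pi t)$, in powers of $t$:

```python
import sympy as sp

t, K = sp.symbols('t K', positive=True)

# sigma_K^2(t) = (e^{2Kt} - 1) / (2K), the variance parameter above
sigma2 = (sp.exp(2*K*t) - 1) / (2*K)

# log(4*pi*sigma_K^2) = log(4*pi*t) + log(sigma_K^2 / t); expand the correction
corr = sp.series(sp.log(sigma2 / t), t, 0, 4).removeO()

assert sp.simplify(corr.coeff(t, 1) - K) == 0       # linear term: K*t
assert sp.simplify(corr.coeff(t, 2) - K**2/6) == 0  # quadratic term: K^2*t^2/6
assert corr.coeff(t, 3) == 0                        # cubic term vanishes: O(t^4)
print("log(4*pi*sigma_K^2) = log(4*pi*t) + K*t + K**2*t**2/6 + O(t**4)")
```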
Thus, when $t\rightarrow 0^+$, the second term in the definition formula $(\ref{HmK})$ of the $H_{m, K}$-entropy is asymptotically equivalent (at the order $O(t^4)$) to the Boltzmann-Shannon entropy of the heat kernel at time $t$ of the Ornstein-Uhlenbeck operator on $\mathbb{R}^m$ with respect to the Lebesgue measure on $\mathbb{R}^m$. That is to say, when $t\rightarrow 0^+$, we have \begin{eqnarray*}
H_{m, K}(u(t))={\rm Ent}(u_{m, K}(t)|dy)-{\rm Ent}(u(t)|\mu)+O(t^4), \end{eqnarray*} and \begin{eqnarray*} W_{m, K}(u(t))={d\over dt}\left(tH_{m, K}(u(t))\right). \end{eqnarray*}
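As a consistency check on the closed-form kernel $u_{m,K}$ (here for $m=1$; the check is ours and assumes Python with \texttt{sympy}), one can verify that it solves the Fokker--Planck equation $\partial_t u=\partial_y^2 u-K\partial_y(yu)$ associated with the Langevin SDE above:

```python
import sympy as sp

x, y, t, K = sp.symbols('x y t K')

# Law of X_t for dX_t = sqrt(2) dW_t + K X_t dt in dimension m = 1:
# Gaussian with mean e^{Kt} x and variance v(t) = (e^{2Kt} - 1)/K
v = (sp.exp(2*K*t) - 1) / K
u = sp.exp(-(y - sp.exp(K*t)*x)**2 / (2*v)) / sp.sqrt(2*sp.pi*v)

# Fokker-Planck residual: d_t u - d_y^2 u + K d_y(y u) should vanish
residual = sp.diff(u, t) - sp.diff(u, y, 2) + K*sp.diff(y*u, y)

# evaluate at sample parameter values (the vanishing holds identically)
for vals in ({x: 0.3, y: 1.2, K: 0.7, t: 0.5},
             {x: -1.0, y: 0.4, K: 1.3, t: 0.2}):
    assert abs(float(residual.subs(vals))) < 1e-9
print("u_{1,K} satisfies the Fokker-Planck equation at the sampled points")
```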
To end this paper, let us mention that, in a forthcoming paper \cite{KL16}, Kuwada and the second author of this paper prove an analogue of the $W$-entropy monotonicity theorem on metric measure spaces with the so-called $RCD(0, N)$-condition.
\noindent{\bf Acknowledgement}. We would like to thank Professors S. Aida, D. Bakry, J.-M. Bismut, D. Elworthy, K. Kuwae, M. Ledoux, N. Mok, K.-T. Sturm, A. Thalmaier, F.-Y. Wang and Dr. Yuzhao Wang for their interests and helpful discussions during various stages of this work.
\begin{flushleft}
\noindent
Songzi Li, School of Mathematical Science, Beijing Normal University, No. 19, Xin Jie Kou Wai Da Jie, 100875, China, Email: songzi.li@bnu.edu.cn
Xiang-Dong Li, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, 55, Zhongguancun East Road, Beijing, 100190, China, E-mail: xdli@amt.ac.cn \\ and \\ School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China \end{flushleft}
\end{document}
\begin{document}
\begin{abstract} We show that any strictly mean convex translator of dimension $n\geq 3$ which admits a cylindrical estimate and a corresponding gradient estimate is rotationally symmetric. As a consequence, we deduce that any translating solution of the mean curvature flow which arises as a blow-up limit of a two-convex mean curvature flow of compact immersed hypersurfaces of dimension $n\geq 3$ is rotationally symmetric. The proof is rather robust, and applies to a more general class of translator equations. As a particular application, we prove an analogous result for a class of flows of embedded hypersurfaces which includes the flow of two-convex hypersurfaces by the two-harmonic mean curvature. \end{abstract}
\maketitle
\section{Introduction}
We are interested in hypersurfaces $X:M^n\to\mathbb{R}^{n+1}$ satisfying the {\it translator} equation \begin{equation}\label{eq:T}\tag{T} \vec H=T^\perp \end{equation} for some constant vector $T\in \mathbb{R}^{n+1}$, where, given a local choice of unit normal field $\nu$, $\vec H=-H\nu$ is the mean curvature vector of the immersion with respect to the choice of mean curvature $H=\mathrm{div}\,\nu$, and $\perp$ denotes the projection onto the normal bundle. We call such immersions \emph{translators}. Up to a time-dependent tangential reparametrization, the family $\{X(\cdot,t)\}_{t\in\mathbb{R}}$ of immersions $X(\cdot,t):M^n\to\mathbb{R}^{n+1}$ defined by $X(x,t):=X(x)+tT$ satisfies the mean curvature flow \begin{equation}\tag{MCF}\label{eq:MCF} \partial_tX(\cdot,t)=\vec H(\cdot,t)\,, \end{equation} where $\vec H(\cdot,t)$ is the mean curvature vector of $X(\cdot,t)$. We therefore also refer to solutions of \eqref{eq:T} as \emph{translating solutions of the mean curvature flow}. It is well-known that translating solutions arise as blow-up limits of the mean curvature flow about type-II singularities \cite{Hm95a,HuSi99b}. More precisely, if a solution $X:M^n\times[0,T)\to\mathbb{R}^{n+1}$ of \eqref{eq:MCF} has \emph{type-II} curvature blow-up (that is, $\limsup_{t\to T}(T-t)\max_{M^n\times\{t\}}H^2=\infty$) then there is a sequence of parabolically rescaled solutions of \eqref{eq:MCF} which converge locally uniformly in $C^\infty$ to a (non-trivial) translating solution of \eqref{eq:MCF}.
Probably the most well-known translator is the Grim Reaper\footnote{So named because, as it translates, it `kills' any compact solution of curve shortening flow which is unfortunate enough to lie in its path.} curve $\Gamma$, which is the graph of the function $x\mapsto -\log\cos x$, $x\in (-\pi/2,\pi/2)$. In dimensions $n\geq 2$, there exists a strictly convex, rotationally symmetric translator asymptotic to a paraboloid, which is commonly referred to as the `bowl' \cite{AW94,CSS07}. The bowl is the unique rotationally symmetric translating complete graph, and the unique translator with finite genus and a single end asymptotic to a paraboloid \cite{MSHS}. In a remarkable study of convex ancient graphical solutions of the mean curvature flow, X.-J.~Wang showed that any strictly convex, entire translator in dimension two is rotationally symmetric, and hence the bowl \cite{Wa11}. Moreover, in every dimension $n\geq 3$, he constructed strictly convex, entire examples without rotational symmetry.
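A graph $y=u(x)$ translating vertically with unit speed under curve shortening flow satisfies the ODE $u''=1+(u')^2$, and one can check symbolically that the Grim Reaper profile solves it (a sketch assuming Python with \texttt{sympy}):

```python
import sympy as sp

X = sp.symbols('x')
u = -sp.log(sp.cos(X))   # the Grim Reaper profile on (-pi/2, pi/2)

# A vertically translating graph satisfies curvature = normal component of
# the vertical direction, which for graphs reduces to u'' = 1 + (u')^2.
ode = sp.diff(u, X, 2) - (1 + sp.diff(u, X)**2)
assert sp.simplify(ode) == 0
print("u(x) = -log(cos(x)) solves u'' = 1 + (u')^2")
```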
In the setting of two-convex (that is, $\kappa_1+\kappa_2>0$, where $\kappa_1\leq\kappa_2\leq\dots\leq\kappa_n$ denote the principal curvatures) mean curvature flow in dimensions $n\geq 3$, the far-reaching theory of Huisken and Sinestrari \cite{HuSi99a,HuSi99b,HuSi09} shows that regions of high curvature are either uniformly convex and cover a whole connected component of the surface, or else they contain regions which are very close, up to rescaling, to cylindrical segments $[-L,L]\times S^{n-1}$. This suggests that the translating blow-up limits which arise at type-II singularities might be rotationally symmetric. We note that this is true (in dimensions $n\geq 3$) for two-convex self-shrinking solutions which arise as blow-up limits of the mean curvature flow with type-I curvature blow-up (that is, $\limsup_{t\to T}(T-t)\max_{M^n\times\{t\}}H^2<\infty$) since the only possibilities are shrinking spheres $S^n_{\sqrt{-2nt}}$ and cylinders $\mathbb{R}\times S^{n-1}_{\sqrt{-2(n-1)t}}$ \cite[Theorem 5.1]{Hu93}. Recently, Haslhofer \cite{Ha} proved that this is true in the embedded case (even in dimension 2), his proof relying crucially on the non-collapsing theory of \cite{An12} and \cite{HK1}. In fact, he shows that any strictly convex, uniformly two-convex translator which is non-collapsing is necessarily rotationally symmetric. In the immersed setting, we no longer have a non-collapsing property; however, by the work of Huisken and Sinestrari \cite{HuSi09}, we have a cylindrical estimate and a corresponding gradient estimate. Motivated by Haslhofer's result and the Huisken--Sinestrari theory, we prove the following.
\begin{theorem}\label{thm:bowl} Let $X:M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, be a mean convex translator and $C_1<\infty$ a constant such that the following hold: \begin{enumerate}
\item cylindrical estimate: $|A|^2-\frac{1}{n-1}H^2<0$
\item gradient estimate: $|\nabla A|^2\le -C_1\left(|A|^2-\frac{1}{n-1}H^2\right)H^2$ \end{enumerate} where $A$ is the second fundamental form of $X$. Then $M^n$ is rotationally symmetric. \end{theorem}
In fact (assuming $T=e_{n+1}$), we need only prove that the blow-down of $M_t^n:=M^n+te_{n+1}$ is the shrinking cylinder $S^{n-1}_{\sqrt{2(n-1)(1-t)}}\times \mathbb{R}$, since this is enough to deduce rotational symmetry of $M^n$ by \S 3--5 of Haslhofer's paper.
We remark that the cylindrical estimate implies uniform two-convexity, $\kappa_1+\kappa_2\geq \frac{1}{2(n-1)}H$ (see \cite[Lemma 5.1]{HuSi15}). As a consequence, any type-II blow-up limit of a two-convex mean curvature flow in dimensions $n\geq 3$ is rotationally symmetric (even when the mean curvature flow is only immersed). \begin{cor}\label{cor:bowl} Suppose that $X:M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, is a translator which arises as a proper blow-up limit of a two-convex mean curvature flow of immersed hypersurfaces. Then $M^n$ is rotationally symmetric. \end{cor}
We note that Corollary \ref{cor:bowl} fails in dimension 2 without some additional assumption, such as non-collapsing, to rule out the Grim plane $\mathbb{R}\times \Gamma$. This is in accordance with the type-I case, where the non-embedded Abresch--Langer planes $\mathbb{R}\times \gamma_{k,l}$ can arise \cite{AbLa86}.
We remark that our proof of Theorem \ref{thm:bowl} also works (in dimensions $n\geq 2$) if assumptions (1) and (2) are replaced by \begin{enumerate} \item[(1')] cylindrical estimate: $\overline k-\frac{1}{n-1}H<0$ and
\item[(2')] gradient estimate: $|\nabla A|^2\le C_1\kappa_1H^3$, \end{enumerate} where $\overline k$ denotes the inscribed curvature. By work of Brendle \cite[Theorem~1]{Br15} (see also \cite{HaKl15}) and Haslhofer and Kleiner\footnote{The improved gradient estimate (2') follows from \cite[Corollary 2.7]{HK1} as in the proof of Claim \ref{claim:gradient} in Section \ref{sec:F}.} \cite[Corollary 2.7]{HK1}, these assumptions are met for blow-up limits of type-II singularities of two-convex mean curvature flows of \emph{embedded} hypersurfaces. This provides a slightly different perspective of Haslhofer's result.
Apart from dealing with blow-up limits of type-II singularities of two-convex mean curvature flows of \emph{immersed} hypersurfaces, a further motivation for removing the (two-sided) non-collapsing assumption in Haslhofer's result was to study translating solutions of more general curvature flows, where (two-sided) non-collapsing will in general not hold. Let $F$ be given by $F(x)=f(\vec\kappa(x))$ for some smooth function $\displaystyle f:\Gamma^n \subset\mathbb{R}^n \to\mathbb{R}$ of the principal curvatures $\vec\kappa:=(\kappa_1,\dots,\kappa_n)$ defined with respect to some choice of unit normal field $\nu$. Then we can consider solutions $X:M^n\to\mathbb{R}^{n+1}$ of the fully non-linear translator equation \begin{equation}\tag{FT}\label{eq:FT} F=-\inner{\nu}{T} \end{equation} for some $T\in\mathbb{R}^{n+1}$. We will call the function $f:\Gamma^n\to\mathbb{R}$ \emph{admissible} if $\Gamma^n$ is an open, symmetric cone and $f$ is smooth, symmetric, monotone increasing in each variable and 1-homogeneous. These conditions on $f$ are very natural: Indeed, smoothness and symmetry are needed to ensure that $F$ is smooth, monotonicity ensures that \eqref{eq:FT} is elliptic, and homogeneity ensures that $F$ scales like curvature.
Just as for the mean curvature flow, the family $\{X(\cdot,t)\}_{t\in\mathbb{R}}$ of immersions $X(\cdot,t):M^n\to\mathbb{R}^{n+1}$ defined by $X(x,t):=X(x)+tT$ satisfies, up to a time-dependent tangential reparametrization, the corresponding flow\footnote{We have implicitly assumed orientability of solutions of \eqref{eq:FT} and \eqref{eq:F}; however, if $f$ is an \emph{odd} function, \eqref{eq:FT} and \eqref{eq:F} also admit non-orientable solutions.} \begin{equation}\tag{F}\label{eq:F} \partial_tX(\cdot,t)=-F(\cdot,t)\nu(\cdot,t)\,. \end{equation}
Moreover, if \eqref{eq:F} admits an appropriate Harnack inequality (which is true under very mild concavity assumptions for $f$ \cite{An94c}) then solutions of \eqref{eq:FT} arise as blow-up limits of positive speed solutions of \eqref{eq:F} about type-II singularities in a completely analogous way to the case of mean convex mean curvature flow. If $F$ also admits a strong maximum principle for the Weingarten tensor (which also holds under natural concavity conditions for $f$, see Section \ref{sec:SMP}) then our proof goes through with minor modification, and we obtain a result of the following form (where we denote by $\Gamma_+^{m}$ the positive cone $\Gamma_+^m:=\{(z_1,\dots,z_m)\in\mathbb{R}^m:\min_{1\leq i\leq m}\{z_i\}>0\}$ in $\mathbb{R}^m$).
\begin{theorem}\label{thm:bowlF} Let $X:M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, be a solution of \eqref{eq:FT}, where $F$ is given by $F(x)=f(\vec\kappa(x))$ for some admissible $f:\Gamma^n\to\mathbb{R}$ such that \[ \displaystyle \{(0,\hat z):\hat z\in \Gamma_+^{n-1}\}\subset\Gamma^n\subset \Gamma^n_2:=\{z\in \mathbb{R}^n:\min_{1\leq i<j\leq n}\{z_i+z_j\}>0\} \] and either \ben \item[(i)] $f$ is convex, or \item[(ii)] $f$ is concave and the function $f_\ast:\Gamma_+^{n-1}\to\mathbb{R}$ defined by \[ f_\ast(z^{-1}_2,\dots,z^{-1}_n):=f(0,z_2,\dots,z_n)^{-1} \] is concave. \een Suppose that the solution satisfies \begin{enumerate}
\item a cylindrical estimate, and \item a corresponding gradient estimate. \end{enumerate} Then $M^n$ is rotationally symmetric. \end{theorem}
The precise form of the assumptions (1) and (2) will be different depending on whether the speed function $f$ is convex or concave. This is made precise in Section \ref{sec:F}.
As a particular application, we find that translating blow-up limits about type-II singularities of the flows of embedded hypersurfaces studied in \cite{BrHu2} are rotationally symmetric.
\begin{cor}\label{cor:bowlF} Suppose that $X:M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, is a translator which arises as a blow-up limit of an embedded solution of the flow \eqref{eq:F}, where $F$ is given by $F(x)=f(\vec \kappa(x))$ for some concave admissible $f:\Gamma^n\to\mathbb{R}$ such that \bi \item[(i)] $\displaystyle \{(0,\hat z):\hat z\in \Gamma_+^{n-1}\}\subset\Gamma^n\subset \Gamma^n_2:=\{z\in \mathbb{R}^n:\min_{1\leq i<j\leq n}\{z_i+z_j\}>0\}$,
\item[(ii)] $f|_{\partial\Gamma^n}=0$ and \item[(iii)] the function $f_\ast:\Gamma_+^{n-1}\to\mathbb{R}$ defined by \[ f_\ast(z^{-1}_2,\dots,z^{-1}_n):=f(0,z_2,\dots,z_n)^{-1} \] is concave. \ei Then $X$ is rotationally symmetric. \end{cor}
We mention that the class of flows to which the corollary applies includes the flow of two-convex hypersurfaces by the two-harmonic mean curvature, \ba\label{eq:twoharmonicmean} F:=\left(\sum_{i<j}\frac{1}{\kappa_i+\kappa_j}\right)^{-1}\,, \ea and, for $n=3$, the flows of positive scalar curvature hypersurfaces by either the square root of the scalar curvature or the ratio of scalar to mean curvature. Corollary \ref{cor:bowlF} does not include any convex speeds, because, as yet, it is not known if they admit an appropriate gradient estimate (although an appropriate cylindrical estimate was proved in \cite{AnLa14}).
\section{Preliminaries}\label{sec:prelims}
Let $X:M^n\to\mathbb{R}^{n+1}$ be a solution of \eqref{eq:T}. After performing a rotation and a dilation, we can arrange that $T=e_{n+1}$, which we assume from now on. Introducing the height function $h:M^n\to \mathbb{R}$, \[ h(x):=\inner{X(x)}{e_{n+1}}\,, \] we denote \[ V:=\nabla h=\mathrm{proj}_{TM^n}e_{n+1}=e_{n+1}+H\nu\,. \] Then the Weingarten curvature $A$ and the mean curvature $H$ satisfy (see, for instance, \cite{Hm95a}) \ba\label{eq:LaplaceA}
-\Delta A=|A|^2A+\nabla_VA \ea and \ba\label{eq:LaplaceH}
-\Delta H=|A|^2H+\nabla_VH\,. \ea A well-known consequence of \eqref{eq:LaplaceA} and the strong maximum principle is the following splitting theorem (see \cite[Theorem 4.1]{HuSi99b} or the appendix). \begin{theorem}[Splitting Theorem] Let $X:M^n\to\mathbb{R}^{n+1}$ be a locally weakly convex solution of \eqref{eq:T}. Then, either $\kappa_1>0$ or $\kappa_1\equiv 0$ and $M^n$ splits as an isometric product $M^n\cong \mathbb{R}\times \Sigma^{n-1}$. \end{theorem}
We next note that a mean convex translator $X:M^n\to\mathbb{R}^{n+1}$ which satisfies the cylindrical estimate must be locally strictly convex. Indeed, \begin{align}
|A|^2-\frac{1}{n-1}H^2
={}& \frac{1}{n-1}\sum_{1<i<j}(\kappa_j-\kappa_i)^2+\frac{n}{n-1}\kappa_1^2-\frac{2}{n-1}\kappa_1H\nonumber\\ \geq{}& -\kappa_1H\,,\label{eq:roundcrosssection} \end{align} so that, wherever the cylindrical estimate holds, \begin{equation}\label{eq:str.convexity}
\kappa_1\geq -\frac{1}{H}\left(|A|^2-\frac{1}{n-1}H^2\right)>0\,. \end{equation}
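The algebraic identity in the first line of the display above can be confirmed symbolically in any fixed dimension; here is a sketch for the sample dimension $n=4$ (assuming Python with \texttt{sympy}):

```python
import sympy as sp

n = 4                       # sample dimension; the identity holds for all n >= 2
k = sp.symbols('k1:5')      # principal curvatures kappa_1, ..., kappa_4
H = sum(k)
A2 = sum(ki**2 for ki in k)

lhs = A2 - H**2 / sp.Integer(n - 1)
rhs = (sum((k[j] - k[i])**2 for i in range(1, n) for j in range(i + 1, n))
       / sp.Integer(n - 1)                    # sum over 2 <= i < j <= n
       + sp.Rational(n, n - 1) * k[0]**2
       - sp.Rational(2, n - 1) * k[0] * H)

assert sp.expand(lhs - rhs) == 0
print("curvature identity verified for n =", n)
```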
Note also that, for a hypersurface satisfying the weak cylindrical estimate $|A|^2-\frac{1}{n-1}H^2\leq 0$, the only points at which $\kappa_1$ can vanish are the cylindrical points, $\kappa_1=0$, $\kappa_2=\kappa_n$.
Since $M^n$ is smooth and $n\geq 2$, local convexity implies that $M^n$ is the boundary of a convex body \cite{Sack}. In particular, $M^n$ is embedded, so we may drop the parametrization $X$ and identify $M^n$ with its image. A further consequence of convexity and the identity $-\inner{\nu}{e_{n+1}}=H>0$ is the fact that $M^n$ can be written globally as the graph of a function $u:\Omega^n:=\proj_{\mathbb{R}^{n}\times\{0\}}(M^n)\to\mathbb{R}$.
Note also that, applied to the gradient estimate, \eqref{eq:roundcrosssection} yields \begin{equation}\label{eq:gradient}
\frac{|\nabla A|^2}{H^4}\le C_1\frac{\kappa_1}{H}\,. \end{equation} Thus, the gradient estimate actually improves wherever $\kappa_1$ is small compared to $H$.
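For the reader's convenience, we record the chain of inequalities behind \eqref{eq:gradient}: combining the gradient estimate with \eqref{eq:roundcrosssection},
\[
|\nabla A|^2\le -C_1\left(|A|^2-\frac{1}{n-1}H^2\right)H^2\le C_1\kappa_1H^3\,,
\]
and dividing by $H^4$ yields \eqref{eq:gradient}.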
We conclude this section by recalling the following well-known consequence of gradient estimates for the curvature (cf. \cite[Lemma 6.6]{HuSi09}). \begin{lemma}\label{lem:convergence} Let $X:M^n\to\mathbb{R}^{n+1}$ be a mean convex hypersurface and $C<\infty$ a constant such that \[
\sup_{M^n}\frac{|\nabla H|}{H^2}\leq C\,. \] Then \[ \max_{y\in B_{\frac{1}{2CH(x)}}(x)}H(y)\leq 2H(x)\,, \] where $B_{\frac{1}{2CH(x)}}(x)$ is the intrinsic ball of radius $\frac{1}{2CH(x)}$ about the point $x$. \end{lemma} \begin{proof} For any unit speed geodesic $\gamma:[0,s]\to M$ joining the points $y=\gamma(0)$ and $x=\gamma(s)$, we have \[ \nabla_{\gamma'}H^{-1}\leq C\,. \] Integrating yields \[ H^{-1}(x)-H^{-1}(y)\leq Cs \] or, if $s\leq \frac{1}{2CH(x)}$, \[ H(y)\leq\frac{H(x)}{1-CH(x)s}\leq 2H(x)\,. \] The claim follows. \end{proof}
\begin{cor}\label{cor:convergence} Let $(X_j:M_j^n\to\mathbb{R}^{n+1},x_j)_{j\in\mathbb{N}}$ be a sequence of strictly mean convex, weakly locally convex pointed smooth hypersurfaces and $C<\infty$ a constant satisfying
\bann X_j(x_j)=0,\quad H_j(x_j)=1\quad\text{and}\quad \sup_{M_j^n}\frac{|\nabla A_j|}{H_j^2}\leq C\,,
\eann where, for each $j\in \mathbb{N}$, $H_j$ and $A_j$ are the mean curvature and second fundamental form, respectively, of $M_j^n$. Then there exists a weakly locally convex pointed $C^2$ hypersurface $(X_\infty:M_\infty^n\to\mathbb{R}^{n+1},x_\infty)$ such that, after passing to a subsequence, $X_j|_{B_j}:B_j\to\mathbb{R}^{n+1}$ converge locally uniformly in $C^2$ to $X_\infty|_{B_\infty}:B_\infty\to\mathbb{R}^{n+1}$, where $B_j$ denotes the intrinsic ball in $M_j^n$ of radius $(2C)^{-1}$ about the point $x_j$. \end{cor}
\section{Proof of Theorem \ref{thm:bowl} and Corollary \ref{cor:bowl}}\label{sec:proof}
We begin by noting that the mean curvature goes to zero at infinity. \begin{lemma}\label{lem:Hto0} For any sequence of points $X_j\in M^n$ with $\norm{X_j}\to\infty$, \[ H(X_j)\to 0\,. \] \end{lemma} \begin{proof} The proof is similar to \cite[Lemma 2.1]{Ha}. Suppose that the lemma does not hold. Then there is a sequence of points $\{X_j\}_{j=1}^\infty\subset M^n$ satisfying $\Vert X_j\Vert\to \infty$ and $\limsup_{j\to\infty}H(X_j)>0$. Passing to a subsequence, we can assume that $\liminf_{j\to\infty}H(X_j)>0$. By translational invariance of \eqref{eq:T}, we can assume, without loss of generality, that $0\in M^n$. Furthermore, after passing to a subsequence, $w_j:=X_j/\Vert X_j\Vert\to w\in S^n$. Consider the sequence $M^n_j:=M^n-X_j$. Since each $M^n_j$ satisfies the translator equation \eqref{eq:T} and has mean curvature uniformly bounded by 1, it follows from standard regularity theory for solutions of either \eqref{eq:T} \cite{GT} or \eqref{eq:MCF} \cite{EcHu91,Eck} that, after passing to a subsequence, $M^n_j$ converges locally uniformly in $C^\infty$ to a weakly convex translator $M^n_\infty$. We claim that $M^n_\infty$ contains the line $\{sw: s\in \mathbb{R}\}$. First note that the closed convex region $\overline \Omega$ bounded by $M^n$ contains the ray $\{sw:s\geq 0\}$, since it contains each of the segments $\{sw_j:0\leq s\leq s_j\}$, where $s_j:=\Vert X_j\Vert$ and $w_j:=X_j/\Vert X_j\Vert$. By convexity, it also contains the set $\{rsw+(1-r)X_j:s\geq 0, 0\leq r\leq 1\}$ for each $j$. It follows that the closed convex region $\overline\Omega_j$ bounded by $M^n_j$ contains the set $\{rsw-rs_jw_j:s>0, 0\leq r\leq 1\}$. In particular, choosing $s=2s_j$, $\{\vartheta (w-w_j)+\vartheta w:0\leq \vartheta\leq s_j\}\subset \overline\Omega_j$ and, choosing $s=s_j/2$, $\{\vartheta w_j-\vartheta (w-w_j):-s_j/2\leq \vartheta\leq 0\}\subset \overline\Omega_j$. Taking $j\to\infty$, we find $\{sw:s\in\mathbb{R}\}\subset\overline \Omega_\infty$. 
The claim now follows from convexity of $\overline \Omega_\infty$ since $0\in M^n_\infty$. We conclude that $\kappa_1$ reaches zero somewhere on $M^n_\infty$. By the splitting theorem, the limit splits as an isometric product $M^n_\infty\cong \mathbb{R}\times\Sigma^{n-1}$; in particular, $\kappa_1\equiv 0$. On the other hand, by the strong maximum principle, we must have $H>0$ everywhere (since, by hypothesis, $H(0)>0$). The cylindrical estimate now implies that $\Sigma^{n-1}$ is umbilic (recall \eqref{eq:roundcrosssection}) and hence a round sphere. But this contradicts the fact that $M^n_\infty+te_{n+1}$ satisfies mean curvature flow. \end{proof}
It follows that $H$ attains a maximum at some point $O$, which we call the `tip' of $M^n$. By translational invariance of \eqref{eq:T}, we can assume, without loss of generality, that $O$ is the origin.
Recall that the gradient field of the height function is given by \[ V=\proj_{TM^n}(e_{n+1})=e_{n+1}+H\nu\,. \] By the translator equation \eqref{eq:T}, \[ \norm{V}^2=1-H^2\, \] and, differentiating \eqref{eq:T}, \[ V=-A^{-1}(\nabla H)\,. \] Moreover, since $A$ is non-degenerate, at any critical point $X$ of $H$ we must have $V(X)=0$ and hence $\nu(X)=-e_{n+1}$. By strict convexity of $M^n$ (recall \eqref{eq:str.convexity}), we conclude that $H$ has precisely one critical point, the origin, and $H\leq H(0)=1$.
Next, observe that \[ \nabla V=HA\,. \] Since $A$ is positive definite, it follows from standard ODE theory that we can find, for each $X\in M^n\setminus\{0\}$, a unique integral curve $\phi_X:(0,\infty)\to M^n$ of $ V$ through $X$ such that \[ \lim_{s\searrow 0}\phi_X(s)=0\quad\text{and}\quad\lim_{s\to \infty}\Vert\phi_X(s)\Vert=\infty\,. \]
If we parametrize the integral curves by height, so that \begin{equation}\label{eq:s} h(\phi_X(s))=\inner{\phi_X(s)}{e_{n+1}}=s\,, \end{equation} then we obtain \bann
\phi_{X}'=\frac{V\circ\phi_X}{1-H^2\circ\phi_X}=\frac{V\circ\phi_X}{\|V\circ \phi_{X}\|^2}\,. \eann Note that, by Lemma \ref{lem:Hto0}, the reparametrized curves are still defined on $(0,\infty)$.
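To see how the displayed formula for $\phi_X'$ arises: differentiating \eqref{eq:s} gives
\[
1=\frac{d}{ds}h(\phi_X(s))=\inner{\phi_X'(s)}{V\circ\phi_X(s)}\,,
\]
and since $\phi_X'$ is parallel to $V\circ\phi_X$, this forces $\phi_{X}'=V\circ\phi_X/\|V\circ \phi_{X}\|^2$, where $\|V\|^2=1-H^2$.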
\begin{comment} is well-defined and has the property that if $s(X_i)\to \infty$ for a sequence of points $X_i\in M$, then also $\Vert X_i\Vert\to \infty$. Note also that $s(X)$ is the distance between $X$ and $0$ along the integral curve of $\widehat V$ joining the two points. We will derive asymptotics for $M^n$ in terms of both the parameter $s$ and the height function $h(X):=\inner{X}{e_{n+1}}$. Observe that \[ s(X)\geq\Vert X\Vert\geq h(X) \] for any $X\in M^n$.
The following lemma bounds $h$ from below by $s$. \begin{lemma}[Equivalent asymptotics]\label{lem:heightbound} There exists $s_0>0$ such that \[ h(X)\geq \frac{1}{2} s(X) \] for all $X\in M^n$ such that $s(X)\ge s_0$. \end{lemma} \begin{proof} Let $\phi$ be an integral curve of $\widehat V$ emanating from the tip. Observe that \[ \inner{\phi(s)}{e_{n+1}}'=\inner{\widehat V(\phi(s))}{e_{n+1}}=\norm{V(\phi(s))}=\sqrt{1-H^2(\phi(s))}\,. \] By Lemma \ref{lem:Hto0}, there exists $s_0<\infty$ such that $H\leq \frac{2}{3}$ for all $X\in M^n$ such that $s(X)\geq s_0/4$. Thus, \[ \inner{\phi(s)}{e_{n+1}}'\geq\frac{2}{3} \] for all points $X$ on the curve $\phi$ such that $s(X)\geq s_0/4$. Consider now any point $X$ on the curve $\phi$ with $s(X)>s_0$. Setting $s_X:=s(X)\ge s_0$, it follows that \begin{align*} h(X)=\inner{\phi(s_X)}{e_{n+1}}\geq{}& \int_{s_0/4}^{s_X}\inner{\phi(s)}{e_{n+1}}'ds\\ \geq{}&\frac{2}{3}\left(s_X-\frac{s_0}{4}\right)\ge\frac12 s_X \,. \end{align*} \end{proof} \end{comment}
We will use the improved gradient estimate \eqref{eq:gradient} to extract a lower bound for $H$ along the flow of $V$. \begin{lemma}[Lower bound for $H$]\label{lem:Hlowerbound} There exists $h_0>0$ such that \[ H(X)\geq \frac{1}{\sqrt{4C_1h(X)}} \] for all $X\in M^n$ with height $h(X)$ at least $h_0$. \end{lemma} \begin{proof} Let $\phi:(0,\infty)\to M$ be an integral curve of $V$ emanating from the tip and parametrized by height. Set $f(s):=H^{-1}(\phi(s))$. Applying the gradient estimate \eqref{eq:gradient}, we obtain \begin{equation}\label{eq:Hlowerbound1}
(f')^2\leq \frac{|\nabla H|^2|\phi'|^2}{H^4}\leq \frac{C_1\kappa_1f}{1-H^2}\,. \end{equation} On the other hand, \bann -\nabla_{\phi'}H=A(\phi',V)=\frac{A(V,V)}{\norm{V}^2}\geq \kappa_1\,. \eann That is, \begin{equation}\label{eq:Hlowerbound2} \kappa_1f\leq \frac{f'}{f}\,. \end{equation} Putting \eqref{eq:Hlowerbound1} and \eqref{eq:Hlowerbound2} together yields \[ (f^2)'\leq \frac{2C_1}{1-H^2}\,. \] By Lemma \ref{lem:Hto0}, there exists $h_1>0$ such that $H(X)\leq \frac{1}{\sqrt{3}}$ for all $X\in M^n$ with height $h(X)$ at least $h_1$. Thus, for any $s\ge h_1$, we obtain $(f^2)'\leq 3C_1$. Integrating this between $h_1$ and $s$ yields \[ \frac{1}{H^2(\phi(s))}\le 3C_1(s-h_1)+\frac{1}{H^2(\phi(h_1))}\le 4C_1 s \] with the last inequality being true provided that $s$ is large enough; in particular, $s\ge \max\{h_1, (C_1 H^2(\phi(h_1)))^{-1}\}$. Since $\phi$ is parametrized by height, the lemma then follows with $h_0=\max\{h_1, (C_1 H^2(h_1))^{-1}\}$, where $H(h_1):=\min \{H(X):X\in M^n, h(X)=h_1\}>0\,$. \end{proof}
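The final integration step can be sanity-checked numerically; the constants below ($C_1$, $h_1$ and $f(h_1)^2$) are illustrative sample values chosen by us, not quantities from the paper (a sketch in Python):

```python
# Integrating (f^2)' <= 3*C1 from h1 gives f(s)^2 <= 3*C1*(s - h1) + f(h1)^2;
# this is <= 4*C1*s once s >= max{h1, (C1*H^2(phi(h1)))^{-1}} = max{h1, f1sq/C1}.
C1, h1, f1sq = 2.0, 5.0, 40.0          # sample values for C_1, h_1, f(h_1)^2
s0 = max(h1, f1sq / C1)
for j in range(2000):
    s = s0 + 0.1 * j
    assert 3*C1*(s - h1) + f1sq <= 4*C1*s
print("integrated bound 1/H^2(phi(s)) <= 4*C1*s holds on the sampled range")
```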
Next, we derive a lower bound for the `girth' of $M^n$. This estimate plays a key role in obtaining an upper bound for $H$.
\begin{lemma}[Girth estimate]\label{lem:girthestimate} There exists $h_0<\infty$ such that \[ \Vert\proj_{\mathbb{R}^n\times\{0\}}X\Vert\ge\sqrt {\frac{h(X)}{16C_1}} \] for any $X\in M^n$ with height $h(X)$ at least $h_0$. \end{lemma} \begin{proof}
The idea of the proof is the following: If the claim did not hold, then, writing $M=\graph u$, there would be a point where $u$ is both large and has large gradient (compared to $\sqrt h$). But this would contradict the lower bound $H\gtrsim h^{-\frac{1}{2}}$ since $H=\frac{1}{\sqrt{1+|Du|^2}}$.
By Lemma \ref{lem:Hlowerbound}, there exists $h_1>0$ such that \ba\label{eq:Hlowerbond1} H(X)\geq \frac{1}{\sqrt{4C_1h(X)}} \ea for all $X\in M^n$ with height $h(X)\geq h_1$. Suppose, contrary to the claim, that there is a point $X=(x,u(x))\in M^n$ with $h(X)\geq h_0:=2h_1$ but \[ \frac{h(X)}{\norm{x}}>\sqrt{16C_1h(X)}\,. \] Set $\ell:=\norm{x}$ and consider the curve $\gamma:[0,\ell]\to M^n$ given by $\gamma(s):=(\hat\gamma(s),u(\hat\gamma(s)))$, where $\hat\gamma(s):=s\frac{x}{\ell}$ is the straight line in $\Omega$ joining 0 and $x$. Slightly abusing notation, set $h(s):=h(\gamma(s))$. Then \[
h'=\nabla_{\gamma'}h\leq \norm{V}\sqrt{1+(D_{\hat\gamma'}u)^2}\leq \sqrt{1-H^2}\sqrt{1+|Du|^2}=\frac{\sqrt{1-H^2}}{H}\leq\frac{1}{H}\,. \] Let $s_1\in[0,\ell]$ be the point at which $h(\gamma(s_1))=h_1$. By the mean value theorem, there is a point $s_2\in [s_1,\ell]$ such that \bann \frac{1}{H(\gamma(s_2))}\geq h'(s_2)=\frac{h(X)-h_1}{\ell-s_1}\geq \frac{h(X)}{2\ell}>\sqrt{\frac{16}{4}C_1h(X)}\ge\sqrt{4C_1h(\gamma(s_2))}\,. \eann This contradicts \eqref{eq:Hlowerbond1} and we conclude that \[ \Vert\proj_{\mathbb{R}^n\times\{0\}}X\Vert\geq\sqrt{\frac{h(X)}{16C_1}} \] for all $X\in M^n$ with height $h(X)\geq h_0:=2h_1$. \begin{comment} First fix $s_0>0$ such that, via Lemmas \ref{lem:heightbound} and \ref{lem:Hlowerbound}, \begin{equation}\label{eq:GE1} H(X)\ge \frac{1}{\sqrt {7C_1s(X)}}\quad\text{and}\quad h(X)\ge \frac12 s(X) \end{equation}
for all $X\in\{s(X)\ge \frac{s_0}{4}\}$. Consider $X=(x,u(x))$ on $M^n$ with $s(X)=s\ge s_0$ and denote $\ell=|x|$. Since $M^n$ is convex, the straight line \[ \gamma:[0,\ell]\to \mathbb{R}^n\,,\,\,\gamma(t)=t\frac x\ell \] is in $\Omega:=\proj_{\mathbb{R}^n\times\{0\}}(M)$ (the domain of $u$). The statement of the lemma is then equivalent to \begin{equation}\label{eq:GE2} \ell\ge \frac{1}{60 C_1}\sqrt s, \end{equation} which we will show by contradiction. So suppose instead that \begin{equation}\label{eq:GE3} \ell< \frac1c \sqrt s\,,\,\,\text{where }c= 60 C_1. \end{equation} Note that $u$ is increasing along $\gamma$, since $M^n$ is convex and $0=u(\gamma(0))<u(\gamma(t))$ for any $t>0$, so that \[
\frac{s}{2}\le h(X)=u(x)=\int_0^\ell\frac{d}{dt} u(\gamma(t)) dt=\int_0^\ell Du\cdot \frac x\ell dt=\int_0^\ell \left|Du\cdot \frac x\ell \right|dt\,.
\]
We claim that there exists $t_0\in [0, \ell]$ such that $|Du(\gamma(t_0))|\ge \frac{c}{4}\sqrt s$. To see this, assume on the contrary that $|Du(\gamma(t))|< \frac{c}{4}\sqrt s$ for all $t\in [0, \ell]$. Then \[ \frac s2=\int_0^\ell Du\cdot \frac x\ell dt\le \int_{0}^\ell \frac{c}{4}\sqrt s dt=\ell \frac{c}{4}\sqrt s, \]
which contradicts our assumption \eqref{eq:GE3}. So let $t_0$ be the maximum $t\in [0, \ell]$ such that $|Du(\gamma(t))|\ge \frac{c}{4}\sqrt s$. Hence, either $t_0=\ell$ or else for any $t\in (t_0, \ell]$ $|Du(\gamma(t))|< \frac{c}{4}\sqrt s$. We next show that in either case $u(\gamma(t_0))\ge s/4$. By \eqref{eq:GE1}, this is trivially true if $t_0=\ell$ and in the general case we find \[
u(x)-u(\gamma(t_0))=\int_{t_0}^\ell \left|Du\cdot \frac x\ell \right|dt\le (\ell- t_0) \frac{c}{4}\sqrt s\le \ell \frac{c}{4}\sqrt s, \] which, using \eqref{eq:GE1} and \eqref{eq:GE3}, implies that \[ u(\gamma(t_0))\ge u(x)-\frac{s}{4}\ge \frac s4. \] This estimate implies that \[ s(Y)\ge h(Y)\ge s/4\ge s_0/4\,, \] where $Y=(\gamma(t_0), u(\gamma(t_0)))$, so that, by \eqref{eq:GE1}, \[ H(Y)\ge (\sqrt {7C_1s (Y)})^{-1}\ge (\sqrt {14 C_1h (Y)})^{-1}\ge (\sqrt {14 C_1s})^{-1}. \]
On the other hand, since $|Du(\gamma(t_0)|\ge \frac c4 \sqrt s$, we find
\[ H(Y)=\frac{1}{\sqrt{1+|Du(\gamma(t_0))|^2}}\le \frac{1}{\sqrt{1+(c/4)^2s}}. \] The last two inequalities then imply \[ \sqrt{14 C_1s}\ge \sqrt{1+(c/4)^2s}\ge \sqrt{(c/4)^2s}\implies (14 C_1)^2> c^2/4^2,\] which, recalling the value of $c$, yields a contradiction. \end{comment} \end{proof}
Next, we find at each height $h$ a point with $H\sim h^{-\frac{1}{2}}$. \begin{lemma}\label{lem:Hupperbound} There exists $h_0<\infty$ such that, for any $h\ge h_0$, there is a point $X\in M\cap \overline B_{\sqrt{2nh}}(h e_{n+1})$ satisfying \[ H(X)\le \sqrt{\frac{h_0}{h}}\,, \] where $\overline B_R(X)$ denotes the closed ball in $\mathbb{R}^{n+1}$ of radius $R$ centred at $X$.
\end{lemma} \begin{proof} We first show that at each height $h$, the ball of radius $\sqrt{2nh}$ centred at $h e_{n+1}$ intersects $M^n$. \begin{claim}\label{claim:ball1} For each height $h>0$, $\overline B_{\sqrt{2nh}}(he_{n+1})\cap M^n\neq\emptyset$. \end{claim} \begin{proof} Under mean curvature flow, the tip of the translator reaches the point $he_{n+1}$ after time $t=h$. On the other hand, under mean curvature flow, the radius of $\partial B_R(he_{n+1})$ shrinks to the point $h e_{n+1}$ in time $t=R^2/2n$. Thus, if the ball $\overline B_R(he_{n+1})$ does not intersect $M^n$, the avoidance principle necessitates $R^2<2nh$. \end{proof}
On the other hand, using Lemma \ref{lem:girthestimate}, the ball in the previous claim can be scaled so that it no longer intersects $M^n$. \begin{claim}\label{claim:ball2} There exists $0<h_0<\infty$ such that $B_{\sqrt{h/h_0}}(h e_{n+1})\cap M^n=\emptyset$ for all $h\geq h_0$. \end{claim} \begin{proof} By Lemma \ref{lem:girthestimate}, there exists $h_1>0$ such that $\Vert\proj_{\mathbb{R}^n\times\{0\}}X\Vert\geq\sqrt{\frac{h(X)}{16C_1}}$ for any $X\in M^n$ with height $h(X)$ at least $h_1$. Set $h_2:=\max\{1,2h_1\}$, $R:=\delta\sqrt h$, where $\delta:=\frac{1}{2}\min\{\frac{1}{\sqrt{16C_1}},1\}$, and consider, for any $h\ge h_2$, the cylinder $Q_R$ centred at the point $he_{n+1}$ with radius $R$ and height $2R$. Then, for any $X\in Q_R\cap M^n$, we have $h(X)\geq h-R\geq h_1$, so that \[
R\geq \|\proj_{\mathbb{R}^n\times\{0\}}(X)\|\geq \sqrt{\frac{h(X)}{16C_1}}\geq\sqrt{\frac{h-R}{16C_1}}\,. \] Rearranging, this becomes \[ (1-16C_1\delta^2)\sqrt{h}\leq\delta\,. \] Since $16C_1\delta^2\leq\frac{1}{4}$ and $\delta\leq\frac{1}{2}$, this implies $\sqrt{h}\leq\frac{4\delta}{3}\leq\frac{2}{3}$, contradicting $h\geq h_2\geq 1$. We conclude that $Q_R\cap M^n=\emptyset$. Since $B_{\sqrt{h/h_0}}(he_{n+1})\subset Q_R$ whenever $h_0\geq\delta^{-2}$, the claim follows with $h_0:=\max\{h_2,\delta^{-2}\}$.
\end{proof} By Claim \ref{claim:ball2}, there exists $h_1<\infty$ such that $B_{n\sqrt{h/h_1}}(h e_{n+1})\cap M^n=\emptyset$ for all $h\geq \frac{h_1}{n^2}$. Set $h_0:=\max\{h_1,\frac{n}{2}\}$ and define, for any $h\ge h_0$, \[ \rho=\inf\{r:B_{r\sqrt h}(h e_{n+1})\cap M\ne\emptyset\}. \] By Claims \ref{claim:ball1} and \ref{claim:ball2}, we know that $\frac{n}{\sqrt{h_0}}\le \rho\le \sqrt{2n}$. Thus, there exists a point $X\in \overline B_{\rho\sqrt h}(h e_{n+1})\cap M^n$. Since $M^n$ and $B_{\rho\sqrt h}(he_{n+1})$ are tangent at $X$ and $M^n$ lies outside of $ B_{\rho\sqrt h}(he_{n+1})$, we have $H(X)\le\frac{n}{\rho\sqrt h}\le \sqrt{\frac{h_0}{h}}$. \end{proof}
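The final step in the proof above is the standard sphere comparison for the mean curvature: since $M^n$ touches $\partial B_{\rho\sqrt h}(he_{n+1})$ at $X$ from the outside, the principal curvatures of $M^n$ at $X$ are dominated by those of the sphere, whence
\[
H(X)\leq H_{\partial B_{\rho\sqrt h}}=\frac{n}{\rho\sqrt h}\leq\frac{n}{(n/\sqrt{h_0})\sqrt h}=\sqrt{\frac{h_0}{h}}\,,
\]
using the lower bound $\rho\geq n/\sqrt{h_0}$ in the last inequality.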
Finally, we need to show that $\kappa_1/H$ goes to zero as $h\to\infty$.
\begin{comment} \begin{lemma} Denote by $B_r$ the intrinsic ball of radius $r$ centred at the tip. Then \bann \min_{B_r}\kappa_1(X)=o(r^{-1})\,. \eann In particular, \bann \min_{B_r}\frac{\kappa_1}{H}=o(r^{-\frac{1}{2}})\,. \eann \end{lemma} \begin{proof} The Ricci curvature of $M$ is given by \[ \mathrm{Ric}=HA-A^2\,. \]
Applying the two-convexity assumption yields
\[ \mathrm{Ric}\geq \alpha\kappa_1 Hg\,. \] Now fix $X\in M$ such that $r(X)\geq 1$, say, and let $\gamma$ be any unit speed, length minimizing geodesic joining the tip, $0=\gamma(0)$, to $X=\gamma(R)$. Then, at any point $\gamma(r)$ along the geodesic, \begin{align*} \mathrm{Ric}(\gamma',\gamma')\geq{}\alpha \kappa_1 H
\geq{}&\frac{\kappa_1}{c\sqrt{r}}\geq\frac{\kappa_1}{c\sqrt{R}}\,, \end{align*} where $c>0$ depends on $\alpha$ and $C$. So suppose that there exists $\varepsilon>0$ such that $\kappa_1\geq c\varepsilon(n-1)\pi^2r^{-1}$ for all sufficiently large $r$. Then, for $R$ large enough, \begin{align*} \mathrm{Ric}(\gamma',\gamma')\geq(n-1)\frac{\pi^2}{\sqrt{R}}\frac{\varepsilon}{r}=(n-1)\frac{\pi^2}{R^2}\frac{R}{r}\varepsilon\sqrt{R}\,. \end{align*} Thus, for $R\geq \varepsilon^{-2}$, \begin{align*} \mathrm{Ric}(\gamma',\gamma')\geq(n-1)\frac{\pi^2}{R^2}\,. \end{align*} The second variation formula now allows us to construct a length decreasing variation of $\gamma$ as in the proof of Myers' Theorem, a contradiction.
Finally, we note that the Weingarten tensor satisfies \ba\label{eq:LaplaceA}
-\Delta A=|A|^2A+\nabla_VA\,. \ea In particular, the strong maximum principle implies that $\min_{B_r}\kappa_1$ is non-increasing in $r$. The first claim follows. The second follows from Lemmas \ref{lem:heightbound} and \ref{lem:Hlowerbound}. \end{proof} \end{comment}
\begin{lemma}[Asymptotics for $\frac{\kappa_1}{H}$]\label{lem:kappa1bound} For any sequence of points $X_j$ with $h(X_j)\to\infty$, \[ \frac{\kappa_1}{H}(X_j)\to 0\,. \] \end{lemma} \begin{proof} Suppose there exists a sequence of points $X_j\in M^n$ with $h_j:=h(X_j)\to\infty$ but $\limsup_{j\to\infty}\frac{\kappa_1}{H}(X_j)>0$. Passing to a subsequence, we can assume that $\liminf_{j\to\infty}\frac{\kappa_1}{H}(X_j)>0$. We may choose another sequence of points $Y_j\in M^n$ such that \[ \frac{\kappa_1}{H}(Y_j)=\min_{U_j}\frac{\kappa_1}{H}\,, \] where $U_j:=\{X\in M^n:h(X)\leq h_j\}$. Since $M^n$ is non-compact, we know that $\frac{\kappa_1}{H}(Y_j)\to 0$ \cite{Hm94}. Moreover, the strong maximum principle implies $h(Y_j)=h_j\to\infty$ since, combining \eqref{eq:LaplaceA} and \eqref{eq:LaplaceH}, the tensor $Z:=A/H$ satisfies \bann -\Delta Z(u,u)=\nabla_VZ(u,u)+2\inner{\nabla Z(u,u)}{\frac{\nabla H}{H}}\,. \eann
Now set $\lambda_j:=H(Y_j)$ and consider the sequence $M^n_j:=\lambda_j(M^n-Y_j)$. Then \[ H_j(0)= 1\quad \text{ and }\quad \frac{\kappa^j_1}{H_j}(0)\to 0 \,, \] where $H_j$ and $\kappa_1^j$ are the mean curvature and smallest principal curvature, respectively, of $M^n_j$. It now follows from the gradient estimate \eqref{eq:gradient} (see Corollary \ref{cor:convergence}) that, after passing to a subsequence, the sequence $M^n_j\cap B_j$ converges locally uniformly in $C^2$ to a non-empty limit $M^n_\infty\cap B_\infty$, where $B_j$ is the intrinsic ball in $M^n_j$ of radius $(2C_1)^{-1}$ about the origin. But since the sequence $M^n_j\cap B_j$ satisfies \[ H_j(X)=\lambda_j^{-1}\inner{\nu_j(X)}{e_{n+1}}\,, \] where $\nu_j(X)$ is the normal to $M^n_j$, the limit $M^n_\infty\cap B_\infty$ satisfies \ba\label{eq:axis} \inner{\nu_\infty}{e_{n+1}}\equiv 0\,,
\ea where $\nu_\infty$ is the normal to $M^n_\infty$. In particular, $\kappa^\infty_1\equiv 0$ in $B_\infty$, and we conclude from the cylindrical estimate that $M^n_\infty\cap B_\infty$ lies in a cylinder of radius $(n-1)$. But this implies that the ratio $|\nabla H_j|/H_j^2$ goes to zero on all of $B_j$, and, iterating Corollary \ref{cor:convergence} and passing to a diagonal subsequence, we deduce that $M^n_j$ converges locally uniformly in $C^2$ to a round orthogonal cylinder of radius $(n-1)$. Moreover, by \eqref{eq:axis}, the axis of the cylinder is parallel to $e_{n+1}$. By compactness of the constant height slices, a subsequence of $X_j$ converges to a point in the limit of height zero. But this contradicts $\liminf_{j\to\infty}\frac{\kappa_1}{H}(X_j)>0$. \begin{comment} For each $r>0$ choose a point $x_r\in \partial B_r(0)$ such that \[ \kappa_1(x_r)=\min_{B_r(0)}\kappa_1=\min_{\partial B_r(0)}\kappa_1=o(r^{-1})\,. \] Since $\kappa_1+\kappa_2\geq \alpha H$, we can arrange that $\kappa_2(x_r)\geq \frac{\alpha}{2}H>\kappa_1(x_r)$ for all $r$ sufficiently large. In particular, the orthogonal compliment $E_{x_r}$ of the eigenspace of $\kappa_1(x_r)$ has dimension $n-1$. Let $v\in E_{x_r}$ be any unit vector in this compliment and consider the geodesic $\gamma(s)=\exp_{x_r}sv$. Another argument (to be included) based on the gradient estimate and Myers' Theorem implies, if $r$ is large enough, that $\gamma$ stays minimizing for no further than a distance $L=\frac{4\pi n}{\alpha H(x_r)}$. In particular, the diameter of $\exp(E_{x_r})$ is bounded from above by $L$. But, on the other hand, Corollary \ref{cor:girthestimate} says that this diameter should be bounded below by $r^{\frac{1}{2}}$. Putting this together yields \[ H\leq Cr^{-\frac{1}{2}}\,. \] \end{comment} \end{proof}
\begin{comment} \begin{lemma}[Girth estimate: Mark II] Define \[ \overline R(h):=\sup\{R:M^n\cap B_R(he_{n+1})=\emptyset\}\,. \] There exists $h_0>0$ such that \[ \overline R(h)\geq h^{\frac{1}{2}} \] for all $h\geq h_0$. \end{lemma} \begin{proof} Define $\underline R(h):=\inf\{R:M^n\cap C_R^{-1}(h)=\emptyset\}$, where $C_R^{-}(h)$ is the intersection of the cylinder of radius $R$ about the axis $e_{n+1}$ and the halfspace $\{\inner{X}{e_{n+1}}\leq h\}$. We claim that $\overline R/\underline R\to 1$ as $h\to\infty$. Indeed, as we have seen, given any sequence of points $X_j\in M$ with $h(X_j)\to\infty$, the rescaled hypersurfaces $M_j:=H(X_j)(M-X_j)$ converge to the cylinder of radius $(n-1)$ and axis $e_{n+1}$.... \end{proof} \end{comment}
We are now ready to prove that the blow-down of our translator is the shrinking cylinder. \begin{lemma} Set $M_t^n:=M^n+te_{n+1}$. Given any sequence $h_j\to\infty$, the sequence $\{M_{t,j}^n\}_{j=1}^\infty$ of mean curvature flows \ba\label{eq:rescaledMCF} M_{t,j}^n:=h_j^{-\frac{1}{2}}\left( M^n_{h_jt}-h_je_{n+1}\right)\,,\quad t\in(-\infty,1) \ea converges locally uniformly in $C^\infty$ to the shrinking cylinder $S^{n-1}_{r(t)}\times \mathbb{R}$, where $r(t):=\sqrt{2(n-1)(1-t)}$. \end{lemma} \begin{proof} By Lemmas \ref{lem:Hto0}, \ref{lem:Hlowerbound}, \ref{lem:girthestimate} and \ref{lem:Hupperbound}, there is a sequence $X_j\in M^n$ with $h_j:=h(X_j)\to\infty$, $H(X_j)\sim h_j^{-\frac{1}{2}}$ and $\Vert\proj_{\mathbb{R}^n\times\{0\}}X_j\Vert\sim h_j^{\frac{1}{2}}$. Moreover, by Lemma \ref{lem:kappa1bound}, $\frac{\kappa_1}{H}(X_j)\to 0$. As in the proof of Lemma \ref{lem:kappa1bound}, we can use Corollary \ref{cor:convergence} and the cylindrical estimate to deduce that, after passing to a subsequence, $M^n_j:=h_j^{-\frac{1}{2}}(M^n-h_je_{n+1})$ converges locally uniformly in $C^2$ to a limit $M^n_\infty$ which is congruent to a round, orthogonal cylinder. Since the limit encloses the ray $\{se_{n+1}:s>0\}$, its axis must be parallel to $e_{n+1}$. It follows that $H\sim h^{-\frac{1}{2}}$. We can now conclude, by the same argument, that for any sequence $\lambda_j\to\infty$ and any $R>0$, the sequence \ba\label{eq:convergetocylinder} M_{j}^n=R\lambda_j^{-\frac{1}{2}}\left( M^n-\lambda_je_{n+1}\right) \ea converges subsequentially to a round orthogonal cylinder with axis parallel to $e_{n+1}$. Setting $\lambda_j:=R^2h_j$, where $R:=\sqrt{1-t}$, and applying standard regularity theory (see \cite{EcHu91} or \cite{Eck}), we deduce, after passing to a subsequence, that \eqref{eq:rescaledMCF} converges locally uniformly in $C^\infty$ to a shrinking cylinder with axis parallel to $e_{n+1}$.
It is also clear from \eqref{eq:convergetocylinder} that the radius of the limit goes to zero as $t\to 1$. We conclude that the limit is $S^{n-1}_{r(t)}\times \mathbb{R}$. Since the limit is the same for any convergent subsequence, the convergence holds for the entire sequence. \end{proof}
\begin{cor}[Asymptotics for $H$] \[ H=\sqrt{\frac{n-1}{2}}h^{-\frac{1}{2}}+o\left(h^{-\frac{1}{2}}\right)\,. \] \end{cor}
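This can be read off from the blow-down: by the preceding lemma (at $t=0$), near height $h$ the translator is modelled, after rescaling by $h^{-\frac{1}{2}}$, on the cylinder of radius $\sqrt{2(n-1)}$, so that, heuristically,
\[
H\approx\frac{n-1}{\sqrt{2(n-1)h}}=\sqrt{\frac{n-1}{2}}\,h^{-\frac{1}{2}}\,,
\]
with the error controlled by the locally uniform convergence.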
The remainder of the proof of Theorem \ref{thm:bowl} now follows from the work of Haslhofer \cite[Sections 3--5]{Ha}.
Finally, we show, briefly, how Corollary \ref{cor:bowl} follows from Theorem \ref{thm:bowl}. \begin{proof}[Proof of Corollary \ref{cor:bowl}] The cylindrical estimate follows immediately from the cylindrical estimate of Huisken and Sinestrari \cite[Theorem 1.5]{HuSi09}, \[
|A|^2-\frac{1}{n-1}H^2\leq \varepsilon H^2+C_\varepsilon\,, \] since the lower order term $C_\varepsilon$ is annihilated under the rescaling, no matter how small we take $\varepsilon$ (see \cite[Section 4]{HuSi99a}). The strong maximum principle then gives the strict inequality.
The gradient estimate follows from the gradient estimate of Huisken and Sinestrari \cite[Theorem 6.1 and Remark 6.2]{HuSi09}, \[
|\nabla A|^2\leq Cg_1g_2\,, \]
where $g_1:=2C_\varepsilon+\varepsilon H^2-\left(|A|^2-\frac{1}{n-1}H^2\right)$ for arbitrary $\varepsilon>0$ and $g_2=C_\delta+\delta H^2-\left(|A|^2-\frac{1}{n-1}H^2\right)$ with $\delta=\delta(n)$ fixed. In particular, $g_1\leq c_nH^2+C_n$, so that \[
\frac{|\nabla A|^2}{H^4}\leq C\left( c_n+\frac{C_n}{H^2}\right)\left(\frac{2C_\varepsilon}{H^2}+\varepsilon -\frac{|A|^2-\frac{1}{n-1}H^2}{H^2}\right)\,. \] Under the rescaling, all lower order terms are annihilated, and the claim follows, as for the cylindrical estimate, by taking $\varepsilon\to 0$. \end{proof}
\section{Flows by non-linear functions of curvature}\label{sec:F}
We now consider solutions of \eqref{eq:FT} and prove Theorem \ref{thm:bowlF} and Corollary~\ref{cor:bowlF}. Let us begin with a discussion of the conditions (1)--(2) of Theorem~\ref{thm:bowlF} which will replace the corresponding conditions in Theorem \ref{thm:bowl}.
\subsection{Flows by convex speeds} For speeds $F=f(\vec\kappa)$ given by convex admissible $f:\Gamma^n\to\mathbb{R}$, the cylindrical estimate takes the form \ba\label{eq:cylindricalFconvex} \kappa_1+\kappa_2-\beta_1^{-1}F>0\,, \ea where $\beta_1=f(0,1,\dots,1)$ is the value $F$ takes on the cylinder $\mathbb{R}\times S^{n-1}_1$. We claim that $\kappa_1$ is bounded from below by $\kappa_1+\kappa_2-\beta_1^{-1}F$, and that the only points at which both $\kappa_1$ and $\kappa_1+\kappa_2-\beta_1^{-1}F$ vanish are the cylindrical points: $\kappa_1=0$, $\kappa_2=\kappa_n$. \begin{claim}\label{claim:cylindricalconvexF} Set \[ \Lambda:=\left\{z\in\Gamma^n:\min_{1\leq i<j\leq n}\left(z_i+z_j\right)-\beta_1^{-1}f(z)>0\right\}\,. \] Then \ben \item $\Lambda\subset \Gamma_+$, and \item $\partial\Lambda\cap \partial\Gamma_+=\cup_{\sigma\in P_n}\{k(\lambda_{\sigma(1)},\lambda_{\sigma(2)},\dots,\lambda_{\sigma(n)}):k\geq 0\}$, where $\lambda_1=0$ and $\lambda_2=\dots=\lambda_n=1$ and $P_n$ denotes the set of permutations of the set $\{1,\dots,n\}$. \een In particular, there is a constant $\beta_2>0$ such that \[ \min_{1\leq i<j\leq n}\left(z_i+z_j\right)-\beta_1^{-1}f(z)\leq \beta_2\min_{1\leq i\leq n}z_i\,. \] \end{claim} \begin{proof} Note that, as a super-level set of a concave function, $\Lambda$ is convex. Note also that $(0,1,\dots,1)\in\partial\Lambda$. Thus, by symmetry and convexity, we have $(1,\dots,1)\in\Lambda$. Finally, by homogeneity and \emph{strict} monotonicity of $f$, the only points in $\overline\Lambda$ of the form $(0,z_2,\dots,z_n)$ for $0<z_i$ are those with $z_2=\dots=z_n$. Claims (1) and (2) follow. The existence of $\beta_2$ then follows from compactness of the set $\Lambda\cap\{\Vert z\Vert=1\}$ and homogeneity of $f$. \end{proof}
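As a sanity check (this example is ours, not part of the argument), take $f(z)=z_1+\dots+z_n$, the mean curvature, which is in particular convex and admissible: then $\beta_1=n-1$ and
\[
\left.\Big(\min_{1\leq i<j\leq n}(z_i+z_j)-\beta_1^{-1}f(z)\Big)\right|_{z=(0,1,\dots,1)}=1-\frac{n-1}{n-1}=0\,,
\]
so $(0,1,\dots,1)\in\partial\Lambda$, while at $z=(1,\dots,1)$ the same quantity equals $2-\frac{n}{n-1}>0$ for $n\geq 3$, so the diagonal lies in $\Lambda$, as Claim \ref{claim:cylindricalconvexF} asserts.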
The gradient estimate then takes the form \ba\label{eq:gradientFconvex}
\frac{|\nabla A|^2}{F^4} \leq C_1\frac{\kappa_1}{F}\,. \ea
We remark that \eqref{eq:cylindricalFconvex} holds on blow-up limits of two-convex flows by convex admissible speeds \cite{AnLa14}; however, it is unknown (to the authors) whether a gradient estimate of the form \eqref{eq:gradientFconvex} holds, except when $F$ is the mean curvature. We note that, by a similar argument as in Claim \ref{claim:gradient} below, the estimate \bann
\frac{|\nabla A|^2}{F^4}\leq C_1 \eann would suffice.
\subsection{Flows by concave speeds} For speeds $F=f(\vec\kappa)$ given by concave admissible $f:\Gamma^n\to\mathbb{R}$, the cylindrical estimate takes the form \ba\label{eq:cylindricalFconcave1} \kappa_n-\beta_1^{-1}F<0\,, \ea where $\beta_1=f(0,1,\dots,1)$ is the value $F$ takes on the cylinder $\mathbb{R}\times S^{n-1}_1$. We claim that $\kappa_1$ is bounded from below by $\beta_1^{-1}F-\kappa_n$, and that the only points at which both $\kappa_1$ and $\kappa_n-\beta_1^{-1}F$ vanish are the cylindrical points (cf. \cite[Proposition 3.6 and Theorem 3.8]{BrHu2}). \begin{claim}\label{claim:cylindricalconcaveF} Set \[ \Lambda:=\{z\in\Gamma^n:\max_{1\leq i\leq n} z_i-\beta_1^{-1}f(z)<0\}\,. \] Then \ben \item $\Lambda\subset \Gamma_+$, and \item $\partial\Lambda\cap \partial\Gamma_+=\cup_{\sigma\in P_n}\{k(\lambda_{\sigma(1)},\lambda_{\sigma(2)},\dots,\lambda_{\sigma(n)}):k\geq 0\}$, where $\lambda_1=0$ and $\lambda_2=\dots=\lambda_n=1$ and $P_n$ denotes the set of permutations of the set $\{1,\dots,n\}$. \een In particular, there is a constant $\beta_2>0$ such that \[ \beta_1^{-1}f(z)-\max_{1\leq i\leq n}z_i\leq \beta_2\min_{1\leq i\leq n}z_i\,. \] \end{claim} \begin{proof} The proof is the same as the proof of Claim \ref{claim:cylindricalconvexF}. \end{proof}
The gradient estimate then takes the form \ba\label{eq:gradientFconcave1}
\frac{|\nabla A|^2}{F^4}\leq C_1\frac{\kappa_1}{F}\,. \ea
We remark that \eqref{eq:cylindricalFconcave1} holds on blow-up limits of two-convex flows by concave admissible speeds \cite{LaLy} (cf. \cite[Theorem 3.1]{BrHu2}). Moreover, making use of \cite[Theorem 6.1]{BrHu2}, we find that the gradient estimate also holds if the underlying flow is embedded.
\begin{claim}\label{claim:gradient} Let $X:M^n\times[0,T)\to\mathbb{R}^{n+1}$ be an embedded solution of \eqref{eq:F}, where $F$ is given by $F=f(\vec\kappa)$ for some admissible $f:\Gamma^n\to\mathbb{R}$ satisfying the conditions of Corollary \ref{cor:bowlF}. Then there is a constant $C=C(n,M_0)$ and, for any $\varepsilon>0$, a constant $F_\varepsilon=F_\varepsilon(\varepsilon,n,M_0)$ such that \[
\frac{|\nabla A|^2}{F^4}\leq \varepsilon+C\frac{\kappa_1}{F} \] wherever $F>F_\varepsilon$. \end{claim} \begin{proof} We will make use of the gradient estimate of \cite[Theorem 6.1]{BrHu2}, which provides a constant $\Lambda=\Lambda(n,M_0)$ such that \ba\label{eq:BrHugradient}
|\nabla A|^2\leq\Lambda F^4\,. \ea We note that the interior non-collapsing estimate \cite{ALM13} and Sections 5 and 6 of \cite{BrHu2} apply to embedded flows satisfying the conditions of Corollary \ref{cor:bowlF}.
So suppose that the claim does not hold. Then there is a constant $\varepsilon_0>0$ and a sequence of points $(x_j,t_j)\in M^n\times[0,T)$ with $F(x_j,t_j)\to\infty$ such that \[
\frac{|\nabla A|^2}{F^4}(x_j,t_j)>\varepsilon_0+j\frac{\kappa_1}{F}(x_j,t_j)\,. \] If \[ \limsup_{j\to\infty}\frac{\kappa_1}{F}(x_j,t_j)>0
\] then we would obtain a contradiction to \eqref{eq:BrHugradient}. Otherwise, passing to a subsequence, translating in space and time, and rescaling by $\lambda_j:=F(x_j,t_j)$, we obtain a sequence of flows $X_j:M^n\times(-\lambda_j^2t_j,0]\to\mathbb{R}^{n+1}$ with
\bann X_j(x_j,0)=0\,,\quad F_j(x_j,0)=1\,,\quad\kappa_1(x_j,0)\to 0\quad\text{and}\quad|\nabla A|^2(x_j,0)>\frac{\varepsilon_0}{2}\,.
\eann By \cite[Theorem 6.1]{BrHu2}, this sequence converges in a uniform parabolic neighbourhood of $(x_j,0)$ locally uniformly in $C^2$ to some non-empty smooth limit flow. By the cylindrical estimate \cite{LaLy} (cf. \cite[Theorem 3.1]{BrHu2}), this limit must satisfy $\kappa_n-\beta_1^{-1}F\leq 0$. By Claim \ref{claim:cylindricalconcaveF}, this implies $\kappa_1\geq 0$. Since $\kappa_1$ reaches zero at the origin, we can now conclude from the splitting theorem and Claim~\ref{claim:cylindricalconcaveF} that the limit is contained in a shrinking cylinder. But this contradicts the fact that $\frac{|\nabla A|^2}{F^4}\ge \frac{\varepsilon_0}{2}$ at some point on the limit. \end{proof} It follows that blow-up limits of \eqref{eq:F} with speeds satisfying the conditions of Corollary \ref{cor:bowlF} satisfy \[
\frac{|\nabla A|^2}{F^4}\leq C\frac{\kappa_1}{F}\,. \]
Note that flows by concave speeds are interior non-collapsing \cite{ALM13}. Moreover, the non-collapsing estimate improves at a singularity \cite{LaLy}. Thus, we can replace the cylindrical estimate by \bann \overline k-\beta_1^{-1}F<0\,. \eann
This formulation of the cylindrical estimate is non-trivial even in dimension $n=2$, and is stronger than \eqref{eq:cylindricalFconcave1} when $n\geq 3$.
Armed with these facts and the splitting theorem of the Appendix, we can proceed almost exactly as in Section \ref{sec:proof} to show (assuming, without loss of generality, that $f(0,1,\dots,1)=n-1$) that the blow-down of $M^n_t:=M^n+te_{n+1}$ is the shrinking cylinder $S^{n-1}_{\sqrt{2(n-1)(1-t)}}\times \mathbb{R}$.
By the conditions on $F$, the remainder of the proof differs only slightly from \cite[Sections 3-5]{Ha}. Indeed, the linearization of \eqref{eq:F} is the equation \ba\label{eq:LF}
(\partial_t-\Delta_F)u=|A|^2_Fu\,,
\ea where, in an orthonormal frame of eigenvectors for $A$, $\Delta_F:=\frac{\partial f}{\partial \kappa_{i}}\nabla_i\nabla_i$ and $|A|^2_F:=\frac{\partial f}{\partial \kappa_{i}}\kappa_i^2$. Solutions of the linearized flow on a translating solution of \eqref{eq:F} correspond to solutions of the linearized translator equation \ba\label{eq:LFT}
-\Delta_Fu=\nabla_Vu+|A|^2_Fu \ea on the corresponding solution of \eqref{eq:FT}. Since the speed $F$ satisfies this equation, the strong maximum principle implies that \bann \sup_{h\leq h_0}\frac{\vert u\vert}{F}\leq \sup_{h=h_0}\frac{\vert u\vert}{F} \eann for any $u$ satisfying \eqref{eq:LFT} on a strictly convex solution of \eqref{eq:FT}.
By the invariance of \eqref{eq:F} under ambient isometries, the functions \[ u_{J,O}(x,t):=\inner{J(X(x,t)-O)}{\nu(x,t)} \] satisfy \eqref{eq:LF} for any rotation generator $J\in \mathfrak{so}(n+1)$ and translation generator $O\in\mathbb{R}^{n+1}$.
Recalling that we have normalized $f$ so that $f(0,1,\dots,1)=n-1$, observe that (modulo a time-dependent tangential reparametrization) the shrinking cylinders \bann C:S^{n-1}\times\mathbb{R}\times (-\infty,1){}&\to S^{n-1}_{r(t)}\times\mathbb{R}\subset \mathbb{R}^{n+1}\\ (\vartheta,h,t){}&\mapsto \left( r(t)\vartheta,h\right) \eann with $r(t):=\sqrt{2(n-1)(1-t)}$ satisfy \eqref{eq:F}. By symmetry and homogeneity of $f$, we find, for each $j=2,\dots,n$, that \[ \frac{\partial f}{\partial\kappa_j}=\frac{r}{n-1}\sum_{i=2}^n\frac{\partial f}{\partial\kappa_i}\kappa_i=\frac{r}{n-1}\sum_{i=1}^n\frac{\partial f}{\partial\kappa_i}\kappa_i=\frac{r}{n-1}F=1 \] on the shrinking cylinder, so that \[ \Delta_F=\frac{\partial f}{\partial\kappa_1}\nabla_h\nabla_h+\frac{1}{r^2}\Delta_{S^{n-1}} \] and \[
|A|^2_F=\sum_{i=1}^n\frac{\partial f}{\partial\kappa_i}\kappa_i^2=\frac{n-1}{r^2}=\frac{1}{2(1-t)}\,. \] It is now clear that the decay estimate \cite[Proposition 4.1]{Ha} and the contradiction argument in \cite[Section 5]{Ha} apply in the non-linear setting. This proves Theorem \ref{thm:bowlF}. Corollary \ref{cor:bowlF} then follows, since, by \cite{LaLy} (cf. \cite[Theorem 3.1]{BrHu2}) and Claim \ref{claim:gradient}, the assumptions of Theorem \ref{thm:bowlF} hold on blow-up limits of solutions of \eqref{eq:F}.
\begin{comment} \begin{proof}[Proof of Corollary \ref{cor:bowlF}] First, we note that the function $f:\Gamma^n\to\mathbb{R}$ given by \bann f(z):=\left(\sum_{i<j}\frac{1}{z_i+z_j}\right)^{-1}\,, \eann where $\Gamma^n:=\{z\in \mathbb{R}^n:z_i+z_j>0\,\forall\,i,j\}$, is concave, and inverse-concave on the faces of the positive cone. Indeed, concavity follows from the fact that $f$ is the harmonic mean of the $\frac{n(n-1)}{2}$ linear functions $r_{ij}(z):=z_i+z_j$ of $z$. Inverse-concavity follows from the fact that, for any $z\in\Gamma^{n-1}_+$, \bann f_\ast(z_2,\dots,z_n):={}&f(0,z_2^{-1},\dots, z_n^{-1})^{-1}\\ ={}&\sum_{i=2}^n\frac{1}{z^{-1}_i}+\sum_{1<i<j}\frac{1}{z^{-1}_i+z^{-1}_j}\\ ={}&\sum_{i=2}^nz_i+\sum_{1<i<j}\frac{1}{z^{-1}_i+z^{-1}_j} \eann is a linear combination of the mean and the $\frac{n(n-1)}{2}$ harmonic means $(r_\ast)_{ij}:=\frac{1}{z^{-1}_i+z^{-1}_j}$. Thus, solutions of \eqref{eq:FT} with $F$ given by \eqref{eq:twoharmonicmean} admit the splitting theorem, Theorem \ref{thm:splittingF} (see also Remarks \ref{rem:splittingF}).
By \cite[Theorem 3.1]{BrHu2}, any solution of \eqref{eq:F} with $F$ given by \eqref{eq:twoharmonicmean} satisfies \[ \frac{1}{n-1}H-\beta^{-1}F\leq \varepsilon F+C_\varepsilon\,, \] where $\beta:=\frac{4}{(n-1)(n+2)}$ is the value $F$ takes on the unit cylinder $\mathbb{R}\times S^{n-1}$. Thus, any proper blow-up limit satisfies \[ \frac{1}{n-1}H-\beta^{-1}F<0\,. \] In particular, the blow-up is weakly convex, since \cite[Proposition 3.6]{BrHu2} \bann \frac{3(n-2)}{(n-1)(n+2)}\kappa_1\geq\beta^{-1} F-\frac{1}{n-1}H\,. \eann Next, we note that the set $\{\kappa_1=0\}\cap\{\frac{1}{n-1}H-\beta^{-1}F=0\}$ coincides with the set of cylindrical points $\{\kappa_1=0,\kappa_2=\dots=\kappa_n\}$. Indeed, since, as functions of the principal curvatures, $H$ is linear and $F$ is strictly concave in non-radial directions, we see that the constant $H$ slices of the sub-level sets of $\frac{1}{n-1}H-\beta^{-1}F$ are strictly convex. The claim follows since the halfspace $\{\kappa_1>0\}$ supports $\{\frac{1}{n-1}H-\beta^{-1}F=0\}$ at a cylindrical point $\kappa_1=0$, $\kappa_2=\dots=\kappa_n$.
Next, we prove that there is a constant $C$ so that, for any solution of \eqref{eq:F}, \[
\frac{|\nabla A|^2}{F^3}\leq \varepsilon F-C\left(\frac{1}{n-1}H-\beta^{-1}F\right) \] for any $\varepsilon>0$ wherever $F>F_\varepsilon$ is sufficiently large. Suppose that this is not the case. Then there is an $\varepsilon_0>0$ and a sequence of points $(x_j,t_j)$ so that $F(x_j,t_j)\to\infty$ and \[
\frac{|\nabla A|^2}{F^3}(x_j,t_j)>\varepsilon_0 F(x_j,t_j)-\Lambda\left(\frac{1}{n-1}H-\beta^{-1}F\right)(x_j,t_j) \] for any $\Lambda$ whenever $j$ is sufficiently large. If there exists $\alpha>0$ such that $\left(\frac{1}{n-1}H-\beta^{-1}F\right)(x_j,t_j)>-\alpha F(x_j,t_j)$ for all sufficiently large $j$, we obtain a contradiction to \cite[Theorem 6.1]{BrHu2}. If this is not the case, then we have $\left(\frac{1}{n-1}H-\beta^{-1}F\right)(x_j,t_j)\to 0$, and, after rescaling, we obtain a sequence of flows with \bann
\frac{|\nabla A|^2}{F^4}(x_j,t_j)>1 \quad\text{ and }\quad \frac{\frac{1}{n-1}H-\beta^{-1}F}{F}(x_j,t_j)\to 0\,.
\eann By similar considerations as in Lemma \ref{lem:convergence} and Corollary \ref{cor:convergence}, the gradient estimate in \cite[Theorem 6.1]{BrHu2} implies that the sequence converges in a uniform neighbourhood $B_R(x_j)\times(t_j-R^2,t_j]$ of $(x_i,t_j)$ locally uniformly in $C^2$ to some non-empty smooth limit flow . But this limit must satisfy $\frac{1}{n-1}H-\beta^{-1}F\leq 0$, with equality at some point. By the strong maximum principle (see \cite[Proposition 2.4]{BrHu2}), we conclude that $\frac{1}{n-1}H-\beta^{-1}F\equiv 0$ and it follows from above that the limit is a shrinking cylinder and the convergence is global. But this contradicts the fact that $\frac{|\nabla A|^2}{F^4}\ge 1$ at some point.
Armed with these facts and the splitting theorem of the Appendix, we can proceed almost exactly as in Section \ref{sec:proof} to show (assuming, without loss of generality, that $f(0,1,\dots,1)=n-1$,) that the blow-down of $M^n_t:=M^n+te_{n+1}$ is the shrinking cylinder $S^{n-1}_{\sqrt{2(n-1)(1-t)}}\times \mathbb{R}$. By the conditions on $F$, the remainder of the proof differs only formally from \cite[Sections 3-5]{Ha}. \end{proof} \end{comment}
\begin{comment} \section{Remarks on Haslhofer's paper} \begin{claim} Let $\tau\in(0,1)$ be fixed. If $B(h)>0$ for $h$ large, then there exists a sequence $h_m\to \infty$ such that \[ B(h_m)\le 2\tau^{-1/2} B(\tau h_m)\,,\,\,\forall m\in\mathbb{N}.\] \end{claim} \begin{proof} Let $h_0$ be such that $B(h_0)>0$ and set $h_n=\tau^{-n} h_0$. Then if \[ B(\tau h_n)\ge \frac{\tau^\frac12}{2} B(h_n)\text{ for infinitely many }n\] we are done. Assume now that the above is not true and thus there exists some $N\in \mathbb{N}$ such that \[ B(\tau h_n)< \frac{\tau^\frac12}{2} B(h_n)\,,\,\,\forall n\ge N.\] In this case, set $h= h_N$. By the above we know that $B(h)>0$. Reset now $h_n=\tau^{-n} h$. Then \[ B(\tau h_n)< \frac{\tau^\frac12}{2} B(h_n)\,,\,\,\forall n\ge 1.\] Iterating we have \[B(h)<\frac{\tau^\frac12}{2} B(h_1)<\left(\frac{\tau^\frac12}{2}\right)^2 B(h_2)<\dots<\left(\frac{\tau^\frac12}{2}\right)^n B(h_n).\] Now since \[B(h)=O(h^\frac12)\implies B(h)\le Ch^\frac12\] we have \[ B(h)<\left(\frac{\tau^\frac12}{2}\right)^n \tau^{-\frac12} h^\frac12<h^\frac12 2^{-\frac n2}\,\, \forall n\in \mathbb{N}\] and letting $n\to \infty$ we have $B(h)=0$, which is a contradiction. \end{proof} \end{comment}
\section{Appendix: The splitting theorem}\label{sec:SMP}
We include here a proof of the splitting theorem for solutions of \eqref{eq:F}.
\begin{theorem}[Splitting Theorem]\label{thm:splittingF} Let $X:M^n\times(0,t_0]\to\mathbb{R}^{n+1}$, $n\geq 2$, be a weakly convex solution of \eqref{eq:F}, where $F$ is given by $F(x)=f(\vec\kappa(x))$ for some admissible $f:\Gamma^n\to\mathbb{R}$ such that \bi \item[(i)] $\displaystyle \{(0,\hat z):\hat z\in \Gamma^{n-1}_+\}\subset \Gamma^n$ and the function $f_\ast:\Gamma_+^{n-1}\to\mathbb{R}$ defined by \bann f_\ast(z_2^{-1},\dots,z_n^{-1}):=f(0,z_2,\dots,z_n)^{-1} \eann is concave. \ei Suppose also that \bi \item[(ii)] $\vec\kappa(M^n\times(0,t_0])\subset \overline \Gamma{}_0^n$ for some cone $\Gamma_0^n$ satisfying $\overline\Gamma{}_0^n\setminus\{0\}\subset \Gamma_2^n$, where $\Gamma_2^n:=\{z\in\mathbb{R}^n:\min_{1\leq i<j\leq n}\{z_i+z_j\}>0\}$. \ei Then $\kappa_1(x_0,t_0)=0$ for some $x_0\in M^n$ only if $\kappa_1\equiv 0$ and $M^n$ splits isometrically as a product $M^n\cong \mathbb{R}\times\Sigma^{n-1}$. \end{theorem} \begin{proof} This was established for convex speeds in \cite[Theorem 4.21]{La}. The proof for speeds satisfying the weaker inverse-concavity condition is similar:
Suppose that $\kappa_1$ reaches zero at an interior space-time point $(x_0,t_0)$. By hypothesis, $\kappa_1<\kappa_2$ at this point. Let $U$ be the largest space-time neighbourhood of $(x_0,t_0)$ in $M^n\times(0,t_0]$ such that $\kappa_1<\kappa_2$. Then $U$ is open, $\kappa_1$ has a unique principal direction field $e_1$ in $U$, and both are smooth in $U$.
Differentiating $\kappa_1=A(e_1,e_1)$ yields \bann \nabla_k\kappa_1=\nabla_kA_{11}+2A(\nabla_ke_1,e_1)\,, \eann so that \ba\label{eq:Dk1} \nabla_kA_{11}=\nabla_k\kappa_1=0 \ea at $(x_0,t_0)$ for each $k$. Note that $\nabla_ke_1\perp e_1$ since $e_1$ has constant length. Differentiating the eigenvalue identity $A(e_1)=\kappa_1e_1$ yields the remaining components: \bann (A-\kappa_1I)(\nabla_ke_1)=\left(\nabla_k\kappa_1 I-\nabla_kA\right)(e_1)\,, \eann so that \ba\label{eq:De1} \nabla_ke_1=-R\left(\nabla_kA(e_1)\right)\,,
\ea where $R:=(A-\kappa_1I)|_{e_1^\perp}^{-1}\circ \proj_{e_1^\perp}$. Next, consider the time derivative \bann \partial_t\kappa_1=\nabla_tA_{11}+2A(\nabla_te_1,e_1)\,, \eann where the covariant time derivative $\nabla_t$ is defined on vector fields $v$ via $\nabla_tv=[\partial_t,v]-HA(v)$, and extended to tensor fields by the Leibniz rule. This yields \bann \partial_t\kappa_1=\nabla_tA_{11} \eann at $(x_0,t_0)$. Finally, we compute the Hessian, \bann \nabla_k\nabla_l\kappa_1=\nabla_k\nabla_lA_{11}+4\nabla_kA(\nabla_le_1,e_1)+2A(\nabla_k\nabla_le_1,e_1)+2A(\nabla_ke_1,\nabla_le_1)\,. \eann Applying \eqref{eq:De1} and the Codazzi identity, we obtain \bann \nabla_k\nabla_l\kappa_1=\nabla_k\nabla_lA_{11}-2R(\nabla_1A(e_k),\nabla_1A(e_l)) \eann at $(x_0,t_0)$.
In an orthonormal frame of eigenvectors of $A$, we have the evolution equation \cite{An94a} \ba\label{eq:FevolveA}
(\nabla_t-\Delta_F)A_{ij}=|A|_F^2A_{ij}+\frac{\partial^2 F}{\partial A_{pq}\partial A_{rs}}\nabla_iA_{pq}\nabla_jA_{rs}\,,
\ea where $\Delta_F:=\frac{\partial F}{\partial A_{kl}}\nabla_k\nabla_l$ and $|A|^2_F:=\frac{\partial F}{\partial A_{kl}}A^2_{kl}$, and we conclude \bann
(\partial_t-\Delta_F)\kappa_1=|A|_F^2\kappa_1+N(A,\nabla A)\,, \eann where \bann N(A,\nabla A):={}&2A\big((\nabla_t-\Delta_F)e_1,e_1\big)+\frac{\partial^2F}{\partial A_{pq}\partial A_{rs}}\nabla_1A_{pq}\nabla_1A_{rs}\\ {}&+2\frac{\partial F}{\partial A_{kl}}\Big[2R(\nabla_1A_k,\nabla_1A_l)-A\big(R(\nabla_1A_k),R(\nabla_1A_l)\big)\Big]\,.
\eann Observe that, at any boundary point $Z\in \mathrm{Sym}_{\Gamma^n\cap\partial\Gamma^n_+}$, the space of symmetric $n\times n$ matrices with eigenvalues $z$ in $\Gamma^n\cap\partial\Gamma^n_+$, we have, for any totally symmetric $T\in \mathbb{R}^n\otimes\mathbb{R}^n\otimes\mathbb{R}^n$, \bann N(Z,T)={}&B^p(Z,T)T_{p11}+\sum_{p,q,r,s>1}Q^{pq,rs}(Z)T_{1pq}T_{1rs}\,, \eann where
\bann B^1(Z,T):={}&\left.\left(\frac{\partial^2 F}{\partial A_{11}\partial A_{11}}T_{111}+2\sum_{p,q>1}\frac{\partial^2 F}{\partial A_{pq}\partial A_{11}}T_{1pq}\right)\right|_{Z}\,,\\
B^p(Z,T):={}&R^{pq}\left.\left(\frac{\partial F}{\partial A_{11}}T_{11q}+2\sum_{k>1}\frac{\partial F}{\partial A_{1k}}T_{k1q}\right)\right|_{Z} \quad\text{for}\quad p>1 \eann and
\bann Q^{pq,rs}(Z):=\left.\left(\frac{\partial^2 F}{\partial A_{pq}\partial A_{rs}}+\frac{\partial F}{\partial A_{pr}}R^{qs}\right)\right|_{Z}\,. \eann We claim that, as quadratic forms on the space of $(n-1)\times (n-1)$ symmetric matrices, \ba\label{eq:ICcond} Q\geq 2\frac{DF\otimes DF}{F}
\ea at any $Z\in \mathrm{Sym}_{\Gamma^n\cap\partial\Gamma^n_+}$. Indeed, embedding the space $\mathrm{Sym}_{\Gamma_+^{n-1}}$ of positive definite $(n-1)\times(n-1)$ symmetric matrices into the space $\mathrm{Sym}_{\overline \Gamma{}^n_+}$ of non-negative definite $n\times n$ symmetric matrices via the natural inclusion, the inverse-concavity condition is equivalent to concavity of the function $F_\ast:\mathrm{Sym}_{\Gamma_+^{n-1}}\to\mathbb{R}$ defined by $F_\ast(Z^{-1}):=F(Z)^{-1}$, where $F(Z):=f(z)$ and $z$ is the $n$-tuple of eigenvalues of $Z$. Differentiating this identity in the direction of $B\in \mathrm{Sym}_{\mathbb{R}^{n-1}}$, we find \bann
-D_{X}F_\ast|_{Z^{-1}}={}&
-\frac{1}{F^2(Z)}D_BF|_{Z}\,, \eann where $X:=Z^{-1}BZ^{-1}$. Differentiating once more yields
\bann D_{X}D_{X}F_\ast|_{Z^{-1}}+2D_{XZX}F_\ast|_{Z^{-1}}
={}&\frac{2}{F^3(Z)}(D_BF|_{Z})^2-\frac{1}{F^2(Z)}D_BD_BF|_{Z} \eann and we conclude
\bann 0\leq{}&-D_XD_XF_\ast|_{Z^{-1}}\\
={}&\frac{1}{F^2(Z)}\left.\left( D^2F-\frac{2DF\otimes DF}{F}+2DF\ast Z^{-1}\right)\right|_{Z}(B,B)\,, \eann where $\ast$ denotes the product $(R\ast S)^{pq,rs}:=R^{pr}S^{qs}$. This implies \eqref{eq:ICcond}.
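For the reader's convenience, the direction $X$ appearing above arises from the standard first variation of matrix inversion: for invertible $Z$ and any symmetric direction $B$, \bann \frac{d}{ds}\Big|_{s=0}(Z+sB)^{-1}=-Z^{-1}BZ^{-1}=-X\,, \eann so differentiating the identity $F_\ast(Z^{-1})=F(Z)^{-1}$ in the direction $B$ indeed produces derivatives of $F_\ast$ in the direction $X=Z^{-1}BZ^{-1}$, with the signs as displayed.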
We now return to the evolution equation for $\kappa_1$. Note that $N$ is Lipschitz with respect to $A$. Thus, denoting by $\overline A$ the projection of $A$ onto $\partial\mathrm{Sym}_{\Gamma^n_+}$, we obtain \bann (\partial_t-\Delta_F)\kappa_1+B^k\nabla_k\kappa_1\geq{}&-\vert N(A,\nabla A)-N(\overline A,\nabla A)\vert\\ \geq{}&-C\Vert A-\overline A\Vert\\ ={}&-C\kappa_1\,, \eann where $C$ is the worst Lipschitz constant of $N(\cdot,\nabla A)$ on the set $U$. Note that $C$ is bounded on any compact subset of $U$. The strong maximum principle now implies that $\kappa_1\equiv 0$ on $K$ for any compact subset $K$ of $U$. It follows that $U\subset\{(x,t)\in M^n\times(0,t_0]:\kappa_1(x,t)=0\}\subset U$ and we deduce that $U$ is closed, and hence equal to $M^n\times (0,t_0]$. But in that case, we must have, by \eqref{eq:ICcond}, \bann 0\equiv \frac{\partial F}{\partial A_{pq}}\nabla_1A_{pq}\,. \eann By monotonicity of $F$, we conclude that $\nabla_1A\equiv 0$.
Using standard arguments, we can now deduce the splitting: Observe that, for any $v\in \Gamma(\ker(A))$, \bann 0\equiv \nabla_k(A(v))=\nabla_kA(v)+A(\nabla_kv)=A(\nabla_kv)\,. \eann Thus, $\nabla_kv\in \Gamma(\ker(A))$ whenever $v\in \Gamma(\ker(A))$; that is, $\ker(A)\subset TM^n$ is invariant under parallel translation in space. Since, for any $v\in \Gamma(\ker A)$ and any $u\in TM^n$, we have \bann {}^{X}\hspace{-3pt}D_uX_\ast v= X_\ast \nabla_uv-A(u,v)\nu=X_\ast \nabla_uv\in X_\ast\ker A\,, \eann where ${}^{X}\hspace{-3pt}D$ is the pull-back of the Euclidean connection along $X$, we deduce that $X_\ast\ker A\subset T\mathbb{R}^{n+1}$ is parallel (in space) with respect to ${}^{X}\hspace{-3pt}D$.
Moreover, using the evolution equation \eqref{eq:FevolveA} for $A$, we obtain \bann \nabla_tA(v)={}&\Delta_F A(v)\\ ={}&\frac{\partial F}{\partial A_{kl}}\left[\nabla_k\left(\nabla_lA(v)\right)-\nabla_l(A(\nabla_kv))-A(\nabla_l\nabla_kv)\right]\\ ={}&0\,, \eann so that \bann A(\nabla_tv)=\nabla_t(A(v))-\nabla_tA(v)=0\,; \eann that is, $\ker A$ is also invariant with respect to $\nabla_t$. Since, for any $v\in \Gamma(\ker(A))$, we have $\nabla_vF=\frac{\partial F}{\partial A_{kl}}\nabla_vA_{kl}\equiv 0$, this implies that \bann {}^{X}\hspace{-3pt}D_tX_\ast v=(\nabla_vF)\nu+X_\ast \nabla_tv=X_\ast \nabla_tv\,, \eann and we deduce that $X_\ast\ker A$ is also parallel in time. We conclude that the orthogonal complement of $X_\ast\ker(A)$ is a constant (in space and time) subspace of $\mathbb{R}^{n+1}$.
Now consider any geodesic $\gamma:\mathbb{R}\to M^n\times\{t\}$, $t\in (0,t_0]$, with $\gamma'(0)\in \ker(A)$. Then, since $\ker(A)$ is invariant under parallel translation, $\gamma'(s)\in\ker(A)$ for all $s$, so that \bann {}^{X}\hspace{-3pt}D_sX_\ast\gamma'= X_\ast\nabla_s\gamma'-A(\gamma',\gamma')\nu=0\,. \eann
Thus, $X\circ\gamma$ is a geodesic in $\mathbb{R}^{n+1}$. We can now conclude that $X$ splits off a line, $M^n\cong \mathbb{R}\times\Sigma^{n-1}$, such that $\mathbb{R}$ is flat ($T\mathbb{R}$ is spanned by $\ker(A)$) and $\Sigma^{n-1}$ is strictly convex ($T\Sigma^{n-1}$ is spanned by the rank space of $A$) and maps into the constant subspace $\left( X_\ast\ker(A)\right)^\perp\cong \mathbb{R}^{n}$.
It follows that $X\big|_{\{0\}\times\Sigma^{n-1}\times (0,t_0]}$ satisfies \ba\label{eq:tildeF} \partial_t\widetilde X(\widetilde x,t)=-\widetilde F(\widetilde x,t)\widetilde\nu(\widetilde x,t)\,,
\ea for all $(\widetilde x,t)\in \{0\}\times \Sigma^{n-1}\times(0,t_0]$, where $\widetilde \nu=\nu\big|_{\{0\}\times\Sigma^{n-1}\times(0,t_0]}$ and $\widetilde F$ is given by the restriction of $f$ to $\Gamma_+^{n-1}\cong\{z\in \overline\Gamma_+:z_1=0, z_{2}>0,\dots,z_n>0\}$. \end{proof}
\begin{rmks}\label{rem:splittingF}\mbox{} \begin{enumerate} \item By condition (ii), the cross-section $\Sigma^{n-1}$ in the splitting must be compact \cite{Hm94}, and we conclude, by uniqueness of solutions of \eqref{eq:tildeF}, that the isometric splitting persists until the maximum time. \item Flows by convex admissible speeds defined on the faces of $\Gamma^n_+$ automatically satisfy condition (i). \item If $n=2$, flows by admissible speeds defined on the faces of $\Gamma{}^2_+$ automatically satisfy condition (i).
\item \label{rem:uniform} Condition (ii) can be arranged if the flow preserves any form of \emph{uniform} two-convexity. This is the case for flows by convex speeds, which preserve $\kappa_1+\kappa_2\geq\alpha F$, flows of surfaces (trivially) and flows by concave speeds satisfying $f\big|_{\partial\Gamma^n}\equiv 0$, $\Gamma^n\subset\Gamma_2^n$, which preserve $\kappa_n\leq CF$ or $H\leq CF$. \end{enumerate} \end{rmks}
\end{document}
\begin{document}
\title{Finite Automata for the Sub- and Superword Closure of CFLs: Descriptional and Computational Complexity\thanks{This work was partially funded by the DFG project ``Polynomial Systems on Semirings: Foundations, Algorithms, Applications''.}}
\author{Georg Bachmeier \and Michael Luttenberger \and Maximilian Schlund} \institute{Technische Universit\"{a}t M\"{u}nchen, \email{\{bachmeie,luttenbe,schlund\}@in.tum.de} }
\maketitle
\begin{abstract} We answer two open questions of Gruber, Holzer, and Kutrib (2009) on the state complexity of representing sub- or superword closures of context-free grammars (CFGs): (1) We prove a (tight) upper bound of $2^{\mathcal{O}(n)}$ on the size of nondeterministic finite automata (NFAs) representing the subword closure of a CFG of size $n$. (2) We present a family of CFGs for which the minimal deterministic finite automata representing their subword closure match the upper bound of $2^{2^{\mathcal{O}(n)}}$ following from (1). Furthermore, we prove that the inequivalence problem for NFAs representing sub- or superword-closed languages is only NP-complete, as opposed to PSPACE-complete for general NFAs. Finally, we extend our results into an approximation method for attacking inequivalence problems for CFGs. \end{abstract}
\section{Introduction}
Given a (finite) word $w= w_1w_2\ldots w_n$ over some alphabet $\Sigma$, we say that $u$ is a {\em (scattered) subword or subsequence} of $w$ if $u$ can be obtained from $w$ by erasing some letters of $w$. We denote the fact that $u$ is a subword of $w$ by $u\sw w$, and alternatively say that $w$ is a {\em superword} of $u$. As shown by Higman \cite{Higman52} in 1952, $\sw$ is a well-quasi-order on $\Sigma^*$, implying that {\em every} language $L\subseteq \Sigma^\ast$ has a finite set of $\sw$-minimal elements. This proves that both the subword (also: downward) closure $\dc{L} := \{u\in \Sigma^\ast \mid \exists w\in L\colon u \sw w\}$ and the superword (also: upward) closure $\uc{L} := \{ w\in\Sigma^\ast \mid \exists u\in L\colon u \sw w\}$ are regular for \emph{any} language $L$. While we cannot, in general, effectively construct a finite automaton accepting $\dc{L}$ resp.\ $\uc{L}$, effective constructions are known for specific classes of languages.
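The relation $\sw$ itself is straightforward to decide with a single greedy left-to-right pass; a minimal sketch (the function name is ours):

```python
def is_subword(u: str, w: str) -> bool:
    """Decide u ⊑ w: can u be obtained from w by erasing letters?
    Greedy matching is correct for the scattered-subword relation."""
    it = iter(w)
    # `c in it` advances the iterator until it finds c (or exhausts it),
    # so the letters of w are consumed strictly left to right.
    return all(c in it for c in u)
```

Greedy matching suffices because matching each letter of $u$ as early as possible in $w$ never rules out a later match.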
It is well known that this is the case when $L$ is given as a context-free grammar (CFG). This was first shown by van Leeuwen \cite{vLeeuwen78} in 1978; later, Courcelle gave an alternative proof of this result in \cite{Courcelle91}. Section \ref{sec:upperbound} builds on these results of Courcelle. We also mention that for Petri-net languages an effective construction is known thanks to Habermehl, Meyer, and Wimmel \cite{DBLP:conf/icalp/HabermehlMW10}.
These results can be used to tackle undecidable questions regarding the ambiguity, inclusion, equivalence, universality or emptiness of languages by over-approximating one or both languages by suitable regular languages \cite{MohriNederhof01,DBLP:conf/fase/LongCMM12,DBLP:journals/fmsd/GantyMM12,DBLP:conf/icalp/HabermehlMW10}: For instance, consider the scenario where we are given a procedural program whose runs can be described as a pushdown automaton resp.\ a CFG $G_1$ and a context-free specification $G_2$ of all safe executions, and we want to check whether all runs of the system conform to the safety specification $\mathcal{L}(G_1) \subseteq \mathcal{L}(G_2)$. As $\mathcal{L}(G_1)\cap \overline{\dc{\mathcal{L}(G_2)}} \neq \emptyset \Rightarrow \mathcal{L}(G_1) \not\subseteq \mathcal{L}(G_2)$, we can obtain at least a partial answer to the otherwise undecidable question. Of course, in the case $\mathcal{L}(G_1)\subseteq \dc{\mathcal{L}(G_2)}$ no information is gained, and one needs to refine the problem e.g.\ by using some sort of counter-example guided abstraction refinement as done e.g.\ in \cite{DBLP:conf/fase/LongCMM12}.
\paragraph*{Contributions and Outline} Our first results (Sections \ref{sec:upperbound} and \ref{sec:debu}) concern the blow-up incurred when constructing a (non-)deterministic finite automaton (NFA resp.\ DFA) for the subword closure of a language given by a context-free grammar $G$, where we improve the results of \cite{Gruber:2009:MSH:1551570.1551577}: For a CFG $G$ of size $n$, \cite{Gruber:2009:MSH:1551570.1551577} shows that an NFA recognizing $\dc\mathcal{L}(G)$ has at most $2^{2^{\mathcal{O}(n)}}$ states, and there are CFGs requiring at least $2^{\Omega(n)}$ states. (For linear CFGs the upper and lower bounds are both single exponential.) The upper bound of \cite{Gruber:2009:MSH:1551570.1551577} is established by analyzing the inductive construction of \cite{vLeeuwen78}. We improve this result in Section \ref{sec:upperbound} to $2^{\mathcal{O}(n)}$ by slightly adapting Courcelle's construction \cite{Courcelle91} (we also briefly discuss that naively applying Courcelle's construction cannot do better than $2^{\Omega(n\log n)}$ in general). This result immediately yields an upper bound of $2^{2^{\mathcal{O}(n)}}$ on the size of the minimal DFA accepting $\dc\mathcal{L}(G)$. In Section~\ref{sec:debu} we show that this bound is tight already over a binary alphabet. To the best of our knowledge, so far only examples were known which showcase the single-exponential blow-up when constructing an NFA accepting the subword closure of a context-free grammar~\cite{Gruber:2009:MSH:1551570.1551577} resp.\ a DFA accepting the subword closure of a DFA or NFA \cite{DBLP:journals/fuin/Okhotin10}. We then study in Section \ref{sec:equiv} the equivalence problem for NFAs recognizing subword- resp.\ superword-closed languages. While for general NFAs this problem is \textsf{PSPACE}-complete, we show that it becomes \textsf{coNP}-complete under this restriction.
We combine these results in Section \ref{sec:application} to derive a conceptually simple semi-decision procedure for checking language-inequivalence of two CFGs $G_1,G_2$: we first construct NFAs for $\dc{\mathcal{L}(G_1)}$ and $\dc{\mathcal{L}(G_2)}$, and check language-inequivalence of these NFAs; if the NFAs are inequivalent, we construct a witness of the language-inequivalence of $G_1$ and $G_2$; otherwise we refine the grammars, and repeat the test on the new grammars so obtained. This approach is motivated by the abstraction-refinement approach of \cite{DBLP:conf/fase/LongCMM12} for checking if the intersection of two context-free languages is empty. We experimentally evaluate our approach by comparing it to {\em cfg-analyzer\xspace} of \cite{DBLP:conf/icalp/AxelssonHL08}, which uses incremental SAT-solving to tackle the language-inequivalence problem.
\section{Preliminaries} By $\Sigma$ we denote a finite alphabet. For every natural number $n$, let $\Sigma^{\le n}$ denote the words of length at most $n$ over $\Sigma$. The empty word is denoted by $\ew$; the set of all finite words by $\Sigma^\ast$.
We measure the \emph{size} $\abs{G}$ of a CFG $G$ as the total number of symbols on the right hand sides of all productions. The size of an NFA is simply measured as the number of states (this is an adequate measure for a constant alphabet, since the number of transitions is at most quadratic in the number of states).
Throughout the paper we will always assume that all CFGs are reduced, i.e.~do not contain any unproductive or unreachable nonterminals (any CFG can be reduced in polynomial time). Let $X$ be a nonterminal in a CFG $G$. We define $\mathcal{L}(X)$ as the set of all words $w\in\Sigma^\ast$ derivable from $X$. If $S$ is the start symbol of $G$, then $\mathcal{L}(G) := \mathcal{L}(S)$. Moreover, $\Sigma_X \subseteq \Sigma$ denotes the set of all terminals reachable from $X$. Overloading notation we sometimes write $\dc{X}$ for $\dc{\mathcal{L}(X)}$.
The dependency graph of a CFG $G$ is the finite graph with nodes the nonterminals of $G$ where there is an edge from $X$ to $Y$ if there is a production $X\to \alpha Y \beta$ in $G$. We say that $X$ {\em depends directly on} $Y$ (written as $X\dep Y$) if $X\neq Y$ and there is an edge from $X$ to $Y$. The reflexive and transitive closure of $\dep$ is denoted by $\deppo$. We write $X\equiv Y$ if $X\deppo Y\wedge Y\deppo X$, i.e.\ if $X$ and $Y$ are located in a common strongly-connected component of the dependency graph. We say that $G$ is strongly connected if the dependency graph is strongly connected.
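For illustration, the dependency graph and the relation $X\equiv Y$ can be computed directly from a grammar given as a mapping from nonterminals to right-hand sides; the representation and names below are our own, and Kosaraju's two-pass algorithm is just one standard choice for the SCC computation:

```python
def dependency_edges(grammar):
    """grammar: dict mapping each nonterminal to a list of right-hand
    sides, each a tuple of symbols.  Returns the edges X -> Y of the
    dependency graph: Y is a nonterminal occurring in some rhs of X."""
    nts = set(grammar)
    return {(x, s)
            for x, rhss in grammar.items()
            for rhs in rhss
            for s in rhs if s in nts}

def sccs(grammar):
    """Strongly connected components of the dependency graph
    (Kosaraju's two-pass algorithm)."""
    succ = {x: [] for x in grammar}
    pred = {x: [] for x in grammar}
    for a, b in dependency_edges(grammar):
        succ[a].append(b)
        pred[b].append(a)

    def dfs(v, adj, seen, out):
        # iterative DFS appending vertices in post-order
        stack = [(v, iter(adj[v]))]
        seen.add(v)
        while stack:
            u, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                stack.pop()
                out.append(u)

    order, seen = [], set()
    for v in grammar:
        if v not in seen:
            dfs(v, succ, seen, order)
    comps, seen = [], set()
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(v, pred, seen, comp)
            comps.append(frozenset(comp))
    return comps
```

On the example grammar of the next section this yields the components $\{X,Y\}$, $\{U,V\}$, $\{S\}$, and $\{Z\}$.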
From \cite{Courcelle91} we recall some useful facts concerning the subword closure: \begin{lemma} \label{lem:facts-courcelle} For any nonterminals $X,Y,Z$ in a CFG $G$ it holds that: \begin{enumerate} \item $\dc(\mathcal{L}(X) \cup \mathcal{L}(Y)) = \dc{\mathcal{L}(X)} \cup \dc{\mathcal{L}(Y)}$ \item $\dc(\mathcal{L}(X) \cdot \mathcal{L}(Y)) =\dc{\mathcal{L}(X)}\cdot \dc{\mathcal{L}(Y)}$ \item $X \equiv Y \Rightarrow \dc{X} = \dc{Y}$ \label{equiv-scc} \item If $X \rightarrow^* \alpha Y \beta Z \gamma $ for $Y,Z \equiv X$ then $\dc{X} = \Sigma_X^*$ \end{enumerate} \end{lemma}
\newcommand{q_{\text{en}}}{q_{\text{en}}} \newcommand{q_{\text{ex}}}{q_{\text{ex}}}
\section{Computing the Subword Closure of CFGs} \label{sec:upperbound}
In this section we describe an optimized version of the construction in \cite{Courcelle91} that computes an NFA of size $2^{\mathcal{O}(\abs{G})}$ for the subword closure of a CFG $G$, which is asymptotically optimal. We first illustrate the construction by a simple example.
As explained at the end of the next section, a naive implementation of the construction of \cite{Courcelle91} leads to an automaton of size $2^{\Omega (n)} n! = 2^{\Omega(n \log n)}$ whereas our approach achieves the (optimal) bound of $2^{\mathcal{O}(n)}$.
\subsection{Construction by Example}\label{sec:ex-bound} Consider the grammar $G$ with start symbol $S$ defined by the productions:
\begin{tabular}{b{7cm}b{3cm}} $\begin{array}{ll@{\hspace{1cm}}ll} S &\to XaU \mid UaU \mid X & X &\to ZbY \mid \ew \\[1mm] Y &\to XYa \mid b & U &\to VZ \mid acb \\[1mm] V &\to ZU \mid \ew & Z &\to cZ \mid bc \\[1mm] \end{array}$
&
\begin{tikzpicture} \node (S) at (0,0) {$S$}; \node (X) at (-1,-1) {$X$}; \node (Y) at (-2,-1) {$Y$}; \node (Z) at (0,-2) {$Z$}; \node (U) at (0,-1) {$U$}; \node (V) at (1,-1) {$V$};
\path[->] (S) edge (X)
edge (U)
(X) edge[bend left] (Y)
edge (Z)
(Y) edge[bend left] (X)
edge[loop left] (Y)
(Z) edge[loop right] (Z)
(U) edge[bend left] (V)
edge (Z)
(V) edge[bend left] (U)
edge (Z)
; \end{tikzpicture} \end{tabular}
\noindent On the right-hand side, the dependency graph is shown where an edge $x\to y$ stands for $x\depeq y$. To simplify the construction, we first transform the grammar $G$ into a certain normal form $G'$ (with $\dc{\mathcal{L}(G)}=\dc{\mathcal{L}(G')}$) and then construct an NFA from $G'$.
In the first step we compute the SCCs of $G$, here $\{X,Y\}$ and $\{U,V\}$. Since $Y \to XYa$ (with $Y\equiv X$ and $X\equiv X$), we know that $\dc Y = \dc X = \Sigma_X^*= \{a,b,c\}^*$. We therefore can replace any occurrence of $Y$ by $X$ (thereby removing $Y$ from the grammar) and redefine the rules for $X$ to $X \to aX \mid bX \mid cX \mid \varepsilon$. In case of the SCC $\{U,V\}$ the grammar is linear w.r.t.\ $U$ and $V$, i.e.\ starting from either of the two we can never produce sentential forms in which the total number of occurrences of $U$ and $V$ exceeds one. Hence, we can identify $U$ and $V$ without changing the subword closure. Finally, we introduce unique nonterminals for each terminal symbol and restrict the right-hand side of each production to at most two symbols by introducing auxiliary nonterminals $W$ and $T$: \begin{tabular}{b{9cm}b{4cm}} $\begin{array}{ll@{\hspace{0.3cm}}ll} S &\to XW \mid UW \mid X & W &\to A_aU \\[1mm] X &\to A_aX \mid A_bX \mid A_cX \mid A_\varepsilon & U &\to UZ \mid ZU \mid A_aT \mid A_\varepsilon \\[1mm] T &\to A_c A_b & Z &\to A_cZ \mid A_bA_c \\[1mm] A_a &\to a & A_b &\to b\\[1mm] A_c &\to c & A_\varepsilon &\to \varepsilon\\[1mm] \end{array}$
& \scalebox{0.68}{ \begin{tikzpicture} \node (S) at (-1,0) {$S$}; \node (X) at (-2,-1) {$X$}; \node (W) at (-1,-1) {$W$}; \node (Z) at (1,-2) {$Z$}; \node (U) at (0,-1) {$U$}; \node (T) at (0,-2) {$T$}; \node (1) at (-2,-3) {$A_{\ew}$}; \node (a) at (-1,-3) {$A_a$}; \node (b) at (1,-3) {$A_b$}; \node (c) at (0,-3) {$A_c$};
\path[->]
(S) edge (X)
edge (W)
edge (U)
(X) edge[loop left] (X)
edge (a)
edge (1)
edge (c)
edge (b)
(W) edge (a)
edge (U)
(Z) edge[loop right] (Z)
edge (c)
edge (b)
(U) edge[loop right] (U)
edge (Z)
edge (T)
edge (a)
edge (1)
(T) edge (c)
edge (b)
; \end{tikzpicture} } \end{tabular}
\noindent Note that the dependency graph of this transformed grammar is now acyclic apart from self-loops. Because of this, we can directly transform the grammar into an {\em acyclic} equation system (or straight-line program, or algebraic circuit) whose solution is a regular expression for $\dc S$: \[ \begin{array}{ll@{\hspace{1cm}}ll} \dc A_a &= (a + \varepsilon) & \dc A_b &= (b + \varepsilon)\\ \dc A_c &= (c + \varepsilon) & \dc A_\varepsilon &= \varepsilon\\ \dc Z &= c^*(\dc A_b \dc A_c) & \dc T &= \dc A_c \dc A_b \\ \dc U &= \Sigma_Z^* (\dc A_a \dc T) \Sigma_Z^* & \dc W &= \dc A_a \dc U \\ \dc X &= \Sigma_X^* & \dc S &= \dc X \dc W + \dc U \dc W + \dc X \\ \end{array} \] In order to obtain an NFA for $\dc S$, we evaluate this equation system from bottom to top while re-using as many of the already constructed automata as possible. For instance, consider the equation: \[ \dc S = \dc X \dc W + \dc U \dc W + \ew \cdot \dc X \] Because of acyclicity of the equation system, we may assume inductively that we have already constructed NFAs $\sA_{\dc X}$, $\sA_{\dc W}$, and $\sA_{\dc U}$ for $\dc X$, $\dc W$, and $\dc U$, respectively. To construct the NFA for $\dc S$, we first make two copies $\bl{\sA}{1}$, $\bl{\sA}{2}$ of each of these automata. Automata with superscript $(1)$ will be used exclusively for variable occurrences to the left of the concatenation operator, while automata with superscript $(2)$ will be used for the remaining occurrences. We then read quadratic monomials, like $ \dc X \dc W$, as an $\ew$-transition connecting $\bl{\sA}{1}_{\dc X}$ with $\bl{\sA}{2}_{\dc W}$ as shown in Figure~\ref{fig:1} where all edges represent $\ew$-transitions.
\begin{figure}
\caption{Efficient re-use of recurring NFAs in Courcelle's construction.}
\label{fig:1}
\end{figure}
We do not claim that this construction yields the smallest NFA, but it is easy to describe and yields an NFA of sufficiently small size in order to deduce in the following subsections an asymptotically tight upper bound on the number of states. We recall that the bound of $2^{\Theta(n)}$ follows from a CFG of size $3n+2$ succinctly representing the singleton language $\{a^{2^n}\}$ \cite{Gruber:2009:MSH:1551570.1551577}.
In \cite{DBLP:conf/concur/AtigBT08} it is remarked that a straight-forward implementation of Courcelle's construction yields an NFA of ``single exponential'' size w.r.t.~$\abs{G}$. However, no detailed complexity analysis is given. Consider the CFG with start symbol $A_n$ and consisting of the rules $A_0 \to a$ and for all $1\leq k \leq n:\quad A_k \to A_iA_j \quad \forall 0\leq i,j \leq (k-1)$. If we compute an NFA for $\dc{A_n}$ via the straight-forward bottom-up construction it will have size
$a_n := |\sA_{\dc A_n}|$ with $ a_n = 2 + \sum_{0\leq i,j \leq (n-1)} (a_i + a_j). $ It is easy to show that $a_n \geq 2^{n}n! \in 2^{\Omega(n\log n)}$. Hence, the crucial part to achieve the optimal bound of $2^{\mathcal{O}(n)}$ is to reuse already computed automata. We just remark that one can also achieve similar savings, by factoring out common terms in the right hand side of the acyclic equations. A subsequent bottom-up construction leads to an NFA of size $2^{\mathcal{O}(n)}$ as well but the constant hidden in the $\mathcal{O}$ is larger and the analysis is more involved. Note that this also shows that we can construct a regular expression of size $2^{\mathcal{O}(n)}$ representing the subword closure.
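The recurrence for the naive construction is easy to tabulate. The following sketch (assuming, for concreteness, a two-state base automaton for $\dc{\{a\}}$, i.e.\ $a_0=2$) confirms the bound $a_n \geq 2^n n!$ for small $n$:

```python
from math import factorial

def naive_sizes(n, a0=2):
    """Sizes a_k = 2 + sum_{0<=i,j<=k-1} (a_i + a_j)
                 = 2 + 2*k*(a_0 + ... + a_{k-1})
    of the NFAs built naively bottom-up for A_0 -> a, A_k -> A_i A_j.
    a0 = 2 is an assumption: a two-state NFA accepting {a, eps}."""
    a = [a0]
    for k in range(1, n + 1):
        # sum(a) currently equals a_0 + ... + a_{k-1}
        a.append(2 + 2 * k * sum(a))
    return a
```

For instance, `naive_sizes(3)` gives the sequence 2, 6, 34, 254, already dominating $2^k k!$.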
\subsection{Normal Form for Computing the Subword Closure} To simplify our construction, we will assume that our grammar has a special form which is similar to CNF but with unary rules allowed. Any CFG can be transformed into this form with at most linear blowup in size, preserving its subword closure (though not necessarily its language).
\begin{definition} A CFG $G$ is in quadratic normal form (QNF) if for every terminal $x\in\Sigma\cup \{\varepsilon\}$ there is a unique nonterminal $A_x$ with the only production $A_x \to x$ and every other production is in one of the following forms: \begin{itemize} \item $X\to YX$ or $X \to XY$ (with $Y \neq X$) \item $X \to Y$ or $X\to YZ$ (with $Y,Z \neq X$) \end{itemize} A grammar in QNF is called \emph{simple} if \begin{itemize} \item for all $X \to YX$ or $X \to XY$, we have $X \dep Y$ \item for all $X \to Y$ or $X \to YZ$, we have $X \dep Y,Z$. \end{itemize} \end{definition} Note that the dependency graph associated with a grammar in simple QNF is acyclic with the exception of self-loops.
First, we need a small lemma that allows us to eliminate all linear productions ``within'' some SCC, i.e.~productions of the form $X \to \alpha Y \beta$ such that $X\neq Y$ but $Y\deppo X$. \begin{lemma} \label{lem:scc} Let $G$ be a strongly connected linear CFG with nonterminals $\vars = \{X_1,\dots,X_n\}$ so that every production is either of the form $X\to \alpha Y \beta$ or $X \to \alpha$ for $\alpha,\beta\in\Sigma^\ast$.
Consider the grammar $G'$ which we obtain from $G$ by replacing, in every production of $G$, every occurrence of a nonterminal $X_i$ by a fresh nonterminal $Z$.
We then have that $\dc \mathcal{L}(Z) = \dc \mathcal{L}(X_i)$ for all $i\in [n]$. \end{lemma} Using the preceding lemma, we can show that it suffices to consider only CFG in simple QNF in the following. \begin{theorem} \label{thm:simpleQNF}
Every CFG $G$ can be transformed into a CFG $G'$ in simple QNF such that $\dc{\mathcal{L}(G)} = \dc{\mathcal{L}(G')}$ and $|G'| \in \mathcal{O}(|G|)$. \end{theorem} \begin{proof}[sketch] First, we use Lemma \ref{lem:facts-courcelle} to simplify all productions involving an $X$ with $X \Rightarrow^* \alpha X \beta X \gamma$. Then we apply Lemma \ref{lem:scc} to contract SCCs to a single nonterminal. Finally, we introduce auxiliary variables for the terminals and we binarize the grammar (keeping unary rules as in \cite{DBLP:journals/didactica/LangeL09}). \end{proof}
\begin{theorem} \label{thm:upperbound} For any CFG $G$ in simple QNF with $n$ nonterminals there is an NFA $\sA$ with at most $2\cdot 3^{n-1}$ states which recognizes the subword closure of $G$, i.e.~$\dc \mathcal{L}(G) = \mathcal{L}(\sA)$. \end{theorem} \begin{proof}[sketch] Since the dependency graph of a grammar in simple QNF is a DAG (if we ignore self-loops), we can order the nonterminals according to a topological ordering of this graph. We proceed bottom-up to inductively build an NFA for $\dc \mathcal{L}(G) = \dc S$ as in Section \ref{sec:ex-bound}. Since our grammar is in QNF, at each stage we only have to produce at most two copies of every automaton representing the subword closure of a ``lower'' nonterminal $Y$. Inductively, for each such $Y$ we can build an NFA with at most $2\cdot 3^i$ states, where $i$ is $Y$'s position in the topological ordering. Using the ``bipartite wiring'' sketched in Figure~\ref{fig:1}, the size of the automaton for the start symbol $S$ can then be estimated as \[ \abs{\sA_S} \le 2 + \sum_{Y\colon S\dep Y} 2 \cdot \abs{\sA_Y} \le 2 + 4 \cdot \sum_{i=0}^{n-2} 3^{i} = 2\cdot 3^{n-1}. \] \end{proof}
\begin{corollary} For every CFG $G$ of size $n$ there is an NFA $A$ of size $2^{\mathcal{O}(n)}$ and a DFA $D$ of size $2^{2^\mathcal{O}(n)}$ with $\dc \mathcal{L}(G) = \mathcal{L}(A) = \mathcal{L}(D)$. \end{corollary}
\section{CFG $\to$ DFA: Double-exponential Blowup}\label{sec:debu} As seen in the preceding section, when moving from a context-free grammar $G$ representing a subword-closed language to a language-equivalent NFA $\cA$, the size of the automaton is bounded from above by $2^{\mathcal{O}(\abs{G})}$. For superword closures, \cite{Gruber:2009:MSH:1551570.1551577} prove the same upper bound on the size of the NFA. From both results we immediately obtain the upper bound $2^{2^{\mathcal{O}(\abs{G})}}$ on the size of the minimal DFA recognizing the sub- or superword closure of a CFG $G$. This bound is essentially tight as witnessed by the family of finite languages $$ L_k = \bigcup_{j=1}^k \{0,1\}^{j-1} \{0\} \{0,1\}^k \{0\} \{0,1\}^{k-j}. $$ $L_k$ contains exactly all those words $w\in\{0,1\}^{2k+1}$ which contain two $0$s which are separated by exactly $k$ letters. Using the idea of iterated squaring in order to succinctly encode the language $\{a^{2^n}\}$ as a context-free grammar (resp.\ straight-line program) of size $\mathcal{O}(n)$, also the language $L_{2^n}$ can be represented by a context-free grammar of size $\mathcal{O}(n)$. One then easily shows that the Myhill-Nerode relation w.r.t.\ $L_{2^n}$, $\dc{L_{2^n}}$, and $\uc{L_{2^n}}$, respectively, has at least $2^{2^{n}}$ equivalence classes:
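For small $k$, the language $L_k$ can be enumerated directly from its defining union, which also makes the stated characterization easy to check by brute force; a sketch (names ours):

```python
from itertools import product

def L(k):
    """L_k = union over j=1..k of {0,1}^(j-1) 0 {0,1}^k 0 {0,1}^(k-j):
    all words of length 2k+1 with two 0s exactly k letters apart."""
    words = set()
    for j in range(1, k + 1):
        for pre in product("01", repeat=j - 1):
            for mid in product("01", repeat=k):
                for suf in product("01", repeat=k - j):
                    words.add("".join(pre + ("0",) + mid + ("0",) + suf))
    return words
```

For $k=2$ this enumerates exactly the words $w\in\{0,1\}^5$ with $w_i=w_{i+3}=0$ for some $i$.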
\begin{theorem}\label{thm:debu} There exists a family of CFGs $G_n$ of size $\mathcal{O}(n)$ (generating a finite language) such that the minimal DFA accepting either $L(G_n)$, or $\dc L(G_n)$, or $\uc L(G_n)$, has at least $2^{2^n}$ states. \end{theorem}
\section{Equivalence of NFAs modulo Sub-/Superword Closure} \label{sec:equiv} As hinted at in the introduction, one application of the sub- resp.~superword closure is (in-)equivalence checking of CFGs by regular over-approximation. For this, we must solve the equivalence problems for NFAs representing sub/sup-word closed languages. Naturally, the question arises how hard this is.
Let $\sA$ and $\sB$ denote NFAs over the common alphabet $\Sigma$, having $n_{\sA}$ and $n_{\sB}$ many states, respectively. Recall that the universality problem for NFAs, i.e.\ $\mathcal{L}(\sA) \stackrel{?}{=} \Sigma^\ast$, and hence also the equivalence problem $\mathcal{L}(\sA)\stackrel{?}{=}\mathcal{L}(\sB)$ are \textsf{PSPACE}-complete. Only recently, it was shown in~\cite{DBLP:journals/fuin/RampersadSX12} that these problems \emph{stay} \textsf{PSPACE}-complete even when restricted to NFAs representing languages which are closed w.r.t.\ either prefixes or suffixes or factors. However, in~\cite{DBLP:journals/fuin/RampersadSX12} it was also shown that for subword-closed NFAs (i.e.\ $\dc \mathcal{L}(\sA)= \mathcal{L}(\sA)$), universality is decidable in linear time as $\mathcal{L}(\sA) = \Sigma^*$ holds if and only if there is an SCC in $\sA$ whose labels cover all of $\Sigma$. It is easily shown that a similar result also holds for superword-closed NFAs (i.e.\ $\uc \mathcal{L}(\sA) = \mathcal{L}(\sA)$): We have $\mathcal{L}(\sA) = \Sigma^*$ if and only if $\ew \in \mathcal{L}(\sA)$.
In this section we show that both equivalence problems, i.e.\ $\dc \mathcal{L}(\sA) \stackrel{?}{=} \dc \mathcal{L}(\sB)$ and $\uc \mathcal{L}(\sA) \stackrel{?}{=} \uc \mathcal{L}(\sB)$, are \textsf{coNP}-complete, hence are easier than in the general case (unless $\textsf{NP} = \textsf{PSPACE}$). In the following, we write more succinctly $\sA \dceqp \sB$ and $\sA \uceqp \sB$ for these two problems. The following lemma is easy to prove: \begin{lemma} \label{lem:dc-uc-NFA} Let $\sA$ be an NFA. Define $\sA^{\dc{}}$ as the NFA we obtain from $\sA$ by adding for every transition $q\xrightarrow{a} q'$ of $\sA$ the $\ew$-transition $q\xrightarrow{\ew} q'$. Similarly, define $\sA^{\uc{}}$ to be the NFA we obtain by adding the loops $q\xrightarrow{a} q$ for every state $q$ and every terminal $a\in\Sigma$ to $\sA$. Then $\dc{\mathcal{L}(\sA)} = \mathcal{L}(\sA^{\dc{}})$ and $\uc{\mathcal{L}(\sA)} = \mathcal{L}(\sA^{\uc{}})$. \end{lemma} To prove that both $\sA \uceqp \sB$ and $\sA \dceqp \sB$ are \textsf{coNP}-complete we will give a polynomial bound on the length of a {\em separating word}, i.e.\ a word $w$ in the symmetric difference of $\mathcal{L}(\sA^{\dc{}})$ and $\mathcal{L}(\sB^{\dc{}})$ resp.\ of $\mathcal{L}(\sA^{\uc{}})$ and $\mathcal{L}(\sB^{\uc{}})$.
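The constructions of Lemma \ref{lem:dc-uc-NFA} translate directly into code. The following sketch uses an ad-hoc NFA representation of our own, with the empty string standing for an $\ew$-transition:

```python
class NFA:
    """Ad-hoc NFA: delta is a set of (q, a, q') triples,
    where a == "" denotes an eps-transition."""
    def __init__(self, states, delta, init, finals):
        self.states, self.delta = states, delta
        self.init, self.finals = init, finals

    def _closure(self, S):
        # eps-closure of a set of states
        S, frontier = set(S), list(S)
        while frontier:
            q = frontier.pop()
            for p, a, r in self.delta:
                if p == q and a == "" and r not in S:
                    S.add(r)
                    frontier.append(r)
        return S

    def accepts(self, w):
        cur = self._closure({self.init})
        for c in w:
            cur = self._closure({r for p, a, r in self.delta
                                 if p in cur and a == c})
        return bool(cur & self.finals)

def down_closure(A):
    """A^dc: add an eps-transition parallel to every labelled transition."""
    extra = {(p, "", r) for p, a, r in A.delta if a != ""}
    return NFA(A.states, A.delta | extra, A.init, A.finals)

def up_closure(A, alphabet):
    """A^uc: add a self-loop on every letter at every state."""
    loops = {(q, c, q) for q in A.states for c in alphabet}
    return NFA(A.states, A.delta | loops, A.init, A.finals)
```

For the single-word language $\{ab\}$, the down-closure accepts exactly $\{\ew,a,b,ab\}$ and the up-closure accepts exactly the words containing $ab$ as a scattered subword.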
We first show that the DFA obtained from $\sA^{\dc{}}$ resp.\ $\sA^{\uc{}}$ using the powerset construction has a particularly simple structure: \begin{lemma}\label{lem:powerset} Let $\sA$ be an NFA. Let $\sD^{\dc{}}_{\sA}$ (resp.\ $\sD^{\uc{}}_{\sA}$) be the DFA we obtain from $\sA^{\dc{}}$ (resp.\ $\sA^{\uc{}}$) by means of the powerset construction. For any transition $S \xrightarrow{a} T$ of $\sD^{\dc{}}_{\sA}$ (resp.\ $\sD^{\uc{}}_{\sA}$) it holds that $S\supseteq T$ (resp.\ $S\subseteq T$). \end{lemma} Thus, the transition relation of $\sD^{\dc{}}_{\sA}$ (disregarding self-loops) can be ``embedded'' into the lattice of subsets of the states of $\sA$, which has height at most $n_{\sA}-1$. Hence the DFA $\sD^{\dc{}}_{\sA}$ has small diameter (although even the minimal DFA for the subword closure can be super-polynomially \emph{larger} than an NFA \cite{DBLP:journals/fuin/Okhotin10}): \begin{corollary} With the assumptions of the preceding lemma: The length of the longest simple path in $\sD^{\dc{}}_{\sA}$ (resp.\ $\sD^{\uc{}}_{\sA}$) is at most $n_{\sA}-1$. \end{corollary} To bound the length of a shortest separating word $w$ of two NFAs w.r.t.\ sub-/superword closure, consider the direct sum of the corresponding DFAs and observe that a run on $w$ either has to ``make progress'' in the first or in the second DFA: \begin{lemma}\label{lem:sep-len} Let $\sA$ and $\sB$ be two NFAs. If $\sA \not\equiv_{\dc{}} \sB$ (resp.\ $\sA \not\equiv_{\uc{}} \sB$), then there exists a separating word of length at most $n_{\sA}+n_{\sB}-2$. \end{lemma}
\begin{theorem} The decision problems $\sA \dceqp \sB$ and $\sA \uceqp \sB$ are in \textsf{coNP}. \end{theorem} To show \textsf{coNP}-hardness, recall the proof that the equivalence problem for star-free regular expressions is \textsf{coNP}-hard by reduction from \textsf{TAUT}: Given a formula $\phi$ in propositional calculus, we build a regular expression $\rho$ (without Kleene stars) over $\Sigma=\{0,1\}$ that enumerates exactly the satisfying assignments of $\phi$. Hence, $\phi \in \textsf{TAUT}$ iff $\mathcal{L}(\rho) = \Sigma^n$ (with $n$ the number of variables of $\phi$) iff $\dc{\mathcal{L}(\rho)} = \Sigma^{\leq n}$, since the subword closure can only add new words of length less than $n$ (analogously for $\uc$).
\begin{theorem} \label{thm:eq-coNP} The decision problems $\sA \dceqp \sB$ and $\sA \uceqp \sB$ are \textsf{coNP}-hard. \end{theorem}
\section{Application to Grammar Problems} \label{sec:application} We apply our results to devise an approximation approach for the well-known undecidable problem whether $\mathcal{L}(G_1) = \mathcal{L}(G_2)$ for two CFGs $G_1, G_2$. Possible attacks on this problem include exhaustive search for a word in the symmetric difference $w \in (L_1 \symdiff L_2) \cap \Sigma^{\leq n}$ w.r.t.~some increasing bound $n$ e.g.~by using incremental SAT-solving \cite{DBLP:conf/icalp/AxelssonHL08}. Unfortunately, this quickly becomes infeasible for large problems. Previous work has successfully applied regular approximation for ambiguity detection \cite{DBLP:conf/icalp/Schmitz07,DBLP:journals/scp/BrabrandGM10} or intersection non-emptiness of CFGs \cite{DBLP:conf/fase/LongCMM12}.
A high-level description of our approach to (in-)equivalence-checking is given in Figure \ref{fig:eq-alg}. \begin{figure}
\caption{Equivalence checking via subword closure approximation.}
\label{fig:eq-alg}
\end{figure} Of course the procedure will not terminate if $\mathcal{L}(G_1) = \mathcal{L}(G_2)$, so in practice a timeout will be used after which the algorithm will terminate itself and output ``Maybe equal''. Steps (1) and (2) might take time (at most) double exponential in the size of the grammars $G_1$ and $G_2$: Recall that the construction of Section~\ref{sec:upperbound} yields in the worst case an NFA $\sA_{i}$ whose number of states is exponential in the size of the given CFG $G_i$. To check if $\dc{\mathcal{L}(G_1)} = \dc{\mathcal{L}(G_2)}$, an on-the-fly construction of the power-set automaton for $\sA_1\times \sA_2$ can be used which terminates as soon as a set of states is reached which contains at least one accepting state of, say, $\sA_1$ but no accepting state of $\sA_2$. By Lemma~\ref{lem:sep-len}, we can safely terminate the exploration of a path as soon as its length exceeds the bound stated there. In the worst case this might take time exponential in the size of $\sA_1$ and $\sA_2$, so at most double exponential in the size of $G_1$ and $G_2$.
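The on-the-fly check just described can be sketched as follows (illustrative Python, our own representation; NFAs are triples of initial states, transition map, and final states, and the cut-off is the bound $n_{\sA_1}+n_{\sA_2}-2$ of Lemma~\ref{lem:sep-len}):

```python
# Hedged sketch of the on-the-fly equivalence check: explore the product
# of the two powerset automata breadth-first and stop as soon as exactly
# one component contains an accepting state; cut off at the length bound
# n_A + n_B - 2 from the separating-word lemma.
from collections import deque

def separating_word(nfa1, nfa2, alphabet, bound):
    # nfa = (initial_states, delta, finals); delta: (state, letter) -> set
    def step(S, delta, a):
        return frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
    (i1, d1, f1), (i2, d2, f2) = nfa1, nfa2
    start = (frozenset(i1), frozenset(i2))
    queue, seen = deque([(start, '')]), {start}
    while queue:
        (S1, S2), w = queue.popleft()
        if bool(S1 & f1) != bool(S2 & f2):
            return w                       # w separates the two closures
        if len(w) >= bound:
            continue
        for a in alphabet:
            nxt = (step(S1, d1, a), step(S2, d2, a))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + a))
    return None                            # no separating word exists

# DFAs (viewed as NFAs) for the subword-closed languages {eps, a, b, ab}
# and {eps, a}: the shortest separating word is 'b'.
nfa1 = ({0}, {(0, 'a'): {1}, (0, 'b'): {2}, (1, 'b'): {2}}, {0, 1, 2})
nfa2 = ({0}, {(0, 'a'): {1}}, {0, 1})
assert separating_word(nfa1, nfa2, 'ab', 3) == 'b'
```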
In the following, we describe in greater detail how we generate a separating word $w'$ in $\mathcal{L}(G_1)$ or $\mathcal{L}(G_2)$ if we find a separating word $w\in \dc\mathcal{L}(G_1)\symdiff\dc\mathcal{L}(G_2)$, resp.\ how we refine $G_1$ and $G_2$ if $\dc\mathcal{L}(G_1)=\dc\mathcal{L}(G_2)$.
\subsection{Witness Generation for $\mathcal{L}(G_1) \neq \mathcal{L}(G_2)$} If our check in step (2) returns ``Not equal'' we know that $\dc{\mathcal{L}(G_1)} \neq \dc{\mathcal{L}(G_2)}$ and we obtain a word $w\in \dc{\mathcal{L}(G_1)} \symdiff \dc{\mathcal{L}(G_2)}$, w.l.o.g.~assume in the following $w\in \dc{\mathcal{L}(G_1)} \setminus \dc{\mathcal{L}(G_2)}$. This word has length linear in $\abs{\sA_1}$ and $\abs{\sA_2}$, i.e.\ at most exponential w.r.t.\ $\abs{G_1}$ and $\abs{G_2}$.
To obtain a (direct) certificate for the fact that $\mathcal{L}(G_1) \neq \mathcal{L}(G_2)$, we construct a superword $w'\supw w$ with $w'\in \mathcal{L}(G_1)$ -- such a $w'$ is guaranteed to exist as it is the reason for $w\in\dc{\mathcal{L}(G_1)}$. Straightforward induction on $w$ shows: \begin{lemma}\label{lem:meins} For $w\in\Sigma^\ast$ a DFA recognizing $\dc\mathcal{L}(\{w\})$ resp.\ $\uc\mathcal{L}(\{w\})$ and having at most $\abs{w}+2$ states can be constructed in time polynomial in $\abs{w}$. \end{lemma} We can therefore intersect $G_1$ with a DFA accepting $\uc\mathcal{L}(\{w\})$, to obtain a new CFG $G_1'$ whose size is at most cubic in $\abs{w}$~\cite{BarHillel61,DBLP:series/sci/NederhofS08}, i.e.~exponential in the size of $G_1$. From this grammar, we can obtain in time linear in $\abs{G'_1}$ a shortest word $w'$ in $\mathcal{L}(G'_1)= \mathcal{L}(G_1)\cap \uc{\mathcal{L}(\{w\})}$. The length of $w'$ is at most exponential in $\abs{G'_1}$, i.e.\ at most double exponential in $\abs{G_1}$.
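For the superword closure of a single word, the DFA of Lemma~\ref{lem:meins} is particularly simple; the following sketch (our own, illustrative) realizes it with $\abs{w}+1$ states, within the stated bound:

```python
# State i means "the first i letters of w have already been matched as a
# scattered subword"; greedy matching is sound for subsequence tests.
def superword_dfa(w):
    def delta(i, a):
        return i + 1 if i < len(w) and a == w[i] else i
    return delta, 0, len(w)   # transition function, initial, final state

def accepts_superword(w, u):
    # does u contain w as a scattered subword, i.e. is u in uc({w})?
    delta, q, final = superword_dfa(w)
    for a in u:
        q = delta(q, a)
    return q == final

assert accepts_superword('ab', 'acb')     # 'ab' embeds into 'acb'
assert not accepts_superword('ab', 'ba')
```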
In practice, shorter witnesses are preferable, so we construct the shortest word in $\overline{\mathcal{L}(\sA_2)} \cap \mathcal{L}(G_1)$. In theory this might incur a triple exponential blow-up resulting from complementing $\sA_2$, but we can find a separating word $w'$ which is \emph{not} a superword of $w$ and hence is usually shorter.
\subsection{Refinement}
In case that the test in step (2) returns ``Equal'', we refine both grammars such that subsequent subword-approximations may find a counterexample to equality. Assume that our equivalence check yields $\dc{\mathcal{L}(G_1)} = \dc{\mathcal{L}(G_2)}$. A possible refinement strategy is to cover $L:=\dc{\mathcal{L}(G_1)}$ using a finite number of regular languages $L\subseteq L' := L_0 \cup L_1 \cup \cdots \cup L_k$ and then to repeat the equivalence check for all pairs of refined languages $\mathcal{L}(G_1) \cap L_i$ and $\mathcal{L}(G_2) \cap L_i$ for all $i$. The requirement $L'\supseteq L$ protects the refinement from cutting off potential witnesses.
A simple method is covering using prefixes: Here we generate all prefixes $p_1,\dots,p_k$ of words in $L$ of increasing length (up to some small bound $d$ called the \emph{refinement depth}) and set $L_i:= p_i\Sigma^*$ and $L_0 = \dc{\{ p_i \mid i\in[k]\}}$. Since $\bigcup_i L_i \supseteq L$ this strategy preserves potential witnesses and since any counterexample eventually appears as a prefix, this yields a semi-decision procedure for grammar inequivalence. In our experiments we disregard the finite language $L_0$ (which can also be checked by enumeration) and only check refinement using the infinite sets $p_i\Sigma^*$ with the goal of quickly finding \emph{some} (not the shortest) distinguishing word. This strategy is often able to tell apart different CFLs after few iterations as shown in the following.
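The prefix-refinement can be sketched as follows (illustrative Python, our own representation; $L$ is given abstractly by a predicate telling whether a word is a prefix of some word in $L$, a query one could answer on an automaton for $L$):

```python
# Hedged sketch of the prefix-refinement strategy: enumerate all prefixes
# p of words in L up to the refinement depth d; each surviving p yields
# the refinement language p.Sigma^*.
from itertools import product

def prefix_cover(is_prefix_of_L, alphabet, d):
    # is_prefix_of_L(p): does some word of L start with p?
    cover = []
    for k in range(1, d + 1):
        for p in map(''.join, product(alphabet, repeat=k)):
            if is_prefix_of_L(p):
                cover.append(p)          # refinement language p.Sigma^*
    return cover

# Prefixes of L = {a^n b^n : n >= 1} up to depth 2: 'a', 'aa', 'ab'
assert prefix_cover(lambda p: any(p == ('a' * n + 'b' * n)[:len(p)]
                                  for n in range(1, 4)),
                    'ab', 2) == ['a', 'aa', 'ab']
```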
\subsection{Implementation and Experiments} We implemented the inequivalence check in an extension\footnote{The fork is available from \url{https://github.com/regularApproximation/newton}.} of the \textsc{FPsolve}\xspace tool \cite{ELS14}. The additional code comprises roughly 1800 lines of C++ and uses libfa\footnote{http://augeas.net/libfa/} to handle finite automata.
Our worst-case descriptional complexity results for the subword closure of CFGs (exponential sized NFA, double-exponential sized DFA) and our remarks on the length of possible counterexamples might suggest that our inequivalence checking procedure is merely of academic interest. Here we briefly show that this is not the case, and that overapproximation via subword closures is actually quite fast in practice.
The paper \cite{DBLP:conf/icalp/AxelssonHL08} presents cfg-analyzer\xspace, a tool that uses SAT-solving to attack several undecidable grammar problems by exhaustive enumeration. We demonstrate the feasibility of our approximation approach on several slightly altered grammars (cf.~\cite{Tratt12}) for the \textsf{PASCAL} programming language\footnote{Available from \url{https://github.com/nvasudevan/experiment/tree/master/grammars/mutlang/acc} .}. The altered grammars were obtained by adding, deleting, or mutating a single rule from the original grammar \cite{Tratt12}. We used \textsc{FPsolve}\xspace and cfg-analyzer\xspace to check equivalence of the altered grammar with the original. Both tools were given a timeout of $30$ seconds. We want to stress that we do not strive to replace enumeration-based tools like cfg-analyzer\xspace, but rather envision a combined approach: Use overapproximations like the subword closure (with small refinement depth) as a quick check and resort to more computationally demanding techniques like SAT-solving for a thorough test. Also note that it is not too hard to find examples where enumeration-based tools cannot detect inequivalence anymore, e.g.~by considering grammars with large alphabet (like C\# or Java) for which the shortest word in the language is already longer than $20$ tokens. Here we just showcase an example where both approaches can be fruitfully combined.
Table~\ref{table:exp} demonstrates that even if our tool uses the very simple prefix-refinement (which is the main bottleneck in terms of speed), we can successfully solve $100$ cases where cfg-analyzer\xspace has to give up after $30$ seconds, and even in cases where both tools find a difference, \textsc{FPsolve}\xspace does so much faster. \begin{table}[th] \begin{center} \begin{tabular}{ccccccccc} scenario & \# instances & \#CA & $t_{\mathrm{CA}}$ & \#FP & $t_{\mathrm{FP}}$ & \#$(CA \wedge FP)$ & $t^{\wedge}_{\mathrm{CA}}$ & $t^{\wedge}_{\mathrm{FP}}$ \\ \hline add & 700 & 190 & 17.9 & 18 & 2.43 & 8 & 10.7 & 4.97 \\ delete & 284 & 61 & 17.8 & 34 & 0.424 & 10 & 14.4 & 0.464 \\ empty & 69 & 32 & 18.7 & 1 & 1.35 & 1 & 5.62 & 1.35 \\ mutate & 700 & 167 & 19.1 & 100 & 1.3 & 36 & 15.8 & 2.87 \\ switchadj & 187 & 16 & 20.5 & 2 & 5.46 & 1 & 9.68 & 0.34 \\ switchany & 328 & 35 & 18 & 9 & 3.72 & 8 & 9.09 & 2.84 \\ \hline \hline $\sum$ & 2268 & 501 & -- & 164 & -- & 64 & -- & -- \\ \end{tabular} \end{center} \caption{Numbers of solved instances for different scenarios and respective average times: \#CA: solved by cfg-analyzer\xspace, \#FP: solved by \textsc{FPsolve}\xspace, \#$(CA \wedge FP)$: solved by both tools, $t^{\wedge}_{\mathrm{tool}}$: time needed by \emph{tool} on instances from $(CA \wedge FP)$.} \label{table:exp} \end{table}
\section{Discussion and Future Work} \label{sec:conclusion} Motivated by the language-equivalence problem for context-free languages, we have studied the space requirements of representing the subword closure of CFGs by NFAs and DFAs, and the computational complexity of the equivalence problem of subword-closed NFAs. We have shown how to construct from a context-free grammar $G$ an NFA accepting $\dc\mathcal{L}(G)$ consisting of at most $2^{\mathcal{O}(\abs{G})}$ states -- a small gap between the lower bound of $\Omega(2^{\abs{G}})$ and our upper bound of $\mathcal{O}(3^{\abs{G}})$ for grammars in QNF remains for future work. A further question is if this bound can be improved in the case of languages given by deterministic pushdown automata. We have further shown that the upper bound of $2^{2^{\mathcal{O}(\abs{G})}}$ on the size of a DFA accepting $\dc\mathcal{L}(G)$ is tight. Interestingly, a binary alphabet suffices for the presented language family $L_k$: in contrast, the worst-case example of \cite{DBLP:journals/fuin/Okhotin10}, which showcases the exponential blow-up suffered when constructing a DFA for the subword closure of a language given as DFA or NFA, requires an unbounded alphabet. We note that a unary context-free language cannot lead to this double exponential blow-up -- this follows from the proof of Theorem 3.14 in \cite{Gruber:2009:MSH:1551570.1551577} (see also Lemma~\ref{lem:meins} here).
Regarding the language-equivalence problem, we have shown that it becomes \textsf{coNP}-complete when restricted to sub- resp.~superword-closed NFAs. This is somewhat surprising given the fact that it stays \textsf{PSPACE}-complete for many related families (e.g.~for prefix-, suffix-, or factor-closed languages). Finally, we have briefly described an approach to tackle the equivalence problem for CFGs using the presented results, though much work remains to turn our current implementation into a mature tool: In particular, since the intersection of two regular overapproximations is again a regular overapproximation, it could be fruitful to combine the subword closure (or variants like \cite{DBLP:conf/fase/LongCMM12}) with other regular approximation techniques like \cite{MohriNederhof01}. We also need to improve the refinement of the approximations when scaling the problem size.
\appendix \section{Missing proofs} \subsection*{Proof of Lemma \ref{lem:scc}} \begin{proof} Since $G$ is strongly connected, $\dc{X_i} = \dc{X_j}$ for all $i,j\in[n]$, hence it suffices to show the statement for $X_1$. Clearly, $\mathcal{L}(Z) \supseteq \mathcal{L}(X_1)$, hence also $\dc{Z} \supseteq \dc{X_1}$. For the other inclusion let $w\in \dc{Z}$, i.e.~we have a word $w'$ with $w\sw w'\in \mathcal{L}(Z)$ possessing some derivation $Z \Rightarrow u_0 Z v_0 \Rightarrow u_0u_1 Z v_1 v_0 \Rightarrow \dots \Rightarrow w'$. Since $G$ is strongly connected there must be an $X_{j_1}$ reachable from $X_1$ with $X_{j_1} \to u_0X_{k_1}v_0$ for some $X_{k_1}$. Continuing this reasoning we generate a superword of $w'$ (with some ``junk''-strings $\alpha_l,\beta_l$) by following the derivation of $w'$: \[X_1 \Rightarrow^* \alpha_0 X_{j_1} \beta_0 \Rightarrow \alpha_0 u_0 X_{k_1} v_0 \beta_0 \Rightarrow^*
\alpha_0 u_0 \alpha_1 u_1 X_{k_2} v_1 \beta_1 v_0 \beta_0 \Rightarrow \dots \Rightarrow w'' \] with $w' \sw w''$. Since $w \sw w' \sw w'' \in \mathcal{L}(X_1)$, we have $w\in \dc{X_1}$. \end{proof}
\subsection*{Proof of Theorem \ref{thm:simpleQNF}} \begin{proof} The following steps achieve the desired result: \begin{enumerate} \item For every $x \in \Sigma \cup \{\varepsilon\}$ replace every occurrence of $x$ in a production by $A_x$ and finally add the production $A_x \to x$. \item For every production $X \to \alpha Y \beta Z \gamma$ with $Y,Z \equiv X$ replace all productions with lhs $Y$ such that $Y\equiv X$ (i.e.~from the same SCC as $X$) by the productions $X \to A_xX$ for all $x\in \Sigma_X$ and add $X \to A_\varepsilon$.
\item Transform the grammar into 2NF, i.e.~such that every production is of the form $X \to \alpha$ with $|\alpha| \leq 2$ (cf.~\cite{DBLP:journals/didactica/LangeL09}). \item Contract every strongly connected component of the grammar into a univariate grammar via Lemma \ref{lem:scc}\footnote{Here we implicitly treat nonterminals from lower SCCs as terminals; since CFLs are closed under substitution, this is fine.}. \end{enumerate}
It is easy to check that $G'$ is indeed in simple QNF, moreover steps (1) and (3) do not change the language of the grammar. In step (2) we ensure that $\mathcal{L}(X) = \Sigma_X^*$ if $X\Rightarrow^* \alpha X \beta X \gamma$ (see Lemma \ref{lem:facts-courcelle}). Step (4) also preserves the subword closure (by Lemma \ref{lem:scc}), thus altogether $\dc{\mathcal{L}(G)} = \dc{\mathcal{L}(G')}$. Step (2) reduces the size of $G$, steps (1) and (3) lead to a linear growth, and step (4) does not change the size so together there exists a constant $c$ (independent of $G$) such that $|G'| \leq c\cdot |G|$. \end{proof}
\subsection*{Proof of Theorem \ref{thm:upperbound}} Before describing the proof, we state some useful definitions: \begin{definition} Given a nonterminal $X$ in a grammar in simple QNF with production set $P$, we define the following sets of nonterminals and terminals: \begin{itemize} \item $Q(X) := \{YZ\in \vars\cdot \vars : (X \to YZ \in P) \}$ (``quadratic monomials'') \item $L(X) := \{Y\in \vars : X \to Y \in P \}$ (``linear monomials'') \item $C_l(X) := \{Y\in \vars : X \to YX \in P \}$ (``left coefficients'') \item $C_r(X) := \{Y\in \vars : X \to XY \in P \}$ (``right coefficients'') \item $\Sigma_l(X) := \Sigma \cap \bigcup \{ \dc L(Y) \mid Y\in C_l(X)\}$ (``left alphabet'') \item $\Sigma_r(X) := \Sigma \cap \bigcup \{ \dc L(Y) \mid Y\in C_r(X)\}$ (``right alphabet'') \end{itemize} \end{definition} Note that $\Sigma_l(X)$ (resp.~$\Sigma_r(X)$) is simply the set of terminals reachable from any element of $C_l(X)$ (resp.~$C_r(X)$), and therefore can easily be computed. Since $G$ is in simple QNF we have $Y \ndeppo X$ for each $Y$ with $X\dep Y$.
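For illustration (our own sketch, not the paper's implementation), the left alphabet $\Sigma_l(X)$ can be computed as the set of terminals reachable from the left coefficients $C_l(X)$ in the grammar:

```python
# A possible way to compute the "left alphabet" of X: the terminals
# reachable from any left coefficient Y in C_l(X). A symbol is treated
# as a nonterminal iff it has productions.
def reachable_terminals(productions, starts):
    # productions: dict nonterminal -> list of right-hand sides (tuples)
    seen, stack, terminals = set(starts), list(starts), set()
    while stack:
        Y = stack.pop()
        for rhs in productions.get(Y, ()):
            for s in rhs:
                if s in productions:        # nonterminal
                    if s not in seen:
                        seen.add(s)
                        stack.append(s)
                else:                       # terminal
                    terminals.add(s)
    return terminals

# Hypothetical fragment: X -> YX, Y -> a | b  gives Sigma_l(X) = {a, b}
prods = {'X': [('Y', 'X')], 'Y': [('a',), ('b',)]}
assert reachable_terminals(prods, {'Y'}) == {'a', 'b'}
```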
\begin{proof} For every nonterminal $X$ of $G$, let $n(X) = \abs{\{ Y \mid X \deppo Y \}}$ be the number of nodes reachable from $X$ in the dependency graph. We proceed by induction on $n(X)$.
Pick any nonterminal $X$ with $n(X)=1$. Such a nonterminal has to exist as otherwise the dependency graph would contain a nontrivial cycle. By definition of simple QNF, $G$ can only contain a single rule rewriting $X$, which has to be of the form $X\to a$ for some $a\in \Sigma$. Then the following NFA $\sA_X$ obviously satisfies $\dc X = \mathcal{L}(\sA_X)$ and $\abs{\sA_X} \le 2\cdot 3^{n(X)-1}$: \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\node[initial,state,accepting] (0) {$q_{\text{en}}$}; \node[state,accepting] (1) [right of = 0] {$q_{\text{ex}}$}; \path (0) edge node[above] {$\ew,a$} (1); \end{tikzpicture} \end{center} In the following, every automaton constructed will have these special states $q_{\text{en}}$ and $q_{\text{ex}}$, which we will simply refer to as entry and exit states, respectively.
Now, let $X$ be any remaining nonterminal of $G$ with $n(X)>1$, i.e.\ there is at least one nonterminal $Y\neq X$ such that $X\dep Y$. By virtue of Lemma~\ref{lem:facts-courcelle} and Lemma~\ref{lem:scc} we have \[ \dc X = \Sigma_l(X)^* \left( \bigcup_{YZ \in Q(X)} \dc Y \cdot \dc Z \cup \bigcup_{Y \in L(X)} \dc Y\right)\Sigma_r(X)^*, \] where by definition of simple QNF we have $Y \ndeppo X$ and $Z\ndeppo X$ implying $n(X) > n(Y), n(Z)$. So by induction, we have already constructed for every $Y$ with $X \dep Y$ an NFA $\sA_Y$ such that $\dc Y = \mathcal{L}(\sA_Y)$, i.e.\ \[ \dc X = \Sigma_l(X)^* \left( \bigcup_{YZ \in Q(X)} \mathcal{L}(\sA_Y) \cdot \mathcal{L}(\sA_Z) \cup \bigcup_{Y \in L(X)} \mathcal{L}(\sA_Y)\right)\Sigma_r(X)^*. \] It remains to construct $\sA_X$. To this end we use the last equality but only use at most two instances of every automaton $\sA_Y$: Initially, we let $\sA_X$ be the disjoint union of all automata $\{ \sA^{(i)}_Y \mid i \in [2], X\dep Y\}$ where $\sA^{(1)}_Y$ and $\sA^{(2)}_Y$ denote two distinct copies of $\sA_Y$. Here we assume that these states are suitably renamed; in particular, the entry and exit states of all these automata are assumed to be distinct from $q_{\text{en}}$ and $q_{\text{ex}}$ so that we may also add $q_{\text{en}}$ and $q_{\text{ex}}$ to the states of $\sA_X$. Both $q_{\text{en}}$ and $q_{\text{ex}}$ are final, with $q_{\text{en}}$ also the unique initial state of $\sA_X$. Finally, we add additional $\ew$-transitions to $\sA_X$ to mimic the productions rewriting $X$ (see also Subsection~\ref{sec:ex-bound}): \begin{itemize} \item For each $YZ\in Q(X)$: Add $\ew$-transitions (1) from $q_{\text{en}}$ to the entry state of $\sA_Y^{(1)}$, (2) from the exit state of $\sA_Y^{(1)}$ to the entry state of $\sA_Z^{(2)}$, and (3) from the exit state of $\sA_Z^{(2)}$ to $q_{\text{ex}}$.
\item For each $Y \in L(X)$: Add $\ew$-transitions (1) from $q_{\text{en}}$ to the entry state of $\sA_Y^{(2)}$, and (2) from the exit state of $\sA_Y^{(2)}$ to $q_{\text{ex}}$. \item For each $a\in \Sigma_l(X)$: Add a self-loop $q_{\text{en}} \overset{a}{\longrightarrow} q_{\text{en}}$. \item For each $a \in \Sigma_r(X)$: Add a self-loop $q_{\text{ex}} \overset{a}{\longrightarrow} q_{\text{ex}}$. \end{itemize}
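The assembly of $\sA_X$ just described can be sketched in code (an illustrative rendering with our own data representation; $\ew$-transitions are labelled by the empty string, and only the copies actually needed are created):

```python
# Two fresh copies of every needed sub-automaton are taken, q_en/q_ex
# are added, and epsilon-transitions ('' below) mimic the productions
# rewriting X.
import itertools

_fresh = itertools.count()

def copy_nfa(A):
    ren = {q: next(_fresh) for q in A['states']}
    return {'states': set(ren.values()),
            'delta': [(ren[p], a, ren[q]) for (p, a, q) in A['delta']],
            'enter': ren[A['enter']], 'exit': ren[A['exit']]}

def build_AX(Q, L, sigma_l, sigma_r, automata):
    # Q: quadratic monomials as pairs (Y, Z); L: linear monomials
    en, ex = next(_fresh), next(_fresh)
    c1 = {Y: copy_nfa(automata[Y]) for (Y, _) in Q}
    c2 = {V: copy_nfa(automata[V]) for V in set(L) | {Z for (_, Z) in Q}}
    states, delta = {en, ex}, []
    for A in list(c1.values()) + list(c2.values()):
        states |= A['states']
        delta += A['delta']
    for (Y, Z) in Q:          # X -> YZ
        delta += [(en, '', c1[Y]['enter']),
                  (c1[Y]['exit'], '', c2[Z]['enter']),
                  (c2[Z]['exit'], '', ex)]
    for Y in L:               # X -> Y
        delta += [(en, '', c2[Y]['enter']), (c2[Y]['exit'], '', ex)]
    delta += [(en, a, en) for a in sigma_l]   # Sigma_l(X) self-loops
    delta += [(ex, a, ex) for a in sigma_r]   # Sigma_r(X) self-loops
    return {'states': states, 'delta': delta, 'enter': en, 'exit': ex}

# Base case Y -> a (two states accepting dc(Y) = {a, eps}), then X -> YY:
q0, q1 = next(_fresh), next(_fresh)
A_Y = {'states': {q0, q1}, 'delta': [(q0, 'a', q1), (q0, '', q1)],
       'enter': q0, 'exit': q1}
A_X = build_AX({('Y', 'Y')}, set(), set(), set(), {'Y': A_Y})
assert len(A_X['states']) == 6   # = 2 + 2*|A_Y|, within 2 * 3^(n(X)-1)
```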
By induction, we have $\abs{\sA_Y} \le 2\cdot 3^{n(Y)-1}$ for all $Y$ with $X\dep Y$, so $|\sA_X|$ is bounded by \[ \abs{\sA_X} = 2 + 2 \cdot \sum_{Y\colon X\dep Y} \abs{\sA_Y} \le 2 + 4 \cdot \sum_{Y\colon X\dep Y} 3^{n(Y)-1}. \] Using breadth-first search, we can assign every nonterminal $Z$ with $X\depeq^\ast Z$ a unique number $i(Z)\in [n(X)]$ such that $i(Z) \ge n(Z)$. We then may continue: \[ \abs{\sA_X} \le 2 + 4 \cdot \sum_{Y\colon X\dep Y} 3^{i(Y)-1} \le 2 + 4 \cdot \sum_{\shortstack{$\scriptstyle Z\colon X\depeq^\ast Z$\\$\scriptstyle Z\neq X$}} 3^{i(Z) - 1} \le 2 + 4 \cdot \sum_{i=0}^{n(X)-2} 3^{i} = 2\cdot 3^{n(X)-1}. \]
\end{proof}
\subsection*{Proof of Theorem \ref{thm:debu}} \begin{proof} For $k\in\N$ consider the language $L_k$ of words $w\in\{0,1\}^{2k+1}$ such that $w_{j} = w_{j+k+1} = 0$ for some $j\in \{1,\dots,k\}$. We can write $L_k$ as \[ L_k = \bigcup_{j=1}^k \{0,1\}^{j-1} \{0\} \{0,1\}^k \{0\} \{0,1\}^{k-j}. \] We are particularly interested in $L_k$ for $k=2^n$. The following CFG of size $\mathcal{O}(n)$ with $\mathcal{L}(X'_n) = L_{2^n}$ achieves an exponential compression: \[ \begin{array}{lcl@{\hspace{1cm}}lcl} X'_n & \to & X_{n-1} X'_{n-1} \mid X'_{n-1} X_{n-1}\\ X'_{n-1} & \to & X_{n-2} X'_{n-2} \mid X'_{n-2} X_{n-2} & X_{n-1} & \to & X_{n-2} X_{n-2}\\
& \vdots & & & \vdots & \\ X'_{1} & \to & X_{0} X'_{0} \mid X'_{0} X_{0} & X_{1} & \to & X_{0} X_{0}\\ X'_0 & \to & 0Y_n 0 & X_0 & \to & 0 \mid 1\\[1mm] Y_n & \to & Y_{n-1} Y_{n-1}\\ Y_{n-1} & \to & Y_{n-2} Y_{n-2}\\
& \vdots &\\ Y_1 & \to& Y_0 Y_0\\ Y_0 &\to & 0 \mid 1\\[1mm] \end{array} \] The grammar uses repeated squaring to achieve the required compression, while the ``primed'' nonterminals $X'_i$ nondeterministically choose where to insert a word from the set $\{0\} \{0,1\}^{2^n} \{0\}$ into a word of $\{0,1\}^{2^n-1}$.
We show that any two words $w_1,w_2 \in \{0,1\}^{2^n}$ with $w_1\neq w_2$ are inequivalent w.r.t.~the Myhill-Nerode relation of $L_{2^n}$ which implies that the minimal DFA for $L_{2^n}$ must have at least $2^{2^n}$ states: Consider the first position from the right where $w_1$ and $w_2$ differ, so w.l.o.g.~we have $w_1 = \alpha 0 \beta$ and $w_2=\alpha' 1 \beta$ for some $\alpha,\alpha',\beta \in \{0,1\}^*$ with $\abs{\alpha}=\abs{\alpha'}$. As a distinguishing word, set $v := 1^{2^{n}-\abs{\beta}}01^{2^{n}-\abs{\alpha}-1}$. Note that \[w_1v = \alpha 0 \beta 1^{2^{n}-\abs{\beta}} 0 1^{2^{n}-\abs{\alpha}-1} \in L_{2^n}, \] \[w_2v = \alpha'1 \beta 1^{2^{n}-\abs{\beta}} 0 1^{2^{n}-\abs{\alpha}-1} \notin L_{2^n}. \] The crucial observation is that from $\abs{w_1v} = \abs{w_2v} = 2\cdot 2^n+1$ it also follows that $w_1v \in \dc L_{2^n}$ and $w_2v \notin \dc L_{2^n}$ since the subword closure can only add new words of length at most $2\cdot 2^n$. This shows that also the minimal DFA for $\dc L_{2^n}$ must have at least $2^{2^n}$ states. The very same argument works for $\uc L_{2^n}$, showing that the minimal DFA for $\uc L_{2^n}$ is of size at least double-exponential in the size of the CFG for $L_{2^n}$ as well. \end{proof}
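The counting argument above is easy to sanity-check for small $n$ (an illustrative script of our own; indices in the definition of $L_k$ are 1-based):

```python
# Direct check of membership in L_k: w in {0,1}^(2k+1) with
# w_j = w_{j+k+1} = 0 for some j in {1,...,k} (1-based indices).
def in_L(w, k):
    return (len(w) == 2 * k + 1 and
            any(w[j] == '0' == w[j + k + 1] for j in range(k)))

assert in_L('00100', 2) and not in_L('11011', 2)

# Distinguishing word v for two words of length 2^n differing in one
# position, exactly as in the proof:
n = 2
k = 2 ** n                        # here 2^n = 4
w1, w2 = '0101', '0111'           # w1 = alpha 0 beta, w2 = alpha' 1 beta
alpha, beta = '01', '1'
v = '1' * (k - len(beta)) + '0' + '1' * (k - len(alpha) - 1)
assert in_L(w1 + v, k) and not in_L(w2 + v, k)
```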
\subsection*{Proof of Lemma \ref{lem:dc-uc-NFA}} \begin{proof} We start with $\dc{\mathcal{L}(\sA)} = \mathcal{L}(\sA^{\dc{}})$: Pick any $w\in \dc{\mathcal{L}(\sA)}$. Then there is some $w'\supw w$ such that $w'\in \mathcal{L}(\sA)$, and thus by construction also $w'\in \mathcal{L}(\sA^{\dc{}})$. That is, there is an accepting run $q_0 \xrightarrow{x_0} q_1 \xrightarrow{x_1} \ldots \xrightarrow{x_l} q_{l+1}$ with $q_{l+1}\in F$ and $w'=x_0x_1\ldots x_l$ (with potentially $x_i=\ew$ for some $i$). Using the additional $\ew$-transitions of $\sA^{\dc{}}$, we can therefore turn this run into an accepting run for $w$ by simply replacing by $\ew$ those $x_i$ that are not used by an embedding of $w$ into $w'$. For the other direction, one can reverse this argument by recalling that for any $\ew$-transition $q\xrightarrow{\ew} q'$ added to $\sA^{\dc{}}$ there is some $a\in\Sigma$ such that $q\xrightarrow{a} q'$ is a transition of $\sA$.
Consider now the second claim $\uc{\mathcal{L}(\sA)} = \mathcal{L}(\sA^{\uc{}})$: Choose some $w\in \uc{\mathcal{L}(\sA)}$. Then there is some $w'\subw w$ such that $w'\in \mathcal{L}(\sA) \subseteq \mathcal{L}(\sA^{\uc{}})$. Any accepting run $q_0 \xrightarrow{x_0} q_1 \xrightarrow{x_1} \ldots \xrightarrow{x_l} q_{l+1}$ (with $q_{l+1}\in F$ and $w'=x_0x_1\ldots x_l$) of $\sA^{\uc{}}$ can then be extended to an accepting run of $\sA^{\uc{}}$ for $w$ by using the additional loops of $\sA^{\uc{}}$ to consume those letters of $w$ not matched by the embedding of $w'$ into $w$. In the other direction, given an accepting run of $\sA^{\uc{}}$ we simply strip it of the added loop transitions, which is guaranteed to yield an accepting run (for a scattered subword) of $\sA$ as the transition relations of $\sA$ and $\sA^{\uc{}}$ only differ in these loops. \end{proof}
\subsection*{Proof of Lemma \ref{lem:powerset}} \begin{proof} Recall that the state (sets) of $\sD^{\dc{}}_{\sA}$ are closed w.r.t.\ taking $\ew$-successors in $\sA^{\dc{}}$. As $\sA^{\dc{}}$ was obtained from $\sA$ by introducing for every transition $q\xrightarrow{a} q'$ ($a\in\Sigma$) the $\ew$-transition $q\xrightarrow{\ew} q'$, this means that, if $q\in S$, then every state reachable from $q$ in the directed graph underlying $\sA$ has to be included in $S$, too. As for any transition $S\xrightarrow{a} T$ in $\sD^{\dc{}}_{\sA}$, $T$ is a subset of the states reachable from $S$, the claim follows.
In case of the superword closure, pick any transition $S \xrightarrow{a} T$ of $\sD^{\uc{}}_{\sA}$ and any state $q\in S$. Then by construction of $\sA^{\uc{}}$ there is the loop $q\xrightarrow{a} q$ in $\sA^{\uc{}}$ which implies that also $q\in T$ by definition of the powerset construction. \end{proof}
\subsection*{Proof of Lemma \ref{lem:sep-len}} \begin{proof} Assume $\sA \not\equiv_{\dc{}} \sB$, and let $w$ be a shortest separating word. Consider the unique run of the product DFA $\sD^{\dc{}}_{\sA}\times \sD_{\sB}^{\dc{}}$ on $w=w_1w_2\ldots w_l$: \[ (L_0,R_0) \xrightarrow{w_1} (L_1,R_1) \xrightarrow{w_2} \ldots \xrightarrow{w_l} (L_l, R_l). \] By the preceding lemma we then have $L_i \supseteq L_{i+1}$ and $R_i \supseteq R_{i+1}$ along the run. As $w$ is assumed to be a shortest separating word, it has to hold that $\neg (L_i = L_{i+1} \wedge R_i = R_{i+1})$ for all $i=0,\ldots,l-1$. In other words, we have \[ n_{\sA} + n_{\sB} \ge \abs{L_0} + \abs{R_0} > \abs{L_1} + \abs{R_1} > \ldots > \abs{L_l} + \abs{R_l} \ge 2 \] from which the claim immediately follows.
In the case of the superword closure one deduces in the same way that the accepting run for a shortest separating word has to satisfy: \[ 2 \le \abs{L_0} + \abs{R_0} < \abs{L_1} + \abs{R_1} < \ldots < \abs{L_l} + \abs{R_l} \le n_{\sA} + n_{\sB} \] \end{proof}
\subsection*{Proof of Theorem \ref{thm:eq-coNP}} \begin{proof} Let $\varphi$ be a formula of propositional calculus in disjunctive normal form. We construct a regular expression which encodes all satisfying assignments of $\varphi$:
Let $x_1,x_2,\ldots,x_n$ be the propositional variables occurring in $\varphi$, and assume that $\varphi= \bigvee_{i\in[k]} C_i$ with $C_{i} = \bigwedge_{j\in[l_i]} L_{i,j}$ and $L_{i,j}$ literals. Further, we may assume that every conjunction $C_i$ is contradiction-free. We associate with every $C_i$ a simple regular expression $\rho_i$ enumerating all satisfying assignments of $C_i$: Initially, set $\rho_i = \varepsilon$. Going from $j=1$ to $j=n$, if $x_j$ occurs in $C_i$, then set $\rho_i := \rho_i 1$; if $\neg x_j$ occurs in $C_i$, set $\rho_i := \rho_i 0$; otherwise set $\rho_{i} := \rho_i (0+1)$. Finally, set $\rho := \rho_1 + \rho_2 + \ldots + \rho_k$. Obviously, the size of $\rho$ is polynomial in the size of $\varphi$. Further, we can compute an NFA $\sA$ from $\rho$ in time polynomial in $\abs{\rho}$, such that $\mathcal{L}(\rho) =\mathcal{L}(\sA)$. Note that $\mathcal{L}(\sA)=\mathcal{L}(\rho)\subseteq \Sigma^n$ by construction. In particular, $\mathcal{L}(\sA)=\mathcal{L}(\rho) = \Sigma^n$ if and only if $\varphi$ is a tautology.
It therefore suffices to show that $\dc{\mathcal{L}(\sA)} = \Sigma^{\le n}$ (resp.\ $\uc{\mathcal{L}(\sA)} = \Sigma^{\ge n}$) if and only if $\mathcal{L}(\sA) = \Sigma^n$. But this is easy as the subword closure resp.\ superword closure can only add words of length less resp.\ greater than $n$. \end{proof}
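The construction of $\rho$ from the proof can be sketched as follows (our own illustration; clauses are represented as sets of signed variable indices, $+j$ for $x_j$ and $-j$ for $\neg x_j$):

```python
# Sketch of the reduction: from a DNF formula over x_1..x_n to a
# star-free regular expression enumerating exactly the satisfying
# assignments, clause by clause as in the proof.
def dnf_to_regex(clauses, n):
    parts = []
    for C in clauses:
        rho = ''                       # corresponds to rho_i = epsilon
        for j in range(1, n + 1):
            if j in C:
                rho += '1'             # x_j must be true
            elif -j in C:
                rho += '0'             # x_j must be false
            else:
                rho += '(0+1)'         # x_j is unconstrained
        parts.append(rho)
    return '+'.join(parts)

# phi = (x1 and not x2) or x2: satisfying assignments 10, 01, 11
assert dnf_to_regex([{1, -2}, {2}], 2) == '10+(0+1)1'
```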
\end{document}
\begin{document}
\thispagestyle{empty} \title{Closed inverse subsemigroups of graph inverse semigroups} \begin{abstract} As part of his study of representations of the polycyclic monoids, M.V. Lawson described all the closed inverse submonoids of a polycyclic monoid $P_n$ and classified them up to conjugacy. We show that Lawson's description can be extended to closed inverse subsemigroups of graph inverse semigroups. We then apply B. Schein's theory of cosets in inverse semigroups to the closed inverse subsemigroups of graph inverse semigroups: we give necessary and sufficient conditions for a closed inverse subsemigroup of a graph inverse semigroup to have finite index, and determine the value of the index when it is finite. \end{abstract}
\ams{20M18}{20M30, 05C25}
\section{Introduction} Graph inverse semigroups were introduced by Ash and Hall \cite{AshHall}: the construction, recalled in detail in section \ref{grinvsgp}, associates to any directed graph $\Gamma$ an inverse semigroup $\mathcal S(\Gamma)$ whose elements are pairs of directed paths in $\Gamma$ with the same initial vertex. If $\Gamma$ has a single vertex and $n$ edges with $n>1$, then $\mathcal S(\Gamma)$ is the polycyclic monoid $P_n$ as defined by Nivat and Perrot \cite{NivPer}: if $n=1$ then $\mathcal S(\Gamma)$ is the bicyclic monoid $B$ with an adjoined zero. Ash and Hall give necessary and sufficient conditions on the structure of $\Gamma$ for $\mathcal S(\Gamma)$ to be congruence-free, and they use graph inverse semigroups to study the realisation of finite posets as the posets of ${\mathcal J}$--classes in finite semigroups. The structure of graph inverse semigroups as HNN extensions of inverse semigroups with zero was presented in \cite[section 5]{DomGil}. For more recent work on the structure of graph inverse semigroups, we refer to \cite{JonLaw, MesMitch, MMMP}. Connections between graph inverse semigroups and graph $C^*$--algebras have been fruitfully studied in \cite{Pat}.
As part of his study of representations of the polycyclic monoids, Lawson \cite{LwPoly} described all the closed inverse submonoids of a polycyclic monoid $P_n$ and classified them up to conjugacy. We show in section \ref{cliss} that Lawson's description can be extended to closed inverse subsemigroups of graph inverse semigroups. As in Lawson's study, there are three types: finite chains of idempotents, infinite chains of idempotents, and closed inverse subsemigroups of {\em cycle type} that are generated (as closed inverse subsemigroups) by a single non-idempotent element. In section \ref{clissindex} we apply Schein's theory of cosets in inverse semigroups \cite{Sch} to the closed inverse subsemigroups of graph inverse semigroups as classified in section \ref{cliss}: we give necessary and sufficient conditions for a closed inverse subsemigroup $L$ of $\mathcal S(\Gamma)$ to have finite index, and determine the value of the index when it is finite.
\section{Preliminaries}
\subsection{Cosets} \label{cosets} Let $S$ be an inverse semigroup with semilattice of idempotents $E(S)$. We recall that the \emph{natural partial order} on $S$ is defined by \[ s \leqslant t \Longleftrightarrow \text{there exists} \; e \in E(S) \; \text{such that} \; s=et \,.\] A subset $A \subseteq S$ is \emph{closed} if, whenever $a \in A$ and $a \leqslant s$, then $s \in A$. The closure $\nobraupset{B}$ of a subset $B \subseteq S$ is defined as \[ \nobraupset{B} = \{ s \in S : s \geqslant b \; \text{for some} \; b \in B \}\,.\] A subset $L$ of $S$ is \emph{full} if $E(S) \subseteq L$.
Let $L$ be a closed inverse subsemigroup of $S$, and let $t \in S$ with $tt^{-1} \in L$. Then the subset \[ \upset{Lt} = \{ s \in S : \text{there exists $x \in L$ with} \; s \geqslant xt \} \] is a (right) coset of $L$ in $S$. For the basic theory of such cosets we refer to \cite{Sch}: the essential facts that we require are contained in the following result.
\begin{prop}{\cite[Proposition 6]{Sch}} \label{cosetsofL} Let $L$ be a closed inverse subsemigroup of $S$. \begin{enumerate} \item Suppose that $C$ is a coset of $L$. Then $\upset{CC^{-1}}=L$. \item If $t \in C$ then $tt^{-1} \in L$ and $C = \upset{Lt}$. Hence two cosets of $L$ are either disjoint or they coincide. \item Two elements $a,b \in S$ belong to the same coset $C$ of $L$ if and only if $ab^{-1} \in L$. \end{enumerate} \end{prop}
We note that the cosets of $L$ partition $S$ if and only if $L$ is full in $S$. The cardinality of the set of cosets of $L$ in $S$ is the {\em index} of $L$ in $S$, denoted by $[S:L]$.
The closed inverse submonoids of free inverse monoids were completely described by Margolis and Meakin in \cite{MarMea}. For other related work on inverse subsemigroups of finite index, see \cite{AlAliGil} and the first author's PhD thesis \cite{amal_thesis}.
\subsection{Graph inverse semigroups} \label{grinvsgp} Let $\Gamma$ be a finite directed graph with vertex set $V(\Gamma)$ and edge set $E(\Gamma)$. Let $\vec{{\mathscr P}}(\Gamma)$ be the path category of $\Gamma$, with source and target maps ${\mathbf d}$ and ${\mathbf r}$. We note that $\vec{{\mathscr P}}(\Gamma)$ admits \emph{empty} (or \emph{length zero}) paths that consist of a single vertex. The {\em graph inverse semigroup} $\mathcal S(\Gamma)$ of $\Gamma$ has underlying set $$\{ (v,w) : v,w \in \vec{{\mathscr P}}(\Gamma) \,, {\mathbf d}(v)={\mathbf d}(w) \} \cup \{ 0 \}$$ equipped with the binary operation \[ (t,u)(v,w) = \begin{cases} (t,pw) & \text{if $u=pv$ in $\vec{{\mathscr P}}(\Gamma)$}, \\ (pt,w) & \text{if $v=pu$ in $\vec{{\mathscr P}}(\Gamma)$}, \\ 0 & \text{otherwise.} \end{cases} \] This composition is illustrated in the following diagrams:
\begin{center} \includegraphics[height=5cm]{grinvsgp_fig1.pdf} \end{center}
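As a concrete illustration (an example of our own), take $\Gamma$ to be the graph with a single vertex and two loops $a$ and $b$, so that paths are words over $\{a,b\}$ and $\mathcal S(\Gamma)$ is the polycyclic monoid $P_2$. Then \[ (a,ba)(a,b) = (a,bb), \qquad (a,b)(a,b) = 0, \] since in the first product $u=ba=pv$ with $p=b$ and $v=a$, while in the second product neither $b=pa$ nor $a=pb$ has a solution $p$.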
The inverse of $(v,w)$ is given by $(v,w)^{-1} = (w,v)$. The idempotents of $\mathcal S(\Gamma)$ are the pairs $(u,u)$ and $0$: if we identify $E(\mathcal S(\Gamma))$ with $\vec{{\mathscr P}}(\Gamma) \cup \{ 0 \}$, then $\vec{{\mathscr P}}(\Gamma) \cup \{ 0 \}$ becomes a semilattice with ordering given by \begin{equation} \label{grinvord} u \leqslant v \quad \text{if and only if} \; v \; \text{is a suffix of} \; u \end{equation} and composition (meet) \begin{equation} \label{grinvmeet} u \wedge v = \begin{cases} u & \text{if $v$ is a suffix of $u$},\\ v & \text{if $u$ is a suffix of $v$},\\ 0 & \text{otherwise.} \end{cases} \end{equation} Hence $u \wedge v$ is non-zero if and only if one of $u,v$ is a suffix of the other: in this case we say that $u,v$ are {\em suffix comparable}.
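The multiplication, inversion and meet above are easy to experiment with computationally. The following sketch is an illustration only, not part of the formal development: it encodes paths as strings of edge labels and assumes a one-vertex graph (so that any two paths are composable and $\mathcal S(\Gamma)$ is a polycyclic monoid), with the zero element modelled by Python's `None`.

```python
# Graph inverse semigroup arithmetic for a one-vertex graph (polycyclic
# monoid): paths are strings of edge labels, the zero element is None.

def compose(x, y):
    """(t,u)(v,w) = (t,pw) if u = pv; (pt,w) if v = pu; 0 otherwise."""
    if x is None or y is None:
        return None
    t, u = x
    v, w = y
    if u.endswith(v):                      # u = p v
        p = u[:len(u) - len(v)]
        return (t, p + w)
    if v.endswith(u):                      # v = p u
        p = v[:len(v) - len(u)]
        return (p + t, w)
    return None                            # u, v not suffix comparable

def inverse(x):
    """(v,w)^{-1} = (w,v)."""
    if x is None:
        return None
    v, w = x
    return (w, v)

def meet(u, v):
    """Meet of idempotents (u,u), (v,v): the longer path if the two are
    suffix comparable, else 0 (here None)."""
    if u.endswith(v):
        return u
    if v.endswith(u):
        return v
    return None
```

For instance, `compose(('a','ba'), ('a','c'))` returns `('a','bc')`, while pairs whose middle paths are not suffix comparable multiply to zero.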
The natural partial order on non-zero elements of $\mathcal S(\Gamma)$ is then given by $(t,u) \leqslant (v,w)$ if and only if there exists a path $p \in \vec{{\mathscr P}}(\Gamma)$ such that $t=pv$ and $u=pw$: that is, we descend in the natural partial order from $(v,w)$ by prepending the same prefix to each of $v$ and $w$, and ascend from $(v,w)$ by deleting an identical prefix from each of $v$ and $w$. Recall that an inverse semigroup $S$ with zero $0 \in S$ is said to be $E^*$--{\em unitary} if, whenever $e \in E(S)$, $e \ne 0$ and $s \in S$ with $s \geqslant e$, we have $s \in E(S)$. It is clear that graph inverse semigroups are $E^*$--unitary. For further structural results about graph inverse semigroups, we refer to \cite{JonLaw,MesMitch}. Graph inverse semigroups as topological inverse semigroups have recently been studied in \cite{MMMP}.
\section{Closed inverse subsemigroups of graph inverse semigroups} \label{cliss} Our first result generalizes -- and closely follows -- Lawson's classification \cite[Theorem 4.3]{LwPoly} of closed inverse submonoids of the polycyclic monoids $P_n$ to the closed inverse subsemigroups of graph inverse semigroups $\mathcal S(\Gamma)$. Given Lawson's insights, the generalization is largely routine, but it is perhaps slightly surprising that the classification extends from bouquets of circles (giving the polycyclic monoids as graph inverse semigroups) to arbitrary finite directed graphs, and so we have presented it in detail. Our notational conventions also differ slightly from those in \cite{LwPoly}.
\begin{theorem} \label{clisms_of_grinv} In a graph inverse semigroup $\mathcal S(\Gamma)$ there are three types of proper closed inverse subsemigroups $L$: \begin{enumerate} \item Finite chain type: $L$ consists of a finite chain of idempotents. \item Infinite chain type: $L$ consists of an infinite chain of idempotents. \item Cycle type: $L$ has the form \[ L = L_{p,d} = \{ (vp^{r}d,vp^{s}d) : r,s \geqslant 0 \; \text{with} \; v \; \text{a suffix of}\; p \} \cup \{(q,q) : q \; \text{a suffix of }d \}, \] where $p$ is a directed circuit in $\Gamma$, $d$ is a directed path in $\Gamma$ starting at the initial point of $p$, and where $p,d$ do not share a non-trivial prefix. In this case, $L$ is the smallest closed inverse subsemigroup of $\mathcal S(\Gamma)$ containing $(d,pd)$. \end{enumerate} \end{theorem}
\begin{proof} It is easy to see that closed chains of idempotents are indeed closed inverse subsemigroups. For the cycle type, if $L = L_{p,d}$ then any two elements are suffix comparable, and we have \begin{align*} (q,q)(q',q') &= (q \wedge q', q \wedge q') \; \text{for suffixes $q,q'$ of $d$}, \\ (q,q) (vp^{r}d,vp^{s}d) &= (vp^{r}d,vp^{s}d) = (vp^{r}d,vp^{s}d)(q,q). \end{align*} Now consider $ (vp^{r}d,vp^{s}d) (wp^{j}d,wp^{k}d)$: write $p=v_0v$ and suppose that $s<j$. Then $wp^jd = wp^{j-s-1}v_0vp^sd$ and so \[ (vp^{r}d,vp^{s}d) (wp^{j}d,wp^{k}d) = (wp^{j-s-1}v_0vp^{r}d,wp^{k}d) = (wp^{j-s+r}d,wp^kd) \in L \,, \] and a similar calculation applies if $s>j$. If $s=j$ and $v$ is a suffix of $w$, say $w=v_1v$, then \begin{align*}
(vp^{r}d,vp^{s}d) (wp^{s}d,wp^{k}d) &= (vp^{r}d,vp^{s}d) (v_1vp^{s}d,wp^{k}d) \\ &= (v_1vp^{r}d,wp^{k}d) \\ &= (wp^{r}d,wp^{k}d) \in L \end{align*} and a similar calculation applies if $s=j$ and $w$ is a suffix of $v$. Hence $L$ is a subsemigroup of $\mathcal S(\Gamma)$, and since the inverse of an element of $L$ is clearly also in $L$, we deduce that $L$ is an inverse subsemigroup of $\mathcal S(\Gamma)$. Since we ascend in the natural partial order in $L$ by deleting identical prefixes from the paths $vp^{r}d$ and $vp^{s}d$, or from a given suffix of $d$, it is also clear that $L$ is closed.
If $F$ is a closed inverse subsemigroup of $\mathcal S(\Gamma)$ and contains $(d,pd)$, then for any $m,n \geqslant 0$ we have $(pd,d)^m(d,pd)^n = (p^md,d)(d,p^nd) = (p^md,p^nd) \in F$. Ascending in the natural partial order, we may obtain any element of $L_{p,d}$, and so $L_{p,d} \subseteq F$.
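The computation $(pd,d)^m(d,pd)^n = (p^md,p^nd)$ can be checked mechanically under the same hypothetical string encoding of paths (a one-vertex graph is assumed, and the multiplication rule is re-implemented here so that the sketch is self-contained):

```python
def compose(x, y):
    # (t,u)(v,w) in a graph inverse semigroup over a one-vertex graph,
    # with paths encoded as strings of edge labels and zero as None
    if x is None or y is None:
        return None
    (t, u), (v, w) = x, y
    if u.endswith(v):
        return (t, u[:len(u) - len(v)] + w)
    if v.endswith(u):
        return (v[:len(v) - len(u)] + t, w)
    return None

def power(x, n):
    # n-fold product x x ... x, for n >= 1
    r = x
    for _ in range(n - 1):
        r = compose(r, x)
    return r

p, d = 'ab', 'c'                       # a sample circuit p and path d
m, n = 3, 2
lhs = compose(power((p + d, d), m), power((d, p + d), n))
assert lhs == (p * m + d, p * n + d)   # equals (p^m d, p^n d)
```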
Let $L$ be a closed inverse subsemigroup of $\mathcal S(\Gamma)$. If $w$ and $w'$ are paths occurring in elements of $L$ and are not suffix comparable, then the product of the idempotents $(w,w)$ and $(w',w')$ in $L$ is equal to $0$, and so $0\in L$ and by closure $L=\mathcal S(\Gamma)$. Hence if $L$ is proper, any two paths occurring in elements of $L$ are suffix comparable and hence have the same terminal vertex. By definition, if $(u,v) \in \mathcal S(\Gamma)$ then $u,v$ have the same initial vertex: hence if $(u,v) \in L$ then $u,v$ have the same initial and the same terminal vertex in $\Gamma$. Suffix comparability then ensures that any proper closed inverse subsemigroup of $\mathcal S(\Gamma)$ consisting entirely of idempotents is either a finite or an infinite chain. We note that in the second case, in order to obtain directed paths of arbitrary length, $\Gamma$ must contain a directed circuit.
We shall now describe those closed inverse subsemigroups of $\mathcal S(\Gamma)$ which contain non-idempotent elements. Suppose that $L \neq E(L)$ is a closed inverse subsemigroup of $\mathcal S(\Gamma)$. Then there exists $(u,v) \in L,$ with $u \neq v$, and we may assume that the path $u$ is shorter than the path $v$. Hence $u$ is a suffix of $v$ and so $v=pu$ for some path $p$. Since $u$ and $v$ have the same initial and terminal vertices, $p$ must be a directed circuit in $\Gamma$. If $p$ and $u$ share a common prefix, with $p=ap_1$ and $u=au_1$ then \[ (u_1,p_1u) \geqslant (au_1,ap_1u) = (u,pu) \] and so by closure, $(u_1,p_1u) \in L$.
Amongst the non-idempotent elements $(u,pu) \in L$, choose $u=d$ to have smallest possible length, and then having chosen $d$, choose $p$ to be a non-empty directed circuit of smallest possible length. Then $d,p$ do not share a non-trivial prefix. Now for any $m \geqslant 0$ we have $(p^md,p^md) \in E(L)$ and so, if $(w_1,w_2) \in L$, each $w_i$ is a suffix of some directed path
$p^{m_i}d$. Since $L$ is closed, every suffix of $d$ is in $L$ and by minimality of $|d|$, every element of $L$ that contains a suffix of $d$ is an idempotent $(q,q)$. Hence if $|w_i| < |d|$ we have $w_1=w_2$. So we may now assume that for $i=1,2$ we have
$|w_i| \geqslant |d|$, and so $w_1 = up^rd$, $w_2=vp^sd$ for some $r,s \geqslant 0$ and suffixes $u,v$ of $p$. \end{proof}
From the result of the previous Theorem, we may immediately conclude the following:
\begin{cor} If the graph $\Gamma$ contains no directed circuit, then every proper closed inverse subsemigroup of $\mathcal S(\Gamma)$ is a chain of idempotents. \end{cor}
Our next result, based on \cite[Theorem 4.4]{LwPoly}, which treats the polycyclic monoids, classifies the closed inverse subsemigroups of a graph inverse semigroup up to conjugacy. We begin with the following definitions.
\begin{definition} Let $L =\upset{u,u}$ be a closed inverse subsemigroup of finite chain type in a graph inverse semigroup $\mathcal S(\Gamma)$. We call the initial vertex of the directed path $u$ the \textit{root} of $L$. \end{definition}
Adapting ideas of \cite[Section 1.3]{Loth} from words to paths in $\Gamma$:
\begin{definition} Two paths $p,q$ in $\Gamma$ are \textit{conjugate} if there are paths $u,v$ in $\Gamma$ such that $p=uv$ and $q=vu$. Equivalently (see \cite[Proposition 1.3.4]{Loth}), there exists a path $w$ in $\Gamma$ such that $wp=qw$. Conjugate paths must be directed circuits in $\Gamma$, and conjugacy amounts to the selection of an alternative initial edge. \end{definition}
The following Lemma is due to Lawson and is extracted from the proof of \cite[Theorem 4.4]{LwPoly}.
\begin{lemma} \label{cong_in_E} Let $S$ be an $E^*$--unitary inverse semigroup. If $H$ and $K$ are conjugate closed inverse subsemigroups of $S$ with $H \ne S \ne K$ and $H \subseteq E(S)$ then $K \subseteq E(S)$. Moreover, if $H$ has a minimum idempotent, then so does $K$. \end{lemma}
\begin{proof} There exists $s \in S$ with $s^{-1}Hs \subseteq K \; \text{and} \; sKs^{-1} \subseteq H$. Let $k \in K$: then $k \ne 0$ and $sks^{-1} \in H$ and so $sks^{-1} \in E(S)$. It follows that $s^{-1}(sks^{-1})s=(s^{-1}s)k(s^{-1}s) \in E(S)$ and $(s^{-1}s)k(s^{-1}s) \leqslant k$. Since $S$ is $E^*$--unitary, we deduce that $k \in E(S)$.
Now suppose that $m \in H \subseteq E(S)$ is the minimum idempotent and that $e \in K$. Then $m \leqslant ses^{-1}$ and so \[ s^{-1}ms \leqslant s^{-1}ses^{-1}s = es^{-1}s \leqslant e \] and so $s^{-1}ms$ is a minimum idempotent in $K$. \end{proof}
\begin{theorem} \label{cong_in_Gamma} \leavevmode \begin{enumerate} \item Let $L$ be a closed inverse subsemigroup of $\mathcal S(\Gamma)$ of finite chain type. Then all closed inverse subsemigroups conjugate to $L$ are of finite chain type. Two closed inverse subsemigroups $L =\upset{u,u}$ and $K =\upset{v,v}$ are conjugate in $\mathcal S(\Gamma)$ if and only if they
have the same root. \item Let $L$ be a closed inverse subsemigroup of $\mathcal S(\Gamma)$ of infinite chain type. Then all closed inverse subsemigroups conjugate to $L$ are also of infinite chain type. Two closed inverse subsemigroups of infinite chain type are conjugate if and only if there are idempotents $(s,s)\in L$ and $(t,t)\in K$ such that for all paths $p$ in $\Gamma,$ we have that $(ps,ps)\in L$ if and only if $(pt,pt)\in K$. \item Let $L$ be a closed inverse subsemigroup of $\mathcal S(\Gamma)$ of cycle type. The only closed inverse subsemigroups conjugate to $L$ are of cycle type. Moreover, $L_{p,d}$ is conjugate to $L_{q,k}$ if and only if $p$ and $q$ are conjugate directed circuits in $\Gamma$. \end{enumerate} \end{theorem}
\begin{proof} (a) It follows from Lemma \ref{cong_in_E} that if $L$ has finite chain type then so does every closed inverse subsemigroup conjugate to $L$.
Suppose that $L$ and $K$ have the same root $x_0 \in V(\Gamma)$. Then $(u,v) \in \mathcal S(\Gamma)$, and for any suffix $w$ of $u$ we have \[ (v,u)(w,w)(u,v) = (v,v) \in K \,.\] Similarly, for any suffix $t$ of $v$, $(u,v)(t,t)(v,u) = (u,u) \in L$. Hence $L$ and $K$ are conjugate.
Conversely, suppose that $L$ and $K$ are conjugate, with conjugating element $(p,q) \in \mathcal S(\Gamma)$, so that for any suffixes $w$ of $u$ and $t$ of $v$ we have \[ (q,p)(w,w)(p,q) \in K \; \text{and} \; (p,q)(t,t)(q,p) \in L \,.\] Then $(q,p)({\mathbf r}(u),{\mathbf r}(u))(p,q) \in K$, so that $p$ and ${\mathbf r}(u)$ are suffix-comparable: hence $p$ also ends at ${\mathbf r}(u)$, and $(q,p)({\mathbf r}(u),{\mathbf r}(u))(p,q)=(q,q) \in K$. Therefore $q$ is a suffix of $v$. Similarly, $p$ is a suffix of $u$.
Let $v=v_1q$: then \[ (p,q)(v,v)(q,p) = (p,q)(v_1q,v_1q)(q,p) = (v_1p,v_1p) \in L \] and so $v_1p$ is a suffix of $u$. Let $u=u_0v_1p$: then \[ (q,p)(u,u)(p,q) = (q,p)(u_0v_1p,u_0v_1p)(p,q) = (u_0v_1q,u_0v_1q) \in K\] and so $u_0v_1q$ is a suffix of $v$. But $v=v_1q$ and so $u_0$ is a vertex (namely the root of $L$), and $u=v_1p$. Hence $u$ and $v$ have the same initial vertex, and so $L$ and $K$ have the same root.
(b) By Lemma \ref{cong_in_E} any closed inverse subsemigroup\ $K$ conjugate to $L$ must be of chain type, and by part (a) $K$ must be infinite. Suppose that $(t,s)L(s,t) \subseteq K \; \text{and} \; (s,t)K(t,s) \subseteq L$. Since $0 \not\in K$ we have, for all $(u,u) \in L$, that $s$ is suffix comparable with $u$ and similarly for all $(v,v) \in K$, that $t$ is suffix comparable with $v$. If we consider
$u$ with $|u| \geqslant |s|$ then $s$ must be a suffix of $u$ and by closure of $L$ we have $(s,s) \in L$. Similarly $(t,t) \in K$. Suppose that $(ps,ps) \in L$. Then $(t,s)(ps,ps)(s,t) = (pt,pt) \in K$ and similarly if $(pt,pt) \in K$ then $(ps,ps) \in L$.
Conversely, if $s$ and $t$ exist as in the Theorem and $(w,w) \in L$ then $s$ is suffix comparable with $w$. \\ If $w$ is a suffix of $s$, with $s=hw$, then \[ (t,s)(w,w)(s,t) = (t,hw)(w,w)(hw,t) = (t,t) \in K \] and if $s$ is a suffix of $w$ with $w=ps$ then $(ps,ps) \in L$ and so \[ (t,s)(w,w)(s,t) = (t,s)(ps,ps)(s,t) = (pt,pt) \in K \,.\] Similarly $(s,t)K(t,s) \subseteq L$, and $L$ and $K$ are conjugate.
(c) By parts (a) and (b), any closed inverse subsemigroup of $\mathcal S(\Gamma)$ that is conjugate to $L_{p,d}$ must be of cycle type. Suppose that the closed inverse subsemigroups $L_{p,d}$ and $L_{q,k}$ are conjugate in $\mathcal S(\Gamma)$, and so there exists $(s,t) \in \mathcal S(\Gamma)$ such that \begin{align} (t,s) L_{p,d} (s,t) \subseteq L_{q,k} \label{cong1}\\ (s,t) L_{q,k} (t,s) \subseteq L_{p,d}. \label{cong2} \end{align} Since $L_{q,k}$ is closed and $L_{p,d}$ is the smallest closed inverse subsemigroup of $\mathcal S(\Gamma)$ containing $(d,pd)$, then \eqref{cong1} is equivalent to $(t,s)\,(d,pd)\,(s,t) \in L_{q,k}$. Also, since $0 \not\in L_{q,k}$ we must have $s\,$ suffix-comparable with $u$ and $v$ whenever $(u,v)$ is an element of $L_{p,d}\,$. Hence $(s,s) \in L_{p,d}\,$, and similarly $(t,t) \in L_{q,k}$.
First suppose that $s = up^ad\,$ and $t = vq^bk\,$ for some $a,b \geqslant 0$, where $u$ is a suffix of $p$ and $v$ is a suffix of $q$. Write $p=hu$: then \begin{align*} (t,s)\,(d,pd)\,(s,t) &= (vq^bk,up^ad)\,(d,pd)\,(up^ad,vq^bk) \\ &= (vq^bk,up)\,(u,vq^bk) \\ &= (vq^bk,uhvq^bk) \in L_{q,k}. \end{align*} It follows that $uhvq^bk = vq^mk\,$ for some $m \geqslant 0$. Comparing lengths of these directed paths, we see that $m>b$, and then after cancellation we obtain $uhv = vq^{m-b}$. Hence $uh$ is conjugate to some power of $q$, and since $uh$ is a conjugate of $p$, we conclude that $p$ is conjugate to some power of $q$.
Now suppose that $s$ is a suffix of $d$ and write $d=cs$. With $t$ as before, we now obtain \[ (t,s)\,(d,pd)\,(s,t) = (vq^bk,s)\,(cs,pcs)\,(s,vq^bk) = (cvq^bk,pcvq^bk) \in L_{q,k}. \] It follows that $pcvq^bk = cvq^mk$ for some $m \geqslant 0$. Again $m>b$ and after cancellation we obtain $pcv=cvq^{m-b}$. Here we see directly that $p$ is conjugate to a power of $q$.
Now suppose that $s$ is a suffix of $d$ and write $d=cs$, and that $t$ is a suffix of $k$ and write $k=jt$. We now obtain \[ (t,s)\,(d,pd)\,(s,t) = (t,s)\,(cs,pcs)\,(s,t) = (ct,pct) \in L_{q,k}. \] Since by assumption $p\,$ is not the empty path, we have $ct = wq^ak\,$ and $pct = wq^bk\,$ for some suffix $w$ of $q$ and some $a,b \geqslant 0$. Again comparing lengths, we see that $b>a$, and then $pct = pwq^ak = wq^bk$. After cancellation we obtain $pw=wq^{b-a}$ and again $p$ is conjugate to a power of $q$.
Hence for each possibility of $s$, we deduce from \eqref{cong1} that $p\,$ is conjugate to some power of $q\,$. Using equation \eqref{cong2} we deduce similarly that $q$ is conjugate to a power of $p\,$. Again comparing lengths, we conclude that $p$ and $q$ are conjugate.
Conversely, if $p,q$ are conjugate, suppose that $p=uv$ and $q=vu$. Then it is easy to check that setting $s=k$ and $t=vd$ furnishes a pair $(s,t)$ satisfying \eqref{cong1} and \eqref{cong2}. \end{proof}
\begin{remark} \label{bi_poly} For the polycyclic monoids $P_{n}$ $(n\geqslant 2)$, we obtain the classification of closed inverse submonoids up to conjugacy given in \cite[Theorem 4.4]{LwPoly} by applying Theorem \ref{cong_in_Gamma} to the graph $\Gamma$ with one vertex and $n$ loops labelled $a_{1},\dots, a_{n}.$ For the case $n=1$, with a single loop labelled $a$, we obtain the graph inverse semigroup $\mathcal S(\Gamma)= B \cup \{0\}$, where $B$ is the bicyclic monoid. A proper closed inverse subsemigroup $L$ of $\mathcal S(\Gamma)$ cannot contain $0$ and so is a proper closed inverse subsemigroup of $B$. If $L \subseteq E(B)$ then by Theorem \ref{clisms_of_grinv}, $L$ is either $E(B)$ itself or is of finite chain type, and part (a) of Theorem \ref{cong_in_Gamma} then shows that all closed inverse subsemigroups of finite chain type in $B$ are conjugate. By Theorem \ref{clisms_of_grinv}, a closed inverse subsemigroup $L$ of $B$ of cycle type consists of elements of the form $(qp^r,qp^s)$ with $r,s\geqslant 0$, where $p=a^m$ for some $m \geqslant 1$ and $q=a^k$ for some $k$ with $0 \leqslant k \leqslant m-1$: that is, elements of the form $(a^{rm+k},a^{sm+k})$. The subsemigroup $L$ is therefore isomorphic to the fundamental simple inverse $\omega$--semigroup $B_m$, discussed in \cite[Section 5.7]{HoBook}. \end{remark}
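As a small computational illustration (not part of the argument), one can encode $a^i$ as a string of $i$ letters, re-use the multiplication rule of Section \ref{grinvsgp}, and confirm on a finite sample that products of elements $(a^{rm+k},a^{sm+k})$ again have both exponents congruent to $k$ modulo $m$:

```python
# Check, for the bicyclic case (one vertex, one loop a), that products of
# elements (a^{rm+k}, a^{sm+k}) keep both exponents in k + m*Z.
def compose(x, y):
    # multiplication rule with paths as strings over the single label a
    (t, u), (v, w) = x, y
    if u.endswith(v):
        return (t, u[:len(u) - len(v)] + w)
    if v.endswith(u):
        return (v[:len(v) - len(u)] + t, w)
    return None

m, k, R = 3, 1, 4                      # sample parameters
L = {('a' * (r * m + k), 'a' * (s * m + k))
     for r in range(R) for s in range(R)}
for x in L:
    for y in L:
        z = compose(x, y)
        assert z is not None           # powers of a are always comparable
        # truncation can push an exponent past the finite sample, so we
        # test membership in k + m*Z rather than membership in L itself
        assert (len(z[0]) - k) % m == 0 and (len(z[1]) - k) % m == 0
```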
\section{The index of closed inverse subsemigroups in graph inverse semigroups} \label{clissindex}
We first discuss the index of closed inverse subsemigroups of finite and infinite chain type in $\mathcal S(\Gamma)$. For a fixed path $w$ in $\Gamma$ and a vertex $v$ of $w$, we define $N^{\Gamma}_{v,w}$ to be the number of distinct directed paths in $\Gamma$ whose initial vertex is $v$ but whose first edge is not in $w$. The empty path $v$ is one such path.
\begin{theorem} \label{idpt_type_grinv} \leavevmode \begin{enumerate} \item Let $L$ be a closed inverse subsemigroup of finite chain type in $\mathcal S(\Gamma)$, with minimal element $(w,w)$. Then $L$ has infinite index in $\mathcal S(\Gamma)$ if and only if there exists a non-empty directed circuit $c\,$ in $\Gamma$ and a (possibly empty) directed path $g$ from some vertex $v_0$ of $w$ to a vertex of $c\,$ and with $g$ having no edge in common with $c \cup w$. \item If $L = \nobraupset{(w,w)}$ and $L$ has finite index in $\mathcal S(\Gamma)$ then \[ [\mathcal S(\Gamma):L] = \sum_{v \in V(w)} N^{\Gamma}_{v,w} \,.\] \item Let $L$ be a closed inverse subsemigroup of infinite chain type in $\mathcal S(\Gamma)$. Then $L$ has infinite index in $\mathcal S(\Gamma)$. \end{enumerate} \end{theorem}
\begin{proof} (a) Let $L$ have finite chain type. A coset representative of $L$ has the form $(q,u)$ where $q$ is some suffix of $w$ and $u$ is a path with the same initial vertex as $q$. If $L$ has infinite index, then there are infinitely many distinct choices for $(q,u)$ and since $\Gamma$ is finite, there must be a directed circuit in $\Gamma$ as described.
Conversely, suppose that $g,c\,$ exist. Let $q\,$ be the suffix of $w\,$ that has initial vertex $v_0\,$. Then $q \in L$ and so for any $k \geqslant 0\,$ the coset $C_k = L\upset{q,gc^k}$ exists. Now for $k>l,\,$ we have if $g$ is non-empty, that \[ (q,gc^k)(q,gc^l)^{-1} = (q,gc^k)(gc^l,q) = 0 \not\in L \] and so by part (c) of Proposition \ref{cosetsofL}, the cosets $C_k\,$ and $C_l\,$ are distinct. If $g$ is empty then we have \[ (q,c^k)\,(q,c^l)^{-1} = (q,c^k)\,(c^l,q) = (q,c^{k-l}q) \not\in L \] and again the cosets $C_k\,$ and $C_l\,$ are distinct.
(b) By part (a) there are no directed cycles accessible from any vertex of $w$, and so $N^{\Gamma}_{v,w}$ is finite for each vertex $v$ of $w$. A coset representative of $L$ has the form $(s,t)$ where $s$ is a suffix of $w$. Suppose that two such elements, $(s_1,t_1)$ and $(s_2,t_2)$, represent the same coset. Then $(s_1,t_1)(t_2,s_2) \in L$: in particular the product is non-zero and so $t_1, t_2$ are suffix comparable. We may assume that $t_2=ht_1$: then $(s_1,t_1)(ht_1,s_2) = (hs_1,s_2)$ and this is in $L$ if and only if $s_2 = hs_1$. Therefore $\nobraupset{L(s_1,t_1)} = \nobraupset{L(s_2,t_2)}$ if and only if $(s_2,t_2)=(hs_1,ht_1)$, and so the distinct coset representatives are the pairs $(s,t)$ where $s$ is a suffix of $w$, $s$ and $t$ have the same initial vertex, but do not share the same initial edge. It follows that the number of distinct cosets is $\sum_{v \in V(w)} N^{\Gamma}_{v,w}$, and $L$ itself is represented by $({\mathbf r}(w),{\mathbf r}(w))$.
(c) If $L$ has infinite chain type then the elements of $L$ comprise the idempotents determined by an infinite sequence of directed paths in $\Gamma,\,$ each of which is a suffix of the next. Eventually then, we find a path $cq$ where $c$ is a directed circuit, and $(cq,cq) \in L$. Then for each $k \geqslant 0,\,$ the element $(q,c^kq)$ represents a coset $C_k = L\upset{q,c^kq}\,.$ \\ Now for $k>l,\,$ \[ (q,c^kq)\,(q,c^lq)^{-1} = (q,c^kq)\,(c^lq,q) = (q,c^{k-l}q) \not\in L \] and the cosets $C_k\,$ and $C_l\,$ are distinct. \end{proof}
\begin{example} We illustrate the index computation in part (b) of Theorem \ref{idpt_type_grinv} with $\Gamma$ equal to the finite chain with $n$ edges $e_1, \dotsc , e_n$ and $n+1$ vertices $v_0, v_1, \dotsc , v_n$: \[ \xymatrixcolsep{3pc} \xymatrix{ v_n \ar[r]^{e_n} & v_{n-1} \ar[r]^{e_{n-1}} & \dotsc \ar[r] & v_1 \ar[r]^{e_1} & v_0 } \] Here $\mathcal S(\Gamma)$ is finite, and every proper closed inverse subsemigroup is of finite chain type and has finite index. The number of paths in $\Gamma$ with initial vertex $v_j$ is $j+1$, and so
\[ |\mathcal S(\Gamma) \setminus \{ 0 \}| = \sum_{j=0}^n (j+1)^2 = \sum_{j=1}^{n+1} j^2 = \frac{1}{6}(n+1)(n+2)(2n+3) \,.\]
We let $w$ be the path $e_n \dotsm e_1$ and $L = \upset{w,w}$. Since $w$ has $n+1$ suffixes, we have $|L|=n+1$. An element $(s,t)$ lies in a coset of $L$ if and only if $s$ is a suffix of $w$ and ${\mathbf d}(s)={\mathbf d}(t)$: hence the total number of elements in all the cosets of $L$ is $\sum_{j=0}^n (j+1) = \sum_{j=1}^{n+1} j = \frac{1}{2}(n+1)(n+2)$.
Now $N^{\Gamma}_{v_i,w} = 1$ since only the length zero path at $v_i$ is counted, and so $[\mathcal S(\Gamma):L] = n+1$.
Let $q_i$ be the path $e_i \dotsc e_1$, so that $q_n = w$, and set $q_0 = v_0$. The $n+1$ cosets are then represented by the elements $(q_i,v_i)$, $0 \leqslant i \leqslant n$, and \begin{align*} \nobraupset{L(q_i,v_i)} &= \nobraupset{\{ (q_k,q_k)(q_i,v_i) : 0 \leqslant k \leqslant n \}} \\ &= \upset{\{ (q_k,e_k \dotsm e_{i+1}) : i < k \leqslant n \} \cup \{ (q_i,v_i) \}} \\ &= \{ (q_n, e_n \dotsc e_{i+1}), \dotsc , (q_{i+1},e_{i+1}),(q_i,v_i) \} \end{align*}
and so $|\nobraupset{L(q_i,v_i)}|=n-i+1$. Counting the total number of elements in all the cosets of $L$ we obtain \[ \sum_{i=0}^n (n-i+1) = \sum_{j=1}^{n+1} j = \frac{1}{2}(n+1)(n+2) \] as before. \end{example}
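As an illustrative sanity check, the counts in the chain example can be confirmed by brute-force enumeration; the encoding below (a path stored as a hypothetical pair of vertex indices) is an assumption of the sketch, not notation from the text.

```python
# Enumerate the chain v_n -> ... -> v_0 and confirm the counts above:
# nonzero element total, coset-element total, and the index n+1.
def check_chain(n):
    # a directed path is (j, i) with j >= i: it runs from v_j down to v_i;
    # (j, j) is the empty path at v_j
    paths = [(j, i) for j in range(n + 1) for i in range(j + 1)]
    # non-zero elements of S(Gamma): pairs of paths with the same source
    elts = [(p, q) for p in paths for q in paths if p[0] == q[0]]
    assert len(elts) == (n + 1) * (n + 2) * (2 * n + 3) // 6
    # elements lying in a coset of L: the first path is a suffix of w,
    # i.e. it ends at v_0 (the chain has a unique path between vertices)
    coset_elts = [(p, q) for (p, q) in elts if p[1] == 0]
    assert len(coset_elts) == (n + 1) * (n + 2) // 2
    # coset representatives: same source, second path not starting with
    # the first edge of the suffix, which in a chain forces it to be empty
    reps = [(p, q) for (p, q) in coset_elts if q[0] == q[1]]
    assert len(reps) == n + 1          # the index [S(Gamma):L]

for n in range(1, 7):
    check_chain(n)
```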
We now discuss the closed inverse subsemigroups of cycle type.
\begin{theorem} \label{cycle_type_grinv} A closed inverse subsemigroup\ $L_{p,d}$ of cycle type in $\mathcal S(\Gamma)$, such that $p$ is a circuit with at least two distinct edges, has infinite index in $\mathcal S(\Gamma)$. \end{theorem}
\begin{proof} Write $p=uv$ where each of $u,v$ is non-empty and one contains an edge not in the other. Let $c$ be the conjugate circuit $vu$. Then for $k \geqslant 1,\,$ the element $(vd,c^k) \in \mathcal S(\Gamma)$ and determines a coset $C_k = L_{p,d}\upset{vd,c^k}$. Then for $k>l,\,$ \[ (vd,c^k)\,(vd,c^l)^{-1} = (vd,c^k)\,(c^l,vd) = (vd,c^{k-l}vd) = (vd,up^{k-l}d) \not\in L_{p,d} \] and the cosets $C_k$ and $C_l$ are distinct. \end{proof}
We now consider a graph $\Gamma$ containing an edge $a$ that is a directed circuit of length one, and a closed inverse subsemigroup $L_{a^m,\,d}$ of cycle type.
\begin{theorem} \label{loops} \leavevmode \begin{enumerate} \item A closed inverse subsemigroup\ $L = L_{a^m,\,d}$ of $\mathcal S(\Gamma)$ is of infinite index if there exists a directed cycle $c$ in $\Gamma$ that contains an edge $e$ with $a \ne e$, and a (possibly empty) directed path $g$ from some vertex $v_0$ of $d$ to a vertex of $c$ and with $g$ having no edge in common with $c \cup d$. \item Let $L=L_{a^m,\,d}$ where $a$ is a directed circuit in $\Gamma$ of length one, and there are no other directed circuits in $\Gamma$ attached to a vertex of $d$. Then $L$ has finite index in $\mathcal S(\Gamma)$, given by \[ [\mathcal S(\Gamma):L_{a^m,d}] = (m-1)N^{\Gamma}_{{\mathbf d}(a),a} + \sum_{v \in V(d)} N^{\Gamma \setminus \{ a \}}_{v,d}\,. \] \end{enumerate} \end{theorem}
\begin{proof} (a) Suppose that $c,g$ exist and let $q$ be the suffix of $d$ with initial vertex $v_0$.
Let $C_k = L\upset{q,gc^k}\,$. Then if $g$ is non-empty, for $k>l$, \[ (q,gc^k)\,(q,gc^l)^{-1} = (q,gc^k)\,(gc^l,q) = 0 \not\in L \] and the cosets $C_k$ and $C_l$ are distinct. If $g$ is empty, then \[ (q,c^k)\,(q,c^l)^{-1} = (q,c^k)\,(c^l,q) = (q,c^{k-l}q) \not\in L \] and the cosets $C_k\,$ and $C_l\,$ are again distinct.
(b) We are now reduced to the case that the only directed circuits in $\Gamma$ that can be attached to a vertex of $d$ are powers of the loop $a$. A coset representative of $L=L_{a^m,d}$ has the form $(a^rd,w)$ with $r \geqslant 0$, or $(q,w)$ where $q$ is a proper suffix of $d$. Hence $w$ has the same initial vertex as $d$ or as some proper suffix of $d$. We can only construct finitely many representatives of the form $(q,w)$. We do not need to consider paths of the form $(d,a^kw)$ for any $k \geqslant 0$ since $(d,a^kw)(w,d) = (d,a^kd) \in L$. The analysis in the proof of part (b) of Theorem \ref{idpt_type_grinv} can then be repeated to show that the number of cosets obtained this way is $\sum_{v \in V(d)} N^{\Gamma \setminus \{ a \}}_{v,d}$.
We now consider representatives of the form $(a^rd,w)$ with $r \geqslant 1$. Here $w$ must have the form $w=a^st$ for some $s \geqslant 0$ and some (possibly empty) directed path $t$ not containing the edge $a$. If $r \equiv s \pmod{m}$ then $(a^rd,a^st)(t,d) = (a^rd,a^sd) \in L$ and so $\nobraupset{L(a^rd,a^st)} = \nobraupset{L(d,t)}= \nobraupset{L(d_1,t_1)}$ for some suffix $d_1$ of $d$ and some path $t_1$ with the same initial vertex as $d_1$ but not sharing the same first edge. Hence $\nobraupset{L(a^rd,a^st)}$ will be counted within the sum $\sum_{v \in V(d)} N^{\Gamma \setminus \{ a \}}_{v,d}$. Now fix $t$ and consider the cosets $\nobraupset{L(a^rd,a^st)}$ with $r \not\equiv s \pmod{m}$. Now given $L\upset{a^{r_1}d,a^{s_1}t}$ and $L\upset{a^{r_2}d,a^{s_2}t}$ with $s_1 \geqslant s_2$, we have \[ (a^{r_1}d,a^{s_1}t)(a^{r_2}d,a^{s_2}t)^{-1} = (a^{r_1}d,a^{s_1}t)(a^{s_2}t,a^{r_2}d) = (a^{r_1}d,a^{s_1-s_2+r_2}d) \] and $(a^{r_1}d,a^{s_1-s_2+r_2}d) \in L$ if and only if $r_2-s_2 \equiv r_1-s_1 \pmod{m}$. Hence for a fixed $t$ we can produce exactly $m-1$ distinct cosets of the form $L\upset{a^rd,a^st}$.
But for distinct paths $t_1$ and $t_2$, $a^{s_1}t_1$ cannot be suffix comparable with $a^{s_2}t_2$ and so \[ (a^{r_1}d,a^{s_1}t_1)(a^{r_2}d,a^{s_2}t_2)^{-1} = 0 \not\in L \] and the cosets determined by distinct paths $t_1$ and $t_2$ are distinct. Hence each of the $N^{\Gamma}_{{\mathbf d}(a),a}$ paths $t$ starting at ${\mathbf d}(a)$, but not having $a$ as its initial edge, contributes $m-1$ cosets. \end{proof}
\begin{example} \label{bicyclic_with_zero} As in Remark \ref{bi_poly}, we suppose that $\Gamma$ consists only of the vertex $x$ and a loop $a$ at $x$ so that the graph inverse semigroup $\mathcal S(\Gamma)$ is the bicyclic monoid $B$ with a zero adjoined. From Theorem \ref{idpt_type_grinv}, the closed inverse submonoids of $B$ contained in $E(B)$ have infinite index. Part (b) of Theorem \ref{loops} tells us that the closed inverse submonoid $B_m=L_{a^m,x}$ of $B$ has index $m$. \end{example}
\begin{example} \label{cycle_type_eg} Let $\Gamma$ be the following graph: \[ \xymatrixcolsep{3pc} \xymatrix{ x' \ar[r]^h & y' \\ x \ar@(ul,dl)[]_a \ar[r]^{e} \ar[u]_g & y \ar[r]^{f} \ar[u]_k & z } \] and let $L = L_{a^2,ef}$. Then we have \[ N^{\Gamma \setminus \{ a \}}_{z,ef} = 1 \,, N^{\Gamma \setminus \{ a \}}_{y,ef} = 2 \,, N^{\Gamma \setminus \{ a \}}_{x,ef} = 3 \,,\] counting the paths in the sets $\{z\}$, $\{ y,k\}$ and $\{ x,g,gh \}$ respectively, and $N^{\Gamma}_{x,a} = 6$, counting the paths in the set $\{x,e,g,ef,ek,gh\}$. From part (b) of Theorem \ref{loops} we find that $[\mathcal S(\Gamma):L]=12$ and a complete set of coset representatives is \[ \left\{(z,z),(f,y),(f,k),(ef,x),(ef,g),(ef,gh), \right.\] \[ \left. (ef,a),(ef,ag),(ef,agh),(ef,ae),(ef,aek),(ef,aef) \right\} \,.\] \end{example}
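The counts in this example can be verified by brute force. The sketch below is an illustration only: it stores each edge as a hypothetical label $\to$ (source, target) entry and enumerates directed paths with the leftmost edge first, matching the way paths such as $ef$ and $gh$ are written above.

```python
# Brute-force check of the index [S(Gamma):L_{a^2,ef}] = 12 from the
# example, by enumerating directed paths in the given graph.
EDGES = {'a': ('x', 'x'), 'e': ('x', 'y'), 'g': ('x', "x'"),
         'h': ("x'", "y'"), 'f': ('y', 'z'), 'k': ('y', "y'")}

def paths_from(v, banned_first, edges):
    """All directed paths (label strings, leftmost edge first) starting at
    v whose first edge is not in banned_first; includes the empty path."""
    out = ['']
    for lab, (s, t) in edges.items():
        if s == v and lab not in banned_first:
            out += [lab + q for q in paths_from(t, set(), edges)]
    return out

no_a = {lab: st for lab, st in EDGES.items() if lab != 'a'}
d_edges = {'e', 'f'}                       # the edges of the path d = ef
counts = [len(paths_from(v, d_edges, no_a)) for v in ('z', 'y', 'x')]
assert counts == [1, 2, 3]                 # the three N^{Gamma\{a}} values
N_x_a = len(paths_from('x', {'a'}, EDGES))
assert N_x_a == 6                          # N^Gamma_{x,a}
m = 2
assert (m - 1) * N_x_a + sum(counts) == 12
```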
\end{document}
\begin{document}
\begin{frontmatter}
\title{Infinite quantum well: a coherent state approach}
\author{P. L. Garc\'{\i}a de Le\'on$^{1,2}$, J. P. Gazeau$^{1,3}$ and J. Queva$^{1,3}$} \address{$^1$ Laboratoire Astroparticules et Cosmologie, B\^{a}timent Condorcet, 10, rue Alice Domon et L\'eonie Duquet, 75205 Paris Cedex 13}
\address{$^2$ Universit\'e Paris Est - Institut Gaspard Monge (IGM-LabInfo), 5 Bd. Descartes, Champs-sur-Marne, 77454 Marne-la-Vall\'ee Cedex 2}
\address{$^3$ Universit\'{e} Paris Diderot-Paris7, B\^{a}timent des Grands Moulins, 75205 Paris Cedex 13}
\ead{pgarcia@apc.univ-paris7.fr, gazeau@apc.univ-paris7.fr, queva@apc.univ-paris7.fr} \begin{abstract} A new family of 2-component vector-valued coherent states for the quantum particle motion in an infinite square well potential is presented. They allow a consistent quantization of the classical phase space and observables for a particle in this potential. We then study the resulting position and (well-defined) momentum operators. We also consider their mean values in coherent states
and their quantum dispersions. \end{abstract}
\begin{keyword}
Vector coherent states \sep quantization \sep infinite square well
\PACS{03.65.-w, 03.65.Ca} \end{keyword} \end{frontmatter}
\section{Introduction}
Even though the quantum dynamics in an infinite square well potential represents a rather unphysical limit situation,
it is a familiar textbook problem and a simple tractable model for the confinement of a quantum particle. On the other hand this model has a serious drawback when it is analyzed in more detail. Namely, when one proceeds to a canonical standard quantization, the definition of a momentum operator with the usual form $-\rmi\hbar \rmd/\rmd x$ has a doubtful meaning. This subject has been discussed in many places (see \cite{bfv} for instance), and attempts to circumvent this anomaly range from self-adjoint extensions \cite{bfv} to $\mathcal{PT}$ symmetry approaches \cite{znojil}.
First of all, the canonical quantization assumes the existence of a momentum operator (essentially) self-adjoint in $\mathrm{L}^2(\mathbb{R})$ that respects some boundary conditions on the boundaries of the well. As has been shown, these conditions cannot be fulfilled by the usual derivative form of the momentum without losing self-adjointness. Moreover there exists an uncountable set of self-adjoint extensions of such a derivative operator, which makes the question of a precise choice based on physical requirements truly delicate \cite{bfv,vogity}.
When the classical particle is trapped in an infinite well on a real interval $\Delta$, the Hilbert space of quantum states is $\mathrm{L}^2(\Delta,\rmd x)$ and the quantization problem becomes similar, to a certain extent, to the quantization of the motion on the circle $S^1$. Notwithstanding the fact that the boundary conditions are not periodic but impose instead that the wave functions in position representation vanish at the boundary, the momentum operator $\widehat{p}\,$ for the motion in the infinite well should be the counterpart of the angular momentum operator $\widehat{L}$ for the motion on the circle. Since the energy spectrum for the infinite square well is $\{n^2, \, n\in \N^{\ast}\}$, we should expect the spectrum of $\widehat{p}$ to be $\Z^{\ast}$, like that of $\widehat{L}$ without the null eigenvalue. This similarity between the two problems will be exploited in the present paper. We will adapt the coherent states (CS's) on the circle \cite{debgo,kopap,delgo} to the present situation by constructing two-component vector CS's, in the spirit of \cite{aeg}, as infinite superpositions of spinor eigenvectors of $\widehat{p}\,$.
In the present note, we first describe the CS quantization procedure. We recall the construction of
the CS's for the motion on the circle and the resulting quantization. We then revisit the infinite square well problem and propose a family of vector CS's suitable for the quantization of the related classical phase space. Note that various constructions of CS's for the infinite square well have been carried out, like the one in \cite{jpajpg} or the one resting upon the dynamical $SU(1,1)$ symmetry \cite{frank}. Finally, we present the consequences of our choice after examining basic quantum observables, derived from this quantization scheme, like position, energy, and a quantum version of the problematic momentum. In particular we focus on their mean values in CS's (``lower symbols'') and quantum dispersions. As will be shown, the classical limit is recovered after choosing appropriate limit values of some parameters present in the expression of our CS's.
\section{\label{sec:level1} The approach via coherent state quantization}
Coherent state quantization \cite{gahulare,ber,klau2,gapi,gmm,gaga,gahulare1} is an alternative way of representing classical observables in a quantum system. The states used include Glauber and Perelomov CS's, but the definition is wide enough to admit a large range of state families resolving the identity. Identity resolution is here the crucial condition.
In fact, these states form a frame of reference well suited to represent classical quantities and, in that sense, work as a natural quantization procedure which is in one-to-one correspondence with the choice of the frame. The validity of a precise frame choice is asserted by comparing spectral characteristics of quantum observables $\widehat{f}$ with data from the observational space. Unlike canonical quantization where the whole model rests upon a pair of conjugated variables within the Hamilton formalism \cite{dirac}, here we need the following elements.
First of all, let $X$ be a set equipped with a measure $\mu(\rmd x)$, and let $\mathrm{L}^2(X, \mu)$ be the Hilbert space of square integrable functions $f(x)$ on $X$: \begin{equation}
\| f \|^2 = \int_{X} | f(x)|^2 \, \mu(\rmd x) < \infty\, ,\qquad
\langle f_1 | f_2 \rangle = \int_{X} \overline{f_1(x)} f_2(x) \, \mu(\rmd x)\, . \end{equation} The set $X$ can be taken as the phase space of a particular problem as will be the case in this paper. Next we need a finite or infinite orthonormal set $\mathbf{S} = \{ \phi_n(x), n=1,2,\dots\}$, selected among the elements of $\mathrm{L}^2(X, \mu)$. This set spans, by definition, the separable Hilbert subspace ${\mathcal H}_{\mathbf S}$ and must obey the following condition: \begin{equation}\label{factor}
0 < {\mathcal N} (x) \equiv \sum_n | \phi_n (x) |^2 < \infty \ \mbox{almost everywhere}\, . \end{equation}
Now let us define the family of \textit{coherent} states $\{ | x \rangle\}_{x\in X}$ \underline{in} $ {\mathcal H}_{\mathbf S}$ through the following linear superposition: \begin{equation}
| x\rangle \equiv \frac{1}{\sqrt{{\mathcal N} (x)}} \sum_n \overline{\phi_n (x)} | n\rangle\, , \end{equation}
where the states $|n\rangle$ are in one-to-one correspondence with the functions in the set $\mathbf{S}$. This defines an injective map $X \ni x \mapsto | x \rangle \in {\mathcal H}_{\mathbf S}$ (which should be continuous with respect to some minimal topology affected to $X$ for which the latter is locally compact). These coherent states have two main features: they are normalized, $\langle x | x \rangle = 1 $, and, crucially, they resolve the identity in ${\mathcal H}_{\mathbf S}$ \begin{equation}\label{resoid}
\int_X | x\rangle \langle x | \, {\mathcal N}(x)\,\mu(\rmd x)= \I_{{\mathcal H}_{\mathbf S}}. \end{equation} The CS quantization of a {\it classical} observable $f(x)$ on $X$ consists then in associating to $f(x)$ the operator \begin{equation} \label{oper}
\widehat f := \int_X f(x) |x\rangle\langle x| \, {\mathcal N}(x)\,\mu(\rmd x). \end{equation} This ``diagonal'' decomposition (in a topological weak sense) may prove to be valid for a wide class of operators. The function $f(x) \equiv \widehat {A}_f (x)$ is called the upper (or contravariant) symbol of the operator $ \widehat f$ and is non-unique in general. On the other hand, the mean value
$\langle x| \widehat f | x\rangle \equiv \check{A}_f(x)$ is called lower (or covariant) symbol of $ \widehat f$.
\section{Quantization of the particle motion on the circle $S^1$}
The motion in the infinite square well potential can be seen as a particular case of the motion on the circle $S^1$, once we have identified the boundaries of the well with each other and imposed Dirichlet conditions on them. Functions on this domain will behave as pinched waves on a circle so it is useful to expose first the more general case.
Applying our scheme of quantization we can define the CS's on the circle. The measure space $X$ is the cylinder $S^1 \times \R = \{ x \equiv (q,p) \, | \, 0 \leq q < 2\pi , \, p \in \R \}$, \emph{i.e.} the phase space of a particle moving on the circle, where $q$ and $p$ are canonically conjugate variables. We consistently choose the measure on $X$ as the usual one, invariant (up to a factor) with respect to canonical transformations: $\mu(\rmd x) = \frac{1}{2\pi} \, \rmd q\, \rmd p $. The functions $\phi_n (x)$ forming the orthonormal system needed to construct CS's are suitably weighted Fourier exponentials: \begin{equation} \label{ficir} \phi_n (x) = \left(\frac{\epsilon}{\pi}\right)^{1/4}\, \rme^{-\frac{\epsilon}{2}(p-n)^2} \, \rme^{ \rmi nq}\, , \qquad n\in \Z \, , \end{equation}
where $\epsilon > 0$ can be arbitrarily small. This parameter includes the Planck constant together with the physical quantities characterizing the classical motion (frequency, mass, etc.). Actually, it plays the role of a regularization parameter. Notice that the continuous distribution $x \mapsto | \phi_n(x) |^2$ is the normal law centered at $n$ (for the angular momentum variable $p$). We establish a one-to-one correspondence between the functions $\phi_n$ and the states $| n\rangle$ which form an orthonormal basis of some generic separable Hilbert space $\mathcal{H}$ that may or may not be viewed as a subspace of $\mathrm{L}^2(X, \mu(\rmd x))$. CS's, as vectors in $\mathcal{H}$, then read \begin{equation}\label{ccs}
| p, q \rangle = \frac{1}{\sqrt{{\mathcal N} (p)}}\, \left(\frac{\epsilon}{\pi}\right)^{1/4} \sum_{n \in \Z}
\rme^{-\frac{\epsilon}{2}(p-n)^2} \, \rme^{- \rmi nq} | n\rangle\, , \end{equation} where the normalization factor \begin{equation}\label{norci} \mathcal{N}(x) \equiv \mathcal{N}(p) = \sqrt{\frac{\epsilon}{\pi}}\sum_{n \in \Z} \rme^{-\epsilon (p-n)^2} < \infty\, , \end{equation} is a periodic train of normalized Gaussian functions and is proportional to an elliptic Theta function. Applying the Poisson summation yields the alternative form: \begin{equation} \label{Poisnorci} \mathcal{N}(p) = \sum_{n \in \Z} \rme^{2\pi \rmi np}\, \rme^{-\frac{\pi^2}{\epsilon} n^2}\, . \end{equation} From this formula it is easy to prove that $\lim_{\epsilon \to 0}\mathcal{N}(p) = 1$.
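As a cross-check of the resummation and of this limit, the Gaussian train (\ref{norci}) and its Poisson-resummed form (\ref{Poisnorci}) can be evaluated side by side; the following is a minimal numerical sketch (the truncation bound \texttt{nmax} and the sample points are arbitrary choices):

```python
import math

def norm_direct(p, eps, nmax=50):
    # N(p) as the periodic train of Gaussians, truncated at |n| <= nmax
    return math.sqrt(eps / math.pi) * sum(
        math.exp(-eps * (p - n) ** 2) for n in range(-nmax, nmax + 1))

def norm_poisson(p, eps, nmax=50):
    # Poisson-resummed form: sum_n exp(2*pi*i*n*p) * exp(-pi^2 n^2 / eps);
    # the imaginary parts cancel over the symmetric sum, leaving cosines
    return sum(math.cos(2 * math.pi * n * p)
               * math.exp(-math.pi ** 2 * n ** 2 / eps)
               for n in range(-nmax, nmax + 1))
```

The two expressions agree to floating precision, and for small $\epsilon$ the resummed series makes the limit $\mathcal{N}(p)\to 1$ manifest, since all $n\neq 0$ terms are damped by $\rme^{-\pi^2 n^2/\epsilon}$.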
The CS's (\ref{ccs}) have been previously proposed, however through quite different approaches, by De Bi\`evre-Gonz\'alez (1992-93) \cite{debgo}, Kowalski-Rembieli\'nski-Papaloucas (1996) \cite{kopap}, and Gonz\'alez-Del Olmo (1998) \cite{delgo}.
\subsection{Quantization of classical observables}
The quantum operator acting on ${\mathcal H}$, associated to the classical observable $f(x)$, is obtained as in (\ref{oper}). For the most basic one, i.e. the classical observable $p$ itself, the procedure yields \begin{equation}\label{psym}
\widehat p = \int_{X} \mathcal{N}(p)\, p\, | p,q \rangle \langle p, q | \mu (\rmd x) = \sum_{n \in \Z}
n\, | n\rangle \langle n| , \end{equation} and this is nothing but the angular momentum operator, which reads in angular position representation (Fourier series): $ \widehat{p} = -\rmi\frac{\partial}{\partial q}$.
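This diagonal form can be probed directly by numerical integration of (\ref{oper}): from the definition of the CS's, the matrix elements are $\langle n|\widehat p|n'\rangle = \int_X p\,\overline{\phi_n(x)}\,\phi_{n'}(x)\,\mu(\rmd x)$, which should reduce to $n\,\delta_{nn'}$. A sketch (grid sizes and the value of $\epsilon$ are arbitrary choices):

```python
import numpy as np

eps = 0.5
p = np.linspace(-25.0, 25.0, 5001)                     # momentum grid
q = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)   # periodic angle grid
dp, dq = p[1] - p[0], q[1] - q[0]

def phi(n):
    # phi_n(q, p) sampled on the (p, q) grid
    return (eps / np.pi) ** 0.25 \
        * np.exp(-0.5 * eps * (p[:, None] - n) ** 2) \
        * np.exp(1j * n * q[None, :])

def p_hat_element(n, m):
    # <n| p_hat |m> = (1/2pi) * double integral of p * conj(phi_n) * phi_m
    integrand = p[:, None] * np.conj(phi(n)) * phi(m)
    return integrand.sum() * dp * dq / (2 * np.pi)
```

The $q$-integral produces $\delta_{nn'}$ and the Gaussian $p$-integral produces $n$, so for instance \texttt{p\_hat\_element(3, 3)} is numerically $3$ while \texttt{p\_hat\_element(3, 2)} vanishes.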
For an arbitrary function $f(q)$, we have \begin{align}
\widehat{f(q)}& = \int_{X} \mu (\rmd x) \mathcal{N}(p) f(q) \, | p,q \rangle \langle p, q | \nonumber \\ &= \sum_{n,n' \in \Z}
\rme^{-\frac{\epsilon}{4}\,(n-n')^2} \,c_{n-n'}(f)| n\rangle \langle n' | , \label{f(beta)} \end{align}
where $c_{n}(f)$ is the $n$-th Fourier coefficient of $f$. In particular, we have for the angular position operator $\widehat{q}\,$: \begin{equation} \label{opangle}
\widehat{q} = \pi \I_{{\mathcal H}} + \rmi \sum_{n\neq n'}
\frac{ \rme^{-\frac{\epsilon}{4}(n-n')^2}}{n-n'}\, | n\rangle \langle n' |\, .
\end{equation} The shift operator is the quantized counterpart of the ``Fourier fundamental harmonic'': \begin{equation} \label{opfourier}
\widehat{ \rme^{\rmi q}} = \rme^{-\frac{\epsilon}{4}}\, \sum_{n}
| n + 1\rangle \langle n |. \end{equation} The commutation rule between (\ref{psym}) and (\ref{opfourier}) gives \begin{equation} [ \, \widehat{p}, \widehat{ \rme^{\rmi q}} \,] = \widehat{ \rme^{\rmi q}}, \end{equation} and is canonical in the sense that it is in exact correspondence with the classical Poisson bracket \begin{equation} \left\{ p, \rme^{\rmi q} \right\} = \rmi \rme^{\rmi q}. \end{equation} Some interesting aspects of other such correspondences are found in \cite{rabeie}. For arbitrary functions of $q$ the commutator \begin{equation}
[ \, \widehat{p}, \widehat{f(q)} \,] = \sum_{n, n'}(n-n')
\rme^{-\frac{\epsilon}{4}(n-n')^2}\,c_{n-n'}(f)\, | n\rangle \langle n' |, \end{equation} can raise interpretational difficulties. In particular, when $f(q)=q$, i.e. for the angle operator \begin{equation} \label{ccrcir}
[ \,\widehat{p}, \widehat{q} \,] = \rmi \sum_{n \neq n'}
\rme^{-\frac{\epsilon}{4}(n-n')^2}\, | n\rangle \langle n' |\, ,
\end{equation} the comparison with the classical bracket $\left\{ p, q \right\} = 1$ is not direct. Actually, these difficulties are only apparent if we consider instead the $2\pi$-periodic extension to $\mathbb{R}$ of $f(q)$. The position observable $f(q)=q$, originally defined in the interval $[0,2\pi)$, acquires then a sawtooth shape and its periodic discontinuities are responsible for the discrepancy. In fact the obstacle is circumvented if we examine, for instance, the behaviour of the corresponding lower symbols at the limit $\epsilon \to 0$. For the angle operator we have \begin{align} \label{lowsymb}
\nonumber \langle p_0, q_0 | \, \widehat{q} \, | p_0, q_0 \rangle &= \pi + \frac{1}{2}\, \left(1 + \frac{\mathcal{N}(p_0 - \frac{1}{2})}{\mathcal{N}(p_0)}\right)\, \sum_{n \neq 0} \rmi\, \frac{ \rme^{-\frac{\epsilon}{2}n^2 + \rmi n q_0}}{n} \\ & \underset{\epsilon \to 0}{\sim} \pi + \sum_{n \neq 0} \rmi\, \frac{ \rme^{\rmi n q_0}}{n}\,, \end{align} where we recognize at the limit the Fourier series of $f(q)$. For the commutator, we recover the canonical commutation rule modulo Dirac singularities on the lattice $ 2\pi \Z$. \begin{align} \label{symbcom}
\nonumber \langle p_0, q_0 | [\, \widehat{p}, \widehat{q}\, ]\, | p_0, q_0 \rangle &= \frac{1}{2}\,\left(1 + \frac{\mathcal{N}(p_0 - \frac{1}{2})}{\mathcal{N}(p_0)}\right)\left( -\rmi + \sum_{n\in \Z} \rmi \rme^{-\frac{\epsilon}{2}n^2 + \rmi n q_0}\right) \\ & \underset{\epsilon \to 0}{\sim} -\rmi + \rmi\sum_{n } \delta(q_0 - 2 \pi n). \end{align}
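The formulas (\ref{psym}), (\ref{opangle}) and (\ref{ccrcir}) are easy to probe with finite matrix truncations; since $\widehat{p}$ is diagonal, the truncated commutator reproduces (\ref{ccrcir}) exactly. A minimal sketch (truncation size and $\epsilon$ are arbitrary choices):

```python
import numpy as np

eps, N = 0.5, 7
ns = np.arange(-N, N + 1)                  # n = -N, ..., N
D = ns[:, None] - ns[None, :]              # n - n'

P = np.diag(ns).astype(complex)            # angular momentum (psym)
safe_D = np.where(D == 0, 1, D)            # avoid division by zero on diagonal
Q = np.pi * np.eye(len(ns), dtype=complex) \
    + np.where(D != 0, 1j * np.exp(-eps * D ** 2 / 4) / safe_D, 0)  # (opangle)

C = P @ Q - Q @ P                          # truncated commutator [p, q]
expected = np.where(D != 0, 1j * np.exp(-eps * D ** 2 / 4), 0)      # (ccrcir)
```

Here \texttt{C} and \texttt{expected} coincide entry by entry, with a vanishing diagonal.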
\section{Quantization of the motion in an infinite well potential}
\subsection{The standard quantum context} Any quantum system trapped inside the infinite square well $0 \leqslant q \leqslant L$ must have its wave function equal to zero beyond the boundaries. It is thus natural to impose on the wave functions the conditions \begin{equation}\psi (q) = 0, \qquad q \geqslant L \quad \mbox{and} \quad q \leqslant 0\, . \label{3.1} \end{equation} Since the motion takes place only inside the interval $[0, L]$, we may as well ignore the rest of the line and replace the constraints (\ref{3.1}) by the following ones: \begin{equation} \label{domH} \psi \in \mathrm{L}^2([0, L],\rmd q), \qquad \psi (0) = \psi (L) = 0\,. \end{equation} Moreover, one may consider the periodized well and instead impose the cyclic boundary conditions $\psi (n L) = 0, \, \forall n \in \Z$.
In either case, stationary states of the trapped particle of mass $m$ are easily found from the eigenvalue problem for the Schr\"odinger operator with Hamiltonian: \begin{equation} H \equiv H_{\rm w} = - \frac{\hbar^2}{2m} \frac{\rmd^2}{\rmd x^2} \, . \label{3.2} \end{equation} This Hamiltonian is self-adjoint \cite{simon} on an appropriate dense domain in (\ref{domH}). Then \begin{equation} \Psi (q,t) = \rme^{-\frac{\rmi }{\hbar} Ht} \Psi (q,0)\, , \label{time-evol} \end{equation} where $\Psi (q,0) \equiv \psi (q)$ obeys the eigenvalue equation \begin{equation} H \psi (q) = E \psi (q)\, , \end{equation} together with the boundary conditions (\ref{domH}). Normalized eigenstates and corresponding eigenvalues are then given by \begin{align}\label{PTstate} \psi_n (q) &= \sqrt{\frac{2}{L}} \sin \left(n\pi \frac{q}{L}\right)\, , \quad 0 \leqslant q \leqslant L \, ,\\ H \psi_n &= E_n \psi_n \, , \qquad n = 1, 2, \dotsc , \end{align} with \begin{equation} E_n = \frac{\hbar^2\pi^2}{2mL^2} n^2 \; \equiv \; \hbar \omega n^2 \, , \qquad
\omega = \frac{\hbar\pi^2}{2mL^2} \equiv \frac{2\pi}{T_r} \, , \end{equation} where $T_r$ is the ``revival'' time to be compared with the purely classical round trip time.
\subsection{The quantum phase space context}
The classical phase space of the motion of the particle is the infinite strip
$X = [0,L]\times \R = \{x= (q,p) \; | \; q\in [0,L]\, , p \in \R\}$ equipped with the measure: $\mu(\rmd x) = \rmd q\, \rmd p$. A phase trajectory for a given non-zero classical energy $E_{\mathrm{class}}= \frac{1}{2}m v^2$ is represented in figure \ref{figure1}.
Typically, we have two phases in the periodic particle motion with a given energy: one corresponds to positive values of the momentum, $p=mv$ while the other one is for negative values, $p=-mv$. This observation naturally leads us to introduce the Hilbert space of two-component complex-valued functions (or spinors) square-integrable with respect to $\mu(\rmd x)$ : \begin{equation} \label{twohilb}
\mathrm{L}^2_{\C^2}(X,\mu(\rmd x))
\simeq \C^2 \otimes \mathrm{L}^2_{\C}(X,\mu(\rmd x))
= \bigg\lbrace \Phi(x) =\bigg(\begin{matrix} \phi_+(x)\\ \phi_-(x) \end{matrix}\bigg) ,
\ \phi_{\pm} \in \mathrm{L}^2_{\C}(X,\mu(\rmd x))\bigg\rbrace\, . \end{equation}
We now choose our orthonormal system as formed of the following vector-valued functions $\Phi_{n,\kappa} (x)$, $\kappa = \pm$, \begin{align} \label{ortsysiw} \nonumber \Phi_{n,+} (x) & = \bigg(\begin{matrix} \phi_{n,+}(x)\\ 0 \end{matrix}\bigg)\, , \qquad \Phi_{n, -} (x) = \bigg(\begin{matrix} 0\\ \phi_{n,-}(x)\end{matrix}\bigg)\, , \\ \phi_{ n, \kappa}(x) & = \sqrt{c}\, \rme^{-\frac{1}{2\rho^2}(p-\kappa p_n)^2}\, \sin \left(n\pi\frac{q}{L}\right)\, ,\qquad \kappa = \pm\,, \ n=1,2, \dotsc,\,\end{align} where \begin{equation} \label{norm}
c=\frac{2}{\rho L \sqrt{\pi}} , \qquad
p_n = \sqrt{2m E_n} = \frac{\hbar \pi}{L}\, n \, , \end{equation} and the half-width $\rho > 0$ is a parameter which has the dimension of a momentum, say $\rho = \hbar \pi \vartheta/L$ with $\vartheta >0$ a dimensionless parameter. This parameter can be arbitrarily small (like for the classical limit) and, of course, arbitrarily large (for a very narrow well, for instance).
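With the constant $c$ of (\ref{norm}), the functions (\ref{ortsysiw}) form an orthonormal set: the $q$-integral gives $\frac{L}{2}\delta_{nn'}$ and the $p$-integral gives $c\rho\sqrt{\pi}$ on the diagonal. A minimal midpoint-rule check in units $\hbar=1$, $L=\pi$ (grid sizes, $\rho$ and the momentum cutoff are arbitrary choices):

```python
import numpy as np

hbar, L, rho = 1.0, np.pi, 0.8
c = 2.0 / (rho * L * np.sqrt(np.pi))

nq, npts, pmax = 400, 4000, 25.0
q = (np.arange(nq) + 0.5) * (L / nq)                   # midpoint grid in q
p = -pmax + (np.arange(npts) + 0.5) * (2 * pmax / npts)
dq, dp = L / nq, 2 * pmax / npts

def phi(n, kappa):
    # phi_{n, kappa}(q, p) on the (q, p) grid
    pn = hbar * np.pi * n / L                          # = n in these units
    return np.sqrt(c) * np.sin(n * np.pi * q[:, None] / L) \
        * np.exp(-(p[None, :] - kappa * pn) ** 2 / (2 * rho ** 2))

def inner(n, m, kappa):
    # L^2(X, dq dp) inner product of two (real) components
    return (phi(n, kappa) * phi(m, kappa)).sum() * dq * dp
```

Numerically \texttt{inner(n, n, kappa)} is $1$ and \texttt{inner(n, m, kappa)} vanishes for $n\neq m$, as required by the finiteness and orthonormality of the system.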
The functions $\Phi_{n,\kappa} (x)$ are continuous, vanish at the boundaries $q=0$ and $q=L$ of the phase space, and obey the essential finiteness condition (\ref{factor}): \begin{align} \nonumber
0 < \mathcal{ N} (x) & \equiv \mathcal{ N} (q,p) \equiv \mathcal{ N}_+ (x) + \mathcal{ N}_- (x)
= \sum_{\kappa= \pm}\sum_{n=1}^{\infty} \Phi_{n, \kappa}^{\dag} (x) \Phi_{n, \kappa} (x) \\
&= c\, \sum_{n=1}^{\infty}\left[ \rme^{-\frac{1}{\rho^2}(p - p_n)^2} + \rme^{-\frac{1}{\rho^2}
(p + p_n)^2}\right] \sin^2\left( n\pi\frac{q}{L}\right)< \infty \, . \label{factor1} \end{align}
The expression of $\mathcal{N}$ simplifies to: \begin{equation} \label{norma}
\mathcal{N}(q,p)
= c\; \mathcal{S}(q,p)
= c \; \Re\left\{\frac{1}{2}\sum_{n=-\infty}^\infty \big[1 - \rme^{\rmi 2\pi n \frac{q}{L}}\big]
\rme^{-\frac{1}{\rho^2}(p-p_n)^2}\right\}. \end{equation}
It then becomes apparent that $\mathcal{N}$ and $\mathcal{S}$ can be expressed in terms of elliptic theta
functions. Function $\mathcal{S}$ has no physical dimension whereas $\mathcal{N}$ has the same
dimension as $c$, that is the inverse of an action.
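The equivalence of (\ref{factor1}) and (\ref{norma}) follows from $\sin^2(n\pi q/L) = \frac{1}{2}\big(1-\cos(2\pi n q/L)\big)$ and $p_{-n}=-p_n$. A numerical sketch of the identity, in units $\hbar=1$, $L=\pi$ (the truncation order, $\rho$ and the sample points are arbitrary choices):

```python
import math

hbar, L, rho = 1.0, math.pi, 1.0
c = 2.0 / (rho * L * math.sqrt(math.pi))
p_n = lambda n: hbar * math.pi * n / L        # = n in these units

def N_direct(q, p, M=40):
    # double-Gaussian form (factor1), truncated at n = M
    return c * sum(
        (math.exp(-((p - p_n(n)) / rho) ** 2)
         + math.exp(-((p + p_n(n)) / rho) ** 2))
        * math.sin(n * math.pi * q / L) ** 2
        for n in range(1, M + 1))

def N_resummed(q, p, M=40):
    # resummed form c * S (norma), real part taken, truncated at |n| <= M
    return c * sum(
        0.5 * (1 - math.cos(2 * math.pi * n * q / L))
        * math.exp(-((p - p_n(n)) / rho) ** 2)
        for n in range(-M, M + 1))
```

Both truncated sums agree to floating precision at any interior point of the strip.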
We are now in a position to define our vector CS's \cite{aeg}. We set up a one-to-one correspondence between the functions $\Phi_{n,\kappa}$'s and two-component states \begin{equation} \label{twocomp}
| n, \pm \rangle \deq |\pm\rangle \otimes | n \rangle\, , \qquad |+\rangle = {\,1\, \choose 0}\, , \quad |-\rangle = {\,0\, \choose 1}\, , \end{equation} forming an orthonormal basis of some separable Hilbert space of the form $\mathcal{K}= \C^2 \otimes \mathcal{H}$. The latter can be viewed also as the subspace of $\mathrm{L}^2_{\C^2}(X,\mu(\rmd x))$ equal to the closure of the linear span of the set of $\Phi_{n,\kappa}$'s. We choose the following set of $2\times 2$ diagonal real matrices for our construction of vectorial CS's: \begin{equation} \label{opF} \mathrm{F}_n(x)
= \begin{pmatrix}
\phi_{n, +} (q,p) & 0 \\
0 & \phi_{n, -}(q,p) \end{pmatrix} \, . \end{equation}
Note that $\mathcal{N}(x) = \sum_{n=1}^{\infty} \mathrm{tr}(\mathrm{F}_n(x)^2)$. Vector CS's, $| x , \chi \rangle \in \C^2 \otimes \mathcal{H}= \mathcal{K} $, are now defined for each $x \in X$ and $\chi \in \C^2$ by the relation \begin{equation}
| x , \chi \rangle = \frac{1}{\sqrt{\mathcal{N}(x)}} \; \sum_{n=1}^{\infty}
\mathrm{F}_n (x)\; |\chi\rangle \otimes |n\rangle \; . \label{def-vcs} \end{equation} In particular, we single out the two orthogonal CS's \begin{equation} \label{vectcs}
|x, \kappa\rangle = \frac{1}{\sqrt{\mathcal{N}(x)}} \sum_{n=1}^{\infty} \mathrm{F}_n(x) | n, \kappa\rangle \, , \qquad \kappa = \pm\, . \end{equation}
By construction, these states also satisfy the infinite square well boundary conditions, namely $|x, \kappa \rangle_{q=0}=|x,\kappa\rangle_{q= L}=0$. Furthermore they fulfill the normalizations \begin{equation} \label{normVCS}
\langle x,\kappa| x,\kappa \rangle = \frac{\mathcal{N}_{\kappa}(x)}{\mathcal{N}(x)}\, ,
\qquad \sum_{\kappa = \pm } \langle x,\kappa| x,\kappa \rangle = 1\, , \end{equation} and the resolution of the identity in $\mathcal{K}$: \begin{align}
\nonumber \int_X | x\rangle\langle x| \mathcal{N}(x) \mu(\rmd x)
&=\sum_{\kappa,\kappa' = \pm } \sum_{n,n'=1}^\infty \int_{-\infty}^\infty\int_0^L \mathrm{F}_n(q,p)
\mathrm{F}_{n'}(q,p) | n, \kappa\rangle\langle n', \kappa'| \rmd q \rmd p \\
&= \sum_{\kappa = \pm }\sum_{n=1}^\infty | n, \kappa\rangle\langle n, \kappa|
= \sigma_0 \otimes \mathbb{I}_\mathcal{H}= \mathbb{I}_\mathcal{K}\, , \end{align} where $\sigma_0$ denotes the $2\times 2$ identity matrix, consistently with the Pauli matrix notation $\sigma_{\mu}$ used in the following.
\subsection{Quantization of classical observables}
The quantization of a generic function $f(q,p)$ on the phase space is given by the expression (\ref{oper}), that is for our particular CS choice: \begin{align} \label{VCSquant} \nonumber &\widehat{f}
= \sum_{\kappa = \pm }\int_{-\infty}^\infty \int_0^L \, f(q,p)| x,\kappa\rangle\langle x, \kappa | \mathcal{N}(q,p) \rmd q \rmd p
\\
& = \sum_{n,n'=1}^{\infty} | n \rangle\langle n' | \otimes \begin{pmatrix}
\widehat{f}_+ & 0 \\
0 & \widehat{f}_- \end{pmatrix} \, , \end{align} where \begin{equation}
\widehat{f}_{\pm}=\int_{-\infty}^\infty \rmd p \int_0^L \rmd q \, \phi_{n,\pm}(q,p) f(q,p) \overline{\phi_{n',\pm}}(q,p)\, . \end{equation} For the particular case in which $f$ is a function of $p$ only, $f(p)$, the operator is given by \begin{align} \nonumber \widehat{f}
& = \sum_{\kappa = \pm }\int_{-\infty}^\infty \int_0^L \, f(p)| x,\kappa\rangle\langle x, \kappa | \mathcal{N}(q,p) \rmd q \rmd p
\nonumber\\
&= \frac{1}{\rho\sqrt{\pi}}\, \sum_{n=1}^{\infty} | n \rangle\langle n | \otimes \begin{pmatrix} \widehat{f}_+ & 0 \cr
0 & \widehat{f}_- \end{pmatrix}\, , \end{align} with \begin{equation} \widehat{f}_\pm= \int_{-\infty}^{\infty} \rmd p\, f(p) \exp\big(-\frac{1}{\rho^2}(p\mp p_n)^2\big)\, . \end{equation}
Note that this operator is diagonal on the $|n,\kappa \rangle$ basis.
\subsubsection{Momentum and Energy}
In particular, using $f(p)=p$, one gets the operator \begin{equation}\label{p}
\widehat{p}= \sum_{n=1}^{\infty} p_n \, \sigma_3\otimes| n\rangle\langle n| \, , \end{equation} where $\sigma_3=\bigl(\begin{smallmatrix} 1 & 0 \\0 & -1 \end{smallmatrix}\bigr)$ is a Pauli matrix.
For $f(p)=p^2$, which is proportional to the Hamiltonian, the quantum counterpart reads as \begin{equation} \label{qkinene}
\widehat{p^2} = \frac{\rho^2}{2}\mathbb{I}_{\mathcal{K}} + \sum_{n=1}^{\infty} p_n^2 \,\sigma_0\, \otimes | n\rangle\langle n|
= \frac{\rho^2}{2}\mathbb{I}_{\mathcal{K}} + (\widehat{p}\,)^2 \, . \end{equation} Note that this implies that the operator for the square of momentum does not coincide with the square of the momentum operator. Actually they coincide up to O$(\hbar^2)$.
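The $\rho^2/2$ shift is just the second moment of the Gaussian weight: $\frac{1}{\rho\sqrt{\pi}}\int_{-\infty}^{\infty} p^2\, \rme^{-(p\mp p_n)^2/\rho^2}\,\rmd p = p_n^2 + \rho^2/2$. A one-function numerical check by the trapezoidal rule (grid parameters are arbitrary choices):

```python
import math

def second_moment(pn, rho, npts=20001, span=12.0):
    # (1/(rho*sqrt(pi))) * integral of p^2 * exp(-((p - pn)/rho)^2) dp,
    # truncated to [pn - span*rho, pn + span*rho]
    lo = pn - span * rho
    h = 2 * span * rho / (npts - 1)
    total = 0.0
    for i in range(npts):
        p = lo + i * h
        w = 0.5 if i in (0, npts - 1) else 1.0   # trapezoid endpoint weights
        total += w * p * p * math.exp(-((p - pn) / rho) ** 2)
    return total * h / (rho * math.sqrt(math.pi))
```

Numerically \texttt{second\_moment(pn, rho)} returns $p_n^2 + \rho^2/2$, confirming the shift between $\widehat{p^2}$ and $(\widehat{p}\,)^2$.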
\subsubsection{Position}
For a general function of the position $f(q)$ our quantization procedure yields the following operator:
\begin{equation}
\widehat{f}
= \sum_{n,n'=1}^{\infty} \rme^{-\frac{1}{4\rho^2}(p_n-p_{n'})^2}\left[d_{n-n'}(f)- d_{n+n'}(f)\right] \sigma_0\, \otimes | n\rangle\langle n'| \, , \end{equation} where \begin{equation} d_{m}(f)\equiv \frac{1}{L}\int_{0}^{L} f(q)\cos\Big(m\pi\frac{q}{L}\Big) \rmd q\, . \end{equation}In particular, for $f(q)=q$ we get the ``position'' operator \begin{equation} \widehat{q}
= \frac{L}{2}\mathbb{I}_{\mathcal{K}} - \frac{2 L}{\pi^2} \sum_{n, n' \geq 1, \above0pt n+n'=2k+1}^{\infty}
\rme^{-\frac{1}{4\rho^2}(p_n-p_{n'})^2} \left[\frac{1}{(n-n')^2}-\frac{1}{(n+n')^2}\right]\, \sigma_0\, \otimes | n\rangle\langle n'| \, , \end{equation} with $k\in \mathbb{N}$. Note the appearance of the classical mean value for the position on the diagonal.
\subsubsection{Commutation rules}
Now, in order to see to what extent these momentum and position operators differ from their classical (canonical) counterparts, let us consider their commutator: \begin{align}
[\, \widehat{q},\widehat{p}\, ] &= \frac{2\hbar}{\pi} \!\!\!\! \sum_{n\neq n'\above0pt n+n'=2k+1}^{\infty} \!\!\!\! C_{n,n'}\, \sigma_3 \otimes | n\rangle\langle n'| \\
C_{n,n'} &= \rme^{-\frac{1}{4\rho^2}(p_n-p_{n'})^2}(n-n')\left[\frac{1}{(n-n')^2}-\frac{1}{(n+n')^2}\right] . \end{align} This is an infinite antisymmetric real matrix. The respective spectra of finite matrix approximations of this operator and of position and momentum operators are compared in figures \ref{EV1} and \ref{EV2} for various values of the regulator $\rho=\hbar \pi \vartheta/L= \vartheta$ in units $\hbar = 1$, $L=\pi$. When $\rho$ takes large values, one can see that the eigenvalues of $[\, \widehat{q},\widehat{p}\, ]$ accumulate around $\pm \rmi$, i.e. they become almost canonical. Conversely, when $\rho\to 0$ all eigenvalues become null, which corresponds to the classical limit.
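The finite truncations used in the figures are straightforward to reproduce: the matrix $C_{n,n'}$ is real and antisymmetric, so the truncated commutator has a purely imaginary spectrum. A sketch in units $\hbar=1$, $L=\pi$ (so $\rho=\vartheta$ and $p_n=n$; the truncation size $N$ and the value of $\rho$ are arbitrary choices):

```python
import numpy as np

hbar, L, rho, N = 1.0, np.pi, 2.0, 30
n = np.arange(1, N + 1)
pn = hbar * np.pi * n / L                   # = n in these units

D = n[:, None] - n[None, :]                 # n - n'
S = n[:, None] + n[None, :]                 # n + n'
odd = (S % 2 == 1)                          # only n + n' odd contributes
safe_D = np.where(D == 0, 1, D)             # avoid division by zero on diagonal
C = np.where(odd,
             np.exp(-(pn[:, None] - pn[None, :]) ** 2 / (4 * rho ** 2))
             * safe_D * (1.0 / safe_D ** 2 - 1.0 / S ** 2),
             0.0)
comm = (2 * hbar / np.pi) * C               # kappa = + block of [q, p]
```

One can then verify antisymmetry, \texttt{C == -C.T}, and that the eigenvalues of \texttt{comm} have vanishing real part, consistently with the purely imaginary spectra shown in the figures.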
\subsubsection{Evolution operator}
The Hamiltonian of a spinless particle trapped inside the well is simply $H = p^2/2m$. Its quantum counterpart
therefore is $\widehat{H} = \widehat{p^2}/2m$. The unitary evolution operator, as usual, is given by \begin{equation}
U(t)
= \rme^{-\frac{\rmi }{\hbar} \widehat{H}t}
= \rme^{-\rmi \omega_{\vartheta} t}\sum_{n=1}^\infty \rme^{- \frac{\rmi p_n^2 t}{2m\hbar}} \sigma_0\otimes | n\rangle\langle n| \, . \end{equation}
Note the appearance of the global time-dependent phase factor with frequency $\omega_{\vartheta}$ which can be compared with
the revival frequency
\begin{equation} \label{globfreq} \omega_{\vartheta} = \frac{\hbar \pi^2 \vartheta^2}{4mL^2} = \frac{\omega \vartheta^2}{2}\, . \end{equation}
\section{Quantum behaviour through lower symbols} Lower symbols are computed with normalized CS's. The latter are denoted as follows \begin{equation}
|x\rangle = |x, +\rangle + |x, -\rangle\, . \end{equation} Hence, the lower symbol of a quantum observable $A$ should be computed as
$$\check{A}(x) = \langle x| A | x\rangle \equiv \check{A}_{++}(x) + \check{A}_{+-}(x) + \check{A}_{-+}(x) + \check{A}_{--}(x)\, .$$ This gives the following results for the observables previously considered:
\subsubsection{Position}
The mean value of the position operator in a vector CS $|x\rangle$ is given by: \begin{equation}
\langle x| \widehat{q}\, | x\rangle = \frac{L}{2} - Q(q,p)\, , \end{equation} where we can distinguish the classical mean value for the position corrected by the function \begin{align} Q(q,p)&= \frac{2L}{\pi^2}\frac{1}{\mathcal{S}} \sum_{\substack{n,n'=1, n\neq n'\\ n+n' =2k+1}}^\infty
\rme^{-\frac{1}{4\rho^2}(p_n-p_{n'})^2}\left[\frac{1}{(n-n')^2}-\frac{1}{(n+n')^2}\right]\times\nonumber\\
& \times \Big[ \rme^{-\frac{1}{2\rho^2}[(p-p_n)^2 + (p-p_{n'})^2]} + \nonumber\\
& + \rme^{-\frac{1}{2\rho^2}[(p+p_n)^2 +
(p+p_{n'})^2]}\Big]\sin\Big( n\pi\frac{q}{L}\Big) \sin\Big( n'\pi\frac{q}{L}\Big)\, . \end{align} This function depends on the parameter $\vartheta$ as we show in figure \ref{VMQ} with a numerical approximation using finite matrices. As for $\widehat p$, we calculate the dispersion defined as \begin{equation} \Delta Q=\sqrt{\check{q^2}-\check{q}^2}. \end{equation} Its behaviour for different values of $\vartheta$ is shown in figure \ref{DQ}.
\subsubsection{Time evolution of position}
The change through time of the position operator is given by the transformation
$\widehat{q}\,(t) := U^\dag(t)\,\widehat{q}\, U(t)$, and differs
from $\widehat{q}$ by the insertion of an oscillating term in the series. Its lower symbol is given by \begin{equation}
\langle x| \widehat{q}\,(t) | x\rangle
= \frac{\displaystyle{L}}{2} - Q(q,p,t)\, , \end{equation} where this time the series have the form \begin{align} Q(q,p,t) &= \frac{2L}{\pi^2}\frac{1}{\mathcal{S}} \sum_{\substack{n,n'=1, n\neq n'\\ n+n' =2k+1}}^\infty
\rme^{- \frac{\rmi }{2m\hbar} (p_n^2-p_{n'}^2)\, t} \, \rme^{-\frac{1}{4\rho^2}(p_n-p_{n'})^2}\times\nonumber\\
&\times \left[\frac{1}{(n-n')^2}-\frac{1}{(n+n')^2}\right] \sin \left(n\pi\frac{q}{L}\right) \sin \left(n'\pi\frac{q}{L}\right)\times\nonumber\\
&\times \left[ \rme^{ -\frac{1}{2\rho^2}[(p-p_n)^2 + (p-p_{n'})^2]} + \rme^{-\frac{1}{2\rho^2}[(p+p_n)^2 +
(p+p_{n'})^2]}\right]\, . \end{align} Note that the time dependence manifests itself in the form of a Fourier series with frequencies $ (n^2-{n'}^2)\,\hbar \pi^2/2mL^2$. This corresponds to the circulation of the wave packet inside the well.
\subsubsection{Momentum}
The mean value of the momentum operator in a vector CS $|x\rangle$ is given by the affine combination: \begin{align} \label{mvmom}
\langle x| \widehat{p}\, | x\rangle & = \frac{\mathcal{ M} (x)}{\mathcal{ N} (x)}\, , \nonumber\\
\mathcal{ M} (x) & = c\,\sum_{n=1}^{\infty}p_n \, \left[ \rme^{-\frac{1}{\rho^2}(p - p_n)^2} - \rme^{-\frac{1}{\rho^2}(p + p_n)^2}\right] \sin^2\left( n\pi\frac{q}{L}\right)\, . \end{align} This function reproduces the profile of the function $p$, as can be seen in figure \ref{VMP}. We then calculate the dispersion $\Delta P$, defined as \begin{equation} \Delta P=\sqrt{\check{p^2}-\check{p}^2}, \end{equation}
using the mean values in a CS $|x\rangle$. Its behaviour as a function of $x$ is shown in figure \ref{DP}.
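The profile of this lower symbol is easy to reproduce numerically: by construction $\mathcal{M}$ is odd and $\mathcal{N}$ even in $p$, so $\check{p}(q,-p)=-\check{p}(q,p)$ and $\check{p}(q,0)=0$. A sketch in units $\hbar=1$, $L=\pi$ (the truncation order, $\rho$ and the sample points are arbitrary choices; the constant $c$ cancels in the ratio):

```python
import math

hbar, L, rho = 1.0, math.pi, 2.0
p_n = lambda n: hbar * math.pi * n / L        # = n in these units

def lower_symbol_p(q, p, M=60):
    # M(x)/N(x) of (mvmom), truncated at n = M; c drops out of the ratio
    num = den = 0.0
    for n in range(1, M + 1):
        g_minus = math.exp(-((p - p_n(n)) / rho) ** 2)
        g_plus = math.exp(-((p + p_n(n)) / rho) ** 2)
        s2 = math.sin(n * math.pi * q / L) ** 2
        num += p_n(n) * (g_minus - g_plus) * s2
        den += (g_minus + g_plus) * s2
    return num / den
```

Evaluating \texttt{lower\_symbol\_p} on a grid of $(q,p)$ points reproduces the odd, staircase-to-linear profiles shown in figure \ref{VMP}.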
\subsubsection{Position-momentum commutator} The mean value of the commutator in a normalized state $\Psi = \bigl( \begin{smallmatrix} \phi_+\\ \phi_- \end{smallmatrix} \bigr)$ is the pure imaginary expression: \begin{align} \label{meanvaluecom}
\nonumber \langle \Psi | [\, \widehat{q},\widehat{p}\, ] | \Psi &\rangle = \frac{2\rmi \hbar}{\pi} \sum_{n\neq n'\above0pt n+n'=2k+1}^{\infty} \rme^{-\frac{1}{4\rho^2}(p_n-p_{n'})^2}(n-n')\, \times \\
\times & \left[\frac{1}{(n-n')^2}-\frac{1}{(n+n')^2}\right]\Im{\left(\langle \phi_+|n\rangle \langle n'| \phi_+\rangle - \langle \phi_-|n\rangle \langle n'| \phi_-\rangle\right)}\, . \end{align} Given the symmetry and the real-valuedness of the states (\ref{vectcs}), the mean value of the commutator when $\Psi$ is one of our CS's vanishes, even if the operator does not. This result is due to the symmetric spectrum of the commutator around $0$. As shown in part c) of figures \ref{EV1} and \ref{EV2}, the eigenvalues of the commutator tend to $\pm \rmi\hbar$ as $\rho$, i.e. $\vartheta$, increases. Still, there are some points with modulus less than
$\hbar$. This leads to dispersion products $\Delta Q\Delta P$ in CS's $|x\rangle$ that are no longer bounded from below by $\hbar/2$. Actually, the lower bound of this product, over a region of phase space as large as we wish, decreases as $\vartheta$ diminishes. A numerical approximation is shown in figure \ref{DQDP}.
\section{Discussion}
From the mean values of the operators obtained here, we verify that our CS quantization gives well-behaved momentum and position operators. The classical limit is reached once the appropriate limit for the parameter $\vartheta$ is taken. If we consider the behaviour of the observables as functions of the dimensionless quantity $\vartheta = \rho L/\hbar\pi$, at the limit $\vartheta \to 0$, when the Gaussian functions for the momentum become very narrow, the lower symbol of
the position operator is $\check{q} \sim L/2$. This corresponds to the classical average position in the well.
On the other hand, at the limit
$\vartheta \to \infty$, for which the involved Gaussians spread to constant functions, the mean value
$\langle x|\hat{q}|x\rangle$ converges numerically to the function $q$. In other words, our position operator yields
a fair quantitative description for the quantum localization within the well. The lower
symbol $\langle x|\hat{p}|x\rangle$ behaves as a stair-step function for $\rho$ close to $0$ and progressively fits
the function $p$ when $\rho$ increases. These behaviours are well illustrated in the figures \ref{VMQ}
and \ref{VMP}. The effect of the parameter $\vartheta$ is also noticeable in the dispersions of $\widehat q$ and
$\widehat p$. Here, the variations of the full width at half maximum of the Gaussian function reveal different dispersions for the operators.
Clearly, if a classical behaviour is sought, the values of $\vartheta$ have to be chosen near $0$. This gives localized
values for the observables. The numerical explorations shown in figures \ref{DQ} and \ref{DP} give a good account
of this modulation.
Consistently with the previous results, the behaviour of the product $\Delta Q \Delta P$
at low values of $\vartheta$ shows uncorrelated observables at any point in the phase space, whereas
at large values of this parameter the product is constant and almost equal to the canonical quantum lowest limit
$\hbar/2$. This is shown in figure \ref{DQDP}.
It is interesting to note that if we replace the Gaussian distribution, used here for the $p$ variable in the construction of the CS's, by any positive even probability distribution $\R \ni p \mapsto \varpi (p)$ such that $\sum_n \varpi (p-n) < \infty$, the results are not so different! The momentum spectrum is still $\Z$ and the energy spectrum has the form $\{n^2 + \mathrm{constant}\}$. In this regard, an interesting approach combining mathematical statistics concepts and group theoretical constructions of CS's has been recently developed by Heller and Wang \cite{heller1,heller2}.
The work presented here has possible applications to those particular physical problems where the square well is used as a model for impenetrable barriers \cite{bryant}, in the spirit of what has been done in \cite{thilo}.
The generalization to higher-dimensional infinite potential wells is more or less tractable, depending on the geometry of the barriers. This includes quantum dots and other quantum traps. Nevertheless, we believe that the simplicity and the universality of the method proposed in the present work should reveal itself useful for this purpose.
Author Garc\'{\i}a de Le\'on wishes to acknowledge the Consejo Nacional de Ciencia y Tecnolog\'{\i}a (CONACyT) for its support.
\pagebreak
\appendix
\begin{figure}
\caption{Phase trajectory of the particle in the infinite square-well.}
\label{figure1}
\end{figure}
\begin{figure}
\caption{
Eigenvalues of $\widehat{q}$, $\widehat{p}$ and $[\, \widehat{q},\widehat{p}\, ]$ for increasing values of
the characteristic momentum $\rho =\hbar \pi \vartheta/L$ of the system, and computed for $N\times N$
approximation matrices. Units have been chosen such that $\hbar = 1$, $L=\pi$ so that $\rho=\vartheta$ and $p_n=n$.
Note that for $\widehat{q}$ with $\rho$ small, the eigenvalues adjust to the
classical mean value $L/2$. The spectrum of $\widehat{p}$ is independent of $\rho$ as is shown in (\ref{p}). For
the commutator, the values are purely imaginary. }
\label{EV1}
\end{figure}
\begin{figure}
\caption{
Continued from figure \ref{EV1}: $N\times N$ approximation matrices eigenvalues of $\widehat{q}$,
$\widehat{p}$ and $[\, \widehat{q},\widehat{p}\, ]$ for increasingly larger values of
$\rho = \hbar \pi \vartheta/L= \vartheta$ in units $\hbar = 1$, $L=\pi$. The spectrum of $\widehat{p}$
is independent of $\rho$ as is shown in (\ref{p}). For the commutator, the eigenvalues are
purely imaginary and tend to accumulate around $\rmi\hbar$ and $-\rmi\hbar$ as $\rho$ increases.}
\label{EV2}
\end{figure}
\begin{figure}
\caption{
The lower symbol $\check{q}$ depicted for various values of $\rho =\hbar \pi \vartheta/L= \vartheta$
in units $\hbar = 1$, $L=\pi$. Note the way the mean value fits the function $q$ when $\rho$ is large,
and approaches the classical average in the well for low values of the parameter.}
\label{VMQ}
\end{figure}
\begin{figure}
\caption{
The lower symbol $\check{p}$ depicted for various values of $\rho =\hbar \pi \vartheta/L =\vartheta$
in units $\hbar = 1$, $L=\pi$. The function becomes smoother when $\rho$ is large.}
\label{VMP}
\end{figure}
\begin{figure}
\caption{
Variance of $q$ depicted for various values of $\rho =\hbar \pi \vartheta/L =\vartheta$ in units
$\hbar = 1$, $L=\pi$.
Note how different dispersions are revealed just by changing
the width of the Gaussian function of the $p$ variable. Low dispersion, close to classical, is found
for $\vartheta$
near $0$ and the quantum behaviour is recovered at large values of the parameter.}
\label{DQ}
\end{figure}
\begin{figure}
\caption{
Variance of $p$ depicted for various values of $\rho =\hbar \pi \vartheta/L =\vartheta$ in units
$\hbar = 1$, $L=\pi$. Consistently with $\check q$, a well localized momentum is
found for low values of the parameter. This is actually expected since the Gaussian becomes very narrow. }
\label{DP}
\end{figure}
\begin{figure}
\caption{
Product $\Delta Q \Delta P$ for various values of $\rho =\hbar \pi \vartheta/L= \vartheta$ in units
$\hbar = 1$, $L=\pi$. Note the modification of the vertical scale
from one picture to another. Again, the position-momentum pair tends to decorrelate at low values of the
parameter, as it should in the classical limit. On the other hand, it approaches the usual quantum-conjugate
pair at high values of $\rho$.}
\label{DQDP}
\end{figure}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Adaptive lasso and Dantzig selector for spatial point processes intensity estimation}
\runtitle{Lasso and Dantzig selector for spatial point processes}
\begin{aug}
\author[A]{\fnms{Achmad} \snm {Choiruddin}\ead[label=e1]{choiruddin@its.ac.id}},
\author[B]{\fnms{Jean-Fran\c cois} \snm{Coeurjolly}\ead[label=e2,mark]{jean-francois.coeurjolly@univ-grenoble-alpes.fr}}
\and
\author[B]{\fnms{Frédérique} \snm{Letué}\ead[label=e3]{frederique.letue@univ-grenoble-alpes.fr}}
\address[A]{Department of Statistics, Institut Teknologi Sepuluh Nopember (ITS), Indonesia, \printead{e1}}
\address[B]{Laboratoire Jean Kuntzmann, Université Grenoble Alpes, France, \printead{e2} \printead{e3}}
\end{aug}
\begin{abstract}
Lasso and Dantzig selector are standard procedures able to perform variable selection and estimation simultaneously. This paper is concerned with extending these procedures to spatial point process intensity estimation. We propose adaptive versions of these procedures, develop efficient computational methodologies and derive asymptotic results for a large class of spatial point processes under an original setting where the number of parameters, i.e. the number of spatial covariates considered, increases with the expected number of data points. Both procedures are compared theoretically, in a simulation study, and in a real data example.
\end{abstract}
\begin{keyword}
\kwd{estimating equations}
\kwd{high-dimensional statistics}
\kwd{linear programming}
\kwd{regularization methods}
\kwd{spatial point pattern}
\end{keyword}
\end{frontmatter}
\section{Introduction} \label{sec:intro}
Spatial point processes are stochastic processes which model random locations of points in space, such as random locations of trees in a forest, locations of disease cases, earthquake occurrences and crime events \citep[e.g.][]{baddeley2015spatial,choiruddin2021quantifying,coeurjolly2019understanding,moller2003statistical}. To understand the arrangement of points, the intensity function is the standard summary function \cite{baddeley2015spatial,coeurjolly2019understanding}. When one seeks to describe the probability of observing a point at location $u\in \mathbb R^d$ in terms of covariates, the most popular model for the intensity function, $\rho$, is
\begin{align}
\label{eq:int}
\rho (u;\boldsymbol{\beta})=\exp\{\boldsymbol{\beta}^\top \mathbf{z}(u)\}, \quad u \in D \subset \mathbb{R}^d,
\end{align}
where, for $p\ge 1$, $\mathbf{z}(u)=\{ z_1(u),\ldots,z_p(u)\}^\top$ represents the vector of spatial covariates measured at location $u$ and $\boldsymbol{\beta}=\{\beta_1,\ldots,\beta_p\}^\top$ is a real $p$-dimensional parameter.
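As a concrete illustration of model~\eqref{eq:int}, the following minimal Python sketch evaluates a log-linear intensity on the unit square; the covariates and parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Minimal sketch of the log-linear intensity rho(u; beta) = exp(beta^T z(u))
# on D = [0,1]^2 with p = 3 hypothetical covariates: an intercept and the
# two coordinates of u. Parameter values are chosen only for illustration.
def z(u):
    """Covariate vector z(u) = (1, u_1, u_2)^T at location u."""
    return np.array([1.0, u[0], u[1]])

def rho(u, beta):
    """Log-linear intensity rho(u; beta) = exp(beta^T z(u))."""
    return np.exp(beta @ z(u))

beta = np.array([4.0, 1.0, -0.5])
print(rho((0.0, 0.0), beta))   # exp(4): the intensity at the origin
```

The exponential link guarantees a positive intensity whatever the sign of the covariate effects.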
The score of the Poisson likelihood, i.e. the likelihood obtained when the underlying process is assumed to be Poisson, remains an unbiased estimating function for $\boldsymbol{\beta}$ even if the point pattern does not arise from a Poisson process \citep{waagepetersen2007estimating}. Such a method is well-studied in the literature and has been extended in several ways to gain efficiency \citep[e.g.][]{guan2015quasi,guan2010weighted} when the number of covariates is moderate. Standard results cover the consistency and asymptotic normality of the maximum Poisson likelihood estimator under the increasing domain asymptotic framework (see e.g.~\cite{guan2015quasi} and references therein).
When a large number of covariates is available, variable selection is unavoidable. Performing estimation and selection for spatial point process intensity models has received a lot of attention. Recent developments consider techniques centered around regularization methods \citep[e.g.][]{choiruddin2018convex,daniel2018penalized,rakshit2021variable,yue2015variable} such as the lasso. In particular, \cite{choiruddin2018convex} consider several composite likelihoods penalized by a large class of convex and non-convex penalty functions and obtain asymptotic results under the increasing domain asymptotic framework.
The Dantzig selector is an alternative procedure to regularization techniques. It was initially proposed for linear models by~\cite{candes2007dantzig} and subsequently extended to more complex models \cite[e.g.][]{antoniadis2010dantzig,dicker2010regularized,james2009generalized}. In particular, \cite{dicker2010regularized} generalizes this approach to general estimating equations.
One of the main advantages of the Dantzig selector is its implementation which, for linear models, reduces to a linear program. Since then, the Dantzig selector and lasso procedures have been compared in different contexts \citep[e.g.][]{bickel2009simultaneous,james2009dasso}.
In this paper, we compare the lasso and the Dantzig selector when they are applied to intensity estimation for spatial point processes. We compare these procedures in a complex asymptotic framework where the number of informative covariates, say $s_n$, and the number of non-informative covariates, say $p_n-s_n$, may increase with the mean number of points. Our asymptotic results are developed under a setting which embraces both the increasing domain and infill asymptotics often considered in the literature (see also Section~\ref{sec:background}). Such a setting is almost never considered in the spatial point processes literature (see again e.g.~\cite{guan2015quasi}).
It is well-known that the Poisson likelihood can be approximated by a quasi-Poisson regression model, see e.g.~\cite{baddeley2015spatial}. For the adaptive lasso procedure, our theoretical contributions can also be seen as an extension of works such as \cite{fan2004nonconcave}, which provides asymptotic results for estimators from regularized generalized linear models. However, in our spatial framework, the standard sample size must be replaced by the mean number of points. Furthermore, observations are no longer independent, i.e. our results are valid for a large class of dependent spatial point processes. Note also that \cite{fan2004nonconcave} assumes $s_n=s$.
The theoretical contributions of the present paper are two-fold. First, for the adaptive lasso procedure, we extend the work by~\cite{choiruddin2018convex}, which considers only an increasing domain asymptotic and assumes $s_n=s$ and $p_n=p$. Second, for the Dantzig selector, we extend the standard methodology to spatial point processes, propose an adaptive version and derive theoretical results.
This yields different computational and theoretical issues. As revealed by our main result, Theorem~\ref{thm:main}, the adaptive lasso and Dantzig selector procedures share several similarities but also some slight differences.
We first prove that both procedures satisfy an oracle property, i.e. the methods correctly select the nonzero coefficients with probability converging to one, and second that the estimators of the nonzero coefficients are asymptotically normal. However, the conditions under which the results are valid for the Dantzig selector are slightly more restrictive.
Our conducted simulation study and application to environmental data also demonstrate that both procedures behave similarly.
\section{Background and framework} \label{sec:background}
Let $\mathbf{X}$ be a spatial point process on $\mathbb{R}^d$, $d\ge 1$. We view $\mathbf{X}$ as a locally finite random subset of $\mathbb{R}^d$. Let $D \subset \mathbb{R}^d$ be a compact set of Lebesgue measure $|D|$ which will play the role of the observation domain. A realization of $\mathbf{X}$ in $D$ is thus a set $\mathbf{x}=\{x_1, \ldots, x_m\}$, where $x_i \in D$ and $m$ is the observed number of points in $D$. Suppose $\mathbf{X}$ has intensity function $\rho$ and second-order product density $\rho^{(2)}$. Campbell theorem states that, for any function $k: \mathbb{R}^d \to [0,\infty)$ or $k: \mathbb{R}^d \times \mathbb{R}^d \to [0,\infty)$,
\begin{align}
\label{eq:campbell}
\mathbb{E} \sum_{u \in \mathbf{X}} k(u) ={\int_{\mathbb{R}^d} k(u) \rho (u)\mathrm{d}u}, \quad
\mathbb{E} \sum_{u,v \in \mathbf{X}}^{\neq} k(u,v)=\int_{\mathbb{R}^d \times \mathbb{R}^d} k(u,v) \rho^{(2)} (u,v)\mathrm{d}u \mathrm{d}v.
\end{align}
Based on the first two intensity functions, the pair correlation function $g$ is defined by
\begin{align*}
g(u,v)=\frac{\rho^{(2)}(u,v)}{\rho(u)\rho(v)}, \quad u,v \in D,
\end{align*}
when both $\rho$ and $\rho^{(2)}$ exist with the convention $0/0=0$. The pair correlation function is a measure of departure of the model from the Poisson point process for which $g=1$. For further background materials on spatial point processes, see for example \cite{baddeley2015spatial,moller2003statistical}.
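The first Campbell formula in~\eqref{eq:campbell} can be checked by Monte Carlo in the simplest case of a homogeneous Poisson process; the intensity value and test function below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of the first Campbell formula for a homogeneous Poisson
# process on D = [0,1]^2 with intensity rho = 100 and the test function
# k(u) = u_1 u_2, so that int_D k(u) rho du = 100 * (1/2) * (1/2) = 25.
# All numbers are illustrative.
rng = np.random.default_rng(0)
intensity, n_rep = 100.0, 2000

def pattern_sum(rng):
    """Sample one pattern and return the sum of k over its points."""
    m = rng.poisson(intensity)            # number of points in D
    pts = rng.random((m, 2))              # uniform locations given m
    return np.sum(pts[:, 0] * pts[:, 1])

estimate = np.mean([pattern_sum(rng) for _ in range(n_rep)])
print(estimate)   # close to the theoretical value 25
```

For a non-Poisson process the same left-hand side still matches $\int k\rho$, which is what makes the estimating equations below unbiased.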
For our asymptotic considerations, we assume that a sequence $(\mathbf{X}_n)_{n\ge 1}$ is observed within a sequence of bounded domains $(D_n)_{n\ge 1}$. We denote by $\rho_n$ and $g_n$ the intensity and pair correlation of $\mathbf{X}_n$. With an abuse of notation, we denote by $\mathbb E$ and $\mathrm{Var}$ the expectation and variance under $\mathbf{X}_n$. We assume that the intensity $\rho_n$ writes $\rho_n(u) = \exp \{ \boldsymbol{\beta}_0^\top \mathbf{z}(u)\}$, $u\in D_n$. We thus let $\boldsymbol{\beta}_0$ denote the true parameter vector and assume it can be decomposed as $\boldsymbol{\beta}_0 = \{\beta_{01},\ldots,\beta_{0s_n},\beta_{0(s_n+1)},\ldots,\beta_{0p_n}\}^\top=(\boldsymbol{\beta}^{\top}_{01},\boldsymbol{\beta}^{\top}_{02})^\top = (\boldsymbol{\beta}_{01}^\top, \mathbf 0^\top)^\top$. Therefore, $\boldsymbol{\beta}_{01} \in \mathbb R^{s_n}$, $\boldsymbol{\beta}_{02}=\mathbf 0 \in \mathbb R^{p_n-s_n}$ and $\boldsymbol{\beta}_0 \in \mathbb R^{p_n}$, where $s_n$ is the number of non-zero coefficients, $p_n-s_n$ the number of zero coefficients and $p_n$ the total number of parameters. We underline that it is unknown to us which coefficients are non-zero and
which are zero. Thus, we consider a sparse intensity model where in particular $s_n$ and $p_n$ may diverge to infinity as $n$ grows.
For any $\boldsymbol{\beta}\in \mathbb R^{p_n}$ or for the spatial covariates $\mathbf{z}(u)$, we use a similar notation, i.e. $\boldsymbol{\beta}=(\boldsymbol{\beta}_1^\top,\boldsymbol{\beta}_2^\top)^\top$ and $\mathbf{z}(u)=\{\mathbf{z}_1(u)^\top,\mathbf{z}_2(u)^\top\}^\top$, $u\in D_n$. We let $\mu_n = \mathbb E\{N(D_n)\}$, that is the expected number of points in $D_n$. By Campbell theorem, we have
\[
\mu_n = \int_{D_n} \rho_n(u;\boldsymbol{\beta}_0) \mathrm{d} u = \int_{D_n} \exp \{ \boldsymbol{\beta}_0^\top \mathbf{z}(u)\} \mathrm{d} u
=\int_{D_n} \exp \{ \boldsymbol{\beta}_{01}^\top \mathbf{z}_1(u)\} \mathrm{d} u.
\]
Note that $\mu_n$ is a function of $D_n, \boldsymbol{\beta}_{01}, \mathbf{z}_1(u), s_n$. In this paper, we assume that $\mu_n\to \infty$ as $n\to \infty$. That kind of assumption is very general and embraces the well-known frameworks called increasing domain asymptotics and infill asymptotics. For the increasing domain context, $D_n \to \mathbb R^d$ and usually $\boldsymbol{\beta}_{01}$ depends on $n$ only through $s_n$. For the infill asymptotics, $D_n=D$ is assumed to be a bounded domain of $\mathbb R^d$ and usually $z_1(u)=1$, $\boldsymbol{\beta}_{01}=\theta_n \to \infty$ as $n\to \infty$. In some sense, the parameter $\mu_n$ plays the role of the sample size in standard inference.
To reduce notation in the following, unless it is ambiguous, we do not index $\mathbf{X}$, $\rho$, $g$, $\boldsymbol{\beta}_0$, $\boldsymbol{\beta}$, $\mathbf{z}(u)$ with $n$.
\section{Methodologies} \label{sec:method}
\subsection{Standard methodology}
If $\mathbf{X}$ is a Poisson point process, then, on $D_n$, $\mathbf{X}$ admits a density with respect to the unit rate Poisson point process \citep{moller2003statistical}. This yields the log-likelihood function for $\boldsymbol{\beta}$, which, for the intensity model~\eqref{eq:int}, is proportional to
\begin{equation}\label{eq:likepois}
\ell_n(\boldsymbol{\beta})
= \sum_{u \in \mathbf{X} \cap D_n} \boldsymbol{\beta}^\top \mathbf{z}(u) - \int_{D_n} \rho(u; \boldsymbol{\beta})\mathrm{d}u.
\end{equation}
The gradient of~\eqref{eq:likepois} writes
\begin{equation}
\label{eq:Un}
\mathbf{U}_n (\boldsymbol{\beta})= \frac{\mathrm d}{\mathrm d \boldsymbol{\beta}} \ell_n(\boldsymbol{\beta}) = {\sum_{u \in \mathbf{X} \cap D_n}\mathbf{z}(u)} - {\int_{D_n} \mathbf{z}(u) \rho(u; \boldsymbol{\beta})\mathrm{d}u}.
\end{equation}
If $\mathbf{X}$ is not a Poisson point process, Campbell theorem shows that \eqref{eq:Un} remains an unbiased estimating equation. Hence, the maximum of~\eqref{eq:likepois} still makes sense for non-Poisson models. Such an estimator, which can be viewed as a maximum composite likelihood estimator, has received a lot of attention in the literature and asymptotic properties are well-established when $p_n=p$ and $p$ is moderate \citep[e.g.][]{guan2015quasi,guan2010weighted,waagepetersen2007estimating}.
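A minimal numerical sketch of~\eqref{eq:likepois} and~\eqref{eq:Un}: the integral over $D_n$ is approximated by a plain equal-weight grid quadrature (toy covariates and point pattern; production implementations such as the \texttt{R} package \texttt{spatstat} use more careful quadrature schemes).

```python
import numpy as np

# Sketch of the Poisson log-likelihood ell_n and its score U_n on
# D = [0,1]^2, with the integral over D_n approximated by an equal-weight
# grid quadrature. Covariates and point pattern are hypothetical.
def covariates(pts):
    """Stack z(u) = (1, u_1)^T row-wise for an array of locations."""
    return np.column_stack([np.ones(len(pts)), pts[:, 0]])

g = np.linspace(0.0, 1.0, 50)
quad_pts = np.array([(x, y) for x in g for y in g])   # quadrature nodes
w = 1.0 / len(quad_pts)                               # |D| / #nodes, |D| = 1

def loglik(beta, pts):
    """Poisson log-likelihood: sum over points minus integral of rho."""
    Zx, Zq = covariates(pts), covariates(quad_pts)
    return np.sum(Zx @ beta) - w * np.sum(np.exp(Zq @ beta))

def score(beta, pts):
    """Score U_n: sum of z(u) over points minus integral of z * rho."""
    Zx, Zq = covariates(pts), covariates(quad_pts)
    return Zx.sum(axis=0) - w * (np.exp(Zq @ beta) @ Zq)

pts = np.array([[0.2, 0.3], [0.7, 0.9]])              # a toy pattern
print(loglik(np.zeros(2), pts), score(np.zeros(2), pts))
```

Solving $\mathbf U_n(\boldsymbol{\beta})=0$ with such a discretized score is essentially the maximum composite likelihood estimator discussed above.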
We end this section with the definition of the following two $p_n\times p_n$ matrices:
\begin{align}
\mathbf{A}_n(\boldsymbol{\beta})&={\int_{D_n} \mathbf{z}(u)\mathbf{z}(u)^\top \rho(u;\boldsymbol{\beta})\mathrm{d}u} \label{eq:An} \\
\mathbf{B}_n(\boldsymbol{\beta})&=\mathbf{A}_n(\boldsymbol{\beta}) + {\int_{D_n} \int_{D_n} \mathbf{z}(u)\mathbf{z}(v)^\top \{g(u,v)-1\} \rho(u;\boldsymbol{\beta}) \rho(v;\boldsymbol{\beta}) \mathrm{d}u \mathrm{d}v}. \label{eq:Bn}
\end{align}
The matrix $\mathbf{A}_n(\boldsymbol{\beta})$ corresponds to the sensitivity matrix defined by \linebreak$\mathbf{A}_n(\boldsymbol{\beta})=-\mathbb E \{\mathrm d \mathbf U_n(\boldsymbol{\beta}) / \mathrm d \boldsymbol{\beta}^\top \}$ while $\mathbf{B}_n(\boldsymbol{\beta})$ corresponds to the variance of the estimating equation, i.e. $\mathbf{B}_n(\boldsymbol{\beta})= \mathrm{Var}\{ \mathbf U_n(\boldsymbol{\beta}) \}$. In passing, we point out that $\mathbf{A}_n(\boldsymbol{\beta}) = -\mathrm d \mathbf U_n(\boldsymbol{\beta}) / \mathrm d \boldsymbol{\beta}^\top $, that is, $\mathbf{A}_n(\boldsymbol{\beta})$ is deterministic.
Let $\mathbf M_n$ be some $p_n\times p_n$ matrix, e.g. $\mathbf{A}_n(\boldsymbol{\beta})$ or $\mathbf{B}_n(\boldsymbol{\beta})$. Such a matrix is decomposed as
\begin{align}
\label{partition}
\mathbf{M}_n=
\begin{bmatrix}
\mathbf{M}_{n,1} \\
\mathbf{M}_{n,2}
\end{bmatrix}
=
\begin{bmatrix}
\mathbf{M}_{n,11} & \mathbf{M}_{n,12}\\
\mathbf{M}_{n,21} & \mathbf{M}_{n,22}
\end{bmatrix},
\end{align}
where $\mathbf{M}_{n,1}$ (resp. $\mathbf{M}_{n,2}$) consists of the first $s_n$ rows (resp. the last $p_n-s_n$ rows) of $\mathbf{M}_n$ and $ \mathbf{M}_{n,11}$ (resp. $ \mathbf{M}_{n,12}$, $ \mathbf{M}_{n,21}$, and $\mathbf{M}_{n,22}$) is the $s_n \times s_n$ top-left corner (resp. the $s_n \times (p_n-s_n)$ top-right corner, the $(p_n-s_n) \times s_n$ bottom-left corner, and the $(p_n-s_n) \times (p_n-s_n)$ bottom-right corner) of $\mathbf{M}_n$. In what follows, for a square symmetric matrix $\mathbf{M}_n$, $\nu_{\min}(\mathbf M_n)$ and $\nu_{\max}(\mathbf M_n)$ denote respectively the smallest and largest eigenvalues of $\mathbf M_n$. Finally, $\|\mathbf y \|$ denotes the Euclidean norm of a vector $\mathbf y$, while $\|\mathbf M_n\| = \sup_{\|\mathbf y\|\neq 0} \|\mathbf M_n \mathbf y\|/\|\mathbf y\|$ denotes the spectral norm. We recall that the spectral norm is subordinate and that, for a symmetric positive definite matrix, $\|\mathbf M_n\|=\nu_{\max}(\mathbf M_n)$.
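The block partition~\eqref{partition} and the identity $\|\mathbf M_n\|=\nu_{\max}(\mathbf M_n)$ for symmetric positive definite matrices can be illustrated numerically (a random-matrix sketch, with $p_n=4$ and $s_n=2$ chosen arbitrarily).

```python
import numpy as np

# Illustration of the block partition of M_n with p_n = 4, s_n = 2, and of
# the identity ||M|| = nu_max(M) for a symmetric positive definite matrix.
# The matrix below is random and purely illustrative.
rng = np.random.default_rng(1)
p_n, s_n = 4, 2
R = rng.random((p_n, p_n))
M = R @ R.T + p_n * np.eye(p_n)          # symmetric positive definite

M11, M12 = M[:s_n, :s_n], M[:s_n, s_n:]  # top-left, top-right corners
M21, M22 = M[s_n:, :s_n], M[s_n:, s_n:]  # bottom-left, bottom-right corners

spec_norm = np.linalg.norm(M, 2)         # subordinate spectral norm
nu_max = np.max(np.linalg.eigvalsh(M))   # largest eigenvalue
print(np.isclose(spec_norm, nu_max))
```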
\subsection{Adaptive lasso (AL)} \label{sec:al}
When the number of parameters is large, regularization methods allow one to perform both estimation and variable selection simultaneously. When $p_n=p$, \cite{choiruddin2018convex} consider several regularization procedures which consist in adding a convex or non-convex penalty term to~\eqref{eq:likepois}. The proposed methods are unchanged even when the number of covariates diverges. In particular, the adaptive lasso consists in maximizing
\begin{align} \label{regmed}
Q_n( \boldsymbol{\beta})= \frac{1}{\mu_n}\ell_n( \boldsymbol{\beta}) - {\sum_{j=1}^{p_n} \lambda_{n,j}|\beta_{j}|},
\end{align}
where the real numbers $\lambda_{n,j}$ are non-negative tuning parameters. We therefore define the adaptive lasso estimator as
\begin{align}
\hat \boldsymbol{\beta}_{\mathrm{AL}}= \arg\max_{\boldsymbol{\beta} \in \mathbb{R}^{p_n}} Q_n( \boldsymbol{\beta}).
\end{align}
When $\lambda_{n,j}=0$ for $j=1,\dots,p_n$, the method reduces to the maximum composite likelihood estimator, and when $\lambda_{n,j}=\lambda_n$, to the standard lasso estimator. If $\beta_1$ acts as an intercept, meaning that $z_1(u)=1$ for all $u \in D_n$, it is often desired to leave this parameter unpenalized. This can be done by setting $\lambda_{n,1}=0$ in the second term of \eqref{regmed}. Finally, the choice of $\mu_n$ as a normalization factor in~\eqref{regmed}
follows the implementation of the adaptive lasso procedure for generalized linear models in the standard software (e.g. \texttt{R} package \texttt{glmnet}~\cite{friedman2010regularization}).
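The maximization of~\eqref{regmed} can be sketched with a proximal-gradient (ISTA) loop: a gradient step on the smooth fit term followed by coordinatewise soft-thresholding with the weights $\lambda_{n,j}$. The sketch below uses a least-squares surrogate on synthetic data rather than the composite likelihood, so it only illustrates the mechanics of the adaptive weighting; \texttt{glmnet}-style coordinate descent is the reference implementation.

```python
import numpy as np

# Proximal-gradient (ISTA) sketch of the adaptive lasso: a gradient step on
# a smooth fit term, then coordinatewise soft-thresholding with weights
# lambda_j. The fit term is a least-squares surrogate on synthetic data,
# not the composite likelihood; all numbers are hypothetical.
def soft(x, t):
    """Soft-thresholding, the proximal map of the weighted l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
n, p = 200, 5
Z = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])  # sparse truth
y = Z @ beta_true + 0.1 * rng.standard_normal(n)

lam = np.array([0.005, 0.005, 0.05, 0.05, 0.05])  # adaptive weights
step = n / np.linalg.norm(Z, 2) ** 2              # 1 / Lipschitz constant
beta = np.zeros(p)
for _ in range(500):
    grad = -Z.T @ (y - Z @ beta) / n              # gradient of the fit term
    beta = soft(beta - step * grad, step * lam)
print(beta.round(3))   # the truly zero coefficients shrink to exactly 0 here
```

Small weights on the informative coordinates and large weights on the others is exactly the regime that conditions on $a_n$ and $b_n$ formalize in Section~\ref{sec:results}.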
\subsection{Adaptive (linearized) Dantzig selector (ALDS)} \label{sec:alds}
When applied to a likelihood \citep{candes2007dantzig,james2009generalized}, the Dantzig selector estimate is obtained by minimizing $\|\boldsymbol{\beta}\|_1$ subject to the infinity norm of the score function being bounded by some threshold parameter $\lambda$. In the spatial point process setting, we propose an adaptive version of the Dantzig selector estimate as the solution of the problem
{\begin{align}
\label{MADSvec}
\min \|{\boldsymbol{\Lambda}}_n \boldsymbol{\beta} \|_1 \mbox{ subject to } |(\mathbf U_n(\boldsymbol{\beta}))_j| \le \lambda_{n,j} \quad \text{ for } j=1,\dots,p_n,
\end{align}
where {${\boldsymbol{\Lambda}}_n=\mathrm{diag}(\lambda_{n,1},\cdots,\lambda_{n,p_n} )$} and where $\mathbf U_n(\boldsymbol{\beta})$ is the estimating equation given by~\eqref{eq:Un}. It is worth pointing out that setting $\lambda_{n,j}=0$ for $j=1,\dots,p_n$ reduces the criterion \eqref{MADSvec} to $\mathbf U_n(\boldsymbol{\beta})=0$, which leads to the maximum composite likelihood estimator. Similarly to the adaptive lasso procedure, the intercept can be left unpenalized by setting $\lambda_{n,1}=0$. However, in the following, we assume for the ALDS procedure that $\lambda_{n,j}>0$ for all $j$; this is convenient as it allows us to rewrite~\eqref{MADSvec} in the following matrix form
}
\begin{align}
\label{MADS}
\min \|{\boldsymbol{\Lambda}}_n \boldsymbol{\beta} \|_1 \mbox{ subject to } (\mu_n)^{-1} \Big \|{\boldsymbol{\Lambda}}_n^{-1} \mathbf{U}_n( {\boldsymbol{\beta}}) \Big\|_\infty \leq 1.
\end{align}
{We claim that the whole methodology and the proofs could be redone without involving the notation ${\boldsymbol{\Lambda}}_n^{-1}$, so that Theorem~\ref{thm:main} remains valid if, for example, one does not regularize the intercept term.}
Due to the nonlinearity of the constraint vector, standard linear programming can no longer be used to solve \eqref{MADS}. In particular, the feasible set $\{\boldsymbol{\beta} : \| {\boldsymbol{\Lambda}}_n^{-1} \mathbf U_n(\boldsymbol{\beta})\|_\infty \leq 1\}$ is non-convex, which makes the method difficult to implement and to analyze from a theoretical point of view. In the context of generalized linear models, \cite{james2009generalized} consider the iterative reweighted least squares method and define an iterative procedure where, at each step of the algorithm, the constraint vector corresponds to a linearization of the updated pseudo-score. Such a procedure is not straightforward to extend from~\eqref{MADS} and remains complex to analyze from a theoretical point of view. As an alternative, we follow~\cite[][Chapter 3]{dicker2010regularized} and propose to linearize the constraint vector by expanding $\mathbf U_n(\boldsymbol{\beta})$ around $\tilde \boldsymbol{\beta}$, an initial estimate of $\boldsymbol{\beta}_0$, using a first-order Taylor approximation; i.e. we substitute $\mathbf U_n(\boldsymbol{\beta})$ by $\mathbf U_n(\tilde \boldsymbol{\beta}) + \mathbf A_n(\tilde \boldsymbol{\beta}) (\tilde \boldsymbol{\beta} - \boldsymbol{\beta})$. Such a linearization now enables the use of standard linear programming.
We call the solution to the following optimization problem the adaptive linearized Dantzig selector (ALDS) estimate and denote it by $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$:
\begin{align}
\label{ADS2}
\min \|{\boldsymbol{\Lambda}}_n \boldsymbol{\beta} \|_1 \mbox{ subject to } (\mu_n)^{-1}\; \Big \|{\boldsymbol{\Lambda}}_n^{-1} \big \{\mathbf{U}_n( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} - {\boldsymbol{\beta}}) \big \} \Big\|_\infty \leq 1.
\end{align}
Properties of $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ depend on properties of $\tilde \boldsymbol{\beta}$ which are made precise in the next section.
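With the linearized constraint, \eqref{ADS2} is indeed a linear program: writing $\boldsymbol{\beta}=\boldsymbol{\beta}^+-\boldsymbol{\beta}^-$ with nonnegative parts, one minimizes $\sum_j \lambda_{n,j}(\beta_j^++\beta_j^-)$ under the $2p_n$ linear inequalities encoding the $\ell_\infty$ constraint. A toy sketch (with $\mathbf A_n=\mathbf I$, $\mathbf U_n(\tilde\boldsymbol{\beta})=0$ and made-up numbers, so that the ALDS solution reduces to a coordinatewise soft-thresholding of $\tilde\boldsymbol{\beta}$):

```python
import numpy as np
from scipy.optimize import linprog

# LP formulation of the linearized Dantzig selector: with beta = bp - bm,
# bp, bm >= 0, minimize sum_j lam_j (bp_j + bm_j) subject to
# |U(beta_tilde) + A (beta_tilde - beta)|_j <= mu * lam_j.
# Toy numbers with A = I and U(beta_tilde) = 0, so the solution is a
# coordinatewise soft-thresholding of beta_tilde (all values hypothetical).
lam = np.array([0.2, 0.2, 0.2])
beta_tilde = np.array([1.0, 0.5, 0.1])
A, U, mu = np.eye(3), np.zeros(3), 1.0

c_vec = U + A @ beta_tilde               # constant part of the constraint
obj = np.concatenate([lam, lam])         # objective on (bp, bm)
A_ub = np.block([[-A, A], [A, -A]])      # encodes |c_vec - A beta| <= mu*lam
b_ub = np.concatenate([mu * lam - c_vec, mu * lam + c_vec])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
beta_hat = res.x[:3] - res.x[3:]
print(beta_hat.round(3))                 # soft-threshold of beta_tilde
```

Since both halves of each pair are penalized, the optimum never sets $\beta_j^+$ and $\beta_j^-$ simultaneously positive, so the decomposition is unambiguous.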
\section{Asymptotic results} \label{sec:results}
Our main result relies upon the following conditions:
\begin{enumerate}[($\mathcal C$.1)]
\item The intensity function has the log-linear specification given by~\eqref{eq:int} where $\boldsymbol{\beta} \in \mathbb R^{p_n}$. \label{C:intensity}
\item $(\mu_n)_{n\ge 1}$ is an increasing sequence of real numbers, such that $\mu_n\to \infty$ as ${n} \to \infty$. \label{C:nun}
\item The covariates $\mathbf z$ satisfy
\[
\sup_{n\geq 1} \; \sup_{i=1,\dots,p_n} \; \sup_{u \in \mathbb{R}^d} |z_i(u)| < \infty
\qquad \mbox{ and } \qquad
\inf_{n\ge 1} \inf_{\boldsymbol \phi\in \mathbb R^{p_n}, \|\boldsymbol \phi\|=1} \inf_{u\in D_n} \{\boldsymbol \phi^\top \mathbf{z}(u)\}^2 >0.
\] \label{C:cov}
\item The intensity and pair correlation satisfy
\[
\int_{D_n}\int_{D_n} \rho(u;\boldsymbol{\beta}_0)\rho(v;\boldsymbol{\beta}_0)|g(u,v)-1| \mathrm{d} u\mathrm{d} v = O(\mu_n).
\] \label{C:g}
\item The matrix $\mathbf{B}_{n,11}(\boldsymbol{\beta}_0)$ satisfies
\[
\liminf_{n} \inf_{\boldsymbol \phi \in \mathbb R^{s_n}, \|\boldsymbol \phi\|=1} \boldsymbol \phi^\top
\big\{(\mu_n)^{-1}\mathbf{B}_{n,11}(\boldsymbol{\beta}_0) \big\}\boldsymbol \phi>0.
\] \label{C:Bn}
\item For any $\boldsymbol \phi \in \mathbb R^{s_n}\setminus \{0\}$, the following convergence holds in distribution as $n\to \infty$:
\[
\sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top \mathbf U_{n,1}(\boldsymbol{\beta}_0) \stackrel{d}{\to} N(0,1)
\]
where $\sigma^2_{\boldsymbol \phi} = \boldsymbol \phi^\top \mathbf{B}_{n,11}(\boldsymbol{\beta}_0) \boldsymbol \phi$. \label{C:clt}
\item The initial estimate $\tilde {\boldsymbol{\beta}}$ satisfies $\|\tilde {\boldsymbol{\beta}} - {\boldsymbol{\beta}_0} \|=O_{\mathrm{P}}(\sqrt{p_n/{\mu_n}})$ and is such that $\|\mathbf A_{n,11}(\tilde \boldsymbol{\beta})^{-1}\|= O_\mathrm{P}(\mu_n^{-1})$.
\label{C:initial}
\item The sequences $s_n$, $p_n$ and $\mu_n$ are such that, as $n\to \infty$,
\[\left\{
\begin{array}{ll}
\max \left(\frac{p_n^4}{\mu_n} , \frac{s_n^2 p_n^3}{\mu_n}\right) \to 0& \quad \text{ for the AL estimate} \\
\frac{s_n^3 p_n^4}{\mu_n} \to 0&\quad \text{ for the ALDS estimate.}
\end{array}\right.
\] \label{C:snpn}
\item Let $a_n=\max_{j=1,\ldots,{s_n}} \lambda_{n,j}$ and $b_n=\min_{j={s_n}+1,\ldots,p_n} \lambda_{n,j}$. We assume that these sequences are such that, as $n \to \infty$
\[\left\{
\begin{array}{lll}
a_n \sqrt{s_n \mu_n }\to 0, & b_n \sqrt{\frac{\mu_n}{p_n^2}} \to \infty &\quad \text{ for the AL estimate} \\
a_n \sqrt{s_n^3 \mu_n }\to 0, & b_n \sqrt{\frac{\mu_n}{p_n^3}} \to \infty &\quad \text{ for the ALDS estimate.}
\end{array}\right.
\]
\label{C:anbn}
\end{enumerate}
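To make the interplay in condition ($\mathcal C$.\ref{C:snpn}) concrete, the following numerical sketch tracks the two ratios in the worst case $s_n=p_n$ under two illustrative scalings of $p_n$ with $\mu_n$ (the exponents $1/6$ and $1/8$ are our illustrative choices, not taken from the paper): $p_n=\mu_n^{1/6}$ satisfies the AL requirement but not the ALDS one, while $p_n=\mu_n^{1/8}$ satisfies both.

```python
import numpy as np

# Numerical look at condition (C.8) in the worst case s_n = p_n. The
# scalings p_n = mu_n^(1/6) and p_n = mu_n^(1/8) are illustrative only.
mu = 10.0 ** np.arange(3, 16, 3)        # mu_n = 1e3, 1e6, ..., 1e15

def ratios(alpha):
    """AL ratio max(p^4/mu, s^2 p^3/mu) and ALDS ratio s^3 p^4/mu."""
    p = mu ** alpha
    s = p
    al = np.maximum(p**4 / mu, s**2 * p**3 / mu)
    alds = s**3 * p**4 / mu
    return al, alds

al6, alds6 = ratios(1 / 6)              # AL ratio -> 0, ALDS ratio grows
al8, alds8 = ratios(1 / 8)              # both ratios -> 0
print(al6[-1], alds6[-1], alds8[-1])
```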
Condition~\cond{C:intensity} specifies the form of the intensity models considered in this paper. In particular, note that we do not assume that $\boldsymbol{\beta}$ is an element of a bounded domain of $\mathbb R^{p_n}$. Condition~\cond{C:nun} specifies our asymptotic framework, where we assume to observe on average more and more points in $D_n$. As already mentioned, this may cover increasing domain type or infill type asymptotics. To our knowledge, only \cite{choiruddin2021information} consider a similar asymptotic framework in order to construct information criteria for spatial point process intensity estimation. The context is however very different here, as we consider a large number of covariates and we study methodologies (adaptive lasso or Dantzig selector) which are able to produce a sparse estimate.
Condition~\cond{C:cov} is quite standard and is not too restrictive. Note that conditions~\cond{C:intensity}-\cond{C:cov} allow us in Lemma~\ref{lem:rhoubeta} to prove that in a `neighborhood' of $\boldsymbol{\beta}_0$, $\int_{D_n} \rho(u;\boldsymbol{\beta})\mathrm{d}u= O(\mu_n)$, a useful result widely used in our proofs. The last part of Condition~\cond{C:cov} asserts that at any location the covariates are linearly independent. Condition~\cond{C:cov} also implies first that $\liminf_{n} \inf_{\boldsymbol \phi, \|\boldsymbol \phi\|=1} \boldsymbol \phi^\top\big\{(\mu_n)^{-1}\mathbf{A}_{n,11}(\boldsymbol{\beta}_0) \big\}\boldsymbol \phi>0$ and second that $\liminf_{n} \inf_{\boldsymbol \phi, \|\boldsymbol \phi\|=1} \boldsymbol \phi^\top\big\{(\mu_n)^{-1}\mathbf{A}_{n}(\boldsymbol{\beta}_0) \big\}\boldsymbol \phi>0$. Condition~\cond{C:Bn} is a similar assumption but for the submatrix $\mathbf B_{n,11}(\boldsymbol{\beta}_0)$ which corresponds to $\mathrm{Var}\{\mathbf U_{n,1}(\boldsymbol{\beta}_0)\}$. Condition~\cond{C:g} is also natural. Combined with~Condition~\cond{C:cov}, this implies that $\|\mathbf B_{n}(\boldsymbol{\beta}_0)\|= O(\mu_n p_n)$. When $p_n=p$ (and therefore $s_n=s$) and in the increasing domain framework, such an assumption can be satisfied by a large class of spatial point processes such as determinantal point processes, log-Gaussian Cox processes and Neyman-Scott point processes \cite[see][]{choiruddin2018convex}. When $p_n=p$ and in the infill asymptotic framework, these assumptions are also valid for many spatial point processes, as discussed by~\cite{choiruddin2021information}. Condition~\cond{C:clt} is required to derive the asymptotic normality of $\hat \boldsymbol{\beta}_{1}$.
Under a specific framework, such a result has already been obtained for a large class of spatial point processes: by \cite{biscio:waagepetersen:19,waagepetersen2009two} under the increasing domain framework with $p_n=p$, and by~\cite{choiruddin2017spatial} when $p_n\to \infty$; by~\cite{choiruddin2021information} in the infill/increasing domain asymptotic frameworks with $p_n=p$.
Condition ($\mathcal C$.\ref{C:initial}) is very specific to the ALDS estimate which requires a preliminary estimate of $\boldsymbol{\beta}$. That condition is not unrealistic as a simple choice for $\tilde \boldsymbol{\beta}$ could be the maximum of the composite likelihood function~\eqref{eq:likepois}, see the remark after Theorem~\ref{thm:main}. Of course, we do not require that $\tilde \boldsymbol{\beta}$ produces a sparse estimate.
Condition ($\mathcal C$.\ref{C:snpn}) reflects the restriction on the number of covariates that can be considered in this study. For the AL estimate, this assumption is very similar to the one required by~\cite{fan2004nonconcave} when $\mu_n$ is replaced by $n$ and where the number of non-zero coefficients $s_n$ is constant.
Condition ($\mathcal C$.\ref{C:anbn}) contains the main ingredients to derive sparsity properties, consistency and asymptotic normality. We first note that if $\lambda_{n,j}=\lambda_n$, then $a_n=b_n=\lambda_n$, whereby it is easily deduced that the two conditions on $a_n$ and $b_n$ cannot be satisfied simultaneously even if $p_n=p$. This justifies the introduction of an adaptive version of the Dantzig selector and motivates the use of the adaptive lasso. The condition $a_n\sqrt{s_n \mu_n}\to 0$ for the adaptive lasso is similar to the one imposed by \cite{fan2004nonconcave} when $\mu_n$ is replaced by $n$ and $s_n=s$ in their context. However, we require a slightly stronger condition on $b_n$ than the one required by \cite{fan2004nonconcave}. In our setting, their assumption would be written as $b_n \sqrt{\mu_n/p_n} \to \infty$. However, we would have to assume that $\nu_{\max}\big(\mathbf{A}_{n}(\boldsymbol{\beta}_0)\big)=O(\mu_n)$. Such a condition is not straightforwardly satisfied in our setting since, for instance, the conditions~\cond{C:nun}-\cond{C:g}
only imply that $\nu_{\max}\big(\mathbf{A}_{n}(\boldsymbol{\beta}_0)\big)=O({p_n \mu_n})$.
As already mentioned, we do not assume that $\tilde \boldsymbol{\beta}$ satisfies any sparsity property. We believe this is the main reason why conditions~\cond{C:snpn}-\cond{C:anbn} contain slightly stronger assumptions for the ALDS estimate than for the AL estimate. We now present our main result, whose proof is provided in Appendices~\ref{sec:proofAL}-\ref{sec:proofALDS}.
\begin{theorem}
\label{thm:main}
Let $\hat \boldsymbol{\beta}$ denote either $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ or $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ and assume that conditions ($\mathcal C$.1)-($\mathcal C$.\ref{C:anbn}) hold. Then the following properties hold.
\begin{enumerate}[(i)]
\item $\hat \boldsymbol{\beta}$ exists. Moreover $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ satisfies,
${\displaystyle
\|\hat \boldsymbol{\beta}_{\mathrm{AL}} -\boldsymbol{\beta}_0\| = O_{\mathrm P}
\left( \sqrt{\frac{p_n}{\mu_n}} \right)}$.
\item Sparsity: $\mathrm{P}(\hat \boldsymbol{\beta}_{2}=0) \to 1$ as $n \to \infty$.
\item Asymptotic normality: for any $\boldsymbol \phi \in \mathbb R^{s_n}\setminus \{0\}$ such that $\|\boldsymbol \phi\|<\infty$,
\[
\sigma_{\boldsymbol \phi}^{-1} \, \boldsymbol \phi^\top \mathbf A_{n,11}(\boldsymbol{\beta}_0)
(\hat \boldsymbol{\beta}_{1}- \boldsymbol{\beta}_{01})\xrightarrow{d} \mathcal{N}(0, 1),
\]
where $\sigma^2_{\boldsymbol \phi} = \boldsymbol \phi^\top \mathbf{B}_{n,11}(\boldsymbol{\beta}_0) \boldsymbol \phi$.
\end{enumerate}
\end{theorem}
To derive the consistency of $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ alone, a careful look at the proof in Appendix~\ref{sec:proofAL} shows that the weaker condition $a_n \sqrt{{\mu_n s_n}/p_n} \to 0$ would be sufficient. The convergence rate, i.e. $O_{\mathrm P} ( \sqrt{p_n/\mu_n})$, is $\sqrt{p_n}$ times the convergence rate of the estimator obtained when $p_n$ is constant \citep[see][Theorem 1]{choiruddin2018convex}. It also corresponds to the rate of convergence obtained by \cite{fan2004nonconcave} for generalized linear models when $p_n\to \infty$ and $\mu_n$ corresponds to the standard sample size. It is worth pointing out that a possibly diverging number of non-zero coefficients $s_n$ does not affect the rate of convergence. It does, however, impose a more restrictive condition on $a_n$.
Concerning Theorem~\ref{thm:main}~(i), its proof shows that the result remains valid when $\lambda_{n,j}=0$ for $j=1,\dots,p_n$. In other words, the maximum composite likelihood estimator is consistent with the same rate of convergence. Hence, a simple choice for the initial estimate $\widetilde \boldsymbol{\beta}$ defining the ALDS estimate is the maximizer of the Poisson likelihood given by~\eqref{eq:likepois}.
Theorem~\ref{thm:main}~(iii) coincides with the result one would obtain if $p_n-s_n=0$. Therefore, the efficiency of $\hat \boldsymbol{\beta}_{\mathrm{AL},1}$ and $\hat \boldsymbol{\beta}_{\mathrm{ALDS},1}$ is the same as that of the estimator of $\boldsymbol{\beta}_{01}$ obtained by maximizing \eqref{eq:likepois} in the submodel where $\boldsymbol{\beta}_{02}=\mathbf{0}$ is known. In other words, when $n$ is sufficiently large, both estimators are as efficient as the oracle one.
We end this section with the following remark. Although the asymptotic properties of $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ and $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$, and the conditions under which they hold, are (almost) identical, the proofs are completely different and rely upon different tools. For $\hat \boldsymbol{\beta}_{\mathrm{AL}}$, our contribution is to extend the proof by~\cite{choiruddin2018convex}, where only the increasing-domain framework was considered, i.e. $\mu_n=O(|D_n|)$ and $D_n\to \mathbb R^d$ as $n\to \infty$, with $s_n=s$ and $p_n=p$. The results for $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ are the first ones available for spatial point processes. To handle this estimator, we first have to study the existence and optimality of solutions of the primal and dual problems.
\section{Computational considerations} \label{sec:comp}
To lighten notation, we drop the index $n$ from quantities such as $\mathbf U_n, \lambda_{n,j}, \ell_n$, and $\mu_n$. From a practical point of view, we define $\mu=N(D)$, the number of data points in $D$.
\subsection{Berman-Turner approach}
Before discussing how the AL and ALDS estimates are obtained, we first recall the Berman-Turner approximation \citep{baddeley2015spatial} used to compute the Poisson likelihood estimate based on~\eqref{eq:likepois}. The so-called Berman-Turner approximation consists in discretizing the integral term in \eqref{eq:likepois} as
\begin{align*}
{\int_{D} \rho(u; \boldsymbol{\beta}) \mathrm{d}u} \approx {\sum_{i=1}^{M} w(u_i) \rho (u_i; \boldsymbol{\beta})},
\end{align*}
where $u_i, i=1,\ldots,M$, are points in $D$ consisting of the $m$ data points and $M-m$ dummy points, and where the quadrature weights $w(u_i)>0$ are positive real numbers such that ${\sum_i w(u_i)}=|D|$. Using this integral discretization, \eqref{eq:likepois} is then approximated by
\begin{align}
\label{eq:appx:pois}
\ell(\boldsymbol{\beta}) \approx \tilde \ell(\boldsymbol{\beta}) = {\sum_{i=1}^{M} w_i \{y_i \log \rho_i(\boldsymbol{\beta}) - \rho_i(\boldsymbol{\beta})\}},
\end{align}
where $w_i=w(u_i), y_i=w_i^{-1} \mathbf{1}(u_i \in \mathbf{X} \cap D)$ and $\rho_i(\boldsymbol{\beta})=\rho (u_i; \boldsymbol{\beta})$.
Equation \eqref{eq:appx:pois} is formally equivalent to the weighted likelihood function of independent Poisson variables $y_i$ with weights $w_i$. The approximations of \eqref{eq:Un} and \eqref{eq:An} follow along similar lines, resulting respectively in
\begin{align}
\mathbf{U}(\boldsymbol{\beta}) \approx \tilde{\mathbf{U}}(\boldsymbol{\beta}) = {\sum_{i=1}^{M} w_i \mathbf{z}_i \{y_i - \rho_i(\boldsymbol{\beta})\}}, \quad
\mathbf{A}(\boldsymbol{\beta}) \approx \tilde{\mathbf{A}}(\boldsymbol{\beta}) = {\sum_{i=1}^{M} w_i \mathbf{z}_i \mathbf{z}_i^\top \rho_i(\boldsymbol{\beta})}, \label{eq:approx:An}
\end{align}
where $\mathbf{z}_i=\mathbf{z}(u_i)$. Thus, standard statistical software for generalized linear models can be used to obtain the estimates. This approach is implemented in the \texttt{spatstat} \texttt{R} package via the \texttt{ppm} function with the option \texttt{method="mpl"} \citep{baddeley2015spatial}.
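As an illustration, the quadrature approximation above can be sketched in a few lines of Python. This is a minimal sketch, assuming equal quadrature weights $w_i=|D|/M$ and a user-supplied covariate function $\mathbf z(\cdot)$; it is not the \texttt{spatstat} implementation.

```python
import numpy as np

def berman_turner(data_pts, dummy_pts, area, z, beta):
    """Berman-Turner approximation of the Poisson log-likelihood,
    score U and sensitivity matrix A, with equal weights w_i = |D|/M."""
    pts = np.concatenate([data_pts, dummy_pts])
    M = len(pts)
    w = np.full(M, area / M)                      # sum_i w_i = |D|
    y = np.zeros(M)
    y[:len(data_pts)] = 1.0 / w[:len(data_pts)]   # y_i = w_i^{-1} 1(u_i in X)
    Z = np.array([z(u) for u in pts])
    rho = np.exp(Z @ beta)                        # rho_i(beta)
    ell = np.sum(w * (y * np.log(rho) - rho))     # approximate ell(beta)
    U = Z.T @ (w * (y - rho))                     # approximate score U(beta)
    A = (Z * (w * rho)[:, None]).T @ Z            # approximate A(beta)
    return ell, U, A
```

Note that the first component of $\tilde{\mathbf U}(\boldsymbol{\beta})$ reduces, for an intercept covariate, to the number of data points minus the discretized integral of the intensity, which gives a quick sanity check of an implementation.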
\subsection{Adaptive lasso (AL)} \label{sec:comp:AL}
First, given a current estimate $\check \boldsymbol{\beta}$, \eqref{eq:appx:pois} is approximated by a second-order Taylor expansion in order to apply iteratively reweighted least squares,
\begin{align}
\tilde \ell(\boldsymbol{\beta}) \approx \ell_Q(\boldsymbol {\beta})
=- \frac{1}{2} {\sum_{i=1}^M \psi_i (y_i^*-\boldsymbol{\beta}^\top \mathbf{z}_i)^2+C(\check \boldsymbol{\beta})} \label{eq:quad},
\end{align}
where $C(\check \boldsymbol{\beta})$ is a constant, and $y_i^*$ and $\psi_i$ are the working responses and weights,
$
y_i^*=\mathbf{z}_i^\top \check \boldsymbol{\beta}+\{y_i - \exp(\check \boldsymbol{\beta}^\top\mathbf{z}_i)\}/\{\exp(\check \boldsymbol{\beta}^\top\mathbf{z}_i)\}, \;\psi_i= w_i \exp(\check \boldsymbol{\beta}^\top\mathbf{z}_i).
$
Second, a penalized weighted least squares problem is obtained by adding a penalty term. Therefore, we solve
\begin{align}
\label{eq:glmnet}
{\displaystyle \min_{\boldsymbol{\beta} \in \mathbb{R}^{p}} \Omega(\boldsymbol{\beta})}={\displaystyle \min_{\boldsymbol{\beta} \in \mathbb{R}^{p}} \left\{-\frac{1}{N(D)}\ell_Q(\boldsymbol{\beta})+\sum_{j=1}^{p} \lambda_j|\beta_j|\right\}}
\end{align}
using the coordinate descent algorithm \citep{friedman2010regularization}. The method consists in partially minimizing \eqref{eq:glmnet} with respect to $\beta_j$ given $\check \beta_l$ for $l \neq j$, $l,j=1,\ldots,p$, that is
\begin{align*}
{\displaystyle \min_{\beta_j} \Omega (\check \beta_1, \ldots, \check \beta_{j-1}, \beta_j, \check \beta_{j+1}, \ldots, \check \beta_{p})}.
\end{align*}
With a few modifications, \eqref{eq:glmnet} is solved using the \texttt{glmnet} \texttt{R} package \citep{friedman2010regularization}. More details about this implementation can be found in~\cite[][Appendix C]{choiruddin2018convex}.
\subsection{Adaptive (linearized) Dantzig selector (ALDS)} \label{sec:comp:ALDS}
Given $\tilde {\boldsymbol{\beta}}$, \eqref{ADS2} is a linear problem that is simple to implement. The main task is to compute the vectors $\mathbf{U}(\tilde \boldsymbol{\beta})$ and $\mathbf{A}(\tilde \boldsymbol{\beta})(\tilde {\boldsymbol{\beta}}- {\boldsymbol{\beta}})$. Typically, $\tilde {\boldsymbol{\beta}}$ is chosen to be the maximum composite likelihood estimate. Then, $\mathbf{U}(\tilde \boldsymbol{\beta})$ and $\mathbf{A}(\tilde \boldsymbol{\beta})(\tilde {\boldsymbol{\beta}}- {\boldsymbol{\beta}})$ are approximated by \eqref{eq:approx:An}. This results in solving
\begin{align*}
\min {\sum_{j=1}^{p} \lambda_j |\beta_j|} \mbox{ subject to } {N(D)}^{-1}\Big |\tilde{\mathbf{U}}_{j}(\tilde \boldsymbol{\beta}) + \big\{\tilde{\mathbf{A}}(\tilde \boldsymbol{\beta})(\tilde {\boldsymbol{\beta}}- {\boldsymbol{\beta}})\big\}_j \Big | \leq \lambda_j, \quad \text{ for } j=1,\dots,p,
\end{align*}
where $ \tilde{\mathbf{U}}_{j}(\tilde \boldsymbol{\beta})$ and $ \{\tilde{\mathbf{A}}(\tilde \boldsymbol{\beta})(\tilde {\boldsymbol{\beta}}- {\boldsymbol{\beta}})\}_j$ are the $j$-th components of vectors $\tilde{\mathbf{U}}(\tilde \boldsymbol{\beta})$ and $\tilde{\mathbf{A}}(\tilde \boldsymbol{\beta})(\tilde {\boldsymbol{\beta}}- {\boldsymbol{\beta}})$.
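Writing $\boldsymbol{\beta}=\boldsymbol{\beta}^+-\boldsymbol{\beta}^-$ with $\boldsymbol{\beta}^\pm\geq 0$, the problem above becomes a standard linear program. The sketch below uses \texttt{scipy} as the LP solver, which is an assumption of this illustration; the paper does not prescribe a particular solver.

```python
import numpy as np
from scipy.optimize import linprog

def alds_lp(U_tilde, A_tilde, beta_tilde, lam, N):
    """Solve min sum_j lam_j |beta_j|
       s.t. (1/N) |U_j(b~) + {A(b~)(b~ - beta)}_j| <= lam_j
    as a linear program in x = (beta^+, beta^-), beta = beta^+ - beta^-."""
    p = len(beta_tilde)
    c_vec = (U_tilde + A_tilde @ beta_tilde) / N   # constant part of the constraint
    G = A_tilde / N
    cost = np.concatenate([lam, lam])              # lam^T (beta^+ + beta^-)
    # |c_vec - G beta| <= lam  <=>  G beta <= lam + c_vec and -G beta <= lam - c_vec
    A_ub = np.vstack([np.hstack([G, -G]),
                      np.hstack([-G, G])])
    b_ub = np.concatenate([lam + c_vec, lam - c_vec])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v
```

When $\tilde{\mathbf A}/N$ is the identity, the constraint decouples and the solution is componentwise soft-thresholding, which provides a convenient unit test for any implementation.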
\subsection{Tuning parameter selection}
\label{tuning}
Both AL and ALDS rely on a proper choice of the regularization parameters $\lambda_j$: too small a value of $\lambda_j$ induces unnecessary bias, while too large a value inflates the variance, so the selection of $\lambda_j$ is an important task. To tune the $\lambda_j$, we follow \cite{choiruddin2018convex,zou2006adaptive} and define $\lambda_j= \lambda |\tilde \beta_j|^{-\nu}$, where $\lambda\geq 0$, $\nu>0$ and $\tilde {\boldsymbol{\beta}}$ is the maximum composite likelihood estimate. The weights $|\tilde \beta_j|^{-\nu}$ serve as prior knowledge to identify the non-zero coefficients since, for a constant $\lambda$, a large (resp.\ small) $\tilde \beta_j$ forces $\lambda_j$ close to zero (resp.\ infinity) \citep{zou2006adaptive}.
{Our theoretical results do not cover this stochastic way of setting the regularization parameters, but we believe this choice is pertinent in our context. Here is the intuition: let $\lambda_{n,j}=\lambda_n/|\tilde \beta_j|$. Using the $\sqrt{\mu_n/p_n}$-consistency of $\tilde \beta_j$, $\lambda_{n,j}=O_{\mathrm P}(\lambda_n)$ for non-zero coefficients, while, for zero coefficients, we may conjecture that $\lambda_{n,j} = O_{\mathrm P}(\lambda_n \sqrt{\mu_n/p_n})$. Ignoring the $O_{\mathrm P}$ terms, we may conjecture that $a_n\asymp \lambda_n$ and $b_n\asymp\lambda_n \sqrt{\mu_n/p_n}$. Hence, considering the AL procedure for instance, this would mean that we require $\lambda_n$ to be such that $\lambda_n \sqrt{s_n\mu_n}\to 0$ and $\lambda_n \mu_n/p_n^{3/2} \to \infty$, which constitutes a non-empty set of conditions. For instance, assuming~($\mathcal C$.\ref{C:snpn}), the sequence $\lambda_n=(s_n\mu_n)^{-\eta}$ satisfies~($\mathcal C$.\ref{C:anbn}) as soon as $1/2<\eta<5/9$, since as $n\to \infty$
\[
\lambda_n \sqrt{s_n\mu_n} = (s_n\mu_n)^{1/2-\eta}\to 0
\qquad \text{ and } \qquad
\frac{\lambda_n \mu_n}{p_n^{3/2}}=\left(\frac{\mu_n}{s_n^2 p_n^3}\right)^{\eta/2} \left(\frac{\mu_n}{p_n^4}\right)^{1-3\eta/2}
p_n^{5/2-9\eta/2} \to \infty.
\]
A rigorous treatment of the stochastic choice $\lambda_n/|\tilde \beta_j|$ for the regularization parameters is left for future research.}
The remaining task is to specify $\lambda$. Following the literature, mainly \cite{choiruddin2021information}, we propose to select $\lambda$ as the minimizer of the Bayesian information criterion, $\mathrm{BIC}(\lambda)$, for spatial point processes
\begin{align*}
\mathrm{BIC}(\lambda)=-2 \ell\{\hat{\boldsymbol{\beta}} (\lambda)\} + p_* \log N(D),
\end{align*}
where $\ell\{\hat{\boldsymbol{\beta}} (\lambda)\}$ is the maximized composite likelihood function, $p_*$ is the number of non-zero elements in $\hat \boldsymbol{\beta}(\lambda)$ and $N(D)$ is the number of observed data points.
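In practice, the minimization of $\mathrm{BIC}(\lambda)$ can be carried out by a grid search. The sketch below assumes the fitting routine and the composite log-likelihood are supplied as functions (\texttt{fit} and \texttt{loglik} are hypothetical names for, e.g., the AL or ALDS solver and $\ell$).

```python
import numpy as np

def select_lambda(fit, loglik, beta_tilde, nu, lambdas, n_points):
    """Grid search for the lambda minimizing
       BIC(lambda) = -2 ell{beta_hat(lambda)} + p_* log N(D),
    with adaptive weights lambda_j = lambda |tilde beta_j|^{-nu}."""
    best_bic, best_lam, best_beta = np.inf, None, None
    for lam in lambdas:
        lam_vec = lam * np.abs(beta_tilde) ** (-nu)   # adaptive weights
        beta_hat = fit(lam_vec)
        p_star = np.count_nonzero(beta_hat)           # number of selected terms
        bic = -2.0 * loglik(beta_hat) + p_star * np.log(n_points)
        if bic < best_bic:
            best_bic, best_lam, best_beta = bic, lam, beta_hat
    return best_lam, best_beta
```

The criterion trades off fit (through $\ell$) against model size (through $p_*$), so over-shrunk and under-shrunk solutions are both penalized.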
\section{Numerical results} \label{sec:num}
In Sections~\ref{sec:sim}-\ref{sec:appl}, we compare AL and ALDS for intensity modeling of simulated and real data. In particular, the real data example comes from an environmental study in which 1146 locations of Acalypha diversifolia trees, displayed in Figure~\ref{fig:acaldi}, were surveyed in a 50-hectare region ($D =1000m \times 500m$) of the tropical forest of Barro Colorado Island (BCI) in central Panama \cite[e.g.][]{hubbell2005barro}. A main question is how this tree species profits from environmental habitats \cite{choiruddin2020regularized,waagepetersen2009two}, which could be related to the 15 environmental covariates depicted in Figure~\ref{cov} and their 79 interactions. With a total of 94 covariates, we perform variable selection using AL and ALDS to determine which covariates should be included in the model. We center and scale the 94 covariates so that the important covariates can be sorted according to the magnitudes of $\hat \boldsymbol{\beta}$. A subset of these covariates is also used to construct a realistic setting for the simulation study.
\begin{figure}
\caption{Plot of 1146 Acalypha diversifolia tree locations observed in the tropical forest of Barro Colorado Island}
\label{fig:acaldi}
\end{figure}
\begin{figure}
\caption{Maps of covariates used in the simulation study and in the application. From left to right: Elevation, Slope, Aluminium (row 1); Boron, Calcium, Copper (row 2); Iron, Potassium, Magnesium (row 3); Manganese, Phosphorus, Zinc (row 4); and Nitrogen, Nitrogen mineralisation, pH (row 5).}
\label{cov}
\end{figure}
\subsection{Simulation study} \label{sec:sim}
The simulated point patterns are generated from Poisson and Thomas cluster processes with intensity~\eqref{eq:int}. To generate point patterns from a Thomas process with intensity \eqref{eq:int} \cite[e.g.][]{choiruddin2018convex}, we first generate a parent point pattern from a stationary Poisson point process $\mathbf{C}$ with intensity $\kappa=4 \times10^{-4}$. Given $\mathbf{C}$, offspring point patterns are generated from inhomogeneous Poisson point processes $\mathbf{X}_c, c \in \mathbf{C}$, with intensity
\begin{align*}
\rho_{child}(u)=\exp\{\boldsymbol{\beta}^\top \mathbf{z}(u)\} k(u-c;\gamma)/\kappa,
\end{align*}
where $k(u-c;\gamma)=(2 \pi \gamma^2)^{-1} \exp(-\|u-c\|^2/(2 \gamma^2))$.
The point process $\mathbf{X}=\cup_{c \in \mathbf{C}}\mathbf{X}_c$ is indeed an inhomogeneous Thomas point process with intensity \eqref{eq:int}. We set $\gamma=5$ and $15$. The smaller $\gamma$ is, the more clustered the point patterns, leading to moderate clustering for $\gamma=15$ and high clustering for $\gamma=5$.
The covariates $\mathbf{z}(u)$ used for the simulation experiment come from the BCI data. In addition to the 15 environmental factors depicted in Figure~\ref{cov}, we add pairwise interactions between covariates until we obtain the desired number of covariates $p=21, 41$ or $81$. For each $p$, we consider three mean numbers of points ($\mu$), which increase as the observation domain expands. More precisely, a mean number of $\mu_1=150$ (resp. $\mu_2=600$, $\mu_3=2400$) points is generated on $D_1=[0,250]\times[0,125]$ (resp. $D_2=[0,500]\times [0,250]$, $D_3=[0,1000]\times[0,500]$). When $D_1$ or $D_2$ is considered, we simply rescale the covariates to fit the observation domain. We fix $\beta_2=1$ and $\beta_3=-1$ while the rest are set to zero, {so $s=2$}. The parameter $\beta_1$ acts as an intercept and is tuned to control the mean number of points.
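The cluster construction above can be sketched via independent thinning: offspring are generated from a dominating homogeneous cluster and retained with probability $\exp\{\boldsymbol{\beta}^\top \mathbf{z}(u)\}/\rho_{\max}$. The bound \texttt{rho\_max}, assumed to dominate $\exp\{\boldsymbol{\beta}^\top\mathbf z(u)\}$ on $D$, is an ingredient of this sketch only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sim_thomas(kappa, gamma, beta, z, Lx, Ly, rho_max):
    """Inhomogeneous Thomas process on [0,Lx]x[0,Ly]: parents are a Poisson
    process with intensity kappa; each parent gets Poisson offspring with
    Gaussian dispersal k(.;gamma), thinned with retention exp{beta^T z(u)}/rho_max,
    so each cluster has intensity exp{beta^T z(u)} k(u-c;gamma)/kappa (rho_child)."""
    n_par = rng.poisson(kappa * Lx * Ly)
    parents = rng.uniform([0.0, 0.0], [Lx, Ly], size=(n_par, 2))
    pts = []
    for c in parents:
        n_off = rng.poisson(rho_max / kappa)                 # dominating cluster size
        off = c + rng.normal(scale=gamma, size=(n_off, 2))   # Gaussian dispersal
        for u in off:
            inside = 0.0 <= u[0] <= Lx and 0.0 <= u[1] <= Ly
            if inside and rng.uniform() < np.exp(beta @ z(u)) / rho_max:
                pts.append(u)
    return np.array(pts)
```

A fuller simulation would also extend the parent window beyond $D$ to avoid edge effects; the sketch ignores this for brevity.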
\begin{table}[ht]
\centering
\setlength{\tabcolsep}{7pt}
\renewcommand{\arraystretch}{1.1}
\caption{True positive rate (TPR), false positive rate (FPR) in percentage, RMSE and average time in seconds obtained for AL and ALDS estimates based on 500 simulations from inhomogeneous Poisson point processes observed on different observation domains.}
\label{tab:poisson}
\begin{tabular}{rrrrrrrrr}
\hline
& \multicolumn{2}{c}{TPR} & \multicolumn{2}{c}{FPR} &\multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Time}\\
& AL & ALDS & AL & ALDS & AL & ALDS & AL & ALDS \\
\hline
\multicolumn{1}{l}{$D_1$ ($\mu_1=150$)} &&&&&&&&\\
$p=20$ & 57 & 57 & 23 & 23 & 2.4 & 2.4 & 0.3 & 0.3 \\
$p=40$ & 7 & 7 & 15 & 15 & 2.9 & 2.9 & 2.0 & 2.0\\
$p=80$ & 0 & 0 & 8 & 8 & 2.8 & 2.8 & 4.0 & 4.0 \\
\multicolumn{1}{l}{$D_2$ ($\mu_2=600$)} &&&&&&&&\\
$p=20$ & 100 & 100 & 3 & 4 & 0.3 & 0.3 & 0.3 & 0.3\\
$p=40$ & 97 & 96 & 4 & 5 & 0.5 & 0.5 & 0.8 & 0.6 \\
$p=80$ & 86 & 86 & 8 & 8 & 0.9 & 0.9 & 3.0 & 3.0\\
\multicolumn{1}{l}{$D_3$ ($\mu_3=2400$)} &&&&&&&&\\
$p=20$ & 100 & 100 & 0 & 0 & 0.1 & 0.1 & 0.4 & 0.3\\
$p=40$ & 100 & 100 & 0 & 0 & 0.1 & 0.1 & 0.9 & 0.9\\
$p=80$ & 100 & 100 & 0 & 0 & 0.1 & 0.1 & 3.0 & 3.0 \\
\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\setlength{\tabcolsep}{7pt}
\renewcommand{\arraystretch}{1.1}
\caption{True positive rate (TPR), false positive rate (FPR) in percentage, RMSE and average time in seconds obtained for AL and ALDS estimates based on 500 simulations from inhomogeneous Thomas point processes with $\kappa=4\times 10^{-4}$ and $\gamma=15$ (moderate clustering) observed on different observation domains.}
\label{tab:MC}
\begin{tabular}{rrrrrrrrr}
\hline
& \multicolumn{2}{c}{TPR} & \multicolumn{2}{c}{FPR} &\multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Time}\\
& AL & ALDS & AL & ALDS & AL & ALDS & AL & ALDS \\
\hline
\multicolumn{1}{l}{$D_1$ ($\mu_1=150$)} &&&&&&&&\\
$p=20$ & 59 & 59 & 44 & 44 & 8.2 & 8.2 & 0.3 & 0.3 \\
$p=40$ & 20 & 20 & 30 & 30 & 11.0 & 11.0 & 2.0 & 2.0 \\
$p=80$ & 0 & 0 & 15 & 15 & 8.3 & 8.3 & 4.0 & 4.0 \\
\multicolumn{1}{l}{$D_2$ ($\mu_2=600$)} &&&&&&&&\\
$p=20$ & 91 & 89 & 53 & 47 & 2.6 & 2.2 & 0.3 & 0.3 \\
$p=40$ & 88 & 86 & 48 & 43 & 4.9 & 3.8 & 1.0 & 0.6 \\
$p=80$ & 80 & 80 & 35 & 35 & 7.7 & 7.7 & 5.0 & 5.0 \\
\multicolumn{1}{l}{$D_3$ ($\mu_3=2400$)} &&&&&&&&\\
$p=20$ & 100 & 100 & 59 & 43 & 1.0 & 0.8 & 0.9 & 1.0 \\
$p=40$ & 100 & 100 & 56 & 39 & 1.7 & 1.1 & 2.0 & 3.0 \\
$p=80$ & 100 & 100 & 52 & 52 & 3.2 & 3.2 & 8.0 & 8.0 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\setlength{\tabcolsep}{7pt}
\renewcommand{\arraystretch}{1.1}
\caption{True positive rate (TPR), false positive rate (FPR) in percentage, RMSE and average time in seconds obtained for AL and ALDS estimates based on 500 simulations from inhomogeneous Thomas point processes with $\kappa=4\times 10^{-4}$ and $\gamma=5$ (high clustering) observed on different observation domains.}
\label{tab:HC}
\begin{tabular}{rrrrrrrrr}
\hline
& \multicolumn{2}{c}{TPR} & \multicolumn{2}{c}{FPR} &\multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Time}\\
& AL & ALDS & AL & ALDS & AL & ALDS & AL & ALDS \\
\hline
\multicolumn{1}{l}{$D_1$ ($\mu_1=150$)} &&&&&&&&\\
$p=20$ & 64 & 64 & 72 & 72 & 39.0 & 39.0 & 0.5 & 0.5 \\
$p=40$ & 29 & 29 & 72 & 72 & 130.0 & 130.0 & 6.0 & 6.0 \\
$p=80$ & 0 & 0 & 36 & 36 & 69.0 & 69.0 & 6.0 & 6.0 \\
\multicolumn{1}{l}{$D_2$ ($\mu_2=600$)} &&&&&&&&\\
$p=20$ & 91 & 90 & 66 & 61 & 4.1 & 3.6 & 0.4 & 0.3 \\
$p=40$ & 84 & 80 & 70 & 64 & 13.0 & 10.0 & 2.0 & 0.7 \\
$p=80$ & 80 & 80 & 74 & 74 & 53.0 & 53.0 & 10.0 & 10.0 \\
\multicolumn{1}{l}{$D_3$ ($\mu_3=2400$)} &&&&&&&&\\
$p=20$ & 100 & 100 & 66 & 51 & 1.4 & 1.1 & 1.0 & 1.0 \\
$p=40$ & 100 & 100 & 69 & 51 & 2.6 & 1.9 & 3.0 & 4.0 \\
$p=80$ & 100 & 100 & 70 & 70 & 7.1 & 7.1 & 10.0 & 10.0 \\
\hline
\end{tabular}
\end{table}
For each model and setting, we generate 500 independent point patterns and estimate the parameters for each of these using the AL and ALDS procedures. The performances of AL and ALDS estimates are compared in terms of the true
positive rate (TPR), false positive rate (FPR), and root mean squared error (RMSE). We also report the computing time. The TPR (resp.\ FPR) is the expected fraction of informative (resp.\ non-informative) covariates included in the selected model, so we expect a high TPR and a low FPR.
The RMSE is defined for an estimate $\hat \boldsymbol{\beta}$ by
\begin{align*}
\mathrm{RMSE}=\left\{ \sum_{j=2}^{p} { \hat{\mathbb{E}}(\hat \beta_j-\beta_j)^2} \right\}^\frac{1}{2}
\end{align*}
where $\hat{\mathbb{E}}$ is the empirical mean.
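These performance measures can be computed as follows over replicated estimates; as in the RMSE definition above, the intercept is excluded (here from all three metrics, a convention of this sketch).

```python
import numpy as np

def selection_metrics(beta_hats, beta_true):
    """TPR, FPR (in %) and RMSE over R replicated estimates.
    beta_hats: (R, p) array; beta_true: (p,) with beta_true[0] the intercept,
    excluded from the metrics as in the RMSE definition (j = 2, ..., p)."""
    beta_hats = np.asarray(beta_hats, dtype=float)
    truth = beta_true[1:]
    sel = beta_hats[:, 1:] != 0
    nz = truth != 0
    tpr = 100.0 * sel[:, nz].mean()        # informative covariates kept
    fpr = 100.0 * sel[:, ~nz].mean()       # non-informative covariates kept
    sq = (beta_hats[:, 1:] - truth) ** 2
    rmse = np.sqrt(sq.mean(axis=0).sum())  # {sum_j E_hat(beta_hat_j - beta_j)^2}^(1/2)
    return tpr, fpr, rmse
```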
Tables~\ref{tab:poisson}-\ref{tab:HC} report results respectively for the Poisson model and the Thomas model with moderate and high clustering.
In the situation where point patterns come from Poisson processes, AL and ALDS perform very similarly. In particular, both methods do not work well on a small spatial domain with large $p$. The performances improve significantly as $D$ expands, even for large $p$. When the point patterns exhibit clustering (Tables~\ref{tab:MC}-\ref{tab:HC}), AL and ALDS generally tend to overfit the intensity model by selecting too many covariates (indicated by a higher FPR), which yields a higher RMSE. ALDS sometimes performs slightly better in terms of RMSE. Results tend to deteriorate in the high-clustering situation but remain satisfactory overall: for the three considered models, the TPR increases (resp.\ the FPR and RMSE decrease) when $|D|$ grows for given $p$, while for given $D$ (especially when $D=D_2,D_3$) the results remain quite stable when $p$ increases. In terms of computing time, no major difference can be observed.
\subsection{Application to the forestry dataset} \label{sec:appl}
We model the intensity function of the Acalypha diversifolia point pattern using~\eqref{eq:int}, depending on the 94 environmental covariates described previously. The overall $\boldsymbol{\beta}$ estimates from the AL and ALDS procedures are presented in Table~\ref{tab:betaest}. We report in Table~\ref{tab:selection} only the 12 most important variables.
Among the 94 environmental variables, AL and ALDS select 32 and 33 important covariates, respectively (most of them coincide). We sort the magnitudes of $\hat \boldsymbol{\beta}$ to identify the 12 most informative covariates. It turns out that these 12 covariates are the same for both procedures (see Table~\ref{tab:selection}). For the remaining selected covariates, the rankings differ slightly but the magnitudes are very similar (see Table~\ref{tab:betaest}).
\setlength{\tabcolsep}{2.5pt}
\renewcommand{\arraystretch}{1}
\begin{table}[ht]
\caption{Twelve most important covariates selected by AL and ALDS for modeling the intensity of Acalypha diversifolia point pattern}
\label{tab:selection}
\centering
\begin{tabular}{lrr}
\hline
Covariates & AL & ALDS \\
\hline
Ca:N.min & -0.89 & -0.89 \\
K:N.min & 0.58 & 0.53 \\
Al:Mg & -0.51 & -0.47 \\
pH & 0.48 & 0.46 \\
B & -0.46 & -0.43 \\
Ca & 0.45 & 0.42 \\
Al:Fe & 0.38 & 0.39 \\
Fe:K & -0.31 & -0.33 \\
B:P & -0.30 & -0.29 \\
P:Nz & 0.27 & 0.26 \\
Fe & 0.26 & 0.25 \\
Mn & -0.24 & -0.24 \\
Number of selected covariates & 32 & 33 \\
\hline
\end{tabular}
\end{table}
\section{Discussion}\label{sec:conl}
In this paper, we develop the adaptive lasso and the adaptive Dantzig selector for intensity estimation of spatial point processes and provide asymptotic results in an original setting where the numbers of non-zero and zero coefficients diverge with the mean number of points. We demonstrate that both methods share identical asymptotic properties and perform similarly on simulated and real data. This study supplements previous ones \citep[see e.g.][]{bickel2009simultaneous} where similar conclusions were reached for linear models and generalized linear models.
\cite{choiruddin2018convex} considered extensions of lasso type methods by involving general convex and non-convex penalties. In particular, composite likelihoods penalized by SCAD or MC+ penalty showed interesting properties. To integrate such an idea for the Dantzig selector, we could consider the optimization problem
\begin{align*}
\min \sum_{j=1}^{p}{p_{\lambda_j}(\beta_j)} \mbox{ subject to } {\mu}^{-1}\; \Big |\mathbf{U}_j( \tilde {\boldsymbol{\beta}}) + [\mathbf{A}( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} - {\boldsymbol{\beta}})]_j \Big | \leq p^{\prime}_{\lambda_j}(\beta_j), && j=1,\ldots,p
\end{align*}
where $p^{\prime}_{\lambda}(\theta)$ is the derivative with respect to $\theta$ of a general penalty function $p_\lambda$. However, such an extension would make linear programming unusable and the theoretical developments more complex to derive. We leave this direction for further study.
{Another direction for further study is to derive results for the selection of regularization parameters. As mentioned earlier, a challenging and definitely interesting perspective would be the validity of Theorem~\ref{thm:main} when we define the regularization parameters in a stochastic way such as $\lambda_{n,j}=\lambda_n/|\tilde \beta_j|$.}
On a similar topic, \cite{choiruddin2021information} studied information criteria such as AIC, BIC, and their composite versions under a similar asymptotic framework for selecting the intensity model of a spatial point process. These criteria could be extended to tuning parameter selection in the context of regularization methods for spatial point processes.
\begin{appendix}
\section{Additional notation and auxiliary Lemmas} \label{auxLemma}
Lemmas~\ref{bound}-\ref{lem:rhoubeta} are used in the proof of Theorem~\ref{thm:main} in both cases $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{AL}},\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$. Throughout the proofs, the notation $\mathbf X_n = O_{\mathrm P} (x_n)$ or $\mathbf X_n = o_{\mathrm P} (x_n)$, for a random vector $\mathbf X_n$ and a sequence of real numbers $x_n$, means that $\|\mathbf X_n\|=O_{\mathrm P}(x_n)$ or $\|\mathbf X_n\|=o_{\mathrm P}(x_n)$, respectively. In the same way, for a vector $\mathbf V_n$ or a square matrix $\mathbf M_n$, the notations $\mathbf V_n=O(x_n)$ and $\mathbf M_n=O(x_n)$ mean that $\|\mathbf V_n\|=O(x_n)$ and $\|\mathbf M_n\|=O(x_n)$.
\begin{lemma} \label{bound}
Under conditions~\cond{C:nun}, \cond{C:cov} and~\cond{C:g}, the following results hold as $n\to \infty$:
\begin{align}
\max\left\{ \|\mathbf{U}_n(\boldsymbol{\beta}_0)\|,\|\mathbf{U}_{n,2}(\boldsymbol{\beta}_0)\|\right\} =O_\mathrm{P} \left( \sqrt { p_n \mu_n} \right)
\quad \text{ and } \quad
\mathbf{U}_{n,1}(\boldsymbol{\beta}_0) =O_\mathrm{P} \left( \sqrt { {s_n \mu_n} } \right).
\label{ch2:eq:ln}
\end{align}
\end{lemma}
\begin{proof}
Using the Campbell theorem~\eqref{eq:campbell}, the score vector $\mathbf{U}_n(\boldsymbol{\beta}_0)$ is centered and has variance $\mathrm{Var}\{ \mathbf{U}_{n}( \boldsymbol{\beta}_0)\}= \mathbf{B}_{n}( \boldsymbol{\beta}_{0})$. {By Condition~\cond{C:cov}, for any $u\in D_n$, $\mathbf{z}(u)\mathbf{z}(u)^\top= O(p_n)$. Hence, $\mathbf A_n(\boldsymbol{\beta}_0) = O(p_n \mu_n)$. By the definition~\eqref{eq:Bn} and conditions~\cond{C:nun} and~\cond{C:g}, we deduce that $\mathbf B_n(\boldsymbol{\beta}_0)= O(p_n \mu_n)$, and therefore $\mathrm{Var}\{\mathbf U_n(\boldsymbol{\beta}_0)\}= O(p_n \mu_n)$. In the same way, $\mathrm{Var}\{\mathbf U_{n,2}(\boldsymbol{\beta}_0)\}= O\{(p_n-s_n) \mu_n\} = O(p_n \mu_n)$ and $\mathrm{Var}\{\mathbf U_{n,1}(\boldsymbol{\beta}_0)\}= O(s_n \mu_n)$. } The result follows since any centered real-valued random variable ${Y_n}$ with finite variance $\mathrm{Var}({Y_n})$ satisfies ${Y_n}=O_\mathrm{P}{\big\{\sqrt{\mathrm{Var} ({Y_n})}\big\}}$.
\end{proof}
The next lemma states that in the vicinity of $\boldsymbol{\beta}_0$, $\rho(u;\boldsymbol{\beta})$ and $\rho(u;\boldsymbol{\beta}_0)$ have the same behaviour.
\begin{lemma}\label{lem:rhoubeta}
(i) Let $(\zeta_n)_{n\ge 1}$ be any sequence such that $\zeta_n=o(1/\sqrt{p_n})$ and let $\kappa$ be any non-negative real number. Then, under Conditions~\cond{C:intensity}-\cond{C:cov}, we have
\[
\sup_{\|\boldsymbol{\beta} - \boldsymbol{\beta}_0\| \le \kappa \zeta_n} \int_{D_n} \rho(u;\boldsymbol{\beta}) \mathrm{d} u = O(\mu_n).
\]
(ii) Similarly, for any random vector $\boldsymbol{\beta}$ such that $\|\boldsymbol{\beta}-\boldsymbol{\beta}_0\|=o_\mathrm{P}(1/\sqrt{p_n})$, we have
\[
\int_{D_n} \rho(u;\boldsymbol{\beta}) \mathrm{d} u = O_\mathrm{P}(\mu_n).
\]
(iii) In addition, under condition~\cond{C:snpn}, (i)-(ii) are valid for the sequence defined by $\zeta_n= \sqrt{p_n/ \mu_n}$.
\end{lemma}
\begin{proof}
(i)-(ii) We only focus on (i) as (ii) follows along similar lines. For any $u \in D_n$, there exists, by Condition~\cond{C:cov}, a constant $\kappa<\infty$ (independent of $u, \boldsymbol{\beta}$ and $\boldsymbol{\beta}_0$) such that
\[
- \kappa \sqrt{p_n}\|\boldsymbol{\beta} - \boldsymbol{\beta}_0\| \leq (\boldsymbol{\beta}-\boldsymbol{\beta}_0)^\top \mathbf{z} (u) \leq \kappa \sqrt{p_n}\|\boldsymbol{\beta} - \boldsymbol{\beta}_0\|.
\]
Since, $\int \rho(u;\boldsymbol{\beta})\mathrm{d} u = \int \exp\{(\boldsymbol{\beta}-\boldsymbol{\beta}_0)^\top \mathbf{z} (u)\} \rho(u;\boldsymbol{\beta}_0) \mathrm{d} u$, we deduce that
\[
\mu_n \, \exp(- \kappa \sqrt{p_n}\|\boldsymbol{\beta} - \boldsymbol{\beta}_0\|) \le \int_{D_n} \rho(u;\boldsymbol{\beta}) \mathrm{d} u \le \mu_n \, \exp( \kappa \sqrt{p_n}\|\boldsymbol{\beta} - \boldsymbol{\beta}_0\|)
\]
which yields the result by definition of $\zeta_n$.\\
(iii) Condition~\cond{C:snpn} implies in particular that $\sqrt{p_n^2/\mu_n} \to 0$ as $n \to \infty$, that is, $\zeta_n=\sqrt{p_n/ \mu_n}=o(1/\sqrt{p_n})$.
\end{proof}
\section{Proof of Theorem~\ref{thm:main} when $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{AL}}$} \label{sec:proofAL}
\subsection{Existence of a root-$(\mu_n/p_n)$ consistent local maximizer}
The first result presented hereafter shows that there exists a local maximizer of $Q_n(\boldsymbol{\beta})$ which is a consistent estimator of $\boldsymbol{\beta}_0$.
\begin{proposition}
\label{proposition:AL}
Assume that the conditions~\cond{C:nun}-\cond{C:g} hold. If in addition $p_n^4/\mu_n\to 0$ and $a_n\sqrt{s_n \mu_n/p_n}\to 0$ as $n\to \infty$, then there exists a local maximizer ${\hat \boldsymbol{\beta}_{\mathrm{AL}}}$ of $Q_n(\boldsymbol{\beta})$ such that
\begin{align*}
{\bf \| \hat \boldsymbol{\beta}_{\mathrm{AL}} -\boldsymbol{\beta}_0\|}=O_\mathrm{P}\big\{ \sqrt{p_n/\mu_n} \big\}.
\end{align*}
\end{proposition}
Note that the conditions on $a_n, s_n, \mu_n$ and $p_n$ are actually implied by conditions~\cond{C:snpn} and~\cond{C:anbn}.
In the proof of this result and the following ones, the notation $\kappa$ stands for a generic constant which may vary from line to line. In particular this constant is independent of $n$, $\boldsymbol{\beta}_0$ and $\mathbf k$.
\begin{proof}
Let $\mathbf{k}\in \mathbb{R}^{p_n}$. We remind the reader that the estimate of $\boldsymbol{\beta}_0$ is defined as the maximum of the function $Q_n$, given by~\eqref{regmed}, over $\mathbb R^{p_n}$.
To prove Proposition~\ref{proposition:AL}, we aim at proving that for any given $\epsilon>0$, there exists a sufficiently large $K>0$ such that, for $n$ sufficiently large,
\begin{equation}
\label{ch2:eq:15}
\mathrm{P}\bigg\{\sup_{\|\mathbf{k}\| = K} \Delta_n(\mathbf k)>0\bigg\}\leq \epsilon,
\quad \mbox{ where } \Delta_n(\mathbf k) = Q_n(\boldsymbol{\beta}_0+\sqrt{p_n/\mu_n}\mathbf{k})-Q_n(\boldsymbol{\beta}_0).
\end{equation}
Equation~\eqref{ch2:eq:15} will imply that with probability at least $1-\epsilon$, there exists a local maximum in the ball $\{\boldsymbol{\beta}_0+\sqrt{p_n/\mu_n}\mathbf{k}:\|\mathbf{k}\| \leq K\}$, and therefore a local maximizer $\boldsymbol{\hat{\beta}}$ such that $\|{ \boldsymbol {\hat \beta}-\boldsymbol{\beta}_0}\|=O_\mathrm{P}(\sqrt{p_n/\mu_n})$. We decompose $\Delta_n(\mathbf k)$ as $\Delta_n(\mathbf k)= T_1+T_2$ where
\begin{align*}
T_1 & = \mu_n^{-1} \left\{ \ell_n(\boldsymbol{\beta}_0+\sqrt{p_n/\mu_n}\mathbf{k})-\ell_n( \boldsymbol{\beta}_0) \right\} \\
T_2 & = \sum_{j=1}^{p_n} \lambda_{n,j} \left( |\beta_{0j}|- |\beta_{0j}+\sqrt{p_n/\mu_n}k_j| \right).
\end{align*}
Since $\rho(u;\cdot)$ is infinitely continuously differentiable and $\ell_n^{(2)}(\boldsymbol{\beta}) =-\mathbf A_n(\boldsymbol{\beta})$, a second-order Taylor expansion yields, for some $t\in (0,1)$,
\begin{align*}
\mu_n T_1 =& \, \sqrt{p_n/\mu_n} \mathbf k^\top \ell_n^{(1)}(\boldsymbol{\beta}_0) + T_{11}+T_{12}
\end{align*}
where
\begin{align*}
T_{11} =&- \frac12 \frac{p_n}{\mu_n}\mathbf k^\top \mathbf{A}_n(\boldsymbol{\beta}_0) \mathbf k \\
T_{12}=&+ \frac12\frac{p_n}{\mu_n}\mathbf k^\top \left\{ \mathbf{A}_n(\boldsymbol{\beta}_0) -\mathbf{A}_n(\boldsymbol{\beta}_0 + t\sqrt{p_n/\mu_n} \mathbf k) \right\} \mathbf k .
\end{align*}
By condition~\cond{C:cov}
\[
T_{11} = -\frac12 p_n \frac{\mathbf k^\top \{\mu_n^{-1}\mathbf A_n(\boldsymbol{\beta}_0)\} \mathbf k}{\|\mathbf k\|^2} \, \|\mathbf k\|^2 \le -\frac{\alpha}2 p_n \|\mathbf k\|^2
\]
where $\alpha = \liminf_{n\ge 1}\inf_{\boldsymbol \phi,\, \|\boldsymbol \phi\|=1} \boldsymbol \phi^\top \{\mu_n^{-1}\mathbf A_n(\boldsymbol{\beta}_0)\} \boldsymbol \phi>0$. Now, for some $\tilde \boldsymbol{\beta}$ on the line segment between $\boldsymbol{\beta}_0$ and $\boldsymbol{\beta}_0 + t\sqrt{p_n/\mu_n}\, \mathbf k$,
\[
T_{12} = \frac12 \, \frac{p_n}{\mu_n} \mathbf k^\top
\left\{
\int_{D_n} \mathbf{z}(u) \mathbf{z}(u)^\top t \sqrt{\frac{p_n}{\mu_n}} \mathbf k^\top \mathbf{z}(u) \rho(u;\tilde \boldsymbol{\beta}) \mathrm{d} u
\right\}
\mathbf k.
\]
By conditions~\cond{C:nun}-\cond{C:cov} and Lemma~\ref{lem:rhoubeta}
\[
T_{12} = O\left( \|\mathbf k\|^3 \frac{p_n}{\mu_n} p_n \sqrt{\frac{p_n}{\mu_n}} \sqrt{p_n} \mu_n\right) = O\left(p_n \sqrt{\frac{p_n^4}{\mu_n}} \right) = o(p_n).
\]
Hence, for $n$ sufficiently large
\[
\mu_n T_1 \le \sqrt{\frac{p_n}{\mu_n}} \mathbf k^\top \ell_n^{(1)}(\boldsymbol{\beta}_0) - \frac{\alpha}4 p_n \|\mathbf k\|^2.
\]
Regarding the term $T_2$ we have,
\[
T_2\leq \sum_{j=1}^{s_n} \lambda_{n,j}
\left\{
|\beta_{0j}|- \left|\beta_{0j}+ \sqrt{\frac{p_n}{\mu_n}} k_j\right|
\right\}
\leq a_n \sqrt{\frac{p_n}{\mu_n}} \sum_{j=1}^{s_n} |k_j| \leq a_n \sqrt{\frac{s_np_n}{\mu_n}} \|\mathbf k\|.
\]
Combining the bounds on $T_1$ and $T_2$, we deduce that for $n$ large enough
\[
\Delta_n(\mathbf k) \leq
\frac{1}{\mu_n}\sqrt{\frac{p_n}{\mu_n}} \mathbf k^\top \ell_n^{(1)}(\boldsymbol{\beta}_0)
-\frac{\alpha}4 \frac{p_n}{\mu_n} \|\mathbf k\|^2 + a_n \sqrt{\frac{s_np_n}{\mu_n}} \|\mathbf k\|.
\]
By the assumption of Proposition~\ref{proposition:AL}, $a_n\sqrt{s_np_n/\mu_n}= a_n \sqrt{s_n\mu_n/p_n} p_n/\mu_n = o(p_n/\mu_n)$, whereby we deduce that for $n$ sufficiently large
\[
\Delta_n(\mathbf k) \le \frac1{\mu_n} \sqrt{\frac{p_n}{\mu_n}} \mathbf k^\top \ell_n^{(1)}(\boldsymbol{\beta}_0)
-\frac{\alpha}8 \frac{p_n}{\mu_n} \|\mathbf k\|^2.
\]
Now for $n$ sufficiently large,
\begin{align*}
\mathrm{P}\bigg\{{\sup_{\|\mathbf{k}\|= K} \Delta_n(\mathbf{k})>0}\bigg\} &\leq
\mathrm{P}\bigg\{ \|\ell_n^{(1)}(\boldsymbol{\beta}_0)\| \ge \frac\alpha8 K p_n \sqrt{\frac{\mu_n}{p_n}} \bigg\} \\
&= \mathrm{P}\bigg\{ \|\ell_n^{(1)}(\boldsymbol{\beta}_0)\| \ge \frac\alpha8 K \sqrt{p_n \mu_n} \bigg\}<\epsilon
\end{align*}
for any given $\epsilon>0$, since $\ell_n^{(1)}(\boldsymbol{\beta}_0) = \mathbf U_{n}(\boldsymbol{\beta}_0)= O_\mathrm{P}(\sqrt{p_n\mu_n})$ by Lemma~\ref{bound}.
\end{proof}
\subsection{Sparsity property for $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{AL}}$}
The sparsity property for $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ follows directly from Proposition~\ref{proposition:AL} and the following Lemma~\ref{sparsity}.
\begin{lemma}
\label{sparsity}
Assume that conditions~\cond{C:nun}-\cond{C:g} and~\cond{C:snpn}-\cond{C:anbn} hold. Then, with probability tending to $1$, for any {$\boldsymbol{\beta}_1 \in \mathbb{R}^{s_n}$} satisfying $\|{\boldsymbol{\beta}_1 - \boldsymbol{\beta}_{01}}\|=O_\mathrm{P}(\sqrt{p_n/\mu_n})$ and any constant $K_1 > 0$,
\begin{align*}
Q_n\Big\{({\boldsymbol{\beta}_1}^\top,\mathbf{0}^\top)^\top \Big\}
= \max_{\| \boldsymbol{\beta}_2\| \leq K_1 \sqrt{p_n/\mu_n}}
Q_n\Big\{({\boldsymbol{\beta}_1}^\top,{\boldsymbol{\beta}_2}^\top)^\top \Big\}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\varepsilon_n= K_1 \sqrt{p_n/\mu_n}$. It is sufficient to show that with probability tending to $1$ as ${n\to \infty}$, for any ${\boldsymbol{\beta}_1}$ satisfying $\|{\boldsymbol{\beta}_1 -\boldsymbol{\beta}_{01}}\|=O_\mathrm{P}(\sqrt{p_n/\mu_n})$, we have for any $j=s_n+1, \ldots, p_n$
\begin{equation}
\label{sparsitya}
\frac {\partial Q_n(\boldsymbol{\beta})}{\partial\beta_j}<0 \quad
\mbox { for } 0<\beta_j<\varepsilon_n, \mbox{ and}
\end{equation}
\begin{equation}
\label{sparsityb}
\frac {\partial Q_n(\boldsymbol{\beta})}{\partial\beta_j}>0 \quad
\mbox { for } -\varepsilon_n<\beta_j<0.
\end{equation}
From \eqref{eq:likepois},
\begin{align*}
\frac {\partial \ell_n(\boldsymbol{\beta})}{\partial\beta_j} = \frac {\partial \ell_n{(\boldsymbol{\beta}_0)}}{\partial\beta_j} + R_n,
\end{align*}
where $R_n = {-} \int_{D_n} z_j(u)\big\{\rho(u;\boldsymbol{\beta})-\rho(u;\boldsymbol{\beta}_0)\big\} \mathrm{d}u$.
Let $u \in \mathbb{R}^d$. By a Taylor expansion, there exists $t\in (0,1)$ such that
\begin{align*}
\rho(u;\boldsymbol{\beta}) = \rho(u;\boldsymbol{\beta}_0) + (\boldsymbol{\beta}-\boldsymbol{\beta}_0)^\top \mathbf{z}(u) \rho\{u;\boldsymbol{\beta}_0 + t(\boldsymbol{\beta}-\boldsymbol{\beta}_0 )\}.
\end{align*}
By conditions~\cond{C:intensity}-\cond{C:cov} and Lemma~\ref{lem:rhoubeta}, we have for $n$ sufficiently large
\[
|R_n| \le \kappa \|\boldsymbol{\beta}-\boldsymbol{\beta}_0\| \sqrt{p_n} \int_{D_n} \rho(u;\boldsymbol{\beta}_0) \mathrm{d} u =
O_{\mathrm P}\left( \sqrt{\frac{p_n}{\mu_n}} \sqrt{p_n} \mu_n \right) =
O_{\mathrm P}\left( {p_n} \sqrt{\mu_n} \right).
\]
Following the proof of Lemma~\ref{bound}, we can derive $\mathrm{Var}({\partial \ell_n{(\boldsymbol{\beta}_0)}}/{\partial\beta_j} ) = \mathrm{Var}[\{\mathbf U_n(\boldsymbol{\beta}_0) \}_j ]=O(\mu_n)$ whereby we deduce that
\begin{equation}
\label{Op}
\frac {\partial \ell_n(\boldsymbol{\beta})}{\partial\beta_j} = O_{\mathrm P} (p_n \sqrt{\mu_n}).
\end{equation}
Now, we prove \eqref{sparsitya}. Let $0<\beta_j<\varepsilon_n$ and recall that the sequence $b_n$ is given by~\cond{C:anbn}. Then, for $n$ sufficiently large,
\begin{align*}
\mathrm{P} \left\{ \frac {\partial Q_n(\boldsymbol{\beta})}{\partial\beta_j}<0 \right\}&=\mathrm{P} \left\{ \frac {\partial \ell_n(\boldsymbol{\beta})}{\partial\beta_j} - \mu_n{\lambda_{n,j}}\sign(\beta_j)<0 \right\}\\
&=\mathrm{P} \left\{ \frac {\partial \ell_n(\boldsymbol{\beta})}{\partial\beta_j}< \mu_n{\lambda_{n,j}} \right\}\\
& \geq \mathrm{P} \left\{ \frac {\partial \ell_n(\boldsymbol{\beta})}{\partial\beta_j}< \mu_n b_n \right\}\\
&= \mathrm{P} \left\{ \frac {\partial \ell_n(\boldsymbol{\beta})}{\partial\beta_j}< p_n\sqrt{\mu_n} \; \sqrt{\frac{\mu_n}{p_n^2}}b_n \right\}.
\end{align*}
The assertion \eqref{sparsitya} then follows from \eqref{Op} and the assumption that $b_n \sqrt{\mu_n/p_n^2} \to \infty$ as $n \to \infty$. The proof of \eqref{sparsityb} is similar.
\end{proof}
\subsection{Asymptotic normality for $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{AL}}$}
\begin{proof}
As shown in Proposition~\ref{proposition:AL}, there is a root-$(\mu_n/p_n)$ consistent local maximizer $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ of $Q_n(\boldsymbol{\beta})$. Moreover, it can be shown that the estimator $\hat\boldsymbol{\beta}_{\mathrm{AL},1}$ in Proposition~\ref{proposition:AL} is a root-$(\mu_n/p_n)$ consistent local maximizer of $Q_n\big\{({\boldsymbol{\beta}_1}^\top,\mathbf{0}^\top)^\top \big\}$, regarded as a function of $\boldsymbol{\beta}_1$, and that it satisfies
\begin{align*}
\frac {\partial Q_n(\hat\boldsymbol{\beta}_{\mathrm{AL}})}{\partial\beta_j}=0 \quad
\mbox { for } j=1,\ldots,s_n \mbox { and } \hat\boldsymbol{\beta}_{\mathrm{AL}}=( \hat\boldsymbol{\beta}_{\mathrm{AL},1}^\top,\mathbf{0}^ \top)^\top.
\end{align*}
There exist $t\in (0,1)$ and $\boldsymbol{\check{\beta}}= \hat\boldsymbol{\beta}_{\mathrm{AL}} + t(\boldsymbol{\beta}_0-\hat\boldsymbol{\beta}_{\mathrm{AL}})$ such that for $j=1,\ldots,s_n$
\begin{align}
0
=&\frac {\partial \ell_n{(\hat\boldsymbol{\beta}_{\mathrm{AL}})}}{\partial\beta_j}-\mu_n{\lambda_{n,j}}\sign({\hat \beta_{\mathrm{AL},j}}) \nonumber\\
=&\frac {\partial \ell_n{(\boldsymbol{\beta}_0)}}{\partial\beta_j}+{\sum_{l=1}^{s_n} \frac {\partial^2 \ell_n{( \boldsymbol{\check{\beta}})}}{\partial\beta_j \partial\beta_l}}({\hat \beta_{\mathrm{AL},l}-\beta_{0l}})-\mu_n{\lambda_{n,j}}\sign({\hat \beta_{\mathrm{AL},j}}) \nonumber\\
=&\frac {\partial \ell_n{(\boldsymbol{\beta}_0)}}{\partial\beta_j}+{\sum_{l=1}^{s_n} \frac {\partial^2 \ell_n{( \boldsymbol{\beta}_0)}}{\partial\beta_j \partial\beta_l}}({\hat \beta_{\mathrm{AL},l}-\beta_{0l}})+{\sum_{l=1}^{s_n} \Psi_{n,jl}({\hat \beta_{\mathrm{AL},l}}-\beta_{0l})} \nonumber \\
&-\mu_n\lambda_{n,j}\sign( {\hat \beta_{\mathrm{AL},j}} )
\label{eq:0equal}
\end{align}
where
\begin{align*}
\Psi_{n,jl}=\frac {\partial^2 \ell_n{(\boldsymbol{\check{\beta}})}}{\partial\beta_j \partial\beta_l}-\frac {\partial^2 \ell_n{(\boldsymbol{\beta}_0)}}{\partial\beta_j \partial\beta_l}.
\end{align*}
Let $\mathbf{U}_{n,1}(\boldsymbol{\beta}_{0})$ (resp. $\mathbf{A}_{n,11}(\boldsymbol{\beta}_{0})$) denote the vector of the first $s_n$ components of $\mathbf{U}_{n}(\boldsymbol{\beta}_{0})$ (resp. the $s_n \times s_n$ top-left corner of $\mathbf{A}_{n}(\boldsymbol{\beta}_{0})=-\ell^{(2)}_{n}(\boldsymbol{\beta}_{0})$). Let also $\boldsymbol \Psi_n$ be the $s_n \times s_n$ matrix with entries $\Psi_{n,jl}$, $j,l=1,\ldots,s_n$. Finally, define the vector $\mathbf{p}'_n$ by
\begin{align*}
\mathbf{p}'_n&={
\{\lambda_{n,1}\sign({\hat \beta_{\mathrm{AL},1}} ),\ldots,
\lambda_{n,s_n}\sign( {\hat \beta_{\mathrm{AL},s_n}} )\}^\top}.
\end{align*}
This notation allows us to rewrite~\eqref{eq:0equal} as
\begin{equation}
\label{eq:tmp}
\mathbf U_{n,1}(\boldsymbol{\beta}_0) - \mathbf A_{n,11}(\boldsymbol{\beta}_0) (\hat \boldsymbol{\beta}_{\mathrm{AL},1}-\boldsymbol{\beta}_{01}) +
\boldsymbol \Psi_n (\hat \boldsymbol{\beta}_{\mathrm{AL},1}-\boldsymbol{\beta}_{01}) - \mu_n \mathbf{p}'_n =0.
\end{equation}
Let $\boldsymbol \phi \in \mathbb R^{s_n}\setminus \{0\}$ and $\sigma^2_{\boldsymbol \phi} = \boldsymbol \phi^\top \mathbf B_{n,11}(\boldsymbol{\beta}_0) \boldsymbol \phi$; then
\[
\sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\mathbf U_{n,1}(\boldsymbol{\beta}_0) - \sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\mathbf A_{n,11}(\boldsymbol{\beta}_0) (\hat \boldsymbol{\beta}_{\mathrm{AL},1}-\boldsymbol{\beta}_{01}) +
\sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\boldsymbol \Psi_n (\hat \boldsymbol{\beta}_{\mathrm{AL},1}-\boldsymbol{\beta}_{01}) - \mu_n \sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\mathbf{p}'_n =0.
\]
Now, by condition~\cond{C:Bn}, $\sigma_{\boldsymbol \phi}^{-1} = O(\mu_n^{-1/2})$ and, by the definition of $a_n$, $\|\mathbf p'_n\|= O(a_n \sqrt{s_n})$. By conditions~\cond{C:intensity}-\cond{C:cov}, there exists some $\tilde \boldsymbol{\beta}$ on the line segment between $\boldsymbol{\beta}_0$ and $\check \boldsymbol{\beta}$ such that
\[
\boldsymbol \Psi_n = \int_{D_n} \mathbf{z}_1(u) \mathbf{z}_1(u)^\top (\check \boldsymbol{\beta}-\boldsymbol{\beta}_0)^\top \mathbf{z}(u) \rho(u;\tilde \boldsymbol{\beta}) \mathrm{d} u
\]
whereby we deduce from conditions~\cond{C:intensity}-\cond{C:cov} and Lemma~\ref{lem:rhoubeta} that
\[
\|\boldsymbol \Psi_n\| = O_{\mathrm P} \left( s_n \sqrt{\frac{p_n}{\mu_n}} \sqrt{p_n} \mu_n\right) = O_{\mathrm P}(s_np_n\sqrt{\mu_n}).
\]
The last two results and conditions~\cond{C:snpn}-\cond{C:anbn} yield that
\begin{align*}
\sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\boldsymbol \Psi_n (\hat \boldsymbol{\beta}_{\mathrm{AL},1}-\boldsymbol{\beta}_{01})&=
O_{\mathrm P} \left( \frac1{\sqrt{\mu_n}} s_np_n\sqrt{\mu_n} \sqrt{\frac{p_n}{\mu_n}} \right) =
O_{\mathrm P} \left( \sqrt{\frac{s_n^2 p_n^3}{\mu_n}}\right) = o_{\mathrm P}(1)\\
\mu_n \sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\mathbf{p}'_n &=
O \left(\mu_n \frac{1}{\sqrt{\mu_n}} a_n\sqrt{s_n} \right) = O(a_n\sqrt{s_n\mu_n}) = o(1).
\end{align*}
These results lead to
\[
\sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\mathbf A_{n,11}(\boldsymbol{\beta}_0) (\hat \boldsymbol{\beta}_{\mathrm{AL},1}-\boldsymbol{\beta}_{01}) = \sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top\mathbf U_{n,1}(\boldsymbol{\beta}_0) + o_{\mathrm P}(1)
\]
and the result then follows from Slutsky's lemma and condition~\cond{C:clt}.
\end{proof}
\section{Proof of Theorem~\ref{thm:main} when $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$} \label{sec:proofALDS}
\subsection{Existence and optimal solutions for the primal and dual problems} \label{sec:existenceALDS}
For $\boldsymbol{\beta} \in \mathbb R^{p_n}$, we let $\boldsymbol\Delta_n (\boldsymbol{\beta}) = \mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} -\boldsymbol{\beta} )$.
\begin{lemma} \label{lem:existenceALDS}
There exists a solution to the problem~\eqref{ADS2}.
\end{lemma}
\begin{proof}
Following \cite{candes:romberg:05}, we observe that \eqref{ADS2} is equivalent to
\begin{align}
\label{ADS-linear}
\min_{\boldsymbol{\beta}, {\boldsymbol u} } \sum_j u_j \mbox{ subject to } \begin{cases}
{\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta} \leq {\boldsymbol u} \\
-{\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta} \leq {\boldsymbol u} \\
\mu_n^{-1}{\boldsymbol{\Lambda}}_{n}^{-1} \boldsymbol \Delta_n(\boldsymbol{\beta}) - \boldsymbol 1_{p_n} \leq {\mathbf{0}} \\
- \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \boldsymbol \Delta_n(\boldsymbol{\beta}) - \boldsymbol 1_{p_n} \leq {\mathbf{0} }
\end{cases}
\end{align}
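To make the reformulation concrete, the linear program~\eqref{ADS-linear} can be handed to an off-the-shelf LP solver. The sketch below is purely illustrative and not part of the proof: it uses small synthetic stand-ins for $\mathbf{A}_n(\tilde{\boldsymbol{\beta}})$, $\mathbf{U}_n(\tilde{\boldsymbol{\beta}})$, ${\boldsymbol{\Lambda}}_{n}$ and $\mu_n$ (all names and sizes are hypothetical), and \texttt{scipy.optimize.linprog} as one possible solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
p, mu = 4, 50.0                                   # toy sizes (hypothetical)
Lam = np.diag(rng.uniform(0.5, 2.0, p))           # Lambda_n (diagonal penalty weights)
Li = np.linalg.inv(Lam)
A = rng.standard_normal((p, p))
A = A @ A.T + p * np.eye(p)                       # SPD stand-in for A_n(beta_tilde)
beta_tilde = 0.1 * rng.standard_normal(p)         # stand-in for the initial estimator
U = np.sqrt(mu) * rng.standard_normal(p)          # stand-in for U_n(beta_tilde)

# Delta_n(beta) = U + A (beta_tilde - beta) is affine in beta:
r = Li @ (U + A @ beta_tilde) / mu                # constant part of mu^{-1} Lam^{-1} Delta_n
M = Li @ A / mu                                   # coefficient of beta in that expression

# decision vector x = (beta, u); minimize sum_j u_j
c = np.concatenate([np.zeros(p), np.ones(p)])
I, Z = np.eye(p), np.zeros((p, p))
A_ub = np.vstack([
    np.hstack([ Lam, -I]),   #  Lam beta <= u
    np.hstack([-Lam, -I]),   # -Lam beta <= u
    np.hstack([-M,    Z]),   #  r - M beta <= 1
    np.hstack([ M,    Z]),   # -(r - M beta) <= 1
])
b_ub = np.concatenate([np.zeros(2 * p), 1 - r, 1 + r])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * p))
beta_hat = res.x[:p]
assert res.success
# at the optimum, sum_j u_j equals the ell_1 objective ||Lam beta||_1 of (ADS2)
assert np.isclose(res.fun, np.abs(Lam @ beta_hat).sum(), atol=1e-6)
```

At any optimum the auxiliary variables satisfy $u_j = |({\boldsymbol{\Lambda}}_n\hat{\boldsymbol{\beta}})_j|$, so the LP value coincides with the $\ell_1$ objective of~\eqref{ADS2}.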
where $\boldsymbol u \in \mathbb{R}^{p_n}$ is an additional parameter vector to be optimized and $\tilde {\boldsymbol{\beta}}$ is the initial estimator. Note that~\eqref{ADS-linear} is a linear program with $4p_n$ linear inequality constraints. To prove the existence of ALDS estimates, we derive the dual problem of \eqref{ADS-linear} and prove that strong duality holds. To this end, we first construct the Lagrangian associated with problem \eqref{ADS-linear}, following the main arguments of \cite[Section 5.2]{boyd2004convex}
\begin{align*}
L(\boldsymbol{\beta}; \boldsymbol u ; \boldsymbol{\alpha}) & = \sum_j u_j
+ \boldsymbol{\alpha}_1^\top ({\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta} -\boldsymbol u)
+ \boldsymbol{\alpha}_2^\top (-{\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta} -\boldsymbol u)\notag\\
& \quad + \boldsymbol{\alpha}_3^\top \Big[ \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1}
\boldsymbol \Delta_n(\boldsymbol{\beta})- \boldsymbol 1_{p_n} \Big] + \boldsymbol{\alpha}_4^\top \Big[ - \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \boldsymbol \Delta_n(\boldsymbol{\beta}) - \boldsymbol 1_{p_n} \Big]\\
& = (\boldsymbol 1_{p_n} - \boldsymbol{\alpha}_1 - \boldsymbol{\alpha}_2)^\top \boldsymbol u + \Big \{ (\boldsymbol{\alpha}_1-\boldsymbol{\alpha}_2)^\top{\boldsymbol{\Lambda}}_{n}
- \mu_n^{-1} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}})
\Big \}\boldsymbol{\beta} \notag \\
& \quad + (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top \Big \{ \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \Big(\mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} \Big) \Big \} - (\boldsymbol{\alpha}_3+\boldsymbol{\alpha}_4)^\top \boldsymbol 1_{p_n},
\end{align*}
where $\boldsymbol{\alpha} =(\boldsymbol{\alpha}_1^\top, \boldsymbol{\alpha}_2^\top, \boldsymbol{\alpha}_3^\top, \boldsymbol{\alpha}_4^\top)^\top \in \mathbb{R}^{4p_n}$ is the dual vector (which can be viewed as a Lagrange multiplier).
The dual function $h$ is defined by
\begin{align}
h(\boldsymbol{\alpha})&=\inf_{\boldsymbol{\beta}, \boldsymbol u} L(\boldsymbol{\beta}; \boldsymbol u; \boldsymbol{\alpha}) \nonumber \\
&=
\begin{cases}
(\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top \Big \{ \mu_n^{-1}{\boldsymbol{\Lambda}}_{n}^{-1} \Big(\mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} \Big) \Big \} - (\boldsymbol{\alpha}_3+\boldsymbol{\alpha}_4)^\top \boldsymbol 1_{p_n} , \text{ if} \\
\quad
\begin{cases}
\boldsymbol 1_{p_n} - \boldsymbol{\alpha}_1 - \boldsymbol{\alpha}_2 ={\mathbf{0}} \\
(\boldsymbol{\alpha}_1-\boldsymbol{\alpha}_2)^\top{\boldsymbol{\Lambda}}_{n}
- \mu_n^{-1} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) ={\mathbf{0}}
\end{cases}
\\
-\infty \text{ otherwise}.
\end{cases} \nonumber
\end{align}
For any $\boldsymbol{\alpha} =(\boldsymbol{\alpha}_1^\top, \boldsymbol{\alpha}_2^\top, \boldsymbol{\alpha}_3^\top, \boldsymbol{\alpha}_4^\top)^\top \in \mathbb{R^+}^{4p_n}$, $h(\boldsymbol{\alpha})$ is a lower bound on the optimal value of problem \eqref{ADS-linear} (see \cite[p.216]{boyd2004convex}). Finding the best such lower bound amounts to solving the dual problem: $\max_{\boldsymbol{\alpha} \geq {\mathbf{0}}} h(\boldsymbol{\alpha}).$
Recall that problem \eqref{ADS-linear} is a linear program with linear inequality constraints, so that strong duality holds if the dual problem is feasible \cite[see][p.227]{boyd2004convex}, that is to say if there exists some $\boldsymbol{\alpha} =(\boldsymbol{\alpha}_1^\top, \boldsymbol{\alpha}_2^\top, \boldsymbol{\alpha}_3^\top, \boldsymbol{\alpha}_4^\top)^\top \in \mathbb{R^+}^{4p_n}$ such that
\begin{eqnarray}
\boldsymbol 1_{p_n} - \boldsymbol{\alpha}_1 - \boldsymbol{\alpha}_2 &=& {\mathbf{0}} \notag \\
(\boldsymbol{\alpha}_1-\boldsymbol{\alpha}_2)^\top{\boldsymbol{\Lambda}}_{n}
- \mu_n^{-1} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) &=&{\mathbf{0}}.
\notag
\end{eqnarray}
Moreover, we remark that
\begin{eqnarray}
\boldsymbol{\alpha}_1 \geq {\mathbf{0}} \notag , \; \boldsymbol{\alpha}_2\geq {\mathbf{0}} \notag , \; \boldsymbol{\alpha}_3 \geq {\mathbf{0}} \notag, \; \boldsymbol{\alpha}_4 &\geq& {\mathbf{0}} \notag \\
\boldsymbol 1_{p_n} - \boldsymbol{\alpha}_1 - \boldsymbol{\alpha}_2 &=&{\mathbf{0}} \notag \\
(\boldsymbol{\alpha}_1-\boldsymbol{\alpha}_2)^\top{\boldsymbol{\Lambda}}_{n}
- \mu_n^{-1} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) &=&{\mathbf{0}}
\notag
\end{eqnarray}
is equivalent to
\begin{eqnarray}
\boldsymbol{\alpha}_1 \geq {\mathbf{0}} \notag , \;
\boldsymbol{\alpha}_2=\boldsymbol 1_{p_n} - \boldsymbol{\alpha}_1 \geq {\mathbf{0}} \notag ,\;
\boldsymbol{\alpha}_3 \geq {\mathbf{0}} \notag ,\;
\boldsymbol{\alpha}_4 &\geq& {\mathbf{0}} \notag \\
(2\boldsymbol{\alpha}_1-\boldsymbol 1_{p_n})^\top{\boldsymbol{\Lambda}}_{n}
- \mu_n^{-1} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) &=&{\mathbf{0}}
\notag
\end{eqnarray}
which is also equivalent to
\begin{eqnarray}
\boldsymbol{\alpha}_1 = \frac{1}{2} \Big\{\boldsymbol 1_{p_n} + \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} (\boldsymbol{\alpha}_3 - \boldsymbol{\alpha}_4) \Big\}&\geq& {\mathbf{0}} \notag \\
\boldsymbol{\alpha}_2 =\boldsymbol 1_{p_n} - \boldsymbol{\alpha}_1 = \frac{1}{2} \Big\{\boldsymbol 1_{p_n} - \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} (\boldsymbol{\alpha}_3 - \boldsymbol{\alpha}_4) \Big\}&\geq& {\mathbf{0}} \notag \\
\boldsymbol{\alpha}_3 \geq {\mathbf{0}} \notag,\;
\boldsymbol{\alpha}_4 &\geq& {\mathbf{0}} \notag.
\end{eqnarray}
This reduces to the condition that there exists $(\boldsymbol{\alpha}_3^\top, \boldsymbol{\alpha}_4^\top)^\top \in \mathbb{R^+}^{2p_n}$ such that
\begin{equation}
\label{existence}
\mu_n^{-1} \| {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} (\boldsymbol{\alpha}_3 - \boldsymbol{\alpha}_4) \|_{\infty} \leq 1 .
\end{equation}
Therefore, the dual problem associated with \eqref{ADS-linear} is
\begin{align}
\max_{\boldsymbol{\alpha}_3, \boldsymbol{\alpha}_4 \geq 0} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top \Big[ \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \Big\{\mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} \Big\} \Big] - (\boldsymbol{\alpha}_3+\boldsymbol{\alpha}_4)^\top \boldsymbol 1_{p_n} \nonumber \\
\text{subject to } \mu_n^{-1} \| {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} (\boldsymbol{\alpha}_3 - \boldsymbol{\alpha}_4) \|_{\infty} \leq 1 . \label{eq:dual2}
\end{align}
The condition \eqref{existence} can always be met as long as the matrix ${\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1}$ is nonzero. Indeed, let $\mathbf y \in \mathbb{R}^{p_n}$ be such that ${\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf y \neq \mathbf 0$. Now define
\begin{eqnarray*}
\alpha_{3j} &=& \frac{y_j}{\mu_n^{-1} \|{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf y \|_{\infty}} \mathbf 1(y_j>0), \\
\alpha_{4j} &=& \frac{-y_j}{\mu_n^{-1} \| {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf y \|_{\infty}} \mathbf 1(y_j<0).
\end{eqnarray*}
Clearly, $(\boldsymbol{\alpha}_3^\top, \boldsymbol{\alpha}_4^\top)^\top \in \mathbb{R^+}^{2p_n}$ and \eqref{existence} is satisfied, which ends the proof.
\end{proof}
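The feasibility construction at the end of the proof can be checked numerically. The sketch below is illustrative only, with toy sizes and a synthetic symmetric positive definite stand-in for $\mathbf{A}_n(\tilde{\boldsymbol{\beta}})$ (all names hypothetical): it builds $\boldsymbol{\alpha}_3,\boldsymbol{\alpha}_4$ from an arbitrary $\mathbf y$ exactly as above and verifies~\eqref{existence}.

```python
import numpy as np

rng = np.random.default_rng(1)
p, mu = 5, 100.0                                  # toy sizes (hypothetical)
Li = np.diag(1.0 / rng.uniform(0.5, 2.0, p))      # Lambda_n^{-1}
A = rng.standard_normal((p, p))
A = A @ A.T + np.eye(p)                           # SPD stand-in for A_n(beta_tilde)
G = Li @ A @ Li                                   # Lambda^{-1} A_n Lambda^{-1}, nonzero

y = rng.standard_normal(p)
scale = np.linalg.norm(G @ y, np.inf) / mu        # mu^{-1} ||G y||_inf
alpha3 = np.where(y > 0, y, 0.0) / scale          # as in the proof
alpha4 = np.where(y < 0, -y, 0.0) / scale

assert np.all(alpha3 >= 0) and np.all(alpha4 >= 0)
# condition (existence): mu^{-1} ||G (alpha3 - alpha4)||_inf <= 1 (equals 1 here)
lhs = np.linalg.norm(G @ (alpha3 - alpha4), np.inf) / mu
assert lhs <= 1.0 + 1e-9
```

By construction $\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4$ is a rescaling of $\mathbf y$, so the left-hand side of \eqref{existence} equals $1$ exactly.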
Note that the dual problem~\eqref{eq:dual2} can be equivalently reparameterized in terms of $\boldsymbol{\gamma}=\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4$ as
\begin{align}
\max_{\boldsymbol{\gamma} \in \mathbb{R}^{p_n}} \boldsymbol{\gamma}^\top \Big[ \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \Big\{\mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} \Big\} \Big] - \|\boldsymbol{\gamma}\|_1 \nonumber \\
\text{subject to } \mu_n^{-1} \|\boldsymbol{\gamma}^\top {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1} \|_{\infty} \leq 1 \label{eq:dual}
\end{align}
due to complementary slackness conditions. We now derive optimality conditions ensuring that the Karush-Kuhn-Tucker (KKT) conditions hold, and thus obtain optimal primal and dual solutions.
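As a sanity check of strong duality, one can solve the primal linear program~\eqref{ADS-linear} and the reparameterized dual~\eqref{eq:dual} on synthetic data and compare their optimal values. The sketch below is illustrative only: toy dimensions, synthetic stand-ins for $\mathbf{A}_n(\tilde{\boldsymbol{\beta}})$, $\mathbf{U}_n(\tilde{\boldsymbol{\beta}})$, ${\boldsymbol{\Lambda}}_{n}$, $\mu_n$, and \texttt{scipy} as one possible solver, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
p, mu = 4, 50.0                                   # toy sizes (hypothetical)
Lam = np.diag(rng.uniform(0.5, 2.0, p))
Li = np.linalg.inv(Lam)
A = rng.standard_normal((p, p))
A = A @ A.T + p * np.eye(p)                       # SPD stand-in for A_n(beta_tilde)
beta_tilde = 0.1 * rng.standard_normal(p)
U = np.sqrt(mu) * rng.standard_normal(p)          # stand-in for U_n(beta_tilde)

r = Li @ (U + A @ beta_tilde) / mu                # mu^{-1} Lam^{-1} (U_n + A_n beta_tilde)
G = Li @ A @ Li                                   # Lam^{-1} A_n Lam^{-1} (symmetric here)
M = Li @ A / mu

# primal (ADS-linear): x = (beta, u), minimize sum_j u_j
I, Z = np.eye(p), np.zeros((p, p))
primal = linprog(
    np.concatenate([np.zeros(p), np.ones(p)]),
    A_ub=np.vstack([np.hstack([Lam, -I]), np.hstack([-Lam, -I]),
                    np.hstack([-M, Z]), np.hstack([M, Z])]),
    b_ub=np.concatenate([np.zeros(2 * p), 1 - r, 1 + r]),
    bounds=[(None, None)] * (2 * p),
)

# dual (eq:dual): gamma = gp - gm with gp, gm >= 0; maximize r'gamma - ||gamma||_1
# subject to ||G gamma||_inf <= mu; written below as a minimization
dual = linprog(
    np.concatenate([1 - r, 1 + r]),
    A_ub=np.vstack([np.hstack([G, -G]), np.hstack([-G, G])]),
    b_ub=np.concatenate([mu * np.ones(p), mu * np.ones(p)]),
    bounds=[(0, None)] * (2 * p),
)
assert primal.success and dual.success
# strong duality: primal minimum = dual maximum (= -dual.fun)
assert np.isclose(primal.fun, -dual.fun, atol=1e-6)
```

Both problems are feasible ($\boldsymbol\gamma=\mathbf 0$ is dual feasible), so linear-programming strong duality forces the two optimal values to coincide.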
\begin{lemma}
\label{lemma:opt}
Consider the primal and dual problems defined by \eqref{ADS2} and \eqref{eq:dual}.
Suppose that the matrix ${\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta} }) {\boldsymbol{\Lambda}}_{n}^{-1}$ is non zero and that ${\hat \boldsymbol{\beta}}$ and $\hat{\boldsymbol{\gamma}}$ verify
\begin{align}
\mu_n^{-1} \Big\|{\boldsymbol{\Lambda}}_{n}^{-1}
\boldsymbol \Delta_n({\hat \boldsymbol{\beta}})
\Big\|_\infty & \leq 1 \label{eq:fea1} \\
\mu_n^{-1} \Big\|\hat \boldsymbol{\gamma}^\top {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) {\boldsymbol{\Lambda}}_{n}^{-1} \Big\|_\infty & \leq 1 \label{eq:fea2} \\
\mu_n^{-1}
\hat \boldsymbol{\gamma}^\top {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) \hat \boldsymbol{\beta} & = \|{\boldsymbol{\Lambda}}_n \hat \boldsymbol{\beta} \|_1 \label{eq:slack1} \\
\mu_n^{-1} \hat \boldsymbol{\gamma} ^\top {\boldsymbol{\Lambda}}_{n}^{-1} \boldsymbol\Delta_n({\hat \boldsymbol{\beta}}) & = \|\hat \boldsymbol{\gamma} \|_1. \label{eq:slack2}
\end{align}
Then the Karush-Kuhn-Tucker (KKT) conditions for \eqref{ADS2} are fulfilled and ${\hat \boldsymbol{\beta}}$ and $\hat\boldsymbol{\gamma}$ are the optimal primal and dual solutions.
\end{lemma}
\begin{proof}
We start by writing the Karush-Kuhn-Tucker (KKT) conditions for the problem~\eqref{ADS2}:
\begin{align}
{\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta} &\leq {\boldsymbol u} \label{KKT1} \\
-{\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta}&\leq {\boldsymbol u} \label{KKT2} \\
\mu_n^{-1}{\boldsymbol{\Lambda}}_{n}^{-1} {\boldsymbol \Delta_n(\boldsymbol{\beta})} - \boldsymbol 1_{p_n}& \leq {\mathbf{0}} \label{KKT3} \\
- \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} {\boldsymbol \Delta_n(\boldsymbol{\beta})} - \boldsymbol 1_{p_n} &\leq {\mathbf{0}} \label{KKT4} \\
\boldsymbol{\alpha}_1 \geq 0, \boldsymbol{\alpha}_2 &\geq 0, \label{KKT5} \\
\boldsymbol{\alpha}_3 \geq 0, \boldsymbol{\alpha}_4 &\geq 0, \label{KKT6} \\
\forall i \; \alpha_{1i}\{({\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta})_i -u_i\}&=0 \label{KKT7}\\
\forall i \; \alpha_{2i}\{-({\boldsymbol{\Lambda}}_{n} \boldsymbol{\beta})_i -u_i\}&=0 \label{KKT8}\\
\forall i \; \alpha_{3i}[\mu_n^{-1}\{{\boldsymbol{\Lambda}}_{n}^{-1} {\boldsymbol \Delta_n(\boldsymbol{\beta})}\}_i - 1] &=0 \label{KKT9}\\
\forall i \; \alpha_{4i}[-\mu_n^{-1}\{{\boldsymbol{\Lambda}}_{n}^{-1} {\boldsymbol \Delta_n(\boldsymbol{\beta})}\}_i - 1] &=0 \label{KKT10}\\
\boldsymbol 1_{p_n}-\boldsymbol{\alpha}_1 - \boldsymbol{\alpha}_2 &={\mathbf{0}} \label{KKT11}\\
(\boldsymbol{\alpha}_1 - \boldsymbol{\alpha}_2)^\top {\boldsymbol{\Lambda}}_{n} - \mu_n^{-1} (\boldsymbol{\alpha}_3-\boldsymbol{\alpha}_4)^\top {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) &={\mathbf{0}}. \label{KKT12}
\end{align}
Let ${\hat \boldsymbol{\beta}}$ and $\hat\boldsymbol{\gamma}$ satisfy~\eqref{eq:fea1}-\eqref{eq:slack2}. Conditions \eqref{KKT3} and \eqref{KKT4} are clearly satisfied under~\eqref{eq:fea1}. If one defines $\hat{\boldsymbol \alpha}_3$ and $\hat{\boldsymbol \alpha}_4$ by $\hat{\alpha}_{3i}=\max(0,\hat{\gamma}_i)$ and $\hat{\alpha}_{4i}=\max(0,-\hat{\gamma}_i)$, then $\hat{\boldsymbol{\gamma}}=\hat{\boldsymbol \alpha}_3-\hat{\boldsymbol \alpha}_4$ and~\eqref{KKT6} is satisfied.
Now, we define
\begin{align*}
\hat{\boldsymbol{\alpha}}_1 &= \frac{1}{2} \{\mathbf 1_{p_n} + \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) {\boldsymbol{\Lambda}}_{n}^{-1} (\hat \boldsymbol{\alpha}_3 - \hat \boldsymbol{\alpha}_4)\} \\
\hat{\boldsymbol{\alpha}}_2 &= \frac{1}{2} \{\mathbf 1_{p_n} - \mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) {\boldsymbol{\Lambda}}_{n}^{-1} (\hat \boldsymbol{\alpha}_3 - \hat \boldsymbol{\alpha}_4)\}
\end{align*}
which ensures~\eqref{KKT11} and~\eqref{KKT12}. In addition, \eqref{eq:fea2} implies that~\eqref{KKT5} also holds.
From the definition of $\hat{\boldsymbol \alpha}_3$ and $\hat{\boldsymbol \alpha}_4$, we rewrite~\eqref{eq:slack2} as
\[
\sum_i
\left(
\hat{\alpha}_{3i}[\mu_n^{-1}\{{\boldsymbol{\Lambda}}_{n}^{-1} {\boldsymbol \Delta_n(\hat{\boldsymbol{\beta}})}\}_i - 1]
+
\hat{\alpha}_{4i}[-\mu_n^{-1}\{{\boldsymbol{\Lambda}}_{n}^{-1} {\boldsymbol \Delta_n(\hat{\boldsymbol{\beta}})}\}_i - 1]
\right)
=0.
\]
From \eqref{KKT3}, \eqref{KKT4} and \eqref{KKT6}, each term in the above sum is nonpositive, whereby we deduce that~\eqref{KKT9}-\eqref{KKT10} necessarily hold.
By a similar argument using~\eqref{eq:slack1}, and setting in particular $u_i = |({\boldsymbol{\Lambda}}_n \hat \boldsymbol{\beta})_i|$, we deduce that~\eqref{KKT7}-\eqref{KKT8} hold as well. This latter choice also implies that~\eqref{KKT1}-\eqref{KKT2} are satisfied.
\end{proof}
\subsection{A few auxiliary statements}
Before tackling more specifically the proof of Theorem~\ref{thm:main} for the ALDS estimator, we present a few auxiliary results that will be used.
\begin{lemma} \label{lem:aux} Assume conditions \cond{C:intensity}-\cond{C:initial} hold.\\
(i)
\begin{equation}\label{lambda12}
\|{\boldsymbol{\Lambda}}_{n,11} \|= a_n
, \qquad
\|{\boldsymbol{\Lambda}}_{n,22} ^{-1}\| = \frac1{b_n}.
\end{equation}
(ii) For any $t\in [0,1]$ and $\check \boldsymbol{\beta} = \boldsymbol{\beta}_0 + t(\tilde \boldsymbol{\beta} -\boldsymbol{\beta}_0)$, we have
\begin{align*}
\mathbf{A}_n( {\check \boldsymbol{\beta}})&=O_\mathrm{P}\left(p_n \mu_n\right) \\
\mathbf{A}_{n,1}(\check {\boldsymbol{\beta}}) & =O_\mathrm{P}\left(\sqrt{p_n s_n} \mu_n\right) \\
\mathbf{A}_{n,2}(\check {\boldsymbol{\beta}}) &=O_\mathrm{P}\left(p_n \mu_n\right) \\
\mathbf{A}_{n,11}(\check {\boldsymbol{\beta}}) & =O_\mathrm{P}\left( s_n \mu_n\right)\\
\mathbf{A}_{n,21}(\check {\boldsymbol{\beta}}) &=O_\mathrm{P}\left(\sqrt{p_ns_n} \mu_n \right) \\
\mathbf A_n (\check \boldsymbol{\beta}) - \mathbf A_n (\tilde \boldsymbol{\beta}) &=
O_\mathrm{P}\left(p_n^2 \sqrt{\mu_n}\right) \\
\mathbf A_{n,1} (\check \boldsymbol{\beta}) - \mathbf A_{n,1} (\tilde \boldsymbol{\beta}) &=
O_\mathrm{P}\left( \sqrt{s_np_n^3\mu_n} \right) \\
\mathbf A_{n,11} (\check \boldsymbol{\beta}) - \mathbf A_{n,11} (\tilde \boldsymbol{\beta}) &=
O_\mathrm{P}\left( {\sqrt{s_n^2p_n^2 \mu_n}}\right).
\end{align*}
(iii)
\begin{equation}
\label{eq:Unbetatilde}
\max \left\{ \|\mathbf U_{n}(\tilde \boldsymbol{\beta})\|, \|\mathbf U_{n,2}(\tilde \boldsymbol{\beta})\| \right\} =
O_\mathrm{P} ( \sqrt{p_n^3 \mu_n}).
\end{equation}
\end{lemma}
\begin{proof}
(i) follows from conditions on $a_n$ and $b_n$. \\
(ii) follows from conditions \cond{C:intensity}-\cond{C:cov}, \cond{C:initial} and Lemma~\ref{lem:rhoubeta}. We only prove the assertions for the matrices $\mathbf A_n(\tilde \boldsymbol{\beta})$ and $\mathbf A_n(\check \boldsymbol{\beta})-\mathbf A_n(\tilde \boldsymbol{\beta})$; the other cases follow along similar lines. First,
\begin{align*}
\| \mathbf A_n(\tilde \boldsymbol{\beta}) \| \leq \int_{D_n} \|\mathbf{z}(u)\|^2 \rho(u;\tilde \boldsymbol{\beta}) \mathrm{d} u = O_\mathrm{P}(p_n \mu_n).
\end{align*}
Second, by a Taylor expansion, there exists $\boldsymbol{\beta}^\prime$ on the segment between $\tilde \boldsymbol{\beta}$ and $\check \boldsymbol{\beta}$ such that $\rho(u;\check\boldsymbol{\beta})-\rho(u;\tilde\boldsymbol{\beta})= (\check \boldsymbol{\beta} - \tilde \boldsymbol{\beta} )^\top \mathbf{z}(u)\rho(u;\boldsymbol{\beta}^\prime)$, whereby we deduce that
\begin{align*}
\|\mathbf A_n(\check \boldsymbol{\beta})-\mathbf A_n(\tilde \boldsymbol{\beta})\| &\leq \int_{D_n} \|\mathbf{z}(u)\|^3 \|\check \boldsymbol{\beta}-\tilde \boldsymbol{\beta}\| \rho(u;\boldsymbol{\beta}^\prime) \mathrm{d} u\\
&=O_\mathrm{P}\left(\mu_n p_n^{3/2} \|\tilde \boldsymbol{\beta}-\boldsymbol{\beta}_0\|\right) = O_\mathrm{P} \left(p_n^2 \sqrt{\mu_n} \right).
\end{align*}
(iii) It suffices to prove the bound for $\|\mathbf U_n(\tilde \boldsymbol{\beta})\|$. By a Taylor expansion, there exists $\check \boldsymbol{\beta}$ such that $\mathbf U_n(\tilde \boldsymbol{\beta}) = \mathbf U_n(\boldsymbol{\beta}_0) - \mathbf A_n(\check \boldsymbol{\beta}) (\tilde \boldsymbol{\beta}-\boldsymbol{\beta}_0)$. Using Lemma~\ref{bound}, (ii) and condition~\cond{C:initial}, we obtain
\begin{align*}
\|\mathbf U_n(\tilde \boldsymbol{\beta}) \| = O_\mathrm{P} \left( \sqrt{p_n \mu_n} + p_n \mu_n \sqrt{p_n/\mu_n} \right) = O_\mathrm{P}\left( \sqrt{p_n^3\mu_n}\right).
\end{align*}
\end{proof}
\subsection{Sparsity property for $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$}
The sparsity property of $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ is a consequence of the following lemma.
\begin{lemma} \label{lemma:sparsity}
Let $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ and $\hat \boldsymbol{\gamma}$ satisfy the following conditions
\begin{align}
\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} & =\mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1} \Big \{\mathbf{U}_{n,1}( \tilde{\boldsymbol{\beta}}) + \mathbf{A}_{n,1}(\tilde {\boldsymbol{\beta}})\tilde {\boldsymbol{\beta}} - \mu_n{\boldsymbol{\Lambda}}_{n,11} \sign(\hat \boldsymbol{\gamma}_1) \Big \} \label{eq:betahat1} \\
\hat \boldsymbol{\beta}_{\mathrm{ALDS,2}} & = \mathbf{0} \label{eq:betahat2} \\
\hat \boldsymbol{\gamma}_1 & = \mu_n {\boldsymbol{\Lambda}}_{n,11}\mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1} {\boldsymbol{\Lambda}}_{n,11} \sign(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}). \label{eq:gammahat1} \\
\hat \boldsymbol{\gamma}_2 & = \mathbf{0}. \label{eq:gammahat2}
\end{align}
Then, under the conditions~\cond{C:intensity}-\cond{C:anbn}, the following two statements hold.\\
(i)
\begin{align*}
\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} -\boldsymbol{\beta}_{01} &= \mathbf A_{n,11}(\boldsymbol{\beta}_0)^{-1} \mathbf U_{n,1}(\boldsymbol{\beta}_0) + o_\mathrm{P}\left( \frac{1}{s_n\sqrt{\mu_n}} \right) =
O_\mathrm{P}\left( \sqrt{\frac{s_n}{\mu_n}}\right).
\end{align*}
(ii) With probability tending to 1, $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ and $\hat \boldsymbol{\gamma}$ given by \eqref{eq:betahat1}-\eqref{eq:gammahat2} satisfy conditions~\eqref{eq:fea1}-\eqref{eq:slack2} and are thus the primal and dual optimal solutions (whence the notation $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$).
\end{lemma}
It is worth mentioning that the rate $o_\mathrm{P}(1/s_n\sqrt{\mu_n})$ in Lemma~\ref{lemma:sparsity}~(i) is required to derive the central limit theorem proved in Appendix~\ref{sec:proofCLT_ALDS}. This rate requirement imposes a stronger restriction on the sequence~$a_n$.
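To see how \eqref{eq:betahat1} and \eqref{eq:gammahat1} interlock, it may help to look at a one-dimensional toy instance (all numerical values below are hypothetical, not from the source): with scalar $A_{n,11}=a>0$ and $\Lambda_{n,11}=\lambda>0$, the two equations force $\sign(\hat\gamma_1)=\sign(\hat\beta_1)$, so $\hat\beta_1$ is a soft-thresholded version of $(U+a\tilde\beta)/a$.

```python
import math

# Toy scalar instance (hypothetical values): a = A_{n,11}, lam = Lambda_{n,11},
# U = U_{n,1}(beta~), beta_t = beta~, mu = mu_n.
a, lam, U, beta_t, mu = 2.0, 1.0, 3.0, 0.5, 1.0

# Soft-threshold closed form implied by the pair of fixed-point equations:
z = (U + a * beta_t) / a          # unpenalised target
thr = mu * lam / a                # threshold induced by the penalty
beta_hat = math.copysign(max(abs(z) - thr, 0.0), z)

# Self-consistency with (eq:betahat1)/(eq:gammahat1) in dimension one:
gamma_hat = (mu * lam**2 / a) * math.copysign(1.0, beta_hat)
lhs = (U + a * beta_t - mu * lam * math.copysign(1.0, gamma_hat)) / a
assert abs(lhs - beta_hat) < 1e-12
```

In this toy case $\sign(\hat\gamma)=\sign(\hat\beta)$ automatically, which is exactly the sign compatibility exploited in the proof of (ii).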
\begin{proof}
(i) Using Taylor expansion, there exists $\check \boldsymbol{\beta}$ on the line segment between $\tilde \boldsymbol{\beta}$ and $\boldsymbol{\beta}_0$ such that $\mathbf U_{n,1} (\tilde \boldsymbol{\beta}) =\mathbf U_{n,1} (\boldsymbol{\beta}_0) - \mathbf A_{n,1}(\check \boldsymbol{\beta})(\tilde \boldsymbol{\beta} - \boldsymbol{\beta}_0)$ which leads, by noticing that $\boldsymbol{\beta}_{02}=0$, to
\begin{align*}
\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}-\boldsymbol{\beta}_{01} =&\mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1}
\bigg[ \left\{
\mathbf{A}_{n,1}(\tilde {\boldsymbol{\beta}}) -\mathbf{A}_{n,1}(\check {\boldsymbol{\beta}})\right\} \left\{ \tilde {\boldsymbol{\beta}}-\boldsymbol{\beta}_0\right\} \\
&+ \mathbf U_{n,1}(\boldsymbol{\beta}_0) - \mu_n{\boldsymbol{\Lambda}}_{n,11} \sign(\hat \boldsymbol{\gamma}_1) \bigg].
\end{align*}
Condition~\cond{C:cov} ensures that $\|\mathbf A_{n,11}(\tilde \boldsymbol{\beta})^{-1}\| =O_\mathrm{P}(\mu_n^{-1})$. Let $\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}-\boldsymbol{\beta}_{01} = \mathbf A_{n,11}(\boldsymbol{\beta}_0)^{-1} \mathbf U_{n,1}(\boldsymbol{\beta}_0) + T_1+T_2+T_3$ where
\begin{align*}
T_1&=\left\{\mathbf A_{n,11}(\tilde\boldsymbol{\beta})^{-1} -\mathbf A_{n,11}(\boldsymbol{\beta}_0)^{-1} \right\} \mathbf U_{n,1}(\boldsymbol{\beta}_0) \\
T_2&= \mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1}
\left\{
\mathbf{A}_{n,1}(\tilde {\boldsymbol{\beta}}) -\mathbf{A}_{n,1}(\check {\boldsymbol{\beta}})\right\} \left\{ \tilde {\boldsymbol{\beta}}-\boldsymbol{\beta}_0\right\} \\
T_3 &= \mu_n \mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1} {\boldsymbol{\Lambda}}_{n,11} \sign(\hat \boldsymbol{\gamma}_1).
\end{align*}
Regarding the term $T_1$ we have
\[
T_1 = \mathbf A_{n,11}(\tilde\boldsymbol{\beta})^{-1} \left\{ \mathbf A_{n,11}(\boldsymbol{\beta}_0)- \mathbf A_{n,11}(\tilde\boldsymbol{\beta})\right\} \mathbf A_{n,11}(\boldsymbol{\beta}_0)^{-1}
\mathbf U_{n,1}(\boldsymbol{\beta}_0).
\]
Condition~\cond{C:cov} ensures that $\max(\|\mathbf A_{n,11}(\tilde \boldsymbol{\beta})^{-1}\|,\|\mathbf A_{n,11}(\boldsymbol{\beta}_0)^{-1}\|)= O_\mathrm{P}(\mu_n^{-1})$. Using this, Lemma~\ref{lem:aux} and Lemma~\ref{bound} we obtain
\[
T_1 = O_\mathrm{P} \left( \frac{1}{\mu_n} \sqrt{s_n^2 p_n^2 \mu_n} \frac{1}{\mu_n} \sqrt{s_n \mu_n}\right) = O_\mathrm{P} \left( \frac{\sqrt{s_n^3p_n^2}}{\mu_n}\right).
\]
With similar arguments, we have
\[
T_2 = O_\mathrm{P} \left( \frac{1}{\mu_n} \sqrt{s_np_n^3\mu_n}\sqrt{p_n/\mu_n}\right) =
O_\mathrm{P} \left( \frac{\sqrt{s_np_n^4}}{\mu_n}\right).
\]
Condition~\cond{C:snpn} ensures that
\[
T_1+T_2= O_\mathrm{P}\left( \frac{\sqrt{s_np_n^4}}{\mu_n}\right) = o_\mathrm{P}\left( \frac1{s_n \sqrt{\mu_n}}
\right).
\]
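The simplifications of the $T_1$ and $T_2$ rates above are again pure power-counting; a quick symbolic check (our own addition, with generic positive symbols $s=s_n$, $p=p_n$, $\mu=\mu_n$):

```python
import sympy as sp

s, p, mu = sp.symbols("s p mu", positive=True)

# T1 rate: (1/mu) sqrt(s^2 p^2 mu) (1/mu) sqrt(s mu) = sqrt(s^3 p^2) / mu
T1 = (1 / mu) * sp.sqrt(s**2 * p**2 * mu) * (1 / mu) * sp.sqrt(s * mu)
assert sp.simplify(T1 / (sp.sqrt(s**3 * p**2) / mu)) == 1

# T2 rate: (1/mu) sqrt(s p^3 mu) sqrt(p/mu) = sqrt(s p^4) / mu
T2 = (1 / mu) * sp.sqrt(s * p**3 * mu) * sp.sqrt(p / mu)
assert sp.simplify(T2 / (sp.sqrt(s * p**4) / mu)) == 1
```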
Finally, regarding the last term, we have
\[
T_3 = O_\mathrm{P}\left( \mu_n \frac1{\mu_n} a_n \sqrt{s_n} \right) = O_\mathrm{P} (a_n \sqrt{s_n}).
\]
Condition~\cond{C:anbn} is then sufficient to establish that $T_3=o_\mathrm{P}(1/s_n\sqrt{\mu_n})$, which proves (i), using again condition~\cond{C:cov} and Lemma~\ref{bound}.
(ii) We have to show that with probability tending to 1, $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ and $\hat \boldsymbol{\gamma}$ given by \eqref{eq:betahat1}-\eqref{eq:gammahat2} satisfy conditions~\eqref{eq:fea1}-\eqref{eq:slack2}.
By \eqref{eq:betahat1}-\eqref{eq:gammahat2},
\begin{align*}
\mu_n^{-1}\hat \boldsymbol{\gamma}^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n(\tilde {\boldsymbol{\beta}}) \hat \boldsymbol{\beta}_{\mathrm{ALDS}} &= \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}}) \hat \boldsymbol{\beta}_{\mathrm{ALDS},1} \\
& = \sign(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}})^\top{\boldsymbol{\Lambda}}_{n,11} \big\{\mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})\big\}^{-1} {\boldsymbol{\Lambda}}_{n,11}{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}}) \hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} \\
&= \sign(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}})^\top{\boldsymbol{\Lambda}}_{n,11} \hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} \\
& = \ \|{\boldsymbol{\Lambda}}_{n,11} \hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}\|_1= \|{\boldsymbol{\Lambda}}_{n} \hat \boldsymbol{\beta}_{\mathrm{ALDS}}\|_1,
\end{align*}
so, \eqref{eq:slack1} is satisfied. Now, we want to show that \eqref{eq:slack2} holds. We have
\begin{align*}
\mu_n^{-1} \hat \boldsymbol{\gamma}^\top{\boldsymbol{\Lambda}}_{n}^{-1} \big \{\mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} - \hat \boldsymbol{\beta}_{\mathrm{ALDS}}) \big \} = \mathbf{I} + \mathbf{II},
\end{align*}
where
\begin{align*}
\mathbf{I} = \; & \mu_n^{-1} \hat \boldsymbol{\gamma}^\top{\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) = \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}}),\\
\mathbf{II} = \; & \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} - \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,11}( \tilde {\boldsymbol{\beta}}) \hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} \\
= \; & \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} \\
&- \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \{\mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} -\mu_n {\boldsymbol{\Lambda}}_{n,11} \sign(\hat\boldsymbol{\gamma}_1)\} \\
= \; & \hat \boldsymbol{\gamma}_1^\top \sign(\hat\boldsymbol{\gamma}_1) - \mu_n^{-1} \hat \boldsymbol{\gamma}_1^\top{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}}),
\end{align*}
from \eqref{eq:betahat1}-\eqref{eq:gammahat2}. By summing $\mathbf{I}$ and $\mathbf{II}$, we deduce that \eqref{eq:slack2} holds.
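The two matrix cancellations used above ($\boldsymbol\Lambda_{n,11}\mathbf A_{n,11}^{-1}\boldsymbol\Lambda_{n,11}\boldsymbol\Lambda_{n,11}^{-1}\mathbf A_{n,11}=\boldsymbol\Lambda_{n,11}$, and $\sign(\mathbf b)^\top\boldsymbol\Lambda\mathbf b=\|\boldsymbol\Lambda\mathbf b\|_1$ for diagonal positive $\boldsymbol\Lambda$) are purely algebraic; the sketch below checks them numerically on randomly generated stand-ins (our own, hypothetical values):

```python
import numpy as np

# Random stand-ins (hypothetical) for Lambda_{n,11} (diagonal, positive),
# A_{n,11} (invertible) and beta_hat_ALDS,1.
rng = np.random.default_rng(0)
k = 4
Lam = np.diag(rng.uniform(0.5, 2.0, size=k))
A = rng.normal(size=(k, k))
A = A @ A.T + k * np.eye(k)     # symmetric positive definite, hence invertible
b = rng.normal(size=k)

# Cancellation Lam A^{-1} Lam Lam^{-1} A = Lam (holds for any invertible A):
M = Lam @ np.linalg.inv(A) @ Lam @ np.linalg.inv(Lam) @ A
assert np.allclose(M, Lam)

# sign(b)^T Lam b = ||Lam b||_1 because Lam is diagonal with positive entries:
assert np.isclose(np.sign(b) @ (Lam @ b), np.abs(Lam @ b).sum())
```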
To prove that \eqref{eq:fea2} holds, we use \eqref{eq:gammahat2} and decompose the vector \linebreak$\mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n(\tilde {\boldsymbol{\beta}}){\boldsymbol{\Lambda}}_{n}^{-1} \hat \boldsymbol{\gamma} $ as
\begin{align*}
\mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \mathbf{A}_n(\tilde {\boldsymbol{\beta}}){\boldsymbol{\Lambda}}_{n}^{-1} \hat \boldsymbol{\gamma} = \mu_n^{-1}
\begin{bmatrix}
\mathbf{I}^\prime \\
\mathbf{II}^\prime
\end{bmatrix}
= \mu_n^{-1}
\begin{bmatrix}
{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}}) {\boldsymbol{\Lambda}}_{n,11}^{-1} \hat{\boldsymbol{\gamma}}_1 \\
{\boldsymbol{\Lambda}}_{n,22}^{-1} \mathbf{A}_{n,21}(\tilde {\boldsymbol{\beta}}) {\boldsymbol{\Lambda}}_{n,11}^{-1} \hat{\boldsymbol{\gamma}}_1
\end{bmatrix}.
\end{align*}
By~\eqref{eq:gammahat1}
\begin{align*}
\mu_n^{-1} \| \mathbf{I}^\prime \|_\infty & = \mu_n^{-1} \| {\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}}){{\boldsymbol{\Lambda}}_{n,11}^{-1}} \hat \boldsymbol{\gamma}_1 \|_\infty \\
& = \| \sign(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}) \|_\infty =1.
\end{align*}
Regarding $\mathbf{II}^\prime$, by \eqref{eq:gammahat1}, the conditions on $a_n$ and $b_n$, conditions~\cond{C:intensity}-\cond{C:cov} and \cond{C:initial}, and Lemma~\ref{lem:aux}(i)-(ii), we have
\begin{align*}
\mu_n^{-1}\mathbf{II}^\prime & = \mu_n^{-1} {\boldsymbol{\Lambda}}_{n,22}^{-1} \mathbf{A}_{n,21}(\tilde {\boldsymbol{\beta}}){\boldsymbol{\Lambda}}_{n,11}^{-1} \hat \boldsymbol{\gamma}_1 \\
& ={\boldsymbol{\Lambda}}_{n,22}^{-1} \mathbf{A}_{n,21}(\tilde {\boldsymbol{\beta}}){{\boldsymbol{\Lambda}}_{n,11}^{-1}}{\boldsymbol{\Lambda}}_{n,11}\mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1}{\boldsymbol{\Lambda}}_{n,11} \sign(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}) \\
& ={\boldsymbol{\Lambda}}_{n,22}^{-1} \mathbf{A}_{n,21}(\tilde {\boldsymbol{\beta}})\mathbf{A}_{n,11}(\tilde {\boldsymbol{\beta}})^{-1}{\boldsymbol{\Lambda}}_{n,11} \sign(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}}) \\
&=
O_\mathrm{P} \left(
\frac1{b_n} \sqrt{p_ns_n}\mu_n \frac1{\mu_n} a_n \sqrt{s_n}
\right) =
O_\mathrm{P} \left(
\frac{a_n \sqrt{s_n^2 p_n}}{b_n}
\right) \\
&= O_\mathrm{P} \left( a_n \sqrt{s_n^3 \mu_n} \frac1{b_n}\sqrt{\frac{p_n^3}{\mu_n}} \, \frac{1}{\sqrt{s_n}\,p_n} \right).
\end{align*}
Hence, $\mu_n^{-1}\|\mathbf{II}^\prime\|_\infty = o_\mathrm{P}(1)$ by condition~\cond{C:snpn} and
\eqref{eq:fea2} is satisfied with probability tending to 1. We finally focus on \eqref{eq:fea1}. Note that
\begin{align*}
\mu_n^{-1} {\boldsymbol{\Lambda}}_{n}^{-1} \{\mathbf{U}_{n}( \tilde {\boldsymbol{\beta}}) &+ \mathbf{A}_n( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} - \hat \boldsymbol{\beta}_{\mathrm{ALDS}}) \} = \mu_n^{-1}
\begin{bmatrix}
\tilde {\mathbf{I}} \\
\tilde {\mathbf{II}}
\end{bmatrix} \\
& = \mu_n^{-1}
\begin{bmatrix}
{\boldsymbol{\Lambda}}_{n,11}^{-1} \{\mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} - \hat \boldsymbol{\beta}_{\mathrm{ALDS}}) \} \\
{\boldsymbol{\Lambda}}_{n,22}^{-1} \{\mathbf{U}_{n,2}( \tilde {\boldsymbol{\beta}}) + \mathbf{A}_{n,2}( \tilde {\boldsymbol{\beta}}) (\tilde {\boldsymbol{\beta}} - \hat \boldsymbol{\beta}_{\mathrm{ALDS}}) \}
\end{bmatrix}.
\end{align*}
Regarding $\tilde {\mathbf{I}}$, from~\eqref{eq:betahat1}-\eqref{eq:betahat2},
\begin{align*}
\mu_n^{-1} \|\tilde {\mathbf{I}}\|_\infty = & \mu_n^{-1} \|{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}}) +{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} - {\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,11}( \tilde {\boldsymbol{\beta}}) \hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} \|_\infty \\
= & \mu_n^{-1} \|{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}}) +{\boldsymbol{\Lambda}}_{n,11}^{-1} \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} \\
& - {\boldsymbol{\Lambda}}_{n,11}^{-1} \{\mathbf{U}_{n,1}( \tilde {\boldsymbol{\beta}})+ \mathbf{A}_{n,1}( \tilde {\boldsymbol{\beta}}) \tilde {\boldsymbol{\beta}} -\mu_n{\boldsymbol{\Lambda}}_{n,11} \sign(\hat\boldsymbol{\gamma}_1)\} \|_\infty \\
= & \|\sign(\hat\boldsymbol{\gamma}_1) \|_\infty = 1.
\end{align*}
Now, consider $\tilde {\mathbf{II}}$. By the sparsity of $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ and $\boldsymbol{\beta}_0$ we can write
\[
\mu_n^{-1} \tilde {\mathbf{II}} = \mu_n^{-1} \boldsymbol\Lambda_{n,22}^{-1} \left\{
\mathbf U_{n,2}(\tilde \boldsymbol{\beta}) + \mathbf A_{n,2}(\tilde \boldsymbol{\beta}) (\tilde \boldsymbol{\beta}-\boldsymbol{\beta}_0) +
\mathbf A_{n,21}(\tilde \boldsymbol{\beta}) (\boldsymbol{\beta}_{01}-\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}})
\right\}.
\]
We combine Lemma~\ref{lem:aux}~(i)-(iii) and Lemma~\ref{lemma:sparsity}~(i) to derive
\begin{align*}
\mu_n^{-1} \tilde {\mathbf{II}} &= O_\mathrm{P}
\left\{
\frac{1}{\mu_n} \frac1{b_n}
\left(
\sqrt{p_n\mu_n} + p_n \mu_n \sqrt{\frac{p_n}{\mu_n}} + \sqrt{p_n s_n} \mu_n \sqrt{\frac{s_n}{\mu_n}}
\right)
\right\}\\
& =O_\mathrm{P} \left( \frac{1}{b_n} \sqrt{\frac{p_n^3}{\mu_n}}\right) = o_\mathrm{P}(1)
\end{align*}
by condition~\cond{C:anbn}. Hence, $\mu_n^{-1}\| \tilde{\mathbf{II}} \|_\infty = o_\mathrm{P}(1)$ and
\eqref{eq:fea1} is satisfied with probability tending to 1.
\end{proof}
\subsection{Asymptotic normality for $\hat \boldsymbol{\beta}=\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$}
\label{sec:proofCLT_ALDS}
\begin{proof}
By Lemma~\ref{lem:aux}, $\|\mathbf A_{n,11}(\boldsymbol{\beta}_0)\|=O_\mathrm{P}(s_n\mu_n)$. This and Lemma~\ref{lemma:sparsity} show that
\[
\mathbf A_{n,11}(\boldsymbol{\beta}_0) \left(\hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} -\boldsymbol{\beta}_{01}\right)= \mathbf U_{n,1}(\boldsymbol{\beta}_0) + o_\mathrm{P}(\sqrt{\mu_n}).
\]
Let $\boldsymbol \phi \in \mathbb R^{s_n}\setminus\{0\}$ with $\|\boldsymbol \phi\|<\infty$ and let $\sigma^2_{\boldsymbol \phi} = \boldsymbol \phi^\top \mathbf B_{n,11}(\boldsymbol{\beta}_0) \boldsymbol \phi$. Now
\[
\sigma_{\boldsymbol \phi}^{-1}
\boldsymbol \phi^\top
\mathbf A_{n,11}(\boldsymbol{\beta}_0) \left( \hat \boldsymbol{\beta}_{\mathrm{ALDS,1}} -\boldsymbol{\beta}_{01}\right) =
\sigma_{\boldsymbol \phi}^{-1}\boldsymbol \phi^\top
\mathbf U_{n,1}(\boldsymbol{\beta}_0) +
\sigma_{\boldsymbol \phi}^{-1} \boldsymbol \phi^\top o_\mathrm{P}(\sqrt{\mu_n}).
\]
By condition~\cond{C:Bn}, $\sigma_{\boldsymbol \phi}^{-1}=O(1/\sqrt{\mu_n})$.
The result is therefore deduced from condition~\cond{C:clt} and Slutsky's theorem.
\end{proof}
\section{Resulting $\boldsymbol{\beta}$ estimates for the BCI dataset}
\setlength{\tabcolsep}{1.35pt}
\renewcommand{\arraystretch}{1.1}
\begin{table}[H]
\caption{Values of $\hat \boldsymbol{\beta}_{\mathrm{AL}}$ and $\hat \boldsymbol{\beta}_{\mathrm{ALDS}}$ for the real data example}
\label{tab:betaest}
\centering
\begin{tabular}{lrrrrrrrrrrrrrr}
\hline
& Int & elev & grad & Al & B & Ca & Cu & Fe & K & Mg & Mn & P & Zn & N \\
\hline
AL & -6.252 & 0 & 0 & 0 & -0.461 & 0.448 & 0 & 0.260 & 0 & 0 & -0.244 & 0 & 0 & 0 \\
ALDS & -6.245 & 0 & 0 & 0 & -0.429 & 0.419 & 0 & 0.245 & 0 & 0 & -0.235 & 0 & 0 & 0 \\
\hline
& N.min & pH & AlB & AlCa & AlCu & AlFe & AlK & AlMg & AlMn & AlP & AlZn & AlN & AlN.min & AlpH \\
\hline
AL & 0.076 & 0.477 & -0.162 & 0 & 0 & 0.379 & 0 & -0.514 & 0 & -0.022 & 0 & 0.103 & 0 & -0.033 \\
ALDS & 0.077 & 0.464 & -0.221 & 0 & 0 & 0.394 & 0 & -0.471 & 0 & -0.008 & 0 & 0.103 & 0 & 0 \\
\hline
& BCa & BCu & BFe & BK & BMg & BMn & BP & BZn & BN & BN.min & BpH & CaCu & CaFe & CaK \\
\hline
AL & 0 & 0 & 0.152 & 0 & 0 & 0 & -0.299 & 0 & -0.071 & 0 & 0 & 0 & 0.144 & 0 \\
ALDS & 0 & -0.093 & 0.183 & 0 & 0 & 0 & -0.286 & -0.080 & -0.027 & 0.053 & 0 & 0 & 0.142 & 0 \\
\hline
& CaMg & CaMn & CaP & CaZn & CaN & CaN.min & CapH & CuFe & CuK & CuMg & CuMn & CuP & CuZn & CuN \\
\hline
AL & -0.155 & 0 & 0.104 & 0 & 0 & -0.888 & 0 & 0 & -0.091 & 0 & 0.134 & 0.148 & 0 & 0 \\
ALDS & -0.125 & 0 & 0.095 & 0 & 0 & -0.890 & 0.042 & 0 & -0.013 & 0 & 0.130 & 0.148 & 0 & 0 \\
\hline
& CuN.min & CupH & FeK & FeMg & FeMn & FeP & FeZn & FeN & FeN.min & FepH & KMg & KMn & KP & KZn \\
\hline
AL & 0 & 0 & -0.311 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -0.051 \\
ALDS & 0 & 0 & -0.331 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
& KN & KN.min & KpH & MgMn & MgP & MgZn & MgN & MgN.min & MgpH & MnP & MnZn & MnN & MnN.min & MnpH \\
\hline
AL & 0.198 & 0.580 & 0.023 & 0 & 0 & -0.011 & 0 & 0 & 0 & 0 & 0 & -0.050 & 0.107 & 0 \\
ALDS & 0.161 & 0.530 & 0 & -0.003 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -0.047 & 0.100 & 0 \\
\hline
& PZn & PN & PN.min & PpH & ZnN & ZnN.min & ZnpH & NN.min & NpH & N.minpH & elevgrad &&&\\
\hline
AL & 0 & 0.269 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.054 & 0 &&& \\
ALDS & 0 & 0.258 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.054 & 0 &&& \\
\hline
\end{tabular}
\end{table}
\end{appendix}
\end{document}
\begin{document}
\maketitle \begin{abstract} We consider a multivariate nonlinear Hawkes process in a multi-class setup where particles are organised within two populations of possibly different sizes, such that one population has an excitatory effect on the system while the other has an inhibitory effect. The goal of this note is to present a class of Hawkes processes with stable dynamics without any assumption on the spectral radius of the associated weight function matrix. This illustrates how inhibition in a Hawkes system significantly affects the stability properties of the system. \end{abstract}
{\it Key words} : Multivariate nonlinear Hawkes processes, Stability, Piecewise deterministic Markov processes, Lyapunov functions. \\
{\it MSC 2000} : 60G55; 60G57; 60J25; 60Fxx
\section{Introduction and main result}\label{sec:1} We consider a system of interacting Hawkes processes structured within two populations. We shall label the two populations with ``$+$'' or ``$-$'' signaling that the population acts excitatory or inhibitory on the system, respectively. Let $N_+,N_{-} \in \mathbb {N}$ be the number of units in each population. Introduce weight functions given by \begin{eqnarray}\label{eq:weightfunctions} h_{++} ( t)& =&\frac{c_{++}}{N_{+}} e^{ -\nu_{+}t}, \quad h_{+-}( t)=\frac{c_{+-}}{N_{+}}e^{ -\nu_{+}t},\\ h_{-+}( t)&=&\frac{c_{-+}}{N_{-}} e^{ -\nu_{-}t},\quad h_{--}( t)=\frac{c_{--}}{N_{-}} e^{ -\nu_{-}t}, \end{eqnarray} for $ t \geq 0.$ In the above formula, $h_{+-}$ indicates the weight function from a unit in the excitatory group ``$+$'' to a unit in the inhibitory group ``$-$'', and so on. The coefficients of the system of interacting Hawkes processes are the exponential leakage terms $ \nu_+ > 0 , \nu_- > 0 $ and the weights $ c_{++} , c_{+-} , c_{-+}, c_{--} $ satisfying that \begin{equation}
c_{++} \geq 0, \; c_{+-} \geq 0, \; c_{--} \leq 0, \; c_{-+} \leq 0. \end{equation} The multivariate nonlinear Hawkes process with these parameters is given by \begin{eqnarray}\label{eq:dyn} Z_{+}^i (t) &= & \int_0^t \int_0^\infty
{\bf 1}_{ \{ z \leq \psi^{i}_+ ( X_{+} ({s-})) \}} \pi_{+}^i (ds, dz) , 1 \le i\leq N_{+}, \\ Z_{-}^j (t) &= & \int_0^t \int_0^\infty
{\bf 1}_{ \{ z \leq \psi^{j}_- ( X_{-} ({s-})) \}} \pi_{-}^j (ds, dz) ,1 \le j\leq N_{-}, \\ X_+ (t) &= & e^{- \nu_+ t } X_+ (0) + \frac{c_{++}}{N_+} \sum_{i=1}^{N_{+}}\int_0^t e^{- \nu_+ ( t- s) } Z_{+}^i (d s) + \frac{c_{-+}}{N_-} \sum_{j=1}^{N_{-}}\int_0^t e^{- \nu_+ ( t- s) } Z_{-}^j (ds) ,\\ X_{-} (t) &= & e^{- \nu_- t } X_{-} (0 )+ \frac{c_{+-}}{N_+} \sum_{i=1}^{N_{+}}\int_0^t e^{- \nu_- ( t- s) } Z_{+}^i (ds) +\frac{ c_{--}}{N_-} \sum_{j=1}^{N_{-}}\int_0^t e^{- \nu_- ( t- s) } Z_{-}^j (ds) , \end{eqnarray} where the jump rate functions $\psi^i_+ : \mathbb {R} \to \mathbb {R}_+ , \psi^i_- : \mathbb {R} \to \mathbb {R}_+ $ are given by \begin{equation}\label{eq:aplus} \psi^i_{\pm} (x) = a^i_{\pm} + \max (x, 0), \; \mbox{ where } a^i_{\pm } > 0 , \end{equation} and where the $ \pi^i_{\pm} , i \geq 1, $ are i.i.d. Poisson random measures on $ \mathbb {R}_+ \times \mathbb {R}_+ $ having intensity $ dt dz.$
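For intuition, the system \eqref{eq:dyn} can be simulated exactly by thinning: between jumps the total intensity is non-increasing (positive parts of $X_\pm$ decay, negative parts stay clipped at zero by $\psi$), so the current intensity is a valid dominating rate. The sketch below is our own minimal implementation; the simplification $a^i_\pm\equiv a_\pm$ and all parameter values are illustrative assumptions, not taken from the source.

```python
import math
import random

# Exact thinning simulation of the PDMP (X_+, X_-).  The function name,
# the choice a^i_pm = a_plus / a_minus for all units, and all default
# parameter values are our own illustrative assumptions.
def simulate(T, Np=5, Nm=5, c_pp=1.0, c_pm=1.0, c_mp=-2.0, c_mm=-2.0,
             nu_p=1.0, nu_m=1.0, a_plus=1.0, a_minus=1.0, seed=1):
    random.seed(seed)
    x, y, t = 0.0, 0.0, 0.0            # X_+(0) = X_-(0) = 0
    jumps = []

    def rate(x, y):                    # total jump intensity of the system
        return Np * (a_plus + max(x, 0.0)) + Nm * (a_minus + max(y, 0.0))

    while True:
        lam_bar = rate(x, y)           # non-increasing between jumps => valid bound
        dt = random.expovariate(lam_bar)
        x *= math.exp(-nu_p * dt)      # deterministic exponential decay
        y *= math.exp(-nu_m * dt)
        t += dt
        if t >= T:
            break
        lam_now = rate(x, y)
        if random.random() <= lam_now / lam_bar:        # accept candidate jump
            if random.random() <= Np * (a_plus + max(x, 0.0)) / lam_now:
                x += c_pp / Np; y += c_pm / Np          # excitatory unit spikes
                jumps.append((t, "+"))
            else:
                x += c_mp / Nm; y += c_mm / Nm          # inhibitory unit spikes
                jumps.append((t, "-"))
    return jumps
```

With strongly negative `c_mp` and `c_mm`, each inhibitory spike pushes both coordinates down, which illustrates the stabilising mechanism that the Lyapunov function of the next section exploits.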
Notice that the process $ ( X_+, X_{-} ) $ is a piecewise deterministic Markov process having generator \begin{multline} A g (x, y ) = - \nu_+ x \partial_x g (x,y ) - \nu_- y \partial_y g (x, y ) + \sum_{i=1}^{N_+} \psi^i_+ ( x) [ g ( x + \frac{c_{++}}{N_+} , y+ \frac{c_{+-}}{N_+} )- g(x,y ) ] \\ +\sum_{j=1}^{N_-} \psi_-^j (y ) [ g ( x + \frac{c_{-+}}{N_-} , y + \frac{c_{--}}{N_-} ) - g(x, y ) ] , \end{multline} for sufficiently smooth test functions $g.$
Classical stability results for multivariate nonlinear Hawkes processes found e.g. in \cite{bm} or in the recent paper \cite{manonetal}, which is devoted to the study of the stabilising effect of inhibitions, are stated in terms of an associated weight function matrix $\Lambda,$ imposing that the spectral radius of $\Lambda$ is strictly smaller than one. In this case the process is termed to be {\it subcritical}. This spectral radius stability condition has a natural interpretation in terms of a multitype branching process with immigration which is spatially structured and where each jump of a given type ($+ $ or $-$) gives rise to future jumps of the same or of the opposite type, see \cite{ho}. The subcriticality condition ensures the recurrence of this process (see \cite{kaplan}). In our system, the weight function matrix is given by \begin{equation}\label{eq:Lambda} \Lambda = \left( \begin{array}{cc}
\frac{c_{++}}{\nu_+} & \frac{|c_{-+}|}{\nu_+} \\
\frac{c_{+-}}{\nu_-} & \frac{|c_{--}|}{\nu_-} \end{array} \right) . \end{equation} Notice that in \eqref{eq:Lambda}, the negative synaptic weights appear only through their absolute values. This is due to the fact that exploiting the Lipschitz continuity of the rate functions automatically leads to absolute values and does not allow one to profit from the inhibitory action of $c_{-+} $ and $ c_{--}. $ Obviously, sufficiently fast decay, that is, $ \min (\nu_+ , \nu_-) \gg 1, $ is a sufficient condition for subcriticality.
The purpose of this note is to show how the presence of sufficiently strong (in absolute value) negative weights helps stabilise the process without imposing such a subcriticality condition, and in particular without requiring $ \nu_+, \nu_- $ to be large. To the best of our knowledge, only few results have been obtained on this natural question in the literature. \cite{bm} gives an attempt in this direction but deals only with the case when $ c_{+- } $ and $ c_{-+}$ are of the same sign (see Theorem 6 in \cite{bm}), and \cite{manonetal} work only with the positive part of the weight functions, without profiting from the explicit inhibitory part of the system.
Our approach is based on the construction of a convenient Lyapunov function using the inhibitory part of the dynamics. As such, this approach is limited to the present Markovian framework where the weight functions are decreasing exponentials.
In the following, we shall write $$ c_{++}^* := c_{++} - \nu_+ , \; c_{-- }^* := c_{--} - \nu_{-} .$$ Notice that $ c_{++}^* $ could be interpreted as the net increase of $ X_+ $ due to self-interactions of $X_+ $ with itself. $ c_{--}^* $ is always negative.
\begin{ass}\label{aslol} We assume the following inequalities. \begin{eqnarray}\label{eq:stab}
c_{++}^* +c_{--}^* &<& 0 ,\\
( c_{++}^*- c_{--}^* )^2 &< &4 c_{+- } | c_{-+}| ,\\
c_{++}^* - c_{--}^* &>& 0. \end{eqnarray} \end{ass}
This assumption ensures that the system is balanced. Notice that Assumption \ref{aslol} neither implies nor is implied by the condition that the spectral radius of $\Lambda$ is strictly smaller than $1$. For example, if Assumption \ref{aslol} is satisfied for some parameters $ ( c_{++},c_{+-},c_{-+},c_{--},\nu,\nu ) , $ i.e., $\nu_+ = \nu_- = \nu, $ such that additionally $ c_{++} + c_{--} < 0, $ then for all $C>1 $ and all $\varepsilon>0,$ the set of parameters $( Cc_{++},Cc_{+-},Cc_{-+},Cc_{--},\varepsilon\nu,\varepsilon\nu ) $ satisfies Assumption \ref{aslol} as well. But the associated offspring matrix $\Lambda_{C,\varepsilon}$ of the scaled parameters equals $(C/\varepsilon) \Lambda , $ and thus its spectral radius is scaled by $C/\varepsilon$ too.
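This scaling argument is easy to check numerically. In the sketch below (all parameter values are our own illustrative choices), a parameter set with $\nu_+=\nu_-$ and $c_{++}+c_{--}<0$ satisfies Assumption \ref{aslol}, its scaled version does too, while the spectral radius of the matrix \eqref{eq:Lambda} is multiplied by $C/\varepsilon$.

```python
import numpy as np

# cmp_ and cmm are the (negative) inhibitory weights; all values are illustrative.
def assumption1(cpp, cpm, cmp_, cmm, nup, num):
    cpps, cmms = cpp - nup, cmm - num          # c*_{++} and c*_{--}
    return (cpps + cmms < 0
            and (cpps - cmms) ** 2 < 4 * cpm * abs(cmp_)
            and cpps - cmms > 0)

def spectral_radius(cpp, cpm, cmp_, cmm, nup, num):
    Lam = np.array([[cpp / nup, abs(cmp_) / nup],
                    [cpm / num, abs(cmm) / num]])
    return max(abs(np.linalg.eigvals(Lam)))

base = dict(cpp=1.0, cpm=3.0, cmp_=-3.0, cmm=-2.0, nup=1.0, num=1.0)
assert assumption1(**base)           # Assumption 1 holds; note c_pp + c_mm = -1 < 0

C, eps = 10.0, 0.1                   # scale weights by C, leakage by eps
scaled = dict(cpp=C * 1.0, cpm=C * 3.0, cmp_=-C * 3.0, cmm=-C * 2.0,
              nup=eps * 1.0, num=eps * 1.0)
assert assumption1(**scaled)         # still balanced in the sense of Assumption 1
assert np.isclose(spectral_radius(**scaled),
                  (C / eps) * spectral_radius(**base))
```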
\begin{ass}\label{ass:2d} We assume that either $ \nu_+ \neq \nu_- $ or $ \nu_+ = \nu_- $ and $( c_{++},c_{+-} ) ,( c_{-+},c_{--}) $ are linearly independent. \end{ass}
We are now able to state our main result. It states that under Assumptions \ref{aslol} and \ref{ass:2d}, the process $ X = (X_+, X_{-} ) $ is positive Harris recurrent, together with a strong mixing result. To state our result, for any $ t > 0 $ and for $ z = (x,y ) \in \mathbb {R}^2 ,$ we write $ P_t ( z, \cdot )$ for the transition semigroup of the process, defined through $P_t (z, A) = E_z ( 1_A (X (t)) ) .$ Moreover, for any pair of probability measures $\mu_1, \mu_2 $ on $ {\mathcal B} (\mathbb {R}^2)$ and for any function $ V : \mathbb {R}^2 \to [1, \infty [, $ we put
$$ \| \mu_1- \mu_2 \|_{ V} := \sup_{ g : |g| \le V } | \mu_1 ( g) - \mu_2 (g) | .$$
\begin{theo}\label{theo:harris} Grant Assumptions \ref{aslol} and \ref{ass:2d}. \\ 1) Then the process $ X = (X_+, X_- ) $ is positive recurrent in the sense of Harris, and its unique invariant probability measure $ \mu $ possesses a Lebesgue continuous part. \\
2) There exists a function $V (x, y ) : \mathbb {R}^2 \to [1, \infty [ $ such that $\lim_{ |x| + |y| \to \infty } V ( x,y ) = \infty $ and there exist $ c_1, c_2 > 0 $ such that for all $z \in \mathbb {R}^2$ and all $ t \geq 0, $ \begin{equation}\label{eq:last}
\| P_t(z , \cdot ) - \mu\|_{ V} \le c_1 V (z) e^{ - c_2 t} . \end{equation} \end{theo}
\begin{rem} Notice that if Assumption \ref{ass:2d} is not satisfied, that is, if $ \nu_+ = \nu_- $ and if $$\left( \begin{array}{c} c_{-+}\\
c_{--} \end{array}\right) \in H:= \mathbb {R} \left( \begin{array}{c} c_{++}\\
c_{+-} \end{array}\right),$$ then it is easily shown that almost surely, $ \mathrm{dist} ( X (t) , H) \to 0 $ as $t \to \infty $ and that $H$ is invariant under the dynamics. Moreover, the restriction of the dynamics to $H$ is Harris recurrent, having a unique invariant measure $ \mu $ which is absolutely continuous with respect to the Lebesgue measure on $H.$ However, it is easy to show that the original process $X,$ defined on $ \mathbb {R}^2, $ is not Harris in this case, since it is not $ \mu$-irreducible. \end{rem}
\section{Proof of Theorem \ref{theo:harris}} This section is devoted to the proof of Theorem \ref{theo:harris}.
\subsection{A Lyapunov function for $X$} We start this section with the following useful property. \begin{prop}\label{prop:Feller} The process $ X$ is a Feller process, that is, for any $f : \mathbb {R}^2 \to \mathbb {R}$ which is bounded and continuous, we have that $\mathbb {R}^2 \ni (x, y ) = z \mapsto E_{z} f (X (t) ) = P_t f (z) $ is continuous. \end{prop}
The proof of this result follows from classical arguments, see e.g.\ the proof of Proposition 4.8 in \cite{evaflow}, or \cite{ikeda1966}.
The next result shows that if the cross-interactions, that is, the influence of $ X_+ $ on $X_{-} $ and vice versa, are sufficiently strong, then -- under mild additional assumptions -- it is possible to construct a Lyapunov function for the system that profits mainly from the inhibitory part of the jumps.
\begin{prop}\label{prop:lyapunov} Grant Assumption \ref{aslol} and put $$ V ( x, y ) := \left\{ \begin{array}{ll} V_{++} ( x, y ) := c_{+- } x^2 -c_{-+}y^2 - (c_{++}^* - c_{--}^*) xy & x \in \mathbb {R}_+ , y\in \mathbb {R}_+ \\ V_{+-} (x,y ) := c_{+- } x^2 + q y^2 - (c_{++}^* - c_{--}^*) xy & x\in \mathbb {R}_+ , y \in \mathbb {R}_- \\ V_{-+} (x,y ) := px^2 -c_{-+}y^2 - (c_{++}^* - c_{--}^*) xy & x \in \mathbb {R}_- , y\in \mathbb {R}_+ \\ V_{--} (x,y ) := p x^2 + qy^2 - (c_{++}^* - c_{--}^*) xy & x \in \mathbb {R}_- , y\in \mathbb {R}_- \end{array} \right\} , $$ with $p>0$ so small that $$ - (c_{++}^* - c_{--}^* ) (c_{--} - \nu_+ - \nu_- ) + 2 p c_{-+} > 0 $$ and $q>0$ so large that $$ (c_{++}^* - c_{--}^* ) [ \nu_+ + \nu_- - c_{++} ] + 2 q c_{+- } > 0 \mbox{ and } 4 pq > (c_{++}^* - c_{--}^*)^2 .$$
Then $\lim_{ |x| + |y| \to \infty } V ( x,y ) = \infty $ and there exist $ \kappa, c, K > 0 $ such that \begin{equation}
A V (x,y ) \le - \kappa V ( x, y ) + c 1_{\{ | x| + |y| \geq K\}} . \end{equation} \end{prop}
\begin{proof} We calculate $ A V ( x, y ) = A^1 V (x, y ) + A^2 V (x, y ) , $ with $$ A^1 V ( x, y ) = - \nu_+ x \partial_x V (x,y ) - \nu_- y \partial_y V (x, y ) $$ and $ A^2 $ the jump part of the generator.
{\bf Part 1.1} Suppose first that $ x \geq |c_{-+}|/ N_- , y \geq | c_{--} |/N_- . $ Then $$ A V (x, y ) = A^1 V_{++} (x, y) + A^2 V_{++} (x,y ) = a_{++} x^2 + b_{++} xy + d_{++} y^2 + L_{++} (x,y ) ,$$ where $L_{++} $ is a polynomial of degree $1.$ A straightforward calculus shows that \begin{eqnarray*} a_{++} &=& c_{+-} (c_{++}^* + c_{--}^* ) , \\ b_{++} &=& - (c_{++}^* - c_{--}^* ) (c_{++}^* + c_{--}^* )\\ d_{++} & =& - c_{-+} (c_{++}^* + c_{--}^* ) , \end{eqnarray*} proving that $$ A V( x, y ) = (c_{++}^* + c_{--}^* ) V (x,y) + L_{++} (x,y ) .$$ This implies that there exist $K, \kappa > 0 $ such that $$ A V ( x, y ) \le - \kappa V ( x,y ) $$ for all $ x > K , y > K,$ since $ c_{++}^* + c_{--}^* < 0 $ by assumption.
{\bf Part 1.2} Suppose now that $0 \le x < |c_{-+} |/N_- $ and $y \geq | c_{--} |/N_- .$ Then a jump of one of the inhibitory neurons will lead to a change $ x \mapsto x + c_{-+}/N_- < 0 .$ In this case we obtain $$ A V (x,y) = A V_{++} ( x,y ) + \sum_{j=1}^{N_-}(a^{j}_- + y ) ( V_{-+} ( x + \frac{c_{-+}}{N_-}, y + \frac{c_{--}}{N_-} ) - V_{++} ( x + \frac{c_{-+}}{N_-}, y + \frac{c_{--}}{N_-} )) .$$ But
$$ |V_{-+} ( x + \frac{c_{-+}}{N_-}, y + \frac{c_{--}}{N_-} ) - V_{++} ( x + \frac{c_{-+}}{N_-}, y + \frac{c_{--}}{N_-} )| \le C ,$$
since $ | x| < |c_{-+} |/N_- ,$ and therefore $$ A V (x,y) \le A V_{++} ( x,y ) + L (y) , $$ where $ L(y) $ is a polynomial of degree $1$ in $ y.$
The other case $0 \le y < |c_{--} |/N_- $ and $x \geq | c_{-+} |/N_- $ is treated analogously.
{\bf Part 2.1} Suppose now that $ x \geq |c_{-+}|/N_- , y \leq - c_{+-} /N_+ . $ Then $$ A V (x, y ) = A^1 V_{+-} (x, y) + A^2 V_{+-} (x,y ) = a_{+-} x^2 + b_{+-} xy + d_{+-} y^2 + L_{+-} (x,y ) ,$$ where $L_{+-} $ is a polynomial of degree $1.$ We obtain \begin{eqnarray*} a_{+-} &=& c_{+-} (c_{++}^* + c_{--}^* ) , \\ b_{+-} &=& (c_{++}^* - c_{--}^* ) (\nu_+ + \nu_- - c_{++} ) + 2 q c_{+- } \\ d_{+- } & =& - 2 \nu_- q . \end{eqnarray*} Since $ b_{+-} > 0 $ by choice of $q,$ this implies that for a suitable positive constant $ \kappa > 0 ,$ $$ A V (x,y) \le - \kappa V(x,y) + L_{+-} (x,y) , $$ which allows us to conclude as before.
{\bf Part 2.2} The cases $ x \geq |c_{-+}|/N_- , 0 \geq y > - c_{+-}/N_+ $ or $ 0 \le x < |c_{-+}|/N_- , y \leq - c_{+-}/N_+ $ are treated analogously to Part 1.2.
{\bf Part 3} Suppose now that $ x \le - c_{++} /N_+ , y \geq - c_{- -} /N_- . $ Then $$ A V (x, y ) = A^1 V_{-+} (x, y) + A^2 V_{-+} (x,y ) = a_{-+} x^2 + b_{-+} xy + d_{-+} y^2 + L_{-+} (x,y ) ,$$ where $L_{-+} $ is a polynomial of degree $1$ and where \begin{eqnarray*} a_{-+} &=& -2 \nu_+ p , \\ b_{-+} &=& (c_{++}^* - c_{--}^* ) ( \nu_+ + \nu_- - c_{--} ) + 2 p c_{-+} \\ d_{-+} & =& - c_{-+} (c_{++}^* + c_{--}^* ) . \end{eqnarray*} Notice that by choice of $p,$ $ b_{-+} > 0 .$ The conclusion of this part follows analogously to the previous parts 1.1 and 2.1.
{\bf Part 4} Suppose finally that $ x \le - c_{++}/N_+ , y \le - c_{+ -} /N_+ . $ Then $$ A V (x, y ) = A^1 V_{--} (x, y) + A^2 V_{--} (x,y ) = a_{--} x^2 + b_{--} xy + d_{--} y^2 + L_{--} (x,y ) ,$$ where $L_{--} $ is a polynomial of degree $1$ and where \begin{eqnarray*} a_{--} &=& -2 \nu_+ p , \\ b_{--} &=& (c_{++}^* - c_{--}^* ) (\nu_+ + \nu_- ) \\ d_{--} & =& - 2 \nu_- q , \end{eqnarray*} leading to the same conclusion as in the previous parts. \end{proof}
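The coefficient identities of Part 1.1 can be verified symbolically. The sketch below is our own check: it takes all baseline rates equal to $a_+$, $a_-$ for simplicity, works in the region where the jumped points stay in the $(+,+)$ quadrant so that $V=V_{++}$ throughout, expands $AV_{++}$ and confirms $a_{++}$, $b_{++}$, $d_{++}$.

```python
import sympy as sp

x, y, cpp, cpm, cmp_, cmm, nup, num, Np, Nm, ap, am = sp.symbols(
    "x y c_pp c_pm c_mp c_mm nu_p nu_m N_p N_m a_p a_m")
cpps, cmms = cpp - nup, cmm - num                 # c*_{++}, c*_{--}
V = cpm * x**2 - cmp_ * y**2 - (cpps - cmms) * x * y   # V_{++}

# Generator applied to V_{++}: drift part plus the two jump parts,
# with rates a_p + x and a_m + y (valid in the (+,+) quadrant).
AV = (-nup * x * sp.diff(V, x) - num * y * sp.diff(V, y)
      + Np * (ap + x) * (V.subs([(x, x + cpp / Np), (y, y + cpm / Np)],
                                simultaneous=True) - V)
      + Nm * (am + y) * (V.subs([(x, x + cmp_ / Nm), (y, y + cmm / Nm)],
                                simultaneous=True) - V))

P = sp.Poly(sp.expand(AV), x, y)
lam = cpps + cmms
# Quadratic part of A V_{++} is exactly (c*_{++} + c*_{--}) V_{++}:
assert sp.expand(P.coeff_monomial(x**2) - lam * cpm) == 0          # a_{++}
assert sp.expand(P.coeff_monomial(y**2) + lam * cmp_) == 0         # d_{++}
assert sp.expand(P.coeff_monomial(x * y) + lam * (cpps - cmms)) == 0  # b_{++}
```

The lower-order terms collect into the degree-one polynomial $L_{++}$, so the drift inequality follows from $c_{++}^*+c_{--}^*<0$ exactly as stated.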
As a consequence of Proposition \ref{prop:lyapunov}, the process $X$ is stable in the sense that it necessarily possesses invariant probability measures, possibly several of them. The uniqueness of the invariant probability measure together with the Harris recurrence will follow from the following local Doeblin type lower bound.
\begin{prop}\label{thm:Doeblin} For all $ T> 0 $ and all $z_* = (x_*, y_*) \in \mathbb {R}^2 $ the following holds. There exist $R > 0 , $ an open set $ I \subset \mathbb {R}^2 $ with strictly positive Lebesgue measure and a constant $\beta \in (0, 1), $ depending on $I , R$ and the coefficients of the system, such that \begin{equation}\label{doblinminorization} P_{T} (z , dz' ) \geq \beta 1_C (z) \nu ( dz') , \end{equation} where $ C = B_R ( z_* ) $ is the (open) ball of radius $R$ centred at $z_* ,$ and where $ \nu $ is the uniform probability measure on $ I.$ \end{prop}
\begin{proof} We start with the case $ \nu_+ \neq \nu_- ,$ under the assumption that $ c_{++}, c_{--}, c_{+-}, c_{-+} \neq 0 . $ In this case, \cite{Clinet} in the proof of their Lemma 6.4 establish the lower bound \eqref{doblinminorization} for the four-dimensional Markov process $ \bar X = (X_{++}, X_{+-}, X_{-+}, X_{--} ) $ given by $$
X_{++} (t) = e^{- \nu_+ t } X_{++} (0) + \frac{c_{++}}{N_+} \sum_{i=1}^{N_{+}}\int_0^t e^{- \nu_+ ( t- s) } Z_{+}^i (d s) , $$ $$ X_{-+} (t) = e^{- \nu_+ t } X_{-+} (0)+ \frac{c_{-+}}{N_-} \sum_{j=1}^{N_{-}}\int_0^t e^{- \nu_+ ( t- s) } Z_{-}^j (ds),$$ $$X_{+-} (t) = e^{- \nu_- t } X_{+-} (0) + \frac{c_{+-}}{N_+} \sum_{i=1}^{N_{+}}\int_0^t e^{- \nu_- ( t- s) } Z_{+}^i (ds) ,$$ $$ X_{--} (t) = e^{- \nu_- t } X_{--} (0)+ \frac{ c_{--}}{N_-} \sum_{j=1}^{N_{-}}\int_0^t e^{- \nu_- ( t- s) } Z_{-}^j (ds) , $$ where $ X_{++} (0) + X_{-+} (0) = X_+ (0) , X_{--} (0) + X_{+-} (0) = X_- (0) .$
More precisely, they show that for any $ \bar z_* \in \mathbb {R}^4 ,$ there exist $\bar R > 0 , $ an open rectangle $ \bar I \subset \mathbb {R}^4 $ with strictly positive Lebesgue measure and a constant $\bar \beta \in (0, 1), $ such that $$\bar P_{T} (\bar z , d\bar z' ) \geq \bar \beta 1_{\bar C} (\bar z) \bar \nu ( d \bar z') , $$ where $ \bar C = B_{\bar R} ( \bar z_* ) $ is the (open) ball of radius $\bar R$ centred at $\bar z_* ,$ and where $ \bar \nu $ is the uniform probability measure on $ \bar I.$ The above formula can be interpreted in the following way: for any $ \bar z \in \bar C, $ with probability $\bar \beta, $ the law of $ \bar X (T) $ is equal to the law of $ U = (U_1, U_2, U_3, U_4) ,$ where $ U $ is a uniform random vector on $ \bar I .$ Since $ \bar I $ is supposed to be a rectangle, this implies in particular the independence of its coordinates $ U_1, \ldots , U_4.$
Notice that we have $ X (T) = A \bar X (T) ,$ where $$ A = \left( \begin{array}{cccc} 1&0&1&0\\ 0&1&0&1 \end{array} \right) .$$ We now show how the above result implies the local lower bound for the original process $X.$ For that sake let $ z_* \in \mathbb {R}^2 $ be arbitrary and fix any $ \bar z_* \in \mathbb {R}^4 $ such that $A \bar z_* = z_* .$ Let $\bar R$ be the associated radius and choose $R$ such that $ B_R ( z_* ) \subset A B_{\bar R} ( \bar z_*) .$ Then for all $ z \in B_R (z_* ) $ and $ \bar z \in B_{\bar R} ( \bar z_* ) $ with $ A \bar z = z, $ $$ P_z ( X (T) \in \cdot ) = P_{\bar z} (A \bar X (T) \in \cdot ) \geq \bar \beta \P ( A U \in \cdot ) .$$ Since $$ A U = \left( \begin{array}{c} U_1 + U_3 \\ U_2 + U_4 \end{array} \right) ,$$ by independence of the coordinates $ U_1, \ldots , U_4,$ this implies the desired result for the two-dimensional Markov process $X $ as well.
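For completeness, let us make the last step explicit (an elementary computation in the notation introduced above). If $U_1 \sim \mathcal{U}[a_1, b_1]$ and $U_3 \sim \mathcal{U}[a_3, b_3]$ are independent, then $U_1 + U_3$ possesses the convolution density
$$ h(v) = \frac{1}{(b_1 - a_1)(b_3 - a_3)} \, \lambda \big( [a_1, b_1] \cap [v - b_3, v - a_3] \big) ,$$
$\lambda$ denoting the Lebesgue measure on $\mathbb{R},$ which is continuous and strictly positive on the open interval $]a_1 + a_3, b_1 + b_3[,$ hence bounded from below on any compact subinterval. The same holds for $U_2 + U_4,$ and the two sums are independent, so that the law of $AU$ dominates $\beta \nu$ for some $\beta > 0,$ where $\nu$ is the uniform probability measure on a suitable open rectangle $I \subset \mathbb{R}^2.$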
We finally deal with the case $ \nu_+ = \nu_- $ and $( c_{++},c_{+-} ) ,( c_{-+},c_{--}) $ linearly independent. Fix $ z_* = (x_*, y_* ) $ and $ M> |x_*|+ |y_*| $ arbitrarily and let $ H := \{ z = (x, y ) : |x| \le M, |y|\le M \} .$ Recall \eqref{eq:aplus} and finally introduce the event $E$ given by \begin{itemize} \item $\pi^1_{+} ( [ 0,T] \times [ 0,a^1_{+}]) =1,$ \item $\pi_{+}^1 ( [ 0,T] \times ]a^1_{+}, a^1_{+}+c_{++} + M ]) =0,$ \item $\pi_{+}^i ( [ 0,T] \times [ 0, a^i_{+}+c_{++} + M ]) =0$ for all $ 2 \le i \le N_+,$ \item $\pi^1_{-} ( [ 0,T] \times [ 0,a^1_{-}]) =1,$ \item $\pi_{-}^1 ( [ 0,T] \times ] a^1_{-}, a^1_{-}+c_{+-} + M ]) =0,$ \item $\pi_{-}^j ( [ 0,T] \times [ 0, a^j_{-}+c_{+-} + M ]) =0$ for all $ 2 \le j \le N_- .$ \end{itemize} Define the substochastic kernel $$
Q^T_{z} ( A) =P_z( E\cap \{ X ({T}) \in A\} )=P( E) P_z( X ({T}) \in A | E) . $$ The conditional law of $ X ({T}) $ given $ E,$ under $P_z,$ is equal to the law of $$ Y_z ({T}) =ze^{-\nu_+ T}+e^{-\nu_+ U_{+}}\left(\begin{array}{c}c_{++}/N_+ \\c_{+-}/N_+ \end{array}\right)+ e^{-\nu_+ U_{-}}\left( \begin{array}{c}c_{-+}/N_-\\c_{--}/N_- \end{array}\right) , $$ where the two jump-times $U_{+},U_{-}$ are independent uniform variables on $[ 0,T] .$ Since $$C=\left( \begin{array}{cc} c_{++}/N_+ & c_{-+}/N_-\\c_{+-}/N_+& c_{--}/N_- \end{array} \right) $$ is invertible and the law of $( e^{-\nu_+ U_{+}},e^{-\nu_+ U_{-}})$ is equivalent to the Lebesgue measure on $[ e^{-\nu_+ T},1] ^2, $ the law of $ Y_z ({T})$ has density $$
f_{z}:v\mapsto | \det C|^{-1} f\circ C^{-1}( v-ze^{-\nu_+ T}), $$ where $f$ is the density of $( e^{-\nu_+ U_{+}}, e^{-\nu_+ U_{-}})$. This density is positive on the interior of its support $$ \mathrm{supp}( Y_z ({T}))= e^{-\nu_+ T}z +C [ e^{-\nu_+ T},1]^2 . $$
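Let us make the density $f$ explicit (an elementary computation, only using the notation above). For $U \sim \mathcal{U}[0, T],$ the variable $W = e^{-\nu_+ U}$ satisfies, for $w \in [e^{-\nu_+ T}, 1],$
$$ P ( W \le w ) = P \Big( U \geq - \frac{\log w}{\nu_+} \Big) = 1 + \frac{\log w}{\nu_+ T} ,$$
so that $W$ has density $w \mapsto \frac{1}{\nu_+ T w}$ on $[e^{-\nu_+ T}, 1].$ By independence of $U_+$ and $U_-,$
$$ f(w_+, w_-) = \frac{1}{(\nu_+ T)^2 \, w_+ w_-} \quad \mbox{ on } [e^{-\nu_+ T}, 1]^2 ,$$
which is indeed continuous and strictly positive on the interior of the square.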
Since $C$ is invertible, it is in particular an open mapping. Thus we can find balls $B_r ( v_{0})\subset B_{2r} ( v_{0}) \subset C [ e^{- \nu_+ T},1]^2 $ for all $T>1.$ Take now $T$ so large that $e^{-\nu_+ T}\sup_{v \in H} \| v \| <r.$ For such $T$ and all $ z \in H$ we have $$
\overline{B}_r ( v_{0}) \subset e^{-\nu_+ T} z + B_{2r} ( v_{0})\subset \mathrm{supp}( Y_z ({T})) . $$ Note now that $H\times \overline{B}_r ( v_{0}) \ni (z,v) \mapsto f_{z}( v) $ is continuous, so the positivity of the density gives $\inf_{z \in H,v\in \overline{B}_r ( v_{0}) } f_{z}( v)=:\alpha>0.$ We therefore conclude that $$ Q^T_{z}( A)\geq P( E) \cdot \alpha \cdot \lambda ( A\cap B_r ( v_{0}) ), $$ for all $z \in H,$ where $ \lambda $ denotes the Lebesgue measure on $\mathbb {R}^2.$ This proves the desired result. \end{proof}
We now have all the ingredients at hand to conclude the proof of Theorem \ref{theo:harris}.
\begin{proof}[Proof of Theorem \ref{theo:harris}] 1) We apply Proposition \ref{thm:Doeblin} with $z_* = 0 .$ Let $R$ be the associated radius.
By Proposition \ref{prop:lyapunov}, we know that for a suitable compact set $K \subset \mathbb {R}^2, $ $X $ comes back to $K $ infinitely often almost surely. For $ z = ( x, y ), $ write \begin{equation}\label{eq:flowy}
\varphi_t (z) = ( \varphi^{(1)}_t ( x) ,\varphi^{(2)}_t (y) ) = (e^{ - \nu_+ t}x , e^{- \nu_- t} y) \end{equation}
for the flow of the process in between successive jumps and let $ \| z\|_1 := |x| + |y|.$ Then \begin{equation}
\sup_{z \in K, t \geq 0} \| \varphi_t (z) \|_1 =: F < \infty \; \; \mbox{ and } \; \; \sup_{z \in K} \| \varphi_t (z) \|_1 \to 0 \end{equation} as $t \to \infty .$ Therefore there exists $t_* $ such that $\varphi_t (z) \in B_{R } ( 0) $ for all $t \geq t_* , $ for all $ z \in K .$ Hence, $$ \inf_{z\in K} P_z ( X ({t_* + s }) \in B_R ( 0 ), 0 \le s \le 2T ) > 0 . $$ Consequently, the Markov chain $(X ({kT}))_{k \in \mathbb {N}} $ visits $ B_{R } ( 0 )$ infinitely often almost surely. \\ The standard regeneration technique (see e.g. \cite{dashaeva}) allows us to conclude that $(X ({kT}))_{k \in \mathbb {N}} $ and therefore $(X(t))_t $ are Harris recurrent. This concludes the proof of the Harris recurrence of the process.
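Let us note in passing that, thanks to the explicit form \eqref{eq:flowy} of the flow, the choice of $t_*$ can be made quantitative: since
$$ \| \varphi_t (z) \|_1 = e^{- \nu_+ t} |x| + e^{- \nu_- t} |y| \le e^{- ( \nu_+ \wedge \nu_-) t } \| z \|_1 \le e^{- ( \nu_+ \wedge \nu_-) t } F $$
for all $z \in K,$ any $ t_* \geq \frac{1}{\nu_+ \wedge \nu_-} \log ( F / R ) $ works, provided $F > R$ (otherwise $t_* = 0$ does the job).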
2) The sampled chain $ (X ({kT }))_{k \geq 0 }$ is Feller according to Proposition \ref{prop:Feller}. Moreover it is $ \nu-$irreducible, where $ \nu $ is the measure introduced in Proposition \ref{thm:Doeblin}, associated with the point $z_* = (0,0) .$ Since $\nu$ is the uniform measure on some open set of strictly positive Lebesgue measure, the support of $\nu $ has non-empty interior. Theorem 3.4 of \cite{MT1992} implies that all compact sets are `petite' sets of the sampled chain. The Lyapunov condition established in Proposition \ref{prop:lyapunov} allows us to apply Theorem 6.1 of \cite{MT1993}, which implies the second assertion of the theorem. \end{proof}
\begin{thebibliography}{99} \bibitem{bm} {\sc Br\'emaud, P., Massouli\'e, L.} \newblock Stability of nonlinear Hawkes processes. \newblock {\em The Annals of Probability}, 24(3) (1996), 1563--1588.
\bibitem{Clinet} {\sc Clinet, S., Yoshida, N.} \newblock Statistical inference for ergodic point processes and application to Limit Order Book. \newblock {\em Stoch. Proc. Appl.}, 127 (2017), 1800--1839.
\bibitem{manonetal} {\sc Costa, M., Graham, C., Marsalle, L., Tran, V. C.} \newblock Renewal in Hawkes processes with self-excitation and inhibition. \newblock {\em arXiv preprint arXiv:1801.04645}, 2018.
\bibitem{dfh} {\sc Delattre, S., Fournier, N., Hoffmann, M.} \newblock Hawkes processes on large networks. \newblock {\em Ann. Appl. Probab.}, 26 (2016), 216--261.
\bibitem{SusEva}
{\sc Ditlevsen, S., L\"ocherbach, E.}
\newblock Multi-class oscillating systems of interacting neurons.
\newblock {\em Stoch. Proc. Appl.}, 127 (2017), 1840--1869.
\bibitem{ho} {\sc Hawkes, A. G., Oakes, D.} \newblock A cluster process representation of a self-exciting process. \newblock {\em J. Appl. Probab.}, 11 (1974), 493--503.
\bibitem{evaflow} {\sc H\"opfner, R., L\"ocherbach, E.} \newblock Statistical models for Birth and Death on a Flow: Local absolute continuity and likelihood ratio processes. \newblock {\em Scandinavian Journal of Statistics}, 26(1) (1999), 107--128.
\bibitem{ikeda1966} {\sc Ikeda, N., Nagasawa, M., Watanabe, S.} \newblock A construction of Markov processes by piecing out. \newblock {\em Proc. Japan Acad.}, 42(4) (1966), 370--375.
\bibitem{dashaeva} {\sc L\"ocherbach, E., Loukianova, D.} \newblock On Nummelin splitting for continuous time Harris recurrent Markov processes and application to kernel estimation for multi-dimensional diffusions. \newblock {\em Stoch. Proc. Appl.}, 118 (2008), 1301--1321.
\bibitem{kaplan} {\sc Kaplan, N.} \newblock The Multitype Galton-Watson Process with Immigration. \newblock {\em Ann. Probab.}, 6 (1973), 947--953.
\bibitem{MT1992} {\sc Meyn, S.P., Tweedie, R.L.} \newblock Stability of Markovian processes I: Criteria for discrete-time chains. \newblock {\em Adv. Appl. Probab.}, 24 (1992), 542--574.
\bibitem{MT1993} {\sc Meyn, S.P., Tweedie, R.L.} \newblock Stability of Markovian processes III: Foster-Lyapunov criteria for continuous-time processes. \newblock {\em Adv. Appl. Probab.}, 25 (1993), 487--548. \end{thebibliography}
\end{document}
\section{A priori estimates} \begin{prop}\label{prop:2} Grant Assumption \ref{ass:1}. Any solution $(X^N_t)_{t\geq 0}$ to \eqref{eq:dyn} satisfies that \begin{equation}\label{eq:nice} \frac1N \sum_{i=1}^N \E \int_0^t f ( X^{N, i }_s ) ( (X^{N, i }_s)^2 + \sigma^2 ) ds \le \frac3N \sum_{i=1}^N \E ( (X^{N, i }_0)^2) + 4 \sigma^2 f ( \sqrt{2} \sigma ) t \end{equation} and \begin{equation} \frac1N \sum_{i=1}^N \E (X^{N, i }_t)^2 \le \frac1N \sum_{i=1}^N \E ( (X^{N, i }_0)^2) + \frac{4 \sigma^2}{3 } f( \sqrt{2} \sigma ) t. \end{equation} \end{prop}
\begin{proof} For $ x = ( x^1, \ldots , x^N), $ put $ V(x) := \frac1N \sum_{i=1}^N (x^i)^2 $ and let $ V_t := V( X^N_t) .$ Then \begin{multline*} d V_t = \frac2N \sum_{i=1}^N b ( X^{N, i }_t ) X^{N, i }_t dt - \frac1N \sum_{i=1}^N (X^{N, i }_{t-})^2 \int_\mathbb {R} \int_0^\infty
{\bf 1}_{ \{ z \le f ( X^{N, i}_{t-}) \}} {\mathbf{N}}^i (dt,du, dz) \\ + \frac{1}{N} \sum_{i=1}^N \sum_{ j \neq i } \int_\mathbb {R} \int_0^\infty [ \frac{2 u X^{N, i}_{t-}}{\sqrt{N}} + \frac{u^2 }{N} ] {\bf 1}_{ \{ z \le f ( X^{N, j}_{t-}) \}} {\mathbf{N}}^j (dt,du, dz) . \end{multline*} Taking expectation and writing $ v_t= \E V_t, $ this yields \begin{equation}\label{eq:vt} d v_t \le - \frac1N \sum_{i=1}^N \E [ f ( X^{N, i }_t ) ( (X^{N, i }_t)^2 - \sigma^2 ) ] dt. \end{equation} Now we use that $ x^2 - \sigma^2 \geq \frac{x^2 + \sigma^2}{3} - \frac{4 \sigma^2}{3} {\bf 1}_{ \{ x^2 \le 2 \sigma^2 \} } $ and the fact that $f$ is non-decreasing to deduce from this that $$ \frac1N \sum_{i=1}^N \E \int_0^t f ( X^{N, i }_s ) ( (X^{N, i }_s)^2 + \sigma^2 ) ds \le 3 v_0 + 4 \sigma^2 f ( \sqrt{2} \sigma ) t, $$ which is the first assertion, and that $$ d v_t \le \frac{4 \sigma^2 }{3} f( \sqrt{2} \sigma ) dt ,$$ implying the second assertion. \end{proof}
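The elementary inequality $ x^2 - \sigma^2 \geq \frac{x^2 + \sigma^2}{3} - \frac{4 \sigma^2}{3} {\bf 1}_{ \{ x^2 \le 2 \sigma^2 \} }$ used in the proof can be checked by distinguishing the cases $x^2 \le 2 \sigma^2$ and $x^2 > 2\sigma^2.$ The following small Python snippet (purely a numerical sanity check, not part of the argument) confirms it on a grid of values:

```python
import numpy as np

# Numerical sanity check of the pointwise inequality
#   x^2 - s^2 >= (x^2 + s^2)/3 - (4 s^2 / 3) * 1_{x^2 <= 2 s^2},
# where s plays the role of sigma.
def inequality_holds(s, xs):
    lhs = xs ** 2 - s ** 2
    rhs = (xs ** 2 + s ** 2) / 3.0 - (4.0 * s ** 2 / 3.0) * (xs ** 2 <= 2.0 * s ** 2)
    return bool(np.all(lhs >= rhs - 1e-12))

xs = np.linspace(-10.0, 10.0, 100001)
assert all(inequality_holds(s, xs) for s in (0.1, 0.5, 1.0, 2.0, 7.3))
```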
\begin{rem} Let us mention that the a priori bounds of Proposition \ref{prop:2} also imply the existence of a solution to \eqref{eq:dyn}. \end{rem}
\section{Convergence of the associated empirical measures} We endow the set ${\mathbb D}(\mathbb {R}_+, \mathbb {R} )$ of c\`adl\`ag functions on $\mathbb {R}_+$ taking values in $\mathbb {R} $ with the topology of the Skorokhod convergence on compact time intervals, see Jacod and Shiryaev \cite{js}.
\begin{theo}\label{theo:6} Grant Assumption \ref{ass:1}. Consider a probability distribution $g_0$ on $\mathbb {R}$ such that $\int_\mathbb {R} y^2 g_0(dy) = v_0 <\infty$. For each $N\geq 1$, consider the unique solution $(X^N_t)_{t\geq 0}$ to \eqref{eq:dyn} starting from some i.i.d. $g_0$-distributed initial conditions $X^{N,i}_0$.
\vskip0.2cm
(i) The sequence of processes $(X^{N,1}_t)_{t\geq 0}$ is tight in ${\mathbb D}(\mathbb {R}_+, \mathbb {R} )$.
\vskip0.2cm
(ii) The sequence of empirical measures $ \mu_N=N^{-1}\sum_{i=1}^N \delta_{(X^{N,i}_t)_{t\geq 0}}$ is tight in ${\mathcal P}({\mathbb D}(\mathbb {R}_+, \mathbb {R} ))$. \end{theo}
\begin{proof} First, it is well-known that point (ii) follows from point (i) and the exchangeability of the system, see Sznitman \cite[Proposition 2.2-(ii)]{s}. We thus only prove (i). To show that the family $((X^{N,1}_t)_{t\geq 0})_{N\geq 1}$ is tight in ${\mathbb D}(\mathbb {R}_+, \mathbb {R} )$, we use the criterion of Aldous, see Jacod and Shiryaev \cite[Theorem 4.5 page 356]{js}. It is sufficient to prove that
\vskip0.2cm
(a) for all $ T> 0$, all $\varepsilon >0$, $ \lim_{ \delta \downarrow 0} \limsup_{N \to \infty } \sup_{ (S,S') \in A_{\delta,T}}
\P ( |X_{S'}^{N, 1 } - X_S^{N , 1 } | > \varepsilon ) = 0$, where $A_{\delta,T}$ is the set of all pairs of stopping times $(S,S')$ such that $0\leq S \leq S'\leq S+\delta\leq T$ a.s.,
\vskip0.2cm
(b) for all $ T> 0$, $\lim_{ K \uparrow \infty } \sup_N
\P ( \sup_{ t \in [0, T ] } |X_t^{N, 1 }| \geq K ) = 0$.
\vskip0.2cm To check (a), consider $(S,S')\in A_{\delta,T}$ and write \begin{multline*} X_{S'}^{N, 1 } - X_S^{N , 1 } = - \int_S^{S'} \int_\mathbb {R} \int_0^\infty X^{ N, 1 }_{s- } {\bf 1}_{\{ z \le f ( X_{s- }^{N, 1} ) \}} {\mathbf{N}}^1 (ds, du, dz ) + \int_S^{S'} b(X^{N, 1 }_s) ds \\ + \frac{1}{ \sqrt{N} } \sum_{j=2}^N \int_S^{S'} \int_\mathbb {R} \int_0^\infty u {\bf 1}_{\{ z \le f ( X_{s- }^{N, j} ) \}} {\mathbf{N}}^j (ds, du, dz ) , \end{multline*} implying that \begin{multline*}
|X_{S'}^{N, 1 } - X_S^{N , 1 }| \le | \int_S^{S'} \int_\mathbb {R} \int_0^\infty X^{ N, 1 }_{s- } {\bf 1}_{\{ z \le f ( X_{s- }^{N, 1} ) \}} {\mathbf{N}}^1 (ds, du, dz ) | \\
+ \| b \|_\infty \delta + | \frac{1}{ \sqrt{N} } \sum_{j=2}^N \int_S^{S'} \int_\mathbb {R} \int_0^\infty u {\bf 1}_{\{ z \le f ( X_{s- }^{N, j} ) \}}
{\mathbf{N}}^j (ds, du, dz ) | =: I_{S, S'} + \| b \|_\infty \delta + |J_{S, S'} |, \end{multline*} since $b$ is bounded.
We first note that $I_{S,S'}>0$ implies that $\tilde I_{S,S'}:= \int_S^{S'} \int_\mathbb {R} \int_0^\infty {\bf 1}_{\{ z \le f ( X_{s- }^{N, 1} ) \}} {\mathbf{N}}^1 (ds, du, dz)\geq 1$, whence $$ \P ( I_{S, S'} > 0 )\leq \P (\tilde I_{S,S'}\geq 1)\leq \E[\tilde I_{S,S'}]\le \E\Big[ \int_S^{S+\delta} f( X_s^{N, 1 } ) ds \Big] \le \| f \|_\infty \delta, $$ since $ f$ is bounded. We proceed similarly to check that $$
\P ( |J_{S, S'}| \geq \varepsilon ) \le \frac{1}{\varepsilon^2} \E[(J_{S,S'})^2 ]\leq \frac{\sigma^2}{N\varepsilon^2 } \sum_{j=2}^N\E\Big[ \int_S^{S+\delta} f( X_s^{N, j} ) ds\Big]
\le \frac{\sigma^2}{\varepsilon^2} \| f \|_\infty \delta . $$ To check (b), we write, using the same notation as above,
$$ \sup_{s \le T} | X_s^{N, 1}| \le \int_0^T \int_\mathbb {R} \int_0^\infty |X^{ N, 1 }_{s- } | {\bf 1}_{\{ z \le f ( X_{s- }^{N, 1} ) \}} {\mathbf{N}}^1 (ds, du, dz ) + \| b\|_\infty T + \sup_{s \le T }|J_ {0, s}| ,$$ where \begin{multline*}
\E \int_0^T \int_\mathbb {R} \int_0^\infty |X^{ N, 1 }_{s- } | {\bf 1}_{\{ z \le f ( X_{s- }^{N, 1} ) \}} {\mathbf{N}}^1 (ds, du, dz ) = \E \int_0^T |X^{ N, 1 }_{s } | f ( X_{s }^{N, 1} ) ds\\
\le \E \int_0^T [ (X^{ N, 1 }_{s } )^2 + 1 ] f ( X_{s }^{N, 1} ) ds \le C_T , \end{multline*} where $C_T$ does not depend on $N, $ which follows from \eqref{eq:nice}. Moreover, since $ J_{0, t } $ is a square integrable martingale, the Doob inequality gives
$$ \E \sup_{ s \le T} | J_{0, s }| \le C \left( \E [J_{0, T }^2] \right)^{1/2} \le C \sigma \left( \int_0^T \E [ f( X_s^{N, 1} ) ] ds \right)^{1/2} \le C \sigma ( \|f\|_\infty T )^{1/2} ,$$ concluding the proof. \end{proof}
\begin{rem} One could probably obtain sharper results under the additional assumption that $$ x b(x) \le - C x^2 $$
for all $|x| \geq K $ for a suitable $K.$ \end{rem}
\section{The limit process} We start with an informal discussion of how the limit process of the particle system $X^N $ should a priori look, if it exists. So we suppose that there exists a process $ (Y^1, Y^2 , Y^3, \ldots ) \in {\mathbb D} ( \mathbb {R}_+, \mathbb {R})^\mathbb {N} $ such that for all $ K > 0, $ we have the weak convergence $ {\mathcal L }(X^{N, 1 }, \ldots , X^{N, K} ) \to {\mathcal L} ( Y^1, \ldots, Y^K) $ in ${\mathbb D} (\mathbb {R}_+, \mathbb {R} )^K .$ Since the law of the $N-$particle system $ (X^{N, 1}, \ldots, X^{N, N} ) $ is symmetric, the law of $ Y $ must be exchangeable, that is, for all finite permutations $\pi, $ we have that $ {\mathcal L} ( Y^{\pi ( 1) }, Y^{\pi ( 2) } , \ldots ) = {\mathcal L} (Y).$ In particular, the random limit $$ \mu := \lim_{N\to \infty}\frac1N \sum_{i=1}^N \delta_{Y^i } $$ exists.
Supposing that $ \mu_N$ converges, it necessarily converges towards $\mu. $ Therefore, $Y$ should solve the limit system \begin{equation}\label{eq:dynlimitinformal} Y^i_t = Y^i_0 + \int_0^t b(Y^i_s) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y^i_{s- } {\bf 1}_{ \{ z \le f ( Y^i_{s-}) \}} {\mathbf{N}}^i (ds,du, dz) + \sigma \int_0^t \sqrt{ \mu_s ( f) } d B_s , \quad i \in \mathbb {N}, \end{equation} where $(B_t)_{t\geq 0}$ is a standard one-dimensional Brownian motion which is independent of the Poisson random measures.
{\it Discussion of $ \mu.$}
The presence of the common Brownian motion $ B$ implies that even in the large population limit, particles do not become independent. However, they are conditionally independent given the Brownian motion path. Therefore, $\mu $ will be the conditional law of the solution given the Brownian path, that is, $P-$almost surely
$$ \mu ( \cdot ) = P ( Y^i \in \cdot | (B_t)_{ t \geq 0 } ) = P( Y^i \in \cdot | B ) ,$$ for any $ i \in \mathbb {N} .$ The conditioning on $B$ reflects the correlations between the particles.
We are now going to give a precise mathematical definition of what we call a {\it strong solution of the non-linear limit process}. \begin{defin} Fix some $T > 0 $ and let $ (\Omega, ({\mathcal F}_t)_{ t \in [0, T ] }, P) $ be a filtered probability space on which are defined an $ ({\mathcal F}_t)_{ t \in [0, T ] }-$Poisson random measure $ {\mathbf{N}} ( ds, du, dz ) $ and an $({\mathcal F}_t)_{ t \in [0, T ] }-$Brownian motion $ B$ in dimension one. We say that an $({\mathcal F}_t)_{ t \in [0, T ] }-$adapted process $ (Y_t)_{ t \in [0, T ] } $ is a strong solution of the non-linear limit problem if \begin{equation}\label{eq:dynlimit} Y_t = Y_0 + \int_0^t b(Y_s) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y_{s- } {\bf 1}_{ \{ z \le f ( Y_{s-}) \}} {\mathbf{N}} (ds,du, dz) + \sigma \int_0^t \sqrt{ \mu_s ( f) } d B_s , \end{equation} where the process $ \mu_t $ is $({\mathcal F}_t)_{ t \in [0, T ] }-$adapted such that $P-$almost surely,
$$ \mu ( \cdot ) = P ( Y \in \cdot | B ) .$$ \end{defin} Proving the well-posedness of the limit equation \eqref{eq:dynlimit} is not evident. The common jumps of the particles, due to their scaling with $ 1/\sqrt{N} $ and the fact that they are centred, create, by the Central Limit Theorem, the single Brownian motion $ B $ which underlies each particle's motion and which induces a common noise factor for all particles. Proving trajectorial uniqueness for \eqref{eq:dynlimit} demands some non-trivial work, due to the simultaneous presence of jumps and of the diffusive term. Roughly speaking, the jump terms demand to work in an $L^1 - $framework, and the diffusive terms in an $L^2-$framework. Carl Graham \cite{carl}, in his important paper of 1992, proposes a unified approach dealing both with jump and with diffusion terms in a non-linear framework, and we shall rely on his ideas in the sequel. The presence of the random volatility term $ \mu_t ( f), $ which involves a conditional expectation, causes however additional technical difficulties in our present frame, due to the fact that conditional expectation does not behave in a continuous way. Another difficulty comes from the fact that the jumps behave in a ``non-Lipschitz'' way, comparable to the TCP process: even if two particles are close just before a jump time, if one of the particles jumps but not the other, the distance between the two right after the jump might be very big. For this reason, a classical Wasserstein-$1-$coupling is difficult for the jump terms.
In order to overcome these difficulties, we need to work under the following additional assumption on the jump rate of each particle.
\begin{ass}\label{ass:2}
$f \in C^3(\mathbb {R} , \mathbb {R}_+ )$ is strictly increasing, bounded and bounded away from zero. Moreover, $\sup_{x} [f'(x)/f(x)+|f''(x)|/f'(x) + |f''' (x)| /| f'' (x)| + |b' (x)| / f'(x) ]<\infty$. \end{ass}
As a consequence, we have that for a suitable constant $C,$
$$ |f'' ( x) - f'' (y) | + |f'(x) - f' (y) | + |b(x) - b(y) | \le C | f(x) - f(y) |.$$
\begin{theo}\label{prop:42} Suppose that $f$ satisfies Assumptions \ref{ass:1} and \ref{ass:2}.
Then pathwise uniqueness holds for the nonlinear SDE \eqref{eq:dynlimit}. \end{theo}
\begin{proof} Consider two solutions $ (Y_t)_{t \geq 0}$ and $ (\tilde Y_t)_{t \geq 0 } , $ defined on the same probability space and driven by the same Poisson random measure ${\mathbf{N}} $ and the same Brownian motion $ B,$ and with $ Y_0 = \tilde Y_0.$ We consider $ Z_t := f(Y_t) - f( \tilde Y_t) ,$ for all $ t \le T.$ Then
\begin{multline*} Z_t = \int_0^t \left( b ( Y_s ) f' ( Y_s ) - b ( \tilde Y_s) f' ( \tilde Y_s) \right) ds +\frac12 \int_0^t ( f'' ( Y_s) \mu_s ( f) - f'' ( \tilde Y_s ) \tilde \mu_s ( f) ) \sigma^2 ds \\ + \int_0^t ( f' ( Y_s) \sqrt{\mu_s (f)} -f' (\tilde Y_s ) \sqrt{ \tilde \mu_s (f)} ) \sigma d B_s \\ - \int_0^t \int_\mathbb {R} \int_0^\infty \left( f (Y_{s- }) - f( \tilde Y_{s-}) \right) {\bf 1}_{ \{ z \le f ( Y_{s-}) \wedge f ( \tilde Y_{s-}) \}} {\mathbf{N}} (ds, du, dz)\\ + \int_0^t \int_\mathbb {R} \int_0^\infty [f(0)- f( Y_{s-} )] {\bf 1}_{ \{ f ( \tilde Y_{s-} ) < z \le f ( Y_{s-} ) \}} {\mathbf{N}} (ds, du, dz) \\ + \int_0^t \int_\mathbb {R} \int_0^\infty [ f( \tilde Y_{s-} ) - f(0) ] {\bf 1}_{ \{ f ( Y_{s-} ) < z \le f ( \tilde Y_{s-} ) \}} {\mathbf{N}} (ds,du, dz) = : A_t + M_t +\Delta_t , \end{multline*} where $ A_t $ denotes the bounded variation part of the evolution, $M_t$ the martingale part and $ \Delta_t$ the sum of the three jump terms. Notice that $$M_t= \int_0^t ( f' ( Y_s) \sqrt{\mu_s (f)} -f' (\tilde Y_s ) \sqrt{ \tilde \mu_s (f)} ) \sigma d B_s$$ is a square integrable martingale since $ f$ is bounded.
We wish to obtain a control on $ |Z^* _t | := \sup_{ s\le t } |Z_s | .$ We first take care of the jumps of $ |Z_t|.$ Notice first that, since $f$ is bounded, \begin{multline*}
\Delta (x,y):= (f(x) \wedge f(y)) | f (x) - f(y ) | + | f (x ) - f( y ) | \; \Big| | f ( x \wedge y ) - f(0) | - | f (x) - f(y ) | \Big| \\
\le C | f (x) - f( y ) | \end{multline*} implying that
$$ \E \sup_{s \le t } | \Delta_s | \le C \E \int_0^t | f(Y_s) - f(\tilde Y_s ) | ds \le C t \, \E |Z_t^* | . $$
Moreover, for a constant $C$ depending on $\sigma^2 ,$ $\| f \|_\infty , \| f'\|_\infty, \| f'' \|_\infty $ and $ \| b \|_\infty , $ \begin{multline*}
\E \sup_{ s \le t } | A_s | \le C \int_0^t \E |b ( Y_s ) - b ( \tilde Y_s ) | ds + C \int_0^t \E |f' ( Y_s ) - f' ( \tilde Y_s ) | ds
\\
+ C \left[ \int_0^t \E | f'' ( Y_s ) -f '' ( \tilde Y_s ) | ds + \int_0^t \E | \mu_s ( f) - \tilde \mu_s ( f) | ds \right] . \end{multline*}
We know that $ |b ( Y_s ) - b ( \tilde Y_s ) | + |f' ( Y_s ) - f' ( \tilde Y_s ) | + |f'' ( Y_s ) - f'' ( \tilde Y_s ) | \le C |f ( Y_s ) - f ( \tilde Y_s ) |= C | Z_s| .$ Therefore,
$$ \E \sup_{ s \le t } | A_s | \le C \E \left[ \int_0^t | Z_s | ds + \int_0^t | \mu_s ( f) - \tilde \mu_s ( f) | ds \right].$$ Moreover,
$$ |\mu_s (f)- \tilde \mu_s (f) | = \Big| \E \left( f ( Y_s ) - f ( \tilde Y_s ) | B \right) \Big| \le \E \left( | f ( Y_s ) - f ( \tilde Y_s )| \; | B \right) = \E ( |Z_s| | B) ,$$ and thus,
$$ \E \int_0^t | \mu_s ( f) - \tilde \mu_s ( f) | ds \le \E \int_0^t |Z_s| ds \le t \E | Z^*_t| .$$ Putting all these upper bounds together we conclude that for a constant $C$ not depending on $t,$
$$ \E \sup_{s \le t} |A_s| \le C t \E |Z_t^*| .$$
Finally, we treat the martingale part using the Burkholder-Davis-Gundy inequality, and we obtain $$
\E \sup_{s \le t} |M_s| \le C \E \left[ \left( \int_0^t (f' (Y_s ) \sqrt{ \mu_s ( f) } - f' (\tilde Y_s ) \sqrt{ \tilde \mu_s ( f) })^2 ds \right)^{1/2}\right].$$ But \begin{multline}\label{eq:varquadratique}
(f' (Y_s ) \sqrt{ \mu_s ( f) } - f' (\tilde Y_s ) \sqrt{ \tilde \mu_s ( f) })^2 \le C \left[ (f' (Y_s ) - f' (\tilde Y_s ))^2 + (\sqrt{ \mu_s ( f) } - \sqrt{ \tilde \mu_s ( f) })^2 \right] \\
\le C | Z_t^*|^2 + C (\sqrt{ \mu_s ( f) } - \sqrt{ \tilde \mu_s ( f) })^2 , \end{multline}
where we have used once more that $ | f' (x) - f' (y) | \le C | f(x) - f(y) | $ and that $f$ and $f'$ are bounded.
Finally, since $ f$ is bounded away from zero,
$$| \sqrt{ \mu_s ( f) } - \sqrt{ \tilde \mu_s ( f) }|^2 \le C | \mu_s ( f) - \tilde \mu_s ( f) |^2 \le C \left( \E ( |Z_s^*| | B ) \right)^2 \le C \left( \E ( |Z_t^*| | B ) \right)^2,$$
since $ |Z_s^* | \le | Z_t^*| .$ We deduce that
$$ \E \sup_{s \le t} |M_s| \le C \sqrt{t} \E | Z_t^* | .$$
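For completeness, the passage from $| \mu_s(f) - \tilde \mu_s (f)|$ to the difference of the square roots above relies only on the elementary bound
$$ \big| \sqrt{a} - \sqrt{b} \big| = \frac{ |a - b| }{ \sqrt{a} + \sqrt{b} } \le \frac{ |a-b|}{2 \sqrt{c}} \quad \mbox{ for all } a, b \geq c > 0 ,$$
applied with $a = \mu_s (f) $ and $b = \tilde \mu_s (f),$ which are both bounded from below by the lower bound $c$ of $f,$ since $\mu_s$ and $\tilde \mu_s$ are probability measures.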
The above upper bounds imply that, for a constant $C$ not depending on $t, $
$$ \E |Z_t^*| \le C (t + \sqrt{t} ) \E | Z_t^* | ,$$
and therefore, for $ t $ sufficiently small, $ \E |Z_t^*| = 0,$ which implies the assertion. \end{proof}
The same ideas now allow us to prove the following existence result.
\begin{theo} Suppose that $f$ satisfies Assumptions \ref{ass:1} and \ref{ass:2}.
Then there exists a strong solution of the nonlinear SDE \eqref{eq:dynlimit}. \end{theo}
\begin{proof} The proof is done using a classical Picard-iteration. Therefore we introduce the sequence of processes $ Y_t^{[0] } \equiv Y_0 , $ and $$ Y^{[n+1]}_t := Y_0 + \int_0^t b( Y_s^{[n]} ) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y^{[n+1]}_{s- } {\bf 1}_{ \{ z \le f ( Y^{[n]}_{s-}) \}} {\mathbf{N}} (ds,du, dz) + \sigma \int_0^t \sqrt{ \mu^n_s ( f) } d B_s ,$$ where
$$ \mu^n = P ( Y^{[n]} \in \cdot | B ) .$$
Then the same strategy as the one of the proof of Theorem \ref{prop:42} allows us to show that $$\delta_t^n := \E \sup_{s \le t } | f ( Y_s^{[n]} ) - f( Y_s^{[n-1]} ) | $$ satisfies $$ \delta_t^n \le C (t + \sqrt{t} ) \delta_t^{n-1} ,$$ for all $ n \geq 1 , $ for a constant $C$ only depending on the parameters of the model, but neither on $ n $ nor on $t. $ Choose $t_1 $ such that $$ C (t_1 + \sqrt{t_1} ) \le \frac13.$$
Since $ \sup_{s \le t_1 } | f ( Y_s^{[0]} ) | = f ( Y_0) \le \| f \|_\infty, $ we deduce from this that
$$ \delta_{t_1}^n \le 3^{- n } \| f \|_\infty .$$ We want to deduce from this that the sequence of processes $ (f(Y^{[n]} ))_n $ converges in the Skorokhod space $D( [0, t_1], \mathbb {R} ) .$ For that sake, for c\`adl\`ag functions $\eta, \xi\in D([0,t_1 ],\mathbb {R})$ we consider the distance $d_S(\eta,\xi)$ defined by \begin{equation} \label{def:skorohod_like_dist}
d_S(\eta,\xi)=\inf_{\phi\in I}\left\{\|\phi\|_{[0,t_1],*}\vee \|\eta-\xi(\phi)\|_{[0,t_1],\infty}\right\}, \end{equation}
where $I$ is the set of non-decreasing functions $\phi:[0,t_1]\to [0,t_1]$ satisfying $\phi(0)=0$ and $\phi(t_1)=t_1$ and where for any function $\phi\in I$ the norm $\|\phi\|_{[0,t_1],*}$ is defined as $$
\|\phi\|_{[0,t_1],*}=\sup_{0\leq s<t\leq t_1}\left|\log\left(\frac{\phi(t)-\phi(s)}{t-s}\right)\right|. $$ The metric $d_S(\cdot,\cdot)$ is equivalent to the classical Skorokhod distance. More importantly, the metric space $(D([0,t_1],\mathbb {R}),d_S)$ is Polish, see for instance \cite{Billingsley:68}.
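To illustrate the role of the time change $\phi$ (an illustration only, not used in the sequel; the numerical values are chosen ad hoc): two indicator paths whose jump times differ slightly are at uniform distance $1,$ but a mild piecewise-linear time change aligns the jumps, so that $d_S$ is small. In the following Python sketch, with $t_1 = 1,$ the quantity `ds_bound` evaluates $\|\phi\|_{[0,t_1],*} \vee \|\eta-\xi(\phi)\|_{[0,t_1],\infty}$ for one particular $\phi$ (taking absolute values of the log-slopes) and is therefore an upper bound for $d_S(\eta,\xi)$:

```python
import math

# Two indicator paths on [0, 1] with slightly shifted jump times.
eta = lambda t: 1.0 if t >= 0.5 else 0.0   # jump at time 0.5
xi  = lambda t: 1.0 if t >= 0.55 else 0.0  # jump at time 0.55

# Piecewise-linear time change phi: [0,1] -> [0,1] with phi(0.5) = 0.55.
def phi(t):
    return t * 0.55 / 0.5 if t <= 0.5 else 0.55 + (t - 0.5) * 0.45 / 0.5

# For a piecewise-linear phi, ||phi||_* is the largest |log slope|.
phi_star = max(abs(math.log(0.55 / 0.5)), abs(math.log(0.45 / 0.5)))

grid = [k / 10000 for k in range(10001)]
sup_dist = max(abs(eta(t) - xi(t)) for t in grid)       # uniform distance: 1.0
aligned  = max(abs(eta(t) - xi(phi(t))) for t in grid)  # jumps aligned: 0.0
ds_bound = max(phi_star, aligned)  # upper bound on d_S(eta, xi), about 0.105

assert sup_dist == 1.0 and aligned == 0.0 and ds_bound < 0.2
```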
Clearly,
$$ d_S ( \eta, \xi ) \le \sup_{s \le t_1} | \eta ( s) - \xi ( s) | .$$ Therefore,
$$ \sum_{n \geq 1} \P ( d_S ( f ( Y^{[n]} ), f ( Y ^{[n-1]} )) > 2^{-n} ) \le \sum_{n \geq 1} \P ( \sup_{s \le t_1 } | f ( Y_s^{[n]} ) - f( Y_s^{[n-1]} ) | > 2^{-n} ) \le \sum_{n \geq 1 } 2^n \delta_{t_1}^n \le \| f \|_\infty \sum_{n \geq 1} (2/3)^n < \infty ,$$ implying that almost surely, \begin{equation}\label{eq:fort}
d_S ( f ( Y^{[n]} ), f ( Y ^{[n-1]} )) \le \sup_{s \le t_1 } | f ( Y_s^{[n]} ) - f( Y_s^{[n-1]} ) | \le 2^{-n } , \end{equation} for sufficiently large $n.$ This implies that almost surely, the sequence of processes $ (f(Y^{[n]} ))_n $ is a Cauchy sequence, hence it converges in the Skorokhod space $D( [0, t_1], \mathbb {R} ) $ to a limit process which, since $f$ is continuous and strictly increasing, is necessarily of the form $ f ( Y ) $ for some limit process $Y.$ This implies the almost sure convergence $ Y^{[n]} \to Y $ in $D( [0, t_1], \mathbb {R} ) .$ Finally, \eqref{eq:fort} also implies that
$$ \sup_{s \le t_1} | f ( Y_s^{[n]} ) - f (Y_s) | \to 0 $$ almost surely, as $n \to \infty .$
It remains to prove that $Y$ is a solution of the limit equation, which follows by standard arguments (note that the jump term does not cause trouble because it is of finite activity). The most important point is to notice that
$$ \mu_t^n ( f) = \E ( f ( Y_t^{[n]} ) | B ) \to \E ( f (Y_t) | B ) $$ almost surely, which follows from the almost sure convergence of $ f ( Y_t^{[n]} ) \to f (Y_t ) ,$ using dominated convergence.
Finally, once the convergence is proven on the time interval $ [0, t_1 ], $ we can proceed iteratively over successive intervals $ [ k t_1, (k+1) t_1] $ to conclude the proof. \end{proof}
\begin{rem} The above result implies trajectorial uniqueness. It is natural to ask whether this also implies uniqueness in law of the limit process, for instance via a Yamada-Watanabe type argument. \end{rem}
\section{Convergence to the limit system} \subsection{An auxiliary particle system and coupling using KMT} Let $N_t$ be a standard Poisson process of rate $1,$ independent of ${\mathbf{N}}^1 .$ Let moreover $ U_n , n \geq 1, $ be i.i.d. variables, distributed according to $ \mu, $ independent of everything else. Put finally $ Z_t := \sum_{n = 1}^{N_t} U_n , $ which is a centered compound Poisson process.
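The process $Z$ can be simulated in a straightforward way; the following Python sketch (illustrative only; the jump law $\mu$ is taken to be standard Gaussian here, which is just one admissible centered choice, and the helper name `compound_poisson_path` is ours) generates the jump times and the corresponding values of $Z$:

```python
import numpy as np

# Illustrative simulation of Z_t = sum_{n <= N_t} U_n, with N a rate-1
# Poisson process and U_1, U_2, ... i.i.d. with law mu; mu is chosen
# standard Gaussian here purely as an example of a centered jump law.
def compound_poisson_path(T, rng):
    """Return the ordered jump times on [0, T] and the running values of Z."""
    n_jumps = int(rng.poisson(T))                  # N_T for a rate-1 process
    times = np.sort(rng.uniform(0.0, T, n_jumps))  # jump times given N_T
    jumps = rng.standard_normal(n_jumps)           # U_1, ..., U_{N_T}
    return times, np.cumsum(jumps)

times, Z = compound_poisson_path(100.0, np.random.default_rng(0))
assert len(times) == len(Z)
assert np.all(np.diff(times) >= 0.0)  # jump times are sorted
```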
Then we can rewrite the dynamics of $ X_t^{N, 1 } $ as $$ X_t^{N, 1 } = X_0^{N, 1 } + \int_0^t b( X^{N, 1}_s ) ds - \int_0^t \int_\mathbb {R} \int_0^\infty
X^{N, 1}_{s-} {\bf 1}_{ \{ z \le f ( X^{N, 1}_{s-}) \}} {\mathbf{N}}^1 (ds,du, dz) + \frac{1}{\sqrt{N} } Z_{A_t^{N, X} } , $$ where $$ A_t^{N, X} = \sum_{ j = 2}^N \int_0^t f ( X_s^{N, j } ) ds . $$
The important point is that we can couple the centered compound Poisson process $ Z$ with a Brownian motion. Indeed,
\begin{lem}\label{lem:KMT} The centered compound Poisson process can be constructed on the same sample space as $ \sigma B_t , $ with $ \sigma^2 = \mathrm{Var} (U_1) , $ such that
$$ \sup_{t \geq 0} \frac{ | Z_t - \sigma B_t|}{ \log t \vee 2 } \le K < \infty $$ almost surely, where $K$ is a random variable having exponential moments. \end{lem} Applying the above result, we will show that $X^{N, 1 }$ behaves, for large $N,$ as the following process \begin{eqnarray}\label{eq:dynapprox} Y^{N, 1}_t &= & X^{N,1}_0 + \int_0^t b(Y^{N, 1}_s ) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y^{N, 1}_{s-} {\bf 1}_{ \{ z \le f ( Y^{N, 1}_{s-}) \}} {\mathbf{N}}^1 (ds,du, dz) \\ &&+\frac{\sigma}{\sqrt{N} } B_{A_t^{N, Y}} , \nonumber \end{eqnarray} where $B$ is the standard one-dimensional Brownian motion of Lemma \ref{lem:KMT}, where the time change is given by $$ A_t^{N, Y}= \sum_{j=2}^N \int_0^t f ( Y_s^{N, j} ) ds ,$$ and where the $ Y^{N, j } $ follow the same dynamics as $ Y^{N, 1 },$ see \eqref{eq:dynapproxbis} below.
Let us now describe more in detail the coupling we are going to construct. Based on Lemma \ref{lem:KMT}, we rewrite \begin{equation}\label{eq:dec}
X^{N, 1}_t = \tilde X^{N, 1 }_t + R_t^N \end{equation} where \begin{equation}\label{eq:rewrite}
\tilde X^{N, 1 }_t = X_0^{N, 1 } + \int_0^t b( X^{N, 1}_s ) ds
- \int_0^t \int_\mathbb {R} \int_0^\infty
X^{N, 1}_{s-} {\bf 1}_{ \{ z \le f ( X^{N, 1}_{s-}) \}} {\mathbf{N}}^1 (ds,du, dz) + \frac{\sigma}{\sqrt{N} } B_{A_t^{N, X} } \end{equation} and
$$ R_t^N = \frac{1}{\sqrt{N} } ( Z_{A_t^{N, X} } -\sigma B_{A_t^{N, X} } ) \mbox{ is such that } | R_t^N | \le \frac{1}{\sqrt{N} } \log ( \|f\|_\infty N ) K \le C N^{-1 /2 } (\log N) K ,$$ where $K$ is the random variable of Lemma \ref{lem:KMT}. Notice that the martingale part in \eqref{eq:rewrite} can be written as $$\frac{1}{\sqrt{N} } B_{A_t^{N, X}} = \int_0^t \sqrt{ \frac1N \sum_{j=2}^N f ( X_s^{N, j} ) } d W_s , $$ for some standard one-dimensional Brownian motion $W.$
Let us now make precise the dynamics of \eqref{eq:dynapprox}. We take $\tilde {\mathbf{N}}^i (ds,du, dz) , 2 \le i \le N, $ i.i.d. Poisson random measures, having intensity measure $ ds \mu ( du ) dz $ each, which are independent of $ {\mathbf{N}}^1 , $ of the compound Poisson process $Z $ and of the Brownian motion $W.$ We let $ Y^{N, 1 } $ be the strong solution of the stochastic differential equation driven by $ W, {\mathbf{N}}^1, \tilde {\mathbf{N}}^i, 2 \le i \le N , $ given by \begin{eqnarray}\label{eq:dynapproxbisbis} Y^{N, 1}_t &= & X^{N,1}_0 + \int_0^t b(Y^{N, 1}_s ) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y^{N, 1}_{s-} {\bf 1}_{ \{ z \le f ( Y^{N, 1}_{s-}) \}} {\mathbf{N}}^1 (ds,du, dz) \\ &&+ \sigma \int_0^t \sqrt{ \frac1N \sum_{j=2}^N f ( Y_s^{N, j} ) } d W_s, \nonumber \end{eqnarray}
and we complete \eqref{eq:dynapproxbisbis} by \begin{eqnarray}\label{eq:dynapproxbis} Y^{N, i}_t &= & X^{N,i}_0 + \int_0^t b(Y^{N, i}_s ) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y^{N, i}_{s-} {\bf 1}_{ \{ z \le f ( Y^{N, i}_{s-}) \}} \tilde {\mathbf{N}}^i (ds,du, dz) \\ &&+ \sigma \int_0^t \sqrt{ \frac1N \sum_{j=1, j \neq i }^N f ( Y_s^{N, j} ) } d W_s, \; 2 \le i \le N . \nonumber \end{eqnarray}
\begin{rem} We stress that the above coupling is constructed for the evolution of the first particle $ X^{N, 1 }$ only: we express the small jumps of $ X^{N, 1}$ by means of a compound Poisson process which is then approximated by a Brownian motion. We then use this same Brownian motion to construct the first component $ Y^{N, 1 } $ of the auxiliary particle system $ Y^N.$ Notice that $ Y^{N, 1 } $ is coupled to $ X^N$ since it uses the same Poisson random measure $ {\mathbf{N}}^1 $ and the same Brownian motion $ W.$ In this way $ X^N $ and $Y^N $ are coupled. Of course, by exchangeability of the particles, it is not important which particle we take as the representative one, but it is important to notice that the coupling does indeed depend on this choice. \end{rem}
In the sequel, we shall consider $ Z^{N, 1} _t := f(Y^{N, 1}_t) - f( X^{N, 1}_t) ,$ for all $ t \le T.$ Using the decomposition $ X^{N, 1} = \tilde X^{N, 1 } + R^N $ of \eqref{eq:dec}, we have
$$ | f( X_t^{N, 1 } ) - f ( \tilde X^{N, 1 }_t ) | \le \| f' \|_\infty C N^{-1 /2 } (\log N) K .$$ As a consequence, it remains to control $$ \tilde Z_t^{N, 1 } := f ( Y^{N , 1 }_t ) - f ( \tilde X^{N, 1 }_t ) ,$$ which is done using It\^o's formula. The same arguments as those given in the proof of Theorem \ref{prop:42} allow us to conclude. We recall them briefly in what follows. We have $$ \tilde Z^{N, 1}_t = A^N_t + M^N_t +\Delta^N_t ,$$ where \begin{multline*} \Delta^N_t = - \int_0^t \int_\mathbb {R} \int_0^\infty \left( f (Y^{N, 1}_{s- }) - f( X^{N, 1}_{s-}) \right) {\bf 1}_{ \{ z \le f ( Y^{N, 1}_{s-}) \wedge f ( X^{N, 1}_{s-}) \}} {\mathbf{N}} ^1 (ds, du, dz)\\ + \int_0^t \int_\mathbb {R} \int_0^\infty [f(0)- f( Y^{N, 1}_{s-} )] {\bf 1}_{ \{ f ( X^{N, 1}_{s-} ) < z \le f ( Y^{N, 1}_{s-} ) \}} {\mathbf{N}}^1 (ds, du, dz) \\ + \int_0^t \int_\mathbb {R} \int_0^\infty [ f( X^{N, 1}_{s-} ) - f(0) ] {\bf 1}_{ \{ f ( Y^{N, 1}_{s-} ) < z \le f ( X^{N, 1}_{s-} ) \}} {\mathbf{N}}^1 (ds,du, dz) . \end{multline*} This term is treated as in the proof of Theorem \ref{prop:42}. Moreover, with $ \mu^{N, X, 1} _t := \frac1N \sum_{i=2}^N \delta_{X^{N, i }_t } , $ and $ \mu^{N, Y, 1} _t := \frac1N \sum_{i=2}^N \delta_{Y^{N, i }_t } ,$ \begin{multline*}
A_t^N = \int_0^t (f' (Y^{N , 1 }_s ) b ( Y^{N , 1 }_s ) - f' ( X^{N , 1 }_s ) b ( X^{N , 1 }_s )) ds \\ + \frac12 \int_0^t ( f'' ( Y^{N, 1}_s) \mu^{N, Y, 1 }_s ( f) - f'' ( X^{N ,1 }_s ) \mu^{N, X, 1 }_s ( f)) \sigma^2 ds \end{multline*} which is handled thanks to Assumption \ref{ass:2}, following the lines of the proof of Theorem \ref{prop:42}.
Finally, $$M^N_t= \sigma \int_0^t ( f' ( Y^{N, 1 }_s) \sqrt{\mu^{N, Y, 1}_s (f)} -f' (X^{N, 1}_s ) \sqrt{ \tilde \mu^{N, X, 1}_s (f)} ) d W_s $$ is controlled using the Burkholder-Davis-Gundy inequality again, which gives $$
\E \sup_{s \le t} |M^N_s|\le C \E \left[ \left( \int_0^t \left(f' ( Y^{N, 1 }_s) \sqrt{\mu^{N, Y, 1}_s (f)} - f' (X^{N, 1}_s ) \sqrt{ \tilde \mu^{N, X, 1}_s (f)}\right)^2 ds \right)^{1/2}\right] $$ which is controlled as in \eqref{eq:varquadratique}, leading to
$$ \E \sup_{s \le t} |M^N_s| \le C \sqrt{t} \left( \frac1N \sum_{j=2}^N \E \sup_{s \le t} | Z_s^{N, j } | + \E \sup_{s \le t } | Z_s^{N, 1 } | \right) .$$
By exchangeability of $ (Y^{N, 1 }, \ldots, Y^{N, N} ),$ we end up with the upper bound
$$ \E \sup_{s \le t} |M^N_s| \le C \sqrt{t} \E \sup_{s \le t} | Z_s^{N, 1 } | .$$ Summing up the above steps,
$$ \E \sup_{ s \le t } |Z^{N, 1}_s| \le C (t + \sqrt{t} ) \E \sup_{ s \le t } | Z^{N, 1}_s | + C \frac{\log N}{\sqrt{N}} ,$$ and concluding as in the proof of Theorem \ref{prop:42}, we deduce the following \begin{theo} Grant Assumptions \ref{ass:1} and \ref{ass:2}. Then for any $ T < \infty $ there exists a constant $C_T$ only depending on $ T$ and on the parameters of the system, but not on $N, $ such that for all $ t \le T,$
$$ \E | f(X_t^{N, 1 }) - f(Y_t^{N, 1 } ) | \le C_T \frac{\log N}{\sqrt{N}}.$$ \end{theo}
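The passage from the last estimate before the theorem to the bound of the theorem is a standard absorption-and-iteration argument; here is a sketch, with $C$ a constant that may change from line to line.

```latex
% Starting from
%   \E \sup_{s \le t} |Z^{N,1}_s|
%     \le C ( t + \sqrt t )\, \E \sup_{s \le t} |Z^{N,1}_s| + C \frac{\log N}{\sqrt N},
% pick t_0 > 0 so small that C ( t_0 + \sqrt{t_0} ) \le 1/2. Absorbing the first
% term on the right-hand side gives
\E \sup_{s \le t_0} |Z^{N,1}_s| \;\le\; 2 C\, \frac{\log N}{\sqrt N} .
% Restarting the argument on the intervals [t_0, 2t_0], [2t_0, 3t_0], \ldots,
% the error accumulated at the beginning of each interval only affects the
% constant, so that after \lceil T / t_0 \rceil steps,
\E \sup_{s \le T} |Z^{N,1}_s| \;\le\; C_T\, \frac{\log N}{\sqrt N} ,
% with C_T depending only on T and on the parameters of the system.
```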
\subsection{Convergence of $ Y^N $ to the limit equation} We now introduce the usual coupling of $ Y^N $ and the limit process $ Y$ using the same Brownian motion and the same Poisson random measures for the two processes.
Then we have \begin{theo} Grant Assumptions \ref{ass:1} and \ref{ass:2}. Consider a probability distribution $g_0$ on $\mathbb {R}$ such that $\int_{\mathbb {R}} x^2 g_0(dx)<\infty$ and, for each $N\geq 1$, the unique solution $(Y^N_t)_{t\geq 0}$ to \eqref{eq:dynapprox} starting from some i.i.d. $g_0$-distributed initial conditions $Y^{N,i}_0 = X^{N, i }_0 = Y^{i}_0 .$ Then for any $ T < \infty $ there exists a constant $C_T$ such that for all $ t \le T, $
$$ \E \sup_{s \le t } | f ( Y^{N, 1 }_s ) - f ( Y_s) | \le C_T N^{-1/2}.$$ \end{theo}
\begin{proof} The proof is done by decomposing the evolution of the limit process in the following way: \begin{eqnarray*} Y^1_t &= & Y_0 + \int_0^t b(Y^1_s ) ds - \int_0^t \int_\mathbb {R} \int_0^\infty Y^1_{s-} {\bf 1}_{ \{ z \le f ( Y^1_{s-}) \}} {\mathbf{N}}^1 (ds,du, dz) \\ &&+ \sigma \int_0^t \sqrt{ \frac1N \sum_{j=1}^N f ( Y_s^{ j} ) } d B_s + M_t^N , \end{eqnarray*} where
$$ M_t^N = \sigma \int_0^t \left( \sqrt{ \frac1N \sum_{j=1}^N f ( Y_s^{ j} ) } - \sqrt{\E ( f ( Y_s^{ 1} ) | B) }\right) d B_s$$ is such that
$$ \langle M^N \rangle_t \le \sigma^2 \int_0^t \left( \sqrt{ \frac1N \sum_{j=1}^N f ( Y_s^{ j} ) } - \sqrt{\E ( f ( Y_s^{ 1} ) | B) }\right)^2 ds. $$
Taking conditional expectation $\E ( \cdot | B ) $ shows that $$ \E \langle M^N \rangle_t \le C_T N^{-1} , $$ and the result follows. \end{proof}
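The bound $\E \langle M^N\rangle_t \le C_T N^{-1}$ can be obtained as follows; this is a sketch, writing $m_s := \E ( f(Y^1_s) \mid B )$ and $\bar S_s := \frac1N \sum_{j=1}^N f(Y^j_s)$, and using that, conditionally on $B$, the limit particles $Y^j$ are i.i.d.

```latex
% Conditionally on B, \E ( \bar S_s \mid B ) = m_s and
%   \mathrm{Var} ( \bar S_s \mid B )
%     = \tfrac1N \mathrm{Var} ( f ( Y^1_s ) \mid B )
%     \le \tfrac1N\, \E ( f ( Y^1_s )^2 \mid B )
%     \le \tfrac1N\, \| f \|_\infty\, m_s .
% Combined with ( \sqrt a - \sqrt b )^2 = (a-b)^2 / ( \sqrt a + \sqrt b )^2 \le (a-b)^2 / b :
\E \Big[ \big( \sqrt{ \bar S_s } - \sqrt{ m_s } \big)^2 \,\Big|\, B \Big]
  \;\le\; \frac{ \mathrm{Var} ( \bar S_s \mid B ) }{ m_s }
  \;\le\; \frac{ \| f \|_\infty }{ N }
% (interpreted as 0 on \{ m_s = 0 \}), whence
\E \langle M^N \rangle_t \;\le\; \sigma^2 \int_0^t \frac{ \| f \|_\infty }{ N }\, ds
  \;=\; \sigma^2 \| f \|_\infty\, \frac{t}{N} .
```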
\section{OLD Stuff}
In what follows, we shall write $\omega = (\omega_t)_{t \geq 0 } $ for the canonical process on ${\mathbb D}(\mathbb {R}_+, \mathbb {R} ) ,$ and we endow ${\mathbb D}(\mathbb {R}_+, \mathbb {R} )$ with the usual filtration $( {\mathcal F}_t)_{t \geq 0 } , $ where $$ {\mathcal F}_t = \sigma \{ \omega_s , s \le t \} .$$
\begin{defin} We say that $ \mu \in {\mathcal P} ({\mathbb D}(\mathbb {R}_+, \mathbb {R} )) $ is a solution of ${\mathcal M} $ if for all $ \varphi \in C^\infty_0 ( \mathbb {R} ) , $ $$ \varphi( \omega_t ) - \varphi ( \omega_0 ) - \int_0^t \cL_s \varphi ( \omega_s) ds $$ is a $ ( \mu , ( {\mathcal F}_t)_{t \geq 0 }) -$martingale, where $$ \cL_s \varphi ( x) = b(x) \varphi ' ( x) + f( x) [ \varphi ( 0 ) - \varphi ( x) ] + \frac{\sigma^2 }{2} \left( \int f ( \omega_s ) \mu ( d \omega ) \right) \varphi '' ( x) .$$ \end{defin}
\begin{theo} Grant Assumption \ref{ass:1}. Consider a probability distribution $g_0$ on $\mathbb {R}$ such that $\int_{\mathbb {R}} x^2 g_0(dx)<\infty$ and, for each $N\geq 1$, the unique solution $(X^N_t)_{t\geq 0}$ to \eqref{eq:dyn} starting from some i.i.d. $g_0$-distributed initial conditions $X^{N,i}_0$. Write $\mu_N=N^{-1}\sum_{i=1}^N \delta_{(X^{N,i}_t)_{t\geq 0}}.$ Then any limit point $\mu$ of $ \mu_N $ almost surely belongs to $\cS := \{ \mu \in {\mathcal P} ({\mathbb D}(\mathbb {R}_+, \mathbb {R} )) : \mu \mbox{ solution of } {\mathcal M} \} .$ \end{theo}
\begin{proof} By Theorem \ref{theo:6}-(ii), we know that the sequence $ \mu_N$ is tight. We thus consider a (not relabeled) subsequence $\mu_N$ going in law to some ${\mathcal P} ({\mathbb D}(\mathbb {R}_+, \mathbb {R} ))$-valued random variable $\mu$. We want to show that $\mu$ a.s.\ belongs to $\cS .$ \vskip0.2cm
{\it Step 1.} For $t\geq 0$, we introduce $\pi_t: D(\mathbb {R}_+, \mathbb {R} )\to \mathbb {R}$ defined by $\pi_t(\omega)=\omega_t$. We claim that $Q\in{\mathcal P}( D(\mathbb {R}_+, \mathbb {R} ))$ belongs to $\cS$ if the following conditions are satisfied:
\vskip0.2cm (a) $Q\circ \pi_0^{-1}=g_0$;
\vskip0.2cm (b) for all $t\geq 0$, $\int_{{\mathbb D}(\mathbb {R}_+, \mathbb {R} )}\int_0^t (\omega_s)^2 ds\, Q(d\omega)<\infty$;
\vskip0.2cm (c) for any $ 0 \le s_1 < \ldots < s_k < s < t$, any $\varphi_1,\dots,\varphi_k \in C_b ( \mathbb {R})$, any $\varphi \in C^3_b (\mathbb {R} )$, \begin{multline*} F(Q):=\int_{{ D} ( \mathbb {R}_+, \mathbb {R} )} \int_{{ D} (\mathbb {R}_+ , \mathbb {R} )} Q ( d \omega ) Q ( d \tilde \omega ) \; \varphi_1 ( \omega_{s_1} ) \ldots \varphi_{k} (\omega_{s_k} ) \\ \Big[ \varphi ( \omega_t) - \varphi ( \omega_s) - \int_s^t f( \omega_u) ( \varphi ( 0) - \varphi (\omega_u ) ) du - \int_s^t b( \omega_u) \varphi' ( \omega_u) du- \frac12 \sigma^2 \int_s^t \varphi'' ( \omega_u )
f (\tilde \omega_u ) du\Big]=0 . \end{multline*}
{\it Step 2.} Here we check that for any $t\geq 0$, a.s., $\mu(\{\omega \, : \, \Delta\omega(t)\ne 0\})=0$. We assume by contradiction that there exists $t > 0 $ such that $\mu ( \{ \omega : \Delta \omega (t) \neq 0 \} ) > 0 $ with positive probability. Hence there are $a,b>0$ such that the event
$E:=\{\mu ( \{ \omega : |\Delta \omega (t) | > a \} ) > b\}$ has a positive probability. For every $\varepsilon > 0$, we have $E\subset \{ \mu ( \cB^\varepsilon_a ) > b\}$, where
$\cB^\varepsilon_a := \{ \omega : \sup_{ s \in (t- \varepsilon , t + \varepsilon)}| \Delta \omega ( s) | > a \}$, which is an open subset of $D ( \mathbb {R}_+ , \mathbb {R} )$. Thus ${\mathcal P}^{\varepsilon}_{a,b} := \{ Q \in {{\mathcal P}} ( {D} ( \mathbb {R}_+, \mathbb {R} ) ) : Q ( \cB^\varepsilon_a ) > b \}$ is an open subset of $ {{\mathcal P}} ( {{\mathbb D}} ( \mathbb {R}_+, \mathbb {R} ) )$. The Portmanteau theorem then implies that for any $\varepsilon>0$, $$ \liminf_{N \to \infty } \P ( \mu_N \in {\mathcal P}^{\varepsilon}_{a,b} ) \geq \P ( \mu \in {\mathcal P}^{\varepsilon}_{a,b} ) \geq \P ( E) > 0. $$ But \begin{multline*} \{\mu_N \in {\mathcal P}^{\varepsilon}_{a,b}\} \subset \Big\{\frac1N \sum_{ i= 1 }^N {\bf 1}_{\{ \int_{t- \varepsilon}^{t + \varepsilon} \int_{\mathbb {R} \times \mathbb {R}_+ } {\bf 1}_{ \{ z \le f( X^{N, i }_{v- } ) \}} {\mathbf{N}}^i (dv, du, dz) \geq 1\}} \geq b/2 \Big\} \\ \cup \Big\{\frac1N \sum_{ i= 1 }^N \sum_{j \neq i } {\bf 1}_{\{ \int_{t- \varepsilon}^{t + \varepsilon}\int_{\mathbb {R} \times \mathbb {R}_+ } u {\bf 1}_{ \{ z \le f( X^{N, j }_{v- } ) \}} {\mathbf{N}}^j (dv, du, dz) \geq 1\}} \geq \sqrt{N} b/2 \Big\}. \end{multline*} Using exchangeability, we obtain \begin{multline*} \P ( \mu_N \in {\mathcal P}^{\varepsilon}_{a,b} ) \le \frac{2}{b N} \sum_{i=1}^N \E \Big(\int_{t- \varepsilon}^{t + \varepsilon} \int_{\mathbb {R} \times \mathbb {R}_+ } {\bf 1}_{ \{ z \le f( X^{N, i }_{v- } ) \}} {\mathbf{N}}^i (dv, du, dz)
\Big ) +\\
\frac{4}{b^2 N } \E \left[ \big( \frac1N \sum_{ i= 1 }^N \sum_{j \neq i } {\bf 1}_{\{ \int_{t- \varepsilon}^{t + \varepsilon}\int_{\mathbb {R} \times \mathbb {R}_+ } u {\bf 1}_{ \{ z \le f( X^{N, j }_{v- } ) \}} {\mathbf{N}}^j (dv, du, dz) \geq 1\}} \big)^2 \right] \\
\le \frac{4}{b} \| f\|_\infty \varepsilon + \frac{8 }{b^2 } \sigma^2 \| f \|_\infty \varepsilon , \end{multline*} which does not depend on $N$ and tends to $0$ as $\varepsilon \to 0$. We thus have the contradiction $$ 0 < \P ( E) \le \liminf_{\varepsilon \to 0 } \liminf_{N \to \infty } \P ( \mu_N \in {\mathcal P}^{\varepsilon}_{a, b}) =0. $$
{\it Step 3.} Our limit $\mu$ a.s. satisfies (a), because $\mu \circ \pi_0^{-1}$ is the limit in law of $\mu^N \circ \pi_0^{-1}=N^{-1}\sum_{i=1}^N \delta_{X^{N,i}_0}$, which goes to $g_0$ because the $X^{N,i}_0$ are i.i.d. with common law $g_0$. It also a.s. satisfies (b) since for all $t\geq 0$ and $K > 0,$ using Fatou's lemma and \eqref{eq:nice}, \begin{align*} \E\Big[\int_{D(\mathbb {R}_+, \mathbb {R} )}\intot [(\omega_s )^2 \wedge K ]ds \mu(d\omega)\Big] \leq& \liminf_N \E\Big[\int_{{\mathbb D}(\mathbb {R}_+, \mathbb {R} )}\intot [(\omega_s)^2 \wedge K] ds \mu_N(d\omega)\Big]\\ = & \liminf_N N^{-1} \sum_{i=1}^N\intot \E[(X^{N,i}_s)^2 ]ds < \infty. \end{align*} The conclusion follows by letting $K \to \infty .$
\vskip0.2cm
{\it Step 4.} It remains to check that $\mu$ a.s. satisfies (c). We thus consider $F:{\mathcal P}(D(\mathbb {R}_+, \mathbb {R} ))\to \mathbb {R}$ as in (c).
\vskip0.2cm
{\it Step 4.1.} Here we prove that $\lim_N\E[|F(\mu_N)|]=0$. We have \begin{align*} F( \mu_N) =& \frac1N \sum_{i= 1}^N \varphi_1 ( X^{N, i }_{s_1} ) \ldots \varphi_k ( X^{N, i }_{s_k} ) \\ &\Bigg[ \varphi (X^{N, i }_{t}) - \varphi (X^{N, i }_{s}) - \int_s^t f( X^{N, i }_{u}) [\varphi (0) - \varphi (X^{N, i }_{u}) ] du - \int_s^t b(X^{N, i }_{u}) \varphi' (X^{N, i }_{u}) du\\ & \hskip5cm - \frac{\sigma^2}{2} \int_s^t \varphi'' (X^{N, i }_{u}) \frac1N \sum_{j=1}^N f ( X_u^{ N, j}) du \Bigg] . \end{align*} But recalling \eqref{eq:dyn} and using the It\^o formula for jump processes, \begin{align*} \varphi ( X_t^{N, i } ) =& \varphi (X_0^{N, i } ) + \int_0^t \int_{\mathbb {R}}\int_0^\infty \! [ \varphi ( 0 ) - \varphi( X^{N, i }_{v-} ) ] {\bf 1}_{ \{ z \le f( X^{N, i }_{v- } ) \}} {\mathbf{N}}^{i} (dv, du, dz) + \intot b(X^{N,i}_v) \varphi'( X^{N, i }_v) dv \\ &+ \sum_{ j \neq i } \int_0^t \int_{\mathbb {R}} \int_0^\infty \Big( \varphi ( X^{N, i }_{ v - } + \frac{u}{\sqrt{N}} ) - \varphi ( X_{v-}^{N, i } ) \Big) {\bf 1}_{ \{ z \le f( X_{v-}^{N, j } ) \}} {\mathbf{N}}^j (dv, du, dz). \end{align*} Consequently, using the notation $\tilde {\mathbf{N}}^i (dv, du, dz ) = {\mathbf{N}}^i (dv, du, dz ) - dv \mu (du) dz$ and setting $$ M_t^{N, i } := \int_0^t \int_\mathbb {R} \int_0^\infty [ \varphi ( 0 ) - \varphi( X^{N, i }_{v-} ) ] {\bf 1}_{ \{ z \le f( X^{N, i }_{v- } ) \}} \tilde {\mathbf{N}}^{i} (dv, du, dz) $$ and \begin{multline*} \Delta_t^{N, i } := \sum_{ j \neq i } \! \int_0^t \!\int_{\mathbb {R}} \int_0^\infty \!\!\! 
\Big( \varphi ( X^{N, i }_{ v - } + \frac{u}{\sqrt{N}} ) - \varphi ( X_{v-}^{N, i } ) \Big) {\bf 1}_{ \{ z \le f( X_{v-}^{N, j } ) \}}{\mathbf{N}}^j (dv, du, dz) \\ - \frac{\sigma^2}{2} \int_0^t \varphi'' (X^{N, i }_{u}) \frac1N \sum_{j=1}^N f ( X_u^{ N, j}) du , \end{multline*} we see that $$ F(\mu_N) = \frac1N \sum_{i= 1}^N \varphi_1 ( X^{N, i }_{s_1} ) \ldots \varphi_k ( X^{N, i }_{s_k} ) \big[ ( M_t^{N, i } - M_s^{N, i } ) + ( \Delta_t^{N, i } - \Delta_s^{N, i } ) \big] . $$ Since the Poisson measures ${\mathbf{N}}^i$ are i.i.d., the martingales $M^{N, i }$ are orthogonal. Using exchangeability and the boundedness of the $\varphi_k$, we thus find that \begin{equation}\label{eq:318}
\E [ |F ( \mu_N) | ] \le C_F \frac{1}{\sqrt{N}}
\E [ ( M_t^{N, 1} - M_s^{N, 1 } )^2]^{1/2} + C_F \E[ | \Delta_t^{N, 1}| +|\Delta_s^{N, 1 }|]. \end{equation} First, since $\varphi$ and $ f$ are bounded, $$ \E[( M_t^{N, 1} - M_s^{N, 1 } )^2]=\int_s^t \E[(\varphi ( 0 ) - \varphi( X^{N, 1}_{u} ))^2 f( X^{N, 1}_{u} )] du \leq C_F. $$ Next, \begin{align*}
| \Delta_t^{N, 1 }| \le & \int_0^t \int_{\mathbb {R}} \int_0^\infty \Big|\varphi ( X^{N, 1 }_{ v - } + \frac{u}{\sqrt{N}} ) -
\varphi ( X_{v-}^{N, 1 } )\Big| {\bf 1}_{ \{ z \le f( X_{v-}^{N, 1 } ) \}}{\mathbf{N}}^1 (dv, du, dz) \\
& + \Big| \sum_{j=1 }^N \int_0^t \int_{\mathbb {R}} \int_0^\infty \big( \varphi ( X^{N, 1 }_{ v - } + \frac{u}{\sqrt{N}} ) -
\varphi ( X_{v-}^{N, 1 } ) \big) {\bf 1}_{ \{ z \le f( X_{v-}^{N, j} ) \}}\tilde {\mathbf{N}}^j (dv, du, dz)\Big| \\
& + \sum_{j=1}^N \int_0^t \int_{\mathbb {R}} \Big| \varphi ( X^{N, 1 }_{ v} + \frac{u}{\sqrt{N}} ) -
\varphi ( X_{v}^{N, 1 } ) - \frac{u}{\sqrt{N}} \varphi' (X_v^{N, 1 } ) - \frac{u^2}{2N} \varphi'' (X_v^{N, 1 } )\Big| f( X_v^{N, j} ) dv \mu ( du) \\ & =: I^N_t+J^N_t+K^N_t. \end{align*} Using that $\varphi'$ and $f$ are bounded, we find $$
\E [ I^N_t ] \le \frac{C_F}{\sqrt{N}} \int |u| \mu ( du ) \int_0^t \E[f ( X_u^{N, 1})] du \le \frac{C_F}{\sqrt{N}}. $$ Moreover, since $\varphi'''$ is bounded and by \eqref{ethop} again, $$ \E [ K^N_t] \le \frac{C_F}{N^2}\sum_{j=1}^N \int_0^t \E[f ( X_u^{N, j} )] du \le \frac{C_F}{N}. $$
{\bf The problem is actually the martingale term in the middle.}
It is of the kind $$ \sum_{j=1 }^N \int_0^t \int_{\mathbb {R}} \int_0^\infty \big( \varphi' ( X^{N, 1 }_{ v - } ) \frac{u}{\sqrt{N}} \big) {\bf 1}_{ \{ z \le f( X_{v-}^{N, j} ) \}}\tilde {\mathbf{N}}^j (dv, du, dz),$$ and this should behave as $$ \sigma \int_0^t \varphi' ( X^{N, 1 }_{ v } ) \sqrt{ \mu^N_v ( f)} d B_v , $$ which would be a common martingale part for all the particles.
The idea would be to change the original dynamics and to introduce an approximating system.
{\bf We have to treat the smoothness of the limit semigroup!} \vskip0.2cm
{\it Step 4.2.} Clearly, $F$ is continuous at any point $Q\in {\mathcal P}({\mathbb D}(\mathbb {R}_+, \mathbb {R} ))$ such that $Q(\omega\, : \, \Delta\omega(s_1)=\dots=\Delta\omega(s_k)=\Delta\omega(s) =\Delta\omega(t)=0)=1$ and such that $\int_{{\mathbb D}(\mathbb {R}_+, \mathbb {R} )}\intot [\omega_u+f(\omega_u)]du Q(d\omega)<\infty$. Our limit point $\mu$ a.s. satisfies these two conditions by Steps 2 and 3 (because
$x+f(x)\leq C(1+x^2)$). Since $\mu$ is the limit in law of $\mu_N$ and since $F$ is a.s. continuous at $\mu$, we thus deduce that for any $K>0$, $\E[|F(\mu)|\land K]=\lim_N \E[|F(\mu_N)|\land K]$. Consequently, $\E[|F(\mu)|\land K] \leq \limsup_N \E[|F(\mu_N)|]$ for all $K>0$.
Using Step 4.1, we deduce that $\E[|F(\mu)|\land K] =0$ for any $K>0$. By the monotone convergence theorem, we conclude that $\E[|F(\mu)|]=0$, whence $F(\mu)=0$ a.s. \end{proof}
From the above, we obtain the weak convergence of $\hat \mu_N $ along a subsequence to a limit law ${\mathcal L} ( \mu | P^\infty ) .$
{\bf Question :} Does the limit law $P^\infty $ satisfy the following? $P^\infty -$almost surely, $ \mu $, which is a random law on $ D ( \mathbb {R}_+, \mathbb {R} ) $, satisfies: for all $ \varphi \in C_0^\infty ( \mathbb {R} ) , $ we have that $$ \varphi ( \omega_t ) - \varphi ( \omega_0) - \int_0^t L_s \varphi ( \omega_s ) ds $$ is a $ \mu-$martingale. Here, $ \omega $ is the canonical process and $$ L_t \varphi ( x) = b(x) \varphi ' (x) + f(x) [ \varphi( 0) - \varphi( x) ] + \frac12 \left( \int f ( \omega_t ) \mu ( d \omega ) \right) \varphi'' ( x) .$$ Should be, no?
Do we have uniqueness of the solution of this martingale problem?
{\bf Are we sure that this limit law is random ??? }
{\bf Do we have a density for $ Y_t^{\infty, 1 } $ ???}
\section{Auxiliary process} We cut time into intervals of length $ \delta > 0 $ and we consider an approximation of our process which has constant jump rate over such intervals. In other words we consider an approximation $ X^{N, \delta, i }_{n \delta }, n \geq 0, $ such that \begin{equation}\label{eq:XNdelta} X^{N, \delta , i}_{(n+1) \delta } = X^{N, \delta ,i}_{n \delta } + \int_{ n \delta}^{(n+1) \delta} b( X^{N,\delta, i}_s) ds - \int_{ n \delta}^{(n+1) \delta} \int_\mathbb {R} \int_0^\infty X^{N,\delta, i}_{s-} {\bf 1}_{ \{ z \le f ( X^{N,\delta, i}_{n \delta}) \}} {\mathbf{N}}^i (ds,du, dz) + \Delta M_{n \delta}^{N,\delta, i } \end{equation} where $$\Delta M_{n \delta}^{N,\delta, i } = \frac{1}{\sqrt{N}}\sum_{ j \neq i } \int_{n \delta}^{(n+1) \delta} \int_\mathbb {R} \int_0^\infty u {\bf 1}_{ \{ z \le f ( X^{N,\delta, j}_{n \delta }) \}} {\mathbf{N}}^j (ds,du, dz) .$$ {\bf Maybe we also have to discretise the first line, I do not know for the moment! }
\begin{prop} 1) The random variables $ (X^{N, \delta , i}_{n \delta } , 1 \le i \le N ) $ are exchangeable for all $n.$
2) The associated empirical measures $$ \hat \mu_{n\delta}^{N, \delta} := \frac1N \sum_{i=1}^N \delta_{X^{N, \delta , i}_{n \delta } } $$ converge to a limit measure that will be denoted $ \mu^\delta_{n \delta } .$ (Or at least, they are tight!) \end{prop}
As a consequence, conditionally on ${\mathcal F}_{n \delta}, $ we have that $$\Delta [M^{N,\delta, i }] _{n \delta} = \frac{\sigma^2 \delta }{N} \sum_{j \neq i } f ( X^{N,\delta, j }_{n \delta} ) \sim \sigma^2 \delta \; \hat \mu_{n \delta }^{N, \delta} ( f) \to \sigma^2 \delta \; \mu^\delta_{n \delta} ( f) ,$$ from which we deduce the weak convergence of $$ \Delta M_{n \delta}^{N,\delta, i } \to \sigma \sqrt{ \mu^\delta_{n \delta} ( f) } B_\delta .$$
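The last weak convergence can be justified by a conditional characteristic-function computation; the sketch below assumes that the jump law $\mu$ is centered with variance $\sigma^2$, consistently with the Brownian approximation used throughout.

```latex
% Conditionally on {\mathcal F}_{n\delta}, the increments over [n\delta, (n+1)\delta]
% are independent compound Poisson variables, so that for \theta \in \mathbb R,
\E \big[ e^{ i \theta \Delta M^{N, \delta, i}_{n\delta} } \,\big|\, {\mathcal F}_{n\delta} \big]
  \;=\; \exp \Big( \delta \sum_{j \neq i} f \big( X^{N, \delta, j}_{n\delta} \big)
        \big( \hat\mu ( \theta / \sqrt N ) - 1 \big) \Big) ,
\qquad \hat\mu ( v ) := \int_{\mathbb R} e^{ i v x } \mu ( dx ) .
% If \mu is centered with variance \sigma^2, then
% \hat\mu ( \theta / \sqrt N ) - 1 = - \frac{ \sigma^2 \theta^2 }{ 2 N } + o ( 1/N ),
% so the conditional characteristic function converges to
\exp \Big( - \tfrac{ \theta^2 }{2}\, \sigma^2 \delta\, \mu^\delta_{n\delta} ( f ) \Big) ,
% which is that of \sigma \sqrt{ \mu^\delta_{n\delta} ( f ) }\, B_\delta .
```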
\section{Auxiliary process-bis}
We cut time into intervals of length $ \delta > 0 $ and we consider an approximation of our process such that at most one single jump may happen during any such interval, per particle.
Our goal is to prove that for all $n, $ $Y^N( n \delta) $ is an exchangeable random vector. This should be proved inductively over $n.$
Suppose at time $n \delta $ we have configuration $ y = (y_1, \ldots, y_N) . $ Choose independent exponential random variables $ \tau_1 , \ldots, \tau_N $ such that $ \tau_i \sim \exp ( f ( y_i) ) .$ Put $ \Phi_i (n) = 1_{\{ \tau_i \le \delta \} } $ and let $ U_i (n ) $ be i.i.d. $\sim \mu .$ Finally we set $$ q_N := \frac{1}{\sqrt{N}} \sum_{i=1}^N \Phi_i ( n) U_i (n) .$$ Then for all $ i $ such that $\Phi_i (n ) = 0 $ we put $$ Y_i ((n+1)\delta ) := e^{- \lambda \delta } y_i + q_N .$$ Take now those particles that jump. We have $$ N(n) := \sum_{i=1}^N \Phi_i (n) $$ such particles, which we number according to the increasing order of their jump times $$ j_1 , \ldots, j_{N(n) } .$$
Then we put for any $ 1 \le i \le N(n) , $ $$ Y_{j_i } ( (n+1) \delta ) := \frac{1}{\sqrt{N}} \sum_{k=i+1}^{N(n) } U_{j_k} ( n ) .$$ In particular, $ Y_{j_{N(n) } }( (n+1) \delta ) = 0.$
It should be clear that the coupling of the true process with $ Y^\delta $ is ok.
And we should also have that $$ \hat \mu^N ( n \delta ) := \frac{1}{N} \sum_{i=1}^N \delta_{ Y_i (n\delta ) } \to \mu_{n \delta } $$ as $ N \to \infty $ (which has to be proven by induction on $n$) implies the weak convergence $$ q_N \stackrel{\mathcal L}{\to } \sigma \sqrt{ \mu_{n \delta} ( f) } \int_{n \delta}^{(n+1) \delta } d B_s .$$
Then a simple re-ordering of the $U_i (n) $ gives $$ \hat \mu^N ( (n+1) \delta ) = \frac{1}{N} \sum_{i=1}^N 1_{ \{ \tau_i > \delta \} } \delta_{ e^{- \lambda \delta } y_i + q_N } + \frac{1}{N} \sum_{i=1}^{N(n) } \delta_{ N^{- 1/2} \sum_{ j =i+1}^{N(n) } U_j (n ) } .$$
Consider for instance a test function $ \Psi , $ then we have that \begin{equation}
\hat \mu^N ( (n+1) \delta ) ( \Psi ) = \frac{1}{N} \sum_{i=1}^N 1_{ \{ \tau_i > \delta \} } \Psi \left( e^{- \lambda \delta } y_i + q_N \right) + \frac{1}{N} \sum_{i=1}^{N(n) } \Psi \left( N^{- 1/2} \sum_{ j =i+1}^{N(n) } U_j (n ) \right) . \end{equation}
Question : what is the limit of (5.8)?
Q1 : If we know that $ \hat \mu^N (n \delta ) \to \mu ( n \delta ) , $ does that imply that, in a certain sense that has to be made precise, we have $ (Y^N_1, \ldots , Y^N_N) \to ( Y_1, \ldots, Y_N, \ldots ) $ where the limit sequence is necessarily exchangeable?
\end{document} |
\begin{document}
\title{finite approximation properties of $C^{*}$-modules II}
\author{Massoud Amini}
\address{Department of Mathematics\\ Faculty of Mathematical Sciences\\ Tarbiat Modares University\\ Tehran 14115-134, Iran} \email{mamini@modares.ac.ir, mamini@ipm.ir}
\address{Current Address: STEM Complex, 150 Louis-Pasteur Pvt,
Ottawa, ON, Canada K1N 6N5}
\keywords{$C^*$-module, quasidiagonal, locally reflexive, vector valued trace}
\subjclass[2010]{47A58, 46L08, 46L06}
\maketitle
\begin{abstract} We study quasidiagonality and local reflexivity for $C^{*}$-algebras which are $C^*$-module over another $C^*$-algebra with compatible actions. We introduce and study a notion of amenability for vector valued traces. \end{abstract}
\section{introduction}\label{s1}
Finite approximation properties of $C^*$-algebras are studied in \cite{bo}. Some of these, including such important notions as nuclearity, exactness and the weak expectation property (WEP), are extended to the context of $C^*$-algebras with compatible module structure in \cite{a2}. We continue this study here by considering other important finite approximation properties such as quasidiagonality and local reflexivity. We also study vector valued traces and their amenability.
A ``finite dimensional approximation'' scheme for $C^*$-morphisms is an approximately commuting diagram as follows:
\begin{center} $\xymatrix @!0 @C=4pc @R=3pc { A \ar[rr]^{\theta} \ar[rd]^{\varphi_n} && B \\ & \mathbb M_{k_n}(\mathbb C) \ar[ur]^{\psi_n}}$ \end{center}
\noindent where $A$ and $B$ are $C^*$-algebras and $\varphi_n$ and $\psi_n$ are contractive completely positive (c.c.p.) maps. The central idea of the module case, where $A$ and $B$ are also $\mathfrak A$-modules, for a $C^*$-algebra $\mathfrak A$, is to find such an approximate decomposition through the $C^*$-algebra $\mathbb M_{k_n}(\mathfrak A)$ (or through the von Neumann algebra $\mathbb M_{k_n}(\mathfrak A^{**})$). This means that we deal with approximations through finitely generated modules (over $\mathfrak A$ or $\mathfrak A^{**}$), shortly referred to as ``finite approximation'' here.
The paper is organized as follows: In section 2 we use the notion of retraction, already studied in the context of Hilbert $C^*$-modules \cite{lan}, to introduce the notion of vector valued amenable traces on $C^*$-modules. In section 3, we study a notion of quasidiagonality in the category of $C^*$-modules and extend Voiculescu's theorem (Theorem \ref{voi}). The last section is devoted to the extension of local reflexivity, Arveson's lemma (Lemma \ref{lift}) and a work of Kirchberg on $min$-continuity properties.
For the rest of this paper, we fix a $C^*$-algebra $\mathfrak A$ and let $A$ be a $C^*$-algebra and a right Banach $\mathfrak A$-module (that is, a module with contractive right action) with the compatibility conditions, \begin{equation*} (ab)\cdot\alpha=a(b\cdot\alpha),\,\, a\cdot\alpha\beta=(a\cdot\alpha)\cdot\beta, \end{equation*} for each $a,b\in A $ and $\alpha, \beta\in \mathfrak A.$ In this case, we say that $A$ is a (right) $\mathfrak A$-$C^*$-module, or simply a $C^*$-module (it is then understood that the algebra and module structures on $A$ are compatible in the above sense). A $C^*$-subalgebra which is also an $\mathfrak A$-submodule is simply called a $C^*$-submodule.
If, moreover, we have the compatibility condition $$(a^*\cdot\alpha^*)^*\cdot\beta=((a\cdot\beta)^*\cdot\alpha^*)^*,$$ for each $a\in A $ and $\alpha, \beta\in \mathfrak A,$ then, defining a left action by $$\alpha\cdot a:=(a^*\cdot\alpha^*)^*,$$ $A$ becomes an $\mathfrak A$-bimodule with the compatibility conditions, \begin{equation*} \alpha\cdot (ab)=(\alpha\cdot a)b,\,\, \alpha\beta\cdot a=\alpha\cdot(\beta\cdot a),\, \, \alpha\cdot(a\cdot \beta)=(\alpha\cdot a)\cdot \beta, \end{equation*} for each $a,b\in A $ and $\alpha, \beta\in \mathfrak A.$ In this case, there is a canonical $*$-homomorphism from $\mathfrak A$ to the multiplier algebra $M(A)$ of $A$, sending $\alpha$ to the pair $(L_\alpha, R_\alpha)$ of left and right module multiplication maps by $\alpha$. If the action is {\it non degenerate}, in the sense that, given $\alpha$, $a\cdot \alpha=0$, for each $a\in A$, implies that $\alpha=0$ (and so the same for the left action), then the above map is injective and so an isometry, and we could (and would) identify $\mathfrak A$ with a $C^*$-subalgebra of $M(A)$.
We say that a two sided action of $\mathfrak A$ on $A$ is a {\it biaction} if the right and left actions are compatible, i.e., $$(a\cdot\alpha)b=a(\alpha\cdot b)\ \ (\alpha\in\mathfrak A, a,b\in A).$$ When the action is non degenerate and $\mathfrak A$ acts on $A$ as a $C^*$-subalgebra of $M(A)$, we have a biaction.
In some cases we have to work with operator $\mathfrak A$-modules with no algebra structure (and in particular with certain Hilbert $\mathfrak A$-modules). If $E, F$ are operator $\mathfrak A$-modules, a module map $\phi: E\to F$ is a continuous linear map which preserves the right $\mathfrak A$-module action.
Throughout this paper, we use the notation $\mathbb B(X)$ to denote the set of bounded adjointable linear operators on a Hilbert $C^*$-module $X$.
\section{amenable traces}
In this section, $\mathfrak A$ is a $C^*$-algebra and $A$ is a right $\mathfrak A$-$C^*$-module, with the compatibility conditions which allow one to consider $A$ as an $\mathfrak A$-bimodule. When the action is non degenerate, we identify $\mathfrak A$ with a $C^*$-subalgebra of the multiplier algebra $M(A)$ (or that of $A$, when $A$ is unital).
The representations of $\mathfrak A$-$C^*$-modules are defined on $\mathfrak A$-correspondences. An $\mathfrak A$-{\it correspondence} is a right Hilbert $\mathfrak A$-module $X$ with a left action of $\mathfrak A$ via a representation of $\mathfrak A$ into $\mathbb B(X)$. A {\it representation} of an $\mathfrak A$-bimodule $A$ in $X$ is a $*$-homomorphism from $A$ into $\mathbb B(X)$, which is also a right $\mathfrak A$-module map with respect to the canonical right $\mathfrak A$-module structure of $\mathbb B(X)$ coming from the left $\mathfrak A$-module structure of $X$.
To define a module version of (vector valued) traces, we adapt (and slightly generalize) the notion of retraction from \cite[Chapter 5]{lan}.
\begin{definition} \label{ret} An $\mathfrak A$-{\it retraction} is a positive right $\mathfrak A$-module map $\tau: A\to \mathfrak A$ such that
$(i)$ Im$(\tau)$ is strictly dense in $\mathfrak A$, that is, for each $\beta\in\mathfrak A$, there is a net $(a_i)\subseteq A$ such that
$$\|\tau(a_i\cdot\alpha)-\beta\alpha\|\to 0\ \ (\alpha\in\mathfrak A),$$
$(ii)$ for some bounded approximate identity $(e_i)$ of $A$, $\tau(e_i)\to p$, for some projection $p$ in $\mathfrak A$, in the strict topology. \end{definition}
Note that since $\tau$ is positive, it is also self-adjoint (i.e., it preserves the involution). In particular, an $\mathfrak A$-retraction is automatically a bimodule map (with the left action defined as in the previous section) and the left module versions of the conditions above are also satisfied. This observation is used in the proof of part $(i)$ of the next lemma.
When $A$ is unital (and $p=1$) and $\mathfrak A$ is a $C^*$-subalgebra of $A$, an $\mathfrak A$-retraction is simply a conditional expectation from $A$ onto $\mathfrak A$. Each state of the $C^*$-algebra $A$ is a $\mathbb C$-retraction. As another example, for $\mathfrak A=pM(A)p$, where $p$ is a projection in $M(A)$, the cut down map $a\mapsto pap$ is an $\mathfrak A$-retraction.
The first part of the following result is proved by a slight modification of the argument of \cite[Lemma 5.9]{lan}. Here we sketch the proof, as it also uses the bimodule property. The second part follows from the observation (in the introduction) about non degenerate actions and \cite[Proposition 5.10]{lan}.
\begin{lemma} \label{retr}
$(i)$ An $\mathfrak A$-retraction is a c.p. map.
$(ii)$ If the action is non degenerate, $\mathfrak A$-retractions are exactly those linear maps $\tau: A\to \mathfrak A$ which have an extension to an idempotent c.c.p. map $\tilde\tau: M(A)\to \mathfrak A$, which is strictly continuous on the unit ball of $M(A)$. \end{lemma} \begin{proof} We only prove part $(i)$. Identifying $\mathbb M_n(\mathfrak A)$ with $\mathbb K(\mathfrak A^n)$ \cite[Lemma 4.1]{lan}, we only need to show that $$\sum_{i,j=1}^{n} \alpha_i^*\tau(a_i^*a_j)\alpha_j\geq 0\ \ (\alpha_1,\cdots,\alpha_n\in\mathfrak A, a_1,\cdots,a_n\in A).$$ Since $\tau$ is a bimodule map, the left hand side is the same as $\tau(b^*b)$ for $b=\sum_i a_i\cdot\alpha_i$, and we are done.\qed \end{proof}
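For the reader's convenience, the bimodule step used at the end of the proof expands as follows (a routine verification):

```latex
% With b = \sum_i a_i \cdot \alpha_i and the left action \alpha \cdot a = ( a^* \cdot \alpha^* )^* ,
b^* b \;=\; \sum_{i,j} \alpha_i^* \cdot ( a_i^* a_j ) \cdot \alpha_j ,
% and since \tau is a positive bimodule map,
\tau ( b^* b ) \;=\; \sum_{i,j} \alpha_i^*\, \tau ( a_i^* a_j )\, \alpha_j \;\ge\; 0 .
```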
Next, let us assume that the action is non degenerate and regard $\mathfrak A$ as a $C^*$-subalgebra of $M(A)$. Let $X$ be a right Hilbert $A$-module. Then $X$ is also a right Hilbert $M(A)$-module: for $x\in X$, $c\in M(A)$, and a bounded approximate identity $(a_i)\subseteq A$,
$$|xa_ic-xa_jc|^2=c^*(a_i-a_j)|x|^2(a_i-a_j)c\to 0,$$ as a net in $A$. Thus we may define $xc$ as the limit of the net $(xa_ic)$ in $X$. When the action is non degenerate, $X$ is also a right $\mathfrak A$-module.
Assume that the action is non degenerate. For an $\mathfrak A$-retraction $\tau$, define $$\langle x,y\rangle_\tau:=\tau(\langle x,y\rangle_A)\in\mathfrak A\ \ (x,y\in X).$$ Consider the closed submodule $N_\tau:=\{x: \langle x,x\rangle_\tau=0\}$ of $X$. The completion of the quotient $X/N_\tau$ is a Hilbert $\mathfrak A$-module, denoted by $L^2(X, \tau)$. There is a module map $X\to L^2(X, \tau);\ x\mapsto \hat x:=x+N_\tau$, satisfying $\widehat{ x\cdot\alpha}=\hat x\cdot \alpha$, for $\alpha\in\mathfrak A$. When $X=A$, we denote this module simply by $L^2(A, \tau)$.
For $t\in\mathbb B(X)$, since $0\leq |tx|^2\leq \|t\|^2|x|^2$ \cite[Proposition 1.2]{lan}, the map $$X/N_\tau \to X/N_\tau; \ x+N_\tau\mapsto tx+N_\tau,$$ is well defined and bounded, and this assignment extends to a $*$-homomorphism $$\pi_\tau: \mathbb B(X)\to \mathbb B\big(L^2(X, \tau)\big),$$ which is injective when $\tau$ is faithful (that is, $\tau(a)=0$ implies $a=0$, for each $a\in A^+$). We say that $X$ is $\tau$-{\it self-dual} if $X$ is a self-dual Hilbert $\mathfrak A$-module under the above $\mathfrak A$-valued inner product. In this case, $L^2(X, \tau)$ is also a self-dual Hilbert $\mathfrak A$-module (since $X/N_\tau$ is dense in $L^2(X, \tau)$ and the inner product is continuous). In particular, if $\mathfrak A$ is a von Neumann algebra and $X$ is $\tau$-self-dual, then $\mathbb B\big(L^2(X, \tau)\big)$ is a von Neumann algebra, and so is the double commutant $\pi_\tau(A)^{''}\subseteq \mathbb B\big(L^2(X, \tau)\big)$.
When $A$ is faithfully represented in $X$, say $A\subseteq \mathbb B(X)$, we may restrict this to $A$ to get a $*$-homomorphism $$\pi_\tau: A\to \mathbb B\big(L^2(X, \tau)\big),$$ which is essentially the same as the GNS-construction of $\tau$, when $X=A$. In this case, for each $a\in A$, the operator $\pi_\tau(a)$ is defined on the dense subset $A/N_\tau\subseteq L^2(A, \tau)$ by $\pi_\tau(a)\hat b=\widehat{ab}$, and so it is justified to call $\pi_\tau$ the left regular representation associated to the $\mathfrak A$-retraction $\tau$.
If $\mathfrak A$ is unital and $(e_i)$ is the bounded approximate identity as in part $(ii)$ of the above definition, then
$$\|\hat e_i-\hat e_j\|_2=\|\tau\big((e_i-e_j)^2\big)\|^{\frac{1}{2}}\leq\|\tau(e_i-e_j)\|^{\frac{1}{2}}\to 0,$$ as $i,j\to\infty$. Thus, there is $\hat 1\in L^2(A,\tau)$ with $\hat e_i\to \hat 1$ in $L^2(A,\tau)$, and $\hat 1$ is a cyclic vector for $\pi_\tau$. When $A$ is unital, $\hat 1$ is simply the canonical image of $1\in A$, but it exists, even if $A$ is not unital.
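The inequality used above can be justified as follows (a sketch, assuming the bounded approximate identity $(e_i)$ is increasing): for $i\leq j$, we have $0\leq e_j-e_i\leq 1$, so $(e_j-e_i)^2\leq e_j-e_i$, and positivity of $\tau$ gives $$\tau\big((e_j-e_i)^2\big)\leq\tau(e_j-e_i), \quad\text{hence}\quad \big\|\tau\big((e_j-e_i)^2\big)\big\|\leq\|\tau(e_j-e_i)\|,$$ while $\|\tau(e_j-e_i)\|\to 0$, since the net $(\tau(e_i))$ is norm convergent by part $(ii)$ of the definition referred to above.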
Another way to extend the GNS-construction is to adapt the so-called Kasparov-Stinespring-Gelfand-Naimark-Segal (KSGNS) construction \cite{lan} to $\mathfrak A$-retractions. Let $Y$ be a Hilbert $\mathfrak A$-module and $\rho: A\to \mathbb B(Y)$ be a c.p. map. We say that $\rho$ is {\it strict} if the net $(\rho(e_i))$ is strictly Cauchy in $\mathbb B(Y)$, for some bounded approximate identity $(e_i)\subseteq A$ (this is automatic when $A$ is unital). Since the unit ball of $\mathbb B(Y)$ is complete in the strict topology \cite{lan}, the above condition implies that $\rho(e_i)\to p$, in the strict topology, for some positive element $p$ in $\mathbb B(Y)$ (the case $p=1$ is equivalent to the condition that $\rho$ is non degenerate). Now the KSGNS-construction of a strict c.p. map $\rho: A\to \mathbb B(Y)$ gives a Hilbert $\mathfrak A$-module $Y_\rho$, an adjointable operator $v\in \mathbb B(Y,Y_\rho)$ and a $*$-homomorphism $\pi_\rho: A\to \mathbb B(Y_\rho)$ with $\rho=v^*\pi_\rho(\cdot)v$, such that $\pi_\rho(A)v(Y)$ is dense in $Y_\rho$, which is universal in the sense that, for each Hilbert $\mathfrak A$-module $Z$ and $*$-homomorphism $\pi: A\to \mathbb B(Z)$ and $w\in \mathbb B(Y,Z)$ with $\rho=w^*\pi(\cdot)w$, such that $\pi(A)w(Y)$ is dense in $Z$, there is a unitary $u\in \mathbb B(Y_\rho,Z)$ with $\pi=u\pi_\rho(\cdot)u^*$ and $w=uv$ \cite[Theorem 5.6]{lan}. Indeed, $Y_\rho=A\otimes_\rho Y$ and $\pi_\rho(a)(vy)=a\dot\otimes y$, for $a\in A, y\in Y$. When $\rho$ is a non degenerate $*$-homomorphism, $Y_\rho$ is unitarily equivalent to $Y$.
Back to the $\mathfrak A$-retraction $\tau: A\to \mathfrak A$, since $\pi_\tau: A\to \mathbb B\big(L^2(A, \tau)\big)$ is a non degenerate $*$-homomorphism, for the $\mathfrak A$-module $Y=L^2(A, \tau)$, the KSGNS-construction $Y_{\pi_\tau}$ could be identified (via a unitary equivalence) with $Y$. Under this identification, the adjointable operator $v$ above is identified with the identity and we get the above universality property for free. Another choice for $Y$ is $Y=\mathfrak A$, which gives $Y_\tau=A\otimes_\tau \mathfrak A$, with the above universal property. In this case, $v\in \mathbb B(\mathfrak A,A\otimes_\tau \mathfrak A)$ satisfies $\pi_\tau(a)(v\alpha)=a\dot\otimes \alpha$. Again, if $\mathfrak A$ is unital, then $v(1_{\mathfrak A})$ is a cyclic vector for the representation $\pi_\tau: A\to \mathbb B(A\otimes_\tau \mathfrak A)$ (indeed, $v(1_{\mathfrak A})$ is the limit of the net $(e_i\dot\otimes 1_{\mathfrak A})$ in $A\otimes_\tau \mathfrak A$).
\begin{definition} An $\mathfrak A$-{\it trace} is an $\mathfrak A$-retraction $\tau: A\to \mathfrak A$ satisfying $$\tau(ab)=\tau(ba)\ \ (a,b\in A).$$ \end{definition}
In this case, one could also define the right regular representation $\pi^{op}_\tau$ of $\tau$ on $A^{op}$ by $\pi^{op}_\tau(a)\hat b=\widehat{ba}$ (extended by continuity). This is well-defined, since $\tau$ is an $\mathfrak A$-trace. The proof of the next lemma goes, almost verbatim, as in the classical case \cite[6.1.2-6.1.4]{bo}.
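That $\pi^{op}_\tau$ is bounded on the image of $A$ indeed uses the trace property twice; a sketch of the estimate: $$\|\widehat{ba}\|_2^2=\|\tau(a^*b^*ba)\|=\|\tau(b\,aa^*\,b^*)\|\leq\|a\|^2\,\|\tau(bb^*)\|=\|a\|^2\,\|\tau(b^*b)\|=\|a\|^2\,\|\hat b\|_2^2,$$ where the second and fourth equalities use $\tau(xy)=\tau(yx)$, and the inequality follows from $aa^*\leq\|a\|^21$ and positivity of $\tau$.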
\begin{lemma} \label{j} Let $\tau: A\to \mathfrak A$ be an $\mathfrak A$-trace.
$(i)$ $\pi_\tau(A)^{'}\supseteq \pi_\tau^{op}(A^{op})$ in $\mathbb B\big(L^2(A, \tau)\big),$
$(ii)$ there is a conjugate $\mathfrak A$-morphism $J: L^2(A, \tau)\to L^2(A, \tau)$, with $J^2=id$, satisfying $J\hat a=\widehat{a^*}$ and $J\pi_\tau(a)J=\pi_\tau^{op}(a^*)$, for each $a\in A$, and $\langle Jz, z^{'}\rangle=\langle Jz^{'}, z\rangle$ for each $z, z^{'}\in L^2(A, \tau)$.
When $\mathfrak A$ is unital, we also have,
$(iii)$ $Jt\hat 1=t^*\hat 1$, for each $t\in \pi_\tau(A)^{'}$,
$(iv)$ $\pi_\tau(A)^{''}= \pi_\tau^{op}(A^{op})^{'}$ and $\pi_\tau(A)^{'}= \pi_\tau^{op}(A^{op})^{''}$ in $\mathbb B\big(L^2(A, \tau)\big).$ \end{lemma}
In part $(ii)$, the fact that $J$ is a conjugate $\mathfrak A$-morphism simply means that $J(\alpha\cdot \hat a)=J(\hat a)\cdot\alpha^*$, and the same for the right action.
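On the dense image of $A$, the symmetry identity in part $(ii)$ is a direct computation; a sketch, with the convention (as in \cite{lan}) that $\langle a,b\rangle_A=a^*b$: $$\langle J\hat a,\hat b\rangle_\tau=\langle \widehat{a^*},\hat b\rangle_\tau=\tau(ab), \qquad \langle J\hat b,\hat a\rangle_\tau=\tau(ba)=\tau(ab),$$ so $\langle J\hat a,\hat b\rangle_\tau=\langle J\hat b,\hat a\rangle_\tau$, and the identity for general $z, z^{'}\in L^2(A,\tau)$ follows by continuity of $J$ and of the inner product.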
In the next definition, we use the left module structure of $Y$ to define the canonical right module action of $\mathfrak A$ on $\mathbb B(Y)$ by $$(t\cdot\alpha)(y)=t(\alpha\cdot y)\ \ (t\in\mathbb B(Y), \alpha\in \mathfrak A, y\in Y).$$
\begin{definition} \label{am} An $\mathfrak A$-trace $\tau: A\to \mathfrak A$ is called {\it amenable} if, for every faithful representation $A\subseteq \mathbb B(Y)$ of $A$ in an $\mathfrak A$-correspondence $Y$, the map $\tau$ has an extension to a c.c.p. right module map $\phi: \mathbb B(Y)\to \mathfrak A$ satisfying $\phi(uxu^*)=\phi(x)$, for each $x\in \mathbb B(Y)$ and each unitary $u$ in $A+\mathbb C I$, where $I$ is the identity operator on $Y$. \end{definition}
Note that if $\tau$ is only an $\mathfrak A$-retraction satisfying the above extension property, then it is automatically an $\mathfrak A$-trace. When $A$ is a unital $C^*$-algebra, faithfully and non degenerately represented in $Y$, then we only need to check that $\phi$ is invariant under conjugation by unitaries in $A$. Finally, and most importantly, note that, unlike the classical case \cite[Proposition 3.1.2]{bo}, here the existence of an invariant extension is not independent of the choice of the representation, unless $A$ is represented in an $\mathfrak A$-module with injective algebra of adjointable operators.
\begin{lemma} \label{rep} Let $A\subseteq \mathbb B(Y)$ be a faithful representation in an $\mathfrak A$-correspondence $Y$ such that $\mathbb B(Y)$ is an injective $\mathfrak A$-module. If $\tau: A\to \mathfrak A$ is an $\mathfrak A$-trace which enjoys the above invariant extension property for this representation, then $\tau\circ\pi^{-1}$ has the invariant extension property for any other faithful representation $\pi: A \to\mathbb B(X)$ of $A$ in an $\mathfrak A$-correspondence $X$; that is, $\tau$ is an amenable $\mathfrak A$-trace. \end{lemma} \begin{proof} First note that $\tau\circ\pi^{-1}$ is clearly an $\mathfrak A$-retraction. By injectivity, there is a c.c.p. module map $\Phi: \mathbb B(X)\to\mathbb B(Y)$ which extends $\pi^{-1}$ on $\pi(A)$, and $\pi(A)$ is in the multiplicative domain of $\Phi$. Consider the c.c.p. map $\phi: \mathbb B(Y)\to \mathfrak A$ satisfying the conditions of Definition \ref{am}, and put $\psi=\phi\circ\Phi$. This is a c.c.p. map on $\mathbb B(X)$ which extends $\tau\circ\pi^{-1}$ on $\pi(A)$ and enjoys the invariance property on $\mathbb B(X)$. \end{proof}
The above lemma suggests that we look for an appropriate $\mathfrak A$-correspondence $Y$ such that there is a faithful representation $A\subseteq \mathbb B(Y)$ and $\mathbb B(Y)$ is an injective $\mathfrak A$-module. One case of special interest is $Y=K\otimes \mathfrak A$, where $K$ is an appropriate Hilbert space. We use a minimal Stinespring dilation to show that $A$ could always be faithfully represented in such a space by a module map. In the following lemma, we assume that we have a biaction (this is satisfied if the action is non degenerate and $\mathfrak A$ acts on $A$ as a $C^*$-subalgebra of $M(A)$ by multiplication). This lemma relaxes the separability condition used in \cite[section 3]{a2}, showing that separability is not needed to define the {\it min} module tensor product.
\begin{lemma} \label{rep2} Let $\mathfrak A$ be unital. There is a Hilbert space $K$ and an $\mathfrak A$-correspondence structure on $K\otimes \mathfrak A$ such that $A$ could be faithfully represented in $K\otimes \mathfrak A$. \end{lemma} \begin{proof} Let $\mathfrak A\subseteq \mathbb B(H)$ be a faithful representation of the $C^*$-algebra $\mathfrak A$ in a Hilbert space $H$. Take a faithful c.c.p. map $\varphi: A\to \mathfrak A^{'}\subseteq \mathbb B(H)$ (such a c.c.p. map exists, for instance, whenever $A$ admits a faithful state $\phi$: put $\varphi(a)=\phi(a)1_{\mathbb B(H)}$). Let $(\pi,K,V)$ be the minimal Stinespring dilation of $\varphi$. Then there is a $*$-homomorphism $$\rho:\varphi(A)^{'}\to \pi(A)^{'}\subseteq \mathbb B(K)$$ such that $\varphi(a)x=V^*\pi(a)\rho(x)V$, for $a\in A, x\in \varphi(A)^{'}$ \cite[1.5.6]{bo}. Since $\varphi=V^*\pi(\cdot)V$ is faithful, so is $\pi$ (just calculate both sides of the last equality at $a^*a$). On the other hand, $K=A\otimes_\varphi H$ and $\pi(a)(b\otimes h)=ab\otimes h$, thus \begin{align*} \pi(a\cdot\alpha)(b\otimes h)&=(a\cdot\alpha)b\otimes h =a(\alpha\cdot b)\otimes h\\ &=\pi(a)\big((\alpha\cdot b)\otimes h\big) =\big(\pi(a)\cdot\alpha\big)(b\otimes h), \end{align*} for $\alpha\in \mathfrak A, a,b\in A, h\in H$.
Note that since $\varphi(A)\subseteq \mathfrak A^{'}$, we have $\varphi(A)^{'}\supseteq \mathfrak A^{''}\supseteq \mathfrak A$, so put $$\sigma=\rho|_{\mathfrak A}: \mathfrak A\to \pi(A)^{'}\subseteq \mathbb B(K)$$ and define the left action of $\mathfrak A$ on $K$ by $$\alpha\cdot\xi:=\sigma(\alpha)\xi\ \ (\alpha\in \mathfrak A, \xi\in K),$$ and let $A$ act on the right Hilbert $\mathfrak A$-module $K\otimes \mathfrak A$ with inner product $$\langle \xi\otimes\alpha,\eta\otimes\beta\rangle:=\langle \xi,\eta\rangle\alpha^*\beta\ \ (\alpha,\beta\in \mathfrak A, \xi,\eta\in K),$$ via $\tilde\pi: A\to \mathbb B(K\otimes\mathfrak A)$, defined by $$\tilde\pi(a)(\xi\otimes\alpha):=\pi(a)\xi\otimes\alpha\ \ (\alpha\in \mathfrak A, a\in A, \xi\in K).$$ Then $\tilde\pi$ is faithful, since $\pi$ is faithful and $\mathfrak A$ is unital. Moreover, since $\pi$ and $\sigma$ have commuting ranges, \begin{align*} \tilde\pi(a\cdot\beta)(\xi\otimes \alpha)&=\pi(a\cdot\beta)\xi\otimes \alpha =(\pi(a)\cdot\beta)(\xi)\otimes \alpha\\ &=\pi(a)(\beta\cdot\xi)\otimes \alpha =\pi(a)\sigma(\beta)\xi\otimes \alpha\\ &=\sigma(\beta)\pi(a)\xi\otimes \alpha =\beta\cdot\big(\pi(a)\xi\otimes \alpha\big)\\ &=\beta\cdot\big(\tilde\pi(a)(\xi\otimes \alpha)\big) =\big(\tilde\pi(a)\cdot\beta\big)(\xi\otimes \alpha), \end{align*} for $\alpha,\beta\in \mathfrak A, a\in A, \xi\in K$. Therefore $\tilde\pi$ is the required faithful representation. \end{proof}
In the category of Hilbert $\mathfrak A$-modules with bounded adjointable maps, it is known that $\mathfrak A$ is an injective object iff the multiplier algebra $M(\mathfrak A)$ is monotone complete \cite[Theorem 1.1]{f}. I am not aware of conditions for injectivity of $\mathfrak A$ in our category of $C^*$-modules with c.c.p. module maps.
\begin{lemma} \label{inj} Let $\mathfrak A$ be unital and let $K\otimes \mathfrak A$ be the $\mathfrak A$-correspondence of the above lemma. If $\mathbb B(K\otimes \mathfrak A)$ is a von Neumann algebra and $\mathfrak A$ is an injective object in the category of $\mathfrak A$-modules with c.c.p. module maps as morphisms, then so is $\mathbb B(K\otimes \mathfrak A)$. \end{lemma} \begin{proof} As in the proof of the module version of the Arveson extension theorem \cite[Lemma 3.7]{a2}, it is enough to note that, for an orthonormal basis $\{\xi_i\}$ of $K$, the set $\{\xi_i\otimes 1_{\mathfrak A}\}$ is a frame for the Hilbert $\mathfrak A$-module $K\otimes \mathfrak A$. This gives a net of finite rank projections $(q_i)$ (say with rank $k(i)$) in $\mathbb B(K\otimes \mathfrak A)$, tending to the identity in $SOT$, such that $q_i\mathbb B(K\otimes \mathfrak A)q_i=\mathbb M_{k(i)}(\mathfrak A)$. The rest goes as in the proof of the classical Arveson extension theorem. \end{proof}
Note that, $$\mathbb B(K\otimes \mathfrak A)=M(\mathbb K(K)\otimes \mathfrak A)\subseteq \mathbb B(K)\bar\otimes \mathfrak A^{**}.$$ Also, $\mathbb B(K\otimes \mathfrak A)$ is a von Neumann algebra when $\mathfrak A$ is so and $K\otimes \mathfrak A$ is self dual.
Let $\tau$ be an $\mathfrak A$-trace, and consider the left and right regular representations $\pi_\tau$ and $\pi^{op}_\tau$ on $L^2(A,\tau)$. Since the ranges of these representations commute, we may consider the representation $\pi_\tau\otimes\pi_\tau^{op}: A\odot A^{op}\to \mathbb B(L^2(A,\tau))$. Composing this with the $\mathfrak A$-retraction $$\theta: x\in \mathbb B(L^2(A,\tau))\mapsto\langle x\hat1, \hat 1\rangle\in\mathfrak A$$
we get a map $\mu_\tau: A\odot A^{op}\to\mathfrak A$. Both of these maps are $max$-continuous (by universality) on $A\odot A^{op}$. The natural question is when they are $min$-continuous. The main result of this section answers this and gives sufficient conditions for amenability of $\tau$. The proof resembles that of \cite[3.1.6]{b}, but there are many technicalities, due to working with vector valued maps, which have to be taken care of. For $x\in \mathbb M_{n}(\mathfrak A)$, we put, $$\|x\|_2:=\big((tr_n\otimes id)(x^*x)\big)^{\frac{1}{2}}\in\mathfrak A^{+}.$$
\begin{theorem}\label{main} Let $\tau$ be an $\mathfrak A$-trace on a $C^*$-algebra $A$. Consider the following assertions:
$(i)$ $\tau$ is amenable,
$(ii)$ there exists a sequence of c.c.p. module maps $\phi_n: A \to \mathbb M_{k(n)}(\mathfrak A)$ such that
$(tr_{k(n)}\otimes id)\circ\phi_n\to\tau$ in point-norm topology on $A$, as $n\to\infty$, and also $\|\phi_n(ab)-\phi_n(a)\phi_n(b)\|_2\to 0$ in $\mathfrak A$, as $n\to\infty$, for all $a, b\in A$,
$(iii)$ the positive linear map $\mu_\tau$ on $A\odot A^{op}$ is min-continuous on $A \odot A^{op}$,
$(iv)$ the representation $\pi_\tau\otimes\pi_\tau^{op}: A\odot A^{op}\to \mathbb B(L^2(A,\tau))$ is min-continuous,
$(v)$ for any faithful representation $A\subseteq \mathbb B(Y)$, there exists a c.c.p. module map $\Phi: \mathbb B(Y)\to\pi_\tau(A)^{''}$ extending $\pi_\tau$.
Then $(ii)\Rightarrow(iii)\Rightarrow(iv)$ and $(v)\Rightarrow(i)$. If, moreover, $\mathbb B(L^2(A,\tau))$ is an injective object in the category of $\mathfrak A$-modules and c.p. module maps, then $(iv)\Rightarrow(v)$. \end{theorem} \begin{proof} $(ii)\Rightarrow (iii)$. Let $(\phi_n)$ be as in $(ii)$; consider the c.c.p. module maps $\phi_n^{op}: A^{op} \to \mathbb M_{k(n)}(\mathfrak A)^{op}$, and take the corresponding c.c.p. module map, $$\phi_n\otimes\phi_n^{op}: A\otimes A^{op} \to \mathbb M_{k(n)}(\mathfrak A)\otimes\mathbb M_{k(n)}(\mathfrak A)^{op}=\mathbb B(L^2(\mathbb M_{k(n)}(\mathfrak A),tr_{k(n)}\otimes id)),$$ and compose it with $$\mu_n: x\in \mathbb B(L^2(\mathbb M_{k(n)}(\mathfrak A), tr_{k(n)}\otimes id))\mapsto \langle x\hat 1, \hat 1\rangle,$$ and observe that, $$\mu_n\circ(\phi_n\otimes\phi_n^{op})(a\otimes b)=\langle\phi_n(a)J\phi_n^{op}(b^*)J\hat 1, \hat 1\rangle=(tr_{k(n)}\otimes id)\big(\phi_n(a)\phi_n(b)\big),$$
for $a,b\in A$. Since $$0\leq |(tr_{k(n)}\otimes id)(x)|\leq\|x\|_2\quad \big(x\in \mathbb M_{k(n)}(\mathfrak A)\big),$$ as positive elements in $\mathfrak A$, we have \begin{align*}
\|\mu_n(\phi_n\otimes\phi_n^{op}(a\otimes b))-\mu_\tau(a\otimes b)\|&=\|(tr_{k(n)}\otimes id)\big(\phi_n(a)\phi_n(b)\big)-\langle\pi_{\tau}(a)\pi_{\tau}(b)\hat 1, \hat 1\rangle\|\\
&\leq \|(tr_{k(n)}\otimes id)\big(\phi_n(a)\phi_n(b)-\phi_n(ab)\big)\|\\&+\|(tr_{k(n)}\otimes id)\big(\phi_n(ab)\big)-\tau(ab)\|\to 0, \end{align*} as $n\to\infty$, for all $a, b\in A$. Since $\mu_\tau$ is the point-norm limit of $min$-continuous maps, it is $min$-continuous.
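The inequality $0\leq |(tr_{k(n)}\otimes id)(x)|\leq\|x\|_2$ invoked in this step can be obtained from the Cauchy--Schwarz inequality for Hilbert $C^*$-modules \cite[Proposition 1.1]{lan}; a sketch, assuming $tr_{k}$ denotes the normalized trace on $\mathbb M_{k}$: viewing $\mathbb M_{k}(\mathfrak A)$ as a right pre-Hilbert $\mathfrak A$-module with $\langle x,y\rangle:=(tr_{k}\otimes id)(x^*y)$, we get $$|(tr_{k}\otimes id)(x)|^2=\langle x,1\rangle\langle 1,x\rangle\leq\|\langle 1,1\rangle\|\,\langle x,x\rangle=\|x\|_2^2,$$ and taking square roots (operator monotonicity of $t\mapsto t^{\frac{1}{2}}$) gives $|(tr_{k}\otimes id)(x)|\leq\|x\|_2$.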
$(iii)\Rightarrow (iv)$. By $(iii)$, $\mu_\tau$ extends to an $\mathfrak A$-retraction on $A\otimes A^{op}$, and so it has a KSGNS-representation, $$\sigma: A\otimes A^{op}\to \mathbb B(L^2(A\otimes A^{op},\mu_\tau)).$$ By the universality of the KSGNS-representation \cite[Theorem 5.6]{lan}, there is a unitary $u\in \mathbb B\big(L^2(A\otimes A^{op},\mu_\tau), L^2(A,\tau)\big)$ such that $\pi_\tau\otimes\pi_\tau^{op}=u\sigma(\cdot)u^*$ on $A\odot A^{op}$. Thus the left hand side is $min$-continuous on $A\odot A^{op}$.
$(iv)\Rightarrow (v)$. Assume that $\mathbb B(L^2(A,\tau))$ is an injective object in the category of $\mathfrak A$-modules and c.p. module maps. By going to unitizations, we may assume that $A$ is unital. As in the classical case, one could use Lance's trick: $A\otimes A^{op}\subseteq \mathbb B(Y)\otimes A^{op}$ and $\pi_\tau \otimes \pi_\tau^{op}$ extends to a c.p. map $\Psi: \mathbb B(Y)\otimes A^{op}\to \mathbb B(L^2(A,\tau))$, having $A\otimes A^{op}$ in its multiplicative domain. Put $\Phi(x)=\Psi(x\otimes 1)$, where $1$ is the unit of $A$. Then $$\text{ran}(\Phi)\subseteq \Psi(\mathbb C1\otimes A^{op})^{'}=\pi_\tau^{op}(A^{op})^{'}=\pi_\tau(A)^{''}.$$
$(v)\Rightarrow (i)$. For any faithful representation $A\subseteq \mathbb B(Y)$, let $\Phi: \mathbb B(Y)\to\pi_\tau(A)^{''}$ extend $\pi_\tau$ and put $\phi=\langle \Phi(\cdot)\hat 1,\hat 1\rangle$. Since $A$ is in the multiplicative domain of $\Phi$, this is a c.c.p. module map extending $\tau$, which is invariant under conjugation by unitaries in $A+\mathbb C I$, where $I$ is the identity operator on $Y$. \end{proof}
We don't know if $(i)\Rightarrow (ii)$. The proof in the classical case uses approximation of states of $\mathbb B(H)$ (for a Hilbert space $H$) by normal states coming from finite rank positive elements, which has no counterpart in the vector valued case.
We recall that $A$ has $\mathfrak A$-WEP if for every faithful representation and module map $A\subseteq \mathbb B(H\otimes \mathfrak A)$, for a Hilbert space $H$, there is a u.c.p. admissible map $\varphi: \mathbb B(H\otimes \mathfrak A)\to A^{**}$ extending the identity on $A$ \cite{a2}. For the definition of admissible maps, see \cite[Definition 2.1]{a2}.
\begin{proposition} Assume that $\mathfrak A$ is a unital injective $C^*$-algebra such that for a minimal Stinespring dilation $(\pi,K,V)$ as in Lemma \ref{rep2}, $\mathbb B(K\otimes \mathfrak A)$ is a von Neumann algebra. If $A$ has $\mathfrak A$-WEP, then each $\mathfrak A$-trace on $A$ is amenable. \end{proposition} \begin{proof} We identify $A$ with its image under a faithful representation in $\mathbb B(K\otimes \mathfrak A)$. Since $A$ has $\mathfrak A$-WEP, there is a u.c.p. module map $\Phi: \mathbb B(K\otimes \mathfrak A)\to A^{**}$, extending the identity map on $A$. For an $\mathfrak A$-trace $\tau:A\to \mathfrak A$, let $\psi$ be the restriction of $$(\tau^{**}\circ\Phi)\otimes {\rm id}: \mathbb B(K\otimes \mathfrak A)\bar\otimes \mathfrak A^{**}\to \mathfrak A^{**}\bar\otimes \mathfrak A^{**}$$ to $\mathbb B(K\otimes \mathfrak A)\bar\otimes \mathbb C1_{\mathfrak A^{**}}$, canonically identified with $\mathbb B(K\otimes \mathfrak A)$. Let $\mathbb E: \mathfrak A^{**}\bar\otimes \mathfrak A^{**}\to \mathfrak A$ be a conditional expectation and put $$\phi:=\mathbb E\circ\psi: \mathbb B(K\otimes \mathfrak A)\to \mathfrak A.$$ Since $A$ is in the multiplicative domain of $\Phi$, it is also in the multiplicative domain of $\psi$, hence $\phi$ is invariant under conjugation by unitaries of $A+\mathbb CI$. Finally, since $\mathbb E$ is identity on its range, $\phi$ extends $\tau$. The result now follows from Lemmas \ref{rep}, \ref{inj}. \end{proof}
\section{quasidiagonality}
In this section we explore the module version of the notion of quasidiagonal (QD) $C^*$-algebras.
\begin{definition} \label{qd} A $C^*$-module $A$ is called $\mathfrak A$-{\it quasidiagonal} (briefly, $\mathfrak A$-QD) if there exists a net of admissible c.c.p. maps $\phi_n: A \to \mathbb M_{k(n)}(\mathfrak A)$ which are approximately multiplicative and approximately isometric, i.e.,
$\|\phi_n(ab)-\phi_n(a)\phi_n(b)\|\to 0$ and $\|\phi_n(a)\|\to \|a\|,$ as $n\to\infty$, for all $a, b\in A$; or equivalently, if for each finite set $\mathfrak F\subseteq A$ and $\varepsilon>0$, there is a positive integer $k$ and a c.c.p. module map $\phi: A \to \mathbb M_{k}(\mathfrak A)$ satisfying $\|\phi(ab)-\phi(a)\phi(b)\|<\varepsilon$ and $\|\phi(a)\|> \|a\|-\varepsilon,$ for all $a, b\in \mathfrak F$. \end{definition}
When $A$ is unital and $\mathfrak A$ is a von Neumann algebra, an argument similar to that of \cite[7.1.4]{bo} shows that we may take the maps $\phi_n$ in the above definition to be u.c.p. module maps (if they exist).
Clearly $\mathbb M_{n}(\mathfrak A)$ is $\mathfrak A$-QD, for each positive integer $n$. As another immediate example, if $B=C_0(X)$ is a commutative $C^*$-algebra, then $A=B\otimes \mathfrak A=C_0(X, \mathfrak A)$ (with the right module action by multiplication) is $\mathfrak A$-QD (just take direct sums of point evaluations). More generally, we have the following notion.
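Concretely, for $A=C_0(X,\mathfrak A)$ the maps witnessing quasidiagonality may be taken to be diagonal point evaluations (a sketch; the notation $\pi_F$ is ours): for a finite set $F=\{x_1,\dots,x_k\}\subseteq X$, put $$\pi_F: C_0(X,\mathfrak A)\to\mathbb M_{k}(\mathfrak A),\qquad \pi_F(f):={\rm diag}\big(f(x_1),\dots,f(x_k)\big).$$ Each $\pi_F$ is a $*$-homomorphism and a module map, so the multiplicativity defect in Definition \ref{qd} vanishes, while $\|\pi_F(f)\|=\max_{1\leq i\leq k}\|f(x_i)\|$ increases to $\|f\|$ along the net of finite subsets of $X$ directed by inclusion.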
\begin{definition} \label{rfd} A $C^*$-module $A$ is called $\mathfrak A$-{\it residually finite dimensional} (briefly, $\mathfrak A$-RFD) if there is a net of $*$-homomorphisms and module maps $\pi_n: A \to \mathbb M_{k(n)}(\mathfrak A)$ such that $\oplus \pi_n: A\to \prod_{n} \mathbb M_{k(n)}(\mathfrak A)$ is faithful. \end{definition}
Clearly each $\mathfrak A$-RFD $C^*$-module is also $\mathfrak A$-QD. Also if $B$ is an RFD $C^*$-algebra, then $A=B\otimes \mathfrak A$ is $\mathfrak A$-RFD (and in particular, this holds if $B$ is a Type I $C^*$-algebra). The property of being $\mathfrak A$-QD passes to direct products and subalgebras (which are also submodules), and so it also passes to direct sums. When $\mathfrak A$ is injective in the category of $\mathfrak A$-modules with c.c.p. module maps, it also passes to direct limits with injective connecting maps (just as in \cite[7.1.9]{bo}). Also, it behaves well with respect to the minimal module tensor product defined in \cite[Section 3]{a2}: if $A$ and $B$ are $\mathfrak A$-QD, so is $A\otimes_{\mathfrak A}^{min} B$ (cf. \cite[7.1.12]{bo}).
We say that $A$ is $\mathfrak A$-{\it stably finite} (briefly, $\mathfrak A$-SF) if $\mathbb M_{n}(\mathfrak A)\otimes_{\mathfrak A}^{min} A$ contains no proper isometry (i.e., an isometry $s$ with $ss^*\neq 1$), for each positive integer $n$. Similar to \cite[7.1.15]{bo}, if $A$ is $\mathfrak A$-QD, it is also $\mathfrak A$-SF.
\begin{definition} \label{qd2}
For a left Hilbert $\mathfrak A$-module $Y$, a subset $\Omega\subseteq \mathbb B(Y)$ is called {\it quasidiagonal} if for all finite sets $\mathfrak F\subseteq \Omega$ and $\mathfrak Y\subseteq Y$, and each $\varepsilon>0$, there is a finite rank projection $P\in\mathbb B(Y)$ such that $\|PT-TP\|<\varepsilon$, for each $T\in\mathfrak F$, and $P=I$ on $\mathfrak Y$. \end{definition}
Note that here, a finite rank operator is one whose range is a finitely generated submodule of $Y$. Also, unlike the classical case, since the submodules of $Y$ are not necessarily complemented, we cannot merely assume that $\|Pv-v\|<\varepsilon$ for $v\in\mathfrak Y$ and then obtain the stronger condition above by modifying $P$ into an orthogonal projection (cf. the proof of \cite[7.2.3]{bo}). However, as in the above cited result, the stronger condition gives the following property.
\begin{lemma} \label{proj} Let $Y$ be a countably generated Hilbert $\mathfrak A$-module and $\Omega\subseteq \mathbb B(Y)$ be norm separable and quasidiagonal. Then there is an increasing sequence $(P_n)$ of finite rank projections, converging strongly to the identity $I$ on $Y$, such that $\|[P_n,T]\|\to 0$, for $T\in\Omega$. \end{lemma}
\begin{definition} \label{qd3}
A representation $\pi:A\to\mathbb B(Y)$ in a left Hilbert $\mathfrak A$-module $Y$ is called {\it quasidiagonal} if there is a sequence $(P_n)\subseteq \mathbb B(Y)$ of projections such that $P_n\pi(a)-\pi(a)P_n\in\mathbb K(Y)$ and $\|P_n\pi(a)-\pi(a)P_n\|\to 0$, as $n\to\infty$, for each $a\in A$. It is called {\it strongly quasidiagonal} if $\Omega:=\pi(A)$ is a quasidiagonal set of adjointable operators. \end{definition}
It is well known that the two notions are equivalent in the case of (separable) Hilbert space representations. We have the following weaker version of the classical result of Voiculescu \cite[7.2.5]{bo}.
\begin{theorem} [Voiculescu] \label{voi} Assume that $\mathfrak A$ is unital and $A$ is unital and separable. The following are equivalent.
$(i)$ $A$ has a faithful strongly QD representation modulo the compacts in $H\otimes\mathfrak A$, where $H$ is a separable Hilbert space,
$(ii)$ $A$ is $\mathfrak A$-QD with u.c.p. asymptotically multiplicative and isometric module maps. \end{theorem} \begin{proof}
$(i)\Rightarrow(ii)$. If $\pi:A\to \mathbb B(H\otimes\mathfrak A)$ is a faithful strongly QD representation, then by Lemma \ref{proj}, there is an increasing sequence $(P_n)$ of finite rank projections, say of rank $k(n)$, converging SOT to the identity $I$, such that, $\|[P_n,\pi(a)]\|\to 0$, for $a\in A$. Now the c.c.p. module maps $\phi_n: A\to P_n\mathbb B(H\otimes\mathfrak A)P_n\cong \mathbb M_{k(n)}(\mathfrak A)$, given by $\phi_n(a)=P_n\pi(a)P_n$, are asymptotically multiplicative and isometric.
$(ii)\Rightarrow(i)$. Let $\phi_n: A\to \mathbb M_{k(n)}(\mathfrak A)$ be u.c.p. asymptotically multiplicative and isometric module maps. Then $$\Phi:=\oplus \phi_n: A\to \prod_{n=1}^{\infty}\mathbb M_{k(n)}(\mathfrak A)\subseteq\mathbb B\big(\oplus_{n=1}^{\infty}\ell^2_{k(n)}\otimes\mathfrak A\big)$$ is a faithful representation modulo the compacts, and for the canonical orthogonal projections $p_n: \ell^2\to \ell^2_{k(n)}$, $\Phi$ becomes strongly QD with the finite rank projections $P_n=p_n\otimes id$. \end{proof}
Note that when $\mathfrak A$ is a von Neumann algebra, in $(ii)$ we could guarantee that the corresponding maps are also u.c.p. (not just c.c.p.). We don't know however if $(ii)$ implies that every faithful unital essential representation of $A$ in a countably generated $\mathfrak A$-correspondence $Y$ is QD. There are partial results in this direction by Kasparov: if $\mathfrak A$ is $\sigma$-unital and nuclear and $A$ is unital and separable, faithfully represented in a Hilbert module of the form $H\otimes\mathfrak A$, for a separable Hilbert space $H$, then for each unital c.p. map $\pi: A/A\cap \mathbb K(H\otimes\mathfrak A)\to \mathbb B(H\otimes\mathfrak A)$, there is a sequence of isometries $(v_n)\subseteq \mathbb B(H\otimes\mathfrak A)$ with $\pi(a)-v_n^*av_n\in\mathbb K(H\otimes\mathfrak A)$ such that, $\|\pi(a)-v_n^*av_n\|\to 0$, as $n\to\infty$, for $a\in A$ \cite[Theorem 5]{k}. In particular, when $A\cap \mathbb K(H\otimes\mathfrak A)=0$ and $\pi$ is a unital representation, then for projections $P_n=v_nv_n^*$, $P_na-aP_n\in \mathbb K(H\otimes\mathfrak A)$ and $\|P_na-aP_n\|\to 0$, for each $a\in A$, that is, the embedding $$\iota: A\hookrightarrow \mathbb B(H\otimes\mathfrak A)$$ is $\mathfrak A$-QD. On the other hand, by \cite[Theorem 6]{k}, under the above conditions, $\iota$ is approximately equivalent to $$\pi\oplus\iota: A\to \mathbb B(H\otimes\mathfrak A)$$ and so this representation is also $\mathfrak A$-QD.
\begin{proposition} Assume that $A$ is unital and $\mathfrak A$ is an injective von Neumann algebra.
$(i)$ If $A$ is $\mathfrak A$-QD with u.c.p. asymptotically multiplicative and isometric module maps, then $A$ has an amenable $\mathfrak A$-trace.
$(ii)$ If an $\mathfrak A$-retraction satisfies the condition $(ii)$ of Theorem \ref{main} with u.c.p. asymptotically multiplicative and isometric module maps, then it is amenable. \end{proposition}
\begin{proof} We only show $(i)$, as part $(ii)$ is immediate from the definition. Let $\phi_n: A\to \mathbb M_{k(n)}(\mathfrak A)$ be u.c.p. asymptotically multiplicative and isometric module maps (in the operator or Hilbert-Schmidt norm), and let $A\subseteq \mathbb B(Y)$ be any faithful non degenerate representation in an $\mathfrak A$-correspondence $Y$ (with $A$ containing the identity of $\mathbb B(Y)$). Since $\mathfrak A$ is also injective in the category of $\mathfrak A$-modules \cite[Theorem 3.2]{fp}, we get c.c.p. module map extensions $\tilde\phi_n: \mathbb B(Y)\to \mathbb M_{k(n)}(\mathfrak A)$. The net consisting of the maps $(tr_{k(n)}\otimes {\rm id})\circ\tilde\phi_n: \mathbb B(Y)\to \mathfrak A$ has a cluster point in the point-ultraweak topology by \cite[1.3.7]{bo}. The restriction of this cluster point to $A$ is the required amenable $\mathfrak A$-trace. \end{proof}
\section{local reflexivity}
Local reflexivity in the classical setting is closely related to exactness and several $min$-continuity properties studied by Kirchberg. In this section we develop the module versions of this notion and relate it to the result in \cite{a2}.
\begin{definition} \label{lr} The $C^*$-module $A$ is called $\mathfrak A$-{\it locally reflexive} (briefly, $\mathfrak A$-LR) if for every operator subsystem and finitely generated submodule $E\subseteq A^{**}$, there exists a net of c.c.p. maps $\phi_i: E \to A$ converging to ${\rm id}_E$ in the point-ultraweak topology. \end{definition}
Note that we do not assume that the maps $\phi_i: E \to A$ are module maps, but they are approximately so, in the point-ultraweak topology.
Recall that an exact sequence $$0\rightarrow I\rightarrow A\xrightarrow{\pi} B\rightarrow 0$$ of $C^*$-modules, with arrows both $*$-homomorphisms and module maps, is {\it locally $\mathfrak A$-split} if for each finitely generated operator subspace and submodule $E\subseteq B$ there is a c.p. module map $\sigma: E\to A$ with $\pi\circ\sigma=$id$_E$ \cite{a2}. More generally, a c.c.p. module map $\phi: E\to A/J$ is called {\it $\mathfrak A$-liftable} if, for the quotient map $\pi: A\to A/J$, there is a c.c.p. module map $\sigma: E\to A$ with $\pi\circ\sigma=\phi$.
Let us first show that $\mathfrak A$-local reflexivity passes to (and from) ideals and quotients (cf. \cite[9.1.4]{bo}). But first we need the following module version of Arveson's Lemma \cite[Appendix C]{bo}.
\begin{lemma} [Arveson] \label{lift} Let $\mathfrak A$ be separable, $A$ be unital and $J$ be a closed ideal and submodule of $A$. Then for each operator system and countably generated module $E$, the set of $\mathfrak A$-liftable module maps from $E$ to $A/J$ is closed in the point-norm topology. \end{lemma} \begin{proof} Let $\phi: E \to A/J$ be a c.c.p. module map and let $\psi^{'}_n : E \to A$ be c.c.p. module maps with $\pi\circ\psi^{'}_n\to\phi$, in the point-norm topology. Fix a countable dense subset $(\alpha_j)\subseteq \mathfrak A$ and a countable generating set $(x_k)\subseteq E$. As in the proof of \cite[Lemma C2]{bo}, we may assume that,
$$\|\pi\circ\psi^{'}_n(x_k\cdot\alpha_j)-\phi(x_k\cdot\alpha_j)\|< 1/2^n\ \ (k,j<n)$$ and inductively find c.c.p. maps $\psi_n : E \to A$ such that,
$$\|\pi\circ\psi_n(x_k\cdot\alpha_j)-\phi(x_k\cdot\alpha_j)\|< 1/2^n, \ \|\psi_{n+1}(x_k\cdot\alpha_j)-\psi_n(x_k\cdot\alpha_j)\|< 1/2^{n-1}\ \ (k,j<n).$$ Moreover, in the inductive step from $n$ to $n+1$, the c.c.p. map $\psi_{n+1} : E \to A$ is defined via, $$\psi_{n+1}= (1 - e_\lambda)^{\frac{1}{2}}\psi^{'}_{n+1}(1 - e_\lambda)^{\frac{1}{2}}+ e_\lambda^{\frac{1}{2}} \psi_n e_\lambda^{\frac{1}{2}},$$ for a large enough index $\lambda$ in a quasicentral bounded approximate identity $(e_\lambda)$ of $J$ inside $A$. Since $e_\lambda$ approximately commutes with the ranges of $\psi_n$ and $\psi^{'}_{n+1}$, and each $\psi^{'}_{n}$ is a module map, we may inductively guarantee that,
$$\|\psi_{n}(x_k\cdot\alpha_j)-\psi_n(x_k)\cdot\alpha_j\|< 1/2^{n-1}\ \ (k,j<n).$$ Now the sequence $(\psi_n)$ of c.c.p. maps converges in the point-norm topology on a dense subset of $E$ (consisting of finite combinations of elements $x_k\cdot\alpha_j$ with coefficients in $\mathbb Q+i\mathbb Q$), and so everywhere, to a c.c.p. module map $\psi: E \to A$ which lifts $\phi$. \end{proof}
We omit the proof of the next lemma which adapts that of \cite[9.1.4]{bo}, using the above lemma.
\begin{lemma} \label{ext} Let $\mathfrak A$ be separable, $A$ be unital and $$0\rightarrow I\rightarrow A\xrightarrow{\pi} B\rightarrow 0$$ be an exact sequence of $C^*$-modules, with arrows both $*$-homomorphisms and module maps. Then $A$ is $\mathfrak A$-LR iff both $I$ and $B$ are $\mathfrak A$-LR and the extension is locally $\mathfrak A$-split. \end{lemma}
It follows from \cite[Proposition 3.14]{a2} that if $A$ is unital and $\mathfrak A$-LR and $$0\to J\to A\to A/J\to 0$$ is an exact sequence with arrows both $*$-homomorphisms and module maps, then for each $C^*$-module $B$, the sequence $$0\to J\otimes_{\mathfrak A}^{min}B^{op}\to A\otimes_{\mathfrak A}^{min}B^{op}\to (A/J)\otimes_{\mathfrak A}^{min}B^{op}\to 0$$ is exact.
\begin{lemma} \label{lr2} Let $\mathfrak A$ be a separable locally reflexive $C^*$-algebra; then $\mathfrak A$ is $\mathfrak A$-locally reflexive. \end{lemma} \begin{proof} Let $(\alpha_j)$ be a countable dense subset of $\mathfrak A$. Given an operator subsystem and finitely generated submodule $E\subseteq \mathfrak A^{**}$ with generators $x_1,\cdots,x_k$, for each $n\geq 1$, let $E_{k,n}$ be the finite dimensional operator system generated by $1\in \mathfrak A^{**}$ and the elements $x_i\cdot\alpha_j$ and their adjoints, for $1\leq i\leq k, 1\leq j\leq n$. Then there is a net $\psi_{m,k,n}: E_{k,n}\to \mathfrak A$ converging ultraweakly to the identity on $E_{k,n}$, as $m\to \infty$. We may regard this as a multi-index net, converging ultraweakly to the identity on the dense subset $\cup_{k,n} E_{k,n}$ of $E$. \end{proof}
\begin{corollary} \label{unitization} Let $\mathfrak A$ be a separable unital locally reflexive $C^*$-algebra; then a (non-unital) $C^*$-module $A$ is $\mathfrak A$-LR iff the unital $C^*$-module $A\oplus \mathfrak A$ is $\mathfrak A$-LR. \end{corollary}
If $A,B,$ and $C$ are $C^*$-modules and $\pi: A\otimes_{\mathfrak A}^{min}B^{op}\to C$ is a representation, then $\pi$ is induced by a $*$-homomorphism (still denoted by) $\pi: A\otimes_{min}B^{op}\to C$ vanishing on the $min$-closure of the ideal and submodule $I_{\mathfrak A}$ generated by elements of the form $a\cdot\alpha\otimes b-a\otimes b\cdot\alpha$, for $a\in A, b\in B$ and $\alpha\in\mathfrak A$. Hence $\pi=\pi_A\otimes\pi_B$ for representations $\pi_A: A\to C$ and $\pi_B: B\to C$ with commuting ranges, satisfying the compatibility condition, $$\pi_A(a\cdot\alpha)\pi_B(b)=\pi_A(a)\pi_B(b\cdot\alpha)\ \ (a\in A, b\in B, \alpha\in\mathfrak A).$$ For the canonical inclusion $\pi$ into $C:=(A\otimes_{\mathfrak A}^{min}B^{op})^{**}$, this gives a binormal map: $A^{**}\odot (B^{**})^{op}\to (A\otimes_{\mathfrak A}^{min}B^{op})^{**}$, vanishing on the ideal and submodule $J_{\mathfrak A}$ generated by elements of the form $x\cdot\alpha\otimes y-x\otimes y\cdot\alpha$, for $x\in A^{**}, y\in B^{**}$ and $\alpha\in\mathfrak A$, which in turn induces a binormal module map: $A^{**}\otimes_{\mathfrak A}^{min}(B^{**})^{op}\to (A\otimes_{\mathfrak A}^{min}B^{op})^{**}$. To observe that the latter map is also injective, take $x$ which is not in the $min$-closure of $J_{\mathfrak A}$ and use the fact that $A^*\odot B^*$ separates points and closed sets in $A^{**}\otimes_{min}(B^{**})^{op}$ (cf. \cite[Exercise 3.1.5]{bo}) to find $\psi\in A^*\odot B^*$ which vanishes on $J_{\mathfrak A}$ with $\psi(x)=1$. Since $I_{\mathfrak A}\subseteq J_{\mathfrak A}$, it follows that $x\notin I_{\mathfrak A}^{\perp\perp}$, and thus we obtain the binormal module map inclusion $$A^{**}\odot_{\mathfrak A}(B^{**})^{op}\hookrightarrow (A\otimes_{\mathfrak A}^{min}B^{op})^{**}.$$
\begin{definition} \label{prop} The $C^*$-module $A$ is said to have property $C_\mathfrak A$, or $C^{'}_\mathfrak A$, or $C^{''}_\mathfrak A$ if, respectively, the inclusion,
$$A^{**}\odot_{\mathfrak A}(B^{**})^{op}\hookrightarrow (A\otimes_{\mathfrak A}^{min}B^{op})^{**},$$
or,
$$A\odot_{\mathfrak A}(B^{**})^{op}\hookrightarrow (A\otimes_{\mathfrak A}^{min}B^{op})^{**},$$
or,
$$A^{**}\odot_{\mathfrak A} B^{op}\hookrightarrow (A\otimes_{\mathfrak A}^{min}B^{op})^{**},$$
is $min$-continuous for any $C^*$-module $B$.
\end{definition}
It follows from \cite[Proposition 3.6]{a2} that any of the above properties passes to $C^*$-subalgebras which are also submodules. Also, similar to \cite[9.2.4]{bo}, the first and third properties pass to quotients by closed ideals which are also submodules.
Let $E$ and $F$ be Banach spaces and, respectively, Banach right and left $\mathfrak A$-modules with compatible actions, and for $\mathfrak A$-correspondences $X$ and $Y$ and isometric inclusions $E\subseteq \mathbb B(X)$ and $F\subseteq \mathbb B(Y)^{op}$, give operator norms on $\mathbb M_n(E)\subseteq \mathbb B(X^n)$ and $\mathbb M_n(F)\subseteq \mathbb B(Y^n)^{op}$. For a linear module map $T: E\to F$ and amplification $T_n: \mathbb M_n(E)\to \mathbb M_n(F)$, put $\|T\|_{cb}=\sup_n\|T_n\|$. Also denote the completion of $E\odot_{\mathfrak A} F$ in $\mathbb B(X)\otimes_{\mathfrak A}^{min}\mathbb B(Y)^{op}$ by $E\otimes_{\mathfrak A}^{min} F$. Both this construction and the $cb$-norm are independent of the choice of embeddings.
For $E$ and $F$ as above, let $B_{\mathfrak A}(E,\mathfrak A)$ be the Banach space of all bounded linear right module maps from $E$ to $\mathfrak A$. Similar to \cite[Theorem B.13]{bo}, we want to identify $B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} F$ with $CB_{\mathfrak A}(E,F)$. In the next lemma we work with the case where the above isometric inclusions exist with countably generated $X$ and $Y$.
\begin{lemma} \label{iso} For $E$ and $F$ as above and $$z=\sum_{k=1}^{n}\phi_k\otimes y_k\in B_{\mathfrak A}(E,\mathfrak A)\odot_{\mathfrak A} F,$$ consider the module map $T_z:E\to F$ defined by $T_z(x)=\sum_{k=1}^{n} y_k\cdot\phi_k(x)$. Then $\|z\|_{min}=\|T_z\|_{cb}$ and the resulting isometric inclusion $$B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} F\subseteq CB_{\mathfrak A}(E,F),$$ is surjective when $E$ or $F$ is a finitely generated $\mathfrak A$-module. \end{lemma} \begin{proof} Let $F\subseteq \mathbb B(Y)^{op}$, where $Y=H\otimes \mathfrak A$, with $H$ a separable Hilbert space. Then by \cite[Lemma 2.4]{a2}, there are finite rank projections $q_n\in \mathbb B(Y)$, say of rank $k(n)$, such that $q_n\uparrow 1$ (SOT) in $\mathbb B(Y)$. Let $\phi_n: \mathbb B(Y)\to\mathbb B(q_nY)=\mathbb M_{k(n)}(\mathfrak A)$ be the compression by $q_n$, and note that for $w_n=({\rm id}\otimes \phi_n)(z)$, $T_{w_n}=\phi_n\circ T_z$ in $CB_{\mathfrak A}(E,\mathbb M_{k(n)}(\mathfrak A))$. Thus \begin{align*}
\|z\|_{min}&=\sup_n\|({\rm id}\otimes \phi_n)(z)\|_{B_{\mathfrak A}(E,\mathfrak A)\odot_{\mathfrak A}\mathbb M_{k(n)}(\mathfrak A)}\\
&=\sup_n\|T_{w_n}\|_{cb}\\
&=\sup_n\|\phi_n\circ T_z\|_{cb}\\
&=\|T_z\|_{cb}. \end{align*} The surjectivity in the finitely generated case is straightforward. \end{proof}
\begin{proposition} \label{c''} Assume that $\mathfrak A$ is unital and separable, and for the $C^*$-module $A$, $A^{**}$ has a faithful representation in a countably generated $\mathfrak A$-correspondence. Then $A$ is $\mathfrak A$-LR iff it has property $C_{\mathfrak A}^{''}$. \end{proposition} \begin{proof} Let $A$ be $\mathfrak A$-LR and $B$ be any $C^*$-module. Let $z=\sum_{k=1}^{n} x_k\otimes b_k$ be an element in $A^{**}\odot B^{op}$ and $y=\sum_{j=1}^{m} y_j\cdot\alpha_j\otimes c_j-y_j\otimes c_j\cdot\alpha_j$ be a typical element in the ideal $J_{\mathfrak A}$ of $A^{**}\odot B^{op}$. Let $E$ be the operator system and finitely generated submodule of $A^{**}$ generated by the first legs of all the elementary tensors appearing in the decompositions of $z$ and $y$ above. By assumption, there is a net of c.c.p. maps $\phi_i: E\to A$, converging to the identity of $E$ in the point-ultraweak topology. We then have, $(\phi_i\otimes {\rm id}_B)(z+y)\to z+y$, in the ultraweak topology of $(A\otimes_{min} B^{op})^{**}$. Thus,
$$\|z+y\|_{(A\otimes_{min} B^{op})^{**}}\leq\liminf_i\|(\phi_i\otimes {\rm id}_B)(z+y)\|_{A\otimes_{min} B^{op}}\leq \|z+y\|_{A^{**}\otimes_{min} B^{op}}.$$
Let $I_{\mathfrak A}$ be the corresponding ideal of $A\odot B^{op}$; then since $A$ is weak${}^*$ dense in $A^{**}$, $y\in I_{\mathfrak A}^{\perp\perp}$, that is, $\bar y=0$ in $(A\otimes_{\mathfrak A}^{min} B^{op})^{**}$. Therefore, taking the infimum over all $y$'s, we get $\|\bar z\|_{(A\otimes_{\mathfrak A}^{min} B^{op})^{**}}\leq \|\bar z\|_{A^{**}\otimes_{\mathfrak A}^{min} B^{op}}$, where $\bar z$ on the left and right hand sides denotes the coset of $z$ in the corresponding space. Therefore, $A$ has property $C_{\mathfrak A}^{''}$.
Conversely, if $A$ has property $C_{\mathfrak A}^{''}$, let $E$ be the operator system and finitely generated submodule of $A^{**}$ with generators $x_1,\cdots,x_k$ and $(\alpha_j)$ be a countable dense subset of $\mathfrak A$. Similar to the proof of Lemma \ref{lr2}, we can construct an increasing double-indexed sequence of finitely generated operator systems $E_{j,n}\subseteq A^{**}$, whose union is dense in $E$. By the above lemma, the inclusion $E\hookrightarrow A^{**}$ corresponds to an element $z\in B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A^{**}$ with $\|z\|_{min}=1.$ Let $B_{\mathfrak A}(E,\mathfrak A)\subseteq \mathbb B(X)$ isometrically, for an $\mathfrak A$-correspondence $X$. Then by property $C_{\mathfrak A}^{''}$, we have the isometric inclusion $$\mathbb B(X)\otimes_{\mathfrak A}^{min} A^{**}\hookrightarrow (\mathbb B(X)\otimes_{\mathfrak A}^{min} A)^{**}.$$ Next, as in the proof of \cite[Proposition 3.6]{a2}, we get isometric inclusions, $$B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A^{**}\hookrightarrow \mathbb B(X)\otimes_{\mathfrak A}^{min} A^{**},$$ and, $$(B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A)^{**}\hookrightarrow (\mathbb B(X)\otimes_{\mathfrak A}^{min} A)^{**}.$$ Therefore, the map, $$B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A^{**}\rightarrow (B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A)^{**},$$
is isometric. Thus, $\|z\|_{(B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A)^{**}}=1.$ Choose a net $(z_i)$ in $B_{\mathfrak A}(E,\mathfrak A)\otimes_{\mathfrak A}^{min} A$ converging weak${}^*$ to $z$ such that $\|z_i\|\leq 1$, for each $i$, and apply the lemma again to get a net of c.c. module maps $\phi_i: E\to A$ converging to ${\rm id}_E$ in the point-ultraweak topology. The restriction $\phi_{i,j,n}$ of $\phi_i$ to $E_{j,n}$ is a c.c. map, and as in the proof of \cite[9.2.5]{bo}, could be replaced by a net of c.c.p. maps (not module maps anymore) converging (as a multi-index net) to the identity on the dense subset $\cup_{j,n} E_{j,n}$, in point-ultraweak topology. Therefore, $A$ is $\mathfrak A$-LR. \end{proof}
The next proposition is proved as in \cite[9.2.7]{bo}, in which instead of \cite[3.7.6]{bo}, we use \cite[Proposition 3.14(ii)]{a2}.
\begin{proposition} \label{c'} A $C^*$-module $A$ with property $C_{\mathfrak A}^{'}$ is $\mathfrak A$-exact. \end{proposition}
We don't know if the converse is also true. Also, not having the analogue of Dadarlat's embedding theorem \cite[8.2.4]{bo}, we don't know if every $\mathfrak A$-exact $C^*$-module is a subquotient of an $\mathfrak A$-nuclear $C^*$-module.
The next proposition is proved as in \cite[9.3.2]{bo}, in which instead of \cite[3.8.5]{bo}, we use \cite[Theorem 3.17]{a2}.
\begin{proposition} \label{c} A $C^*$-module $A$ has property $C_{\mathfrak A}$ if $A^{**}$ is $\mathfrak A$-semidiscrete. \end{proposition}
Again, we don't know if an $\mathfrak A$-injective $C^*$-module is $\mathfrak A$-semidiscrete (or $\mathfrak A^{**}$-semidiscrete), so we could not check if for an $\mathfrak A$-nuclear $C^*$-module $A$, $A^{**}$ is $\mathfrak A$-semidiscrete (or $\mathfrak A^{**}$-semidiscrete). In particular, we don't know if $\mathfrak A$-exact (or even $\mathfrak A$-nuclear) $C^*$-modules have property $C_{\mathfrak A}$. This also closes the (most obvious) path to show that $\mathfrak A$-exact $C^*$-modules are $\mathfrak A$-locally reflexive; or that, the quotients of $\mathfrak A$-exact (resp., $\mathfrak A$-nuclear) $C^*$-modules by closed ideals, which are also submodules, are again $\mathfrak A$-exact (resp., $\mathfrak A$-nuclear).
\end{document} |
\begin{document}
\title{Permutation testing in high-dimensional linear models: an empirical investigation\footnote{Accepted for publication in \emph{Journal of Statistical Computation and Simulation}.}}
\maketitle
\begin{abstract} \noindent Permutation testing in linear models, where the number of nuisance coefficients is smaller than the sample size, is a well-studied topic. The common approach of such tests is to permute residuals after regressing on the nuisance covariates. Permutation-based tests are valuable in particular because they can be highly robust to violations of the standard linear model, such as non-normality and heteroscedasticity. Moreover, in some cases they can be combined with existing, powerful permutation-based multiple testing methods. Here, we propose permutation tests for models where the number of nuisance coefficients exceeds the sample size. The performance of the novel tests is investigated with simulations. In a wide range of simulation scenarios our proposed permutation methods provided appropriate type I error rate control, unlike some competing tests, while having good power. \\ \\ \emph{keywords:} Permutation test; Group invariance test; High-dimensional inference; Heteroscedasticity; Semi-parametric \end{abstract}
\section{Introduction}
We consider the problem of testing hypotheses about coefficients in linear models, where the outcome may be non-Gaussian and heteroscedastic, and the number of nuisance coefficients exceeds the sample size. By the nuisance coefficients we mean the coefficients that are not tested by the particular test at hand, but still need to be dealt with since they lead to confounding effects. In recent decades, the literature on permutation methods has strongly expanded \citep{tusher2001significance, meinshausen2011asymptotic, hemerik2018false, ganong2018permutation, berrett2018conditional,he2019permutation,albajes2019voxel, hemerik2019permutation, rao2019permutation}. While the permutation test dates far back \citep{fisher1936coefficient}, most of the permutation tests in the presence of nuisance were published in the last four decades. To our knowledge, the existing methods are limited to low-dimensional nuisance. For the high-dimensional case, an approach similar to a permutation test is proposed in \citet{dezeure2017high}.
Permutation tests for low-dimensional linear models are valuable for two main reasons. First, they are robust to violations of certain standard assumptions, such as normality and homoscedasticity \citep{winkler2014permutation,hemerik2020robust}. Second, when the outcome is multidimensional, a permutation-based test can be combined with existing permutation-based multiple testing methods, which tend to be relatively powerful, since they take into account the dependence structure of the outcomes \citep{meinshausen2006false, meinshausen2011asymptotic, hemerik2018false,hemerik2019permutation}. For example, under strong positive dependence among \emph{p}-values, the Bonferroni-Holm multiple testing method \citep{holm1979simple} is greatly improved by a permutation method \citep{ westfall1993resampling}.
For the low-dimensional general linear model, with identity link but not necessarily Gaussian or homoscedastic residuals, several different permutation tests have been proposed. The main approach that these methods have in common is to permute residuals after regressing on the nuisance covariates.
For overviews of the available methods, see \citet{anderson1999empirical}, \citet{anderson2001permutation}, \citet{winkler2016faster} and in particular \citet{winkler2014permutation}. Among the existing permutation methods, the Freedman-Lane approach \citep{freedman1983nonstochastic} is most commonly used and provides excellent power and type I error control.
Because the existing permutation tests require estimating the nuisance coefficients using maximum likelihood, these methods cannot be used when the number of covariates exceeds the sample size. In recent years, important theoretical results have been published on testing in such high-dimensional linear models. Several of these tests have proven asymptotic properties. In particular, the method in \citet{zhang2014confidence} has been shown to be asymptotically optimal under certain assumptions \citep{van2014asymptotically}. \citet{dezeure2017high} propose a bootstrap approach, which is related to the method in \citet{zhang2014confidence}.
Software implementations of tests for high-dimensional models include those described in \citet{dezeure2015high} and \citet{Chernozhukov2016}.
Testing in high-dimensional linear models is very challenging, because a large number of unknown nuisance effects needs to be dealt with, using a relatively small sample size. Consequently, tests tend to sacrifice much power compared to the situation where all nuisance coefficients would be known. Further, the asymptotic properties of the mentioned methods rely on complex assumptions and sparsity. The test by \citet{zhang2014confidence} can be rather anti-conservative in settings where a substantial fraction of the coefficients are non-zero. Moreover, these methods are not based on permutations. Hence they do not generally have the above-mentioned advantages, such as robustness against certain violations of the standard linear model. An exception is the bootstrap method in \citet{dezeure2017high}, which tends to be more robust to such violations.
We propose two novel tests, which, to our knowledge, are the first permutation tests in the presence of high-dimensional nuisance. One is an extension of the low-dimensional method in \citet{freedman1983nonstochastic} and the other is somewhat related to a method by Kennedy \citep{kennedy1995randomization, kennedy1996randomization}. Further, we allow the tested parameter to be multi-dimensional, unlike many existing methods. Using simulations we show that our methods provide appropriate type I error rate control in a wide range of situations. In particular, we illustrate empirically that our tests have the above-mentioned robustness properties. The methods in this paper have been implemented in the R package \emph{phd}, available on CRAN.
This paper is built up as follows. In Section \ref{secldn} we discuss permutation testing in settings with low-dimensional nuisance. This section contains some novel observations that will be used in Section \ref{sechd}. There, we propose permutation tests for high-dimensional settings. We assess the performance of our methods with simulations in Section \ref{secsims}. An analysis of real data is in Section \ref{secdata}.
\section{Low-dimensional nuisance} \label{secldn}
\subsection{Notation and basic ideas} \label{secnota} We consider the general linear model $$\bm{Y}=\bm{X}\bm{\beta}+\bm{Z}\bm{\gamma} + \bm{\epsilon},$$ where $\bm{X}$ is an $n\times d$ matrix of covariates of interest, $\bm{Z}$ an $n\times q$ matrix of nuisance covariates and $\bm{\epsilon}$ an $n$-vector of i.i.d. errors with mean $0$ and non-zero variance, which are independent of the covariates. Here the rows of $\bm{X}$, $\bm{Z}$ and $\bm{Y}$ are i.i.d. The matrix $\bm{Z}$ is assumed to have full rank with probability $1$. The parameter $\bm{\beta}\in \mathbb{R}^d$ is of interest and $\bm{\gamma} \in \mathbb{R}^q$ is a nuisance parameter. We want to test the null hypothesis $H_0: \bm{\beta}=\bm{0}\in \mathbb{R}^d$. Here $\bm{0}$ might be replaced by another constant: the extension is straightforward.
Let $w$ be a positive integer, which will denote the number of random permutations or other transformations. In this paper, all permutation \emph{p}-values are of the form \begin{equation} \label{formulap}
p=w^{-1}\big|\{1\leq j \leq w: T_j\geq T_1\}\big|, \end{equation} or, in case of a two-sided test where both small and large values of $T_1$ are evidence against $H_0$, \begin{equation} \label{formulap2}
p=2 w^{-1} \min\Big\{\big|\{1\leq j \leq w: T_j\geq T_1\}\big|,\big|\{1\leq j \leq w: T_j\leq T_1\}\big|\Big\}. \end{equation}
Here $T_1,...,T_w\in \mathbb{R}$ are statistics whose definition depends on the particular permutation method. They are specified in the sections below. For every $2\leq j \leq w$, the statistic $T_j$ corresponds to the $j$-th permutation. The statistic $T_1$ is based on the original, unpermuted data. All existing and novel methods in this paper only differ with respect to how $T_1,...,T_w$ are computed.
Although we will often write `permutation', sign-flipping of residuals can also be used \citep{winkler2014permutation}. The existing methods, as well as the novel methods in this paper, consist of the following steps. \vskip3mm
\begin{enumerate} \item Compute a test statistic $T_1$ based on the original data. \item Compute a test statistic $T_2$ in a similar way, but after randomly permuting certain residuals. Repeat to obtain $T_3,...,T_w$. \item The \emph{p}-value equals \eqref{formulap} or \eqref{formulap2}. \end{enumerate} \vskip3mm
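The three steps and the $p$-value formulas \eqref{formulap} and \eqref{formulap2} can be sketched as follows. This is a minimal Python illustration, not the paper's own implementation (which is the R package \emph{phd}); the function names are ours:

```python
import numpy as np

def p_one_sided(T):
    # Eq. (1): T[0] is the observed statistic T_1 and T[1:] are the
    # statistics T_2, ..., T_w computed after random permutations.
    T = np.asarray(T, dtype=float)
    return np.sum(T >= T[0]) / len(T)

def p_two_sided(T):
    # Eq. (2): doubles the smaller of the two one-sided counts.
    T = np.asarray(T, dtype=float)
    w = len(T)
    return 2 * min(np.sum(T >= T[0]), np.sum(T <= T[0])) / w
```

Note that $T_1$ is counted in both tails, so the smallest attainable $p$-value is $1/w$ (one-sided) or $2/w$ (two-sided).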
Most of the existing permutation methods use residualization of $\bm{Y}$ or $\bm{X}$ with respect to the nuisance $\bm{Z}$. In the low-dimensional situation, the residual forming matrix is $$\bm{R}= \bm{I}-\bm{H}= \bm{I}- \bm{Z}(\bm{Z}'\bm{Z})^{-1}\bm{Z}'.$$ When $d=1$ we will sometimes consider $\bm{R}\bm{X}\in \mathbb{R}^n$, which is assumed to be nonzero with probability 1. In Section \ref{secldn} we assume $\bm{Z}$ contains a column of $1$'s. This implies that the entries of $\bm{R}\bm{X}$ and $\bm{R}\bm{Y}$ sum up to 0.
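The stated properties of $\bm{R}$ (it is a projection, $\bm{R}\bm{H}=\bm{0}$, and the entries of $\bm{R}\bm{X}$ sum to $0$ when $\bm{Z}$ contains a column of $1$'s) are easy to verify numerically. A small Python sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 20, 3
# Z contains a column of ones, as assumed in this section
Z = np.column_stack([np.ones(n), rng.standard_normal((n, q - 1))])
H = Z @ np.linalg.inv(Z.T @ Z) @ Z.T   # hat matrix
R = np.eye(n) - H                      # residual-forming matrix
X = rng.standard_normal(n)

assert np.allclose(R @ R, R)           # R is a projection
assert np.allclose(R @ H, 0)           # R annihilates the column space of Z
assert abs((R @ X).sum()) < 1e-8       # entries of RX sum to 0
```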
Note that if we use permutation, we can write the transformed residuals as $\bm{P}\bm{R}\bm{Y},$ where $\bm{P}$ is an $n \times n$ matrix with exactly one $1$ in every row and column and elsewhere 0's. In case of sign-flipping, $\bm{P}$ is instead an $n \times n$ diagonal matrix with diagonal elements in $\{1,-1\}$ \citep{winkler2014permutation}. We write $\bm{P}_1,...,\bm{P}_w$ to distinguish the $w$ random permutation matrices. Here $\bm{P}_1$ is the identity matrix and $\bm{P}_2,...,\bm{P}_w$ are random.
\subsection{Choice of test statistics} \label{fl}
Here we discuss the choice of test statistics within the permutation method of Freedman and Lane \citep{freedman1983nonstochastic,winkler2014permutation}. The purpose of this section is to discuss some existing and novel results that we will use in Section \ref{sechd}.
The Freedman-Lane permutation method is known to provide excellent type I error control, with both its level and power staying very close to the parametric \emph{F}-test, under the Gaussian model. The test statistic $T_1$ is based on the unpermuted model $\bm{Y}=\bm{X}\bm{\beta}+\bm{Z}\bm{\gamma}+\epsilon$. The other statistics are obtained after randomly transforming the residuals. That is, for $2\leq j \leq w$ the statistic $T_j$ is based on the model $(\bm{P}_j\bm{R}+\bm{H})\bm{Y}=\bm{X}\bm{\beta}+\bm{Z}\bm{\gamma}+\epsilon$, where the same test statistic, say $T$, is used as for computing $T_1$. Thus\begin{equation} \label{T1FL} T_1=T(\bm{X},\bm{Z},\bm{Y}), \end{equation} \begin{equation} \label{TjFL} T_j=T\big(\bm{X},\bm{Z},(\bm{P}_j\bm{R}+\bm{H})\bm{Y}\big), \end{equation}
where $T$ is a suitable test statistic, the choice of which we now discuss.
It is usually important to take $T$ to be an asymptotically pivotal statistic, i.e., a statistic whose asymptotic null distribution does not depend on any unknowns under $H_0$ (\citeauthor{kennedy1996randomization}, \citeyear{kennedy1996randomization}, p.926-927, \citeauthor{winkler2014permutation}, \citeyear{winkler2014permutation}, p.382, \citeauthor{hall1989effect}, \citeyear{hall1989effect}, \citeauthor{hall1991two}, \citeyear{hall1991two}). A pivotal statistic $T$ will always involve estimation of the nuisance parameters. Thus, after every permutation, the nuisance parameters need to be estimated anew. Examples of pivotal test statistics are the \emph{F}-statistic and Wald statistic. These are equivalent: the resulting permutation \emph{p}-value \eqref{formulap} is the same.
In case $X$ is one-dimensional, the \emph{F}-statistic is also equivalent to the square of the \emph{partial correlation} \citep{fisher1924distribution, agresti2015foundations}, which is used in \citet{anderson2001permutation}. The partial correlation is the sample Pearson correlation of $\bm{R} \bm{Y}$ and $\bm{R} \bm{X}$, \begin{equation} \label{parcor} \rho\big( \bm{R} \bm{Y},\bm{R}\bm{X} \big)= \frac{ (\bm{R}\bm{Y})'\bm{R}\bm{X} }{ \sqrt{ \sum_i (\bm{R}\bm{Y})_i^2 \sum_i (\bm{R}\bm{X})_i^2 } }. \end{equation} Here we used that the sample means of $\bm{R}\bm{Y}$ and $\bm{R}\bm{X}$ are $0$. If we use the partial correlation in the Freedman-Lane permutation test, this means that we take $T(\bm{X},\bm{Z},\bm{Y})= \rho\big( \bm{R} \bm{Y},\bm{R}\bm{X} \big),$ so that \eqref{T1FL} and \eqref{TjFL} become \begin{equation} \label{T1OLS} T_1= \rho\big( \bm{R} \bm{Y},\bm{R}\bm{X} \big) \end{equation} \begin{equation} \label{TjOLS}
\quad T_j= \rho\big( \bm{R} (\bm{P}_j\bm{R}+\bm{H})\bm{Y} ,\bm{R}\bm{X} \big), \end{equation} where $\bm{R} (\bm{P}_j\bm{R}+\bm{H})$ could be simplified to $\bm{R} \bm{P}_j\bm{R}$, since $\bm{R}\bm{H}=\bm{0}$.
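Putting \eqref{T1OLS} and \eqref{TjOLS} together, a Freedman-Lane test with the partial correlation as statistic can be sketched as follows. This is a Python sketch under the assumptions of this section with $d=1$; the function name and interface are ours, not those of the \emph{phd} package:

```python
import numpy as np

def freedman_lane(Y, X, Z, w=1000, seed=None):
    # Two-sided Freedman-Lane permutation test with the partial
    # correlation rho(RY, RX) as test statistic, Eqs. (6)-(7).
    rng = np.random.default_rng(seed)
    n = len(Y)
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    R = np.eye(n) - H
    RX = R @ X

    def stat(y):
        Ry = R @ y
        return (Ry @ RX) / np.sqrt((Ry @ Ry) * (RX @ RX))

    T = np.empty(w)
    T[0] = stat(Y)                     # unpermuted data
    RY, HY = R @ Y, H @ Y
    for j in range(1, w):
        perm = rng.permutation(n)
        T[j] = stat(RY[perm] + HY)     # outcome (P_j R + H) Y
    return 2 * min((T >= T[0]).sum(), (T <= T[0]).sum()) / w
```

Inside `stat`, applying $\bm{R}$ to the transformed outcome gives $\bm{R}(\bm{P}_j\bm{R}+\bm{H})\bm{Y}=\bm{R}\bm{P}_j\bm{R}\bm{Y}$, as noted above.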
The numerator in \eqref{parcor} is $$(\bm{R}\bm{Y})'\bm{R}\bm{X}=\bm{Y}'\bm{R}'\bm{R}\bm{X}=\bm{Y}'\bm{R}'\bm{X}=(\bm{R}\bm{Y})'\bm{X},$$ so that \eqref{parcor} equals \begin{equation} \label{semipc2}
\frac{ (\bm{R}\bm{Y})'\bm{X} }{ \sqrt{ \sum_i (\bm{R}\bm{Y})_i^2 \sum_i (\bm{R}\bm{X})_i^2 }}. \end{equation} The Freedman-Lane test with $T$ defined by \eqref{semipc2} remains unchanged if in \eqref{semipc2} we replace $ \sum_i (\bm{R}\bm{X})_i^2$ by $1$ or by the constant $\sum_i \bm{X}_i^2$. Indeed, $T_1,...,T_w$ will just be multiplied by the same constant. Thus, with respect to the permutation test, the statistic \eqref{parcor} is equivalent to \begin{equation} \label{semiparcor}
\frac{ (\bm{R}\bm{Y})'\bm{X} }{ \sqrt{ \sum_i (\bm{R}\bm{Y})_i^2 \sum_i \bm{X}_i^2 }}. \end{equation} If $\bm{X}$ has been centered around $0$, then this equals \begin{equation} \label{T1OLSsemi} \rho\big( \bm{R} \bm{Y},\bm{X} \big) = \frac{ (\bm{R}\bm{Y})'(\bm{X}-\bm{\mu}_x) }{ \sqrt{ \sum_i (\bm{R}\bm{Y})_i^2 \sum_i (\bm{X}_i-\bm{\mu}_x)^2 }}, \end{equation}
where $\bm{\mu}_x$ denotes the $n$-vector with entries equal to the sample mean of $\bm{X}$. This is the sample correlation of $\bm{R}\bm{Y}$ and $\bm{X}$ and is called the \emph{semi-partial correlation}.
Thus, if $\bm{X}$ is centered, using the partial correlation is equivalent to using the semi-partial correlation.
If we take $T$ to be the semi-partial correlation, then \eqref{T1FL} and \eqref{TjFL} become $T_1= \rho\big( \bm{R} \bm{Y},\bm{X} \big)$ and \begin{equation} \label{TjOLSsemi}
\quad T_j= \rho\big( \bm{R} (\bm{P}_j\bm{R}+\bm{H})\bm{Y} ,\bm{X} \big)= \frac{ \big(\bm{R} (\bm{P}_j\bm{R}+\bm{H})\bm{Y} \big)'(\bm{X}-\bm{\mu}_x) }{ \sqrt{ \sum_i \big( \bm{R} (\bm{P}_j\bm{R}+\bm{H})\bm{Y}\big)_i^2 \sum_i (\bm{X}_i-\bm{\mu}_x)^2 }}, \end{equation} where $\bm{R} (\bm{P}_j\bm{R}+\bm{H})$ could be simplified to $\bm{R} \bm{P}_j\bm{R}$. Note that we could simply leave the constant $\sum_i (\bm{X}_i-\bm{\mu}_x)^2$ out without changing the result of the permutation test. Although for centered $\bm{X}$ the statistics \eqref{parcor} and \eqref{T1OLSsemi} are equivalent, their counterparts in the high-dimensional setting are not, as will be discussed in Section \ref{secflhd}.
\section{High-dimensional nuisance} \label{sechd}
When the nuisance parameter $\bm{\gamma}$ has dimension $q\geq n$, the existing permutation methods cannot be used. Here, these approaches are adapted to obtain tests which can account for high-dimensional nuisance. We first consider the case that $X$ is one-dimensional, i.e., $d=1$. The case that $d>1$ is discussed in Section \ref{multidimbeta}. We assume that the entries of $\bm{Y}$, $\bm{X}$ and $\bm{Z}$ have expected value $0$. Consequently, the intercept is $0$.
All existing tests rely on residualization steps, where $\bm{Y}$ or $\bm{X}$ is regressed on $\bm{Z}$. A natural way to adapt this step to the high-dimensional setting is to instead estimate the residuals using some type of elastic net regularization. We will consider ridge regression. For minimizing prediction error, ridge regression is often preferable to the lasso, principal components regression, variable subset selection and partial least squares \citep{hastie2009elements,frank1993statistical}.
Compared to the existing methods, including the Freedman-Lane approach discussed in Section \ref{fl}, using ridge regression comes down to replacing the projections $\hat{\bm{Y}}=\bm{H} \bm{Y}$ and $\hat{\bm{X}}=\bm{H} \bm{X}$ by ridge estimates $ \tilde{\bm{H}}_{\lambda}\bm{Y}$ and $ \tilde{\bm{H}}_{\lambda_X}\bm{X} $, with $\lambda, \lambda_X>0$. Here, for $\lambda'>0$, \begin{equation} \label{prorl}
\tilde{\bm{H}}_{\lambda'}=\bm{Z}(\bm{Z}'\bm{Z}+\lambda' \bm{I}_q)^{-1}\bm{Z}', \end{equation} which satisfies $$\tilde{\bm{H}}_{\lambda'}\bm{Y}= \bm{Z} \text{argmin}_{\bm{\gamma}}\Big(\Vert \bm{Y}-\bm{Z}\bm{\gamma} \Vert_2^2 +\lambda' \Vert \bm{\gamma} \Vert_2^2 \Big)$$ and similarly for $\bm{X}$. The values $\lambda, \lambda_X$ are the regularization parameters, whose selection will be discussed. Using ridge regression, the residuals become $\tilde{\bm{R}}_{\lambda} \bm{Y}$ and $\tilde{\bm{R}}_{\lambda_X} \bm{X}$, where $\tilde{\bm{R}}_{\lambda}=(\bm{I}-\tilde{\bm{H}}_{\lambda})$ and $\tilde{\bm{R}}_{\lambda_X} =(\bm{I}-\tilde{\bm{H}}_{\lambda_X})$.
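When $q\geq n$, the matrix \eqref{prorl} can be computed without inverting a $q\times q$ matrix, using the push-through identity $\bm{Z}(\bm{Z}'\bm{Z}+\lambda' \bm{I}_q)^{-1}\bm{Z}'=\bm{Z}\bm{Z}'(\bm{Z}\bm{Z}'+\lambda' \bm{I}_n)^{-1}$. A Python sketch (our own helper functions, not part of the \emph{phd} package):

```python
import numpy as np

def ridge_hat(Z, lam):
    # The matrix Z (Z'Z + lam I_q)^{-1} Z'; for q >= n the equivalent
    # n x n form Z Z' (Z Z' + lam I_n)^{-1} is cheaper to compute.
    n, q = Z.shape
    if q >= n:
        G = Z @ Z.T
        return G @ np.linalg.inv(G + lam * np.eye(n))
    return Z @ np.linalg.inv(Z.T @ Z + lam * np.eye(q)) @ Z.T

def ridge_residuals(Z, lam, v):
    # v -> (I - H_lam) v, the ridge residuals of v after regressing on Z
    return v - ridge_hat(Z, lam) @ v
```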
The last two rows of Table \ref{toverviewhd} outline the permutation schemes that we will consider in Sections \ref{secflhd} and \ref{secdres}. The first two rows summarize the Freedman-Lane method discussed in Section \ref{fl} and the Kennedy method \citep{kennedy1995randomization, kennedy1996randomization,winkler2014permutation}. This table is analogous to Table 2 in \citet{winkler2014permutation} and allows easy comparison of the new methods with the existing methods discussed in \citet{winkler2014permutation}.
Although Table \ref{toverviewhd} outlines the permutation schemes that we will use, several crucial specifics remain to be filled in. For example, several choices of the regularization parameters $\lambda$ and $\lambda_X$ can be considered. Moreover, the computational challenge of performing nuisance estimation in every step needs to be addressed. Finally and importantly, we must determine what test statistics are suitable to use within our permutation tests.
\begin{table}[h!]
\begin{center}
\caption{Permutation schemes for four different methods. The last two methods are novel and can account for high-dimensional nuisance.}
\label{toverviewhd}
\begin{tabular}{ll}
\hline
\textbf{Method} & \qquad\textbf{Model after permutation} \\
\hline
Freedman-Lane & \qquad $(\bm{P}\bm{R}+\bm{H})\bm{Y}=\bm{X}\bm{\beta}+\bm{Z}\bm{\gamma}+\bm{\epsilon}$\\
Kennedy & \qquad $\bm{P}\bm{R}\bm{Y}=\bm{R}\bm{X}\bm{\beta}+\bm{\epsilon}$\\
Freedman-Lane HD & \qquad $(\bm{P}\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}=\bm{X}\bm{\beta}+\bm{Z}\bm{\gamma}+\bm{\epsilon}$\\
Double Residualization & \qquad $(\bm{P}\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda}) \bm{Y}=\tilde{\bm{R}}_{\lambda_X}\bm{X}\bm{\beta}+\bm{\epsilon}$\\
\hline
\end{tabular}
\end{center} \end{table}
\subsection{Freedman-Lane HD} \label{secflhd} As discussed in Section \ref{fl}, the low-dimensional Freedman-Lane method is known to provide excellent type I error control and power. Here we will provide an extension to the case of high-dimensional nuisance. We will refer to this test as \emph{Freedman-Lane HD}. The permutation scheme that we use is analogous to that of Freedman-Lane and is shown in the third row of Table \ref{toverviewhd}.
As in the Freedman-Lane method, after every permutation, we will require nuisance estimation to compute $T_j$. We will choose ridge regression to do this.
Note however that when many permutations are used, performing a ridge regression after every permutation can be a large computational burden. We will therefore compute $\lambda$ only once, for the unpermuted model. We take $\lambda$ to be the value that gives the minimal mean cross-validated error; see Section \ref{secsimset} for more details.
After each permutation, we then use the same parameter $\lambda$ in the ridge regression. Thus, after the $j$-th permutation, to compute the new ridge residuals, we only need to pre-multiply the transformed outcome $(\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}$ by $\tilde{\bm{R}}_{\lambda}$, which itself needs to be computed only once. Owing to this approach, we essentially need to perform ridge regression only once.
An important consideration is the test statistic $T$ used within the permutation test. The usual \emph{F}-statistic and Wald statistic are only defined when the nuisance is low-dimensional. Extending these definitions to the high-dimensional setting with $q\geq n$ is problematic. For example, a Wald-type statistic would require an unbiased estimate of $\beta$ and a variance estimate. The partial correlation \eqref{parcor}, however, is more naturally generalized to the $q\geq n$ setting: we can replace the residuals $\bm{R}\bm{Y}$ and $\bm{R}\bm{X}$ by the ridge residuals $\tilde{\bm{R}}_{\lambda}\bm{Y}$ and $\tilde{\bm{R}}_{\lambda_X}\bm{X}$. Similarly we can generalize the semi-partial correlation \eqref{T1OLSsemi}, by replacing $\bm{R}\bm{Y}$ by $\tilde{\bm{R}}_{\lambda}\bm{Y}$. This gives the following test statistics, which generalize the partial correlation \eqref{parcor} and the semi-partial correlation \eqref{T1OLSsemi} respectively: \begin{equation} \label{parcorHD} \rho\big( \tilde{\bm{R}}_{\lambda} \bm{Y},\tilde{\bm{R}}_{\lambda_X}\bm{X} \big)= \frac{ (\tilde{\bm{R}}_{\lambda}\bm{Y}-\bm{\mu}_1)' (\tilde{\bm{R}}_{\lambda_X} \bm{X}-\bm{\mu}_2 ) }{ \sqrt{\sum_i (\tilde{\bm{R}}_{\lambda}\bm{Y}-\bm{\mu}_1)_i^2 \sum_i (\tilde{\bm{R}}_{\lambda_X}\bm{X}-\bm{\mu}_2)_i^2 } }, \end{equation} \begin{equation} \label{semiparcorHD} \rho\big( \tilde{\bm{R}}_{\lambda} \bm{Y},\bm{X} \big)= \frac{ (\tilde{\bm{R}}_{\lambda}\bm{Y}-\bm{\mu}_{1})'(\bm{X}-\bm{\mu}_{x}) }{ \sqrt{ \sum_i (\tilde{\bm{R}}_{\lambda}\bm{Y}-\bm{\mu}_{1})_i^2 \sum_i (\bm{X}-\bm{\mu}_x)_i^2 }}. \end{equation} Here, $\bm{\mu}_1$, $\bm{\mu}_2$ and $\bm{\mu}_x$ are $n$-vectors whose entries are the sample means of $\tilde{\bm{R}}_{\lambda} \bm{Y}$, $\tilde{\bm{R}}_{\lambda_X}\bm{X}$ and $\bm{X}$ respectively. \citet{zhu2018significance} also use a type of generalized partial correlation as the test statistic.
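In code, these generalized statistics are simply Pearson correlations of (residualized) vectors. The following is an illustrative numpy sketch, assuming the ridge residuals are formed as above; it is not the authors' R implementation:

```python
import numpy as np

def pearson(a, b):
    """Sample Pearson correlation of two n-vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
n, q = 30, 60
Z = rng.standard_normal((n, q))
Y = rng.standard_normal(n)
X = rng.standard_normal(n)

def ridge_resid(v, lam):
    """Ridge residuals of v after regressing v on Z."""
    return v - Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(q), Z.T @ v)

# Analogue of the generalized partial correlation: both Y and X residualized.
partial = pearson(ridge_resid(Y, 1.0), ridge_resid(X, 1.0))
# Analogue of the generalized semi-partial correlation: only Y residualized.
semi_partial = pearson(ridge_resid(Y, 1.0), X)
assert -1.0 <= partial <= 1.0 and -1.0 <= semi_partial <= 1.0
```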
In Section \ref{fl} we reasoned that if $\bm{X}$ has been centered, \eqref{parcor} and \eqref{T1OLSsemi} are equivalent with respect to the permutation test. This does not apply to \eqref{parcorHD} and \eqref{semiparcorHD}. In simulations, using the statistic \eqref{semiparcorHD} tended to result in somewhat higher power than using the statistic \eqref{parcorHD}. In Section \ref{secsims} we consider both methods.
In case the generalization of the partial correlation is used, the test statistics $T_1,...,T_w$ on which Freedman-Lane HD is based are \begin{equation} \label{eq:flT1par} T_1 = \rho\big( \tilde{\bm{R}}_{\lambda} \bm{Y},\tilde{\bm{R}}_{\lambda_X} \bm{X} \big), \end{equation} \begin{equation} \label{eq:flTjpar} T_j = \rho\big( \tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y},\tilde{\bm{R}}_{\lambda_X}\bm{X} \big) = \end{equation} $$\frac{ \big(\tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}-\bm{\mu}^j\big)' (\tilde{\bm{R}}_{\lambda_X} \bm{X}-\bm{\mu}_2 ) }{ \sqrt{\sum_i \big(\tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}-\bm{\mu}^j\big)_i^2 \sum_i (\tilde{\bm{R}}_{\lambda_X}\bm{X}-\bm{\mu}_2)_i^2 } }, $$ where $2\leq j \leq w$. Here $\bm{\mu}^j$ is an $n$-vector whose entries are the sample mean of $\tilde{\bm{R}}_{\lambda}(\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}$. For the version based on the generalization of the semi-partial correlation, the statistics are \begin{equation} \label{eq:flT1} T_1 = \rho\big( \tilde{\bm{R}}_{\lambda} \bm{Y},\bm{X} \big), \end{equation} \begin{equation} \label{eq:flTj} T_j = \rho\big( \tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y},\bm{X} \big). \end{equation} As usual, $T_1$ is just $T_j$ with $\bm{P}_j=\bm{I}_n$. The pseudo-code for the version based on semi-partial correlations is in Algorithm \ref{a:FLHD}.
If $q<n$, as $\lambda\downarrow 0$, the test converges to the test for $\lambda=0$, which is the classical Freedman-Lane method. In the wide range of simulation settings considered in Section \ref{secsims}, the Freedman-Lane HD method stayed on the conservative side, in the sense that the size was less than $\alpha$. This may be due to the fact that if $\lambda>0$ and $2\leq j<k\leq w$,
the correlation between $T_1$ and $T_j$ tended to be larger than the correlation between $T_j$ and $T_k$ in simulations. This may be related to the fact that
the correlation between $\bm{Y}$ and $\bm{Y}^{*j}$ is strictly larger than the correlation between $\bm{Y}^{*j}$ and $\bm{Y}^{*k}$, where $\bm{Y}^{*j} := (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}$. This inequality is proved in the Supplementary Material.
As discussed, to perform the test, $\lambda$ and hence $\tilde{\bm{R}}_{\lambda}$ need to be computed only once. Thus, like the low-dimensional Freedman-Lane procedure, the test requires nuisance estimation after every permutation, but this is not a large computational burden. The method is often computationally feasible even when many millions of permutations are used; see Section \ref{secsims}. It is also worth mentioning that there exist approximate methods for reducing the number of permutations while still allowing for very small, accurate \emph{p}-values \citep{knijnenburg2009fewer,winkler2016faster}.
\begin{algorithm}[h!] \caption{Freedman-Lane HD (version based on semi-partial correlations)} \begin{algorithmic}[1] \label{a:FLHD} \STATE Compute $\tilde{\bm{H}}_{\lambda}= \bm{Z}(\bm{Z}'\bm{Z}+\lambda \bm{I}_q)^{-1}\bm{Z}'$ and the residual forming matrix $\tilde{\bm{R}}_{\lambda}=\bm{I}-\tilde{\bm{H}}_{\lambda}$. Here $\lambda$ is taken to give the minimal mean cross-validated error (see main text). \STATE Let $T_1=\rho\big( \tilde{\bm{R}}_{\lambda} \bm{Y},\bm{X} \big)$, the sample Pearson correlation of the $\bm{Y}$-residuals with $\bm{X}$. \FOR{$2\leq j \leq w$} \STATE Let $T_j= \rho\big( \tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y},\bm{X} \big)$, where the random matrix $\bm{P}_j$ encodes random permutation or sign-flipping. \ENDFOR \STATE The two-sided \emph{p}-value $p$ equals \eqref{formulap2}. \RETURN $p$ \end{algorithmic} \end{algorithm}
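The procedure above can be sketched compactly in code. The following is an illustrative numpy implementation of the semi-partial version with random permutations for $\bm{P}_j$, where the penalty $\lambda$ is assumed to have been chosen beforehand (the paper selects it by cross-validation; the authors' own implementation is in R):

```python
import numpy as np

def freedman_lane_hd(Y, X, Z, lam, w=500, seed=0):
    """Freedman-Lane HD, semi-partial version (cf. Algorithm 1).
    Returns a two-sided permutation p-value for H0: beta = 0."""
    rng = np.random.default_rng(seed)
    n, q = Z.shape
    H = Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(q), Z.T)  # ridge hat matrix
    R = np.eye(n) - H                                        # computed only once

    def rho(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    RY, HY = R @ Y, H @ Y
    T = [rho(RY, X)]                          # T_1: unpermuted statistic
    for _ in range(w - 1):
        p = rng.permutation(n)                # P_j: a random permutation
        T.append(rho(R @ (RY[p] + HY), X))    # rho(R (P_j R + H) Y, X)
    T = np.abs(np.array(T))
    return float(np.mean(T >= T[0]))          # two-sided p-value

rng = np.random.default_rng(2)
n, q = 30, 60
Z = rng.standard_normal((n, q))
X = rng.standard_normal(n)
Y = Z @ np.full(q, 0.05) + rng.standard_normal(n)  # H0 true: beta = 0
p_value = freedman_lane_hd(Y, X, Z, lam=1.0)
assert 1.0 / 500 <= p_value <= 1.0
```

Note that the loop only involves a vector reshuffle and matrix-vector products; the ridge regression itself, i.e., forming $\tilde{\bm{R}}_{\lambda}$, happens once.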
\subsection{Double residualization} \label{secdres} Here we propose a test that we refer to as the \emph{Double Residualization} method. The method is somewhat related to the Kennedy procedure \citep{kennedy1995randomization, kennedy1996randomization,winkler2014permutation}, but not analogous. The Kennedy method residualizes both $\bm{Y}$ and $\bm{X}$ and proceeds to permute the $\bm{Y}$-residuals. Here we replace the least squares regression by ridge regression. Moreover, unlike Kennedy's permutation scheme, we keep $\tilde{\bm{H}}_{\lambda}\bm{Y}$ in the model; see Table \ref{toverviewhd}. The test statistic that we use within the permutation test is the sample correlation. Thus, the test is based on the statistics $$T_1= \rho \big ( \bm{Y} ,\tilde{\bm{R}}_{\lambda_X}\bm{X} \big ),$$ \begin{equation} \label{TjDR} T_j=\rho\big ( (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y} ,\tilde{\bm{R}}_{\lambda_X}\bm{X} \big ), \end{equation} where $2\leq j \leq w$. The difference between \eqref{TjDR} and \eqref{eq:flTjpar} is that \eqref{eq:flTjpar} contains an additional $\tilde{\bm{R}}_{\lambda}$. The pseudo-code for the Double Residualization method is in Algorithm \ref{a:DR}. We take $\lambda$ and $\lambda_X$ to be the values that give the minimal mean cross-validated error; see Section \ref{secsimset} for more details. For fixed $q$, as $n\rightarrow\infty$, the Double Residualization method becomes equivalent to the Kennedy method and the Freedman-Lane method if the penalty is $o_{\mathbb{P}}(n^{1/2})$, as shown in the Supplementary Material. The case that $q>n$ is investigated in Section \ref{secsims}.
\begin{algorithm}[h!] \caption{Double Residualization} \begin{algorithmic}[1] \label{a:DR} \STATE Compute $\tilde{\bm{H}}_{\lambda}= \bm{Z}(\bm{Z}'\bm{Z}+\lambda \bm{I}_q)^{-1}\bm{Z}'$ and, analogously, $\tilde{\bm{H}}_{\lambda_X}$. Here $\lambda$ and $\lambda_X$ are determined through cross-validation (see main text). Let $\tilde{\bm{R}}_{\lambda}=\bm{I}-\tilde{\bm{H}}_{\lambda}$ and $\tilde{\bm{R}}_{\lambda_X}=\bm{I}-\tilde{\bm{H}}_{\lambda_X}$. \STATE Let $T_1= \rho \big ( \bm{Y} ,\tilde{\bm{R}}_{\lambda_X}\bm{X} \big )$, the sample Pearson correlation of $\bm{Y}$ and $\tilde{\bm{R}}_{\lambda_X}\bm{X}$. \FOR{$2\leq j \leq w$} \STATE Let $T_j=\rho\big ( (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y} ,\tilde{\bm{R}}_{\lambda_X}\bm{X} \big )$, where the random matrix $\bm{P}_j$ encodes random permutation or sign-flipping. \ENDFOR \STATE The two-sided \emph{p}-value $p$ equals \eqref{formulap2}. \RETURN $p$ \end{algorithmic} \end{algorithm}
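For comparison with the previous sketch, Double Residualization differs only in that $\bm{X}$ is ridge-residualized while the extra $\tilde{\bm{R}}_{\lambda}$ in front of the permuted outcome is dropped. An illustrative numpy sketch (again with pre-chosen penalties standing in for the cross-validated values, not the authors' R code):

```python
import numpy as np

def double_residualization(Y, X, Z, lam, lam_x, w=500, seed=0):
    """Double Residualization (cf. Algorithm 2): correlate the permuted
    outcome (P_j R + H) Y with the ridge residuals of X on Z."""
    rng = np.random.default_rng(seed)
    n, q = Z.shape
    def hat(l):
        return Z @ np.linalg.solve(Z.T @ Z + l * np.eye(q), Z.T)
    H = hat(lam)
    RY, HY = Y - H @ Y, H @ Y
    RxX = X - hat(lam_x) @ X                  # residualized regressor of interest

    def rho(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    T = [rho(Y, RxX)]                         # T_1 uses the unresidualized Y
    for _ in range(w - 1):
        p = rng.permutation(n)
        T.append(rho(RY[p] + HY, RxX))        # rho((P_j R + H) Y, Rx X)
    T = np.abs(np.array(T))
    return float(np.mean(T >= T[0]))          # two-sided p-value

rng = np.random.default_rng(3)
n, q = 30, 60
Z = rng.standard_normal((n, q))
X = rng.standard_normal(n)
Y = Z @ np.full(q, 0.05) + rng.standard_normal(n)  # H0 true: beta = 0
p_value = double_residualization(Y, X, Z, lam=1.0, lam_x=1.0)
assert 1.0 / 500 <= p_value <= 1.0
```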
\subsection{Multi-dimensional parameter of interest} \label{multidimbeta} In the above we considered the case that the tested parameter $\beta$ has dimension $d=1$. Our tests can be extended to the case $d>1$ by using Pesarin's Non-Parametric Combination (NPC) approach \citep[][ch. 4]{pesarin2010permutation}. This is a general method for combining permutation tests of different hypotheses into a test for the intersection hypothesis. The NPC principle can be applied in a wide range of scenarios. In simpler settings with no nuisance, NPC has important proven properties, such as asymptotically optimal power. Here, we will explain how NPC can be applied in our setting. For convenience, we will focus on the application of NPC to our test of Algorithm \ref{a:FLHD}, i.e., Freedman-Lane HD based on the generalized semi-partial correlation. Combining NPC with our other tests can be done similarly, but can be computationally much less efficient for large $d$, as will be explained below.
Suppose $d>1$. We are interested in $H_0: \bm{\beta}=\bm{\beta}_0$, where we assume $\bm{\beta}_0=\bm{0}$ again for notational convenience. For every $1\leq l \leq d$, let $\beta_l$ be the $l$-th entry of $\bm{\beta}$. The hypothesis of interest $H_0$ is the intersection of $H^1,...,H^d$, where $H^l$ is the hypothesis that $\beta_l$ equals $0$. To test $H_0=H^1\cap...\cap H^d$, we proceed as follows. As usual, sample random matrices $\bm{P}_1,...,\bm{P}_w$ that encode permutation (or sign-flipping). For every $1\leq l \leq d$ and $1\leq j \leq w$, define
$$T_j^l = \rho\big( \tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y},\bm{X}_{\cdot l} \big),$$
where $\bm{X}_{\cdot l}$ is the $l$-th column of $\bm{X}$.
A key point here is that the same permutation matrix $\bm{P}_j$ is used to compute each of the statistics $T_j^1,...,T_j^d$.
Due to this manner of simultaneous permutation, the dependence structure of $(T_j^1,...,T_j^d)$ mimics that of $(T_1^1,...,T_1^d)$.
Indeed, if $\bm{\gamma}$ were exactly known so that we could replace $\tilde{\bm{R}}_{\lambda}\bm{Y}$ and $\tilde{\bm{H}}_{\lambda}\bm{Y}$ by
$\bm{\epsilon}$ and $\bm{Z}\bm{\gamma}$, then
$(T_j^1,...,T_j^d)$ and $(T_1^1,...,T_1^d)$ would have exactly the same dependence structure under $H_0$.
Consider a function $\Psi:\mathbb{R}^d\rightarrow \mathbb{R}$, which will be used to compute a combination statistic \citep[][ch. 4]{pesarin2010permutation}. For every $1\leq j \leq w$ define $\Psi_j= \Psi(T_j^1,...,T_j^d)$. Note that if $\tilde{\bm{R}}_{\lambda}\bm{Y}$ and $\tilde{\bm{H}}_{\lambda}\bm{Y}$ were the exact errors and expected values, then under $H_0$, $\Psi_1,...,\Psi_w$ would be identically distributed and exchangeable. The \emph{p}-value for testing $H_0$ is now computed as in \eqref{formulap} but with $T_j$ replaced by the combination statistic $\Psi_j$. The pseudo-code for this test is in Algorithm \ref{a:FLHDmulti}. Note that if $d=1$, $\Psi$ is the identity and a two-sided \emph{p}-value is computed, then this method reduces to the test of Algorithm \ref{a:FLHD}.
\begin{algorithm}[h!] \caption{ Extension of the test of Algorithm \ref{a:FLHD} to the case that $d>1$.} \begin{algorithmic}[1] \label{a:FLHDmulti} \STATE Compute $\tilde{\bm{H}}_{\lambda}= \bm{Z}(\bm{Z}'\bm{Z}+\lambda \bm{I}_q)^{-1}\bm{Z}'$ and the residual forming matrix $\tilde{\bm{R}}_{\lambda}=\bm{I}-\tilde{\bm{H}}_{\lambda}$. Here $\lambda$ is taken to give the minimal mean cross-validated error. \FOR{$1\leq l \leq d$} \STATE Let $T_1^l =\rho\big( \tilde{\bm{R}}_{\lambda} \bm{Y},\bm{X}_{\cdot l} \big)$, where $\bm{X}_{\cdot l}$ is the $l$-th column of $\bm{X}$. \ENDFOR \FOR{$2\leq j \leq w$} \STATE Consider a random $n\times n$ matrix $\bm{P}_j$ encoding random permutation or sign-flipping. \STATE Compute $\tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y}$. \FOR{$1\leq l \leq d$} \STATE Let $T_j^l= \rho\big( \tilde{\bm{R}}_{\lambda} (\bm{P}_j\tilde{\bm{R}}_{\lambda}+\tilde{\bm{H}}_{\lambda})\bm{Y},\bm{X}_{\cdot l} \big)$. \ENDFOR \ENDFOR \FOR{$1\leq j \leq w$} \STATE Compute $\Psi_j= \Psi(T_j^1,...,T_j^d)$, where $\Psi$ is the combining function. \ENDFOR
\STATE The \emph{p}-value \emph{p} equals $w^{-1}\big|\{1\leq j \leq w: \Psi_j\geq \Psi_1\}\big|$. \RETURN $p$ \end{algorithmic} \end{algorithm}
The function $\Psi$ should be chosen such that high values of $\Psi_1$ indicate evidence against $H_0$. The choice of $\Psi$ influences power. Examples of functions $\Psi$ are
$\Psi(t_1,...,t_d)=\max(|t_1|,...,|t_d|)$ and $\Psi(t_1,...,t_d)=d^{-1}\sum_{l=1}^d |t_l|$. The former choice of $\Psi$ is often used when one or a few of the coefficients $\beta_1,...,\beta_d$ are expected to be nonzero under the alternative. Otherwise, the latter choice of $\Psi$ is often used. Other examples of combining functions $\Psi$ are given in \citet[][ch. 4]{pesarin2010permutation}.
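These two combining functions are one-liners in code; the numeric values below are hypothetical statistics used only to illustrate the difference:

```python
import numpy as np

def psi_max(t):
    """Max combining function: sensitive to one or a few strong signals."""
    return float(np.max(np.abs(t)))

def psi_mean(t):
    """Mean combining function: sensitive to many weak signals."""
    return float(np.mean(np.abs(t)))

t = np.array([0.1, -0.8, 0.2])   # hypothetical statistics (T^1, T^2, T^3)
assert psi_max(t) == 0.8
assert np.isclose(psi_mean(t), (0.1 + 0.8 + 0.2) / 3)
```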
Applying NPC to the other tests of Sections \ref{secflhd} and \ref{secdres} tends to be computationally less efficient than the method of Algorithm \ref{a:FLHDmulti}. For example, applying NPC to our Double Residualization method would require ridge-regressing each of the $d$ variables of interest (corresponding to $\beta_1,...,\beta_d$) on the nuisance variables.
\section{Simulations} \label{secsims} We used simulations to gain additional insight into the performance of the new tests, as well as existing tests. The simulations were performed with \emph{R} version 3.6.0 on a server with 40 cores and 1TB RAM. In Section \ref{secsimgaus} we consider scenarios where the outcome $Y$ follows a standard Gaussian high-dimensional linear model. In Section \ref{secsimrob} we consider non-standard settings with non-normality and heteroscedasticity. We consider simulated datasets where the covariates have equal variances. It is well-known that when the data are not standardized, this can affect the accuracy of the model obtained with ridge regression \citep[][p.257]{buhlmann2014high}.
\subsection{Simulation settings and tests} \label{secsimset}
We considered the model in Section \ref{secnota}, where the variable of interest was one-dimensional, i.e., $\beta\in\mathbb{R}$.
The case $d>1$ is considered in Section \ref{secsimmulti}.
In every simulation, the covariates had mean $0$ and variance $1$. They were sampled from a multivariate normal distribution with homogeneous correlation $\rho'$, unless stated otherwise. The errors $\bm{\epsilon}$ had variance 1, unless stated otherwise. The intercept was $\gamma_1=0$, i.e., $Y$ had mean $0$. The tested hypothesis was $H_0:\beta=0$. The sample size in the reported simulations was $n=30$, unless stated otherwise. We obtained comparable results for other sample sizes. The estimated probabilities in the tables are based on $10^4$ repeated simulations, unless stated otherwise.
In the power simulations we usually took $|\beta|$ to be relatively large compared to most of the nuisance coefficients. The reason is that testing in high-dimensional models is very challenging. For example, in settings with $|\beta|=|\gamma_1|=...=|\gamma_q| > 0$ the power of all the tests considered (including the competitors) usually barely exceeds the type I error rate.
The penalty $\lambda$ was chosen to give the minimal mean error, based on 10-fold cross validation. The penalty $\lambda_X$ was chosen analogously. To compute the penalties, we used the \emph{cv.glmnet()} function in the R package \emph{glmnet}. We used $[10^{-5} ,10^{5}]$ as the range of candidate values for the penalty. The penalty obtained with \emph{cv.glmnet()} is scaled by a factor $n$, so we multiplied this penalty by $n$ to obtain $\lambda$. We included an intercept in the ridge regressions, but excluding the intercept gave very similar results.
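The penalty selection can be mimicked without \emph{glmnet} by a plain $k$-fold cross-validation over a grid of candidate penalties. The following numpy sketch is an illustrative stand-in for \emph{cv.glmnet()} (which additionally constructs its own candidate sequence and scales the penalty, hence the multiplication by $n$ mentioned above); the grid and data here are hypothetical:

```python
import numpy as np

def cv_ridge_lambda(Z, Y, grid, k=10, seed=0):
    """Return the penalty in `grid` with minimal mean k-fold CV error."""
    rng = np.random.default_rng(seed)
    n, q = Z.shape
    folds = np.array_split(rng.permutation(n), k)
    cv_err = []
    for lam in grid:
        sq = 0.0
        for f in folds:
            tr = np.setdiff1d(np.arange(n), f)        # training indices
            g = np.linalg.solve(Z[tr].T @ Z[tr] + lam * np.eye(q),
                                Z[tr].T @ Y[tr])      # ridge fit on k-1 folds
            sq += np.sum((Y[f] - Z[f] @ g) ** 2)      # held-out error
        cv_err.append(sq / n)
    return grid[int(np.argmin(cv_err))]

rng = np.random.default_rng(4)
n, q = 50, 5
Z = rng.standard_normal((n, q))
Y = Z @ np.ones(q) + 0.1 * rng.standard_normal(n)     # strong, low-noise signal
lam = cv_ridge_lambda(Z, Y, grid=[1e-3, 1e3])
assert lam == 1e-3    # the nearly unpenalized fit wins when signal dominates
```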
All tests used were two-sided. The tests corresponding to the columns of the tables in this section are the following.
``FLH1" is the Freedman-Lane HD test defined in Section \ref{secflhd}, with test statistics $T_1,...,T_w$ based on the generalized partial correlation as in \eqref{eq:flTjpar}. ``FLH2" is the same, except that $T_1,...,T_w$ are based on the generalized \emph{semi}-partial correlation as in \eqref{eq:flTj}. ``DR" is the Double Residualization method of Section \ref{secdres}. Each of these tests used $w=2 \cdot 10^4$ permutations.
``BM" is a high-dimensional test based on ridge projections, proposed in \citet{buhlmann2013statistical}. This test is based on a bias-corrected estimate $|\hat{\beta}_{\text{corr}}|$ of $|\beta|\in \mathbb{R}$ and an asymptotic upper bound of its distribution. We used the implementation in the R package \emph{hdi} \citep{dezeure2015high}.
``ZZ" is a high-dimensional test based on Lasso projections, proposed in \citet{zhang2014confidence}. This method constructs a different bias-corrected estimate $\hat{b}$ of $\beta$, which has an asymptotically known normal distribution under certain assumptions, such as sparsity. For this test we also used the \emph{hdi} package. We could not include this test in the simulations with a very high number of nuisance parameters, since it is computationally very time-consuming when $q$ is large, as also noted in \citet{dezeure2015high}. We expect the test to have good power in these settings.
``BO" is the bootstrap approach in \citet{dezeure2017high}, which is also implemented in the \emph{hdi} package. We set the number of bootstrap samples per test to 1000 and considered the robust version of the method. We used the shortcut, which avoids repeated tuning of the penalty. Still, the method was very slow, so that we used $10^3$ instead of $10^4$ repeated simulations of this method per setting. Also, we did not include the test in the simulations with very large $q$.
\subsection{Gaussian, homoscedastic outcome} \label{secsimgaus}
We first consider some settings with a moderately large number of nuisance coefficients, $q=60$. We began by simulating a setting with $\gamma_2=...=\gamma_{60}=0.05$, i.e., $\bm{\gamma}$ was dense.
We took $\rho'=0.5$. The estimated level and power of the tests described above, for different \emph{p}-value cut-offs $\alpha$, are shown in Table \ref{table:hdaspr05}. The tests rejected $H_0$ if the \emph{p}-value was smaller than $\alpha$. The level of a test should be at most $\alpha$.
Table \ref{table:hdaspr05} shows that the test ZZ by \citet{zhang2014confidence} was rather anti-conservative. Especially for small $\alpha$, its level was many times larger than $\alpha$. This is partly due to the anti-sparsity. Indeed, ZZ only has proven asymptotic properties under a sparsity assumption. The bootstrap approach BO of \citet{dezeure2017high} was much less liberal, but still seemed to be somewhat anti-conservative for small $\alpha$. Of the other tests, Freedman-Lane HD 2 (FLH2) often had the most power. The Double Residualization method had relatively low power when $\alpha$ was small, e.g. $0.001$.
\begin{table}[!h] \normalsize \begin{center} \caption{Dense setting with $\rho'=0.5$, $n=30$, $q=60$. Power is shown for $\beta= 1.5$. }
\begin{tabular}{ l l l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{5}{l}{\qquad \qquad \qquad \qquad Method} \\ \cline{3-8}
& $\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
& 0.05 \qquad \quad & .0281 & .0333 & .0219 & .0087 & .0666 & .063 \\
level & 0.01 \qquad \quad & .0042 & .0063 & .0021 & .0024 & .0311 & .023 \\
& 0.001 \qquad \quad & .0003 & .0006 & .0001 & .0005 & .0121 & .009 \\ \hline
& 0.05 \qquad \quad & .9062 & .9273 & .9616 & .8901 & .9934 & .982 \\
power & 0.01 \qquad \quad & .8373 & .8819 & .7984 & .7679& .9799 & .939 \\
& 0.001 \qquad \quad & .6716 & .7996 & .3263 & .5795 & .9441 & .857 \\ \hline
\end{tabular} \label{table:hdaspr05} \end{center} \end{table}
We also considered a setting with very high correlation $\rho'=0.9$, see Table \ref{table:hdspr09}. We took $\gamma_2=\gamma_3=1$ and $\gamma_4=...=\gamma_{60}=0$. The first four methods provided appropriate type I error control. For small cut-offs $\alpha$, the method ZZ by \citet{zhang2014confidence} was relatively powerful, but also seemed to be somewhat anti-conservative. This method seems more suitable for settings where $q$ is many times larger than $n$. Among our permutation methods, Freedman-Lane HD 2 had the best power, while incurring few type I errors. The method BM by \citet{buhlmann2013statistical} was relatively conservative.
We repeated the same simulation scenario, but with $n=15$ instead of $n=30$. The results are in Table \ref{table:hdspr09n15}. The methods ZZ of \citet{zhang2014confidence} and BO of \citet{dezeure2017high} were very anti-conservative for $\alpha=0.01$ and $\alpha=0.001$. Our methods provided appropriate type I error control.
\begin{table}[!h] \normalsize \begin{center} \caption{ Sparse setting with $\rho'=0.9$, $n=30$, $q=60$. Power is shown for $\beta= 1.5$. }
\begin{tabular}{ l l l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{5}{l}{\qquad \qquad \qquad \qquad Method} \\ \cline{3-8}
& $\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
& 0.05 \qquad \quad & .0302 & .0270 & .0348 & .0106 & .0358 & .051 \\
level & 0.01 \qquad \quad & .0050 & .0035 & .0044 & .0013 & .0104 & .012 \\
& 0.001 \qquad \quad & .0003 & .0001 & .0001 & .0000 & .0022 & .002 \\ \hline
& 0.05 \qquad \quad & .4494 & .5426 & .4804 & .3234 & .6050 & .554 \\
power & 0.01 \qquad \quad & .2283 & .3379 & .2135 & .1506 & .4154 & .346 \\
& 0.001 \qquad \quad & .0685 & .1195 & .0445 & .0501 & .2296 & .206 \\ \hline
\end{tabular} \label{table:hdspr09} \end{center} \end{table}
\begin{table}[!h] \normalsize \begin{center} \caption{Sparse setting with $\rho'=0.9$, $n=15$, $q=60$. Power is shown for $\beta= 3$. }
\begin{tabular}{ l l l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{5}{l}{\qquad \qquad \qquad \qquad Method} \\ \cline{3-8}
& $\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
& 0.05 \qquad \quad & .0268 & .0244 & .0294 & .0030 & .0392 & .050 \\
level & 0.01 \qquad \quad & .0048 & .0030 & .0028 & .0004 & .0124 & .026\\
& 0.001 \qquad \quad & .0008 & .0000 & .0000 & .0002 & .0032 & .020 \\ \hline
& 0.05 \qquad \quad & .5020 & .6034 & .5090 & .4038 & .7586 & .692 \\
power & 0.01 \qquad \quad & .2822 & .4558& .2094 & .2384 & .6248 & .552 \\
& 0.001 \qquad \quad & .0730 & .1982 & .0438 & .1244 & .4614 & .386 \\ \hline
\end{tabular} \label{table:hdspr09n15} \end{center} \end{table}
Further, we considered a simulation where there were clusters of correlated covariates. The setting was as before, except that there were three independent clusters of size 20. Each cluster had a multivariate normal distribution with all correlations equal to $0.9$. We took $\gamma_2=...=\gamma_{60}=0.05$. The results are in Table \ref{table:clusters}. As before, the tests ZZ of \citet{zhang2014confidence} and BO of \citet{dezeure2017high} had good power, but were anti-conservative.
\begin{table}[!h] \normalsize \begin{center} \caption{ Dense setting with $n=30$, $q=60$ and three clusters of dependent covariates. Power is shown for $\beta= 1.5$. }
\begin{tabular}{ l l l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{5}{l}{\qquad \qquad \qquad \qquad Method} \\ \cline{3-8}
& $\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
& 0.05 \qquad \quad & .0356& .0224& .0344 & .0130 & .0520 & .073 \\
level & 0.01 \qquad \quad & .0059 & .0025& .0048 & .0022 & .0248 & .023 \\
& 0.001 \qquad \quad & .0010 & .0002& .0002 &.0007 & .0087 & .008 \\ \hline
& 0.05 \qquad \quad & .4892 & .5706 & .5043 & .4188 & .7382 & .620 \\ \hline
power & 0.01 \qquad \quad & .2672 & .3393& .2226 & .2399 & .6199 & .454 \\
& 0.001 \qquad \quad & .0814 & .1007 & .0382 & .0977 & .4741 & .322 \\ \hline
\end{tabular} \label{table:clusters} \end{center} \end{table}
We also performed simulations with a very large number of nuisance variables ($q=1000$). We first took $\gamma_2=\gamma_3=1$, $\gamma_4=...=\gamma_{10}=0.2$, $\gamma_{11}=...=\gamma_{1000}=0$. See Table \ref{table:q1000rho05} for simulations with $\rho'=0.5$ and Table \ref{table:q1000rho09} for simulations with $\rho'=0.9$. All permutation methods provided appropriate type I error control. Double Residualization (DR) had relatively high power for large cut-offs $\alpha$, but not for small cut-offs. The method BM by \citet{buhlmann2013statistical} had relatively good power for $\rho'=0.5$ but low power for $\rho'=0.9$.
We also performed simulations where $\bm{\gamma}$ was very anti-sparse, e.g. with $\gamma_2=1$, $\gamma_3=...=\gamma_{800}=0.002$ and $\rho'=0.9$. In addition, we considered negative coefficients, varied the magnitudes of the coefficients, the errors $\bm{\epsilon}$ and the sample size, and considered more settings with multiple independent clusters of correlated covariates. In these settings too, the type I error rate was controlled.
\begin{table}[!h] \normalsize \begin{center} \caption{ Sparse setting with a large number ($q=1000$) of nuisance variables. Here $\rho'=0.5$, $n=30$. Power is shown for $\beta= 2$. }
\begin{tabular}{ l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{4}{l}{\qquad \qquad \quad Method} \\ \cline{3-6}
& $\alpha$ & FLH1 & FLH2 & DR & BM \\ \hline
& 0.05 \qquad \quad & .0068 & .0065 & .0145 & .0001 \\
level & 0.01 \qquad \quad & .0013 & .0011 & .0011 & .0000 \\
& 0.001 \qquad \quad & .0002 & .0001 & .0000 & .0000 \\ \hline
& 0.05 \qquad \quad & .5577 & .5469 & .9613 & .7820 \\
power & 0.01 \qquad \quad & .5060 & .5043 & .8007 & .6510 \\
& 0.001 \qquad \quad & .3752 & .4049 & .3463 & .4851 \\ \hline
\end{tabular} \label{table:q1000rho05} \end{center} \end{table}
\begin{table}[!h] \normalsize \begin{center} \caption{Sparse setting with a large number ($q=1000$) of nuisance variables and high correlation $\rho'=0.9$. Power is shown for $\beta= 2$. }
\begin{tabular}{ l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{4}{l}{\qquad \qquad \quad Method} \\ \cline{3-6}
& $\alpha$ & FLH1 & FLH2 & DR & BM \\ \hline
& 0.05 \qquad \quad & .0236 & .0319 & .0358 & .0006 \\
level & 0.01 \qquad \quad & .0040 & .0074 & .0057 & .0000 \\
& 0.001 \qquad \quad & .0003 & .0006 & .0001 & .0000 \\ \hline
& 0.05 \qquad \quad & .4766 & .5317 & .7127 & .2115 \\
power & 0.01 \qquad \quad & .3106 & .4254 & .4137 & .1042 \\
& 0.001 \qquad \quad & .1303 & .2500 & .1344 & .0407 \\ \hline
\end{tabular} \label{table:q1000rho09} \end{center} \end{table}
\subsection{Violations of the Gaussian model} \label{secsimrob}
Permutation tests can be robust to violations of the standard linear model, such as non-normality and heteroscedasticity \citep{winkler2014permutation,hemerik2020robust}. The power of parametric methods is often substantially decreased when the residuals have heavy tails, whereas the power of permutation tests is more robust to such deviations from normality. This is illustrated in Table \ref{table:exp3}. Here, the data distribution was the same as in the setting corresponding to Table \ref{table:hdspr09}, except that the errors $\bm{\epsilon}$ were not standard normally distributed, but had very heavy (cubed exponential) tails, scaled such that the errors had standard deviation 1. Note in Table \ref{table:exp3} that the permutation and bootstrap methods still had roughly the same power as in Table \ref{table:hdspr09}, while the power of BM and ZZ was strongly reduced.
\begin{table}[!h] \normalsize \begin{center} \caption{Same sparse setting as in Table \ref{table:hdspr09} but with very heavy-tailed errors. }
\begin{tabular}{ l l l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{5}{l}{\qquad \qquad \qquad \qquad Method} \\ \cline{3-8}
& $\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
& 0.05 \qquad \quad & .0345 & .0313 & .0336 & .0034 & .0215 & .022 \\
level & 0.01 \qquad \quad & .0059 & .0051 & .0053 & .0001 & .0043 & .004 \\
& 0.001 \qquad \quad & .0005 & .0002 & .0002 & .0000 & .0006 & .002 \\ \hline
& 0.05 \qquad \quad & .4498 & .5493 & .4593 & .2173 & .5433 & .566 \\
power & 0.01 \qquad \quad & .2295 & .3353 & .2016 & .0730 & .3173 & .390 \\
& 0.001 \qquad \quad & .0780 & .1309 & .0492 & .0151 & .1374 & .215 \\ \hline
\end{tabular} \label{table:exp3} \end{center} \end{table}
As a second type of violation of the standard linear model, we considered heteroscedasticity. We simulated errors $\epsilon_i$ that were normally distributed, but with standard deviation proportional to the absolute value of the covariate of interest, $|X_i|$. We again took $\gamma_2=\gamma_3=1$, $\gamma_4=...=\gamma_{60}=0$. We took $\rho'=0$ for illustration, since in that case the method ZZ by \citet{zhang2014confidence} turned out to be very anti-conservative under heteroscedasticity. Otherwise, the simulated data were the same as those used for Table \ref{table:hdspr09}. The results are in Table \ref{table:hete}. Note that despite the heteroscedasticity, the permutation-based tests provided appropriate type I error control. The bootstrap approach BO of \citet{dezeure2017high} seemed to be anti-conservative for small $\alpha$. The test BM from \citet{buhlmann2013statistical} had higher power than the permutation methods in this specific setting, but was anti-conservative for small $\alpha$.
In the simulations underlying Table \ref{table:hete}, we did not use sign-flipping, which is known to be robust to heteroscedasticity \citep{winkler2014permutation,hemerik2020robust}. Surprisingly, our tests nevertheless provided appropriate type I error control. We also performed these simulations with sign-flipping instead of permutation (results not shown), which further reduced the level of our tests, but also somewhat reduced the power.
\begin{table}[!h] \normalsize \begin{center} \caption{Sparse setting with heteroscedastic errors, $\rho'=0$, $n=30$, $q=60$. Power is shown for $\beta= 1.5$. }
\begin{tabular}{ l l l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{6}{l}{\qquad \qquad \qquad \qquad Method} \\ \cline{3-8}
& $\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
& 0.05 \qquad \quad & .0352 & .0354 & .0271 & .0338 & .1490 & .077 \\
level & 0.01 \qquad \quad & .0065 & .0069 & .0050 & .0109 & .0648 & .028 \\
& 0.001 \qquad \quad & .0010 & .0009 & .0008 & .0029 & .0280 & .011 \\ \hline
& 0.05 \qquad \quad & .7901 & .8060 & .7855 & .9403 & .9902 & .982 \\
power & 0.01 \qquad \quad & .6787 & .6861 & .6454 & .8534 & .9741 & .936 \\
& 0.001 \qquad \quad & .4910 & .4909 & .4498 & .6903 & .9332 & .830 \\ \hline
\end{tabular} \label{table:hete} \end{center} \end{table}
\subsection{Multi-dimensional parameter of interest} \label{secsimmulti}
We simulated the test of Section \ref{multidimbeta} for multi-dimensional $\bm{\beta}$. As the combination statistic we used $\Psi(t_1,...,t_d)=\max(t_1,...,t_d)$. The parameter of interest $\bm{\beta}$ had dimension 10 and there were 490 nuisance variables, i.e., $\dim(\bm{\gamma})=491$, since $\gamma_1$ is the intercept. The outcome $Y$ followed a Gaussian model, as in Section \ref{secsimgaus}. We considered three simulation settings. The nuisance parameters were $\gamma_2=3,\gamma_3=2,\gamma_4=1,\gamma_5=...=\gamma_{491}=0$ in the first two settings and $\gamma_2=...=\gamma_{101}=0.03,\gamma_{102}=...=\gamma_{491}=0$ in the third setting. The covariates had a multinormal distribution with homogeneous correlation $\rho'=0.5$ in the first setting and $\rho'=0.9$ in the last two settings. The results are in Table \ref{table:simmulti}. The test provided appropriate type I error control.
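The max-combination step can be sketched as follows (the statistic matrix below is a random placeholder for illustration, not output of the actual test):

```python
import numpy as np

# t[j, k]: test statistic for coordinate k of beta under the j-th permutation;
# row 0 holds the observed (identity-permutation) statistics.
rng = np.random.default_rng(1)
w, d = 1000, 10
t = rng.standard_normal((w, d))

psi = t.max(axis=1)          # Psi(t_1, ..., t_d) = max(t_1, ..., t_d), per permutation
p = np.mean(psi >= psi[0])   # one-sided permutation p-value for the combined statistic
```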
We conclude from the simulations of Section \ref{secsims} that our tests provide good type I error control and are rather robust to several types of model misspecification. The method ZZ from \citet{zhang2014confidence} was often relatively powerful, but was quite anti-conservative in several scenarios. The bootstrap approach BO of \citet{dezeure2017high} was also anti-conservative in several scenarios, but less so. The method BM from \citet{buhlmann2013statistical} tended to be relatively conservative.
\begin{table}[!h] \normalsize \begin{center} \caption{Multi-dimensional $\bm{\beta}\in\mathbb{R}^{10}$. Power is shown for $\bm{\beta}=(3,2,1,0,...,0)$. }
\begin{tabular}{ l l l l l l } \hline \\[-0.4cm]
& & \multicolumn{3}{l}{\qquad \quad Simulation setting} \\ \cline{3-5}
& $\alpha$ & Setting 1& Setting 2 & Setting 3 \\ \hline
& 0.05 \qquad \quad & .0174 & .0197 & .0330 \\
level & 0.01 \qquad \quad & .0023 & .0024 & .0055 \\
& 0.001 \qquad \quad & .0004 & .0002 & .0002 \\ \hline
& 0.05 \qquad \quad & .4443 & .5098 & .6286 \\
power & 0.01 \qquad \quad & .3740 & .4552 & .5731 \\
& 0.001 \qquad \quad & .2503 & .3788 & .4736 \\ \hline
\end{tabular} \label{table:simmulti} \end{center} \end{table}
\section{Data analysis} \label{secdata}
We analyze a dataset about riboflavin (vitamin B2) production with \emph{B. subtilis}. This dataset is called \emph{riboflavin} and is publicly available \citep{buhlmann2014high}. It contains normalized measurements of expression rates of 4088 genes from $n=71$ samples. We use these as input variables. Further, for each sample the dataset contains the logarithm of the riboflavin production rate, which is our one-dimensional outcome of interest. We (further) standardized the expression levels by subtracting the means and dividing by the standard deviations. We also shifted the outcome values to have mean zero.
For every $1\leq i \leq 4088$, we tested the hypothesis $H_i$ that the outcome was independent of the expression level of gene $i$, conditional on the other expression levels. We used the same tests as considered in the simulations. This time we used $w=2\cdot10^5$ permutations per test.
The results of the analysis are summarized in Table \ref{table:dataan}. The columns correspond to the same methods as considered in Section \ref{secsims}. For every method, the fraction of rejections is shown for different \emph{p}-value cut-offs $\alpha$. The fraction of rejections is the number of rejected hypotheses divided by 4088, the total number of hypotheses. The hypotheses that were rejected were those with \emph{p}-values smaller than or equal to the cut-off $\alpha$.
With most methods we obtain many \emph{p}-values smaller than 0.05. This is not the case for the test BM by \citet{buhlmann2013statistical}, which is known to be relatively conservative. After Bonferroni's multiple testing correction, we reject no hypotheses with any method, suggesting there is no strong signal in the data. \citet{van2014asymptotically} also obtained such a result with this dataset.
\begin{table}[!ht] \normalsize \caption{Real data analysis. For different \emph{p}-value cut-offs $\alpha$, the fraction of rejected hypotheses is shown. } \begin{center}
\begin{tabular}{ l l l l l l l } \hline \\[-0.4cm]
& \multicolumn{6}{l}{\qquad Fraction of rejected hypotheses} \\ \cline{2-7}
$\alpha$ & FLH1 & FLH2 & DR & BM & ZZ & BO \\ \hline
0.05 \qquad \quad & .0005 & .0259 & .0428 & 0 & .0135 & .0272 \\
0.01 \qquad \quad & 0 & .0071 & .0066 & 0 & .0022 & .0051 \\
0.001 \qquad \quad & 0 & .0002 & .0012 & 0 & .0007 & .0024 \\
0.0001 \qquad \quad & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
\end{tabular} \label{table:dataan} \end{center} \end{table}
\section{Discussion} We have proposed novel permutation methods for testing in linear models, where the number of nuisance variables may be much larger than the sample size. Advantages of permutation approaches include robustness to certain violations of the standard linear model and compatibility with powerful permutation-based multiple testing methods.
We have proposed two novel permutation approaches, Freedman-Lane HD and Double Residualization. Within these approaches some variations are possible, with respect to how the regularization parameters are chosen and which test statistics are used. Our methods provided excellent type I error rate control in a wide range of simulation settings. In particular we considered settings with anti-sparsity, high correlations among the covariates, clustered covariates, fat-tailedness of the outcome variable and heteroscedasticity. The simulation study was limited to settings with multivariate normal covariates. Future research may address more scenarios.
We compared our methods to the parametric tests in \citet{buhlmann2013statistical} and \citet{zhang2014confidence} and to the bootstrap approach in \citet{dezeure2017high}.
One advantage of our methods compared to those in \citet{buhlmann2013statistical} and \citet{zhang2014confidence}, is that they are defined in the case that the parameter of interest is multi-dimensional. Further, our tests tended to have higher power than the method by \citet{buhlmann2013statistical}. The test by \citet{zhang2014confidence} had relatively good power, but was rather anti-conservative in several scenarios, for example under anti-sparsity and heteroscedasticity. The bootstrap approach of \citet{dezeure2017high} was also anti-conservative in some scenarios, but less so. Our permutation tests provided appropriate type I error control in all scenarios. Moreover, our permutation tests were computationally much faster than the bootstrap method.
\setlength{\bibsep}{3pt plus 0.3ex}
\section*{Supplementary material}
We show that for fixed $q$, our Double Residualization method is asymptotically equivalent to the Kennedy method under local alternatives if the penalty is $o_{\mathbb{P}}(n^{1/2})$. That method is defined if $q<n$ and is based on the statistics $T_j^{K}=\rho\big( \bm{P}_j \bm{R}\bm{Y}, \bm{R}\bm{X} \big)$, $1\leq j \leq w$ \citep{anderson2001permutation}. Note that the Kennedy method is also asymptotically equivalent to the Freedman-Lane method \citep{anderson2001permutation}.
\begin{proposition} \label{lambdatozero} Let $\xi \in \mathbb{R}$ and suppose $\beta=\xi n^{-1/2}$. Assume $\lambda=\lambda_n=o_{\mathbb{P}}(n^{1/2})$ and $\lambda_X=\lambda_{X,n}=o_{\mathbb{P}}(n^{1/2})$. Let $G=G_n$ be the group of $n!$ permutation maps and let the $n\times n$ matrices $\bm{P}_1,...,\bm{P}_w$ encode the random permutations as usual.
Assume for convenience that $\bm{Z}$ contains a column of 1's. Assume that for $1\leq i \leq n$, $\mathbb{E}|(\bm{R}\bm{X})_i|^{3}$ and $\mathbb{E}|(\bm{R}\bm{Y})_i|^{3}$ are finite.
Consider the Double Residualization method, which rejects $H_0$ when the \emph{p}-value (2) satisfies $p\leq\alpha$, i.e., when the event
$$ \Big\{ w^{-1}|\{ 1\leq j \leq w: T_j \leq T_1 \}|\leq \alpha/2 \Big\} \cup \Big\{ w^{-1}|\{ 1\leq j \leq w: T_j \geq T_1 \}|\leq \alpha/2 \Big\}$$ occurs. This test is asymptotically equivalent to the Kennedy method, i.e., as $n\rightarrow\infty$, the difference of the rejection functions converges to 0 in probability. In particular, as $n\rightarrow \infty$, the level of our test converges to $2\lfloor w \alpha /2 \rfloor/w\leq \alpha$, where $2\lfloor w\alpha /2 \rfloor/w=\alpha$ if $\alpha$ is a multiple of $2/w$. \end{proposition}
\begin{proof} Suppose that $n>q$ and $H_0$ holds. Let $\hat{\bm{\gamma}}_r^n$ and $\hat{\bm{\gamma}}^n$ be the ridge and least squares estimates $ (\bm{Z}'\bm{Z}+\lambda \bm{I}_q)^{-1}\bm{Z}'\bm{Y}$ and $(\bm{Z}'\bm{Z})^{-1}\bm{Z}'\bm{Y}$ respectively, the latter of which exists with probability 1. By equations (2.4) and (2.7) in \citet{hoerl1970ridge}, $$ \hat{\bm{\gamma}}_r^n = \big[\bm{I}_q -\lambda_n (\bm{Z}'\bm{Z}+\lambda_n \bm{I}_q)^{-1}\big]\hat{\bm{\gamma}}^n,$$ so that $$\hat{\bm{\gamma}}_r^n - \hat{\bm{\gamma}}^n= -\lambda_n (\bm{Z}'\bm{Z}+\lambda_n \bm{I}_q)^{-1}\hat{\bm{\gamma}}^n = o_{\mathbb{P}}(n^{1/2}n^{-1})=o_{\mathbb{P}}(n^{-1/2}).$$
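This identity from \citet{hoerl1970ridge} is easy to verify numerically; the following sketch (dimensions chosen only for illustration, with $n>q$ so that the least squares estimate exists) checks it on random data:

```python
import numpy as np

# Numerical check of the Hoerl--Kennard identity
#   ridge = [I - lam (Z'Z + lam I)^{-1}] * OLS.
rng = np.random.default_rng(2)
n, q, lam = 50, 5, 3.0
Z = rng.standard_normal((n, q))
Y = rng.standard_normal(n)

I = np.eye(q)
gls = np.linalg.solve(Z.T @ Z, Z.T @ Y)                # least squares estimate
gridge = np.linalg.solve(Z.T @ Z + lam * I, Z.T @ Y)   # ridge estimate
rhs = (I - lam * np.linalg.inv(Z.T @ Z + lam * I)) @ gls
```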
Let $1\leq j \leq w$ and $T_j^{OLS}=\rho\big(( \bm{P}_j \bm{R} + \bm{H}) \bm{Y},\bm{R}\bm{X}\big)$. This equals $T_j$ if $\lambda=0.$
As $n\rightarrow\infty$, the product of the sample standard deviations of $ \bm{P}_j \tilde{\bm{R}}_{\lambda}\bm{Y} + \tilde{\bm{H}}_{\lambda}\bm{Y} $ and $\tilde{\bm{R}}_{\lambda_X}\bm{X}$ converges to a constant $c$, say. Thus \begin{align*} \sqrt{n}T_j=&\sqrt{n}n^{-1}( \bm{P}_j \tilde{\bm{R}}_{\lambda}\bm{Y}+\tilde{\bm{H}}_{\lambda}\bm{Y} -\bm{\mu}_y )'( \tilde{\bm{R}}_{\lambda_X}\bm{X}-\bm{\mu}_2 )/c + o_{\mathbb{P}}(1),\\ \sqrt{n}T_j^{OLS}=&\sqrt{n}n^{-1}( \bm{P}_j \bm{R}\bm{Y} + \bm{H}\bm{Y} -\bm{\mu}_y )' \bm{R}\bm{X}/c + o_{\mathbb{P}}(1), \end{align*} where $\bm{\mu}_y$ and $\bm{\mu}_2$ denote the $n$-vectors with entries equal to the sample means of $ \bm{Y}$ and $\tilde{\bm{R}}_{\lambda_X}\bm{X}$ respectively.
Note that the entries of $$ ( \bm{P}_j \tilde{\bm{R}}_{\lambda}\bm{Y} + \tilde{\bm{H}}_{\lambda}\bm{Y} -\bm{\mu}_y ) - ( \bm{P}_j \bm{R}\bm{Y} + \bm{H}\bm{Y} -\bm{\mu}_y ) = -\bm{P}_j\bm{Z}(\hat{\bm{\gamma}}_r^n-\hat{\bm{\gamma}}^n) + \bm{Z}(\hat{\bm{\gamma}}_r^n- \hat{\bm{\gamma}}^n) $$ are $o_{\mathbb{P}}(n^{-1/2})$ and likewise the entries of $( \tilde{\bm{R}}_{\lambda_X}\bm{X}-\bm{\mu}_2 )- \bm{R} \bm{X} $. It follows that $$\sqrt{n}T_j-\sqrt{n}T_j^{OLS} = \sqrt{n} n^{-1} o_{\mathbb{P}}(n n^{-1/2})=o_{\mathbb{P}}(1).$$
The product of the sample standard deviations of $ \bm{P}_j \bm{R}\bm{Y} $ and $\bm{R}\bm{X}$ converges to a constant $c'$, say. Note that
$$ \sqrt{n}T_j^{OLS}c = \sqrt{n} T_j^{K} c' + \sqrt{n}n^{-1}(\bm{H}\bm{Y} -\bm{\mu}_y)' \bm{R}\bm{X} +o_{\mathbb{P}}(1)=\sqrt{n} T_j^{K} c' + o_{\mathbb{P}}(1),$$
since $(\bm{H}\bm{Y})' \bm{R}\bm{X} =0$ and $\bm{\mu}_y' \bm{R}\bm{X} =0$.
Hence the two tests are asymptotically equivalent.
Under $\xi=0$, the vector $(\sqrt{n}T_1^{K},...,\sqrt{n}T_w^{K})$ is known to have an asymptotic $N(\bm{0},\bm{I}_w)$ distribution \citep{anderson2001permutation}. It follows that $\sqrt{n}T_1,...,\sqrt{n}T_w$ are asymptotically normal and i.i.d. By the basic Monte Carlo testing principle, if continuous statistics $T_1',...,T_w'$ are i.i.d. under the null hypothesis, then plugging these statistics into the \emph{p}-value formulas in Section 2.1 gives \emph{p}-values which are exact. In case the one-sided \emph{p}-value is used, this means that $\mathbb{P}(p\leq c)= c$ when $c\in(0,1)$ is a multiple of $w^{-1}$. In case the two-sided \emph{p}-value is used, then $\mathbb{P}(p\leq c)= c$ when $c\in(0,1)$ is a multiple of $2w^{-1}$. With the continuous mapping theorem \citep{van1998asymptotic} it follows that plugging $T_1$,...,$T_w$ into the \emph{p}-value formulas in Section 2.1 gives \emph{p}-values which are asymptotically exact. Thus the probabilities
$\mathbb{P}\big( w^{-1}|\{ j: T_j \leq T_1 \}|\leq \alpha/2 \big)$ and $\mathbb{P}\big( w^{-1}|\{ j: T_j \geq T_1 \}|\leq \alpha/2 \big)$ both converge to $\lfloor w\alpha /2 \rfloor/w$. \end{proof}
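In practice, the rejection event of Proposition \ref{lambdatozero} can be evaluated directly from the permutation statistics. A minimal sketch (the function name and array layout are our own choices, not part of the method's definition):

```python
import numpy as np

def rejects(T, alpha):
    """Two-sided permutation test on statistics T, where T[0] is observed.

    Rejects when either tail count, w^{-1}|{j : T_j <= T_1}| or
    w^{-1}|{j : T_j >= T_1}|, is at most alpha/2.
    """
    left = np.mean(T <= T[0])
    right = np.mean(T >= T[0])
    return bool(left <= alpha / 2 or right <= alpha / 2)
```

For instance, with $w=200$ statistics and an observed value far in the right tail, the right-hand count equals $1/200\leq 0.05/2$, so the test rejects at $\alpha=0.05$.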
In Section 3.1, we refer to the proposition below. Let $2\leq j <k \leq w$. We will write e.g. $cor(\bm{Y},\bm{Y}^{*j})$ for the \emph{true} correlation of the entries of $\bm{Y}$ and $\bm{Y}^{*j}$, i.e., the true correlation of $\bm{Y}_i$ and $\bm{Y}^{*j}_i$, which is the same for every $1\leq i \leq n$. Similarly we denote true covariances and variances using $cov$ and $var$.
\begin{proposition} \label{corlarger} Let $\lambda>0$ freely depend on the data. Assume the entries of $\tilde{\bm{H}}_{\lambda} \bm{Y}$ have expected value $0$.
Let $2\leq j <k \leq w$. Then $cor(\bm{Y}, \bm{Y}^{*j})> cor(\bm{Y}^{*j}, \bm{Y}^{*k})$. \end{proposition}
\begin{proof} Let $\bm{U}\bm{D}\bm{V}'$ be the singular value decomposition of $\bm{Z}$. Here $\bm{D}$ is an $n\times q$ pseudo-diagonal matrix. Its diagonal entries are nonzero, since $\bm{Z}$ has full rank (with probability 1). Then $\tilde{\bm{H}}_{\lambda}$ equals \begin{align*} \bm{Z}\big(&\bm{Z}'\bm{Z}+\lambda\big)^{-1}\bm{Z}'=\\
\bm{U}\bm{D}\bm{V}'\big(&\bm{V}\bm{D}'\bm{U}'\bm{U}\bm{D}\bm{V}'+\lambda\big)^{-1}\bm{V}\bm{D}'\bm{U}'=\\
\bm{U}\bm{D}\bm{V}'\big(&\bm{V}(\bm{D}'\bm{D}+\lambda)\bm{V}'\big)^{-1}\bm{V}\bm{D}'\bm{U}'. \end{align*} Using $\bm{B}^{-1}\bm{A}^{-1}=(\bm{A}\bm{B})^{-1}$ twice shows that the above equals \begin{align*}
\bm{U}\bm{D}\bm{V}'\bm{V}\big(&\bm{D}'\bm{D}+\lambda\big)^{-1}\bm{V}'\bm{V}\bm{D}'\bm{U}' = \\ \bm{U}\bm{D}\big(&\bm{D}'\bm{D}+\lambda\big)^{-1}\bm{D}'\bm{U}'. \end{align*} Hence the diagonal matrix $\bm{D}(\bm{D}'\bm{D}+\lambda)^{-1}\bm{D}'$ contains the singular values of $\tilde{\bm{H}}_{\lambda}$, i.e., the eigenvalues. Note that these lie in $(0,1)$.
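This spectral claim can be confirmed numerically. A small sketch (assuming the high-dimensional case $q>n$, where $\bm{Z}$ has full row rank almost surely; the dimensions are illustrative):

```python
import numpy as np

# The ridge hat matrix H = Z (Z'Z + lam I)^{-1} Z' has eigenvalues
# sigma_i^2 / (sigma_i^2 + lam) in (0, 1), where sigma_i are the singular
# values of Z; hence H - H^2 is positive definite.
rng = np.random.default_rng(3)
n, q, lam = 20, 40, 2.0
Z = rng.standard_normal((n, q))

H = Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(q), Z.T)
eig = np.linalg.eigvalsh(H)

sv = np.linalg.svd(Z, compute_uv=False)   # singular values sigma_i of Z
expected = sv**2 / (sv**2 + lam)          # predicted eigenvalues of H
```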
Thus $\tilde{\bm{H}}_{\lambda}^2$, which has the same eigenvectors as $\tilde{\bm{H}}_{\lambda}$, has strictly smaller sorted eigenvalues. Consequently $\tilde{\bm{H}}_{\lambda} - \tilde{\bm{H}}_{\lambda}^2$ is positive definite. Since the entries of $\bm{Y}$ and $\tilde{\bm{H}}_{\lambda}\bm{Y}$ have expected value 0, so do the entries of $\tilde{\bm{R}}_{\lambda}\bm{Y}$. We have $$cov(\tilde{\bm{R}}_{\lambda}\bm{Y},\tilde{\bm{H}}_{\lambda}\bm{Y}) = \mathbb{E}n^{-1} (\tilde{\bm{R}}_{\lambda}\bm{Y})' \tilde{\bm{H}}_{\lambda} \bm{Y}=$$ \begin{equation} \label{covRYHYg0}
\mathbb{E} n^{-1}\bm{Y}' \tilde{\bm{R}}_{\lambda} \tilde{\bm{H}}_{\lambda} \bm{Y} = \mathbb{E} n^{-1} \bm{Y}' ( \tilde{\bm{H}}_{\lambda} - \tilde{\bm{H}}_{\lambda}^2 )\bm{Y} > 0, \end{equation} since $\tilde{\bm{H}}_{\lambda} - \tilde{\bm{H}}_{\lambda}^2$ is positive definite. We then also have \begin{equation} \label{covYHY} cov(\bm{Y}, \tilde{\bm{H}}_{\lambda}\bm{Y})= cov(\tilde{\bm{R}}_{\lambda}\bm{Y},\tilde{\bm{H}}_{\lambda}\bm{Y})+ cov(\tilde{\bm{H}}_{\lambda}\bm{Y},\tilde{\bm{H}}_{\lambda}\bm{Y})> 0. \end{equation}
Note that \begin{equation} \label{covYYj} cov(\bm{Y}, \bm{Y}^{*j})= cov(\bm{Y}, \tilde{\bm{H}}_{\lambda}\bm{Y}) + cov(\bm{Y}, \bm{P}_j\tilde{\bm{R}}_{\lambda}\bm{Y})=cov(\bm{Y}, \tilde{\bm{H}}_{\lambda}\bm{Y}), \end{equation}
since $\bm{P}_j\tilde{\bm{R}}_{\lambda}\bm{Y}$ is a random permutation of $\tilde{\bm{R}}_{\lambda}\bm{Y}$. Similarly we have $$var(\bm{Y}^{*j}) = var(\tilde{\bm{H}}_{\lambda}\bm{Y}) +2 cov(\tilde{\bm{H}}_{\lambda}\bm{Y}, \bm{P}_j\tilde{\bm{R}}_{\lambda}\bm{Y}) + var( \bm{P}_j \tilde{\bm{R}}_{\lambda}\bm{Y})=$$ \begin{equation} \label{varYj}
var(\tilde{\bm{H}}_{\lambda}\bm{Y}) + 0 + var( \tilde{\bm{R}}_{\lambda}\bm{Y}) \end{equation} and \begin{equation} \label{covYkYj} cov(\bm{Y}^{*k},\bm{Y}^{*j}) = var(\tilde{\bm{H}}_{\lambda}\bm{Y}). \end{equation} By \eqref{covYYj} and \eqref{varYj}, \begin{equation} \label{corYYj} cor(\bm{Y}, \bm{Y}^{*j})= \frac{ cov(\bm{Y},\bm{Y}^{*j}) }{ \sqrt{var(\bm{Y})var(\bm{Y}^{*j})} } = \frac{ cov(\bm{Y}, \tilde{\bm{H}}_{\lambda}\bm{Y}) }{\sqrt{ var(\bm{Y}) \big( var(\tilde{\bm{H}}_{\lambda}\bm{Y})+ var( \tilde{\bm{R}}_{\lambda}\bm{Y} ) \big) }}. \end{equation} By \eqref{varYj} and \eqref{covYkYj}, $$ cor(\bm{Y}^{*k}, \bm{Y}^{*j}) = \frac{ cov(\bm{Y}^{*k},\bm{Y}^{*j}) }{ \sqrt{var(\bm{Y}^{*k})var(\bm{Y}^{*j})} } = \frac{ var(\tilde{\bm{H}}_{\lambda}\bm{Y}) }{ var(\tilde{\bm{H}}_{\lambda}\bm{Y})+ var(\tilde{\bm{R}}_{\lambda}\bm{Y}) }, $$ so that $cor(\bm{Y}^{*k}, \bm{Y}^{*j}) =C \cdot cor(\bm{Y}, \bm{Y}^{*j}) $, where $$ C= \frac{ var(\tilde{\bm{H}}_{\lambda}\bm{Y}) \sqrt{ var(\bm{Y})} }{ cov(\bm{Y}, \tilde{\bm{H}}_{\lambda}\bm{Y}) \sqrt{ var(\tilde{\bm{H}}_{\lambda}\bm{Y})+ var( \tilde{\bm{R}}_{\lambda}\bm{Y} ) } } . $$
Here, $cor(\bm{Y}, \bm{Y}^{*j})> 0$ by \eqref{covYHY} and \eqref{corYYj}.
We are done if we show that $C<1$. Let \begin{align*} a&= var(\tilde{\bm{H}}_{\lambda}\bm{Y})>0,\\ b&= var(\tilde{\bm{R}}_{\lambda}\bm{Y})>0,\\ c&= cov(\tilde{\bm{R}}_{\lambda}\bm{Y},\tilde{\bm{H}}_{\lambda}\bm{Y} ) > 0, \end{align*} where $c>0$ due to \eqref{covRYHYg0}. Note that $$var(\bm{Y})= var(\tilde{\bm{H}}_{\lambda}\bm{Y}+\tilde{\bm{R}}_{\lambda}\bm{Y}) =a+b+2c,$$ $$cov(\bm{Y}, \tilde{\bm{H}}_{\lambda}\bm{Y})= var(\tilde{\bm{H}}_{\lambda}\bm{Y}) +cov(\tilde{\bm{R}}_{\lambda}\bm{Y},\tilde{\bm{H}}_{\lambda}\bm{Y} )=a+c, $$ so that $$C=\frac{a\sqrt{a+b+2c}}{(a+c)\sqrt{a+b}}.$$ Fix $a>0$ and $b>0$. For $c\geq 0$, write $f_1(c)=a\sqrt{a+b+2c}$ and $f_2(c) = (a+c)\sqrt{a+b}$. Note that $f_1(0)= f_2(0)$ and $$f_1'(c)= a(a+b+2c)^{-1/2}<\sqrt{a}< \sqrt{a+b}=f_2'(c).$$ Thus, for $c>0$, $$C= \frac{f_1(c)}{f_2(c)}=\frac{f_1(0)+ \int_{0}^{c} f_1'(\zeta)d\zeta }{f_2(0)+ \int_{0}^{c} f_2'(\zeta)d\zeta }<1. $$ \end{proof}
Note that if we have $n>q$ and $\lambda=0$, then $cor(\bm{Y}, \bm{Y}^{*j})= cor(\bm{Y}^{*j}, \bm{Y}^{*k})$. Indeed, then $c=0$ in the above proof, so that $C=1$.
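Both facts, $C<1$ for $c>0$ and $C=1$ for $c=0$, are easily confirmed numerically (a sketch over arbitrary positive values of $a$, $b$ and $c$):

```python
import numpy as np

def C(a, b, c):
    """The ratio C = a*sqrt(a+b+2c) / ((a+c)*sqrt(a+b)) from the proof above."""
    return a * np.sqrt(a + b + 2 * c) / ((a + c) * np.sqrt(a + b))

rng = np.random.default_rng(4)
a, b, c = rng.uniform(0.1, 10.0, size=(3, 100))   # random positive triples
```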
\end{document}
\begin{document}
\doi{10.1080/09500340xxxxxxxxxxxx}
\issn{1362-3044} \issnp{0950-0340}
\markboth{J.R. Castrej\'on-Pita et al.}{J.R. Castrej\'on-Pita et al.}
\title{{\itshape Novel Designs for Penning Ion traps}}
\author{J.R. Castrej\'on-Pita, H. Ohadi, D.R. Crick, D.F.A. Winters, D.M. Segal and R.C. Thompson \newline
\thanks{
\newline\centerline{\tiny{ {\em Journal of Modern Optics} ISSN 0950-0340 print/ ISSN 1362-3044 online \textcopyright 2004 Taylor \& Francis Ltd}} \newline\centerline{\tiny{ http://www.tandf.co.uk/journals}}\newline \centerline{\tiny{DOI: 10.1080/09500340xxxxxxxxxxxx}}} The Blackett Laboratory, Imperial College, Prince Consort Road, SW7 2BZ, London, United Kingdom.} \received{$14^{th}$ February 2006}
\maketitle
\begin{abstract} We present a number of alternative designs for Penning ion traps suitable for quantum information processing (QIP) applications with atomic ions. The first trap design is a simple array of long straight wires which allows easy optical access. A prototype of this trap has been built to trap Ca$^+$ and a simple electronic detection scheme has been employed to demonstrate the operation of the trap. Another trap design consists of a conducting plate with a hole in it situated above a continuous conducting plane. The final trap design is based on an array of pad electrodes. Although this trap design lacks the open geometry of the traps described above, the pad design may prove useful in a hybrid scheme in which information processing and qubit storage take place in different types of trap. The behaviour of the pad traps is simulated numerically and techniques for moving ions rapidly between traps are discussed. Future experiments with these various designs are discussed. All of the designs lend themselves to the construction of multiple trap arrays, as required for scalable ion trap QIP. \end{abstract}
\section{Introduction}
A conventional Penning trap consists of three electrodes: a ring electrode and two endcaps. Ideally these electrodes are hyperboloids of revolution, producing a quadrupole electric potential, and traps with hyperbolic electrodes are commonplace. Trapping of positive ions is achieved by holding the endcaps at a positive potential with respect to the ring electrode. This provides one-dimensional confinement along the axis of the trap. The electrode structure is embedded in a strong axial magnetic field which provides confinement in the other two dimensions (the radial plane). This configuration has proven to be useful in mass-spectrometry measurements and fundamental studies with single ions \cite{Ghosh}. An important variant of the Penning trap employs a stack of cylindrical electrodes. Typically five electrodes are used -- a thin ring electrode, a pair of `compensation' electrodes (used to trim the potential to achieve a better approximation to a quadrupole) and a pair of much longer electrodes that act as the endcaps. The geometry of this arrangement means that it is well suited for use with superconducting solenoid magnets. This design has a slightly more `open' geometry than the hyperbolic trap since it has good access along the axis of the trap. Nonetheless, neither of these designs is really ideal for applications such as spectroscopy and QIP studies where optical access is of primary importance.
A scheme has been proposed that uses a linear array of cylindrical Penning traps for QIP using trapped electrons \cite{Tombesi}. This scheme uses microwave techniques rather than optical addressing so that optical access is not an issue. This approach has limited scalability since the array of traps is essentially one dimensional. To address this issue, further work on this scheme is now being focussed on a new planar trap design \cite{Vogel}. This design has an open geometry and is readily scalable to two dimensional arrays. A trap of this variety has recently been tested and trapped electrons were successfully detected \cite{Comm}. The QIP scheme envisaged by Stahl {\it et al.}\cite{Vogel} involves electrons in different micro-traps being coupled via superconducting wires.
Another proposal has been made for a very simple trap made using straight wires \cite{PRA}. This trap shares the optical accessibility of the Stahl {\it et al.}\cite{Vogel} design and lends itself, in a very straightforward way, to the production of arrays of traps. The basic trap consists of two perpendicular sets of three parallel straight wires (see figure~\ref{fig:6wire_elec_det}). This trap utilizes an electrostatic potential between the central and the outer wires to confine the ions axially, with a magnetic field providing confinement in the other two dimensions. Another advantage of this trap is that an analytical expression exists for the equation of motion of trapped ions. In this paper a first prototype of the wire trap is presented with experimental evidence of its operation.
Current ion trap QIP experiments focus on strings of ions in linear radiofrequency traps; however, this approach is unlikely to be scalable beyond a few tens of ions. The issue of scalability in ion trap QIP has been carefully addressed in a paper by Kielpinski \emph{et al.} \cite{Monroe}. They describe a scheme based on trapped ions held in multiple miniature rf ion traps. In this scheme two ions are loaded into a single micro-trap where gate operations may be performed. Individual ions are then shuttled into different micro-traps for storage and can be retrieved at a later time to continue with the calculation. In the last few years a number of key elements of this approach have been demonstrated \cite{shiftsplitt}.
Adapting this approach to the Penning trap involves a number of challenges; however, Penning traps may have some important advantages. Ambient magnetic field fluctuations have been shown to be the major limiting cause of decoherence in radiofrequency ion trap QIP to date. Operating a radiofrequency trap with a small additional magnetic field at which the `qubit' transition frequency is first-order B-field independent has been shown to lead to coherence times greater than 10 seconds \cite{Wineland10sec}. On the other hand, coherence times of up to 10 minutes have been demonstrated using similar techniques in a Penning trap \cite{Bollinger10min}. Another advantage of the Penning trap is that it employs only static electric and magnetic fields and so heating rates should be very low in these traps. Shuttling ions between different Penning traps along the axis of the magnetic field is straightforward \cite{Werth}, but moving ions in a two-dimensional array of traps, as required for scalability, will inevitably involve moving ions in a plane perpendicular to the magnetic field. This is complicated by the presence of the ${\bf v}\times {\bf B}$ term in the Lorentz force. In this paper we address this issue and describe a possible architecture for a Penning trap quantum information processor. We consider two alternative approaches. The first is to allow the ion to {\em drift} in the desired direction (`adiabatic' approach). Clearly, speed is a concern for QIP so we have also considered a `diabatic' approach in which an ion is `hopped' to a desired location in a single cyclotron loop by the application of a pulsed nearly linear electric field. In order to be able to switch between a linear field (for hopping) and a quadrupole potential (for trapping), a different arrangement of electrodes is required. The resulting traps are made up from arrays of pad electrodes whose voltages can be switched rapidly in order to perform the different required operations.
\section{The wire trap}
A simple trap based on two sets of three wires or rods is shown schematically in Fig. \ref{fig:6wire_elec_det}. This arrangement of wires produces minima in the electric potential. At these points, the forces between the charged particle and the various electrodes (wires) cancel out and thus charged particles can be three-dimensionally confined with the addition of a magnetic field in the axial direction. A more complete description of this trap can be found in \cite{PRA}. Briefly, the electrostatic potential produced by the six wire ion trap, in Cartesian coordinates, is
\[ \phi(x,y,z)=-\frac{\triangle V}{2 \ln(d^2/2a^2)}\left(\ln{\frac{R^2}{(x+d)^2+(z+z_0)^2}}-\ln{\frac{R^2}{x^2+(z+z_0)^2}}+\ln{\frac{R^2}{(x-d)^2+(z+z_0)^2}}\right) \] \begin{equation} -\frac{\triangle V}{2 \ln(d^2/2a^2)}\left( \ln{\frac{R^2}{(y+d)^2+(z-z_0)^2}}-\ln{\frac{R^2}{y^2+(z-z_0)^2}}+\ln{\frac{R^2}{(y-d)^2+(z-z_0)^2}}\right).\label{eq1} \end{equation} where $d$ is the distance between two adjacent wires, $2z_0$ is the distance between the two sets of wires, $a$ is the radius of the wires ($a\ll d$), $R$ is an arbitrary distance at which the potential is set to zero, $R\gg d$, and $\triangle V$ is the potential difference between the central and external wires \cite{PRA}. From this equation it is possible to find at least three minima along the axial direction ($z$): above, below and between the wires. The experimental setup presented here is focused on the study of trapped ions in the minimum at the centre of the trap (between the two wire planes); however, future work will investigate the optical detection of ions at all three trapping points.
\begin{figure}
\caption{Six wire trap and electronic detection scheme. The simulated trapped ion trajectory shown in the figure (with an expanded view in the oval inset) corresponds to Ca$^+$ at 10 meV in a trap with similar dimensions to those of the prototype. The potential difference between central and external wires is $\triangle V = -1.3$ V (at this voltage, the axial frequency of ions corresponds to the resonant frequency of the detection circuit). The magnetic field of 1 T is oriented perpendicular to both sets of wires. The simulation was performed using SIMION. }
\label{fig:6wire_elec_det}
\end{figure}
\subsection{Experimental setup} We have built a prototype of this trap and have included a simple electronic detection scheme, also shown in Fig.~\ref{fig:6wire_elec_det}. The dimensions of the trap were chosen to fit into standard DN40 CF vacuum components. The `wire' electrodes consist of two sets of three stainless-steel cylindrical rods of 1 mm diameter and approximately 35 mm long. Within a set, the centres of the wires are separated by 3 mm, and the two sets of wires are 4 mm apart in the axial direction. The rods are supported by ceramic mounts and fixed into an oxygen-free copper base which is mounted on the electrical feedthrough. The copper base has a 2 mm hole below the trap centre to permit the access of electrons from a filament placed behind it, but also to shield the trapping volume from the electric field produced by the filament and its connectors. Calcium atoms are produced by an atomic beam oven and are ionized by electron bombardment to produce Ca$^+$. The calcium oven is made of a 1 mm diameter tantalum tube spot-welded onto a 0.25 mm tantalum wire. Calcium is placed inside the tantalum tube and then both ends of the tube are closed. A hole ($\approx$ 0.2 mm diameter) in the side of the tube allows atoms to effuse into the trapping region when the oven is heated by an electrical current sent through the tantalum wire. The electron filament is made of coiled thoriated tungsten wire with a diameter of 0.25 mm. The trap has been operated at $2.5\times 10^{-8}$ mbar and with a magnetic field of 1 T. The calcium oven is driven at 1.53 A and the filament at 4.4 A. The filament is biased by $-30$ V with respect to the grounded rods to accelerate electrons into the trapping region.
For the electronic detection of ions, the trap is connected to a simple resonant circuit where the applied DC trap bias is scanned so that the axial resonance of the ions in the trap passes through the resonance of the circuit (see Fig. \ref{fig:6wire_elec_det}). The resonant frequency of the circuit is 145.5 kHz and the amplitude of the drive is 10 mV$_{p-p}$. If ions are present in the trap the quality factor of the resonance is modified as the ions absorb energy from the circuit. The response of the circuit is monitored and rectified with an rms-DC converter. The presence of ions in the trap should therefore be accompanied by a change in the resulting DC signal. Experimental results are shown in Fig. \ref{fig:6wire_elec_det_res}. The upper curve (circles) is the result when the electron filament is switched off so that the trapping volume contains no charged particles. The lower curve (triangles) is the result when the electron filament is on but the oven, producing calcium atoms, is switched off. The reduction of the signal is due to the presence of electrons in the trapping volume. Although this affects the circuit there is no particular resonance since the electrons are not trapped. The central curves (stars) show the result when both electrons and calcium atoms are present. We note a persistent hysteresis when scanning the voltage from below resonance compared to scanning from above resonance. The resonant feature for calcium ions is predicted from simulations to occur at $-1.35$ V and is observed within the range of $-1.5$ to $-1.1$ V. This initial result demonstrates that the trap operates successfully and an experiment to perform laser cooling and optical detection of the Ca$^+$ ions in this trap is currently being prepared.
\begin{figure}
\caption{Experimental results of the electronic detection scheme for Ca$^+$.}
\label{fig:6wire_elec_det_res}
\end{figure}
\subsection{The two-wire trap}
A wire trap can also be formed with just two wires. Using the same notation as in Eq.~1, the electrostatic potential of two crossed wires separated by $2z_0$ is \begin{equation} \phi = \frac{V_+}{\ln(R^2/az_0)}\left(\ln \frac{R^2}{y^2+(z-z_0)^2}+ \ln \frac{R^2}{x^2+(z+z_0)^2}\right)\label{eq2} \end{equation} where $V_+$ is the electric potential of the wires with respect to a ground at $R$. The axial potential in Eq.~\ref{eq2} presents a minimum at $z=0$ where charged particles can be confined (see Fig.~\ref{fig:2wire}). \begin{figure}
\caption{Simulation of the motion of a $^{40}$Ca$^+$ ion in a two-wire trap performed using SIMION. Ion kinetic energy 10 meV, $a=0.5$ mm, $z_0=2$ mm and $B=1$ T. The trajectory is shown enlarged in the oval inset. The potential generated by the trap is shown on the left.}
\label{fig:2wire}
\end{figure} Following the same procedure as in \cite{PRA}, an approximate quadratic function around the potential minimum ($x=0$, $y=0$, $z=0$) can be found, giving the result \begin{equation} \phi = \frac{2V_+}{\ln(R^2/az_0)}\left(2 \ln \frac{R}{z_0} - \frac{1}{z^2_0}\left(x^2+y^2-2z^2\right)\right).\label{eq3} \end{equation} If an axial magnetic field ($B$) is added, the equations of motion of an ion with mass $m$ and charge $q$ around the minimum are then \[ \ddot{x}= \omega_c\dot{y} + \frac{\omega^2_z}{2} x \, \, \, , \, \, \, \ddot{y}= -\omega_c\dot{x}+ \frac{\omega^2_z}{2} y \, \, \, , \, \, \, \ddot{z}= -\omega^2_z z \] which are the well-known equations of motion of an ion inside a Penning trap, where $\omega^2_z = 8qV_+/\left(m z_0^2 \ln (R^2/az_0)\right)$ and $\omega_c = qB/m$.
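As an illustrative numerical sketch (not taken from the text): using the simulation parameters of Fig.~\ref{fig:2wire} ($a=0.5$ mm, $z_0=2$ mm, $B=1$ T), with $V_+=4$ V and an assumed ground radius $R=40$ mm (the text does not specify $R$), the quoted expressions for $\omega_z$ and $\omega_c$ give the characteristic frequencies of $^{40}$Ca$^+$ and satisfy the usual Penning-trap radial stability condition $\omega_c^2 > 2\omega_z^2$:

```python
import math

# Sketch: characteristic frequencies of 40Ca+ in the two-wire trap, using
# omega_z^2 = 8 q V+ / (m z0^2 ln(R^2/(a z0))) and omega_c = q B / m as
# quoted in the text.  R is an ASSUMED illustrative value, not from the text.
q = 1.60218e-19        # charge of Ca+ (C)
m = 40 * 1.66054e-27   # mass of 40Ca+ (kg)
B = 1.0                # axial magnetic field (T)
V_plus = 4.0           # wire potential (V)
a = 0.5e-3             # wire radius (m)
z0 = 2e-3              # half-separation of the crossed wires (m)
R = 40e-3              # ASSUMED radius of the grounded return electrode (m)

omega_c = q * B / m    # true cyclotron frequency (rad/s)
omega_z = math.sqrt(8 * q * V_plus / (m * z0**2 * math.log(R**2 / (a * z0))))

f_c = omega_c / (2 * math.pi)   # about 384 kHz for 40Ca+ at 1 T
f_z = omega_z / (2 * math.pi)   # axial frequency; depends on the assumed R

# Radial (magnetron / modified-cyclotron) stability requires omega_c^2 > 2 omega_z^2.
stable = omega_c**2 > 2 * omega_z**2
```

With these (partly assumed) parameters the axial frequency comes out in the few-hundred-kHz range, comparable to the 145.5 kHz detection circuit used for the six-wire prototype.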
The two-wire configuration has three disadvantages compared to the six-wire trap: it does not produce trapping points above and below the trap, the harmonicity of the potential cannot be modified, and the trap depth is smaller than for the six-wire design. One advantage, however, is that this trap would be easier to scale and to construct, allowing even greater optical access to the trapped ions. In Fig.~\ref{fig:2wire} a simulation of a trapped Ca$^+$ ion is presented for the case where the electrodes of a two-wire trap are connected to $+4$ V.
\subsection{`Adiabatic' ion transfer in wire traps}
Conditions for the transport of ions can be generated using these wire traps. In ref~\cite{PRA} it is shown that a circular `ion guide' can be made using three concentric wire rings, with trapping zones along circular lines directly above and below the structure. The goal would be to trap ions at one position and then transport them to another position where they could be manipulated. One way to achieve this is to use a circular set of three wires as an ion guide in combination with a straight set of wires, as illustrated in Fig.~\ref{fig:ringtraps}. In this example, the trapping region {\em above} the six wire electrodes is used. In Fig.~\ref{fig:ringtraps}a a simulated trapped Ca$^+$ ion is observed in the crossing zone to the right of the figure, where both sets of wires are connected to a potential difference of $-10$ V. In Fig.~\ref{fig:ringtraps}b, the upper set of wires is still connected to the original potential, creating an ion guide along its path, but the lower, straight set of wires has been connected in such a way that it produces a potential gradient which pushes the ions away from the old trapping region and around the ring. When the ions reach the left-hand crossing region, the voltages on the lower set of wires can be adjusted to the same levels as in Fig.~\ref{fig:ringtraps}a, producing trapping conditions again. We term this kind of transport `adiabatic' since the ion undergoes many cyclotron loops as it moves in the electric potential.
\begin{figure}
\caption{Simulations of a trap design to transport ions. In $(a)$ the external wires are connected to $-5$ V and the central wires to $+5$ V. In $(b)$ the upper set of wires is connected as in $(a)$; the three lower wires are connected to $-3$ V, $0$ V and $+5$ V respectively. Simulations performed using SIMION for Ca$^+$ with an initial kinetic energy of 100 meV.}
\label{fig:ringtraps}
\end{figure}
\section{The two-plate trap}
We also present here a proposal for a single-endcap trap, together with computer simulations of it. This proposal is essentially a modification of a planar trap \cite{Vogel}, but with a simpler design and straightforward scalability.
The trap presented in \cite{Vogel} consists of a central disk connected to a positive voltage surrounded by a planar ring connected to a negative voltage; the entire system is surrounded by a grounded electrode and is embedded in a planar substrate. This design has a number of advantages, including the ability to modify the vertical position of the trapping zone individually for each trap in the array, a feature which is essential for the QIP scheme using trapped electrons proposed in ref~\cite{Vogel}. On the other hand, making individual connections to each trap in an array clearly increases the complexity of the structure enormously. In the geometry shown in Fig.~\ref{fig:holetrap1}b the two electrodes of the trap described above are deposited on separate planes. An array of such traps can then be generated with only two electrical connections to the entire structure (see figure~\ref{fig:holetrap1}c). Clearly, this has the disadvantage of not allowing individual control of the traps, but such an array may find uses in any scheme where an array of trapped ions simply acts as a quantum register (for example the `moving head' scheme proposed by Cirac and Zoller \cite{moving_head}).
\begin{figure}
\caption{(a) The axial potential generated by the two-plate trap shown in (b) for $z_0=5$ mm and $R-r=10$mm and with the electrodes connected to $\pm$ 5 volts as shown. The simulated trapped ion trajectory shown in (b) corresponds to a molecular ion with a mass of 100 amu and with an initial kinetic energy of 100 meV in an applied magnetic field of $B=1$ T as shown. The simulation was performed using SIMION. (c) An array of traps based on this design.}
\label{fig:holetrap1}
\end{figure}
Essentially this trap is made of two planar electrodes positioned in different planes ($z=0$ and $z=z_0$). The upper electrode is a planar ring with outer radius $R$ and inner radius $r$ in the plane $z=z_0$. The lower electrode is a disk of radius $R$ in the plane $z=0$. This trap can also be understood as a modified Penning trap with one of its endcaps removed, employing a planar ring electrode and a planar endcap. This type of configuration is able to produce an axial trapping potential above the electrodes when they are oppositely charged; confinement in the radial plane is produced by the addition of a magnetic field perpendicular to the electrode planes. To demonstrate this, an axial electrostatic potential is shown in Fig.~\ref{fig:holetrap1}a. A simulated trajectory for a trapped ion in this trap is shown in Fig.~\ref{fig:holetrap1}b. As mentioned above, this trap geometry, like the two-wire trap described earlier, exhibits great scalability with relative ease of construction, and has good optical access due to the open structure.
\section{Pad traps}
We now consider a different design of trap array that lends itself to the {\em rapid} movement of ions from one trap to another. To see how this may be achieved, first consider the motion of an ion in {\em crossed} homogeneous electric and magnetic fields. If we assume the magnetic field is in the positive $z$ direction and a homogeneous electric field is applied in the positive $y$ direction, the trajectory of an ion that starts at rest at the origin is given by the parametric equations \begin{eqnarray} x &=& -\frac{V_D}{\omega} \sin \omega t + V_D t\\ y &=& \frac{V_D}{\omega}\left(1-\cos\omega t\right) \end{eqnarray} where $V_D=E/B$ is the `drift velocity' and $\omega=qB/m$ is the cyclotron frequency. The motion is therefore a series of loops in the $xy$ plane such that the ion drifts in the $x$ direction, periodically coming instantaneously to rest. Consider a pair of traps whose axes are along the $z$-direction but whose centres are displaced in the $x$ direction. If the trapping potential could be switched off and replaced by a linear electric field as described above, it would be possible to `hop' the ion from the centre of one trap to the centre of the other trap in a single `cycloid loop'. After the `hop' the trapping potential would be re-applied.
An ion completes a single cycloid loop in a time $t=2\pi/\omega$, i.e. in a single cyclotron period. For $^{40}\rm Ca^+$ and $B = 1$ T we have $t=2.6~\mu$s. The size of the cycloid loop is determined by the magnitude of the electric field, since the displacement along the $x$ axis as a result of the cycloid loop is given by $x_l=(E/B)t$. For $^{40}\rm Ca^+$ and $B = 1$ T we have $x_l=2.6\times 10^{-6}E$. Thus an electric field of $\sim 1900$ V\,m$^{-1}$ is required for a displacement of 5 mm. If the typical trap dimension is of the same order as this displacement, a typical voltage applied to an electrode would be in the region of 20 V for a trap with a characteristic dimension of 1 cm.
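These numbers are straightforward to reproduce. The sketch below (illustrative only, using standard physical constants) recomputes the loop time and required field, and also confirms from the parametric equations above that the ion is again at rest after exactly one loop:

```python
import math

# Cycloid 'hop' arithmetic for 40Ca+ in crossed E and B fields (B = 1 T).
q = 1.60218e-19        # charge of Ca+ (C)
m = 40 * 1.66054e-27   # mass of 40Ca+ (kg)
B = 1.0                # magnetic field (T)

omega = q * B / m            # cyclotron frequency (rad/s)
T = 2 * math.pi / omega      # one cycloid loop = one cyclotron period (~2.6 us)
E = 5e-3 * B / T             # field for a 5 mm hop, from x_l = (E/B) T (~1900 V/m)
V_D = E / B                  # drift velocity (m/s)

# Parametric equations evaluated at t = T: the ion has moved by x_l along x
# and is (numerically) at rest again in both x and y.
vx = V_D * (1 - math.cos(omega * T))
vy = V_D * math.sin(omega * T)
x_l = -(V_D / omega) * math.sin(omega * T) + V_D * T

T_10T = T / 10               # hopping time at B = 10 T (~260 ns)
```

The same arithmetic scaled to a 10 T field gives the $\sim 260$ ns hopping time quoted for the superconducting-magnet case.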
\begin{figure}
\caption{Schematic layout of three pad traps. The `endcap' electrodes have holes in them to allow for the loading and extraction of ions in the $z$-direction.}
\label{fig:3pad}
\end{figure}
The traps must be operable in two distinct modes: a `normal' trapping mode, and a mode in which a nearly linear electric field can be applied across the entire structure (`hopping' mode). The key design criterion is therefore to be able to switch rapidly between these two modes. The trap array we envisage is made up of two non-conducting substrates onto which a regular pattern of pad electrodes has been deposited. We envisage connections to these electrodes being made through the back of the substrate. We consider a pattern built up from a `unit cell' that is an equilateral triangular array of three pads. A single quadrupole trap can then be realised in the following way: one substrate would hold a hexagonal array of six pads forming a `ring' with a central pad acting as an `endcap'. The other substrate would carry an identical array of pads. The two opposing layers can then be made to form a trap using 14 electrodes in total -- two endcaps and two sets of six pads making a pair of rings. By applying a positive potential to the endcaps and equal negative potentials to the elements of the rings a quadrupole potential can be generated at the centre of the structure. Other traps can then be made by repeating this pattern in the radial plane (see figure~\ref{fig:3pad}). We have modelled the trapping potential using SIMION. For our simulations we have chosen pads with a diameter of 4 mm and a spacing between the centres of the pads of 5 mm. The potential can be optimised by adjusting the `aspect ratio' of the trap (i.e. the ratio $\gamma$ of the separation of the layers to the separation of the pad centres in the $xy$ plane), such that the deviation of the potential from a pure quadrupole is minimised. Figure~\ref{fig:pot_w3}a shows the deviation of the potential from a pure quadrupole potential for the optimum value of $\gamma=0.9$. The `endcap' electrodes have 1 mm central holes to allow for the loading and extraction of ions along the $z$-direction.
For the optimum value of $\gamma$ the holes only disrupt the quadrupole potential significantly for relatively large values of $z$.
Figure~\ref{fig:pot_w3}b shows the potential in the radial plane at the midpoint between the substrates ($z=0$). For a perfect Penning trap the contours projected onto the $x,y$ plane would be circular. The deviation from circular symmetry is small over the trapping region despite the ring being comprised of six separate electrodes.
\begin{figure}
\caption{(a) The potential along the axis of the trap for $\gamma=0.9$. The potential only deviates from quadratic behaviour close to the hole in the endcap. (b) Plot of the potential in the $x,y$ plane at the midpoint between the substrates. Good circular symmetry is evident for small displacements from the centre of each trap.}
\label{fig:pot_w3}
\end{figure}
In order to `hop' an ion from one trapping site to another, a nearly linear electric field is applied to the trap array. This is done by switching the potentials on the electrodes from the values used for trapping to the ones shown in figure~\ref{fig:2padhopxy}. The figure also shows the contours of the resulting potential in the $z=0$ plane. These potentials are chosen so that a single `cycloid' loop takes a Ca$^+$ ion from one trapping zone to the other, assuming a magnetic field of 1 T. Since the equipotential surfaces are not exactly planar, the resulting trajectory does not have the exact shape of a `cycloid loop'. However, provided the ion's velocity has no component in the $y$-direction at the midpoint between the traps, symmetry dictates that the ion will end up in the centre of the second trap. This means that significant deviations from a linear field can be tolerated. In fact, it is possible to choose the potentials applied to the pads in such a way as to generate a more nearly linear electric potential than the one shown in figure~\ref{fig:2padhopxy}. However, since a genuinely linear potential is not strictly required, we have chosen a set of potentials that offer the significant advantage of providing axial ($z$) confinement throughout the `hop'. The trajectory of the ion in the $z=0$ plane is shown in figure~\ref{fig:2padhopxy}. For this trajectory the ion is assumed initially to be at rest at the middle of the rightmost trap.
\begin{figure}
\caption{Applying the potentials shown to the individual pads causes transfer of an ion from the centre of the righthand trap to the centre of the lefthand trap (the magnetic field is along the $+z$ axis i.e. out of the page). The trajectory of an ion initially at rest at the centre of the righthand trap is shown for one cyclotron period.}
\label{fig:2padhopxy}
\end{figure}
\begin{figure}
\caption{Simulations (performed using SIMION) of the axial positions of ions $z(t)$ as a function of time, for exactly one cyclotron period. The ions start at the centre of the righthand trap and have a range of initial velocities. In all cases the ions end up near the centre of the lefthand trap in the $xy$ plane. The upper panel assumes an initial kinetic energy of 10 meV. The initial direction of motion is varied over the full range of azimuth and declination in steps of 10 degrees. The lower panel shows a similar plot but with an initial kinetic energy of 0.1 meV. In the $z$-direction the ions are `focused' into the second trap, i.e. the final deviation along the $z$-axis as a result of the hop is small.}
\label{fig:focus}
\end{figure}
To demonstrate the confinement in the axial direction, figure~\ref{fig:focus} shows plots of the axial positions of ions $z(t)$ as a function of time. The ions start at the centre of the rightmost trap and have a range of initial velocities (see figure caption). Axial confinement is assured because the chosen applied potentials generate a three-dimensional electric potential that has a minimum at $z=0$ all the way along the ion's trajectory in the $x,y$ plane. If we define $s$ as the displacement along this trajectory, we can show this by plotting the electric potential along the $z$ axis as a function of $s$. This is shown in figure~\ref{fig:zpot}. The potential is dominated by the steep slopes, first negative and then positive, encountered along the trajectory $s$. This describes the ion first accelerating predominantly in the $-y$ direction due to the near-linear electric field, but eventually turning around and climbing back up the electric potential as the ${\bf v}\times{\bf B}$ term becomes significant. On the other hand, a close examination of the figure reveals that the potential in the $z$ direction always has a minimum at the position of the ion. The depth of this potential varies along the ion's trajectory, so that the $z$-motion is not harmonic during the `hop', but the curvature of the potential ensures axial confinement. Since the ions at all times find themselves in a potential that has a minimum in the $z$ direction, they are `focused' into the lefthand trap.
\begin{figure}
\caption{The potential in the $z$-direction evaluated at a range of equidistant points along the ion's trajectory.}
\label{fig:zpot}
\end{figure}
Finally we consider how such an array of `pad' traps could be used for QIP. We envisage a structure in which a stack of `conventional' cylindrical Penning traps could be positioned above one of the pad traps. Such a linear array of cylindrical traps along the magnetic field direction could be used for gate operations between pairs of ions or multiple ions. Ions could then be ejected from the cylindrical stack into the pad trap directly below it. Ions in the pad trap array could then be shifted sideways, allowing a different ion to be presented to the linear stack of traps. We intend to build and test a prototype set of pad traps with the dimensions described above. It may be prudent to use closely spaced hexagonal pads rather than circular ones to minimise the exposed area of non-conducting substrate, which may be prone to charging up. Ultimately one could use microfabrication techniques to make much smaller pad traps. The size is only limited by the amount of radial confinement that can be achieved, which depends on the strength of the magnetic field. Furthermore, the geometry of these traps would allow the ions to be axialised \cite{axialisation}, further improving the localisation in the radial plane. Using a 10 T superconducting magnet the `hopping' time for Ca$^+$ ions would be 260 ns. Large arrays of pads could in principle be fabricated, in which case the pads could effectively act as `pixels' allowing almost arbitrary electric fields to be generated. Finally we note that a third option for moving ions in such an array of Penning traps is possible. If rather larger electric fields were applied, an ion might be shifted {\em in the direction of the applied electric field} so rapidly that the ${\bf v} \times {\bf B}$ component of the Lorentz force has a negligible bending effect on the trajectories. Small residual sideways shifts of the ion could be compensated by appropriate choice of the direction of the nearly linear electric field applied.
Whilst shifts could then in principle be achieved very rapidly, such an approach would require very fine control of the switched potentials, since an ion would need to be first accelerated and finally decelerated as it approached its destination. One of the great advantages of the `hopping' technique considered above is that the ion automatically comes to rest in its new position, and so one can expect rather low heating effects as a result of the shifting of the ion.
\section{Conclusions}
We have presented an experimental setup and experimental evidence for trapped ions inside a simple Penning trap made using a simple array of rods. We have also discussed a variety of other novel Penning traps all of which lend themselves to miniaturisation. We have discussed various strategies for moving ions around in arrays of miniature Penning traps and have set out some of their advantages for applications in QIP. In particular we have described a hybrid quantum information processor based upon a stack of conventional cylindrical Penning traps, used for gate operations, and a two-dimensional array of `pad' traps, in which stored ions act as a quantum register.
\section{ACKNOWLEDGMENTS}
This project was supported by the European Commission within the FP5 RTD programmes HITRAP and QGATES and the Integrated Project FET/QIPC ``SCALA" FP6. We also acknowledge support from the EPSRC. JRCP acknowledges support from CONACyT, SEP and the ORS Awards.
\label{lastpage}
\end{document} |
\begin{document}
\title{Comparison between Second Variation of Area and \\ Second Variation of Energy of a Minimal Surface}
\author{Norio Ejiri \\ Department of Mathematics, Faculty of Science and Technology, Meijo University \\ 1-501 Shiogamaguchi, Tempaku-ku, Nagoya-shi, Aichi 468-8502 Japan \\ {\sffamily \small ejiri@ccmfs.meijo-u.ac.jp} \and Mario Micallef \\ Mathematics Institute, University of Warwick, Coventry, CV4 7AL, U.K. \\ {\sffamily \small M.J.Micallef@warwick.ac.uk}}
\maketitle
\section{Statement and discussion of the results}
The conformal parameterisation of a minimal surface is harmonic and therefore, it is natural to compare the Morse index $i_A$ of a minimal surface as a critical point of the area functional $A$ with its Morse index $i_E$ as a critical point of the energy functional $E$.\footnote{Recall that the index of a functional at a critical point $F$ is the number, counted with multiplicity, of negative eigenvalues of the hessian (i.~e. Jacobi operator) of the functional at the critical point. Equivalently, the index is the dimension of a maximal subspace of the space of infinitesimal variations of $F$ on which the hessian is negative definite.} Indeed, one way by which minimal surfaces are produced is by first finding a map which is harmonic with respect to a fixed conformal structure on the surface and then varying the conformal structure until we find a harmonic map whose energy is critical with respect to variations of the conformal structure. This procedure is well known and has been used very successfully by Douglas \cite{D}, Courant \cite{C}, Schoen and Yau \cite{ScY}, Sacks and Uhlenbeck \cite{SaU}, Tomi and Tromba \cite{TT} and others.
We now state a theorem relating $i_A$ to $i_E$:
\begin{thm}\label{index_closed} Let $F \colon \Sigma_g \to M$ be a (possibly branched) minimal immersion of a closed Riemann surface of genus $g$ into a Riemannian manifold $M$. Then \begin{equation} \label{comp_ineq} i_E \leqslant i_A \leqslant i_E + r \end{equation} where, if $b$ denotes the total number of branch points of $F$ counted with multiplicity, then
\[ r = \begin{cases}6g-6-2b & \text{if }b \leqslant 2g - 3, \\ 4g - 2 + 2 \left[\frac{-b}{2}\right] & \text{if $2g - 2 \leqslant b \leqslant 4g - 4$ and $[x]$ denotes the} \\ & \text {largest integer less than or equal to $x$,} \\ 0 & \text{if $b \geqslant 4g-3$.} \end{cases} \] \end{thm}
Note: if $g = 0$ then $r = 0$; if $g = 1$, then $r = 2$ if $b = 0$ and $r = 0$ if $b > 0$.\\
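The case analysis for $r$ is easy to mechanise. The snippet below defines a hypothetical helper \texttt{r\_bound} (our name, not from the paper) encoding the three cases of Theorem \ref{index_closed}, and reproduces the values quoted in the note above:

```python
import math

def r_bound(g, b):
    """The quantity r of the theorem: g = genus, b = total branching order."""
    if b <= 2 * g - 3:
        return 6 * g - 6 - 2 * b
    if 2 * g - 2 <= b <= 4 * g - 4:
        # [x] denotes the largest integer <= x, i.e. the floor
        return 4 * g - 2 + 2 * math.floor(-b / 2)
    return 0  # b >= 4g - 3

# Values quoted in the note: g = 0 always gives r = 0; g = 1 gives r = 2
# when b = 0 and r = 0 when b > 0.  For b = 0 and g >= 2 the bound is
# 6g - 6, the real dimension of Teichmueller space.
```

In particular \texttt{r\_bound(g, 0)} returns $6g-6$ for every $g \geqslant 2$, consistent with Remark (1) below.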
\begin{rems} \mbox{}
\begin{enumerate}[(1)] \item $r \leqslant 6g - 6 = \text{real dimension of Teichm\"{u}ller space.}$ This is not surprising in light of the first paragraph of this paper; see also \S2. \item If $g=0$, then any harmonic map is conformal (and therefore also minimal) and $i_A = i_E$. This is due to the fact that the two-sphere carries a unique conformal structure and it was proved by the second author in \cite{M2}, Lemma 3.2. This paper arose out of that work. \item We can also compare the nullity of $F$ as a critical point of the area functional with the nullity of $F$ as a critical point of the energy functional. See Theorem \ref{nul_closed} in \S3 for a precise statement. \item Moore has also recently studied the relation between the second variations of area and energy in \S5 of \cite{Mo}. However, his line of investigation is different from ours. In particular, he does not compare the indices of the two functionals. \end{enumerate} \end{rems} Theorem \ref{index_closed} enables us to obtain an upper bound on the index of a minimal surface in an arbitrary Riemannian manifold which depends on the area and genus of the surface, and the dimension and geometry of the ambient manifold; see Theorem \ref{gen}. The bound does not depend on the second fundamental form of the minimal surface.
We now consider the index of complete minimal surfaces in $\mathbb{R}^d$.\footnote{We shall always mean the area index when referring to the index of a minimal surface in $\mathbb{R}^d$.} Let $(\Omega_i)_{i \in \mathbb{N}}$ be an exhaustion of a complete minimal surface $\Sigma$ by an increasing sequence of compact subsets. The index of $\Sigma$ is defined as $\sup_{i} \indx(\Omega_i)$. In her pioneering work \cite{FC}, Fischer-Colbrie showed that a complete, oriented minimal surface $\Sigma$ of finite total curvature in $\mathbb{R}^3$ has finite index; see also \cite{G}. The proof works equally well in $\mathbb{R}^d$, $d > 3$ (see, for instance, \cite{N1}). However, no bound was given on the index in terms of the total curvature. For $d=3$, this was first carried out by Tysk in \cite{T} and then improved by Nayatani in \cite{N3}, Theorem 4. The case $d > 3$ was treated by the first author in \cite{E1}, and also by Cheng and Tysk in \cite{CT}. Among other results, they proved that there exists a constant $c_d$, depending only on the dimension $d$ of the ambient Euclidean space, such that \[ \indx(\Sigma) \leqslant c_d \int_{\Sigma} (-K)\,dA \] where $K$ is the Gauss curvature of $\Sigma$ and $dA$ is the element of area on $\Sigma$.\footnote{In \cite{GY} Grigor'yan and Yau have proved an estimate of this type even for a minimal surface in $\mathbb{R}^3$ with boundary.} Unfortunately $c_d \to \infty$ as $d \to \infty$.
The method used in the proof of Theorem \ref{index_closed} can also be used to establish the following:
\begin{thm}\label{index_totcurv} Let $\Sigma$ be a complete, oriented, non-planar minimal surface with finitely many branch points and of finite total curvature in $\mathbb{R}^d$. Then
\begin{equation}\label{1.1} \indx(\Sigma) \leqslant \frac{1}{\pi}\int_{\Sigma}(-K)\,dA + 2g - 2 \leqslant \frac{3}{2\pi}\int_{\Sigma}(-K)\,dA - r + b, \end{equation} where $g$ = genus of $\Sigma$, $r$ = number of ends of $\Sigma$ and $b$ = total number of branch points counted with multiplicity. When $d = 3$ the above inequality may be improved to
\begin{equation}\label{1.2} \indx(\Sigma) \leqslant \frac{1}{\pi}\int_{\Sigma}(-K)\,dA + 2g - 3 \leqslant \frac{3}{2\pi}\int_{\Sigma}(-K)\,dA - r + b -1. \end{equation}
\end{thm}
\begin{rem} We can also make statements about $\nul(\Sigma)$; see Theorem \ref{nul_totcurv} in \S3 for a precise statement. It suffices to state here that we will show that, when $d=3$, \[ \indx(\Sigma) + \nul(\Sigma) \leqslant \frac{1}{\pi}\int_{\Sigma}(-K)\,dA + 2g. \] This estimate is similar to, but worse than, the one proved by Nayatani in \cite{N3}, Theorem 4: \begin{equation}\label{1.2nay} \text{If }\int_{\Sigma}(-K)\,dA \geqslant 8 \pi, \text{ then } \indx(\Sigma) + \nul(\Sigma) \leqslant \frac{3}{4 \pi}\int_{\Sigma}(-K)\,dA + 3g. \end{equation} So, perhaps the most striking feature of \eqref{1.1} is that it is valid for \emph{all} $d \geqslant 3$. The translational Jacobi fields show that, for a non-planar minimal surface in $\mathbb{R}^3$, $\nul(\Sigma) \geqslant 3$. In particular, \begin{equation}\label{1.2nayindx} \text{If }\int_{\Sigma}(-K)\,dA \geqslant 8 \pi, \ \text{and $d=3$, then } \indx(\Sigma) \leqslant \frac{3}{4 \pi}\int_{\Sigma}(-K)\,dA + 3g - 3. \end{equation} \end{rem}
Fischer-Colbrie showed in \cite{FC} that a complete, oriented minimal surface in $\mathbb{R}^3$ of finite index has finite total curvature; see also \cite{GL}. This does not hold for minimal surfaces in $\mathbb{R}^d$, $d \geqslant 4$ because any holomorphic curve (in particular, one with infinite total curvature) in $\mathbb{C}^2 = \mathbb{R}^4$ is area-minimizing on compact subsets and therefore has index zero. (A partial converse to this fact was proved in \cite{M1}.) The work of Fischer-Colbrie, of course, raises the question of obtaining lower bounds on the index of a minimal surface in $\mathbb{R}^3$ in terms of the total curvature. The first result in this direction was obtained by Fischer-Colbrie and Schoen \cite{FCS} (see also \cite{dCP} and \cite{Pv}) and states that a complete, stable (i.~e. index zero) oriented, minimal surface in $\mathbb{R}^3$ is a plane. Since then, several authors have obtained lower bounds, some of which we shall compare in the following remarks to the upper bounds furnished by \eqref{1.2} and \eqref{1.2nayindx}. We refer to \S 7 of \cite{HK} for a fuller discussion of the index of minimal surfaces of finite total curvature in $\mathbb{R}^3$.
\begin{rems}\mbox{}
\begin{enumerate}[(1)] \item If the total curvature of $\Sigma$ in $\mathbb{R}^3$ is $-4\pi$, then the Gauss map is 1-1 and therefore $\indx(\Sigma) = 1$. It will also follow from Theorem \ref{nul_totcurv} in \S3 that the nullity in this case is precisely equal to 3. The only complete immersed minimal surfaces with total curvature equal to $-4 \pi$ are the catenoid and Enneper's surface; see \cite{O}. However, Rosenberg and Toubiana have constructed several examples in \cite{RT} of branched minimal surfaces in $\mathbb{R}^3$ of total curvature $-4\pi$. When $\int_{\Sigma}(-K)\,dA = 4\pi$, the genus of $\Sigma$ is 0 (because the Gauss map is 1-1) and therefore \eqref{1.2} gives $\indx(\Sigma) \leqslant 1$. This shows that \eqref{1.2} is sharp in this sense. We also note that, conversely, L\'{o}pez and Ros proved in \cite{LR} that the only complete immersed minimal surfaces in $\mathbb{R}^3$ of index 1 are the catenoid and Enneper's surface; see also \cite{MR}. \item If $\Sigma_k$ is the Jorge-Meeks $k$-noid of genus zero and $k$ ends, then the estimate in \eqref{1.2} can be improved to $\indx(\Sigma_k) + \nul(\Sigma_k) \leqslant 2k$; see \S4. But in \cite{Ch} it is shown that $\indx(\Sigma_k) \geqslant 2k-3$. Therefore, $\indx(\Sigma_k)=2k-3$ and $\nul(\Sigma_k)=3$. (Nayatani obtained this result by direct calculation in \cite{N2}; see also \cite{MR}.) Once again the methods of this paper yield sharp results. However, the remarks below indicate that this is not so in general. \item If the total curvature of $\Sigma$ in $\mathbb{R}^3$ is $-8 \pi$ and $\Sigma$ has genus zero, then \eqref{1.2} and the lower bound in \cite{Ch} yield $5 \geqslant \indx(\Sigma) \geqslant 3$, whereas according to \eqref{1.2nayindx}, $\indx(\Sigma) \leqslant 3$; therefore $\indx(\Sigma) = 3$. This has been proved by the first author and Kotani in \cite{EK}, Corollary 4.3.
The Chen-Gackstatter surface has total curvature equal to $-8 \pi$ and genus 1. Montiel and Ros showed that the index of this surface is equal to 3 in \cite{MR}, Corollary 15. On the other hand, according to \eqref{1.2nayindx} the index is at most 6 and, according to \eqref{1.2} the index is at most 7. This lack of sharpness of \eqref{1.2} is not surprising as it does not take into account any special geometric properties of the minimal surface whereas Corollary 15 in \cite{MR} exploits the fact that the branching values of the Gauss map of the Chen-Gackstatter surface all lie on an equator. \item The first author and Kotani \cite{EK} and Montiel and Ros \cite{MR} independently proved that the index of a complete minimal surface in $\mathbb{R}^3$ of genus zero is at most $\frac{1}{2 \pi}\int_{\Sigma}(-K)\,dA - 1$ and that generically the index is equal to this number. This shows that \eqref{1.2nayindx} (and therefore also \eqref{1.2}) is not sharp when the total curvature is less than or equal to $-12 \pi$. \item We leave the reader to check that \eqref{1.2} and \eqref{1.2nayindx} are also not sharp for Bryant's surface and the Hoffman-Meeks surfaces of genus $g$ and three ends. \end{enumerate} \end{rems}
This article is organised as follows. In \S \ref{icd} we show that the second variation of area for a given normal deformation $s$ is less than or equal to the second variation of energy for a deformation $v$ whose normal component is $s$. Formula \eqref{comp} shows that the difference between the two second variations vanishes precisely when $v$ is an infinitesimal conformal deformation, which is defined by \eqref{2.5}. The proofs of the main theorems, based on \eqref{comp} and an application of the Riemann-Roch theorem, are given in \S \ref{main_proofs}. In the final section we prove the upper bound on the index of a minimal surface in an arbitrary Riemannian manifold mentioned above. We also obtain a smaller upper bound than that given by Theorem \ref{index_totcurv} for the index of a minimal surface of finite total curvature in $\mathbb{R}^3$ which has appropriate symmetry; see Theorem \ref{strong_sym}.
\section{Infinitesimal conformal deformations and motivation for the proof of Theorem \ref{index_closed}}\label{icd}
Given a map $F \colon \Sigma \to M$ from a Riemann surface into a Riemannian manifold, let $z=x+iy$ be a local complex co-ordinate on $\Sigma$. Then, \begin{equation}\label{2.1} \text{the energy integrand, $e(F) :=
\frac{1}{2}\left\{|F_x|^2 + |F_y|^2 \right\}$,} \end{equation} where $F_x = F_*(\partial_x)$, $F_y = F_*(\partial_y)$,
and the norm $|.|$ is taken with respect to the Riemannian metric $\langle\cdot,\cdot\rangle$ on $TM$. The \begin{equation}
\text{area integrand, $g(F) := \left\{|F_x|^2|F_y|^2 - \langle F_x,F_y\rangle^2\right\}^{1/2}$.} \end{equation} Therefore \begin{equation}\label{2.3} e(F) \geqslant g(F) \text{ with equality if, and only if, $F$ is conformal.} \end{equation}
We therefore see that a variation which decreases the energy $E := \int_{\Sigma} e(F)\,dxdy$ of a conformal harmonic map $F$ must also decrease the area $A := \int_{\Sigma} g(F)\,dxdy$ of the map, and therefore $i_A \geqslant i_E$. Conversely, a variation which decreases the area of a conformal harmonic map would also decrease the energy if we could reparameterise the variation so as to keep it conformal with respect to the initial conformal structure. Of course, the obstruction to doing this comes from Teichm\"{u}ller space.
We now make the above reasoning more formal. Let $\nu$ denote the normal bundle of $\Sigma$ and let $s \in \Gamma(\nu)$\footnote{$\Gamma$ shall always denote the space of smooth sections of a bundle.} be such that the second variation of area in the direction of $s$, $(\delta^2 A)(s)$, is negative. Let $\xi$ denote the ramified tangent bundle of $\Sigma$, i.~e. $\xi$ is the tangent bundle of $\Sigma$ twisted at the branch points by an amount equal to the order of branching so that $F^*(TM) = \xi \oplus \nu$. We wish to find $\sigma_s \in \Gamma(\xi)$ such that
\begin{enumerate}[(1)] \item the map $s \mapsto \sigma_s$ is linear, \item the family of maps corresponding to the variation vector field $s + \sigma_s$ is a family of conformal maps. \end{enumerate} If we succeed, then $(\delta^2 E)(s + \sigma_s) = (\delta^2 A)(s) < 0$. Of course, $\delta^2E$ is the hessian (or second variation) of the energy functional $E$.
We will now derive a differential equation for $\sigma_s$ that will guarantee property $(2)$ at the infinitesimal level. Let $I = (-\varepsilon, \varepsilon) \subset \mathbb{R}$, $\varepsilon > 0$ and let $V \colon I \times \Sigma \to M$ be such that $V(0,\cdot) = F(\cdot)$ and $(V_*(0,\cdot))(\partial_t) = s(\cdot)$. We now want $\varphi \colon I \times \Sigma \to \Sigma$ such that $\varphi(0,\cdot) = \mathrm{identity}$ and $\tilde{V}(t,\cdot) := V(t,\varphi(t,\cdot))$ is conformal with respect to the given fixed conformal structure on $\Sigma$ for all $t \in I$. The conformality of $\tilde{V}(t,\cdot)$ can be expressed by \begin{equation}\label{2.4} \langle\tilde{V}_*(\partial_z),\tilde{V}_*(\partial_z)\rangle = 0 \end{equation} where $z$ is a local complex co-ordinate on $\Sigma$ and $\langle\cdot , \cdot\rangle$ denotes the Riemannian metric on $TM$ extended \emph{complex bilinearly} to $T_{\mathbb{C}}M := TM \otimes_{\mathbb{R}} \mathbb{C}$. Differentiating \eqref{2.4} with respect to $t$ and setting $t=0$ gives: \begin{equation}\label{2.5} \langle\nabla_{\partial_z}(s + \sigma_s),F_z \rangle = 0 \end{equation} where $\sigma_s = F_*((\varphi_*(0,\cdot))\partial_t) \in \Gamma(\xi)$ and $\nabla$ is the Levi-Civita connection on $M$ pulled back to $F^*(TM)$ and extended complex linearly to $F^*(T_{\mathbb{C}} M)$. For obvious reasons a vector field $v \in \Gamma(F^*(TM))$ which satisfies $\langle\nabla_{\partial_z}v,F_z \rangle =0$ is called an infinitesimal conformal deformation.
We now recall that the complex structure on $\Sigma$ gives rise to the splitting $\xi_{\mathbb{C}} := \xi \otimes_{\mathbb{R}} \mathbb{C} = \xi^{1,0} \oplus \xi^{0,1}$ where the fibre of $\xi^{1,0}$ ($\xi^{0,1}$) is locally spanned by $F_z$ ($F_{\bar{z}}$) away from the branch points. (The holomorphicity of $F_z$ is required to explicitly trivialize $\xi^{1,0}$ on a neighbourhood of a branch point.) Therefore we may write $\sigma_s = \sigma_s^{1,0} + \sigma_s^{0,1}$. Next observe that \begin{equation}\label{2.6} \langle\nabla_{\partial_z}\sigma_s^{1,0},F_z\rangle = 0 \text{ and } \langle\nabla_{\partial_z}\sigma_s^{0,1},F_{\bar{z}}\rangle = 0 \text{ by conformality of $F$.} \end{equation} Moreover \begin{equation}\label{2.7} \ip{\nabla_{\partial_z}s}{F_{\bar{z}}} = -\ip{s}{\nabla_{\partial_z}F_{\bar{z}}} = 0 \text{ by harmonicity of $F$}. \end{equation} Using \eqref{2.6} and \eqref{2.7} one sees that \eqref{2.5} may be re-written as \begin{equation}\label{2.8} (\nabla_{\partial_z}\sigma_s^{0,1})^{\top} = -(\nabla_{\partial_z}s)^{\top} \end{equation} where the superscript $\top$ denotes orthogonal projection onto $\xi_{\mathbb{C}}$. Of course, the global form of \eqref{2.8} is \begin{equation}\label{2.9} D'\sigma_s^{0,1} = - (\nabla's)^{\top} \end{equation} where $\nabla' = dz \otimes \nabla_{\partial_z}$, $D' = dz \otimes D_{\partial_{z}}$ and $D$ is the connection on $\xi_\mathbb{C}$ induced by $\nabla$ and orthogonal projection onto $\xi_{\mathbb{C}}$. \eqref{2.9} is the differential equation that $\sigma_s$ has to satisfy in order for $v = s + \sigma_s$ to be an infinitesimal conformal deformation. Theorem \ref{ae_ineq} below essentially asserts the converse.
\begin{thm}\label{ae_ineq} Let $F \colon \Sigma \to M$ be a (possibly branched) minimal immersion of a closed two-real dimensional oriented surface into a Riemannian manifold. For any $s \in \Gamma(\nu)$ and $\sigma \in \Gamma(\xi)$ we have \begin{equation}\label{3.1} (\delta^2 A)(s) \leqslant (\delta^2 E)(s + \sigma) \text{ with equality if and only if $\sigma$ satisfies \eqref{2.9}}. \end{equation} \end{thm}
In \eqref{3.1}, $F$ is, of course, being regarded as a harmonic map which is conformal (away from the branch points) with respect to the conformal structure on $\Sigma$ induced by $F$. A more precise relationship between $\delta^2 E$ and $\delta^2 A$ is given by \eqref{comp} below.
\begin{proof} One could try to prove this theorem by reversing the argument that led to the derivation of \eqref{2.9} but we prefer to give a more formal proof that works unchanged also in the case of complete minimal surfaces of finite total curvature.
Let $z = x + iy$ be a local complex co-ordinate on $\Sigma$ and let $v = s + \sigma$. Then \begin{equation}\label{3.2}
(\delta^2 E)(v) = \int_{\Sigma} \left(|\nabla_{\partial_x}v|^2 +
|\nabla_{\partial_y}v|^2 - \ip{R(v,F_x)F_x}{v} - \ip{R(v,F_y)F_y}{v}
\right) \,dxdy \end{equation} where $R$ is the Riemann curvature tensor of $M$. Now \begin{equation}\label{3.3}
|\nabla_{\partial_x}v|^2 +
|\nabla_{\partial_y}v|^2 = 4\,|\nabla_{\partial_z}v|^2. \end{equation} We let $\perp$ denote orthogonal projection onto $\nu_{\mathbb{C}} := \nu \otimes_\mathbb{R} \mathbb{C}$ and obtain \begin{equation}\label{3.4}
\nabla_{\partial_z}v = (\nabla_{\partial_z}v)^{\perp} + \eta +
(\nabla_{\partial_z}\sigma^{1,0})^{\top} \end{equation} where, as suggested by \eqref{2.8}, \begin{equation}\label{3.5}
\eta := (\nabla_{\partial_z}s)^{\top} +
(\nabla_{\partial_z}\sigma^{0,1})^{\top} \, . \end{equation} On using \eqref{2.6} and \eqref{2.7} in \eqref{3.4} we obtain \begin{equation}\label{3.6}
|\nabla_{\partial_z}v|^2 = |(\nabla_{\partial_z}v)^{\perp}|^2 + |\eta|^2 +
|(\nabla_{\partial_z}\sigma^{1,0})^{\top}|^2. \end{equation} Locally, and away from the branch points, we can write $\sigma^{0,1} = f\,F_{\bar{z}}$ for some locally defined function $f$. Therefore \begin{equation}\label{3.7}
(\nabla_{\partial_z}\sigma^{0,1})^{\perp} = 0
\text{ by harmonicity of $F$}. \end{equation} \eqref{3.7} allows us to re-write \eqref{3.6} as \begin{equation}\label{3.8} \begin{split}
|\nabla_{\partial_z}v|^2 &= |(\nabla_{\partial_z}s)^{\perp}|^2 + |\eta|^2 +
|\nabla_{\partial_z}\sigma^{1,0}|^2 \\
& \qquad +
\ip{(\nabla_{\partial_z}s)^{\perp}}{\nabla_{\partial_{\bar{z}}}\sigma^{0,1}}
+
\ip{(\nabla_{\partial_{\bar{z}}}s)^{\perp}}{\nabla_{\partial_z}\sigma^{1,0}}. \end{split} \end{equation} We now calculate the last two terms of \eqref{3.8}: \begin{equation}\label{3.9} \begin{split} \ip{(\nabla_{\partial_z}s)^{\perp}}{\nabla_{\partial_{\bar{z}}}\sigma^{0,1}} & = \partial_z \ip{s}{\nabla_{\partial_{\bar{z}}}\sigma^{0,1}} - \ip{s}{\nabla_{\partial_z}\nabla_{\partial_{\bar{z}}} \sigma^{0,1}}\, , \\ \ip{(\nabla_{\partial_{\bar{z}}}s)^\perp}{\nabla_{\partial_z}\sigma^{1,0}} & = \partial_{\bar{z}} \ip{s}{\nabla_{\partial_z}\sigma^{1,0}} - \ip{s}{\nabla_{\partial_{\bar{z}}}\nabla_{\partial_z} \sigma^{1,0}} \, . \end{split} \end{equation} But, from \eqref{3.7} and \eqref{3.5} we have $\nabla_{\partial_z}\sigma^{0,1} = (\nabla_{\partial_z}\sigma^{0,1})^{\top} = \eta - (\nabla_{\partial_z}s)^{\top}$ and therefore \begin{equation}\label{3.10} \begin{split}
\ip{\nabla_{\partial_z}\nabla_{\partial_{\bar{z}}}\sigma^{0,1}}{s}
&= \ip{R(F_z,F_{\bar{z}})\sigma^{0,1}}{s} +
\ip{\nabla_{\partial_{\bar{z}}}(\eta - (\nabla_{\partial_z}s)^{\top})}{s} \\
&= \ip{R(F_z,F_{\bar{z}})\sigma^{0,1}}{s} +
|(\nabla_{\partial_z}s)^{\top}|^2 -
\ip{\eta}{(\nabla_{\partial_{\bar{z}}}s)^{\top}}. \end{split} \end{equation} Similarly, \begin{equation}\label{3.11} \ip{\nabla_{\partial_{\bar{z}}}\nabla_{\partial_z}\sigma^{1,0}}{s} = \ip{R(F_{\bar{z}},F_z)\sigma^{1,0}}{s} +
|(\nabla_{\partial_{\bar{z}}}s)^\top|^2 -
\ip{\bar{\eta}}{(\nabla_{\partial_z}s)^{\top}}. \end{equation} Taking \eqref{3.9}, \eqref{3.10} and \eqref{3.11} into account in \eqref{3.8}, integrating and using Stokes's theorem gives \begin{equation}\label{3.12} \begin{split}
\int_{\Sigma} |\nabla_{\partial_z}v|^2 \,dxdy & = \int_{\Sigma}
\left(|(\nabla_{\partial_z}s)^{\perp}|^2 + |\eta|^2 +
|\nabla_{\partial_z}\sigma^{1,0}|^2 -
2\,|(\nabla_{\partial_z}s)^{\top}|^2 \right. \\
& \qquad \quad - \ip{R(F_z,F_{\bar{z}})\sigma^{0,1}}{s} -
\ip{R(F_{\bar{z}},F_z)\sigma^{1,0}}{s} \\
& \left. \qquad \quad + \ip{\eta}{(\nabla_{\partial_{\bar{z}}}s)^{\top}}
+ \ip{\bar{\eta}}{(\nabla_{\partial_z}s)^{\top}}\right ) \,dxdy. \end{split} \end{equation} We now deal with the last two terms in \eqref{3.2}: \begin{equation}\label{3.13}
\begin{split}
\ip{R(v,F_x)F_x}{v} + \ip{R(v,F_y)F_y}{v}
& = 4\,\ip{R(v,F_z)F_{\bar{z}}}{v} \\
& = 4\, \left (\ip{R(s,F_z)F_{\bar{z}}}{s} +
\ip{R(\sigma^{0,1},F_z)F_{\bar{z}}}{\sigma^{1,0}} \right. \\
& \left. \qquad + \ip{R(s,F_z)F_{\bar{z}}}{\sigma^{1,0}} +
\ip{R(\sigma^{0,1},F_z)F_{\bar{z}}}{s} \right ).
\end{split} \end{equation} By the first Bianchi identity, \begin{equation}\label{3.14}
\begin{split} \ip{R(\sigma^{0,1},F_z)F_{\bar{z}}}{s} + \ip{R(F_z,F_{\bar{z}})\sigma^{0,1}}{s} &= 0, \\ \ip{R(\sigma^{1,0},F_{\bar{z}})F_z}{s} + \ip{R(F_{\bar{z}},F_z)\sigma^{1,0}}{s} &= 0.
\end{split} \end{equation} Using \eqref{3.3}, \eqref{3.12}, \eqref{3.13} and \eqref{3.14} in \eqref{3.2} yields: \begin{equation}\label{3.15}
\begin{split}
(\delta^2 E)(v) &= 4 \int_{\Sigma} \left(
|(\nabla_{\partial_z}s)^{\perp}|^2 + |\nabla_{\partial_z}\sigma^{1,0}|^2 +
|\eta|^2 - 2\,|(\nabla_{\partial_z}s)^{\top}|^2 \right. \\
& \qquad \qquad - \ip{R(s,F_z)F_{\bar{z}}}{s} -
\ip{R(\sigma^{0,1},F_z)F_{\bar{z}}}{\sigma^{1,0}} \\
& \left. \qquad \qquad + \ip{\eta}{(\nabla_{\partial_{\bar{z}}}s)^{\top}}
+ \ip{\bar{\eta}}{(\nabla_{\partial_z}s)^{\top}} \right) \,dxdy.
\end{split} \end{equation} An integration by parts shows that \begin{equation}\label{3.16}
\int_{\Sigma} |\nabla_{\partial_z}\sigma^{1,0}|^2\,dxdy = \int_{\Sigma}
|\nabla_{\partial_{\bar{z}}}\sigma^{1,0}|^2 \,dxdy + \int_{\Sigma}
\ip{R(F_z,F_{\bar{z}})\sigma^{1,0}}{\sigma^{0,1}}\,dxdy \end{equation} which, together with \eqref{3.5} and the first Bianchi identity, gives: \begin{equation}\label{3.17}
\begin{split}
\int_{\Sigma} |\nabla_{\partial_z}\sigma^{1,0}|^2\,dxdy
&= \int_{\Sigma} \left( |\eta|^2 + |(\nabla_{\partial_z}s)^{\top}|^2 -
\ip{\eta}{(\nabla_{\partial_{\bar{z}}}s)^{\top}} \right. \\
& \left. \qquad - \ip{\bar{\eta}}{(\nabla_{\partial_z}s)^{\top}} +
\ip{R(\sigma^{1,0},F_{\bar{z}})F_z}{\sigma^{0,1}}\right) \,dxdy.
\end{split} \end{equation} On substituting \eqref{3.17} in \eqref{3.15} we obtain \begin{equation}\label{comp} \begin{split} (\delta^2 E)(v) &=4 \int_{\Sigma} \left(
|(\nabla_{\partial_z}s)^{\perp}|^2 - |(\nabla_{\partial_z}s)^{\top}|^2 -
\ip{R(s,F_z)F_{\bar{z}}}{s} + 2|\eta|^2 \right)\,dxdy \\
&= (\delta^2 A)(s) + 8 \int_{\Sigma} |\eta|^2\,dxdy \, . \end{split} \end{equation} The proof of the theorem is complete. \end{proof}
\section{Proof of Theorems \ref{index_closed} and \ref{index_totcurv}}\label{main_proofs}
\begin{proof}[Proof of Theorem \ref{index_closed}] The inequality $i_A \geqslant i_E$ follows immediately from Theorem \ref{ae_ineq}. For the other inequality, let $S$ be a maximal subspace on which $\delta^2 A < 0$. By the Fredholm alternative, we may solve \eqref{2.9} if, and only if, $(\nabla_{\partial_z}s)^\top$ is orthogonal to $\ker D'{^*}$, where $D'{^*}$ is the adjoint of $D'$. Now an integration by parts shows that $D'{^*} = i*\bar{\partial} \colon \Gamma(\xi^{0,1} \otimes \kappa) \to \Gamma(\xi^{0,1})$ where $\kappa$ is the line bundle of holomorphic one-forms on $\Sigma$ and $*$ is the Hodge star operator. Therefore $\ker D'{^*} = H^0(\xi^{0,1} \otimes \kappa)$ = space of holomorphic sections of $\xi^{0,1} \otimes \kappa$. Let $h^0(\xi^{0,1} \otimes \kappa)$ = complex dimension of $H^0(\xi^{0,1} \otimes \kappa)$. Then, we may find a subspace $\tilde{S} \subset S$ of real dimension $\geqslant$ $\dim S - 2h^0(\xi^{0,1} \otimes \kappa)$ for which \eqref{2.9} has a solution $\sigma_s$ whenever $s \in \tilde{S}$. Moreover, we may arrange for $\sigma_s$ to depend linearly on $s$, since the equation for $\sigma_s$ is linear. Let $\hat{S} = \{ s + \sigma_s \mid s \in \tilde{S} \} \subset \Gamma(F^*(TM))$. Then, by Theorem \ref{ae_ineq}, $\delta^2 E \vert_{\hat{S}} < 0$ and therefore, $i_E \geqslant \dim \hat{S} = \dim \tilde{S} \geqslant i_A - r$ where $r = 2 h^0(\xi^{0,1} \otimes \kappa)$. We now calculate $h^0(\xi^{0,1} \otimes \kappa)$: $c_1(\xi^{0,1}) = 2g-2-b$ and $c_1(\kappa) = 2g-2$ and therefore, by the theorem of Riemann-Roch, we have \[ h^0(\xi^{0,1}\otimes \kappa) = 3g-3-b + h^0(\xi^{1,0}). \] If $b \leqslant 2g-3$, $c_1(\xi^{1,0}) < 0$ and $h^0(\xi^{0,1}\otimes \kappa) = 3g - 3 - b$. If $b \geqslant 4g - 3$, $c_1(\xi^{0,1}\otimes \kappa) < 0$ and $h^0(\xi^{0,1} \otimes \kappa) = 0$.
If $2g - 2 \leqslant b \leqslant 4g - 4$, then $0 \leqslant c_1(\xi^{0,1} \otimes \kappa) \leqslant 2g - 2$ and therefore, by Clifford's theorem (see, for example, \cite{GH}), $h^0(\xi^{0,1} \otimes \kappa) \leqslant \left[\frac{4g - 2 - b}{2}\right]$.
The proof of Theorem \ref{index_closed} is complete. \end{proof}
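The three-way case analysis for $r = 2h^0(\xi^{0,1} \otimes \kappa)$ in the proof above can be mirrored in a few lines of code. The sketch below is ours, not the paper's (the helper name \texttt{r\_bound} is invented); in the middle range it returns the Clifford upper bound rather than an exact value.

```python
def r_bound(g, b):
    """Bound on r = 2*h0(xi^{0,1} tensor kappa) for genus g, branching b.

    Cases from the proof: b >= 4g-3 gives h0 = 0; b <= 2g-3 gives
    h0 = 3g-3-b; in the range 2g-2 <= b <= 4g-4, Clifford's theorem
    gives h0 <= floor((4g-2-b)/2)."""
    if b >= 4 * g - 3:
        h0 = 0
    elif b <= 2 * g - 3:
        h0 = 3 * g - 3 - b
    else:  # remaining case: 2g-2 <= b <= 4g-4
        h0 = (4 * g - 2 - b) // 2
    return 2 * h0

# Unbranched genus-2 surface: r = 6g - 6 = 6.
assert r_bound(2, 0) == 6
# Unbranched torus (g = 1, b = 0): the Clifford range gives r = 2.
assert r_bound(1, 0) == 2
```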
Recall that the nullity $n$ of a functional at a critical point $F$ is the dimension of the space of Jacobi fields of the functional at $F$. If the index of $F$ is $i$ then $i+n$ is equal to the dimension of a maximal subspace of the space of infinitesimal variations of $F$ on which the hessian of the functional is negative semidefinite. A minor modification (which will be left to the reader) of the proof of Theorem \ref{index_closed} yields:
\begin{thm}\label{nul_closed} Let $F \colon \Sigma_g \to M$ and $r$ be as in Theorem \ref{index_closed} and let \begin{align*} n_A &= \text{nullity of $F$ as a critical point of the area functional $A$,} \\ n_E&= \text{nullity of $F$ as a critical point of the energy functional $E$,} \\ {n_E}^T&= \text{dimension of the space of \emph{purely tangential}} \\ &\hphantom{\mbox{}=\mbox{}} \text{Jacobi fields of $F$, as a critical point of $E$.} \end{align*} Then \[ i_E + n_E - n_E^T \leqslant i_A + n_A \leqslant i_E + n_E - n_E^T + r. \] \end{thm} The following comparison of the nullities of energy and area follows immediately from the inequalities in Theorem \ref{index_closed} and Theorem \ref{nul_closed}. \[ n_E - n_E^T - r \leqslant n_A \leqslant n_E - n_E^T + r. \]
We now move on to the proof of Theorem \ref{index_totcurv}. First, we recall that (see, for example, \cite{L}) if a minimal surface $\Sigma$ in $\mathbb{R}^d$ has finite total curvature and finitely many branch points, then $\Sigma$ is conformally diffeomorphic to a closed Riemann surface $\bar{\Sigma}$ with finitely many punctures $\{p_1, \dotsc ,p_k\}$ corresponding to the ends of $\Sigma$. Recall, too, that $G_{2,d}$, the Grassmannian of oriented two-planes in $\mathbb{R}^d$, may be identified with the quadric $Q_{d-2} \subset \mathbb{C} P^{d-1}$ defined by $\{[z] \mid z = (z_1, \dotsc , z_d) \in \mathbb{C}^d \setminus \{0\}, \ \sum_{i=1}^d (z_i)^2 = 0\}.$ Furthermore, the Gauss map $G \colon \Sigma \to G_{2,d} = Q_{d-2}$ is holomorphic and extends to a holomorphic map $\bar{G} \colon \bar{\Sigma} \to G_{2,d}$. Let $\gamma$ be the tautological two-plane bundle over $G_{2,d}$ and let $\bar{\xi} = \bar{G}^*(\gamma)$. Then
$\left. \bar{\xi} \right|_{\Sigma} = \xi$, the ramified tangent bundle of $\Sigma$ and $c_1(\bar{\xi}) = \frac{1}{2 \pi} \int_{\Sigma} K \, dA$. Similarly, let $\bar{\nu}$ be the orthogonal complement of $\bar{\xi}$ in $\bar{\Sigma} \times \mathbb{R}^d$ and then
$\left. \bar{\nu} \right|_{\Sigma} = \nu$. The hessians $\delta^2 E$ and $\delta^2 A$ extend to quadratic forms on sections of $\bar{\Sigma} \times \mathbb{R}^d$ and $\bar{\nu}$, respectively, and in \cite{FC}, \cite{G} and \cite{N1} it is shown that $i_A(\Sigma) = i_A(\bar{\Sigma}).$ We also have $i_E(\bar{\Sigma}) = 0$ and $n_E(\bar{\Sigma}) = d$ because the tangent bundle of $\mathbb{R}^d$ is trivial and the Levi-Civita connection on $\mathbb{R}^d$ is simply the exterior derivative. Finally, the projection of a constant vector field in $\mathbb{R}^d$ onto $\bar{\nu}$ is a Jacobi field of the area functional and therefore, if $\Sigma$ is non-planar then $n_A(\Sigma) \geqslant d$. We shall, in fact, prove the following more precise theorem.
\begin{thm}\label{nul_totcurv} Let $\Sigma$ be a complete, oriented, non-planar minimal surface in $\mathbb{R}^d$ with finitely many branch points and of finite total curvature. Define $g$, $b$ and $r$ as in Theorem \ref{index_totcurv}. Then \begin{equation}\label{totcurv} \indx(\Sigma) + \nul(\Sigma) \leqslant \frac{1}{\pi} \int_{\Sigma} (-K) \, dA + 2g - 2 + d \leqslant \frac{3}{2 \pi} \int_{\Sigma} (-K) \, dA - r + b + d. \end{equation} Furthermore, if $d=3$ then \eqref{totcurv} can be improved to \begin{equation}\label{totcurv_3} \indx(\Sigma) + \nul(\Sigma) \leqslant \frac{1}{\pi} \int_{\Sigma} (-K) \, dA + 2g \leqslant \frac{3}{2 \pi} \int_{\Sigma} (-K) \, dA - r + b + 2. \end{equation} \end{thm} \begin{rem} Inequalities \eqref{1.1} and \eqref{1.2} in Theorem \ref{index_totcurv} result from using, in \eqref{totcurv} and \eqref{totcurv_3}, the observation that $\nul(\Sigma) \geqslant d$ for a non-planar minimal surface. \end{rem}
\begin{proof} It is clear that the proof of Theorem \ref{ae_ineq} also works for $\delta^2 E$ and $\delta^2 A$ extended to $\bar{\Sigma}$. Furthermore, \[ c_1(\bar{\xi}^{1,0}) = c_1(\bar{\xi}) = \frac{1}{2 \pi}\int_{\Sigma} K \, dA < 0. \] Therefore, by the Riemann-Roch theorem we have \[ h^0(\bar{\xi}^{0,1} \otimes \kappa) = - c_1(\bar{\xi}) + c_1(\kappa) + 1 - g = \frac{1}{2 \pi}\int_{\Sigma} (-K) \, dA + g - 1. \] The proof of the first inequality in \eqref{totcurv} can now be completed by an argument identical to that of the proof of Theorem \ref{nul_closed}. The inequality \[ \frac{1}{2 \pi}\int_{\Sigma} (-K) \, dA \geqslant 2g - 2 + r - b \] is due to Osserman; see for instance, \cite{L}.
The proof of \eqref{totcurv_3} when $d=3$ requires the following lemma. \begin{lem} \label{hol} Let $F \colon \Sigma \to \mathbb{R}^3$ be a generalised minimal immersion of a Riemann surface $\Sigma$. Let $\hat{\nu}$ be a smooth unit normal vector field on $\Sigma$. (Such a section of $\nu$ exists because $\Sigma$ is orientable.) Then $(\partial \hat{\nu})^{\top} \in H^0(\xi^{0,1} \otimes \kappa)$. \end{lem} \begin{proof} By \eqref{2.7} we have that \[ (\partial \hat{\nu})^{\top} = \ip{\partial_z \hat{\nu}}{F_z}
\frac{F_{\bar{z}}}{|F_z|^2} \otimes dz =
- \ip{\hat{\nu}}{F_{zz}} \frac{F_{\bar{z}}}{|F_z|^2} \otimes dz . \] The conformality and harmonicity of $F$ then show that $D''\big((\partial \hat{\nu})^{\top}\big) = 0$, where $D'' := d \bar{z} \otimes D_{\partial_{\bar{z}}}$, i.~e., $(\partial \hat{\nu})^{\top}$ is a holomorphic section of $\xi^{0,1} \otimes \kappa$, as claimed. \end{proof} We now return to the proof of \eqref{totcurv_3}. Denote by $\mathcal{M}$ the space of meromorphic functions on $\bar{\Sigma}$. By Lemma \ref{hol}, \[ H^0(\xi^{0,1} \otimes \kappa) = \{g(\partial \hat{\nu})^{\top} \mid g \in \mathcal{M}, \ [g] + [(\partial \hat{\nu})^{\top}] \geqslant 0\}, \] where $[\cdot]$ denotes divisor. It is convenient to define \begin{equation} \label{ml} \mathcal{M}_L := \{g \in \mathcal{M} \mid [g] + [(\partial \hat{\nu})^{\top}] \geqslant 0\}, \end{equation} where, of course, $L := \xi^{0,1} \otimes \kappa$. If $s \in \Gamma(\nu)$, then $s = f \hat{\nu}$ for some $f \in C^{\infty}(\bar{\Sigma})$. Therefore, the Fredholm alternative\footnote{as in the proof of Theorem \ref{index_closed}} for the solvability of \eqref{2.9} can now be stated as the following condition on $f$: \begin{equation}\label{fred} \int_{\Sigma}f \bar{g} K \, dA = 0 \ \forall \, g \in \mathcal{M}_L, \end{equation}
where we have used $|(\partial \hat{\nu})^{\top}|^2 = -K$. Since the constant function 1 belongs to $\mathcal{M}_L$, we see that the $\mathbb{R}$-codimension of the space of real valued smooth functions $f$ satisfying \eqref{fred} is $2 h^0(\xi^{0,1} \otimes \kappa) - 1 = \frac{1}{\pi} \int_{\Sigma} (-K) \, dA + 2g - 3.$ \end{proof}
\section{More area index estimates} \label{particular} Theorem \ref{index_closed} provides an upper bound on $i_A$ whenever $i_E$ may be estimated, and it is often easier to estimate $i_E$ than $i_A$ because $\delta^2 E$ does not involve the second fundamental form. For instance, if $M$ has nonpositive sectional curvature then $i_E$ is easily seen to be zero from \eqref{3.2}, thereby yielding: \begin{cor} \label{nonpos} Let $\Sigma$ be a closed minimal surface in a Riemannian manifold $M$ of nonpositive sectional curvature. Then $i_A \leqslant r \leqslant 6g - 6$ where $r$ is as in Theorem \ref{index_closed}. \end{cor} In particular, the index of a closed minimal surface of genus $g$ in a flat torus is at most $6g-6$. Corollary \ref{nonpos} has already been noted in \cite{E3} by considering an energy functional on Teichm\"{u}ller space.
Theorem \ref{index_closed} can be refined when the ambient space is 3-dimensional in a manner similar to that in Theorem \ref{index_totcurv} for minimal surfaces of finite total curvature in $\mathbb{R}^3$. More precisely,
\begin{thm}\label{index_closed3} Let $F \colon \Sigma_g \to M^3$ be a (possibly branched) minimal immersion of a closed Riemann surface of genus $g$ into a three-dimensional space-form $M^3$. If $F$ is not totally geodesic then \begin{equation} \label{comp_ineq3} i_E \leqslant i_A \leqslant i_E + r - 1 \end{equation} where $r$ is as in Theorem \ref{index_closed}. \end{thm}
The proof consists in observing that Lemma \ref{hol} is still valid in this setting. The argument then proceeds as in the proof of \eqref{totcurv_3}.
The Clifford torus in $\mathbb{R} P^3$ is stable as a harmonic map. (In \cite{E2}, the first author has determined all harmonic tori in $\mathbb{R} P^3$ that minimize energy in their homotopy class.) However, the Clifford torus is unstable as a minimal surface and it is not hard to show that its index is 1. Furthermore $r = 2$. This is a situation in which $i_A < i_E$ and yet \eqref{comp_ineq3} is still sharp.
For a general minimal immersion we can prove
\begin{thm}\label{gen} Let $F \colon \Sigma_g \to M^d$ be a (possibly branched) minimal immersion of a closed Riemann surface of genus $g$ into a Riemannian manifold $M$. Then \begin{equation}\label{eq_gen_h} i_E + n_E \leqslant d \, C(M) \, \text{Area}\,(F(\Sigma_g)) \end{equation} and therefore, \begin{equation}\label{eq_gen_m} i_A + n_A \leqslant d \, C(M) \, \text{Area}\,(F(\Sigma_g)) + r \end{equation} where $r$ is as in Theorem \ref{index_closed} and $C(M)$ is a constant which depends on the second fundamental form of an isometric embedding of $M$ into Euclidean space. \end{thm} \begin{rems} The main interest in \eqref{eq_gen_m} is that no assumption is made on the second fundamental form of $F$. Colding and Minicozzi showed (Theorem 1.1 in \cite{CM}) that, given $A>0$ and a positive integer $g$, there are at most finitely many closed embedded minimal surfaces of genus $g$ and with area at most $A$ in a closed orientable 3-manifold with a bumpy metric. In particular, the Morse index of an embedded minimal surface in a 3-manifold with a bumpy metric is bounded by its genus and its area but no explicit bound like \eqref{eq_gen_m} is given in \cite{CM}. \end{rems} \begin{proof} Let $h(t)$ be the trace of the heat kernel of $\Sigma_g$ with the metric induced by $F$. Then, since $F$ is an isometric harmonic map, Proposition 2.2 in \cite{U} asserts that \begin{equation}\label{eq_gen_hu} i_E + n_E \leqslant d \, \inf_{t>0}e^{2 \mu t}h(t) \end{equation} where $\mu$ is an upper bound of the sectional curvatures of $M^d$.
Now consider $M$ to be isometrically embedded in some Euclidean space. The method of proof of Theorem 5 on pages 991-993 in \cite{CT} can now be employed to obtain \begin{equation}\label{h_bound} h(t) \leqslant \frac{\alpha_1}{(1 - e^{-\alpha_2 t})^2} \, \text{Area}\,(F(\Sigma_g)) \end{equation} where $\alpha_1$ and $\alpha_2$ are constants which depend on the second fundamental form of the isometric embedding of $M$ into Euclidean space. The bound \eqref{eq_gen_h} now follows from \eqref{eq_gen_hu} and \eqref{h_bound}. \end{proof} The indices of some minimal surfaces of finite total curvature have already been mentioned in the remarks at the end of \S1. Many known examples of minimal surfaces in $\mathbb{R}^3$ which are embedded or, at least, whose ends are embedded, have symmetries which allow refinements to Theorem \ref{index_totcurv}. The next proposition makes precise the type of symmetry that the minimal surface is required to have. It includes the notion of strong symmetry with respect to a plane introduced by Cos\'{i}n and Ros in \cite{CR}, Definition 1; see also Lemma 4 of the same article.
\begin{prop} \label{sym} Let $\Sigma$ be a Riemann surface and let $F \colon \Sigma \to \mathbb{R}^3$ be a generalised minimal immersion which is complete and of finite total curvature. Suppose there exists an isometry $\Theta \colon \mathbb{R}^3 \to \mathbb{R}^3$ and a diffeomorphism $\theta \colon \Sigma \to \Sigma$ such that $\Theta \circ F = F \circ \theta$. Let $\mathcal{M}_L$ be defined by \eqref{ml}. If $\theta$ is anti-holomorphic and $g \in \mathcal{M}_L$ then $\overline{g \circ \theta} \in \mathcal{M}_L$. If $\theta$ is holomorphic and $g \in \mathcal{M}_L$ then $g \circ \theta \in \mathcal{M}_L$. \end{prop}
\begin{proof} We shall only give the proof when $\theta$ is anti-holomorphic; the proof when $\theta$ is holomorphic is similar and, in fact, much easier.
From $F(\theta(z)) = \Theta(F(z))$ and $\partial \theta = 0$ we obtain: \[ \frac{\partial F}{\partial \bar{w}} (\theta(z)) \frac{\partial \bar{\theta}}{\partial z} = \Theta_0 \left( \frac{\partial F}{\partial z}(z)\right) \] where $\Theta_0 \in SO(3)$ is the non-translational part of $\Theta$; of course, $w$ is a local complex coordinate defined on a neighbourhood of $\theta(z)$. It follows that \begin{equation} \label{hnu_sym} \hat{\nu}(\theta(z)) = \begin{cases} \Theta_0(\hat{\nu}(z)), & \text{if $\det \Theta_0 = -1$,} \\ -\Theta_0(\hat{\nu}(z)), & \text{if $\det \Theta_0 = 1$.} \end{cases} \end{equation} Differentiating \eqref{hnu_sym} yields: \[ \frac{\partial \hat{\nu}}{\partial \bar{w}} (\theta(z)) \frac{\partial \bar{\theta}}{\partial z} (z) = \pm \Theta_0 \left( \frac{\partial \hat{\nu}}{\partial z}(z)\right), \text{ i.e., } \theta^*\big( (\bar{\partial} \hat{\nu})^{\top} \big) = \pm \Theta_0 \big( (\partial \hat{\nu})^{\top} \big) . \] Therefore, $(\partial \hat{\nu})^{\top}$ has a zero of order $Q$ at $z$ if, and only if, it also has a zero of order $Q$ at $\theta(z)$. The proposition follows immediately. \end{proof}
\begin{thm} \label{strong_sym} Let $\Sigma$ be a Riemann surface and let $F \colon \Sigma \to \mathbb{R}^3$ be a generalised minimal immersion which is complete and of finite total curvature. Suppose there exists an isometry $\Theta \colon \mathbb{R}^3 \to \mathbb{R}^3$ and an anti-holomorphic involution $\theta \colon \Sigma \to \Sigma$ such that $\Theta \circ F = F \circ \theta$. Then \begin{equation}\label{totcurv_3sym} \indx(\Sigma) + \nul(\Sigma) \leqslant \frac{1}{2 \pi} \int_{\Sigma} (-K) \, dA + g + 2. \end{equation} In particular, since $\nul(\Sigma) \geqslant 3$, we have \begin{equation}\label{indx_3sym} \indx(\Sigma) \leqslant \frac{1}{2 \pi} \int_{\Sigma} (-K) \, dA + g - 1. \end{equation} \end{thm}
\begin{proof} Proposition \ref{sym} enables us to define $\rho \colon \mathcal{M}_L \to \mathcal{M}_L$ by $\rho (g) := \overline{g \circ \theta}$, where $\mathcal{M}_L$ is defined by \eqref{ml}. Then $\rho$ is linear and $\rho^2 = \text{identity}$. Therefore, $\mathcal{M}_L = \mathcal{M}_L^+ \oplus \mathcal{M}_L^-$ where $\mathcal{M}_L^+$ and $\mathcal{M}_L^-$ are respectively the $+1$ and $-1$ eigenspaces of $\rho$. Similarly, we can write $C^{\infty}(\bar{\Sigma}) = C^{\infty}_+(\bar{\Sigma}) \oplus C^{\infty}_-(\bar{\Sigma})$.
Next observe that if $f \in C^{\infty}(\bar{\Sigma})$ then $(\delta^2 A)(f \hat{\nu}) = (\delta^2 A)(f_+ \hat{\nu}) + (\delta^2 A)(f_- \hat{\nu})$, where $f_{\pm} := \frac12 (f \pm f \circ \theta)$. Now let $S_+$ be a maximal subspace of $C^{\infty}_+(\bar{\Sigma})$ on which $\delta^2 A \leqslant 0$ and define $S_-$ similarly. A moment's thought will reveal that $S := S_+ \oplus S_-$ is then a maximal subspace of $C^{\infty}(\bar{\Sigma})$ on which $\delta^2 A \leqslant 0$.
Let $\{f_1, \dotsc , f_p\}$ and $\{f_{p+1}, \dotsc , f_q \}$ be bases of $S_+$ and $S_-$ respectively and let $\{g_1, \dotsc , g_{\mu}\}$ and $\{g_{\mu + 1}, \dotsc , g_{\nu}\}$ be bases of $\mathcal{M}_L^+$ and $\mathcal{M}_L^-$ respectively. Then, for $j \in \{1, \dotsc , q \}$ and $\alpha \in \{1, \dotsc , \nu \}$ we have: \begin{align*} \int_{\Sigma}f_j \bar{g}_{\alpha} K \, dA &= \int_{\Sigma}(f_j \circ \theta) (\bar{g}_{\alpha} \circ \theta ) \theta^*(K \, dA) \\ &= \pm \int_{\Sigma}f_j \bar{g}_{\alpha} K \, dA . \end{align*} Therefore, $\int_{\Sigma}f_j \bar{g}_{\alpha} K \, dA $ is either real or purely imaginary. It follows that the $\mathbb{R}$-codimension of the space of real valued smooth functions $f$ satisfying \eqref{fred} is $ h^0(\xi^{0,1} \otimes \kappa) = \frac{1}{2 \pi} \int_{\Sigma} (-K) \, dA + g - 1.$ \eqref{totcurv_3sym} and \eqref{indx_3sym} follow immediately. \end{proof}
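The splitting of $C^{\infty}(\bar{\Sigma})$ used in the proof above is the familiar even/odd decomposition $f_{\pm} = \frac12 (f \pm f \circ \theta)$ under an involution. The following numerical sketch is purely illustrative and is ours, not the paper's: $\theta$ is modelled as index reversal on sampled values.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(10)        # sampled values of a function
theta = lambda a: a[::-1]          # an involution: theta(theta(a)) = a

# Even/odd parts f_pm = (f +/- f o theta) / 2
f_plus = 0.5 * (f + theta(f))
f_minus = 0.5 * (f - theta(f))

assert np.allclose(f, f_plus + f_minus)        # f decomposes as f_+ + f_-
assert np.allclose(theta(f_plus), f_plus)      # +1 eigenvector of pullback
assert np.allclose(theta(f_minus), -f_minus)   # -1 eigenvector of pullback
```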
The Jorge-Meeks $k$-noid $\Sigma_k$ of genus zero and $k$ ends has total curvature equal to $-4 \pi (k - 1)$ and is strongly symmetric in the sense of Cos\'{i}n and Ros in \cite{CR}. Therefore, we may apply Theorem \ref{strong_sym} to conclude, as asserted in a remark in \S1, that $\indx(\Sigma_k) + \nul(\Sigma_k) \leqslant 2k.$
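The closing claim is a direct arithmetic consequence of \eqref{totcurv_3sym}: for the $k$-noid, $\frac{1}{2\pi} \int_{\Sigma_k} (-K)\,dA = 2(k-1)$ and $g = 0$, so the bound is $2(k-1) + 0 + 2 = 2k$. A quick sanity check (the function name is ours, invented for illustration):

```python
def bound_totcurv_3sym(neg_total_curv_over_2pi, g):
    """Right-hand side of (totcurv_3sym): (1/2pi) int(-K) dA + g + 2."""
    return neg_total_curv_over_2pi + g + 2

# Jorge-Meeks k-noid: genus 0 and (1/2pi) int(-K) dA = 2(k-1), so the
# bound on index + nullity is exactly 2k.
for k in range(2, 8):
    assert bound_totcurv_3sym(2 * (k - 1), 0) == 2 * k
```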
\end{document}
\begin{document}
\title{Simpler flag optimization} \author[Z.~Lai]{Zehua Lai} \address{Computational and Applied Mathematics Initiative, University of Chicago, Chicago, IL 60637-1514.} \email{laizehua@uchicago.edu, lekheng@uchicago.edu} \author[L.-H.~Lim]{Lek-Heng~Lim} \author[K.~Ye]{Ke~Ye} \address{KLMM, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China} \email{keyk@amss.ac.cn}
\begin{abstract} We study the geometry of flag manifolds under different embeddings into a product of Grassmannians. We show that differential geometric objects and operations — tangent vector, metric, normal vector, exponential map, geodesic, parallel transport, gradient, Hessian, etc — have closed-form analytic expressions that are computable with standard numerical linear algebra. Furthermore, we are able to derive a coordinate descent method in the flag manifold that performs well compared to other gradient descent methods. \end{abstract}
\subjclass[2010]{14M15, 90C30, 90C53, 49Q12, 65F25, 62H12}
\maketitle
\tableofcontents \addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\section{Introduction}\label{sec:intro} Let $d \le n$ be positive integers and let $(n_1, \dots, n_d)$ be a sequence of integers such that $0 < n_1 < \cdots < n_d < n$. We denote by $\Flag(n_1,\dots,n_d;n)$ the set of all flags in $\mathbb{R}^n$ of type $(n_1,\dots, n_d)$: \[ \Flag(n_1,\dots,n_d;n) \coloneqq \left\lbrace \left\lbrace \mathbb{V}_k \right\rbrace_{k=1}^d: \mathbb{V}_1 \subsetneq \mathbb{V}_2 \subsetneq \cdots \subsetneq \mathbb{V}_d \subsetneq \mathbb{R}^n,\; \dim \mathbb{V}_k = n_k,\; k = 1,\dots, d \right\rbrace. \]
\section{Preliminaries}
\subsection{differential geometry of Grassmann manifolds} Let $k \le n$ be positive integers. We denote by $\Gr(k,n)$ the Grassmann manifold of $k$-dimensional subspaces of $\mathbb{R}^n$. According to \cite{LLY20}, $\Gr(k,n)$ can be characterized as a submanifold of $\O(n) \cap \S_n$, i.e., \begin{align} \Gr(k,n) &\simeq \left\lbrace Q\in \O(n) \cap \S_n: \tr (Q) = 2k - n \right\rbrace \label{eq:modelGr1} \\ &= \left\lbrace V \I_{k,n-k} V^{\scriptscriptstyle\mathsf{T}}: V\in \O(n) \right\rbrace. \label{eq:modelGr2} \end{align} Here $\S_n$ is the space of $n\times n$ symmetric matrices and $\I_{k,n-k}$ is the diagonal matrix $\begin{bmatrix} \I_k & 0 \\ 0 & -\I_{n-k} \end{bmatrix}$.
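The model \eqref{eq:modelGr1}--\eqref{eq:modelGr2} is easy to instantiate with standard numerical linear algebra. The following NumPy sketch (ours, not from the paper; the sizes $n,k$ and the random seed are arbitrary) builds a point $Q = V\I_{k,n-k}V^{\scriptscriptstyle\mathsf{T}}$ from a random orthogonal $V$ and checks that $Q$ is a symmetric orthogonal matrix of trace $2k-n$.

```python
import numpy as np

# A point on Gr(k, n) in the involution model Q = V I_{k,n-k} V^T;
# n, k and the seed are arbitrary choices for this check.
rng = np.random.default_rng(0)
n, k = 5, 2
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
I_kn = np.diag(np.concatenate([np.ones(k), -np.ones(n - k)]))
Q = V @ I_kn @ V.T

# Q is symmetric, orthogonal, and has trace 2k - n, as in (eq:modelGr1).
assert np.allclose(Q, Q.T)
assert np.allclose(Q @ Q.T, np.eye(n))
assert np.isclose(np.trace(Q), 2 * k - n)
```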
\begin{proposition}[Tangent space I]\label{prop:tangent} Let $Q \in \Gr(k,n)$ with eigendecomposition $Q = V I_{k,n-k} V^{\scriptscriptstyle\mathsf{T}}$. The tangent space of $\Gr(k,n)$ at $Q$ is given by \begin{align} \T_Q \Gr(k,n) &= \left\lbrace X\in \mathbb{R}^{n\times n}: X^{\scriptscriptstyle\mathsf{T}} = X,\; X Q +QX = 0,\; \tr(X) = 0 \right\rbrace \label{eq:tan1}
\\ & = \left\lbrace V \begin{bmatrix} 0 & B \\ B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}V^{\scriptscriptstyle\mathsf{T}} \in \mathbb{R}^{n \times n} : B \in \mathbb{R}^{k\times (n-k)} \right\rbrace \label{eq:tan2} \\ & = \left\lbrace QV \begin{bmatrix} 0 & B \\ -B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} V^{\scriptscriptstyle\mathsf{T}} \in \mathbb{R}^{n \times n}: B\in \mathbb{R}^{k \times (n-k)} \right\rbrace. \label{eq:tan3} \end{align} \end{proposition}
\begin{proposition}[Riemannian metric]\label{prop:metric} Let $Q\in \Gr(k,n)$ with $Q = V I_{k,n-k} V^{\scriptscriptstyle\mathsf{T}}$ and \[ X = V \begin{bmatrix} 0 & B \\ B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}V^{\scriptscriptstyle\mathsf{T}} ,\quad Y = V \begin{bmatrix} 0 & C \\ C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}V^{\scriptscriptstyle\mathsf{T}}
\in \T_Q \Gr(k,n). \] Then \begin{equation}\label{eq:metric} \langle X, Y \rangle_Q \coloneqq \tr(XY) =2\tr(B^{\scriptscriptstyle\mathsf{T}} C) \end{equation} defines a Riemannian metric. The corresponding Riemannian norm is \begin{equation}\label{eq:norm} \lVert X \rVert_Q \coloneqq \sqrt{\langle X, X \rangle_Q}= \lVert X \rVert_{\scriptscriptstyle\mathsf{F}} =\sqrt{2} \lVert B \rVert_{\scriptscriptstyle\mathsf{F}}. \end{equation} \end{proposition}
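Propositions~\ref{prop:tangent} and \ref{prop:metric} can be verified numerically. The sketch below (ours; dimensions and seed are arbitrary) checks the defining relations of \eqref{eq:tan1} and the identity $\tr(XY) = 2\tr(B^{\scriptscriptstyle\mathsf{T}} C)$ of \eqref{eq:metric}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
I_kn = np.diag(np.concatenate([np.ones(k), -np.ones(n - k)]))
Q = V @ I_kn @ V.T

def tangent(V, B):
    """Tangent vector V [[0, B], [B^T, 0]] V^T at Q = V I_{k,n-k} V^T."""
    n, k = V.shape[0], B.shape[0]
    Z = np.zeros((n, n))
    Z[:k, k:] = B
    Z[k:, :k] = B.T
    return V @ Z @ V.T

B = rng.standard_normal((k, n - k))
C = rng.standard_normal((k, n - k))
X, Y = tangent(V, B), tangent(V, C)

# X satisfies the defining relations of (eq:tan1) ...
assert np.allclose(X, X.T) and np.isclose(np.trace(X), 0)
assert np.allclose(X @ Q + Q @ X, 0)
# ... and the metric (eq:metric) reduces to 2 tr(B^T C).
assert np.isclose(np.trace(X @ Y), 2 * np.trace(B.T @ C))
```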
\begin{theorem}[Geodesics I]\label{thm:geo} Let $Q\in \Gr(k,n)$ and $X \in \T_Q \Gr(k,n)$ with \begin{equation}\label{eq:geotan} Q = V I_{k,n-k} V^{\scriptscriptstyle\mathsf{T}}, \qquad X = V\begin{bmatrix} 0 & B \\ B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} V^{\scriptscriptstyle\mathsf{T}}. \end{equation} The geodesic $\gamma$ emanating from $Q$ in the direction $X$ is given by \begin{equation}\label{eq:geo} \gamma(t) = V \exp \left( \frac{t}{2} \begin{bmatrix} 0 & -B \\ B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) I_{k,n-k} \exp \left( \frac{t}{2}\begin{bmatrix} 0 & B \\ -B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}. \end{equation} The differential equation for $\gamma$ is \begin{equation}\label{eq:geode} \gamma(t)^{\scriptscriptstyle\mathsf{T}} \ddot{\gamma}(t) - \ddot{\gamma}(t)^{\scriptscriptstyle\mathsf{T}} \gamma(t) = 0,\qquad \gamma(0) = Q,\qquad \dot{\gamma}(0) = X. \end{equation} \end{theorem}
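The closed form \eqref{eq:geo} is again directly computable. The following sketch (ours; it uses \texttt{scipy.linalg.expm} and finite differences purely as a check) verifies that $\gamma(t)$ stays on $\Gr(k,n)$, that $\gamma(0) = Q$, and that $\dot{\gamma}(0)$ equals the prescribed tangent vector.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, k = 5, 2
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
I_kn = np.diag(np.concatenate([np.ones(k), -np.ones(n - k)]))
B = rng.standard_normal((k, n - k))
L = np.zeros((n, n)); L[:k, k:] = -B; L[k:, :k] = B.T   # [[0,-B],[B^T,0]]

def gamma(t):
    """Geodesic (eq:geo): V exp(tL/2) I_{k,n-k} exp(-tL/2) V^T; since L is
    skew, exp(-tL/2) is the transpose of exp(tL/2)."""
    E = expm(t / 2 * L)
    return V @ E @ I_kn @ E.T @ V.T

Q = V @ I_kn @ V.T
assert np.allclose(gamma(0.0), Q)
# gamma stays on Gr(k,n): symmetric involution with trace 2k - n.
for t in (0.7, 2.0):
    G = gamma(t)
    assert np.allclose(G, G.T) and np.allclose(G @ G, np.eye(n))
    assert np.isclose(np.trace(G), 2 * k - n)
# gamma'(0) equals X = V [[0,B],[B^T,0]] V^T (central finite difference).
Xb = np.zeros((n, n)); Xb[:k, k:] = B; Xb[k:, :k] = B.T
X = V @ Xb @ V.T
h = 1e-6
assert np.allclose((gamma(h) - gamma(-h)) / (2 * h), X, atol=1e-6)
```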
\begin{proposition}[Parallel transport]\label{prop:pt} Let $Q\in \Gr(k,n)$ and $X,Y \in \T_Q \Gr(k,n)$ with \[ Q = V I_{k,n-k} V^{\scriptscriptstyle\mathsf{T}}, \qquad X = V\begin{bmatrix} 0 & B \\ B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}V^{\scriptscriptstyle\mathsf{T}}, \qquad Y = V\begin{bmatrix} 0 & C \\ C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}V^{\scriptscriptstyle\mathsf{T}}, \] where $V \in \O(n)$ and $B, C\in \mathbb{R}^{k \times (n-k)}$. Let $\gamma$ be a geodesic curve emanating from $Q$ in the direction $X$. Then the parallel transport of $Y$ along $\gamma$ is \begin{equation}\label{eq:pt} Y (t) =V \exp \left( \frac{t}{2}\begin{bmatrix} 0 & -B \\ B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}\right) \begin{bmatrix} 0 & C \\ C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \exp \left(\frac{t}{2} \begin{bmatrix} 0 & B \\ -B^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}. \end{equation} \end{proposition}
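A quick numerical check of \eqref{eq:pt} (ours; sizes and seed arbitrary): the transported vector $Y(t)$ remains tangent at $\gamma(t)$ and its Riemannian norm is constant along the geodesic, as parallel transport requires.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n, k = 6, 2
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
I_kn = np.diag(np.concatenate([np.ones(k), -np.ones(n - k)]))
B = rng.standard_normal((k, n - k))
C = rng.standard_normal((k, n - k))

L = np.zeros((n, n)); L[:k, k:] = -B; L[k:, :k] = B.T   # [[0,-B],[B^T,0]]
W = np.zeros((n, n)); W[:k, k:] = C; W[k:, :k] = C.T    # [[0, C],[C^T,0]]

for t in (0.0, 0.5, 1.3):
    E = expm(t / 2 * L)                 # exp(-tL/2) = E^T since L is skew
    G = V @ E @ I_kn @ E.T @ V.T        # geodesic gamma(t), eq. (eq:geo)
    Yt = V @ E @ W @ E.T @ V.T          # transported vector, eq. (eq:pt)
    # Y(t) remains tangent at gamma(t) and has constant norm 2 tr(C^T C).
    assert np.allclose(Yt, Yt.T)
    assert np.allclose(Yt @ G + G @ Yt, 0, atol=1e-10)
    assert np.isclose(np.trace(Yt @ Yt), 2 * np.sum(C * C))
```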
\subsection{some useful functions}
We recall the \emph{Peano--Baker series associated to a matrix function $\Phi: [a,b] \to \mathbb{R}^{n\times n}$}. To define the Peano--Baker series, we first recursively define a sequence $\{M_k(t)\}_{k=0}^{\infty}$ of matrix functions \begin{align*} M_0(t) &= \I_n, \\ M_k(t) &= \I_n + \int_{a}^t \Phi(s) M_{k-1} (s) ds,\quad k\in \mathbb{N}. \end{align*} We have the following: \begin{theorem}\cite[Section~3, Theorem~1]{Brockett70}\label{thm:PBseries} The sequence $\{M_k(t)\}_{k=0}^{\infty}$ converges to a matrix function $M(t)$ uniformly on $[a,b]$, which solves the differential equation \[ \frac{d}{dt} X(t) = \Phi(t) X(t), \quad X(a) = \I_n. \] In particular, given any column vector $u \in \mathbb{R}^n$, $M(t)u$ solves the differential equation \[ \frac{d}{dt} x(t) = \Phi(t ) x(t),\quad x(a) = u. \] \end{theorem} The limit matrix function $M(t)$ in Theorem~\ref{thm:PBseries} is defined to be the Peano--Baker series associated to $\Phi(t)$.
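Theorem~\ref{thm:PBseries} suggests a simple numerical scheme: iterate the defining recursion on a time grid. The sketch below is ours, not the paper's; it discretizes the integrals with the trapezoidal rule (grid size and iteration count are arbitrary choices), and for a constant $\Phi$ the limit recovers the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def peano_baker(Phi, t, iters=30, steps=400):
    """Iterate M_k(t) = I + \\int_0^t Phi(s) M_{k-1}(s) ds on a uniform grid,
    with trapezoidal quadrature for the running integral."""
    s = np.linspace(0.0, t, steps)
    n = Phi(0.0).shape[0]
    M = np.stack([np.eye(n)] * steps)          # M_0(s) = I on the grid
    for _ in range(iters):
        F = np.stack([Phi(si) @ Mi for si, Mi in zip(s, M)])
        incr = 0.5 * (F[1:] + F[:-1]) * (s[1] - s[0])
        cum = np.concatenate([np.zeros((1, n, n)), np.cumsum(incr, axis=0)])
        M = np.eye(n) + cum
    return M[-1]

# For constant Phi(s) = A, the Peano--Baker series is expm(t A).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = peano_baker(lambda s: A, 1.0)
assert np.allclose(M, expm(A), atol=1e-4)
```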
\subsection{vectorization of a matrix} Let $m,n$ be positive integers and let $A = [a_1,\dots, a_n]$ be an $m\times n$ matrix whose columns are $a_1,\dots, a_n \in \mathbb{R}^m$. We define the \emph{vectorization} of $A$ to be the column vector \[ \vect(A) \coloneqq \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \in \mathbb{R}^{mn}. \] We recall that using vectorizations of matrices, we can express the matrix-matrix product in terms of matrix-vector products. Namely, for $A\in \mathbb{R}^{m\times n}$ and $B\in \mathbb{R}^{n\times l}$, we have \begin{equation}\label{eq:vectorization} \vect (AB) = (\I_l \otimes A) \vect(B) = (B^{\scriptscriptstyle\mathsf{T}} \otimes \I_m) \vect (A). \end{equation} Moreover, for any positive integers $m,n$, there exists a permutation matrix $K^{(m,n)} \in \mathbb{R}^{mn \times mn}$, called the \emph{commutation matrix}, such that \begin{equation}\label{eq:commutation} K^{(m,n)} \vect(A) = \vect(A^{\scriptscriptstyle\mathsf{T}}),\quad A\in \mathbb{R}^{m\times n}. \end{equation}
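Both identities are checked in a few lines of NumPy (our sketch; \texttt{reshape} with \texttt{order='F'} is column-major stacking, matching $\vect$, and the commutation matrix is built column by column).

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, l = 3, 4, 2
A = rng.standard_normal((m, n))
Bm = rng.standard_normal((n, l))

vec = lambda M: M.reshape(-1, order='F')   # column-major stacking = vect

# (eq:vectorization): vec(AB) = (I_l (x) A) vec(B) = (B^T (x) I_m) vec(A)
assert np.allclose(vec(A @ Bm), np.kron(np.eye(l), A) @ vec(Bm))
assert np.allclose(vec(A @ Bm), np.kron(Bm.T, np.eye(m)) @ vec(A))

# Commutation matrix K^{(m,n)}: its j-th column is vec of the transposed
# "unvectorized" j-th standard basis vector, so K vec(A) = vec(A^T).
K = np.zeros((m * n, m * n))
for j in range(m * n):
    e = np.zeros(m * n); e[j] = 1.0
    K[:, j] = vec(e.reshape((m, n), order='F').T)
assert np.allclose(K @ vec(A), vec(A.T))
```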
\section{Sub-Riemannian geometry of flag manifolds with classical embeddings} According to \cite[Proposition~3.2]{YWL19}, $\Flag(n_1,\dots, n_d;n)$ can be naturally embedded into a product of Grassmann manifolds via \begin{align}\label{eq:embedding1} \iota: \Flag(n_1,\dots, n_d;n) &\hookrightarrow \Gr(n_1,n) \times \Gr(n_2-n_1,n) \times \cdots \times \Gr(n_d- n_{d-1},n) \nonumber \\ \left\lbrace \mathbb{V}_k\right\rbrace_{k=1}^d &\mapsto (\mathbb{W}_1,\mathbb{W}_2,\dots, \mathbb{W}_d). \end{align} Here $\mathbb{W}_1 = \mathbb{V}_1$ and $\mathbb{W}_k$ is the orthogonal complement of $\mathbb{V}_{k-1}$ in $\mathbb{V}_k$, $k=2,\dots,d$. For simplicity, we denote \begin{equation}\label{eq:convention} \boxed{m_1 \coloneqq n_1,\quad m_{d+1} \coloneqq n - n_d, \quad m_{k} \coloneqq n_k - n_{k-1},\quad k = 2,\dots, d} \end{equation} so that $\iota$ is an embedding of $\Flag(n_1,\dots, n_d;n)$ into $\prod_{k=1}^d \Gr(m_k,n) $. \subsection{an embedding of a flag manifold into a matrix manifold} By \eqref{eq:modelGr1}, we may also embed each $\Gr(m_k,n)$ into $\O(n) $ and hence we can write $\mathbb{W}_k$ in \eqref{eq:embedding1} as $V_k \I_{m_k, n - m_k} V_k^{\scriptscriptstyle\mathsf{T}}$ for some $V_k\in \O(n)$. We denote by $\tau$ the induced embedding of $\prod_{k=1}^d \Gr(m_k,n)$ into $ \O(n)^d $. In the following, we will explicitly characterize the image $\tau \circ \iota\left( \Flag(n_1,\dots, n_d;n) \right)$ in $\O(n)^d$. \begin{proposition}[embedding]\label{prop:modelFlag} The image of the embedding \begin{equation}\label{prop:modelFlag:eq0} \varepsilon: \Flag(n_1,\dots, n_d;n) \xhookrightarrow{\iota} \prod_{k=1}^d \Gr(m_k,n) \xhookrightarrow{\tau} \O(n)^d \end{equation} is given by \begin{multline}\label{prop:modelFlag:eq1} \varepsilon \left(\Flag(n_1,\dots, n_d;n) \right) = \lbrace (Q_1,\dots, Q_d)\in \O(n)^d:\tr (Q_k) = 2m_k - n,\ Q_k^{\scriptscriptstyle\mathsf{T}} = Q_k,\ k=1,\dots, d, \\ (\I_n + Q_j)(\I_n + Q_{k}) = 0,\ 1\le j < k \le d \rbrace. 
\end{multline} In particular, we have \begin{equation}\label{prop:modelFlag:eq2} \varepsilon \left(\Flag(n_1,\dots, n_d;n) \right) =\left\lbrace \left( V \J_1 V^{\scriptscriptstyle\mathsf{T}},\dots, V \J_d V^{\scriptscriptstyle\mathsf{T}} \right):V\in \O(n) \right\rbrace, \end{equation} where $\J_k = \diag (-\I_{m_1},\cdots, -\I_{m_{k-1}}, \I_{m_k}, -\I_{m_{k+1}}, \cdots, -\I_{m_d},-\I_{m_{d+1}} )$ is obtained by permuting diagonal blocks of $\I_{m_k,n-m_k}$. \end{proposition} \begin{proof} According to \eqref{eq:modelGr1}, we must have $\varepsilon (\{\mathbb{V}_k\}_{k=1}^d) = (Q_1,\dots, Q_d) \in \left( \O(n)\cap \S_n \right)^d$ with $\tr Q_k = 2m_k - n$. Moreover, since $\mathbb{W}_j$ is perpendicular to $\mathbb{W}_{k}$ for $j \ne k$, we must have $P_{\mathbb{W}_j} \circ P_{\mathbb{W}_{k}} = 0 $, where $P_{\mathbb{U}}$ is the orthogonal projection from $\mathbb{R}^n$ onto a subspace $\mathbb{U}$. Now by \cite[Proposition 2.3]{LLY20}, we have $P_{\mathbb{W}_k} = \frac{1}{2} (\I_n + Q_k)$, which proves \eqref{prop:modelFlag:eq1}. To see \eqref{prop:modelFlag:eq2}, we notice that the relation $(\I_n + Q_j)(\I_n + Q_{k}) = 0 $ implies that $Q_j Q_{k} = Q_{k} Q_j$ and hence there exists $V_0 \in \O(n)$ diagonalizing the $Q_k$'s simultaneously, i.e., $Q_k = V_0 D_k V_0^{\scriptscriptstyle\mathsf{T}} $, where $D_k$ is a diagonal matrix with $m_k$ $1$'s and $(n-m_k)$ $-1$'s along its diagonal. The restrictions $(\I_n + Q_j)(\I_n + Q_{k}) = 0 $ force $D_k =\sigma^{\scriptscriptstyle\mathsf{T}} \J_{k} \sigma$ for a single permutation matrix $\sigma$ and hence $V \coloneqq V_0 \sigma^{\scriptscriptstyle\mathsf{T}}$ gives us the desired expression of $\varepsilon (\{\mathbb{V}_k\}_{k=1}^d)$ in \eqref{prop:modelFlag:eq2}. \end{proof} In fact, \eqref{prop:modelFlag:eq2} is a special case of the general fact \cite[page~384]{FH91} that $G/P$ is an adjoint orbit of $G$ if $P$ is a parabolic subgroup of a semi-simple Lie group $G$. 
In our case, we have $G = \O(n)$ and $P = \O(m_1) \times \cdots \times \O(m_{d+1})$ so that $G/P \simeq \Flag (n_1,\dots, n_d;n)$ is the adjoint orbit of $(\J_1,\dots, \J_d) \in \O(n)^d$.
Due to Proposition~\ref{prop:modelFlag}, in the sequel we abuse the notation by also using $\Flag(n_1,\dots, n_d;n)$ to denote $\varepsilon \left(\Flag(n_1,\dots, n_d;n) \right)$. Accordingly, an element in $\Flag(n_1,\dots, n_d;n)$ is written as a $d$-tuple \[ (VJ_{1} V^{\scriptscriptstyle\mathsf{T}},\dots, VJ_{d} V^{\scriptscriptstyle\mathsf{T}}) = V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}} \] for some $V\in \O(n)$, where $m_1 = n_1$ and $m_k = n_k - n_{k-1}$ for $k = 2,\dots, d$.
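The conditions of Proposition~\ref{prop:modelFlag} are cheap to verify numerically. The sketch below (ours; $d = 2$ with arbitrary $n_1, n_2, n$) constructs a flag as a tuple of involutions $V\J_kV^{\scriptscriptstyle\mathsf{T}}$ and checks the trace and orthogonality relations $(\I_n+Q_1)(\I_n+Q_2)=0$.

```python
import numpy as np

# A point of Flag(n1, n2; n) as a d-tuple (V J_1 V^T, V J_2 V^T); the sizes
# and the random V are our choices for this check.
rng = np.random.default_rng(5)
n1, n2, n = 2, 4, 6
m = [n1, n2 - n1, n - n2]                  # m_1, m_2, m_3
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

def J(k):
    """J_k = diag(-I_{m_1}, ..., +I_{m_k}, ..., -I_{m_{d+1}})."""
    d = np.concatenate([(1.0 if i == k else -1.0) * np.ones(mi)
                        for i, mi in enumerate(m)])
    return np.diag(d)

Q = [V @ J(k) @ V.T for k in range(2)]     # the d = 2 components
for k, Qk in enumerate(Q):
    assert np.allclose(Qk, Qk.T)
    assert np.isclose(np.trace(Qk), 2 * m[k] - n)
# Orthogonality of the subspaces W_1, W_2: (I + Q_1)(I + Q_2) = 0.
assert np.allclose((np.eye(n) + Q[0]) @ (np.eye(n) + Q[1]), 0)
```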
\subsection{tangent space, Riemannian metric and normal space} We first consider the tangent space of $\Flag(n_1,\dots, n_d;n)$ at a point $ V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}}$. To do this, we take a curve $V(t)$ on $\O(n)$ such that $V(0) = V$. It is clear that $\Lambda \coloneqq V(0)^{\scriptscriptstyle\mathsf{T}} \dot{V}(0) \in \mathfrak{so}(n)$ and hence the tangent vector determined by the curve $V(t) (J_{1},\dots, J_{d}) V(t)^{\scriptscriptstyle\mathsf{T}}$ is simply \[ \dot{V}(0) (J_{1},\dots, J_{d}) V(0)^{\scriptscriptstyle\mathsf{T}} + V(0) (J_{1},\dots, J_{d}) \dot{V}(0)^{\scriptscriptstyle\mathsf{T}} \] which can be further written as \[ V(0) \left(
\Lambda (J_{1},\dots, J_{d}) - (J_{1},\dots, J_{d}) \Lambda
\right) V(0)^{\scriptscriptstyle\mathsf{T}}. \] We partition $\Lambda$ as $\Lambda = (\Lambda(p,q))_{p,q=1}^{d+1}$ where $\Lambda(p,q)$ is an $m_p \times m_q$ matrix such that $\Lambda(q,p) =- \Lambda(p,q)^{\scriptscriptstyle\mathsf{T}}$. This implies that \begin{equation}\label{eq:basicalculation} \Lambda J_{k}- J_{k} \Lambda = -2 \begin{bmatrix} 0 & \cdots & 0 & \Lambda(k,1)^{\scriptscriptstyle\mathsf{T}} & 0 &\cdots & 0\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \Lambda(k,k-1)^{\scriptscriptstyle\mathsf{T}} & 0 &\cdots & 0 \\ \Lambda(k,1) & \cdots & \Lambda(k,k-1) & 0 & \Lambda(k,k+1) & \cdots &\Lambda(k,d+1) \\ 0 & \cdots & 0 & \Lambda(k,k+1)^{\scriptscriptstyle\mathsf{T}} & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \Lambda(k,d+1)^{\scriptscriptstyle\mathsf{T}} & 0 &\cdots & 0 \end{bmatrix}. \end{equation} We notice that there is a natural identification $\prod_{1\le j < k \le d+1} \mathbb{R}^{m_j \times m_k} \simeq \mathfrak{so}(n)$ and hence we have an injective map \[ \psi: \prod_{1\le j < k \le d+1} \mathbb{R}^{m_j \times m_k} \simeq \mathfrak{so}(n) \hookrightarrow \prod_{j=1}^d \S_n,\quad \psi( (A_{jk})_{1\le j < k \le d+1} ) = \frac{1}{2} ( AJ - JA), \] where $J = (J_{1},\dots, J_{d})$ and $A \in \mathfrak{so}(n)$ is the skew-symmetric matrix uniquely determined by $(A_{jk})_{1\le j < k \le d+1}$. The above calculations can be summarized as follows. \begin{proposition}\label{prop:tangentFlag} Given a point $\mathfrak{f} \coloneqq V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}} \in \Flag(n_1,\dots, n_d;n)$, the tangent space of $\Flag(n_1,\dots, n_d;n)$ at $\mathfrak{f}$ is \[ \mathbb{T}_{\mathfrak{f}} \Flag(n_1,\dots, n_d;n) = V \left\lbrace \psi( (A_{jk})_{1\le j < k \le d+1} ): A_{jk} \in \mathbb{R}^{m_j \times m_k}, 1\le j < k \le d+1 \right\rbrace V^{\scriptscriptstyle\mathsf{T}}. 
\] In other words, $\mathbb{T}_{\mathfrak{f}} \Flag(n_1,\dots, n_d;n)$ consists of vectors $V (X_1,\dots, X_d) V^{\scriptscriptstyle\mathsf{T}}\in \prod_{j=1}^d \S_n$ satisfying \begin{equation}\label{prop:tangentFlag:eq} X_k(k,l) = -X_l(k,l), \qquad X_k(k,k) = 0, \qquad X_k (p,q) = 0, \end{equation} for all $1\le k, l \le d$ with $l \ne k$ and all $1\le p,q \le d+1$ with $p, q \ne k$. Here for each $1\le s, t \le d+1$, $X_k(s,t)\in \mathbb{R}^{m_s \times m_t}$ denotes the $(s,t)$-th block of $X_k\in \S_n$ when we partition $X_k$ with respect to $n = m_1 + \cdots + m_d + m_{d+1}$. \end{proposition}
Due to Proposition~\ref{prop:tangentFlag}, we are able to parametrize a curve on $\Flag(n_1,\dots, n_d;n)$ easily. \begin{corollary}[curves]\label{cor:curve} If $c: (-\varepsilon,\varepsilon) \to \Flag(n_1,\dots, n_d;n)$ is a differentiable curve such that $c(0) = V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}}$, then there exists a differentiable curve $\Lambda: (-\varepsilon,\varepsilon) \to \mathfrak{so}(n)$ such that $\Lambda(k,k)(t) \equiv 0, k =1,\dots, d+1$ and \[ c(t) = V \exp(\Lambda(t)) (J_{1},\dots, J_{d}) \exp(-\Lambda(t)) V^{\scriptscriptstyle\mathsf{T}}, \] where $\Lambda(t) = (\Lambda(j,k))_{j,k=1}^{d+1,d+1}$ is the partition of $\Lambda(t)$ with respect to $n = m_1 + \cdots + m_{d+1}$. \end{corollary}
As a submanifold of $\prod_{k=1}^d \Gr(m_k,n)$ (or equivalently, $\prod_{k=1}^d \O(n) $), $\Flag(n_1,\dots, n_d;n)$ is equipped with an induced Riemannian metric: \begin{equation}\label{eq:metricFlag} \langle V(X_1,\dots, X_d) V^{\scriptscriptstyle\mathsf{T}}, V(Y_1,\dots, Y_d) V^{\scriptscriptstyle\mathsf{T}} \rangle_{\mathfrak{f}} \coloneqq \sum_{k=1}^d \tr(X_k Y_k), \end{equation} where $\mathfrak{f} = V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}}$ is a point in $\Flag(n_1,\dots, n_d;n)$ and $V(X_1,\dots, X_d) V^{\scriptscriptstyle\mathsf{T}}$, $V(Y_1,\dots, Y_d) V^{\scriptscriptstyle\mathsf{T}}$ are tangent vectors of $\Flag(n_1,\dots, n_d;n)$ at $\mathfrak{f}$. More explicitly, we can write \begin{equation}\label{eq:metricFlag1}
\langle V(X_1,\dots, X_d) V^{\scriptscriptstyle\mathsf{T}}, V(Y_1,\dots, Y_d) V^{\scriptscriptstyle\mathsf{T}} \rangle_{\mathfrak{f}} =2\sum_{k=1}^d \biggl( \sum_{ l < k } \tr( X_k(l,k) Y_k(k,l)) + \sum_{ m > k } \tr( X_k(m,k) Y_k(k,m)) \biggr). \end{equation} We remark that the summands in the formula \eqref{eq:metricFlag1} are not evenly counted. For example, if $d = 2$, then $ \langle V(X_1,X_2) V^{\scriptscriptstyle\mathsf{T}}, V(Y_1,Y_2) V^{\scriptscriptstyle\mathsf{T}} \rangle_{\mathfrak{f}} $ is \begin{equation}\label{eq:metricFlag2} 2 (\tr (X_1(2,1)Y_1(1,2)) + \tr(X_1(3,1)Y_1(1,3)) + \tr(X_2(1,2)Y_2(2,1) ) + \tr(X_2(3,2) Y_2(2,3))), \end{equation} in which the coefficient of $\tr (X_1(2,1)Y_1(1,2)) $ is $4$ since $\tr(X_2(1,2)Y_2(2,1) ) = \tr (X_1(2,1)Y_1(1,2))$, while the coefficients of the other two summands are both $2$.
For each $Q\in \O(n) $, we have $\mathbb{T}_Q \O(n) = Q \mathfrak{so}(n) $ and hence for each $(Q_1,\dots, Q_d) \in \prod_{k=1}^d \O(n)$, we obtain \[ \mathbb{T}_{(Q_1,\dots, Q_d) } \left( \prod_{k=1}^d \O(n) \right) = \bigoplus_{k=1}^d Q_k \mathfrak{so}(n). \] To calculate the normal space of $\Flag(n_1,\dots, n_d;n)$ in $\prod_{k=1}^d \O(n)$ at $\mathfrak{f} = V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}}$, we need to determine $Y_1,\dots, Y_d\in \mathfrak{so}(n)$ such that $Y \coloneqq ( V J_{1} V^{\scriptscriptstyle\mathsf{T}} Y_1,\dots, V J_{d} V^{\scriptscriptstyle\mathsf{T}} Y_d)$ is perpendicular to $\mathbb{T}_{\mathfrak{f}} \Flag(n_1,\dots, n_d;n)$, i.e., $\langle X, Y \rangle_{\mathfrak{f},\prod_{k=1}^d \O(n)} = 0$ for all $X \in \mathbb{T}_{\mathfrak{f}} \Flag(n_1,\dots, n_d;n)$. Here the inner product $\langle \cdot,\cdot \rangle_{\mathfrak{f}, \prod_{k=1}^ d \O(n)}$ is the canonical Riemannian metric on $\prod_{k=1}^ d \O(n)$ at the point $\mathfrak{f}$, which induces \eqref{eq:metricFlag}. We notice that for $X = V (X_1,\dots, X_d) V^{\scriptscriptstyle\mathsf{T}} \in \mathbb{T}_{\mathfrak{f}} \Flag(n_1,\dots, n_d;n)$ we have \[ V X_k V^{\scriptscriptstyle\mathsf{T}} = (V J_{k} V^{\scriptscriptstyle\mathsf{T}}) V J_{k} X_k V^{\scriptscriptstyle\mathsf{T}},\quad k =1,\dots, d, \] which implies that \begin{align*}
\langle X,Y \rangle_{\mathfrak{f},\prod_{k=1}^ d \O(n)} &= \sum_{k=1}^d \tr( (V J_{k} X_k V^{\scriptscriptstyle\mathsf{T}})^{\scriptscriptstyle\mathsf{T}} Y_k ) \\
& = \sum_{k=1}^d \tr ( (V X_k J_{k} V^{\scriptscriptstyle\mathsf{T}} ) Y_k ) \\
& = \sum_{k=1}^d \tr ( (X_k J_{k}) V^{\scriptscriptstyle\mathsf{T}} Y_k V ). \end{align*} Since $ \langle X,Y \rangle_{\mathfrak{f},\prod_{k=1}^ d \O(n)} = 0$ holds for any $X \in \mathbb{T}_{\mathfrak{f}} \Flag(n_1,\dots, n_d;n)$, we can equivalently write this condition as \[ \sum_{k=1}^d \tr ( (X_k J_{k}) Z_k) = 0,\quad (X_1,\dots, X_d) \in \mathbb{T}_{\mathfrak{f}_0} \Flag(n_1,\dots, n_d;n), \] where $\mathfrak{f}_0 = (J_{1},\dots, J_{d}) \in \Flag(n_1,\dots, n_d;n)$ and $Z_k = V^{\scriptscriptstyle\mathsf{T}} Y_k V$, $k=1,\dots, d$. If we fix a pair $(k,l)$ such that $1\le k, l \le d$, $k\ne l$ and set $X_m (p,q) = 0$ for \[ (m,p,q) \not\in \left\lbrace (k,k,l), (k,l,k), (l,k,l), (l,l,k) \right\rbrace, \] then since $X_k J_{k}$ is skew-symmetric, we have \footnotesize \begin{align*} 0 = \langle X,Y \rangle_{\mathfrak{f},\prod_{k=1}^ d \O(n)} &= \tr (X_k(k,l) Z_k (l,k)) - \tr(X_k (k,l)^{\scriptscriptstyle\mathsf{T}} Z_k (k,l)) - \tr( X_l (k,l) Z_l(l,k)) + \tr( X_l(l,k) Z_l(k,l)) \\ &= \tr \bigl(X_k(k,l) (Z_k (l,k) + Z_l(l,k)) \bigr) - \tr \bigl(X_k(k,l)^{\scriptscriptstyle\mathsf{T}} (Z_k (k,l) + Z_l(k,l)) \bigr) \\ & = \tr \bigl(X_k(k,l) (Z_k (l,k) + Z_l(l,k)) \bigr) + \tr \bigl(X_k(k,l) (Z_k (l,k) + Z_l(l,k)) \bigr) \\ & = 2 \tr \bigl(X_k(k,l) ( Z_k(l,k) + Z_l (l,k) ) \bigr), \end{align*}\normalsize where the second equality uses $X_l(k,l) = -X_k(k,l)$ (and hence $X_l(l,k) = -X_k(k,l)^{\scriptscriptstyle\mathsf{T}}$) and the third uses the skew-symmetry of $Z_k$ and $Z_l$. Taking instead $l = d+1$, so that only $X_k(k,d+1)$ and $X_k(d+1,k)$ are nonzero, the same computation yields $2\tr(X_k(k,d+1) Z_k(d+1,k)) = 0$. Therefore, we may derive the following characterization of $\mathbb{N}_\mathfrak{f} \Flag(n_1,\dots, n_d;n)$: \begin{proposition}\label{prop:normalspaceFlag} At a point $\mathfrak{f}\coloneqq V (J_{1},\dots, J_{d}) V^{\scriptscriptstyle\mathsf{T}} \in \Flag(n_1,\dots, n_d;n)$, the normal space $\mathbb{N}_\mathfrak{f} \Flag(n_1,\dots, n_d;n)$ consists of vectors \[ (V J_{1} Z_1 V^{\scriptscriptstyle\mathsf{T}},\dots, V J_{d} Z_d V^{\scriptscriptstyle\mathsf{T}}) \] where $Z_1,\dots,Z_d\in \mathfrak{so}(n)$ satisfy the relations \begin{itemize} \item $Z_k(k,l) + Z_l (k,l)= 0$ for all $1\le k \ne l \le d$. 
\item $Z_k(k,d+1) = 0, Z_k(d+1,k) = 0$ for all $1 \le k \le d$. \end{itemize}
In particular, we have a decomposition \begin{equation}\label{prop:normalspaceFlag:eq1} \mathbb{N}_\mathfrak{f} \Flag(n_1,\dots, n_d;n) = \mathbb{N}_\mathfrak{f} \left( \prod_{k=1}^d \Gr(m_k,n) \right) \bigoplus \mathbb{N}^0_\mathfrak{f} \end{equation} where $\mathbb{N}_\mathfrak{f} \left( \prod_{k=1}^d \Gr(m_k,n) \right) \coloneqq \prod_{k=1}^d \mathbb{N}_{V \J_k V^{\scriptscriptstyle\mathsf{T}}} \Gr(m_k,n)$ and \begin{multline}\label{prop:normalspaceFlag:eq2} \mathbb{N}^0_\mathfrak{f} \coloneqq \lbrace (V \J_1 Z_1 V^{\scriptscriptstyle\mathsf{T}} ,\dots, V \J_d Z_d V^{\scriptscriptstyle\mathsf{T}} ): Z_k\in \mathfrak{so}(n),\ Z_k(k,l) + Z_l(k,l) = 0, \\ Z_k(k,k) = 0,\ Z_k(p,q) = 0,\ Z_k(k,d+1) = 0,\ Z_k(d+1,k) = 0,\ 1\le k,l \le d,\ 1\le p,q \le d+ 1,\ p,q\ne k
\rbrace. \end{multline} \end{proposition} We recall that $\Flag(n_1,\dots, n_d;n)$ can also be embedded into $\prod_{k=1}^d \Gr(m_k,n)$ as a Riemannian submanifold. Hence we may also characterize the normal space of $\Flag(n_1,\dots, n_d;n)$ with respect to this embedding. \begin{corollary} The normal space of $\Flag(n_1,\dots, n_d;n)$ in $\prod_{k=1}^d \Gr(m_k,n)$ at a point $\mathfrak{f} $ is $\mathbb{N}^0_\mathfrak{f} $. \end{corollary}
\begin{proposition}[Projections]\label{prop:projection} Projections from $\mathbb{T}_{\mathfrak{f}} \left( \prod_{k=1}^d \O(n) \right)$ onto $\mathbb{T}_\mathfrak{f} \Flag(n_1,\dots, n_d;n)$ and $\mathbb{N}_\mathfrak{f} \Flag(n_1,\dots, n_d;n)$ are respectively given by \begin{align}\label{prop:projection:tangent} \proj_{\mathfrak{f}}^{\mathbb{T}}: \mathbb{T}_{\mathfrak{f}} \left( \prod_{k=1}^d \O(n) \right) &\to \mathbb{T}_\mathfrak{f} \Flag(n_1,\dots, n_d;n) \nonumber \\ V(J_{1} \Lambda_1 ,\dots, J_{d}\Lambda_d) V^{\scriptscriptstyle\mathsf{T}} &\mapsto V(X_1,\dots, X_d) V^{\scriptscriptstyle\mathsf{T}}. \end{align} and \begin{align}\label{prop:projection:normal} \proj_{\mathfrak{f}}^{\mathbb{N}}: \mathbb{T}_{\mathfrak{f}} \left( \prod_{k=1}^d \O(n) \right) &\to \mathbb{N}_\mathfrak{f} \Flag(n_1,\dots, n_d;n) \nonumber \\ V(J_{1} \Lambda_1 ,\dots, J_{d}\Lambda_d) V^{\scriptscriptstyle\mathsf{T}} &\mapsto V( Z_1,\dots, Z_d) V^{\scriptscriptstyle\mathsf{T}} \end{align} where for each $k = 1,\dots, d$, $X_k\in \S_n$ (resp. $Z_k\in \mathbb{R}^{n\times n}$) is partitioned as $(X_k (p,q))_{p,q =1}^{d+1}$ (resp. $(Z_k(p,q))_{p,q=1}^{d+1}$) with respect to $n = m_1 + \cdots + m_{d+1}$ and \[ X_k (p,q) = \begin{cases} \frac{1}{2} (\Lambda_k(k,q) - \Lambda_{q} (k,q)),~\text{if}~p = k \ne q \le d \\ \Lambda_k (k,d+1),~\text{if}~p=k, q = d+1 \\ -\frac{1}{2} (\Lambda_k(p,k) - \Lambda_{p} (p,k)),~\text{if}~q = k \ne p \le d \\ -\Lambda_k (d+1,q),~\text{if}~q=k, p = d+1 \\ 0,~\text{otherwise}. \end{cases} \] \[ Z_k (p,q) = \begin{cases} \frac{1}{2} (\Lambda_k(k,q) + \Lambda_{q} (k,q)),~\text{if}~p = k \ne q \le d \\ 0,~\text{if}~p=k, q = d+1 \\ -\frac{1}{2} (\Lambda_k(p,k) + \Lambda_{p} (p,k)),~\text{if}~q = k \ne p \le d \\ 0,~\text{if}~q=k, p = d+1 \\ \Lambda_k(p,q),~\text{otherwise}. \end{cases} \] \end{proposition}
Before we proceed, we work out the case $d = 2$ to illustrate the calculations above. In this case, our flag manifold is $\Flag(n_1,n_2;n)$ and hence $m_1 = n_1$, $m_2 = n_2 - n_1$, $m_3 = n - n_2$. A point $\mathfrak{f}$ in $\Flag(n_1,n_2;n)$ is written as \[ V (J_{1},J_{2}) V^{\scriptscriptstyle\mathsf{T}} = V \left( \begin{bmatrix} \I_{m_1} & 0 & 0 \\ 0 & -\I_{m_2} & 0 \\ 0 & 0 & - \I_{m_3} \end{bmatrix}, \begin{bmatrix} -\I_{m_1} & 0 & 0 \\ 0 & \I_{m_2} & 0 \\ 0 & 0 & - \I_{m_3} \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}},\quad V\in \O(n). \] A tangent vector of $\Flag(n_1,n_2;n)$ at $\mathfrak{f}$ is of the form \[ V \left( \begin{bmatrix} 0 & A & B \\ A^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ B^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -A & 0 \\ -A^{\scriptscriptstyle\mathsf{T}} & 0 & C \\ 0 & C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}},\quad A\in \mathbb{R}^{m_1 \times m_2}, B\in \mathbb{R}^{m_1 \times m_3}, C\in \mathbb{R}^{m_2 \times m_3}. \] The normal space of $\Flag(n_1,n_2;n)$ as a submanifold of $\O(n) \times \O(n)$ at $\mathfrak{f}$ consists of vectors \[ V \left( \begin{bmatrix} X & Y & 0 \\ Y^{\scriptscriptstyle\mathsf{T}} & Z & W \\ 0 & -W^{\scriptscriptstyle\mathsf{T}} & U \end{bmatrix}, \begin{bmatrix} R & Y & S \\ Y^{\scriptscriptstyle\mathsf{T}} & T & 0\\ -S^{\scriptscriptstyle\mathsf{T}} & 0 & K \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}, \] where $X,R\in \mathfrak{so}(m_1)$, $Z,T\in \mathfrak{so}(m_2)$, $U,K\in \mathfrak{so}(m_3)$, $Y\in \mathbb{R}^{m_1 \times m_2},W\in \mathbb{R}^{m_2 \times m_3},S\in \mathbb{R}^{m_1 \times m_3}$.
A tangent vector $\xi$ of $\O(n) \times \O(n)$ at $\mathfrak{f}$ can be written as \[ \xi \coloneqq V \left( \begin{bmatrix} A & B & C \\ B^{\scriptscriptstyle\mathsf{T}} & D & E \\ C^{\scriptscriptstyle\mathsf{T}} & -E^{\scriptscriptstyle\mathsf{T}} & F \end{bmatrix}, \begin{bmatrix} X & Y & Z \\ Y^{\scriptscriptstyle\mathsf{T}} & W & S\\ -Z^{\scriptscriptstyle\mathsf{T}} & S^{\scriptscriptstyle\mathsf{T}} & T \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}, \] where $A,X\in \mathfrak{so}(m_1),D,W\in \mathfrak{so}(m_2), F,T\in \mathfrak{so}(m_3), B,Y\in \mathbb{R}^{m_1 \times m_2}, C,Z\in \mathbb{R}^{m_1 \times m_3}, E,S\in \mathbb{R}^{m_2 \times m_3}$. The projection of $\xi$ onto $\mathbb{T}_\mathfrak{f} \Flag(n_1,n_2;n)$ is \[ \proj^{\mathbb{T}}_{\mathfrak{f}} (\xi) = V \left( \begin{bmatrix} 0 & \frac{B-Y}{2} & C \\ \frac{B^{\scriptscriptstyle\mathsf{T}}-Y^{\scriptscriptstyle\mathsf{T}}}{2} & 0 & 0 \\ C^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -\frac{B-Y}{2} & 0 \\ -\frac{B^{\scriptscriptstyle\mathsf{T}} - Y^{\scriptscriptstyle\mathsf{T}}}{2} & 0 & S\\ 0 & S^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}} \] and its projection onto $\mathbb{N}_{\mathfrak{f}} \Flag(n_1,n_2;n)$ is \[ \proj^{\mathbb{N}}_{\mathfrak{f}} (\xi) = V \left( \begin{bmatrix} A & \frac{B + Y}{2} & 0 \\ \frac{B^{\scriptscriptstyle\mathsf{T}} + Y^{\scriptscriptstyle\mathsf{T}}}{2} & D & E \\ 0 & -E^{\scriptscriptstyle\mathsf{T}} & F \end{bmatrix}, \begin{bmatrix} X & \frac{B + Y}{2} & Z \\ \frac{B^{\scriptscriptstyle\mathsf{T}} + Y^{\scriptscriptstyle\mathsf{T}}}{2} & W & 0\\ -Z^{\scriptscriptstyle\mathsf{T}} & 0 & T \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}. \] The normal space $\mathbb{N}_\mathfrak{f}^0$ of $\Flag(n_1,n_2;n)$ as a submanifold of $\Gr(m_1,n) \times \Gr(m_2,n)$ at $\mathfrak{f}$ consists of vectors \[ V \left( \begin{bmatrix} 0 & Y & 0 \\ Y^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, 
\begin{bmatrix} 0 & Y & 0 \\ Y^{\scriptscriptstyle\mathsf{T}} & 0 & 0\\ 0 & 0 & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}, \quad Y\in \mathbb{R}^{m_1 \times m_2}. \] We also recall that the tangent space $\mathbb{T}_{\mathfrak{f}} (\Gr(m_1,n) \times \Gr(m_2,n))$ consists of vectors \[ V \left( \begin{bmatrix} 0 & A & B \\ A^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ B^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & D & 0 \\ D^{\scriptscriptstyle\mathsf{T}} & 0 & C \\ 0 & C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}},\quad A,D\in \mathbb{R}^{m_1 \times m_2}, B\in \mathbb{R}^{m_1 \times m_3}, C\in \mathbb{R}^{m_2 \times m_3}. \] The following identities can be directly verified by the above computations. \begin{align*} \mathbb{T}_{\mathfrak{f}} \left( \O(n) \times \O(n) \right) &=\mathbb{T}_{\mathfrak{f}} \Flag(n_1,n_2;n) \bigoplus \mathbb{N}_{\mathfrak{f}} \Flag(n_1,n_2;n), \\ \mathbb{T}_{\mathfrak{f}} (\Gr(m_1,n) \times \Gr(m_2,n)) &=\mathbb{T}_{\mathfrak{f}} \Flag(n_1,n_2;n) \bigoplus \mathbb{N}_{\mathfrak{f}}^0. \end{align*}
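The $d = 2$ projection formulas above can be checked numerically. The sketch below (ours; block sizes and seed are arbitrary) assembles a random ambient tangent vector $\xi$ from the named blocks, forms the displayed tangential part, and verifies that the complementary part has the displayed normal-space pattern and is orthogonal to the tangential part in the ambient metric.

```python
import numpy as np

rng = np.random.default_rng(6)
m1, m2, m3 = 2, 3, 2

def skew(m):
    S = rng.standard_normal((m, m))
    return S - S.T

z = np.zeros

# A tangent vector xi of O(n) x O(n) at f, with blocks named as in the text.
A, D, F = skew(m1), skew(m2), skew(m3)
X, W, T = skew(m1), skew(m2), skew(m3)
B, Y = rng.standard_normal((m1, m2)), rng.standard_normal((m1, m2))
C, Z = rng.standard_normal((m1, m3)), rng.standard_normal((m1, m3))
E, S = rng.standard_normal((m2, m3)), rng.standard_normal((m2, m3))
xi1 = np.block([[A, B, C], [B.T, D, E], [C.T, -E.T, F]])
xi2 = np.block([[X, Y, Z], [Y.T, W, S], [-Z.T, S.T, T]])

# Tangential parts per the displayed proj^T; the normal parts are xi - proj^T(xi).
H = (B - Y) / 2
p1 = np.block([[z((m1, m1)), H, C],
               [H.T, z((m2, m2)), z((m2, m3))],
               [C.T, z((m3, m2)), z((m3, m3))]])
p2 = np.block([[z((m1, m1)), -H, z((m1, m3))],
               [-H.T, z((m2, m2)), S],
               [z((m3, m1)), S.T, z((m3, m3))]])
n1_, n2_ = xi1 - p1, xi2 - p2

# The normal parts match the displayed pattern: shared (B+Y)/2 block, zero
# (1,3) block in the first component, zero (2,3) block in the second ...
assert np.allclose(n1_[:m1, m1 + m2:], 0)
assert np.allclose(n2_[m1:m1 + m2, m1 + m2:], 0)
assert np.allclose(n1_[:m1, m1:m1 + m2], (B + Y) / 2)
assert np.allclose(n2_[:m1, m1:m1 + m2], (B + Y) / 2)
# ... and the two parts are orthogonal in the ambient metric sum_k tr(P_k^T N_k).
assert np.isclose(np.trace(p1.T @ n1_) + np.trace(p2.T @ n2_), 0.0)
```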
\subsection{geodesics}\label{subsec:geodesic} Recall that we may parametrize a curve $c(t)$ on $\Flag(n_1,\dots, n_d;n)$ as \[ c(t) = V(t) (J_{1},\dots, J_{d}) V(t)^{\scriptscriptstyle\mathsf{T}}, \] where $V(t)$ is a curve in $\O(n)$. By differentiating the equation $V(t)^{\scriptscriptstyle\mathsf{T}} V(t) = \I_n$, we obtain \[ \dot{V}(t)^{\scriptscriptstyle\mathsf{T}} V(t) + V(t)^{\scriptscriptstyle\mathsf{T}} \dot{V}(t) = 0, \] from which we may write $\dot{V}(t)$ as \[ \dot{V}(t) =V(t) \Lambda(t), \] for some $\Lambda(t)\in \mathfrak{so}(n)$. According to Proposition~\ref{prop:tangentFlag}, we may further partition $\Lambda(t)$ as \[ \Lambda(t) = (\Lambda_{jk})_{j,k=1}^{d+1} \] with respect to $n = m_1 + \cdots + m_{d+1}$ and $\Lambda_{kk}(t) \equiv 0$, $k =1,\dots, d+1$. Hence the second derivative of $c(t)$ is \[ \ddot{c}(t) = V(t) \left( \Delta_1(t), \dots, \Delta_d(t) \right) V(t)^{\scriptscriptstyle\mathsf{T}} \] where \begin{equation}\label{eq:2ndderivative} \Delta_k (t) = (\dot{\Lambda}(t) J_k - J_k \dot{\Lambda}(t)) + ({\Lambda}^2(t) J_k + J_k {\Lambda}^2(t)) + \left( - 2 {\Lambda}(t) J_k {\Lambda}(t) \right),\quad k =1,\dots, d. \end{equation} We may rewrite $\ddot{c}(t)$ as \[ \ddot{c}(t) = T_1(t) + T_2(t) -2 T_3(t) \] where $T_j(t)$ is the $j$th summand of $V(t) \left( \Delta_1(t), \dots, \Delta_d(t) \right) V(t)^{\scriptscriptstyle\mathsf{T}}$ with respect to the decomposition of $\Delta_k(t)$ given in \eqref{eq:2ndderivative}. More precisely, \begin{align} T_1(t) &= V(t) (\dot{\Lambda}(t) J_1 - J_1 \dot{\Lambda}(t),\dots, \dot{\Lambda}(t) J_d - J_d \dot{\Lambda}(t)) V(t)^{\scriptscriptstyle\mathsf{T}}, \\ T_2(t) &= V(t) ({\Lambda}^2(t) J_1 + J_1 {\Lambda}^2(t),\dots, {\Lambda}^2(t) J_d + J_d {\Lambda}^2(t)) V(t)^{\scriptscriptstyle\mathsf{T}},\\ T_3(t) &= V(t) ( {\Lambda}(t) J_1 {\Lambda}(t),\dots, {\Lambda}(t) J_d {\Lambda}(t)) V(t)^{\scriptscriptstyle\mathsf{T}}. \end{align}
We recall that the geodesic equation on $\Flag(n_1,\dots, n_d;n)$ is given by \[ \proj_{c(t)}^{\mathbb{T}} (\ddot{c}(t)) = 0. \] Therefore, to determine the geodesic equation explicitly, we need to compute the projections of $T_1(t),T_2(t),T_3(t)$ onto $\mathbb{T}_{c(t)} \Flag(n_1,\dots, n_d;n)$ respectively. From Proposition~\ref{prop:tangentFlag}, $T_1(t)$ already lies in the tangent space $\mathbb{T}_{c(t)} \Flag(n_1,\dots, n_d;n)$. Hence it is sufficient to determine the projections of $T_2(t)$ and $T_3(t)$.
\begin{lemma}\label{lemma:projectionT2} Let $c(t), T_2(t)$ be as above. The projection $\proj_{c(t)}^{\mathbb{T}} (T_2(t))$ is zero. \end{lemma} \begin{proof} We first compute ${\Lambda}^2(t) J_k + J_k {\Lambda}^2(t)$ for each $k =1,\dots, d$. To do this, we partition ${\Lambda}^2(t)$ (resp. $J_k$) as $(\Gamma_{p,q}(t))$ (resp. $(J_k(p,q))$) with respect to the partition $n = m_1 + \cdots + m_{d+1}$ and we recall that \[ J_k (p,q) = \begin{cases} (2\delta_{pk} - 1 )\I_{m_p},\quad~\text{if}~q = p, \\ 0,\quad~\text{otherwise}. \end{cases} \] Here $\delta_{pk}$ is the Kronecker delta. Since ${\Lambda}(t)$ is skew-symmetric, ${\Lambda}^2(t)$ is symmetric, so that $\Gamma_{q,p} = \Gamma_{p,q}^{\scriptscriptstyle\mathsf{T}}$. Now the $(p,q)$-th block of ${\Lambda}^2(t) J_k$ is \[ \sum_{l=1}^{d+1} \Gamma_{p,l} J_k (l,q) = \Gamma_{p,q} J_k(q,q) = (2\delta_{qk} - 1) \Gamma_{p,q} \] and the $(p,q)$-th block of $J_k {\Lambda}^2(t) = ({\Lambda}^2(t) J_k)^{\scriptscriptstyle\mathsf{T}}$ is $(2\delta_{pk} - 1) \Gamma_{p,q}$. This implies that the $(p,q)$-th block of ${\Lambda}^2(t) J_k + J_k {\Lambda}^2(t)$ is \[ (2\delta_{qk} - 1) \Gamma_{p,q} + (2\delta_{pk} - 1) \Gamma_{p,q} = (-2) (1 - \delta_{pk} - \delta_{qk}) \Gamma_{p,q}. \] In particular, if either $q \ne p = k$ or $p\ne q = k$, we obtain that the $(p,q)$-th block of ${\Lambda}^2(t) J_k + J_k {\Lambda}^2(t)$ is zero and this implies that $\proj_{c(t)}^{\mathbb{T}} (T_2(t)) = 0$. \end{proof}
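The block computation in this proof is simple to confirm numerically. The sketch below (ours; $d = 2$ and the block sizes are arbitrary) checks that the off-diagonal blocks of ${\Lambda}^2 J_k + J_k {\Lambda}^2$ in row $k$ and column $k$ vanish, which is what forces the tangential projection of $T_2$ to be zero.

```python
import numpy as np

# Check: the (k,q) and (q,k) blocks (q != k) of Lambda^2 J_k + J_k Lambda^2
# vanish, for d = 2 with block sizes m = (m_1, m_2, m_3).
rng = np.random.default_rng(7)
m = [2, 3, 2]
n = sum(m)
off = np.cumsum([0] + m)
L = rng.standard_normal((n, n)); L = L - L.T # Lambda in so(n)
for k in range(3):                           # zero diagonal blocks, as in the text
    L[off[k]:off[k+1], off[k]:off[k+1]] = 0.0

for k in range(2):                           # k = 1, 2 in the text's numbering
    Jk = np.diag(np.concatenate([(1.0 if i == k else -1.0) * np.ones(mi)
                                 for i, mi in enumerate(m)]))
    M = L @ L @ Jk + Jk @ L @ L
    for q in range(3):
        if q != k:
            assert np.allclose(M[off[k]:off[k+1], off[q]:off[q+1]], 0)
            assert np.allclose(M[off[q]:off[q+1], off[k]:off[k+1]], 0)
```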
\begin{lemma}\label{lemma:projectionT3} Let $c(t),T_3(t)$ be as before. The projection $\proj_{c(t)}^{\mathbb{T}} (T_3(t))$ is \[ V(t) (X_1,\dots, X_d) V(t)^{\scriptscriptstyle\mathsf{T}}, \] where for each $1\le k \le d$, $X_k$ is a symmetric matrix whose $(p,q)$-th block vanishes for any $(p,q)$ except $(k,d+1)$ and $(d+1,k)$. Moreover, if we partition $\Lambda(t)$ as $\Lambda(t) = \begin{bmatrix} \Lambda_0(t) & \Lambda_1(t) \\ -\Lambda_1(t)^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} $ where $\Lambda_0(t) \in \mathfrak{so}(n-m_{d+1})$ and $\Lambda_1(t) \in \mathbb{R}^{(n-m_{d+1}) \times m_{d+1}}$, we have \[ \begin{bmatrix} X_1(1,d+1) \\ \vdots \\ X_d(d,d+1) \end{bmatrix} = -{\Lambda}_0(t) {\Lambda}_1(t). \] \end{lemma} \begin{proof} It is sufficient to compute ${\Lambda}(t) J_k {\Lambda}(t)$ for each $k = 1,\dots, d$. We again partition $\Lambda(t)$ as $(\Lambda(p,q)(t))_{p,q=1}^{d+1}$ with respect to $n = m_1 + \cdots + m_{d+1}$. The $(p,q)$-th block of ${\Lambda}(t) J_k {\Lambda}(t)$ is \begin{align} \sum_{l,s=1}^{d+1}{\Lambda}(p,l)(t) J_k (l,s) {\Lambda}(s,q)(t) &= \sum_{l=1}^{d+1} {\Lambda}(p,l)(t) J_k (l,l) {\Lambda}(l,q)(t) \nonumber \\ &=\sum_{l=1}^{d+1} (2\delta_{kl} - 1) {\Lambda}(p,l)(t) {\Lambda}(l,q)(t). \label{lemma:projectionT3:eq1} \end{align} In particular, for $1\le q\ne k \le d$, the $(k,q)$-th block of ${\Lambda}(t) J_k {\Lambda}(t)$ is \[ \sum_{l=1}^{d+1} (2\delta_{kl} - 1) {\Lambda}(k,l)(t) {\Lambda}(l,q)(t), \] while the $(k,q)$-th block of ${\Lambda}(t) J_q {\Lambda}(t)$ is \[ \sum_{l=1}^{d+1} (2\delta_{ql} - 1) {\Lambda}(k,l)(t) {\Lambda}(l,q)(t). \] Since the diagonal blocks of $\Lambda(t)$ vanish, these two blocks coincide; using Proposition~\ref{prop:projection}, we may conclude that the $(k,q)$-th block of $\proj_{c(t)}^{\mathbb{T}} (T_3(t))$ is zero if $1 \le k, q \le d$.
If we take $q = d +1$ and $p = k$ in \eqref{lemma:projectionT3:eq1}, then the $(k,d+1)$-th block of ${\Lambda}(t) J_k {\Lambda}(t)$ is \[ X_k(k,d+1) =- \sum_{1\le l \ne k \le d} {\Lambda}(k,l)(t) {\Lambda}(l,d+1)(t). \] We observe that $X_k(k,d+1)$ is the $k$-th block of the product \[ -\begin{bmatrix} 0 & {\Lambda(1,2)}(t) & \dots & {\Lambda} (1,d-1)(t) & {\Lambda} (1,d)(t) \\ {\Lambda}(2,1)(t) & 0 & \dots & {\Lambda}(2,d-1)(t) & {\Lambda}(2,d)(t) \\ \vdots &\vdots & \ddots & \vdots & \vdots \\ {\Lambda}(d-1,1)(t) & {\Lambda}(d-1,2)(t) & \dots & 0 & {\Lambda}(d-1,d)(t) \\ {\Lambda}(d,1)(t) & {\Lambda}(d,2)(t) & \dots & {\Lambda}(d,d-1)(t) &0 \end{bmatrix} \begin{bmatrix} {\Lambda}(1,d+1)(t) \\ {\Lambda}(2,d+1)(t) \\ \vdots \\ {\Lambda}(d-1,d+1)(t) \\ {\Lambda}(d,d+1)(t) \\ \end{bmatrix}, \] which can be written in the compact form $-{\Lambda}_0(t) {\Lambda}_1(t)$. \end{proof}
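The identity $\bigl(X_k(k,d+1)\bigr)_{k=1}^{d} = -\Lambda_0(t)\Lambda_1(t)$ can likewise be checked numerically. The following sketch is illustrative; the block sizes and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
m = [2, 2, 2, 3]                   # m_1, ..., m_{d+1} with d = 3
n, off = sum(m), np.cumsum([0, *m])
blk = lambda M, p, q: M[off[p]:off[p+1], off[q]:off[q+1]]

Lam = rng.standard_normal((n, n))
Lam = (Lam - Lam.T) / 2
for k in range(4):                 # vanishing diagonal blocks
    Lam[off[k]:off[k+1], off[k]:off[k+1]] = 0

def J(k):
    D = -np.eye(n)
    D[off[k]:off[k+1], off[k]:off[k+1]] *= -1
    return D

s = n - m[-1]                      # n - m_{d+1}
pred = -Lam[:s, :s] @ Lam[:s, s:]  # -Lambda_0 Lambda_1
err = 0.0
for k in range(3):                 # (k, d+1)-th block of Lambda J_k Lambda
    got = blk(Lam @ J(k) @ Lam, k, 3)
    err = max(err, np.abs(got - pred[off[k]:off[k+1], :]).max())
```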
By assembling Lemmas~\ref{lemma:projectionT2} and \ref{lemma:projectionT3}, we can easily derive the geodesic equation on a flag manifold, from which we can even obtain an explicit formula for the geodesic curve. In fact, we have the following: \begin{proposition}[geodesics]\label{prop:gedoesic} Let $c(t)$ be a curve on $\Flag(n_1,\dots, n_d;n)$. We parametrize $c(t)$ as \[ c(t) = V(t) (J_{1},\dots, J_{d}) V(t)^{\scriptscriptstyle\mathsf{T}}, \] where $V(t)$ is a curve in $\O(n)$. We have the following: \begin{enumerate} \item There exists a unique $\Lambda(t) \in \mathfrak{so}(n)$ such that $\dot{V}(t) = V(t) \Lambda(t)$. \label{prop:gedoesic:item1} \item If we partition $\Lambda(t)$ as $\Lambda(t) = (\Lambda(p,q)(t))_{p,q=1}^{d+1,d+1}\in \mathfrak{so}(n)$ with respect to $n = m_1 + \cdots + m_{d+1}$, then $\Lambda(p,p)(t) \equiv 0, p =1,\dots, d+1$.\label{prop:gedoesic:item2} \item $c(t)$ is a geodesic curve if and only if \begin{equation}\label{prop:gedoesic:eq1} \dot{\Lambda}_0(t) = 0,\quad \dot{\Lambda}_1(t) = {\Lambda}_0(t) {\Lambda}_1(t), \end{equation} where $\Lambda_0(t) \coloneqq (\Lambda(p,q)(t))_{p,q=1}^{d,d}$ and $\Lambda_1(t) \coloneqq (\Lambda(p,d+1)(t))_{p=1}^{d}$. \label{prop:gedoesic:item3} \item The solution to \eqref{prop:gedoesic:eq1} is \[ \Lambda_0 (t) = {\Lambda}_0(0),\quad \Lambda_1(t) =\exp( t {\Lambda}_0(0)) \Lambda_1(0) . 
\] Hence a geodesic curve $c(t)$ is \[ c(t) = V(t) (J_1,\dots, J_d) V^{\scriptscriptstyle\mathsf{T}}(t), \] where $V(t)$ is a curve in $\O(n)$ written as \begin{equation}\label{prop:gedoesic:eq2} V(t) = V(0) \exp \left( t\begin{bmatrix} 2X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) \begin{bmatrix} \exp(-tX_0) & 0 \\ 0 & \I_{m_{d+1}} \end{bmatrix} \end{equation} for some $X_0 \in \mathfrak{so}(n-m_{d+1})$ satisfying $X_0(k,k) = 0, k =1,\dots, d$, and $X_1 \in \mathbb{R}^{(n-m_{d + 1}) \times m_{d+1}}$.\label{prop:gedoesic:item4} \end{enumerate} \end{proposition} \begin{proof} \eqref{prop:gedoesic:item1}--\eqref{prop:gedoesic:item3} and the first half of \eqref{prop:gedoesic:item4} are clear from the earlier discussion, so it only remains to prove the second half of \eqref{prop:gedoesic:item4}. To that end, we notice that $V(t)$ must satisfy the equation \begin{equation}\label{prop:gedoesic:eq3} \dot{V}(t) = V(t) \begin{bmatrix} X_0 & \exp (t X_0)X_1 \\
-X_1^{\scriptscriptstyle\mathsf{T}} \exp(-t X_0) & 0 \end{bmatrix} \end{equation} and \[ \begin{bmatrix} X_0 & \exp (t X_0)X_1 \\
-X_1^{\scriptscriptstyle\mathsf{T}} \exp(-t X_0) & 0 \end{bmatrix} = \begin{bmatrix} \exp(tX_0) & 0 \\ 0 & \I_{m_{d+1}} \ \end{bmatrix} \begin{bmatrix} X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \begin{bmatrix} \exp(-tX_0) & 0 \\ 0 & \I_{m_{d+1}} \ \end{bmatrix}. \] If we set $W(t) = V(t) \begin{bmatrix} \exp(tX_0) & 0 \\ 0 & \I_{m_{d+1}} \ \end{bmatrix}$, then \eqref{prop:gedoesic:eq3} becomes \[ \dot{W}(t) = W(t) \begin{bmatrix} 2X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \] whose solution is simply $W(t) = W(0) \exp \left( t \begin{bmatrix} 2X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) = V(0) \exp \left( t \begin{bmatrix} 2X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) $. Hence we obtain that \[ V(t) = V(0) \exp \left( t \begin{bmatrix} 2X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) \begin{bmatrix} \exp(-tX_0) & 0 \\ 0 & \I_{m_{d+1}} \ \end{bmatrix}. \] \end{proof}
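Formula \eqref{prop:gedoesic:eq2} can be checked numerically against the frame equation $\dot{V}(t) = V(t)\Lambda(t)$, with $\Lambda(t)$ as in the proof. The sketch below is illustrative; the block sizes and seed are arbitrary, and SciPy's matrix exponential is used.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
m = [2, 2, 3]                                  # m_1, m_2, m_{d+1}, n = 7
n, off = sum(m), np.cumsum([0, *m])
s, r = n - m[-1], m[-1]

X0 = rng.standard_normal((s, s))
X0 = (X0 - X0.T) / 2
for k in range(2):
    X0[off[k]:off[k+1], off[k]:off[k+1]] = 0   # X_0(k,k) = 0
X1 = rng.standard_normal((s, r))

A = np.block([[2 * X0, X1], [-X1.T, np.zeros((r, r))]])
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))

def V(t):                                      # the claimed geodesic frame
    D = np.block([[expm(-t * X0), np.zeros((s, r))],
                  [np.zeros((r, s)), np.eye(r)]])
    return V0 @ expm(t * A) @ D

def Lam(t):                                    # Lambda(t) from the proof
    return np.block([[X0, expm(t * X0) @ X1],
                     [-X1.T @ expm(-t * X0), np.zeros((r, r))]])

h, t0 = 1e-5, 0.37
dV = (V(t0 + h) - V(t0 - h)) / (2 * h)         # central difference
err = np.abs(dV - V(t0) @ Lam(t0)).max()
```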
We remark that if $d = 1$, then $X_0 = 0$ in \eqref{prop:gedoesic:eq2} and a geodesic curve on $\Gr(n_1,n)$ passing through $V J_1 V^{\scriptscriptstyle\mathsf{T}}$ is \[ c(t) = V \exp \left( t \begin{bmatrix} 0 & X_1 \\ - X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) \I_{n_1,n-n_1} \exp \left( -t \begin{bmatrix} 0 & X_1 \\ - X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}, \] which coincides with the formula derived in \cite{LLY20}.
We again work out the case $d = 2$ to illustrate the proof of Proposition~\ref{prop:gedoesic}. To this end, we write \[ \Lambda (t) = \begin{bmatrix} 0 & A(t) & B(t) \\ -A^{\scriptscriptstyle\mathsf{T}} (t) & 0 & C(t) \\ -B^{\scriptscriptstyle\mathsf{T}} (t) & -C^{\scriptscriptstyle\mathsf{T}}(t) & 0 \end{bmatrix},\quad A(t)\in \mathbb{R}^{m_1 \times m_2},B(t)\in \mathbb{R}^{m_1 \times m_3},C(t)\in \mathbb{R}^{m_2 \times m_3}, \] and suppose that \[ c(t) = V(t) (J_1,J_2) V(t)^{\scriptscriptstyle\mathsf{T}},\quad \dot{V}(t) = V(t) \Lambda(t), \quad V(t)\in \O(n) \] is a curve passing through $(J_1,J_2)$ with direction \[ ({\Lambda}(0) J_1 - J_1 {\Lambda}(0), {\Lambda}(0)J_2- J_2 {\Lambda}(0)) =-2 \left( \begin{bmatrix} 0 & A(0) & B(0) \\ A(0)^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ B(0)^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -A(0) & 0 \\ -A(0)^{\scriptscriptstyle\mathsf{T}} & 0 & C(0) \\ 0 & C(0)^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right). \] We write $\ddot{c}(t) = V(t)\left( \Delta_1(t), \Delta_2(t) \right) V(t)^{\scriptscriptstyle\mathsf{T}}$ where \[ \Delta_k (t) = (\dot{\Lambda}(t) J_k - J_k \dot{\Lambda}(t)) + ({\Lambda}^2(t) J_k + J_k {\Lambda}^2(t)) + \left( - 2 {\Lambda}(t) J_k {\Lambda}(t) \right). \] It is sufficient to compute the projections of ${\Lambda}(t) J_1 {\Lambda}(t)$ and ${\Lambda}(t) J_2 {\Lambda}(t)$ onto $T_{c(t)} \Flag(n_1,n_2;n)$; the relevant blocks are \[ \left({\Lambda}(t) J_1 {\Lambda}(t), {\Lambda}(t) J_2 {\Lambda}(t)\right) = \left( \begin{bmatrix} * & {B}(t){C}(t)^{\scriptscriptstyle\mathsf{T}} & -{A}(t){C}(t) \\ {C}(t){B}(t)^{\scriptscriptstyle\mathsf{T}} & * & * \\ -{C}(t)^{\scriptscriptstyle\mathsf{T}} {A}(t)^{\scriptscriptstyle\mathsf{T}} & * & * \end{bmatrix}, \begin{bmatrix} * & {B}(t){C}(t)^{\scriptscriptstyle\mathsf{T}} & * \\ {C}(t){B}(t)^{\scriptscriptstyle\mathsf{T}} & * & {A}(t)^{\scriptscriptstyle\mathsf{T}} {B}(t) \\ * & {B}(t)^{\scriptscriptstyle\mathsf{T}} {A}(t) & * \end{bmatrix} \right), \] where $*$ denotes those irrelevant blocks. 
Putting these together, we obtain {\scriptsize \[ \proj^{\mathbb{T}}_{c(t)} (\ddot{c}(t)) =-2 \left( \begin{bmatrix} 0 & \dot{A}(t) & \dot{B}(t) -{A}(t){C}(t) \\ \dot{A}(t)^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ \dot{B}(t)^{\scriptscriptstyle\mathsf{T}}- {C}(t)^{\scriptscriptstyle\mathsf{T}} {A}(t)^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -\dot{A}(t) & 0 \\ -\dot{A}^{\scriptscriptstyle\mathsf{T}}(t) & 0 & \dot{C}(t) + {A}(t)^{\scriptscriptstyle\mathsf{T}} {B}(t) \\ 0 & \dot{C}(t)^{\scriptscriptstyle\mathsf{T}} + {B}(t)^{\scriptscriptstyle\mathsf{T}} {A}(t) & 0 \end{bmatrix} \right). \]} Hence the geodesic equation for $\Flag(n_1,n_2;n)$ is \[ \dot{A}(t) =0, \quad \dot{B}(t) - {A}(t){C}(t)= 0, \quad \dot{C}(t) + {A}(t)^{\scriptscriptstyle\mathsf{T}} {B}(t)= 0, \] which can be rewritten in a more compact form: \begin{equation}\label{eq:geodesic:d=2} \dot{A}(t) = 0,\quad \begin{bmatrix} \dot{B}(t) \\ \dot{C}(t) \end{bmatrix} = \begin{bmatrix} 0 & {A}(t) \\ -{A}^{\scriptscriptstyle\mathsf{T}}(t) & 0 \end{bmatrix} \begin{bmatrix} {B}(t) \\ {C}(t) \end{bmatrix}. \end{equation} The solution to \eqref{eq:geodesic:d=2} is \[ A(t) = {A}(0),\quad \begin{bmatrix} B(t) \\ C(t) \end{bmatrix} = \exp\left(t \begin{bmatrix} 0 & {A}(0) \\ -{A}^{\scriptscriptstyle\mathsf{T}}(0) & 0 \end{bmatrix} \right) \begin{bmatrix} {B}(0) \\ {C}(0) \end{bmatrix}. \]
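The matrix-exponential solution of \eqref{eq:geodesic:d=2} is immediate to verify numerically. The sketch below is illustrative only; the dimensions and seed are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
m1, m2, m3 = 2, 3, 2
A0 = rng.standard_normal((m1, m2))          # A(t) = A(0) is constant
B0 = rng.standard_normal((m1, m3))
C0 = rng.standard_normal((m2, m3))

K = np.block([[np.zeros((m1, m1)), A0],     # the coefficient matrix
              [-A0.T, np.zeros((m2, m2))]])

def BC(t):                                  # stacked [B(t); C(t)]
    return expm(t * K) @ np.vstack([B0, C0])

h, t0 = 1e-5, 0.5
dBC = (BC(t0 + h) - BC(t0 - h)) / (2 * h)
err = np.abs(dBC - K @ BC(t0)).max()        # check d/dt [B; C] = K [B; C]
```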
\section{Sub-Riemannian geometry of flag manifolds with modified embeddings} In this section, we discuss the embedded geometry of flag manifolds with respect to a modified version of the embedding \eqref{eq:embedding1}. Namely, we define \begin{align}\label{eq:embedding2} \tilde{\iota}: \Flag(n_1,\dots, n_d;n) &\hookrightarrow \Gr(n_1,n) \times \Gr(n_2-n_1,n) \times \cdots \times \Gr(n_d- n_{d-1},n) \times \Gr(n-n_d,n) \nonumber \\ ( \left\lbrace \mathbb{V}_k\right\rbrace_{k=1}^d ) &\mapsto (\mathbb{W}_1,\mathbb{W}_2,\dots, \mathbb{W}_d,\mathbb{W}_{d+1}). \end{align} Here $\mathbb{W}_k$ is the orthogonal complement of $\mathbb{V}_{k-1}$ in $\mathbb{V}_k$ for $2\le k \le d$, $\mathbb{W}_1 = \mathbb{V}_1$ and $\mathbb{W}_{d+1}$ is the orthogonal complement of $\mathbb{V}_d$ in $\mathbb{R}^n$. We observe that \[ \tilde{\iota}( \left\lbrace \mathbb{V}_k\right\rbrace_{k=1}^d ) = (\iota( \left\lbrace \mathbb{V}_k\right\rbrace_{k=1}^d ), \mathbb{W}_{d+1}). \] In other words, $\tilde{\iota}$ is simply an extension of $\iota$ by tautologically adding the orthogonal complement of $\mathbb{V}_d$. Since $\iota$ is already an embedding, we may easily conclude that $\tilde{\iota}$ is also an embedding. Adopting the convention \eqref{eq:convention}, $\tilde{\iota}$ embeds $\Flag(n_1,\dots, n_d;n)$ into $\prod_{j=1}^{d+1} \Gr(m_j,n)$. Moreover, by Proposition~\ref{prop:modelFlag} we have the following: \begin{proposition}[embedding]\label{prop:newmodelFlag} The image of the embedding \begin{equation}\label{prop:newmodelFlag:eq0} \tilde{\varepsilon}: \Flag(n_1,\dots, n_d;n) \xhookrightarrow{\tilde{\iota}} \prod_{j=1}^{d+1} \Gr(m_j,n) \xhookrightarrow{\tilde{\tau}} \O(n)^{d+1} \end{equation} is given by \begin{multline}\label{prop:newmodelFlag:eq1} \tilde{\varepsilon} \left(\Flag(n_1,\dots, n_d;n) \right) = \lbrace (Q_1,\dots, Q_{d+1})\in\prod_{j=1}^{d+1} \O(n):\tr (Q_j) = 2m_j - n, Q_j^{\scriptscriptstyle\mathsf{T}} = Q_j, \\ (\I_n + Q_j)(\I_n + Q_{l}) = 0, \quad 1\le j < l \le d+1 \rbrace. 
\end{multline} In particular, we also have \begin{equation}\label{prop:newmodelFlag:eq2} \tilde{\varepsilon} \left(\Flag(n_1,\dots, n_d;n) \right) =\left\lbrace V (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}: V\in \O(n) \right\rbrace, \end{equation} where, for $k =1,\dots, d+1$, $J_k = \diag (-\I_{m_1},\cdots, -\I_{m_{k-1}}, \I_{m_k}, -\I_{m_{k+1}}, \cdots, -\I_{m_{d+1}})$ is obtained by permuting the diagonal blocks of $\I_{m_k,n-m_k}$, and \[ V (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}} \coloneqq \left( V J_1 V^{\scriptscriptstyle\mathsf{T}},\dots, V J_{d+1} V^{\scriptscriptstyle\mathsf{T}} \right). \] \end{proposition} Similarly to Proposition~\ref{prop:tangentFlag} and Corollary~\ref{cor:curve}, we also have: \begin{proposition}\label{prop:newmodeltangentFlag} Given a point $\tilde{\mathfrak{f}} \coloneqq V (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}$, the tangent space $\mathbb{T}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n)$ consists of vectors $V (X_1,\dots, X_{d+1}) V^{\scriptscriptstyle\mathsf{T}} \in \prod_{j=1}^{d+1} S_n$ satisfying \begin{equation}\label{prop:newmodeltangentFlag:eq} X_k(k,l) = - X_{l}(k,l), \quad X_k(p,q) = 0, \quad X_k(k,k) = 0, \quad 1\le k,l,p,q \le d+1~\text{and}~p,q,l \ne k. \end{equation} Here $X_k(s,t) \in \mathbb{R}^{m_s \times m_t}$ is the $(s,t)$-th block of $X_k\in S_n$ when we partition $X_k$ with respect to $n = \sum_{j=1}^{d+1} m_j$. Moreover, a curve $c(t)$ passing through $c(0) = V (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}$ on $\Flag(n_1,\dots, n_d;n)$ can be locally parametrized as \[ c(t) = V \exp(\Lambda(t)) (J_1,\dots, J_{d+1}) \exp(-\Lambda(t)) V^{\scriptscriptstyle\mathsf{T}} \] for some differentiable curve $\Lambda: (-\varepsilon,\varepsilon) \to \mathfrak{so}(n)$ such that $\Lambda(k,k)(t) \equiv 0$. 
\end{proposition} If $d = 2$, then a tangent vector of $\Flag(n_1, n_2;n)$ at $\tilde{\mathfrak{f}}$ can be written as \[ V \left( \begin{bmatrix} 0 & A & B \\ A^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ B^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -A & 0 \\ -A^{\scriptscriptstyle\mathsf{T}} & 0 & C \\ 0 & C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & -B \\ 0 & 0 & -C \\ -B^{\scriptscriptstyle\mathsf{T}} & -C^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}, \] where $A\in \mathbb{R}^{m_1 \times m_2}, B\in \mathbb{R}^{m_1 \times m_3}, C\in \mathbb{R}^{m_2 \times m_3}$.
\subsection{induced Riemannian metric, normal space and projections} As a submanifold of $\prod_{j =1}^{d+1} \O(n)$, $\Flag(n_1,\dots, n_d;n)$ is equipped with a naturally induced Riemannian metric: \begin{align}\label{eq:newmodelmetricFlag} \langle V(X_1,\dots, X_{d+1}) V^{\scriptscriptstyle\mathsf{T}}, V(Y_1,\dots, Y_{d+1}) V^{\scriptscriptstyle\mathsf{T}} \rangle_{\tilde{\mathfrak{f}}} &\coloneqq \sum_{j=1}^{d+1} \tr(X_j Y_j) \\ \nonumber &= 2\sum_{k=1}^{d+1} \Big( \sum_{ l < k} \tr( X_k(l,k) Y_k(k,l)) + \sum_{m > k} \tr( X_k(m,k) Y_k(k,m)) \Big). \end{align} Unlike \eqref{eq:metricFlag}, in which some summands are weighted differently, all summands in the new metric \eqref{eq:newmodelmetricFlag} are evenly weighted. For instance, if we take $d =2$ then $\langle V(X_1,X_2, X_{3}) V^{\scriptscriptstyle\mathsf{T}}, V(Y_1,Y_2,Y_{3}) V^{\scriptscriptstyle\mathsf{T}} \rangle_{\tilde{\mathfrak{f}}} $ is simply \begin{equation}\label{eq:newmodelmetricFlag1} 4 \left( \tr (X_1(2,1) Y_1(1,2)) + \tr (X_1(3,1) Y_1(1,3)) + \tr (X_2(3,2) Y_2(2,3)) \right). \end{equation} The distinction between \eqref{eq:newmodelmetricFlag} and \eqref{eq:metricFlag} can be easily observed by comparing \eqref{eq:newmodelmetricFlag1} with \eqref{eq:metricFlag2}.
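For $d = 2$ one can verify \eqref{eq:newmodelmetricFlag1} directly by assembling the three symmetric components of a tangent vector. The sketch below is illustrative; the block sizes and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
m1, m2, m3 = 2, 3, 2
z = np.zeros

def tangent(A, B, C):          # the three components at f~ (take V = I)
    X1 = np.block([[z((m1, m1)), A, B],
                   [A.T, z((m2, m2)), z((m2, m3))],
                   [B.T, z((m3, m2)), z((m3, m3))]])
    X2 = np.block([[z((m1, m1)), -A, z((m1, m3))],
                   [-A.T, z((m2, m2)), C],
                   [z((m3, m1)), C.T, z((m3, m3))]])
    X3 = np.block([[z((m1, m1)), z((m1, m2)), -B],
                   [z((m2, m1)), z((m2, m2)), -C],
                   [-B.T, -C.T, z((m3, m3))]])
    return X1, X2, X3

shapes = [(m1, m2), (m1, m3), (m2, m3)]
AX, BX, CX = (rng.standard_normal(s) for s in shapes)
AY, BY, CY = (rng.standard_normal(s) for s in shapes)
Xs, Ys = tangent(AX, BX, CX), tangent(AY, BY, CY)

lhs = sum(np.trace(X @ Y) for X, Y in zip(Xs, Ys))
rhs = 4 * (np.trace(AX.T @ AY) + np.trace(BX.T @ BY) + np.trace(CX.T @ CY))
err = abs(lhs - rhs)
```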
We notice that the tangent space of $\prod_{j =1}^{d+1} \O(n)$ at $\tilde{\mathfrak{f}} = V (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}$ is \[ \mathbb{T}_{\tilde{\mathfrak{f}}} \left( \prod_{j =1}^{d+1} \O(n) \right)= \bigoplus_{j=1}^{d+1} \left( V J_j V^{\scriptscriptstyle\mathsf{T}} \mathfrak{so}(n) \right) = \bigoplus_{j=1}^{d+1} \left( V J_j \mathfrak{so}(n) V^{\scriptscriptstyle\mathsf{T}} \right). \] \begin{proposition}\label{prop:newmodelnormalspaceFlag} At a point $\tilde{\mathfrak{f}}\coloneqq V (J_{1},\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}} \in \Flag(n_1,\dots, n_d;n)$, the normal space $\mathbb{N}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n)$ consists of vectors \[ V ( J_{1} Z_1,\dots, J_{d+1} Z_{d+1} )V^{\scriptscriptstyle\mathsf{T}}, \] where $Z_1,\dots,Z_{d+1}\in \mathfrak{so}(n)$ satisfy the relation \[ Z_k(k,l) + Z_l (k,l)= 0,\quad \text{for all}~1\le k \ne l \le d+1. \] In particular, we have a decomposition \begin{equation}\label{prop:newmodelnormalspaceFlag:eq1} \mathbb{N}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n) = \mathbb{N}_{\tilde{\mathfrak{f}}} \left( \prod_{k=1}^{d+1} \Gr(m_k,n) \right) \bigoplus \mathbb{N}^0_{\tilde{\mathfrak{f}}}, \end{equation} where $\mathbb{N}_{\tilde{\mathfrak{f}}} \left( \prod_{k=1}^{d+1} \Gr(m_k,n) \right) = \bigoplus_{k=1}^{d+1} \mathbb{N}_{V J_{k} V^{\scriptscriptstyle\mathsf{T}}} \Gr(m_k,n)$ and \begin{multline}\label{prop:newmodelnormalspaceFlag:eq2} \mathbb{N}^0_{\tilde{\mathfrak{f}}} = \lbrace V( J_{1} Z_1 ,\dots, J_{d+1} Z_{d+1} )V^{\scriptscriptstyle\mathsf{T}}: Z_k\in \mathfrak{so}(n),Z_k(k,l) + Z_l(k,l) = 0, \\ Z_k(k,k) = 0, Z_k(p,q) = 0, 1\le k,l,p,q \le d+1, p,q\ne k
\rbrace. \end{multline} \end{proposition}
\begin{proposition}[Projections]\label{prop:newmodelprojection} Projections from $\mathbb{T}_{\tilde{\mathfrak{f}}} \left( \prod_{k=1}^{d+1} \O(n) \right)$ onto $\mathbb{T}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n)$ and $\mathbb{N}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n)$ are respectively given by \begin{align}\label{prop:newmodelprojection:tangent} \proj_{\tilde{\mathfrak{f}}}^{\mathbb{T}}: \mathbb{T}_{\tilde{\mathfrak{f}}} \left( \prod_{k=1}^{d+1} \O(n) \right) &\to \mathbb{T}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n) \nonumber \\ V(J_{1} \Lambda_1 ,\dots, J_{d+1}\Lambda_{d+1}) V^{\scriptscriptstyle\mathsf{T}} &\mapsto V(X_1,\dots, X_{d+1}) V^{\scriptscriptstyle\mathsf{T}}, \end{align} and \begin{align}\label{prop:newmodelprojection:normal} \proj_{\tilde{\mathfrak{f}}}^{\mathbb{N}}: \mathbb{T}_{\tilde{\mathfrak{f}}} \left( \prod_{k=1}^{d+1} \O(n) \right) &\to \mathbb{N}_{\tilde{\mathfrak{f}}} \Flag(n_1,\dots, n_d;n) \nonumber \\ V(J_{1} \Lambda_1 ,\dots, J_{d+1}\Lambda_{d+1}) V^{\scriptscriptstyle\mathsf{T}} &\mapsto V( Z_1,\dots, Z_{d+1}) V^{\scriptscriptstyle\mathsf{T}}, \end{align} where for each $k = 1,\dots, d+1$, $X_k\in \S_n$ (resp. $Z_k\in \mathbb{R}^{n\times n}$) is partitioned as $(X_k (p,q))_{p,q =1}^{d+1}$ (resp. $(Z_k(p,q))_{p,q=1}^{d+1}$) with respect to $n = m_1 + \cdots + m_{d+1}$ and \[ X_k (p,q) = \begin{cases} \frac{1}{2} (\Lambda_k(k,q) + \Lambda_{q} (k,q)),~\text{if}~p = k \ne q \\
-\frac{1}{2} (\Lambda_k(p,k) + \Lambda_{p} (p,k)),~\text{if}~q = k \ne p \\
0,~\text{otherwise}. \end{cases} \] \[ Z_k (p,q) = \begin{cases} \frac{1}{2} (\Lambda_k(k,q) - \Lambda_{q} (k,q)),~\text{if}~p = k \ne q \\
-\frac{1}{2} (\Lambda_k(p,k) - \Lambda_{p} (p,k)),~\text{if}~q = k \ne p \\
(J_k \Lambda_k)(p,q),~\text{otherwise}. \end{cases} \] \end{proposition} As an illustrative example, we take a tangent vector $\xi$ of $\O(n) \times \O(n) \times \O(n)$ at a point $\tilde{\mathfrak{f}} = V (J_1,J_2,J_3) V^{\scriptscriptstyle\mathsf{T}}$, which can be written as \[ \xi \coloneqq V \left( \begin{bmatrix} A & B & C \\ B^{\scriptscriptstyle\mathsf{T}} & D & E \\ C^{\scriptscriptstyle\mathsf{T}} & -E^{\scriptscriptstyle\mathsf{T}} & F \end{bmatrix}, \begin{bmatrix} X & Y & Z \\ Y^{\scriptscriptstyle\mathsf{T}} & W & S\\ -Z^{\scriptscriptstyle\mathsf{T}} & S^{\scriptscriptstyle\mathsf{T}} & T \end{bmatrix}, \begin{bmatrix} L & M & N \\ -M^{\scriptscriptstyle\mathsf{T}} & P & Q\\ N^{\scriptscriptstyle\mathsf{T}} & Q^{\scriptscriptstyle\mathsf{T}} & R \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}, \] where $A,X,L\in \mathfrak{so}(m_1)$, $D,W,P\in \mathfrak{so}(m_2)$, $F,T,R\in \mathfrak{so}(m_3)$, $B,Y,M\in \mathbb{R}^{m_1 \times m_2}$, $C,Z,N\in \mathbb{R}^{m_1 \times m_3}$, $E,S,Q\in \mathbb{R}^{m_2 \times m_3}$. 
The projection of $\xi$ onto $\mathbb{T}_{\tilde{\mathfrak{f}}} \Flag(n_1,n_2;n)$ is \small \[ \proj^{\mathbb{T}}_{\tilde{\mathfrak{f}}} (\xi) = V \left( \begin{bmatrix} 0 & \frac{B-Y}{2} & \frac{C-N}{2} \\ \frac{B^{\scriptscriptstyle\mathsf{T}}-Y^{\scriptscriptstyle\mathsf{T}}}{2} & 0 & 0 \\ \frac{C^{\scriptscriptstyle\mathsf{T}}-N^{\scriptscriptstyle\mathsf{T}}}{2} & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & -\frac{B-Y}{2} & 0 \\ -\frac{B^{\scriptscriptstyle\mathsf{T}} - Y^{\scriptscriptstyle\mathsf{T}}}{2} & 0 & \frac{S-Q}{2}\\ 0 & \frac{S^{\scriptscriptstyle\mathsf{T}} - Q^{\scriptscriptstyle\mathsf{T}}}{2} & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & -\frac{C-N}{2} \\ 0 & 0 & -\frac{S-Q}{2}\\ -\frac{C^{\scriptscriptstyle\mathsf{T}}-N^{\scriptscriptstyle\mathsf{T}}}{2} & -\frac{S^{\scriptscriptstyle\mathsf{T}} - Q^{\scriptscriptstyle\mathsf{T}}}{2} & 0 \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}} \]\normalsize and its projection onto $\mathbb{N}_{\tilde{\mathfrak{f}}} \Flag(n_1,n_2;n)$ is \small \[ \proj^{\mathbb{N}}_{\tilde{\mathfrak{f}}} (\xi) = V \left( \begin{bmatrix} A & \frac{B + Y}{2} & \frac{C+N}{2} \\ \frac{B^{\scriptscriptstyle\mathsf{T}} + Y^{\scriptscriptstyle\mathsf{T}}}{2} & D & E \\ \frac{C^{\scriptscriptstyle\mathsf{T}}+N^{\scriptscriptstyle\mathsf{T}}}{2} & -E^{\scriptscriptstyle\mathsf{T}} & F \end{bmatrix}, \begin{bmatrix} X & \frac{B + Y}{2} & Z \\ \frac{B^{\scriptscriptstyle\mathsf{T}} + Y^{\scriptscriptstyle\mathsf{T}}}{2} & W & \frac{S+Q}{2}\\ -Z^{\scriptscriptstyle\mathsf{T}} & \frac{S^{\scriptscriptstyle\mathsf{T}}+Q^{\scriptscriptstyle\mathsf{T}}}{2} & T \end{bmatrix}, \begin{bmatrix} L & M & \frac{C+N}{2} \\ -M^{\scriptscriptstyle\mathsf{T}} & P & \frac{S+Q}{2}\\ \frac{C^{\scriptscriptstyle\mathsf{T}}+N^{\scriptscriptstyle\mathsf{T}}}{2} & \frac{S^{\scriptscriptstyle\mathsf{T}} + Q^{\scriptscriptstyle\mathsf{T}}}{2} & R \end{bmatrix} \right) V^{\scriptscriptstyle\mathsf{T}}. \]\normalsize
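One can confirm numerically that the displayed tangent component is indeed the orthogonal projection: the remainder $\xi - \proj^{\mathbb{T}}_{\tilde{\mathfrak{f}}}(\xi)$ is orthogonal to every tangent direction. The sketch below works at $V = \I_n$; the block sizes and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)
m1, m2, m3 = 2, 2, 2
z = np.zeros

def skew(p):
    S = rng.standard_normal((p, p))
    return (S - S.T) / 2

A, X, L = skew(m1), skew(m1), skew(m1)
D, W, P = skew(m2), skew(m2), skew(m2)
F, T, R = skew(m3), skew(m3), skew(m3)
B, Y, M = (rng.standard_normal((m1, m2)) for _ in range(3))
C, Z, N = (rng.standard_normal((m1, m3)) for _ in range(3))
E, S, Q = (rng.standard_normal((m2, m3)) for _ in range(3))

xi = (np.block([[A, B, C], [B.T, D, E], [C.T, -E.T, F]]),
      np.block([[X, Y, Z], [Y.T, W, S], [-Z.T, S.T, T]]),
      np.block([[L, M, N], [-M.T, P, Q], [N.T, Q.T, R]]))

def tangent(a, b, c):               # tangent vectors at V = I
    return (np.block([[z((m1, m1)), a, b], [a.T, z((m2, m2)), z((m2, m3))],
                      [b.T, z((m3, m2)), z((m3, m3))]]),
            np.block([[z((m1, m1)), -a, z((m1, m3))], [-a.T, z((m2, m2)), c],
                      [z((m3, m1)), c.T, z((m3, m3))]]),
            np.block([[z((m1, m1)), z((m1, m2)), -b],
                      [z((m2, m1)), z((m2, m2)), -c],
                      [-b.T, -c.T, z((m3, m3))]]))

proj = tangent((B - Y) / 2, (C - N) / 2, (S - Q) / 2)
resid = [x - p for x, p in zip(xi, proj)]

err = 0.0
for _ in range(5):                  # residual is orthogonal to the tangent space
    u = tangent(rng.standard_normal((m1, m2)), rng.standard_normal((m1, m3)),
                rng.standard_normal((m2, m3)))
    err = max(err, abs(sum(np.sum(r * v) for r, v in zip(resid, u))))
```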
\subsection{geodesics} Assume that $c(t)$ is a curve in $\Flag(n_1,\dots,n_d;n)$; then according to Proposition~\ref{prop:newmodeltangentFlag} we may parametrize $c(t)$ as \begin{equation}\label{eq:newmodelcurve} c(t) = V(t) (J_1,\dots, J_{d+1}) V(t)^{\scriptscriptstyle\mathsf{T}} \end{equation} for some differentiable curve $V(t)$ in $\O(n)$. Moreover, we have $\dot{V}(t) = V(t) \Lambda(t)$, where $\Lambda(t)$ is a curve in $\mathfrak{so}(n)$ partitioned as $\Lambda(t) = (\Lambda(p,q))_{p,q=1}^{d+1}$ with respect to $m_1 + \cdots + m_{d+1} = n$ and $\Lambda(k,k)(t) \equiv 0, k =1,\dots, d+1$. This implies that we have \[ \ddot{c}(t) = T_1(t) + T_2(t) -2 T_3(t), \] where the $T_j(t)$'s are respectively given by \begin{align} T_1(t) &= V(t) (\dot{\Lambda}(t) J_1 - J_1 \dot{\Lambda}(t),\dots, \dot{\Lambda}(t) J_{d+1} - J_{d+1} \dot{\Lambda}(t)) V^{\scriptscriptstyle\mathsf{T}}(t), \\ T_2(t) &= V(t) ({\Lambda}^2(t) J_1 + J_1 {\Lambda}^2(t),\dots, {\Lambda}^2(t) J_{d+1} + J_{d+1} {\Lambda}^2(t)) V^{\scriptscriptstyle\mathsf{T}}(t),\\ T_3(t) &= V(t) ( {\Lambda}(t) J_1 {\Lambda}(t),\dots, {\Lambda}(t) J_{d+1} {\Lambda}(t)) V^{\scriptscriptstyle\mathsf{T}}(t). \end{align} By calculations similar to those in the proofs of Lemmas~\ref{lemma:projectionT2} and \ref{lemma:projectionT3}, we may easily obtain the following characterizations of $\proj_{c(t)}^{\mathbb{T}} (T_j(t)), j = 1,2,3$. \begin{lemma}\label{lemma:newmodelprojectionTj} Let $c(t), \Lambda(t), T_1(t), T_2(t), T_3(t)$ be as above. We have \begin{enumerate} \item $T_1(t) \in \mathbb{T}_{c(t)} \Flag(n_1,\dots, n_d;n)$. \item $\proj_{c(t)}^{\mathbb{T}} (T_2(t)) = 0$. \item $\proj_{c(t)}^{\mathbb{T}} (T_3(t)) = 0$. \end{enumerate} \end{lemma} \begin{proposition}\label{prop:newmodelgedoesic} Let $c(t)$ be a curve on $\Flag(n_1,\dots,n_d;n)$ parametrized as \[ c(t) = V(t) (J_1,\dots, J_{d+1}) V(t)^{\scriptscriptstyle\mathsf{T}} \] for some differentiable curve $V(t)$ in $\O(n)$. 
Let $\Lambda(t)$ be the curve in $\mathfrak{so}(n)$ such that $\dot{V}(t) = V(t) \Lambda(t)$, partitioned as $\Lambda(t) = (\Lambda(p,q))_{p,q=1}^{d+1}$ with respect to $m_1 + \cdots + m_{d+1} = n$ and satisfying $\Lambda(k,k)(t) \equiv 0, k =1,\dots, d+1$. Then $c(t)$ is a geodesic curve if and only if $V(t) =V(0) \exp(t \Lambda(0))$. \end{proposition} \begin{proof} Since $c(t)$ is a geodesic if and only if $\proj_{c(t)}^{\mathbb{T}} (\ddot{c}(t)) \equiv 0$, Lemma~\ref{lemma:newmodelprojectionTj} implies that $c(t)$ is a geodesic curve if and only if \[ \dot{\Lambda}(t) J_k - J_k \dot{\Lambda}(t) = 0,\quad k =1,\dots, d+1. \] By \eqref{eq:basicalculation}, we may conclude that $c(t)$ is a geodesic if and only if $\dot{\Lambda}(t) \equiv 0$, i.e., $\Lambda(t) = \Lambda(0)$. This implies that $V(t)$ is determined by the equation $\dot{V}(t) = V(t) \Lambda(0)$, from which we may conclude that $V(t) =V(0) \exp(t \Lambda(0))$. \end{proof}
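The orthogonality underlying Proposition~\ref{prop:newmodelgedoesic} can be observed numerically: for constant $\Lambda$ the second derivative $\ddot{c}(0)$ is orthogonal to every tangent direction at $c(0)$. The sketch below works at $V(0) = \I_n$; block sizes and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
m = [2, 2, 3]                      # m_1, m_2, m_3 with d = 2
n, off = sum(m), np.cumsum([0, *m])

def J(k):
    D = -np.eye(n)
    D[off[k]:off[k+1], off[k]:off[k+1]] *= -1
    return D

def rand_dir():                    # skew with vanishing diagonal blocks
    M = rng.standard_normal((n, n))
    M = (M - M.T) / 2
    for k in range(3):
        M[off[k]:off[k+1], off[k]:off[k+1]] = 0
    return M

Lam = rand_dir()
# ddot{c}(0) componentwise; the T_1 term vanishes since Lambda is constant
acc = [Lam @ Lam @ J(k) + J(k) @ Lam @ Lam - 2 * Lam @ J(k) @ Lam
       for k in range(3)]

err = 0.0
for _ in range(5):                 # pair against tangent vectors (M J_k - J_k M)
    M = rand_dir()
    tan = [M @ J(k) - J(k) @ M for k in range(3)]
    err = max(err, abs(sum(np.sum(a * t) for a, t in zip(acc, tan))))
```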
\section{The comparison of Riemannian metrics on flag manifolds} The goal of this section is to discuss relations among three Riemannian metrics on a flag manifold $\Flag(n_1,\dots, n_d;n)$. We recall that the two metrics discussed in this paper are respectively induced by the embedding $\varepsilon: \Flag(n_1,\dots, n_d;n) \hookrightarrow \prod_{k=1}^{d} \O(n)$ given in \eqref{prop:modelFlag:eq0} and $\tilde{\varepsilon}: \Flag(n_1,\dots, n_d;n) \hookrightarrow \prod_{k=1}^{d+1} \O(n)$ given in \eqref{prop:newmodelFlag:eq0}. For notational simplicity, we denote the two induced metrics by $g^e$ and $\tilde{g}^e$, respectively. Yet there is another metric induced from the homogeneous space structure of $\Flag(n_1,\dots, n_d;n)$, which is discussed thoroughly in \cite{YWL19}. We denote this quotient metric by $g^q$.
\begin{proposition} The Riemannian metrics $\tilde{g}^e$ and $g^q$ coincide. Moreover, $\tilde{g}^e$ and $g^e$ coincide with $g^q$ when $d = 1$, in which case $\Flag(n_1,\dots, n_d;n)$ is simply the Grassmann manifold $\Gr(n_1,n)$. \end{proposition}
We will see in Proposition~\ref{prop:construction} that both $g^e$ and $\tilde{g}^e = g^q$ can be constructed by a uniform method. To begin with, we notice that in general, any smooth map \[ \varphi: \left( \mathbb{R}^{n\times n} \right)^d \to \mathbb{R}^{n\times n} \] induces an embedding $\kappa_{\varphi}: \left( \mathbb{R}^{n\times n} \right)^d \to \left( \mathbb{R}^{n\times n} \right)^{d+1}$ defined by \[ \kappa_{\varphi}(A_1,\dots, A_d) = (A_1,\dots, A_d, \varphi(A_1,\dots,A_d)),\quad A_j\in \mathbb{R}^{n\times n}, j=1,\dots, d. \] Hence we have another embedding $\kappa_{\varphi} \circ \varepsilon$ of $\Flag(n_1,\dots, n_d;n)$ into $\O(n)^{d+1} \subseteq \left( \mathbb{R}^{n\times n}\right)^{d+1}$, which induces a metric $g^{\varphi}$ on $\Flag(n_1,\dots, n_d;n)$ from the Euclidean metric on $\left( \mathbb{R}^{n\times n} \right)^{d+1}$.
\begin{proposition}\label{prop:construction} We have the following: \begin{itemize} \item $g^{\varphi} = g^e$ if and only if $\varphi$ is a constant map on $\varepsilon(\Flag(n_1,\dots, n_d;n))$. In particular, $g^{\varphi} = g^e$ if $\varphi$ is a constant map. \item There exists $\varphi$ such that $g^{\varphi} = \widetilde{g}^e$. \end{itemize} \end{proposition} \begin{proof} The ``if'' part of the first statement can be verified by a straightforward calculation. For the ``only if'' part, we notice that $g^{\varphi} = g^e$ implies that the differential map $d_{(Q_1,\dots, Q_d)}\varphi$ must be zero on $T_{(Q_1,\dots, Q_d)} \varepsilon(\Flag(n_1,\dots, n_d;n))$ at any $(Q_1,\dots, Q_d) \in \varepsilon(\Flag(n_1,\dots, n_d;n))$. Since $\varepsilon(\Flag(n_1,\dots, n_d;n))$ is connected and $\varphi$ is continuous, we may conclude that $\varphi$ is a constant map on $\varepsilon(\Flag(n_1,\dots, n_d;n))$.
For the second statement, we notice that $C \coloneqq \varepsilon(\Flag(n_1,\dots, n_d;n))$ is a compact subset of $X \coloneqq \left( \mathbb{R}^{n\times n} \right)^d $ and we can define \[ \psi: C \to \O(n) \subseteq \mathbb{R}^{n\times n},\quad \psi(Q_1,\dots, Q_d) = Q_{d+1}, \] where $(Q_{d+1} + \I_n)/2$ is the projection matrix onto $\left( \bigoplus_{j=1}^d \im(Q_j + \I_n) \right)^\perp$. We denote by $p_{ij}$ the projection map from $\mathbb{R}^{n\times n}$ onto its $(i,j)$-th entry, $1\le i,j\le n$. It is clear that $p_{ij}\circ \psi: C \to \mathbb{R}$ is a smooth function. The compactness of $C$ in $X$ implies that $p_{ij}\circ \psi$ has a smooth extension $\varphi_{ij}: X \to \mathbb{R}$. Indeed, we can first extend the function $p_{ij}\circ \psi$ smoothly to an open neighbourhood of $C$ and then further extend it smoothly to the whole of $X$ by a smooth partition of unity. Now we have a smooth map \[ \varphi \coloneqq (\varphi_{ij}): \left( \mathbb{R}^{n\times n} \right)^d \to \mathbb{R}^{n\times n} \] which extends $\psi$, and hence we have $g^{\varphi} = \widetilde{g}^e$. \end{proof}
\section{A coordinate descent method for optimizations on flag manifolds} Given a strictly increasing sequence $n_1 < \cdots < n_d$, we define \[ m_1 \coloneqq n_1,\quad m_{d+1} \coloneqq n - n_d, \quad m_{j} \coloneqq n_j - n_{j-1},\quad j = 2,\dots, d. \] We recall from \eqref{eq:embedding2} that a flag $\{\mathbb{V}_k\}_{k=1}^d \in \Flag(n_1,\dots, n_d;n)$ can be regarded as $\{\mathbb{W}_j\}_{j=1}^{d+1}$ via the modified embedding $\widetilde{\iota}:\Flag(n_1,\dots, n_d;n) \hookrightarrow \prod_{j=1}^{d+1} \Gr(m_j,n)$, where $\mathbb{W}_j$ is the orthogonal complement of $\mathbb{V}_{j-1}$ in $\mathbb{V}_{j}$ for $2 \le j \le d+1$, $\mathbb{W}_1 = \mathbb{V}_1$ and $\mathbb{V}_{d+1} = \mathbb{R}^n$. Therefore, an optimization problem on $\Flag(n_1,\dots, n_d;n)$ has the following form: \begin{align}\label{eq:optimization on flag} \min \quad & f( \mathbb{W}_1,\dots, \mathbb{W}_{d+1}) \nonumber \\ \text{s.t.} \quad & \mathbb{W}_j \in \Gr(m_j, n), 1\le j \le d + 1 \\ \quad & \mathbb{W}_j \perp \mathbb{W}_l, 1 \le j < l \le d + 1 \nonumber \end{align} Here $f$ is a function on $\Flag(n_1,\dots, n_d;n)$. We propose Algorithm~\ref{alternating}, an alternating-type algorithm, to solve problem~\eqref{eq:optimization on flag}. 
\begin{algorithm}[!htbp] \label{alg:cord} \caption{Coordinate minimization method for optimization on flag manifolds}\label{alternating} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input}} \Require A differentiable function $f$ on $\Flag(n_1,\dots, n_d;n)$ \renewcommand{\algorithmicrequire}{\textbf{Output}} \Require A critical point of $f$ \renewcommand{\algorithmicrequire}{\textbf{Initialization}} \Require Choose an initial point $(\mathbb{W}_{1},\dots, \mathbb{W}_{d+1}) \in \prod_{j=1}^{d+1} \Gr(m_j,n)$ \While{not converged} \For{$1\le s < t \le d+1$} \State{Solve the following sub-problem for $(\mathbb{X}_s,\mathbb{X}_t) \in \Gr(m_s,n)\times \Gr(m_t, n)$: \begin{align}\label{eq:subproblem} \min \quad & f( \mathbb{W}_1,\dots,\mathbb{W}_{s-1},\mathbb{X}_s,\mathbb{W}_{s+1},\dots, \mathbb{W}_{t-1},\mathbb{X}_t,\mathbb{W}_{t+1},\dots, \mathbb{W}_{d+1}) \nonumber \\ \text{s.t.} \quad & \mathbb{X}_s \perp \mathbb{X}_t \\ \quad & \mathbb{X}_s \perp \mathbb{W}_j, 1 \le j \ne s \le d + 1 \nonumber \\ \quad & \mathbb{X}_t \perp \mathbb{W}_j, 1 \le j \ne t \le d + 1 \nonumber \end{align}} \State Update $(\mathbb{W}_s,\mathbb{W}_t)$ by the solution $(\overline{\mathbb{X}}_s, \overline{\mathbb{X}}_t)$ to \eqref{eq:subproblem} \EndFor \EndWhile \end{algorithmic} \end{algorithm}
We remark that the sub-problem \eqref{eq:subproblem} in Algorithm~\ref{alternating} is an optimization problem on a Grassmann manifold. Indeed, we notice that $\mathbb{W}_{j}$ in \eqref{eq:subproblem} is fixed whenever $j\ne s,t$. This implies that \[ \mathbb{X}_s \oplus \mathbb{X}_t = \left(\bigoplus_{j\ne s,t} \mathbb{W}_j\right)^\perp \] is a fixed $(m_s + m_t)$-dimensional subspace of $\mathbb{R}^n$. So the submanifold determined by the fixed $\mathbb{W}_{j}, j\neq s,t$, and $\mathbb{X}_s \oplus \mathbb{X}_t$ is isomorphic to $\Gr(m_s, m_s + m_t)$. This submanifold is in fact totally geodesic, which is clear from the geodesic formulas of flag and Grassmann manifolds. Thus the objective function \[ f( \mathbb{W}_1,\dots,\mathbb{W}_{s-1},\mathbb{X}_s,\mathbb{W}_{s+1},\dots, \mathbb{W}_{t-1},\mathbb{X}_t,\mathbb{W}_{t+1},\dots, \mathbb{W}_{d+1}) \] can be regarded as a function on the submanifold $\Gr(m_s,m_s + m_t)$. Furthermore, at a given point there are $d(d+1)/2$ such submanifolds, indexed by $1 \leq s < t \leq d+1$; their tangent spaces are orthogonal to each other and span the whole tangent space. Algorithm~\ref{alternating} is thus a generalization of the coordinate descent algorithm in Euclidean space.
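As a concrete illustration of Algorithm~\ref{alternating}, the sketch below runs the pair sweep on a linear objective $f = \sum_j \langle A_j, 2P_{\mathbb{W}_j} - \I_n\rangle$ with symmetric data $A_j$ (our choice, not taken from the text; we maximize, which is \eqref{eq:optimization on flag} with the sign of $f$ flipped). For this objective each sub-problem on $\Gr(m_s, m_s + m_t)$ is solved exactly by an eigendecomposition on the fixed span $\mathbb{X}_s \oplus \mathbb{X}_t$.

```python
import numpy as np

rng = np.random.default_rng(8)
m = [2, 3, 2]                                 # m_1, m_2, m_3, so n = 7
n, off = sum(m), np.cumsum([0, *m])
A = [rng.standard_normal((n, n)) for _ in range(3)]
A = [(a + a.T) / 2 for a in A]                # symmetric data matrices

V, _ = np.linalg.qr(rng.standard_normal((n, n)))
Wb = [V[:, off[j]:off[j+1]] for j in range(3)]  # orthonormal bases of W_j

def obj(Wb):                                  # f = sum_j <A_j, 2 P_j - I>
    return sum(2 * np.trace(w.T @ A[j] @ w) - np.trace(A[j])
               for j, w in enumerate(Wb))

vals = [obj(Wb)]
for _ in range(5):                            # sweeps over all pairs s < t
    for s_ in range(3):
        for t_ in range(s_ + 1, 3):
            U = np.hstack([Wb[s_], Wb[t_]])   # basis of the fixed span
            G = U.T @ (A[s_] - A[t_]) @ U     # compressed sub-problem
            _, Qe = np.linalg.eigh(G)         # ascending eigenvalues
            Qe = Qe[:, ::-1]                  # descending
            Wb[s_] = U @ Qe[:, :m[s_]]        # top m_s eigenvectors
            Wb[t_] = U @ Qe[:, m[s_]:]
            vals.append(obj(Wb))
mono = all(b >= a - 1e-8 for a, b in zip(vals, vals[1:]))
```

Each update maximizes its sub-problem exactly, so the recorded objective values increase monotonically.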
\subsection{Separation of subspaces} Let $\mathbb{U}_1,\dots,\mathbb{U}_{d+1}$ be $d+1$ subspaces of the ambient space $\mathbb{R}^n$. The separation problem can be mathematically formulated as the following optimization problem on a flag manifold: \begin{align}\label{eq:nearest point on flag} \min \quad & F(\mathbb{W}) := \sum_{j=1}^{d+1} \lVert \tau_j(\mathbb{U}_j) - \tau_j(\mathbb{W}_j) \rVert_F^2 \nonumber \\ \text{s.t.} \quad & \mathbb{W}_j \in \Gr(m_j, n), \quad 1\le j \le d + 1 \\ \quad & \mathbb{W}_j \perp \mathbb{W}_l, \quad 1 \le j < l \le d + 1 \nonumber \end{align} Here $m_j = \dim \mathbb{U}_j$, $1\le j \le d+1$, $n = \sum_{j=1}^{d+1} m_j$, and $\tau_j$ is the embedding of $\Gr(m_j,n)$ into $\O(n) \cap \S_n$ defined by \[ \tau_j (\mathbb{W}) = V \begin{bmatrix} -\I_{p} & 0 & 0 \\ 0 & \I_{m_j} & 0 \\ 0 & 0 & -\I_{q} \end{bmatrix} V^{\scriptscriptstyle\mathsf{T}} \] where $p = \sum_{l=1}^{j-1} m_l$, $q = \sum_{l=j+1}^{d+1} m_l$ and $V = [v_1,\dots, v_{n}] = [V_1, \dots, V_{d+1}]\in \O(n)$ is such that $V_j = [v_{p+1},\dots, v_{p+m_j}]$ and $\operatorname{span}\{v_{p+1},\dots, v_{p+m_j}\} = \mathbb{W}$.
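As a numerical sanity check (my own illustration, not part of the paper; the block sizes and helper names are arbitrary choices), the embedding $\tau_j$ equals $2P_{\mathbb{W}} - \I_n$, where $P_{\mathbb{W}}$ is the orthogonal projection onto $\mathbb{W}$, so its image should consist of symmetric orthogonal matrices of trace $2m_j - n$:

```python
import numpy as np

def tau(V, sizes, j):
    """Embed the j-th subspace (0-indexed), spanned by the j-th column
    block of the orthogonal matrix V, into O(n) ∩ S_n:
    tau_j(W) = V diag(-I_p, I_{m_j}, -I_q) V^T = 2 P_W - I_n."""
    n = V.shape[0]
    start = sum(sizes[:j])
    Vj = V[:, start:start + sizes[j]]
    return 2.0 * Vj @ Vj.T - np.eye(n)

rng = np.random.default_rng(0)
sizes = [2, 3, 4]            # block sizes m_1, m_2, m_3; here n = 9
n = sum(sizes)
V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal V

T1 = tau(V, sizes, 1)
# tau_j(W) is symmetric, orthogonal, and has trace 2 m_j - n
assert np.allclose(T1, T1.T)
assert np.allclose(T1 @ T1, np.eye(n))
assert np.isclose(np.trace(T1), 2 * sizes[1] - n)
```

This matches the block form above: the diagonal matrix $\diag(-\I_p, \I_{m_j}, -\I_q)$ is exactly $2P - \I$ in the adapted basis.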
\begin{lemma}\label{lem:gradient estimate} Consider the maximization of the linear function $f(Q) = \langle A, Q \rangle$ over the Grassmann manifold, \begin{align*} \max \quad & \langle A, Q \rangle \\ \text{s.t.} \quad & Q \in \Gr(k, n) \end{align*} The gradient of $f(Q)$ is given by \[ \nabla f(Q) = \frac{1}{4}(A+A^{\scriptscriptstyle\mathsf{T}} - QAQ - QA^{\scriptscriptstyle\mathsf{T}} Q). \] Let $(A+A^{\scriptscriptstyle\mathsf{T}})/2 = U \Lambda U^{\scriptscriptstyle\mathsf{T}}$ be an eigendecomposition of $(A+A^{\scriptscriptstyle\mathsf{T}})/2$ such that $\Lambda = \diag(\lambda_1, \dots, \lambda_n)$, $\lambda_1 \geq \dots \geq \lambda_n$. Then $Q^* = UI_{k, n-k} U^{\scriptscriptstyle\mathsf{T}}$ is a maximizer of $f(Q)$. Furthermore, \[
2\|\Lambda\| (f(Q^*) - f(Q)) \geq \|\nabla f(Q)\|^2. \] \end{lemma} \begin{proof} The formula for the gradient is given in \cite[Proposition~5.1]{LLY20}. The original problem is equivalent to \begin{align*} \max \quad & \langle \Lambda, Q \rangle \\ \text{s.t.} \quad & Q \in \Gr(k, n) \end{align*} and we need to prove that $Q^* = I_{k, n-k}$ is a maximizer. Using the gradient formula, we can simplify the first-order condition $\nabla f(Q^*) = 0$ to \[ Q^* \Lambda = \Lambda Q^*. \] So $Q^*$ and $\Lambda$ can be simultaneously diagonalized, and we may assume $Q^*$ is diagonal. The original problem is then equivalent to \[ \max_{\delta_1 + \dots + \delta_n =2k -n,\ \delta_i = \pm 1} \lambda_1 \delta_1+ \dots + \lambda_n \delta_n. \]
It is clear that $\delta_1 = \dots = \delta_k = 1, \delta_{k+1} = \dots = \delta_n = -1$ is a maximizer. So $Q^* = I_{k, n-k}$ is a maximizer. Now consider the last inequality. The terms $\|\nabla f(Q)\|^2$ and $f(Q^*) - f(Q)$ can be simplified as \begin{align*}
\|\nabla f(Q)\|^2 &= \frac{1}{4}\langle \Lambda - Q \Lambda Q, \Lambda - Q \Lambda Q \rangle\\ &= \frac{1}{2}\sum_{i = 1}^n \lambda_i^2 - \frac{1}{2}\tr(\Lambda Q \Lambda Q)\\ &= \frac{1}{2}\tr(\Lambda Q^* \Lambda Q^*) - \frac{1}{2}\tr(\Lambda Q \Lambda Q), \end{align*} \begin{align*} f(Q^*) - f(Q) &= \langle \Lambda, Q^* \rangle - \langle \Lambda, Q \rangle. \end{align*}
For any $c > 2\|\Lambda\|$, we have \begin{align*}
c(f(Q^*) - f(Q)) - \|\nabla f(Q)\|^2 &= c \langle \Lambda, Q^* \rangle - c \langle \Lambda, Q \rangle - \tr(\Lambda Q^* \Lambda Q^*)/2 +\tr(\Lambda Q \Lambda Q)/2\\ &= g(Q^*) - g(Q), \end{align*} where $g(Q) = c \langle \Lambda, Q \rangle - \tr(\Lambda Q \Lambda Q)/2$. Assume $Q^{**}$ is a maximizer of $g(Q)$. The first order condition of $g(Q)$ is \[ Q^{**}\Lambda Q^{**} \Lambda Q^{**} - c Q^{**} \Lambda Q^{**} - \Lambda Q^{**} \Lambda + c \Lambda = 0, \] which is equivalent to \[ (Q^{**}\Lambda - \Lambda Q^{**})(Q^{**} \Lambda + \Lambda Q^{**} - c I) = 0. \]
By definition $c > 2\|\Lambda\| \geq \|Q^{**} \Lambda + \Lambda Q^{**}\|$, so $Q^{**} \Lambda + \Lambda Q^{**} - c I$ is invertible and $Q^{**}\Lambda = \Lambda Q^{**}$. So $Q^{**}, \Lambda$ can be simultaneously diagonalized. We can assume $Q^{**}$ is diagonalized. So \[
g(Q^{**}) = \sum_{i=1}^n \left( c \lambda_i \delta_i - \frac{1}{2} \lambda_i^2 \right), \] where $\delta_i$ are the diagonal entries of $Q^{**}$. Since $c > 0$, maximizing this expression again yields $\delta_1 = \dots = \delta_k = 1$, $\delta_{k+1} = \dots = \delta_n = -1$, so $Q^*$ is a maximizer of $g(Q)$. So we have proved that $g(Q^*) \geq g(Q)$, i.e., \[
c(f(Q^*) - f(Q)) - \|\nabla f(Q)\|^2 \geq 0. \]
Since the inequality holds for every $c$ larger than $2\|\Lambda\|$, letting $c \to 2\|\Lambda\|$ shows that it also holds for $c = 2\|\Lambda\|$, and the proof is finished. \end{proof}
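The lemma is easy to test numerically. The sketch below (my own verification, not from the text; dimensions and tolerances are arbitrary) represents points of $\Gr(k,n)$ as symmetric orthogonal involutions $Q = W \I_{k,n-k} W^{\mathsf{T}}$, and checks the first-order condition at $Q^*$, its maximality against a random point, and the gradient inequality with $\|\Lambda\|$ taken as the operator norm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n))            # general (possibly non-symmetric) A
S = (A + A.T) / 2
lam, U = np.linalg.eigh(S)                 # eigh returns ascending eigenvalues
idx = np.argsort(lam)[::-1]                # sort descending, as in the lemma
lam, U = lam[idx], U[:, idx]
I_kn = np.diag(np.r_[np.ones(k), -np.ones(n - k)])
Q_star = U @ I_kn @ U.T                    # claimed maximizer

def f(Q):
    return np.sum(A * Q)                   # <A, Q> = tr(A^T Q)

def grad(Q):
    return (A + A.T - Q @ A @ Q - Q @ A.T @ Q) / 4

# a random point of Gr(k, n) in the involution model
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q = W @ I_kn @ W.T

op = np.linalg.norm(S, 2)                  # operator norm of (A + A^T)/2
assert np.allclose(grad(Q_star), 0, atol=1e-8)          # first-order condition
assert f(Q_star) >= f(Q) - 1e-10                        # maximality
assert 2 * op * (f(Q_star) - f(Q)) >= np.linalg.norm(grad(Q)) ** 2 - 1e-9
```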
\begin{proposition} If we apply Algorithm~\ref{alternating} to solve the problem~\eqref{eq:nearest point on flag}, then for each $1 \le s < t \le d+1$, the sub-problem has the form \begin{align}\label{eq:alternating nearest point on flag} \min \quad & \lVert A_1 - W \I_{m_s, m_t} W^{\scriptscriptstyle\mathsf{T}}\rVert_F^2 + \lVert A_2 + W \I_{m_s, m_t} W^{\scriptscriptstyle\mathsf{T}} \rVert_F^2 \\ \text{s.t.} \quad & W \in \O(m_s+m_t) \nonumber \end{align} where $A_1,A_2 \in \O(m_s+m_t) \cap S_{m_s+m_t}$ are some fixed matrices. Moreover, the sub-problem has an explicit solution $W_\ast$, given by an eigendecomposition $A_1 - A_2 = W_\ast \Sigma W_\ast^{\scriptscriptstyle\mathsf{T}}$ of the symmetric matrix $A_1 - A_2$ with the diagonal entries of $\Sigma$ sorted in descending order.
We denote the change of the value of $F$ at this step by $\Delta_{s,t}$. By the previous discussion, the full gradient $\nabla F$ can be partitioned into $d(d+1)/2$ block components, so that the block $\nabla_{s, t} F$ corresponds to the sub-problem indexed by $(s,t)$. Then \[
\|\tau_s(\mathbb{U}_s)-\tau_t(\mathbb{U}_t)\| |\Delta_{s,t}| \geq \|\nabla_{s, t} F\|^2. \] \end{proposition} \begin{proof} Given $1 \le s < t \le d+1$, the sub-problem of \eqref{eq:nearest point on flag} is \begin{align*} \min \quad & \lVert \tau_s(\mathbb{U}_s) - \tau_s(\mathbb{W}_s) \rVert_F^2 + \lVert \tau_t(\mathbb{U}_t) - \tau_t(\mathbb{W}_t) \rVert_F^2 \nonumber \\ \text{s.t.} \quad & (\mathbb{W}_s,\mathbb{W}_t) \in \Gr(m_s, m_s + m_t) \times \Gr(m_t, m_s + m_t) \\ \quad & \mathbb{W}_s \perp \mathbb{W}_t \nonumber \end{align*} In particular, $\mathbb{W}_s \oplus \mathbb{W}_t =\left( \bigoplus_{j\ne s,t} \mathbb{W}_j \right)^\perp$ is a fixed $(m_s + m_t)$-dimensional vector space represented by $V_{s, t} := [V_s, V_t]$. We construct the matrix $V^\perp$ whose columns form an orthonormal basis of $\bigoplus_{j\ne s,t} \mathbb{W}_j$. The choice of $\mathbb{W}_s, \mathbb{W}_t$ can be further specified by an orthogonal matrix $W \in \O(m_s + m_t)$ so that $V_{s, t} W = [W_s, W_t]$ where $W_s, W_t$ span $\mathbb{W}_s, \mathbb{W}_t$ respectively. As a result, the images of $\mathbb{W}_s, \mathbb{W}_t$ can be written as \[ \tau_s(\mathbb{W}_s) = V_{s, t} W \I_{m_s,m_t} W^{\scriptscriptstyle\mathsf{T}} V_{s, t}^{\scriptscriptstyle\mathsf{T}} - V^\perp (V^\perp)^{\scriptscriptstyle\mathsf{T}},\quad \tau_t(\mathbb{W}_t) = -V_{s, t} W \I_{m_s,m_t} W^{\scriptscriptstyle\mathsf{T}} V_{s, t}^{\scriptscriptstyle\mathsf{T}} - V^\perp (V^\perp)^{\scriptscriptstyle\mathsf{T}}. \] Problem \eqref{eq:alternating nearest point on flag} then follows by taking $A_1 = V_{s, t}^{\scriptscriptstyle\mathsf{T}} \tau_s(\mathbb{U}_s) V_{s, t}$, $A_2 = V_{s, t}^{\scriptscriptstyle\mathsf{T}} \tau_t(\mathbb{U}_t)V_{s, t}$.
Next we observe that the objective function in \eqref{eq:alternating nearest point on flag} can be further rewritten as \begin{align*} & \lVert A_1 \rVert_F^2 + \lVert A_2 \rVert_F^2 + 2(m_s + m_t) - 2 \langle A_1, W \I_{m_s,m_t} W^{\scriptscriptstyle\mathsf{T}} \rangle + 2 \langle A_2, W \I_{m_s,m_t} W^{\scriptscriptstyle\mathsf{T}} \rangle \\ =& \lVert A_1 \rVert_F^2 + \lVert A_2 \rVert_F^2 + 2(m_s + m_t) + 2 \langle A_2 - A_1 , W\I_{m_s,m_t} W^{\scriptscriptstyle\mathsf{T}} \rangle. \end{align*} Therefore, the problem \eqref{eq:alternating nearest point on flag} is equivalent to \begin{align}\label{eq:substep alternating nearest point on flag} \min \quad & \langle A_2 - A_1 , W\I_{m_s, m_t} W^{\scriptscriptstyle\mathsf{T}} \rangle \\ \text{s.t.} \quad & W\in \O(m_s + m_t) \nonumber \end{align} By Lemma~\ref{lem:gradient estimate}, we may conclude that a solution to \eqref{eq:substep alternating nearest point on flag} is $W_\ast$, which can be obtained from an eigendecomposition of the symmetric matrix $A_1 - A_2$. Furthermore, we have \begin{align*}
\|\nabla_{s, t} F\|^2 \leq \|A_2-A_1\| |\Delta_{s,t}| \leq \|\tau_s(\mathbb{U}_s)-\tau_t(\mathbb{U}_t)\| |\Delta_{s,t}|. \end{align*} \end{proof}
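To make the closed-form sub-problem solution concrete, the sketch below (my own illustration; the involution construction and tolerances are arbitrary choices) builds random $A_1, A_2 \in \O(m)\cap S_m$ and checks that the $W_\ast$ obtained from an eigendecomposition of $A_1 - A_2$ with descending eigenvalues attains an objective value no worse than random orthogonal matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
ms, mt = 2, 3
m = ms + mt
I_st = np.diag(np.r_[np.ones(ms), -np.ones(mt)])

def random_involution(m, k, rng):
    """Random element of O(m) ∩ S_m with a k-dimensional +1 eigenspace."""
    Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    return Q @ np.diag(np.r_[np.ones(k), -np.ones(m - k)]) @ Q.T

A1 = random_involution(m, ms, rng)
A2 = random_involution(m, mt, rng)

def obj(W):
    M = W @ I_st @ W.T
    return (np.linalg.norm(A1 - M, 'fro') ** 2
            + np.linalg.norm(A2 + M, 'fro') ** 2)

# closed form: eigenvectors of A1 - A2, eigenvalues sorted descending
lam, W_opt = np.linalg.eigh(A1 - A2)
W_opt = W_opt[:, np.argsort(lam)[::-1]]

for _ in range(100):
    W, _ = np.linalg.qr(rng.standard_normal((m, m)))
    assert obj(W_opt) <= obj(W) + 1e-9
```

The check relies on the expansion above: minimizing the objective is equivalent to maximizing $\langle A_1 - A_2, W \I_{m_s,m_t} W^{\mathsf{T}}\rangle$, which Lemma~\ref{lem:gradient estimate} solves.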
\begin{theorem} Consider a randomized version of Algorithm~\ref{alternating} for problem~\eqref{eq:nearest point on flag}: at each step $i$, choose $(s_i, t_i)$ uniformly at random from all pairs $(s, t)$ with $1 \le s < t \le d+1$. Let $\mathbb{W}_i$ be the iterate at step $i$. Then with probability 1, every cluster point of $(\mathbb{W}_i)$ is a stationary point. (Since the flag manifold is compact, cluster points exist.) \end{theorem} \begin{proof}
If $\|\tau_s(\mathbb{U}_s)-\tau_t(\mathbb{U}_t)\| = 0$ for all $s, t$, then the function is trivial and there is nothing to prove. Otherwise, there is a set $A \subseteq \{(s, t) \mid 1\leq s < t \leq d+1\}$ such that $\|\tau_s(\mathbb{U}_s)-\tau_t(\mathbb{U}_t)\| \neq 0$ if and only if $(s, t) \in A$. At each step $i$, assume $\argmax_{(s, t) \in A} \|\nabla_{s, t} F(\mathbb{W}_i)\|$ is achieved at $(s^*, t^*)$. If $(s_i, t_i) = (s^*, t^*)$, then \begin{align*}
F(\mathbb{W}_i)-F(\mathbb{W}_{i+1}) &\geq \frac{\|\nabla_{s^*, t^*} F(\mathbb{W}_i)\|^2}{\|\tau_{s^*}(\mathbb{U}_{s^*})-\tau_{t^*}(\mathbb{U}_{t^*})\|} \\
&\geq \frac{\max \|\nabla_{s, t} F(\mathbb{W}_i)\|^2}{\max \|\tau_{s}(\mathbb{U}_{s})-\tau_{t}(\mathbb{U}_{t})\|}\\
&\geq C \|\nabla F(\mathbb{W}_i)\|^2, \end{align*} where $C$ is a constant independent of $\mathbb{W}_i$. If $(s_i, t_i) \neq (s^*, t^*)$, at least we have $F(\mathbb{W}_i)-F(\mathbb{W}_{i+1}) \geq 0$. So \[
\mathbb{E} F(\mathbb{W}_i)- \mathbb{E} F(\mathbb{W}_{i+1}) \geq \frac{2C}{d(d+1)} \mathbb{E} \|\nabla F(\mathbb{W}_i)\|^2. \] Summing from $i=0$ to $\infty$, we have \[
\mathbb{E} [ F(\mathbb{W}_0)-\lim_{i \to \infty} F(\mathbb{W}_{i}) ] \geq C' \mathbb{E} \sum_{i=0}^{\infty} \|\nabla F(\mathbb{W}_i)\|^2. \]
So with probability 1, the series $\sum_{i=0}^{\infty} \|\nabla F(\mathbb{W}_i)\|^2$ converges and $\|\nabla F(\mathbb{W}_i)\|$ tends to 0. Hence any cluster point must be a stationary point. \end{proof}
\section{Numerical experiments} In this section, we consider the function \[ f(V) = \sum_{k=1}^{d} \tr(V_k^{\scriptscriptstyle\mathsf{T}} A_k V_k), \] where each $A_k$ is a randomly generated symmetric matrix and $V_k$ is the submatrix of $V$ consisting of the columns $n_{k-1} < j \le n_k$ (with the convention $n_0 = 0$), i.e., a basis of $\mathbb{W}_k$. This function is clearly a function on the flag manifold $\Flag(n_1,\dots, n_d;n)$.
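To illustrate why $f$ descends to the flag manifold, the sketch below (my own illustration; the dimensions and function names are arbitrary) evaluates $f$ on the column blocks of an orthogonal matrix and verifies invariance under rotating the basis inside each block, which is exactly the well-definedness needed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
dims = [2, 3]                  # block sizes m_1, m_2, i.e. Flag(2, 5; 8)
A = []
for _ in dims:
    M = rng.standard_normal((n, n))
    A.append((M + M.T) / 2)    # random symmetric A_k

def f(V, dims, A):
    """f(V) = sum_k tr(V_k^T A_k V_k), V_k = the k-th column block of V."""
    val, start = 0.0, 0
    for Ak, m in zip(A, dims):
        Vk = V[:, start:start + m]
        val += np.trace(Vk.T @ Ak @ Vk)
        start += m
    return val

V, _ = np.linalg.qr(rng.standard_normal((n, n)))

# rotate the basis inside each block: f must not change, so it
# descends to a well-defined function on the flag manifold
R = np.eye(n)
start = 0
for m in dims:
    Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    R[start:start + m, start:start + m] = Q
    start += m
assert np.isclose(f(V, dims, A), f(V @ R, dims, A))
```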
We choose $\Flag(5, 5; 200)$ and test five methods: (i) the gradient descent method under the classical embedding metric; (ii) the gradient descent method under the modified embedding metric; (iii) the coordinate gradient descent method under the modified embedding metric; (iv) the gradient descent method using the quotient model proposed in Algorithm 1 in \cite{YWL19}; (v) the coordinate minimization method under the modified metric (Algorithm~\ref{alternating}). Figure~\ref{fig:plot1} shows the convergence rate averaged over 10 simulations. We also record the running time to hit $\|\nabla f(V)\| \leq 10^{-5}$, averaged over 10 simulations, as shown in Table~\ref{table1}.
Method (ii) is equivalent to (iv), and their convergence rates and running times are similar. All four descent methods have comparable performance, while the coordinate minimization method outperforms them significantly. \begin{table}[!htbp] \centering
\begin{tabular}{l|l} (i) Classic Descent & 21.428s \\ \hline (ii) Modified Descent & 11.291s \\ \hline (iii) Coordinate Descent & 25.548s \\ \hline (iv) Quotient Descent & 10.238s \\ \hline (v) Coordinate Minimization & 0.552s \end{tabular}
\caption{Running time to hit $\|\nabla f(V)\| \leq 10^{-5}$ of different methods.} \label{table1} \end{table}
\begin{figure}
\caption{Convergence behavior of different methods.}
\label{fig:plot1}
\end{figure}
The coordinate minimization method works best for this special choice of $f(V)$ because the optimization sub-problem has an explicit solution and can be solved exactly. For more general problems, this might not be the case. However, this special choice of $f$ covers many common problems arising in optimization on flag manifolds. Most notably, it is equivalent to the projection problem under the modified embedding \eqref{eq:nearest point on flag}. As a result, the extrinsic sample mean problem \cite{BP} for flag manifolds under the modified embedding can be solved efficiently.
\appendix
\section{Parallel transport with respect to the classical embedding} Let $c(t)$ be a curve on $\Flag(n_1,\dots, n_d;n)$ parametrized as \[ c(t) = V(t) (J_{1},\dots, J_{d}) V(t)^{\scriptscriptstyle\mathsf{T}}. \] Here $V(t)$ is a curve in $\O(n)$, and hence $\dot{V}(t) = V(t) \Lambda(t)$ for some $\Lambda(t) \in \mathfrak{so}(n)$. If we partition $\Lambda(t)$ as $\Lambda(t) = (\Lambda(j,k))_{j,k=1}^{d+1,d+1}$ with respect to $n = m_1 + \cdots + m_{d+1}$, then we may assume $\Lambda(k,k)(t) \equiv 0$, $k =1,\dots, d+1$.
We notice that by Proposition~\ref{prop:tangentFlag} a vector field $Y(t)$ on $\Flag(n_1,\dots, n_d;n)$ along the curve $c(t)$ can be parametrized as \begin{equation}\label{eq:vectorfield} Y(t) = V(t) (X(t) J_1 - J_1 X(t),\dots, X(t) J_d - J_d X(t) ) V(t)^{\scriptscriptstyle\mathsf{T}}, \end{equation} where $X(t) = (X(j,k))_{j,k=1}^{d+1,d+1}\in \mathfrak{so}(n)$ is the partition of $X(t)$ with respect to $n = m_1 + \cdots + m_{d+1}$ and $X(k,k)(t) \equiv 0, k =1,\dots, d+1$. We recall that $Y(t)$ is the parallel transport of $Y(0)$ along $c(t)$ if and only if \[ \proj^{\mathbb{T}}_{c(t)} (\dot{Y}(t)) = 0. \]
By differentiating \eqref{eq:vectorfield}, we obtain \[ \dot{Y}(t) = V(t) (\Delta_1(t),\dots, \Delta_d(t)) V^{\scriptscriptstyle\mathsf{T}}(t), \] where \begin{align*} \Delta_k (t) &= (\dot{X}(t) J_k - J_k \dot{X}(t) ) + {\Lambda}(t) (X(t) J_k - J_k X(t)) - (X(t) J_k - J_k X(t)) {\Lambda}(t) \\ &= (\dot{X}(t) J_k - J_k \dot{X}(t)) + ( {\Lambda}(t) X(t) J_k + J_k X(t) {\Lambda}(t) ) - ({\Lambda}(t) J_k X(t) + X(t) J_k {\Lambda}(t)). \end{align*} Similar to what we have done in Subsection~\ref{subsec:geodesic}, we set \begin{align} T_1(t) &= V(t)(\dot{X}(t) J_1 - J_1 \dot{X}(t),\dots, \dot{X}(t) J_d - J_d \dot{X}(t)) V(t)^{\scriptscriptstyle\mathsf{T}}, \\ T_2(t) &= V(t) ( {\Lambda}(t) X(t) J_1 + J_1 X(t) {\Lambda}(t),\dots, {\Lambda}(t) X(t) J_d + J_d X(t) {\Lambda}(t)) V^{\scriptscriptstyle\mathsf{T}}(t),\\ T_3(t) &= V(t)( {\Lambda}(t) J_1 X(t) + X(t) J_1 {\Lambda}(t) ,\dots, {\Lambda}(t) J_d X(t) + X(t) J_d {\Lambda}(t)) V^{\scriptscriptstyle\mathsf{T}}(t), \end{align} thus $\dot{Y}(t) =T_1(t) + T_2(t) - T_3(t)$. By definition, we conclude that $T_1(t) \in T_{c(t)}\Flag(n_1,\dots, n_d;n)$ and hence to determine $\proj^{\mathbb{T}}_{c(t)} (\dot{Y}(t))$, it is sufficient to compute $\proj^{\mathbb{T}}_{c(t)} (T_2(t))$ and $\proj^{\mathbb{T}}_{c(t)} (T_3(t))$ respectively.
\begin{lemma}\label{lem:ptT2} There exist symmetric matrices $W_1,\dots, W_d$ with \[ W_k(p,q) = \begin{cases} \sum_{s=1}^{d+1} X(k,s) {\Lambda}(s,q) - \sum_{s=1}^{d+1} {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}},~\text{if}~p=k,q\ne k, \\ \sum_{s=1}^{d+1} {\Lambda}(s,p)^{\scriptscriptstyle\mathsf{T}} X(k,s)^{\scriptscriptstyle\mathsf{T}} - \sum_{s=1}^{d+1} X(p,s) {\Lambda}(s,k) ,~\text{if}~q=k,p\ne k, \\ 0,~\text{otherwise} \end{cases} \] such that $\proj^{\mathbb{T}}_{c(t)} (T_2(t)) = V(t) (W_1(t),\dots, W_d(t)) V^{\scriptscriptstyle\mathsf{T}}(t)$. \end{lemma} \begin{proof} For each $k=1,\dots, d$, we first compute ${\Lambda}(t) X(t) J_k + J_k X(t) {\Lambda}(t)$. Indeed, since $X(t), \Lambda(t) \in \mathfrak{so}(n)$ and $J_k$ is a diagonal matrix, we have \[ J_k X(t) {\Lambda}(t) = \left( {\Lambda}(t) X(t) J_k \right)^{\scriptscriptstyle\mathsf{T}}. \] Therefore we only need to compute $J_k X(t) {\Lambda}(t)$. To do so, we partition $\Lambda(t)$ and $X(t)$ as \begin{equation}\label{lem:ptT2:eq1} \Lambda(t) = (\Lambda(p,q)),\quad X(t) = (X(p,q)),\quad 1\le p,q \le d+1. \end{equation} The $(p,q)$-th entry of $J_k X(t) {\Lambda}(t)$ is \begin{align*} \sum_{l,s=1}^{d+1} J_k(p,l) X(l,s) {\Lambda}(s,q) &= \sum_{s=1}^{d+1} J_k (p,p) X(p,s) {\Lambda}(s,q) \\ &= (2\delta_{pk} - 1) \sum_{s=1}^{d+1} X(p,s) {\Lambda}(s,q). \end{align*} Hence the $(p,q)$-th block of ${\Lambda}(t) X(t) J_k + J_k X(t) {\Lambda}(t)$ is simply \begin{align*} (2\delta_{pk} - 1) \sum_{s=1}^{d+1} X(p,s) {\Lambda}(s,q) + (2\delta_{qk} - 1) \sum_{s=1}^{d+1} {\Lambda}(s,p)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}}. 
\end{align*} In particular, for $q \ne k$, the $(k,q)$-th block of ${\Lambda}(t) X(t) J_k + J_k X(t) {\Lambda}(t)$ is \begin{align*} \sum_{s=1}^{d+1} X(k,s) {\Lambda}(s,q) - \sum_{s=1}^{d+1} {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}} \end{align*} and if moreover $q\le d$, then the $(k,q)$-th block of ${\Lambda}(t) X(t) J_q + J_q X(t) {\Lambda}(t)$ is \begin{align*} - \left( \sum_{s=1}^{d+1} X(k,s) {\Lambda}(s,q) - \sum_{s=1}^{d+1} {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}} \right). \end{align*} This implies that \[ \proj^{\mathbb{T}}_{c(t)} (T_2(t)) = V(t) (W_1(t),\dots, W_d(t)) V^{\scriptscriptstyle\mathsf{T}}(t), \] where \[ W_k(p,q) = \begin{cases} \sum_{s=1}^{d+1} X(k,s) {\Lambda}(s,q) - \sum_{s=1}^{d+1} {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}},~\text{if}~p=k,q\ne k, \\ \sum_{s=1}^{d+1} {\Lambda}(s,p)^{\scriptscriptstyle\mathsf{T}} X(k,s)^{\scriptscriptstyle\mathsf{T}} - \sum_{s=1}^{d+1} X(p,s) {\Lambda}(s,k) ,~\text{if}~q=k,p\ne k, \\ 0,~\text{otherwise}. \end{cases} \] \end{proof}
\begin{lemma}\label{lem:ptT3} There exist symmetric matrices $Z_1,\dots, Z_d$ with \[ Z_k(p,q) = \begin{cases} -\sum_{1 \le l \le d, l\ne k} \left(
X(k,l) {\Lambda}(l,d+1) + {\Lambda}(l,k)^{\scriptscriptstyle\mathsf{T}} X(d+1,l)^{\scriptscriptstyle\mathsf{T}} \right),~\text{if}~p=k,q = d + 1, \\ -\sum_{1 \le l \le d, l\ne k} \left(
X(d+1,l) {\Lambda}(l,k) + {\Lambda}(l,d+1)^{\scriptscriptstyle\mathsf{T}} X(k,l)^{\scriptscriptstyle\mathsf{T}} \right),~\text{if}~p=d+1,q=k, \\ 0,~\text{otherwise}. \end{cases} \] such that $\proj^{\mathbb{T}}_{c(t)} (T_3(t)) = V(t) (Z_1(t),\dots, Z_d(t)) V^{\scriptscriptstyle\mathsf{T}}(t)$. \end{lemma} \begin{proof} We compute ${\Lambda}(t) J_k X(t) + X(t) J_k {\Lambda}(t)$ for each $k=1,\dots, d$. We partition $\Lambda(t)$ and $X(t)$ as in \eqref{lem:ptT2:eq1} respectively. We also notice that \[ X(t) J_k {\Lambda}(t) = \left( {\Lambda}(t) J_k X(t) \right)^{\scriptscriptstyle\mathsf{T}} \] so that it is sufficient to compute $X(t) J_k {\Lambda}(t)$. The $(p,q)$-th block of $X(t) J_k {\Lambda}(t)$ is \begin{align*} \sum_{l,s=1}^{d+1} X(p,l) J_k(l,s) {\Lambda}(s,q) &=\sum_{l=1}^{d+1} X(p,l) J_k(l,l) {\Lambda}(l,q) \\ &=\sum_{l=1}^{d+1} (2\delta_{kl} - 1) X(p,l) {\Lambda}(l,q) \end{align*} Hence the $(p,q)$-th block of ${\Lambda}(t) J_k X(t) + X(t) J_k {\Lambda}(t)$ is \[ \sum_{1 \le l \le d+1, l \ne p,q} (2\delta_{kl} - 1)\left(
X(p,l) {\Lambda}(l,q) + {\Lambda}(l,p)^{\scriptscriptstyle\mathsf{T}} X(q,l)^{\scriptscriptstyle\mathsf{T}} \right). \] In particular, for $q\ne k$, the $(k,q)$-th block of ${\Lambda}(t) J_k X(t) + X(t) J_k {\Lambda}(t)$ is \[ -\sum_{1 \le l \le d+1, l\ne k,q} \left(
X(k,l) {\Lambda}(l,q) + {\Lambda}(l,k)^{\scriptscriptstyle\mathsf{T}} X(q,l)^{\scriptscriptstyle\mathsf{T}} \right) \] which is the same as the $(k,q)$-th block of ${\Lambda}(t) J_q X(t) + X(t) J_q {\Lambda}(t)$ if moreover $q \le d$. Hence we have \[ \proj^{\mathbb{T}}_{c(t)} (T_3(t)) = V(t) (Z_1(t),\dots, Z_d(t)) V(t)^{\scriptscriptstyle\mathsf{T}}, \] where for each $1\le k \le d$, \[ Z_k(p,q) = \begin{cases} -\sum_{1 \le l \le d, l\ne k} \left(
X(k,l) {\Lambda}(l,d+1) + {\Lambda}(l,k)^{\scriptscriptstyle\mathsf{T}} X(d+1,l)^{\scriptscriptstyle\mathsf{T}} \right),~\text{if}~p=k,q = d + 1, \\ -\sum_{1 \le l \le d, l\ne k} \left(
X(d+1,l) {\Lambda}(l,k) + {\Lambda}(l,d+1)^{\scriptscriptstyle\mathsf{T}} X(k,l)^{\scriptscriptstyle\mathsf{T}} \right),~\text{if}~p=d+1,q=k, \\ 0,~\text{otherwise}. \end{cases} \] \end{proof}
\begin{proposition}[Parallel transport along any curve]\label{prop:ptflag} Let $c(t) = V(t) (J_1,\dots, J_d) V(t)^{\scriptscriptstyle\mathsf{T}}$ be a curve on $\Flag(n_1,\dots,n_d;n)$ and let $Y(t)$ be a vector field along the curve $c(t)$, parametrized as in \eqref{eq:vectorfield}. Then $Y(t)$ is a parallel transport if and only if for each pair $(k,q)$ such that $1 \le k < q \le d+1$, we have \footnotesize{ \begin{equation}\label{prop:ptflag:eq1} -2 \dot{X}(k,q) + \sum_{\substack{1\le s \le d+1 \\ s\ne k,q}} \left( X(k,s) {\Lambda}(s,q) - {\Lambda}(k,s) X(s,q) \right) + \delta_{q,d+1} \sum_{\substack{1 \le l \le d \\ l\ne k}} \left(
X(k,l) {\Lambda}(l,d+1) + {\Lambda}(k,l) X(l,d+1) \right) = 0, \end{equation}} \normalsize which can be written in a more compact form: \begin{equation}\label{prop:ptflag:eq2} 2 \dot{X} =\pi\left( [X, {\Lambda}]\right) + \begin{bmatrix} 0 & X_0 {\Lambda}_1 + {\Lambda}_0 X_1 \\ -(X_0 {\Lambda}_1 + {\Lambda}_0 X_1)^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix} , \end{equation} where $X_0,\Lambda_0\in \mathbb{R}^{(n-n_{d+1})\times (n-n_{d+1})} ,X_1,\Lambda_1\in \mathbb{R}^{(n-n_{d+1})\times n_{d+1}}$ are submatrices determined by partitions \[ X = \begin{bmatrix} X_0 & X_1 \\ -X_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix},\quad \Lambda = \begin{bmatrix} \Lambda_0 & \Lambda_1 \\ -\Lambda_1^{\scriptscriptstyle\mathsf{T}} & 0 \end{bmatrix}, \] and $\pi(A)$ is defined by setting all diagonal blocks of $A\in \mathfrak{so}(n)$ equal to zero. \end{proposition} Before we proceed, we remark that in particular if $X = {\Lambda}$, then $[X,{\Lambda}] = 0$ and \eqref{prop:ptflag:eq2} reduces to the geodesic equation \eqref{prop:gedoesic:eq1}.
Next we re-write each term of \eqref{prop:ptflag:eq1} using \eqref{eq:vectorization}. This leads to \begin{align*} \vect \left( X(k,s) {\Lambda}(s,q) \right) &= ({\Lambda}(s,q)^{\scriptscriptstyle\mathsf{T}} \otimes \I_{m_k}) \vect ( X(k,s)) = -({\Lambda}(q,s) \otimes \I_{m_k}) \vect ( X(k,s)) \\ \vect \left( {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}} \right) &= (\I_{m_q} \otimes {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}}) \vect (X(q,s)^{\scriptscriptstyle\mathsf{T}}) = (\I_{m_q} \otimes {\Lambda}(k,s)) \vect (X(s,q)) \end{align*} and hence \eqref{prop:ptflag:eq1} becomes \small \begin{align*}
2\vect \left( \dot{X}(k,q) \right) &= - \sum_{\substack{1 \le s \le d+1 \\ s \ne k, q}} \left(
(\I_{m_q} \otimes {\Lambda}(k,s)) \vect (X(s,q)) + ({\Lambda}(q,s) \otimes \I_{m_k}) \vect ( X(k,s))
\right) \\ &+ \delta_{q,d+1} \sum_{\substack{1 \le s \le d \\ s\ne k}} \left( (\I_{m_{d+1}} \otimes {\Lambda}(k,s) ) \vect (X(s,d+1)) -({\Lambda}(d+1,s) \otimes \I_{m_k}) \vect ( X(k,s)) \right), \end{align*}\normalsize which, using the relations $X(p,q) = -X(q,p)^{\scriptscriptstyle\mathsf{T}}$ and $\vect(A^{\scriptscriptstyle\mathsf{T}}) = K^{(m,n)} \vect (A)$\footnote{$K^{(m,n)}$ is the commutation matrix defined in \eqref{eq:commutation}.}, can be written as \begin{equation}\label{eq:pt1} \begin{bmatrix} \vect (\dot{X}(1,2)) \\ \vect (\dot{X}(1,3)) \\ \vdots \\ \vect (\dot{X}(d,d+1)) \end{bmatrix} = \Phi(t) \begin{bmatrix} \vect ({X}(1,2)) \\ \vect ({X}(1,3)) \\ \vdots \\ \vect ({X}(d,d+1)) \end{bmatrix} \end{equation} for some $N \times N$ matrix function $\Phi(t)$, where $N = \sum_{1 \le k < q \le d+1} m_k m_q$. Now according to Theorem~\ref{thm:PBseries}, \eqref{eq:pt1} can be solved by the Peano--Baker series associated to the coefficient matrix $\Phi(t)$.
We again take $d = 2$ for example. In this case, we write \[ \Lambda (t) = \begin{bmatrix} 0 & A(t) & B(t) \\ -A^{\scriptscriptstyle\mathsf{T}} (t) & 0 & C(t) \\ -B^{\scriptscriptstyle\mathsf{T}} (t) & -C^{\scriptscriptstyle\mathsf{T}}(t) & 0 \end{bmatrix},\quad A(t)\in \mathbb{R}^{m_1 \times m_2},B(t)\in \mathbb{R}^{m_1 \times m_3},C(t)\in \mathbb{R}^{m_2 \times m_3}, \] \[ X (t) = \begin{bmatrix} 0 & U(t) & V(t) \\ -U^{\scriptscriptstyle\mathsf{T}} (t) & 0 & W(t) \\ -V^{\scriptscriptstyle\mathsf{T}} (t) & -W^{\scriptscriptstyle\mathsf{T}}(t) & 0 \end{bmatrix},\quad U(t)\in \mathbb{R}^{m_1 \times m_2},V(t)\in \mathbb{R}^{m_1 \times m_3},W(t)\in \mathbb{R}^{m_2 \times m_3}. \] Then we have \scriptsize{ \[ \proj^{\mathbb{T}}_{c(t)} ( T_1(t) ) = 2 V(t) \left( \begin{bmatrix} 0 & -\dot{U}(t) & -\dot{V}(t) \\ -\dot{U}^{\scriptscriptstyle\mathsf{T}}(t) & 0 & 0 \\ -\dot{V}^{\scriptscriptstyle\mathsf{T}}(t) & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & \dot{U}(t) & 0 \\ \dot{U}^{\scriptscriptstyle\mathsf{T}}(t) & 0 & -\dot{W}(t) \\ 0 & -\dot{W}^{\scriptscriptstyle\mathsf{T}}(t) & 0 \end{bmatrix} \right) V(t)^{\scriptscriptstyle\mathsf{T}}, \] \[ \begin{split} \proj^{\mathbb{T}}_{c(t)} ( T_3 (t) ) = V(t) \left( \begin{bmatrix} 0 & 0 & -({A}(t) W(t) + U(t) {C}(t) ) \\ 0& 0 & 0 \\ -({A}(t) W(t) + U(t) {C}(t) )^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \right. \\ \left. \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & {A}(t)^{\scriptscriptstyle\mathsf{T}} V(t) + U(t)^{\scriptscriptstyle\mathsf{T}} {B}(t) \\ 0 & V(t)^{\scriptscriptstyle\mathsf{T}} {A}(t) + {B}(t)^{\scriptscriptstyle\mathsf{T}} U(t) & 0 \end{bmatrix} \right) V(t)^{\scriptscriptstyle\mathsf{T}}. 
\end{split} \] \[ \begin{split} \proj^{\mathbb{T}}_{c(t)} ( T_2 (t) ) = V(t) \left( \begin{bmatrix} 0 & {B}(t) W(t)^{\scriptscriptstyle\mathsf{T}} - V(t) {C}(t)^{\scriptscriptstyle\mathsf{T}} & -{A}(t) W(t) + U(t) {C}(t) \\ W(t) {B}(t)^{\scriptscriptstyle\mathsf{T}} - {C}(t) V(t)^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \\ -W(t)^{\scriptscriptstyle\mathsf{T}} {A}(t)^{\scriptscriptstyle\mathsf{T}} + {C}(t) ^{\scriptscriptstyle\mathsf{T}} U(t)^{\scriptscriptstyle\mathsf{T}} & 0 & 0 \end{bmatrix}, \right. \\ \left. \begin{bmatrix} 0 & -{B}(t) W(t)^{\scriptscriptstyle\mathsf{T}} + V(t) {C}(t)^{\scriptscriptstyle\mathsf{T}} & 0\\ -W(t) {B}(t)^{\scriptscriptstyle\mathsf{T}} + {C}(t) V(t)^{\scriptscriptstyle\mathsf{T}} & 0 & {A}(t)^{\scriptscriptstyle\mathsf{T}} V(t) - U(t)^{\scriptscriptstyle\mathsf{T}} {B}(t) \\ 0 & V(t)^{\scriptscriptstyle\mathsf{T}} {A}(t) - {B}(t)^{\scriptscriptstyle\mathsf{T}} U(t) & 0 \end{bmatrix} \right) V(t)^{\scriptscriptstyle\mathsf{T}}. \end{split} \] } \normalsize Hence the system for $X(t)$ to be a parallel transport is given by \begin{align*} 2\dot{U}(t) &=- V(t) {C}(t)^{\scriptscriptstyle\mathsf{T}} + {B}(t) W(t)^{\scriptscriptstyle\mathsf{T}} = (-{C}(t) \otimes \I_{m_1} )\vect (V(t)) + (\I_{m_2} \otimes {B}(t))K^{(m_2,m_3)} \vect (W(t)) , \\
\dot{V}(t) &= U(t) {C}(t) = ({C}(t)^{\scriptscriptstyle\mathsf{T}} \otimes \I_{m_1}) \vect(U(t)), \\ \dot{W}(t) &= -U(t)^{\scriptscriptstyle\mathsf{T}} {B}(t) = - ({B}(t)^{\scriptscriptstyle\mathsf{T}} \otimes \I_{m_2}) K^{(m_1,m_2)} \vect(U(t)). \end{align*} Hence we have \[ \begin{bmatrix} \vect(\dot{U}(t)) \\ \vect(\dot{V}(t)) \\ \vect(\dot{W}(t)) \\ \end{bmatrix} = \begin{bmatrix} 0 & -\frac{1}{2} {C}(t) \otimes \I_{m_1} & \frac{1}{2} ( \I_{m_2} \otimes {B}(t))K^{(m_2,m_3)} \\ {C}(t)^{\scriptscriptstyle\mathsf{T}} \otimes \I_{m_1} & 0 & 0 \\ -({B}(t)^{\scriptscriptstyle\mathsf{T}} \otimes \I_{m_2}) K^{(m_1,m_2)} & 0 & 0 \end{bmatrix} \begin{bmatrix} \vect(U(t)) \\ \vect(V(t)) \\ \vect(W(t)) \\ \end{bmatrix}. \]
\section{Parallel transport with respect to the modified embedding} Now we proceed to discuss the parallel transport of a tangent vector along a curve on a flag manifold. Again we parametrize a curve $c(t)$ on $\Flag(n_1,\dots, n_d;n)$ as \eqref{eq:newmodelcurve}. Let $Y(t)$ be a vector field on $\Flag(n_1,\dots, n_d;n)$ along the curve $c(t)$. Then we may correspondingly parametrize $Y(t)$ as \begin{equation}\label{eq:newmodelvectorfield} Y(t) = V (t) (X(t) J_1 - J_1 X(t),\dots, X(t) J_{d+1} - J_{d+1} X(t) ) V(t)^{\scriptscriptstyle\mathsf{T}}, \end{equation} where $X(t) = (X(j,k))_{j,k=1}^{d+1,d+1}\in \mathfrak{so}(n)$ is the partition of $X(t)$ with respect to $n = m_1 + \cdots + m_{d+1}$ and $X(k,k)(t) \equiv 0, k =1,\dots, d+1$. We notice that $\dot{Y}(t) = T_1(t) + T_2(t) - T_3(t)$ where \begin{align} T_1(t) &= V(t) (\dot{X}(t) J_1 - J_1 \dot{X}(t),\dots, \dot{X}(t) J_{d+1} - J_{d+1} \dot{X}(t)) V^{\scriptscriptstyle\mathsf{T}}(t), \\ T_2(t) &= V(t) ( {\Lambda}(t) X(t) J_1 + J_1 X(t) {\Lambda}(t),\dots, {\Lambda}(t) X(t) J_{d+1} + J_{d+1} X(t) {\Lambda}(t)) V^{\scriptscriptstyle\mathsf{T}}(t),\\ T_3(t) &= V(t) ( {\Lambda}(t) J_1 X(t) + X(t) J_1 {\Lambda}(t) ,\dots, {\Lambda}(t) J_{d+1} X(t) + X(t) J_{d+1} {\Lambda}(t)) V^{\scriptscriptstyle\mathsf{T}}(t). \end{align}
Similar computations in proofs of Lemmas~\ref{lem:ptT2} and \ref{lem:ptT3} lead to the following \begin{lemma}\label{lem:newmodelptTi} Let $c(t), \Lambda(t), Y(t), X(t), T_1(t), T_2(t), T_3(t)$ be as above. We have \begin{enumerate} \item $T_1(t) \in \mathbb{T}_{c(t)} \Flag(n_1,\dots, n_d;n)$. \item $\proj_{c(t)}^{\mathbb{T}} (T_2(t)) = V(t) (W_1(t),\dots, W_{d+1}(t)) V^{\scriptscriptstyle\mathsf{T}}(t)$, where \[ W_k(p,q) = \begin{cases} \sum_{s=1}^{d+1} X(k,s) {\Lambda}(s,q) - \sum_{s=1}^{d+1} {\Lambda}(s,k)^{\scriptscriptstyle\mathsf{T}} X(q,s)^{\scriptscriptstyle\mathsf{T}},~\text{if}~p=k,q\ne k, \\ \sum_{s=1}^{d+1} {\Lambda}(s,p)^{\scriptscriptstyle\mathsf{T}} X(k,s)^{\scriptscriptstyle\mathsf{T}} - \sum_{s=1}^{d+1} X(p,s) {\Lambda}(s,k) ,~\text{if}~q=k,p\ne k, \\ 0,~\text{otherwise}. \end{cases} \] \item $\proj_{c(t)}^{\mathbb{T}} (T_3(t)) = 0$. \end{enumerate} \end{lemma} Now we recall that $Y(t)$ is a parallel transport along the curve $c(t)$ if and only if $\proj_{c(t)}(\dot{Y}(t)) = 0$. Combining this with Lemma~\ref{lem:newmodelptTi}, we may derive the equation for parallel transport. \begin{proposition}[Parallel transport along any curve]\label{prop:newmodelpt} Let $c(t), \Lambda(t), Y(t), X(t)$ be as above. The vector field $Y(t)$ along $c(t)$ is a parallel transport if and only if \begin{equation}\label{eq:prop:newmodelpt} \dot{X}(t) = \frac{1}{2} \pi \left([X(t) , {\Lambda}(t)] \right), \end{equation} where $\pi(A)$ is the matrix obtained by setting all diagonal blocks equal to zero for $A\in \mathfrak{so}(n)$. \end{proposition}
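The parallel transport equation \eqref{eq:prop:newmodelpt} can be integrated numerically. The sketch below (my own verification, not from the text; block sizes, step size, and the RK4 integrator are arbitrary choices) takes a constant $\Lambda$ as in the geodesic case and checks two invariants: the solution $X(t)$ stays skew-symmetric with zero diagonal blocks, and its Frobenius norm is preserved, consistent with parallel transport being an isometry:

```python
import numpy as np

rng = np.random.default_rng(4)
sizes = [2, 2, 3]                        # block sizes m_1, m_2, m_3; n = 7
n = sum(sizes)
cuts = np.cumsum([0] + sizes)

def pi(M):
    """Zero out the diagonal blocks of M (the map pi in the proposition)."""
    M = M.copy()
    for a, b in zip(cuts[:-1], cuts[1:]):
        M[a:b, a:b] = 0
    return M

def random_block_skew():
    M = rng.standard_normal((n, n))
    return pi(M - M.T)                   # skew-symmetric, zero diagonal blocks

Lam = random_block_skew()                # constant Lambda (geodesic case)
X0 = random_block_skew()

def rhs(X):
    # dX/dt = (1/2) pi([X, Lambda])
    return 0.5 * pi(X @ Lam - Lam @ X)

X, h = X0.copy(), 1e-3
for _ in range(1000):                    # classical RK4 on [0, 1]
    k1 = rhs(X)
    k2 = rhs(X + h / 2 * k1)
    k3 = rhs(X + h / 2 * k2)
    k4 = rhs(X + h * k3)
    X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# X(t) stays skew with zero diagonal blocks, and the norm of the
# tangent parameter is preserved along the transport
assert np.allclose(X, -X.T)
assert np.allclose(X, pi(X))
assert np.isclose(np.linalg.norm(X), np.linalg.norm(X0))
```

Norm preservation follows from $\langle X, \pi([X,\Lambda])\rangle = \langle X, [X,\Lambda]\rangle = 0$ for skew $X$ with zero diagonal blocks.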
In particular, if $c(t) = V(t) (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}(t)$ is a geodesic, then Proposition~\ref{prop:newmodelgedoesic} implies $\dot{V}(t) = V(t) A$ for some constant matrix $A\in \mathfrak{so}(n)$, from which we obtain the following characterization of a parallel transport along a geodesic in $\Flag(n_1,\dots, n_{d};n)$. \begin{corollary}[Parallel transport along a geodesic] If $c(t) = V(t) (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}(t)$ is the geodesic curve passing through $V(0) (J_1,\dots, J_{d+1}) V^{\scriptscriptstyle\mathsf{T}}(0)$ with the tangent direction \[ V(0) (A J_1 - J_1A, \dots, A J_{d+1} - J_{d+1}A) V^{\scriptscriptstyle\mathsf{T}}(0),\quad A\in \mathfrak{so}(n),A(k,k) = 0, k=1,\dots, d+1, \] then the parallel transport $Y(t)$ of \[ Y(0) =V(0) (BJ_1- J_1B,\dots, BJ_{d+1}- J_{d+1}B) V^{\scriptscriptstyle\mathsf{T}}(0),\quad B\in \mathfrak{so}(n),B(k,k) = 0, k=1,\dots, d+1, \] is \[ Y(t) =V(t) (X(t)J_1- J_1X(t),\dots, X(t)J_{d+1}- J_{d+1}X(t)) V^{\scriptscriptstyle\mathsf{T}}(t), \] where \[ \dot{X}(t) = \frac{1}{2} \pi \left( [X(t), A] \right),\quad X(0) = B, \] where $\pi(A)$ is the matrix obtained by setting all diagonal blocks equal to zero for $A\in \mathfrak{so}(n)$. \end{corollary}
\end{document}
\begin{document}
\begin{center}
{\bf\large On Genus $g$ Orientable Crossing Numbers of Small Complete Graphs} \\
\textbf{Yoonah Lee}
\end{center}
\begin{abstract} The current state of knowledge of the genus $g$ orientable crossing numbers of complete graphs through $K_{11}$ is reviewed. It is shown that $cr_3(K_{10})=3,$ $cr_3(K_{11})\leq14,$ and $cr_4(K_{11})=4.$ It is established with the aid of an algorithm that there are precisely two non-isomorphic embeddings of $K_9$ on a surface of genus 3 containing a hexagon with all its vertices distinct. \end{abstract}
\section*{Introduction} Given a surface $S$ and a graph $G$, a \emph{drawing} of $G$ represents each edge as an arc whose endpoints are two distinct vertices, with no point other than a vertex lying on three edges \cite{szekely}. The \emph{crossing number of $G$ on $S$}, denoted $cr_S(G)$, is the minimum number of crossings in any drawing of $G$ on $S$. \emph{Good drawings} satisfy three conditions: no arc connecting two nodes intersects itself, two arcs have at most one point of intersection, and two arcs sharing a node have no other point of intersection. \emph{Optimal drawings} have the minimum number of crossings of $G$ on $S$ \cite{guy1971}. It can be shown that optimal drawings are good drawings.
The crossing numbers of complete graphs $K_n$ on orientable surfaces $S_g$ of positive genus have been objects of interest since Heawood's map-coloring conjecture in 1890 \cite{heawood}. Despite significant progress, the values of the crossing number even for rather small complete graphs on these surfaces have remained unclear. When $K_{n-1}$ can be embedded in $S_g,$ adding a vertex to the embedding and connecting it to each of the $n-1$ vertices can in some cases produce a drawing with the lowest possible number of crossings; in other cases a different method is required. For example, one can obtain a drawing of $K_5$ with one crossing on the sphere after embedding $K_4,$ whereas a four-crossing drawing of $K_8$ on a torus cannot be obtained by embedding $K_7.$ A four-crossing drawing of $K_9$ on $S_2$ was given by Riskin \cite{riskin}, and this drawing can be obtained from an embedding of $K_8$ by placing the ninth vertex in a quadrilateral face. In this paper, I show that $K_9$ can be embedded on $S_3$ with a hexagon with all vertices distinct, and the other three vertices in faces adjacent to the hexagon. Thus, there is a drawing of $K_{10}$ on $S_3$ with three crossings, and since it was shown by Kainen \cite{kainen} that no smaller number of crossings is possible, I have $cr_3(K_{10})=3$.
\section*{The Heawood Conjecture} Given a graph $G,$ the \emph{chromatic number} of $G$, denoted $\chi(G)$, is the minimum number of colors needed to color the vertices so that no two neighboring vertices have the same color. In 1890, Percy John Heawood posed a question that would soon spark broad interest in embedding graphs in surfaces: given a surface $S$, consider all graphs $G$ that can be embedded in $S.$ What is the highest chromatic number $\chi(G)$ among all of them? The four color map problem, which asks the same question of the sphere $S_0,$ had been widely known since 1852. Heawood proved that $\chi(G)\leq\lfloor\frac{7+\sqrt{1+48g}}{2}\rfloor$ for all graphs $G$ that can be embedded on a surface of genus $g$, and claimed that for any surface of genus $g$, there is a graph $G$ achieving this upper bound \cite{heawood}. He also showed that the chromatic number of the torus is seven, proving his claim for $g=1$. In 1891, Lothar Heffter described how to embed $K_7$, $K_8$, $K_9$, $K_{10}$, $K_{11}$, and $K_{12}$ in a surface of minimum genus, proving Heawood's claim for $g\leq6$ \cite{heffter}. Gerhard Ringel proved the case $g=7$ by embedding $K_{13}$ in 1952, and he started to consider $K_n$ for the twelve cases of $n$ modulo $12$. Finally, in 1968, Gerhard Ringel and J. W. T. Youngs proved the Heawood map-coloring theorem by finding embeddings for all residues modulo 12, with the help of Jean Mayer and others for a finite number of cases not covered by their proof \cite{mayer}. Thus Heawood's conjecture was proved for all orientable surfaces. It is also true for non-orientable surfaces except for the Klein bottle. In the course of their work, Ringel and Youngs established a relationship between the genus of a complete graph and the chromatic number of a surface. Given a genus $g$, if $p$ is the largest integer such that $\gamma(K_p)\leq g$, then $\frac{(p-3)(p-4)}{12}\leq g<\frac{(p-2)(p-3)}{12}$.
This inequality is equivalent to $p^{2}-7p+12\leq 12g<p^{2}-5p+6$. Solving for $p$, it can be seen that if $\gamma(K_n)=\lceil\frac{(n-3)(n-4)}{12}\rceil$, then $\chi(S_g)=\lfloor\frac{7+\sqrt{1+48g}}{2}\rfloor$ \cite{harary}, \cite{ringel}.
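As a numerical sanity check of this relationship, the Heawood bound and the genus formula can be compared directly for small $g$. The sketch below is illustrative only (the function names are mine):

```python
import math

def heawood_bound(g):
    """Heawood's bound floor((7 + sqrt(1 + 48g)) / 2) for the surface S_g."""
    return (7 + math.isqrt(1 + 48 * g)) // 2

def genus_Kn(n):
    """Ringel-Youngs genus formula ceil((n-3)(n-4)/12) for K_n, n >= 3."""
    return math.ceil((n - 3) * (n - 4) / 12)

def max_complete_on(g):
    """Largest n such that K_n embeds in the surface of genus g."""
    n = 3
    while genus_Kn(n + 1) <= g:
        n += 1
    return n

# For every genus the two quantities agree, e.g. heawood_bound(1) == 7
# and max_complete_on(1) == 7 (K_7 embeds in the torus, K_8 does not).
```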
\section*{Known Results} A similar question arose: what is the minimum crossing number of graphs? In 1960, Guy conjectured the minimum crossing number of complete graphs on a genus $0$ surface: $Z(n) = \frac{1}{4}\lfloor\frac{n}{2}\rfloor\lfloor\frac{n-1}{2}\rfloor\lfloor\frac{n-2}{2}\rfloor\lfloor\frac{n-3}{2}\rfloor$ \cite{originalguy} \cite{guy}. The conjecture has been proven for $n\leq12$ \cite{pan}, but the crossing number for $n\geq13$ remains unconfirmed. In 1968, Richard K. Guy, Tom Jenkyns, and Jonathan Schaer proved lower and upper bounds for the toroidal crossing number of the complete graph, namely $\frac{23}{210} \binom{n}{4}$ and $\frac{59}{216} \binom{n-1}{4}$ for $n \geq 10$ \cite{guyjenkyns}. Paul C. Kainen proved a lower bound for the crossing numbers of complete graphs, bipartite graphs, and cubical graphs in 1972: $cr_g (K_n)\geq\binom{n}{2}-3n+3(2-2g)$. Kainen conjectured that $cr_g (K_n)$ is equal to this lower bound whenever $g$ is one less than the genus of $K_n$ and $K_n$ does not provide a triangulation of $S_{g+1}$ \cite{kainen}. However, Adrian Riskin disproved Kainen's conjecture by showing that the genus $2$ crossing number of the graph $K_9$ is $4$, not $3$ as Kainen's conjecture suggests \cite{riskin}. In 1995, Farhad Shahrokhi, László A. Székely, Ondrej Sýkora, and Imrich Vrt'o proved a theorem that gives a lower bound for the crossing number. Given a graph $G$ with $n$ vertices and $e$ edges, when $\frac{n^2}{e}\geq g$, the crossing number is at least $\frac{ce^3}{n^2}$. When $\frac{n^2}{e}\leq g\leq\frac{3}{64}$, the crossing number is at least $\frac{ce^2}{g+1}$ \cite{10.1007/3-540-57899-4_68}.
\begin{table}[h] \begin{center}
\begin{tabular}{ |c|c c c c c c c c c c c c| } \hline $n\pmod{12}$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline $f(n)$ & 0 & 3 & 5 & 0 & 0 & 5 & 3 & 0 & 2 & 3 & 3 & 2 \\ \hline \end{tabular} \caption{$f(n)$ gives the number of edges that would need to be added to an embedding of $K_n$ on a surface of its minimum genus to obtain a triangulation} \label{table:1} \end{center} \end{table}
\section*{The Genus 3 Case: Program} Table \ref{table:1} shows, for each complete graph $K_n$, the number of edges that would need to be added to an embedding on a surface of its minimum genus to obtain a triangulation. Since $f(9)=3$, any embedding of $K_9$ on $S_3$ has either three quadrilateral faces, a quadrilateral and a pentagon, or a hexagon.
Given an embedding of a graph $G$ with vertices labelled $1,\dots,n$ on the orientable surface $S_g,$ a \emph{rotation sequence} for this embedding consists of $n$ rows, where each row $r_i$ lists, in clockwise order starting from a given edge, the vertex at the other end of each edge incident to vertex $i$. The Heffter-Edmonds Rotation Principle states that every rotation sequence corresponds to a unique embedding of the graph in an orientable surface.
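The Heffter-Edmonds principle can be made concrete by face tracing: given the rotation at every vertex, each face is recovered by repeatedly following the rule that the directed edge $(u,v)$ is succeeded by $(v,w)$, where $w$ follows $u$ in the rotation at $v$. The Python sketch below is a minimal illustration of this rule (it is not the program used in this paper; the successor convention and names are my own), applied to a planar rotation system for $K_4$:

```python
def trace_faces(rotation):
    """Trace the faces determined by a rotation system.

    `rotation` maps each vertex to the cyclic (clockwise) list of its
    neighbours.  The successor of the directed edge (u, v) is (v, w),
    where w follows u in the rotation at v.
    """
    unused = {(u, v) for u in rotation for v in rotation[u]}
    faces = []
    while unused:
        u, v = next(iter(unused))
        face = []
        while (u, v) in unused:
            unused.remove((u, v))
            face.append(u)
            ring = rotation[v]
            w = ring[(ring.index(u) + 1) % len(ring)]
            u, v = v, w
        faces.append(face)
    return faces

def genus(rotation):
    """Genus of the embedding surface via Euler's formula V - E + F = 2 - 2g."""
    V = len(rotation)
    E = sum(len(nbrs) for nbrs in rotation.values()) // 2
    F = len(trace_faces(rotation))
    return (2 - V + E - F) // 2

# A planar (genus-0) rotation system for K_4: four triangular faces.
k4 = {0: [1, 2, 3], 1: [0, 3, 2], 2: [0, 1, 3], 3: [0, 2, 1]}
```

Running the same face-tracing on a candidate rotation sequence for $K_9$ and checking that the genus equals $3$ is exactly the verification the algorithm below performs piecemeal.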
I built an algorithm that constructs all rotation sequences for $K_9$ on $S_3$. Using it, I determined that there exists an embedding of $K_9$ on $S_3$ that contains a hexagon with $6$ distinct vertices. Furthermore, the three remaining vertices lie in triangular faces adjacent to this hexagon.
To enumerate all embeddings of $K_9$ on $S_3,$ I generated arbitrary permutations of vertices as rows of candidate rotation sequences and checked:
1. orientability: whenever an edge shows up in two rows, the two occurrences are in opposite order;
2. if the number of triangular faces mentioned is less than or equal to $22$;
3. if each edge is in at most two faces;
4. if the faces containing a given vertex make a cycle of length 8.
Heffter already showed an embedding of $K_9$ on $S_3$ with $22$ triangles and one hexagon that contains a vertex mentioned twice. If there is a hexagon with distinct vertices and the other three vertices in faces adjacent to that hexagon, a tenth point inside the hexagon will be able to connect to all six vertices of the hexagon and to the other three vertices with only three crossings. Thus, if I find such an embedding, then the genus 3 crossing number of $K_{10}$ is 3.
Algorithm \ref{algorithm:1} generates all possible rotation sequences of the hexagon and eliminates the impossible drawings. If an outer edge of the hexagon shows up in two rows in the same order of vertices, the drawing is not orientable. Sequences with more than $22$ triangular faces or with edges contained in more than two faces can be rejected. Thus, in the end, the program will only print rotation sequences that produce an orientable surface of genus $3$.
Using rotation sequences, I labeled the vertices from $1$ to $9$, giving each vertex a row of different vertices it is connected to in order. In row 1, I may assume that vertices 7, 8, and 9 appear in that order, meaning that there are $\frac{6!}{3!}=120$ permutations to check. In each of the rows 2, 3, 4, 5, and 6, the first vertex after the hexagon must be the same as the last vertex before the hexagon in the previous row. Thus there are $5!=120$ permutations to check for each admissible permutation of the previous row.
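The count of $\frac{6!}{3!}=120$ admissible first rows can be reproduced with a short enumeration. The sketch below is illustrative (not the actual program); the fixed end entries $6$ and $2$ follow the convention $r_{1,1}=6$, $r_{1,8}=2$ of Algorithm \ref{algorithm:1}:

```python
from itertools import permutations

# Row 1 of a candidate rotation sequence: the hexagon neighbours 6 and 2
# occupy the first and last positions, and the middle six entries are a
# permutation of {3, 4, 5, 7, 8, 9} in which 7, 8, 9 may be assumed to
# appear in that relative order.
def first_row_candidates():
    candidates = []
    for p in permutations([3, 4, 5, 7, 8, 9]):
        pos = {v: i for i, v in enumerate(p)}
        if pos[7] < pos[8] < pos[9]:
            candidates.append((6,) + p + (2,))
    return candidates

print(len(first_row_candidates()))  # 6!/3! = 120
```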
This program first generates an array of vertices connected to vertex $1$, excluding those with $3$ as the last element. (Excluding 3 cuts down the search slightly but is not technically necessary; thus this step is omitted from Algorithm 1.) Then it generates another array of vertices connected to vertex $2$, only if the last element is not $4$ and all outer edges are mentioned in the opposite order compared to those in row $1$. The same procedure is repeated until vertex $4$, when I now check that the number of triangular faces is at most $22$, because Heffter had already determined the number of faces of an embedding of $K_9$ on $S_3$. For vertex $5$, I check that each edge is mentioned in at most two faces. The same is repeated for vertex $6$. This ensures that the program does not generate all sequences in full, but only fully processes sequences that have the potential to be an embedding. Lastly, I check if the cycle lengths of faces going around vertices 7, 8, and 9 are 8. If so, the program prints the sequences so that I can reproduce them as drawings.
\newtheorem{thm}{Theorem} \begin{thm} There are two non-isomorphic embeddings of $K_9$ on $S_3$ with a hexagon with all six vertices distinct. \begin{proof} Algorithm \ref{algorithm:1} generated a total of eight rotation sequences with six distinct vertices. Four of them were permutations of the rotation sequence given in Table \ref{table:6}, and four were permutations of the rotation sequence given in Table \ref{table:7}. In Case 2, three faces adjacent to the hexagon had non-hexagon vertices and were adjacent to a triangle with another non-hexagon vertex. This is not true of Case 1. \end{proof} \end{thm}
\begin{table}[h!] \begin{center}
\begin{tabular} { | c | c c c c c c c c | }
\hline 1&6&3&7&8&5&4&9&2\\ 2&1&9&6&4&8&7&5&3\\ 3&2&5&9&7&1&6&8&4\\ 4&3&8&2&6&7&9&1&5\\ 5&4&1&8&9&3&2&7&6\\ 6&5&7&4&2&9&8&3&1\\ \hline \end{tabular} \caption{First six rows of a rotation sequence embedding $K_9$ on $S_3$, Case 1} \label{table:6} \end{center} \end{table}
\begin{table}[h!] \begin{center}
\begin{tabular} { | c | c c c c c c c c | }
\hline 1&6&3&7&8&5&4&9&2\\ 2&1&9&8&6&4&7&5&3\\ 3&2&5&9&7&1&6&8&4\\ 4&3&8&7&2&6&9&1&5\\ 5&4&1&8&9&3&2&7&6\\ 6&5&7&9&4&2&8&3&1\\ \hline \end{tabular} \caption{First six rows of a rotation sequence embedding $K_9$ on $S_3$, Case 2} \label{table:7} \end{center} \end{table}
\begin{algorithm}[h] \label{algorithm:1}
\caption{Searching through all rotation sequences to find the set of all embeddings of $K_9$ on $S_3$ with a hexagon with $6$ distinct vertices.} \lstset{numbers=left, numberstyle=\tiny, stepnumber=1, numbersep=5pt} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Output{rotation sequences with a hexagon with $6$ distinct vertices} \BlankLine check\_perms\_of\_row$(i)$:\\ Let $r_{i,1}=i-1$ and $r_{i,8}=i+1$\\ \ForAll{permutations $p$ of $\{1, 2, 3, 4, 5, 6, 7, 8, 9\}-\{i\}-\{i+1\}-\{i-1\}-\{v\}$}{fill in $r_{i, 3}$ through $r_{i, 7}$ with $p$\\ \tcc{in lines 2 and 3, $i+1$ and $i-1$ are to be read $\mod{6}$ using the numbers $1, 2, 3, 4, 5, 6$}\
\If{any edge is in two rows in the same direction}{go to next $p$}\
\If{total number of faces mentioned through row $r_i>22$}{go to next $p$}\
\If{any edge is in more than two faces}{go to next $p$}\ \If{$i<6$}{check\_perms\_of\_row$(i+1)$}\ \If{$i=6$}{\If{the total number of faces mentioned through row $r_6$ is less than 22}{ add this rotation sequence to set of candidates for further verification} \If{for any $j \in \{7,8,9\}$, the triangular faces containing $j$ do not make a cycle of length 8}{go to next $p$} print out the first six rows of the rotation sequence}} \end{algorithm}
\begin{table}[h] \begin{center}
\begin{tabular} { | l | r | }
\hline
Check & Number of Cases\\
\hline
Row 4: Check if edges opposite & 969598\\
Check if number of faces $\leq22$ & 261359\\
\hline
Row 5: Check if edges opposite & 1169027\\
Check if number of faces $\leq22$ & 3307\\
Check if each edge is in at most $2$ faces & 110\\
\hline
Row 6: Check if edges opposite & 100\\
Check if number of faces $\leq22$ & 38\\
Check if each edge is in at most $2$ faces & 38\\
\hline
See if all $8$ edges connecting to $7, 8, 9$ make a cycle & 8\\ \hline \end{tabular} \caption{Number of arrangements of the first $i$ rows of candidate rotation sequence for $K_9$ satisfying each criterion} \label{table:2} \end{center} \end{table}
\begin{thm} The genus $3$ crossing number of $K_{10}$ is $3$. \begin{proof} Kainen's inequality $cr_g (K_n)\geq\binom{n}{2}-3n+3(2-2g)$ gives a lower bound of $3$ for the crossing number of $K_{10}$ on a genus $3$ surface. With Algorithm \ref{algorithm:1}, I found rotation sequences including a hexagon with 6 distinct vertices and the other three vertices in faces adjacent to the hexagon. An additional vertex in the hexagon will be able to connect with the six vertices of the hexagon and cross one edge each for vertices 7, 8, and 9, giving a drawing with exactly $3$ crossings. The drawing is shown in Figure \ref{figure:4}. \end{proof} \end{thm}
\section*{The Genus 4 Case: Program} Although the foundations of the algorithm for $K_{10}$ are similar to those of $K_9$, Algorithm \ref{algorithm:2} is different with one more vertex present. There is, of course, one more place in each row. Most importantly, there are a total of 210 possibilities for the first row, because I may assume vertices 7, 8, 9, and 10 appear in that order, leaving $\frac{7!}{4!}=210$ arrangements of the middle seven elements. Other rows have $6!=720$ possibilities, because three elements are known.
The fact that this algorithm works so plainly for $K_9$ depends in part on the fact that no sequences giving fewer than 22 faces among the first six rows turn out to exist. But for $K_{10}$, the analogous algorithm outputs that it is possible to have 26, 27, or 28 faces among the first six rows. I thus divide the algorithm into three cases: when there are 26 faces, 27 faces, and 28 faces. In the case of 28 faces, the algorithm is similar to the algorithm for $K_9$ in that I simply check if the cycle length of each vertex is 9. No configurations existed with this property. For the case of 27 faces, I iterate over triangles consisting only of vertices 7, 8, 9, and 10, adding each to the collection of faces and checking whether it completes an embedding of $K_{10}$. Eight configurations were found to be completable. In the case of 26 faces, I do the same but with pairs of triangles instead of single triangles. Eleven configurations were found to be completable.
\begin{thm} The genus $4$ crossing number of $K_{11}$ is $4$. \begin{proof} Kainen's inequality $cr_g (K_n)\geq\binom{n}{2}-3n+3(2-2g)$ gives a lower bound of 4 for the crossing number of $K_{11}$ on a genus $4$ surface. With Algorithm \ref{algorithm:2}, I found rotation sequences including a hexagon with 6 distinct vertices and the other four vertices in faces adjacent to the hexagon. An additional vertex in the hexagon will be able to connect with the six vertices of the hexagon and cross an edge each for vertices 7, 8, 9, and 10.
One of the rotation sequences is given in Table \ref{table:8}. The sequence does not mention two faces: the face consisting of vertices 7, 8, and 10 and the face consisting of vertices 7, 9, and 10. The corresponding embedding is shown in Figure \ref{figure:6}. Together with the lower bound, this gives $cr_4(K_{11})=4$. \end{proof} \end{thm}
\begin{table}[h!] \begin{center}
\begin{tabular} { | c | c c c c c c c c c | }
\hline 1&6 & 3 &7 &8 &4& 9& 5& 10& 2\\ 2&1& 10& 6& 7& 4& 8& 5& 9& 3\\ 3&2 &9 &8& 10& 5& 7& 1& 6& 4\\ 4&3 &6 &10& 9& 1& 8& 2& 7& 5\\ 5&4 &7 &3 &10& 1& 9& 2& 8& 6\\ 6&5 &8 &9 &7& 2& 10& 4& 3& 1\\ \hline \end{tabular} \caption{First six rows of a rotation sequence embedding $K_{10}$ on $S_4$} \label{table:8} \end{center} \end{table}
\begin{algorithm}[h!] \label{algorithm:2}
\caption{Searching through all rotation sequences to find the set of all embeddings of $K_{10}$ on $S_4$ with a hexagon with $6$ distinct vertices.} \lstset{numbers=left, numberstyle=\tiny, stepnumber=1, numbersep=5pt} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Output{rotation sequences with a hexagon with $6$ distinct vertices} \BlankLine check\_perms\_of\_row$(i)$:\\ Let $r_{i,1}=i-1$ and $r_{i,8}=i+1$\\ \ForAll{permutations $p$ of $\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}-\{i\}-\{i+1\}-\{i-1\}-\{v\}$}{fill in $r_{i, 3}$ through $r_{i, 8}$ with $p$\\ \tcc{in lines 2 and 3, $i+1$ and $i-1$ are to be read $\mod{6}$ using the numbers $1, 2, 3, 4, 5, 6$}\
\If{any edge is in two rows in the same direction}{go to next $p$}\
\If{total number of faces mentioned through row $r_i>28$}{go to next $p$}\
\If{any edge is in more than two faces}{go to next $p$}\ \If{$i<6$}{check\_perms\_of\_row$(i+1)$}\ \If{$i=6$}{\If{the total number of faces mentioned through row $r_6$ is less than 28}{ \If{the total number of faces is 27}{check if triangles of vertices 7, 8, 9, 10 can complete an embedding} \If{the total number of faces is 26}{check if pairs of triangles of vertices 7, 8, 9, 10 can complete an embedding} \If{the total number of faces is less than 26}{add this rotation sequence to set of candidates for further verification}} \If{for any $j \in \{7,8,9,10\}$, the triangular faces containing $j$ do not make a cycle of length 9}{go to next $p$} print out the first six rows of the rotation sequence}} \end{algorithm}
\begin{table}[h] \begin{center}
\begin{tabular} { | l | r | }
\hline
Check & Number of Cases\\
\hline
Row 4: Check if edges opposite & 868209541\\
Check if number of faces $\leq28$ & 79975\\
\hline
Row 5: Check if edges opposite & 4290114\\
Check if number of faces $\leq28$ & 3588229\\
Check if each edge is in at most $2$ faces & 8860\\
\hline
Row 6: Check if edges opposite & 187765\\
Check if number of faces $\leq28$ & 30226\\
Check if each edge is in at most $2$ faces & 747\\
\hline
See if cycle length around $7, 8, 9, 10$ is 9 & 0\\
27 faces see if completable & 8 \\
26 faces see if completable & 11 \\ \hline \end{tabular} \caption{Number of arrangements of the first $i$ rows of candidate rotation sequence for $K_{10}$ satisfying each criterion} \label{table:4} \end{center} \end{table}
\section*{Summary} \begin{table}[h] \begin{center}
\begin{tabular} { | l | c | c | c | c | }
\hline
$g$/$n$ & 8 & 9 & 10 & 11 \\
\hline
0 & 18 & 36 & 60 & 100 \\
\hline
1 & 4 & 9 & 23 & [37, 42] \\
\hline
2 & 0 & 4 & [9, 12] & [16, 27] \\
\hline
3 & - & 0 & 3 & [10, 14] \\
\hline
4 & - & - & 0 & 4 \\
\hline
5 & - & - & - & 0 \\ \hline \end{tabular} \caption{Range of $cr_g(K_n)$} \label{table:3} \end{center} \end{table}
The following figures show the drawings of complete graphs on different genus surfaces. Figures \ref{figure:1} and \ref{figure:2} show $K_{10}$ and $K_{11}$ on a genus $2$ surface. The diagrams were based on Adrian Riskin's $cr_2(K_9)=4$ drawing.
Figures \ref{figure:3}, \ref{figure:4}, and \ref{figure:5} show graphs on a genus 3 surface using the rotation sequences. Figure \ref{figure:3} shows an embedding of the graph $K_9$ on a surface of genus 3. Figure \ref{figure:4} shows $K_{10}$ on a genus 3 surface with 3 crossings, the lowest number possible. Figure \ref{figure:5} shows $K_{11}$ on a surface of genus 3 with 14 crossings, which I establish as an upper bound on the crossing number. Both Figures \ref{figure:4} and \ref{figure:5} were produced by adding a vertex to an embedding.
Figures \ref{figure:6} and \ref{figure:7} show graphs on a genus 4 surface also using the rotation sequences. Figure \ref{figure:6} is an embedding of $K_{10}$ on a surface of genus 4. Figure \ref{figure:7} shows $cr_4(K_{11})=4$.
\begin{figure}
\caption{$K_{10}$ on a surface of genus 2 with 12 crossings}
\label{figure:1}
\end{figure}
\begin{figure}
\caption{$K_{11}$ on a surface of genus 2 with 27 crossings}
\label{figure:2}
\end{figure}
\begin{figure}
\caption{Embedding of $K_{9}$ on a surface of genus 3}
\label{figure:3}
\end{figure}
\begin{figure}
\caption{$K_{10}$ on a surface of genus 3 with 3 crossings}
\label{figure:4}
\end{figure}
\begin{figure}
\caption{$K_{11}$ on a surface of genus 3 with 14 crossings}
\label{figure:5}
\end{figure}
\begin{figure}
\caption{Embedding of $K_{10}$ on a surface of genus 4}
\label{figure:6}
\end{figure}
\begin{figure}
\caption{$K_{11}$ on a surface of genus 4 with 4 crossings}
\label{figure:7}
\end{figure}
The crossing number for each complete graph when $g=0$ has been obtained using Guy's conjecture $Z(n) = \frac{1}{4}\lfloor\frac{n}{2}\rfloor\lfloor\frac{n-1}{2}\rfloor\lfloor\frac{n-2}{2}\rfloor\lfloor\frac{n-3}{2}\rfloor$, which Pan and Richter proved for $n\leq12$. The crossing numbers $4, 9,$ and $23$ for $n=8, 9, 10$ when $g=1$ were proven by Guy, Jenkyns, and Schaer. For $cr_1(K_{11})$, Guy, Jenkyns, and Schaer proved an upper bound of $42$, while the lower bound of $37$ follows from $\frac{23}{210} \binom{n}{4}$. The genus of each complete graph was calculated using Ringel and Youngs's formula $\gamma(K_n)=\lceil\frac{(n-3)(n-4)}{12}\rceil$, which is why $cr_2(K_8), cr_3(K_9), cr_4(K_{10}),$ and $cr_5(K_{11})$ are all 0. Riskin proved $cr_2(K_9)=4$, and the lower bounds for the rest of the complete graphs on different genus surfaces were established using Kainen's inequality $cr_g (K_n)\geq\binom{n}{2}-3n+3(2-2g)$.
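The entries of Table \ref{table:3} that come from Guy's conjecture and from Kainen's inequality can be recomputed directly; the following sketch (function names are mine) reproduces, for example, $Z(8)=18$ and the lower bounds $cr_3(K_{10})\geq3$ and $cr_4(K_{11})\geq4$:

```python
def Z(n):
    """Guy's conjectured crossing number of K_n on the sphere
    (proven for n <= 12)."""
    return ((n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2)) // 4

def kainen_lower_bound(n, g):
    """Kainen's lower bound cr_g(K_n) >= C(n,2) - 3n + 3(2 - 2g),
    truncated at zero."""
    return max(0, n * (n - 1) // 2 - 3 * n + 3 * (2 - 2 * g))

# Z(8), Z(9), Z(10), Z(11) give 18, 36, 60, 100 (the g = 0 row of Table 3);
# kainen_lower_bound(10, 3) gives 3 and kainen_lower_bound(11, 4) gives 4.
```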
The upper bounds for $cr_2(K_{10}), cr_2(K_{11}),$ and $cr_3(K_{11})$ and values of $cr_3(K_{10})$ and $cr_4(K_{11})$ are established above in Theorems 2 and 3 and Figures \ref{figure:1}, \ref{figure:2}, \ref{figure:3}, \ref{figure:4}, and \ref{figure:5}. Table \ref{table:3} shows the lower and upper bounds for $cr_g(K_n)$ when $g\leq5$ and $8\leq n\leq11$.
\end{document}
\begin{document}
\title{ Quantum Zeno effect on Quantum Discord }
\author{ A. Thilagam}
\address{Information Science, Engineering and Environment, Mawson Institute, University of South Australia, Australia
5095} \date{\today} \begin{abstract} We examine the quantum Zeno effect on the dynamics of quantum discord in two initially entangled qubits which are subjected to frequent measurements via decoherent coupling with independent reservoirs. The links between characteristic parameters such as system bias, measurement time duration,
strength of the initial entanglement between the two qubit systems and the dynamics of quantum discord are examined for two initial state configurations. For weak or unsharp measurements, the quantum discord, which is an intrinsically distinct entity from concurrence, serves as a reliable indicator of the crossover point in Zeno to anti-Zeno transitions. However, for highly precise quantum measurements, the monitoring device interferes significantly with the evolution dynamics of the monitored system, and the quantum discord yields indeterminate values in a reference frame where the observer is not an active constituent of the subsystems.
\end{abstract} \pacs{03.65.Xp,03.65.Yz, 03.65.Ud, 03.67.-a}
\maketitle
\section{Introduction}\label{c1a}
Recently, studies of separable and therefore non-entangled states containing other kinds of non-classical correlations have attracted increased attention. One such correlation measure, the quantum discord \cite{zu,ve1,ve2}, based on the difference between quantum and classical information theories,
incorporates more generalized correlations not seen in other non-classical correlations such as entanglement.
In particular, quantum states with zero entanglement are seen to possess quantum discord, and classical-quantum states, which are necessarily separable, have zero quantum discord. Two positive discord states can be mixed to obtain a zero-discord classical state, and two zero-discord classical states in orthogonal directions can be merged to form a non-zero discord state \cite{matt}. Moreover, the quantum discord is not restricted by the monogamy rule \cite{woot}, which is obeyed by the concurrence measure during entanglement sharing. Such intriguing features of quantum discord have opened up avenues for a variety of applications in non-markovian open quantum systems \cite{tern,ferr,fan,maz,pii}, spin array systems \cite{cil}, detection of quantum phase transitions \cite{wer},
quantum information processing \cite{datta} and quantum communication \cite{pia}.
The quantum Zeno effect (QZE) describes the retarded time evolution of a quantum state subjected to frequent measurements \cite{Misra,It,FacJ}. In the limiting case of continuous measurement, the time evolution of the state comes to a standstill. The opposite effect, which leads to enhancement of the time evolution, is known as the anti-Zeno effect (AZE) and has been observed to be much more ubiquitous than the Zeno effect \cite{ob}. In unstable systems, the occurrence of both QZE and AZE depends on critical parameters like measurement frequencies and environmental noise \cite{env}. Quantum systems exhibiting both effects
include the nanomechanical oscillator \cite{nosci}, a two-state system coupled to a spin chain environment in transverse magnetic fields \cite{wangS}, the non-equilibrium steady-state spin-fermion model, a variant of the Kondo model \cite{Segal}, the damped quantum harmonic oscillator \cite{env}, disordered spin systems \cite{japko} and trapped atomic systems \cite{rai}. The nanomechanical oscillator system, in particular, is of special interest as it provides an ideal medium for testing quantum effects on a macroscopic scale.
An inherent feature in determining quantum discord involves the one-sided projective measurements on a selected subsystem of the composite quantum state. As is well known, this introduces various counter-intuitive features linked with the measurement process itself, with associated controversies linked with the collapse of the wave function of the measured system. For instance, the term ``subjective reality'' was introduced by Wiseman \cite{wise} to describe the dependence of quantum trajectories on the observer's measurement frame; hence there are several ways in which monitored quantum systems can be interpreted.
A well-known approach to the widely used collapse postulate involves its replacement by the decoherence process imposed by a detector on the system under study \cite{zuro}. An alternative scheme involves the idea of quantum Zeno subspaces \cite{FacJ}, which provides a convenient platform for interpreting the Zeno effect. In this regard, we note that the active presence of a measuring device is not a requirement for quantum Zeno effects to be seen. This is due to the fact that the Zeno effect is linked to the evolution of the non-Hermitian Schr\"odinger equation associated with any irreversible mechanism, the act of measurement being a well-known one.
For low precision or unsharp measurements, the device $D$ introduces minimal disturbance on the measured system,
$S$ with state $u|S_u\rangle + v|S_v\rangle$.
The state of the measuring device can be $|D_u\rangle$ or
$|D_v\rangle$ after the measurement, and is different from its state before measurement, $|D_i\rangle$. The composite system $S \otimes D$ proceeds in an approximately unitary
fashion as $U |S_u\rangle |D_i\rangle = |S_u\rangle |D_u\rangle$,
$U |S_v\rangle |D_i\rangle = |S_v\rangle |D_v\rangle$. In the case of ideal measurements, the resulting state of the system after measurement generally belongs to the orthonormal basis of the quantum system. Thus for weak or unsharp measurements, the non-Hermitian term can be ignored, and simplified approaches such as that based on Kofman and Kurizki's formalism \cite{ob} can be employed to analyze the effect of measurements.
For highly precise measurements, any analysis of the quantum evolution becomes complicated due to the influence of the non-Hermitian term, which can interfere strongly with the dynamics of the measured system. Accordingly, we provide here an
analysis of the evolution of a measured system involving a non-Hermitian term which appears due to highly precise measurements or a strong monitoring device. This is performed by applying the results for the non-Hermitian Hamiltonian of a two-level system, originally solved in the context of the link between a decay term and Berry's phases by Garrison and Wright \cite{gar}, to our measurement model. We note that an analogous decay term is explicitly linked with the precision of quantum measurements: a higher measurement precision results in a larger magnitude of this decay term. Some ideas introduced in this work may thus be extended to study the links
between Berry phases and the quantum measurement problem.
For ideal or weak measurements, the von Neumann projection operator ${\cal P}$ \cite{Misra,FacJ} provides a convenient way to formulate measurement procedures in the Hilbert space ${\cal H}$ of a quantum system $S$. The initial density matrix $\rho_0$ of system $S$ is constrained within ${\cal H}_{\cal P}$ as $\rho_0 = {\cal P} \rho_0 {\cal P} ,\; \; \mathrm{Tr} [ \rho_0 {\cal P} ] = 1$. In the absence of any measurement, the state evolves as $\rho (t) = U(t) \rho_0 U^\dagger (t)$ where $U(t)=\exp(-iH^\star t)$, and $H^\star $ is a time-independent Hamiltonian. The probability that the system remains within ${\cal H}_{\cal P}$ is given by $P(t) = \mathrm{Tr} \left( U(t) \rho_0 U^\dagger(t) {\cal P}\right)$. In the event of measurement at time $\tau$, the density matrix $\rho(\tau)$ transforms as $ \rho(\tau) = \frac{1}{P(\tau)}\;{\cal P} U(\tau) \rho_0 U^\dagger(\tau) {\cal P}$. The survival probability in ${\cal H}_{\cal P}$ is given by $P(\tau) = \mathrm{Tr} \left(V(\tau) \rho_0 V^\dagger(\tau) \right)$ where $V(\tau) \equiv {\cal P} U(\tau){\cal P}$. For measurements taken at time intervals $\tau=t/N$, the survival probability is given by \begin{eqnarray} \label{eq:survorig} P^{(N)}(t) = \mathrm{Tr} \left( V_N(t) \rho_0 V_N^\dagger(t)\right),\\ \nonumber
V_N(t) = \left[ V\left(\frac{t}{N}\right)\right]^N. \end{eqnarray} At very large $N$, no transitions outside ${\cal H}_{\cal P}$ occur and $ P^{(N)}(t) \rightarrow 1$; this is the culmination of the mathematical formulation of the Zeno effect. Eq.~(\ref{eq:survorig}) embodies the intriguing effect of a measurement process: a system monitored to determine whether it remains in a particular state persists in that state.
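A simple quantitative illustration of this limit (a sketch under the assumption of a two-level system undergoing Rabi oscillation at frequency $\Omega$, which is not the specific model studied below): the single-interval survival probability is $\cos^2(\Omega t/N)$, so $P^{(N)}(t)=\cos^{2N}(\Omega t/N)\rightarrow1$ as $N\rightarrow\infty$.

```python
import math

def survival(N, t=math.pi / 2, omega=1.0):
    """Survival probability after N equally spaced projective measurements
    of a Rabi-oscillating two-level system: P^(N)(t) = cos(omega*t/N)^(2N)."""
    return math.cos(omega * t / N) ** (2 * N)

# Frequent measurement freezes the evolution (quantum Zeno effect): the
# survival probability grows monotonically with N and tends to 1.
probs = [survival(N) for N in (1, 10, 100, 1000)]
```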
This idea has been examined via the adiabatic theorem
\cite{FacJ} in which different outcomes are eliminated and the system evolves as a group of exclusive quantum Zeno subspaces within the total Hilbert space. The measurement procedure therefore has a decomposing effect on the total Hilbert space which is partitioned into orthogonal quantum Zeno subspaces \cite{FacJ}. The initial state remains in a particular invariant subspace, and its survival probability remains unchanged over a period of time.
The effect of measurement on the dynamics of quantum discord can be examined in one of several ways. An obvious one involves examining the role of the Zeno effect associated with measurements introduced in one subsystem in order to obtain the conditional entropy, enabling determination of the classical correlation measure based on optimal measurements. This procedure forms the basis of determining the quantum discord, as shown in earlier mathematical
formulations \cite{zu,ve1,ve2}. In order to evaluate the quantum discord, a set of positive-operator-valued measurements (POVM) needs to be performed in a neighboring partition. Does the measurement process itself induce a distinct category of quantum discord? How exactly can such optimal measurements be performed without incurring the quantum Zeno effect? What are the key attributes of an optimal measurement and the possible role played by the Zeno effect in POVM? In this regard, the consideration of distinct measurement
techniques in separate sub-systems will introduce greater depth to the analysis of the quantum discord present in the global system. This includes the effects due to the asymmetry of measurement procedures. However, such detailed investigations are not an easy task, as the difficulty of determining the quantum discord even for simpler systems is well known. So far, analytical forms have been derived only under restricted conditions \cite{luo,sara,ali,ali2}.
For the sake of obtaining analytical expressions, it is generally assumed that the measurement time duration or frequency of measurements is the same for subsystems not in contact with any reservoir system. We continue to assume
this model for simplicity of analytical treatment; however, we opt to examine the effect of measurement from a different perspective. This involves examining the influence of the Zeno-like effect associated with acts of continuous measurements
by the environment that is in contact with the qubit subsystem \cite{koro}. The well-known model of the
solid-state qubit interacting with a reservoir system presents a convenient platform for examining the complicated link between the quantum Zeno effect, quantum discord and the dynamics of Zeno subspaces. The reservoir may be viewed as providing the ``back-action" needed for the dynamical collapse of the wave function.
In order to keep the problem tractable, we consider in the first instance the well-known model of a pair of initially entangled spin-boson systems with independent harmonic reservoirs, found
useful in quantifying salient aspects of dissipative dynamics
of many quantum systems \cite{Leg,Weiss}. Factors such as spectral density, bias and temperature are considered to play important roles in the overall dynamics of the qubit-reservoir system. We follow Prezhdo's approach involving the quantum control of chemical reactivity by a solvent acting as the environment \cite{prez}, in which an anti-Zeno mechanism arises through the loss of electronic coherence in some chemical systems.
The interplay of various quantum interactions (non-local and local) between the environment and the qubit system results in the reservoir acting as continuous detector.
Our paper is organized as follows. In Section \ref{meas} we provide a brief review of the concept of quantum discord and highlight the role of measurements in its formulation. In Section \ref{c1b}, we describe salient features of the Zeno dynamics of the spin-boson system using Kofman and Kurizki's formalism \cite{ob}, which yields the effective decay of a quantum system under ideal measurements. In Section \ref{dyn} we investigate the influence of the quantum Zeno effect on the dynamics of the quantum discord for X-type qubit states
with two initial state configurations. We present our main results and make comparisons between the quantum discord and the concurrence measure. In Section \ref{exp}, we analyze the non-Hermitian dynamics resulting from highly precise measurements on a two-level quantum system, and highlight the appearance of exceptional points. A brief discussion and conclusions are then presented in Section \ref{con}.
\section{Measurements and Quantum discord}\label{meas}
Following the formulation of quantum discord in Refs.\cite{zu, ve1, ve2}, we express the quantum mutual information of a composite state $\rho$ of
two subsystems $A$ and $B$ as $\mathcal{I}(\rho) = S(\rho_A) + S(\rho_B) - S(\rho)$ for a density operator in $\mathcal{H}_A {\,\otimes\,} \mathcal{H}_B$. $\rho_{A}$ and $\rho_B$ are the reduced density matrices and $S(\rho_i)$ ($i=A,B$) denotes the well-known von Neumann entropy of the density operator $\rho_i$, $S(\rho_i)= - {\rm tr}(\rho_i \log\rho_i)$. The mutual information can also be written in terms of
quantum conditional entropy $S(\rho|\rho_A)= S(\rho) - S(\rho_A)$
as $\mathcal{I}(\rho) = S(\rho_B) - S(\rho|\rho_A)$.
The quantum Zeno effect appears as a result of the
measurement process intrinsic to the definition of the conditional entropy. A series of one-dimensional orthogonal projectors $\{\Pi_k\}$ induced in $\mathcal{H}_A$ leads to different outcomes of the measurement. We are presented with the post-measurement conditional state \cite{bylic}
$\rho_{B|k} = \frac{1}{p_k} (\Pi_k {\,\otimes\,} \mathbb{I}_B)\rho (\Pi_k {\,\otimes\,}
\mathbb{I}_B)$
where the probability $p_k = {\rm tr}[(\Pi_k{\,\otimes\,} \mathbb{I}_B)\rho]$ and $\Pi_k$ denotes the one-dimensional projector indexed by the
outcome $k$. A conditional entropy of the subsystem $B$ can be attached to $\rho_{B|k}$ based on the cumulative effect of the mutually exclusive measurements
on $A$ as $S(\rho|\{\Pi_k\}) = \sum_k p_k S(\rho_{B|k})$. The measurement induced mutual information is therefore
$ \mathcal{I}(\rho|\{\Pi_k\}) = S(\rho_B) - S(\rho|\{\Pi_k\})$
while the classical correlation measure based on optimal measurements is obtained as \cite{zu,ve1,ve2}
$ \mathcal{C}_{A}(\rho) = \sup_{\{\Pi_k\}} \mathcal{I}(\rho|\{\Pi_k\})$. The difference between $\mathcal{I}(\rho)$ and $\mathcal{C}_A(\rho)$ yields the asymmetric quantity known as the quantum discord, $ \mathcal{D}_{A}(\rho) = \mathcal{I}(\rho) - \mathcal{C}_A(\rho)$. A discord $\mathcal{D}_{B}(\rho)$ corresponding to measurements made on $B$ can likewise be obtained and need not be the same as $\mathcal{D}_{A}(\rho)$. As is to be expected, the quantum discord is not symmetric with respect to $A$ and $B$,
particularly if attributes such as the measurement duration employed in either subsystem differ.
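As a concrete illustration of the definitions above, the following Python sketch evaluates $\mathcal{D}_A(\rho)$ by brute force, replacing the supremum over $\{\Pi_k\}$ with a grid search over one-qubit projectors; the test state and grid resolution are arbitrary choices, not taken from the text:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho log2 rho)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def ptrace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep=0 gives rho_A."""
    r = rho.reshape(2, 2, 2, 2)
    return np.einsum('abcb->ac', r) if keep == 0 else np.einsum('aiaj->ij', r)

def cond_entropy(rho, theta, phi):
    """S(rho|{Pi_k}) = sum_k p_k S(rho_{B|k}) for projectors along (theta, phi)."""
    v = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    P0 = np.outer(v, v.conj())
    s = 0.0
    for P in (P0, np.eye(2) - P0):
        M = np.kron(P, np.eye(2))            # Pi_k (x) I_B
        p = float(np.real(np.trace(M @ rho)))
        if p > 1e-12:
            s += p * vn_entropy(ptrace(M @ rho @ M, 1) / p)
    return s

def discord_A(rho, grid=40):
    """D_A = I(rho) - C_A(rho), minimizing over a grid of projectors on A."""
    I = vn_entropy(ptrace(rho, 0)) + vn_entropy(ptrace(rho, 1)) - vn_entropy(rho)
    best = min(cond_entropy(rho, th, ph)
               for th in np.linspace(0, np.pi, grid)
               for ph in np.linspace(0, np.pi, grid))
    return I - (vn_entropy(ptrace(rho, 1)) - best)

# Maximally entangled Bell state (|00> + |11>)/sqrt(2):
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(discord_A(bell))
```

For the Bell state every projective measurement on $A$ leaves $B$ in a pure conditional state, so the conditional entropy vanishes and the sketch returns $\mathcal{D}_A=1$ ($\mathcal{I}=2$, $\mathcal{C}_A=1$).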
\section{Zeno dynamics of the spin-boson system}\label{c1b}
In order to examine the dynamics of the spin-boson system, we utilize the density matrix associated with the Liouville equation $\frac{\partial \rho}
{\partial t} = - i [\widehat H_{\rm T},\rho(t)]$, where the total Hamiltonian $\widehat H_{\rm T} =\widehat H_{\rm qb}+ \widehat H_{\rm os} + \widehat H_{\rm qb-os}$ and $\widehat H_{\rm qb}$ of the two-level qubit assumes the form $\widehat H_{\rm qb} = \hbar \left(\frac{\Delta \Omega}{2}\, \sigma_{z} +\Delta \sigma_{x}\right)$. The Pauli matrices are expressed in terms of the two possible states $(\ket{0},\ket{1})$, $\sigma_{x} = \ket{0} \bra{1} + \ket{1} \bra{0}$ and $\sigma_{z} = \ket{1} \bra{1} - \ket{0} \bra{0}$. $\Delta \Omega$ is the biasing energy while $\Delta$ is the tunneling amplitude.
We consider that the two uncoupled qubits are coupled to independent reservoirs of harmonic oscillators, $\widehat H_{\rm os} = \sum_{\bf q} \hbar \omega_{\bf q}\, b_{\bf q}^{\dagger}\,b_{\bf q}$. $b_{\bf q}^{\dagger}\,$ and $b_{\bf q}\,$ are the respective creation and annihilation operators of the quantum oscillator with wave vector ${\bf q}$. The qubit-oscillator interaction Hamiltonian is linear in terms of oscillator creation and annihilation operators $\widehat H_{\rm qb-os} = \sum_{{\bf q}} \lambda_{_{\bf q}}\, \left ( b_{\bf q}^\dagger + b_{\bf q}\right ) \sigma_{z}$. The term $\lambda_{_{\bf q}}$ denotes the coupling between the qubit and the environment and is characterized by the spectral density function, $J(\omega)=\sum_{\bf q}\lambda_{_{\bf q}}^2\delta(\omega-\omega_{\bf q})$, which we assume to be of the ohmic form $J(\omega)= 2 \pi \eta \omega e^{-\frac{\omega}{\omega_c}}$.
$\eta$ is the dimensionless reservoir coupling function, and $\omega_c$ is the reservoir cutoff frequency. We consider the measuring device to be an active constituent of the total Hamiltonian $\widehat H_{\rm T} =\widehat H_{\rm qb}+ \widehat H_{\rm os} + \widehat H_{\rm qb-os}$. The
reservoir assumes the role of the
measuring device, by inducing
a projection operation that disrupts the normal evolution of Hamiltonian $\widehat H_{\rm T}$. The reservoir here serves the same role as the solvent in Prezhdo's work on the quantum control of chemical reactivity \cite{prez}.
Each qubit decays to oscillator states in the reservoir when measurements are made, making a transition from its excited state $\ket{1}_{\mathrm{q}}$ to ground state $\ket{0}_{\mathrm{q}}$.
We consider an initial state of the qubit with its corresponding reservoir in the vacuum state, existing in equilibrium at temperature $T=0$K $
|\phi _{i}\rangle =|1\rangle _{\mathrm{q}}\otimes
\prod_{k=1}^{N'}|0_{k}\rangle _{\mathrm{r}}=
|1\rangle _{\mathrm{q}}\otimes \ket{{\bf 0}}_{\mathrm{r}}$ where $\ket{{\bf 0}}_{\mathrm{r}}$ implies that all $N'$ wavevector modes of the reservoir are unoccupied in the initial state.
$|\phi _{i}\rangle$ then undergoes the following mode of decay \begin{equation}
|\phi _{i}\rangle \longrightarrow
u(t) \; \ket{1}_{\mathrm{q}} \ket{{\bf 0}}_{\mathrm{r}} + v(t) \; \ket{0}_{\mathrm{q}} \ket{{\bf 1}}_{\mathrm{r}} . \label{fstate} \end{equation} In order to keep the problem tractable, we consider that $\ket{{\bf 1}}_{\mathrm{r}}$ denotes a collective state of the reservoir, $
|{\bf 1}\rangle _{\mathrm{r}}=\frac{1}{v(t)}
\sum_{n} \lambda _{\{n\}}(t)|\{n\}\rangle$ where $\{n\}$ denotes an occupation scheme in which there are
$n_i$ oscillators with wavevector $k=i$ in the reservoir and we define the state $|\{n\}\rangle$ as $
|\{n\}\rangle =|n_0,n_1,n_2...n_i..n_{N'}\rangle$.
For ideal measurements, the functions $u(t)$ and $v(t)$ in Eq.(\ref{fstate}) satisfy the relation $u(t)^2 + v(t)^2=1$; the relation can be considered approximately satisfied for unsharp or weak measurements, which introduce minimal disturbance to the system being monitored. The square of the function $u(t)$ yields the survival probability
associated with $N$ measurements performed at regular intervals $\tau$, $P(t)= u(t)^2 = \exp(-N \Delta^2 \tau^2/4)$ where $t=N \tau$. In the extreme limit $\tau \rightarrow 0, \; u(t) \rightarrow 1$ and the decay into phonon states is totally inhibited. For small $\tau, N$ values and a weak qubit-reservoir coupling, we assume that the state of the collective reservoir at time $t= \tau$ is
equivalent to that at $t= N \tau$. The second-order processes giving rise to exchanges between oscillators, and hence to changes in the ensemble configuration of oscillators, can be considered minimal and neglected at small $t$. At very short times, the effective relaxation rate for the two-level qubit is given by $\gamma(\tau)=(\Delta/2)^2\tau$ so that $u(t)^2 = \exp(-\gamma(\tau) \tau)$. The decay of a quantum state interacting with a reservoir is almost zero at the beginning of the decay process, a behaviour typical of the quantum Zeno effect. At intermediate measurement time intervals, the decay of the quantum state may be accelerated, as is the case in anti-Zeno effects.
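The short-time law can be checked numerically: taking the per-interval survival amplitude to be $\cos(\Delta\tau/2)$, consistent with the rate $\gamma(\tau)=(\Delta/2)^2\tau$, the product of $N$ interval survivals reproduces $\exp(-N\Delta^2\tau^2/4)$ to leading order in $\tau$. A minimal sketch (the parameter values are arbitrary):

```python
import numpy as np

def p_exponential(Delta, tau, N):
    """Survival probability P(t) = exp(-N Delta^2 tau^2 / 4), with t = N tau."""
    return np.exp(-N * Delta**2 * tau**2 / 4)

def p_product(Delta, tau, N):
    """Product of N per-interval survivals cos^2(Delta tau / 2)."""
    return np.cos(Delta * tau / 2) ** (2 * N)

Delta, t = 1.0, 1.0
for N in (10, 100, 1000):   # smaller tau = t/N gives better agreement
    tau = t / N
    print(N, p_exponential(Delta, tau, N), p_product(Delta, tau, N))
```

The two expressions agree to $O(\tau^4)$ per interval, and both approach unity as $\tau\rightarrow 0$ at fixed $t$.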
It is to be noted that Eq.(\ref{fstate}) does not provide a dynamic description of the measurement process such as the evolution of the system during or after $N$ measurements. Eq.(\ref{fstate}) stems from a
probabilistic interpretation of quantum measurements and
predicts the two possible outcomes, consistent with Born's rule and the probabilistic nature of the projection postulate. The survival probability given by $|u(t)|^2$ is the cumulative outcome of several successive measurements. As is well known, inconsistencies still remain in problems associated with quantum measurements. The unitary and reversible features of the Schr\"odinger equation and the non-unitary elements inherent in the projection postulate are clearly incompatible. However, both these core
processes need to be unified in order to examine the influence of continuous monitoring on the unmeasured evolution of a quantum system, which is a challenging task.
We evaluate the effective decay rate of the spin-boson model at small values of $\tau$, using Kofman and Kurizki's formalism, which is based on the overlap of two functions \cite{ob} \begin{equation} \gamma(\tau)=2 \left(\frac{\Delta}{2}\right)^2 \int_0^{\infty} d\omega K(\omega) F_{\tau}(\omega-{\Delta \Omega}). \label{eq:overlap} \end{equation} The function $F_{\tau}(\omega-{\Delta \Omega}) =\frac{\tau}{2\pi} {\rm sinc}^2\left[ \frac{(\omega- {\Delta \Omega})\tau}{2}\right]$ is associated with measurements at intervals of $\tau$. The reservoir coupling function $K(\omega)$ is evaluated using $K(\omega) = \int_0^{\infty} \; e^{i \omega t} \cos[\Delta \Omega + G_1(t)] e^{-G_2(t)} d t $ where $G_1(t) = \int_0^{\infty}d\omega \frac{J(\omega)}{\omega^2}\sin\omega t$ and $ G_2(t) = \int_0^{\infty}d\omega \frac{J(\omega)}{\omega^2} \coth[\frac{\beta \omega}{2}] (1-\cos\omega t)$, where $\beta = \frac{1}{k_B T}$ and $T$ is the lattice temperature. Explicit expressions for $G_1(t)$ and $G_2(t)$ in Refs.\cite{Leg,Weiss} for an ohmic $J(\omega)$ show the strong dependence of $K(\omega)$ on the reservoir coupling function $\eta$ and the exponential cutoff frequency $\omega_c$. The occurrence of QZE or AZE is determined by changes in the overlap between the functions $F_{\tau}(\omega)$ and $K(\omega)$ as $\tau$ is varied: QZE (AZE) occurs when the overlap decreases (increases) with decreasing $\tau$. The crossover from QZE to AZE is most pronounced when ${\tau}$ is increased in systems with weak spin-boson coupling \cite{thila}, and also when the bias $\Delta \Omega$ is increased.
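The overlap criterion can be visualized with a numerical sketch of Eq.(\ref{eq:overlap}). The Lorentzian used below is an illustrative stand-in for the reservoir coupling function (in the text $K(\omega)$ follows from the ohmic $J(\omega)$), and all parameter values are arbitrary:

```python
import numpy as np

def F_tau(omega, tau):
    """Measurement filter F_tau(w) = (tau / 2 pi) sinc^2(w tau / 2).
    Note np.sinc(x) = sin(pi x) / (pi x)."""
    return (tau / (2 * np.pi)) * np.sinc(omega * tau / (2 * np.pi)) ** 2

def K_model(omega, omega0=5.0, width=0.5):
    """Illustrative Lorentzian stand-in for the reservoir coupling function."""
    return (1 / np.pi) * width / ((omega - omega0) ** 2 + width ** 2)

def gamma(tau, dOmega, Delta=1.0):
    """gamma(tau) = 2 (Delta/2)^2 * integral of K(w) F_tau(w - dOmega) over w."""
    w = np.linspace(0.0, 50.0, 200001)
    dw = w[1] - w[0]
    return 2 * (Delta / 2) ** 2 * np.sum(K_model(w) * F_tau(w - dOmega, tau)) * dw

# Measuring on resonance with the model peak: frequent measurements
# (small tau) spread F_tau off the peak and suppress decay (QZE).
print(gamma(0.2, 5.0), gamma(10.0, 5.0))
# Measuring far off the peak: frequent measurements broaden F_tau onto
# the peak and accelerate decay (AZE).
print(gamma(0.2, 0.0), gamma(10.0, 0.0))
```

On resonance the overlap shrinks as $\tau$ decreases, while far off resonance the broadened filter picks up the peak, illustrating the QZE/AZE criterion stated above.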
\subsection{Approximate relations of the Zeno-anti Zeno crossover point}\label{ca}
To obtain approximate analytical relations, we employ an effective decay rate applicable at short times \cite{Segal}, $ \gamma(\tau)= \frac{\Delta^2} {2\tau} \Re \int_0^{\tau}dt_1 \int_0^{t_1}K(t')dt'$ where $K(t)$ is the Fourier transform of the coupling function $K(\omega)$ defined below Eq.(\ref{eq:overlap}). Using explicit expressions for $G_1(t)$ and $G_2(t)$ given in Refs.\cite{Leg,Weiss} for an ohmic $J(\omega)$, we obtain $\gamma(\tau)$ as follows \begin{eqnarray} \label{eq:toy2} \gamma(\tau)= \frac{\Delta^2}{2 } {(\frac{\pi}{\beta})}^{2 \eta}
\int_0^{\tau} && dt \cos[\Delta \Omega + 2 \eta \tan^{-1}\omega_c t] \\ \nonumber && \times \frac{t^{2 \eta}}{(1+ (\omega_c t)^2)^{\eta}} {({\rm csch} \frac{\pi t}{\beta})}^{2 \eta} \end{eqnarray} where $\beta = \frac{1}{k_B T}$, $T$ is the lattice temperature and ${\rm csch}(x)$ is the hyperbolic cosecant function. Using $\Delta \Omega=0$, $T=0$K, $\Delta^2 =2$ and $\omega_c=1$, we obtain simple expressions for $\gamma(\tau)$ and $\frac{\partial \gamma(\tau)}{\partial \tau}$ \begin{eqnarray} \nonumber \gamma(\tau)= && \frac{(1+ \tau^2)^{-\eta}}{(2 \eta-1)} \left (\sin[ 2 \eta \tan^{-1} \tau]-
\tau \cos[ 2 \eta \tan^{-1} \tau] \right ) \\ \label{eq:toy3} \\ \label{eq:toy4} \frac{\partial \gamma(\tau)}{\partial \tau} && = (1+ \tau^2)^{-\eta} \cos[ 2 \eta \tan^{-1} \tau] \end{eqnarray} At very short time intervals $\omega_c \tau < 1$, $\gamma(\tau) \approx \tau$,
whereas at very large times $\omega_c \tau \rightarrow \infty$ and for $\eta \neq 1/2$, $\gamma(\tau) \approx \frac{\cos \pi \eta}{1-2 \eta}\; \tau^{1-2 \eta}$.
At $\eta = 1/2$, $\frac{\cos \pi \eta}{1-2 \eta} \rightarrow \frac{\pi}{2}$ and we get a rate which is independent of the measuring device, $\gamma(\tau) = \frac{\pi \Delta^2}{4 \omega_c}$.
At the point of Zeno-anti-Zeno transition, $\frac{\partial \gamma(\tau)}{\partial \tau} = 0$, and using Eq.(\ref{eq:toy4}) we obtain an explicit expression for the measurement interval $\tau_{_{T}}$ at which the Zeno to anti-Zeno transition occurs ($\eta \neq 1/2$) \begin{equation} \tau_{_{T}}=\tan \frac {\pi}{4 \eta} \label{eq:rateT} \end{equation} At non-zero values of $\Delta \Omega$, where the spin-boson system exists under biased conditions, the measurement interval $\tau_{_{T}}$ at which the Zeno to anti-Zeno transition occurs is modified to \begin{equation} \label{eq:rateTB} \tau_{_{T}}=\tan \left[ \frac{1}{2 \eta} \left(\frac{\pi}{2}- \mu \Delta \Omega \right)\right ] \end{equation} where the factor $\mu$ satisfies $2< \mu < 3$ and depends on the bias $\Delta \Omega$. Eq.(\ref{eq:rateTB}) is consistent with the fact that an increase in the biasing energy $\Delta \Omega$ increases the probability of a Zeno-anti-Zeno transition.
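The location of $\tau_{_T}$ in Eq.(\ref{eq:rateT}) follows from setting $\cos[2\eta\tan^{-1}\tau]=0$, i.e. $2\eta\tan^{-1}\tau_{_T}=\pi/2$; the positive prefactor multiplying the cosine in $\partial\gamma/\partial\tau$ does not affect the location of the zero. A quick numerical check of the sign change, with $\eta=0.75$ as an arbitrary choice (the prefactor's precise form is immaterial here):

```python
import numpy as np

def dgamma_dtau(tau, eta):
    """Derivative of the decay rate: a positive prefactor times
    cos(2 eta arctan tau); only the cosine controls the sign."""
    return (1 + tau**2) ** (-eta) * np.cos(2 * eta * np.arctan(tau))

eta = 0.75
tau_T = np.tan(np.pi / (4 * eta))   # Eq.(eq:rateT); real for eta > 1/2
print(tau_T, dgamma_dtau(tau_T, eta))
# Sign change: Zeno regime (derivative > 0) before tau_T,
# anti-Zeno regime (derivative < 0) after tau_T.
print(dgamma_dtau(0.5 * tau_T, eta), dgamma_dtau(2 * tau_T, eta))
```
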
It is important to note that the definition of the Zeno-anti-Zeno transition is based on the properties of the decay rate $\gamma(\tau)$, and not on a fixed natural rate. $\tau$ can be viewed as the duration of one of many measurement pulses, and is therefore specific to the local dynamics of the quantum system being monitored. We point out the difference in settings between the current work and an earlier work \cite{thila} in which the reservoir constitutes a part of the dynamical system that is monitored by the measuring device. In Ref. \cite{thila}, Zeno to anti-Zeno features were revealed even with the first (of many) measurements by a distant observer,
due to the continuous measurement effect by the reservoir of oscillators.
\section{\label{dyn} Dynamics of Quantum discord for X-type qubit states }
In order to examine the joint evolution of a pair of two-level qubit systems in uncorrelated reservoirs, we consider the following Bell-like initial state \begin{eqnarray} \ket{\Phi}_0 &=& \left[ a\ket{0}_{\mathrm{q1}}\ket{0}_{\mathrm{q2}} + b\ket{1}_{\mathrm{q1}}\ket{1}_{\mathrm{q2}} \right]
\ket{0}_{\mathrm{r1}} \ket{0}_{\mathrm{r2}},
\label{fstate2} \end{eqnarray} where $i$=$1,2$ denotes the two qubit-reservoir systems with associated functions $u_i(t)$ in Eq.(\ref{fstate}).
$a,b$ are real coefficients satisfying $a^2+b^2=1$. Using Eq.(\ref{fstate}) and tracing out the reservoir states we obtain a time-dependent qubit-qubit reduced density matrix in the basis $(\ket{0 \;0},\ket{0 \;1},\ket{1 \;0},\ket{1 \;1})$ which evolves with time duration $\tau$ as \begin{eqnarray} \label{matrix1} \rho_{_{\mathrm{q1,q2}}}(t)= \left( \begin{array}{cccc}
f_1& 0 & 0 &f_5 \\
0 & f_2 & 0 & 0 \\
0 & 0 & f_3 & 0 \\
f_5& 0 & 0 & f_4\\ \end{array} \right). \end{eqnarray} where $f_1=a^2+b^2v_1(\tau)^2 v_2(\tau)^2$, $f_5=a b u_1(\tau)\; u_2(\tau)$, $f_2=b^2 v_1(\tau)^2 u_2(\tau)^2$, $f_3= b^2 u_1(\tau)^2 v_2(\tau)^2$, $f_4= b^2 u_1(\tau)^2 u_2(\tau)^2$. We assume that the usual unit trace and positivity conditions of the density operator
$\rho_{_{\mathrm{q1,q2}}}$ are satisfied; however, these may not constitute a strict requirement for the determination of the quantum discord. The reservoir-reservoir reduced density matrix $\rho_{_{\mathrm{r1,r2}}}$ is similarly obtained by tracing out the qubit states. Each non-zero matrix term of $\rho_{_{\mathrm{r1,r2}}}$ is easily obtained from the corresponding term of $ \rho_{_{\mathrm{q1,q2}}}(t)$ by swapping
$u_i \leftrightarrow v_i$. Both matrices possess the well-known
$X$-state structure, which preserves its form during evolution. The well-known Wootters
concurrence \cite{woot} for the density matrix in Eq.(\ref{matrix1}) is $\mathcal{C}_{_{\mathrm{q1,q2}}}(\tau)= 2 b e^{-{\frac{1}{2}}(\gamma_1+\gamma_2)\tau}
\times \left[ a-b (1-e^{-\gamma_1 \tau})^{\frac{1}{2}}(1-e^{-\gamma_2 \tau})^{\frac{1}{2}} \right]$ and $\mathcal{C}_{_{\mathrm{r1,r2}}}(\tau)=2 b(1-e^{-\gamma_1 \tau})^{\frac{1}{2}} (1-e^{-\gamma_2 \tau})^{\frac{1}{2}} \times [a-b e^{-{\frac{1}{2}}(\gamma_1+\gamma_2)\tau}]$
\cite{thila}.
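The closed form for $\mathcal{C}_{_{\mathrm{q1,q2}}}(\tau)$ can be checked against a direct Wootters computation on the density matrix of Eq.(\ref{matrix1}), using $u_i(\tau)^2=e^{-\gamma_i\tau}$ and $v_i^2=1-u_i^2$; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

def wootters(rho):
    """Wootters concurrence C = max(0, l1 - l2 - l3 - l4), with l_i the
    decreasing square roots of the eigenvalues of rho (sy x sy) rho* (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ Y @ rho.conj() @ Y)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def rho_q(a, g1, g2, tau):
    """Density matrix of Eq.(matrix1) with u_i^2 = exp(-g_i tau)."""
    b = np.sqrt(1 - a**2)
    u1, u2 = np.exp(-g1 * tau / 2), np.exp(-g2 * tau / 2)
    v1, v2 = np.sqrt(1 - u1**2), np.sqrt(1 - u2**2)
    f1 = a**2 + b**2 * v1**2 * v2**2
    f2, f3, f4 = b**2 * v1**2 * u2**2, b**2 * u1**2 * v2**2, b**2 * u1**2 * u2**2
    rho = np.diag([f1, f2, f3, f4]).astype(complex)
    rho[0, 3] = rho[3, 0] = a * b * u1 * u2   # f5
    return rho

a, g1, g2, tau = np.sqrt(0.2), 1.0, 1.0, 0.1
b = np.sqrt(1 - a**2)
analytic = 2 * b * np.exp(-(g1 + g2) * tau / 2) * (
    a - b * np.sqrt((1 - np.exp(-g1 * tau)) * (1 - np.exp(-g2 * tau))))
print(wootters(rho_q(a, g1, g2, tau)), analytic)
```

For this X state the concurrence reduces to $2\max(0, |f_5| - \sqrt{f_2 f_3})$, which reproduces the closed form quoted above.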
The density matrix in Eq.(\ref{matrix1}) yields $ S(\rho_{_{\mathrm{qi}}}) = -b^2 u_i^2 \log_2[b^2 u_i^2]- (a^2+b^2 v_i^2) \log_2[a^2+b^2 v_i^2]$ ($i=1,2$) with explicit dependence on the measurement time duration $\tau$, the system bias
$\Delta \Omega$ and the tunneling amplitude $\Delta$ via the functions $u_i$. The condition $ S(\rho_{_{\mathrm{q1}}})=S(\rho_{_{\mathrm{q2}}})$ is therefore satisfied only if these parameters are the same for both subsystems. The quantum discord present in the two-qubit ($\mathcal{D}_{_{\mathrm{q1,q2}}}(\tau)$) and two-reservoir ($\mathcal{D}_{_{\mathrm{r1,r2}}}(\tau)$) partitions for the subclass of density matrices for which $\gamma_1 = \gamma_2$ (i.e. $f_2=f_3$) is evaluated following Fanchini et al. \cite{fan}. We obtain $\mathcal{D}_{_{\mathrm{q1,q2}}}(\tau) = H(b^2 u^2)- H(\frac{1}{2}(1+ (1-4 b^2 u^2 v^2)^{1/2}))$,
and from which $\mathcal{D}_{_{\mathrm{r1,r2}}}(\tau)$ is obtained by swapping
$u \leftrightarrow v$. The function $H(x)=-x \log_2x-(1-x) \log_2(1-x)$,
and the difference in quantum discords, $\mathcal{D}_{_{\mathrm{q1,q2}}}-\mathcal{D}_{_{\mathrm{r1,r2}}} = H(b^2 u^2)-H(b^2 v^2)$.
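The quoted difference follows because the term $H(\frac{1}{2}(1+(1-4b^2u^2v^2)^{1/2}))$ is invariant under $u\leftrightarrow v$ and cancels in the difference. A direct numerical check, with arbitrary parameter values:

```python
import numpy as np

def H(x):
    """Binary entropy H(x) = -x log2 x - (1 - x) log2(1 - x)."""
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def discord(b2, u2):
    """D = H(b^2 u^2) - H((1 + sqrt(1 - 4 b^2 u^2 v^2)) / 2), with v^2 = 1 - u^2."""
    v2 = 1 - u2
    return H(b2 * u2) - H(0.5 * (1 + np.sqrt(1 - 4 * b2 * u2 * v2)))

b2, u2 = 0.8, 0.7
diff = discord(b2, u2) - discord(b2, 1 - u2)   # swap u <-> v
# The symmetric term cancels, leaving H(b^2 u^2) - H(b^2 v^2):
print(diff, H(b2 * u2) - H(b2 * (1 - u2)))
```
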
For $\gamma_1 \neq \gamma_2$ or unequal $u_1, u_2$ values, the quantum discord of the density matrix in Eq.(\ref{matrix1}) can be evaluated following the main results in Refs. \cite{ali,ali2}, where the quantum conditional entropy is generalized as
$S(\rho|\{\Pi_k\}) = p_0 \, S(\rho_0) + p_1 \, S(\rho_1)$, based on the earlier work of Luo \cite{luo}. The terms $p_0 = [(f_1 + f_3)k + (f_2 + f_4)l]$, $p_1 =[ (f_1+ f_3)l + (f_2 + f_4)k]$ and $S(\rho_0), \, S(\rho_1)$ are dependent on generalized angles $\theta, \theta'$.
The generally cumbersome procedure of determining
$S(\rho|\{\Pi_k\})$ and the classical correlation is greatly simplified if the cross term $\rho_{23}=0$ (following the notation in Ref.\cite{ali}), as is the case for the density matrix in Eq.(\ref{matrix1}). The
problem reduces to a minimization over just one parameter, $k$ or $l=1-k$, instead of the set of three parameters.
\begin{figure}
\label{fig1}
\end{figure}
Figures~\ref{fig1}a,b show the notable differences, in the context of the Zeno effect, between the Wootters concurrence and the quantum discord. For the similar bias configuration, $\Delta \Omega_1$=$\Delta \Omega_2$=$0.65$, and at the initial state parameter $a=\sqrt{1/5}$ (see Eq.(\ref{fstate2})),
the qubit-qubit concurrence displays death and rebirth events with increasing $\tau$, while the reservoir-reservoir concurrence is short-lived. There is some departure from this trend for the dissimilar bias configuration, $\Delta \Omega_1$=$0.65, \Delta \Omega_2$=$0.15$, with no rebirth in the qubit-qubit concurrence, while the reservoir-reservoir concurrence persists for longer times. This behavior is in stark contrast to the more resilient quantum discord, which clearly displays the transition point for the similar bias configuration in Figure~\ref{fig1}b. Due to coupling with a system of lower bias, a transition point is not present in the dissimilar bias configuration.
The crossover or transition point which occurs at the
minimum (maximum) in the two-qubit partition (two-reservoir partition) can be numerically verified using Eq.(\ref{eq:rateT}). We note that the crossover point at a Zeno/anti-Zeno transition coincides with the equivalent point for the quantum discord; thus a decrease, followed by an increase and a subsequent decrease, in the quantum discord can be interpreted as a sign of the Zeno/anti-Zeno transition.
\begin{figure}
\label{fig2}
\end{figure}
Figures~\ref{fig2}a,b,c,d, which incorporate a change in the initial state parameter $a$, show that the two-reservoir discord best captures the Zeno-anti-Zeno transition point. While it is known that the quantum discord remains non-zero under various conditions \cite{zu,ve1,ve2}, these results show that the quantum discord reliably displays the Zeno-anti-Zeno dynamics occurring in separate qubit-reservoir subsystems, even when these are weakly coupled (small values of $a$).
\subsection{\label{dyn2nd} Quantum discord in an initial state with single excitation}
The analysis of quantum discord can be extended to the initial Bell-like state with just a single excitation residing in either of the qubits \begin{eqnarray} \ket{\Phi}_0 &=& \left[ c\ket{0}_{\mathrm{ex1}}\ket{1}_{\mathrm{ex2}} + d\ket{1}_{\mathrm{ex1}}\ket{0}_{\mathrm{ex2}} \right]
\ket{0}_{\mathrm{r1}} \ket{0}_{\mathrm{r2}}, \label{gstate2} \end{eqnarray}
where $i$=$1,2$ denote the two qubit-reservoir systems with associated functions $u_i(t)$. As in the case in Eq.(\ref{matrix1}), we trace out the reservoir states to obtain a time-dependent qubit-qubit reduced density matrix \begin{equation}\label{matrix2} \rho_{_{\mathrm{q1,q2}}}(t)=\left( \begin{array}{cccc}
g_1 & 0 & 0 & 0 \\
0 & g_2 & g_4 & 0 \\
0 & g_4 & g_3 & 0 \\
0 & 0 & 0 & 0\\ \end{array} \right). \end{equation} where for $t \ge 0$, the matrix elements evolve as $g_1(t)= c^2 v_2(t)^2 + d^2 v_1(t)^2$, $g_2(t)= c^2 u_2(t)^2$, $g_3(t)= d^2 u_1(t)^2$ and $ g_4(t)=c d u_1(t) u_2(t)$. Following Fanchini et al. \cite{fan}, we obtain $\mathcal{D}_{_{\mathrm{q1,q2}}}(\tau) = H(a^2 u^2)- H(u^2) + H(\frac{1}{2}(1+ (1-4 b^2 u^2 v^2)^{1/2}))$,
from which $\mathcal{D}_{_{\mathrm{r1,r2}}}(\tau)$ is obtained by swapping
$u \leftrightarrow v $ and
the difference in quantum discords, $\mathcal{D}_{_{\mathrm{q1,q2}}}-\mathcal{D}_{_{\mathrm{r1,r2}}} = H(a^2 u^2)+H(v^2)-H(u^2)-H(a^2 v^2)$.
Figures~\ref{fig23}a,b show the dynamics of the two-qubit quantum discord, $\mathcal{D}_{_{\mathrm{q1,q2}}}(\tau)$ and two-reservoir quantum discord $\mathcal{D}_{_{\mathrm{r1,r2}}}(\tau)$ at different
subsystem bias configurations, $\Delta \Omega_1$=$\Delta \Omega_2$=$0.75$, $0.25$. The quantum discord displays anti-crossing behavior at the higher system bias value for the two different states given in Eqs.(\ref{fstate2}) and (\ref{gstate2}). The
two-reservoir quantum discord is, however, enhanced for the initial state in Eq.(\ref{gstate2}), due to greater participation from the two-reservoir partition. The slight differences in the quantum discord due to the two different initial states in Eqs.(\ref{fstate2}), (\ref{gstate2}) are mainly due to variations in the classical correlations $\mathcal{C}_i(\rho)$, where $i$ denotes the subsystem under consideration.
\begin{figure}
\label{fig23}
\end{figure}
\section{Quantum discord and exceptional points at
high precision measurements}\label{exp}
While the quantum Zeno effect is viewed as the effect of repeated measurements on a quantum system, it can be studied in the wider context of the dynamical time evolution of quantum systems. The Zeno effect appears even if the information regarding the state of the observed system manifests in the form of an external degree of freedom such as the
spontaneous emission process. It would be interesting to examine whether the features of the Zeno effect and the quantum discord are retained if the
monitoring device imparts a significant disturbance on the system under study and itself dominates the time evolution of the quantum system.
For a two-level system with energies $E_1$ ($E_2$) at state $\ket{0}$ ($\ket{1}$) subjected to a continuous measurement process, its original Hamiltonian $\widehat H_0$ transforms via the non-Hermitian Hamiltonian \cite{men,ono} $\widehat H_{eff} = \widehat H_0- i{\hbar\over{\tau E_r^2}}(\widehat H_0-E)^2$. $E$ is the selected measurement output after a time $\tau$ and $E_r$ is the error made during the measurement of the energy, $E$. $E_r$ can also be considered as a measure of the precision of the monitoring device. A large error made during the measurement can be viewed as a weak or unsharp measurement and $\widehat H_{eff} \rightarrow \widehat H_0$, whereas one made with very small error can be considered a highly precise measurement. The system therefore evolves as $i \hbar {\partial \over \partial t}\ket{\psi(t)}= H_{eff} \ket{\psi(t)}$ during measurement due to the constraining effect of the
selected readout $E$.
The state of the system being measured can be expanded
within the unperturbed basis states $\ket{n}$ of the unmeasured system with Hamiltonian $\widehat H_0$ as
$|\psi(t) \rangle =\sum_n C_n(t) |n \rangle$. The coefficients $ C_n(t)$ can be determined \cite{ono,gar,staf} using the Schr\"odinger equation and the non-Hermitian $\widehat H_{eff}$. In the presence of an external potential of the form $V_{22}=V_{11}=0$ and $V_{12} = V_{21}^\ast = V_0 e^{i \omega t}$ with $V_0$ real, the system evolves as $\ket{\psi(t)} = e^{-i(E_1-2 i \lambda_1) t} C_1(0)\ket{0} + e^{-i(E_2-2 i \lambda_2) t} C_2(0)\ket{1}$ where $\lambda_1$=$\frac{(E_1-E)^2}{2 \tau E_r^2}$ and $\lambda_2$=$\frac{(E_2-E)^2}{2 \tau E_r^2}$.
The coefficients $C_1(t), C_2(t)$ can be recast as \begin{equation}
\left[ \begin{array}{c}
C_1(t) \\
C_2(t) \\ \end{array} \right] = \left[
\begin{array}{cc}
\cos {\kappa}t-i \alpha_1 &-i\alpha_2\\
-i\alpha_2 & \cos {\kappa} t+i\alpha_1 \\ \end{array} \right]\ \left[ \begin{array}{c}
C_1(0) \\ C_2(0) \\
\end{array} \right], \end{equation} where $\alpha_1$=$\cos \theta \sin{\kappa} t$,
$\alpha_2$=$\sin \theta \sin{\kappa} t$,
$\cos \theta$=$\frac{q}{\kappa}$, $\kappa$=$\sqrt{q^2+p^2}$, $q$=$\frac{1}{2}(\omega-\Delta E+2 i \Omega)$, $\Delta E$=$(E_2-E_1)$, $p$=$V_0$ and $\Omega$=$\lambda_2$-$\lambda_1$. The terms $\lambda_2$ and $\lambda_1$ as defined in the earlier paragraph are dependent on the measurement precision, $E_r$ as well as the energy $E$ to be measured.
For a system in which the initial state at $t=0$ is
$\ket{1}$ and the final state at time $t$ is either $\ket{1}$ or $\ket{0}$, the probability $P_{11}$ of the system to be in the state $\ket{1}$ can be obtained following Ref.\cite{staf} as $P_{11} =
|\cos{\kappa t}-i \cos\theta \sin{\kappa t}|^2 e^{-\lambda_t t}$
where $\lambda_t$=$\frac{(E_2-E_1)^2}{2 \tau E_r^2}$. Likewise the probability $P_{10}$ that the system is present in the state $\ket{0}$ is given by $P_{10}
= |\sin\theta\,\sin{\kappa t}|^2 e^{-\lambda_t t}$. The total probability satisfies $P_{11}$+$P_{10} \le 1$; the loss of normalization depends on the measurement precision $E_r$, as expected, and the evaluation of the quantum discord will be significantly affected in the case of highly precise measurements.
At the resonance frequencies, $\omega =\Delta E$, the Rabi frequency $\kappa_0=2 (V_0^2 -\lambda_t^2)^{1/2}$, and $\cos\theta = -i\lambda_t/\kappa_0$. There are two
regimes, depending on the relation between $V_0$ and $\lambda_t$. The range where $V_0 > \lambda_t$ applies to the coherent tunneling regime where \begin{eqnarray} \nonumber P_{11} &=& e^{-\lambda_t t} \left[\cos{\kappa_0} t- \frac{\lambda_t}{\kappa_0}\sin{\kappa_0} t \right]^2 \\ \label{eq:co}
P_{10} &=& e^{-\lambda_t t} \frac{V_0^2}{\kappa_0^2}\sin^2{\kappa_0} t, \end{eqnarray} For $V_0< \lambda_t$, the system undergoes incoherent tunneling \begin{eqnarray} \nonumber P_{11} &=& e^{-\lambda_t t}\left[\cosh{\kappa_0} t- \frac{\lambda_t}{\kappa_0}\sinh{\kappa_0} t\right]^2,
\\ \label{eq:inco} P_{10} &=& e^{-\lambda_t t} \frac{V_0^2}{\kappa_0^2}\sinh^2{\kappa_0} t \end{eqnarray} At the exceptional point, $\kappa_0 = 0$, and both regimes merge and we obtain $P_{11} = \left(1- \frac{\lambda_t t}{2}\right)^2 e^{-\lambda_t t}$ and $ P_{10} = \left(\frac{\lambda_t t}{2}\right)^2 e^{-\lambda_t t}$. Exceptional points are singularities \cite{Heiss} which appear at the branch point of eigenfunctions due to changes in parameters which govern the non-Hermitian Hamiltonian operator. These points are known to be located in the vicinity of a level repulsion \cite{Heiss} and unlike degenerate points, only one eigenfunction exists at the exceptional point due to the merging of two eigenvalues. In the case of the quantum measurements considered in this work, the exceptional point appears at a critical measurement precision $E_r^c$=$\frac{\Delta E}{\sqrt{2 \tau V_0}}$. Considering a unit system in which $\hbar=V_0=\Delta E$=1, $\tau=2 \pi/V_0$ and a unitless time $t=t'/\tau$, we obtain $E_r^c=\frac{1}{\sqrt{4 \pi}}$. Using $r$ to denote the unitless measurement precision parameter, we note that at $r > \frac{1}{\sqrt{4 \pi}}$ ($r < \frac{1}{\sqrt{4 \pi}}$), the quantum system undergoes coherent (incoherent) tunneling.
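The critical precision can be sketched numerically: $\kappa_0=0$ requires $\lambda_t=V_0$, which with $\lambda_t=(\Delta E)^2/(2\tau E_r^2)$ gives $E_r^c=\Delta E/\sqrt{2\tau V_0}$, reducing to $r_c=1/\sqrt{4\pi}$ in the units above:

```python
import numpy as np

# Units as in the text: hbar = V0 = dE = 1, tau = 2 pi / V0.
V0, dE = 1.0, 1.0
tau = 2 * np.pi / V0

def lambda_t(r):
    """Measurement-induced rate lambda_t = dE^2 / (2 tau E_r^2)."""
    return dE**2 / (2 * tau * r**2)

r_c = dE / np.sqrt(2 * tau * V0)        # critical precision E_r^c
print(r_c, 1 / np.sqrt(4 * np.pi))     # coincide in these units

# r > r_c: lambda_t < V0, coherent tunneling (kappa_0 real);
# r < r_c: lambda_t > V0, incoherent tunneling (kappa_0 imaginary).
for r in (2 * r_c, r_c, 0.5 * r_c):
    print(r, lambda_t(r) - V0)
```

The sign of $\lambda_t - V_0$ separates the coherent and incoherent regimes, with the exceptional point at $\lambda_t=V_0$.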
\subsection{ Entangled qubits subjected to high precision measurements}\label{prec}
Similar to the model adopted in Section \ref{dyn}, we consider two uncoupled qubits which are initially entangled, but which differ from the earlier treatment in being monitored by independent observers. These observers assume the role of the reservoirs of harmonic oscillators. We consider the functions $u(t)$ and $v(t)$ which previously
were associated with the decay of the two-level qubit in Eq.(\ref{fstate}). The influence of the measurement precision
$E_r$ on the quantum discord is investigated by setting $u(t)^2 = P_{11}$, $v(t)^2 = P_{10}$, and evaluating $\mathcal{D}_{_{\mathrm{q1,q2}}}(t)$ and $\mathcal{D}_{_{\mathrm{r1,r2}}}(t)$ as described in Section \ref{dyn}. Unlike in Sections \ref{ca} and \ref{c1b}, here we examine the dynamics of the quantum discord in the context of a Zeno effect manifesting itself even before a measurement outcome is reached; the time $t$ therefore satisfies $0 \leq t \leq \tau$, where $\tau$ is the measurement duration.
As the relation $u(t)^2 + v(t)^2=1$ is not satisfied for highly precise measurements, the widely accepted definition of the quantum discord discussed in Section \ref{meas} may be considered a limiting case of a more general definition applicable to quantum systems which undergo non-Hermitian evolution. With the inclusion of a non-Hermitian term, the unit-trace and strict-positivity conditions of the density operator of the quantum system are not satisfied either. With the aim of obtaining qualitative results for the quantum discord, we therefore relax the conditions needed for a more rigorous and accurate quantitative evaluation of the quantum discord for non-Hermitian systems.
We illustrate the dynamics of the quantum discord in the two regimes
specified by Eqs.(\ref{eq:co}) and (\ref{eq:inco}) in Figures~\ref{figL}a,b and \ref{figU}a,b. These figures show the explicit dependence of the quantum discord on the measurement precision, with the appearance of indeterminate values of the quantum discord at very high measurement precision (low values of $r$). The figures also indicate that a highly precise observer can diminish the non-classical correlation shared between the two subsystems, with the tendency to do so increasing with the measurement precision.
It has to be noted that the quantum discord is evaluated in a reference frame where the observer is not under active consideration as one of the subsystems. The results will therefore be modified if the monitoring system is included and the quantum system then expands to a group of three subsystems.
\begin{figure}
\caption{a) Quantum discord $\mathcal{D}$ present in the two-qubit partition,
as a function of normalized time $t$ and measurement precision $r$ in the coherent tunneling regime. The initial amplitude parameter $a$ is set at 0.7 in Eq.(\ref{fstate2}), with $\hbar=V_0=\Delta E=1$, $\tau=2 \pi$, and a unitless time $t=t'/\tau$. \\ b) Quantum discord $\mathcal{D}$ present in the two-reservoir partition,
as a function of normalized time $t$ and $r$. All other parameters are the same as in (a). }
\label{figzca}
\label{figzcb}
\label{figL}
\end{figure}
\begin{figure}
\caption{a) Quantum discord $\mathcal{D}$ present in the two-qubit partition,
as a function of normalized time $t$ and measurement precision $r$ in the incoherent tunneling regime. All parameters are the same as in Figure~\ref{figL}a.\\ b) Quantum discord $\mathcal{D}$ present in the two-reservoir partition,
as a function of normalized time $t$ and measurement precision $r$. All other parameters are the same as in (a). The white region
corresponds to indeterminate values of the quantum discord.}
\label{figU}
\end{figure}
\section{Discussion and Conclusions}\label{con}
We have examined the influence of quantum measurements on quantum discord with consideration of two types of measurements, weak or low precision measurements and highly precise measurements. In the case of ideal weak measurements, the results show that the
quantum discord present in the two-qubit or two-reservoir system responds to characteristic parameters such as the system bias, the duration and frequency of the measurements, the measurement-induced decoherence processes, as well as the strength of the initial entanglement between the two qubits. Unlike the reservoir-reservoir
concurrence $\mathcal{C}_{_{\mathrm{r1,r2}}}(\tau)$, its quantum discord counterpart is more resilient to changes in the measurement duration $\tau$. For weak measurements, the quantum discord therefore presents itself as a suitable measure to identify and quantify Zeno to anti-Zeno crossover dynamics in the spin-boson system. The quantum discord may be used as a reliable measure of quantum processes influenced by the quantum Zeno effect, such as quantum switching and the preparation of decoherence-free states and cluster states \cite{nel}. Another potential application is the possibility of using quantum discord as an efficiency
measure of the purification of qubit states which occurs via extraction of a pure state through a series of Zeno-like measurements \cite{puri}. The model used in this work is generic to most quantum systems which undergo Zeno-anti-Zeno crossover dynamics, and can therefore be extended to other quantum systems \cite{nosci,wangS,Segal,japko,rai}
displaying such crossover effects, as mentioned earlier in the text.
For the class of highly precise measurements, which introduce maximal interference in the dynamics of quantum systems, the appearance of singularities introduces complications in the quantum evolution of a measured system. The quantum discord becomes indeterminate for highly precise quantum measurements. Importantly, the Zeno effect fails at very precise measurements, as the system does not reside in one level but possibly transfers the available information to the unspecified level of the observer. In future work, we will consider the direct influence of the Zeno effect when measurements are made in one subsystem in order to obtain the conditional entropy of a second subsystem.
Such an approach will allow determination of the influence of the measurement precision on the classical correlation measure in a neighboring partition. This alternative perspective of the influence of a monitoring device will also allow convenient analysis of the Berry phase due to quantum measurements.
Finally, we have presented results on the influence of the quantum Zeno effect on the concurrence and quantum discord for various biased configurations of the qubit-reservoir system. We have demonstrated the resilience of the quantum discord measure; in particular, it is more robust than the concurrence in the reservoir-reservoir partition. The quantum discord, which is an entity intrinsically distinct from entanglement, therefore serves as a better indicator of the crossover point in the Zeno to anti-Zeno transition evident in some spin-boson systems
under suitable conditions and for weak measurements. Whether this applies to other quantum systems which display both Zeno and anti-Zeno effects needs further investigation. For highly precise measurements, the monitoring device can significantly interfere with the evaluation of the quantum discord and produce indeterminate values. With progress in experimental techniques and studies of quantum measurement in optics and nanostructure systems \cite{hall, tit}, investigations involving the quantum discord of entangled systems are expected to play a greater role in future experimental work.
\section{Acknowledgments} I am grateful to Prof. R. Onofrio for alerting me to Refs. \cite{men,ono}. I also acknowledge access to the National Computational Infrastructure
(NCI) facilities, which are supported by the Australian Commonwealth Government.
\end{document}
\begin{document}
\title{Representation of SO(3) Group by a Maximally Entangled State} \author{W. LiMing} \email{wliming@scnu.edu.cn} \author{Z. L. Tang} \affiliation{Dept. of Physics, South China Normal University, Guangzhou 510631, China} \author{C. J. Liao} \affiliation{School for Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510631, China}
\date{\today} \pacs {03.65.Vf,03.67.Mn,07.60.Ly,42.50.Dv} \keywords{entangled state; SO(3) group}
\begin{abstract} A representation of the SO(3) group is mapped onto a maximally entangled two-qubit state, following the literature. To show the evolution of the entangled state, a model is set up with a maximally entangled electron pair, the two electrons of which pass independently through rotating magnetic fields. It is found that the evolution path of the entangled state in the SO(3) sphere breaks an odd or even number of times, corresponding to the double connectedness of the SO(3) group. An odd number of breaks leads to an additional $\pi$ phase for the entangled state, but an even number of breaks does not. A scheme to trace the evolution of the entangled state is proposed by means of entangled photon pairs and Kerr media, allowing observation of the additional $\pi$ phase. \end{abstract} \maketitle
It is well known that when the spin of a spin-$\frac 1 2$ particle rotates through a whole cycle on the Bloch sphere, the wave function of the particle acquires a phase of $\pi$. This $\pi$ phase has been observed in several experiments\cite{Werner,Rauch}. The property is commonly attributed to a topological feature, namely the double connectedness of the SO(3) group: paths on the manifold of the SO(3) group fall into two classes under continuous deformation, one of which contributes a phase change of $\pi$ to the wave function, while the other does not. However, it was argued by Milman and Mosseri\cite{Milman} that this $\pi$ phase may be shared by the multi-connectedness of both the SO(3) and SO(2) groups. They also argued that, in general, the $\pi$ phase is partly geometric and partly dynamic. Only in the extreme case where the spin precesses in the $xy$ plane of the Bloch sphere is the $\pi$ phase fully geometric. In particular, the $\pi$ phase still exists when the spin initially points in the same direction as the magnetic field, where there is no rotation at all; in such a case the $\pi$ phase is fully dynamic. Therefore, this $\pi$ phase may not be directly related to the SO(3) group.
Milman and Mosseri found a one-to-one correspondence between the representation of the SO(3) group and the evolution of a maximally entangled state of a two-qubit system (MES)\cite{Milman}. They adopted a discontinuously changing magnetic field, which suddenly jumps from one direction to another; this is hardly achievable in practice. In the present paper a rotating magnetic field is used to drive the evolution of a MES, and a clearer formalism is presented for the trajectory in SO(3).
A MES finds important applications in quantum communication and quantum computation, and also in the study of fundamental problems of quantum mechanics, e.g., non-locality\cite{Buttler, Zeilinger, Pan}. Much attention has been paid to MES's in recent years, and it is interesting that a MES can be applied to the representation of the SO(3) group.
\section{Mapping between a MES and SO(3)}
A two-qubit maximally entangled state (MES) of a two-state system can be written as\cite{Milman}, \begin{equation}
|(\alpha,\beta)\rangle = \frac {1} {\sqrt 2} (\alpha|00\rangle
+\beta|01\rangle - \beta^*|10\rangle + \alpha^* |11\rangle) \label{ab} \end{equation} where the coefficients $\alpha$ and $\beta$ are normalized to unity \begin{equation} \alpha\alpha^*+\beta\beta^* = 1. \end{equation} It is seen that a MES is defined by a pair of complex numbers $(\alpha, \beta)$. To visualize a MES, $\alpha$ and $\beta$ can be parameterized to \begin{eqnarray} \label{alpha} \alpha &=& \cos\frac a 2 - ik_z \sin \frac a 2, \\ \label{beta} \beta &=& -(k_y + ik_x) \sin \frac a 2, \end{eqnarray} where $(k_x, k_y, k_z)={\bf k}$ is a unit vector, and $a$ is an angle between $0$ and $\pi$. Hence a MES can also be written as
$|\Psi({\bf k},a)\rangle$ in the parameter space. It is easy to check that $|\Psi({\bf k}, \pi+a)\rangle = -|\Psi(-{\bf k}, \pi-a)\rangle$. That is, $({\bf k}, \pi+a)$ and $(-{\bf k}, \pi-a)$ correspond to the same state except for a global phase factor. This is just the case of the double-valued representation of the SO(3) group, which is written as \begin{equation} \label{Dka} D^{1/2}({\bf k},a) = \binom { \alpha \quad\quad \beta} {-\beta^* \quad \alpha^*}, \end{equation} corresponding to a rotation $R({\bf k}, a)$ in real space of a two-state particle. Although $R({\bf k}, \pi+a)$ and $R(-{\bf k}, \pi-a)$ are the same rotation, one has $D^{1/2}({\bf k}, \pi+a)=-D^{1/2}(-{\bf k}, \pi-a)$.
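The sign identity can be checked directly from the parameterization (\ref{alpha})-(\ref{beta}); the short numerical sketch below (with an illustrative unit vector ${\bf k}$ and angle $a$) verifies $|\Psi({\bf k}, \pi+a)\rangle = -|\Psi(-{\bf k}, \pi-a)\rangle$ at the level of the coefficients:

```python
import math

# Coefficients alpha, beta of the MES for a given axis k and angle a,
# following the parameterization in the text.
def alpha_beta(k, a):
    kx, ky, kz = k
    alpha = math.cos(a / 2) - 1j * kz * math.sin(a / 2)
    beta = -(ky + 1j * kx) * math.sin(a / 2)
    return alpha, beta

k = (0.36, 0.48, 0.80)   # unit vector: 0.36^2 + 0.48^2 + 0.80^2 = 1
a = 0.7                  # angle in (0, pi)
al1, be1 = alpha_beta(k, math.pi + a)
al2, be2 = alpha_beta(tuple(-c for c in k), math.pi - a)
# (k, pi + a) and (-k, pi - a) give coefficients of opposite sign
assert abs(al1 + al2) < 1e-12 and abs(be1 + be2) < 1e-12
assert abs(abs(al1) ** 2 + abs(be1) ** 2 - 1.0) < 1e-12  # normalization
```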
Therefore, there is a one-to-one correspondence between the two-qubit MES and the double-valued representation of SO(3). In fact, any MES can be constructed by a rotation from an initial MES, e.g., \begin{eqnarray}
\label{p10} D_1 |(1,0)\rangle &=& |(\alpha,\beta)\rangle,\\
D_1 &\equiv & D^{1/2}_1({\bf k},a) \nonumber
\end{eqnarray} where $D_1$ operates on the first particle. If a rotation operates on the second particle, one has \begin{eqnarray}
D_2 |(1,0)\rangle &=& |(\alpha,-\beta^*)\rangle,\\
D_2 & \equiv & D^{1/2}_2({\bf k},a) \nonumber
\end{eqnarray}
One can define an SO(3) sphere of radius $\pi$ filled by the vectors $a{\bf k}=(ak_x, ak_y, ak_z)$. By (\ref{p10}), a MES corresponds to a point in the SO(3) sphere, and an evolution of a MES corresponds to a trajectory connecting two points. The initial state $|(1,0)\rangle$ is located at the center of the SO(3) sphere.
\section{A model Hamiltonian}
Consider an electron in a rotating magnetic field ${\bf B}(t) = B(\sin\theta \cos\omega t, \sin\theta \sin\omega t, \cos\theta), $ where $\theta$ is the angle between the field and the z-axis, and $\omega$ is the rotating frequency of the field. The Hamiltonian of the electron is given by \begin{eqnarray} H(t) &=& {\bf \hat{\sigma} \cdot B}(t) \nonumber \\ &=& B \binom {\cos\theta \,\,\,\,\,\,\,\, \sin\theta e^{-i\omega t}} {\sin\theta e^{i\omega t}\,\,\,\,\,\,\,\, -\cos\theta \,\,\,\,}. \end{eqnarray} The two exact solutions of the time-dependent Schr\"{o}dinger equation are given by \begin{equation}
|\psi_\pm (t)\rangle = \binom {a_\pm e^{-i\omega t/2}}
{b_\pm e^{i\omega t/2}}e^{\mp i \omega_0 t}, \end{equation} corresponding to energy eigenvalues $\hbar\omega_0$ and $-\hbar\omega_0$, respectively, where \begin{eqnarray} b_\pm &=&a_\pm\frac {\hbar (\omega \pm 2\omega_0)-2B \cos\theta} {2 B \sin\theta},\\ \omega_0 &=&\frac 1 {2 \hbar} \sqrt{(\hbar\omega)^2 -4B\hbar\omega \cos\theta+ 4B^2}, \end{eqnarray} where the values of $a_\pm$ can be determined by normalization of solutions.
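As a consistency check, one can verify numerically that these solutions satisfy the time-dependent Schr\"odinger equation $i\hbar\,\partial_t|\psi\rangle = H(t)|\psi\rangle$. The sketch below (with $\hbar=1$, unnormalized $a_\pm=1$, and arbitrary illustrative values of $B$, $\theta$, $\omega$) compares the analytic time derivative with $H(t)|\psi_\pm(t)\rangle$:

```python
import cmath, math

# Illustrative parameters (hbar = 1); any values with sin(theta) != 0 work.
B, theta, w = 1.3, math.pi / 5, 0.8
w0 = 0.5 * math.sqrt(w**2 - 4 * B * w * math.cos(theta) + 4 * B**2)

def solution(sign, t):
    # |psi_+> for sign = +1, |psi_-> for sign = -1; overall norm irrelevant here.
    a = 1.0
    b = a * (w + sign * 2 * w0 - 2 * B * math.cos(theta)) / (2 * B * math.sin(theta))
    ph = cmath.exp(-1j * sign * w0 * t)
    return [a * cmath.exp(-1j * w * t / 2) * ph, b * cmath.exp(1j * w * t / 2) * ph]

def H(t):
    e = cmath.exp(1j * w * t)
    return [[B * math.cos(theta), B * math.sin(theta) / e],
            [B * math.sin(theta) * e, -B * math.cos(theta)]]

for sign in (+1, -1):
    for t in (0.0, 0.4, 1.1):
        psi = solution(sign, t)
        # analytic time derivatives of the two components
        d0 = -1j * (w / 2 + sign * w0) * psi[0]
        d1 = 1j * (w / 2 - sign * w0) * psi[1]
        h = H(t)
        rhs = [h[0][0] * psi[0] + h[0][1] * psi[1],
               h[1][0] * psi[0] + h[1][1] * psi[1]]
        # i * d(psi)/dt must equal H(t) psi
        assert abs(1j * d0 - rhs[0]) < 1e-10 and abs(1j * d1 - rhs[1]) < 1e-10
```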
Now consider an initial state $|(1,0)\rangle =(|00\rangle +
|11\rangle)/\sqrt 2$ of two electrons, where $|0\rangle =
|\psi_+(0)\rangle$ and $|1\rangle = |\psi_-(0)\rangle$. Suppose the first electron travels through a rotating magnetic field; then the system of two electrons evolves in the form \begin{eqnarray}
|(1,0)\rangle &\rightarrow & [|\psi_+(t)\psi_+(0)\rangle \nonumber \\
&+&|\psi_-(t) \psi_-(0)\rangle]/\sqrt 2 \\ \label{evolution1}
&=& |(\alpha, \beta)\rangle=D_1(\omega t,\omega_0) |(1,0)\rangle \end{eqnarray} where new arguments have been assigned to the group element for convenience, and \begin{eqnarray} \label{a1}
\alpha &=&[ \cos\frac {\omega t} 2 + i\sin\frac {\omega t} 2 \frac {\hbar \omega -2B \cos\theta} {2 \hbar \omega_0} ] e^{-i\omega_0 t}, \\ \label{b1} \beta &=&i \sin\frac {\omega t} 2 \frac {a_+} {a_-} \frac {\hbar (\omega-2\omega_0)-2B \cos\theta} {2\hbar \omega_0} e^{i\omega_0 t}. \end{eqnarray} It is seen that a rotating magnetic field leads to an evolution of a MES through a continuous trajectory in the SO(3) sphere. Therefore, a rotating magnetic field is equivalent to a three dimensional rotation in real space to the MES.
It is not surprising that when $\omega t=2\pi$ and $\omega_0=n\omega$, with $n$ an integer, the initial state acquires an additional phase of $\pi$, i.e., \begin{equation}
D_1(2\pi,n\omega) |(1,0)\rangle = -|(1,0)\rangle. \end{equation} The remarkable property is that the above operation can be allocated to the two particles of the initial state, i.e., \begin{equation}
D_1(\pi,n\omega) D_2(\pi,n\omega)|(1,0)\rangle = -|(1,0)\rangle. \end{equation} In general, one does not have such a property for other initial states. If $\omega_0=(n+1/2)\omega$, with $n$ an integer, one has \begin{equation}
D_1(\pi,\omega_0) D_2(\pi,\omega_0)|(1,0)\rangle = |(1,0)\rangle, \end{equation} acquiring no additional phase. Hence, one has a choice for the additional phase through selecting the value of $\omega_0$.
Now we can trace the following evolution \begin{eqnarray} \label{evolution}
|(1,0)\rangle &\rightarrow &
[|\psi_+(t)\psi_+(t)\rangle+|\psi_-(t) \psi_-(t)\rangle]/\sqrt 2 \\
&=& D_1(\omega t,\omega_0)
D_2(\omega t,\omega_0) |(1,0)\rangle \end{eqnarray} Under the choice $\omega_0=n\omega$ or $(n+1/2)\omega$, with $n$ an integer, this evolution traces a closed trajectory in the SO(3) sphere. An example is shown in Fig.1, where the parameters $\theta$ and $B$ are set to meet $\omega_0=\omega$. The final time is $t=\pi/\omega$; that is, both magnetic fields of the two electrons rotate half a cycle. It is seen that this trajectory breaks three times on the surface of the sphere. The two ends of a diameter of the sphere correspond to the same rotation, but the group element (\ref{Dka}) changes its sign. In this case, through the whole trajectory, the MES acquires an additional phase of $\pi$.
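The break counting can be made concrete with a toy example: map an SU(2) pair $(\alpha,\beta)$ to the vector $a{\bf k}$ in the SO(3) sphere and detect jumps to the antipodal surface point. The sketch below uses a simple rotation about the z-axis, $\alpha(t)=e^{-iBt}$, $\beta=0$ (not the two-field evolution considered here), and finds exactly one break for a full $2\pi$ rotation, consistent with the additional $\pi$ phase:

```python
import math

def ball_point(alpha, beta):
    # Map the SU(2) pair (alpha, beta) to the vector a*k in the SO(3) ball
    # of radius pi, using alpha = cos(a/2) - i kz sin(a/2), beta = -(ky + i kx) sin(a/2).
    v = (-beta.imag, -beta.real, -alpha.imag)     # sin(a/2) * k
    s = math.sqrt(sum(c * c for c in v))
    if s == 0:
        return (0.0, 0.0, 0.0)
    a = 2 * math.atan2(s, alpha.real)             # rotation angle in [0, 2*pi]
    k = [c / s for c in v]
    if a > math.pi:                               # identify (k, a) ~ (-k, 2*pi - a)
        a, k = 2 * math.pi - a, [-c for c in k]
    return tuple(a * c for c in k)

# Toy trajectory: rotation about z by angle 2*B*t, i.e. alpha = exp(-i*B*t).
B, n = 1.0, 2000
pts = [ball_point(complex(math.cos(B * t), -math.sin(B * t)), 0j)
       for t in (math.pi / B * i / n for i in range(n + 1))]
breaks = sum(1 for p, q in zip(pts, pts[1:])
             if math.dist(p, q) > 1.0)            # jump to the antipodal point
assert breaks == 1   # odd number of breaks: the state acquires a pi phase
```

The trajectory climbs the positive z-axis to the surface ($a=\pi$), jumps once to the antipodal point, and returns to the center, where $\alpha=-1$ shows the accumulated $\pi$ phase.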
\begin{figure}
\caption{Closed trajectory in the SO(3) sphere with $\theta=\frac \pi 5, B = 1.3603$. The arrow stands for the beginning state and direction of evolution.}
\end{figure}
With proper parameters, one can have closed trajectories with even numbers of breaks, corresponding to a change of $2\pi$ in phase. An example is shown in Fig.2. Hence, one has two classes of trajectories, one of which has an odd number of breaks on the surface of the SO(3) sphere and the other an even number of breaks, corresponding to the two classes arising from the double connectedness of the SO(3) group. This is the case that Milman and Mosseri considered\cite{Milman}, but their trajectories are hardly realizable, since their magnetic field has to jump through a few discrete points in the parameter space.
\begin{figure}
\caption{Closed trajectory in the SO(3) sphere with $\theta=\frac \pi 5, B = 1.8754$. The arrow stands for the beginning state and direction of evolution.}
\end{figure}
It can easily be checked that the closed trajectories $A-B-F-D-A$ and $A-B-F-\bar E-\bar A$, and the other ones that Milman and Mosseri considered \cite{Milman}, belong to the two simplest classes of trajectories, which have $0$ or $1$ breaks, respectively, in the SO(3) sphere. Therefore, the present work extends their model to include a great number of closed trajectories with even or odd numbers of breaks.
\section{Realization of the $\pi$ phase by entangled photon pairs}
An entangled photon pair emerging from a double-refraction crystal\cite{Kwiat} can be in one of the four Bell states, e.g.,
$|\Phi^+\rangle = (|H_a H_b\rangle + |V_a V_b\rangle)/\sqrt 2$, where $|H\rangle$ and $|V\rangle$ denote a horizontally polarized photon state and a vertically polarized one, respectively. The two entangled photons separate from each other after emission, and then pass through two negative Kerr media P1 and P3, and two positive Kerr media P2 and P4, as seen in Fig.3. The Kerr media are modulated by electric fields, so that their optical axes are in the directions shown in the lower part of Fig.3. P1 and P3 point in the same direction, say the z-axis, and the directions of P2 and P4 can be adjusted by changing the directions of their electric fields.
\begin{figure}
\caption{Scheme to produce the $\pi$ phase. P1, P2, P3 and P4 are Kerr medium. The optical axes are given by the directions of the electric fields on the Kerr medium.}
\end{figure}
P1 and P3 change the relative phase between $|H\rangle$ and
$|V\rangle$, acting as the following matrix \begin{eqnarray} \label{U1} U_1&=&\binom {e^{-i\phi_1/2} \quad \quad 0} {0 \quad \quad e^{i\phi_1/2}},\\ \phi_1&=&\frac {2 \pi} \lambda (n_{e1} - n_{o1})d_1 = \frac {2 \pi} \lambda k_1 d_1 E_1^2, \end{eqnarray} where $E_1$ is the electric field applied to the Kerr media P1 and P3. Since the optical axes of P2 and P4 make an angle of $\delta$ with the z-axis, they act as the following matrix, \begin{eqnarray} U_2&=&\binom {A\quad\quad B} {-B^* \quad A^*}, \\ \label{A} A&=& \cos\frac {\phi_2} 2 + i\sin\frac {\phi_2} 2 \cos {2\delta} , \\ \label{B} B&=&i\sin\frac {\phi_2} 2 \sin {2\delta} , \\ \phi_2 &=& \frac {2 \pi} \lambda (n_{o2} - n_{e2})d_2 = \frac {2 \pi} \lambda k_2 d_2 E_2^2, \end{eqnarray} where $E_2$ is the electric field applied to P2 and P4.
It is seen that the combination, $U_2U_1$, is equivalent to a rotating magnetic field. By comparing eqs.(\ref{evolution1},\ref{a1},\ref{b1}) and eqs.(\ref{U1},\ref{A},\ref{B}) one finds correspondence $\phi_1 \sim \omega_0 t, \phi_2 \sim \omega t, \cos2\delta \sim \frac {\hbar \omega -2B \cos\theta} {2 \hbar \omega_0}$. Hence, the evolution in eq.(\ref{evolution}), as shown in Fig.1 and Fig.2, can be exactly traced by varying the electric fields $E_1$ and $E_2$ on the Kerr medium. With proper values of electric fields such that $\phi_1=n\phi_2$ one may obtain an additional $\pi$ phase, or zero additional phase if $\phi_1=(n+\frac 1 2)\phi_2$.
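The claimed structure of the combination can be checked numerically: for illustrative values of $\phi_1$, $\phi_2$, and $\delta$, the product $U_2U_1$ retains the rotation form of (\ref{Dka}), i.e. its lower row is $(-\beta^*, \alpha^*)$, and it is unitary:

```python
import cmath, math

# Illustrative Kerr-phase values; any real phi1, phi2, delta will do.
phi1, phi2, delta = 0.9, 1.7, 0.3
U1 = [[cmath.exp(-1j * phi1 / 2), 0], [0, cmath.exp(1j * phi1 / 2)]]
A = math.cos(phi2 / 2) + 1j * math.sin(phi2 / 2) * math.cos(2 * delta)
Bc = 1j * math.sin(phi2 / 2) * math.sin(2 * delta)
U2 = [[A, Bc], [-Bc.conjugate(), A.conjugate()]]

def matmul(X, Y):
    # 2x2 complex matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

U = matmul(U2, U1)
alpha, beta = U[0][0], U[0][1]
# rotation form: the lower row is (-beta*, alpha*)
assert abs(U[1][0] + beta.conjugate()) < 1e-12
assert abs(U[1][1] - alpha.conjugate()) < 1e-12
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < 1e-12  # unitarity
```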
The $\pi$ phase can be easily observed by various interference experiments. For example, following the scheme described by Milman and Mosseri \cite{Milman}, one arm of the entangled photon pair can be sent through a Mach-Zehnder interferometer. The two wave plates in that scheme are replaced by the combinations of Kerr media P1 and P2, and P3 and P4.
In summary, the present paper sets up a representation of the SO(3) group by maximally entangled two-qubit states. The evolution of the entangled states exhibits the double connectedness of the SO(3) group: in the SO(3) sphere the evolution path breaks an odd or even number of times. An odd number of breaks causes an additional $\pi$ phase for the entangled state, but an even number of breaks does not. The additional $\pi$ phase can be observed in interference experiments with entangled photon pairs.
\acknowledgments{The author thanks Shidong Liang for helpful discussions and acknowledges financial support from the Science and Technology Project of Guangzhou (2001-2-095-01).}
\begin{references} \bibitem{Werner} S. A. Werner {\it et al.}, Phys. Rev. Lett. {\bf 35}, 1053(1975). \bibitem{Rauch} H. Rauch {\it et al.}, Phys. Lett. {\bf 54A}, 425(1975). \bibitem{Milman} P. Milman and R. Mosseri, Phys. Rev. Lett. {\bf 90}, 230403-1(2003). \bibitem{Buttler} W. T. Buttler, {\it et al.}, Phys. Rev. Lett. {\bf 81}, 3283(1998). \bibitem{Zeilinger} A. Zeilinger, Phys. World {\bf 11}, 35(1998). \bibitem{Pan} Jian-Wei Pan {\it et al.}, Nature {\bf 423}, 417(2003); \bibitem{Kwiat} P. G. Kwiat, K. Mattle, {\it et al.}, Phys. Rev. Lett. {\bf 75}, 4337(1995). \end{references}
\end{document}